
Some example tasks that are now more-or-less automatable






So, you've no doubt heard about OpenAI's GPT-3 large language model, and its ability to learn to perform new tasks given only a few examples.  They're not very "deep" examples, mind you; but what it can do indicates to me that a lot of mundane office tasks are within reach of being automated.  A few slightly artificial examples I posted a few days ago:




What I want to do now, instead, is point out actual tasks in my own job that are automatable.  I should be careful here:  they have been automatable for some time; however, for each new task, you have to buy a new piece of software to automate that particular task.  What OpenAI's GPT-3 large language model would allow you to do is quickly automate any task of the type I'm going to mention, using just that one program.  No need to buy 100 different pieces of software for 100 different tasks.


I'm not so naive as to think that the model is perfect, or that it's already an AGI.  You don't need anything like that to automate a lot of tasks.  Templates would almost work -- but not quite (which is why you would need to buy custom-built software).  There are also "robotic process automation" tools that one could use; but you have to show them exactly what you want done, which can be time-consuming for smaller tasks (and there are a lot of those smaller tasks).


Ok, so here goes:


Writing short summaries of graduate application files


A lot of students apply (at least 1,000), and their folders are large and time-consuming to read through.  There are transcripts, biographical information, GRE scores, personal statements, letters of recommendation, and possibly even papers.


What you'd like to do is write a short summary pointing out the highlights -- e.g. they got such-and-such score on the subject GRE exam; such-and-such well-known letter writer at a well-known school said, "best student in 5 years"; maybe they are younger; maybe they did some organizing of student research groups or tutoring; and so on.


I'd say about 10 or 20 examples of what you're looking for would suffice to produce a summary and an overall score to rank the candidate.


It's pretty formulaic, and close to being extractive or abstractive summarization.  But it's a real pain in the ass to read many files.  


However, it's not exactly summarization, because you have to know how to weigh the information.  For example, if one letter writer is not well-known and says something good, that will count less than if the writer is a big-shot.  GPT-3 probably has absorbed enough information on the web to know when a letter writer is famous and when they aren't; but this would be time-consuming to program into a system to automate the evaluation process.


Because the program may make mistakes (e.g. some famous letter writers are famous for only saying good things, so their opinion should be taken with a grain of salt), one would need to read and check its work.  Even though the summaries would have to be checked, the whole process would still be sped up considerably thanks to the system's help.
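The few-shot setup described above is easy to sketch in code.  Here's a minimal example of how one might assemble such a prompt in Python -- the application text and summary are placeholders I made up, and `build_fewshot_prompt` is a hypothetical helper, not part of any real API:

```python
def build_fewshot_prompt(examples, new_file):
    """Concatenate (application file, summary) example pairs into a single
    few-shot prompt, ending with the new file and an empty summary slot
    for the model to complete."""
    parts = []
    for file_text, summary in examples:
        parts.append(f"Application file:\n{file_text}\nSummary:\n{summary}\n")
    parts.append(f"Application file:\n{new_file}\nSummary:\n")
    return "\n".join(parts)

# One made-up example pair, standing in for the 10 or 20 real ones.
examples = [(
    "GRE subject: 910 (97th percentile). Letter from Prof. X: 'best "
    "student in 5 years'. Organized an undergraduate research seminar.",
    "Top GRE subject score; exceptionally strong letter from Prof. X; "
    "shows initiative (ran a research seminar). Overall: 9/10.",
)]
prompt = build_fewshot_prompt(examples, "Transcript: 3.9 GPA...")
```

The point is just that the "programming" here is assembling examples into text, which is why one system could cover 100 different tasks.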


Writing promotion letters


Promotion letters are similar.  The input is a CV, research statement, teaching statement, service statement, and external letters.  The output is a pretty formulaic letter that begins by saying who the person is, how many years they have on their tenure clock, and what field they're in.  Then, the field is described, and the importance of their research is described.  Next, some comments from the anonymized letter writers are extracted and printed -- they say things like, "Reviewer A said that `the progress he has made is nothing short of remarkable'; reviewer B said, `these new results on integrable systems will have a far-reaching impact' ".  Finally, teaching and service are maybe mentioned.


The game here is mainly about picking out what lines from the letters to include, as well as summarizing the research statement and teaching statement.  Again, it's something you could probably automate, given access to a state-of-the-art summarization system + text synthesis + grammar checker.  But you'd have to pay some company a lot of money for that; and there aren't that many files to consider -- but it's still time-consuming.


I'd say probably about 10 or 20 examples of how to do it might be enough.  If it's too difficult for the system, then you could meet it half-way:  you provide it the summary, biographical information (basically the important stuff from the CV), the letters, and the other statements, and then it outputs a very formulaic letter for you.  It would have to find some key lines in the letters to include -- but it could probably do that.  And if it can't, you could also provide it those lines.  It would still save a lot of time.


Reading over papers for basic errors


Refereeing is very time-consuming.  There are multiple levels of refereeing:


* You could check that all the grammar and use of symbols and terminology are correct.  That should be automatable, if the system has been trained or fine-tuned with enough LaTeX files of papers.  But, again, that requires paying someone for a specific piece of software to do the job.  Wouldn't it be great if you could show it some examples, instead?


* You could look for errors in notation.  Maybe the variable x on one page means one thing, and then it's used in a different way on another page, later in the writeup.  That kind of error is common.


* Maybe it could catch simple errors, like where the "inequalities go the wrong way" (e.g. you have x < y, y > z, and then try to conclude x < z), sign errors, a missing term, and so on.


I wouldn't expect it to catch deeper errors, and actually understand the proofs, without training it on formal reasoning.  But you might be able to catch some things that just look suspicious -- and I would expect it to be able to do this, if it has been trained on enough papers.  


I'd say that you could feed the system about 20 papers, with some errors flagged, and it would get the idea how to look for them, and speed up the refereeing process.
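For the "inequalities go the wrong way" case specifically, you don't even need a language model to get started -- a toy symbolic checker catches that class of error.  Here's an illustrative sketch (my own toy code, not an existing tool; the regex only handles single-letter variables and strict inequalities):

```python
import re
from itertools import product

def extract_relations(text):
    """Pull simple one-letter inequalities like 'x < y' out of prose/LaTeX,
    normalized to (smaller, larger) pairs."""
    rels = set()
    for a, op, b in re.findall(r"\b([a-z])\s*([<>])\s*([a-z])\b", text):
        rels.add((a, b) if op == "<" else (b, a))
    return rels

def transitive_closure(rels):
    """Repeatedly chain a < b and b < c into a < c until nothing new appears."""
    closure = set(rels)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def check_conclusion(premises, conclusion):
    """True if every claimed inequality follows from the premises."""
    known = transitive_closure(extract_relations(premises))
    return extract_relations(conclusion) <= known
```

So `check_conclusion("x < y, y > z", "x < z")` comes back `False` -- exactly the bogus transitivity step mentioned above -- while `check_conclusion("x < y, y < z", "x < z")` comes back `True`.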


Summarizing meetings


Just record a meeting, and transcribe it using speech recognition.  Then, show the system what a meeting summary should look like (with about 10 examples); and it will start producing summaries from transcripts.  This is a summarization task, really -- but it's of a specific form, so would normally take a separate piece of software.
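One practical wrinkle: a long transcript plus 10 examples won't fit in the model's context window, so you'd summarize the meeting in chunks and then summarize the chunk summaries.  A minimal chunker, breaking on speaker turns (the word budget is a number I picked arbitrarily, not a real model limit):

```python
def chunk_transcript(transcript, max_words=1500):
    """Split a long transcript into chunks small enough for a model's
    context window, breaking only at line (speaker-turn) boundaries."""
    chunks, current, count = [], [], 0
    for line in transcript.splitlines():
        words = len(line.split())
        if current and count + words > max_words:
            chunks.append("\n".join(current))
            current, count = [], 0
        current.append(line)
        count += words
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk would then get the same few-shot "transcript -> summary" prompt treatment described above.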


Writing reference letters


They're generally formulaic, but should be written with an enthusiastic style.  What you could do is train a model to take as input a few random comments -- "best student I've had this year"; "surprised how he was able to solve the ...etc"; "would do well at any school" -- and then weave them into a coherent, grammatically correct, enthusiastic letter.  Give the system a couple examples of how the translation process works, and it can do it.




In fact, pretty much everything I do could be at least partially automated, thanks to a system like GPT-3.  Even parts of my research could be automated.  e.g. maybe present it with a problem, and see if it can generate a proof.  The proof will probably be wrong, unless you teach it to do formal reasoning; but I could imagine if you trained it on enough text, it could write some things that look like they are in the direction of a proof -- and then any skilled person could turn the basic idea into a rigorous approach.


Also, just coming up with good problems is important -- and something a model like GPT-3 could probably do.  You start by giving it some examples of what you're looking for, and then it generates some more; and you say, "Yeah!  That's a nice problem!  I never would have thought of that!"





eacao likes this a little bit too much.






A few more tasks:


Text --> JavaScript


I haven't worked much with JavaScript, but have a little.  I have learned probably 10 programming languages over the years, and forgotten a lot of them.  I have decided to make Python my primary language, as it is so easy to use -- however, JavaScript is one that comes up occasionally in some things I do.  It would be great if I could just type in a few comments, and the language model could output a 20 or 30 line JavaScript program.  Evidence shows that this is doable, as OpenAI showed how to fine-tune GPT-2 to do it (they've demoed it for Python, but it probably also works for JavaScript).  They could do the same with GPT-3; or, if it wasn't part of the training data, add a lot of programs and comments of different types, so that it has that capability built in.  I wouldn't mind if there were small errors, as I could correct them; getting the gist of the program is the most important thing.
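The framing would be the same few-shot pattern as everywhere else in this thread: show the model a couple of comment -> code pairs and end with your new comment.  A sketch (the example pairs are ones I wrote by hand; only the prompt construction is shown, not the model call):

```python
# Hand-written comment -> JavaScript pairs to prime the model.
JS_EXAMPLES = [
    ("// sum the numbers in an array",
     "const sum = arr => arr.reduce((a, b) => a + b, 0);"),
    ("// capitalize the first letter of a string",
     "const cap = s => s.charAt(0).toUpperCase() + s.slice(1);"),
]

def comment_to_js_prompt(comment):
    """Build a few-shot prompt mapping English comments to JavaScript,
    ending with the new comment for the model to complete."""
    parts = [f"{c}\n{code}\n" for c, code in JS_EXAMPLES]
    parts.append(comment + "\n")
    return "\n".join(parts)

prompt = comment_to_js_prompt("// debounce a function by 200ms")
```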


I'd say you could automate a breathtaking number of programming jobs this way.  There are lots of contract workers who do little jobs like that -- e.g. they have to tweak a little section of Cobol, so have to know how it works.  Or, they have to write little scripts to do certain things.  Or, some other set of tweaks.  I would guess -- but don't know for sure -- that you could probably take someone with a lot less skill, put them in front of a system like GPT-3 that you can teach to do some of these tasks, and they could get the job done.  The low-skill person would just have to cut and paste, and do some elementary checking to make sure it works.


Computer support:  text --> Linux command


I often forget exactly how to do certain tricky file manipulations from Linux, and don't like having to search the internet or look through the manual pages to find exactly what I need (at one time, I knew Linux better; but one forgets what one doesn't use).  It would be great to be able to type in English exactly what I want, and have it output the commands to do it (which I would then check, to make sure it doesn't do something bad to my files).
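This is exactly the few-shot setup again.  A sketch of the prompt -- the example commands below are ones I wrote myself, and as noted above, you'd still review whatever the model emits before running it:

```python
# Hand-written English -> shell command pairs to prime the model.
SHELL_EXAMPLES = [
    ("list all files, including hidden ones, sorted by size",
     "ls -laS"),
    ("find every .tex file modified in the last 2 days",
     "find . -name '*.tex' -mtime -2"),
    ("count the lines in every Python file here",
     "wc -l *.py"),
]

def nl_to_command_prompt(request):
    """Few-shot prompt: English request -> Linux command, ending with
    the new request for the model to complete."""
    parts = [f"Request: {desc}\nCommand: {cmd}\n"
             for desc, cmd in SHELL_EXAMPLES]
    parts.append(f"Request: {request}\nCommand:")
    return "\n".join(parts)

prompt = nl_to_command_prompt("rename every .txt file to .md")
```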


Help find an approach in the literature to solve a problem -- "Have you seen this before?"


A common thing academics spend a lot of time on is this:  they work on some problem, and then get stuck on a technical issue that they feel must have been solved in the literature somewhere.  So, what do they do?  A first pass might be to go to Google, and type in some terms related to the equations one is looking at.  Usually, it doesn't work.  A next step might be to take the formula (if it involves a function), compute some values, and then see if they appear in the On-Line Encyclopedia of Integer Sequences.  That sometimes works, sometimes doesn't.  Another thing to try is to type it into WolframAlpha or Maple.  Or email a colleague.


Regardless, a large percent of the time, it's not going to work.  


If the language model has been trained with a large number of LaTeX files, it should be able to figure out things that your problem looks similar to.  If you gave it a few examples of how to map some short piece of LaTeX to a general method that is relevant, then when presented with a little LaTeX representing your problem, it would output a method.  You may even be able to get it to show you how to simplify your problem, some large percent of the time.


I wouldn't be the least bit surprised if it worked.  


It would speed up research a lot.  




The more I think about it, the more I think large language models are going to have a profound effect on jobs.  They're going to have to be shrunk down some, so that people can actually use them on work computers, and maybe improved some more, to increase the accuracy of the outputs.


It's going to be incredibly disruptive, especially for those low-skill programming jobs.




This is really funny:  OpenAI just released an API to do some of the things I listed above.  Here is a short video showing the one with "Text --> Linux Command":

We're releasing an API for accessing new AI models developed by OpenAI. You can "program" the API in natural language with just a few examples of your task. See how companies are using the API today, or join our waitlist

Sounds like GPT-3.





I'm particularly excited for AI Channels (https://aichannels.app/), simply for its commercial usability.  Brilliance, pure brilliance in its infancy.


MMW:  It will be a greater hit than either AI Dungeon or AI Dungeon 2.


I'll definitely try to get into the beta.




Speaking of AI Dungeon, its creator wrote on the Machine Learning reddit:

We've been using their API the last several weeks and it's got way better capability than anything open source. It's extremely impressive.




Here is a blog post about the AI Channels project, by its creator:
He's completely blown away by how amazing GPT-3 is.  
For example, he asked it to produce Neil Gaiman's response to a question:

Andrew Mayne: How do I get great ideas for science fiction stories?

Neil Gaiman: I like to watch alien films, and I like to ask myself - what if? What if you went to Stockholm and you got mugged by Darth Vader?

And he says that sounds so on-the-nose for Gaiman.

He also asked the model to imitate Stephen King:

Stephen, how do I build tension?

Stephen King: Here's how I build it. I take bits of information from the background and drop it in front of you, but don't complete the full story immediately. So, for example, you might find a corpse and you're drawn to it, but there's a sense of not knowing what the hell it is in front of you. You don't know that someone has killed this person and that person doesn't know what happened.

The most amazing example of all, which he wrote a tweet about, is where he asked Jules Verne:

Andrew Mayne: Jules Verne, your books often celebrated the idea of exploration.

Jules Verne: This is true. I think part of that is the spirit of the French as explorers, but also that when I began to write we'd reached almost all of the earthly frontiers and I was looking for new ones, whether they be the center of the Earth of the moon.

(I think he meant "or on the moon".)


Here's an even more amazing example from the AI Weirdness project:


Me: Hello, how are you?

Bot (wrote this line for it): Foolish mortal! I am the god of the internet.

Me: Prove it.

Bot: No need to prove it, you will bow down and worship me and the wonders of the internet.

Me: What if I don't?

Bot: Then I shall unleash a bot attack on your computer.

Me: Go ahead and try it.

Bot: But I told you who I was! You must worship me!

Me: Let's see the bot attack first.

Bot: Oh, for the love of... Fine! (insert name of random bot here) attacks you!

Me: That... is not a bot attack. Literally nothing happened.

Bot: ... Seriously?

Me: Yeah dude, sorry.
