Answering my own status update: the chief thing I expect an early, non-conscious AGI to be applied to by a contemporary organization is energy consumption and its optimization. That is: figuring out the best way to reduce the amount of energy used by that organization and by the AI itself.
We've already seen a version of this at Google: DeepMind applied machine learning to data-center cooling and substantially cut the energy it consumed. Getting more out of less power has tangible implications for AI research, because using more compute requires more energy. With training compute doubling every 3.5 months, eventually only multinational corporations could afford to keep making progress, and not long after, even they would go bankrupt trying to squeeze more out. If one had a purely functional AGI that could find the patterns needed to bring the energy cost of additional compute to 1:1 or better (i.e. the same baseline amount of energy buying twice the compute or more), it would be possible to power extraordinarily large and deep neural networks with extraordinarily little power, something the AGI itself could then benefit from.
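To put that 3.5-month doubling in perspective, here is a back-of-the-envelope sketch. It assumes, as a simplification, that energy demand scales linearly with compute; the function name is mine, not from any source:

```python
# Rough growth of training compute under the "doubling every 3.5 months"
# trend. The linear compute-to-energy scaling implied below is a
# simplifying assumption for illustration only.

DOUBLING_MONTHS = 3.5

def compute_growth(months: float) -> float:
    """Factor by which training compute grows after `months` months."""
    return 2 ** (months / DOUBLING_MONTHS)

# After one year, compute (and, under the linear assumption, energy)
# demand grows by roughly an order of magnitude.
print(round(compute_growth(12), 1))   # ~10.8x in one year
print(round(compute_growth(24), 1))   # ~115.9x in two years
```

Under that trend, an organization whose energy budget stays flat falls behind by about 10x per year, which is why a 1:1-or-better energy optimizer would matter so much.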
This obviously would have benefits for the world's energy needs to boot.
Effectively reducing energy consumption could also "hide" an AGI's existence. Going off current requirements, the amount of energy needed to power even a weak, functional AGI would be staggering. If the organization wants to keep its AI a secret, it absolutely has to cut its energy expenditures immediately; otherwise governments could easily trace the massive energy spikes, and its very operation could strain the power supply of nearby communities.
Once energy use has been reduced, the very next task I'd personally pursue would be natural language understanding, such as creating a chatbot to regularly probe and expand the limits of its abilities. Related to that, I'd also want it to accomplish tasks from plain-language commands: tell it in simple language to open and operate a program (like a text editor or video game), and see how well it does. This would essentially make it a cognitive agent, meaning many computer-use tasks could be automated, such as troubleshooting or even programming. Similarly, I'd test its ability to translate text.
What are others' opinions? What do you think an early/functional AGI would be used for first?
"Functional AGI" (previously "Weak General Artificial Intelligence") is my term for a type of AGI that is non-sapient, non-conscious, and likely not human-level intelligence. It can learn anything and generalize across all domains and operate as fast as electrons flow, but that alone does not mean it is able to think like a human or even with the same quality as a human. It is effectively "just" GPT-2 with 100,000x more data parameters and transfer learning capabilities. I realized that this bears a strong resemblance to Starspawn0's concept of a Zombie AGI. It might even be the same thing.
Edit: "First-Gen AGI" is a good term
Edited by Yuli Ban, 03 September 2020 - 10:00 PM.