
What an early AGI might be used for

AGI AI artificial intelligence deep learning

7 replies to this topic

#1
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,337 posts
  • Location: New Orleans, LA

Answering my own status update: the chief thing I expect a contemporary organization to apply an early, non-conscious AGI to would have to be energy consumption and the optimization thereof. That is: figuring out the best way to reduce the amount of energy used by that organization and by the AI itself.

We've seen this with Google: DeepMind's machine learning cut the energy used to cool its data centres by roughly 40%. Finding a way to get more out of less power has tangible implications for AI research, because using more compute requires more energy. With the compute used in the largest training runs doubling roughly every 3.4 months (per OpenAI's 2018 analysis), eventually only multinational corporations could keep making progress, and not long after, even they would go bankrupt trying to squeeze out more. If one had a purely functional AGI find the patterns needed to decouple energy use from growing compute (i.e. using the same baseline amount of energy for twice the compute or more), it would be possible to power extraordinarily large and deep neural networks with extraordinarily little power, a gain the AGI itself would then benefit from.
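A quick back-of-envelope calculation makes the scale of the problem concrete (a minimal sketch in Python; the 3.4-month doubling time is OpenAI's published figure, while the starting energy bill is a purely illustrative assumption):

# If training compute doubles every 3.4 months and energy efficiency
# (flops per watt) stays flat, the energy bill grows as fast as compute.
DOUBLING_MONTHS = 3.4     # OpenAI "AI and Compute" (2018) estimate
start_bill_usd = 1e6      # hypothetical $1M/year energy bill today

for years in (1, 3, 5):
    factor = 2 ** (12 * years / DOUBLING_MONTHS)
    print(f"after {years} yr: compute x{factor:,.0f}, energy bill ~${start_bill_usd * factor:,.0f}")

On those assumptions, five years of flat efficiency turns a million-dollar bill into a hundred-billion-dollar one, which is the bankruptcy point above.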

This obviously would have benefits for the world's energy needs to boot.

 

Being able to effectively reduce energy consumption could also "hide" an AGI's existence. Just going off of current requirements, the amount of energy needed to power even a weak, functional AGI would be staggering. If an organization wants to keep its AI a secret, it absolutely has to figure out how to reduce its energy expenditures immediately; otherwise governments could easily trace the massive energy spikes, and its very operation could strain the grid in nearby communities.

 

 

Once energy use has been reduced, the very next task I'd personally seek out would be natural language understanding, such as creating a chatbot to regularly prod and expand the limits of its abilities. Related to that, I'd also test whether it can accomplish certain tasks by understanding commands: tell it in simple language to open and operate a program (like a text editor or a video game) and see how well it does. This would essentially make it a cognitive agent, which means it could automate many tasks involved in using a computer, such as troubleshooting or even programming. Similarly, I'd test its ability to translate text.
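To make the "operate a program from plain language" test concrete, here is a minimal sketch in Python (the command table and program names are hypothetical stand-ins; a real cognitive agent would have the model itself ground free-form language rather than use a lookup):

import subprocess

# Hypothetical mapping from plain-language instructions to program launches.
# In a real system the model would ground arbitrary phrasings itself.
ACTIONS = {
    "open a text editor": ["gedit"],
    "open a video game": ["minetest"],
}

def run_instruction(instruction: str) -> None:
    """Launch the program named by a plain-language instruction."""
    argv = ACTIONS.get(instruction.strip().lower())
    if argv is None:
        print(f"Unrecognized instruction: {instruction!r}")
        return
    subprocess.Popen(argv)  # start the program without blocking

run_instruction("Open a text editor")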

 

What are others' opinions? What do you think an early/functional AGI would be used for first?

 

 

_____________________________________________________________

 

 

"Functional AGI" (previously "Weak General Artificial Intelligence") is my term for a type of AGI that is non-sapient, non-conscious, and likely not human-level intelligence. It can learn anything and generalize across all domains and operate as fast as electrons flow, but that alone does not mean it is able to think like a human or even with the same quality as a human. It is effectively "just" GPT-2 with 100,000x more data parameters and transfer learning capabilities. I realized that this bears a strong resemblance to Starspawn0's concept of a Zombie AGI.  It might even be the same thing.

 

Edit: "First-Gen AGI" is a good term


Edited by Yuli Ban, 03 September 2020 - 10:00 PM.

And remember my friend, future events such as these will affect you in the future.


#2
tomasth

    Member

  • Members
  • 309 posts

Yuli Ban said: "accomplish certain tasks by understanding commands: tell it in simple language to open and operate a program"?

Reminds me of this

 

Grounding Language in Play

https://twitter.com/...437492545871873



#3
starspawn0

    Member

  • Members
  • 2,096 posts

I think any organization that has a low-grade AGI would limit its use.  For the most part, they would keep it in a research environment, and spend a fair amount of time testing it for strengths and weaknesses and comparing it with other systems.  Something like improving energy consumption would probably be better handled by a narrow AI, or even by an optimization system that isn't AI at all (see the sketch below).
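As an illustration of that last point (a minimal sketch; the energy model, coefficients, and setpoint bounds are invented for the example, not anyone's real system), a classical numerical optimizer with no learning in it can already tune something like a data-centre cooling setpoint:

from scipy.optimize import minimize_scalar

# Toy energy model: total power (kW) as a function of cooling setpoint (deg C).
# Coefficients are purely illustrative.
def total_power_kw(setpoint_c: float) -> float:
    cooling = 900.0 * (30.0 - setpoint_c)               # chillers work harder at low setpoints
    it_load = 2000.0 + 75.0 * (setpoint_c - 20.0) ** 2  # fan/throttling losses grow when warm
    return cooling + it_load

# Bounded scalar minimization: classic optimization, no AI involved.
result = minimize_scalar(total_power_kw, bounds=(18.0, 30.0), method="bounded")
print(f"best setpoint: {result.x:.1f} C at ~{result.fun:,.0f} kW")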

 

The reasons for limiting its use include:

 

* The more people that know about it, the more likely it is to "leak" to the press and competitors.

 

* A general system like that will be a "jack of all trades, but master of none"; so, solving individual tasks will be better suited to a narrower system.

 

 

One place it might be helpful is in brainstorming sessions and theory-testing that eventually result in discoveries, patents, and products.  That's a fairly high-level task that a narrower system couldn't help with; and it would be limited to a small number of scientists, so it would be less likely to leak.

 

On this latter point, GPT-2 does something like that, doesn't it?  You can input a prompt, and it will help you continue it, even with tasks as complicated as writing code.  People see this as evidence that "we don't need the human brain to build the AI".  But they are forgetting something very important: where did the data used to train the AI come from?  It came from humans!  And there isn't enough of it, in all the GitHub repositories, to replace programmers or solve programming-competition problems... at least not yet.  In fact, most of the exciting developments in AI over the last decade required human input:

 

* ImageNet:  human-labeled images.

 

* GPT-2 and other language models:  massive human-generated text.

 

* JukeBox:  where did the audio file training data come from?  Humans, of course.

 

* Voice recognition:  requires massive human-annotated audio.

 

* Image captioning:  also requires annotations.

 

* Machine translation:  human text --> human text.

 

* Virtual assistants:  it's all human data, at every stage of the pipeline.

 

* Chatbots:  human, human, human.

 

In the places where you don't need humans, such as AlphaZero, truly gargantuan amounts of computing power are often needed for each narrow task; solving all tasks that way would take many orders of magnitude more compute than anyone can afford.

 

Even if OpenAI or another company succeeds in building an AGI, it won't behave in very human-like ways: it may not give human-understandable answers (which are important if the goal is "understanding"), share our values, or even fully understand human language unless a human element is added.  That would make it "sub-optimal" in terms of improving the happiness and welfare of humanity.  The human element can be added either through weak statistical correlations from trillions and trillions of words of text and millions or billions of human-curated videos (orders of magnitude more than has been attempted to date), or in the form of brain data plus much less text and other media.  The latter is going to be much more efficient, probably at least 10x or 100x, in terms of metrics like "recording time".



#4
Mr.posthuman

    Member

  • Members
  • 118 posts
Using itself to improve its own hardware and software until it becomes a superintelligent ASI.

#5
TranscendingGod

    2020 is here; I still suck

  • Members
  • 2,038 posts

An early AGI... umm... for anything that the other 7 billion "GIs" are currently used for? 


The growth of computation is doubly exponential growth.

#6
funkervogt

    Member

  • Members
  • 1,084 posts

 

 

Yuli Ban said: "Just going off of current requirements, the amount of energy needed to power even a weak, functional AGI would be staggering."

 

But "Koomey's Law" says that the energy-efficiency of computers is exponentially improving. 

https://en.wikipedia...ki/Koomey's_law

 

I don't have time to do the math on this, but I bet it means that an AGI with human-brain levels of computation (10^16 flops) wouldn't require gargantuan amounts of electricity in, say, 2050, which I think is around the time we'll build the first AGI. The project's energy signature would be so small that it would be undetectable to outsiders.
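Doing that math roughly (a minimal sketch in Python; the 2020 efficiency figure and the Koomey doubling period are ballpark assumptions I'm supplying, not numbers from the post):

# Power needed for a 1e16-flops machine if efficiency follows Koomey's law.
BRAIN_FLOPS = 1e16          # the human-brain estimate above
FLOPS_PER_WATT_2020 = 1e10  # assumed: ~10 Gflops/W for efficient 2020 hardware
DOUBLING_YEARS = 2.6        # assumed: Koomey's revised (post-2000) doubling period

for year in (2020, 2035, 2050):
    efficiency = FLOPS_PER_WATT_2020 * 2 ** ((year - 2020) / DOUBLING_YEARS)
    print(f"{year}: ~{BRAIN_FLOPS / efficiency:,.0f} W")

On those assumptions the draw falls from roughly a megawatt in 2020 to a few hundred watts by 2050, which would indeed be too small an energy signature for outsiders to notice.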



#7
TranscendingGod

    2020 is here; I still suck

  • Members
  • 2,038 posts

 

 

 

funkervogt said:

    Yuli Ban said: "Just going off of current requirements, the amount of energy needed to power even a weak, functional AGI would be staggering."

    But "Koomey's Law" says that the energy-efficiency of computers is exponentially improving.

    https://en.wikipedia...ki/Koomey's_law

    I don't have time to do the math on this, but I bet it means that an AGI with human-brain levels of computation (10^16 flops) wouldn't require gargantuan amounts of electricity in, say, 2050, which I think is around the time we'll build the first AGI. The project's energy signature would be so small that it would be undetectable to outsiders.

 

Wdym? Microsoft just built one of the top five supercomputers in the world, which is enough to create an AGI with human levels of computation. Why 2050?


The growth of computation is doubly exponential growth.

#8
Metalane

    Member

  • Members
  • 34 posts

 

 

 

funkervogt said:

    Yuli Ban said: "Just going off of current requirements, the amount of energy needed to power even a weak, functional AGI would be staggering."

    But "Koomey's Law" says that the energy-efficiency of computers is exponentially improving.

    https://en.wikipedia...ki/Koomey's_law

    I don't have time to do the math on this, but I bet it means that an AGI with human-brain levels of computation (10^16 flops) wouldn't require gargantuan amounts of electricity in, say, 2050, which I think is around the time we'll build the first AGI. The project's energy signature would be so small that it would be undetectable to outsiders.

 

Don't forget the efficiency of the program itself. With a sufficiently efficient algorithm, a practical AGI might even be possible on today's computers.






