Are we approaching an AI winter?


3 replies to this topic

#1
Metalane

    Member

  • Members
  • 32 posts

The 2010s saw an eruption of progress in neural networks, machine learning, GANs, and more. These developments blew away the researchers who follow these fields, the companies working on the technology themselves, and even the average layperson with little to no experience in any of it. The march toward a seemingly inevitable AGI/ASI looks bright and promising (even as the debate continues over what "AGI" or "ASI" actually means); however, obstacles such as funding, gaps in research, ethical issues, and more challenge this progress.

 

Now, the path to AGI may take many forms, and many doubt that current methods will get us there. BCIs, for example, could open up a whole new explosion of research and progress. In fact, much as with nanotechnology, I feel that news of breakthroughs will seemingly come out of nowhere and escalate from nothing to something very quickly. Awareness of these matters among the media and the general public has also risen in recent years, which I believe is great.

 

Essentially, my point is that current progress will probably only get us so far, and I fear there may be a gap in time between one wave of development and the next. A loss of momentum could damage public perception, and people will go back to assuming these technologies are still a "long time away", even though they tend to grip us when we're at our most doubtful.

 

So, do any of you think we'll be slowing down anytime soon?



#2
sasuke2490

    veteran gamer

  • Members
  • 477 posts

No. There is too much potential in data mining customers for companies not to put effort into making better, more refined algorithms.




#3
Zaphod

    Esteemed Member

  • Members
  • 852 posts
  • Location: UK

With the disclaimer that I am not an AI researcher or expert, I would say no.

 

The "AI winter" of the 1970's was caused by a combination of factors, but while advances were being made in research, computational restraints meant that real applications were not realised. This led to a lack of research and funding. The opposite is true today - there has never been more funding or research into AI, with a large proportion of the brightest minds going into AI research. The largest and most powerful companies are nearly all tech companies with large AI divisions (e.g. Google's DeepMind). Many countries are making large scale investments into AI research.

 

The question is then more whether we have 1) reached a theoretical dead end in terms of the novel AI approaches/software required for AGI/ASI, or 2) hit computational bottlenecks. The former is difficult to predict; it's true that some of the deep learning approaches that have been so successful over the last decade may have their limits, and we may need further breakthroughs to make truly general AI. The latter may apply to current machine learning, where increasing the number of parameters requires increasingly large amounts of computational power. In addition, some have suggested that Moore's law has slowed and may stall in the next few years unless there is some unforeseen breakthrough, but that does not mean the scale of computation cannot still be increased.

 

Even if you took the worst-case prediction that both hardware and software development slow to a halt, it would still be strange to call it an AI winter in my view. For one thing, we have scarcely scratched the surface of what can be achieved by applying current machine learning methods across different domains. There simply aren't enough AI experts and software developers to keep up and generate novel applications, so we will see a proliferation of new applications even if research advancement ceased.

 

The other vital factor is the coalescence of disciplines, which could generate more powerful general AI. Until recently, neuroscience and computer science had much less overlap. Now we are at the very nascent stages of brain-computer interfaces (BCIs), and there is much that could be achieved by combining approaches from various disciplines even if those disciplines see few advances within their own fields. The largest and most influential AI companies (and new startups) combine neuroscience and computer science expertise (e.g. DeepMind, Neuralink, etc.).

 

All in all, it seems very unlikely that we are approaching an AI winter. In my view it's more probable that we are approaching an AI summer, where all the fruits of the recent AI spring are beginning to ripen.



#4
Metalane

    Member

  • Members
  • 32 posts

Thank you so much for your response! I want to mention that the very definition of AGI is still debated, and that debate itself may delay our arriving at a "genuine" AGI. Do we need consciousness for AGI to be achieved, for example? Does it need the ability to question, ponder, and explain itself in order to be considered AGI? Only time will tell, I suppose. In fact, if I recall correctly, Ray Kurzweil proposed that if an AI *tells* us it's conscious, we'll just have to take its word for it.

 

As far as hardware goes, I don't think we have to worry. Hell, an AGI could probably be built with today's hardware for all we know. Granted, eventually we'll want an AGI running on nanotechnology and other more efficient substrates. Either way, I'm very confident that Moore's Law will pick up again this decade, and might even progress faster than the classic "2x the transistors every 2 years" rate. 3D chips/graphene, here we come!
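To put that rate in perspective, here's a minimal back-of-the-envelope sketch (my own illustration, not something from the thread); the 10-year horizon and the strict 2-year doubling period are assumptions made purely for the arithmetic:

```python
# Illustrative only: compound growth implied by "2x the transistors every 2 years".
doubling_period_years = 2   # assumed Moore's Law doubling period
horizon_years = 10          # assumed horizon, roughly one decade

growth_factor = 2 ** (horizon_years / doubling_period_years)
print(f"After {horizon_years} years: ~{growth_factor:.0f}x the transistor count")
# -> After 10 years: ~32x the transistor count
```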

 

Anyway, even if we don't achieve anything close to AGI, these upcoming years will still surely blow us away: 3D printers, BCI helmets, AI passing the Turing Test (not AGI, and arguably not even a good measure of verbal communication), AR/MR/VR glasses, etc.!





