How has the perception of AI changed over the past decade?

Tags: AGI, Singularity, Machine Learning


#1
Metalane
Member · 34 posts

I believe I first discovered this site all the way back in 2011, and since then my interest in futurology has skyrocketed. It is also clear that in this past decade futurology has entered the zeitgeist in an unprecedented fashion. It is safe to assume that optimism (such as the singularity being a "when" as opposed to an "if") has also sprouted and spread through many communities tackling this matter.

 

How do you think the perception of AI has transformed over the past decade? This could be the public perception or the perception from a scientific standpoint.



#2
Yuli Ban
Born Again Singularitarian · Moderator · 22,318 posts · New Orleans, LA

I was writing something in PhoenixRu's "your utopia essay" earlier that kinda related to this: the idea that we're living in "sci-fi" times is no longer a joke or something you have to seriously stretch, and people are starting to seriously return to the idea of high-tech solutions to otherwise "mundane" social, political, economic, ecological, and esoteric problems. 

 

That's not to say most people are particularly attuned to these sorts of things. Indeed, in some cases, it might even be better if most of the high-tech work were left in the background so that people don't seriously dwell on it, because the idea of directly interfering with or surpassing nature seems profane to a lot of people. And not just luddites, hippies, and artsy types, either. For a lot of people, the idea of not working for a living sounds unnatural, but it's been talked about so much in the past decade that even the normies have at least dwelled on it once or twice. Likewise, it has not escaped notice that tech is still getting better at an accelerated rate, causing plenty of people to wonder if the experiences they had as a child, the experiences they felt were real and honest and true, are even possible for the next generation to experience. But most people don't know how quickly tech is improving. Indeed, plenty still operate under the mindset that it's stagnated entirely, or at least slowed since the late '90s, and that's why they'll still argue that things will remain the same for a hundred more years.

 

Still... Big names are talking about strange things. Billionaire Elon Musk has talked extensively about AI. Bill Gates is concerned by it. Stephen Hawking was concerned enough to mention the threat before he died. And you see these videos of things like Sophia talking to people, and it really makes you wonder. You don't have the technical knowledge to know it's all scripted and that said icons were talking about something a bit more "out there" than what exists nowadays. But people smarter than you say there's nothing to be worried about, and let's be fair, Siri is still pretty dumb.

 

But what about deepfakes? Haven't you seen that computers can make Obama and Trump say silly stuff? Or take Putin's face and put it on yours? Isn't that dangerous? Could it be used to start World War III or get access to your bank account? So scary! This AI stuff is getting out of hand! And what about that AI that beat the Chinese guy at the board game? It was supposedly an ancient game that computers weren't going to be able to win at for a whole century, and it happened this decade! Crazy!! Oh, and cars are starting to drive themselves! Some kids even assume that all new cars can do it and get disappointed if they can't!

 

All that I wrote is really peripheral to the main point: there is actual discussion going on in the mainstream. It's not quite as nuanced as it is in futurist circles (it's often not even nuanced there either), but there is discussion. The big media networks will air segments where some talking-head personality talks about how amazing AI is getting but throws in lines about its limitations. There's always at least one link to some AI application in your Facebook feed, even if you're a Lifetime-and-A&E-watching mother who last cared about computer science in your high school computer lab class in the '80s. It's inescapable and part of modern discourse; 8 out of 10 people you meet will have some opinion on AI, especially if they're under 30 and keep up with trends. You can bring up the concept of automation in many school or college classes and start a discussion about its feasibility. And you can even hold serious conversations with professionals about the prospect of near-future AI (even if they're still talking like 2014-era deep learning is the state of the art) and not be ridiculed.

 

This wasn't the case a decade ago for any group except futurists.

 

I've talked before about the time in the summer of 2010 that I got into a debate on YouTube with a few people who clearly knew what they were talking about and were fairly serious people (i.e. not standard YouTube commenters or trolls) over the issue of why communism does or doesn't work, and my Glenn Beck-backed take on it at the time was that socialism & communism could only work if there were robots doing all labor. I didn't even claim that this was particularly near; I outright mentioned that "we'll be able to do socialism properly in about a century."

And the professionalism they had evaporated quickly, and out came the "Star Trek" insults. 

It wasn't much better on other sites either. Read through Reddit circa 2009-2010. Any discussion of capable artificial intelligence arriving in an imminent time frame (i.e. less than 20 years) was cast as lunacy and "Kurzweilian science fiction".

No, seriously, look for yourself. Search Reddit with the end-date set to either January 1st, 2010 or 2011, and marvel at how little-discussed the prospect of near-future AI and automation was. Whenever it was mentioned, it was slapped down as pie-in-the-sky idealism driven by reading Kurzweil and watching science fiction just a little too much. Driverless cars were barely a blip. And you have to remember something critical: Reddit circa 2005 to 2010 was the nerdy, comp-sci student website that just happened to have other subfields attached. /r/Programming was one of the biggest subreddits for several years, if I recall correctly, so it's not like these were people who didn't care about technology.

 

It might've been that the seeming lack of extraordinary tangible progress in the 2000s burnt people out, considering a lot of people in the '80s and '90s assumed that the year 2000 would magically be a sci-fi paradise. To that end, less discussion of artificial intelligence & its effects meant fewer people cared about it in the mainstream, and thus fewer people took it seriously. After all, we were still coming off the Second AI Winter, and there was no clear boom period to follow it.

 

Truncated version

 

It's changed quite a lot. Back in 2010, it could be difficult to even talk about concepts and projections we casually discuss today without being considered hopelessly optimistic. People would assume that AlphaGo & GPT-3 tier AI was decades away at minimum and that advanced automation was a problem for our great-grandchildren's great-grandchildren. Claiming that anything we've done by 2020 would be accomplished any time before 2050 would probably get you funny looks, or the occasional half-hearted "Yeah, things are changing pretty quickly."

Nowadays, AI is an extraordinarily huge topic that's being taken seriously in all corners. Indeed, the main reason people might even be skeptical is that they're operating on outdated perceptions of, or information about, the state of the art.

 

Of course, this is all a layman's perspective. I'm sure Starspawn0 would have a better take on the perspective of those actually in the field. Though from what I've heard and deduced just from reading posts from comp-sci & data-sci types, it probably isn't that much better.


And remember my friend, future events such as these will affect you in the future.


#3
Metalane
Member · 34 posts


Thank you so much for that write-up! You made many excellent points, and I pretty much wholeheartedly agree. Artificial intelligence is something that truly was exclusive to the world of science fiction mere decades ago, but now that it has melted into the zeitgeist, the world is reacting in all sorts of colorful ways. You have the pessimists (potentially "realists") who all but deny that anything good could come out of this, opposed by the optimists who are also potentially fearful, but who still allow the sheer unpredictability of the tech world to grip them. Ah, I know I'm probably sounding like a poet right now, but I don't know how else to express my emotions while discussing this. To me at least, the tech world really is poetry in action; it's beautiful.

 

The dangers that these technologies pose are alarming; however, akin to Kurzweil's beliefs, I think we will work together with our artificial counterparts and build a better world. A world that has the audacity to counter things such as climate change, politics, war, etc.

 

Also, Yuli-Ban, if you don't mind, there's one more thing I would like to ask you regarding GPT-3...



#4
caltrek
Member · 11,692 posts

How has your perception of AI changed over the past decade?

 

To be honest, not all that much. I suppose that's because many of my thoughts concerning AI derive from my work experiences. I retired halfway through the decade, so the only real stimulus to thought on that issue has come from Yuli Ban, Starspawn0, and others on this forum. Not that their observations have not been good or original; they have been both. It's just that I often end up relating them to my own professional experience. I should add that the experience in question was with what I suppose you all would call narrow AI. Not just at work, but in recreation, such as playing (and losing at) chess with computers. AGI is something that I have no experience with. However, I would point out that AGI seems to me to be an aspiration to make computers think like humans. All well and good.

 

Again, my problem comes with the idea of substituting AGI for human intelligence, and with the concept that such AGI will be "better" than humans. By what yardstick?

 

I had a pet cat that I often think of as a genius among cats. Now, I am sure AGI will someday exceed the gross computational power contained within that cat's brain. Still, I think I would prefer the companionship of such a cat to AGI. Why?

 

Because of the obvious intelligent use to which that cat put its brain power. It communicated in ways that have only been approached by subsequent cats, hence my insistence on the "genius" label.

 

...and I am not just talking about communicating on an emotional level. Put another way, that cat actually and truly seemed to understand symbolic communication. Better than many humans that I know...


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls





