I was writing something in PhoenixRu's "your utopia" essay earlier that relates to this: the idea that we're living in "sci-fi" times is no longer a joke or something you have to seriously stretch, and people are starting to seriously return to the idea of high-tech solutions to otherwise "mundane" social, political, economic, ecological, and esoteric problems.
That's not to say most people are particularly attuned to these sorts of things. Indeed, in some cases, it might even be better if most of the high-tech work were left in the background so that people don't dwell on it, because the idea of directly interfering with or surpassing nature seems profane to a lot of people, and not just luddites, hippies, and artsy types either. For a lot of people, the idea of not working for a living sounds unnatural, but it's been talked about so much in the past decade that even the normies have at least dwelled on it once or twice. Likewise, it has not escaped notice that tech is still getting better at an accelerating rate, causing plenty of people to wonder if the experiences they had as children, the experiences they felt were real and honest and true, are even possible for the next generation to have. But most people don't know how quickly tech is improving. Indeed, plenty still operate under the mindset that it's stagnated entirely, or at least slowed since the late '90s, and that's why they'll still argue that things will remain the same for a hundred more years.
Still... big names are talking about strange things. Billionaire Elon Musk has talked extensively about AI. Bill Gates is concerned about it. Stephen Hawking was concerned enough to mention the threat before he died. And you see these videos of things like Sophia talking to people, and it really makes you wonder. You don't have the technical knowledge to know it's all scripted, or that said icons were talking about something a bit more "out there" than what exists nowadays. But people smarter than you say there's nothing to be worried about, and let's be fair, Siri is still pretty dumb.
But what about deepfakes? Haven't you seen that computers can make Obama and Trump say silly stuff? Or take Putin's face and put it on yours? Isn't that dangerous? Could it be used to start World War III or get access to your bank account? So scary! This AI stuff is getting out of hand! And what about that AI that beat the Chinese guy at the board game? It was supposedly an ancient game that computers weren't going to be able to beat humans at for a whole century, and it did it this decade! Crazy!! Oh, and cars are starting to drive themselves! Some kids even assume that all new cars can do it and get disappointed when they can't!
All that I wrote is really peripheral to the main point: there is actual discussion going on in the mainstream. It's not quite as nuanced as it is in futurist circles (it's often not even nuanced there either), but there is discussion. The big media networks will air segments where some talking-head personality talks about how amazing AI is getting but throws in lines about its limitations. There's always at least one link to some AI application in your Facebook feed, even if you're a Lifetime-and-A&E-watching mother who last cared about computer science in your high school computer lab class in the '80s. It's inescapable and part of modern discourse; 8 out of 10 people you meet will have some opinion on AI, especially if they're under 30 and keep up with trends. You can bring up the concept of automation in many school or college classes and start a discussion about its feasibility. And you can even hold serious conversations with professionals about the prospect of near-future AI (even if they're still talking like 2014-era deep learning is the state of the art) and not be ridiculed.
This wasn't the case a decade ago for any group except futurists.
I've talked before about the time in the summer of 2010 when I got into a debate on YouTube with a few people who clearly knew what they were talking about and were fairly serious people (i.e., not standard YouTube commenters or trolls) over the issue of why communism does or doesn't work. My Glenn Beck-backed take at the time was that socialism & communism could only work if there were robots doing all the labor. I didn't even claim that this was particularly near; I outright said that "we'll be able to do socialism properly in about a century."
Their professionalism evaporated quickly, and out came the "Star Trek" insults.
It wasn't much better on other sites either. Read through Reddit circa 2009-2010. Any discussion of artificial intelligence becoming capable within an imminent time frame (i.e., less than 20 years) was cast as lunacy and "Kurzweilian science fiction".
No, seriously, look for yourself. Set the end date to January 1st of either 2010 or 2011, and marvel at how little-discussed the prospect of near-future AI and automation was. Whenever it was mentioned, it was slapped down as pie-in-the-sky idealism driven by reading Kurzweil and watching science fiction a little too much. Driverless cars were barely a blip. And you have to remember something critical: Reddit circa 2005 to 2010 was the nerdy comp-sci student website that just happened to have other communities attached. /r/Programming was one of the biggest subreddits for several years, if I recall correctly, so it's not like these were people who didn't care about technology.
It might've been that the seeming lack of extraordinary, tangible progress in the 2000s burnt people out, considering a lot of people in the '80s and '90s assumed that the year 2000 would magically be a sci-fi paradise. To that end, less discussion of artificial intelligence & its effects meant fewer people cared about it in the mainstream, and thus fewer people took it seriously. After all, we were still coming off the Second AI Winter, and there was no clear boom period following it.
It's changed quite a lot. Back in 2010, it could be difficult to even talk about concepts and projections we casually discuss today without being considered hopelessly optimistic. People would assume that AlphaGo- & GPT-3-tier AI was decades away at minimum and that advanced automation was a problem for our great-grandchildren's great-grandchildren. Claiming that anything we've done by 2020 would be accomplished any time before 2050 would probably get you funny looks, or at best a half-hearted "Yeah, things are changing pretty quickly."
Nowadays, AI is an extraordinarily huge topic that's being taken seriously in all corners. Indeed, the main reason people might even be skeptical is that they're operating on outdated perceptions of, or information about, the state of the art.
Of course, this is all a layman's perspective. I'm sure Starspawn0 would have a better take on the perspective of those actually in the field. Though from what I've heard and deduced just from reading posts by comp-sci & data-sci types, it probably isn't that much better.