
What 2029 will look like



#1
starspawn0

    Member

  • Members
  • 1,119 posts

Ok, so here is what I think the world will look like in 2029. I decided to take an “optimistic view”; I could have taken a more conservative view, but that would make it boring. I fully expect some large percent of the things I write won’t come to pass.

Here’s what the typical day of someone (not a yuppie, not a rich person, not a poor person) in 2029 will be like; for the sake of brevity, I will refer to them as “you”: you wake up in the morning, and are greeted by your virtual assistant:

Assistant: Good morning. It’s time to get ready for work. Based on what I heard during your day yesterday, and from your emails, I see you have a meeting at 9:30 to discuss the new construction plans. Would you like me to summarize the key points that would come up?

You: That’s ok. We discussed it yesterday. Can you tell me how long the meeting will last?

Assistant: Probably about 1 hour. That’s how long the last meeting lasted when you discussed similar material.

You: Hmm… it’s going to be a boring day.

Assistant: In that case, you definitely should have some coffee to perk you up. I could make some for you.

You: Yeah, go ahead.

And the Assistant activates your kitchen robot and coffeemaker. The robot places a cup in the machine, and starts the process. It then goes to the freezer, pulls out a frozen bagel, and pops it into the microwave. After it is heated, it pulls it out, smears on some cream cheese, and puts it on a plate on the dinner table.

The robot is vaguely humanoid, but very rudimentary; it is, however, reasonably efficient at completing small tasks, like moving objects, pushing buttons, and fetching items from the fridge and cabinet. It can also clean off a table, vacuum the floors, and do some very basic tidying-up. It can’t make your bed, do your laundry, fold clothes, clean your toilet, fix plumbing problems, or do many other things; though, there are more high-end models that can do many of these. The bot cost about $10,000; though, the most impressive part of it is its software brain. Getting robots to the point where they could do all those basic tasks was a Herculean effort, comparable in difficulty to building high-performing self-driving cars; billions of dollars were involved, massive amounts of computation were devoted to “cloud robotics”, and millions of cumulative man-hours of work were put in by thousands of engineers. All those years of improvements in computer vision, navigation, planning, and grasping / dexterous manipulation added up to what would have seemed like pure science fiction just 10 years ago -- but, now, people take it for granted.

After you eat breakfast, your assistant asks on the way out the door, “You’re getting low on coffee. I could order some for you, if you want. Would you like me to?” You say “yes”, and it places the order. An hour later, while you are away at work, a little sidewalk robot wheels it up to your front porch and drops it off, and your house robot collects it and puts it away.

The assistant is so good at these basic conversations that, sometimes, you wonder if it is alive. You can even have some basic conversations about movies, books, and people / celebrities, and it seems to always have intelligent things to say. It does sometimes screw up, and say things that no human would ever utter; but it does seem like, with another couple years of improvement, it will be passing a full Turing Test. It can even answer questions about all your old emails and books as well as a human can. However, if you ask it something that requires being inventive or applying sophisticated reasoning, it just says, “I’m sorry, I can’t do that.” Sometimes it’s hard to tell what it can and cannot do.

On your way out the door in the morning, you hop into your electric car, and notice that your route is already planned. The car starts itself, and drives you all the way there -- you don’t have to even touch the steering wheel the whole time. You are lucky to live in one of the geo-fenced areas where your car can work at full autonomy, even in heavy rain and with lots of pedestrians milling about. About 5% of the cars you see in your neighborhood are similarly electric; and most of the newer cars have at least Level 3 self-driving features. Yours is a Level 4 car. It cost about $35,000.

On your way to work, you notice lots of drones flitting overhead. Some are news crew drones, some are for the police, some are responding to emergencies, some are delivering packages -- all manner of uses.

You also notice many different types of robots on the street. There are little robots wheeling packages to people’s houses; some are cleaning up the street; some are police-bots patrolling the neighborhood; others are carrying things for people. You fleetingly remember how people said that the future will feel like the future when you see robots walking down the street -- well, you see that now, but it doesn’t feel that futuristic. It’s just part of the scenery, like how Lime Scooters became part of the scenery by 2020.

A few hours later, after your day at work, and once back home, you decide you want to relax and listen to some music. You put on your Brain-Computer Interface device, and listen to some suggestions by your assistant. As you listen, your assistant learns what you don’t like about the selections, and improves; and within a few tries, it finds the perfect music.

After about a half-hour of listening, you decide to play a video game. Being a minimalist, you don’t have a TV or monitor -- you prefer to watch using your Mixed Reality Glasses (a mixture of VR and AR). You put them on, in your living room, and disappear into another world, rendered at very high resolution with holographic sound and 6-degree-of-freedom tracking, and projected with light-field optics. “So realistic… and light as a pair of glasses. How did they ever fit all of that technology into such a compact device?!” you say to yourself, even after several years of experiencing cutting-edge VR/AR.

The game system -- which uses your home computer network for doing the computational heavy-lifting -- projects your arms and legs perfectly into the virtual world, using input from your BCI + cameras, which can also be used to give your game character superpowers. There are no clumsy drop-down menus or remembering what controller buttons and movements to make -- if you want to cast any of 100 spells or fly away at an odd angle, you just think it, and it happens.

Some games use your brain input to maximize enjoyment. If the game system reads that you are bored, it will try to make it more exciting -- perhaps by changing the scenery, music, monsters, whatever suits your mood. Likewise, if you are confused -- for example, if you don’t know exactly where you are in a dungeon -- it will notice, and offer to help, if you have the game set to “novice”.

After an hour of bliss on a faraway planet, you decide to return to the real world.

Your assistant says, “You’ve spent a lot of time in VR by yourself this week. I suggest you play with some of your friends next time. How about I set up a dinner-and-movie outing tomorrow night with your neighbors? You can talk to them about your gaming experiences.” You say, “Ok, but make it Friday at 6:00.” Your assistant calls them, like Google Duplex only several generations better, and sets it up for you.

Time for bed. You lie down, and your Assistant turns off the lights, makes sure the door is locked, and says, “Good night. See you tomorrow morning.”

….

In addition to all that, I think we can expect, by 2029:

# AI doctors that not only can diagnose basic illnesses reasonably well, like Babylon Health’s system can right now, but also systems that can diagnose pretty much every illness, by interviewing patients, reading doctors’ notes, medical scans, and more. Doctors will say that the human touch is irreplaceable, and they might be right -- but at least for the diagnosis and medicine recommendations, AI systems will be on par or better. They will even know to ask the right questions and read the emotional response of patients, like a well-trained doctor. Doctors might be superior, still, at improvising -- coming up with a creative solution to a tough problem. And they might be better at looking at the big picture, and weighing options, given the surrounding context that may be hard to process.

# There will be all kinds of media-synthesis Deep Learning-based systems. They will be able to generate realistic videos (this already exists, if you give the system the rough outlines of what it is supposed to generate -- but by 2029, the systems won’t even need that); realistic 3D VR scenes, perhaps even ones with animation; very high quality pieces of music (so good that people would buy them); news article synthesis (this already exists, but it is based on templates -- in the future, the AI system won’t need templates, and will write news articles from scratch; 10 to 11 years is a LONG time, enough that there are sure to be breakthroughs); essay and argument-generation (again, already exists in some form); and short-story synthesis. I am basing this on the rapid improvements in GAN technology, along with recent advances like the BERT system designed at Google. I also foresee brain data being used to improve AI, which will provide an independent approach to these very hard problems.

# Machine Translation will be so good in 2029 that humans will only be needed for very delicate stuff, like interpreting a novel, taking into account cultural sensitivities and subtleties. These kinds of translations are far from “literal”. Business and legal translations, though, will be done almost 100% by machine.

# Brain-Computer Interfaces will have so many uses that it’s hard to even begin to list them all. They will be used to improve education (e.g. to pinpoint exactly what people are getting stuck on), improve the video game experience (as I described above), improve psychological diagnosis (to help model what is going wrong in the brain), improve AI, improve people’s writing (e.g. you could maybe describe an argument or a few paragraphs of a story, in words, and the AI would combine that with your brain data to build a cleaned-up version), and even make it much easier to write computer code (e.g. you describe what you want a program to do, and the AI listens and combines that with your brain scan to help disambiguate, so that it doesn’t have to ask many follow-up questions to figure out what your program is supposed to do). Some of these will be here in 10 years -- others may take longer. Hard to predict…

# Virtual Reality and Augmented Reality will be perfect. You’ll put on glasses-like devices (maybe cupped around the eyes) in a dark room, and get holographic sound and full light-field, high-resolution images. It will be multiple levels more advanced than anything we have today. And the software to run it will also be a lot more sophisticated. Various tricks will be used to reduce the computational load, while also upping the realism. Neural net-based approximate ray-tracing is one example.

# We’ll see lots of restaurants with virtual / automated cashiers. There are several companies working on this right now; and by 2029 it will be perfected. You’ll walk up to a flat screen (like many fast food restaurants are experimenting with right now), and tell a virtual robot your order. It will even understand you if you stutter or change your mind, or ask for modifications.

# Biometric tech will be perfected by 2029. Face recognition will be essentially better than human in just about every way you want to measure it. I’m not sure whether people will allow it to be used everywhere; there may be laws in place prohibiting it. But it is so convenient that people might not complain that much. You could walk into a store, and just take things off the shelves without interacting with a human, and your account will get charged. Look to China for the future of this technology.

# With the continuing rapid decline in the cost of sensors, actuators and other robot components, with rapid advances in Deep Learning and Reinforcement Learning applied to specific robotics tasks (like grasping), with further advances in battery tech, and further improvements in computing power per dollar (Moore’s Law), I think robots of 2029 will do some amazing things. I’ve already discussed home robots; but I think they will also be deployed en masse to do things like: assist in trash cleanup (on public streets and in parks), janitorial work, firefighting, security / policing, and construction work. It’s possible -- though, I wouldn’t want to put a probability estimate on it -- that there will be general-purpose robots that can build entire buildings from scratch. It would require breaking the task down into a large number of sub-tasks, each of which would be completed in a semi-structured environment. I don’t think we would see such a robot being widely deployed by 2029; only that experimental versions would exist -- a headline in Wired magazine reads, “Robot builds building from scratch.”

# Speaking of sensor tech, I think there will be tiny, flat, ultra low-power (down around 2 milliwatts, 13 frames per second, streaming; other experimental cameras are measured in microwatts!) cameras everywhere, streaming content to the web. Prototypes already exist -- there are several papers on this that came out in the past 2 years. 10 more years of improvement will do wonders for that technology. It will be a scary time to be alive for people obsessed with their privacy!

# So much is going to change with the tech landscape that I can’t even imagine what it will do to shopping. There will probably still be some malls and brick-and-mortar stores; but with further gains by online shopping, drone and robot delivery, and cashierless stores (Amazon Go), their character is probably going to change a lot. Small, locally-owned stores will still be around, and may actually thrive. Larger stores are another matter.

# Energy? More gains from solar -- further cost-reductions and efficiency improvements to PVs, and the same for battery technology and converters and inverters. Nuclear may make a comeback; and there may be some breakthroughs in fusion -- hard to know for sure.

# Space? Probably a few moon missions and more satellites. Maybe there will be an asteroid or moon-mining mission or two. Oh, and there will be another robot or two on Mars.

# Geopolitics? China will be ahead of the U.S. in tech-deployment, and will be seen as a potential serious threat. They will be so big and powerful that we won’t have much leverage over them. Russia will probably decline even further, though may stabilize. Large parts of Africa will continue to see rapid economic growth. Europe will have about the same level of growth they’ve always had -- nothing to write home about. Japan may make a comeback; and Korea may rise further. India will also see a lot of growth, but not at the rate of China. South America is going to be plagued by more little wars -- dictators, socialists, capitalists, all fighting for dominion. Saudi Arabia will decline; Iraq, Iran, Syria, Egypt, and other countries might rise. It’s such a volatile area, though, that it’s hard to know for sure.

# There will be an AI arms race, as China and the U.S. try to add more and more AI to their weapons and defense systems. There will be killer drones and probably some type of killer robot, at least as an experimental weapon.

That’s about all for now. There are probably lots of caveats and clarifications I should have mentioned; but life is too short to qualify everything.


  • Yuli Ban, Enda Kurina, funkervogt and 1 other like this

#2
Jakob

    Stable Genius

  • Members
  • 6,122 posts

My main critiques:

 

  • Utility robots are going to be more common, but not that much more common in just 10 years. Adoption of home utility robots probably won't reach parity by 2029, maybe 20% adoption in middle class neighborhoods. Parity by the mid 30s is plausible though. You aren't going to see huge hordes of them crawling all over the streets. Maybe if you look around you on a crowded street, you'll have a good chance of seeing one, but they won't be everywhere.
  • BCIs will still be the domain of researchers and upper-middle or upper class tech geeks. One with the capabilities you describe would probably cost some thousands of dollars. Parity (50% adoption) would probably be reached in the early 40s in North America, Western Europe, and East Asia.
  • Elon will have probably put people on Mars at least once or twice 10 years from now. 10 years ago the only thing they ever launched was Falcon 1, and now they have Falcon Heavy, Dragon 2, and Starhopper. So far they are keeping pace with their development goals; I would give them a 50% chance of putting people on Mars by 2027.

 

All in all, a pretty balanced, middle of the road set of predictions.


  • Yuli Ban, starspawn0 and Jessica like this

#3
starspawn0

    Member

  • Members
  • 1,119 posts
About Elon and his trip to Mars: given the percentage of Mars missions that have failed,

https://en.wikipedia...issions_to_Mars

I suspect what might happen is that he tries to send up a manned crew, and then it fails, and the people die. That would create a lot of bad press, and any subsequent trips would get pushed back for years "until they get it right". Also keep in mind that it's going to be a lot harder to put up a manned crew than a robot.

That almost happened during the moon landing mission. If it had happened, I don't think the U.S. would have tried again for a long time; and Russia would have put the first man on the moon.
  • Yuli Ban, Jakob, Erowind and 1 other like this

#4
Jessica

    Member

  • Members
  • 553 posts

My main critiques:

 

  • Utility robots are going to be more common, but not that much more common in just 10 years. Adoption of home utility robots probably won't reach parity by 2029, maybe 20% adoption in middle class neighborhoods. Parity by the mid 30s is plausible though. You aren't going to see huge hordes of them crawling all over the streets. Maybe if you look around you on a crowded street, you'll have a good chance of seeing one, but they won't be everywhere.
  • BCIs will still be the domain of researchers and upper-middle or upper class tech geeks. One with the capabilities you describe would probably cost some thousands of dollars. Parity (50% adoption) would probably be reached in the early 40s in North America, Western Europe, and East Asia.
  • Elon will have probably put people on Mars at least once or twice 10 years from now. 10 years ago the only thing they ever launched was Falcon 1, and now they have Falcon Heavy, Dragon 2, and Starhopper. So far they are keeping pace with their development goals; I would give them a 50% chance of putting people on Mars by 2027.

 

All in all, a pretty balanced, middle of the road set of predictions.

I find the middle of the road the safest and probably the most likely to occur, as the past 40-50 years have been quite a bit slower than the predictions made. Elon is probably our best hope to get to Mars, to be honest.



#5
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,619 posts
  • LocationNew Orleans, LA

I foresee a joint manned/robotic lunar landing next decade, but no serious attempt to land on Mars.  

 

Luna is right there. It's the perfect way to test our exploratory capabilities. It takes just under 3 days to get there. We shouldn't skip it for Mars.


  • funkervogt likes this

And remember my friend, future events such as these will affect you in the future.


#6
funkervogt

    Member

  • Members
  • 745 posts

The assistant is so good at these basic conversations that, sometimes, you wonder if it is alive. You can even have some basic conversations about movies, books, and people / celebrities, and it seems to always have intelligent things to say. It does sometimes screw up, and say things that no human would ever utter; but it does seem like, with another couple years of improvement, it will be passing a full Turing Test. It can even answer questions about all your old emails and books as well as a human can. However, if you ask it something that requires being inventive or applying sophisticated reasoning, it just says, “I’m sorry, I can’t do that.” Sometimes it’s hard to tell what it can and cannot do.

I largely agree with that. By the end of 2029, I predict a machine will pass the Turing Test, but will fail some of the subsequent Tests where it is asked harder questions designed to probe its intelligence. Nevertheless, the milestone will be big news, though it will only highlight the limited value of the Turing Test, which is something AI experts have been talking about for years. 


  • starspawn0 likes this

#7
funkervogt

    Member

  • Members
  • 745 posts

I predict the following will be true by 2029:

  • "Foldable" smartphones will be commercially available. Folded up, they will be the same size as today's smartphones, but you'll be able to open them like a manila folder, exposing a larger inner screen. They will obsolete mini tablets. 
  • Augmented reality glasses that have fixed Google Glass' shortcomings will be on the market. The device category will come back. 
  • It will be cheaper to buy an electric version of a particular model of car than it will be to buy the gas-powered version. 
  • China's GDP will be higher than America's. 

  • starspawn0 likes this

#8
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,619 posts
  • LocationNew Orleans, LA

I'm also surprised. About three years ago, I did a short blog series called "The World of 2029", and it seems you and I share many of the same ideas.


And remember my friend, future events such as these will affect you in the future.


#9
Jakob

    Stable Genius

  • Members
  • 6,122 posts

The assistant is so good at these basic conversations that, sometimes, you wonder if it is alive. You can even have some basic conversations about movies, books, and people / celebrities, and it seems to always have intelligent things to say. It does sometimes screw up, and say things that no human would ever utter; but it does seem like, with another couple years of improvement, it will be passing a full Turing Test. It can even answer questions about all your old emails and books as well as a human can. However, if you ask it something that requires being inventive or applying sophisticated reasoning, it just says, “I’m sorry, I can’t do that.” Sometimes it’s hard to tell what it can and cannot do.

I largely agree with that. By the end of 2029, I predict a machine will pass the Turing Test, but will fail some of the subsequent Tests where it is asked harder questions designed to probe its intelligence. Nevertheless, the milestone will be big news, though it will only highlight the limited value of the Turing Test, which is something AI experts have been talking about for years. 

 

We already had a machine that "passed" the Turing Test. All that did was show how flawed and limited the test actually is. The only version that would mean anything is a super-Turing Test where a) the test lasts one hour instead of five minutes, b) it has to fool 50% of judges instead of 30%, c) it has to react convincingly when you ask it about basic facts that the average adult would know or easily be able to look up, and d) more generally, it can't game the system by pretending to be an idiot, child, or non-native English speaker. This bar could maybe be cleared by 2029, though of course it's no indicator of sentience.


  • Yuli Ban likes this

#10
Jakob

    Stable Genius

  • Members
  • 6,122 posts

To expand upon the last point: a test of sentience would take years to administer. It would involve the AI freely interacting with broader society, and would also require observing the AI to see what it does when no one is watching it or giving it instructions. If it passes flawlessly for human during a conversation, but then, when it's alone, just stands in the corner and doesn't do anything, that suggests non-sentience. But if it starts doing things that nobody told it to do when it thinks no one is watching, then that's a more plausible sign of sentience. It's an even bigger sign if these activities are of an inquisitive and/or creative nature, like reading books and papers, making works of art (even if simple and childlike, or highly abstract), or engaging in something that resembles play.



#11
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,619 posts
  • LocationNew Orleans, LA

The Turing Test can easily be amended, given new requirements and whatnot.

 

It's easy to forget it was conceived in an era when AI had not yet defeated humans at checkers. Neuroscience was in its infancy. We could not fathom the human brain's complexity— we didn't even have the necessary concepts in place to do so. We believed we could completely replicate the brain's capabilities with a few good lines of code plugged into megaflops of computing power.

Nowadays, we hope we can create a very rough facsimile of certain processes by feeding direct brain data into computers quadrillions of times more powerful. 

 

Yet we still rely on the same test! As Jakob mentioned, it's easy to cheat on tests. The Turing Test relies on the human speaker just as much as the machine— if you want to reliably pass the test with a chatbot from ten years ago, you need only convince the testers from the outset to expect certain answers and to ask certain questions. 

Eugene Goostman, I think it was called, cheated because the testers were told it was a juvenile Ukrainian boy. What's more, certain tics and quirks were added to it that further threw off the testers. We respond well to such quirks. 

 

Turing himself laid out that conversations need only last five minutes to make such a judgment call. In my eyes, it's no different from the Chinese room experiment or a modified version thereof— where a man who only understands Chinese is given 500 cards with English written on them and is told to use them to hold a silent conversation with a natural English speaker. The Chinese man may convince 30% of certain testers that he understands English some of the time, but if the test were held several times in a row, he'd be shown to have no understanding whatsoever. 
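
As a toy calculation (my own illustration, not Turing's): if a bluffer fools 30% of judges on any single run, and the runs are independent, the odds of him fooling a majority collapse quickly as the test is repeated. In Python:

# Probability that a bluffer who fools a judge 30% of the time per run
# still fools a majority of judgments across n independent runs.
# (Illustrative only; assumes the runs are independent.)
from math import comb

p = 0.30
for n in (1, 3, 5, 7):
    majority = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))
    print(f"{n} run(s): P(fools a majority) = {majority:.3f}")

With one run he "passes" 30% of the time; by seven runs it's down around 13%, and it keeps falling -- which is exactly why repeating the test exposes the bluff.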

 

 

What the Turing Test needs is modification. It can still be useful.

Cleverbot as it currently exists can pass the Turing Test because it needs only convince 30% of testers. But imagine if it had to hold a conversation that lasted 30 minutes and then a second conversation later that drew from the first, with no handicaps on Cleverbot's part (such as "it's supposed to be a foreign teenager"). If it still convinced at least 30% of the testers (preferably more, such as 50%), would that not be more significant?


  • Jakob likes this

And remember my friend, future events such as these will affect you in the future.


#12
Jakob

    Stable Genius

  • Members
  • 6,122 posts

Ah, I hadn't thought of that. Yes, holding multiple separate conversations with the same examiner is a very clever idea, and one that would far and away stump any AI currently existing (though maybe in 2029 the state-of-the-art could rise to the challenge). An even more difficult test would be the following format:

  • Three one-hour conversations in a day with a given examiner
  • The format is audio or video chat with an avatar, rather than text based
  • This format is repeated for three days
  • After each day of testing, the examiners can convene and compare notes

What does this test that the regular Turing test does not?

  • Ability to maintain a consistent persona over extended periods of time
  • Ability to maintain a consistent persona with different people
  • Ability to adapt to changes in tone of voice and body language
  • Ability to remember previous conversations

 

@Yuli: When do you think an AI would be able to beat this? And how would you try to defeat it if you were an examiner?

 

Personally, I'd try to "get to know" it first, with a few open-ended things like: Tell me about yourself. What do you do for a living? What was your proudest moment? Tell me about your family and friends. Then I'd go on to questions about human experience: What was your first childhood memory? What was your favorite vacation and what did you do there? Who was your first crush? Favorite teacher in school/professor in college? What was your favorite hobby? How did you get into that? How did you decide on your career? Why and how questions would be harder for an AI, so I'd prioritize them over what questions. Then I'd try to probe their opinions on political and social issues, as well as popular culture. Even a person of 80-110 IQ is going to have opinions on these topics and be able to coherently describe them, even if they couldn't win a formal debate or read complex literature. Also, I'd ask their opinions of me and the other examiners, based on my personality and what I've told them about myself.

 

Also, I'd throw in a few curveball questions like What is 6,356,403 times 2,549,892? If they can successfully answer that, then they're obviously the AI. And I'd repeat key questions at least once a day to ensure that they have a consistent persona, as well as verifying that they know what I told them about myself in previous days. But I'd also throw in a few memory-trap questions that they shouldn't be able to answer, either because the details were very trivial or because the events never happened.
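
(To be clear about how dead a giveaway that is: any machine clears the curveball instantly, while no human can within conversational time. One line of Python:)

# The curveball, checked: trivial for a machine, hopeless for a human
# to answer on the spot.
print(6_356_403 * 2_549_892)  # -> 16208141158476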

 

Oh, and I might as well throw in a few tests for physical movement, like whether their gaze follows my hand if I move it around, etc.

 

Now I really wanna be a Turing Test administrator...


Edited by Jakob, 11 February 2019 - 10:08 PM.

  • Yuli Ban likes this

#13
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,619 posts
  • LocationNew Orleans, LA

^ Don't ask me; it's entirely coincidental that my predictions almost completely aligned with Starspawn0's. If I had to answer, I'd answer based on what I want and what I think is realistic.

 

I want this modified Turing Test to be passed by 2025. Realistically, it will require artificial general intelligence. Not even strong artificial expert intelligence (or Zombie AGI), but legitimate AGI. Unless BCIs bring us weak AGI within ten years, this is decades away.

Of course, China might very well achieve such a feat. They are likely going to promote BCIs for added state control over the next decade, and that will give them a stupendously huge amount of data.


  • Jakob likes this

And remember my friend, future events such as these will affect you in the future.


#14
funkervogt

    Member

  • Members
  • 745 posts

The assistant is so good at these basic conversations that, sometimes, you wonder if it is alive. You can even have some basic conversations about movies, books, and people / celebrities, and it seems to always have intelligent things to say. It does sometimes screw up, and say things that no human would ever utter; but it does seem like, with another couple years of improvement, it will be passing a full Turing Test. It can even answer questions about all your old emails and books as well as a human can. However, if you ask it something that requires being inventive or applying sophisticated reasoning, it just says, “I’m sorry, I can’t do that.” Sometimes it’s hard to tell what it can and cannot do.

I largely agree with that. By the end of 2029, I predict a machine will pass the Turing Test, but will fail some of the subsequent Tests where it is asked harder questions designed to probe its intelligence. Nevertheless, the milestone will be big news, though it will only highlight the limited value of the Turing Test, which is something AI experts have been talking about for years. 

 

We already had a machine that "passed" the Turing Test. All that did was show how flawed and limited the test actually is. The only version that would mean anything is a super-Turing Test where a) the test lasts one hour instead of five minutes, b) it has to fool 50% of judges instead of 30%, c) it has to react convincingly when you ask it about basic facts that the average adult would know or easily be able to look up, and d) more generally, it can't game the system by pretending to be an idiot, child, or non-native English speaker. This bar could maybe be cleared by 2029, though of course it's no indicator of sentience.

 

https://www.thedaily...the-turing-test



#15
starspawn0

    Member

  • Members
  • 1,119 posts
I just want to point out a really cool fact about the BERT system that Google recently developed, which is built from nothing more than word co-occurrence stats. It shows that it is possible to get machines to learn world knowledge this way, and to do some basic kinds of inference over this knowledge:
 
https://openreview.n...m?id=rkgfWh0qKX
 

To make LM scoring applicable, knowledge tuples of the form Relation(head, tail) from ConceptNet are first converted to a form that resembles natural language sentences. For example, UsedFor(post office, mail letter) is converted to "Post office is used for mail letter." by simply concatenating its head, relation, and tail phrases in order. Although this simple procedure results in ungrammatical sentences, we find our LMs can still adapt to this new data distribution and generalize extremely well to test instances.

 
Did you catch that? After some very basic training to set parameters (and also not-so-basic unsupervised pre-training on large amounts of unstructured text), if they wanted to know whether a relation like UsedFor(post office, mail letter) was true, they first converted it to a sentence, and then asked the system to score whether it was a true statement or not. Remarkably, it does better than the previous state-of-the-art methods.
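
Concretely, the recipe is easy to sketch in code. Here is my own minimal illustration of the idea -- assuming the Hugging Face transformers library and the bert-base-uncased checkpoint, which is not necessarily the authors' exact setup:

import torch
from transformers import BertTokenizer, BertForMaskedLM

# Load a pretrained masked LM. (Assumption: Hugging Face's library and the
# bert-base-uncased checkpoint; the paper's setup may differ.)
tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence):
    # Mask each token in turn and sum the log-probability BERT assigns
    # to the original token; higher totals mean "more plausible".
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# UsedFor(post office, mail letter), verbalized naively, vs. a foil:
print(pseudo_log_likelihood("post office is used for mail letter."))
print(pseudo_log_likelihood("post office is used for baking bread."))

If the first sentence scores higher, the model "knows" the relation -- without ever having been handed a knowledge base.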
 
If it's possible to do that, might it also be possible to do something a lot more sophisticated?... such as checking whether a potential conversational response is a good one?  And all trained just using unstructured data, along with some examples of good and bad conversations?
 
It's not so easy to say that it can't be done. You might think, "Surely BERT isn't learning anything complex." But see this:
 
https://mobile.twitt...717916821843969
 

I expected the Transformer-based BERT models to be bad on syntax-sensitive dependencies, compared to LSTM-based models.

So I run a few experiments. I was mistaken, they actually perform *very well*.


Ok, proficiency at some basic world knowledge is impressive... but it's also good at syntax-sensitive dependencies?

What if they add visual and audio data to the mix? And find a way to deepen the complexity of the logic it is capable of processing?

It's hard to say how far the idea can be pushed, especially given 10 more years of research and development. In that time, we might have models built with orders of magnitude more data, using orders of magnitude more compute, and even trained with whole new algorithms.

There is also brain data that can be used.

If it continues to improve at anywhere near the rate of the last couple years, Deep Learning applied to language understanding probably will produce chatbots that are easily mistaken for humans, in the not-too-distant future.
  • Yuli Ban likes this

#16
starspawn0

    Member

  • Members
  • 1,119 posts
One more thing I'd like to mention: not only will we have AI doctors that can diagnose really well by 2029, but we will have very good preventative medical care, greatly extending human lifespan. We will have devices around us monitoring us 24 hours a day, 7 days a week, for any little sign of potential problems -- these include smartphones, AR glasses, BCIs, smartwatches, fitbit-like devices, and possibly even devices inside our bodies. This was a point made by Sebastian Thrun, mentioned in this piece 2 years ago:

https://www.newyorke...03/ai-versus-md
 

Thrun blithely envisages a world in which we’re constantly under diagnostic surveillance. Our cell phones would analyze shifting speech patterns to diagnose Alzheimer’s. A steering wheel would pick up incipient Parkinson’s through small hesitations and tremors. A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there’s a new mass in an ovary that requires investigation. Big Data would watch, record, and evaluate you: we would shuttle from the grasp of one algorithm to the next. To enter Thrun’s world of bathtubs and steering wheels is to enter a hall of diagnostic mirrors, each urging more tests.


To someone young, this may not sound very exciting -- "Who cares about diagnosing stuff early? Let's hear more about flying cars and robots!!" -- but I can tell you that if you were in your mid-30s+, or as old as me (a decade+ older, still), you would have seen a lot of death, and some of it needless -- "If only we had known!..." One example was my aunt, who had cancer; but she didn't know it, and the doctor initially thought she only had pneumonia. By the time it was discovered, it was too late, just like Thrun's mother in that article. I remember hearing about how much her son (my cousin) and husband cried in the hospital when she died, and how her son told his father that if he didn't stop crying, he would kill himself.

Boring as it may seem, this kind of technology will alleviate a great deal of mental and physical suffering. If you're young, you just don't know how much. I promise you!

If conditions can be caught really early, the ability to stop them from growing worse expands greatly -- perhaps to such a degree that about the only way you will die is due to old age or an accident (e.g. a car hits you).

I just want to mention a few more examples, besides cancer:

* You may not think much of arthritis -- another boring illness that old people contend with -- but knowing long in advance that you are about to get it could have serious life-extending benefits. You could have the proper surgery when you are younger and stronger, and avoid pain.

* Tooth decay: what if you knew that one of your teeth was about to get a cavity, before it actually happened? You could go to the dentist, or brush it more, with more fluoride, to build it up.

* Heart health: what if you knew hours before a heart attack that there was a 90% chance you would suffer one? You could go to the ER before it happened.

* Stroke: same as heart attack.

And the list goes on and on...
  • Yuli Ban, Jakob and johnnd like this

#17
Jakob

    Stable Genius

  • Members
  • 6,122 posts

Again, I doubt that would be everywhere by 2029. For something to reach widespread adoption by that point, it would have to be released right about now. Even among new cars, only a handful will probably have smart steering wheels, and a lot of people will still have older cars. But incremental progress will certainly be made in that direction.

 

People don't usually throw away their old stuff just because something newer and shinier exists, and I think that's important to remember.

 

Also, I have reached 6 kiloposts.


  • starspawn0 and waitingforthe2020s like this

#18
starspawn0

    Member

  • Members
  • 1,119 posts
I agree with you that advanced self-driving cars (Level 4) won't be widespread by 2029; they will be a few percent of all cars on the road, at best. AI-based diagnostic systems will certainly exist, and be really good -- but there may be regulatory hurdles. However, I agree with Thrun that we will have lots of sensors in the very near future, scanning for signs of medical problems. If it's preventative, the regulations are going to be laxer, and the AI will be denoted a "health app" or something.

A lot of people have smartphones, and they can already diagnose a lot of illnesses with decent accuracy (e.g. Parkinson's); it's just that these capabilities aren't baked into virtual assistants yet. There are probably issues around privacy that big companies are worried about getting right, so that users don't balk and leave en masse. I know Apple and Google have signaled they are interested in pursuing this. I think Apple has a lot more health applications planned for their Apple Watches, and probably also smartphones.
  • Yuli Ban and Jakob like this

#19
funkervogt

    Member

  • Members
  • 745 posts

By the end of 2029, I think there will be vastly better autonomous cars than we have now, but they will still be the playthings of technophiles and upper-income people. Maybe 5% of the new private passenger vehicles sold to Americans in 2029 will be capable of fully autonomous driving. 

 

However, by the end of 2029, much more progress will have been made turning over the commercial truck fleet to autonomous vehicles. Machines will be able to drive big rig trucks across long but simple highway routes with the same efficiency and safety as human drivers. This category of vehicle will turn over faster because of the profit motivation and because it will be easier from a technological standpoint. About 25% of the new big rig trucks sold in America in 2025 will be capable of fully autonomous driving. It will be a bleak time for human truck drivers, but very profitable for rich guys who own big truck fleets. 

 

I think the 2030s will be the decade when autonomous vehicles start to dominate the roads. Not only will the majority of new vehicles sold in rich countries be autonomous, but old, human-driven vehicles will be reaching the ends of their service lives and piling up in the junkyards. 


  • Zaphod and Casey like this

#20
starspawn0

    Member

  • Members
  • 1,119 posts
Regarding what I said about media synthesis, OpenAI just presented a breakthrough:
 
https://blog.openai....anguage-models/
 

SYSTEM PROMPT (HUMAN-WRITTEN)
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

MODEL COMPLETION (MACHINE-WRITTEN, 10 TRIES)
The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.


Their system can be used to improve on a range of AI tasks, like question-answering.
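
If you want to generate samples like that yourself, the recipe is short. Here is a minimal sketch -- assuming the Hugging Face transformers library and its "gpt2" checkpoint (the small released model); OpenAI's own TensorFlow release works differently:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small released GPT-2 model. (Assumption: Hugging Face's library,
# not OpenAI's original code release.)
tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("In a shocking finding, scientist discovered a herd of unicorns "
          "living in a remote, previously unexplored valley, in the Andes Mountains.")
ids = tok.encode(prompt, return_tensors="pt")

# Top-k sampling, roughly matching the k=40 setting OpenAI reported.
out = model.generate(ids, max_length=200, do_sample=True, top_k=40,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))

Don't expect the small model to be as coherent as the unicorn sample above -- that came from the full (unreleased) model, and was picked from 10 tries -- but the mechanics are identical.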
  • Zaphod, Yuli Ban, Jakob and 1 other like this



