What 2029 will look like


110 replies to this topic

#61 tomasth (Member, 212 posts)

So getting one's brain scanned as soon as possible is insurance, in case something happens before a better technology arrives?



#62 starspawn0 (Member, 1,121 posts)

A really big disconnect between what executives and researchers think about the future of AI:
 
https://mobile.twitt...302905555042305

A new survey from @EdelmanPR suggests that 73% of tech executives believe that AI will surpass human intelligence within the next ten years. Among researchers in the field, predictions are far more mixed.


  • Yuli Ban likes this

#63 funkervogt (Member, 751 posts)

What counts as a "tech executive"? 



#64 starspawn0 (Member, 1,121 posts)

Click the first link, and then click on the "white paper" link on that site, which is on the right side of the screen.

"Tech executive" breaks down into the following subdivisions (see the "Methodology" section on page 38):

* Senior Executive (CEO, CTO, CSO, President) -- 15% of respondents.

* "Executive Level" (Executive Vice President, General Manager) -- 10% of respondents.

* Upper Level Management (Vice President, Senior Vice President) -- 18%

* Mid-Level Management (Director, Senior Manager) -- 50%

* Company Owner -- 6%

* Part Owner -- 1%

Furthermore, the sizes of the companies involved break down as follows:

* 1-24 employees -- 12%

* 25 - 49 employees -- 3%

* 50 - 149 employees -- 11%

* 150 - 249 employees -- 11%

* 250 - 499 employees -- 9%

* 500 - 999 employees -- 15%

* 1,000 - 1,499 employees -- 11%

* 1,500+ employees -- 29%

#65 starspawn0 (Member, 1,121 posts)

This is interesting:

https://arxiv.org/abs/1904.06472

It's a paper by some people at a relatively new company called PolyAI Limited. The first-named author, Matthew Henderson, worked at Google (as I recall), and got his Ph.D. at Cambridge with Steve Young, a leader in spoken dialog system design and co-founder of VocalIQ, which was bought by Apple two or so years ago.

The paper details PolyAI's work on training conversational response selection models on extremely large conversational datasets -- e.g. including nearly 700 million responses from Reddit, nearly 300 million subtitles / lines from movie and TV shows, and over 3 million question-answer pairs from AmazonQA.

Basically, they train the model to predict a score for a response, given the context. For example, the context might be:

Line 1: Hello, how are you?
Line 2: I am fine. And you?
Line 3: Great. What do you think of the weather?


And then the top response is:

It doesn't feel like February.



So, they have a large database of (context, response) pairs.

And this is how they score their model: they fish a (context, response) pair out of the dataset, and then generate 99 more potential (context, response) pairs, having the same context value but different responses. The responses in these 99 extra pairs are randomly selected from the entire dataset, and serve as "negative examples". Then, they see whether the model predicts the true (context, response) pair to have the highest score.

What they find is that their model is accurate 61% of the time on pairs coming from Reddit, 30.6% of the time on pairs from movie and TV lines, and 84.2% of the time on AmazonQA pairs.

That's pretty good! Even picking the correct answer out of a multiple-choice question with 100 possibilities 30.6% of the time is good.
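Their 1-of-100 evaluation is easy to sketch. Below is a toy Python version: the hash-based bag-of-words `embed` is a stand-in I invented for PolyAI's trained dual encoder, but the scoring and ranking logic mirrors the setup described above.

```python
import zlib

import numpy as np

def embed(text, dim=64):
    """Toy deterministic bag-of-words embedding; a stand-in for the
    trained context/response encoder."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        # Seed per word so the same word always maps to the same vector.
        rng = np.random.default_rng(zlib.crc32(word.encode()))
        vec += rng.standard_normal(dim)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def score(context, response):
    """Relevance score for a (context, response) pair: dot product of
    the two embeddings."""
    return float(embed(context) @ embed(response))

def one_of_k_accuracy(pairs, k=100, seed=0):
    """For each pair, mix the true response with k-1 responses drawn from
    the whole dataset ("negative examples") and count how often the model
    ranks the true one highest -- the paper's 1-of-100 metric."""
    rng = np.random.default_rng(seed)
    all_responses = [response for _, response in pairs]
    hits = 0
    for context, true_response in pairs:
        negatives = list(rng.choice(all_responses, size=k - 1))
        candidates = [true_response] + negatives
        best = max(candidates, key=lambda r: score(context, r))
        hits += best == true_response
    return hits / len(pairs)
```

In the real system the response embeddings would be computed once and cached, so ranking millions of candidates boils down to a single matrix multiply.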

....

What they could do with this is pair their system with one that generates responses; then the two systems, together, might produce highly engaging conversations.

I see this line of work as leading towards chatbots and socialbots that converse much more naturally. Throw enough data and large enough neural nets at the problem, and we will see some amazing things -- unbelievable things, probably.
  • Yuli Ban likes this

#66 tomasth (Member, 212 posts)

Misunderstanding what those chatbots can actually do could explain the tech-executive disconnect.



#67 starspawn0 (Member, 1,121 posts)

I think it has to do with underestimating the gap between machines that can converse seemingly competently, and ones that can also reason and learn as well as a human -- and also control a body and perceive as well as a human.  The gap is titanic.  And convincing 90% of humanity during a Turing Test that the system is at human level does not mean it can do all those other things.

 

I do suspect we are probably close to having machines that are shockingly good at conversation.  It's possible we could see it this year or next.  If you probe deeply, you will see how limited the reasoning is -- but most people won't notice.  In ordinary conversation, most people are on autopilot.  It's understandable, given that there are time constraints that limit the reasoning during conversation.


  • Yuli Ban likes this

#68 starspawn0 (Member, 1,121 posts)

Facebook is working on a virtual assistant to interface with "Oculus products":

 

https://www.theverge...-vr-ar-products

 

I'm guessing they've at least discussed interfacing it with the BCI we will hear more about later this year.  That BCI is considered an "Oculus product", too, or more accurately, a "Reality Labs" product.  AI + BCIs is a powerful combination.  Just imagine what it will look like in 2029!


  • Yuli Ban likes this

#69 funkervogt (Member, 751 posts)

 

I predict the following will be true by 2029:

  • "Foldable" smartphones will be commercially available. Folded up, they will be the same size as today's smartphones, but you'll be able to open them like a manila folder, exposing a larger inner screen. They will obsolete mini tablets. 
  • Augmented reality glasses that have fixed Google Glass' shortcomings will be on the market. The device category will come back. 
  • It will be cheaper to buy an electric version of a particular model of car than it will be to buy the gas-powered version. 
  • China's GDP will be higher than America's. 

Let me add that, at the end of 2029, we will look back on the 2020s as the decade in which virtual reality went mainstream. Judging by where the technology is now, I think we're less than 10 years from price, visual quality/immersiveness, and content (e.g. - number of available games, movies and other content) getting past the thresholds to mainstream acceptance. In 10 years, VR gaming won't be the domain of hardcore gamers who are willing to spend $1,000+ on a system. 

 

To be clear, I'm envisioning VR headsets as being a different technology category from AR glasses. Yes, circa 2029, the different devices will have some functional overlap (e.g. - your AR glasses will be able to play video clips, and your VR headset might have forward-facing cameras that can display footage of your real world surroundings with some overlaid digital images), but they will remain optimized for different roles. Importantly, I think upper-end VR headsets in 2029 will still need to be plugged into game consoles, desktop computers, or internet routers, whereas AR glasses will be designed for untethered use. 

 

 

Moreover, by the end of 2039, I predict that the best commercial VR headsets will also still need to be plugged in, but they will deliver an audiovisual experience that is nearly indistinguishable from real reality. It will be standard practice for AIs to be doing hyperrealistic game renderings, and for VR game NPCs to behave very intelligently thanks to better AI. 


  • Casey likes this

#70 starspawn0 (Member, 1,121 posts)

Google I/O 2019 reveals next-generation of Google Assistant:

 

https://blog.google/...e-assistant-io/

 

No Turing Test passing conversational ability, but still very cool stuff:

 

* First, they have found out how to drastically shrink the speech recognition and language understanding (and perhaps even some of the question-answering) down to where it can fit on a smartphone.  When they do this, there is hardly any latency, and the Assistant potentially runs 10x as fast as normal!

 

* Assistant will have "Duplex on the Web".  Basically, if you ask to book a hotel, it will go to the web, fill in the forms, click through the menus, and set it all up for you -- asking you to approve at the end.  It will make doing anything on the web much easier.  This sounds like it is in the direction of "mass automation" by "web agents" I wrote a post on a few years back; or, similar to OpenAI's "Mini-world of Bits".

 

* There are lots of new Google Lens features -- e.g. when you read a menu in a restaurant, it will tell you what their most popular items are, perhaps nutritional contents?

 

* More natural controls; and, the ability to combine apps together -- similar to what Viv Labs had promised, but that Bixby has not yet delivered.
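The "Duplex on the Web" flow amounts to an observe / fill / confirm loop. Google hasn't published the internals, so the sketch below is pure invention (every function name is hypothetical), just to make the shape of such an agent concrete:

```python
def plan_actions(goal, form_fields):
    """Map the user's goal onto the form's fields. In the real system this
    mapping would be learned; here it is a trivial lookup."""
    return {field: goal.get(field, "") for field in form_fields}

def fill_booking_form(goal, form_fields):
    """Collect the steps a form-filling agent would take, ending with the
    user-approval step the announcement describes."""
    actions = plan_actions(goal, form_fields)
    # A real agent would drive a browser here; we just record the steps.
    steps = [f"type '{value}' into <{field}>" for field, value in actions.items()]
    steps.append("pause and ask the user to approve before submitting")
    return steps

steps = fill_booking_form(
    goal={"checkin": "2019-08-01", "checkout": "2019-08-03", "guests": "2"},
    form_fields=["checkin", "checkout", "guests"],
)
```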


  • Casey and Yuli Ban like this

#71 starspawn0 (Member, 1,121 posts)

And Microsoft is about to release some very powerful conversational AI built with technology from its Semantic Machines acquisition:
 
https://blogs.micros...tural-language/
 
See that video.
 

The Semantic Machines technology extends the role of the machine learning beyond intents all the way through to enabling what the system does. Instead of a programmer trying to write a skill that plans for every context, the Semantic Machines system learns the functionality for itself from data.

In other words, the Semantic Machines technology learns how to map people’s words to the computational steps needed to carry out requested tasks.

For example, instead of executing a hand-coded program to get the score of the football match, the Semantic Machines approach starts with people who show the system how to get sports scores across a range of example contexts so that the system can learn to fetch sports scores itself.

What’s more, machine learning methods then enable the system to generalize from contexts it has seen to new contexts, learning to do more things in more ways. If it learns how to get sports scores, for example, it can also get weather forecasts and traffic reports. That’s because the system has learned not just a skill, but the concept of how to gather data from a service and present it back to the user.
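The "learned concept" claim in that quote -- one generic gather-and-present routine reused across sports scores, weather, and traffic -- can be caricatured in a few lines. This is purely illustrative: the real system learns these mappings from data, whereas the stubs below are hard-coded.

```python
# Hard-coded stand-ins for services the real system would learn to call.
SERVICES = {
    "sports score": lambda q: f"Liverpool 2 - 0 Chelsea ({q})",
    "weather":      lambda q: f"Sunny, 21 C in {q}",
    "traffic":      lambda q: f"Light traffic around {q}",
}

def fetch_and_present(kind, query):
    """One generic 'gather data from a service and present it back' routine,
    reused across task types instead of a hand-coded skill per task."""
    result = SERVICES[kind](query)
    return f"Here's what I found: {result}"
```

The point of the quoted approach is that once the system has learned this *pattern*, adding a new task type means learning a new entry, not writing a new skill.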


  • Casey, Yuli Ban and Alislaws like this

#72 Alislaws (Democratic Socialist Materialist, 1,995 posts, London)

 

 

I predict the following will be true by 2029:

  • "Foldable" smartphones will be commercially available. Folded up, they will be the same size as today's smartphones, but you'll be able to open them like a manila folder, exposing a larger inner screen. They will obsolete mini tablets. 
  • Augmented reality glasses that have fixed Google Glass' shortcomings will be on the market. The device category will come back. 
  • It will be cheaper to buy an electric version of a particular model of car than it will be to buy the gas-powered version. 
  • China's GDP will be higher than America's. 

Let me add that, at the end of 2029, we will look back on the 2020s as the decade in which virtual reality went mainstream. Judging by where the technology is now, I think we're less than 10 years from price, visual quality/immersiveness, and content (e.g. - number of available games, movies and other content) getting past the thresholds to mainstream acceptance. In 10 years, VR gaming won't be the domain of hardcore gamers who are willing to spend $1,000+ on a system. 

 

To be clear, I'm envisioning VR headsets as being a different technology category from AR glasses. Yes, circa 2029, the different devices will have some functional overlap (e.g. - your AR glasses will be able to play video clips, and your VR headset might have forward-facing cameras that can display footage of your real world surroundings with some overlaid digital images), but they will remain optimized for different roles. Importantly, I think upper-end VR headsets in 2029 will still need to be plugged into game consoles, desktop computers, or internet routers, whereas AR glasses will be designed for untethered use. 

 

 

Moreover, by the end of 2039, I predict that the best commercial VR headsets will also still need to be plugged in, but they will deliver an audiovisual experience that is nearly indistinguishable from real reality. It will be standard practice for AIs to be doing hyperrealistic game renderings, and for VR game NPCs to behave very intelligently thanks to better AI. 

 

The new Valve VR headset has front-facing cameras for pass-through while using external tracking, so at least one part of your point is spot on.

 

My hope is that BCI advances will mean FIVR is possible by 2039, but even assuming we get the hardware needed for this level of BCI, it may be very difficult to get the software right. It might also be trivial, so I'll keep my fingers crossed.

 

 

The Semantic Machines technology extends the role of the machine learning beyond intents all the way through to enabling what the system does. Instead of a programmer trying to write a skill that plans for every context, the Semantic Machines system learns the functionality for itself from data.

In other words, the Semantic Machines technology learns how to map people’s words to the computational steps needed to carry out requested tasks.

For example, instead of executing a hand-coded program to get the score of the football match, the Semantic Machines approach starts with people who show the system how to get sports scores across a range of example contexts so that the system can learn to fetch sports scores itself.

What’s more, machine learning methods then enable the system to generalize from contexts it has seen to new contexts, learning to do more things in more ways. If it learns how to get sports scores, for example, it can also get weather forecasts and traffic reports. That’s because the system has learned not just a skill, but the concept of how to gather data from a service and present it back to the user.

This sounds amazing, but possibly too good to be true? Still, even the most rudimentary form of this would be amazing.



#73 Nick1984 (Member, 602 posts, UK)

People often get carried away when predicting a future that's only decades away.

Here's my 2029 predictions...

- America will still be working on reversing Trump's policies following Trump losing the 2024 election

- Baggy jeans are back in fashion

- Autonomous cars have launched but are incredibly niche and rare. Tokyo has implemented a taxi service by 2029

- Folding computers and phones are becoming mainstream, but most people are happy with the current form factor. People upgrade their phones as often as they upgrade their laptops today

- PlayStation, Xbox and Nintendo are still the preferred way to play games. Streaming failed to take off, but the vast majority of people now download rather than buy physical. Only big AAA games now come on disc.

- Apple have fallen behind Microsoft and Google in terms of innovation and street cred

- Video streaming has killed off quite a few of the niche channels/networks that came about following the mid-90s digital TV revolution. Netflix has fallen behind the likes of Disney, Amazon and Warner in terms of market share following the massive loss of content.
  • Raklian likes this

#74 starspawn0 (Member, 1,121 posts)

I suppose I should add a link here to the recent work by Microsoft on Winograd Schemas:

https://www.futureti...sions/?p=265283

I had suspected that "big data methods" (+ Machine Learning) would make this kind of progress at about this time (I recall predicting the milestone about 4 years ago on another forum, and got it almost exactly right). However, keep in mind that one could look at where the system gets things wrong and bias the test toward its weakest spots -- even so, that wouldn't detract from the fact that it performs well most of the time on hard, real-world commonsense reasoning cases.

....

This seems to be more evidence that there are really no barriers to producing AI systems that can at least fake human-like interaction convincingly. The often-touted "commonsense reasoning" barrier can be crossed, or reasoning can be faked sufficiently well to where it won't be an issue in most real-world conversational interactions.

I don't mean to suggest it will be easy to produce full Turing Test-passing AI systems; just that the barriers to getting there keep falling as Deep Learning advances.
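For reference, Winograd schema systems typically substitute each candidate referent for the pronoun and keep the reading a language model scores as more plausible. Here is a toy version, with a tiny hard-coded plausibility set standing in for the LM score:

```python
# Toy Winograd-schema resolver. A real system scores each substituted
# sentence with a large language model; PLAUSIBLE is a stand-in for that.
PLAUSIBLE = {
    "the trophy is too big",
    "the suitcase is too small",
}

def resolve(sentence_template, candidates):
    """Substitute each candidate for the pronoun and keep the reading the
    scorer finds more plausible."""
    def score(candidate):
        filled = sentence_template.format(it=candidate)
        clause = filled.split("because ")[-1]
        return 1.0 if clause in PLAUSIBLE else 0.0
    return max(candidates, key=score)

schema = "the trophy doesn't fit in the suitcase because {it} is too big"
answer = resolve(schema, ["the trophy", "the suitcase"])
```

The schemas are built in pairs: swap "big" for "small" and the correct referent flips, which is exactly what defeats shallow word-association tricks.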
  • Casey, Yuli Ban and funkervogt like this

#75 Jakob (Stable Genius, 6,122 posts)

 

following Trump losing the 2024 election

What happened to term limits?



#76 funkervogt (Member, 751 posts)

 

 

following Trump losing the 2024 election

What happened to term limits?

 

That's what they'll be saying in 2023, only to be shouted down with "TRUMP! TRUMP! TRUMP!"



#77 Alislaws (Democratic Socialist Materialist, 1,995 posts, London)

 

 

 

following Trump losing the 2024 election

What happened to term limits?

 

That's what they'll be saying in 2023, only to be shouted down with "TRUMP! TRUMP! TRUMP!"

 

That or maybe it will be a different member of the Trump family?

 

That is traditionally how wealthy Americans have gotten around the system deliberately designed to prevent political dynasties and inherited power. (Kennedy, Clinton, Bush Etc.)

 

Or, more realistically, some chosen successor who is basically Trump 2.0?



#78 starspawn0 (Member, 1,121 posts)

There has been a further advance on language understanding from some people from Google Brain and CMU:

https://arxiv.org/abs/1906.08237

Performance on the Winograd Schemas part of the GLUE benchmark (WNLI) now stands at 90.4% accuracy. At the rate things are progressing, I wouldn't be surprised if it's up to 92% or 93% by next month. See the updated leaderboard:

https://gluebenchmark.com/leaderboard/

That's a 1.4% absolute jump in performance over Microsoft; in relative terms, the error has been decreased by about 13%.
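The arithmetic behind those two figures (taking the previous best as 89.0%, i.e. 90.4% minus the 1.4-point jump):

```python
# Absolute jump vs. relative error reduction on WNLI.
prev_acc, new_acc = 0.890, 0.904
absolute_jump = new_acc - prev_acc  # 1.4 percentage points

# Error rates shrink from 11.0% to 9.6%, i.e. ~13% of the error removed.
relative_error_cut = ((1 - prev_acc) - (1 - new_acc)) / (1 - prev_acc)
print(f"{absolute_jump:.3f} absolute, {relative_error_cut:.1%} of the error removed")
```

Relative error reduction is the more informative number near the top of a leaderboard, since each remaining point of accuracy gets harder to win.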
  • Casey, Yuli Ban, Erowind and 1 other like this

#79 starspawn0 (Member, 1,121 posts)

Incidentally, Amazon recently announced the teams to compete for the 2019-2020 Alexa Prize:

 

https://developer.am...com/alexaprize/

 

Just in time to make use of XLNet. I wonder if any of the teams will do it -- or if Amazon will bake it into the NLP modules behind the scenes.


  • Yuli Ban likes this

#80 starspawn0 (Member, 1,121 posts)

This work implies that machine learning models based on nothing more than predicting word co-occurrence patterns can absorb a lot of implicit scientific knowledge they weren't directly trained on:
 
 
With Little Training, Machine-Learning Algorithms Can Uncover Hidden Scientific Knowledge 
 
 
https://newscenter.l...ific-knowledge/
 

"Without telling it anything about materials science, it learned concepts like the periodic table and the crystal structure of metals,” said Jain. “That hinted at the potential of the technique. But probably the most interesting thing we figured out is, you can use this algorithm to address gaps in materials research, things that people should study but haven’t studied so far.”


So... models like GPT-2 might have acquired a lot of scientific knowledge not explicitly mentioned in the training corpus.
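The mechanism is the same one behind the classic word-analogy arithmetic: co-occurrence embeddings place related terms in consistent geometric relationships. A toy illustration with hand-made (not trained) vectors:

```python
import numpy as np

# Hand-made vectors illustrating the classic king - man + woman ~= queen
# arithmetic; trained embeddings exhibit the same geometry at scale.
words = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(vec, exclude=()):
    """Return the vocabulary word whose vector is closest (by cosine) to vec."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in words if w not in exclude),
               key=lambda w: cos(words[w], vec))

target = words["king"] - words["man"] + words["woman"]
```

The materials-science work does the equivalent with terms like "thermoelectric" and compound names over abstracts, which is how it surfaces candidate materials nobody has tested yet.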
  • Yuli Ban and Alislaws like this



