
What 2029 will look like


87 replies to this topic

#41 starspawn0 (Member, 975 posts)
Bill Gates's list of 10 coming breakthrough technologies included much smarter virtual assistants (1 to 2 years away) and dexterous robots:
 
https://www.technolo...hnologies/2019/

Smooth-talking AI assistants
Why it matters
AI assistants can now perform conversation-based tasks like booking a restaurant reservation or coordinating a package drop-off rather than just obey simple commands
Key players
Google
Alibaba
Amazon
Availability
1-2 years
We’re used to AI assistants—Alexa playing music in the living room, Siri setting alarms on your phone—but they haven’t really lived up to their alleged smarts. They were supposed to have simplified our lives, but they’ve barely made a dent. They recognize only a narrow range of directives and are easily tripped up by deviations.

But some recent advances are about to expand your digital assistant’s repertoire. In June 2018, researchers at OpenAI developed a technique that trains an AI on unlabeled text to avoid the expense and time of categorizing and tagging all the data manually. A few months later, a team at Google unveiled a system called BERT that learned how to predict missing words by studying millions of sentences. In a multiple-choice test, it did as well as humans at filling in gaps.

These improvements, coupled with better speech synthesis, are letting us move from giving AI assistants simple commands to having conversations with them. They’ll be able to deal with daily minutiae like taking meeting notes, finding information, or shopping online.

Some are already here. Google Duplex, the eerily human-like upgrade of Google Assistant, can pick up your calls to screen for spammers and telemarketers. It can also make calls for you to schedule restaurant reservations or salon appointments.

In China, consumers are getting used to Alibaba’s AliMe, which coordinates package deliveries over the phone and haggles about the price of goods over chat.

But while AI programs have gotten better at figuring out what you want, they still can’t understand a sentence. Lines are scripted or generated statistically, reflecting how hard it is to imbue machines with true language understanding. Once we cross that hurdle, we’ll see yet another evolution, perhaps from logistics coordinator to babysitter, teacher—or even friend? —Karen Hao
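
As an aside, the "predict the missing word" training trick behind BERT is easy to play with yourself. Below is a minimal sketch using the Hugging Face transformers library and the public bert-base-uncased checkpoint; the example sentence is just something I made up for illustration.

# A minimal sketch of BERT-style masked-word prediction, using the
# Hugging Face transformers library (pip install transformers).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT was pretrained on raw, unlabeled text to fill in masked-out words.
for pred in fill_mask("The assistant booked a [MASK] for two at 7 pm."):
    print(f'{pred["token_str"]:>12}  score={pred["score"]:.3f}')

Each candidate word comes back with a probability score, so you can see how confidently the model fills in the gap.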


  • eacao, Yuli Ban, Alislaws and 1 other like this

#42 funkervogt (Member, 685 posts)

Bill Gates's list of 10 coming breakthrough technologies included much smarter virtual assistants (1 to 2 years away) and dexterous robots:

https://www.technolo...hnologies/2019/

[article "Smooth-talking AI assistants" quoted in full in the post above]

 

To be clear, is Bill Gates saying he predicts those things will definitely be available by their given deadlines, or is he saying he thinks it would be cool if they were available by the deadlines?  


  • Yuli Ban and starspawn0 like this

#43 starspawn0 (Member, 975 posts)
I don't think Bill Gates is predicting the timeline. I think he just gave Tech Review his list, and then they asked their journalists to write articles about it.

This particular article is written by Karen Hao, who is a data scientist and was an engineer at the first startup spun out of Google X. So, I think she is qualified to express an informed opinion about this. The writing is of high quality, though some may object to the timeline. Certainly, given those BERT models and the more recent work by OpenAI, you can bet Google is trying to figure out how to use them to improve Assistant -- and other companies are doing the same (e.g. Apple with Siri, Amazon with Alexa).

It's possible that Gates did also supply the timeline, and believes it will definitely be first available by that time.

He does still have an advisory role at Microsoft, and has probably seen some of their experimental, not-yet-public products, especially given his professed interest in AI (he has talked about his interest in AI many times; I believe he once said that if he had to do it all over again, he would have studied AI). A few years back, Microsoft acquired several companies that had been working on conversational AI, like Maluuba, and they have probably been toiling away in secret on some projects -- I certainly haven't heard what they have been up to for a while.
  • Yuli Ban likes this

#44 starspawn0 (Member, 975 posts)
You know OpenAI has really done something good when it impresses Oren Etzioni (director of the Allen Institute for AI, and a world-class researcher at the University of Washington):
 
https://hbr.org/2019...i-based-forgery

In February, AI-based forgery reached a watershed moment–the OpenAI research company announced GPT-2, an AI generator of text so seemingly authentic that they deemed it too dangerous to release publicly for fears of misuse. Sample paragraphs generated by GPT-2 are a chilling facsimile of human-authored text. Unfortunately, even more powerful tools are sure to follow and be deployed by rogue actors.
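
For anyone who wants a feel for what this kind of generator does, here is a rough sketch (nothing from the article) that samples text from the small GPT-2 checkpoint OpenAI did release publicly, via the Hugging Face transformers library; the prompt and sampling settings are arbitrary.

# Sampling continuations from the small, publicly released GPT-2 model
# with the Hugging Face transformers library (pip install transformers).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

samples = generator(
    "By 2029, personal AI assistants will",
    max_length=60,           # total length in tokens, including the prompt
    do_sample=True,          # sample instead of greedy decoding
    top_k=50,                # only sample from the 50 most likely next tokens
    num_return_sequences=2,  # produce two different continuations
)
for s in samples:
    print(s["generated_text"])
    print("---")

Even this small model produces surprisingly fluent paragraphs, which is exactly what has people like Etzioni worried about forgery at scale.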



#45 Yuli Ban (Born Again Singularitarian, Moderator, 20,492 posts, New Orleans, LA)

[D] How long are we from: Voice Style Transfer | Voice to Voice, Male to Female, Adding and Removing Accents, & Swapping Vocalists in Music

Plenty of interesting replies here.


  • starspawn0 likes this

And remember my friend, future events such as these will affect you in the future.


#46 starspawn0 (Member, 975 posts)

That subreddit is not for futurism.  They probably think you are a developer.  

 

Also, be careful about linking to this site from a large subreddit like /r/futurology or /r/machinelearning.  I noticed today there were THOUSANDS of people online at FutureTimeline, reading one of your climate change threads.  After checking, I noticed you posted a link to the site from one of the major subreddits.  This is going to cost Will Fox money.

 

I treat small forums like this one like Fight Club: you don't talk about Fight Club on another forum (unless it's another small forum, where you won't get too many outbound viewers).


  • Yuli Ban likes this

#47 Yuli Ban (Born Again Singularitarian, Moderator, 20,492 posts, New Orleans, LA)

That subreddit is not for futurism.  They probably think you are a developer.

Oh, don't worry, I still remember your caution against posting there. The intention wasn't futurist discussion, however. The reasoning behind the question was entirely based on current practical uses and developments. I've been trying to gauge our current abilities in various synthesis fields, and there was arguably no better place to ask than a community of actual experts and developers, rather than somewhere like /r/Futurology or /r/Singularity, where I'd receive plenty of answers based on hopes, skimmings of pop-sci news articles, and science fiction movies, and virtually none based on actual progress and papers.
 

In the case of audio synthesis/manipulation on the level I imagined, there are plenty of potential uses and refinements that will go a very long way for this specific thread. 

 

Also, be careful about linking to this site from a large subreddit like /r/futurology or /r/machinelearning.  I noticed today there were THOUSANDS of people online at FutureTimeline, reading one of your climate change threads.  After checking, I noticed you posted a link to the site from one of the major subreddits.  This is going to cost Will Fox money.

On the contrary, having more traffic is what nets Will money, considering he still uses AdSense. It's been a source of angst for a while that site traffic has been down. The main FutureTimeline site, of course, is what attracts most visitors; the forums are peripheral. So while it might stretch bandwidth for the moment, I wouldn't worry too much about it making a negative impact. Indeed, I'm more concerned about increased attention attracting trolls, as has happened once before: not long after such a link was posted, we gained some really nasty users as well as a load of spam.


And remember my friend, future events such as these will affect you in the future.


#48 funkervogt (Member, 685 posts)

Related:

 

 

[By the end of the 2030s]

 

  • Movie subtitles and the very notion of there being “foreign language films” will become obsolete. Computers will be able to perfectly translate any human language into another, to create perfect digital imitations of any human voice, and to automatically apply CGI so that the mouth movements of people in video footage matches the translated words they’re speaking.
  • Computers will also be able to automatically enhance old films by accurately colorizing them, removing defects like scratches, and sharpening or focusing footage (one technique will involve interpolating high-res still photos of long-dead actors onto the faces of those same actors in low-res moving footage). Computer enhancement will be so good that we’ll be able to watch films from the early 20th century with near-perfect image and audio clarity.
  • CGI will get so refined that moviegoers with 20/20 vision won’t be able to see the difference between footage of unaltered human actors and footage of 100% CGI actors.
  • Lifelike CGI and “performance capture” will enable “digital resurrections” of dead actors. Computers will be able to scan through every scrap of footage with, say, John Wayne in it, and to produce a perfect CGI simulacrum of him that even speaks with his natural voice, and it will be seamlessly inserted into future movies. Elderly actors might also license movie studios to create and use digital simulacra of their younger selves in new movies. The results will be very fascinating, but might also worsen Hollywood’s problem with making formulaic content.

https://www.militant...2019-iteration/


  • Yuli Ban likes this

#49 starspawn0 (Member, 975 posts)
OpenAI has spun out a new profit-making venture called "OpenAI LP".  The stated goal seems to be about raising the money necessary to build AGI.  They seem to think it will take billions of dollars in compute, hardware, and talent:
 
https://openai.com/blog/openai-lp/
 

Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world.

We’ve experienced firsthand that the most dramatic AI systems use the most computational power in addition to algorithmic innovations, and decided to scale much faster than we’d planned when starting OpenAI. We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.


Also see

https://blog.gregbro...-openai-mission

Some on social media think this is just about greed, and that it's a clever new type of startup pitch, perhaps leveraging some of the social credit OpenAI has accrued.
  • Yuli Ban and Erowind like this

#50 starspawn0 (Member, 975 posts)
Richard Sutton (Reinforcement learning god) on how search and learning at scale -- "brute force" -- always win:
 
http://www.incomplet...tterLesson.html
 

This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes. To see this, and to effectively resist it, we have to understand the appeal of these mistakes. We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.


This is why I am skeptical of approaches we hear about coming from MIT that seek to understand how the brain works and then build that into ML models. A much better approach is to just say, "You know what? We don't understand it. Let's just have our model learn it."

This "just learn it" includes learning from brain data.

This is a good Twitter thread:

https://mobile.twitt...508628918448128

Deep into the thread Kording writes:
 

What neuroscientists do to brains now is pretty comparable to what ML researchers did to ML problems in the past - they believe that humans can see the relevant structure.


Exactly why I think e.g. @neurowitz's proposal of:

Scan brains --> generate understanding --> use understanding to improve ML

is doomed.
  • Yuli Ban likes this

#51 ralfy (Member, 117 posts)

Likely the effects of peak oil, global warming, and increasing global debt will continue to kick in, but now coupled with declining economic output.



#52 starspawn0 (Member, 975 posts)
This is one of the pitfalls of using public data to build AI that I described above:
 
IBM’s photo-scraping scandal shows what a weird bubble AI researchers live in

https://www.technolo...rchers-live-in/

Using brain data would bypass a lot of these issues, because you would need a lot, lot less of it (in terms of # of words). Maybe the data from just 10 to 100 individuals, for a few hundred hours, would be enough. In fact, maybe that much brain data from just one individual would be enough. Of course, we still don't have the brain scanning equipment to make this cheap and easy.
  • Yuli Ban likes this

#53 SotoRoss (New Member, 2 posts, Lenexa, MO)

I think self-driving mobile homes will be the technology of the future. Changing the habit of living in a specific place will change a lot of things, including the economy, ownership, culture, etc.


  • eacao and Alislaws like this

#54 Alislaws (Democratic Socialist Materialist, 1,927 posts, London)

I think self-driving mobile homes will be the technology of the future. Changing the habit of living in a specific place will change a lot of things, including the economy, ownership, culture, etc.

Never really thought of this; we could get a weird resurgence of nomads!

 

Living on a self-driving ship would also be neat!


  • Yuli Ban likes this

#55 Lavabawl (New Member, 1 post)

All cyberpunk, I guess. At least, that's how I'd wish it to look. I really adore the new cinematic tradition of bringing futuristic elements into TV series.



#56 funkervogt (Member, 685 posts)

 

Living on a self-driving ship would also be neat!

 

Isn't that the idea behind seasteading? 



#57 starspawn0 (Member, 975 posts)

This is an interesting podcast interview with Stanford professor Jay McClelland, who is a well-known old-timer in "parallel distributed processing" (which includes Deep Learning and theories about modelling cognition via deep neural nets):

 

https://www.stitcher.../brain-inspired

 

(He was formerly the chair of the psychology department at Stanford.)

 

He is trying to build a system with the ability to reason about mathematics using deep neural nets. It will be able to take an exam in geometry, say, reason about the questions, come to the right answers, and then explain how it got those answers, just like a human. He sees this as an important and difficult step towards AGI. It's a very hard problem -- if he solves it, then a scaled-up version of his system could learn how to prove theorems that stump even the best mathematical minds, for example. I would also say that if he could do this, then it should be possible to build a system that can reason in any axiomatic framework -- for example, in physics or biology. It wouldn't necessarily invent new, plausible axioms, but it could help solve important and hard problems once the ground rules of the "game" are determined.

 

He says he thinks it will take at least 10 years to build it, though admits that he said the same thing 5 years ago.  Still, he does feel he is making progress.

 

He seems to agree with Richard Sutton that building in lots of structure just because you think it is important and will speed up the learning process might be a bad idea. For example, hard-coding in symbolic reasoning is not necessary -- deep neural nets are capable of symbolic reasoning without any add-ons. It sounds like he takes a minimalist approach, and thinks that feeding the networks the right data and experience is at least as important (if not more so) as choosing the right architecture. I recall (but could be mistaken) that he thought Deep Learning and deep neural nets could learn in very human-like ways if they were fed the sensor-rich data that we humans take in constantly.

 

One of the stories he mentions is a talk he gave at MIT many years ago. He mentions how it was customary (at the time), when one gave a talk on a paper, for people to show up to challenge it. In his case, two challengers showed up, one being Steven Pinker. He said the two challengers each spent an hour basically saying why he was wrong -- and then he got to speak. LOL! He didn't agree then, and doesn't agree now, with Pinker's theories of symbolic computation.

 

I would say the fact that the challengers had to work so hard, for two whole hours, suggests they found his theories to be a threat.


  • Yuli Ban likes this

#58 starspawn0 (Member, 975 posts)
Consumer Reports piece about voice and chatbots:

https://www.consumer...ange-your-life/
 

“The holy grail,” says Ashwin Ram, who led the artificial intelligence (AI) research team for Alexa and now works at Google, “is being able to interact with machines the way we do with each other, which is through voice.”

Innovators like Ram envision today’s computers—perched on desktops or tucked into pockets—fading in importance as chatty AIs become the primary gateways to all that can be done digitally.

....

I recently experimented with the capability myself, instructing the Assistant to make a reservation for two people at an upscale restaurant near my home in Northern California. A couple of days later, I got a notification from Google that the reservation was confirmed.

Curious about how smoothly the process had gone, I phoned the restaurant and spoke with the hostess. She said the automated caller had immediately identified itself as Google Assistant and had been “polite but awkward.” There was only one problem; the Assistant didn’t pronounce my name correctly. So instead of correctly taking down a reservation for “James Vlahos,” the hostess had booked a table for two under the name of “Yasmine Bauhaus.” (Not a bad pseudonym, really.)

That was an amusing misstep by Google Assistant, but as AIs take on a bigger communication role the consequences of mistakes can grow. Misunderstandings matter a lot more if you're calling for blood test results or even making an airline reservation. Voice computing experts promise that their creations will rapidly become more capable, but it's unclear just how reliable these AIs need to be before they deserve our trust. What’s more, as we outsource everyday communications to our new digital servants, we may also surrender a bit of our humanity.


He goes on to describe a chatbot he had built to imitate his father, who died of cancer. In the future, these chatbots will be built with BCI data, and be much more faithful imitations of the person.
  • Yuli Ban likes this

#59 funkervogt (Member, 685 posts)

Consumer Reports piece about voice and chatbots:

https://www.consumer...ange-your-life/
 

He goes on to describe a chatbot he had built to imitate his father, who died of cancer. In the future, these chatbots will be built with BCI data, and be much more faithful imitations of the person.

 

In the farther future, the chatbots will incorporate data from the dead person's genome as well. 


  • starspawn0 likes this

#60 starspawn0 (Member, 975 posts)

I have thought of that, in terms of "resurrecting" some of my dead relatives (and even thought about doing it based on hair samples, or even samples from an exhumation), but convinced myself that the DNA wouldn't be too predictive of how they behaved (which depressed me a little). It would be above chance level; but combined with text and brain data, it could perhaps be made pretty accurate. Brain data alone would give pretty accurate predictions, and would even allow you to infer large parts of the DNA.

 

....

 

On another matter:

 

In the following interview

 

https://vux.world/talk-to-me/

 

the author says that he thinks Facebook could be interested in building AIs that model users -- or how users want to appear to the world -- to represent them on social media when they aren't online.  He said this is pure speculation on his part... but it makes one wonder why he would think this is something they are actively working on.  


  • Casey and Yuli Ban like this



