
How advanced will Alexas get?



#1
Maximum7
Today we have Alexa, which can answer questions and play music, all based on basic Google searches. Will we ever get an “Alexa” that can answer incredibly hard questions like “Why doesn’t this girl like me?” or “Is he/she lying?”

#2
Future historian

I suspect not much...

Alexa will be replaced by better personal assistants.

Amazon makes more money by releasing new models.



#3
starspawn0
Doesn't look like a sincere question (is why some girl doesn't like you really a question you want a virtual assistant to answer?). But I will treat it as such:

All the major virtual assistant makers (Google, Amazon, Apple, Samsung, Microsoft) have a lot more in store for the future. The rollout is just slow, because they have to get it super-accurate.

For example, Google recently added the ability to service multiple requests in a single sentence -- e.g. "Turn on the fan, set the security system and read me the mail." They also added "continued conversation", so that you don't have to keep saying "OK, Google" before each follow-up question. Amazon has given Alexa similar abilities. There are many ways machines can fail at these seemingly simple tasks. For example, in continued conversation, if you make an aside to someone else in the room, Google needs to be smart enough to "realize" you aren't addressing it.
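
To see why even that feature is nontrivial, here is a toy Python sketch (my own illustration, nothing like Google's or Amazon's real pipeline) that naively splits a compound utterance on conjunctions -- along with the failure mode that naive splitting immediately runs into:

    # Toy multi-intent splitter -- deliberately naive, NOT how Google
    # or Amazon actually do it; production systems use trained models.
    import re

    def split_requests(utterance):
        # Split on commas and on "and" between clauses.
        parts = re.split(r",\s*|\s+and\s+", utterance.lower())
        return [p.strip() for p in parts if p.strip()]

    print(split_requests("Turn on the fan, set the security system and read me the mail"))
    # -> ['turn on the fan', 'set the security system', 'read me the mail']

    # Failure mode: an "and" inside a single intent gets wrongly split.
    print(split_requests("Play songs by Simon and Garfunkel"))
    # -> ['play songs by simon', 'garfunkel']  (wrong!)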

Ashwin Ram (now at Google, formerly at Amazon) demoed next-gen Alexa capabilities at a conference last year. One thing it will be able to do is unpack requests -- e.g. if you ask, "How old is Obama's wife?", it first has to look up who Obama's wife is (Michelle), and then look up her age. That kind of request is known as "two hops", as in "two hops through the knowledge graph". Google is adding similar capability.
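
To make the "two hops" idea concrete, here is a minimal sketch of hopping through a toy knowledge graph (the graph, relation names, and values are all made up for illustration):

    # Two-hop lookup through a toy knowledge graph.
    # Every fact and relation name here is invented for illustration.
    graph = {
        ("Barack Obama", "spouse"): "Michelle Obama",
        ("Michelle Obama", "age"): "54",  # placeholder value
    }

    def hop(entity, relation):
        return graph[(entity, relation)]

    # "How old is Obama's wife?" decomposes into two hops:
    wife = hop("Barack Obama", "spouse")  # hop 1: find the wife
    age = hop(wife, "age")                # hop 2: find her age
    print(wife, "is", age)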

Amazon has also added (or is adding) the ability to connect with its 40,000+ skills without your having to know what they are. This is a big leap beyond how it worked a year ago:

https://developer.am...customer-s-need
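
Conceptually, that kind of name-free skill selection is a ranking problem: score the utterance against every skill's description and pick the best match. A bare-bones sketch, with keyword overlap standing in for the trained ranking models Amazon actually uses (the skill names here are invented):

    # Name-free skill selection as a toy ranking problem. Amazon
    # ranks its 40,000+ skills with trained models; keyword overlap
    # here just illustrates the idea. Skill names are invented.
    skills = {
        "RideSkill":   "request a ride car taxi pickup",
        "RecipeSkill": "cook recipe ingredients dinner meal",
        "WineSkill":   "wine pairing bottle red white vintage",
    }

    def pick_skill(utterance):
        words = set(utterance.lower().split())
        return max(skills, key=lambda s: len(words & set(skills[s].split())))

    print(pick_skill("what wine goes with lasagna"))  # -> WineSkill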

And they have built a sophisticated meaning representation language for handling requests, and so far they haven't even scratched the surface of what they can do with it:

https://m.media-amaz...1517613703_.pdf
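
The paper describes requests parsing into a graph of actions, types, properties, and roles. Very roughly -- this is my paraphrase as a data structure, not Amazon's actual schema -- a request like "play Hello by Adele" becomes something like:

    # A rough, hypothetical approximation of a meaning-representation
    # graph for "play Hello by Adele" -- the real schema in the paper
    # differs in its details.
    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        type: str
        name: str

    @dataclass
    class Action:
        verb: str
        roles: dict = field(default_factory=dict)

    request = Action(
        verb="PlaybackAction",
        roles={
            "object": Entity(type="MusicRecording", name="Hello"),
            "byArtist": Entity(type="MusicGroup", name="Adele"),
        },
    )
    print(request)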

In the future, they will probably try to integrate it with a technology called "program synthesis" to combine skills. Samsung's Bixby 2.0 (due out in a few months) is said to be able to do this, too. So, for example, you'll be able to ask something like, "Alexa, I'm going to my sister's house tomorrow. What kind of wine should I buy on the way to have with lasagna?" And Alexa will call up skills for:

* Wine pairing with a meal.

* Route-planning (to figure out which route you will take to your sister's house).

* Store inventory look-up to determine which items can be found in which stores.


And then it will combine the information from those skills together to service your request.
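
In code terms, "combining skills" means synthesizing a small program that pipes one skill's output into the next. Here is a hand-written stand-in for the kind of program a synthesizer might produce for the wine request (all three skill functions are hypothetical stubs, not real APIs):

    # Hand-written stand-in for a synthesized skill pipeline.
    # All three "skills" are hypothetical stubs, not real APIs.
    def wine_pairing(dish):
        return {"lasagna": "Chianti"}.get(dish, "Pinot Noir")

    def stores_on_route(destination):
        # Pretend route planning found these stores along the way.
        return ["Main St Market", "Oak Ave Deli"]

    def store_inventory(store):
        return {"Chianti", "Merlot"} if store == "Main St Market" else {"Merlot"}

    # The synthesized program: pair the wine, then find a store on
    # the route that actually stocks it.
    wine = wine_pairing("lasagna")
    stop = next(s for s in stores_on_route("sister's house")
                if wine in store_inventory(s))
    print("Buy a bottle of", wine, "at", stop)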

Viv Labs was supposed to be able to do that; they got bought out by Samsung, and their tech will appear in the new Bixby 2.0.

Socialbots are another area of expansion Amazon is pursuing -- basically, bots you can have a conversation with about anything. This will likely merge with Alexa, just as Microsoft's Xiaoice can engage in chitchat and service requests at the same time.

https://arxiv.org/abs/1801.01957
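
At its simplest, merging chitchat with task handling is a routing decision made on every turn. A toy dispatcher (my illustration; Xiaoice's actual architecture, described in the paper above, is far more elaborate):

    # Toy turn-level router between task handling and chitchat.
    # Real hybrid systems like Xiaoice use learned classifiers.
    TASK_VERBS = {"turn", "set", "play", "read", "order", "remind"}

    def handle_turn(utterance):
        first_word = utterance.lower().split()[0]
        if first_word in TASK_VERBS:
            return "[task] executing: " + utterance
        return "[chitchat] responding socially to: " + utterance

    print(handle_turn("turn on the fan"))
    print(handle_turn("do you ever get lonely?"))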

Assistant makers are also adding "full duplex" capability, so that you can talk to assistants in a natural way -- you will even be able to interrupt them mid-sentence, and they will handle it as gracefully as a human. In fact, you can already do this with Xiaoice.
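
The control flow behind that kind of interruption handling, stripped to a skeleton (a hypothetical loop of my own devising, not any vendor's actual API):

    # Skeleton of barge-in handling in a full-duplex loop.
    # The "voice-activity detector" is faked for illustration.
    def user_interrupted(word_index):
        # Pretend the user starts talking during the fourth word.
        return word_index == 3

    def speak(sentence):
        for i, word in enumerate(sentence.split()):
            if user_interrupted(i):
                print("\n(stop TTS mid-sentence, switch to listening)")
                return
            print(word, end=" ")
        print()

    speak("Your meeting tomorrow is at ten with the whole team")
    # -> Your meeting tomorrow
    #    (stop TTS mid-sentence, switch to listening)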

In a few years, all these advancements will fuse to produce virtual assistants like the AI in the movie "Her".

I think AIs built using BCI data will result in even smarter, more natural conversations; but that isn't the only path to the AI of science fiction.



