The first smartphones and smart watches


4 replies to this topic

#1
funkervogt

    Member

  • Members
  • 690 posts

IBM invented the first smartphone in 1992, and Microsoft unveiled the first smart watch in 2004. Both devices proved ahead of their time and were commercial failures due to their high costs and limited capabilities.

 

https://www.textrequ...ion-smartphone/

https://wear.guide/s...tch-smartwatch/

 

This gives me hope that other promising consumer technologies that disappeared will return and live up to their original promises. What comes to mind first is augmented reality glasses. Google Glass was released in 2013 to much hype, but proved a commercial failure. I tried a pair a few years ago and quickly understood why they didn't become popular: they were too expensive, the augmented overlay filled only a small fraction of the wearer's field of view, and giving the device commands was too cumbersome. But there's no reason to think all of these deficiencies can't be solved with time, and I predict augmented reality glasses will return someday and become popular, as happened with smartphones and smart watches.

 

https://en.wikipedia...ki/Google_Glass

 

See also: https://www.smartins...-10-years-away/



#2
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,494 posts
  • LocationNew Orleans, LA

There's no doubt in my mind that there's a future for augmented reality visors. The song remains the same: smartphones in the 1990s had to grapple with stark technological limitations, limitations that fell in the 2000s as computer technology miniaturized and sped up. Smartwatches in the 2000s had to grapple with stark technological limitations, limitations that fell in the 2010s as computer technology miniaturized and sped up. Smartglasses in the 2010s have had to grapple with the same kinds of limitations.

 

But a few limitations also come down to form factor: smartwatches never approached the impact of smartphones because of the fundamental flaw that you can only control them with one hand, which greatly limits their utility. Smartwatches did prove very useful for health tracking, however, and that's why they became glorified FitBits. Smartglasses can't easily be controlled with your hands at all, because there's no tactile feedback. For smartglasses to take off, we need advances in three fields:

  • Gesture control
  • Voice control
  • Brain control

 

When smartphones first came out, their main limitation was that you couldn't use a mouse. They really needed touch screens to reach their peak utility. But they were always still fairly easy to use because we could use both hands to type, navigate, and manipulate what was on screen. As mentioned above, smartwatches tie up one hand unless you want to attempt a supremely awkward crab-hand grip. Smartglasses won't have even that option.

As I've been saying for, god, five years now, AR headsets will take off when we have competent brain-computer interfaces. Gesture and voice control come across as too impersonal and, ultimately, too inaccurate.

Actually, even smartphones would in turn be improved by BCIs. I can't think of anything that wouldn't be massively improved with cyberkinesis.



And remember my friend, future events such as these will affect you in the future.


#3
Erowind

    Member

  • Members
  • 978 posts

/\ Adding to this. 

 

I think AR could work before BCI with a combination of gesture-based interfaces and traditional buttons. The HTC Vive controllers, which I use regularly, are very good at everything except typing and precise tasks (and even for typing, I could see people younger than me getting almost to smartphone speeds with their thumbs on those touchpads). Valve's next controllers are adding finger tracking, and the Leap Motion controllers have always been stellar. With some clever UI design and another decade or so of R&D, some really comfortable and usable controllers could be made for AR.

The jump from carrying nothing to carrying smartphones everywhere happened pretty quickly and relatively seamlessly, so the future could be both very interesting and somewhat similar with early commercial AR. I can see a world where people carry two remotes with them everywhere to interact with their AR HUDs. Even without BCI (which I do hope happens but isn't guaranteed in the short term), it might be possible to jury-rig AR.


Current status: slaving away for the math gods of Pythagoras VII.


#4
funkervogt

    Member

  • Members
  • 690 posts

One solution to the "interface problem" would be to have users sync their smartwatches with their AR glasses. Since a smartwatch could continuously emit short-range signals, it would be easy for the AR glasses to track the watch's location in 3D space. This would probably be easier and less energy-intensive than having the glasses keep constant visual track of the movements of the wearer's arms, hands, and fingers. The arm the smartwatch was strapped to would serve as the "input" arm for the AR glasses, and movements of that arm, or of its hand and fingers, would send commands. I could easily see technologies such as these being integrated into smartwatches at low cost for this purpose:

 

https://www.theverge...house-employees

https://yangzhang.de.../Tomo/tomo.html
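To make the idea concrete, here's a minimal sketch of the tracking half of this scheme. Everything here is my own assumption, not a feature of any real product: I assume the glasses carry four short-range receivers on the frame, each reporting a clean distance to the watch, and recover the watch's position by standard least-squares trilateration.

```python
import numpy as np

# Hypothetical receiver ("anchor") positions on the glasses frame, in meters.
# The layout is illustrative; the fourth point is offset forward so the
# anchors aren't coplanar, which keeps the 3D solve well-conditioned.
ANCHORS = np.array([
    [0.00, 0.00, 0.00],   # left temple
    [0.14, 0.00, 0.00],   # right temple
    [0.07, 0.05, 0.00],   # top of bridge
    [0.07, 0.02, 0.04],   # nose pad, offset forward
])

def locate_watch(distances):
    """Estimate the watch's 3D position from four range readings (meters).

    Classic linearized trilateration: subtracting the first anchor's
    sphere equation |x - p_i|^2 = d_i^2 from the others leaves a linear
    system in the unknown position x, solved here by least squares.
    """
    p0, d0 = ANCHORS[0], distances[0]
    A = 2.0 * (ANCHORS[1:] - p0)
    b = (d0**2 - distances[1:]**2
         + np.sum(ANCHORS[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With real radio ranging the distances would be noisy, so the glasses would presumably filter these estimates over time before mapping arm trajectories to commands, but the geometry is this simple at its core.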



#5
funkervogt

    Member

  • Members
  • 690 posts

Even so, we might just have to accept that our ability to interface with AR glasses is fundamentally more limited than our ability to interface with other types of computing devices, like PCs and smartphones. I don't think this is necessarily a show-stopping problem. Smartwatches have these kinds of limitations themselves (a tiny screen that's nearly impossible to type characters on), yet millions of people have bought them and are happy with them, and smartwatches are now "mainstream" devices.





