FutureTimeline.forum

Covid-19 and exponential growth of technology


10 replies to this topic

#1
starspawn0
Member · 1,961 posts
People have underestimated the exponential growth of Covid-19, just like they do the exponential growth of technology.  One minute, the virus is not much of a threat, and one may be tempted to laugh at people taking it too seriously; the next minute, tens of thousands die and the virus shuts down the global economy!  Likewise, one minute people laugh at the stupidity of Siri and crack jokes about Alexa's bloopers; then suddenly they do a lot better, and before long, you'll be able to hold a conversation with them.  The ink will barely be dry on news articles about how "we need our A.I. to have common sense" or "deep learning is only good for pattern recognition, and we're still far from having reasoning", and then these problems will be solved, and a new thing the machines can't do will need to be found.  I think the same is also true of advanced new BCI (Brain-Computer Interface) technology: the articles mention just how limited and primitive the technology is today, and how far away we are from translating imagined speech to text; but before you know it, these problems will be solved, and the world transformed.

 

This should also serve as a forewarning about what can go wrong, not just about what can go right.  For example, maybe in the not-too-distant future a rogue A.I. system will be developed as a simple hacking tool, but will take over the world's computer networks at an alarming rate.  At first, it will seem like a joke; but in weeks, days, or hours, large numbers of people will die:

 

https://www.futureti...trophe-of-2030/
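To see how quickly doubling outruns intuition, here's a toy calculation; the numbers are entirely made up for illustration (100 initial cases, doubling every 3 days), not epidemiological estimates:

```python
# Toy illustration with hypothetical numbers: 100 initial cases,
# doubling every 3 days, tracked for one month.
cases = 100
for day in range(0, 31, 3):
    print(f"day {day:2d}: {cases:,} cases")
    cases *= 2  # one doubling period has elapsed
```

After ten doublings (day 30), the count is 102,400: a thousandfold increase from something that looked negligible at day 0.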

 

 

 

 



#2
Cyber_Rebel
Member · 398 posts · New York

Most people just cannot comprehend exponential data because it runs contrary to linear timescales. Change, to most people, is a gradual process in which they can still understand what is happening. Getting from point A to point B is easy, but no one ever considers point Z until we are actually there. There have been many articles about this in the past: 

 

Your Brain Cannot Fathom Exponential Growth, and Your Computer Network Cannot Survive It

 

https://www.endsight...nnot-survive-it

 

What our brains are missing

 
I just got finished reading an interesting book called Abundance by Peter Diamandis and Steven Kotler. It’s a great book that presents a realistic case for why the future is likely to be much better than we suppose on account of technology advancements. One idea in the book that stuck with me is this – the human brain is not wired to think in exponential terms. Here is a thought experiment presented in the book to make the point:
 
If I asked you to go out your front door and take 30 steps, I’ll bet you could guess about where you’d end up, without even taking that short walk.
 
It’s a simple linear step-by-step experience you’ve done a thousand times.
 
But, what if I told you to take 30 exponential steps. That is to say, double the size of your step each time. 1 + 2 + 4 + 8 + 16 + 32 + 64... etc...
 
If you took 30 exponential steps out of your front door, where do you think you’d end up?
 
The answer? You’d go around the earth 26 times!

 

It's possible that one "consequence" of Covid-19 is that people will start thinking in these terms, rather than relying on outdated assumptions about growth.
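The book's arithmetic checks out, by the way. A quick sanity check, assuming a 1-metre first step and Earth's circumference of roughly 40,075 km:

```python
# Verify the "30 exponential steps" claim: first step 1 m, each step doubles.
EARTH_CIRCUMFERENCE_KM = 40_075

total_m = sum(2 ** n for n in range(30))  # 1 + 2 + 4 + ... + 2**29 = 2**30 - 1
total_km = total_m / 1000
print(total_km / EARTH_CIRCUMFERENCE_KM)  # roughly 26.8 circuits of the Earth
```

So "around the earth 26 times" is about right: the last step alone covers more than half the total distance, which is the part linear intuition misses.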



#3
starspawn0
Member · 1,961 posts
Diamandis is right, and so was Kurzweil, about people not fully comprehending it.  The psychologist (and psychometrician) Stuart Ritchie wrote an article recently about how even his fellow psychologists had not properly understood the Covid-19 pandemic:
 
https://unherd.com/2...on-coronavirus/
 
He was especially chiding them for their smug analyses about how people who were worried were beset by cognitive biases, neuroses, and such.  Steven Pinker wrote a tweet seemingly endorsing one of these analyses, before Gary Marcus (his former student) wrote a critical tweet in response, something like, "It's rare that I disagree with my former mentor Steven Pinker..."; I don't recall exactly how he phrased it. 

Ritchie wrote:
 

We can even play the psychologists at their own game: there are also biases in the opposite direction to those discussed above. For instance, “exponential growth bias” might mean that our standard ways of reasoning break down in a situation where a threat accelerates like this virus. As observed in an insightful article on Italy’s ongoing horrific experience, there’s also “confirmation bias”, where we seek out evidence that confirms our previous beliefs or desires.

In this case, many were understandably desperate to believe that the virus wouldn’t be too much of a problem, and it seems to have led them to underreact very badly indeed. None of the psychological writers above seemed to consider these opposing biases — and ironically, it might have been because they were biased against thinking about them.


On this last point, even conservatives understand that sometimes you have to give pessimists their due:

https://www.national...n-of-the-virus/
 

At a time like this, people understandably dislike pessimists. But that doesn’t mean that those who bring bad news are wrong.



#4
PhoenixRu
Member · 994 posts

I do not expect any additional Covid-related growth. What will really happen, IMHO, is accelerated adoption of already-existing technologies that were held back by social inertia: online education, working from home and, yes, technologies of social surveillance and control.



#5
tomasth
Member · 294 posts

There is also the difference between horizontal and vertical change: a technological advance in one field is easier to grasp than the many improvements it will make to other fields at the same time.

Same with the various impacts of climate change or the pandemic.

The economic crisis coming from the epidemic will also have many side effects that compound those other exponential changes.

 

It's like a rising tide: you may watch one tree being submerged slowly (and then quickly, because of the exponential increase), but miss all the surrounding landscape getting exponentially submerged.

 

There are too many interactions and side effects to track, or even notice, while they are in the slow phase; by the time they are increasing, it's drowning.



#6
Poncho_Peanatus
Member · 20 posts

Yeah, I too don't see any particular new technology coming out of this. New medical scanners, perhaps? But besides that: more political awareness and readiness, and more effective utilization of all the remote-based tech already around, like working from home, food and merchandise deliveries, faster mail, drones, robots, more automation and so on....



#7
Poncho_Peanatus
Member · 20 posts

(Quoting Cyber_Rebel's post #2 above.)

 

 

mindblown 



#8
Kynareth
Member · 189 posts

I was told not to underestimate exponential growth after IBM Watson won Jeopardy in January 2011. It had 16 terabytes of RAM. Now it's April 2020, AI still hasn't changed our lives (especially if you don't live in an English-speaking country), and home computers have only 16 gigabytes of RAM. Futurists overestimated the pace of change.



#9
starspawn0
Member · 1,961 posts

I, too, was taken in by IBM's bold claims, at least for a few years.  The problem is that Watson required a lot of hand-engineering to build.  It's not an example of "exponential growth".  It's an example of:  if you want exponentially more out of the method, you're going to have to put in exponentially more effort.  But since people are lazy, and companies don't want to spend billions and billions of dollars even to get to a minimally-usable chatbot, say, the approach has languished.

 

What is needed is a method that scales in the following way:  you put in just a little extra effort to build an AI model, and you suddenly double the performance or cut the error in half.  Do a little more work, and you cut the error in half again -- continuing until you reach or exceed human performance.
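To make the contrast concrete, here's a sketch with made-up numbers (not measurements of any real system) of the two scaling regimes: in one, each extra unit of effort halves the error; in the Watson-style regime, halving the error k times costs 2**k units of effort.

```python
# Illustrative numbers only -- not real measurements of any system.
# Regime A: each unit of effort halves the error (the desirable scaling).
# Regime B: halving the error k times costs 2**k units of effort (Watson-style).
for k in range(1, 6):
    error = 0.5 ** k
    print(f"error {error:.4f}: regime A needs {k} units, regime B needs {2 ** k}")
```

In regime A, five units of effort get you to ~3% error; in regime B, the same error level costs 32 units, and each further halving doubles the bill again. That is why hand-engineered systems stall.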

 

Deep Learning models aren't quite that good, but they are closer to it than IBM Watson is.  Human laziness doesn't slow the progress down very much, especially since people get a much larger reward for the extra effort of coding DL models.

 

What is standing in the way of going from the lab to a product you can buy is mostly things like:

 

* Fear that chatbots will say something racist, sexist, off-color, immoral, etc.;

 

* Fear that they will do something dangerous -- such as recommend people take pills that can kill them, to treat certain diseases;

 

* The need to make it work in many different languages, respond to many different accents, and handle many different styles of communication.  If it doesn't, then the New York Times will write an article about how Google Assistant is biased against Latinos, say.

 

These "human issues" are so difficult to overcome with 99.99% accuracy (which is the level required to fend off nasty NYT articles) that it's going to be a long while before you get to use a super-smart assistant or chatbot, even though Google could easily build one right now that is light-years beyond any you've used to date.

 

(The same is true for driverless cars.  Current A.I. methods will probably work safely about 99.99% of the time, in a wide range of contexts; but they need to work 99.9999% of the time.)
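Back-of-the-envelope arithmetic on why those extra nines matter; the daily interaction volume here is a made-up assumption:

```python
# Hypothetical volume: a service handling one million interactions per day.
interactions_per_day = 1_000_000

for accuracy in (0.9999, 0.999999):  # "four nines" vs "six nines"
    failures = interactions_per_day * (1 - accuracy)
    print(f"{accuracy} accuracy -> about {failures:.0f} failures per day")
```

Four nines sounds impressive, but at that volume it still means on the order of a hundred embarrassing failures every single day; six nines brings it down to about one.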

 

In fact, here's the recipe:  train a giant Transformer neural net language model on a trillion words, and maybe also add images.  Fine-tune it to be an assistant.  It will work great 99% of the time; but about 1 time in 1,000 it might tell you to go kill yourself.

 

BCI data might be a solution to make these A.I. models robust enough to where they don't screw up very often.  We'll see...



#10
Kynareth
Member · 189 posts

Theoretically I have an "A.I." voice assistant in my 4K TV, but it's utterly useless. It needs to understand me and work in at least two languages at the same time. Full-blown disappointment.



#11
starspawn0
Member · 1,961 posts

Like I said, it can be made to work a lot, lot better using existing methods... if you can tolerate it telling you to kill yourself about once every couple months.





