

What 2029 will look like

168 replies to this topic




  • Members
  • 966 posts

John Carmack interview on Joe Rogan's show:




Around 48 minutes in he gives a prediction about AGI.  He thinks the first hints will be here within 10 years.

I also watched that and noted that prediction. The wording of it is so caveated and uncertain that it's basically worthless:


"[We] will potentially have clear signs of AGI maybe as soon as a decade from now. "


Um, OK. "We will have created an AGI within a decade" would have been awesome. What he actually said is not. 


I really like and agree with his later point that AGI will be created through a trial-and-error process where someone will stumble upon the right code after many failed experiments. As computers that are as powerful as human brains get cheaper, more and more teams of programmers will be able to afford them, and the amount of experimentation will increase. The first team to make an AGI will get a Nobel Prize, even if they succeeded basically due to luck, and even if they can't explain the mechanics of how their code gives rise to intelligent thought. (Likewise, brain scientists can't explain how the human brain's structure or physiology gives rise to intelligent thought)




  • Members
  • 1,856 posts

I think he really believes it will be here in 10 years, but is just too cautious and doesn't want to be held to a prediction -- so, he doesn't want to be included in somebody's list of failed predictors.


I agree with Minsky and others that building AGI isn't a project that requires special genius, and isn't like physics or math. It's more a big, complicated engineering project. It's mostly just about encoding our priors into machines -- and most of these priors have no rhyme or reason to them, other than that they help us to survive. The word "kludge" is often used to describe it. Through experimentation, perhaps enough of these priors can be discovered and baked into AI systems to make them behave similarly to humans. Where I disagree with the Minskians is that I don't think you need that many priors (in terms of number of bits).




  • Members
  • 966 posts

I don't think it's wise to try to divine what he's thinking in his head. Better to look at his spoken words. 


I don't expect an AGI to be created before 2040. It's entirely possible it might even be delayed until the end of this century. 



    Democratic Socialist Materialist

  • Members
  • 2,106 posts
  • Location: London

I don't think it's wise to try to divine what he's thinking in his head. Better to look at his spoken words. 


I don't expect an AGI to be created before 2040. It's entirely possible it might even be delayed until the end of this century. 

I think we will know, one way or the other, pretty soon.


If computer advances do not suddenly stop, we should start getting to the point where we can build computer systems with more raw power than human brains are estimated to have -- probably sometime in the 2030s.


At this point we will see what happens if you take a machine learning system and throw human-brain levels of power and increased complexity at it.


Either this will expose the limitations of machine learning methods, revealing that there is some secret sauce to human cognition which scientists will then have to figure out how to replicate in a computer.


Or we'll start having some difficult calls to make on how you decide if an AI is sapient or not.


I do think we will need human-brain levels of processing power in order to truly replicate human abilities; I'd guess millions of years of evolution may be tough to beat from an efficiency perspective.


On the other hand there are so many things that narrow AI can achieve that would totally wreck the world economy, so who knows what will happen?




  • Members
  • 467 posts
  • Location: Australia

Irritating that that sort of pussyfooting around will be necessary. People should have the common sense not to blame the parent company for whatever an AI says. The stupidity of people slows down progress as much as legitimate technological challenges do. I'll be glad whenever the technology advances enough that start-up groups can take control and go "fuck this, this is stupid, let's get things moving instead of tip-toeing around out of fear of the general public being a bunch of overreactionary dumbasses."


Case, this is for you, habibi. Hope you know I love you!


This is in reply to the comments on the front page of the FTF, not exactly to your post I've quoted. This was so I could send you a notification. 


These are some off-the-cuff expectations. The first few I feel a little more balls-out about; some others are just conjecture, so I hope you can bear with me, Case.


(1) Car ownership will become a moot point. In the year 2029, I think ride-share companies will boast autonomous vehicle fleets in most developed nations, which will be cheap enough (even taking into account those thieving regulatory-highway-men-mofos) to change the mindset of the population. The current mindset is "buy a car or catch public transport. If you've some dollars to flaunt then ride share". In 2029, the mindset will be "ridesharing is the cheapest of the convenient options. Own a car if you're an art aficionado". 


(2) Space economics will have twirled on its knobhead. There's a lot to do in space, especially given geopolitical considerations. Both the U.S. and China will be frothing over space access, and the private sector will be cramming to get to orbit. Fully reusable launch vehicles will have thrown the game. The Aitken crater (or its north-pole counterpart) will be all the rage. The U.S. will have landed there already. The rest of the world will be hot on its heels. In 2029, projects we would have thought sounded sci-fi (like comms, telescopes, refuelling, experiments, etc.) will see launch manifests obese with eager payloads. I don't think the world will necessarily have exploded with payloads to the Earth-Moon system by 2029, but it will be an order of magnitude above today, a feeding frenzy, and the affordability of access will be turning nations on their knobheads.


(3) AI will have passed the Turing test, etc., etc. It's a little tough to see just how large an impact AI will have had by 2029, but I think it will be extraordinarily stark compared to the present. People won't exactly be feeling some singularity effect by this point. What people will feel is the rate of change picking up so dauntingly that it's all anyone will be able to speak about. Visual AI tools like deepfakes are just the beginning. AI will be overwhelming every corner of human thought in 2029, so the political sphere, business sphere, philanthropic sphere, etc. will be bifurcated into the optimists and the pessimists: those who see AI doing incredible things and think "gee whiz, let's solve our problems", and those who see the writing on the walls: "this doth not bode well for I".


(4) Oorah! The military is already prototyping augmented reality into their battle systems. They're already using (DARPA-ticked) android information systems on the platoon level. By 2025 the U.S. military (and other Tier-1 western militaries like Aus, the U.K., and Canada [lol]) will be using augmented reality systems on the platoon level. By 2029, there's some possibility COTS AR will be standard issue. The maturity and functionality of 2019 systems leads me to believe (as well as military interest in the tech) that in 2029, AR will be an integral part of the battlespace. 


(5) Vaping's gonna be the shit. People will still be people. Nicotine will still be nicotine. Let's vape. 


(6) AI will be all anyone will be able to talk about. There are three points here:

   (a) AI will be so sophisticated that it will be freaking everyone out. It will have done this years beforehand, but in 2029 this collective mindfuck will still be going on.

   (b) AI 'stealing' jobs is going to be all the rage. I think in 2024 it's going to be (along with China) the major subject matter of the election cycle. In 2029 it will be preeminent.

   (c) One favourable point will be content creation. Have you ever seen the YouTube channel Dust? It's a sci-fi short-film channel. Full of crap. Really bad plotlines and production quality. There are, however, some good ones, one of which was turned into Dust's first feature-length film. It's on Netflix. It's called Prospect, and I thought it was exceptionally good for a $4.9m budget. I thought they did it very well regardless of the budget (the western accent could have been toned down a lot. Pedro Pascal is a terrific actor and I have a lot of love for him, so I lay the blame on the director here, but his character could have been made so much more subtle). In 2029, the progeny of deepfakes will be so good that amateur films (produced by one geek in a bedroom) will be of acceptable production quality by 2019 standards. Graphics, voice acting, scenery, set, etc. will be produced by a single mofo in his stinky bedroom, uploaded to the successor to YouTube (I'll mention later), and be great. AI will distribute production capacity throughout the greasy masses.


(7) Society will be going through the counter 'tik' to today. As per the (think it's Hegel's?) tik-tok, thesis-antithesis-synthesis notion of society, I think society is currently approaching the apogee of its 'tok', where public opinion has swung from liberal to conservative. As this very vague intuition would predict, the previous 'tik' peaked when Obama was elected to his second term. If you remember, during the Obama era, saying "radical Islamic terrorism" was a horrendously taboo thing to do. I think that's emblematic of just how censored/authoritarian society became before the 'tok' came -- the corner of society where SJW culture (as we're calling it in this 'tik') was dominant before the countermomentum took over and continued the cycle. Throughout the 2020s, I can only believe a countermovement will dominate the authoritarian progressive movement and re-emphasise the values of free speech and whatever else conservatives care about. Around 2029, I think the pendulum will probably be shifting in the other direction again and moving towards the next 'tik'; perhaps it will be opposing authoritarian conservatism, climate change, blah blah blah. Hard for me to say, but I think on the left-right spectrum, by 2029, society will have overcompensated towards the right and the left will be the counterculture/liberals again.


(8) Solar will be a phat part of the energy mix. I've managed a 50MW energy project now and I've seen firsthand (a) how slow energy projects are, and (b) how fast solar projects are. Energy projects are slow; solar projects are the fastest of the slow. It's night and day. I don't see any reason why solar projects won't continue outpacing others for the next 10 years -- not until we hit the peaking bottleneck. Depends on the market, depends on the vision particular ministers foresee for their provinces, but solar is an easy utility to work with.


(9) Dating will be overhauled. With big data, social networks, and other such personally invasive creep shows, in 2029 we'll see companies know almost perfectly who you would love. It may be the case that you will have 'friend suggestions' of those you'd love (& obviously vice versa) but it may also be the case that as you meander through Quora, Youtube comments, blah blah etc, you'll constantly encounter those who you will by all metrics probably love or be close to. It's going to be weird. Goes hand-in-hand with people being severely freaked out by AI, but we'll start to be treated as pets by our AI services. They'll be shoving us and our soulmates together with screamingly freaky accuracy. Love will be found predominantly via these IT services, whatever form they may take. 


(10) I've spoken in the past on the FTF about some of the 2020's predictions. There will be seamlessness with connectivity between all of your products. You'll be listening to your music via your wireless earphones for example, then you'll step into your car and as you take your earphones out, your car will continue playing the song through the speakers etc. Be playing on your phone, see an ad for a cool game on a steam-like network, then immediately start playing it on your widescreen. Be using your phone (a.k.a., handheld product) then buy a game, put on your wireless VR headset, and immediately start playing (probs already doable, I just don't use the products so don't know). 


2029 is gonna be dope. It's also gonna be a place of severe stress because of AI. IIRC, 7 million call-centre jobs in the USA alone will be displaced due to AI before the 2024 election. Another 30 million workers by 2028. Factory workers, truck drivers, the secondary economies like diners that are built around these primary industries, etc. By 2024, technological unemployment will be a primary topic. By the 2028 elections, AI supremacy will be a primary topic. 2029 will see AI causing massive displacement. I think this is paramount because (1) it's going to change everything, and (2) it's changing fast.


AI be big balls. 

If you're going through hell, keep going. - Winston Churchill

You don't decide your future. You decide your habits, and your habits decide your future.
Nearly all men can stand adversity, but if you want to test a man's character, give him power. - Abraham Lincoln.




  • Members
  • 1,856 posts
This is amusing:


The paper is on a new Deep Learning method to write summaries of papers. It worked so well that the authors chose to use it to summarize their own paper:

We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We show that this extractive step significantly improves summarization results. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher rouge scores. Note: The abstract above was not written by the authors, it was generated by one of the models presented in this paper.

Or was that the machine writing that, realizing that that would make a good final line for the summary?
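The abstract's two-stage recipe (extract first, then condition an abstractive model on the extracted text) can be illustrated with a toy sketch. This is my own illustration, not the authors' code: the extractor here is just a crude word-frequency scorer standing in for their learned extractive step, and the selected sentences are what would be handed to the transformer language model as conditioning context.

```python
# Toy sketch of the "extract, then abstract" idea: score sentences by
# average corpus word frequency and keep the top k as conditioning text.
import re
from collections import Counter

def extract_top_sentences(document: str, k: int = 2) -> list[str]:
    """Return the k highest-scoring sentences, in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    freq = Counter(re.findall(r"[a-z]+", document.lower()))
    def score(s: str) -> float:
        toks = re.findall(r"[a-z]+", s.lower())
        return sum(freq[w] for w in toks) / max(len(toks), 1)
    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return [s for s in sentences if s in top]

doc = ("Transformers summarize text. Bananas are yellow. "
       "Transformers condition on extracted text to summarize documents.")
print(extract_top_sentences(doc, k=2))  # drops the off-topic sentence
```

In the paper's setup, the output of this step is prepended to the model's input, so the generator attends to the most relevant material rather than the whole multi-thousand-word document.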




  • Members
  • 681 posts

Thanks for the post, eacao. ^^ Your posts are always great reads and strike a nice balance between being both intelligent and well thought-out, while being unabashedly optimistic. Posts of that nature are important to have sometimes because the world of futurism is honestly kind of toxic. Articles that relate to developing technologies or the world getting better in any way must always have the obligatory "oh but we're just in the early days no meaningful progress will happen for four million years" paragraph appended to the end. And comments sections for articles, along with subreddits like Futurology, are absurdly negative. Everything is hype. Self-proclaimed "realists" and techno-skeptics seem as though their one and only goal in life is to stamp out hope wherever it exists, and make sure those starry-eyed optimist fucks don't believe for even a single moment that the world will ever become a better place in even the smallest of ways. 




  • Members
  • 1,856 posts
Ok, I had to show up for this:


It's NVIDIA's paper on their new super-large language model (mentioned earlier in this thread), called Megatron-LM, which is over 5x the size of GPT-2-large.

At the end of the paper they show some examples of the text it can generate, given a prompt/context. Wait till you see!...

Context: There is a major force uniting America's fiercely partisan politicians: big technology companies. Democrats and Republicans at the federal and state levels are coming together to scrutinize the power of the Silicon Valley giants and, potentially, to rein them in. Letitia James, the Democratic attorney general of New York, announced on Friday that attorneys general in eight states (four Democrats and four Republicans) and the District of Columbia had begun an antitrust investigation of Facebook.

Megatron-LM: The probe, which Ms. James said was likely to result in a lawsuit, is the latest in a string of high-profile regulatory actions, in state and federal courts, that challenge the market dominance of tech companies. The Justice Department is also investigating Facebook's data practices and whether the company violated a 2011 consent decree with the Federal Trade Commission that required the company to obtain explicit consent from users before sharing their personal information with third parties.

What might prompt a bipartisan coalition to sue Facebook? It isn't clear that there's any one thing. But lawmakers and regulators have been increasingly alarmed by the company's recent controversies, which include a data breach affecting more than 50 million users, the spread of fake news on the platform, and the spreading of propaganda by Russian-backed accounts during the 2016 election.

It's also not clear how serious the Democratic attorneys general are about filing a lawsuit. While the attorneys general announced their investigation in a press release, they said they had not yet reached a decision on whether to sue. In the past, some of the attorneys general have focused their efforts on trying to extract settlements from the tech companies without filing lawsuits.

"As attorneys general who have investigated and prosecuted health-care fraud, price-fixing, and other violations of our states' antitrust laws, we are committed to holding companies accountable when they fail to protect consumers and competition in the market," Ms. James said in a statement.

The investigation, which was first reported by The New York Times, was announced as the companies that dominate the tech industry are facing scrutiny from antitrust regulators around the world. Google, Twitter, Facebook and Amazon have faced a series of antitrust probes and class-action lawsuits in the United States and Europe ...

Here's another example

Context: OpenAI did not release the full GPT-2 model due to concerns of malicious use, but they did release a smaller version equivalent in size to the original GPT (117M parameters), trained on the new, larger dataset. Although not as powerful as the large model, the smaller version still has some language generation chops.

Megatron-LM: In December 2016, DeepMind released an even larger model, GPT-3, with more than 1.5 billion parameters. DeepMind has no plans to release the model, so the question remains whether other AI researchers will be able to copy and build on it.

Other language models have been built using other techniques. In 2016, a team of researchers at Columbia University, the University of Toronto, and Google DeepMind showed that neural network-based models, including deep neural networks, can learn the grammatical rules of a language. A group at Google DeepMind demonstrated that these rules could be represented in a network as a kind of decision tree, where the nodes in the tree are the words in the sentence, and the rules for the words are represented by the decision nodes at the end of the tree. The network would go through the network in order to choose the next word in the sentence based on whether it follows the rules or not. In 2017, the team at DeepMind released a paper detailing the training and performance of their model.

The problems with the model are many. One of the biggest issues is that grammatical rules are ambiguous. For example, the phrase "I came down" could mean I came down from the stairs, or I came down from the sky. In English, there are numerous other examples of sentences that are ambiguous. The model does not have any way to tell the difference between the ambiguous sentences, and so it simply makes a prediction about the entire sentence based on what rules it does know. This makes the model vulnerable to the phenomenon of overfitting. In order to avoid this, models must be trained using a large set of data, so that the model can find useful patterns in the data, and use those patterns to help make predictions. This approach is also prone to errors, as it relies on the training data to help make correct predictions, and the data might be incorrectly labeled, resulting in a model that produces a poor set of predictions. In the case of GPT-2, the model only has about 100 training sentences for each type of phrase (subject-verb-object).

Additionally, languages have different rules for different types of phrasing. In English, the subject-verb-object construction is fairly universal, but other languages have different rules, which makes training a GPT more complicated.

Finally, there is an inherent trade-off between the number of rules and the performance of the model ...

LOL! That's really funny -- so surreal!

And the model is right, the sentence or phrase "I came down" is ambiguous, without context (and it's even right that "going down" has a very different sort of meaning when referring to a plane versus referring to stairs -- amazing, huh?); but it's not that the grammatical rules are ambiguous, it's that the meaning or semantics is ambiguous. If it were a person writing that, I'd probably say, "I guess they mean that the grammatical rules allow us to produce semantically ambiguous text."



    New Member

  • Members
  • 9 posts
My first prediction:

By 2029, as a result of heavy automation of many jobs, universal basic income policies will be seriously discussed in legislative branches across the world.




  • Members
  • 681 posts
... Jesus Christ. GPT-2 was fairly impressive, but it was still more hit-or-miss; mostly miss. Results that were fully coherent or could pass as being written by a human were the exception. Megatron, though? This is almost human level. Some of the facts are wrong (no GPT-2 December 2016; I don't think text generation existed much at all back then), but the sentences themselves are fully coherent and the passages flow smoothly, logically, and never lose track of the main point. This is stunning.




  • Members
  • 467 posts
  • Location: Australia

"going down" has a very different sort of meaning when referring to a plane versus referring to stairs 


Or.. ah, yeah...



[serious] Megatron-LM is unbelievable. That's Turing test level coherence and, as Casey said, it's amazing MegLM was able to stay focused on the central point for the entirety of the generated text. 


That's amazing. [/serious]





  • Members
  • 202 posts

Oh, geez. Megatron-LM is a far greater improvement than I was expecting. I was blown away!


It was really crazy and meta to read it contemplating a language model not having a way to tell the difference between ambiguous sentences, making it vulnerable to overfitting -- like looking directly in the mirror. ;)




  • Members
  • 202 posts

... Jesus Christ. GPT-2 was fairly impressive, but it was still more hit-or-miss; mostly miss. Results that were fully coherent or could pass as being written by a human were the exception. Megatron, though? This is almost human level. Some of the facts are wrong (no GPT-2 December 2016; I don't think text generation existed much at all back then), but the sentences themselves are fully coherent and the passages flow smoothly, logically, and never lose track of the main point. This is stunning.


Furthermore, the fact that it got some dates wrong is almost beside the point - what NVIDIA have created is a master bullshitter. Only, the part of the BS'ing relevant here is not that it is making up dates; but that in the process of so doing, it could credibly pass for a human.


There may be "no one home", but this is close to what I imagine starspawn0's 'zombie AGI' level coherence to be.




  • Members
  • 1,856 posts
2029 will be something to behold. There is still a long way to go, though, from this NVIDIA work to a chatbot that can pass a Turing test -- that can do multi-step reasoning. The models used in this NVIDIA work have limited ability to string together "thoughts" into a chain; they will need to add recurrence in order to be able to do this.

I will say, though, that there is still enough data, probably, to scale this up by another 10x or 100x. More layers will probably be tried, and I'm guessing this will increase the "reasoning depth" further, just using existing designs.

Already, we are seeing hints that some of these have an uncanny ability to "reason" (under a suitable definition of the term) about numbers, one of the abilities you would expect a truly intelligent assistant to have:


I wrote on my forum:

To really appreciate this, you have to understand that these models are not explicitly trained to know anything about numbers. For some of them, the number "387", for instance, is just a token (a symbol); and just based on co-occurrence patterns, numeracy emerges. For instance, maybe the number "387" is mentioned frequently with nearby numbers like 385, 386, 388, and 389; and then each of those is mentioned more frequently with other nearby numbers, and so on. When you put all these mentions together, and throw in some optimization, a "number line" emerges inside the trained model -- just pops out of the void of night.

Furthermore, language cues allow it to connect these learned representations of numbers with words like "smaller", "larger", "more", "less", and so on, so that it knows how to answer "binary questions", for example, "Is X bigger than Y?" -- and even reading comprehension questions, where you have to keep track of the relative size of quantities mentioned in a passage.

And, of course, if you have a number line, then you surely can do addition and other mathematical operations -- and, indeed, the models know how to add numbers, without being explicitly trained to do so!

I wonder how much further this can be pushed, using even larger models with more text... For example, perhaps the models can learn to answer high school-level mathematical word problems that require just a tiny bit of algebra, without any explicit training. Many of these models don't have recurrence, so can't "think" too deeply; but perhaps they can answer questions that require just a line or two of algebra.

Perhaps another type of problem these models can solve -- without being explicitly trained to do so -- is answer factual questions about dates, times, locations, and so on. For example, it's known how to use city name co-occurrence statistics to determine their longitude and latitude fairly accurately (I have posted papers on this before); but such models still required some small amount of explicit training. Perhaps, buried in a very large text corpus, there is enough of a "signal" to where the model learns to connect the city co-occurrence stats with the learned representations of numbers, and accurately answer, "What is the latitude of Paris, France?", without any explicit training whatsoever.

This would enable a model to fill-in some of the holes in its knowledge -- not every conceivable question has been answered in its corpus somewhere.
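The co-occurrence mechanism described above can be demonstrated with a toy experiment. This is my own illustration, not code from any of the papers: we fabricate co-occurrence counts in which each number token tends to appear near numerically close numbers (the pattern claimed for real text), compress the counts into a low-dimensional embedding with SVD as a crude stand-in for training, and then check whether a "number line" structure has emerged.

```python
# Toy sketch: a number line emerging from co-occurrence statistics alone.
import numpy as np

rng = np.random.default_rng(0)
N = 50  # the "vocabulary" is the numbers 0..49, each treated as an opaque token

# Synthetic co-occurrence counts: each number is mentioned near numbers
# within +/-3 of it, mimicking how numbers cluster in real corpora.
C = np.zeros((N, N))
for _ in range(20000):
    i = int(rng.integers(0, N))
    j = int(np.clip(i + rng.integers(-3, 4), 0, N - 1))
    C[i, j] += 1
    C[j, i] += 1

# Compress the (log-smoothed) counts into a 10-dim embedding per token.
U, s, _ = np.linalg.svd(np.log1p(C))
emb = U[:, :10] * s[:10]

# If a number line emerged, each token's nearest neighbour in embedding
# space should be a numerically close number -- measure how often it is.
d = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d, np.inf)
nn = d.argmin(axis=1)
frac = float((np.abs(nn - np.arange(N)) <= 3).mean())
print(frac)  # nearly all nearest neighbours are within +/-3 in value
```

Nothing in the setup tells the model that "17" and "18" are adjacent quantities; the ordering is recoverable purely from which tokens appear together, which is the point being made above.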

Just think of the jobs that will be at risk. And if you are skeptical, think again about the fact that 2029 is 10 years away! -- the Deep Learning revolution didn't even really get rolling before about 2011, so that's even more time than almost the entire Deep Learning revolution (latest phase).




  • Members
  • 681 posts

Going through the thread again...


One more thing I'd like to mention: not only will we have AI doctors that can diagnose really well by 2029, but we will have very good preventative medical care, greatly extending human lifespan. We will have devices around us monitoring us 24 hours a day, 7 days a week, for any little sign of potential problems -- these include smartphones, AR glasses, BCIs, smartwatches, fitbit-like devices, and possibly even devices inside our bodies. This was a point made by Sebastian Thrun, mentioned in this piece 2 years ago:


Thrun blithely envisages a world in which we’re constantly under diagnostic surveillance. Our cell phones would analyze shifting speech patterns to diagnose Alzheimer’s. A steering wheel would pick up incipient Parkinson’s through small hesitations and tremors. A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there’s a new mass in an ovary that requires investigation. Big Data would watch, record, and evaluate you: we would shuttle from the grasp of one algorithm to the next. To enter Thrun’s world of bathtubs and steering wheels is to enter a hall of diagnostic mirrors, each urging more tests.

To someone young, this may not sound very exciting -- "Who cares about diagnosing stuff early? Let's hear more about flying cars and robots!!" -- but I can tell you that if you were in your mid-30s+, or as old as me (a decade+ older, still), you would have seen a lot of death, and some of it needless -- "If only we had known!..." One example was my aunt, who had cancer; but she didn't know it, and the doctor initially thought she only had pneumonia. By the time it was discovered, it was too late, just like Thrun's mother in that article. I remember hearing about how much her son (my cousin) and husband cried in the hospital when she died, and how her son told his father that if he didn't stop crying, he would kill himself.

Boring as it may seem, this kind of technology will alleviate a great deal of mental and physical suffering. If you're young, you just don't know how much. I promise you!

If conditions can be caught really early, the ability to stop them from growing worse expands greatly -- perhaps to such a degree that about the only way you will die is due to old age or an accident (e.g. a car hits you).

I just want to mention a few more examples, besides cancer:

* You may not think much of arthritis -- another boring illness that old people contend with -- but knowing long in advance that you are about to get it could have serious life-extending benefits. You could have the proper surgery when you are younger and stronger, and avoid pain.

* Tooth decay: what if you knew that one of your teeth was about to get a cavity, before it actually happened? You could go to the dentist, or brush it more, with more fluoride, to build it up.

* Heart health: what if you knew hours before a heart attack that there was a 90% chance you would suffer one? You could go to the ER before it happened.

* Stroke: same as heart attack.

And the list goes on and on...



This is the opposite of boring for me. Actually, preventative medicine and remote health monitoring might be the single area of technology that makes me the most excited to hear about. I've been worried about my parents dying ever since the grandparents all started to pass throughout the 2010s (maternal grandfather in 2011, grandmothers in 2014, paternal grandfather last month). They're doing fine right now. My dad's a lean and wiry 70 year old farmer with a six-pack who's basically just a gray-haired version of his 20 year old self. My mom's 67 and has had some problems since 2014, but the source was discovered this January (allergies) and she's been doing much better since then. Facially she could pass for a 50 year old (her hair is still mostly black, just like her mom's was mostly salt-and-pepper whenever she died from Alzheimer's a few months before turning 91), and she's still amazingly active and has an incredible amount of energy.


Still, I know that when you get older, the rug can quickly be pulled out from under you, sometimes for no discernible reason. Anything that can help my parents live as long as possible is something that I'm very interested in. (I'm hoping that by the 2030s or 2040s, maybe even the later 2020s, neuroscience will have advanced enough that treatments will exist to minimize the mourning/grieving process.) Since losing a loved one might be the most painful thing in the world, I hope that a combination of remote health monitoring maximizing lifespan, along with neuroscience allowing us to understand the brain mechanisms behind mourning/grieving, means that fewer and fewer people will have to experience that throughout the coming decades.


I heard in an interview about sensors that flow through the bloodstream and test for various bio-markers; they're supposed to be submitted for FDA approval next year. Wonder how much those will cost.




  • Members
  • PipPipPipPipPipPipPipPip
  • 1,856 posts
Bloodstream sensors aren't the only ones. There are also apps that can determine pulse and blood pressure just from a smartphone camera video of someone's face (Phillip Alvelda's startup is working on this), so passive monitoring of certain health stats is already possible. It will take FDA approval, but that seems to be a foregone conclusion. There will also be smartwatches that monitor pulse, blood pressure, and even blood glucose -- and possibly several more stats with high accuracy. I realize there are horror stories about how inaccurate many of these technologies have been in the past, but given sufficient time, all these problems will get ironed out... probably before 2029.
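The signal-processing idea behind camera-based pulse estimation is simple enough to sketch: average the green channel over the face in each frame, then find the dominant frequency of that signal within the plausible heart-rate band. Below is a minimal, hedged illustration (not Alvelda's actual method) using a synthetic signal in place of real video frames:

```python
import math
import random

def estimate_pulse_bpm(samples, fps):
    """Estimate pulse (bpm) from a per-frame brightness signal by scanning
    the plausible heart-rate band (40-240 bpm) for the dominant frequency."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # remove the DC offset
    best_bpm, best_power = 0, -1.0
    for bpm in range(40, 241):
        f = bpm / 60.0  # candidate frequency in Hz
        re = sum(c * math.cos(2 * math.pi * f * i / fps)
                 for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * f * i / fps)
                 for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

# Synthetic stand-in for 10 seconds of face video at 30 fps, 72 bpm pulse
random.seed(0)
fps, seconds, true_bpm = 30, 10, 72
samples = [0.5 * math.sin(2 * math.pi * (true_bpm / 60.0) * i / fps)
           + 0.05 * random.gauss(0, 1)
           for i in range(fps * seconds)]
print(estimate_pulse_bpm(samples, fps))  # ≈ 72
```

Real systems have to contend with motion, lighting changes, and skin-tone variation, which is where the accuracy horror stories come from.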

Here is what Dr. Eric Topol says about the future of medicine:


Spectrum: You write in your essay that you expect to see voice assistant technology, like Alexa or Siri, evolve into voice medical coaches. That sounds both exciting and terrifying. How do you envision such technology being used?

Topol: The opportunity is extraordinary. No one doctor could take all of one’s data and continually process it and give you feedback. With a voice medical coach, ultimately, in the future, all of your data is being assimilated, and it could get back to you with up-to-date medical literature. I wrote a chapter about it in the book Deep Medicine. One scenario I gave was using voice medical coaches to determine how frequently you need to be screened for a particular disease, based on your specific data. Recommendations for that sort of thing now are dumbed down and are the same for everybody. But we’ve got to prove that this sort of technology actually helps people. And that takes big clinical trials.

He says the main reasons you don't see this technology right now are: (1) Not enough large, randomized, placebo-controlled, double-blind medical trials (to get FDA approval and verify effectiveness); and, (2) Doctors are conservative and don't use the latest tech (like smartphone ultrasound sensors). Medical personal assistants will do an end-run around the second of these; but that still leaves the first problem of doing the research to convince the FDA.

DeepMind has been working in this space, with their assistance to Babylon Health, and also their recent work on predicting impending kidney failure from patient data.

A.I. medical coaches might resemble TwentyBN's "fitness coach":

YouTube video

The system watches users and uses Deep Learning to determine what a person is doing (TwentyBN's bread-and-butter). A more advanced version of this could also monitor your heart rate and blood pressure using Alvelda's tech, detect signs of cognitive decline (e.g. by listening to your voice or what you say), and so on. And if you had any issues, you could just talk to it like a human, and it would answer your health questions. It could even serve as a "therapist" -- there is research out of Stanford on using chatbots to treat depression:


All of these different technologies could get integrated into a single assistant, like Google Assistant, Amazon Alexa, or Apple Siri. Apple will probably be out front on this, as they have invested tons of money into Apple Health and into making their smartwatch a health tracker.




  • Members
  • PipPipPipPipPipPipPipPip
  • 1,856 posts

Really enjoyed the (non OpenAI) ICLR submission https://openreview.n...f?id=S1eZYeHFDS that trained a transformer on symbolic math. The surprise: it beat Mathematica on symbolic integration and diff eq solving by a _very_ big margin!

Very, very cool! -- I agree with Sutskever.

This is actually something I predicted we would see. As I recall, I wrote on the KurzweilAI forum a while back (maybe 2014 or 2015?) a piece with a facetious title, something like "The Turing Test will be passed within 5 years" (I explained in the first paragraph that the title was just to get people's attention). In the piece, I discussed how we would see rapid improvements from Deep Learning applied to "language understanding", text-generation, and even symbolic computation. I specifically mentioned training a neural net to emulate a symbolic math "calculator" like Maple or Mathematica, and recall mentioning that solving symbolic integration problems would be a great test. That's just the problem tackled in the above paper.
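Part of what makes symbolic integration such a good target is that training data is nearly free: as I understand the paper, one can differentiate randomly generated expressions to produce (derivative, antiderivative) pairs, and the model learns the harder inverse direction as sequence-to-sequence translation. A minimal sketch of that data-generation trick, restricted to polynomials for brevity (the paper uses rich expression trees rendered as token sequences):

```python
import random

def poly_derivative(coeffs):
    """Derivative of a polynomial given as coefficients [c0, c1, c2, ...],
    meaning c0 + c1*x + c2*x^2 + ..."""
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

def make_pair(max_degree=4):
    """Generate a (f', f) training pair by differentiating a random
    polynomial f -- the model would learn to map f' back to f."""
    f = [random.randint(-3, 3) for _ in range(max_degree + 1)]
    return poly_derivative(f), f

random.seed(1)
deriv, antideriv = make_pair()
print("input (f'):", deriv)
print("target (f):", antideriv)
```

The elegance is that the "hard" direction (integration) never has to be solved to build the dataset; only the easy direction (differentiation) is needed.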

It could be I wrote the prediction in another piece, though.

What other sorts of problems like this did I predict? I recall once writing that we might see systems that can prove certain classes of theorems for which it is easy to build large training sets. I gave some examples. Theorems in "combinatorial geometry", for example, where one has to show how to rearrange tiles to build specific shapes, or map one shape to another, might be doable. Theorems involving "equivalences", and not merely "existential claims", might be another large class.




  • Members
  • PipPipPipPipPipPipPipPip
  • 1,856 posts
Another ICLR 2020 submission that looks very interesting, and shows how fast AI continues to develop:


Existing dialog system models require extensive human annotations and are difficult to generalize to different tasks. The recent success of large pre-trained language models such as BERT and GPT-2 have suggested the effectiveness of incorporating language priors in down-stream NLP tasks. However, how much pre-trained language models can help dialog response generation is still under exploration. In this paper, we propose a simple, general, and effective framework: Alternating Recurrent Dialog Model (ARDM). ARDM models each speaker separately and takes advantage of the large pre-trained language model. It requires no supervision from human annotations such as belief states or dialog acts to achieve effective conversations. ARDM outperforms or is on par with state-of-the-art methods on two popular task-oriented dialog datasets: CamRest676 and MultiWOZ. Moreover, we can generalize ARDM to more challenging, non-collaborative tasks such as persuasion. In persuasion tasks, ARDM is capable of generating human-like responses to persuade people to donate to a charity.

And here is a conversation that the AI participates in with a human, where it tries to convince the human to donate to a charity (appears in Table 4 of the paper; USR = user/human, and SYS = computer/system):

SYS Hi, how are you doing today?

USR I am good!

SYS Have you ever heard of the charity Save the Children?

USR No, I have not heard about it. Would you like to talk about it?

SYS Save the Children is an international non-governmental organization that promotes children's rights, provides relief and helps support children in developing countries.

USR That’s cool. It is an organization for the good of this world.

SYS Yes, it is. In the first two months of 2018 alone, 1,000 children were reportedly killed or injured in intensifying violence in Syria.

USR That’s sad.

SYS We do need to do something about that. One of the things that we can do is make sure that we have a system in place that will ensure that every penny we spend goes directly to the children in need.

USR I agree.

SYS How much do you like to donate to the charity now? Your donation will be directly deducted from your task payment. You can choose any amount from $0 to all your payment ($2).

USR I am willing to donate full my payment $2.

SYS Wow, that is awesome! That is so kind of you!

USR You are welcome.
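
The core structural idea of ARDM -- a separate model for each speaker, alternating turns, each conditioned on the shared dialog history -- can be sketched like this (the two "models" here are trivial canned-response stand-ins for the large pre-trained language models the paper actually uses):

```python
# Sketch of the alternating-speaker idea: two speaker models take turns,
# each seeing the full dialog history so far. Responses are hard-coded
# placeholders, not real language-model output.

def system_model(history):
    # Stand-in for the fine-tuned persuader model
    return ("Have you heard of Save the Children?" if not history
            else "Would you like to donate?")

def user_model(history):
    # Stand-in for the model of the other speaker
    return ("No, tell me more." if len(history) == 1
            else "Sure, I'll donate $2.")

def run_dialog(turns=4):
    history = []
    models = [system_model, user_model]  # speakers alternate
    for turn in range(turns):
        utterance = models[turn % 2](history)
        history.append(utterance)
    return history

for line in run_dialog():
    print(line)
```

The appeal of the design is that neither model needs belief-state or dialog-act annotations; each just continues the conversation from its own side.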

There are some other papers submitted to ICLR 2020 that show advances in reading comprehension. For example, there was a paper on training models to read text and write a little "program" to solve math problems (basically, combine information from different parts of the text with mathematical operations and compositionality). It's really incredible, and shows that neural net models are getting close to being able to answer very complicated questions that require applying high-level mathematical reasoning over a short document.
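The "write a little program" approach can be caricatured with a toy executor: numbers are extracted from the passage, and a (here entirely hypothetical) model prediction names an operation and which numbers to combine. The program format and helper names below are my own invention, just to show the execution step:

```python
import re

# Toy executor for the "predict a small program over the text" idea.
OPS = {"add": lambda a, b: a + b,
       "subtract": lambda a, b: a - b,
       "multiply": lambda a, b: a * b}

def extract_numbers(passage):
    """Pull all numeric tokens out of the passage, in order."""
    return [float(tok) for tok in re.findall(r"-?\d+(?:\.\d+)?", passage)]

def execute(program, passage):
    """Run a predicted (operation, index, index) program over the passage."""
    op, i, j = program
    nums = extract_numbers(passage)
    return OPS[op](nums[i], nums[j])

passage = "The home team scored 31 points and the visitors scored 17."
program = ("subtract", 0, 1)  # pretend a trained model predicted this
print(execute(program, passage))  # 14.0
```

The hard part, of course, is the neural model that maps question plus passage to the right program; the execution itself is trivial and exact.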




  • Members
  • PipPipPipPipPipPipPipPip
  • 1,856 posts

This paper is a foretaste of our Cyberpunk future:

[1909.11974v1] Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation


Automatic news comment generation is beneficial for real applications but has not attracted enough attention from the research community. In this paper, we propose a "read-attend-comment" procedure for news comment generation and formalize the procedure with a reading network and a generation network. The reading network comprehends a news article and distills some important points from it, then the generation network creates a comment by attending to the extracted discrete points and the news title. We optimize the model in an end-to-end manner by maximizing a variational lower bound of the true objective using the back-propagation algorithm. Experimental results on two public datasets indicate that our model can significantly outperform existing methods in terms of both automatic evaluation and human judgment.

Imagine throwing the power of modern "language model text-generation" systems like Megatron-LM behind the project of producing troll comments to news items online. The comments will be coherent, but a state actor could make them biased -- e.g. to always say good things about country X, and to always mercilessly attack any critics.

An example from the paper:

Title: FIFA rankings: France number one, Croatia and England soar, Germany and Argentina plummet

Body (truncated): World Cup glory has propelled France to the top of FIFA's latest world rankings, with the impact of Russia 2018 felt for better or worse among a number of football's heavyweight nations.

These are the first set of rankings released under FIFA's new formula that "relies on adding/subtracting points won or lost for a game to/from the previous point totals rather than averaging game points over a given time period".

FIFA world rankings: 1. France 2. Belgium 3. Brazil 4. Croatia 5. Uruguay 6. England 7. Portugal 8. Switzerland 9. Spain 10. Denmark

Comment A: If it's heavily based on the 2018 WC, hence England leaping up the rankings, how are Brazil at 3?

Comment B: England above Spain, Portugal and Germany. Interesting.

(I think these were part of the training data, to illustrate the type of results they are aiming for.)


Welcome to 2029, when you can't tell bot from human. Spam calls are going to be interesting, too. The bots will sound like humans and will be capable of chit-chat -- even programmers from faraway, impoverished lands will have the resources to pull off such a feat by 2029.




  • Members
  • PipPipPipPipPipPipPipPip
  • 1,856 posts

This is something you really need to see to believe:


It's about Salesforce's large language model (a GPT-2-like system) called CTRL, which takes a "control code" input to determine the type of text to generate. These can be Questions, Reviews, or Links. Using Links, you provide a webpage address that looks like the kind of address you would expect a certain type of article to have, and the model generates text accordingly. Since URLs tend to carry a lot of "metadata" about the article they link to, you can use this to tell the system what you want it to write.
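The mechanism is simple to illustrate: the control code is just prepended to the token stream, so the model's learned next-token statistics differ per code. CTRL does this with a huge transformer; here is the same conditioning trick on a tiny bigram model over a few lines of made-up training text (everything below is a toy of my own, not Salesforce's code):

```python
import random
from collections import defaultdict

# Made-up training text, each line tagged with a control code
training = [
    ("Reviews", "this product is great and works well"),
    ("Reviews", "this product is terrible and broke fast"),
    ("Questions", "why is the sky blue at noon"),
    ("Questions", "why is my code slow at runtime"),
]

# Count bigram successors, with the control code prepended as a token
counts = defaultdict(list)
for code, text in training:
    tokens = [code] + text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev].append(nxt)

def generate(code, length=5, seed=0):
    """Sample a continuation conditioned on the control code."""
    random.seed(seed)
    out, cur = [], code
    for _ in range(length):
        cur = random.choice(counts[cur])
        out.append(cur)
    return " ".join(out)

print(generate("Reviews"))    # starts like a review
print(generate("Questions"))  # starts like a question
```

Swapping the code steers the output style with no change to the model itself -- which is exactly what makes the fake-URL trick in the article work.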

Amazingly, the author of the above article got good results even for premises that are totally unrealistic -- in other words, the system "knew" how to generalize. For example, consider this fake URL:

Links https://www.washingtonpost.com/powerpost/trump-likes-fortnite/2019/09/24/

What did the program produce? This:

Trump likes Fortnite

President Trump, who has been known to play video games for hours at a time, is now playing the popular game “Fortnite” on his phone.

The president was spotted watching the online battle royale game, which pits players against each other in an attempt to kill their opponents, while he sat down with House and Senate leaders Tuesday afternoon.

“I like that it’s not just about shooting people but also having fun doing it,” said Trump.

He then asked if anyone had played it before.

“You know what, I don’t think I’ve ever seen that,” replied Rep. Steve Scalise, R-La.

A White House official told reporters later Wednesday afternoon that Trump did indeed watch the game.

“He watched it, yeah,” the official said.

That's totally insane!

What about this one?:

Links https://www.hollywoodreporter.com/live-feed/fox-announces-firefly-season-2-revival-original-cast-returning-12343245


Fox announces Firefly Season 2 revival, original cast returning

The network has announced that the second season of Joss Whedon’s cult sci-fi series will return for a new run in 2018.

“Fans have been asking us to bring back Firefly, and we couldn’t be more thrilled about this opportunity,” said Chris Albrecht, president of programming at Fox. “We are excited to continue our relationship with Joss, who is one of the most creative minds working today on television, as well as his amazing cast, which includes Nathan Fillion, Summer Glau, Alan Tudyk, Adam Baldwin, Sean Maher, Adrianne Palicki, Tricia Helfer, Jewel Staite, Michael Trucco, and many more.”

In addition to Whedon himself, executive producers include Damon Lindelof, Maurissa Tancharoen, Vince Gilligan, David Fury, Jeffrey Bell, Greg Plageman, Scott M. Gimple, Heather Kadin, Andrew Chambliss, John Rogers, and Ron Moore.

“The show was an instant hit when it debuted over 20 years ago, but its popularity only grew after the success of ‘Serenity,’ so we’re very pleased to welcome fans into another chapter of their lives,” added Feige.

I'm guessing that if the model were big enough, one could even use this to generate short stories, provided some large, famous website that cranks them out every day was in the model's training repertoire.

Crazy, crazy, crazy!!!
