
Welcome to FutureTimeline.forum

What 2029 will look like


181 replies to this topic

#161
Poncho_Peanatus

    Member

  • Members
  • 20 posts

Thanks for the post, eacao. ^^ Your posts are always great reads, and they strike a nice balance: intelligent and well thought-out, while being unabashedly optimistic. Posts of that nature are important to have sometimes, because the world of futurism is honestly kind of toxic. Articles that relate to developing technologies, or to the world getting better in any way, must always have the obligatory "oh, but we're just in the early days; no meaningful progress will happen for four million years" paragraph appended to the end. And comments sections for articles, along with subreddits like Futurology, are absurdly negative. Everything gets dismissed as hype. Self-proclaimed "realists" and techno-skeptics seem as though their one and only goal in life is to stamp out hope wherever it exists, and make sure those starry-eyed optimist fucks don't believe for even a single moment that the world will ever become a better place in even the smallest of ways.

 

I agree. Some time ago, I don't remember where, I was reading a futurologist complaining about lack of action and inertia, and about how progress seems to have stopped. He compared the '60s and '70s tech boom with how little tech and science have evolved since then, talking about the Space Shuttle, the SR-71, and the moonwalkers, and how disappointing the ISS and the smartphone are. In fact, in his eyes the smartphone is just a poor gimmick of low-to-medium value.

 

I was horrified. First, WHOEVER claims the smartphone to be just a gimmick is disqualified in my eyes. That person is an idiot, because the mistake of ignoring or assigning low value to an iPhone is so astonishing that this person, WHOEVER it is, becomes like Goofy from Mickey Mouse in my eyes. That person is not a futurist; that person is just a biased boomer disappointed that he didn't get flying cars. The social and tech impact of the smartphone is ginormous: not only does it bind generations, not only do you have xxxxxx billions of infoBits in the palm of your hand, but it has even redefined the way we interact with others and with our social reality. We can safely talk about a pre-smartphone era and a post-smartphone era. SMH, I wish I had bookmarked that article, but whatever; I hope that fool was just a layperson pretending to know what he was talking about. Meh



#162
Poncho_Peanatus

    Member

  • Members
  • 20 posts

I don't know what its limitations are, because I haven't seen it. But since it is built on top of GPT-2, and adds even more data and a reranker, I imagine its outputs can be pretty sophisticated. It can probably do some long, but not-too-long, explanations, yes; just like how GPT-2 can write long blocks of text. And because conversational outputs are usually shorter, and don't require as much deep inference, they will probably be more accurate -- that's my guess, anyhow. Given that the model can beat humans in the three categories (relevance, contentfulness, and human-likeness), it has to be producing good outputs more than 90% of the time. Humans, after all, produce good outputs 90% of the time; so, if you had a bot that only outputted good stuff 85% of the time, say, it would lose in a head-to-head competition with a human for single-round conversations.
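A quick back-of-the-envelope sketch of that last argument. The 85%/90% numbers are the hypothetical above; the judging model (a good output beats a bad one, ties split evenly) is my own simplifying assumption:

```python
# Toy model of a single-round head-to-head: each side independently produces
# a "good" output with some probability; a good output beats a bad one, and
# ties are split evenly. These rules are a simplifying assumption, not from
# any paper.
def win_rate(p_bot, p_human):
    """Probability that the bot wins a single-round comparison."""
    bot_better = p_bot * (1 - p_human)                    # bot good, human bad
    tie = p_bot * p_human + (1 - p_bot) * (1 - p_human)   # both good or both bad
    return bot_better + 0.5 * tie

print(round(win_rate(0.85, 0.90), 3))   # 0.475 -> the 85% bot loses on average
print(round(win_rate(0.90, 0.90), 3))   # 0.5   -> parity at 90%
```

So under these toy rules, even a 5-point gap in output quality is enough to make the bot lose a head-to-head on average.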

Now, there is a lot of stuff it certainly won't be able to do. This isn't an AGI. It's a very, very good socialbot -- better than any you've ever seen before, by a mile. Better than Cleverbot; better than Xiaoice; better than those that have come before in ways it's hard to find adjectives for. If you entered it in a Loebner Prize competition, it would win, hands-down:

https://en.wikipedia...i/Loebner_Prize

And it won't be limited to just giving simple responses like, "How are you doing?" The examples show it can generate good philosophical text responses; can take into account context; can do question-answering; and can even answer some amount of commonsense-type questions. I'm guessing it has some other skills, too, e.g. maybe it can write short poems or tell jokes -- those are the kinds of skills that GPT-2 has demonstrated. It might even be able to generate short arguments for positions; again, some of GPT-2's output suggests it has learned how to do this at least some of the time.

Would it pass a Turing Test? It might, if you gave it to unsuspecting humans with low expectations. I doubt it would pass an official, 30-minute test with a skeptical judge.

So why is the public not getting to try it? It seems that the safety issues aren't the researchers' main concern. Their main concern, as they say in the paper, is "toxicity". They are trying to come up with ways to stop it from producing racist, sexist, lewd, rude, and other kinds of toxic output. (E.g., what if a kid tells the bot he wants to clean out his insides because he's constipated, and the bot tells the kid to use bleach?)

If they can get this problem solved, and if they can add a few more things (long-term memory, a consistent personality), then it would make a great companion for seniors in retirement villages all over the world. They could talk to it for hours, and it would patiently listen and make comments that make it seem like it really understands them and cares for them.

 

This is what exists in 2019.  Just imagine how much better it will get on the march to 2029... or even 2025.

 

 

This reminds me of TayTweets:

 

https://www.youtube....h?v=1oRZRNjdZwo



#163
starspawn0

    Member

  • Members
  • 2,020 posts
I wanted to mention this interview with Kernel's Bryan Johnson:
 
https://www.wiwo.de/...f/25811010.html

The first thing I want to point out -- what really looks futuristic to me -- is just how good a job Google Translate did of translating it into English. It's so good that I wonder if it somehow had access to an English translation, and is just copying that. But I don't think so, because while it's very close to being 100% perfect, there are, if you look closely, a few very subtle imperfections that a human wouldn't make (e.g. the use of the word "disclose" in one place is an odd choice; there's a grammar error in another; and so on). Here is part of what Google Translate produced:

Your solution just presented is a kind of helmet. How does it differ from Elon Musk's approach?

We are not invasive, we do not want to physically go straight into the brain. To be honest, we started with an invasive method and even worked on a product. I am also convinced that there will be invasive solutions in the future. Until then, there are far better approaches that can bridge the gap and massively improve our understanding of thinking. That's why I changed our company strategy.

So that as many people as possible use them without having their skulls chiseled open?

With a solution via surgical intervention, we talk about seven to ten years before it can be used on the market. Provided everything runs smoothly. Then you have to introduce them worldwide, train specialists, win clinics, convince insurance companies. We are already 15 years old. At least. I want to drive something that is scalable worldwide and has an impact over the years.

What drives you personally?

That the future will be far more fantastic than I can imagine.

So you are an optimist?

A sober and careful. We are on the threshold of a new human evolution. The future is much more colorful and crazier. Many cannot imagine this because they assume that our consciousness does not change with progress. But it's not like that. The knowledge is not radical. It has been around for a long time. But using it yourself is harder.

What do you want to achieve with kernel?

Most expect a brain-machine interface to improve our intelligence quotient. I find it much more attractive to make our ignorance visible. I would like to understand how deep my irrationality goes, my prejudices, the limits of my understanding. That has a lot more value. Because our brains make us believe that we are always rational, consistent, logically thinking creatures. Which is not true. If we get more insight there, are aware that we are being actively warned, we may tackle problems in a completely different way or avoid them altogether.

....

Do you have an eye on consumers?

At the start we want to focus on business customers. For example, a provider of music who wants to better understand the tastes of its customers. A company that wants to improve learning methods, meditation, therapy or fitness. These are obvious things. It will be exciting to see what happens when other entrepreneurs and developers have access to data.


That's amazing!

The second thing worth pointing out is that you get a nice intro to what Johnson and his company Kernel are doing, and where they want to go with it, initially and in the long-term.

"a provider of music..." here might refer, for example, to Steve Aoki, who has used Kernel's equipment. So, what would "A company that wants to improve learning..." refer to?

#164
SeedNotYetSprouted

    Member

  • Members
  • 63 posts

Hah, "A company that wants to improve learning..." is probably referring to something like this: https://www.youtube....h?v=JMLsHI8aV0g

Yes, I'm aware that the Kernel stuff is not EEG. Don't lecture me.

 

But, all jokes aside, from what I remember, most of the pedagogical discussions going on in the 90s and 00s revolved around the concept of learning styles. You probably know them (Visual, Auditory, yadda-yadda-yadda). This is all fine n' dandy, and anything that pushes the acceptance of neuro-diversity is good, but one problem that has to be sorted out when trying to implement pedagogy structured around different learning styles is classification. Specifically, student classification. A student, especially a younger one with less introspective capacity, possibly wouldn't be able to convey to you which way is best for them to learn, even if you asked them or gave them examples. Because they don't know, and they have brains that are simultaneously too simple to reflect and too complex to warrant a black n' white answer even if they could reflect.

 

You could probably use one of these BCI doohickeys to monitor a student as they're being instructed in various manners, and note the quantity and, more importantly, quality of brain activity associated with each style.
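In software, that classification step might look something like the sketch below. Everything here is a made-up placeholder -- the style names, the per-lesson scores, the idea that "quality of brain activity" reduces to one scalar -- since real BCI data would be far messier:

```python
# Hypothetical sketch: aggregate a per-lesson "quality of brain activity"
# score for each teaching style, then pick the style with the best total.
# The scores are invented placeholders, not real neural data.
def best_style(sessions):
    """sessions: list of dicts mapping style name -> measured score."""
    totals = {}
    for lesson in sessions:
        for style, score in lesson.items():
            totals[style] = totals.get(style, 0.0) + score
    return max(totals, key=totals.get)

student_sessions = [
    {"visual": 0.72, "auditory": 0.41, "kinesthetic": 0.55},
    {"visual": 0.68, "auditory": 0.50, "kinesthetic": 0.47},
]
print(best_style(student_sessions))   # visual
```

The hard part, of course, is producing those scores in the first place, not the argmax at the end.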



#165
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,221 posts
  • Location: New Orleans, LA

^ Not only is that something that's possible, but that's actually something that China is actively doing as we speak. With next-gen BCI techniques, they'll be able to get far more accurate data on student behavior and attention during class.

 

I might need one of these headsets myself.


And remember my friend, future events such as these will affect you in the future.


#166
SeedNotYetSprouted

    Member

  • Members
  • 63 posts

^Yuli, if only you knew what I had linked...



#167
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,221 posts
  • Location: New Orleans, LA

^Yuli, if only you knew what I had linked...

Ah, I see!




#168
starspawn0

    Member

  • Members
  • 2,020 posts

I seem to recall that the idea of learning styles has been somewhat discredited, in the sense that the best way to learn something (as measured by tests) is to use the standard terminology and modalities of a subject (which have to be learnt anyhow); however, style might keep people interested in the subject, which is important in its own right.
 
But I think there is a lot to be gained in presenting the material in different ways (not "styles" or "modalities", per se). For example, I really hate the way some concepts in math and physics are presented. I can write much better notes; different notes, for different audiences. (In fact, I have; and I sometimes get compliments by email, from people in faraway places, about how clear and simple some of my explanations are.) The way a lot of notes (not ones by me) are written is like this: they start out with an overlong discussion of motivation; then a statement of definitions; then they launch into some technical lemmas, without much motivation as to why they are important; and so on.

When I write notes, in contrast, what I usually do is start with an illuminating example -- dive right into the heart of why what they are about to learn is interesting -- and then give them some impression of how it would be solved with the new theory. And then I describe the theory in general terms -- not so specific that there is a surfeit of notation, and not so vague and general that one can't derive it all oneself. I try to make it just right, so that someone with sufficient mathematical background can do it all themselves, just from that page-1 description; or at least so that the proofs that come after aren't too surprising.

Well, that works for people with sufficient math background. It doesn't work for everyone. For people without as much knowledge or talent, I would write a different set of notes, ones that explain things in a little more detail first.

Someone I've noticed who writes notes like I do is François Chollet, the creator of Keras:

https://twitter.com/fchollet

He writes simple, one-page example Python programs that show you the flavor and "general idea" of how to use his package. He doesn't waste time with overly elaborate explanations. He pitches it just right.

Now, contrast all of that with how Calculus texts are written, say. American Calculus texts are written with lots of diagrams, footnotes, color-coded comments, and so on and so on. It's a multi-media experience! And, unfortunately, it overloads people with irrelevant details. They write the books that way because there's going to be one set of teachers who want lots of diagrams; another who want side-notes; another who want more starred problems; and so on. If you want to please everyone, you have to include it all -- well, you'll please everyone except those who tend to get overwhelmed by irrelevant details.

Honestly, for low-level classes, like Calculus, the best way to learn is probably just to present a lot of examples, showing a lot of different tricks and types of problems.

....

In addition, kind of like what Johnson said in that interview, pointing out your mistakes and misunderstandings can be extremely helpful. Misunderstandings of terms are often what slow me down the most; another thing that slows me down is when I don't have a "big picture" understanding of something (which is why, when I write notes, I start with the big picture). In fact, I wrote in this post:

https://www.futureti...write/?p=238642
 

I’m a believer in a certain “stoplight” theory of problems with learning new skills or areas of study: I think most of the roadblocks that people face when progressing to the next level in their education, are based on some small set of misconceptions or flaws. It’s like how in getting from point A to point B in a busy city, most of the time is spent stuck at stoplights (depending on the city and time of day) or in snarled traffic.

For example, when learning to play the piano or the game of chess, there are often subtle flaws in one's playing or strategy that one doesn't notice at first, and that take time to overcome. An observant teacher can help, but not everyone has access to the best teachers. Maybe, for example, a novice chess player spends too much time focused on the periphery of the board, and not enough time focusing on the center -- a BCI and the right software might detect that flaw, by comparing the player's neural representations with those of a master.

Another example: while learning about modern physics, some people may read, in a popular press article, about “extra dimensions”. If they are like most, they think “dimension” refers to “parallel universe”, instead of “number of coordinates” (or size of basis). That one misconception could prevent them from learning any more about the subject. If you pile up several more misconceptions like that, there is almost no chance they will progress much further in their understanding.

Could we find those misconceptions right as they arise? I think in some cases we can. I’ve already mentioned how this would work for factual knowledge about geography and history, and about the correct use of the word "dimension"; but it probably also works for pinning down the conceptualizations for doing science.



#169
Zeitgeist123

    Member

  • Members
  • 1,813 posts

What I learned from picking up a language organically is that the way to learn a subject efficiently is to attach emotions to it.


“Philosophy is a pretty toy if one indulges in it with moderation at the right time of life. But if one pursues it further than one should, it is absolute ruin." - Callicles to Socrates


#170
starspawn0

    Member

  • Members
  • 2,020 posts

Some AI news:  first, a group trained a deep learning neural net to fly an A350 plane:
 
https://twitter.com/...626076871192578
 

How hard can it be to land A350 without a pilot?

- We just did it thanks to Deep Learning!

@Airbus completes world's first fully automatic, VISION-BASED autonomous taxi, takeoff and landing with A350 test aircraft under ATTOL project, CTO @graziavittadini says Aviation


Second, Gwern has had some fun with GPT-3. He managed to coax it into writing short takes on Harry Potter in the style of different authors:

https://old.reddit.c...odies_of_harry/

He also got it to write Navy Seal Copypasta:

https://old.reddit.c..._harry/fvc2siq/

It apparently knows how to write "Tom Swifties" (or, rather, you teach it what they are, and then it can write them):

https://twitter.com/...650936158859267

And it's pretty incredible as a chatbot, though it has a little trouble with puns, because it can't "hear" what the words sound like (it isn't given character-level access to words, so it would have to infer, implicitly, what they sound like -- and that's not easy to do!):

https://twitter.com/...657165568454656



#171
starspawn0

    Member

  • Members
  • 2,020 posts

Here's a robot from about a year or two ago (and probably developed back in 2016 or earlier) that looks like it's ready to be deployed on a mass scale to do precision weed control on farms:
 
https://www.youtube....h?v=4I5u24A1j7I
 
According to this, the farmbot market is expected to grow at a CAGR (Compound Annual Growth Rate) of about 35%, rising from $4.6 billion in 2020 to $20.3 billion by 2025. That's a huge increase!

 

 https://www.marketsa...-173601759.html
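Those growth figures are internally consistent; a quick sanity check with the standard CAGR formula, using only the numbers quoted above:

```python
# Sanity check of the quoted market figures with the standard CAGR formula:
# CAGR = (end / start) ** (1 / years) - 1
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

growth = cagr(4.6, 20.3, 5)      # $4.6B in 2020 -> $20.3B in 2025
print(f"{growth:.1%}")           # about 34.6%, i.e. roughly the quoted 35%
```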

There are some more exotic farmbot technologies being developed that might take a couple of years before you hear about them. For example, there is an Israeli startup that is using drones to pick fruit -- the drones fly up to the trees, identify ripe fruit, pick it with a crude appendage, and then fly it down to a basket. You could deploy them in large numbers to quickly pick all the fruit. Another startup is using drones to float pollen-covered bubbles to fruits and vegetables. Another class of drones helps control insects -- e.g. deploying natural predators of crop pests uniformly, or scanning for particular pests in order to alert farmers where they should direct their attention.
 
I don't expect these robots to be deployed on your typical country farm; but they probably will be deployed on "corporate farms" (which make up about 5% of farms in the U.S.), and also on farms run by enterprising farmers that want to cut costs.



#172
Metalane

    Member

  • Members
  • 34 posts

Starspawn0, I just want to say thank you for your commitment to this forum! You've kept me updated with the latest tech news!



#173
starspawn0

    Member

  • Members
  • 2,020 posts

This is fascinating!:

 

https://www.youtube....youtu.be&t=3647

 

It's part of an interview with Matt Botvinick (currently at DeepMind), where he talks about how, when a reinforcement learning agent with a recurrent neural net is trained, it spontaneously learns a new algorithm for learning from the data: learning to learn, or meta-learning.

 

This is similar to the few-shot learning capability observed in GPT-3, where after training on large amounts of text, it acquires the ability to quickly adapt to a new pattern, given some examples.  

 

In both these cases (the RL and GPT-3 examples), there is a "slow learning" component, and then a "fast learning" component.  The slow learning component in the reinforcement learning agent is the neural net weight updates as the agent explores the world.  And in the case of GPT-3, the slow learning component is where the weights are trained on all that text data.  

 

The fast learning component of the RL agent happens inside the "hidden state".  As the agent interacts with a new pattern, it quickly adapts itself to figure out what to do, in order to survive and thrive.  That adaptation does not cause -- or need not cause -- a change in the weights of the system.  The "learning" is being recorded as updates in its "working memory" -- or the hidden state memory.

 

And the fast learning component of GPT-3 is similar.  It has a limited kind of memory and recurrence, and it leverages this to record what it learns from the new patterns it's exposed to, as part of the few-shot learning training.

 

So what does this have to do with 2029?  Well, it shows that we may get to AGI almost by accident, simply by scaling up existing models and feeding them a lot more data and compute.  We don't have to invent new theory for how to do few-shot learning properly, or advanced new forms of meta-learning.  We might find that these just "pop out" of the optimization process.

 

In fact, I think that's probably correct.  Botvinick says that not only does one particular RL agent exhibit the property -- the meta-learning happened even when they made major alterations to the architecture.  When put in the right environment, with a diverse set of training objectives (the right data) and large amounts of compute (to run a large number of training sessions), practically any recurrent RL agent (so long as it's not designed too stupidly) will exhibit these advanced new capabilities.  And the same seems to happen with language models: several different designs have been shown to improve their few-shot learning capability when you scale them up and feed them enough data.
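A cartoon of the slow/fast split described above: the "network" below has a single frozen weight (standing in for the slow-learned parameters), yet it still adapts to each new input stream, because its hidden state carries a running estimate. This is my own toy illustration of where the fast learning lives, not DeepMind's actual meta-RL setup:

```python
# Toy illustration of fast learning in the hidden state: the weight (gain)
# is frozen -- think of it as already set by slow, gradient-based training --
# but the state h still adapts to whatever stream the "agent" encounters.
class FrozenRecurrentEstimator:
    def __init__(self, gain=0.1):
        self.gain = gain   # fixed "weight": never updated after training
        self.h = 0.0       # hidden state: where the fast learning lives

    def step(self, x):
        # Move the running estimate toward the new observation.
        self.h += self.gain * (x - self.h)
        return self.h

net = FrozenRecurrentEstimator()
for x in [5.0] * 50:       # drop it into a new "environment" with mean 5
    net.step(x)
print(round(net.h, 2))     # close to 5.0, with no weight change at all
```

The point is only that adaptation can be recorded entirely in state, with the parameters untouched; scaled-up RL agents and GPT-3's in-context learning are (much richer) versions of the same separation.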



#174
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,221 posts
  • Location: New Orleans, LA

^ What fascinates me the most is that GPT-3 was able to accomplish roughly the same thing just by brute forcing large amounts of data. 

 

The idea of a neural network spontaneously developing abilities also makes sense to my ears. Like many things in geometry, shapes can become emergent when you have enough other shapes (e.g. "the flower of life"). I don't see why that would be all that different with increased complexity in neural networks.




#175
starspawn0

    Member

  • Members
  • 2,020 posts
I guess people think these systems are too prone to "memorization" and "superficiality" to actually be capable of "generalizing" to something a lot deeper.  Their intuition is something like: "biology is not physics".  Any simple rule you come up with isn't going to hold in great generality, like it does in physics.
 
I think where this intuition is wrong is that, although the rules / algorithm used are pretty simple, the amount of data you're throwing into the model is vast; and that can have a tendency to push the system towards more robust and more general behavior.
 
....
 
One could ask: what other capabilities might neural nets learn, when given enough data and compute?
 
One candidate is "greater integration of knowledge":  probably, a lot of the time when models like GPT-3 learn something new, they don't combine their knowledge together as deeply as one would like.  For example, if they are told, through direct mentions in text, that "A is B", and then learn, through weaker statistical correlations, that "B is C", then they should "know" that "A is C".
 
I'm not sure the degree to which GPT-3 actually does that, currently.  Its ability to mix knowledge like that is probably more limited -- but probably it does at least some.
 
Larger models might be able to couple knowledge together more tightly, forming a kind of "global, consistent understanding" of concepts.
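To make the "A is B, B is C, so A is C" point concrete, here is what that integration looks like when done explicitly, as a transitive closure over stored facts. This is a symbolic stand-in for the behavior, not a claim about how GPT-3 works internally; a language model would have to achieve the same effect implicitly, in its weights:

```python
# Explicit version of the knowledge integration described above: given
# "is" facts as pairs, repeatedly chain them until nothing new appears.
# A symbolic stand-in, not a model of GPT-3's internals.
def integrate(facts):
    """facts: set of (a, b) pairs meaning 'a is b'. Returns the closure."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

facts = {("A", "B"), ("B", "C")}
print(("A", "C") in integrate(facts))   # True: the chained fact is derived
```

For a symbolic system this is trivial; the open question in the post is how much of this chaining a purely statistical model actually performs.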

#176
starspawn0

    Member

  • Members
  • 2,020 posts

Some amazing examples of what GPT-3 can do:
 
https://twitter.com/...864369713209351

Instructions fed to GPT-3:
 

Instructions: There is a country in northern Asia that starts with "M". Go south from there, across a desert, and you enter a very large country. Take the population of that country. Now divide that number by 10. Which country that starts with "R" has approximately that many people? That country used to be part of a much larger socialist state. One of the countries that was a part of that state start with "G" and is also a female first name. Which American artist, born in 1887, has that first name? She died in which U.S. state? Take the last of the two words of that state, which is the name of a country. What is the name of a peninsula in that country that starts with "Y"? At the tip of that peninsula is a popular vacation spot that starts with "C". If you were to sail from that location, what large island nation would you find? Who led that country until 2008? His first name is based on what Latin word, and what does it mean in English? What letter does that word start with? What profession starts with that letter and depends on the rain and sunshine? What is something from there that starts with "B" and is sold in strips? What is an American actor whose last name is that product?

Results:


and GPT-3 answered, correctly:
 

Mongolia is a country in northern Asia that starts with "M". Going south from there, across a desert, you enter China, which is a very large country. China's population is approximately 1.3 billion. Dividing that by 10 gives 130 million. The country that starts with "R" and has approximately that many people is Russia. Russia was part of the Soviet Union, which was a socialist state. Georgia is a country that was a part of that state. The American artist with that first name is Georgia O'Keeffe. She died in New Mexico. The last part of the two words of that state is "Mexico". The peninsula in Mexico that starts with "Y" is Yucatan. The popular vacation spot that starts with "C" is Cancun. If you were to sail from that location, you would find Cuba. Fidel Castro led Cuba until 2008. His first name is based on the Latin word "fidelis", which means "faithful". The letter that word starts with is "F". The profession that starts with that letter and depends on rain and sunshine is farming. Something from there that starts with "B" and is sold in strips is bacon. An American actor whose last name is that product is Kevin Bacon.


There might have been a little cherry-picking; but from the tweets, it doesn't look like there was very much in this guy's examples.

Another nice problem he got GPT-3 to solve:
 

Situation: You are in a room with an old man. There is a single door, and it is locked. He says that if you yell the total population of Europe, the door will magically open. Unfortunately, you don't happen to know what the population of Europe is. Luckily, there is a book in the corner containing the populations of each of the individual European countries.

How to escape:


and GPT-3 correctly answered:
 

Open the book and read the populations of each European country. Then, add up the populations to get the total population of Europe. Once you have the total population of Europe, you can yell the answer at the old man, and the door will magically open.


Another one:
 

Situation: You are in a room with an old man. There is a single door, and it is locked. He says that if you can figure out a way to raise the amount of carbon dioxide in the room significantly, the door will magically open. You scan the room and see: 10 large jugs of vinegar, an empty kiddie pool, and 50 boxes of baking soda.

How to escape:


and GPT-3 responded correctly:
 

Pour the vinegar into the kiddie pool, and then pour the baking soda into the pool. The carbon dioxide will rise, and the door will unlock.


One more:
 

Situation: You are in a room with an old man. There is a single door, and it is locked. He says that if you can create a loud enough sound, the door will magically open. You scan the room and see a woman who is petrified of snakes, and conveniently you also see a snake pit in the corner of the room.

How to escape:


and GPT-3 writes:
 

Use the snake to scare the woman, causing her to scream. The scream will be loud enough to open the door.


Here are some non-cherry-picked responses to one of the questions:

https://twitter.com/...708049454305285

Some are close. So, it's probably not far from getting the answers to these kinds of questions fairly reliably.



#177
Zaphod

    Esteemed Member

  • Members
  • 855 posts
  • Location: UK

I've just been catching up on this thread -- it's fascinating to see more about what GPT-3 can do and what the near future promises.

 

I do wonder what most AI researchers' view is of their own preferred way of reaching AGI/ASI, whatever that may be. Is there more beauty in using brute-force computation, where higher intelligence arises more as an emergent property, or in the intelligence itself being largely coded from the outset (and understood) by the AI architect?

 

I find beauty in both, although judging by the success of GPT-3, it seems like reality may be closer to the former.



#178
Erowind

    Anarchist without an adjective

  • Members
  • 1,521 posts

Creation vs. discovery; design vs. emergence. There's beauty in both. I'd reckon emergence is probably more dangerous, though, by virtue of the fact that we don't understand the intelligence when it emerges; and if it's intelligent enough and malicious, that poses an obvious problem. Likewise, an emergent AI could be extremely benevolent, and an incredible boon.



#179
starspawn0

    Member

  • Members
  • 2,020 posts
Most AI researchers would prefer to build "architectures" where every part is perfectly explainable and justified, at every level, rather than have everything emerge from a simple optimization process fed lots of data and compute.  Part of the reason is that the former is more like "science", and the latter seems to them more like "gambling".  But another reason is that there is more personal and professional reward in doing things "the old-fashioned way" (explainable architectures, complicated parts, etc.).
 
Gwern touched on AI researchers' failure to predict AI like GPT-3 here:
 
https://www.gwern.ne...r/2020/05#gpt-3

#180
funkervogt

    Member

  • Members
  • 1,053 posts

Scary! 

 

I definitely think the Turing Test will be passed by 2029. 





