AI & Robotics News and Discussions

Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

And remember my friend, future events such as these will affect you in the future
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

The days and sometimes weeks it took to train AIs only a few years ago were a big reason behind the launch of billions of dollars' worth of new computing startups over the last few years—including Cerebras Systems, Graphcore, Habana Labs, and SambaNova Systems. In addition, Google, Intel, Nvidia and other established companies made similar amounts of internal investment (and sometimes acquisitions). With the newest edition of the MLPerf training benchmark results, there’s clear evidence that the money was worth it.

The gains to AI training performance since MLPerf benchmarks began “managed to dramatically outstrip Moore’s Law,” says David Kanter, executive director of the MLPerf parent organization MLCommons. The increase in transistor density would account for a little more than doubling of performance between the early version of the MLPerf benchmarks and those from June 2021. But improvements to software as well as processor and computer architecture produced a 6.8-11-fold speedup for the best benchmark results. In the newest tests, called version 1.1, the best results improved by up to 2.3 times over those from June.
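To put that Moore's-Law comparison in concrete terms, here is a quick back-of-the-envelope calculation (the ~2.5-year window and the 2-year doubling cadence are my assumptions for illustration, not figures from the article):

```python
# Back-of-the-envelope check of the Moore's-Law comparison above.
# Assumptions (mine, not the article's): the first MLPerf training
# results appeared in late 2018, i.e. ~2.5 years before June 2021,
# and transistor density doubles roughly every 2 years.
years = 2.5
density_gain = 2 ** (years / 2)          # gain from density scaling alone
observed_low, observed_high = 6.8, 11.0  # best MLPerf speedups cited

print(f"Density scaling alone: ~{density_gain:.1f}x")
print(f"Observed speedup: {observed_low}-{observed_high}x")
print(f"Attributable to software/architecture: "
      f"~{observed_low / density_gain:.1f}-{observed_high / density_gain:.1f}x")
```

That "little more than doubling" from transistors alone leaves roughly a 3-5x gap that only software and architecture improvements can explain.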

According to Nvidia, the performance of systems using A100 GPUs has increased more than 5-fold in the last 18 months and 20-fold since the first MLPerf benchmarks three years ago.
And remember my friend, future events such as these will affect you in the future
Nero
Posts: 51
Joined: Sun Aug 15, 2021 5:17 pm

Re: AI & Robotics News and Discussions

Post by Nero »

Yuli Ban wrote: Sun Dec 05, 2021 8:26 am
The days and sometimes weeks it took to train AIs only a few years ago was a big reason behind the launch of billions of dollars-worth of new computing startups over the last few years—including Cerebras Systems, Graphcore, Habana Labs, and SambaNova Systems. In addition, Google, Intel, Nvidia and other established companies made their own similar amounts of internal investment (and sometimes acquisition). With the newest edition of the MLPerf training benchmark results, there’s clear evidence that the money was worth it.

The gains to AI training performance since MLPerf benchmarks began “managed to dramatically outstrip Moore’s Law,” says David Kanter, executive director of the MLPerf parent organization MLCommons. The increase in transistor density would account for a little more than doubling of performance between the early version of the MLPerf benchmarks and those from June 2021. But improvements to software as well as processor and computer architecture produced a 6.8-11-fold speedup for the best benchmark results. In the newest tests, called version 1.1, the best results improved by up to 2.3 times over those from June.

According to Nvidia the performance of systems using A100 GPUs has increased more than 5-fold in the last 18 months and 20-fold since the first MLPerf benchmarks three years ago.
A 20-fold increase in 3 years? Can you imagine if we see the same sort of increase by 2024? We already know that people can make the false argument that GPT-3 is true AI - we will be truly beyond the pale within just a year at this rate. Truly we live in interesting times.
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: AI & Robotics News and Discussions

Post by Ozzie guy »

Yuli Ban wrote: Sun Dec 05, 2021 8:26 am
The days and sometimes weeks it took to train AIs only a few years ago was a big reason behind the launch of billions of dollars-worth of new computing startups over the last few years—including Cerebras Systems, Graphcore, Habana Labs, and SambaNova Systems. In addition, Google, Intel, Nvidia and other established companies made their own similar amounts of internal investment (and sometimes acquisition). With the newest edition of the MLPerf training benchmark results, there’s clear evidence that the money was worth it.

The gains to AI training performance since MLPerf benchmarks began “managed to dramatically outstrip Moore’s Law,” says David Kanter, executive director of the MLPerf parent organization MLCommons. The increase in transistor density would account for a little more than doubling of performance between the early version of the MLPerf benchmarks and those from June 2021. But improvements to software as well as processor and computer architecture produced a 6.8-11-fold speedup for the best benchmark results. In the newest tests, called version 1.1, the best results improved by up to 2.3 times over those from June.

According to Nvidia the performance of systems using A100 GPUs has increased more than 5-fold in the last 18 months and 20-fold since the first MLPerf benchmarks three years ago.
Am I misunderstanding something, or is AI literally getting 20 times better every 3 years!?

If so, victory is coming fast.

1×20 = 20, 20×20 = 400, 400×20 = 8,000. By the logic of 20 times better every three years, we will have an AI 8,000 times better than GPT-3 by 11 Jun 2029.
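That compounding can be sketched directly (a naive extrapolation, taking GPT-3's June 2020 debut as the 1x baseline, which is what the 2029 endpoint implies):

```python
# Naive extrapolation: 20x improvement every 3 years, compounding
# from a 1x baseline at GPT-3's release (June 2020).
factor = 1
for year in (2023, 2026, 2029):
    factor *= 20
    print(f"June {year}: {factor:,}x")
# June 2023: 20x / June 2026: 400x / June 2029: 8,000x
```

Whether benchmark speedups actually compound like this is of course the big assumption.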
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

We can add suggesting and proving mathematical theorems to the long list of what artificial intelligence is capable of: Mathematicians and AI experts have teamed up to demonstrate how machine learning can open up new avenues to explore in the field.

While mathematicians have been using computers to discover patterns for decades, the increasing power of machine learning means that these networks can work through huge swathes of data and identify patterns that haven't been spotted before.

In a newly published study, a research team used artificial intelligence systems developed by DeepMind, the same company that has been deploying AI to solve tricky biology problems and improve the accuracy of weather forecasts, to unknot some long-standing math problems.

"Problems in mathematics are widely regarded as some of the most intellectually challenging problems out there," says mathematician Geordie Williamson from the University of Sydney in Australia.
And remember my friend, future events such as these will affect you in the future
caltrek
Posts: 6509
Joined: Mon May 17, 2021 1:17 pm

Re: AI & Robotics News and Discussions

Post by caltrek »


Engineered Arts, a UK-based designer and manufacturer of humanoid robots, recently showed off one of its most lifelike creations in a video posted on YouTube. The robot, called Ameca, is shown making a series of incredibly human-like facial expressions.
https://www.theverge.com/2021/12/5/2281 ... xpressions
Don't mourn, organize.

-Joe Hill
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

The idiom “actions speak louder than words” first appeared in print almost 300 years ago. A new study echoes this view, arguing that combining self-supervised and offline reinforcement learning (RL) could lead to a new class of algorithms that understand the world through actions and enable scalable representation learning.

Machine learning (ML) systems have achieved outstanding performance in domains ranging from computer vision to speech recognition and natural language processing, yet still struggle to match the flexibility and generality of human reasoning. This has led ML researchers to search for the “missing ingredient” that might boost these systems’ ability to understand, reason and generalize.

In the paper Understanding the World Through Action, UC Berkeley assistant professor in the department of electrical engineering and computer sciences Sergey Levine suggests that a general, principled, and powerful framework for utilizing unlabelled data could be derived from RL to enable ML systems leveraging large datasets to better understand the real world.
And remember my friend, future events such as these will affect you in the future
funkervogt
Posts: 1171
Joined: Mon May 17, 2021 3:03 pm

Re: AI & Robotics News and Discussions

Post by funkervogt »

The "UnifiedQA" and "Gopher" chatbots are better at answering certain categories of questions than GPT-3, and are nearing the same levels of accuracy as human subject matter experts.
https://deepmind.com/blog/article/langu ... g-at-scale

I predict the Turing Test will be passed this decade.
Nero
Posts: 51
Joined: Sun Aug 15, 2021 5:17 pm

Re: AI & Robotics News and Discussions

Post by Nero »

I do believe it will not require any more than three years to reach beyond the limits of the Turing Test. This time in 2018, GPT-2 was the most advanced AI chatbot; it has been surpassed by several orders of magnitude in the years since. There is no way that by 2024 or 2025, even if the growth somehow slowed, we would not be looking at similar advances.

Again, you have to credit Mr Yuli Ban for most of this; he saw this coming and was able to predict it well in advance of today.
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Researchers at Kobe University and Osaka University have successfully developed artificial intelligence technology that can extract hidden equations of motion from regular observational data and create a model that is faithful to the laws of physics.

This technology could enable researchers to discover the hidden equations of motion behind phenomena for which the laws were considered unexplainable.
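As a generic illustration of the idea (this is plain least-squares regression over candidate terms, not the actual Kobe/Osaka method): given position data from simulated free fall, a fit over a small library of terms recovers the underlying law.

```python
import numpy as np

# Toy sketch of extracting a "hidden" law from observational data
# (illustrative only, not the Kobe/Osaka method): recover g from
# simulated free-fall positions x(t) = 0.5 * g * t^2.
g_true = 9.81
t = np.linspace(0.1, 2.0, 50)
x = 0.5 * g_true * t**2

# Candidate terms: t and t^2. Least squares should zero out the
# t term and put weight g/2 on the t^2 term.
library = np.column_stack([t, t**2])
coef, *_ = np.linalg.lstsq(library, x, rcond=None)
print(f"recovered g ≈ {2 * coef[1]:.2f}")  # ≈ 9.81
```

Real systems like the one in the study handle noisy data and unknown term libraries, which is where the machine learning comes in.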
And remember my friend, future events such as these will affect you in the future