Google AI and DeepMind News and Discussions
-
- Posts: 628
- Joined: Thu Aug 19, 2021 4:29 am
Re: Google AI and DeepMind News and Discussions
Google plans "next generation series of models" for 2024
Oct 25, 2023
According to Alphabet's CEO, Google's Gemini is just the first of a series of next-generation AI models that Google plans to bring to market in 2024.
With the multimodal Gemini AI model, Google wants to at least catch up with OpenAI's GPT-4. The model is expected to be released later this year. In the recent quarterly earnings call, Alphabet CEO Sundar Pichai said that Google is "getting the model ready".
Gemini will be released in different sizes and with different capabilities, and will be used for all internal products immediately, Pichai said. So it is likely that Gemini will replace Google's current PaLM-2 language model. Developers and cloud customers will get access through Vertex AI.
Most importantly, Google is "laying the foundation of what I think of as the next-generation series of models we'll be launching throughout 2024," Pichai said.
"The pace of innovation is extraordinarily impressive to see. We are creating it from the ground-up to be multimodal, highly efficient tool and API integrations and more importantly, laying the platform to enable future innovations as well," Pichai said.
https://the-decoder.com/google-plans-ne ... -for-2024/
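For anyone wondering what "access through Vertex AI" looks like in practice: below is a minimal sketch using the current Vertex AI Python SDK and the existing PaLM 2 "text-bison" endpoint. The Gemini model ID hasn't been announced, so treating it as a drop-in replacement for the model name is my assumption; the project and prompt are placeholders.

# Minimal sketch of calling a Google foundation model through Vertex AI.
# Uses today's PaLM 2 "text-bison" endpoint; swapping in a Gemini model
# name once it ships is an assumption, since no ID has been announced.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "In two sentences, what did Sundar Pichai say about Gemini?",
    temperature=0.2,
    max_output_tokens=128,
)
print(response.text)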
-
- Posts: 16334
- Joined: Sun May 16, 2021 6:16 pm
Re: Google AI and DeepMind News and Discussions
Google Maps Gets New AI Features to Help You Find Cool Stuff
Maps will soon be able to search images and suggest nearby activities.
By Ryan Whitwam October 27, 2023
https://www.extremetech.com/internet/go ... cool-stuff
Google is preparing a raft of artificial intelligence features in Maps, giving users better search, suggestions, and navigation. Not only will Maps be able to analyze the content of photos in listings, it'll be able to organize search results to direct you to activities and events that are nearby. Google also plans to vastly expand access to some of its most innovative features that have been limited to a handful of cities, but you still might not have access.
-
- Posts: 628
- Joined: Thu Aug 19, 2021 4:29 am
Re: Google AI and DeepMind News and Discussions
Combine AI with a more advanced version of AlphaFold in 10 years, and ask: "Give us the possible mechanisms by which we can reverse aging."
To know is essentially the same as not knowing. The only thing that occurs is the rearrangement of atoms in your brain.
Re: Google AI and DeepMind News and Discussions
And remember my friend, future events such as these will affect you in the future
- Cyber_Rebel
- Posts: 435
- Joined: Sat Aug 14, 2021 10:59 pm
- Location: New Dystopios
Re: Google AI and DeepMind News and Discussions
It's about time we got actual working definitions that at least some would agree to. It gives an idea of just how "close" we actually are, as I'm assuming "competent" AGI is what the term has traditionally meant, i.e., performing at the level of a median human across many domains. This also fixes the goalpost-moving some have become accustomed to. Really, it shouldn't even take that long for AGI to scale up the list.
If Google releases Gemini soon, I wonder where they'll place their most capable model. If it performs no better than GPT-4, or only marginally better, then we know it's just a marketing ploy. If it actually performs at the next tier, then we have real rather than merely emerging AGI, per the paper (see the sketch of the paper's levels below).
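For reference, here is the paper's performance axis, roughly as I read it. The level names are from the paper, but the percentile wording is my paraphrase, so double-check against the original before quoting it.

# Rough sketch of the performance levels in DeepMind's "Levels of AGI"
# taxonomy; the descriptions are my paraphrase of the paper, not quotes.
AGI_LEVELS = {
    0: ("No AI", "narrow tools with no learned generality, e.g. a calculator"),
    1: ("Emerging", "equal to or somewhat better than an unskilled human"),
    2: ("Competent", "at least 50th percentile of skilled adults"),
    3: ("Expert", "at least 90th percentile of skilled adults"),
    4: ("Virtuoso", "at least 99th percentile of skilled adults"),
    5: ("Superhuman", "outperforms 100% of humans"),
}

def classify(percentile: float) -> str:
    """Toy helper: map a performance percentile to a level name."""
    if percentile >= 100:
        return AGI_LEVELS[5][0]
    if percentile >= 99:
        return AGI_LEVELS[4][0]
    if percentile >= 90:
        return AGI_LEVELS[3][0]
    if percentile >= 50:
        return AGI_LEVELS[2][0]
    return AGI_LEVELS[1][0]

print(classify(50))  # "Competent", the median-human bar mentioned above

On this scale the paper places GPT-4 and its peers at "Emerging AGI", so the "next tier" would be Competent.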
- Cyber_Rebel
- Posts: 435
- Joined: Sat Aug 14, 2021 10:59 pm
- Location: New Dystopios
Re: Google AI and DeepMind News and Discussions
This was explained in the paper, which may answer what you may have been referring to:
1. Focus on Capabilities, not Processes. The majority of definitions focus on what an AGI can accomplish, not on the mechanism by which it accomplishes tasks. This is important for identifying characteristics that are not necessarily a prerequisite for achieving AGI (but may nonetheless be interesting research topics). This focus on capabilities allows us to exclude the following from our requirements for AGI:
• Achieving AGI does not imply that systems think or understand in a human-like way (since this focuses on processes, not capabilities)
• Achieving AGI does not imply that systems possess qualities such as consciousness (subjective awareness) (Butlin et al., 2023) or sentience (the ability to have feelings) (since these qualities not only have a process focus, but are not currently measurable by agreed-upon scientific methods)
2. Focus on Generality and Performance. All of the above definitions emphasize generality to varying degrees, but some exclude performance criteria. We argue that both generality and performance are key components of AGI. In the next section we introduce a leveled taxonomy that considers the interplay between these dimensions.
3. Focus on Cognitive and Metacognitive Tasks. Whether to require robotic embodiment (Roy et al., 2021) as a criterion for AGI is a matter of some debate. Most definitions focus on cognitive tasks, by which we mean non-physical tasks. Despite recent advances in robotics (Brohan et al., 2023), physical capabilities for AI systems seem to be lagging behind non-physical capabilities. It is possible that embodiment in the physical world is necessary for building the world knowledge to be successful on some cognitive tasks (Shanahan, 2010), or at least may be one path to success on some classes of cognitive tasks; if that turns out to be true then embodiment may be critical to some paths toward AGI. We suggest that the ability to perform physical tasks increases a system's generality, but should not be considered a necessary prerequisite to achieving AGI. On the other hand, metacognitive capabilities (such as the ability to learn new tasks or the ability to know when to ask for clarification or assistance from a human) are key prerequisites for systems to achieve generality.
Basically, something like GPT-4 has "emergent" generality due to the cognitive functions it can perform. GPT-4 is not at full human level across every domain yet, but it can certainly outperform "unskilled" workers or artists, or at least augment their output in a meaningful way. It can summarize and explain various topics that an unskilled person wouldn't know on their own, and that includes its coding abilities, which are "impressive" compared with an unskilled human's. Many tend to compare GPT-4 to a junior dev (assuming GPT-4 is having a good day), and then there are the various benchmarks, like passing the bar and medical exams.
If I'm being "strict", I'd say only multimodal LLMs should count from here on out as "emergent" AGIs, which basically harkens back to Microsoft's "Sparks of AGI" paper.
- Cyber_Rebel
- Posts: 435
- Joined: Sat Aug 14, 2021 10:59 pm
- Location: New Dystopios
Re: Google AI and DeepMind News and Discussions
Exclusive: Google in talks to invest in AI startup Character.AI
By Krystal Hu
November 10, 2023, 4:28 PM CST
(Reuters)
A response to GPTs, perhaps? At this point, Character.AI should just go back to Google and have their service run off the soon-to-be-released Gemini. If they really wished to give OpenAI a challenge, a more robust and powerful waifu-creation system might just be the answer.
Character.AI itself was actually quite impressive for the time it was released, and somehow always had up-to-date information, like PI.AI does.
Nov 10 (Reuters) - Alphabet's (GOOGL.O) Google is in talks to invest hundreds of millions of dollars in Character.AI, as the fast-growing artificial intelligence chatbot startup seeks capital to train models and keep up with user demand, two sources briefed on the matter told Reuters.
The investment, which could be structured as convertible notes, according to a third source, will deepen the existing partnership Character.AI already has with Google, in which it uses Google's cloud services and Tensor Processing Units (TPUs) to train models.
Google and Character AI did not respond to requests for comment.
Founded by former Google employees Noam Shazeer and Daniel De Freitas, Character.AI allows people to chat with virtual versions of celebrities like Billie Eilish or anime characters, while creating their own chatbots and AI assistants. It is free to use, but offers a subscription model that charges $9.99 a month for users who want to skip the virtual line to access a chatbot.
Character.AI's chatbots, with various roles and tones to choose from, have appealed to users ages 18 to 24, who contributed about 60% of its website traffic, according to data from Similarweb. That demographic is helping the company position itself as a purveyor of more fun personal AI companions, compared with rival AI chatbots such as OpenAI's ChatGPT and Google's Bard.
The company previously said its website had attracted 100 million monthly visits in the first six months since its launch.
-
- Posts: 628
- Joined: Thu Aug 19, 2021 4:29 am
Re: Google AI and DeepMind News and Discussions
Google Delays Release of Gemini AI That Aims to Compete With OpenAI
Nov. 16, 2023 4:07 PM PST
Google’s company-defining effort to catch up to ChatGPT creator OpenAI is turning out to be harder than expected.
Google representatives earlier this year told some cloud customers and business partners they would get access to the company’s new conversational AI, a large language model known as Gemini, by November. But the company recently told them not to expect it until the first quarter of next year, according to two people with direct knowledge. The delay comes at a bad time for Google, whose cloud sales growth has slowed while that of its bigger rival, Microsoft, has accelerated. Part of Microsoft’s success has come from selling OpenAI’s technology to its customers.
https://www.theinformation.com/articles ... ith-openai
- Cyber_Rebel
- Posts: 435
- Joined: Sat Aug 14, 2021 10:59 pm
- Location: New Dystopios
Re: Google AI and DeepMind News and Discussions
AI agents can copy humans to get closer to artificial general intelligence, DeepMind finds
Google’s AI offshoot finds copy-cat robots capable of aping living mentors
Source: theregister
The paper: Nature
A team of machine learning researchers from Google's DeepMind claim to have demonstrated that AI can acquire skills in a process analogous to social learning in humans and other animals.
Social learning — where one individual acquires skills and knowledge from another by copying — is vital to the process of development in humans and much of the animal kingdom. The DeepMind team claim to be the first to demonstrate the process in artificial intelligence.
A team led by Edward Hughes, a staff research engineer at Google DeepMind, looked to address some of the limitations in AI agents acquiring new skills. Teaching them new capabilities from human data has relied on supervised learning from large numbers of first-person human demonstrations, which can soak up a lot of lab time and money.
Looking to human learning for inspiration, the researchers sought to show how AI agents could learn from other individuals with human-like efficiency. In a simulated task space called GoalCycle3D — a sort of computer-animated playground with footpaths and obstacles — they found AI agents could learn from both human and AI experts across a number of navigational problems, even though they had never seen a human or, we assume, had any idea what one was.
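For intuition, here is a stripped-down toy version of the copy-the-demonstrator idea: a learner watches an expert walk a hidden goal cycle and adopts the expert's majority action in each state. To be clear, this is not the paper's method (DeepMind trains memory-based RL agents end-to-end in GoalCycle3D); the cycle and policy below are invented for illustration.

from collections import defaultdict

# Toy illustration of social learning as imitation: the learner tallies
# a demonstrator's choices and copies the most frequent action in each
# state. DeepMind's agents instead learn this behaviour with deep RL and
# memory inside GoalCycle3D; this sketch only conveys the basic idea.

EXPERT_POLICY = {"A": "B", "B": "C", "C": "A"}  # hypothetical goal cycle

def expert_demo(start="A", steps=30):
    """Roll out the expert, yielding (state, action) pairs."""
    state = start
    for _ in range(steps):
        action = EXPERT_POLICY[state]
        yield state, action
        state = action

def learn_by_copying(demo):
    """The learner's policy is the expert's most frequent action per state."""
    counts = defaultdict(lambda: defaultdict(int))
    for state, action in demo:
        counts[state][action] += 1
    return {s: max(acts, key=acts.get) for s, acts in counts.items()}

print(learn_by_copying(expert_demo()))  # recovers {'A': 'B', 'B': 'C', 'C': 'A'}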
Re: Google AI and DeepMind News and Discussions
Millions of new materials discovered with deep learning
29 November 2023
AI tool GNoME finds 2.2 million new crystals, including 380,000 stable materials that could power future technologies
Modern technologies, from computer chips and batteries to solar panels, rely on inorganic crystals. To enable new technologies, crystals must be stable, otherwise they can decompose, and behind each new, stable crystal can be months of painstaking experimentation.
Today, in a paper published in Nature, we share the discovery of 2.2 million new crystals – equivalent to nearly 800 years’ worth of knowledge. We introduce Graph Networks for Materials Exploration (GNoME), our new deep learning tool that dramatically increases the speed and efficiency of discovery by predicting the stability of new materials.
With GNoME, we've multiplied the number of technologically viable materials known to humanity. Of its 2.2 million predictions, 380,000 are the most stable, making them promising candidates for experimental synthesis. Among these candidates are materials with the potential to enable future transformative technologies, ranging from superconductors for powering supercomputers to next-generation batteries that could boost the efficiency of electric vehicles.
GNoME shows the potential of using AI to discover and develop new materials at scale. External researchers in labs around the world have independently created 736 of these new structures experimentally in concurrent work. In partnership with Google DeepMind, a team of researchers at the Lawrence Berkeley National Laboratory has also published a second paper in Nature that shows how our AI predictions can be leveraged for autonomous material synthesis.
https://deepmind.google/discover/blog/m ... -learning/
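To make "predicting the stability of new materials" concrete: downstream of a model like GNoME, candidates are typically ranked by predicted energy above the convex hull, and only those at or below a small tolerance are treated as stable. A minimal sketch follows; the formulas and energies are made-up placeholders, not GNoME outputs.

# Illustrative stability filter of the kind applied to model predictions:
# a crystal counts as stable if its predicted decomposition energy
# (energy above the convex hull) is at or below a tolerance. All values
# below are placeholders, not GNoME data.

candidates = [
    {"formula": "Li3PS4",  "e_above_hull_ev_per_atom": 0.000},
    {"formula": "Na2MnO3", "e_above_hull_ev_per_atom": 0.120},
    {"formula": "K2TiF6",  "e_above_hull_ev_per_atom": -0.010},
]

TOLERANCE = 0.0  # eV/atom; "stable" means on or below the convex hull

stable = [c for c in candidates if c["e_above_hull_ev_per_atom"] <= TOLERANCE]
for c in stable:
    print(c["formula"], "is a candidate for experimental synthesis")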
-
- Posts: 1910
- Joined: Wed Oct 12, 2022 7:45 am
Re: Google AI and DeepMind News and Discussions
Holy shit, 800 years' worth of technological advancement, in a way.
-
- Posts: 628
- Joined: Thu Aug 19, 2021 4:29 am
-
- Posts: 1910
- Joined: Wed Oct 12, 2022 7:45 am
Re: Google AI and DeepMind News and Discussions
They seem to be aiming for perfection
Re: Google AI and DeepMind News and Discussions
Google Gemini has just been released, and it's likely the new closest AI to AGI.
It's nearly 3 am where I live, so I'm not going to look into it; I'm sure details will be summarized soon.
https://blog.google/technology/ai/googl ... ing-gemini