PaLM 2 can code, translate, and "reason" in ways that best GPT-4, says Google.
Benj Edwards - 5/11/2023, 12:20 PM
https://arstechnica.com/information-tec ... e-mastery/
On Wednesday, Google introduced PaLM 2, a family of foundational language models comparable to OpenAI's GPT-4. At its Google I/O event in Mountain View, California, Google revealed that it already uses PaLM 2 to power 25 products, including its Bard conversational AI assistant.
As a family of large language models (LLMs), PaLM 2 has been trained on an enormous volume of data and performs next-word prediction, outputting the most likely continuation of a human-supplied prompt. PaLM stands for "Pathways Language Model," and "Pathways" is a machine-learning technique created at Google. PaLM 2 follows up on the original PaLM, which Google announced in April 2022.
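The next-word prediction described above can be illustrated with a toy sketch. This is not PaLM 2's actual architecture; a hand-made score table stands in for the model's learned distribution, and the prompt text is invented for illustration. The point is only the mechanism: score every candidate next token, convert scores to probabilities, and emit the most likely one.

```python
import math

# Hypothetical scores a model might assign to candidate next words
# for one prompt (a real LLM computes these from billions of parameters).
next_word_scores = {
    "the cat sat on the": {"mat": 2.1, "dog": 0.3, "roof": 1.2},
}

def softmax(scores):
    """Turn raw scores into a probability distribution summing to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

def predict_next(prompt):
    """Return the most probable next word for a known prompt."""
    probs = softmax(next_word_scores[prompt])
    return max(probs, key=probs.get)

print(predict_next("the cat sat on the"))  # -> "mat"
```

In a real model, sampling from the probability distribution (rather than always taking the maximum) is what produces varied outputs from the same prompt.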
According to Google, PaLM 2 supports over 100 languages and can perform "reasoning," code generation, and multilingual translation. During his 2023 Google I/O keynote, Google CEO Sundar Pichai said that PaLM 2 comes in four sizes: Gecko, Otter, Bison, and Unicorn. Gecko is the smallest and can reportedly run on a mobile device. Aside from Bard, PaLM 2 is behind AI features in Docs, Sheets, and Slides.
A Google-provided example of PaLM 2 "reasoning."
That is all well and good, but how does PaLM 2 stack up against GPT-4? In the PaLM 2 Technical Report, PaLM 2 appears to beat GPT-4 on some mathematical, translation, and reasoning tasks. But reality might not match Google's benchmarks: Ethan Mollick, a Wharton professor who often writes about AI, ran a cursory evaluation of the PaLM 2 version of Bard and found that its performance appears worse than GPT-4 and Bing on various informal language tests, which he detailed in a Twitter thread.
Until recently, the PaLM family of language models was an internal Google Research product with no consumer exposure, though Google began offering limited API access in March. Even so, the first PaLM was notable for its massive size: 540 billion parameters. Parameters are numerical variables that serve as the learned "knowledge" of the model, enabling it to make predictions and generate text based on the input it receives.
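To give a sense of where those 540 billion parameters come from, here is a rough back-of-the-envelope sketch. The layer sizes below are assumptions chosen for illustration, not PaLM's published configuration: each fully connected layer in a transformer contributes one weight per input-output pair plus one bias per output, and stacking many such layers is how the totals climb into the billions.

```python
def dense_params(n_in, n_out):
    """Parameter count of one fully connected layer:
    a weight for every (input, output) pair, plus one bias per output."""
    return n_in * n_out + n_out

# Hypothetical sizes: a feed-forward block with hidden size 4096
# that expands to 4x the hidden size and projects back down.
hidden = 4096
ffn = 4 * hidden
block = dense_params(hidden, ffn) + dense_params(ffn, hidden)
print(block)  # -> 134238208, over 134 million parameters in one block
```

Multiply a figure like this across dozens of layers, plus attention weights and token embeddings, and a parameter count in the hundreds of billions follows quickly.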