Re: Proto-AGI/Transformative AI News and Discussions
Posted: Mon Mar 27, 2023 7:49 am
A community of futurology enthusiasts
https://www.futuretimeline.net/forum/
https://www.futuretimeline.net/forum/viewtopic.php?f=16&t=48
Link to the paper: https://arxiv.org/abs/2303.12712
"We contend," the researchers write in the paper, published yesterday, "that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models."
"We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting," reads the paper. "Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT."
"Given the breadth and depth of GPT-4's capabilities," they continue, "we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."
OpenAI CEO Sam Altman is still sounding the alarm about the potential dangers of advanced artificial intelligence, saying that despite its "tremendous benefits," he also fears the potentially unprecedented scope of its risks.
His company — the creator of hit generative AI tools like ChatGPT and DALL-E — is keeping those risks in mind and working to teach AI systems to avoid putting out harmful content, Altman said on tech researcher Lex Fridman's podcast, in an episode posted on Saturday.
"I think it's weird when people think it's like a big dunk that I say, I'm a little bit afraid," Altman told Fridman. "And I think it'd be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid."
"The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we're prepared for," he added. "And that doesn't require superintelligence."
The transformative changes brought by deep learning and artificial intelligence are accompanied by immense costs. For example, OpenAI's ChatGPT algorithm costs at least $100,000 every day to operate. This could be reduced with accelerators, or computer hardware designed to efficiently perform the specific operations of deep learning. However, such a device is only viable if it can be integrated with mainstream silicon-based computing hardware on the material level.
This was preventing the implementation of one highly promising deep learning accelerator—arrays of electrochemical random-access memory, or ECRAM—until a research team at the University of Illinois Urbana-Champaign achieved the first material-level integration of ECRAMs onto silicon transistors. The researchers, led by graduate student Jinsong Cui and professor Qing Cao of the Department of Materials Science & Engineering, recently reported in Nature Electronics an ECRAM device designed and fabricated with materials that can be deposited directly onto silicon during fabrication, realizing the first practical ECRAM-based deep learning accelerator.
"Other ECRAM devices have been made with the many difficult-to-obtain properties needed for deep learning accelerators, but ours is the first to achieve all these properties and be integrated with silicon without compatibility issues," Cao said. "This was the last major barrier to the technology's widespread use."