Re: AI & Robotics News and Discussions
Posted: Sun May 08, 2022 7:01 am
The rate of progress in AI is beginning to scare me...
A community of futurology enthusiasts
https://www.futuretimeline.net/forum/
AI experts' opinions are mutually exclusive.

funkervogt wrote: ↑Tue May 10, 2022 2:05 pm
IBM's CEO: Narrow AI is quickly improving and near a usage "tipping point," but general AI is still decades away.
https://www.zdnet.com/article/ibm-ceo-a ... cades-out/
In recent years, scaling up the size of language models has been shown to be a reliable way to improve performance on a range of natural language processing (NLP) tasks. Today's language models at the scale of 100B or more parameters achieve strong performance on tasks like sentiment analysis and machine translation, even with few or no training examples. Even the largest language models, however, can still struggle with certain multi-step reasoning tasks, such as math word problems and commonsense reasoning. How might we enable language models to perform such reasoning tasks?
In “Chain of Thought Prompting Elicits Reasoning in Large Language Models,” we explore a prompting method for improving the reasoning abilities of language models. Called chain of thought prompting, this method enables models to decompose multi-step problems into intermediate steps. With chain of thought prompting, language models of sufficient scale (~100B parameters) can solve complex reasoning problems that are not solvable with standard prompting methods.
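To make the idea concrete, here is a minimal sketch of what the two prompting styles look like as plain text. The tennis-ball word problem is paraphrased from the paper's running example; the `build_prompt` helper is my own illustration, not code from the paper. The only difference between the two styles is whether each exemplar's answer states just the final result or spells out the intermediate reasoning steps first.

```python
def build_prompt(exemplars, question):
    """Assemble a few-shot prompt from (question, answer) exemplar pairs,
    followed by the new question the model should answer."""
    parts = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

WORD_PROBLEM = ("Roger has 5 tennis balls. He buys 2 more cans of "
                "tennis balls. Each can has 3 tennis balls. "
                "How many tennis balls does he have now?")

# Standard prompting: the exemplar answer is just the final number.
standard_exemplar = (WORD_PROBLEM, "The answer is 11.")

# Chain of thought prompting: the exemplar answer walks through the
# intermediate steps before the final number, which encourages the
# model to produce similar step-by-step reasoning for the new question.
cot_exemplar = (WORD_PROBLEM,
                "Roger started with 5 balls. 2 cans of 3 tennis balls "
                "each is 6 tennis balls. 5 + 6 = 11. The answer is 11.")

new_question = ("The cafeteria had 23 apples. If they used 20 to make "
                "lunch and bought 6 more, how many apples do they have?")

standard_prompt = build_prompt([standard_exemplar], new_question)
cot_prompt = build_prompt([cot_exemplar], new_question)
```

Either string would then be sent to a sufficiently large language model as-is; per the paper, only models at roughly the 100B-parameter scale reliably benefit from the chain-of-thought variant.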