Re: Proto-AGI/Transformative AI News and Discussions

Posted: Mon Oct 03, 2022 7:40 am
by agi
deleted

Re: Proto-AGI/Transformative AI News and Discussions

Posted: Mon Oct 03, 2022 11:02 am
by agi
deleted

Re: Proto-AGI/Transformative AI News and Discussions

Posted: Tue Oct 04, 2022 3:19 am
by Yuli Ban

Re: Proto-AGI/Transformative AI News and Discussions

Posted: Wed Oct 19, 2022 9:56 am
by wjfox
Google offers glimpse of ultra-realistic chat tech

Google has launched a UK version of an app that lets users interact with the artificial-intelligence system one of its engineers has claimed is sentient.

It is a very limited trial, with just three scenarios to choose from.

And while Google wants feedback about how its Language Model for Dialogue Applications (Lamda) performs, app users cannot teach it any new tricks.

The company has always maintained the technology, used to power chatbots, has no independent thoughts and feelings.

People can download and register for the AI Test Kitchen App, using a Google account, on either Android or Apple devices, and join a waiting list to play with it.

https://www.bbc.co.uk/news/technology-63301146

Re: Proto-AGI/Transformative AI News and Discussions

Posted: Wed Oct 19, 2022 2:54 pm
by wjfox

Re: Proto-AGI/Transformative AI News and Discussions

Posted: Fri Oct 21, 2022 2:13 am
by Yuli Ban

Re: Proto-AGI/Transformative AI News and Discussions

Posted: Fri Oct 21, 2022 10:49 pm
by Yuli Ban

Re: Proto-AGI/Transformative AI News and Discussions

Posted: Fri Oct 21, 2022 10:54 pm
by Yuli Ban
Impressively, at 540B scale, we show an approximately 2x computational savings rate where U-PaLM achieves the same performance as the final PaLM 540B model at around half its computational budget...
U-PaLM does much better than PaLM on some tasks or demonstrates better quality at much smaller scale (62B as opposed to 540B). Overall, we show that U-PaLM outperforms PaLM on many few-shot setups, i.e., English NLP tasks (e.g., commonsense reasoning, question answering), reasoning tasks with chain-of-thought (e.g., GSM8K), multilingual tasks (MGSM, TydiQA), MMLU and challenging BIG-Bench tasks.
We show that, with almost negligible extra computational costs and no new sources of data, we are able to substantially improve the scaling properties of large language models on downstream metrics.
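To put the "2x computational savings" claim into rough numbers, here is a minimal back-of-the-envelope sketch. It uses the common C ≈ 6·N·D approximation for training FLOPs (my own assumption for illustration, not something taken from the U-PaLM paper), with 540B parameters and PaLM's roughly 780B training tokens as the baseline budget:

```python
# Back-of-the-envelope sketch (illustration only, not from the U-PaLM paper):
# approximate training compute with the common rule C ~= 6 * N * D,
# where N = parameter count and D = number of training tokens.

def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOPs via C ~= 6 * N * D."""
    return 6.0 * n_params * n_tokens

N_PARAMS = 540e9       # 540B parameters (PaLM / U-PaLM scale)
TOKENS_FULL = 780e9    # PaLM's ~780B training tokens (assumed baseline figure)

c_full = train_flops(N_PARAMS, TOKENS_FULL)
c_half = c_full / 2.0  # "around half its computational budget"

print(f"full PaLM 540B budget : {c_full:.2e} FLOPs")
print(f"half budget           : {c_half:.2e} FLOPs "
      f"(~{TOKENS_FULL / 2 / 1e9:.0f}B tokens at the same model size)")
```

On these assumed numbers, matching the final PaLM 540B checkpoint at half the budget would mean roughly the same quality for about 1.3e24 fewer training FLOPs, which is the sense in which the quoted "2x savings" is meant.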

Re: Proto-AGI/Transformative AI News and Discussions

Posted: Sat Oct 22, 2022 6:00 am
by Yuli Ban

Re: Proto-AGI/Transformative AI News and Discussions

Posted: Sun Nov 06, 2022 4:25 pm
by Yuli Ban