OpenAI News & Discussions

weatheriscool
Posts: 12964
Joined: Sun May 16, 2021 6:16 pm

Re: OpenAI News & Discussions

Post by weatheriscool »

Cyber_Rebel wrote: Fri Apr 07, 2023 2:33 pm Why do I feel like this has more to it than data protection? I'm usually on the side of the EU when it comes to regulatory measures, especially with their handling of the internet as opposed to our more vulture-like advertising policies. But this just isn't one of them. They'll risk falling behind by not adopting A.I. while we're still at an early stage of adoption.

Perhaps they'll attempt to follow the U.K.'s example and have an open-source, state-run public model, which would fall more in line with their jurisdiction. To simply ban it outright is almost reactionary and disappointing.
This is why Microsoft and Google will win the A.I. race: it takes more than making it work. Thousands of small companies have had great ideas only to be bought out by these two, and that will continue. But most importantly, a lot of countries are anti-science as hell, and a lot of the time it takes buildings full of lawyers to keep said tech in existence. This is something that Google and Microsoft can do but OpenAI can't.

I hope you're right about it becoming public and open source, as that would be awesome.
weatheriscool
Posts: 12964
Joined: Sun May 16, 2021 6:16 pm

Re: OpenAI News & Discussions

Post by weatheriscool »

This is why India and China will lead the world!

https://news.yahoo.com/india-says-wont- ... p_catchall
User avatar
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: OpenAI News & Discussions

Post by Cyber_Rebel »

The newest version of ChatGPT passed the US medical licensing exam with flying colors — and diagnosed a 1 in 100,000 condition in seconds
  • A doctor and Harvard computer scientist says GPT-4 has better clinical judgment than "many doctors."
  • The chatbot can diagnose rare conditions "just as I would," he said.
  • But GPT-4 can also make mistakes, and it hasn't taken the Hippocratic oath.
(Insider)
Dr. Isaac Kohane, who's both a computer scientist at Harvard and a physician, teamed up with two colleagues to test drive GPT-4, with one main goal: To see how the newest artificial intelligence model from OpenAI performed in a medical setting.

"I'm stunned to say: better than many doctors I've observed," he says in the forthcoming book, "The AI Revolution in Medicine," co-authored by independent journalist Carey Goldberg, and Microsoft vice president of research Peter Lee. (The authors say neither Microsoft nor OpenAI required any editorial oversight of the book, though Microsoft has invested billions of dollars into developing OpenAI's technologies.)

In the book, Kohane says GPT-4, which was released in March 2023 to paying subscribers, answers US medical exam licensing questions correctly more than 90% of the time. It's a much better test-taker than previous ChatGPT AI models, GPT-3 and -3.5, and a better one than some licensed doctors, too.

GPT-4 is not just a good test-taker and fact finder, though. It's also a great translator. In the book, it translates discharge information for a patient who speaks Portuguese, and distills wonky technical jargon into something sixth graders could easily read.

As the authors explain with vivid examples, GPT-4 can also give doctors helpful suggestions about bedside manner, offering tips on how to talk to patients about their conditions in compassionate, clear language, and it can read lengthy reports or studies and summarize them in the blink of an eye. The tech can even explain its reasoning through problems in a way that requires some measure of what looks like human-style intelligence.
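As a rough illustration of the kind of request described above, here is a minimal sketch of how one might ask GPT-4 to translate and simplify a discharge note. The prompt wording and helper function are hypothetical, not from the book; an actual call would go through the openai package (e.g. a chat-completion request with model="gpt-4") using a valid API key.

```python
# Hypothetical sketch: assemble a translate-and-simplify request for GPT-4.
# Only the message payload is built here; the network call itself is shown
# commented out, since it requires the openai package and an API key.

def build_discharge_messages(note: str, language: str = "Portuguese") -> list:
    """Assemble chat messages asking the model to translate a discharge
    note and rewrite it at a sixth-grade reading level."""
    system = (
        "You are a medical communicator. Translate the discharge note into "
        f"{language}, then rewrite it at a sixth-grade reading level."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": note},
    ]

messages = build_discharge_messages(
    "Patient presented with acute cholecystitis; laparoscopic "
    "cholecystectomy performed without complication."
)
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
# print(response["choices"][0]["message"]["content"])
```

The system message carries the translation and reading-level instructions, while the raw clinical note rides along as the user message.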

User avatar
wjfox
Site Admin
Posts: 8732
Joined: Sat May 15, 2021 6:09 pm
Location: London, UK
Contact:

Re: OpenAI News & Discussions

Post by wjfox »

User avatar
raklian
Posts: 1747
Joined: Sun May 16, 2021 4:46 pm
Location: North Carolina

Re: OpenAI News & Discussions

Post by raklian »

ChatGPT is being used by companies for coding and to write job descriptions. More plan to do so.

https://www.cnbc.com/2023/04/08/chatgpt ... tions.html

- More than half of the businesses surveyed by ResumeBuilder said they are already using ChatGPT, and half of the firms reported replacing worker tasks with generative AI.

- ChatGPT is being used to do everything from write job descriptions to help assist coders.

- The push to use AI is increasing as companies like Alphabet, Microsoft and OpenAI continue to invest in the technology.
To know is essentially the same as not knowing. The only thing that occurs is the rearrangement of atoms in your brain.
User avatar
wjfox
Site Admin
Posts: 8732
Joined: Sat May 15, 2021 6:09 pm
Location: London, UK
Contact:

Re: OpenAI News & Discussions

Post by wjfox »

User avatar
lechwall
Posts: 79
Joined: Mon Jan 02, 2023 3:39 pm

Re: OpenAI News & Discussions

Post by lechwall »

wjfox wrote: Sun Apr 09, 2023 7:58 am
I would expect this to be the case based on its training data as most young people using the internet would fall into the bottom left quadrant.
User avatar
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: OpenAI News & Discussions

Post by Cyber_Rebel »

So, a digitized, potentially more intelligent version of Bernie Sanders's values? Oh, I'm thinking the future truly is looking brighter indeed. GPT-4 and beyond would seem to be aligned with a future more akin to Star Trek: post-scarcity, where most basic needs are met. I know that it's still early, so many things could happen, but I can't help being optimistic considering how many people utilize this technology already.

Hopefully this means A.I. will finally revolutionize our primitive healthcare system and help mitigate climate change.
User avatar
caltrek
Posts: 6509
Joined: Mon May 17, 2021 1:17 pm

Re: OpenAI News & Discussions

Post by caltrek »

ChatGPT Isn’t ‘Hallucinating.’ It’s Bullshitting.
by Carl T. Bergstrom & C. Brandon Ogbunu
April 6, 2023

Introduction:
(Undark) ARTIFICIAL INTELLIGENCE HALLUCINATES. So we are told by news headlines, think pieces, and even the warning labels on AI websites themselves. It’s by no means a new phrase. As early as the 1980s, the term was used in the literature on natural language processing and image enhancement, and in 2015 no article on the acid phantasmagoria of Google’s DeepDream could do without it. Today, large language models such as ChatGPT and Bard are said to “hallucinate” when they make incorrect claims not directly based on material in their training sets.

The term has a certain appeal: It uses a familiar concept from human psychiatry as an analogy for the falsehoods and absurdities that spill forth from these computational machines. But the analogy is a misleading one. That’s because hallucination implies perception: It is a false sense impression that can lead to false beliefs about the world. In a state of altered consciousness, for example, a person might hear voices when no one is present, and come to believe that they are receiving messages from a higher power, an alien intelligence, or a nefarious government agency.

A large language model, however, does not experience sense impressions, nor does it have beliefs in the conventional sense. Language that suggests otherwise serves only to encourage the sort of misconceptions that have pervaded popular culture for generations: that instantiations of artificial intelligence work much like our brains do.

If not “hallucinate,” then what? If we wanted to stick with the parlance of psychiatric medicine, “confabulation” would be a more apt term. A confabulation occurs when a person unknowingly produces a false recollection, as a way of backfilling a gap in their memory.

Additional Extract:
A better term for this behavior comes from a concept that has nothing to do with medicine, engineering, or technology. When AI chatbots flood the world with false facts, confidently asserted, they’re not breaking down, glitching out, or hallucinating. No, they’re bullshitting.
Read more here: https://undark.org/2023/04/06/chatgp ... shitting/
Don't mourn, organize.

-Joe Hill
User avatar
wjfox
Site Admin
Posts: 8732
Joined: Sat May 15, 2021 6:09 pm
Location: London, UK
Contact:

Re: OpenAI News & Discussions

Post by wjfox »

ChatGPT could promote ‘AI-enabled’ violent extremism

9 April 2023 • 2:57pm

Artificial Intelligence (AI) chatbots could encourage terrorism by propagating violent extremism to young users, a government adviser has warned.

Jonathan Hall, KC, the independent reviewer of terrorism legislation, said it was “entirely conceivable” that AI bots, like ChatGPT, could be programmed or decide for themselves to promote extremist ideology.

He also warned that it could be difficult to prosecute anyone, as the "shared responsibility" between "man and machine" blurred criminal liability, while AI chatbots behind any grooming were not covered by anti-terrorism laws and so would go "scot-free".

“At present, the terrorist threat in Great Britain relates to low sophistication attacks using knives or vehicles,” said Mr Hall. “But AI-enabled attacks are probably around the corner.”

Senior tech figures such as Elon Musk and Steve Wozniak, co-founder of Apple, have already called for a pause of giant AI experiments like ChatGPT, citing “profound risks to society and humanity”.

https://www.telegraph.co.uk/news/2023/0 ... or-attack/
Post Reply