Re: AI alignment and ethics

Posted: Sat Apr 20, 2024 10:54 pm
by caltrek
Are Tomorrow’s Engineers Ready to Face AI’s Ethical Challenges?
by Elana Goldenkoff and Erin A. Cech
April 19, 2024

Introduction:
(The Conversation) A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.

These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. As a sociologist and doctoral candidate interested in science, technology, engineering and math education, we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What’s more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master's students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.
Read more here: https://theconversation.com/profiles/e ... h-389355

Re: AI alignment and ethics

Posted: Mon Apr 22, 2024 6:30 am
by wjfox

Re: AI alignment and ethics

Posted: Mon Apr 22, 2024 8:46 pm
by weatheriscool
wjfox wrote: Mon Apr 22, 2024 6:30 am

This fucker literally wants us to live in the stone age. I am not kidding when I say that their entire agenda is exactly that.

Re: AI alignment and ethics

Posted: Mon Apr 22, 2024 9:25 pm
by raklian
wjfox wrote: Mon Apr 22, 2024 6:30 am

Re: AI alignment and ethics

Posted: Tue May 07, 2024 9:31 am
by wjfox


Re: AI alignment and ethics

Posted: Tue May 07, 2024 2:05 pm
by firestar464
Depends whether they're considered people or not. You can't subpoena a monkey or a parrot.

Re: AI alignment and ethics

Posted: Fri May 10, 2024 8:55 pm
by firestar464
Turing test study shows humans rate artificial intelligence as more 'moral' than other people

https://techxplore.com/news/2024-05-tur ... moral.html

Re: AI alignment and ethics

Posted: Wed May 15, 2024 10:25 am
by wjfox
Ilya Sutskever and Jan Leike have now both resigned. They led OpenAI's "Superalignment" work.

Meta approves anti-Muslim AI-generated ads in India

Posted: Wed May 22, 2024 5:28 pm
by Tadasuke
We humans don't agree on how we align, so how can our digital progeny be aligned? We currently have no common alignment! :(
Some people predicted this.

Re: AI alignment and ethics

Posted: Sat May 25, 2024 8:47 am
by wjfox
Big tech has distracted world from existential risk of AI, says top scientist

Max Tegmark argues that the downplaying is not accidental and threatens to delay, until it’s too late, the strict regulations needed

Sat 25 May 2024 06.00 BST

Big tech has succeeded in distracting the world from the existential risk to humanity that artificial intelligence still poses, a leading scientist and AI campaigner has warned.

Speaking with the Guardian at the AI Summit in Seoul, South Korea, Max Tegmark said the shift in focus from the extinction of life to a broader conception of safety of artificial intelligence risked an unacceptable delay in imposing strict regulation on the creators of the most powerful programs.

“In 1942, Enrico Fermi built the first ever reactor with a self-sustaining nuclear chain reaction under a Chicago football field,” Tegmark, who trained as a physicist, said. “When the top physicists at the time found out about that, they really freaked out, because they realised that the single biggest hurdle remaining to building a nuclear bomb had just been overcome. They realised that it was just a few years away – and in fact, it was three years, with the Trinity test in 1945.

“AI models that can pass the Turing test [where someone cannot tell in conversation that they are not speaking to another human] are the same warning for the kind of AI that you can lose control over. That’s why you get people like Geoffrey Hinton and Yoshua Bengio – and even a lot of tech CEOs, at least in private – freaking out now.”

https://www.theguardian.com/technology/ ... egulations