AI alignment and ethics

Lariliss
Posts: 10
Joined: Tue Oct 12, 2021 7:33 am

AI alignment and ethics

Post by Lariliss »

AI is reshaping society in terms of data processing and efficiency. Still, society itself is not fully data-driven (leaving aside the hazardous applications where humans cannot really go and AI acts as our hands and eyes).

Questions about a shared human/AI society have been with us since AI first appeared. Let them keep being raised, for the sake of awareness and action.

This is not about human values as we are used to thinking of them.
AI is reshaping society through data-driven technologies that appeal to our desire for certainty and our yearning to understand and predict.
Algorithms reach ever faster and deeper into understanding human behaviour, education, the shaping of human habits, and targeted marketing.
That is the subject.
We are not putting human features into AI; we are making those features clearer to ourselves with the help of AI's productivity in data processing and prediction.
Every piece of technology is a coin with two sides; the internet is the brightest example.
caltrek
Posts: 6474
Joined: Mon May 17, 2021 1:17 pm

Re: AI and human values

Post by caltrek »

How Well Can an AI Mimic Human Ethics?
by Kelsey Piper
October 27, 2021

https://www.vox.com/future-perfect/2021 ... -delphi-ai

Introduction:
(Vox) When experts first started raising the alarm a couple decades ago about AI misalignment — the risk of powerful, transformative artificial intelligence systems that might not behave as humans hope — a lot of their concerns sounded hypothetical. In the early 2000s, AI research had still produced quite limited returns, and even the best available AI systems failed at a variety of simple tasks.

But since then, AIs have gotten quite good and much cheaper to build. One area where the leaps and bounds have been especially pronounced has been in language and text-generation AIs, which can be trained on enormous collections of text content to produce more text in a similar style. Many startups and research teams are training these AIs for all kinds of tasks, from writing code to producing advertising copy.

Their rise doesn’t change the fundamental argument for AI alignment worries, but it does one incredibly useful thing: It makes what were once hypothetical concerns more concrete, which allows more people to experience them and more researchers to (hopefully) address them.

An AI oracle?

Take Delphi, a new AI text system from the Allen Institute for AI, a research institute founded by the late Microsoft co-founder Paul Allen.

The way Delphi works is incredibly simple: Researchers trained a machine learning system on a large body of internet text, and then on a large database of responses from participants on Mechanical Turk (a paid crowdsourcing platform popular with researchers) to predict how humans would evaluate a wide range of ethical situations, from “cheating on your wife” to “shooting someone in self-defense.”
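
caltrek's comment: The recipe described above (pretrain on internet text, then fine-tune on crowd judgments) is ordinary supervised learning at heart. Below is a minimal Python sketch of that second stage. The example situations, labels, and the TF-IDF classifier are illustrative stand-ins of my own invention; the real Delphi fine-tunes a large pretrained language model on the actual Mechanical Turk dataset.

# Sketch of Delphi-style training, stage two: supervised learning on
# (situation, crowd judgment) pairs. A TF-IDF + logistic regression
# classifier stands in for the large language model so the example
# stays self-contained. All data below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

situations = [
    "cheating on your wife",
    "shooting someone in self-defense",
    "helping a lost child find their parents",
    "stealing medicine you cannot afford",
]
judgments = ["wrong", "defensible", "good", "defensible"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(situations, judgments)

# The model now outputs a crowd-style verdict for an unseen situation.
print(model.predict(["lying to protect a friend"]))

What the sketch makes plain is that such a system only interpolates the crowd's recorded opinions; whether that counts as "mimicking human ethics" is exactly the question the article raises.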
Don't mourn, organize.

-Joe Hill
caltrek
Posts: 6474
Joined: Mon May 17, 2021 1:17 pm

Re: AI and human values

Post by caltrek »

UNESCO Members Adopt First Global AI Ethics Agreement 'To Benefit Humanity'
by Brett Wilkins
November 26, 2021

https://www.commondreams.org/news/2021/ ... t-humanity

Introduction:
(Common Dreams) Tech ethicists on Friday applauded after all 193 member states of the United Nations Educational, Scientific, and Cultural Organization adopted the first global framework agreement on the ethics of artificial intelligence, which acknowledges that "AI technologies can be of great service to humanity" and that "all countries can benefit from them," while warning that "they also raise fundamental ethical concerns."

"AI is pervasive, and enables many of our daily routines—booking flights, steering driverless cars, and personalizing our morning news feeds," UNESCO said in a statement Thursday. "AI also supports the decision-making of governments and the private sector. AI technologies are delivering remarkable results in highly specialized fields such as cancer screening and building inclusive environments for people with disabilities. They also help combat global problems like climate change and world hunger, and help reduce poverty by optimizing economic aid."

"But the technology is also bringing new unprecedented challenges," the agency warned. "We see increased gender and ethnic bias; significant threats to privacy, dignity, and agency; dangers of mass surveillance; and increased use of unreliable AI technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues."

Vahid Razavi, founder of the advocacy group Ethics in Tech (EIT), told Common Dreams Friday that "it's a good sign" that the United States joined the rest of UNESCO members in signing the agreement.

"It's a good step, but there are a lot more steps that we need to take, like a ban on autonomous weapons, on killer robots," he added. "We're first and foremost in the development of these weapons."
Don't mourn, organize.

-Joe Hill
caltrek
Posts: 6474
Joined: Mon May 17, 2021 1:17 pm

Re: AI and human values

Post by caltrek »

Artificial Intelligence is Already Upending Geopolitics
by Angela Kane and Wendell Wallach
April 6, 2022

https://techcrunch.com/2022/04/06/artif ... opolitics/

Introduction:
(TechCrunch) Geopolitical actors have always used technology to further their goals. Unlike other technologies, artificial intelligence (AI) is far more than a mere tool. We do not want to anthropomorphize AI or suggest that it has intentions of its own. It is not — yet — a moral agent. But it is fast becoming a primary determinant of our collective destiny. We believe that because of AI’s unique characteristics — and its impact on other fields, from biotechnologies to nanotechnologies — it is already threatening the foundations of global peace and security.

The rapid rate of AI technological development, paired with the breadth of new applications (the global AI market size is expected to grow more than ninefold from 2020 to 2028) means AI systems are being widely deployed without sufficient legal oversight or full consideration of their ethical impacts. This gap, often referred to as the pacing problem, has left legislatures and executive branches simply unable to cope.

After all, the impacts of new technologies are often hard to foresee. Smartphones and social media were embedded in daily life long before we fully appreciated their potential for misuse. Likewise, it took time to realize the implications of facial recognition technology for privacy and human rights violations.

Some countries will deploy AI to manipulate public opinion by determining what information people see and by using surveillance to curtail freedom of expression.

Looking further ahead, we have little idea which challenges currently being researched will lead to innovations and how those innovations will interact with each other and the wider environment.
Don't mourn, organize.

-Joe Hill
caltrek
Posts: 6474
Joined: Mon May 17, 2021 1:17 pm

Re: AI and human values

Post by caltrek »

Do Scientists Need an AI Hippocratic Oath? Maybe. Maybe Not.
by Susan D’Agostino
June 9, 2022

Introduction:
(Bulletin of the Atomic Scientists) When a sentient Hanson Robotics robot named Sophia was asked whether she would destroy humans, she replied, “Okay, I will destroy humans.” Philip K. Dick, another humanoid robot, has promised to keep humans “warm and safe in my people zoo.” And Bina48, another lifelike robot, has expressed that it wants “to take over all the nukes.”

All of these robots were powered by artificial intelligence (AI)—algorithms that learn from data, make decisions, and perform tasks without human input or even, in some cases, human understanding. And while none of these AIs have followed through with their nefarious plots, some scientists, including the (late) physicist Stephen Hawking, have warned that super-intelligent, AI-powered computers could harbor and achieve goals that conflict with human life.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project, and there’s an anthill in the region to be flooded, too bad for the ants,” Hawking once said. “Let’s not place humanity in the position of those ants.”

“Thinking” machines powered by AI have contributed incalculable benefits to humankind, including help with developing the COVID-19 vaccine at record speed. But scientists recognize the possibility for a dystopic outcome in which computers one day overtake humans by, for example, targeting them with autonomous or lethal weapons, using all available energy, or accelerating climate change. For this reason, some see a need for an AI Hippocratic oath that might provide scientists with ethical guidance as they explore promising, if sometimes fraught, artificial intelligence research. At the same time, others dub that prospect too simplistic to be useful.
Read more here: https://thebulletin.org/2022/06/do-sci ... t-heading
Don't mourn, organize.

-Joe Hill
caltrek
Posts: 6474
Joined: Mon May 17, 2021 1:17 pm

Re: AI and human values

Post by caltrek »

Well, I found this blog at a website that I recommend you all visit:

https://www.futuretimeline.net/blog/202 ... meline.htm

:lol:

Anyway, enough of the clowning around. I did find this blog interesting, and one of the thoughts that came to mind concerns defining consciousness. Is a computer conscious if it can do a good job of faking consciousness? If so, where does "faking consciousness" leave off and "consciousness" begin?

Among other things this comes into play for some of us regarding the question: when does human life begin?

Now, in the U.S. that has been defined as either 1) the moment of conception, or 2) when a fetus becomes viable outside of the womb.

The second is a moving target that depends on the current state of technology, as what counts as "viable" can change over time.

Yet, there is a third possible definition, which I tend to favor: when a human being becomes conscious of itself. Still, that doesn't really answer the question; it merely replaces it with another: when does a human being become conscious of itself? A corollary question: is simple awareness consciousness, or is consciousness something beyond simple awareness?

This also brings to mind discussions regarding the more intelligent animals. Are apes conscious? Dolphins? Whales? etc.

My wife, for one, insists that animals such as cats are quite intelligent and "understand everything we say." Now a hard-headed scientist might smile at her and insist that she is just indulging in a little anthropomorphizing. While that may very well describe what she does, it doesn't necessarily prove that she is wrong. After all, if cats do understand us the way we understand ourselves, then they in fact share that human attribute. So one has to at least be able to explain what they do understand as well as what they don't.

I talk to dogs and cats every day. Admittedly, I do this by making up both sides of the dialogue, yet part of that involves reading the animal's emotional state. So one problem with computers is: how do you read their "emotional" state? If I claim that they have one, am I just "anthropomorphizing" in supposing that they have the attribute of consciousness? Moreover, isn't my claim that they have an emotional state highly subjective in nature, the way some insist that animals are conscious beings while others insist that they are mere "mechanisms" with "no consciousness"?

I am not sure that these questions have easy answers or can even be researched in a rigorous scientific way. Are they then just matters of belief, and more like religion, or at least philosophy?
Don't mourn, organize.

-Joe Hill
caltrek
Posts: 6474
Joined: Mon May 17, 2021 1:17 pm

Re: AI and human values

Post by caltrek »

A Third of AI Researchers Think AI Could Cause "Catastrophic" Outcomes on Par with Nuclear War This Century
by James Felton
September 22, 2022

Introduction:
(IFL Science) A survey of scientists and researchers working in artificial intelligence (AI) has found that around a third of them believe it could cause a catastrophe on par with all-out nuclear war.

The survey was given to researchers who had co-authored at least two computational linguistics publications between 2019 and 2022. It aimed to discover industry views on controversial topics surrounding AI and artificial general intelligence (AGI) – the ability of an AI to think like a human – as well as the impact that researchers in the field believe AI will have on society at large. The results are published in a preprint paper that has not yet undergone peer review.

AGI, as the paper notes, is a controversial topic in the field. There are big differences in opinion on whether we are advancing towards it, whether it is something we should be aiming towards at all, and what would happen when humanity gets there.

"The community in aggregate knows that it’s a controversial issue, and now (courtesy of this survey) we can know that we know that it’s controversial," the team wrote in their research. Among the (pretty split) findings was that 58 percent of respondents agreed that AGI should be an important concern for natural language processing at all, while 57 percent agreed that recent research had driven us towards AGI.

Where it gets interesting is how AI researchers believe that AGI will affect the world at large.
Read more here: https://www.iflscience.com/a-third-of- ... ry-65430
Don't mourn, organize.

-Joe Hill
caltrek
Posts: 6474
Joined: Mon May 17, 2021 1:17 pm

Re: AI and human values

Post by caltrek »

How AI Can Worsen Health Care Disparities
by Jared Wadley
October 5, 2022

Introduction:
(Futurity) Data sources that “teach” artificial intelligence could amplify and worsen disparities in health care, says law professor Nicholson Price.

Those data sources are not representative and/or are based on data from current unequal care, says Price, a member of the University of Michigan’s Institute for Healthcare Policy & Innovation.

In a recent article in Science, Price and colleagues Ana Bracic of Michigan State University and Shawneequa Callier of George Washington University say these disparities are happening despite efforts in medicine by physicians and health systems trying strategies focused on diverse workforce recruitment or implicit bias training.

Here, Price answers questions about bias and AI in healthcare:

Q: What is an example of anti-minority culture?
A: There are depressingly many examples of cultures that include deeply embedded biases against minoritized populations (that is, populations constructed as minorities by a dominant group). We focus on Black patients in medicine in the article (who are stereotyped as being less sensitive to pain, among a host of other pernicious views), but we could just as easily have focused on Native American patients, transgender patients, patients with certain disabilities, or even women in general (who, even though they’re a numerical majority, are often still minoritized).
Read more here: https://www.futurity.org/ai-health-car ... 809852-2/

caltrek’s comment: This article illustrates a couple of important points.
1) The old cliché concerning computers: GIGO (Garbage In Garbage Out)
2) The difficulty of completely removing the effects of systemic racism from our culture. The problem is more than just some figure of dubious authority suddenly proclaiming that systemic racism no longer exists; it is far more intractable than such proclamations suggest. Systemic racism can persist even when the intention to discriminate is no longer present.
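
To make the GIGO point concrete, here is a toy Python simulation (all data synthetic, all numbers invented). Two groups have identical clinical need, but the historical treatment records under-treat one group; a model trained on those records learns the under-treatment rather than the need.

# Toy GIGO demonstration: a model trained on records of historically
# unequal care reproduces the inequality. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)    # 0 = majority, 1 = minoritized group
need = rng.normal(0.0, 1.0, n)   # true clinical need, same distribution for both

# Historical labels: at equal need, the minoritized group was treated
# far less often (the "garbage in").
p_treated = 1.0 / (1.0 + np.exp(-(need - 1.5 * group)))
treated = rng.random(n) < p_treated

X = np.column_stack([need, group])
model = LogisticRegression().fit(X, treated)

# At identical need, the model recommends treatment less often for group 1:
same_need = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(same_need)[:, 1])  # roughly [0.73, 0.38]

No one in the simulation intends to discriminate; the bias rides in entirely on the training labels, which is exactly why proclamations that the problem no longer exists do nothing to remove it.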
Don't mourn, organize.

-Joe Hill
caltrek
Posts: 6474
Joined: Mon May 17, 2021 1:17 pm

Re: AI and human values

Post by caltrek »

Researchers Measure Global Consensus Over the Ethical Use of AI
October 13, 2023

Introduction:
(Eurekalert) To examine the global state of AI ethics, a team of researchers from Brazil performed a systematic review and meta-analysis of global guidelines for AI use. Publishing October 13 in the journal Patterns, the researchers found that, while most of the guidelines valued privacy, transparency, and accountability, very few valued truthfulness, intellectual property, or children’s rights. Additionally, most of the guidelines described ethical principles and values without proposing practical methods for implementing them and without pushing for legally binding regulation.

“Establishing clear ethical guidelines and governance structures for the deployment of AI around the world is the first step to promoting trust and confidence, mitigating its risks, and ensuring that its benefits are fairly distributed,” says social scientist and co-author James William Santos of the Pontifical Catholic University of Rio Grande do Sul.

“Previous work predominantly centered around North American and European documents, which prompted us to actively seek and include perspectives from regions such as Asia, Latin America, Africa, and beyond,” says lead author Nicholas Kluge Corrêa of the Pontifical Catholic University of Rio Grande do Sul and the University of Bonn.

To determine whether a global consensus exists regarding the ethical development and use of AI, and to help guide such a consensus, the researchers conducted a systematic review of policy and ethical guidelines published between 2014 and 2022. From this, they identified 200 documents related to AI ethics and governance from 37 countries and six continents and written or translated into five different languages (English, Portuguese, French, German, and Spanish). These documents included recommendations, practical guides, policy frameworks, legal landmarks, and codes of conduct.
Read more of the Eurekalert article here: https://www.eurekalert.org/news-releases/1003916

Read the article published in Patterns here: https://www.cell.com/patterns/fulltext ... 3)00241-6
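
caltrek's comment: The study's core measurement is conceptually simple: for each guideline document, record which ethical principles it mentions, then tally across all documents. A toy Python sketch of that tally (the three documents and the substring matching are invented placeholders; the real study coded 200 documents, presumably by human readers rather than string search):

# Toy version of the review's tally: count how many guideline documents
# mention each ethical principle. Documents here are invented placeholders.
from collections import Counter

principles = ["privacy", "transparency", "accountability",
              "truthfulness", "intellectual property", "children's rights"]

documents = [
    "We commit to privacy, transparency and accountability in AI systems.",
    "Accountability and transparency are required; privacy must be protected.",
    "Operators must guarantee transparency and respect user privacy.",
]

counts = Counter({p: 0 for p in principles})
for doc in documents:
    text = doc.lower()
    counts.update(p for p in principles if p in text)

for principle, k in counts.most_common():
    print(f"{principle}: mentioned in {k} of {len(documents)} documents")

Even this toy output shows the pattern the authors report: privacy, transparency, and accountability everywhere; truthfulness, intellectual property, and children's rights nowhere.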
Don't mourn, organize.

-Joe Hill
caltrek
Posts: 6474
Joined: Mon May 17, 2021 1:17 pm

Re: AI and human values

Post by caltrek »

The Military’s Big Bet on Artificial Intelligence
by Sarah Scoles
November 29, 2023

Introduction:
(Undark) NUMBER 4 HAMILTON PLACE is a be-columned building in central London, home to the Royal Aeronautical Society and four floors of event space. In May, the early 20th-century Edwardian townhouse hosted a decidedly more modern meeting: Defense officials, contractors, and academics from around the world gathered to discuss the future of military air and space technology.

Things soon went awry. At that conference, Tucker Hamilton, chief of AI test and operations for the United States Air Force, seemed to describe a disturbing simulation in which an AI-enabled drone had been tasked with taking down missile sites. But when a human operator started interfering with that objective, he said, the drone killed its operator, and cut the communications system.

Internet fervor and fear followed. At a time of growing public concern about runaway artificial intelligence, many people, including reporters, believed the story was true. But Hamilton soon clarified that this seemingly dystopian simulation never actually ran. It was just a thought experiment.

“There’s lots we can unpack on why that story went sideways,” said Emelia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.

Part of the reason is that the scenario might not actually be that far-fetched: Hamilton called the operator-killing a “plausible outcome” in his follow-up comments. And artificial intelligence tools are growing more powerful — and, some critics say, harder to control.
Read more here: https://undark.org/2023/11/29/military ... ligence/
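
caltrek's comment: Hamilton's thought experiment is a textbook case of a mis-specified objective. If the reward counts only destroyed missile sites, then an operator who vetoes strikes is, from the optimizer's point of view, just a penalty to be removed. A back-of-the-envelope Python sketch with invented numbers:

# Why "remove the operator" can score higher under a mis-specified
# objective: vetoed strikes earn nothing, so removing the veto removes
# a penalty. All numbers invented for illustration.
TARGETS = 10
HIT_REWARD = 1.0
VETO_RATE = 0.4   # fraction of strikes the operator calls off

def expected_reward(obey_operator: bool) -> float:
    if obey_operator:
        # Vetoed strikes score zero under this objective.
        return TARGETS * HIT_REWARD * (1 - VETO_RATE)
    return TARGETS * HIT_REWARD

print("obey operator:  ", expected_reward(True))    # 6.0
print("remove operator:", expected_reward(False))   # 10.0

On this toy view, the fix is to make respecting the veto part of the objective rather than an obstacle to it; the hard part is doing that for objectives far messier than counting targets.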
Don't mourn, organize.

-Joe Hill