Statement on AI risk of extinction


On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short Statement on AI Risk:[1][2]

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

At release time, the signatories included over 100 professors of AI, among them the two most-cited computer scientists and Turing laureates Geoffrey Hinton and Yoshua Bengio, as well as scientific and executive leaders of several major AI companies and experts in pandemics, climate, nuclear disarmament, philosophy, social sciences, and other fields.[1][2] Media coverage emphasized the signatures of several tech leaders,[2] which was followed by concerns in other newspapers that the statement could be motivated by public relations or regulatory capture.[3] The statement was released shortly after an open letter calling for a pause on AI experiments.

The statement is hosted on the website of the AI research and advocacy non-profit Center for AI Safety. It was released with an accompanying text which states that it is still difficult to speak up about extreme risks of AI and that the statement aims to overcome this obstacle.[1] The center's CEO Dan Hendrycks stated that “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” are all examples of “important and urgent risks from AI… not just the risk of extinction” and added, “[s]ocieties can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’”[4]

The Prime Minister of the United Kingdom, Rishi Sunak, retweeted the statement and wrote, "The government is looking very carefully at this."[5] When asked about the statement, the White House Press Secretary, Karine Jean-Pierre, commented that AI "is one of the most powerful technologies that we see currently in our time. But in order to seize the opportunities it presents, we must first mitigate its risks."[6]

Among the well-known signatories are: Sam Altman, Bill Gates, Peter Singer, Daniel Dennett, Sam Harris, Grimes, Stuart Russell, Jaan Tallinn, Vitalik Buterin, David Chalmers, Ray Kurzweil, Max Tegmark, Lex Fridman, Martin Rees, Demis Hassabis, Dawn Song, Ted Lieu, Ilya Sutskever, Martin Hellman, Bill McKibben, Angela Kane, Audrey Tang, David Silver, Andrew Barto, Mira Murati, Pattie Maes, Eric Horvitz, Peter Norvig, Joseph Sifakis, Erik Brynjolfsson, Ian Goodfellow, Baburam Bhattarai, Kersti Kaljulaid, Rusty Schweickart, Nicholas Fairfax, David Haussler, Peter Railton, Bart Selman, Dustin Moskovitz, Scott Aaronson, Bruce Schneier, Martha Minow, Andrew Revkin, Rob Pike, Jacob Tsimerman, Ramy Youssef, James Pennebaker and Ronald C. Arkin.[7]

  1. ^ a b c "Statement on AI Risk | CAIS". www.safe.ai. Retrieved 2023-05-30.
  2. ^ a b c Roose, Kevin (2023-05-30). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN 0362-4331. Retrieved 2023-05-30.
  3. ^ Wong, Matteo (2023-06-02). "AI Doomerism Is a Decoy". The Atlantic. Retrieved 2023-12-26.
  4. ^ Lomas, Natasha (2023-05-30). "OpenAI's Altman and other AI giants back warning of advanced AI as 'extinction' risk". TechCrunch. Retrieved 2023-05-30.
  5. ^ "Artificial intelligence warning over human extinction – all you need to know". The Independent. 2023-05-31. Retrieved 2023-06-03.
  6. ^ "President Biden warns artificial intelligence could 'overtake human thinking'". USA TODAY. Retrieved 2023-06-03.
  7. ^ "Statement on AI Risk | CAIS". www.safe.ai. Retrieved 2024-03-18.
