Existential risk from artificial general intelligence


Existential risk from artificial general intelligence is the idea that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.[1][2][3]

One argument goes as follows: human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass humanity in general intelligence and become superintelligent, then it could become difficult or impossible to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[4]

The plausibility of existential catastrophe due to AI is widely debated, and hinges in part on whether AGI or superintelligence is achievable, the speed at which dangerous capabilities and behaviors emerge,[5] and whether practical scenarios for AI takeovers exist.[6] Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton,[7] Yoshua Bengio,[8] Alan Turing,[a] Elon Musk,[11] and OpenAI CEO Sam Altman.[12] In 2022, a survey of AI researchers with a 17% response rate found that the majority of respondents believed there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe.[13][14] In 2023, hundreds of AI experts and other notable figures signed a statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[15] Following increased concern over AI risks, government leaders such as United Kingdom prime minister Rishi Sunak[16] and United Nations Secretary-General António Guterres[17] called for an increased focus on global AI regulation.

Two sources of concern stem from the problems of AI control and alignment: it may be difficult to control a superintelligent machine or to instill it with human-compatible values. Many researchers believe that a superintelligent machine would resist attempts to disable it or change its goals, as that would prevent it from accomplishing its present goals. It would be extremely difficult to align a superintelligence with the full breadth of significant human values and constraints.[1][18][19] In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.[20]

A third source of concern is that a sudden "intelligence explosion" might take an unprepared human race by surprise. Such scenarios consider the possibility that an AI more intelligent than its creators could recursively improve itself at an exponentially increasing rate, advancing too quickly for its handlers and society at large to control.[1][18] Empirically, examples like AlphaZero teaching itself to play Go show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such systems do not involve the AI altering its own fundamental architecture.[21]

  1. ^ a b c Russell, Stuart; Norvig, Peter (2009). "26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  2. ^ Bostrom, Nick (2002). "Existential risks". Journal of Evolution and Technology. 9 (1): 1–31.
  3. ^ Turchin, Alexey; Denkenberger, David (3 May 2018). "Classification of global catastrophic risks connected with artificial intelligence". AI & Society. 35 (1): 147–163. doi:10.1007/s00146-018-0845-5. ISSN 0951-5666. S2CID 19208453.
  4. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). Oxford University Press. ISBN 978-0-19-967811-2.
  5. ^ Vynck, Gerrit De (23 May 2023). "The debate over whether AI will destroy us is dividing Silicon Valley". Washington Post. ISSN 0190-8286. Retrieved 27 July 2023.
  6. ^ Metz, Cade (10 June 2023). "How Could A.I. Destroy Humanity?". The New York Times. ISSN 0362-4331. Retrieved 27 July 2023.
  7. ^ "'Godfather of artificial intelligence' weighs in on the past and potential of AI". www.cbsnews.com. 25 March 2023. Retrieved 10 April 2023.
  8. ^ "How Rogue AIs may Arise". yoshuabengio.org. 26 May 2023. Retrieved 26 May 2023.
  9. ^ Turing, Alan (1951). Intelligent machinery, a heretical theory (Speech). Lecture given to '51 Society'. Manchester: The Turing Digital Archive. Archived from the original on 26 September 2022. Retrieved 22 July 2022.
  10. ^ Turing, Alan (15 May 1951). "Can digital computers think?". Automatic Calculating Machines. Episode 2. BBC. Can digital computers think?.
  11. ^ Parkin, Simon (14 June 2015). "Science fiction no more? Channel 4's Humans and our rogue AI obsessions". The Guardian. Archived from the original on 5 February 2018. Retrieved 5 February 2018.
  12. ^ Jackson, Sarah. "The CEO of the company behind AI chatbot ChatGPT says the worst-case scenario for artificial intelligence is 'lights out for all of us'". Business Insider. Retrieved 10 April 2023.
  13. ^ "The AI Dilemma". www.humanetech.com. Retrieved 10 April 2023. 50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI.
  14. ^ "2022 Expert Survey on Progress in AI". AI Impacts. 4 August 2022. Retrieved 10 April 2023.
  15. ^ Roose, Kevin (30 May 2023). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN 0362-4331. Retrieved 3 June 2023.
  16. ^ Sunak, Rishi (14 June 2023). "Rishi Sunak Wants the U.K. to Be a Key Player in Global AI Regulation". Time.
  17. ^ [citation not preserved in this copy]
  18. ^ a b Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). Global Catastrophic Risks: 308–345. Bibcode:2008gcr..book..303Y. Archived (PDF) from the original on 2 March 2013. Retrieved 27 August 2018.
  19. ^ Russell, Stuart; Dewey, Daniel; Tegmark, Max (2015). "Research Priorities for Robust and Beneficial Artificial Intelligence" (PDF). AI Magazine. Association for the Advancement of Artificial Intelligence: 105–114. arXiv:1602.03506. Bibcode:2016arXiv160203506R. Archived (PDF) from the original on 4 August 2019. Retrieved 10 August 2019., cited in "AI Open Letter - Future of Life Institute". Future of Life Institute. January 2015. Archived from the original on 10 August 2019. Retrieved 9 August 2019.
  20. ^ Dowd, Maureen (April 2017). "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse". The Hive. Archived from the original on 26 July 2018. Retrieved 27 November 2017.
  21. ^ "AlphaGo Zero: Starting from scratch". www.deepmind.com. 18 October 2017. Retrieved 28 July 2023.

