AI predictions

=Economic and Political=
 
** AI takeoff will likely be diffuse and salient
 
* 2025-03: [https://www.anthropic.com/news/anthropic-economic-index-insights-from-claude-sonnet-3-7 Anthropic Economic Index: Insights from Claude 3.7 Sonnet]
 
* 2025-04: [https://inferencemagazine.substack.com/p/will-there-be-extreme-inequality Will there be extreme inequality from AI?]
  
 
==Job Loss==
 
 
* 2025-03: [https://www.pathwaysai.org/p/glimpses-of-ai-progess Glimpses of AI Progress: Mental models for fast times]
 
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
 
* 2025-04: Stanford HAI: [https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf Artificial Intelligence Index Report 2025]
* 2025-04: Arvind Narayanan and Sayash Kapoor: [https://kfai-documents.s3.amazonaws.com/documents/c3cac5a2a7/AI-as-Normal-Technology---Narayanan---Kapoor.pdf AI as Normal Technology]
  
 
=Overall=
 
* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We're Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously, whether you're an optimist or a pessimist.
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."
  
 
==Surveys of Opinions/Predictions==
 
* 2016-06: [https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/ 2016 Expert Survey on Progress in AI]
** 2023-03: [https://aiimpacts.org/scoring-forecasts-from-the-2016-expert-survey-on-progress-in-ai/ Scoring forecasts from the 2016 “Expert Survey on Progress in AI”]
 
* 2023-08: [https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai 2023 Expert Survey on Progress in AI]
 
* 2024-01: [https://arxiv.org/abs/2401.02843 Thousands of AI Authors on the Future of AI]
 
* 2025-02: [https://arxiv.org/abs/2502.14870 Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts]
 
* 2025-02: Nicholas Carlini: [https://nicholas.carlini.com/writing/2025/forecasting-ai-2025-update.html AI forecasting retrospective: you're (probably) over-confident]
 
==Bad Outcomes==
 
** 2025-02: [https://thezvi.substack.com/p/the-risk-of-gradual-disempowerment The Risk of Gradual Disempowerment from AI]
 
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
 
* 2025-04: [https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power AI-Enabled Coups: How a Small Group Could Use AI to Seize Power]
  
 
==Intelligence Explosion==
 
* 2025-02: [https://www.forethought.org/research/three-types-of-intelligence-explosion Three Types of Intelligence Explosion]
 
* 2025-03: Future of Life Institute: [https://futureoflife.org/ai/are-we-close-to-an-intelligence-explosion/ Are we close to an intelligence explosion?] AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.
 
* 2025-03: Forethought: [https://www.forethought.org/research/will-ai-r-and-d-automation-cause-a-software-intelligence-explosion Will AI R&D Automation Cause a Software Intelligence Explosion?]
==Long-range/Philosophy==
* 2023-03: Dan Hendrycks: [https://arxiv.org/abs/2303.16200 Natural Selection Favors AIs over Humans]
  
 
=Psychology=
 
=Plans=
 
* [https://arxiv.org/abs/2503.05710 AGI, Governments, and Free Societies]
 
* [https://controlai.com/ Control AI]: [https://controlai.com/dip The Direct Institutional Plan]  
 
* Luke Drago and L Rudolf L: [https://lukedrago.substack.com/p/the-use-of-knowledge-in-agi-society?triedRedirect=true The use of knowledge in (AGI) society]: How to build to break the [https://lukedrago.substack.com/p/the-intelligence-curse intelligence curse]
* [https://www.agisocialcontract.org/ AGI Social Contract]
** [https://www.agisocialcontract.org/forging-a-new-agi-social-contract Forging A New AGI Social Contract]
  
 
==Philosophy==
 
 
** [https://x.com/AnthonyNAguirre/status/1898023049930457468 2025-03]: [https://keepthefuturehuman.ai/ Keep The Future Human]
 
[[Image:GlchEeObwAQ88NK.jpeg|300px]]
 
* 2025-04: Scott Alexander (Astral Codex Ten): [https://www.astralcodexten.com/p/the-colors-of-her-coat The Colors Of Her Coat] (response to [https://www.theintrinsicperspective.com/p/welcome-to-the-semantic-apocalypse semantic apocalypse] and semantic satiation)
  
 
==Alignment==
 
  
 
==Strategic/Policy==
 
* 2015-03: Sam Altman: [https://blog.samaltman.com/machine-intelligence-part-2 Machine intelligence, part 2]
 
* 2019-07: Amanda Askell, Miles Brundage, Gillian Hadfield: [https://arxiv.org/abs/1907.04534 The Role of Cooperation in Responsible AI Development]
 
* 2025-03: Dan Hendrycks, Eric Schmidt, Alexandr Wang: [https://www.nationalsecurity.ai/ Superintelligence Strategy]
 