AI predictions

=Capability Scaling=
* 2024-06: Aidan McLaughlin: [https://yellow-apartment-148.notion.site/AI-Search-The-Bitter-er-Lesson-44c11acd27294f4495c3de778cd09c8d AI Search: The Bitter-er Lesson]
* 2025-03: [https://arxiv.org/abs/2503.14499 Measuring AI Ability to Complete Long Tasks]
** 2025-04: [https://peterwildeford.substack.com/p/forecaster-reacts-metrs-bombshell Forecaster reacts: METR's bombshell paper about AI acceleration] New data supports an exponential AI curve, but lots of uncertainty remains (see the illustrative extrapolation sketch after this list)
** 2025-04: AI Digest: [https://theaidigest.org/time-horizons A new Moore's Law for AI agents]
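
A minimal, illustrative extrapolation sketch (not code from the METR paper or the AI Digest post): if AI task time horizons kept doubling at a fixed rate, this is what the coming years would look like. The roughly seven-month doubling time, the March 2025 baseline date, and the one-hour baseline horizon are rough assumptions for illustration, not figures quoted from the linked sources.

<syntaxhighlight lang="python">
# Hypothetical extrapolation of an exponential "task time horizon" trend.
# Baseline date, baseline horizon, and doubling time are assumptions,
# not values taken directly from the linked sources.
from datetime import date

def horizon_minutes(on: date,
                    baseline: date = date(2025, 3, 1),
                    baseline_minutes: float = 60.0,
                    doubling_months: float = 7.0) -> float:
    """Task time horizon (in minutes) if the exponential trend simply continued."""
    months = (on.year - baseline.year) * 12 + (on.month - baseline.month)
    return baseline_minutes * 2 ** (months / doubling_months)

if __name__ == "__main__":
    for year in (2026, 2027, 2028, 2029, 2030):
        h = horizon_minutes(date(year, 3, 1))
        print(f"{year}: ~{h / 60:.1f} hours")
</syntaxhighlight>

Under these assumed parameters the horizon passes a month of full-time work (~160 hours) around 2029; shifting the doubling time by a couple of months moves that crossing by a year or more, which is the "lots of uncertainty" caveat in the bullet above.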
 
[[Image:GmZHL8xWQAAtFlF.jpeg|450px]]

* 2025-04: [https://epoch.ai/blog/trends-in-ai-supercomputers Trends in AI Supercomputers]
* [https://ai-timeline.org/ The Road to AGI] (timeline visualization)

==AGI Definition==
 
* 2023-11: Allan Dafoe, Shane Legg, et al.: [https://arxiv.org/abs/2311.02462 Levels of AGI for Operationalizing Progress on the Path to AGI]
* 2024-04: Bowen Xu: [https://arxiv.org/abs/2404.10731 What is Meant by AGI? On the Definition of Artificial General Intelligence]

==Progress Models==
[[Image:AI impact models01.png|450px]]
 
=Economic and Political=
 
* 2024-12: [https://www.lesswrong.com/posts/KFFaKu27FNugCHFmh/by-default-capital-will-matter-more-than-ever-after-agi By default, capital will matter more than ever after AGI] (L Rudolf L)
* 2025-01: [https://lukedrago.substack.com/p/the-intelligence-curse The Intelligence Curse]: With AGI, powerful actors will lose their incentives to invest in people
** Updated 2025-04: [https://intelligence-curse.ai/ The Intelligence Curse] (Luke Drago and Rudolf Laine)
*** [https://intelligence-curse.ai/pyramid/ Pyramid Replacement]
*** [https://intelligence-curse.ai/capital/ Capital, AGI, and Human Ambition]
*** [https://intelligence-curse.ai/defining/ Defining the Intelligence Curse]
*** [https://intelligence-curse.ai/shaping/ Shaping the Social Contract]
*** [https://intelligence-curse.ai/breaking/ Breaking the Intelligence Curse]
*** [https://intelligence-curse.ai/history/ History is Yours to Write]
* 2025-01: Microsoft: [https://blogs.microsoft.com/on-the-issues/2025/01/03/the-golden-opportunity-for-american-ai/ The Golden Opportunity for American AI]
* 2025-01: [https://www.maximum-progress.com/p/agi-will-not-make-labor-worthless AGI Will Not Make Labor Worthless]
* 2025-03: [https://www.anthropic.com/news/anthropic-economic-index-insights-from-claude-sonnet-3-7 Anthropic Economic Index: Insights from Claude 3.7 Sonnet]
* 2025-04: [https://inferencemagazine.substack.com/p/will-there-be-extreme-inequality Will there be extreme inequality from AI?]
* 2025-04: [https://www.anthropic.com/research/impact-software-development Anthropic Economic Index: AI’s Impact on Software Development]
* 2025-05: [https://www.theguardian.com/books/2025/may/04/the-big-idea-can-we-stop-ai-making-humans-obsolete Better at everything: how AI could make human beings irrelevant]
  
 
==Job Loss==
 
* 2025-01: [https://www.aporiamagazine.com/p/yes-youre-going-to-be-replaced Yes, you're going to be replaced: So much cope about AI]
* 2025-03: [https://commonplace.org/2025/03/20/will-ai-automate-away-your-job/ Will AI Automate Away Your Job? The time-horizon model explains the future of the technology]
* 2025-05: [https://www.forbes.com/sites/jackkelly/2025/05/04/its-time-to-get-concerned-klarna-ups-duolingo-cisco-and-many-other-companies-are-replacing-workers-with-ai/ It’s Time To Get Concerned, Klarna, UPS, Duolingo, Cisco, And Many Other Companies Are Replacing Workers With AI]

==National Security==
* 2025-04: Jeremie Harris and Edouard Harris: [https://superintelligence.gladstone.ai/ America’s Superintelligence Project]
  
 
=Near-term=
 
* 2025-03: [https://www.pathwaysai.org/p/glimpses-of-ai-progess Glimpses of AI Progress: Mental models for fast times]
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
* 2025-04: Stanford HAI: [https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf Artificial Intelligence Index Report 2025]
* 2025-04: Arvind Narayanan and Sayash Kapoor: [https://kfai-documents.s3.amazonaws.com/documents/c3cac5a2a7/AI-as-Normal-Technology---Narayanan---Kapoor.pdf AI as Normal Technology]
* 2025-04: Dwarkesh Patel: [https://www.dwarkesh.com/p/questions-about-ai Questions about the Future of AI]
  
 
=Overall=
* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We’re Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."
  
 
==Surveys of Opinions/Predictions==
* 2016-06: [https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/ 2016 Expert Survey on Progress in AI]
** 2023-03: [https://aiimpacts.org/scoring-forecasts-from-the-2016-expert-survey-on-progress-in-ai/ Scoring forecasts from the 2016 “Expert Survey on Progress in AI”]
* 2023-08: [https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai 2023 Expert Survey on Progress in AI]
* 2024-01: [https://arxiv.org/abs/2401.02843 Thousands of AI Authors on the Future of AI]
* 2025-02: [https://arxiv.org/abs/2502.14870 Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts]
* 2025-02: Nicholas Carlini: [https://nicholas.carlini.com/writing/2025/forecasting-ai-2025-update.html AI forecasting retrospective: you're (probably) over-confident] (see the calibration sketch after this list)
* 2025-04: Helen Toner: [https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have "Long" timelines to advanced AI have gotten crazy short]
* 2025-05: [https://theaidigest.org/ai2025-analysis-may AI 2025 Forecasts - May Update]
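
As a companion to the forecasting-retrospective entry above, here is a minimal sketch of the kind of calibration check such retrospectives perform: compare stated confidence against realized accuracy and compute a Brier score. The forecasts below are made-up placeholder values, not data from any of the linked surveys or posts.

<syntaxhighlight lang="python">
# Toy calibration check on invented (probability, outcome) pairs.
from statistics import mean

forecasts = [  # (stated probability the event happens, whether it happened)
    (0.90, True), (0.90, False), (0.85, True), (0.80, True),
    (0.80, False), (0.70, True), (0.95, False), (0.60, False),
]

# Brier score: mean squared error of the stated probabilities
# (0 is perfect; always answering 50% scores 0.25).
brier = mean((p - float(hit)) ** 2 for p, hit in forecasts)

# Hit rate on high-confidence calls; a rate well below the stated
# probabilities indicates over-confidence.
high = [(p, hit) for p, hit in forecasts if p >= 0.80]
hit_rate = mean(float(hit) for _, hit in high)

print(f"Brier score: {brier:.3f}")
print(f"Hit rate on >=80% calls: {hit_rate:.0%} (stated ~{mean(p for p, _ in high):.0%})")
</syntaxhighlight>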
  
 
==Bad Outcomes==
* [https://pauseai.info/pdoom List of p(doom) values]
 
* 2019-03: [https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like What failure looks like]
* 2023-03: gwern: [https://gwern.net/fiction/clippy It Looks Like You’re Trying To Take Over The World]
* 2025-02: [https://thezvi.substack.com/p/the-risk-of-gradual-disempowerment The Risk of Gradual Disempowerment from AI]
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
* 2025-04: [https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power AI-Enabled Coups: How a Small Group Could Use AI to Seize Power]
  
 
==Intelligence Explosion==
* 2025-02: [https://www.forethought.org/research/three-types-of-intelligence-explosion Three Types of Intelligence Explosion]
* 2025-03: Future of Life Institute: [https://futureoflife.org/ai/are-we-close-to-an-intelligence-explosion/ Are we close to an intelligence explosion?] AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.
* 2025-03: Forethought: [https://www.forethought.org/research/will-ai-r-and-d-automation-cause-a-software-intelligence-explosion Will AI R&D Automation Cause a Software Intelligence Explosion?] (see the toy-model sketch after this list)
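
The software-intelligence-explosion question above largely turns on a returns parameter: does each unit of AI-driven research progress make the next unit easier or harder to get? A deliberately crude toy model (an illustrative sketch of that general idea, not the model used in the Forethought analysis) shows how the qualitative behavior flips around r = 1. The parameter values and the discrete-time form are assumptions.

<syntaxhighlight lang="python">
# Toy feedback model: capability grows at a rate proportional to capability**r.
#   r < 1  -> growth slows down (diminishing returns)
#   r == 1 -> steady exponential growth
#   r > 1  -> accelerating, "explosive" growth (each doubling arrives faster)
def simulate(r: float, steps: int = 8, capability: float = 1.0, dt: float = 1.0):
    trajectory = [capability]
    for _ in range(steps):
        capability += dt * capability ** r
        trajectory.append(capability)
    return trajectory

for r in (0.5, 1.0, 1.5):
    print(f"r = {r}: " + ", ".join(f"{c:.1f}" for c in simulate(r)))
</syntaxhighlight>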

==Superintelligence==
* 2024-10: [http://yager-research.ca/2024/10/how-smart-will-asi-be/ How Smart will ASI be?]
* 2024-11: [http://yager-research.ca/2024/11/concise-argument-for-asi-risk/ Concise Argument for ASI Risk]
* 2025-03: [https://dynomight.net/smart/ Limits of smart]
* 2025-05: [https://timfduffy.substack.com/p/the-limits-of-superintelligence?manualredirect= The Limits of Superintelligence]
  
 
==Long-range/Philosophy==

==Plans==
 
* [https://controlai.com/ Control AI]: [https://controlai.com/dip The Direct Institutional Plan]
* Luke Drago and L Rudolf L: [https://lukedrago.substack.com/p/the-use-of-knowledge-in-agi-society?triedRedirect=true The use of knowledge in (AGI) society]: How to build to break the [https://lukedrago.substack.com/p/the-intelligence-curse intelligence curse]
* [https://www.agisocialcontract.org/ AGI Social Contract]
** [https://www.agisocialcontract.org/forging-a-new-agi-social-contract Forging A New AGI Social Contract]
* Yoshua Bengio: [https://time.com/7283507/safer-ai-development/ A Potential Path to Safer AI Development]
** 2025-02: [https://arxiv.org/abs/2502.15657 Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?]
  
 
==Philosophy==
 
[[Image:GlchEeObwAQ88NK.jpeg|300px]]

* 2025-04: Scott Alexander (Astral Codex Ten): [https://www.astralcodexten.com/p/the-colors-of-her-coat The Colors Of Her Coat] (response to [https://www.theintrinsicperspective.com/p/welcome-to-the-semantic-apocalypse semantic apocalypse] and semantic satiation)
* 2025-05: Helen Toner: [https://www.ai-frontiers.org/articles/were-arguing-about-ai-safety-wrong We’re Arguing About AI Safety Wrong]: Dynamism vs. stasis is a clearer lens for criticizing controversial AI safety prescriptions
  
 
==Alignment==
 
*# [https://joecarlsmith.substack.com/p/paths-and-waystations-in-ai-safety Paths and waystations in AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16768804-paths-and-waystations-in-ai-safety audio version])
*# [https://joecarlsmith.substack.com/p/ai-for-ai-safety AI for AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16790183-ai-for-ai-safety audio version])
*# [https://joecarlsmith.substack.com/p/can-we-safely-automate-alignment Can we safely automate alignment research?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17069901-can-we-safely-automate-alignment-research audio version], [https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-automating?utm_source=post-email-title&publication_id=1022275&post_id=162375391&utm_campaign=email-post-title&isFreemail=true&r=5av1bk&triedRedirect=true&utm_medium=email video version])
*# [https://joecarlsmith.substack.com/p/the-stakes-of-ai-moral-status The stakes of AI moral status]
* 2025-04: Dario Amodei: [https://www.darioamodei.com/post/the-urgency-of-interpretability The Urgency of Interpretability]
  
 
==Strategic/Technical==
  
 
==Strategic/Policy==
* 2015-03: Sam Altman: [https://blog.samaltman.com/machine-intelligence-part-2 Machine intelligence, part 2]
* 2019-07: Amanda Askell, Miles Brundage, Gillian Hadfield: [https://arxiv.org/abs/1907.04534 The Role of Cooperation in Responsible AI Development]
* 2025-03: Dan Hendrycks, Eric Schmidt, Alexandr Wang: [https://www.nationalsecurity.ai/ Superintelligence Strategy]
**# Tiered Regulation: Low regulation on tool-AI, strictest regulation on AGI (general, capable, autonomous systems).
* 2025-04: Helen Toner: [https://helentoner.substack.com/p/nonproliferation-is-the-wrong-approach?source=queue Nonproliferation is the wrong approach to AI misuse]
* 2025-04: MIRI: [https://techgov.intelligence.org/research/ai-governance-to-avoid-extinction AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions]
* 2025-05: [https://writing.antonleicht.me/p/the-new-ai-policy-frontier The New AI Policy Frontier]: Beyond the shortcomings of centralised control and alignment, a new school of thought on AI governance emerges. It still faces tricky politics.
  
 
=See Also=
* [[AI safety]]
