Difference between revisions of "AI predictions"

From GISAXS
=Capability Scaling=
* 2019-03: Rich Sutton: [https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf The Bitter Lesson]
* 2020-09: Ajeya Cotra: [https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines Draft report on AI timelines]
* 2022-01: gwern: [https://gwern.net/scaling-hypothesis The Scaling Hypothesis]
* 2023-05: Richard Ngo: [https://www.lesswrong.com/posts/BoA3agdkAzL6HQtQP/clarifying-and-predicting-agi Clarifying and predicting AGI]
* 2024-06: Aidan McLaughlin: [https://yellow-apartment-148.notion.site/AI-Search-The-Bitter-er-Lesson-44c11acd27294f4495c3de778cd09c8d AI Search: The Bitter-er Lesson]
* 2025-03: [https://arxiv.org/abs/2503.14499 Measuring AI Ability to Complete Long Tasks]

[[Image:GmZHL8xWQAAtFlF.jpeg|450px]]
* [https://ai-timeline.org/ The Road to AGI] (timeline visualization)
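The METR paper linked above reports an exponential trend: the length of tasks (measured in how long they take skilled humans) that frontier models can complete at 50% reliability has been doubling roughly every 7 months. A minimal sketch of that extrapolation; the one-hour starting horizon is a made-up example value, not a number from the paper:

```python
# Illustrative extrapolation of the trend reported in arXiv:2503.14499
# ("Measuring AI Ability to Complete Long Tasks"): the 50%-reliability task
# horizon of frontier AI has doubled roughly every 7 months.
# The default starting horizon below is a hypothetical example value.

def task_horizon_minutes(months_from_now: float,
                         current_horizon_minutes: float = 60.0,
                         doubling_time_months: float = 7.0) -> float:
    """Task length (in human-minutes) completable at 50% reliability,
    assuming the exponential doubling trend continues unchanged."""
    return current_horizon_minutes * 2.0 ** (months_from_now / doubling_time_months)

# After one doubling period the horizon is twice the starting value:
print(task_horizon_minutes(7.0))   # 120.0
# After 28 months (four doublings) it is 16x the starting value:
print(task_horizon_minutes(28.0))  # 960.0
```

Whether the trend continues is exactly what the predictions on this page disagree about; the sketch only shows what sustained doubling implies.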

==AGI Achievable==
* Yoshua Bengio: [https://arxiv.org/abs/2310.17688 Managing extreme AI risks amid rapid progress]
 
* Leopold Aschenbrenner: [https://situational-awareness.ai/from-gpt-4-to-agi/#Counting_the_OOMs Situational Awareness: Counting the OOMs]
 
=Economic and Political=
 
* 2023-10: Marc Andreessen: [https://a16z.com/the-techno-optimist-manifesto/ The Techno-Optimist Manifesto]
 
* 2023-12: [https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html My techno-optimism]: "defensive acceleration" ([https://vitalik.eth.limo/index.html Vitalik Buterin])
 
* 2024-03: Noah Smith: [https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the Plentiful, high-paying jobs in the age of AI: Comparative advantage is very subtle, but incredibly powerful.] ([https://x.com/liron/status/1768013030741475485 video])
 
* 2025-02: [https://arxiv.org/abs/2502.11264 Strategic Wealth Accumulation Under Transformative AI Expectations]
 
* 2025-02: Tyler Cowen: [https://marginalrevolution.com/marginalrevolution/2025/02/why-i-think-ai-take-off-is-relatively-slow.html Why I think AI take-off is relatively slow]
 
* 2025-03: Epoch AI: [https://epoch.ai/gradient-updates/most-ai-value-will-come-from-broad-automation-not-from-r-d Most AI value will come from broad automation, not from R&D]
** The primary economic impact of AI will be its ability to broadly automate labor
** Automating AI R&D alone likely won’t dramatically accelerate AI progress
** Fully automating R&D requires a very broad set of abilities
** AI takeoff will likely be diffuse and salient
* 2025-03: [https://www.anthropic.com/news/anthropic-economic-index-insights-from-claude-sonnet-3-7 Anthropic Economic Index: Insights from Claude 3.7 Sonnet]
  
 
==Job Loss==
 
* 2025-01: [https://scholarspace.manoa.hawaii.edu/server/api/core/bitstreams/4f39375d-59c2-4c4a-b394-f3eed7858c80/content AI and Freelancers: Has the Inflection Point Arrived?]
 
* 2025-01: [https://www.aporiamagazine.com/p/yes-youre-going-to-be-replaced Yes, you're going to be replaced: So much cope about AI]
 
* 2025-03: [https://commonplace.org/2025/03/20/will-ai-automate-away-your-job/ Will AI Automate Away Your Job? The time-horizon model explains the future of the technology]

=Near-term=
* 2021-08: Daniel Kokotajlo: [https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like What 2026 looks like]
* 2025-02: Sam Altman: [https://blog.samaltman.com/three-observations Three Observations]
*# The intelligence of an AI model roughly equals the log of the resources used to train and run it.
*# The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.
*# The socioeconomic value of linearly increasing intelligence is super-exponential in nature.
* 2025-03: [https://www.pathwaysai.org/p/glimpses-of-ai-progess Glimpses of AI Progress: Mental models for fast times]
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
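Altman's first two observations can be turned into a toy quantitative model. This sketch only restates the claimed rates (intelligence scaling with the log of resources; a fixed capability level getting ~10x cheaper per year); the unit resource and $1 cost are made-up example values, and the "arbitrary units" of intelligence are an assumption of the sketch:

```python
import math

# Toy model of observations 1 and 2 above (illustrative only; units and
# starting values are hypothetical, not from the post).

def intelligence(resources: float) -> float:
    """Observation 1: intelligence ~ log of training/run resources
    (arbitrary units)."""
    return math.log10(resources)

def cost_of_capability(years_from_now: float, cost_today: float = 1.0) -> float:
    """Observation 2: the cost of a fixed capability level falls ~10x
    every 12 months."""
    return cost_today / 10.0 ** years_from_now

# Observation 1 implies each equal step in intelligence needs 10x the resources:
print(intelligence(1e3), intelligence(1e4))   # 3.0 4.0
# Observation 2: a $1.00 task today costs about $0.001 in three years:
print(cost_of_capability(3.0))                # 0.001
```

Observation 3 then says that these logarithmically earned intelligence gains are rewarded super-exponentially in economic value, which is why the post treats continued scaling as worthwhile.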
  
 
=Overall=

* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We’re Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."
 
==Surveys of Opinions/Predictions==
 
* 2023-08: [https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai 2023 Expert Survey on Progress in AI]
 
* 2025-02: [https://arxiv.org/abs/2502.14870 Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts]
 
* 2025-02: Nicholas Carlini: [https://nicholas.carlini.com/writing/2025/forecasting-ai-2025-update.html AI forecasting retrospective: you're (probably) over-confident]
  
 
==Bad Outcomes==
 
* 2019-03: [https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like What failure looks like]
 
* 2023-03: gwern: [https://gwern.net/fiction/clippy It Looks Like You’re Trying To Take Over The World]
 
* 2025-01: [https://arxiv.org/abs/2501.16946 Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development] ([https://gradual-disempowerment.ai/ web version])
 
** 2025-02: [https://thezvi.substack.com/p/the-risk-of-gradual-disempowerment The Risk of Gradual Disempowerment from AI]
 
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
  
 
==Intelligence Explosion==
 
* 2025-02: [https://www.forethought.org/research/three-types-of-intelligence-explosion Three Types of Intelligence Explosion]
 
* 2025-03: Future of Life Institute: [https://futureoflife.org/ai/are-we-close-to-an-intelligence-explosion/ Are we close to an intelligence explosion?] AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.
  
 
=Psychology=
 
 
* [https://cfg.eu/building-cern-for-ai/ Building CERN for AI: An institutional blueprint]
 
* [https://arxiv.org/abs/2503.05710 AGI, Governments, and Free Societies]
 
* [https://controlai.com/ Control AI]: [https://controlai.com/dip The Direct Institutional Plan]
  
 
==Philosophy==
 
  
 
==Alignment==
 
* 2023-03: Leopold Aschenbrenner: [https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/ Nobody’s on the ball on AGI alignment]
* 2024-03: [https://static1.squarespace.com/static/65392ca578eee444c445c9de/t/6606f95edb20e8118074a344/1711733370985/human-values-and-alignment-29MAR2024.pdf What are human values, and how do we align AI to them?] ([https://meaningalignment.substack.com/p/0480e023-98c0-4633-a604-990d3ac880ac blog])
* 2025: Joe Carlsmith: [https://joecarlsmith.substack.com/p/how-do-we-solve-the-alignment-problem How do we solve the alignment problem?] Introduction to an essay series on paths to safe, useful superintelligence
 
*# [https://joecarlsmith.substack.com/p/what-is-it-to-solve-the-alignment What is it to solve the alignment problem?] Also: to avoid it? Handle it? Solve it forever? Solve it completely? ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16617671-what-is-it-to-solve-the-alignment-problem audio version])
 
*# [https://joecarlsmith.substack.com/p/when-should-we-worry-about-ai-power When should we worry about AI power-seeking?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16651469-when-should-we-worry-about-ai-power-seeking audio version])
 
*# [https://joecarlsmith.substack.com/p/paths-and-waystations-in-ai-safety Paths and waystations in AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16768804-paths-and-waystations-in-ai-safety audio version])
 
*# [https://joecarlsmith.substack.com/p/ai-for-ai-safety AI for AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16790183-ai-for-ai-safety audio version])
 
==Strategic/Technical==
* 2025-03: [https://resilience.baulab.info/docs/AI_Action_Plan_RFI.pdf AI Dominance Requires Interpretability and Standards for Transparency and Security]
  
 
==Strategic/Policy==
 

Latest revision as of 14:42, 4 April 2025
