AI predictions

From GISAXS. Latest revision as of 14:42, 4 April 2025.
 
=Capability Scaling=
* 2019-03: Rich Sutton: [https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf The Bitter Lesson]
* 2020-09: Ajeya Cotra: [https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines Draft report on AI timelines]
* 2022-01: gwern: [https://gwern.net/scaling-hypothesis The Scaling Hypothesis]
* 2023-05: Richard Ngo: [https://www.lesswrong.com/posts/BoA3agdkAzL6HQtQP/clarifying-and-predicting-agi Clarifying and predicting AGI]
* 2024-06: Aidan McLaughlin: [https://yellow-apartment-148.notion.site/AI-Search-The-Bitter-er-Lesson-44c11acd27294f4495c3de778cd09c8d AI Search: The Bitter-er Lesson]
* 2025-03: [https://arxiv.org/abs/2503.14499 Measuring AI Ability to Complete Long Tasks]

[[Image:GmZHL8xWQAAtFlF.jpeg|450px]]
==AGI Achievable==

==AGI Definition==
=Economic and Political=
* 2023-10: Marc Andreessen: [https://a16z.com/the-techno-optimist-manifesto/ The Techno-Optimist Manifesto]
* 2023-12: [https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html My techno-optimism]: "defensive acceleration" ([https://vitalik.eth.limo/index.html Vitalik Buterin])
* 2024-03: Noah Smith: [https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the Plentiful, high-paying jobs in the age of AI: Comparative advantage is very subtle, but incredibly powerful.] ([https://x.com/liron/status/1768013030741475485 video])
==Job Loss==

[[Image:F-kVQuvWkAAemkr.png]]
=Near-term=
* 2021-08: Daniel Kokotajlo: [https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like What 2026 looks like]
* 2025-02: Sam Altman: [https://blog.samaltman.com/three-observations Three Observations]
*# The intelligence of an AI model roughly equals the log of the resources used to train and run it.
*# The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.
*# The socioeconomic value of linearly increasing intelligence is super-exponential in nature.
* 2025-03: [https://www.pathwaysai.org/p/glimpses-of-ai-progess Glimpses of AI Progress: Mental models for fast times]
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
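Altman's first two observations are quantitative enough to sketch in code. The functions and constants below are illustrative assumptions for this page (a base-10 log law and an exact 10x-per-year cost decline), not values published in the post:

```python
import math

# Illustrative sketch of the first two observations; the specific
# constants here are assumptions, not figures from Altman's post.

def intelligence(resources: float) -> float:
    # Observation 1: capability grows roughly like the log of the
    # resources used to train and run the model.
    return math.log10(resources)

def cost_of_level(initial_cost: float, years: float) -> float:
    # Observation 2: the cost to use a fixed capability level falls
    # about 10x every 12 months.
    return initial_cost * 0.1 ** years

# Under a log law, a 1000x jump in resources adds only a constant
# increment of "intelligence" (three log-units here):
gain = intelligence(1e9) - intelligence(1e6)

# A capability level costing $10 per task today costs about $0.10
# per task two years later under the 10x-per-year decline:
later_cost = cost_of_level(10.0, 2.0)
```

Observation 3 is what makes the first two economically interesting: even though each extra log-unit of capability costs exponentially more compute, the post claims its socioeconomic value grows faster still.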
  
 
=Overall=
* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We’re Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously, whether you’re an optimist or a pessimist.
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."
  
 
==Surveys of Opinions/Predictions==
* 2023-08: [https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai 2023 Expert Survey on Progress in AI]
* 2025-02: [https://arxiv.org/abs/2502.14870 Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts]
* 2025-02: Nicholas Carlini: [https://nicholas.carlini.com/writing/2025/forecasting-ai-2025-update.html AI forecasting retrospective: you're (probably) over-confident]
 
==Bad Outcomes==
* 2019-03: [https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like What failure looks like]
* 2023-03: gwern: [https://gwern.net/fiction/clippy It Looks Like You’re Trying To Take Over The World]
* 2025-01: [https://arxiv.org/abs/2501.16946 Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development] ([https://gradual-disempowerment.ai/ web version])
** 2025-02: [https://thezvi.substack.com/p/the-risk-of-gradual-disempowerment The Risk of Gradual Disempowerment from AI]
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
  
 
==Intelligence Explosion==

==Psychology==

==Science & Technology Improvements==

==Plans==

==Philosophy==

[[Image:GlchEeObwAQ88NK.jpeg]]

==Alignment==
* 2023-03: Leopold Aschenbrenner: [https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/ Nobody’s on the ball on AGI alignment]
* 2024-03: [https://static1.squarespace.com/static/65392ca578eee444c445c9de/t/6606f95edb20e8118074a344/1711733370985/human-values-and-alignment-29MAR2024.pdf What are human values, and how do we align AI to them?] ([https://meaningalignment.substack.com/p/0480e023-98c0-4633-a604-990d3ac880ac blog])
* 2025: Joe Carlsmith: [https://joecarlsmith.substack.com/p/how-do-we-solve-the-alignment-problem How do we solve the alignment problem?] Introduction to an essay series on paths to safe, useful superintelligence
*# [https://joecarlsmith.substack.com/p/what-is-it-to-solve-the-alignment What is it to solve the alignment problem?] Also: to avoid it? Handle it? Solve it forever? Solve it completely? ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16617671-what-is-it-to-solve-the-alignment-problem audio version])
*# [https://joecarlsmith.substack.com/p/when-should-we-worry-about-ai-power When should we worry about AI power-seeking?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16651469-when-should-we-worry-about-ai-power-seeking audio version])

==Strategic/Technical==

==Strategic/Policy==

=See Also=