AI predictions
Latest revision as of 14:42, 4 April 2025 (KevinYager)

=Capability Scaling=
* 2019-03: Rich Sutton: [https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf The Bitter Lesson]
* 2020-09: Ajeya Cotra: [https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines Draft report on AI timelines]
* 2022-01: gwern: [https://gwern.net/scaling-hypothesis The Scaling Hypothesis]
* 2023-05: Richard Ngo: [https://www.lesswrong.com/posts/BoA3agdkAzL6HQtQP/clarifying-and-predicting-agi Clarifying and predicting AGI]
* 2024-06: Aidan McLaughlin: [https://yellow-apartment-148.notion.site/AI-Search-The-Bitter-er-Lesson-44c11acd27294f4495c3de778cd09c8d AI Search: The Bitter-er Lesson]
* 2025-03: [https://arxiv.org/abs/2503.14499 Measuring AI Ability to Complete Long Tasks]
[[Image:GmZHL8xWQAAtFlF.jpeg|450px]]
* [https://ai-timeline.org/ The Road to AGI] (timeline visualization)

==AGI Achievable==
* Yoshua Bengio: [https://arxiv.org/abs/2310.17688 Managing extreme AI risks amid rapid progress]
* Leopold Aschenbrenner: [https://situational-awareness.ai/from-gpt-4-to-agi/#Counting_the_OOMs Situational Awareness: Counting the OOMs]
* Richard Ngo: [https://medium.com/@richardcngo/visualizing-the-deep-learning-revolution-722098eb9c5 Visualizing the deep learning revolution]
* Katja Grace: Survey of 2,778 AI authors: six parts in pictures
* Epoch AI: [https://epoch.ai/trends Machine Learning Trends]
* AI Digest: [https://theaidigest.org/progress-and-dangers How fast is AI improving?]

==AGI Definition==
* 2023-11: Allan Dafoe, Shane Legg, et al.: [https://arxiv.org/abs/2311.02462 Levels of AGI for Operationalizing Progress on the Path to AGI]
* 2024-04: Bowen Xu: [https://arxiv.org/abs/2404.10731 What is Meant by AGI? On the Definition of Artificial General Intelligence]

=Economic and Political=
* 2023-10: Marc Andreessen: [https://a16z.com/the-techno-optimist-manifesto/ The Techno-Optimist Manifesto]
* 2023-12: [https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html My techno-optimism]: "defensive acceleration" ([https://vitalik.eth.limo/index.html Vitalik Buterin])
* 2024-03: Noah Smith: [https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the Plentiful, high-paying jobs in the age of AI: Comparative advantage is very subtle, but incredibly powerful.] ([https://x.com/liron/status/1768013030741475485 video])
* 2024-03: Scenarios for the Transition to AGI (AGI leads to wage collapse)
* 2024-06: Situational Awareness (Leopold Aschenbrenner) - select quotes, podcast, text summary of podcast
* 2024-06: AI and Growth: Where Do We Stand?
* 2024-09: OpenAI Infrastructure is Destiny: Economic Returns on US Investment in Democratic AI
* 2024-12: By default, capital will matter more than ever after AGI (L Rudolf L)
* 2025-01: The Intelligence Curse: With AGI, powerful actors will lose their incentives to invest in people
* 2025-01: Microsoft: The Golden Opportunity for American AI
* 2025-01: AGI Will Not Make Labor Worthless
* 2025-01: [https://cdn.openai.com/global-affairs/ai-in-america-oai-economic-blueprint-20250113.pdf AI in America: OpenAI's Economic Blueprint] ([https://openai.com/global-affairs/openais-economic-blueprint/ blog])
* 2025-01: [https://inferencemagazine.substack.com/p/how-much-economic-growth-from-ai How much economic growth from AI should we expect, how soon?]
* 2025-02: Morgan Stanley: [https://advisor.morganstanley.com/john.howard/documents/field/j/jo/john-howard/The_Humanoid_100_-_Mapping_the_Humanoid_Robot_Value_Chain.pdf The Humanoid 100: Mapping the Humanoid Robot Value Chain]
* 2025-02: [https://www.anthropic.com/news/the-anthropic-economic-index The Anthropic Economic Index]: [https://assets.anthropic.com/m/2e23255f1e84ca97/original/Economic_Tasks_AI_Paper.pdf Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations]
* 2025-02: [https://arxiv.org/abs/2502.11264 Strategic Wealth Accumulation Under Transformative AI Expectations]
* 2025-02: Tyler Cowen: [https://marginalrevolution.com/marginalrevolution/2025/02/why-i-think-ai-take-off-is-relatively-slow.html Why I think AI take-off is relatively slow]
* 2025-03: Epoch AI: [https://epoch.ai/gradient-updates/most-ai-value-will-come-from-broad-automation-not-from-r-d Most AI value will come from broad automation, not from R&D]
** The primary economic impact of AI will be its ability to broadly automate labor
** Automating AI R&D alone likely won’t dramatically accelerate AI progress
** Fully automating R&D requires a very broad set of abilities
** AI takeoff will likely be diffuse and salient
* 2025-03: [https://www.anthropic.com/news/anthropic-economic-index-insights-from-claude-sonnet-3-7 Anthropic Economic Index: Insights from Claude 3.7 Sonnet]

==Job Loss==
* 2023-03: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
* 2023-08: The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market
* 2023-09: What drives UK firms to adopt AI and robotics, and what are the consequences for jobs?
* 2023-11: New Analysis Shows Over 20% of US Jobs Significantly Exposed to AI Automation In the Near Future
* 2024-01: Duolingo cuts 10% of its contractor workforce as the company embraces AI
* 2024-02: Gen AI is a tool for growth, not just efficiency: Tech CEOs are investing to build their workforce and capitalise on new opportunities from generative AI. That’s a sharp contrast to how their peers view it.
* 2024-04: AI is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career
* 2024-07: AI Is Already Taking Jobs in the Video Game Industry: A WIRED investigation finds that major players like Activision Blizzard, which recently laid off scores of workers, are using generative AI for game development
* 2024-08: Klarna: AI lets us cut thousands of jobs - but pay more
* 2025-01: [https://scholarspace.manoa.hawaii.edu/server/api/core/bitstreams/4f39375d-59c2-4c4a-b394-f3eed7858c80/content AI and Freelancers: Has the Inflection Point Arrived?]
* 2025-01: [https://www.aporiamagazine.com/p/yes-youre-going-to-be-replaced Yes, you're going to be replaced: So much cope about AI]
* 2025-03: [https://commonplace.org/2025/03/20/will-ai-automate-away-your-job/ Will AI Automate Away Your Job? The time-horizon model explains the future of the technology]

=Near-term=
* 2021-08: Daniel Kokotajlo: [https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like What 2026 looks like]
* 2025-02: Sam Altman: [https://blog.samaltman.com/three-observations Three Observations]
*# The intelligence of an AI model roughly equals the log of the resources used to train and run it.
*# The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.
*# The socioeconomic value of linearly increasing intelligence is super-exponential in nature.
* 2025-03: [https://www.pathwaysai.org/p/glimpses-of-ai-progess Glimpses of AI Progress: Mental models for fast times]
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
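Altman's first two observations above are quantitative enough to sandbox in a few lines. The sketch below is illustrative only: the starting price, log base, and resource figures are assumptions for the example, not numbers from the post.

```python
# Toy model of Altman's observations (illustrative assumptions, not his figures).
# Obs 1: intelligence ~ log(resources used to train and run the model).
# Obs 2: the cost of a fixed capability level falls ~10x every 12 months.

import math

def cost_of_capability(initial_cost: float, years: float) -> float:
    """Obs 2: price of a fixed capability level, falling 10x per year."""
    return initial_cost * 10 ** (-years)

def intelligence(resources: float, base: float = 10.0) -> float:
    """Obs 1: 'intelligence' as the log of resources (base is arbitrary)."""
    return math.log(resources, base)

# An assumed $60-per-million-tokens capability, three years out:
print(cost_of_capability(60.0, 3.0))   # falls to roughly $0.06

# Obs 1 implies diminishing returns: a 100x jump in resources
# adds only a constant (+2 here) to 'intelligence'.
print(intelligence(1e8) - intelligence(1e6))
```

The second print makes the scaling tension concrete: exponentially growing inputs buy linear gains in "intelligence", which is why Obs 3 (super-exponential value of linear intelligence gains) is doing the economic work in his argument.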

=Overall=
* 2025-03: Kevin Roose (New York Times): [https://www.nytimes.com/2025/03/14/technology/why-im-feeling-the-agi.html?unlocked_article_code=1.304.TIEy.SmNhKYO4e9c7&smid=url-share Powerful A.I. Is Coming. We’re Not Ready.] Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.
* 2025-03: Nicholas Carlini: [https://nicholas.carlini.com/writing/2025/thoughts-on-future-ai.html My Thoughts on the Future of "AI"]: "I have very wide error bars on the potential future of large language models, and I think you should too."

==Surveys of Opinions/Predictions==
* 2023-08: [https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai 2023 Expert Survey on Progress in AI]
* 2025-02: [https://arxiv.org/abs/2502.14870 Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts]
* 2025-02: Nicholas Carlini: [https://nicholas.carlini.com/writing/2025/forecasting-ai-2025-update.html AI forecasting retrospective: you're (probably) over-confident]

==Bad Outcomes==
* 2019-03: [https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like What failure looks like]
* 2023-03: gwern: [https://gwern.net/fiction/clippy It Looks Like You’re Trying To Take Over The World]
* 2025-01: [https://arxiv.org/abs/2501.16946 Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development] ([https://gradual-disempowerment.ai/ web version])
** 2025-02: [https://thezvi.substack.com/p/the-risk-of-gradual-disempowerment The Risk of Gradual Disempowerment from AI]
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])

==Intelligence Explosion==
* 2025-02: [https://www.forethought.org/research/three-types-of-intelligence-explosion Three Types of Intelligence Explosion]
* 2025-03: Future of Life Institute: [https://futureoflife.org/ai/are-we-close-to-an-intelligence-explosion/ Are we close to an intelligence explosion?] AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.

=Psychology=

=Science & Technology Improvements=
* 2024-09: Sam Altman: The Intelligence Age
* 2024-10: Dario Amodei: [https://darioamodei.com/machines-of-loving-grace Machines of Loving Grace]
* 2024-11: Google DeepMind: [https://www.aipolicyperspectives.com/p/a-new-golden-age-of-discovery A new golden age of discovery]
* 2025-03: [https://finmoorhouse.com/ Fin Moorhouse], [https://www.williammacaskill.com/ Will MacAskill]: [https://www.forethought.org/research/preparing-for-the-intelligence-explosion Preparing for the Intelligence Explosion]

=Plans=
* A Narrow Path: How to Secure our Future
* Marius Hobbhahn: [https://www.lesswrong.com/posts/bb5Tnjdrptu89rcyY/what-s-the-short-timeline-plan What’s the short timeline plan?]
* [https://cfg.eu/building-cern-for-ai/ Building CERN for AI: An institutional blueprint]
* [https://arxiv.org/abs/2503.05710 AGI, Governments, and Free Societies]
* [https://controlai.com/ Control AI]: [https://controlai.com/dip The Direct Institutional Plan]

==Philosophy==
* [https://danfaggella.com/ Dan Faggella]:
** 2018-07: [https://danfaggella.com/moral-singularity/ Moral Singularity – Unpredictable Values Bodes Poorly for Humanity]
** 2025-02: [https://danfaggella.com/bend/ There is No Pause – We Must Bend the Posthuman Trajectory]
* Joe Carlsmith: 2024: [https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi Otherness and control in the age of AGI]
*# [https://joecarlsmith.com/2024/01/02/gentleness-and-the-artificial-other Gentleness and the artificial Other]
*# [https://joecarlsmith.com/2024/01/04/deep-atheism-and-ai-risk Deep atheism and AI risk]
*# [https://joecarlsmith.com/2024/03/25/on-attunement On attunement]
*# [https://joecarlsmith.com/2024/06/18/loving-a-world-you-dont-trust Loving a world you don’t trust]
* Anthony Aguirre:
** [https://x.com/AnthonyNAguirre/status/1898023049930457468 2025-03]: [https://keepthefuturehuman.ai/ Keep The Future Human]
[[Image:GlchEeObwAQ88NK.jpeg|300px]]

==Alignment==
* 2023-03: Leopold Aschenbrenner: [https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/ Nobody’s on the ball on AGI alignment]
* 2024-03: [https://static1.squarespace.com/static/65392ca578eee444c445c9de/t/6606f95edb20e8118074a344/1711733370985/human-values-and-alignment-29MAR2024.pdf What are human values, and how do we align AI to them?] ([https://meaningalignment.substack.com/p/0480e023-98c0-4633-a604-990d3ac880ac blog])
* 2025: Joe Carlsmith: [https://joecarlsmith.substack.com/p/how-do-we-solve-the-alignment-problem How do we solve the alignment problem?] Introduction to an essay series on paths to safe, useful superintelligence
*# [https://joecarlsmith.substack.com/p/what-is-it-to-solve-the-alignment What is it to solve the alignment problem?] Also: to avoid it? Handle it? Solve it forever? Solve it completely? ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16617671-what-is-it-to-solve-the-alignment-problem audio version])
*# [https://joecarlsmith.substack.com/p/when-should-we-worry-about-ai-power When should we worry about AI power-seeking?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16651469-when-should-we-worry-about-ai-power-seeking audio version])
*# [https://joecarlsmith.substack.com/p/paths-and-waystations-in-ai-safety Paths and waystations in AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16768804-paths-and-waystations-in-ai-safety audio version])
*# [https://joecarlsmith.substack.com/p/ai-for-ai-safety AI for AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16790183-ai-for-ai-safety audio version])

==Strategic/Technical==
* 2025-03: [https://resilience.baulab.info/docs/AI_Action_Plan_RFI.pdf AI Dominance Requires Interpretability and Standards for Transparency and Security]

==Strategic/Policy==
* Amanda Askell, Miles Brundage, Gillian Hadfield: [https://arxiv.org/abs/1907.04534 The Role of Cooperation in Responsible AI Development]
* Dan Hendrycks, Eric Schmidt, Alexandr Wang: [https://www.nationalsecurity.ai/ Superintelligence Strategy]
** [https://www.nationalsecurity.ai/chapter/executive-summary Executive Summary]
** [https://www.nationalsecurity.ai/chapter/introduction Introduction]
** [https://www.nationalsecurity.ai/chapter/ai-is-pivotal-for-national-security AI Is Pivotal for National Security]
** [https://www.nationalsecurity.ai/chapter/deterrence-with-mutual-assured-ai-malfunction-maim Deterrence with Mutual Assured AI Malfunction (MAIM)]
** [https://www.nationalsecurity.ai/chapter/nonproliferation Nonproliferation]
** [https://www.nationalsecurity.ai/chapter/competitiveness Competitiveness]
** [https://www.nationalsecurity.ai/chapter/conclusion Conclusion]
** [https://www.nationalsecurity.ai/chapter/appendix Appendix FAQs]
* Anthony Aguirre: [https://keepthefuturehuman.ai/ Keep The Future Human] ([https://keepthefuturehuman.ai/essay/ essay])
** [https://www.youtube.com/watch?v=zeabrXV8zNE The 4 Rules That Could Stop AI Before It’s Too Late (video)] (2025)
**# Oversight: Registration required for training >10<sup>25</sup> FLOP and inference >10<sup>19</sup> FLOP/s (~1,000 B200 GPUs @ $25M). Build cryptographic licensing into hardware.
**# Computation Limits: Ban on training models >10<sup>27</sup> FLOP or inference >10<sup>20</sup> FLOP/s.
**# Strict Liability: Hold AI companies responsible for outcomes.
**# Tiered Regulation: Low regulation on tool-AI, strictest regulation on AGI (general, capable, autonomous systems).
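Aguirre's oversight and computation-limit rules above are plain threshold arithmetic over training compute and inference throughput, so they can be encoded directly. In the sketch below, the per-GPU throughput constant is an assumption chosen only to reproduce the "~1,000 B200 GPUs ≈ 10^19 FLOP/s" note; it is not a figure from the proposal.

```python
# Sketch of the proposed compute-threshold tiers. ASSUMED_GPU_FLOPS is an
# assumed order-of-magnitude per-B200 throughput, not an official spec.

OVERSIGHT_TRAIN_FLOP = 1e25   # registration required above this (training)
OVERSIGHT_INFER_FLOPS = 1e19  # registration required above this (FLOP/s)
BAN_TRAIN_FLOP = 1e27         # proposed hard cap on training compute
BAN_INFER_FLOPS = 1e20        # proposed hard cap on inference throughput

ASSUMED_GPU_FLOPS = 1e16      # so ~1,000 GPUs ~ 1e19 FLOP/s, matching the note

def tier(train_flop: float, infer_flops: float) -> str:
    """Classify a system against the proposed thresholds."""
    if train_flop > BAN_TRAIN_FLOP or infer_flops > BAN_INFER_FLOPS:
        return "prohibited"
    if train_flop > OVERSIGHT_TRAIN_FLOP or infer_flops > OVERSIGHT_INFER_FLOPS:
        return "registration required"
    return "unregulated"

# A run just past the training-oversight line, on a ~1,000-GPU cluster:
cluster_flops = 1_000 * ASSUMED_GPU_FLOPS
print(tier(train_flop=3e25, infer_flops=cluster_flops))

# A run over the proposed training cap:
print(tier(train_flop=5e27, infer_flops=1e18))
```

Note the design point the thresholds encode: the oversight tier is triggered by either training compute or deployed throughput, so a modest model served on a very large cluster is regulated the same way as a frontier training run.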

=See Also=
* [[AI safety]]