AI predictions
Capability Scaling
- 2020-09: Ajeya Cotra: Draft report on AI timelines
- 2023-05: Richard Ngo: Clarifying and predicting AGI
- 2025-03: Measuring AI Ability to Complete Long Tasks (a doubling-trend sketch follows below)
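The headline result of the paper above is a scaling trend in task time-horizon: the length of task (measured by how long it takes a skilled human) that frontier models can complete at 50% reliability has been doubling roughly every 7 months. Below is a minimal extrapolation sketch; the ~7-month doubling time follows the reported trend, while the ~1-hour horizon anchor for early 2025 and the function names are illustrative assumptions rather than figures from the paper.

```python
# Illustrative extrapolation of the time-horizon trend (a sketch, not the
# paper's code). Assumptions: ~7-month doubling time (the reported trend,
# approximately) and a ~1-hour 50%-success horizon in early 2025 (a rough
# anchor chosen for illustration).
import math

DOUBLING_TIME_MONTHS = 7.0   # approximate reported doubling time
ANCHOR_HORIZON_HOURS = 1.0   # assumed 50%-success horizon at the anchor date
ANCHOR_YEAR = 2025.25        # early 2025

def horizon_hours(year: float) -> float:
    """Extrapolated 50%-success task horizon (hours) at a given date."""
    months = (year - ANCHOR_YEAR) * 12.0
    return ANCHOR_HORIZON_HOURS * 2.0 ** (months / DOUBLING_TIME_MONTHS)

def year_when(target_hours: float) -> float:
    """Date at which the extrapolated horizon reaches target_hours."""
    doublings = math.log2(target_hours / ANCHOR_HORIZON_HOURS)
    return ANCHOR_YEAR + doublings * DOUBLING_TIME_MONTHS / 12.0

if __name__ == "__main__":
    for y in (2025.25, 2026.25, 2027.25):
        print(f"{y:.2f}: ~{horizon_hours(y):.1f} h")
    print(f"~167 h (one 40 h/week work-month) reached around {year_when(167.0):.1f}")
```

On this straight-line extrapolation, month-long tasks arrive before 2030; the 2025-03 "time-horizon model" essay under Job Loss below applies the same framing.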
AGI Achievable
- Yoshua Bengio: Managing extreme AI risks amid rapid progress
- Leopold Aschenbrenner: Situational Awareness: Counting the OOMs
- Richard Ngo: Visualizing the deep learning revolution
- Katja Grace: Survey of 2,778 AI authors: six parts in pictures
- Epoch AI: Machine Learning Trends
- AI Digest: How fast is AI improving?
AGI Definition
- 2023-11: Allan Dafoe, Shane Legg, et al.: Levels of AGI for Operationalizing Progress on the Path to AGI
- 2024-04: Bowen Xu: What is Meant by AGI? On the Definition of Artificial General Intelligence
Economic and Political
- 2023-12: My techno-optimism: "defensive acceleration" (Vitalik Buterin)
- 2024-03: Noah Smith: Plentiful, high-paying jobs in the age of AI: Comparative advantage is very subtle, but incredibly powerful. (video)
- 2024-03: Scenarios for the Transition to AGI (AGI leads to wage collapse)
- 2024-06: Situational Awareness (Leopold Aschenbrenner) - select quotes, podcast, text summary of podcast
- 2024-06: AI and Growth: Where Do We Stand?
- 2024-09: OpenAI: Infrastructure is Destiny: Economic Returns on US Investment in Democratic AI
- 2024-12: By default, capital will matter more than ever after AGI (L Rudolf L)
- 2025-01: The Intelligence Curse: With AGI, powerful actors will lose their incentives to invest in people
- 2025-01: Microsoft: The Golden Opportunity for American AI
- 2025-01: AGI Will Not Make Labor Worthless
- 2025-01: AI in America: OpenAI's Economic Blueprint (blog)
- 2025-01: How much economic growth from AI should we expect, how soon?
- 2025-02: Morgan Stanley: The Humanoid 100: Mapping the Humanoid Robot Value Chain
- 2025-02: The Anthropic Economic Index: Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations
- 2025-02: Strategic Wealth Accumulation Under Transformative AI Expectations
- 2025-02: Tyler Cowen: Why I think AI take-off is relatively slow
- 2025-03: Epoch AI: Most AI value will come from broad automation, not from R&D
  - The primary economic impact of AI will be its ability to broadly automate labor
  - Automating AI R&D alone likely won’t dramatically accelerate AI progress
  - Fully automating R&D requires a very broad set of abilities
  - AI takeoff will likely be diffuse and salient
- 2025-03: Anthropic Economic Index: Insights from Claude 3.7 Sonnet
Job Loss
- 2023-03: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
- 2023-08: The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market
- 2023-09: What drives UK firms to adopt AI and robotics, and what are the consequences for jobs?
- 2023-11: New Analysis Shows Over 20% of US Jobs Significantly Exposed to AI Automation In the Near Future
- 2024-01: Duolingo cuts 10% of its contractor workforce as the company embraces AI
- 2024-02: Gen AI is a tool for growth, not just efficiency: Tech CEOs are investing to build their workforce and capitalise on new opportunities from generative AI. That’s a sharp contrast to how their peers view it.
- 2024-04: AI is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career
- 2024-07: AI Is Already Taking Jobs in the Video Game Industry: A WIRED investigation finds that major players like Activision Blizzard, which recently laid off scores of workers, are using generative AI for game development
- 2024-08: Klarna: AI lets us cut thousands of jobs - but pay more
- 2025-01: AI and Freelancers: Has the Inflection Point Arrived?
- 2025-01: Yes, you're going to be replaced: So much cope about AI
- 2025-03: Will AI Automate Away Your Job? The time-horizon model explains the future of the technology
Near-term
- 2025-02: Sam Altman: Three Observations (a toy numerical sketch follows this list)
  - The intelligence of an AI model roughly equals the log of the resources used to train and run it.
  - The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.
  - The socioeconomic value of linearly increasing intelligence is super-exponential in nature.
- 2025-03: Glimpses of AI Progress: Mental models for fast times
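Taken literally, the three observations pin down rough functional shapes, which the toy sketch below renders numerically. Everything here is a hedged stand-in: the constants, the choice of intel ** 1.5 as a super-exponential exponent, and the function names are illustrative and do not come from Altman's post.

```python
# Toy rendering of the "Three Observations" as functions (a sketch; the
# constants and functional forms are stand-ins, not figures from the post).
import math

def intelligence(resources: float, k: float = 1.0) -> float:
    # Observation 1: intelligence scales roughly with the log of the
    # resources used to train and run the model.
    return k * math.log10(resources)

def cost_multiplier(years: float) -> float:
    # Observation 2: cost for a given capability level falls ~10x per 12 months.
    return 0.1 ** years

def value(intel: float) -> float:
    # Observation 3: socioeconomic value grows super-exponentially in
    # intelligence; intel ** 1.5 in the exponent is an assumed stand-in shape.
    return 10.0 ** (intel ** 1.5)

if __name__ == "__main__":
    for flop in (1e24, 1e26, 1e28):   # e.g. training compute
        i = intelligence(flop)
        print(f"10^{math.log10(flop):.0f} FLOP -> intelligence {i:.0f} "
              f"-> value ~1e{i ** 1.5:.0f}")
    print(f"cost of today's capability in 3 years: {cost_multiplier(3.0):.0e}x")
```

The point of the exercise: because intelligence is a log of resources, linear capability gains require exponential compute growth, yet the claimed value of those linear gains grows faster than exponentially, which is what makes continued scaling economically rational in this framing.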
Overall
- 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We’re Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.
- 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."
Surveys of Opinions/Predictions
- 2025-02: Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts
- 2025-02: Nicholas Carlini: AI forecasting retrospective: you're (probably) over-confident
Bad Outcomes
- 2019-03: What failure looks like
- 2025-01: Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development (web version)
Intelligence Explosion
- 2025-02: Three Types of Intelligence Explosion
- 2025-03: Future of Life Institute: Are we close to an intelligence explosion? AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.
Psychology
Science & Technology Improvements
- 2024-09: Sam Altman: The Intelligence Age
- 2024-10: Dario Amodei: Machines of Loving Grace
- 2024-11: Google DeepMind: A new golden age of discovery
- 2025-03: Fin Moorhouse, Will MacAskill: Preparing for the Intelligence Explosion
Plans
- A Narrow Path: How to Secure our Future
- Marius Hobbhahn: What’s the short timeline plan?
- Building CERN for AI: An institutional blueprint
- AGI, Governments, and Free Societies
- Control AI: The Direct Institutional Plan
Philosophy
- Dan Faggella:
- Joe Carlsmith: 2024: Otherness and control in the age of AGI
- Anthony Aguirre:
Alignment
- What are human values, and how do we align AI to them? (blog)
- Joe Carlsmith: 2025: How do we solve the alignment problem? Introduction to an essay series on paths to safe, useful superintelligence
  - What is it to solve the alignment problem? Also: to avoid it? Handle it? Solve it forever? Solve it completely? (audio version)
  - When should we worry about AI power-seeking? (audio version)
  - Paths and waystations in AI safety (audio version)
  - AI for AI safety (audio version)
Strategic/Technical
Strategic/Policy
- Amanda Askell, Miles Brundage, Gillian Hadfield: The Role of Cooperation in Responsible AI Development
- Dan Hendrycks, Eric Schmidt, Alexandr Wang: Superintelligence Strategy
- Anthony Aguirre: Keep The Future Human (essay)
  - The 4 Rules That Could Stop AI Before It’s Too Late (video) (2025); a threshold-check sketch follows this list
    - Oversight: Registration required for training >10^25 FLOP and inference >10^19 FLOP/s (~1,000 B200 GPUs @ $25M). Build cryptographic licensing into hardware.
    - Computation Limits: Ban on training models >10^27 FLOP or inference >10^20 FLOP/s.
    - Strict Liability: Hold AI companies responsible for outcomes.
    - Tiered Regulation: Low regulation on tool-AI, strictest regulation on AGI (general, capable, autonomous systems).
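Below is a hedged sketch of how those thresholds would classify a given system. The four threshold values are the ones quoted above; the per-GPU throughput is back-derived from the quoted "~1,000 B200 GPUs" ≈ 10^19 FLOP/s pairing (about 1e16 FLOP/s per GPU), and the tier labels and function names are assumptions for illustration.

```python
# Threshold check against the "Keep The Future Human" compute rules quoted
# above (a sketch; tier labels and the per-GPU figure are assumptions).

REG_TRAIN_FLOP = 1e25    # training compute above which registration is required
REG_INFER_FLOPS = 1e19   # inference throughput (FLOP/s) requiring registration
CAP_TRAIN_FLOP = 1e27    # proposed hard cap on training compute
CAP_INFER_FLOPS = 1e20   # proposed hard cap on inference throughput
B200_FLOPS = 1e16        # implied by "~1,000 B200 GPUs ~ 10^19 FLOP/s" (assumption)

def tier(train_flop: float, infer_flops: float) -> str:
    """Classify a system against the proposal's two compute thresholds."""
    if train_flop > CAP_TRAIN_FLOP or infer_flops > CAP_INFER_FLOPS:
        return "prohibited (above computation limits)"
    if train_flop > REG_TRAIN_FLOP or infer_flops > REG_INFER_FLOPS:
        return "oversight tier (registration + hardware licensing)"
    return "below thresholds (tool-AI tier, light regulation)"

if __name__ == "__main__":
    print(tier(train_flop=3e25, infer_flops=2_000 * B200_FLOPS))
    # -> oversight tier (registration + hardware licensing)
    print(tier(train_flop=5e27, infer_flops=1e18))
    # -> prohibited (above computation limits)
```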