==Job Loss==

* 2025-10: [https://www.siliconcontinent.com/p/the-ai-becker-problem The AI Becker problem: Who will train the next generation?]

* 2026-01: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6134506 AI, Automation, and Expertise]
* 2026-02: [https://arachnemag.substack.com/p/the-jevons-paradox-for-intelligence The Jevons Paradox for Intelligence: Fears of AI-induced job loss could not be more wrong]

==Productivity Impact==

* 2025-05: [https://www.nber.org/papers/w33777 Large Language Models, Small Labor Market Effects]
** Significant uptake, but very little economic impact so far

* 2026-02: [https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419dc5 The AI productivity take-off is finally visible] ([https://x.com/erikbryn/status/2023075588974735869?s=20 Erik Brynjolfsson])
** Businesses are finally beginning to reap some of AI's benefits.

* 2026-02: New York Times: [https://www.nytimes.com/2026/02/18/opinion/ai-software.html The A.I. Disruption We’ve Been Waiting for Has Arrived]
 
==National Security==
 
==Post-scarcity Society==

* 2004: Eliezer Yudkowsky (MIRI): [https://intelligence.org/files/CEV.pdf Coherent Extrapolated Volition] and [https://www.lesswrong.com/s/d3WgHDBAPYYScp5Em/p/K4aGvLnHvYgX9pZHS Fun Theory]

* 2019: John Danaher: [https://www.jstor.org/stable/j.ctvn5txpc Automation and Utopia: Human Flourishing in a World Without Work]
==The Grand Tradeoff==

* 2026-02: Nick Bostrom: [https://nickbostrom.com/optimal.pdf Optimal Timing for Superintelligence: Mundane Considerations for Existing People]
 
=Plans=
 
** 2025-02: [https://arxiv.org/abs/2502.15657 Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?]

* 2026-01: Dario Amodei: [https://www.darioamodei.com/essay/the-adolescence-of-technology The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI]
* 2026-02: Ryan Greenblatt: [https://www.lesswrong.com/posts/vjAM7F8vMZS7oRrrh/how-do-we-more-safely-defer-to-ais How do we (more) safely defer to AIs?]
 
==Philosophy==
