Difference between revisions of "AI predictions"

==Job Loss==
 
* 2025-05: [https://www.forbes.com/sites/jackkelly/2025/05/04/its-time-to-get-concerned-klarna-ups-duolingo-cisco-and-many-other-companies-are-replacing-workers-with-ai/ It’s Time To Get Concerned, Klarna, UPS, Duolingo, Cisco, And Many Other Companies Are Replacing Workers With AI]
 
* 2025-05: [https://time.com/7289692/when-ai-replaces-workers/ What Happens When AI Replaces Workers?]
 
* 2025-05: [https://www.oxfordeconomics.com/resource/educated-but-unemployed-a-rising-reality-for-us-college-grads/ Educated but unemployed, a rising reality for US college grads] Structural shifts in tech hiring and the growing impact of AI are driving higher unemployment among recent college graduates
 
* 2025-05: NY Times: [https://www.nytimes.com/2025/05/30/technology/ai-jobs-college-graduates.html?unlocked_article_code=1.LE8.LlC6.eT5XcpA9hxC2&smid=url-share For Some Recent Graduates, the A.I. Job Apocalypse May Already Be Here] The unemployment rate for recent college graduates has jumped as companies try to replace entry-level workers with artificial intelligence
 
==Near-term==
 
* 2025-04: Arvind Narayanan and Sayash Kapoor: [https://kfai-documents.s3.amazonaws.com/documents/c3cac5a2a7/AI-as-Normal-Technology---Narayanan---Kapoor.pdf AI as Normal Technology]
 
* 2025-04: Dwarkesh Patel: [https://www.dwarkesh.com/p/questions-about-ai Questions about the Future of AI]
 
* 2025-05: [https://www.bondcap.com/report/pdf/Trends_Artificial_Intelligence.pdf Trends – Artificial Intelligence]
  
 
=Overall=
 
* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We're Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously, whether you're an optimist or a pessimist.

* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."

==Bad Outcomes==
 
* 2025-01: [https://arxiv.org/abs/2501.16946 Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development] ([https://gradual-disempowerment.ai/ web version])
 
** 2025-02: [https://thezvi.substack.com/p/the-risk-of-gradual-disempowerment The Risk of Gradual Disempowerment from AI]
 
** 2025-05: [https://www.lesswrong.com/posts/GAv4DRGyDHe2orvwB/gradual-disempowerment-concrete-research-projects Gradual Disempowerment: Concrete Research Projects]
 
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
 
* 2025-04: [https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power AI-Enabled Coups: How a Small Group Could Use AI to Seize Power]
 
==Intelligence Explosion==
 
* 2023-06: [https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/ What a Compute-Centric Framework Says About Takeoff Speeds]
** [https://takeoffspeeds.com/ takeoffspeeds.com simulator]
 
* 2025-02: [https://www.forethought.org/research/three-types-of-intelligence-explosion Three Types of Intelligence Explosion]
 
* 2025-03: Future of Life Institute: [https://futureoflife.org/ai/are-we-close-to-an-intelligence-explosion/ Are we close to an intelligence explosion?] AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.
 
* 2025-03: Forethought: [https://www.forethought.org/research/will-ai-r-and-d-automation-cause-a-software-intelligence-explosion Will AI R&D Automation Cause a Software Intelligence Explosion?] (a minimal toy sketch of this feedback loop appears after this list)
 
* 2025-05: [https://www.thelastinvention.ai/ The Last Invention] Why Humanity’s Final Creation Changes Everything
 
* 2025-06:
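The entries above disagree about how strongly feedback from AI systems into AI R&D compounds. As a rough illustration of the dynamic being debated, here is a minimal toy sketch; it is not the takeoffspeeds.com model and is not drawn from any of the linked papers, and every variable name and parameter value is an illustrative assumption. Capability grows in proportion to research effort, AI contributes research effort in proportion to its own capability, so growth shifts from roughly exponential (human effort dominates) to hyperbolic (AI effort dominates).

<syntaxhighlight lang="python">
# Toy takeoff sketch (illustrative assumptions only; NOT the takeoffspeeds.com model).
# ODE: dC/dt = returns * C * (human_effort + automation * C)
#   - while human_effort dominates, growth is roughly exponential
#   - once automation * C dominates, growth is hyperbolic (finite-time blow-up),
#     the toy analogue of a "software intelligence explosion"

def simulate_takeoff(c0=1.0, human_effort=1.0, automation=0.1,
                     returns=0.1, dt=0.01, t_max=60.0, cap=1e6):
    """Euler-integrate the toy capability ODE; return the (t, C) trajectory."""
    t, c = 0.0, c0
    trajectory = [(t, c)]
    while t < t_max and c < cap:  # stop once capability "explodes" past the cap
        c += returns * c * (human_effort + automation * c) * dt
        t += dt
        trajectory.append((t, c))
    return trajectory

if __name__ == "__main__":
    traj = simulate_takeoff()
    step = max(1, len(traj) // 10)
    for t, c in traj[::step]:
        print(f"t = {t:5.1f}   capability = {c:12.1f}")
    print(f"final: t = {traj[-1][0]:.1f}, capability = {traj[-1][1]:.1f}")
</syntaxhighlight>

With these made-up numbers the trajectory stays nearly flat for most of the run and then crosses the cap within a few simulated time units; much of the disagreement in the pieces above concerns whether real-world bottlenecks (compute, data, physical experiments) keep the self-improvement term from ever dominating in this way.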
  
 
==Superintelligence==
 
==Philosophy==
 
* 2025-04: Scott Alexander (Astral Codex Ten): [https://www.astralcodexten.com/p/the-colors-of-her-coat The Colors Of Her Coat] (response to [https://www.theintrinsicperspective.com/p/welcome-to-the-semantic-apocalypse semantic apocalypse] and semantic satiation)
 
* 2025-05: Helen Toner: [https://www.ai-frontiers.org/articles/were-arguing-about-ai-safety-wrong We’re Arguing About AI Safety Wrong]: Dynamism vs. stasis is a clearer lens for criticizing controversial AI safety prescriptions
 
==Research==
* 2025-05: [https://www.lesswrong.com/posts/GAv4DRGyDHe2orvwB/gradual-disempowerment-concrete-research-projects Gradual Disempowerment: Concrete Research Projects]
  
 
==Alignment==
 
==Strategic/Policy==
 
* 2025-04: MIRI: [https://techgov.intelligence.org/research/ai-governance-to-avoid-extinction AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions]
 
* 2025-05: [https://writing.antonleicht.me/p/the-new-ai-policy-frontier The New AI Policy Frontier]: Beyond the shortcomings of centralised control and alignment, a new school of thought on AI governance emerges. It still faces tricky politics.
 
* 2025-05: [https://uncpga.world/agi-uncpga-report/ AGI UNCPGA Report]: Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly; a report for the Council of Presidents of the United Nations General Assembly (UNCPGA)
  
 
=See Also=
 
* [[AI safety]]
 