Difference between revisions of "AI predictions"

From GISAXS
  
 
==Strategic/Policy==
* 2019-07: Amanda Askell, Miles Brundage, Gillian Hadfield: [https://arxiv.org/abs/1907.04534 The Role of Cooperation in Responsible AI Development]
* 2025-03: Dan Hendrycks, Eric Schmidt, Alexandr Wang: [https://www.nationalsecurity.ai/ Superintelligence Strategy]
** [https://www.nationalsecurity.ai/chapter/executive-summary Executive Summary]
** [https://www.nationalsecurity.ai/chapter/introduction Introduction]
 
**# Strict Liability: Hold AI companies responsible for outcomes.
**# Tiered Regulation: Low regulation on tool-AI, strictest regulation on AGI (general, capable, autonomous systems).
* 2025-04: Helen Toner: [https://helentoner.substack.com/p/nonproliferation-is-the-wrong-approach?source=queue Nonproliferation is the wrong approach to AI misuse]
  
 
=See Also=
* [[AI safety]]

Revision as of 10:33, 6 April 2025

==Capability Scaling==

==AGI Achievable==

==AGI Definition==

==Economic and Political==

==Job Loss==

Near-term

  • 2021-08: Daniel Kokotajlo: What 2026 looks like
  • 2025-02: Sam Altman: Three Observations
    1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.
    2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use
    3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature
  • 2025-03: Glimpses of AI Progress: Mental models for fast times
  • 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: AI 2027 (pdf)
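Altman's first two observations can be expressed as simple scaling rules. A minimal illustrative sketch (not from the post itself; all example numbers are hypothetical):

```python
import math

def intelligence(resources: float) -> float:
    """Observation 1: model intelligence grows roughly as the log of the
    compute/data resources used to train and run it (base-10 here, for
    illustration)."""
    return math.log10(resources)

def cost(initial_cost: float, months: float) -> float:
    """Observation 2: the cost to use a *fixed* level of AI capability
    falls about 10x every 12 months."""
    return initial_cost * 10 ** (-months / 12)

# Under observation 1, doubling resources adds only ~0.3 units on a
# log10 scale -- returns diminish quickly in resources:
gain = intelligence(2e24) - intelligence(1e24)  # ~0.30

# Under observation 2, a capability costing a hypothetical $10 per
# million tokens would cost ~$0.10 two years later:
later = cost(10.0, 24)  # ~0.10
```

Observation 3 then says that even though intelligence grows only logarithmically in resources, the economic value of each additional linear increment of intelligence grows faster than exponentially, which is what keeps scaling worthwhile.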

==Overall==
* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We're Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously, whether you're an optimist or a pessimist.
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."

==Surveys of Opinions/Predictions==

==Bad Outcomes==

==Intelligence Explosion==

==Psychology==

==Science & Technology Improvements==

==Plans==

==Philosophy==

==Alignment==

==Strategic/Technical==