AI predictions
 

=Capability Scaling=

[[File:GmZHL8xWQAAtFlF.jpeg]]

=AGI Achievable=

=AGI Definition=

=Progress Models=

[[File:AI impact models01.png]]

=Economic and Political=

==Job Loss==

[[File:F-kVQuvWkAAemkr.png]]

==National Security==

=Near-term=

==Overall==

* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We’re Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously, whether you’re an optimist or a pessimist.
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."

=Surveys of Opinions/Predictions=

=Bad Outcomes=

=Intelligence Explosion=

=Long-range/Philosophy=

=Psychology=

=Science & Technology Improvements=

=Plans=

==Philosophy==

[[File:GlchEeObwAQ88NK.jpeg]]

==Alignment==

==Strategic/Technical==

==Strategic/Policy==

**# Tiered Regulation: Low regulation on tool-AI, strictest regulation on AGI (general, capable, autonomous systems).

* 2025-04: Helen Toner: [https://helentoner.substack.com/p/nonproliferation-is-the-wrong-approach?source=queue Nonproliferation is the wrong approach to AI misuse]
* 2025-04: MIRI: [https://techgov.intelligence.org/research/ai-governance-to-avoid-extinction AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions]

=See Also=

* [[AI safety]]