Difference between revisions of "AI predictions"

From GISAXS
==Superintelligence==
 
* 2024-10: [http://yager-research.ca/2024/10/how-smart-will-asi-be/ How Smart will ASI be?]
 
* 2024-11: [http://yager-research.ca/2024/11/concise-argument-for-asi-risk/ Concise Argument for ASI Risk]
 
* 2025-03: [https://dynomight.net/smart/ Limits of smart]
 
Latest revision as of 11:56, 27 May 2025

Capability Scaling

AGI Achievable

AGI Definition

Progress Models

Economic and Political

Job Loss

National Security

Near-term

Overall

* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We’re Not Ready. Three arguments for taking progress toward artificial general intelligence (A.G.I.) more seriously, whether you’re an optimist or a pessimist.
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."

Surveys of Opinions/Predictions

Bad Outcomes

Intelligence Explosion

Superintelligence

Long-range/Philosophy

Psychology

Science & Technology Improvements

Plans

Philosophy

Alignment

Strategic/Technical

Strategic/Policy

See Also