Difference between revisions of "AI predictions"

* Yoshua Bengio: [https://time.com/7283507/safer-ai-development/ A Potential Path to Safer AI Development]
** 2025-02: [https://arxiv.org/abs/2502.15657 Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?]
* 2026-01: Dario Amodei: [https://www.darioamodei.com/essay/the-adolescence-of-technology The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI]

==Philosophy==

Revision as of 14:23, 26 January 2026

Capability Scaling

Scaling Laws

See: Scaling Laws

AGI Achievable

AGI Definition

Progress Models

From AI Impact Predictions:

[[File:AI impact models-2025 11 24.png]]

Economic and Political

Job Loss


National Security

AI Manhattan Project

Near-term

Overall

Surveys of Opinions/Predictions

Bad Outcomes

Intelligence Explosion

Superintelligence

Long-range/Philosophy

Psychology

Positives & Optimism

Science & Technology Improvements

Social

Plans

Philosophy

Research

Alignment

Strategic/Technical

Strategic/Policy

Restriction

See Also