AI predictions

Capability Scaling


AGI Achievable

AGI Definition

Progress Models

[Image: AI impact models]

Economic and Political

Job Loss


National Security

Near-term

Overall

  • 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We’re Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.
  • 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."

Surveys of Opinions/Predictions

Bad Outcomes

Intelligence Explosion

Long-range/Philosophy

Psychology

Science & Technology Improvements

Plans

  • AGI Social Contract (https://www.agisocialcontract.org/)
    • Forging A New AGI Social Contract (https://www.agisocialcontract.org/forging-a-new-agi-social-contract)
  • Yoshua Bengio: A Potential Path to Safer AI Development (https://time.com/7283507/safer-ai-development/)
    • 2025-02: Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? (https://arxiv.org/abs/2502.15657)

Philosophy


Alignment

Strategic/Technical

Strategic/Policy

See Also