Difference between revisions of "AI predictions"

From GISAXS
 
* 2025-06: [https://x.com/littIeramblings Sarah Hastings-Woodhouse]: [https://drive.google.com/file/d/1mmdHBE6M2yiyL21-ctTuRLNH5xOFjqWm/view Safety Features for a Centralized AGI Project]
 
* 2025-07: [https://writing.antonleicht.me/p/a-moving-target A Moving Target] Why we might not be quite ready to comprehensively regulate AI, and why it matters
 
==Restriction==

* 2024-05: OpenAI: [https://openai.com/index/reimagining-secure-infrastructure-for-advanced-ai/ Reimagining secure infrastructure for advanced AI] OpenAI calls for an evolution in infrastructure security to protect advanced AI

* 2025-07: MIRI: [https://arxiv.org/abs/2507.09801 Technical Requirements for Halting Dangerous AI Activities]

=See Also=
 
* [[AI safety]]
 
Revision as of 08:46, 22 July 2025
