Difference between revisions of "AI predictions"

 
[[Image:GlchEeObwAQ88NK.jpeg|300px]]

* 2025-04: Scott Alexander (Astral Codex Ten): [https://www.astralcodexten.com/p/the-colors-of-her-coat The Colors Of Her Coat] (response to [https://www.theintrinsicperspective.com/p/welcome-to-the-semantic-apocalypse semantic apocalypse] and semantic satiation)
* 2025-05: Helen Toner: [https://www.ai-frontiers.org/articles/were-arguing-about-ai-safety-wrong We’re Arguing About AI Safety Wrong]: Dynamism vs. stasis is a clearer lens for criticizing controversial AI safety prescriptions
  
 
==Alignment==

Revision as of 14:26, 13 May 2025

==Capability Scaling==

[[Image:GmZHL8xWQAAtFlF.jpeg]]

==AGI Achievable==

==AGI Definition==

==Progress Models==

[[Image:AI impact models01.png]]

==Economic and Political==

==Job Loss==

[[Image:F-kVQuvWkAAemkr.png]]

==National Security==

==Near-term==

==Overall==

* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We’re Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."

==Surveys of Opinions/Predictions==

==Bad Outcomes==

==Intelligence Explosion==

==Long-range/Philosophy==

==Psychology==

==Science & Technology Improvements==

==Plans==

==Philosophy==

[[Image:GlchEeObwAQ88NK.jpeg]]

==Alignment==

==Strategic/Technical==

==Strategic/Policy==

==See Also==