AI predictions
* 2024-10: Dario Amodei: [https://darioamodei.com/machines-of-loving-grace Machines of Loving Grace]
* 2024-11: Google DeepMind: [https://www.aipolicyperspectives.com/p/a-new-golden-age-of-discovery A new golden age of discovery]
* 2025-03: [https://finmoorhouse.com/ Fin Moorhouse], [https://www.williammacaskill.com/ Will MacAskill]: [https://www.forethought.org/research/preparing-for-the-intelligence-explosion Preparing for the Intelligence Explosion]
=Plans=
* Marius Hobbhahn: [https://www.lesswrong.com/posts/bb5Tnjdrptu89rcyY/what-s-the-short-timeline-plan What’s the short timeline plan?]
* [https://cfg.eu/building-cern-for-ai/ Building CERN for AI: An institutional blueprint]
* [https://arxiv.org/abs/2503.05710 AGI, Governments, and Free Societies]
==Philosophy==
*# [https://joecarlsmith.substack.com/p/what-is-it-to-solve-the-alignment What is it to solve the alignment problem?] Also: to avoid it? Handle it? Solve it forever? Solve it completely?
*# [https://joecarlsmith.substack.com/p/when-should-we-worry-about-ai-power When should we worry about AI power-seeking?]
*# [https://joecarlsmith.substack.com/p/paths-and-waystations-in-ai-safety Paths and waystations in AI safety]
==Strategic/Policy==
Latest revision as of 14:22, 12 March 2025

Contents:
* AGI Achievable
* AGI Definition
* Economic and Political
* Job Loss
* Overall
* Surveys of Opinions/Predictions
* Bad Outcomes
* Psychology
* Science & Technology Improvements
* Plans
* Philosophy
* Alignment
* Strategic/Policy
* See Also