AI predictions

* Joe Carlsmith:
*# [https://joecarlsmith.com/2024/03/25/on-attunement On attunement]
*# [https://joecarlsmith.com/2024/06/18/loving-a-world-you-dont-trust Loving a world you don’t trust]
* Anthony Aguirre:
** [https://x.com/AnthonyNAguirre/status/1898023049930457468 2025-03]: [https://keepthefuturehuman.ai/ Keep The Future Human]

[[Image:GlchEeObwAQ88NK.jpeg|300px]]

==Alignment==

* Joe Carlsmith:
*# [https://joecarlsmith.substack.com/p/what-is-it-to-solve-the-alignment What is it to solve the alignment problem?] Also: to avoid it? Handle it? Solve it forever? Solve it completely?
*# [https://joecarlsmith.substack.com/p/when-should-we-worry-about-ai-power When should we worry about AI power-seeking?]
*# [https://joecarlsmith.substack.com/p/paths-and-waystations-in-ai-safety Paths and waystations in AI safety]

==Strategic/Policy==

* Amanda Askell, Miles Brundage, Gillian Hadfield: [https://arxiv.org/abs/1907.04534 The Role of Cooperation in Responsible AI Development]
* Dan Hendrycks, Eric Schmidt, Alexandr Wang: [https://www.nationalsecurity.ai/ Superintelligence Strategy]
** [https://www.nationalsecurity.ai/chapter/executive-summary Executive Summary]