AI predictions

=AGI Achievable=
 
* Yoshua Bengio: [https://arxiv.org/abs/2310.17688 Managing extreme AI risks amid rapid progress]
 
* Leopold Aschenbrenner: [https://situational-awareness.ai/from-gpt-4-to-agi/#Counting_the_OOMs Situational Awareness: Counting the OOMs]
 
* Richard Ngo: [https://medium.com/@richardcngo/visualizing-the-deep-learning-revolution-722098eb9c5 Visualizing the deep learning revolution]
 
* Epoch AI: [https://epoch.ai/trends Machine Learning Trends]
 
* AI Digest: [https://theaidigest.org/progress-and-dangers How fast is AI improving?]
 
==AGI Definition==

* 2023-11: Allan Dafoe, Shane Legg, et al.: [https://arxiv.org/abs/2311.02462 Levels of AGI for Operationalizing Progress on the Path to AGI]

* 2024-04: Bowen Xu: [https://arxiv.org/abs/2404.10731 What is Meant by AGI? On the Definition of Artificial General Intelligence]

=Economic and Political=
 
* 2024-10: Dario Amodei: [https://darioamodei.com/machines-of-loving-grace Machines of Loving Grace]
 
* 2024-11: Google DeepMind: [https://www.aipolicyperspectives.com/p/a-new-golden-age-of-discovery A new golden age of discovery]
 
* 2025-03: [https://finmoorhouse.com/ Fin Moorhouse], [https://www.williammacaskill.com/ Will MacAskill]: [https://www.forethought.org/research/preparing-for-the-intelligence-explosion Preparing for the Intelligence Explosion]
 
=Plans=
 
* Marius Hobbhahn: [https://www.lesswrong.com/posts/bb5Tnjdrptu89rcyY/what-s-the-short-timeline-plan What’s the short timeline plan?]
 
* [https://cfg.eu/building-cern-for-ai/ Building CERN for AI: An institutional blueprint]
 
* [https://arxiv.org/abs/2503.05710 AGI, Governments, and Free Societies]
 
==Philosophy==
 
* [https://danfaggella.com/ Dan Faggella]:

** 2018-07: [https://danfaggella.com/moral-singularity/ Moral Singularity – Unpredictable Values Bodes Poorly for Humanity]

** 2025-02: [https://danfaggella.com/bend/ There is No Pause – We Must Bend the Posthuman Trajectory]

* Joe Carlsmith: 2024: [https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi Otherness and control in the age of AGI]
 
*# [https://joecarlsmith.com/2024/01/02/gentleness-and-the-artificial-other Gentleness and the artificial Other]
 
*# [https://joecarlsmith.com/2024/01/04/deep-atheism-and-ai-risk Deep atheism and AI risk]
 
*# [https://joecarlsmith.com/2024/03/25/on-attunement On attunement]
 
*# [https://joecarlsmith.com/2024/06/18/loving-a-world-you-dont-trust Loving a world you don’t trust]
 
* Anthony Aguirre:

** [https://x.com/AnthonyNAguirre/status/1898023049930457468 2025-03]: [https://keepthefuturehuman.ai/ Keep The Future Human]

[[Image:GlchEeObwAQ88NK.jpeg|300px]]
 
==Alignment==
 
* [https://static1.squarespace.com/static/65392ca578eee444c445c9de/t/6606f95edb20e8118074a344/1711733370985/human-values-and-alignment-29MAR2024.pdf What are human values, and how do we align AI to them?] ([https://meaningalignment.substack.com/p/0480e023-98c0-4633-a604-990d3ac880ac blog])
 
* Joe Carlsmith: 2025: [https://joecarlsmith.substack.com/p/how-do-we-solve-the-alignment-problem How do we solve the alignment problem?] Introduction to an essay series on paths to safe, useful superintelligence
 
*# [https://joecarlsmith.substack.com/p/what-is-it-to-solve-the-alignment What is it to solve the alignment problem?] Also: to avoid it? Handle it? Solve it forever? Solve it completely?
 
*# [https://joecarlsmith.substack.com/p/when-should-we-worry-about-ai-power When should we worry about AI power-seeking?]
 
*# [https://joecarlsmith.substack.com/p/paths-and-waystations-in-ai-safety Paths and waystations in AI safety]

==Strategic/Policy==

* Amanda Askell, Miles Brundage, Gillian Hadfield: [https://arxiv.org/abs/1907.04534 The Role of Cooperation in Responsible AI Development]

* Dan Hendrycks, Eric Schmidt, Alexandr Wang: [https://www.nationalsecurity.ai/ Superintelligence Strategy]

** [https://www.nationalsecurity.ai/chapter/executive-summary Executive Summary]

** [https://www.nationalsecurity.ai/chapter/introduction Introduction]

** [https://www.nationalsecurity.ai/chapter/ai-is-pivotal-for-national-security AI Is Pivotal for National Security]

** [https://www.nationalsecurity.ai/chapter/deterrence-with-mutual-assured-ai-malfunction-maim Deterrence with Mutual Assured AI Malfunction (MAIM)]

** [https://www.nationalsecurity.ai/chapter/nonproliferation Nonproliferation]

** [https://www.nationalsecurity.ai/chapter/competitiveness Competitiveness]

** [https://www.nationalsecurity.ai/chapter/conclusion Conclusion]

** [https://www.nationalsecurity.ai/chapter/appendix Appendix FAQs]
 
=See Also=
 
* [[AI safety]]
 