AI predictions
AGI Achievable
- Yoshua Bengio: Managing extreme AI risks amid rapid progress
- Leopold Aschenbrenner: Situational Awareness: Counting the OOMs
- Richard Ngo: Visualizing the deep learning revolution
- Katja Grace: Survey of 2,778 AI authors: six parts in pictures
- Epoch AI: Machine Learning Trends
- AI Digest: How fast is AI improving?
AGI Definition
- 2023-11: Allan Dafoe, Shane Legg, et al.: Levels of AGI for Operationalizing Progress on the Path to AGI
- 2024-04: Bowen Xu: What is Meant by AGI? On the Definition of Artificial General Intelligence
Economic and Political
- 2023-12: My techno-optimism: "defensive acceleration" (Vitalik Buterin)
- 2024-03: Noah Smith: Plentiful, high-paying jobs in the age of AI: Comparative advantage is very subtle, but incredibly powerful. (video)
- 2024-03: Scenarios for the Transition to AGI (AGI leads to wage collapse)
- 2024-06: Situational Awareness (Leopold Aschenbrenner) - select quotes, podcast, text summary of podcast
- 2024-06: AI and Growth: Where Do We Stand?
- 2024-09: OpenAI Infrastructure is Destiny: Economic Returns on US Investment in Democratic AI
- 2024-12: By default, capital will matter more than ever after AGI (L Rudolf L)
- 2025-01: The Intelligence Curse: With AGI, powerful actors will lose their incentives to invest in people
- 2025-01: Microsoft: The Golden Opportunity for American AI
- 2025-01: AGI Will Not Make Labor Worthless
- 2025-01: AI in America: OpenAI's Economic Blueprint (blog)
- 2025-01: How much economic growth from AI should we expect, how soon?
- 2025-02: Morgan Stanley: The Humanoid 100: Mapping the Humanoid Robot Value Chain
- 2025-02: The Anthropic Economic Index: Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations
- 2025-02: Strategic Wealth Accumulation Under Transformative AI Expectations
- 2025-02: Tyler Cowen: Why I think AI take-off is relatively slow
Job Loss
- 2023-03: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
- 2023-08: The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market
- 2023-09: What drives UK firms to adopt AI and robotics, and what are the consequences for jobs?
- 2023-11: New Analysis Shows Over 20% of US Jobs Significantly Exposed to AI Automation In the Near Future
- 2024-01: Duolingo cuts 10% of its contractor workforce as the company embraces AI
- 2024-02: Gen AI is a tool for growth, not just efficiency: Tech CEOs are investing to build their workforce and capitalise on new opportunities from generative AI. That’s a sharp contrast to how their peers view it.
- 2024-04: AI is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career
- 2024-07: AI Is Already Taking Jobs in the Video Game Industry: A WIRED investigation finds that major players like Activision Blizzard, which recently laid off scores of workers, are using generative AI for game development
- 2024-08: Klarna: AI lets us cut thousands of jobs - but pay more
- 2025-01: AI and Freelancers: Has the Inflection Point Arrived?
- 2025-01: Yes, you're going to be replaced: So much cope about AI
Overall
Surveys of Opinions/Predictions
Bad Outcomes
- 2019-03: What failure looks like
- 2025-01: Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development (web version)
Psychology
Science & Technology Improvements
- 2024-09: Sam Altman: The Intelligence Age
- 2024-10: Dario Amodei: Machines of Loving Grace
- 2024-11: Google DeepMind: A new golden age of discovery
Plans
- A Narrow Path: How to Secure our Future
- Marius Hobbhahn: What’s the short timeline plan?
- Building CERN for AI: An institutional blueprint
Philosophy
- Dan Faggella:
  - 2018-07: Moral Singularity – Unpredictable Values Bodes Poorly for Humanity
  - 2025-02: There is No Pause – We Must Bend the Posthuman Trajectory
- Joe Carlsmith: 2024: Otherness and control in the age of AGI, an essay series whose entries include:
  - Gentleness and the artificial Other
  - Deep atheism and AI risk
  - On attunement
  - Loving a world you don’t trust
- Anthony Aguirre:
  - 2025-03: Keep The Future Human
Alignment
- What are human values, and how do we align AI to them? (blog)
- Joe Carlsmith: 2025: How do we solve the alignment problem? Introduction to an essay series on paths to safe, useful superintelligence
  1. What is it to solve the alignment problem? Also: to avoid it? Handle it? Solve it forever? Solve it completely?
  2. When should we worry about AI power-seeking?
  3. Paths and waystations in AI safety
Strategic/Policy
- Amanda Askell, Miles Brundage, Gillian Hadfield: The Role of Cooperation in Responsible AI Development
- Dan Hendrycks, Eric Schmidt, Alexandr Wang: Superintelligence Strategy
  - Executive Summary
  - Introduction
  - AI Is Pivotal for National Security
  - Deterrence with Mutual Assured AI Malfunction (MAIM)
  - Nonproliferation
  - Competitiveness
  - Conclusion
  - Appendix FAQs