AI predictions
AGI Achievable
- Leopold Aschenbrenner: Situational Awareness: Counting the OOMs
- Richard Ngo: Visualizing the deep learning revolution
- Katja Grace: Survey of 2,778 AI authors: six parts in pictures
- Epoch AI: Machine Learning Trends
- AI Digest: How fast is AI improving?
Economic and Political
- 2023-12: My techno-optimism: "defensive acceleration" (Vitalik Buterin)
- 2024-03: Noah Smith: Plentiful, high-paying jobs in the age of AI: Comparative advantage is very subtle, but incredibly powerful. (video)
- 2024-03: Scenarios for the Transition to AGI (AGI leads to wage collapse)
- 2024-06: Situational Awareness (Leopold Aschenbrenner) - select quotes, podcast, text summary of podcast
- 2024-06: AI and Growth: Where Do We Stand?
- 2024-09: OpenAI Infrastructure is Destiny: Economic Returns on US Investment in Democratic AI
- 2024-12: By default, capital will matter more than ever after AGI (L Rudolf L)
- 2025-01: The Intelligence Curse: With AGI, powerful actors will lose their incentives to invest in people
- 2025-01: Microsoft: The Golden Opportunity for American AI
- 2025-01: AGI Will Not Make Labor Worthless
- 2025-01: AI in America: OpenAI's Economic Blueprint (blog)
- 2025-01: How much economic growth from AI should we expect, how soon?
- 2025-02: The Anthropic Economic Index: Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations
- 2025-02: Strategic Wealth Accumulation Under Transformative AI Expectations
- 2025-02: Tyler Cowen: Why I think AI take-off is relatively slow
Job Loss
- 2023-03: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
- 2023-08: The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market
- 2023-09: What drives UK firms to adopt AI and robotics, and what are the consequences for jobs?
- 2023-11: New Analysis Shows Over 20% of US Jobs Significantly Exposed to AI Automation In the Near Future
- 2024-01: Duolingo cuts 10% of its contractor workforce as the company embraces AI
- 2024-02: Gen AI is a tool for growth, not just efficiency: Tech CEOs are investing to build their workforce and capitalise on new opportunities from generative AI. That’s a sharp contrast to how their peers view it.
- 2024-04: AI is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career
- 2024-07: AI Is Already Taking Jobs in the Video Game Industry: A WIRED investigation finds that major players like Activision Blizzard, which recently laid off scores of workers, are using generative AI for game development
- 2024-08: Klarna: AI lets us cut thousands of jobs - but pay more
- 2025-01: AI and Freelancers: Has the Inflection Point Arrived?
- 2025-01: Yes, you're going to be replaced: So much cope about AI
Overall
Bad Outcomes
- 2019-03: What failure looks like
- 2025-01: Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development (web version)
Psychology
Science & Technology Improvements
- 2024-09: Sam Altman: The Intelligence Age
- 2024-10: Dario Amodei: Machines of Loving Grace
- 2024-11: Google DeepMind: A new golden age of discovery
Plans
- A Narrow Path: How to Secure our Future
- Marius Hobbhahn: What’s the short timeline plan?
- Building CERN for AI: An institutional blueprint
Philosophy
- Joe Carlsmith: Otherness and control in the age of AGI
  1. Gentleness and the artificial Other
  2. Deep atheism and AI risk
  3. When “yang” goes wrong
  4. Does AI risk “other” the AIs?
  5. An even deeper atheism
  6. Being nicer than Clippy
  7. On the abolition of man
  8. On green
  9. On attunement
  10. Loving a world you don’t trust
Alignment
- What are human values, and how do we align AI to them? (blog)
- Joe Carlsmith: How do we solve the alignment problem? Introduction to an essay series on paths to safe, useful superintelligence
  1. What is it to solve the alignment problem? Also: to avoid it? Handle it? Solve it forever? Solve it completely?
  2. When should we worry about AI power-seeking?

See Also
- AI safety