AI predictions

From GISAXS
==Economic and Political==
 
* 2025-01: [https://inferencemagazine.substack.com/p/how-much-economic-growth-from-ai How much economic growth from AI should we expect, how soon?]
 
* 2025-02: [https://www.anthropic.com/news/the-anthropic-economic-index The Anthropic Economic Index]: [https://assets.anthropic.com/m/2e23255f1e84ca97/original/Economic_Tasks_AI_Paper.pdf Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations]
 
* 2025-02: [https://arxiv.org/abs/2502.11264 Strategic Wealth Accumulation Under Transformative AI Expectations]

* 2025-02: Tyler Cowen: [https://marginalrevolution.com/marginalrevolution/2025/02/why-i-think-ai-take-off-is-relatively-slow.html Why I think AI take-off is relatively slow]
 
==Job Loss==
 
 
==Plans==

* Marius Hobbhahn: [https://www.lesswrong.com/posts/bb5Tnjdrptu89rcyY/what-s-the-short-timeline-plan What’s the short timeline plan?]
 
* [https://cfg.eu/building-cern-for-ai/ Building CERN for AI: An institutional blueprint]
 
==Philosophy==

* Joe Carlsmith: [https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi Otherness and control in the age of AGI]
*# [https://joecarlsmith.com/2024/01/02/gentleness-and-the-artificial-other Gentleness and the artificial Other]
*# [https://joecarlsmith.com/2024/01/04/deep-atheism-and-ai-risk Deep atheism and AI risk]
*# [https://joecarlsmith.com/2024/01/08/when-yang-goes-wrong When “yang” goes wrong]
*# [https://joecarlsmith.com/2024/01/09/does-ai-risk-other-the-ais Does AI risk “other” the AIs?]
*# [https://joecarlsmith.com/2024/01/11/an-even-deeper-atheism An even deeper atheism]
*# [https://joecarlsmith.com/2024/01/16/being-nicer-than-clippy Being nicer than Clippy]
*# [https://joecarlsmith.com/2024/01/18/on-the-abolition-of-man On the abolition of man]
*# [https://joecarlsmith.com/2024/03/21/on-green On green]
*# [https://joecarlsmith.com/2024/03/25/on-attunement On attunement]
*# [https://joecarlsmith.com/2024/06/18/loving-a-world-you-dont-trust Loving a world you don’t trust]
  
 
==Alignment==
 
* [https://static1.squarespace.com/static/65392ca578eee444c445c9de/t/6606f95edb20e8118074a344/1711733370985/human-values-and-alignment-29MAR2024.pdf What are human values, and how do we align AI to them?] ([https://meaningalignment.substack.com/p/0480e023-98c0-4633-a604-990d3ac880ac blog])
 
* Joe Carlsmith: [https://joecarlsmith.substack.com/p/how-do-we-solve-the-alignment-problem How do we solve the alignment problem?] Introduction to an essay series on paths to safe, useful superintelligence
*# [https://joecarlsmith.substack.com/p/what-is-it-to-solve-the-alignment What is it to solve the alignment problem?] Also: to avoid it? Handle it? Solve it forever? Solve it completely?
*# [https://joecarlsmith.substack.com/p/when-should-we-worry-about-ai-power When should we worry about AI power-seeking?]

==See Also==

* [[AI safety]]

Latest revision as of 13:31, 23 February 2025
