=Capability Scaling=

* 2025-04: [https://epoch.ai/blog/trends-in-ai-supercomputers Trends in AI Supercomputers] ([https://arxiv.org/abs/2504.16026 preprint])

* [https://ai-timeline.org/ The Road to AGI] (timeline visualization)

* 2025-09: [https://arxiv.org/abs/2509.09677 The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs]

* 2025-09: [https://www.julian.ac/blog/2025/09/27/failing-to-understand-the-exponential-again/ Failing to Understand the Exponential, Again] (see the extrapolation sketch below)
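
To make the "exponential" argument concrete, here is a minimal extrapolation sketch. The anchor values and the ~7-month doubling time are illustrative assumptions (the doubling time is roughly what METR reported in 2025 for agentic task horizons); this is a toy projection, not a forecast.

<syntaxhighlight lang="python">
from datetime import date

# Toy extrapolation of AI task horizons under pure exponential growth.
# Assumptions (illustrative only): horizons double every ~7 months,
# anchored at a hypothetical ~1-hour horizon in March 2025.
DOUBLING_MONTHS = 7
BASE_DATE = date(2025, 3, 1)
BASE_HORIZON_MINUTES = 60.0

def horizon_minutes(on: date) -> float:
    """Projected task horizon (in minutes) at a given date."""
    months = (on.year - BASE_DATE.year) * 12 + (on.month - BASE_DATE.month)
    return BASE_HORIZON_MINUTES * 2 ** (months / DOUBLING_MONTHS)

for year in (2026, 2027, 2028):
    print(f"{year}: ~{horizon_minutes(date(year, 3, 1)) / 60:.0f} hours")
</syntaxhighlight>

Linear intuition reads each year's gain as similar to the last; the compounding in the loop above is what the linked post argues people systematically underestimate.
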
==Scaling Laws==

See: [[AI_understanding#Scaling_Laws|Scaling Laws]]
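
For orientation (details on the linked page): neural scaling laws are commonly written in the Chinchilla form, where loss falls as a power law in parameter count <math>N</math> and training tokens <math>D</math>:

:<math>L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}</math>

Hoffmann et al. (2022) fit roughly <math>\alpha \approx 0.34</math> and <math>\beta \approx 0.28</math>, implying that compute-optimal training scales <math>N</math> and <math>D</math> in near-equal proportion.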
  
 
==AGI Achievable==
 
==Economic and Political==

* 2025-05: [https://www.theguardian.com/books/2025/may/04/the-big-idea-can-we-stop-ai-making-humans-obsolete Better at everything: how AI could make human beings irrelevant]

* 2025-05: Forethought: [https://www.forethought.org/research/the-industrial-explosion The Industrial Explosion]

* 2025-05: [https://arxiv.org/abs/2505.20273 Ten Principles of AI Agent Economics]

* 2025-07: [https://substack.com/home/post/p-167879696 What Economists Get Wrong about AI]: they ignore innovation effects, use outdated capability assumptions, and miss the robotics revolution.

* 2025-07: [https://www.nber.org/books-and-chapters/economics-transformative-ai/we-wont-be-missed-work-and-growth-era-agi We Won't Be Missed: Work and Growth in the Era of AGI] ([https://conference.nber.org/conf_papers/f227505.pdf pdf])

* 2025-09: [https://conference.nber.org/conf_papers/f227491.pdf Genius on Demand: The Value of Transformative Artificial Intelligence]
 
==Job Loss==
 
[[Image:0dab4c86-882d-4095-9d12-d19684ed5184 675x680.png|300px]]

* 2025-07: Harvard Business Review: [https://hbr.org/2025/06/what-gets-measured-ai-will-automate What Gets Measured, AI Will Automate]

* 2025-08: [https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/ Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence]

* 2025-10: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5560401 Performance or Principle: Resistance to Artificial Intelligence in the U.S. Labor Market]
  
 
==National Security==
  
 
=Overall=

* 1993: [https://en.wikipedia.org/wiki/Vernor_Vinge Vernor Vinge]: [https://edoras.sdsu.edu/~vinge/misc/singularity.html The Coming Technological Singularity: How to Survive in the Post-Human Era]

* 2025-03: Kevin Roose (New York Times): [https://www.nytimes.com/2025/03/14/technology/why-im-feeling-the-agi.html?unlocked_article_code=1.304.TIEy.SmNhKYO4e9c7&smid=url-share Powerful A.I. Is Coming. We’re Not Ready.] Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously, whether you’re an optimist or a pessimist.

* 2025-03: Nicholas Carlini: [https://nicholas.carlini.com/writing/2025/thoughts-on-future-ai.html My Thoughts on the Future of "AI"]: "I have very wide error bars on the potential future of large language models, and I think you should too."
 
==Surveys of Opinions/Predictions==

* 2016-06: [https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/ 2016 Expert Survey on Progress in AI]

** 2023-03: [https://aiimpacts.org/scoring-forecasts-from-the-2016-expert-survey-on-progress-in-ai/ Scoring forecasts from the 2016 “Expert Survey on Progress in AI”]

* 2022-10: Forecasting Research Institute: [https://forecastingresearch.org/near-term-xpt-accuracy Assessing Near-Term Accuracy in the Existential Risk Persuasion Tournament]

** 2025-09: Ethan Mollick: [https://x.com/emollick/status/1962859757674344823 Progress is ahead of expectations]

* 2023-08: [https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai 2023 Expert Survey on Progress in AI]

* 2024-01: [https://arxiv.org/abs/2401.02843 Thousands of AI Authors on the Future of AI]
 
==Bad Outcomes==

* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])

* 2025-04: [https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power AI-Enabled Coups: How a Small Group Could Use AI to Seize Power]

* 2025-09: [https://doctrines.ai/ The three main doctrines on the future of AI]

** '''Dominance doctrine:''' The first actor to create advanced AI will attain overwhelming strategic superiority

** '''Extinction doctrine:''' Humanity will lose control of ASI, leading to extinction or permanent disempowerment

** '''Replacement doctrine:''' AI will automate human tasks, but without fundamentally reshaping or ending civilization

* 2025-09: Sean ÓhÉigeartaigh: [https://www.cambridge.org/core/journals/cambridge-prisms-extinction/article/extinction-of-the-human-species-what-could-cause-it-and-how-likely-is-it-to-occur/D8816A79BEF5A4C30A3E44FD8D768622 Extinction of the human species: What could cause it and how likely is it to occur?]
  
 
==Intelligence Explosion==
 
[[Image:Gm-1jugbYAAtq Y.jpeg|450px]]

* 2025-05: [https://www.thelastinvention.ai/ The Last Invention]: Why Humanity’s Final Creation Changes Everything

* 2025-08: [https://www.forethought.org/research/how-quick-and-big-would-a-software-intelligence-explosion-be How quick and big would a software intelligence explosion be?]
 
==Superintelligence==
 
* 2025-01: [https://longerramblings.substack.com/p/a-defence-of-slowness-at-the-end A defence of slowness at the end of the world]
  
=Positives & Optimism=

==Science & Technology Improvements==
 
* 2024-09: Sam Altman: [https://ia.samaltman.com/ The Intelligence Age]

* 2024-10: Dario Amodei: [https://darioamodei.com/machines-of-loving-grace Machines of Loving Grace]

* 2024-11: Google DeepMind: [https://www.aipolicyperspectives.com/p/a-new-golden-age-of-discovery A new golden age of discovery]

* 2025-03: [https://finmoorhouse.com/ Fin Moorhouse], [https://www.williammacaskill.com/ Will MacAskill]: [https://www.forethought.org/research/preparing-for-the-intelligence-explosion Preparing for the Intelligence Explosion]

==Social==

* 2025-09: [https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale Coasean Bargaining at Scale]: Decentralization, coordination, and co-existence with AGI
  
 
=Plans=
 
*# [https://joecarlsmith.substack.com/p/ai-for-ai-safety AI for AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16790183-ai-for-ai-safety audio version])

*# [https://joecarlsmith.substack.com/p/can-we-safely-automate-alignment Can we safely automate alignment research?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17069901-can-we-safely-automate-alignment-research audio version], [https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-automating video version])

*# [https://joecarlsmith.substack.com/p/giving-ais-safe-motivations Giving AIs safe motivations] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17686921-giving-ais-safe-motivations audio version])

* 2025-04: Dario Amodei: [https://www.darioamodei.com/post/the-urgency-of-interpretability The Urgency of Interpretability]
  
