AI predictions

From GISAXS
=Economic and Political=

* 2019-11: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3482150 The Impact of Artificial Intelligence on the Labor Market]
* 2020-06: [https://www.openphilanthropy.org/research/modeling-the-human-trajectory/ Modeling the Human Trajectory] (GDP)
* 2021-06: [https://www.openphilanthropy.org/research/report-on-whether-ai-could-drive-explosive-economic-growth/ Report on Whether AI Could Drive Explosive Economic Growth]
* 2023-10: Marc Andreessen: [https://a16z.com/the-techno-optimist-manifesto/ The Techno-Optimist Manifesto]
* 2023-12: [https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html My techno-optimism]: "defensive acceleration" ([https://vitalik.eth.limo/index.html Vitalik Buterin])
* 2025-07: [https://www.nber.org/books-and-chapters/economics-transformative-ai/we-wont-be-missed-work-and-growth-era-agi We Won't Be Missed: Work and Growth in the Era of AGI]
* 2025-09: [https://conference.nber.org/conf_papers/f227491.pdf Genius on Demand: The Value of Transformative Artificial Intelligence]
* 2025-10: [https://peterwildeford.substack.com/p/ai-is-probably-not-a-bubble AI is probably not a bubble: AI companies have revenue, demand, and paths to immense value]
* 2025-11: [https://windowsontheory.org/2025/11/04/thoughts-by-a-non-economist-on-ai-and-economics/ Thoughts by a non-economist on AI and economics]
* 2025-11: [https://www.nber.org/papers/w34444 Artificial Intelligence, Competition, and Welfare]
* 2025-11: [https://www.anthropic.com/research/estimating-productivity-gains Estimating AI productivity gains from Claude conversations] (Anthropic)

==Job Loss==

* 2025-06: IdeaFoundry: [https://ideafoundry.substack.com/p/evolution-vs-extinction-the-choice Evolution vs. Extinction: The Choice is Ours]: The next 18 months will decide whether AI ends us or evolves us
* 2025-07: [https://cfg.eu/advanced-ai-possible-futures/ Advanced AI: Possible futures]: Five scenarios for how the AI transition could unfold
* 2025-11: [https://android-dreams.ai/ Android Dreams]

=Overall=
 
=Positives & Optimism=

==Science & Technology Improvements==

* 2023-05: [https://www.planned-obsolescence.org/author/kelsey/ Kelsey Piper]: [https://www.planned-obsolescence.org/the-costs-of-caution/ The costs of caution]
* 2024-09: Sam Altman: [https://ia.samaltman.com/ The Intelligence Age]
* 2024-10: Dario Amodei: [https://darioamodei.com/machines-of-loving-grace Machines of Loving Grace]
 
*# [https://joecarlsmith.substack.com/p/can-we-safely-automate-alignment Can we safely automate alignment research?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17069901-can-we-safely-automate-alignment-research audio version], [https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-automating video version])
*# [https://joecarlsmith.substack.com/p/giving-ais-safe-motivations Giving AIs safe motivations] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17686921-giving-ais-safe-motivations audio version])
*# [https://joecarlsmith.com/2025/09/29/controlling-the-options-ais-can-pursue Controlling the options AIs can pursue] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17909401-controlling-the-options-ais-can-pursue audio version])
*# [https://joecarlsmith.substack.com/p/how-human-like-do-safe-ai-motivations How human-like do safe AI motivations need to be?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/18175429-how-human-like-do-safe-ai-motivations-need-to-be audio version])

* 2025-04: Dario Amodei: [https://www.darioamodei.com/post/the-urgency-of-interpretability The Urgency of Interpretability]

Latest revision as of 10:00, 26 November 2025

Page sections:

* Capability Scaling
* Scaling Laws
* AGI Achievable
* AGI Definition
* Progress Models
* Economic and Political
* Job Loss
* National Security
* AI Manhattan Project
* Near-term
* Overall
* Surveys of Opinions/Predictions
* Bad Outcomes
* Intelligence Explosion
* Superintelligence
* Long-range/Philosophy
* Psychology
* Positives & Optimism
* Science & Technology Improvements
* Social
* Plans
* Philosophy
* Research
* Alignment
* Strategic/Technical
* Strategic/Policy
* Restriction
* See Also