=Capability Scaling=
** 2025-04: AI Digest: [https://theaidigest.org/time-horizons A new Moore's Law for AI agents] (see the extrapolation sketch below)
[[Image:GmZHL8xWQAAtFlF.jpeg|450px]]
* 2025-04: [https://epoch.ai/blog/trends-in-ai-supercomputers Trends in AI Supercomputers] ([https://arxiv.org/abs/2504.16026 preprint])
* [https://ai-timeline.org/ The Road to AGI] (timeline visualization)
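The time-horizons post above describes an exponential trend: the length of task that AI agents can reliably complete (their "time horizon") reportedly doubles on a roughly fixed schedule, about every seven months in the METR analysis the post builds on. Below is a minimal extrapolation sketch; the doubling time, anchor date, and anchor horizon are illustrative assumptions, not values fitted to any dataset.

<syntaxhighlight lang="python">
# Minimal sketch of the exponential time-horizon trend ("a Moore's law
# for AI agents"). All three constants are illustrative assumptions.
from datetime import date

DOUBLING_MONTHS = 7.0           # assumed doubling time of the task horizon
ANCHOR_DATE = date(2025, 3, 1)  # assumed anchor date
ANCHOR_HOURS = 1.0              # assumed task horizon (hours) at the anchor

def horizon_hours(on: date) -> float:
    """Task horizon in hours on a given date, under pure exponential growth."""
    months_elapsed = (on - ANCHOR_DATE).days / 30.44  # average month length
    return ANCHOR_HOURS * 2.0 ** (months_elapsed / DOUBLING_MONTHS)

for year in (2026, 2027, 2028, 2030):
    d = date(year, 3, 1)
    print(f"{d}: ~{horizon_hours(d):,.1f} hours")
</syntaxhighlight>

The point of the sketch is only how fast the compounding works: under these assumptions the horizon grows by about 3.3x per year, so modest changes to the assumed doubling time move the long-range numbers a lot.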
=AGI Achievable=

=AGI Definition=

=Progress Models=
[[Image:AI impact models01.png]]

=Economic and Political=

==Job Loss==
[[Image:F-kVQuvWkAAemkr.png]]

==National Security==

=Near-term=

==Overall==
* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We’re Not Ready. Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."

==Surveys of Opinions/Predictions==

=Bad Outcomes=

=Intelligence Explosion=

=Superintelligence=

=Long-range/Philosophy=

=Psychology=

=Science & Technology Improvements=

=Plans=

==Philosophy==
[[Image:GlchEeObwAQ88NK.jpeg]]
* 2025-04: Scott Alexander (Astral Codex Ten): [https://www.astralcodexten.com/p/the-colors-of-her-coat The Colors Of Her Coat] (response to [https://www.theintrinsicperspective.com/p/welcome-to-the-semantic-apocalypse semantic apocalypse] and semantic satiation)
* 2025-05: Helen Toner: [https://www.ai-frontiers.org/articles/were-arguing-about-ai-safety-wrong We’re Arguing About AI Safety Wrong]: Dynamism vs. stasis is a clearer lens for criticizing controversial AI safety prescriptions
* 2025-05: Joe Carlsmith: [https://joecarlsmith.substack.com/p/the-stakes-of-ai-moral-status The stakes of AI moral status]

==Research==
* 2025-05: [https://www.lesswrong.com/posts/GAv4DRGyDHe2orvwB/gradual-disempowerment-concrete-research-projects Gradual Disempowerment: Concrete Research Projects]

==Alignment==
*# [https://joecarlsmith.substack.com/p/ai-for-ai-safety AI for AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16790183-ai-for-ai-safety audio version])
*# [https://joecarlsmith.substack.com/p/can-we-safely-automate-alignment Can we safely automate alignment research?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17069901-can-we-safely-automate-alignment-research audio version], [https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-automating?utm_source=post-email-title&publication_id=1022275&post_id=162375391&utm_campaign=email-post-title&isFreemail=true&r=5av1bk&triedRedirect=true&utm_medium=email video version])
 
* 2025-04: Dario Amodei: [https://www.darioamodei.com/post/the-urgency-of-interpretability The Urgency of Interpretability]
==Strategic/Technical==

==Strategic/Policy==
* 2025-04: MIRI: [https://techgov.intelligence.org/research/ai-governance-to-avoid-extinction AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions]
* 2025-05: [https://writing.antonleicht.me/p/the-new-ai-policy-frontier The New AI Policy Frontier]: Beyond the shortcomings of centralised control and alignment, a new school of thought on AI governance emerges. It still faces tricky politics.
* 2025-05: [https://uncpga.world/agi-uncpga-report/ AGI UNCPGA Report]: "Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly", a report for the Council of Presidents of the United Nations General Assembly (UNCPGA)
* 2025-06: [https://writing.antonleicht.me/p/ai-and-jobs-politics-without-policy AI & Jobs: Politics without Policy]: Political support mounts for a policy platform that does not yet exist
=See Also=
* [[AI safety]]