AI predictions
 
Capability Scaling

AGI Achievable

AGI Definition

Progress Models

[Image: AI impact models]

Economic and Political

Job Loss

National Security

Near-term

Overall

* 2025-03: Kevin Roose (New York Times): Powerful A.I. Is Coming. We're Not Ready. "Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you're an optimist or a pessimist."
* 2025-03: Nicholas Carlini: My Thoughts on the Future of "AI": "I have very wide error bars on the potential future of large language models, and I think you should too."

Surveys of Opinions/Predictions

Bad Outcomes

Intelligence Explosion

Long-range/Philosophy

Psychology

Science & Technology Improvements

Plans

Philosophy

Alignment

*# [https://joecarlsmith.substack.com/p/paths-and-waystations-in-ai-safety Paths and waystations in AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16768804-paths-and-waystations-in-ai-safety audio version])
*# [https://joecarlsmith.substack.com/p/ai-for-ai-safety AI for AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16790183-ai-for-ai-safety audio version])
*# [https://joecarlsmith.substack.com/p/can-we-safely-automate-alignment Can we safely automate alignment research?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17069901-can-we-safely-automate-alignment-research audio version], [https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-automating video version])
* 2025-04: Dario Amodei: [https://www.darioamodei.com/post/the-urgency-of-interpretability The Urgency of Interpretability]

Strategic/Technical

Strategic/Policy

See Also