Difference between revisions of "AI predictions"

 
[[Image:0dab4c86-882d-4095-9d12-d19684ed5184 675x680.png|300px]]

* 2025-07: Harvard Business Review: [https://hbr.org/2025/06/what-gets-measured-ai-will-automate What Gets Measured, AI Will Automate]
* 2025-08: [https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/ Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence]
  
 
==National Security==
  
 
=Overall=
* 1993: [https://en.wikipedia.org/wiki/Vernor_Vinge Vernor Vinge]: [https://edoras.sdsu.edu/~vinge/misc/singularity.html The Coming Technological Singularity: How to Survive in the Post-Human Era]
 
* 2025-03: Kevin Roose (New York Times): [https://www.nytimes.com/2025/03/14/technology/why-im-feeling-the-agi.html?unlocked_article_code=1.304.TIEy.SmNhKYO4e9c7&smid=url-share Powerful A.I. Is Coming. We’re Not Ready.] Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously, whether you’re an optimist or a pessimist.

* 2025-03: Nicholas Carlini: [https://nicholas.carlini.com/writing/2025/thoughts-on-future-ai.html My Thoughts on the Future of "AI"]: "I have very wide error bars on the potential future of large language models, and I think you should too."
 
*# [https://joecarlsmith.substack.com/p/ai-for-ai-safety AI for AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16790183-ai-for-ai-safety audio version])
*# [https://joecarlsmith.substack.com/p/can-we-safely-automate-alignment Can we safely automate alignment research?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17069901-can-we-safely-automate-alignment-research audio version], [https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-automating?utm_source=post-email-title&publication_id=1022275&post_id=162375391&utm_campaign=email-post-title&isFreemail=true&r=5av1bk&triedRedirect=true&utm_medium=email video version])
*# [https://joecarlsmith.substack.com/p/giving-ais-safe-motivations?utm_source=post-email-title&publication_id=1022275&post_id=171250683&utm_campaign=email-post-title&isFreemail=true&r=5av1bk&triedRedirect=true&utm_medium=email Giving AIs safe motivations] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17686921-giving-ais-safe-motivations audio version])
 
* 2025-04: Dario Amodei: [https://www.darioamodei.com/post/the-urgency-of-interpretability The Urgency of Interpretability]

Contents
* Capability Scaling
* AGI Achievable
* AGI Definition
* Progress Models
* Economic and Political
* Job Loss
* National Security
* AI Manhattan Project
* Near-term
* Overall
* Surveys of Opinions/Predictions
* Bad Outcomes
* Intelligence Explosion
* Superintelligence
* Long-range/Philosophy
* Psychology
* Science & Technology Improvements
* Plans
* Philosophy
* Research
* Alignment
* Strategic/Technical
* Strategic/Policy
* Restriction
* See Also