Difference between revisions of "AI safety"

From GISAXS
 
=Learning Resources=
 
==Light==

* [https://orxl.org/ai-doom.html a casual intro to AI doom and alignment] (2022)
* Anthony Aguirre: [https://keepthefuturehuman.ai/ Keep The Future Human]
** Text version: Center for Humane Technology: [https://centerforhumanetechnology.substack.com/p/the-narrow-path-why-ai-is-our-ultimate The Narrow Path: Why AI is Our Ultimate Test and Greatest Invitation]
* [https://x.com/KeiranJHarris/status/1935429439476887594 Fable about Transformative AI]
* 2024-10: [https://www.youtube.com/watch?v=xfMQ7hzyFW4 Writing Doom]: short film on Superintelligence (27m video)
* 2026-03: [https://www.youtube.com/watch?v=Nl7-bRFSZBs The AI book that's freaking out national security advisors] (44m video)

==Deep==

Revision as of 12:42, 12 March 2026
