AI safety

Revision as of 09:40, 2 April 2025
Description of Safety Concerns

Key Concepts

Medium-term Risks

Long-term (x-risk)

Learning Resources

Light

* [https://www.youtube.com/watch?v=xfMQ7hzyFW4 Writing Doom] (27m video): short film on Superintelligence (2024)
* [https://orxl.org/ai-doom.html a casual intro to AI doom and alignment] (2022)
* Anthony Aguirre: [https://keepthefuturehuman.ai/ Keep The Future Human]
** [https://interactive.keepthefuturehuman.ai/ Interactive Explainer]
** [https://keepthefuturehuman.ai/essay/ Essay: Keep the Future Human]
** [https://www.youtube.com/watch?v=27KDl2uPiL8 We Can’t Stop AI – Here’s What To Do Instead] (4m video, 2025)
** [https://www.youtube.com/watch?v=zeabrXV8zNE The 4 Rules That Could Stop AI Before It’s Too Late] (15m video, 2025)

Deep

Status

Policy

Proposals

Research

Demonstrations of Negative Use Capabilities

See Also