Difference between revisions of "AI safety"

From GISAXS
(Section edited: Learning Resources → Light)
 
* [https://orxl.org/ai-doom.html a casual intro to AI doom and alignment] (2022)

* Anthony Aguirre: [https://keepthefuturehuman.ai/ Keep The Future Human] ([https://keepthefuturehuman.ai/essay/ essay])

** [https://www.youtube.com/watch?v=27KDl2uPiL8 We Can’t Stop AI – Here’s What To Do Instead] (4m video, 2025)

** [https://www.youtube.com/watch?v=zeabrXV8zNE The 4 Rules That Could Stop AI Before It’s Too Late] (15m video, 2025)

Revision as of 09:38, 2 April 2025

Contents

* Description of Safety Concerns
* Key Concepts
** Medium-term Risks
** Long-term (x-risk)
* Learning Resources
** Light
** Deep
* Status
* Policy
** Proposals
* Research
* Demonstrations of Negative Use Capabilities
* See Also