Difference between revisions of "AI safety"

From GISAXS
 
* 2025-03: [https://controlai.com/ Control AI] [https://controlai.com/dip The Direct Institutional Plan]
* 2025-04: Google DeepMind: [https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/ Taking a responsible path to AGI]
** Paper: [https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/evaluating-potential-cybersecurity-threats-of-advanced-ai/An_Approach_to_Technical_AGI_Safety_Apr_2025.pdf An Approach to Technical AGI Safety and Security]

=Research=

Latest revision as of 11:14, 2 April 2025
