AI safety

From GISAXS
 
* [https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang Overhang]
 
* [https://www.alignmentforum.org/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target Reward is not the optimization target] (Alex Turner)
 
* 80,000 hours:
** [https://80000hours.org/problem-profiles/risks-from-power-seeking-ai/ Risks from power-seeking AI systems]
** [https://80000hours.org/problem-profiles/gradual-disempowerment/ Gradual disempowerment]
** [https://80000hours.org/problem-profiles/catastrophic-ai-misuse/ Catastrophic AI misuse]
  
 
==Medium-term Risks==

Latest revision as of 13:55, 19 March 2026

Contents

* Learning Resources
* Light
* Deep
* Description of Safety Concerns
* Key Concepts
* Medium-term Risks
* Long-term (x-risk)
* Status
* Assessment
* Policy
* Proposals
* Research
* Demonstrations of Negative Use Capabilities
* Threat Vectors
* See Also