AI safety
Description of Safety Concerns
Key Concepts
Medium-term Risks
Long-term (x-risk)
Learning Resources
Status
Policy
Proposals
Research
- 2022-09: The Alignment Problem from a Deep Learning Perspective
- 2022-12: Discovering Latent Knowledge in Language Models Without Supervision
- 2023-02: Pretraining Language Models with Human Preferences
- 2023-04: Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark
- 2023-05: Model evaluation for extreme risks (DeepMind)
- 2023-05: Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision
- 2023-06: Preference Ranking Optimization for Human Alignment
- 2023-08: Self-Alignment with Instruction Backtranslation
- 2023-11: Debate Helps Supervise Unreliable Experts
- 2023-12: Weak-to-Strong Generalization: Eliciting Strong Capabilities with Weak Supervision (OpenAI, blog)
- 2023-12: Practices for Governing Agentic AI Systems (OpenAI, blog)
- 2024-01: Sleeper Agents: Training Deceptive LLMs that Persist through Safety Training (Anthropic)
- 2024-04: The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions (OpenAI)
- 2024-07: On scalable oversight with weak LLMs judging strong LLMs
- 2024-07: Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? (Dan Hendrycks et al.)
- 2024-08: Tamper-Resistant Safeguards for Open-Weight LLMs (project, code)
- 2024-08: Better Alignment with Instruction Back-and-Forth Translation
- 2024-10: First-Person Fairness in Chatbots (OpenAI, blog)
- 2024-10: Sabotage evaluations for frontier models (Anthropic, blog)
- 2024-12: Alignment Faking in Large Language Models (Anthropic)
- 2024-12: Best-of-N Jailbreaking (code)
- 2024-12: Towards Safe and Honest AI Agents with Neural Self-Other Overlap
- 2024-12: Deliberative Alignment: Reasoning Enables Safer Language Models (OpenAI)
- 2025-01: Trading Inference-Time Compute for Adversarial Robustness (OpenAI, blog)
- 2025-01: Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming (Anthropic, blog)
- 2025-02: Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs (site, github)
- 2025-02: Auditing Prompt Caching in Language Model APIs
- 2025-03: The Alignment Problem from a Deep Learning Perspective
- 2025-03: Auditing language models for hidden objectives (Anthropic, blog)
- 2025-03: Superalignment with Dynamic Human Values
Demonstrations of Negative Use Capabilities
See Also