AI safety
=Learning Resources=
* [https://deepmindsafetyresearch.medium.com/introducing-our-short-course-on-agi-safety-1072adb7912c DeepMind short course on AGI safety]
* [https://aisafety.info/ AI Safety FAQ]

=Research=
=Description of Safety Concerns=
==Medium-term Risks==
* 2023-04: "A.I. Dilemma" – Tristan Harris and Aza Raskin (video) (.website-files.com/5f0e1294f002b1bb26e1f304/64224a9051a6637c1b60162a_65-your-undivided-attention-The-AI-Dilemma-transcript.pdf podcast transcript): raises concerns about humanity's ability to handle these transformations
* 2023-04: Daniel Schmachtenberger and Liv Boeree (video): AI could accelerate perverse social dynamics