AI predictions

==Surveys of Opinions/Predictions==

** 2023-03: [https://aiimpacts.org/scoring-forecasts-from-the-2016-expert-survey-on-progress-in-ai/ Scoring forecasts from the 2016 “Expert Survey on Progress in AI”]
* 2022-10: Forecasting Research Institute: [https://forecastingresearch.org/near-term-xpt-accuracy Assessing Near-Term Accuracy in the Existential Risk Persuasion Tournament]
** 2025-09: Ethan Mollick: [https://x.com/emollick/status/1962859757674344823 Progress is ahead of expectations]
* 2023-08: [https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai 2023 Expert Survey on Progress in AI]
* 2024-01: [https://arxiv.org/abs/2401.02843 Thousands of AI Authors on the Future of AI]

==Bad Outcomes==

* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
* 2025-04: [https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power AI-Enabled Coups: How a Small Group Could Use AI to Seize Power]
* 2025-09: [https://doctrines.ai/ The three main doctrines on the future of AI]
** '''Dominance doctrine:''' The first actor to create advanced AI will attain overwhelming strategic superiority
** '''Extinction doctrine:''' Humanity will lose control of ASI, leading to extinction or permanent disempowerment
** '''Replacement doctrine:''' AI will automate human tasks, but without fundamentally reshaping or ending civilization

==Intelligence Explosion==
 