Difference between revisions of "AI predictions"

From GISAXS
 
** [https://www.nationalsecurity.ai/chapter/conclusion Conclusion]
 
 
** [https://www.nationalsecurity.ai/chapter/appendix Appendix FAQs]
 
* Anthony Aguirre: [https://keepthefuturehuman.ai/ Keep The Future Human] ([https://keepthefuturehuman.ai/essay/ essay])
** [https://www.youtube.com/watch?v=zeabrXV8zNE The 4 Rules That Could Stop AI Before It’s Too Late (video)] (2025)
**# Oversight: registration required for training >10<sup>25</sup> FLOP and for inference >10<sup>19</sup> FLOP/s (roughly 1,000 B200 GPUs, ~$25M); build cryptographic licensing into hardware.
**# Computation limits: ban training runs >10<sup>27</sup> FLOP and inference >10<sup>20</sup> FLOP/s.
**# Strict liability: hold AI companies legally responsible for harms caused by their systems.
**# Tiered regulation: light regulation for tool-AI, strictest regulation for AGI (systems that are simultaneously general, capable, and autonomous).
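The oversight and ban thresholds above can be sketched as a simple classifier. This is an illustrative sketch only: the per-GPU throughput of ~10<sup>16</sup> FLOP/s for a B200-class accelerator is an assumption made here to match the "~1,000 B200 GPUs ≈ 10<sup>19</sup> FLOP/s" figure, not a number from the source.

```python
# Proposed thresholds from "Keep The Future Human" (illustrative sketch).
TRAIN_REGISTER = 1e25  # FLOP: training above this requires registration
INFER_REGISTER = 1e19  # FLOP/s: inference above this requires registration
TRAIN_BAN = 1e27       # FLOP: training above this is banned
INFER_BAN = 1e20       # FLOP/s: inference above this is banned

def tier(train_flop: float, infer_flops: float) -> str:
    """Classify a system against the proposed compute thresholds."""
    if train_flop > TRAIN_BAN or infer_flops > INFER_BAN:
        return "prohibited"
    if train_flop > TRAIN_REGISTER or infer_flops > INFER_REGISTER:
        return "registration required"
    return "unrestricted"

# ~1,000 B200-class GPUs at an assumed 1e16 FLOP/s each ≈ 1e19 FLOP/s.
print(tier(train_flop=3e25, infer_flops=1_000 * 1e16))  # registration required
```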
  
 
=See Also=

* [[AI safety]]

Latest revision as of 08:32, 13 March 2025

=Contents=

* AGI Achievable
* AGI Definition
* Economic and Political
* Job Loss
* Overall
* Surveys of Opinions/Predictions
* Bad Outcomes
* Psychology
* Science & Technology Improvements
* Plans
* Philosophy
* Alignment
* Strategic/Policy
* See Also