Talk:AI predictions
KevinYager (talk | contribs)
Revision as of 09:55, 10 May 2025
==Skeptical==
===Gary Marcus predictions===
* 2022-03: [https://nautil.us/deep-learning-is-hitting-a-wall-238440/ Deep Learning Is Hitting a Wall: What would it take for artificial intelligence to make real progress?]
* 2024-03: [https://garymarcus.substack.com/p/two-years-later-deep-learning-is Two years later, deep learning is still faced with the same fundamental challenges]
* 2024-04: [https://garymarcus.substack.com/p/evidence-that-llms-are-reaching-a Evidence that LLMs are reaching a point of diminishing returns — and what that might mean]

==Random Essays==
* [https://sohl-dickstein.github.io/ Jascha's Blog]
** 2022-11: [https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart.html Too much efficiency makes everything worse: overfitting and the strong version of Goodhart's law]
** 2023-03: [https://sohl-dickstein.github.io/2023/03/09/coherence.html The hot mess theory of AI misalignment: More intelligent agents behave less coherently]
** 2023-09: [https://sohl-dickstein.github.io/2023/09/10/diversity-ai-risk.html Brain dump on the diversity of AI risk]
** 2024-02: [https://sohl-dickstein.github.io/2024/02/12/fractal.html Neural network training makes beautiful fractals]