Difference between revisions of "AI tutorials"
KevinYager (talk | contribs) (→Visualizations)
* [https://moebio.com/mind/ Phrase completion]
* Karpathy: [https://colab.research.google.com/drive/1SVS-ALf9ToN6I6WmJno5RQkZEHFhaykJ#scrollTo=57wUOMOhaL2y Tiktoken Emoji] 
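The Tiktoken Emoji notebook above illustrates why a single emoji often becomes several tokens: byte-level BPE tokenizers such as tiktoken operate on UTF-8 bytes, not characters. A minimal stdlib-only sketch of the starting point (before any BPE merges, which the notebook itself demonstrates):

```python
# A single emoji is one Python character but four UTF-8 bytes;
# a byte-level BPE tokenizer begins from these bytes and then
# applies learned merges, so the emoji may span multiple tokens.
text = "🤖"  # U+1F916
raw_bytes = text.encode("utf-8")

print(len(text))        # 1 character
print(len(raw_bytes))   # 4 bytes
print(list(raw_bytes))  # [240, 159, 164, 150]
```

Whether those four bytes end up as one token or several depends on the tokenizer's merge table, which is what the linked notebook explores interactively.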

==LLM==
* [https://aman.ai/ Aman AI]: [https://aman.ai/primers/ai/LLM/ Overview of Large Language Models]
==Other Visualizations==
* [https://pytorch.org/blog/inside-the-matrix/ Inside the Matrix: Visualizing Matrix Multiplication, Attention and Beyond]
* [https://sohl-dickstein.github.io/2024/02/12/fractal.html Neural network training makes beautiful fractals]
Revision as of 08:55, 29 December 2024
==General==

==Loss Functions==

==Transformer==
* Wolfram: What Is ChatGPT Doing … and Why Does It Work?
* The Illustrated Transformer
* Transformers Explained Visually — Not Just How, but Why They Work So Well
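The transformer tutorials above all center on scaled dot-product attention, softmax(QKᵀ/√d)·V. A minimal NumPy sketch (shapes and seed are illustrative assumptions, not from any one tutorial):

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the row max for numerical stability
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # (n_q, n_k) similarity matrix
    weights = softmax(scores, axis=-1)  # each query's weights sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 queries, dimension 4
K = rng.standard_normal((3, 4))  # 3 keys
V = rng.standard_normal((3, 4))  # 3 values

out, w = attention(Q, K, V)
print(out.shape)        # (3, 4): one mixed value vector per query
print(w.sum(axis=-1))   # each row of weights sums to 1
```

Each output row is a convex combination of the value vectors, which is exactly the mixing step that the visualizations linked above animate.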