AI compute
Revision as of 14:27, 20 June 2025
Contents
Cloud GPU
Cloud Training Compute
Cloud LLM Routers & Inference Providers
- OpenRouter (open and closed models, no Enterprise tier)
- LiteLLM (closed models, Enterprise tier)
- CentML (open models, Enterprise tier)
- Fireworks AI (open models, Enterprise tier)
- Abacus AI (open and closed models, Enterprise tier)
- Portkey (open? and closed models, Enterprise tier)
- Together AI (open models, Enterprise tier)
- Hyperbolic AI (open models, Enterprise tier)
- Hugging Face Inference Providers Hub
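Most of the routers above (OpenRouter, LiteLLM, Together AI, Fireworks AI, etc.) expose an OpenAI-compatible chat-completions endpoint, so the same request shape works across providers by swapping the base URL and model name. A minimal sketch, assuming an OpenRouter-style base URL; the model identifier below is illustrative, not an endorsement of a specific listing:

```python
def build_chat_request(model, user_message,
                       base_url="https://openrouter.ai/api/v1"):
    """Build an OpenAI-compatible chat-completion request for a router.

    Returns the endpoint URL and JSON payload; sending it (and supplying the
    provider's API key) is left to the caller, so the same payload can be
    pointed at any OpenAI-compatible inference provider.
    """
    return {
        "url": f"{base_url}/chat/completions",
        "payload": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }
```

To actually send it, POST the payload with an `Authorization: Bearer <key>` header using any HTTP client (e.g. `requests.post(req["url"], json=req["payload"], headers=...)`).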
Multi-model with Model Selection
Multi-model Web Chat Interfaces
Multi-model Web Playground Interfaces
Local Router
Acceleration Hardware
- Nvidia GPUs
- Google TPU
- Etched: Transformer ASICs
- Cerebras
- Untether AI
- Graphcore
- SambaNova Systems
- Groq
- Tesla Dojo
- Deep Silicon: Combined hardware/software solution for accelerated AI (e.g. ternary math)
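The "ternary math" mentioned for Deep Silicon refers to constraining weights to {-1, 0, +1}, which replaces multiplications with additions, subtractions, and skips. Deep Silicon's actual scheme is not public; the sketch below uses generic absmean scaling (the approach popularized by BitNet b1.58) purely as an illustration:

```python
def ternary_quantize(weights):
    """Quantize a list of floats to ternary values {-1, 0, +1} plus a scale.

    Absmean scaling: divide by the mean absolute weight, round, then clip to
    [-1, 1]. With ternary weights, a dot product needs no multiplies, only
    adds/subtracts of the activations (zeros are skipped entirely).
    """
    scale = sum(abs(w) for w in weights) / len(weights)
    if scale == 0:
        scale = 1.0  # all-zero weights: avoid division by zero
    ternary = [max(-1, min(1, round(w / scale))) for w in weights]
    return ternary, scale
```

Dequantization is `scale * t` per weight, so storage drops to ~1.58 bits per weight while retaining a single float scale per tensor.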
Energy Use
- 2021-04: Carbon Emissions and Large Neural Network Training (https://arxiv.org/abs/2104.10350)
- 2023-10: From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference (https://arxiv.org/abs/2310.03003)
- 2024-01: Electricity 2024: Analysis and forecast to 2026 (https://iea.blob.core.windows.net/assets/6b2fd954-2017-408e-bf08-952fdd62118a/Electricity2024-Analysisandforecastto2026.pdf)
- 2024-02: The carbon emissions of writing and illustrating are lower for AI than for humans (https://www.nature.com/articles/s41598-024-54271-x)
- 2025-04: Why using ChatGPT is not bad for the environment - a cheat sheet (https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about)