Difference between revisions of "AI compute"
KevinYager (talk | contribs) (→Cloud LLM Routers & Inference Providers)

Line 21:
* [https://simtheory.ai/ SimTheory]
* [https://abacus.ai/ Abacus AI]
+
+ ===Multi-model Web Playground Interfaces===
+ * [https://www.together.ai/ Together AI]
+ * [https://hyperbolic.xyz/ Hyperbolic AI]
==Acceleration Hardware==
Revision as of 14:57, 9 February 2025
Contents
Cloud GPU
Cloud Training Compute
Cloud LLM Routers & Inference Providers
- OpenRouter
- LiteLLM
- CentML
- Fireworks AI
- Hugging Face Inference Providers Hub
Multi-model Web Chat Interfaces
Multi-model Web Playground Interfaces
Acceleration Hardware
- Nvidia GPUs
- Google TPU
- Etched: Transformer ASICs
- Cerebras
- Untether AI
- Graphcore
- SambaNova Systems
- Groq
- Tesla Dojo
- Deep Silicon: Combined hardware/software solution for accelerated AI (e.g. ternary math)
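
Most of the routers and inference providers listed above (e.g. OpenRouter, Together AI) expose OpenAI-compatible chat-completion endpoints, so a single client can target any of them by swapping the base URL. The sketch below is illustrative only and assumes the <code>openai</code> Python package; the base URL, model identifier, and API-key placeholder are examples, not taken from this page, so consult each provider's documentation for its actual endpoint and model naming.

<syntaxhighlight lang="python">
from openai import OpenAI

# Illustrative values (assumptions): replace with the endpoint, key, and
# model identifier of whichever router/provider you are using.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # e.g. OpenRouter; other providers use their own base URL
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct",  # provider-specific model ID
    messages=[{"role": "user", "content": "Summarize what an LLM router does."}],
)
print(response.choices[0].message.content)
</syntaxhighlight>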