AI research trends
Neural (non-token) Latent Representation
- 2024-11: Microsoft: [https://arxiv.org/abs/2411.02820 DroidSpeak: KV Cache Sharing for Cross-LLM Communication and Multi-LLM Serving]: instead of exchanging natural-language text, cooperating LLMs communicate by sharing KV caches directly, avoiding redundant re-encoding of shared context (see the first sketch after this list)
- 2024-12: Meta: Training Large Language Models to Reason in a Continuous Latent Space: the model's last hidden state is fed directly back in as the next input embedding, rather than being decoded into a token, so intermediate reasoning steps stay in latent space (Chain of Continuous Thought, a.k.a. Coconut; see the second sketch after this list)
- 2024-12: Meta: Large Concept Models: Language Modeling in a Sentence Representation Space: train a model that operates at a higher level of abstraction than typical word/token LLMs; the model predicts sequences of concept embeddings, each closer to a full sentence than to an individual word (see the third sketch after this list)
- 2024-12: Meta: Byte Latent Transformer: Patches Scale Better Than Tokens: instead of tokenization, dynamically convert the input byte stream into patches sized by next-byte entropy, yielding gains in compute efficiency with minimal loss in performance (see the fourth sketch after this list)
- 2024-12: Google DeepMind: Deliberation in Latent Space via Differentiable Cache Augmentation: a trained coprocessor augments a frozen LLM's kv-cache with latent embeddings, letting the model deliberate in latent space before decoding
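
To make these mechanisms concrete, the sketches below illustrate them in simplified form. First, cross-model KV-cache reuse in the spirit of DroidSpeak: a minimal sketch assuming two Hugging Face causal LMs fine-tuned from the same base (so their cache shapes match). The checkpoints named are placeholders, and the paper's layer-selective reuse (reusing some layers' caches while recomputing others) is not shown.

```python
# Minimal sketch of cross-model KV-cache sharing, under the assumption that
# both models share an architecture (e.g., two fine-tunes of one base model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")             # placeholder checkpoints;
model_a = AutoModelForCausalLM.from_pretrained("gpt2")  # in practice these would
model_b = AutoModelForCausalLM.from_pretrained("gpt2")  # be two fine-tunes of one base

context = tok("Shared conversation context goes here.", return_tensors="pt")
with torch.no_grad():
    # The sender model prefills the shared context once...
    cache = model_a(**context, use_cache=True).past_key_values
    # ...and the receiver continues directly from that cache,
    # skipping re-encoding of the context as text.
    query = tok(" Receiver's continuation:", return_tensors="pt")
    out = model_b(input_ids=query.input_ids, past_key_values=cache, use_cache=True)
print(out.logits.shape)  # logits over the receiver's new tokens only
```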
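Second, the Coconut continuous-thought loop: a minimal sketch assuming a Hugging Face-style causal LM whose forward pass accepts inputs_embeds and can return hidden states (model and tokenizer are placeholders; the paper's training curriculum is not shown).

```python
import torch

@torch.no_grad()
def generate_with_latent_thoughts(model, tokenizer, prompt,
                                  n_latent_steps=4, max_new_tokens=32):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    embeds = model.get_input_embeddings()(ids)           # (1, T, d)

    # Latent phase: feed the last hidden state straight back in as the next
    # input embedding, instead of projecting to logits and sampling a token.
    # (Assumes hidden size == embedding size, as in standard decoder-only LMs.)
    for _ in range(n_latent_steps):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][:, -1:, :]   # (1, 1, d)
        embeds = torch.cat([embeds, last_hidden], dim=1)

    # Token phase: resume ordinary greedy decoding from the augmented context.
    # (Full forward pass each step for clarity; real code would reuse the KV cache.)
    generated = []
    for _ in range(max_new_tokens):
        out = model(inputs_embeds=embeds)
        next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated.append(next_id.item())
        embeds = torch.cat([embeds, model.get_input_embeddings()(next_id)], dim=1)
    return tokenizer.decode(generated)

# Hypothetical usage with any HF causal LM:
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   model = AutoModelForCausalLM.from_pretrained("gpt2")
#   tokenizer = AutoTokenizer.from_pretrained("gpt2")
#   print(generate_with_latent_thoughts(model, tokenizer, "2+2="))
```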
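Third, concept-level modeling: a toy next-sentence-embedding predictor. This uses sentence-transformers as a stand-in for the paper's SONAR encoder, and a plain MSE regression objective; the actual Large Concept Model architectures (including diffusion-based variants) are more elaborate.

```python
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # frozen sentence encoder (stand-in)
d = encoder.get_sentence_embedding_dimension()      # 384 for this model

# A small causal transformer that predicts the next *sentence* embedding.
concept_lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d, d)

sentences = ["The sky darkened.", "Thunder rolled in.", "Rain began to fall."]
with torch.no_grad():
    concepts = torch.tensor(encoder.encode(sentences)).unsqueeze(0)  # (1, S, d)

# Causal mask so each position only sees earlier concepts; one illustrative
# loss step is shown, with the training loop omitted.
mask = nn.Transformer.generate_square_subsequent_mask(concepts.size(1))
pred = head(concept_lm(concepts, mask=mask))
loss = nn.functional.mse_loss(pred[:, :-1], concepts[:, 1:])
print(loss.item())
```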
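Fourth, entropy-based patching as in the Byte Latent Transformer: a new patch begins wherever the next byte is hard to predict, so unpredictable regions get more compute. The order-0 frequency "entropy model" below is a crude stand-in for the small byte-level LM the paper trains; surprisal is used as a proxy for the learned next-byte entropy.

```python
import math
from collections import Counter

def byte_surprisals(data: bytes) -> list[float]:
    # Stand-in "entropy model": order-0 frequency estimate over the stream.
    counts = Counter(data)
    total = len(data)
    probs = {b: c / total for b, c in counts.items()}
    return [-math.log2(probs[b]) for b in data]      # surprisal (bits) per byte

def entropy_patches(data: bytes, threshold: float = 6.0) -> list[bytes]:
    patches, start = [], 0
    for i, h in enumerate(byte_surprisals(data)):
        if h > threshold and i > start:              # high surprise: cut a new patch
            patches.append(data[start:i])
            start = i
    patches.append(data[start:])
    return patches

text = "the quick brown fox jumps over the lazy dog".encode()
print([p.decode() for p in entropy_patches(text, threshold=5.0)])
```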