AI research trends
Novel Tokenization and/or Sampling
- 2024-04: Better & Faster Large Language Models via Multi-token Prediction
- 2024-10: entropix: Entropy Based Sampling and Parallel CoT Decoding (a minimal sketch of the entropy-gated sampling idea appears after this list)
- 2024-10: softmax is not enough (for sharp out-of-distribution)
- 2024-12: I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token
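
The sketch below illustrates the entropy-gated sampling idea referenced in the entropix entry: measure the entropy of the next-token distribution, decode greedily when the model is confident, and sample at a higher temperature when it is not. The threshold, temperatures, and function names are illustrative assumptions, not the entropix implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits into a probability distribution at a given temperature."""
    z = (logits - np.max(logits)) / temperature
    p = np.exp(z)
    return p / p.sum()

def entropy_gated_sample(logits, entropy_threshold=1.0, hot_temperature=1.2, rng=None):
    """Decode greedily when the next-token distribution is sharp (low entropy);
    otherwise sample at a higher temperature. Threshold and temperature values
    are illustrative placeholders, not tuned settings."""
    rng = rng if rng is not None else np.random.default_rng()
    probs = softmax(logits)
    entropy = -np.sum(probs * np.log(probs + 1e-12))  # entropy in nats
    if entropy < entropy_threshold:
        return int(np.argmax(probs))                  # confident: greedy pick
    hot_probs = softmax(logits, temperature=hot_temperature)
    return int(rng.choice(len(hot_probs), p=hot_probs))  # uncertain: explore

# Example: a peaked distribution decodes greedily; a flat one triggers sampling.
print(entropy_gated_sample(np.array([8.0, 1.0, 0.5, 0.2])))   # always index 0
print(entropy_gated_sample(np.array([1.0, 1.1, 0.9, 1.05])))  # sampled index
```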
System 2 Reasoning
See: Increasing AI Agent Intelligence
Memory
Context Length
- 2020: Various ideas for scaling context window, including Longformer
- 2023-April-02: Discussion of ideas for how to scale context window
- 2023-May-11: Anthropic announces 100k window
- 2023-June-07: magic.dev claims 5M tokens coming soon
- 2023-July-05: Microsoft describes LongNet, with a 1-billion-token window
- 2023-July-11: Focused Transformer 256k
- 2023-Nov-06: GPT-4 Turbo 128k
- 2023-Nov-22: Anthropic Claude 2.1 200k
- 2023-Dec-13: Mamba, a state-space-model alternative to attention-based transformers
- 2024-Jan-04: LongLM (Self-Extend) extends the context window without fine-tuning
- 2024-Feb-15: Gemini 1.5 1M tokens
- 2024-Mar-04: Anthropic Claude 3 200k
- 2024-Mar-08: Google claims Gemini 1.5 can scale to 10M
- 2024-Apr-10: Google preprint demonstrates effectively infinite context length using compressive memory (Infini-attention); a simplified sketch appears after this list
- 2024-Apr-12: Meta et al. demonstrate Megalodon, which enables infinite context via a more efficient architecture
- 2024-Apr-14: Google presents TransformerFAM, which uses a feedback loop to attend to its own latent representations, acting as working memory and providing effectively infinite context
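
To make the compressive-memory idea above concrete, here is a simplified sketch in the spirit of linear-attention memories: each segment's key/value pairs are folded into a fixed-size matrix that later queries read back, so memory cost stays constant no matter how long the stream gets. This is an assumption-laden toy that omits the gating, normalization, and local-attention components of the actual papers (e.g., Infini-attention).

```python
import numpy as np

class CompressiveMemory:
    """Fixed-size associative memory: segments of (key, value) pairs are
    compressed into a d x d matrix M plus a normalizer z, so storage does not
    grow with sequence length. Simplified; details differ from the papers."""

    def __init__(self, d_model):
        self.M = np.zeros((d_model, d_model))  # associative memory matrix
        self.z = np.zeros(d_model)             # running key normalizer

    @staticmethod
    def _phi(x):
        """Simple non-negative feature map (ELU + 1) applied to keys/queries."""
        return np.where(x > 0, x + 1.0, np.exp(x))

    def write(self, keys, values):
        """Fold a segment of keys/values into the fixed-size memory."""
        k = self._phi(keys)                    # (n, d)
        self.M += k.T @ values                 # accumulate key-value associations
        self.z += k.sum(axis=0)

    def read(self, queries):
        """Retrieve approximate values for new queries from the compressed memory."""
        q = self._phi(queries)                 # (m, d)
        denom = q @ self.z + 1e-6              # (m,)
        return (q @ self.M) / denom[:, None]   # (m, d)

# Example: stream two segments into memory, then query it.
rng = np.random.default_rng(0)
mem = CompressiveMemory(d_model=16)
for _ in range(2):
    seg_k, seg_v = rng.normal(size=(128, 16)), rng.normal(size=(128, 16))
    mem.write(seg_k, seg_v)
print(mem.read(rng.normal(size=(4, 16))).shape)  # (4, 16)
```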
Retrieval beyond RAG
See also: AI tools: Retrieval Augmented Generation (RAG)
- 2024-12: Let your LLM generate a few tokens and you will reduce the need for retrieval (a sketch of the broader "retrieve only when uncertain" pattern appears after this list)
- 2024-12: RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation
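
The first entry above suggests that a cheap generation pass can signal whether retrieval is needed at all. The sketch below captures that general "retrieve only when uncertain" pattern under assumed interfaces (generate_draft, retrieve, and generate_answer are hypothetical callables, and the log-probability threshold is arbitrary); it is not the specific method of either cited paper.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class DraftResult:
    text: str
    avg_logprob: float  # mean per-token log-probability of the draft

def answer_with_lazy_retrieval(
    question: str,
    generate_draft: Callable[..., DraftResult],
    retrieve: Callable[..., Sequence[str]],
    generate_answer: Callable[..., str],
    confidence_threshold: float = -1.0,
) -> str:
    """Generate a short draft first; only invoke the retriever when the draft
    looks uncertain (low average log-probability)."""
    draft = generate_draft(question, max_tokens=16)
    if draft.avg_logprob >= confidence_threshold:
        # Model seems confident: answer directly and skip retrieval cost.
        return generate_answer(question, context=None)
    # Low confidence: fetch supporting documents and ground the answer in them.
    docs = retrieve(question, top_k=5)
    return generate_answer(question, context=docs)

# Toy wiring with stand-in callables, just to show the control flow.
if __name__ == "__main__":
    draft_fn = lambda q, max_tokens: DraftResult(text="Paris", avg_logprob=-0.2)
    retrieve_fn = lambda q, top_k: ["(retrieved snippets)"]
    answer_fn = lambda q, context: f"answer to {q!r} (context: {context})"
    print(answer_with_lazy_retrieval("What is the capital of France?",
                                     draft_fn, retrieve_fn, answer_fn))
```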
Working Memory
Episodic Memory
Neural (non-token) Latent Representation
- 2024-11: Microsoft: DroidSpeak: KV Cache Sharing for Cross-LLM Communication and Multi-LLM Serving: LLMs invent their own inter-communication language
- 2024-12: Meta: Training Large Language Models to Reason in a Continuous Latent Space: feeds the latent representation directly back into the model instead of tokenizing intermediate thoughts (Chain of Continuous Thought, a.k.a. Coconut); a toy sketch appears after this list
- 2024-12: Meta: Large Concept Models: Language Modeling in a Sentence Representation Space: trains a model that operates at a higher level of abstraction than typical word/token LLMs, working in a space of concept embeddings (closer to full sentences than individual words)
- 2024-12: Meta: Byte Latent Transformer: Patches Scale Better Than Tokens: instead of tokenization, the input byte stream is dynamically grouped into patches, yielding gains in compute efficiency with minimal loss in performance
- 2024-12: Compressed Chain of Thought: Efficient Reasoning Through Dense Representations
- 2024-12: Google DeepMind: Deliberation in Latent Space via Differentiable Cache Augmentation
- 2024-12: LANG-JEPA: Learning to Think in Latent Space
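
As a toy illustration of the Coconut-style "continuous thought" loop noted above: during latent reasoning steps the model's hidden state is fed back directly as the next input, skipping tokenization, and token decoding only resumes afterward. The tiny random "model" below is purely illustrative (the cited papers use trained transformers and specific training curricula); all weights and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, vocab_size = 32, 100

# Stand-ins for a trained model's weights (random here, for illustration only).
embed = rng.normal(scale=0.1, size=(vocab_size, d_model))   # token -> embedding
W_h = rng.normal(scale=0.1, size=(d_model, d_model))        # hidden-state update
W_out = rng.normal(scale=0.1, size=(d_model, vocab_size))   # hidden -> logits

def step(hidden, input_vec):
    """One toy forward step: mix the previous hidden state with the input."""
    return np.tanh(hidden @ W_h + input_vec)

def generate_with_latent_thoughts(prompt_ids, num_latent_steps=4, num_output_tokens=3):
    """Run a few 'thought' steps where the hidden state is fed back directly
    (no tokenization of intermediate thoughts), then resume token decoding."""
    hidden = np.zeros(d_model)
    for tok in prompt_ids:                # consume the prompt as usual
        hidden = step(hidden, embed[tok])
    for _ in range(num_latent_steps):     # continuous latent "thoughts":
        hidden = step(hidden, hidden)     # feed the hidden state back as the input
    outputs = []
    for _ in range(num_output_tokens):    # switch back to discrete tokens
        tok = int(np.argmax(hidden @ W_out))
        outputs.append(tok)
        hidden = step(hidden, embed[tok])
    return outputs

print(generate_with_latent_thoughts([5, 17, 42]))
```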