=Novel Tokenization and/or Sampling=
* 2024-04: [https://arxiv.org/abs/2404.19737 Better & Faster Large Language Models via Multi-token Prediction]
* 2024-10: [https://github.com/xjdr-alt/entropix entropix: Entropy Based Sampling and Parallel CoT Decoding] (illustrative sketch below)
* 2024-10: [https://arxiv.org/abs/2410.01104 softmax is not enough (for sharp out-of-distribution)]
* 2024-12: [https://arxiv.org/abs/2412.06676 I Don't Know: Explicit Modeling of Uncertainty with an <nowiki>[IDK]</nowiki> Token]

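The entropix entry above describes entropy-based sampling. As a rough illustration of the general idea (not the entropix codebase itself), the following Python sketch measures the entropy of the next-token distribution and switches between greedy decoding and higher-temperature exploration; the thresholds and the <code>adaptive_sample</code> helper are illustrative placeholders.
<syntaxhighlight lang="python">
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution at the given temperature."""
    z = (logits - logits.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())

def adaptive_sample(logits, rng, low=0.5, high=3.0):
    """Illustrative entropy-gated sampling:
    - low entropy  -> the model is confident, take the argmax;
    - high entropy -> the model is unsure, sample at a higher temperature
                      (a crude stand-in for branching / parallel decoding);
    - otherwise    -> ordinary temperature-1 sampling.
    Thresholds are arbitrary placeholders, not values from entropix."""
    probs = softmax(logits)
    h = entropy(probs)
    if h < low:
        return int(np.argmax(probs)), "greedy"
    if h > high:
        probs = softmax(logits, temperature=1.5)
        return int(rng.choice(len(probs), p=probs)), "explore"
    return int(rng.choice(len(probs), p=probs)), "sample"

# Toy usage with random logits standing in for a language-model head.
rng = np.random.default_rng(0)
logits = rng.normal(size=50)
token_id, mode = adaptive_sample(logits, rng)
print(token_id, mode)
</syntaxhighlight>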

=System 2 Reasoning=
See: [[AI_Agents#Increasing_AI_Agent_Intelligence|Increasing AI Agent Intelligence]]

=Memory=
==Context Length==
* 2020: [https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html Various ideas] for scaling the context window, including [https://arxiv.org/abs/2004.05150 Longformer]
* 2023-April-02: [https://hazyresearch.stanford.edu/blog/2023-03-27-long-learning Discussion] of ideas for how to scale the context window
* 2023-May-11: Anthropic announces a 100k-token context window
* 2023-June-07: [https://magic.dev/ magic.dev] claims [https://magic.dev/blog/ltm-1 5M tokens coming soon]
* 2023-July-05: Microsoft describes [https://arxiv.org/abs/2307.02486 LongNet], with a 1-billion-token window
* 2023-July-11: [https://arxiv.org/abs/2307.03170 Focused Transformer] 256k
* 2023-Nov-06: [https://openai.com/blog/new-models-and-developer-products-announced-at-devday GPT-4 turbo] 128k
* 2023-Nov-22: [https://techcrunch.com/2023/11/21/anthropic-claude-2-1/ Anthropic Claude 2.1] 200k
* 2023-Dec-13: [https://arxiv.org/abs/2312.00752 Mamba], a state-space alternative to transformer attention
* 2024-Jan-04: [https://arxiv.org/abs/2401.01325 LongLM], which extends the context window without fine-tuning
* 2024-Feb-15: [https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#architecture Gemini 1.5] 1M tokens
* 2024-Mar-04: [https://www.anthropic.com/news/claude-3-family Anthropic Claude 3] 200k
* 2024-Mar-08: [https://arxiv.org/abs/2403.05530 Google claims] Gemini 1.5 can scale to 10M tokens
* 2024-Apr-10: Google [https://arxiv.org/abs/2404.07143 preprint] demonstrates effectively infinite context length using compressive memory (simplified sketch after this list)
* 2024-Apr-12: Meta et al. demonstrate [https://arxiv.org/abs/2404.08801 Megalodon], which enables unlimited context via a more efficient architecture
* 2024-Apr-14: Google presents [https://arxiv.org/abs/2404.09173 TransformerFAM], which leverages a feedback loop so that the model attends to its own latent representations, acting as working memory and providing effectively infinite context

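The 2024-Apr-10 entry relies on compressive memory to reach effectively infinite context. The code below is a simplified sketch of that general idea, not the paper's exact formulation: each past segment's keys and values are folded into a fixed-size associative matrix, so memory cost stays constant no matter how many segments have been seen. The <code>CompressiveMemory</code> class name and the ELU+1 feature map are illustrative choices.
<syntaxhighlight lang="python">
import numpy as np

def elu_plus_one(x):
    """Positive feature map used for linear-attention-style memory."""
    return np.where(x > 0, x + 1.0, np.exp(x))

class CompressiveMemory:
    """Simplified sketch of compressive memory for long context: segments
    are compressed into a fixed-size matrix M plus a normalizer z, following
    the linear-attention recurrence M += sigma(K)^T V (details differ from
    the actual paper)."""
    def __init__(self, dim):
        self.M = np.zeros((dim, dim))   # associative memory matrix
        self.z = np.zeros(dim)          # normalization statistics

    def update(self, K, V):
        """Compress a segment's keys/values into the memory."""
        sK = elu_plus_one(K)
        self.M += sK.T @ V
        self.z += sK.sum(axis=0)

    def retrieve(self, Q, eps=1e-6):
        """Read from memory with queries from the current segment."""
        sQ = elu_plus_one(Q)
        return (sQ @ self.M) / (sQ @ self.z + eps)[:, None]

# Toy usage: stream "past" segments into memory, then let the current
# segment's queries read from it.
rng = np.random.default_rng(0)
dim, seg_len = 16, 32
mem = CompressiveMemory(dim)
for _ in range(10):                      # ten past segments
    K = rng.normal(size=(seg_len, dim))
    V = rng.normal(size=(seg_len, dim))
    mem.update(K, V)
Q = rng.normal(size=(seg_len, dim))      # current segment's queries
context_from_memory = mem.retrieve(Q)    # shape (seg_len, dim)
print(context_from_memory.shape)
</syntaxhighlight>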

==Retrieval beyond RAG==
See also: [[AI_tools#Retrieval_Augmented_Generation_.28RAG.29|AI tools: Retrieval Augmented Generation (RAG)]]
* 2024-12: [https://arxiv.org/abs/2412.11536 Let your LLM generate a few tokens and you will reduce the need for retrieval] (toy sketch below)
* 2024-12: [https://arxiv.org/abs/2412.11919 RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation]

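The title of the first entry suggests letting the model generate a few tokens before deciding whether retrieval is needed at all. The sketch below is a toy interpretation of that idea, not the paper's method: it probes the model's own token-level confidence and only falls back to retrieval when confidence is low. <code>generate_with_confidence</code> and <code>retrieve</code> are hypothetical placeholder callables, not APIs from the cited works.
<syntaxhighlight lang="python">
from typing import Callable, List, Tuple

def confidence_gated_answer(
    question: str,
    generate_with_confidence: Callable[[str], Tuple[str, List[float]]],
    retrieve: Callable[[str], str],
    min_confidence: float = 0.5,
) -> str:
    """Toy sketch of retrieval gated on the model's own confidence.
    `generate_with_confidence` is a hypothetical wrapper around an LLM that
    returns generated text plus per-token probabilities; `retrieve` is a
    hypothetical document search."""
    # 1. Let the model start answering from its parametric knowledge.
    draft, token_probs = generate_with_confidence(question)
    # 2. If the first few tokens look confident, skip retrieval entirely.
    probe = token_probs[:5]
    if probe and min(probe) >= min_confidence:
        return draft
    # 3. Otherwise fall back to standard retrieval-augmented generation.
    evidence = retrieve(question)
    answer, _ = generate_with_confidence(f"{evidence}\n\nQuestion: {question}")
    return answer
</syntaxhighlight>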

==Working Memory==
* 2024-12: [https://www.arxiv.org/abs/2412.18069 Improving Factuality with Explicit Working Memory]

==Episodic Memory==
* 2024-03: [https://arxiv.org/abs/2403.11901 Larimar: Large Language Models with Episodic Memory Control]

=Neural (non-token) Latent Representation=
* 2024-12: Meta: [https://arxiv.org/abs/2412.08821 Large Concept Models: Language Modeling in a Sentence Representation Space]: train a model that operates at a higher level of abstraction than typical word/token LLMs; the model operates in a space of concept embeddings (more akin to full sentences than individual words)
* 2024-12: Meta: [https://ai.meta.com/research/publications/byte-latent-transformer-patches-scale-better-than-tokens/ Byte Latent Transformer: Patches Scale Better Than Tokens]: instead of tokenization, dynamically convert the input byte stream into patches, yielding gains in compute efficiency with minimal loss in performance (toy sketch after this list)
* 2024-12: [https://arxiv.org/abs/2412.13171 Compressed Chain of Thought: Efficient Reasoning Through Dense Representations]
* 2024-12: Google DeepMind: [https://arxiv.org/abs/2412.17747 Deliberation in Latent Space via Differentiable Cache Augmentation]
* 2024-12: [https://github.com/jerber/lang-jepa LANG-JEPA: Learning to Think in Latent Space]
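The Byte Latent Transformer entry describes replacing tokenization with dynamically sized byte patches. The sketch below is a toy illustration of one way such patching could work, not the paper's algorithm: a new patch starts whenever the next byte is "surprising" according to some scoring function (the paper uses a small byte-level language model for this; here <code>surprise</code> is an arbitrary placeholder callable).
<syntaxhighlight lang="python">
from typing import Callable, List

def patch_bytes(data: bytes, surprise: Callable[[int], float],
                threshold: float, max_patch: int = 16) -> List[bytes]:
    """Toy sketch of dynamic byte patching: start a new patch whenever the
    next byte is "surprising" (or the current patch gets too long), so easy
    spans are grouped into long patches and hard spans into short ones."""
    patches, current = [], bytearray()
    for b in data:
        if current and (surprise(b) > threshold or len(current) >= max_patch):
            patches.append(bytes(current))
            current = bytearray()
        current.append(b)
    if current:
        patches.append(bytes(current))
    return patches

# Toy usage: pretend lowercase letters and spaces are predictable, and
# everything else is surprising enough to start a new patch.
surprise = lambda b: 0.0 if chr(b).islower() or b == 0x20 else 1.0
print(patch_bytes(b"mostly easy text THEN a HARD span 123", surprise, threshold=0.5))
</syntaxhighlight>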
