==Context Length==
 
* 2020: [https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html Various ideas] for scaling the context window, including [https://arxiv.org/abs/2004.05150 Longformer]
* 2023-04-02: [https://hazyresearch.stanford.edu/blog/2023-03-27-long-learning Discussion] of ideas for how to scale the context window
* 2023-05-11: Anthropic announces a 100k context window
* 2023-06-07: [https://magic.dev/ magic.dev] claims [https://magic.dev/blog/ltm-1 5M tokens coming soon]
* 2023-07-05: Microsoft describes [https://arxiv.org/abs/2307.02486 LongNet], with a 1-billion-token window
* 2023-07-11: [https://arxiv.org/abs/2307.03170 Focused Transformer] 256k
* 2023-11-06: [https://openai.com/blog/new-models-and-developer-products-announced-at-devday GPT-4 Turbo] 128k
* 2023-11-22: [https://techcrunch.com/2023/11/21/anthropic-claude-2-1/ Anthropic Claude 2.1] 200k
* 2023-12-13: [https://arxiv.org/abs/2312.00752 Mamba], an alternative (state-space) architecture
* 2024-01-04: [https://arxiv.org/abs/2401.01325 LongLM] to extend the context window
* 2024-02-15: [https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#architecture Gemini 1.5] 1M tokens
* 2024-03-04: [https://www.anthropic.com/news/claude-3-family Anthropic Claude 3] 200k
* 2024-03-08: [https://arxiv.org/abs/2403.05530 Google claims] Gemini 1.5 can scale to 10M
* 2024-04-10: Google [https://arxiv.org/abs/2404.07143 preprint] demonstrates infinite context length by using compressive memory
* 2024-04-12: Meta et al. demonstrate [https://arxiv.org/abs/2404.08801 Megalodon], which enables infinite context via a more efficient architecture
* 2024-04-14: Google presents [https://arxiv.org/abs/2404.09173 TransformerFAM], which uses a feedback loop so that the model attends to its own latent representations, acting as working memory and providing effectively infinite context (see the sketch after this list)
* [https://x.com/MiniMax__AI/status/1879226391352549451 2025-01-14]: [https://www.minimaxi.com/en/news/minimax-01-series-2 MiniMax-01] 4M ([https://www.minimaxi.com/en/news/minimax-01-series-2 paper])
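The compressive-memory and feedback-loop entries above (the 2024-04-10 preprint and TransformerFAM) share a common pattern: process the sequence in fixed-size segments while carrying a small, bounded set of memory vectors that each segment both reads from and writes to. Below is a minimal PyTorch sketch of that pattern only; it is not either paper's architecture, and names such as <code>FeedbackSegmentBlock</code> and <code>run_long_sequence</code> are invented for illustration.

<syntaxhighlight lang="python">
# Minimal sketch (not the TransformerFAM / compressive-memory implementations):
# process a long sequence segment by segment, carrying a small set of "memory"
# vectors that the block updates from its own latent outputs.
import torch
import torch.nn as nn

class FeedbackSegmentBlock(nn.Module):          # hypothetical name
    def __init__(self, d_model=256, n_heads=4, n_mem=16):
        super().__init__()
        self.mem = nn.Parameter(torch.randn(1, n_mem, d_model) * 0.02)  # initial memory
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, segment, memory):
        # Attend over [memory ; segment] so each token can read the compressed past.
        kv = torch.cat([memory, segment], dim=1)
        h, _ = self.attn(self.norm1(segment), self.norm1(kv), self.norm1(kv))
        h = segment + h
        h = h + self.ff(self.norm2(h))
        # Feedback: the new memory attends from the old memory to the block's own
        # latent outputs (a crude stand-in for the papers' memory-update rules).
        new_mem, _ = self.attn(self.norm1(memory), self.norm1(h), self.norm1(h))
        return h, memory + new_mem

def run_long_sequence(block, x, seg_len=128):
    """Process (batch, seq, d_model) in fixed-size segments with bounded attention cost."""
    memory = block.mem.expand(x.size(0), -1, -1)
    outputs = []
    for start in range(0, x.size(1), seg_len):
        seg_out, memory = block(x[:, start:start + seg_len], memory)
        outputs.append(seg_out)
    return torch.cat(outputs, dim=1)

if __name__ == "__main__":
    block = FeedbackSegmentBlock()
    x = torch.randn(2, 1024, 256)               # a "long" input
    print(run_long_sequence(block, x).shape)    # torch.Size([2, 1024, 256])
</syntaxhighlight>

Because each segment attends only to itself plus a fixed number of memory slots, the per-segment cost stays constant as the input grows, which is what lets this family of methods advertise effectively unbounded context.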
  
 
==Extended Context==
 
* 2024-03: [https://arxiv.org/abs/2403.11901 Larimar: Large Language Models with Episodic Memory Control]
 
=Updating Weights at Inference-time=
* 2025-01: [https://arxiv.org/abs/2501.06252 Transformer<sup>2</sup>: Self-adaptive LLMs]

=Internal Thought Representation Space=
==Visual Thinking==
* 2025-01: [https://arxiv.org/abs/2501.05874 VideoRAG: Retrieval-Augmented Generation over Video Corpus]
* 2025-01: [https://arxiv.org/abs/2501.07542 Imagine while Reasoning in Space: Multimodal Visualization-of-Thought]

==Neural (non-token) Latent Representation==
 
* 2024-11: Microsoft: [https://arxiv.org/abs/2411.02820 DroidSpeak: KV Cache Sharing for Cross-LLM Communication and Multi-LLM Serving]: LLMs exchange intermediate (KV cache) representations directly, rather than communicating only through natural-language tokens
* 2024-12: Meta: [https://arxiv.org/abs/2412.06769 Training Large Language Models to Reason in a Continuous Latent Space]: the latent representation is fed directly back into the model, instead of tokenizing intermediate thoughts (Chain of Continuous Thought, a.k.a. Coconut); see the sketch after this list
* 2024-12: Google DeepMind: [https://arxiv.org/abs/2412.17747 Deliberation in Latent Space via Differentiable Cache Augmentation]
* 2024-12: [https://github.com/jerber/lang-jepa LANG-JEPA: Learning to Think in Latent Space]
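A rough illustration of the Coconut-style idea from the Meta entry above: for a few "thought" steps the model's last hidden state is fed back in directly as the next input embedding, skipping the sample-a-token-then-re-embed round trip, and only afterwards does decoding return to ordinary tokens. The sketch below uses a toy GRU language model purely for brevity; <code>TinyLM</code> and <code>generate_with_latent_thoughts</code> are hypothetical names, not Meta's code.

<syntaxhighlight lang="python">
# Minimal sketch of "continuous latent" reasoning (Coconut-style), not Meta's code:
# during the thought steps the latent state is reused as the next input embedding.
import torch
import torch.nn as nn

class TinyLM(nn.Module):                       # toy stand-in for an LLM
    def __init__(self, vocab=1000, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab)

    def step(self, x_emb, state):
        """One decoding step on an embedding of shape (batch, 1, d_model)."""
        out, state = self.rnn(x_emb, state)
        return out, state

def generate_with_latent_thoughts(model, prompt_ids, n_thought=4, n_tokens=8):
    emb = model.embed(prompt_ids)              # (batch, seq, d_model)
    out, state = model.rnn(emb)
    h = out[:, -1:, :]                         # latent state after the prompt

    # Continuous "thought" steps: feed the latent straight back in, no tokens.
    for _ in range(n_thought):
        h, state = model.step(h, state)

    # Switch back to ordinary token decoding.
    generated = []
    for _ in range(n_tokens):
        next_id = model.head(h).argmax(dim=-1)     # greedy, for simplicity
        generated.append(next_id)
        h, state = model.step(model.embed(next_id), state)
    return torch.cat(generated, dim=1)

if __name__ == "__main__":
    model = TinyLM()
    prompt = torch.randint(0, 1000, (1, 5))
    print(generate_with_latent_thoughts(model, prompt).shape)  # torch.Size([1, 8])
</syntaxhighlight>

The same loop structure applies to a transformer decoder: the thought steps simply replace the embedding lookup with the previous step's final-layer hidden state.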
 
=See Also=
* [[Increasing AI Intelligence]]
