AI research trends

 
=Memory=

==Context Length==
TBD
* 2020: [https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html Various ideas] for scaling the context window, including [https://arxiv.org/abs/2004.05150 Longformer] with sliding-window attention (see the first sketch after this list)
* 2023-Apr-02: [https://hazyresearch.stanford.edu/blog/2023-03-27-long-learning Discussion] of ideas for how to scale the context window
* 2023-May-11: Anthropic announces a 100k token context window for Claude
* 2023-Jun-07: [https://magic.dev/ magic.dev] claims [https://magic.dev/blog/ltm-1 5M tokens coming soon]
* 2023-Jul-05: Microsoft describes [https://arxiv.org/abs/2307.02486 LongNet], with a 1 billion token window
* 2023-Jul-11: [https://arxiv.org/abs/2307.03170 Focused Transformer] scales to a 256k context window
* 2023-Nov-06: [https://openai.com/blog/new-models-and-developer-products-announced-at-devday GPT-4 Turbo] with a 128k context window
* 2023-Nov-22: [https://techcrunch.com/2023/11/21/anthropic-claude-2-1/ Anthropic Claude 2.1] with a 200k context window
* 2023-Dec-13: [https://arxiv.org/abs/2312.00752 Mamba], a state-space-model alternative to transformer attention
* 2024-Jan-04: [https://arxiv.org/abs/2401.01325 LongLM] (Self-Extend) to extend the context window without fine-tuning
* 2024-Feb-15: [https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#architecture Gemini 1.5] with 1M tokens
* 2024-Mar-04: [https://www.anthropic.com/news/claude-3-family Anthropic Claude 3] with a 200k context window
* 2024-Mar-08: [https://arxiv.org/abs/2403.05530 Google claims] Gemini 1.5 can scale to 10M tokens
* 2024-Apr-10: Google [https://arxiv.org/abs/2404.07143 preprint] demonstrates effectively infinite context length by using compressive memory (see the second sketch after this list)
* 2024-Apr-12: Meta et al. demonstrate [https://arxiv.org/abs/2404.08801 Megalodon], which enables unlimited context via a more efficient architecture
* 2024-Apr-14: Google presents [https://arxiv.org/abs/2404.09173 TransformerFAM], which uses a feedback loop to attend to its own latent representations, acting as working memory and providing effectively infinite context
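
The entries above cover several distinct mechanisms. As a rough illustration of the earliest one, Longformer-style sliding-window (local) attention, the following is a minimal Python/NumPy sketch, not the authors' implementation: the window size, shapes, and single-head setup are arbitrary illustrative choices. Each query attends only to keys within a fixed window, so cost grows roughly linearly in sequence length rather than quadratically.

<pre>
# Minimal sketch of sliding-window (local) attention, the core idea behind
# Longformer-style context scaling. Hypothetical illustration only.
import numpy as np

def sliding_window_attention(q, k, v, window=4):
    """Each query attends only to keys within +/- `window` positions,
    so cost grows as O(n * window) instead of O(n^2)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)   # local attention scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                  # softmax over the window
        out[i] = weights @ v[lo:hi]               # weighted sum of local values
    return out

# Toy usage: 16 tokens, 8-dimensional states
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
print(sliding_window_attention(q, k, v, window=4).shape)  # (16, 8)
</pre>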
  
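The 2024-Apr-10 compressive-memory entry reflects a different strategy: rather than widening attention, the sequence is processed in segments and older key/value states are folded into a fixed-size associative memory. The sketch below is a heavily simplified, hypothetical illustration of that idea (a linear-attention-style memory update and retrieval); the projections, gating, and local-attention branch of the actual method are omitted.

<pre>
# Very simplified sketch of the compressive-memory idea behind
# "infinite context" approaches: past key/value states are folded into a
# constant-size associative memory instead of being kept verbatim.
# Hypothetical illustration only; update/retrieval rules are simplified.
import numpy as np

def elu1(x):
    # Positive feature map used for the associative memory
    return np.where(x > 0, x + 1.0, np.exp(x))

def process_segments(segments, d):
    memory = np.zeros((d, d))   # fixed-size associative memory
    norm = np.zeros(d)          # normalization term
    outputs = []
    for seg in segments:        # each seg: (segment_len, d) token states
        q, k, v = seg, seg, seg  # toy projections: identity for brevity
        # Retrieve from the compressed memory of all previous segments
        retrieved = elu1(q) @ memory / (elu1(q) @ norm + 1e-6)[:, None]
        outputs.append(retrieved)
        # Fold this segment's key/value pairs into the memory
        memory += elu1(k).T @ v
        norm += elu1(k).sum(axis=0)
    return outputs

# Toy usage: 4 segments of 8 tokens, 16-dimensional states
rng = np.random.default_rng(0)
segs = [rng.standard_normal((8, 16)) for _ in range(4)]
outs = process_segments(segs, d=16)
print(len(outs), outs[0].shape)  # 4 (8, 16)
</pre>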
 
==Working Memory==
