AI research trends

==Novel Tokenization and/or Sampling==

==System 2 Reasoning==

See: Increasing AI Intelligence

==Memory==

==LLM Weights Memory==

==Context Length==
 
* 2024-Apr-12: Meta et al. demonstrate [https://arxiv.org/abs/2404.08801 Megalodon], which enables infinite context via a more efficient architecture

* 2024-Apr-14: Google presents [https://arxiv.org/abs/2404.09173 TransformerFAM], which leverages a feedback loop to attend to its own latent representations; these act as a working memory and provide effectively infinite context (see the sketch after this list)
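
A minimal sketch of the feedback-loop idea, assuming PyTorch. The class name <code>FeedbackBlock</code>, the hyperparameters, and the exact way the memory slots are carried forward are illustrative assumptions, not the paper's specification:

<syntaxhighlight lang="python">
# Sketch of a transformer block with feedback ("working memory") slots.
# Each segment attends over [memory ; segment]; the updated memory slots
# are fed back for the next segment, so information can persist across
# arbitrarily long inputs while attention cost stays bounded per segment.
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=4, mem_len=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.mem_len = mem_len

    def forward(self, x, memory):
        # x: (batch, seg_len, d_model); memory: (batch, mem_len, d_model)
        h = torch.cat([memory, x], dim=1)   # attend over memory + segment
        attn_out, _ = self.attn(h, h, h)
        h = self.norm(h + attn_out)
        new_memory = h[:, : self.mem_len]   # updated slots loop back
        return h[:, self.mem_len :], new_memory

# Stream a long input through the block segment by segment.
block = FeedbackBlock()
memory = torch.zeros(1, 8, 256)             # initial (empty) working memory
for segment in torch.randn(1, 1024, 256).split(128, dim=1):
    out, memory = block(segment, memory)
</syntaxhighlight>
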
==Extended Context==
* 2025-01: [https://arxiv.org/abs/2501.00663 Titans: Learning to Memorize at Test Time]
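
The title states the core idea: a neural long-term memory whose weights are updated during inference. Below is a minimal sketch of that idea, assuming PyTorch; the memory network, the surprise loss, and the update rule (plain SGD with decay) are illustrative assumptions rather than the paper's exact formulation:

<syntaxhighlight lang="python">
# Sketch of a neural memory that "learns to memorize at test time":
# reading is a forward pass; writing is one gradient step that makes the
# memory map a key to its value, driven by how surprising the pair is.
import torch
import torch.nn as nn

class NeuralMemory(nn.Module):
    def __init__(self, d=64, lr=0.1, decay=0.01):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.lr = lr          # test-time learning rate
        self.decay = decay    # gradual forgetting of old associations

    @torch.no_grad()
    def read(self, query):
        return self.net(query)

    def write(self, key, value):
        # "Surprise": how badly the memory currently predicts value from key.
        loss = ((self.net(key) - value) ** 2).mean()
        grads = torch.autograd.grad(loss, list(self.net.parameters()))
        with torch.no_grad():
            for p, g in zip(self.net.parameters(), grads):
                p.mul_(1.0 - self.decay)   # forget a little
                p.sub_(self.lr * g)        # memorize the new association

mem = NeuralMemory()
keys, values = torch.randn(16, 64), torch.randn(16, 64)
for _ in range(100):
    mem.write(keys, values)       # updates happen at inference, no training loop
print(((mem.read(keys) - values) ** 2).mean())  # error falls as pairs are stored
</syntaxhighlight>
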
  
 
==Retrieval beyond RAG==
 
See also: AI tools: Retrieval Augmented Generation (RAG)

==Working Memory==

==Episodic Memory==

==Neural (non-token) Latent Representation==