AI research trends

==Novel Tokenization and/or Sampling==

==System 2 Reasoning==

See: Increasing AI Intelligence

==Memory==

==LLM Weights Memory==

==Context Length==

* 2024-04-14: Google presents [https://arxiv.org/abs/2404.09173 TransformerFAM], which leverages a feedback loop to attend to its own latent representations; these act as working memory and provide effectively infinite context (see the sketch after this list)

* [https://x.com/MiniMax__AI/status/1879226391352549451 2025-01Jan-14]: [https://www.minimaxi.com/en/news/minimax-01-series-2 MiniMax-01] 4M ([https://www.minimaxi.com/en/news/minimax-01-series-2 paper])

* [https://x.com/Alibaba_Qwen/status/1883557964759654608 2025-01Jan-27]: [https://qwenlm.github.io/blog/qwen2.5-1m/ Qwen2.5-1M] ([https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf report])
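
Below is a minimal PyTorch sketch of the feedback-attention idea described in the TransformerFAM entry above: a small set of memory slots is carried across chunks of a long input, each chunk attends to the memory, and the memory then attends over itself plus the chunk to update its own state. This is an illustrative sketch under assumed names and sizes (FeedbackBlock, mem_len, a single shared attention module), not the paper's implementation.

<pre>
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    """Toy feedback-attention block: fixed memory slots act as working memory."""
    def __init__(self, d_model=256, n_heads=4, mem_len=8):
        super().__init__()
        # One attention module is reused for tokens and memory (a simplification).
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # Learned initial feedback memory.
        self.memory = nn.Parameter(torch.zeros(1, mem_len, d_model))

    def forward(self, chunks):
        # chunks: list of (batch, chunk_len, d_model) tensors from a long sequence.
        batch = chunks[0].shape[0]
        mem = self.memory.expand(batch, -1, -1)
        outs = []
        for x in chunks:
            kv = torch.cat([mem, x], dim=1)   # tokens see [memory ; current chunk]
            y, _ = self.attn(x, kv, kv)
            outs.append(self.norm(x + y))
            # Feedback loop: memory attends to its own latent state plus the chunk,
            # compressing everything seen so far into a fixed number of slots.
            mem, _ = self.attn(mem, kv, kv)
        return torch.cat(outs, dim=1)

chunks = [torch.randn(2, 16, 256) for _ in range(4)]  # 4 chunks of 16 tokens
print(FeedbackBlock()(chunks).shape)  # torch.Size([2, 64, 256])
</pre>

Because the memory has a fixed length, the per-chunk cost stays constant no matter how many chunks are processed, which is the sense in which the context becomes effectively infinite.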
  
 
==Extended Context==

==Retrieval beyond RAG==

See also: AI tools: Retrieval Augmented Generation (RAG)

==Working Memory==

==Episodic Memory==

==Updating Weights at Inference-time==

==Internal Thought Representation Space==

==Visual Thinking==

==Neural (non-token) Latent Representation==

==See Also==