AI research trends

From GISAXS
* 2025-01: [https://arxiv.org/abs/2501.06252 Transformer<sup>2</sup>: Self-adaptive LLMs]
=Internal Thought Representation Space=
 
==Visual Thinking==
 
* 2025-01: [https://arxiv.org/abs/2501.05874 VideoRAG: Retrieval-Augmented Generation over Video Corpus]
 
* 2024-12: Google DeepMind: [https://arxiv.org/abs/2412.17747 Deliberation in Latent Space via Differentiable Cache Augmentation]
 
* 2024-12: [https://github.com/jerber/lang-jepa LANG-JEPA: Learning to Think in Latent Space]
* 2025-01: [https://arxiv.org/abs/2501.19201 Efficient Reasoning with Hidden Thinking] ([https://github.com/shawnricecake/Heima code])
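
The papers above share one idea: iterate the model on its own hidden states instead of discretizing to a token at every step. A minimal NumPy sketch of that contrast (all weights, shapes, and function names here are invented for illustration; this follows the general continuous-thought scheme, not any specific paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a language model: embedding table, a recurrent
# mixing step, and an output head. Sizes are arbitrary.
d_model, vocab = 8, 5
W_embed = rng.normal(size=(vocab, d_model))
W_h = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
W_out = rng.normal(size=(d_model, vocab))

def step(h):
    """One forward pass on a hidden state (stand-in for a transformer block)."""
    return np.tanh(h @ W_h)

def decode(h):
    """Project a hidden state to token logits."""
    return h @ W_out

def token_loop(token_id, n_steps):
    """Token-space reasoning: discretize to a token and re-embed every step."""
    h = W_embed[token_id]
    for _ in range(n_steps):
        h = step(h)
        token_id = int(np.argmax(decode(h)))  # argmax discards latent detail
        h = W_embed[token_id]
    return token_id

def latent_loop(token_id, n_steps):
    """Latent (non-token) reasoning: feed the hidden state straight back,
    discretizing only once at the end."""
    h = W_embed[token_id]
    for _ in range(n_steps):
        h = step(h)  # no argmax, no re-embedding between steps
    return int(np.argmax(decode(h)))

print(token_loop(0, 4), latent_loop(0, 4))
```

The two loops differ only in whether the intermediate state is forced through the vocabulary bottleneck; keeping the "thought" continuous is what the latent-reasoning papers above explore at scale.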
  
 
=See Also=
 
* [[Increasing AI Intelligence]]

Latest revision as of 09:38, 3 February 2025

Page outline:

* Novel Tokenization and/or Sampling
* System 2 Reasoning
** See: [[Increasing AI Intelligence]]
* Memory
** LLM Weights Memory
** Context Length
*** Extended Context
** Retrieval beyond RAG
*** See also: AI tools: Retrieval Augmented Generation (RAG)
** Working Memory
** Episodic Memory
** Updating Weights at Inference-time
* Internal Thought Representation Space
** Visual Thinking
** Neural (non-token) Latent Representation
* See Also