Difference between revisions of "AI research trends"

From GISAXS
 
(2 intermediate revisions by the same user not shown)
Line 6: Line 6:
 
==LLM Weights Memory==

* 2024-12: [https://arxiv.org/abs/2412.09764 Memory Layers at Scale]
+ * 2025-10: [https://arxiv.org/abs/2510.15103 Continual Learning via Sparse Memory Finetuning]
 
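For orientation, a minimal sketch of the memory-layer idea the entries above build on: a large trainable key-value table read with sparse top-k lookup, used alongside (or in place of) a feed-forward block. This is an illustrative simplification, not the papers' implementations (which use more efficient lookup schemes); all names and sizes below are placeholders.

<syntaxhighlight lang="python">
# Minimal sketch of a trainable key-value memory layer (illustrative only):
# each token query retrieves its top-k nearest keys and adds back a weighted
# sum of the corresponding trainable value vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, n_slots: int, k: int = 32):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        self.values = nn.Embedding(n_slots, d_model)   # the "weights memory"
        self.query_proj = nn.Linear(d_model, d_model)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q = self.query_proj(x)                           # (B, S, D)
        scores = q @ self.keys.T                         # (B, S, n_slots)
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)          # (B, S, k)
        retrieved = self.values(top_idx)                 # (B, S, k, D)
        out = (weights.unsqueeze(-1) * retrieved).sum(dim=-2)
        return x + out                                   # residual, like an FFN block

# Example usage: drop-in block inside a transformer stack
layer = MemoryLayer(d_model=64, n_slots=4096, k=8)
h = torch.randn(2, 16, 64)
print(layer(h).shape)  # torch.Size([2, 16, 64])
</syntaxhighlight>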
==Context Length==
 
Line 50: Line 51:
 
* 2025-03: Microsoft: [https://www.microsoft.com/en-us/research/blog/introducing-kblam-bringing-plug-and-play-external-knowledge-to-llms/ Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs]
* 2025-07: [https://arxiv.org/pdf/2507.07957 MIRIX: Multi-Agent Memory System for LLM-Based Agents] ([https://mirix.io/ mirix])
+ * 2025-08: [https://arxiv.org/abs/2508.16153 Memento: Fine-tuning LLM Agents without Fine-tuning LLMs]
  
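The agent-memory entries above share a common pattern: the model's weights stay fixed, and improvement comes from an external store of past cases retrieved into the prompt. A minimal, hypothetical sketch of that pattern follows; it is not the implementation of any system above, and the toy word-overlap retrieval stands in for embedding search.

<syntaxhighlight lang="python">
# Illustrative sketch of weight-free agent memory: past episodes are kept in
# an external case bank and similar cases are retrieved into the prompt, so
# the agent improves without any finetuning. All names are placeholders.
from dataclasses import dataclass, field

@dataclass
class Episode:
    task: str
    trajectory: str   # what the agent did
    outcome: str      # e.g. "success" / "failure" plus notes

@dataclass
class CaseBank:
    episodes: list = field(default_factory=list)

    def add(self, ep: Episode) -> None:
        self.episodes.append(ep)

    def retrieve(self, task: str, k: int = 3) -> list:
        # Toy similarity: word overlap. A real system would use embeddings.
        q = set(task.lower().split())
        scored = sorted(
            self.episodes,
            key=lambda e: len(q & set(e.task.lower().split())),
            reverse=True,
        )
        return scored[:k]

def build_prompt(task: str, bank: CaseBank) -> str:
    cases = bank.retrieve(task)
    context = "\n".join(f"- {c.task}: {c.trajectory} -> {c.outcome}" for c in cases)
    return f"Relevant past cases:\n{context}\n\nNew task: {task}"

bank = CaseBank()
bank.add(Episode("book a flight to Paris", "searched airline API, compared fares", "success"))
bank.add(Episode("summarize a PDF report", "extracted text, chunked, summarized", "success"))
print(build_prompt("book a train to Paris", bank))  # prompt conditioned on retrieved cases
</syntaxhighlight>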
 
==Working Memory==
 
Line 63: Line 65:
 
* 2024-03: [https://arxiv.org/abs/2403.11901 Larimar: Large Language Models with Episodic Memory Control]
* 2025-08: [https://arxiv.org/abs/2508.16153 AgentFly: Fine-tuning LLM Agents without Fine-tuning LLMs]

+ ==Continual Learning==

+ * 2025-10: [https://arxiv.org/abs/2510.15103 Continual Learning via Sparse Memory Finetuning]
+ * 2025-11: [https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/ Introducing Nested Learning: A new ML paradigm for continual learning]
  
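As a rough illustration of the sparse-memory-finetuning idea referenced above: keep the base model frozen and update only the memory slots that the new data actually uses, which limits interference with previously stored knowledge. The slot-selection heuristic and names below are placeholders, not the paper's procedure.

<syntaxhighlight lang="python">
# Illustrative sketch (not the paper's method) of sparse memory finetuning:
# freeze the base model and update only the memory slots touched by new data.
import torch
import torch.nn as nn

n_slots, d_model = 4096, 64
memory = nn.Embedding(n_slots, d_model)        # stand-in for a memory layer's value table

# 1) Record which slots the new data accesses (here: a fake access histogram).
access_counts = torch.zeros(n_slots)
accessed = torch.randint(0, n_slots, (512,))   # slot indices hit while encoding new data
access_counts.index_add_(0, accessed, torch.ones_like(accessed, dtype=torch.float))

# 2) Pick the most-used slots and build a per-row mask.
k = 64
trainable_slots = access_counts.topk(k).indices
row_mask = torch.zeros(n_slots, 1)
row_mask[trainable_slots] = 1.0

# 3) Zero gradients for all other slots, so only the selected rows get updated.
memory.weight.register_hook(lambda grad: grad * row_mask)

opt = torch.optim.SGD(memory.parameters(), lr=0.1)
idx = trainable_slots[:8]
loss = memory(idx).pow(2).mean()               # dummy finetuning loss on new data
loss.backward()
opt.step()                                     # only the masked-in rows change
</syntaxhighlight>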
 
=Updating Weights at Inference-time=
 

Latest revision as of 09:46, 10 November 2025

System 2 Reasoning

See: Increasing AI Intelligence

Memory

LLM Weights Memory

Context Length

Extended Context

Context Remaking

Retrieval beyond RAG

See also: AI tools: Retrieval Augmented Generation (RAG)

Working Memory

Long-Term Memory

Storage and Retrieval

Episodic Memory

Continual Learning

Updating Weights at Inference-time

Parameters as Tokens

Internal Thought Representation Space

Visual Thinking

Neural (non-token) Latent Representation

Altered Transformer

Tokenization

Generation Order

Diffusion Language Models

Related: Image Synthesis via Autoregression/Diffusion

Sampling

Missing Elements

  • Memory
  • Continuous learning/update
  • Robust contextual model
  • Long-time-horizon coherence
  • Fluid intelligence
  • Agency
  • Modeling of self
  • Daydreaming

Memes

See Also