AI research trends

=System 2 Reasoning=

See: Increasing AI Intelligence

=Memory=

==LLM Weights Memory==

==Context Length==

==Extended Context==

==Retrieval beyond RAG==

See also: AI tools: Retrieval Augmented Generation (RAG)
 
* 2024-12: [https://arxiv.org/abs/2412.11919 RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation]
* 2025-03: Microsoft: [https://www.microsoft.com/en-us/research/blog/introducing-kblam-bringing-plug-and-play-external-knowledge-to-llms/ Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs]
* 2025-07: [https://arxiv.org/pdf/2507.07957 MIRIX: Multi-Agent Memory System for LLM-Based Agents] ([https://mirix.io/ mirix])
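
The papers above share a theme: instead of one retrieve-then-generate pass (classic RAG), retrieval is interleaved with generation itself, so the model can pull in fine-grained evidence exactly when it needs it. Below is a minimal Python sketch of that pattern; the corpus, the overlap scorer, and the explicit splice-in step are invented for illustration and do not reproduce the actual mechanism of RetroLLM, KBLaM, or MIRIX.

<pre>
# Toy "retrieval within generation" loop: generation pauses, fetches
# fine-grained evidence, and conditions on it before continuing.
CORPUS = [
    "Transformers process sequences with self-attention.",
    "RAG pipelines retrieve documents once, before generation begins.",
    "GISAXS probes surface nanostructure with X-ray scattering.",
]

def retrieve(query, k=1):
    """Rank corpus sentences by naive token overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda s: -len(q & set(s.lower().split())))
    return ranked[:k]

def generate_with_retrieval(prompt):
    """Stand-in for a decoding loop that retrieves mid-generation.
    A real system would score retrieval triggers and constrain decoding
    toward evidence spans rather than splicing in raw text."""
    context = prompt
    for evidence in retrieve(prompt):      # triggered mid-generation
        context += " [evidence: " + evidence + "]"
    return context + " ... (generation continues, conditioned on evidence)"

print(generate_with_retrieval("How do transformers process sequences?"))
</pre>

The design point, relative to vanilla RAG, is that the retrieve step sits inside the generation loop, so evidence can be fetched per claim rather than once per query.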
  
 
==Working Memory==
 
==Long-Term Memory==

==Episodic Memory==

==Updating Weights at Inference-time==

==Parameters as Tokens==

=Internal Thought Representation Space=

==Visual Thinking==

==Neural (non-token) Latent Representation==

=Altered Transformer=

==Tokenization==

==Generation Order==

==Diffusion Language Models==

Related: Image Synthesis via Autoregression/Diffusion

==Sampling==
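
As a concrete anchor for what this section collects, here is a minimal temperature-plus-top-k decoding step; the function and the toy vocabulary size are illustrative only, not taken from any particular paper.

<pre>
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=5):
    """One decoding step: temperature-scale the logits, keep the top-k
    tokens, softmax over them, and sample an index from the result."""
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    top = np.argsort(logits)[-top_k:]                # indices of k best tokens
    probs = np.exp(logits[top] - logits[top].max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(top, p=probs))

# Example: fake logits over a 10-token vocabulary
print(sample_next_token(np.random.randn(10), temperature=0.7, top_k=5))
</pre>

Lower temperature concentrates probability on the highest-scoring tokens; smaller top_k truncates the tail entirely. Schemes in this space typically vary these knobs or make them adaptive (e.g. nucleus sampling).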

=Missing Elements=

* Memory
* Continuous learning/update
* Robust contextual model
* Long-time-horizon coherence
* Fluid intelligence
* Agency
* Modeling of self
* [https://gwern.net/ai-daydreaming Daydreaming]
  
 
=Memes=

* Andrej Karpathy:
** 2015-05: "Hallucination" in [https://karpathy.github.io/2015/05/21/rnn-effectiveness/ The Unreasonable Effectiveness of Recurrent Neural Networks]
** 2017-11: [https://karpathy.medium.com/software-2-0-a64152b37c35 Software 2.0] ([https://x.com/karpathy/status/893576281375219712 "Gradient descent can write code better than you. I'm sorry."]; see the toy sketch after this list)
** 2022-10: [https://x.com/karpathy/status/1582807367988654081 Transformers as general-purpose differentiable computers] ([https://www.youtube.com/watch?v=9uw3F6rndnA talk])
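
The Software 2.0 slogan is easy to make concrete: specify the desired behavior as data, and let gradient descent find the "program" (the parameters). A toy sketch, with an invented fitting task:

<pre>
import numpy as np

# Behavior we want, specified as data rather than as hand-written code:
xs = np.linspace(-1, 1, 100)
ys = 3.0 * xs + 2.0

w, b = 0.0, 0.0                      # the "program" is two parameters
lr = 0.1
for _ in range(500):                 # gradient descent on squared error
    pred = w * xs + b
    w -= lr * 2 * np.mean((pred - ys) * xs)
    b -= lr * 2 * np.mean(pred - ys)

print(f"learned: y = {w:.2f}x + {b:.2f}")   # ~ y = 3.00x + 2.00
</pre>

Nobody wrote the line y = 3x + 2; the optimizer recovered it from examples, which is the meme in miniature.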

=See Also=