AI research trends

=System 2 Reasoning=

See: Increasing AI Intelligence

=Memory=

==LLM Weights Memory==

==Context Length==

===Extended Context===

==Retrieval beyond RAG==

See also: AI tools: Retrieval Augmented Generation (RAG)

==Working Memory==

==Long-Term Memory==
 
===Episodic Memory===
 
* 2024-03: [https://arxiv.org/abs/2403.11901 Larimar: Large Language Models with Episodic Memory Control]
* 2025-08: [https://arxiv.org/abs/2508.16153 AgentFly: Fine-tuning LLM Agents without Fine-tuning LLMs]
  
 
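
A minimal sketch of the pattern these papers share: the LLM's weights stay frozen, while episodes are written to an external store and retrieved by similarity at inference time. All class and function names below are illustrative, not taken from either paper.

<pre>
# Hypothetical episodic-memory loop: frozen LLM + external episode store.
# Retrieval here is cosine similarity over embedding vectors.
import numpy as np

class EpisodicMemory:
    def __init__(self, dim):
        self.keys = np.empty((0, dim))   # one embedding per stored episode
        self.episodes = []               # raw episode text

    def write(self, embedding, episode):
        self.keys = np.vstack([self.keys, embedding])
        self.episodes.append(episode)

    def recall(self, query, k=3):
        # Cosine similarity between the query and all stored keys.
        sims = self.keys @ query / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-9)
        return [self.episodes[i] for i in np.argsort(-sims)[:k]]

# Recalled episodes are prepended to the prompt, so the "learning"
# lives in the memory store, not in the model weights.
mem = EpisodicMemory(dim=4)
mem.write(np.array([1.0, 0, 0, 0]), "Tried tool X; failed on auth.")
mem.write(np.array([0.9, 0.1, 0, 0]), "Tool X works after token refresh.")
print(mem.recall(np.array([1.0, 0, 0, 0]), k=1))
</pre>
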
=Updating Weights at Inference-time=
 
* 2025-01: [https://arxiv.org/abs/2501.06252 Transformer<sup>2</sup>: Self-adaptive LLMs] (see the sketch after this list)
* 2025-08: [https://arxiv.org/abs/2508.14143 Beyond Turing: Memory-Amortized Inference as a Foundation for Cognitive Computation]
  
 
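Transformer<sup>2</sup> (above) adapts behavior at inference time by rescaling the singular values of frozen weight matrices with small task-specific vectors (the paper's Singular Value Fine-tuning). A minimal numpy sketch of that idea; the task vector z below is invented for illustration, not learned:

<pre>
# W = U diag(s) V^T is precomputed offline; at inference a task vector z
# rescales the singular values, giving W' = U diag(s * z) V^T.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))              # stand-in for a frozen weight matrix
U, s, Vt = np.linalg.svd(W, full_matrices=False)

z = np.ones_like(s)
z[:2] = 1.5                              # hypothetical task vector: boost the
                                         # two strongest singular directions
W_adapted = U @ np.diag(s * z) @ Vt      # adapted weights, no gradient step

x = rng.normal(size=8)
print(W @ x)            # base behavior
print(W_adapted @ x)    # task-adapted behavior
</pre>
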
==Parameters as Tokens==

=Internal Thought Representation Space=

==Visual Thinking==

==Neural (non-token) Latent Representation==

=Altered Transformer=

==Tokenization==

==Generation Order==

==Diffusion Language Models==

Related: Image Synthesis via Autoregression/Diffusion

==Sampling==
 
* 2024-10: [https://arxiv.org/abs/2410.01104 softmax is not enough (for sharp out-of-distribution)] (see the sketch after this list)
* 2025-06: [https://arxiv.org/abs/2506.06215 Corrector Sampling in Language Models]
* 2025-08: [https://arxiv.org/abs/2508.15260 Deep Think with Confidence] ([https://jiaweizzhao.github.io/deepconf/ project])
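
The softmax entry above points at an effect that is easy to verify numerically: with bounded logits, softmax weights necessarily disperse as the number of items grows, so attention cannot stay sharp arbitrarily far out of distribution. A self-contained sketch:

<pre>
# One logit at b, the remaining n-1 at 0: the winner's softmax weight is
# e^b / (e^b + n - 1), which decays toward 0 as n grows.
import numpy as np

def max_softmax_weight(n, b=5.0):
    logits = np.zeros(n)
    logits[0] = b                     # the single "sharp" item
    p = np.exp(logits - logits.max())
    return (p / p.sum()).max()

for n in [16, 256, 4096, 65536]:
    print(n, max_softmax_weight(n))   # ~0.91, 0.37, 0.035, 0.0023
</pre>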
  
 
=Missing Elements=
 
* Memory
* Continuous learning/update
* Robust contextual model
* Long-time-horizon coherence
* Fluid intelligence
* Agency
* Modeling of self
* [https://gwern.net/ai-daydreaming Daydreaming]
  
 
=Memes=
 
* Andrej Karpathy:
** 2015-05: "Hallucination" in [https://karpathy.github.io/2015/05/21/rnn-effectiveness/ The Unreasonable Effectiveness of Recurrent Neural Networks]
** 2017-11: [https://karpathy.medium.com/software-2-0-a64152b37c35 Software 2.0] ([https://x.com/karpathy/status/893576281375219712 "Gradient descent can write code better than you. I'm sorry."])
** 2022-10: [https://x.com/karpathy/status/1582807367988654081 Transformers as general-purpose differentiable computers] ([https://www.youtube.com/watch?v=9uw3F6rndnA talk])
** 2023-01: [https://x.com/karpathy/status/1617979122625712128 The hottest new programming language is English]
** 2023-09: [https://x.com/karpathy/status/1707437820045062561 LLM as kernel of a new Operating System] ([https://x.com/karpathy/status/1723140519554105733 diagram], [https://www.threads.com/@karpathy/post/CzehPtxPEF3 OS analogies])
** 2024-07: [https://x.com/karpathy/status/1816531576228053133 Jagged Intelligence] (cf. [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321 Mollick paper])
** 2025-02: [https://x.com/karpathy/status/1886192184808149383 Vibe coding]
** 2025-06: [https://www.latent.space/p/s3 Software 3.0] ([https://www.youtube.com/watch?v=LCEmiRjPEtQ&t=1s talk]): "Prompts as Programs". Software 1.0 is code; 2.0 is model weights; 3.0 is prompts.
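
A toy contrast of the three regimes as framed above: the same task written as explicit code (1.0), as learned weights (2.0), and as a prompt interpreted by an LLM (3.0). The llm parameter is a hypothetical completion function, not a real API:

<pre>
# The same task -- classify a review as positive or negative -- three ways.

# Software 1.0: the behavior is hand-written code.
def classify_v1(text):
    return "positive" if "great" in text.lower() else "negative"

# Software 2.0: the behavior lives in learned weights (toy linear model).
weights = {"great": 1.0, "terrible": -1.0}    # imagine these were trained
def classify_v2(text):
    score = sum(w for tok, w in weights.items() if tok in text.lower())
    return "positive" if score > 0 else "negative"

# Software 3.0: the behavior is specified in English; the prompt is the program.
def classify_v3(text, llm):
    prompt = f"Reply with one word, positive or negative: {text!r}"
    return llm(prompt)    # hypothetical LLM completion call

print(classify_v1("A great movie"), classify_v2("A great movie"))
</pre>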

=See Also=