AI research trends

=System 2 Reasoning=

See: [[Increasing AI Intelligence]]
 
* 2025-02: Meta: [https://arxiv.org/abs/2502.08524 LLM Pretraining with Continuous Concepts] (CoCoMix)
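The CoCoMix idea above (mixing predicted continuous "concept" vectors back into a token's hidden state) can be sketched in a few lines. This is an illustrative toy, not the paper's architecture: the sizes, the softmax gate, and the additive mixing are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_CONCEPTS = 16, 8  # hypothetical hidden size and concept count

concepts = rng.normal(size=(N_CONCEPTS, D))  # stand-in for learned concept vectors
W_gate = rng.normal(size=(D, N_CONCEPTS))    # stand-in for a learned gating projection

def mix_concepts(hidden):
    """Toy sketch: predict weights over a small bank of continuous
    concept vectors, then mix the weighted sum back into the hidden
    state (details are illustrative, not CoCoMix's actual layers)."""
    w = np.exp(hidden @ W_gate)
    w /= w.sum()                   # softmax over the concept bank
    return hidden + w @ concepts   # interleave the concept signal

h = rng.normal(size=D)
print(mix_concepts(h).shape)  # (16,)
```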
  
=Altered Transformer=

==Tokenization==

* 2024-04: [https://arxiv.org/abs/2404.19737 Better & Faster Large Language Models via Multi-token Prediction]

* 2024-12: [https://arxiv.org/abs/2412.06676 I Don't Know: Explicit Modeling of Uncertainty with an <nowiki>[IDK]</nowiki> Token]

* 2025-04: Meta: [https://arxiv.org/abs/2504.00927 Multi-Token Attention]
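The core structure behind multi-token prediction (first entry above) is a shared trunk feeding one output head per future offset. The following is a minimal numpy sketch under assumed toy sizes; the real method trains a transformer trunk, not the random matrices used here.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, D, N_FUTURE = 50, 16, 4  # hypothetical sizes for the sketch

# Shared trunk parameters, plus one independent output head per offset.
embed = rng.normal(size=(VOCAB, D))
W_trunk = rng.normal(size=(D, D))
heads = [rng.normal(size=(D, VOCAB)) for _ in range(N_FUTURE)]

def predict_multi(tokens):
    """Return one logit matrix per future offset t+1 .. t+N_FUTURE,
    all computed from the same trunk representation."""
    h = np.tanh(embed[tokens] @ W_trunk)   # (seq, D), shared by all heads
    return [h @ W for W in heads]          # N_FUTURE matrices of (seq, VOCAB)

logits = predict_multi(np.array([3, 7, 1, 0]))
print(len(logits), logits[0].shape)  # 4 heads, each (4, 50)
```

At inference the extra heads can be dropped (keeping only next-token prediction) or used for speculative decoding.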
 
 
==Generation Order==

* 2019-02: [https://arxiv.org/abs/1902.02192 Non-Monotonic Sequential Text Generation]
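The distinction this section tracks is decoding order: non-monotonic generation builds a sequence by inserting tokens at arbitrary positions rather than appending left-to-right. A minimal sketch, with a fixed insertion script standing in for a learned policy:

```python
# Toy insertion-order generation. Each step inserts a token at an
# arbitrary position; a trained model would choose (position, token),
# here a hard-coded script does.
def insertion_generate(script):
    """script: list of (position, token) insertion actions."""
    seq = []
    for pos, tok in script:
        seq.insert(pos, tok)
    return seq

# Build "a b c d" middle-out rather than left-to-right.
print(insertion_generate([(0, "c"), (0, "b"), (2, "d"), (0, "a")]))
# → ['a', 'b', 'c', 'd']
```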
 
==Diffusion Language Models==

* 2024-02: [https://arxiv.org/abs/2402.09470 Rolling Diffusion Models]

* 2024-08: [https://arxiv.org/abs/2408.11039 Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model]
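A common mechanic in masked/discrete diffusion language models is confidence-ordered unmasking: start from a fully masked sequence and reveal the positions the denoiser is most confident about over a few steps. The sketch below is illustrative only; the confidence matrix stands in for a trained denoiser's predictions.

```python
import numpy as np

MASK = -1

def toy_diffusion_decode(scores, steps=4):
    """Toy confidence-ordered unmasking. `scores` is a hypothetical
    (seq, vocab) confidence matrix standing in for a trained
    denoiser; real models re-predict after each unmasking step."""
    seq_len = scores.shape[0]
    out = np.full(seq_len, MASK)
    per_step = max(1, seq_len // steps)
    conf = scores.max(axis=1)
    order = np.argsort(-conf)          # most confident positions first
    for step in range(steps):
        reveal = order[step * per_step:(step + 1) * per_step]
        out[reveal] = scores[reveal].argmax(axis=1)
    return out

rng = np.random.default_rng(0)
decoded = toy_diffusion_decode(rng.random((8, 5)))
print(decoded)  # all 8 positions unmasked, filled in confidence order
```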
==Sampling==

* 2024-10: [https://github.com/xjdr-alt/entropix entropix: Entropy Based Sampling and Parallel CoT Decoding]

* 2024-10: [https://arxiv.org/abs/2410.01104 softmax is not enough (for sharp out-of-distribution)]
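The basic move in entropy-based sampling is to measure the entropy of the next-token distribution and switch decoding strategy accordingly. The sketch below is in the spirit of entropix but is not its actual algorithm; the threshold and temperature values are arbitrary assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def entropy_gated_sample(logits, rng, threshold=1.0, temperature=1.2):
    """Toy entropy-gated sampler: decode greedily when the model is
    confident (low entropy), sample with temperature when it is
    uncertain (high entropy)."""
    p = softmax(logits)
    h = -(p * np.log(p + 1e-12)).sum()   # Shannon entropy in nats
    if h < threshold:
        return int(np.argmax(p)), h      # confident: greedy
    p_t = softmax(logits / temperature)  # uncertain: explore more
    return int(rng.choice(len(p_t), p=p_t)), h

rng = np.random.default_rng(0)
confident = np.array([8.0, 0.1, 0.0, -1.0])  # peaked distribution
tok, h = entropy_gated_sample(confident, rng)
print(tok, round(h, 3))  # greedy branch picks token 0
```

entropix additionally inspects attention statistics and can branch into parallel chains of thought; only the entropy gate is sketched here.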
=Missing Elements=

* Memory

* Continuous learning/update

* Robust contextual model

* Long-time-horizon coherence

* Fluid intelligence

* Agency
 
=See Also=

* [[Increasing AI Intelligence]]

Latest revision as of 09:13, 3 April 2025
