AI research trends
 
=System 2 Reasoning=

See: [[Increasing AI Intelligence]]

=Memory=

==LLM Weights Memory==

==Context Length==
 
* 2025-02-19: [https://github.com/MoonshotAI/MoBA MoBA: Mixture of Block Attention for Long-Context LLMs]
* 2025-02-27: [https://arxiv.org/abs/2502.20082 LongRoPE2: Near-Lossless LLM Context Window Scaling] ([https://github.com/microsoft/LongRoPE code]; RoPE-rescaling sketch below)
* [https://x.com/sundarpichai/status/1904579419496386736 2025-03-25]: [https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/ Gemini 2.5 Pro] [https://x.com/pvncher/status/1904685092053606715 1M]
* 2025-04-05: Meta [https://ai.meta.com/blog/llama-4-multimodal-intelligence/ Llama 4] 10M
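
Most of the context-window extensions above manipulate rotary position embeddings (RoPE). As a reference point, here is a minimal sketch of plain position interpolation, the baseline that LongRoPE2 refines with per-frequency rescaling factors; the uniform <code>scale</code> and all names are illustrative assumptions, not taken from the codebases above.

<syntaxhighlight lang="python">
import torch

def rope_angles(positions, dim, base=10000.0, scale=1.0):
    # scale > 1 compresses positions so that a longer sequence maps into
    # the position range seen during pretraining (plain interpolation);
    # LongRoPE-style methods instead search per-frequency rescalings.
    inv_freq = 1.0 / base ** (torch.arange(0, dim, 2).float() / dim)
    return torch.outer(positions / scale, inv_freq)   # (len, dim/2)

def apply_rope(x, angles):
    # rotate consecutive feature pairs of x by the given angles
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# e.g. pretrained at 4k positions, deployed at 16k: scale = 4
q = torch.randn(16384, 64)
q_rot = apply_rope(q, rope_angles(torch.arange(16384).float(), 64, scale=4.0))
</syntaxhighlight>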
  
 
==Extended Context==

==Retrieval beyond RAG==

See also: [[AI tools]]: Retrieval Augmented Generation (RAG)

==Working Memory==

==Episodic Memory==

=Updating Weights at Inference-time=

==Parameters as Tokens==

=Internal Thought Representation Space=

==Visual Thinking==

==Neural (non-token) Latent Representation==
 
* 2025-02: Meta: [https://arxiv.org/abs/2502.08524 LLM Pretraining with Continuous Concepts] (CoCoMix)

=Altered Transformer=

==Tokenization==
* 2024-04: [https://arxiv.org/abs/2404.19737 Better & Faster Large Language Models via Multi-token Prediction] (sketch below)
* 2024-12: [https://arxiv.org/abs/2412.06676 I Don't Know: Explicit Modeling of Uncertainty with an <nowiki>[IDK]</nowiki> Token]
* 2025-04: Meta: [https://arxiv.org/abs/2504.00927 Multi-Token Attention]
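
A minimal sketch of the multi-token-prediction objective (2024-04 entry above): a shared trunk feeds several output heads, with head k trained to predict the token k+1 steps ahead. The simple linear heads and all names are illustrative assumptions; the paper's heads are transformer layers over a shared trunk with a shared unembedding.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenHeads(nn.Module):
    """n_heads output heads on one trunk; head k predicts token t+k+1."""
    def __init__(self, d_model, vocab, n_heads=4):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab) for _ in range(n_heads))

    def forward(self, h):              # h: (batch, seq, d_model) trunk states
        return [head(h) for head in self.heads]

def mtp_loss(logits_per_head, tokens):
    # sum cross-entropy over heads; head k is matched against the token
    # sequence shifted k+1 positions into the future
    loss = 0.0
    for k, logits in enumerate(logits_per_head):
        T = tokens.size(1) - (k + 1)
        loss = loss + F.cross_entropy(
            logits[:, :T].reshape(-1, logits.size(-1)),
            tokens[:, k + 1 : k + 1 + T].reshape(-1))
    return loss
</syntaxhighlight>

At inference the extra heads can be dropped (recovering a standard next-token model) or reused for self-speculative decoding, which is where the paper's speedups come from.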
 
==Generation Order==
* 2019-02: [https://arxiv.org/abs/1902.02192 Non-Monotonic Sequential Text Generation]
* 2019-04: [https://arxiv.org/abs/1904.09324 Mask-Predict: Parallel Decoding of Conditional Masked Language Models] (sketch below)
* 2019-06: [https://arxiv.org/abs/1906.09601 Sequence Generation: From Both Sides to the Middle]
* 2020-04: [https://arxiv.org/abs/2004.11579 Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order]
* 2021-12: [https://arxiv.org/abs/2112.10543 Spiral Language Modeling]
* 2023-10: [https://arxiv.org/abs/2310.09930 FiLM: Fill-in Language Models for Any-Order Generation]
* 2024-07: [https://arxiv.org/abs/2407.03582 Integrating Randomness in Large Language Models: A Linear Congruential Generator Approach for Generating Clinically Relevant Content]
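
A minimal sketch of the Mask-Predict decoding loop (2019-04 entry above): predict every masked position in parallel, keep the most confident predictions, re-mask the rest, and repeat with a linearly shrinking mask. The <code>model</code> returning per-position probabilities is an assumed stand-in for the paper's conditional masked language model.

<syntaxhighlight lang="python">
import torch

def mask_predict(model, length, mask_id, n_iters=10):
    # start from an all-mask sequence and iteratively refine it
    tokens = torch.full((length,), mask_id, dtype=torch.long)
    conf = torch.zeros(length)
    for it in range(n_iters):
        probs = model(tokens)                 # (length, vocab), assumed API
        new_conf, pred = probs.max(dim=-1)
        masked = tokens == mask_id
        tokens[masked] = pred[masked]         # fill only the masked slots
        conf[masked] = new_conf[masked]
        n_mask = int(length * (1 - (it + 1) / n_iters))
        if n_mask == 0:
            break                             # final pass: keep everything
        worst = conf.argsort()[:n_mask]       # lowest-confidence tokens
        tokens[worst] = mask_id               # re-mask them and iterate
    return tokens
</syntaxhighlight>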
 
==Diffusion Language Models==
 
* 2024-02: [https://arxiv.org/abs/2402.03687 Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation]
* 2025-02: [https://arxiv.org/abs/2502.09992 Large Language Diffusion Models]
* 2025-02: [https://www.inceptionlabs.ai/ Inception Labs] [https://www.inceptionlabs.ai/news Mercury] model ([https://chat.inceptionlabs.ai/ online demo])
* 2025-03: [https://arxiv.org/abs/2503.09573 Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models] ([https://m-arriola.com/bd3lms/ project], [https://github.com/kuleshov-group/bd3lms code], [https://huggingface.co/collections/kuleshov-group/bd3-lms-67be95f81b96b15fec50d53f hf]; sketch below)
* 2025-04: [https://hkunlp.github.io/blog/2025/dream/ Dream 7B: the most powerful open diffusion large language model to date]
* 2025-04: [https://dllm-reasoning.github.io/ d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning] ([https://dllm-reasoning.github.io/media/preprint.pdf preprint], [https://github.com/dllm-reasoning/d1 code])
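
A minimal sketch of the block-diffusion idea (Block Diffusion entry above): blocks are produced left-to-right, as in an autoregressive model, while tokens inside the current block are denoised in parallel conditioned on everything before them. The <code>model</code> interface and the confidence-based unmasking schedule are illustrative assumptions, not BD3-LM's actual parameterization.

<syntaxhighlight lang="python">
import torch

def block_diffusion_generate(model, prompt, block_len=16, n_blocks=8,
                             mask_id=0, n_steps=8):
    seq = prompt.clone()
    for _ in range(n_blocks):
        # append a fully masked block, then denoise it in place
        seq = torch.cat([seq, torch.full((block_len,), mask_id, dtype=seq.dtype)])
        for step in range(n_steps):
            logits = model(seq)                    # (len, vocab), assumed API
            probs = logits[-block_len:].softmax(-1)
            conf, pred = probs.max(-1)
            block = seq[-block_len:].clone()
            masked = block == mask_id
            if not masked.any():
                break
            # unmask the most confident still-masked positions this step
            k = max(1, int(masked.sum()) // (n_steps - step))
            cand = torch.where(masked, conf, torch.full_like(conf, -1.0))
            idx = cand.topk(k).indices
            block[idx] = pred[idx]
            seq[-block_len:] = block
    return seq
</syntaxhighlight>

Setting <code>block_len=1</code> recovers ordinary autoregression, while a block spanning the full sequence recovers a pure diffusion LM; that is the interpolation the paper's title refers to.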

===Related: Image Synthesis via Autoregression/Diffusion===
* 2023-10: [https://arxiv.org/abs/2310.01400 Sequential Data Generation with Groupwise Diffusion Process]
* 2024-02: [https://arxiv.org/abs/2402.09470 Rolling Diffusion Models] (sketch below)
* 2024-08: [https://arxiv.org/abs/2408.11039 Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model]
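
A minimal sketch of the rolling-diffusion schedule (Rolling Diffusion entry above): within a sliding window, the noise level grows with distance into the future, so each round of denoising fully resolves the oldest frame while a fresh fully-noisy frame enters the window. The <code>model</code> interface and the linear schedule are illustrative assumptions.

<syntaxhighlight lang="python">
import torch

def rolling_generate(model, window, n_frames=16, steps_per_slide=4):
    # frame i of the window carries noise level (i+1)/W: the frame about
    # to leave is nearly clean, the newest frame is pure noise
    W = window.size(0)
    t = (torch.arange(W).float() + 1) / W
    out = []
    while len(out) < n_frames:
        for _ in range(steps_per_slide):
            window = model(window, t)          # partial denoise, assumed API
        out.append(window[0])                  # oldest frame is now clean
        # slide: drop the clean frame, append a fresh pure-noise frame
        window = torch.cat([window[1:], torch.randn_like(window[:1])])
    return torch.stack(out)
</syntaxhighlight>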

==Sampling==
* 2024-10: [https://github.com/xjdr-alt/entropix entropix: Entropy Based Sampling and Parallel CoT Decoding] (sketch below)
* 2024-10: [https://arxiv.org/abs/2410.01104 softmax is not enough (for sharp out-of-distribution)]
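
A minimal sketch of entropy-gated sampling in the spirit of entropix: measure the entropy of the next-token distribution, act greedily when the model is confident, and fall back to higher-temperature sampling when it is not (where entropix would branch or inject chain-of-thought tokens). The thresholds and the fallback behavior are illustrative assumptions, not the repo's actual heuristics.

<syntaxhighlight lang="python">
import torch

def entropy_gated_sample(logits, low=0.5, high=3.0, temperature=1.0):
    probs = torch.softmax(logits / temperature, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum()
    if entropy < low:        # confident: commit to the argmax
        return probs.argmax().item()
    if entropy > high:       # very uncertain: entropix branches / injects CoT;
        probs = torch.softmax(logits / 1.5, dim=-1)  # here, just sample hotter
    return torch.multinomial(probs, 1).item()
</syntaxhighlight>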

=Missing Elements=
* Memory
* Continuous learning/update
* Robust contextual model
* Long-time-horizon coherence
* Fluid intelligence
* Agency
 
=See Also=
* [[Increasing AI Intelligence]]
