=System 2 Reasoning=

See: [[Increasing AI Intelligence]]


=Memory=

==Context Length==

* 2024-04-12: Meta et al. demonstrate [https://arxiv.org/abs/2404.08801 Megalodon], which enables effectively infinite context via a more efficient architecture
* 2024-04-14: Google presents [https://arxiv.org/abs/2404.09173 TransformerFAM], which uses a feedback loop so the model attends to its own latent representations; this acts as working memory and provides effectively unbounded context (see sketch after this list)
* 2024-10-31: [https://arxiv.org/abs/2410.23771 What is Wrong with Perplexity for Long-context Language Modeling?]
* [https://x.com/MiniMax__AI/status/1879226391352549451 2025-01-14]: [https://www.minimaxi.com/en/news/minimax-01-series-2 MiniMax-01] 4M ([https://www.minimaxi.com/en/news/minimax-01-series-2 paper])
* [https://x.com/Alibaba_Qwen/status/1883557964759654608 2025-01-27]: [https://qwenlm.github.io/blog/qwen2.5-1m/ Qwen2.5-1M] ([https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf report])
* 2025-02-14: [https://arxiv.org/abs/2502.08910 InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU] 3M
* [https://x.com/AnimaAnandkumar/status/1897449851941744648 2025-02-18]: [https://arxiv.org/abs/2502.12574 HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading] ([https://github.com/wdlctc/headinfer code])
* 2025-02-18: [https://arxiv.org/abs/2502.12962 Infinite Retrieval: Attention Enhanced LLMs in Long-Context Processing]
* 2025-02-19: [https://github.com/MoonshotAI/MoBA MoBA: Mixture of Block Attention for Long-Context LLMs]
* 2025-02-27: [https://arxiv.org/abs/2502.20082 LongRoPE2: Near-Lossless LLM Context Window Scaling] ([https://github.com/microsoft/LongRoPE code])
* [https://x.com/sundarpichai/status/1904579419496386736 2025-03-25]: [https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/ Gemini 2.5 Pro] [https://x.com/pvncher/status/1904685092053606715 1M]
* 2025-04-05: Meta: [https://ai.meta.com/blog/llama-4-multimodal-intelligence/ Llama 4] 10M
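
To make the "feedback loop as working memory" idea above concrete, here is a minimal sketch of the general pattern (not the TransformerFAM implementation; the sizes, single-head attention, and memory-update rule are illustrative assumptions): a long input is processed segment by segment, and a fixed-size block of memory vectors is read by each segment and then refreshed from that segment's outputs.

<pre>
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    """Plain single-head scaled dot-product attention."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

def process_long_input(segments, d_model=64, n_mem=8, seed=0):
    """Process an arbitrarily long sequence segment by segment.

    A fixed-size set of memory vectors is carried across segments:
    each segment attends over [its own tokens + memory], and the memory
    is then refreshed by attending back over the segment's outputs.
    """
    rng = np.random.default_rng(seed)
    memory = rng.normal(size=(n_mem, d_model)) * 0.02   # working-memory slots
    outputs = []
    for seg in segments:                                 # seg: (seg_len, d_model)
        context = np.concatenate([seg, memory], axis=0)  # tokens also see the memory
        seg_out = attend(seg, context, context)
        memory = attend(memory, seg_out, seg_out)        # feedback: memory re-reads the segment
        outputs.append(seg_out)
    return np.concatenate(outputs, axis=0), memory

# Toy usage: 10 segments of 32 "token embeddings" each.
rng = np.random.default_rng(1)
segs = [rng.normal(size=(32, 64)) for _ in range(10)]
out, mem = process_long_input(segs)
print(out.shape, mem.shape)   # (320, 64) (8, 64)
</pre>

The relevant property is that per-segment compute and state stay constant, so usable context is limited by what the memory retains rather than by quadratic attention cost; Megalodon, FAM, and the long-context models listed above differ substantially in how they actually achieve this.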
  
 
==Extended Context==
 
==Retrieval beyond RAG==

See also: [[AI_tools#Retrieval_Augmented_Generation_.28RAG.29|AI tools: Retrieval Augmented Generation (RAG)]]

* 2024-10: Microsoft: [https://arxiv.org/abs/2410.10450 KBLaM: Knowledge Base augmented Language Model]
* 2024-12: [https://arxiv.org/abs/2412.11536 Let your LLM generate a few tokens and you will reduce the need for retrieval]
* 2024-12: [https://arxiv.org/abs/2412.11919 RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation]
* 2025-03: Microsoft: [https://www.microsoft.com/en-us/research/blog/introducing-kblam-bringing-plug-and-play-external-knowledge-to-llms/ Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs]
  
 
==Working Memory==
 
=Updating Weights at Inference-time=

* 2025-01: [https://arxiv.org/abs/2501.06252 Transformer<sup>2</sup>: Self-adaptive LLMs] (see sketch below)
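
Transformer<sup>2</sup> adapts behavior at inference time by modulating frozen weights with small task-specific vectors; as I read the paper, the base matrices are decomposed once and only their singular values are rescaled per task. A rough numpy sketch of that mechanism (the decomposition choice and the example "expert" vector are assumptions for illustration, not the released method):

<pre>
import numpy as np

def svd_decompose(W):
    """One-time offline decomposition of a frozen base weight."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U, s, Vt

def adapt(U, s, Vt, z):
    """Rebuild the weight with singular values rescaled by a small task vector z."""
    return (U * (s * z)) @ Vt           # equivalent to U @ diag(s * z) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))          # frozen base weight
U, s, Vt = svd_decompose(W)

z_task = np.ones_like(s)                # hypothetical task/"expert" vector
z_task[:8] *= 1.5                       # boost a few singular directions for this task

W_adapted = adapt(U, s, Vt, z_task)
x = rng.normal(size=(64,))
print(np.allclose(adapt(U, s, Vt, np.ones_like(s)) @ x, W @ x))  # identity scaling recovers W
</pre>

The appeal of this family of methods is that only a vector of length min(d_in, d_out) changes per task, so switching behavior at inference is far cheaper than swapping in fully fine-tuned weights.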

==Parameters as Tokens==

* 2024-10: [https://arxiv.org/abs/2410.23168 TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters] ([https://github.com/Haiyang-W/TokenFormer code]) (see sketch below)
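
TokenFormer's central move, roughly, is to replace fixed linear projections with attention between the input and a set of learnable key/value "parameter tokens", so capacity can be grown by appending tokens instead of resizing weight matrices. A minimal sketch of that substitution (plain softmax is used here for simplicity; the paper uses a modified attention normalization, and real layers are trained rather than random):

<pre>
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class ParameterTokenLayer:
    """A stand-in for a linear layer: inputs attend over learnable parameter tokens."""
    def __init__(self, d_in, d_out, n_param_tokens, rng):
        self.keys   = rng.normal(size=(n_param_tokens, d_in))  * 0.02
        self.values = rng.normal(size=(n_param_tokens, d_out)) * 0.02

    def __call__(self, x):                        # x: (n_tokens, d_in)
        scores = x @ self.keys.T / np.sqrt(x.shape[-1])
        return softmax(scores) @ self.values      # (n_tokens, d_out)

    def grow(self, extra, rng):
        """Scale the layer up by appending new parameter tokens; old ones are untouched."""
        self.keys   = np.vstack([self.keys,   rng.normal(size=(extra, self.keys.shape[1]))   * 0.02])
        self.values = np.vstack([self.values, rng.normal(size=(extra, self.values.shape[1])) * 0.02])

rng = np.random.default_rng(0)
layer = ParameterTokenLayer(d_in=64, d_out=64, n_param_tokens=256, rng=rng)
x = rng.normal(size=(10, 64))
print(layer(x).shape)     # (10, 64)
layer.grow(128, rng)      # "scaling" the model without changing d_in or d_out
print(layer(x).shape)     # (10, 64)
</pre>

Because growing the layer leaves existing parameter tokens untouched, incremental scaling from a smaller checkpoint becomes plausible, which is the scaling story the paper emphasizes.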
  
 
=Internal Thought Representation Space=
 
==Neural (non-token) Latent Representation==

* 2025-01: [https://arxiv.org/abs/2501.19201 Efficient Reasoning with Hidden Thinking] ([https://github.com/shawnricecake/Heima code])
* 2025-02: [https://huggingface.co/tomg-group-umd/huginn-0125 Huginn-0125]: [https://arxiv.org/abs/2502.05171 Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach] ([https://github.com/seal-rg/recurrent-pretraining code], [https://huggingface.co/tomg-group-umd/huginn-0125 model]) (see sketch below)
* 2025-02: Meta: [https://arxiv.org/abs/2502.08524 LLM Pretraining with Continuous Concepts] (CoCoMix)
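
A shared theme in these papers is spending extra test-time compute in latent space rather than by emitting chain-of-thought tokens. The sketch below shows only the control flow of the recurrent-depth variant (loosely following the Huginn description; the tiny MLP blocks and the state handling are illustrative assumptions): a prelude encodes the input, a recurrent block is iterated a caller-chosen number of times, and a coda decodes.

<pre>
import numpy as np

rng = np.random.default_rng(0)
D = 64

def mlp_block(W1, W2):
    """A tiny residual MLP standing in for a full transformer block."""
    def block(h):
        return h + np.tanh(h @ W1) @ W2
    return block

prelude   = mlp_block(rng.normal(size=(D, D)) * 0.05, rng.normal(size=(D, D)) * 0.05)
recurrent = mlp_block(rng.normal(size=(D, D)) * 0.05, rng.normal(size=(D, D)) * 0.05)
coda      = mlp_block(rng.normal(size=(D, D)) * 0.05, rng.normal(size=(D, D)) * 0.05)

def forward(x, n_latent_steps):
    """More latent steps = more test-time compute, with no extra tokens emitted."""
    h = prelude(x)
    state = np.zeros_like(h)
    for _ in range(n_latent_steps):       # the "thinking" happens here, in latent space
        state = recurrent(state + h)      # the encoded input is re-injected every step
    return coda(state)

x = rng.normal(size=(8, D))               # 8 token embeddings
cheap = forward(x, n_latent_steps=2)
deep  = forward(x, n_latent_steps=32)     # same weights, more reasoning compute
print(cheap.shape, deep.shape, float(np.abs(cheap - deep).mean()))
</pre>

The same weights serve 2 or 32 latent steps; the caller simply chooses how much "thinking" to pay for at inference time.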


=Altered Transformer=

==Tokenization==

* 2024-04: [https://arxiv.org/abs/2404.19737 Better & Faster Large Language Models via Multi-token Prediction] (see sketch after this list)
* 2024-12: [https://arxiv.org/abs/2412.06676 I Don't Know: Explicit Modeling of Uncertainty with an <nowiki>[IDK]</nowiki> Token]
* 2025-04: Meta: [https://arxiv.org/abs/2504.00927 Multi-Token Attention]
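
For the multi-token prediction entry above, here is a minimal sketch of the training objective: one shared trunk feeds K independent output heads, and the head for offset k at position t is supervised by token t+k+1. Everything here (the toy trunk, the sizes, the plain cross-entropy loop) is illustrative; the papers differ in head architecture and in how the extra heads are used at inference, e.g. for self-speculative decoding.

<pre>
import numpy as np

rng = np.random.default_rng(0)
V, D, K = 1000, 64, 4                          # vocab size, hidden size, number of future tokens

embed = rng.normal(size=(V, D)) * 0.02
trunk = rng.normal(size=(D, D)) * 0.02         # stand-in for the shared transformer trunk
heads = rng.normal(size=(K, D, V)) * 0.02      # one unembedding head per future offset

def forward(tokens):
    h = np.tanh(embed[tokens] @ trunk)         # (T, D) shared representation
    return np.einsum('td,kdv->ktv', h, heads)  # (K, T, V): head k predicts token t+k+1

def multi_token_loss(tokens):
    logits = forward(tokens)
    T = len(tokens)
    loss, count = 0.0, 0
    for k in range(K):
        for t in range(T - (k + 1)):           # head k at position t is supervised by token t+k+1
            z = logits[k, t]
            z = z - z.max()                    # numerically stable log-softmax
            loss += -(z[tokens[t + k + 1]] - np.log(np.exp(z).sum()))
            count += 1
    return loss / count

tokens = rng.integers(0, V, size=32)
print(multi_token_loss(tokens))
</pre>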

==Generation Order==

* 2019-02: [https://arxiv.org/abs/1902.02192 Non-Monotonic Sequential Text Generation]
* 2019-04: [https://arxiv.org/abs/1904.09324 Mask-Predict: Parallel Decoding of Conditional Masked Language Models] (see sketch after this list)
* 2019-06: [https://arxiv.org/abs/1906.09601 Sequence Generation: From Both Sides to the Middle]
* 2020-04: [https://arxiv.org/abs/2004.11579 Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order]
* 2021-12: [https://arxiv.org/abs/2112.10543 Spiral Language Modeling]
* 2023-10: [https://arxiv.org/abs/2310.09930 FiLM: Fill-in Language Models for Any-Order Generation]
* 2024-07: [https://arxiv.org/abs/2407.03582 Integrating Randomness in Large Language Models: A Linear Congruential Generator Approach for Generating Clinically Relevant Content]
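
Most of the generation-order papers above relax strict left-to-right decoding. The Mask-Predict loop is the easiest to state: predict every position in parallel, keep the most confident predictions, re-mask the rest, and iterate on a decaying schedule. In the sketch below a random stub stands in for the conditional masked language model, so only the decoding loop itself is meaningful.

<pre>
import numpy as np

MASK = -1

def mask_predict(score_fn, length, n_iters=10):
    """Non-autoregressive decoding: predict all positions in parallel,
    keep the most confident predictions, re-mask the rest, repeat."""
    tokens = np.full(length, MASK)
    for it in range(n_iters):
        probs = score_fn(tokens)                              # (length, vocab) per-position distributions
        pred, conf = probs.argmax(axis=-1), probs.max(axis=-1)
        n_mask = int(length * (n_iters - 1 - it) / n_iters)   # linearly decaying number of masked slots
        tokens = pred.copy()
        if n_mask > 0:
            tokens[np.argsort(conf)[:n_mask]] = MASK          # least confident positions get another pass
    return tokens

# Toy stand-in for the conditional masked LM: ignores its input and returns
# random distributions, just to exercise the decoding loop end to end.
_rng = np.random.default_rng(1)
def toy_scores(tokens, vocab=50):
    p = _rng.random((len(tokens), vocab))
    return p / p.sum(axis=-1, keepdims=True)

print(mask_predict(toy_scores, length=12))
</pre>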

==Diffusion Language Models==

* 2024-02: [https://arxiv.org/abs/2402.03687 Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation]
* 2025-02: [https://arxiv.org/abs/2502.09992 Large Language Diffusion Models]
* 2025-02: [https://www.inceptionlabs.ai/ Inception Labs] [https://www.inceptionlabs.ai/news Mercury] model ([https://chat.inceptionlabs.ai/ online demo])
* 2025-03: [https://arxiv.org/abs/2503.09573 Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models] ([https://m-arriola.com/bd3lms/ project], [https://github.com/kuleshov-group/bd3lms code], [https://huggingface.co/collections/kuleshov-group/bd3-lms-67be95f81b96b15fec50d53f hf]) (see sketch after this list)
* 2025-04: [https://hkunlp.github.io/blog/2025/dream/ Dream 7B], described by its authors as the most powerful open diffusion large language model to date
* 2025-04: [https://dllm-reasoning.github.io/ d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning] ([https://dllm-reasoning.github.io/media/preprint.pdf preprint], [https://github.com/dllm-reasoning/d1 code])
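
Block Diffusion (fourth item above) interpolates between the two regimes: blocks are produced left to right, while tokens inside a block are unmasked in parallel over a few denoising steps, conditioned on the finished prefix. Below is a self-contained toy of that control flow only; the random stub stands in for the learned denoiser, and the real method's noise schedule, training objective, and KV-cache handling are not represented.

<pre>
import numpy as np

MASK = -1
_rng = np.random.default_rng(0)

def toy_denoiser(prefix, block, vocab=50):
    """Stub for p(token | prefix, partially masked block): ignores its inputs
    and returns random per-position distributions."""
    p = _rng.random((len(block), vocab))
    return p / p.sum(axis=-1, keepdims=True)

def denoise_block(prefix, block_len, n_steps=4):
    """Parallel, confidence-ordered unmasking within one block (diffusion-style)."""
    block = np.full(block_len, MASK)
    for step in range(n_steps):
        probs = toy_denoiser(prefix, block)
        pred, conf = probs.argmax(-1), probs.max(-1)
        keep = (step + 1) * block_len // n_steps     # reveal a growing fraction each step
        order = np.argsort(-conf)                    # most confident positions first
        block = np.full(block_len, MASK)
        block[order[:keep]] = pred[order[:keep]]
    return block

def generate(n_blocks=4, block_len=8):
    """Autoregressive over blocks, diffusion within each block."""
    seq = np.array([], dtype=int)
    for _ in range(n_blocks):
        seq = np.concatenate([seq, denoise_block(seq, block_len)])
    return seq

print(generate())
</pre>

With block_len = 1 this degenerates to ordinary autoregression; with a single block spanning the whole sequence it becomes a pure masked-diffusion-style sampler, which is the interpolation the paper's title refers to.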

===Related: Image Synthesis via Autoregression/Diffusion===

* 2023-10: [https://arxiv.org/abs/2310.01400 Sequential Data Generation with Groupwise Diffusion Process]
* 2024-02: [https://arxiv.org/abs/2402.09470 Rolling Diffusion Models]
* 2024-08: [https://arxiv.org/abs/2408.11039 Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model]

==Sampling==

* 2024-10: [https://github.com/xjdr-alt/entropix entropix: Entropy Based Sampling and Parallel CoT Decoding] (see sketch after this list)
* 2024-10: [https://arxiv.org/abs/2410.01104 softmax is not enough (for sharp out-of-distribution)]
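
Both sampling entries revolve around what the shape of the next-token distribution says about model confidence. As a minimal illustration of the entropy-gated idea behind entropix (the thresholds and the three branches are invented for this sketch; the actual project also uses attention-head statistics and varentropy, and may inject chain-of-thought or pause tokens on the uncertain branch):

<pre>
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_gated_sample(logits, rng, low=0.5, high=2.0):
    """Choose a decoding action from the entropy of the next-token distribution.
    Thresholds are illustrative, not the entropix defaults."""
    p = softmax(logits)
    h = entropy(p)
    if h < low:                        # confident: act greedily
        return int(p.argmax()), "greedy"
    if h > high:                       # very uncertain: a real system might insert a CoT/pause token;
        p = softmax(logits / 1.5)      # here we just sample at a raised temperature
        return int(rng.choice(len(p), p=p)), "explore"
    return int(rng.choice(len(p), p=p)), "sample"

rng = np.random.default_rng(0)
sharp = np.zeros(10); sharp[3] = 8.0      # strongly peaked distribution
flat  = np.zeros(10)                      # uniform distribution
print(entropy_gated_sample(sharp, rng))   # (3, 'greedy')
print(entropy_gated_sample(flat, rng))    # (some index, 'explore')
</pre>

The "softmax is not enough" paper addresses a different but related question: whether softmax attention can remain sharp out of distribution at all, rather than how to act on the output distribution once you have it.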

=Missing Elements=

* Memory
* Continuous learning/update
* Robust contextual model
* Long-time-horizon coherence
* Fluid intelligence
* Agency
  
 
=See Also=

* [[Increasing AI Intelligence]]
