==Context Length==
* 2025-02-27: [https://arxiv.org/abs/2502.20082 LongRoPE2: Near-Lossless LLM Context Window Scaling] ([https://github.com/microsoft/LongRoPE code])
 
* [https://x.com/sundarpichai/status/1904579419496386736 2025-03-25]: [https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/ Gemini 2.5 Pro] [https://x.com/pvncher/status/1904685092053606715 1M context]
 
* 2025-04-05: Meta [https://ai.meta.com/blog/llama-4-multimodal-intelligence/ Llama 4] 10M context
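
A common thread in the entries above is rescaling rotary position embeddings (RoPE) so that positions beyond the pretrained range map back into it. Below is a minimal sketch of linear position interpolation, the simplest such rescaling; LongRoPE2 instead searches for non-uniform per-dimension scaling factors, which this toy version omits, and the 4096-token training length and all sizes are illustrative assumptions.

<syntaxhighlight lang="python">
import torch

def rope_angles(seq_len, head_dim, train_len=4096, base=10000.0):
    """RoPE rotation angles with linear position interpolation: when the
    sequence exceeds the trained context length, compress positions so the
    largest one still falls inside the trained range."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float()
    if seq_len > train_len:
        positions = positions * (train_len / seq_len)  # interpolate, don't extrapolate
    return torch.outer(positions, inv_freq)            # (seq_len, head_dim // 2)

def apply_rope(x, angles):
    """Rotate consecutive (even, odd) feature pairs of x: (seq_len, head_dim)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(8192, 64)                     # queries at 2x the assumed training length
q_rot = apply_rope(q, rope_angles(8192, 64))  # rotation angles stay within the trained range
</syntaxhighlight>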
  
 
==Extended Context==

==Neural (non-token) Latent Representation==
 
* 2025-02: Meta: [https://arxiv.org/abs/2502.08524 LLM Pretraining with Continuous Concepts] (CoCoMix)
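
A loose sketch of the continuous-concept idea above: alongside next-token prediction, the model predicts a small vector of continuous concept activations and mixes a projection of that prediction back into its hidden state. CoCoMix derives its target concepts from a sparse autoencoder and interleaves them with token hidden states; the module below only illustrates the mixing step, and all names and sizes are illustrative assumptions.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class ConceptMixer(nn.Module):
    """Predict continuous concept activations from the hidden state and
    mix their embedding back into the residual stream."""
    def __init__(self, d_model=512, n_concepts=32):
        super().__init__()
        self.to_concepts = nn.Linear(d_model, n_concepts)  # hidden -> concept activations
        self.to_hidden = nn.Linear(n_concepts, d_model)    # concepts -> hidden update
    def forward(self, h):                              # h: (batch, seq, d_model)
        concepts = torch.sigmoid(self.to_concepts(h))  # continuous, never tokenized
        return h + self.to_hidden(concepts), concepts  # concepts can carry an auxiliary loss

mixer = ConceptMixer()
h_mixed, concepts = mixer(torch.randn(2, 16, 512))
</syntaxhighlight>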
 
=Altered Transformer=

==Tokenization==
* 2024-04: [https://arxiv.org/abs/2404.19737 Better & Faster Large Language Models via Multi-token Prediction] (sketch after this list)
* 2024-12: [https://arxiv.org/abs/2412.06676 I Don't Know: Explicit Modeling of Uncertainty with an <nowiki>[IDK]</nowiki> Token]
* 2025-04: Meta: [https://arxiv.org/abs/2504.00927 Multi-Token Attention]
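
A minimal sketch of the multi-token prediction setup (first entry above): a shared trunk feeds several independent output heads, and head k is trained with cross-entropy to predict the token k positions ahead. The single-layer trunk and all sizes are illustrative assumptions.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class MultiTokenPredictor(nn.Module):
    """Shared trunk with n_future independent output heads; head k
    predicts the token k+1 positions ahead."""
    def __init__(self, vocab=32000, d_model=512, n_future=4):
        super().__init__()
        self.trunk = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab) for _ in range(n_future))

    def forward(self, h):                        # h: (batch, seq, d_model)
        h = self.trunk(h)
        return [head(h) for head in self.heads]  # one logits tensor per future offset

model = MultiTokenPredictor()
logits = model(torch.randn(2, 16, 512))          # 4 tensors of shape (2, 16, 32000)
</syntaxhighlight>

At inference the extra heads can simply be dropped, or used to draft several tokens at once for self-speculative decoding.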

==Generation Order==
* 2019-02: [https://arxiv.org/abs/1902.02192 Non-Monotonic Sequential Text Generation]
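
This line of work generates text by inserting tokens into the middle of a growing sequence rather than always appending on the right. The sketch below shows only that decoding skeleton; the learned (position, token) policy of the paper is replaced by a hypothetical random stand-in.

<syntaxhighlight lang="python">
import random

def insertion_decode(propose, steps=8):
    """Non-monotonic generation skeleton: each step proposes a
    (position, token) pair and inserts the token mid-sequence."""
    seq = []
    for _ in range(steps):
        pos, token = propose(seq)
        seq.insert(pos, token)
    return seq

# Hypothetical stand-in policy; a real model would score every slot.
words = ["the", "cat", "sat", "on", "a", "mat"]
policy = lambda seq: (random.randint(0, len(seq)), random.choice(words))
print(" ".join(insertion_decode(policy, steps=6)))
</syntaxhighlight>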
 
==Diffusion Language Models==
 
* 2025-02: [https://www.inceptionlabs.ai/ Inception Labs] [https://www.inceptionlabs.ai/news Mercury] model ([https://chat.inceptionlabs.ai/ online demo])
 
* 2025-03: [https://arxiv.org/abs/2503.09573 Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models] ([https://m-arriola.com/bd3lms/ project], [https://github.com/kuleshov-group/bd3lms code], [https://huggingface.co/collections/kuleshov-group/bd3-lms-67be95f81b96b15fec50d53f hf])
 
* 2025-04: [https://hkunlp.github.io/blog/2025/dream/ Dream 7B: the most powerful open diffusion large language model to date]
* 2025-04: [https://dllm-reasoning.github.io/ d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning] ([https://dllm-reasoning.github.io/media/preprint.pdf preprint], [https://github.com/dllm-reasoning/d1 code])
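
These models share a decoding recipe quite different from left-to-right sampling: start from a fully masked sequence and repeatedly unmask the positions the denoiser is most confident about, refining the whole sequence in parallel. A toy version of that loop, under stated assumptions (no noise schedule or remasking, and none of Block Diffusion's autoregressive interpolation; the untrained stand-in denoiser is hypothetical):

<syntaxhighlight lang="python">
import torch

def diffusion_decode(denoiser, length, mask_id, steps=8):
    """Start from all-[MASK] tokens; each step, fill the most confident
    still-masked positions with the denoiser's argmax predictions."""
    seq = torch.full((1, length), mask_id)
    per_step = max(1, length // steps)
    for _ in range(steps):
        logits = denoiser(seq)                         # (1, length, vocab)
        conf, pred = logits.softmax(-1).max(-1)        # per-position confidence + token
        conf = conf.masked_fill(seq != mask_id, -1.0)  # only consider masked slots
        idx = conf.topk(per_step, dim=-1).indices      # most confident masked positions
        seq.scatter_(1, idx, pred.gather(1, idx))      # unmask them in parallel
        if not (seq == mask_id).any():
            break
    return seq

# Hypothetical untrained stand-in for a learned denoiser (vocab of 100).
denoiser = lambda ids: torch.randn(ids.shape[0], ids.shape[1], 100)
print(diffusion_decode(denoiser, length=16, mask_id=0))
</syntaxhighlight>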
  
 
===Related: Image Synthesis via Autoregression/Diffusion===
 
* 2024-02: [https://arxiv.org/abs/2402.09470 Rolling Diffusion Models]
 
* 2024-08: [https://arxiv.org/abs/2408.11039 Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model]
 
==Sampling==
* 2024-10: [https://github.com/xjdr-alt/entropix entropix: Entropy Based Sampling and Parallel CoT Decoding] (sketch after this list)
* 2024-10: [https://arxiv.org/abs/2410.01104 softmax is not enough (for sharp out-of-distribution)]
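
Both entries point at the same lever: the shape of the next-token distribution. A minimal sketch of entropy-adaptive sampling in the spirit of entropix follows: decode greedily when the distribution's entropy is low, sample when it is high. The thresholds and temperature rule are illustrative assumptions, not entropix's actual heuristics.

<syntaxhighlight lang="python">
import torch

def entropy_adaptive_sample(logits, low=0.5, high=3.0):
    """Choose the next token based on the entropy (in nats) of its
    distribution: confident -> greedy; uncertain -> sample."""
    probs = logits.softmax(-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum()
    if entropy < low:                      # confident: take the argmax
        return int(probs.argmax())
    temp = 1.0 if entropy < high else 1.5  # very uncertain: flatten further
    return int(torch.multinomial((logits / temp).softmax(-1), 1))

logits = torch.randn(32000)                # hypothetical next-token logits
print(entropy_adaptive_sample(logits))
</syntaxhighlight>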

=Missing Elements=
* Memory
* Continuous learning/update
* Robust contextual model
* Long-time-horizon coherence
* Fluid intelligence
* Agency

=See Also=
* [[Increasing AI Intelligence]]