AI research trends
System 2 Reasoning
See: Increasing AI Intelligence
Memory
LLM Weights Memory
- 2024-12: Memory Layers at Scale
Context Length
- 2020: Various ideas for scaling context window, including Longformer
- 2023-04-02: Discussion of ideas for how to scale context window
- 2023-05-11: Anthropic announces 100k window
- 2023-06-07: magic.dev claims 5M tokens coming soon
- 2023-07-05: Microsoft describes LongNet, with 1 billion token window
- 2023-07-11: Focused Transformer 256k
- 2023-11-06: GPT-4 turbo 128k
- 2023-11-22: Anthropic Claude 2.1 200k
- 2023-12-13: Mamba, a state-space-model alternative to transformer attention that scales linearly with sequence length
- 2024-01-04: LongLM to extend context window
- 2024-02-15: Gemini 1.5 1M tokens
- 2024-03-04: Anthropic Claude 3 200k
- 2024-03-08: Google claims Gemini 1.5 can scale to 10M
- 2024-04-10: Google preprint demonstrates infinite context length by using compressive memory (a toy sketch of the compressive-memory idea follows this list)
- 2024-04-12: Meta et al. demonstrate Megalodon that enables infinite context via a more efficient architecture
- 2024-04-14: Google presents TransformerFAM, which uses a feedback loop so the model attends to its own latent representations; this acts as working memory and provides effectively infinite context
- 2024-10-31: What is Wrong with Perplexity for Long-context Language Modeling?
- 2025-01-14: MiniMax-01 4M (paper)
- 2025-01-27: Qwen2.5-1M (report)
- 2025-02-14: InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU 3M
- 2025-02-18: HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading (code)
- 2025-02-18: Infinite Retrieval: Attention Enhanced LLMs in Long-Context Processing
- 2025-02-19: MoBA: Mixture of Block Attention for Long-Context LLMs
- 2025-02-27: LongRoPE2: Near-Lossless LLM Context Window Scaling (code)
- 2025-03-25: Gemini 2.5 Pro 1M
- 2025-04-05: Meta Llama 4 10M
- 2025-04-14: OpenAI GPT-4.1 1M
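The compressive-memory approach referenced above (2024-04-10) can be illustrated with a small sketch: rather than keeping every key/value pair in a growing KV cache, old pairs are folded into a fixed-size associative matrix and read back with a linear-attention-style query. The feature map, single head, and numpy framing below are simplifying assumptions for illustration, not the preprint's exact formulation.

import numpy as np

class CompressiveMemory:
    """Fixed-size associative memory: write() folds key/value pairs into a
    constant-size matrix; read() retrieves an approximate value for a query."""
    def __init__(self, dim):
        self.M = np.zeros((dim, dim))   # associative matrix
        self.z = np.zeros(dim)          # normalization term

    @staticmethod
    def feature(x):
        # ELU(x) + 1: a standard non-negative feature map used in linear attention
        return np.where(x > 0, x + 1.0, np.exp(x))

    def write(self, key, value):
        k = self.feature(key)
        self.M += np.outer(k, value)
        self.z += k

    def read(self, query):
        q = self.feature(query)
        return (q @ self.M) / (q @ self.z + 1e-6)

rng = np.random.default_rng(0)
dim = 16
mem = CompressiveMemory(dim)
keys = rng.normal(size=(1000, dim))      # simulate a long stream of context
values = rng.normal(size=(1000, dim))
for k, v in zip(keys, values):           # memory cost stays O(dim^2), not O(context)
    mem.write(k, v)
print(mem.read(keys[0]).shape)           # (16,) -- constant-size state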
Extended Context
Retrieval beyond RAG
See also: AI tools: Retrieval Augmented Generation (RAG)
- 2024-10: Microsoft: KBLaM: Knowledge Base augmented Language Model
- 2024-12: Let your LLM generate a few tokens and you will reduce the need for retrieval
- 2024-12: RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation
- 2025-03: Microsoft: Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs
Working Memory
Long-Term Memory
Episodic Memory
Updating Weights at Inference-time
- 2025-01: Transformer²: Self-adaptive LLMs
Parameters as Tokens
Internal Thought Representation Space
Visual Thinking
- 2025-01: VideoRAG: Retrieval-Augmented Generation over Video Corpus
- 2025-01: Imagine while Reasoning in Space: Multimodal Visualization-of-Thought
Neural (non-token) Latent Representation
- 2024-11: Microsoft: DroidSpeak: KV Cache Sharing for Cross-LLM Communication and Multi-LLM Serving: cooperating LLMs exchange intermediate (KV-cache) representations directly, rather than re-encoding natural-language text
- 2024-12: Meta: Training Large Language Models to Reason in a Continuous Latent Space: the final hidden state is fed directly back into the model as the next input embedding, instead of tokenizing intermediate thoughts (Chain of Continuous Thought, a.k.a. Coconut; a minimal sketch follows this list)
- 2024-12: Meta: Large Concept Models: Language Modeling in a Sentence Representation Space: train a model that operates at a higher level of abstraction than typical word/token LLMs; model operates in a space of concept embeddings (more akin to full sentences than individual words)
- 2024-12: Meta: Byte Latent Transformer: Patches Scale Better Than Tokens: Instead of tokenization, dynamically convert input byte-stream into patches, yielding gains in compute efficiency, with minimal loss in performance
- 2024-12: Compressed Chain of Thought: Efficient Reasoning Through Dense Representations
- 2024-12: Google DeepMind: Deliberation in Latent Space via Differentiable Cache Augmentation
- 2024-12: LANG-JEPA: Learning to Think in Latent Space
- 2025-01: Efficient Reasoning with Hidden Thinking (code)
- 2025-02: Huginn-0125: Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach (code, model)
- 2025-02: Meta: LLM Pretraining with Continuous Concepts (CoCoMix)
- 2025-06: Reasoning by Superposition: A Theoretical Perspective on Chain of Continuous Thought
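The Coconut-style "continuous thought" loop noted above can be sketched in a few lines: instead of decoding a token and re-embedding it at each reasoning step, the model's last hidden state is fed straight back in as the next input, and tokens are only produced at the end. The tiny GRU below is a stand-in for the real LLM; all names and sizes are illustrative.

import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Stand-in for an LLM: an embedding table, a recurrent core, and an
    output head. Only the feedback wiring below is the point of the sketch."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.core = nn.GRUCell(dim, dim)   # stand-in for the transformer stack
        self.head = nn.Linear(dim, vocab)

model = TinyLM()
h = torch.zeros(1, 32)
x = model.embed(torch.tensor([1]))         # embed an initial prompt token

# Ordinary decoding would go: hidden state -> logits -> sampled token ->
# embedding -> next input. Continuous thought skips the token bottleneck
# and feeds the hidden state back in directly for a few latent steps.
for _ in range(4):                         # 4 latent "thought" steps
    h = model.core(x, h)
    x = h                                  # hidden state becomes the next input
logits = model.head(h)                     # decode to tokens only at the end
print(logits.shape)                        # torch.Size([1, 100])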
Altered Transformer
Tokenization
- 2024-04: Better & Faster Large Language Models via Multi-token Prediction (a minimal sketch follows this list)
- 2024-12: I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token
- 2025-04: Meta: Multi-Token Attention
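A minimal sketch of the multi-token prediction idea referenced above: several lightweight heads share one trunk, and head i predicts the token i+1 positions ahead. The MLP trunk, head count, and dimensions are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class MultiTokenLM(nn.Module):
    """Shared trunk with one output head per future offset: head i predicts
    the token i+1 positions ahead of the current position."""
    def __init__(self, vocab=100, dim=32, n_future=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.trunk = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())  # stand-in for the transformer stack
        self.heads = nn.ModuleList([nn.Linear(dim, vocab) for _ in range(n_future)])

    def forward(self, tokens):
        h = self.trunk(self.embed(tokens))       # (batch, seq, dim)
        return [head(h) for head in self.heads]  # n_future logit tensors

model = MultiTokenLM()
logits = model(torch.randint(0, 100, (2, 16)))   # batch of 2 sequences, length 16
print(len(logits), logits[0].shape)              # 4 torch.Size([2, 16, 100])
# During training, head i's targets are the input tokens shifted left by i+1;
# at inference the extra heads can be dropped or reused for speculative decoding.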
Generation Order
- 2019-02: Non-Monotonic Sequential Text Generation
- 2019-04: Mask-Predict: Parallel Decoding of Conditional Masked Language Models
- 2019-06: Sequence Generation: From Both Sides to the Middle
- 2020-04: Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order
- 2021-12: Spiral Language Modeling
- 2023-10: FiLM: Fill-in Language Models for Any-Order Generation
- 2024-07: Integrating Randomness in Large Language Models: A Linear Congruential Generator Approach for Generating Clinically Relevant Content
Diffusion Language Models
- 2024-02: Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation
- 2025-02: Large Language Diffusion Models (see the unmasking-loop sketch after this list)
- 2025-02: Inception Labs Mercury model (online demo)
- 2025-03: Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models (project, code, hf)
- 2025-04: Dream 7B: Introducing Dream 7B, the most powerful open diffusion large language model to date
- 2025-04: d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning (preprint, code)
- 2025-06: Esoteric Language Models (project)
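The entries above share one decoding pattern, sketched below: start from a fully masked sequence and, over a fixed number of parallel passes, commit the positions the denoiser is most confident about while the rest stay masked. The predict() stub stands in for a real bidirectional denoiser; the schedule and sizes are illustrative assumptions.

import numpy as np

MASK = -1
rng = np.random.default_rng(0)

def predict(seq, vocab=20):
    """Stub denoiser: returns a (token, confidence) guess for every position.
    A real model would condition on the whole partially-masked sequence."""
    tokens = rng.integers(0, vocab, size=len(seq))
    confidence = rng.random(len(seq))
    return tokens, confidence

def diffusion_decode(length=12, steps=4):
    seq = np.full(length, MASK)
    for step in range(steps, 0, -1):
        tokens, confidence = predict(seq)
        masked = np.where(seq == MASK)[0]
        k = int(np.ceil(len(masked) / step))                # unmask ~1/step of what's left
        best = masked[np.argsort(-confidence[masked])[:k]]  # most confident positions
        seq[best] = tokens[best]                            # commit them; the rest stay masked
    return seq

print(diffusion_decode())   # all positions filled after `steps` parallel passes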
Related: Image Synthesis via Autoregression/Diffusion
- 2023-10: Sequential Data Generation with Groupwise Diffusion Process
- 2024-02: Rolling Diffusion Models
- 2024-08: Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model
Sampling
- 2024-10: entropix: Entropy Based Sampling and Parallel CoT Decoding (a toy sketch follows this list)
- 2024-10: softmax is not enough (for sharp out-of-distribution)
- 2025-06: Corrector Sampling in Language Models
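A toy sketch of entropy-aware sampling in the spirit of the entropix entry above: measure the entropy of the next-token distribution and react differently when the model is uncertain. The threshold and the "sharpen the distribution" reaction are illustrative assumptions, not the repository's actual decision rules.

import numpy as np

def softmax(logits, temperature=1.0):
    z = (logits - logits.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())

def sample_next(logits, rng, high_entropy=2.5):
    """Sample a token, but treat high-entropy steps specially: here the
    distribution is simply sharpened; a real implementation might instead
    branch into parallel chains of thought or emit a 'pause' token."""
    p = softmax(logits)
    h = entropy(p)
    if h > high_entropy:                       # model is unsure about this step
        p = softmax(logits, temperature=0.5)   # sharpen before sampling (illustrative)
    return int(rng.choice(len(p), p=p)), h

rng = np.random.default_rng(0)
logits = rng.normal(size=50)                   # fake next-token logits
token, h = sample_next(logits, rng)
print(token, round(h, 2))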
Missing Elements
- Memory
- Continuous learning/update
- Robust contextual model
- Long-time-horizon coherence
- Fluid intelligence
- Agency
- Modeling of self
Memes
- Andrej Karpathy:
  - 2017-11: Software 2.0 ("Gradient descent can write code better than you. I'm sorry.")
  - 2022-10: Transformers as general-purpose differentiable computers (talk)
  - 2023-09: LLM as kernel of a new Operating System (diagram, OS analogies)
  - 2025-02: Vibe coding
  - 2025-06: Software 3.0 (talk): "Prompts as Programs". Software 1.0 is code; 2.0 is model weights; 3.0 is prompts.
  - 2025-06: "Context Engineering" instead of "Prompt Engineering"
  - 2025-06: LLMs as "cognitive cores"