AI research trends

==Neural (non-token) Latent Representation==
 
* 2025-02: [https://huggingface.co/tomg-group-umd/huginn-0125 Huginn-0125]: [https://arxiv.org/abs/2502.05171 Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach] ([https://github.com/seal-rg/recurrent-pretraining code], [https://huggingface.co/tomg-group-umd/huginn-0125 model]; recurrence sketched after this list)
* 2025-02: Meta: [https://arxiv.org/abs/2502.08524 LLM Pretraining with Continuous Concepts] (CoCoMix)
* 2025-06: [https://arxiv.org/abs/2505.12514 Reasoning by Superposition: A Theoretical Perspective on Chain of Continuous Thought]
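
As a rough illustration of the recurrent-depth approach: instead of emitting chain-of-thought tokens, the model iterates a shared block in latent space, so the number of iterations becomes a test-time compute dial. This is a minimal sketch under assumed shapes and module names, not the released Huginn code:

<syntaxhighlight lang="python">
# Hedged sketch of recurrent-depth latent reasoning in the spirit of
# Huginn-0125 (arXiv:2502.05171): "think" by iterating a shared block in
# latent space; the iteration count is chosen at inference time. Module
# names, sizes, and the random state init are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    def __init__(self, vocab_size=256, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # "prelude": tokens -> latent
        self.core = nn.TransformerEncoderLayer(         # shared recurrent block
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True)
        self.inject = nn.Linear(2 * d_model, d_model)   # re-inject input each step
        self.head = nn.Linear(d_model, vocab_size)      # "coda": latent -> logits

    def forward(self, tokens, num_steps):
        e = self.embed(tokens)              # (batch, seq, d_model), fixed per input
        s = torch.randn_like(e)             # random initial latent state
        for _ in range(num_steps):          # more steps = more latent "thought"
            s = self.core(self.inject(torch.cat([s, e], dim=-1)))
        return self.head(s)                 # next-token logits; no CoT tokens emitted

model = RecurrentDepthLM()
x = torch.randint(0, 256, (1, 8))
shallow = model(x, num_steps=2)             # cheap answer
deep = model(x, num_steps=32)               # same weights, more test-time compute
print(shallow.shape, deep.shape)
</syntaxhighlight>

Because the same weights serve every depth, compute can be scaled at inference without retraining.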
  
 
=Altered Transformer=

==Diffusion Language Models==
 
* 2025-04: [https://hkunlp.github.io/blog/2025/dream/ Dream 7B: Introducing Dream 7B, the most powerful open diffusion large language model to date] (decoding loop sketched after this list)
* 2025-04: [https://dllm-reasoning.github.io/ d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning] ([https://dllm-reasoning.github.io/media/preprint.pdf preprint], [https://github.com/dllm-reasoning/d1 code])
* 2025-06: [https://arxiv.org/abs/2506.01928 Esoteric Language Models] ([https://s-sahoo.com/Eso-LMs/ project])
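
For intuition on how these models differ from left-to-right autoregression, here is a minimal sketch of masked-diffusion decoding: begin with an all-mask sequence and, across a few denoising steps, commit the most confident predictions while re-predicting the rest. The toy denoiser is an assumed stand-in for a trained dLLM such as Dream 7B:

<syntaxhighlight lang="python">
# Hedged sketch of masked-diffusion decoding (the style used by dLLMs such
# as Dream 7B and d1): start from an all-[MASK] sequence and, over a few
# denoising steps, fix the most confident positions while re-predicting
# the rest. toy_denoiser is a stand-in assumption; a real model would
# return trained per-position token distributions.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MASK, LENGTH, STEPS = 50, -1, 12, 4

def toy_denoiser(seq):
    """Stand-in for the diffusion LM: per-position token distributions."""
    probs = rng.random((len(seq), VOCAB))
    return probs / probs.sum(axis=1, keepdims=True)

seq = np.full(LENGTH, MASK)                   # x_T: everything masked
for step in range(STEPS):
    probs = toy_denoiser(seq)
    conf, guess = probs.max(axis=1), probs.argmax(axis=1)
    conf[seq != MASK] = -np.inf               # decoded positions stay fixed
    remaining = int((seq == MASK).sum())
    k = -(-remaining // (STEPS - step))       # ceil: unmask k positions this step
    for pos in np.argsort(conf)[::-1][:k]:
        seq[pos] = guess[pos]
    print(f"step {step}: {seq}")              # tokens appear in confidence order,
                                              # not left to right
</syntaxhighlight>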
  
 
===Related: Image Synthesis via Autoregression/Diffusion===

==Sampling==
 
* 2024-10: [https://github.com/xjdr-alt/entropix entropix: Entropy Based Sampling and Parallel CoT Decoding] (entropy gating sketched after this list)
* 2024-10: [https://arxiv.org/abs/2410.01104 softmax is not enough (for sharp out-of-distribution)]
* 2025-06: [https://arxiv.org/abs/2506.06215 Corrector Sampling in Language Models]
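
A minimal sketch of the entropy-gating idea: inspect the entropy of the next-token distribution and pick the decoding rule accordingly. The thresholds, toy logits, and fallback behavior below are illustrative assumptions rather than entropix's actual heuristics:

<syntaxhighlight lang="python">
# Hedged sketch of entropy-gated decoding in the spirit of entropix:
# low entropy -> commit greedily; high entropy (the flat regime analyzed
# in "softmax is not enough") -> sample more cautiously, where entropix
# itself may branch or inject a reflection token.
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits, temperature=1.0):
    z = (logits - logits.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

def entropy_gated_sample(logits, low=1.0):
    p = softmax(logits)
    h = float(-(p * np.log(p + 1e-12)).sum())   # entropy in nats
    if h < low:                                  # confident: argmax
        return int(p.argmax()), h
    # uncertain: resample at higher temperature; a fuller system might
    # branch several continuations or insert a "pause and think" token
    return int(rng.choice(len(p), p=softmax(logits, temperature=1.5))), h

peaked = np.array([8.0, 1.0, 0.5, 0.2])          # confident -> greedy path
flat = np.array([1.1, 1.0, 0.9, 1.0])            # uncertain -> sampled path
for logits in (peaked, flat):
    tok, h = entropy_gated_sample(logits)
    print(f"entropy={h:.2f} nats -> token {tok}")
</syntaxhighlight>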
  
 
=Missing Elements=
 
* Memory
* Continuous learning/update
* Robust contextual model
* Long-time-horizon coherence
* Fluid intelligence
* Agency
* Modeling of self

=Memes=

* Andrej Karpathy:
** 2017-11: [https://karpathy.medium.com/software-2-0-a64152b37c35 Software 2.0] ([https://x.com/karpathy/status/893576281375219712 "Gradient descent can write code better than you. I'm sorry."])
** 2022-10: [https://x.com/karpathy/status/1582807367988654081 Transformers as general-purpose differentiable computers] ([https://www.youtube.com/watch?v=9uw3F6rndnA talk])
** 2023-09: [https://x.com/karpathy/status/1707437820045062561 LLM as kernel of a new Operating System] ([https://x.com/karpathy/status/1723140519554105733 diagram], [https://www.threads.com/@karpathy/post/CzehPtxPEF3 OS analogies])
** 2025-02: [https://x.com/karpathy/status/1886192184808149383 Vibe coding]
** 2025-06: [https://www.latent.space/p/s3 Software 3.0] ([https://www.youtube.com/watch?v=LCEmiRjPEtQ&t=1s talk]): "Prompts as Programs". Software 1.0 is code; 2.0 is model weights; 3.0 is prompts.
** 2025-06: [https://x.com/karpathy/status/1937902205765607626 "Context Engineering" instead of "Prompt Engineering"]
** 2025-06: [https://x.com/karpathy/status/1938626382248149433 LLMs as "cognitive cores"]
 
=See Also=

* [[Increasing AI Intelligence]]
