AI research trends

==Context Length==
 
* 2024-04-12: Meta et al. demonstrate [https://arxiv.org/abs/2404.08801 Megalodon], which enables unlimited context length via a more efficient architecture
 
* 2024-04-14: Google presents [https://arxiv.org/abs/2404.09173 TransformerFAM], which leverages a feedback loop to attend to its own latent representations; these act as a working memory and provide effectively infinite context (a minimal sketch of this idea appears after this list)
 
* 2024-10-31: [https://arxiv.org/abs/2410.23771 What is Wrong with Perplexity for Long-context Language Modeling?]

* [https://x.com/MiniMax__AI/status/1879226391352549451 2025-01-14]: [https://www.minimaxi.com/en/news/minimax-01-series-2 MiniMax-01] 4M ([https://www.minimaxi.com/en/news/minimax-01-series-2 paper])

* [https://x.com/Alibaba_Qwen/status/1883557964759654608 2025-01-27]: [https://qwenlm.github.io/blog/qwen2.5-1m/ Qwen2.5-1M] ([https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf report])

* 2025-02-14: [https://arxiv.org/abs/2502.08910 InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU] 3M

* [https://x.com/AnimaAnandkumar/status/1897449851941744648 2025-02-18]: [https://arxiv.org/abs/2502.12574 HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading] ([https://github.com/wdlctc/headinfer code])

* 2025-02-18: [https://arxiv.org/abs/2502.12962 Infinite Retrieval: Attention Enhanced LLMs in Long-Context Processing]

* 2025-02-19: [https://github.com/MoonshotAI/MoBA MoBA: Mixture of Block Attention for Long-Context LLMs]

* 2025-02-27: [https://arxiv.org/abs/2502.20082 LongRoPE2: Near-Lossless LLM Context Window Scaling] ([https://github.com/microsoft/LongRoPE code])
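
The TransformerFAM entry above describes a feedback loop in which the model attends to its own latent representations as a working memory. The following is a minimal, illustrative PyTorch sketch of that general idea, not the paper's architecture: a few learned memory slots are carried across fixed-size segments, so information can persist beyond the attention window. All module names and sizes here are assumptions.

<pre>
# Illustrative sketch only (not TransformerFAM's actual code): a block-wise
# attention layer with feedback memory slots carried across segments.
import torch
import torch.nn as nn

class FeedbackMemoryBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_mem=8):
        super().__init__()
        # Learned initial state for the working-memory slots.
        self.mem_init = nn.Parameter(torch.randn(1, n_mem, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.n_mem = n_mem

    def forward(self, segments):
        """segments: list of (batch, seg_len, d_model) tensors, processed in order."""
        mem = self.mem_init.expand(segments[0].size(0), -1, -1)
        outputs = []
        for x in segments:
            # Memory slots and current tokens both attend over [memory, segment].
            h = torch.cat([mem, x], dim=1)
            n = self.norm1(h)
            a, _ = self.attn(n, n, n)
            h = h + a
            h = h + self.ff(self.norm2(h))
            # The updated memory slots are fed back for the next segment,
            # acting as a compressed working memory of everything seen so far.
            mem, out = h[:, :self.n_mem], h[:, self.n_mem:]
            outputs.append(out)
        return torch.cat(outputs, dim=1), mem

# Process a long sequence segment by segment with constant per-step memory.
block = FeedbackMemoryBlock()
long_seq = torch.randn(2, 4 * 64, 256)
y, final_mem = block(list(long_seq.split(64, dim=1)))
print(y.shape, final_mem.shape)  # (2, 256, 256) and (2, 8, 256)
</pre>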
  
 
==Extended Context==
 
 
=Updating Weights at Inference-time=
 
* 2025-01: [https://arxiv.org/abs/2501.06252 Transformer<sup>2</sup>: Self-adaptive LLMs]
 
==Parameters as Tokens==

* 2024-10: [https://arxiv.org/abs/2410.23168 TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters] ([https://github.com/Haiyang-W/TokenFormer code])
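
To make the "parameters as tokens" idea concrete, here is a rough, hedged sketch (assumptions mine; the actual TokenFormer uses a modified attention normalization rather than plain softmax): a dense projection is replaced by cross-attention from input tokens to learned key/value parameter tokens, so capacity can be grown by appending tokens instead of resizing weight matrices.

<pre>
# Illustrative sketch only (not the official TokenFormer implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParameterTokenLayer(nn.Module):
    def __init__(self, d_in, d_out, n_param_tokens=64):
        super().__init__()
        # Model weights stored as a set of key/value "parameter tokens".
        self.key_tokens = nn.Parameter(torch.randn(n_param_tokens, d_in) * 0.02)
        self.value_tokens = nn.Parameter(torch.randn(n_param_tokens, d_out) * 0.02)

    def forward(self, x):
        # x: (batch, seq, d_in). Input tokens act as queries over parameter tokens.
        scores = x @ self.key_tokens.t() / self.key_tokens.size(-1) ** 0.5
        weights = F.softmax(scores, dim=-1)     # (batch, seq, n_param_tokens)
        return weights @ self.value_tokens      # (batch, seq, d_out)

    def grow(self, extra_tokens):
        # Scale the layer by appending new parameter tokens, keeping the old ones.
        self.key_tokens = nn.Parameter(torch.cat(
            [self.key_tokens.data,
             torch.randn(extra_tokens, self.key_tokens.size(1)) * 0.02]))
        self.value_tokens = nn.Parameter(torch.cat(
            [self.value_tokens.data,
             torch.randn(extra_tokens, self.value_tokens.size(1)) * 0.02]))

layer = ParameterTokenLayer(d_in=128, d_out=128)
x = torch.randn(2, 16, 128)
print(layer(x).shape)   # torch.Size([2, 16, 128])
layer.grow(64)          # incrementally add capacity without reshaping matrices
print(layer(x).shape)   # torch.Size([2, 16, 128])
</pre>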
  
 
=Internal Thought Representation Space=
 
==Neural (non-token) Latent Representation==
 
* 2025-01: [https://arxiv.org/abs/2501.19201 Efficient Reasoning with Hidden Thinking] ([https://github.com/shawnricecake/Heima code])
 
* 2025-02: [https://huggingface.co/tomg-group-umd/huginn-0125 Huginn-0125]: [https://arxiv.org/abs/2502.05171 Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach] ([https://github.com/seal-rg/recurrent-pretraining code], [https://huggingface.co/tomg-group-umd/huginn-0125 model]); see the sketch after this list
 
* 2025-02: Meta: [https://arxiv.org/abs/2502.08524 LLM Pretraining with Continuous Concepts] (CoCoMix)
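
The recurrent-depth approach referenced above scales reasoning by iterating a core block in latent space rather than by emitting chain-of-thought tokens. A toy sketch of that idea follows (my own simplification, not the Huginn-0125 architecture; all names and sizes are assumptions).

<pre>
# Illustrative sketch of recurrent-depth latent reasoning: a small core block is
# iterated a variable number of times, so test-time compute can be scaled
# without emitting any intermediate "thinking" tokens.
import torch
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    def __init__(self, vocab=1000, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.core = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=1024,
                                               batch_first=True, norm_first=True)
        self.inject = nn.Linear(2 * d_model, d_model)  # re-inject the input each step
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens, n_iters=4):
        e = self.embed(tokens)
        s = torch.zeros_like(e)            # latent "thought" state
        for _ in range(n_iters):           # more iterations = more reasoning compute
            s = self.core(self.inject(torch.cat([s, e], dim=-1)))
        return self.head(s)

model = RecurrentDepthLM()
tokens = torch.randint(0, 1000, (2, 32))
logits_fast = model(tokens, n_iters=2)    # cheap answer
logits_slow = model(tokens, n_iters=16)   # spend more test-time compute
print(logits_fast.shape, logits_slow.shape)
</pre>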

=Diffusion Language Models=

* 2025-02: [https://arxiv.org/abs/2502.09992 Large Language Diffusion Models]

* 2025-02: [https://www.inceptionlabs.ai/ Inception Labs] [https://www.inceptionlabs.ai/news Mercury] model ([https://chat.inceptionlabs.ai/ online demo])
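
In contrast to autoregressive decoding, these diffusion language models generate text by iteratively denoising a masked sequence. Below is a toy sketch of such a sampling loop using an untrained stand-in denoiser; the schedule and re-masking rule are illustrative assumptions, not the papers' algorithms.

<pre>
# Illustrative masked-diffusion sampling: start from an all-[MASK] sequence and
# iteratively fill in tokens, re-masking the least confident positions, instead
# of decoding left-to-right.
import torch
import torch.nn as nn

VOCAB, MASK_ID, LEN, STEPS = 1000, 0, 16, 8

class TinyDenoiser(nn.Module):
    """Stand-in for a trained bidirectional denoiser that predicts all tokens at once."""
    def __init__(self, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        self.block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        return self.head(self.block(self.embed(tokens)))  # (batch, LEN, VOCAB)

@torch.no_grad()
def sample(model, steps=STEPS):
    x = torch.full((1, LEN), MASK_ID)                     # start fully masked
    for step in range(steps):
        probs = model(x).softmax(-1)
        conf, pred = probs.max(-1)                        # most likely token per slot
        x = pred
        # Re-mask the least confident positions; unmask more positions each step.
        n_keep = int(LEN * (step + 1) / steps)
        remask = conf.argsort(dim=-1)[:, : LEN - n_keep]
        x.scatter_(1, remask, MASK_ID)
    return x

print(sample(TinyDenoiser()))
</pre>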
  
 
=See Also=
 
* [[Increasing AI Intelligence]]
 
