AI research trends (GISAXS wiki)

==Context Length==
* 2025-02-19: [https://github.com/MoonshotAI/MoBA MoBA: Mixture of Block Attention for Long-Context LLMs]
* 2025-02-27: [https://arxiv.org/abs/2502.20082 LongRoPE2: Near-Lossless LLM Context Window Scaling] ([https://github.com/microsoft/LongRoPE code])
* [https://x.com/sundarpichai/status/1904579419496386736 2025-03-25]: [https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/ Gemini 2.5 Pro] [https://x.com/pvncher/status/1904685092053606715 1M]
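LongRoPE2 above extends the context window by rescaling rotary position embeddings (RoPE). As background, here is a minimal sketch of plain linear position interpolation, the simplest context-extension baseline; this is not LongRoPE2's near-lossless rescaling method, and the head dimension and window sizes below are illustrative assumptions:

```python
def rope_angles(pos, dim=8, base=10000.0, scale=1.0):
    """Rotary-embedding angles for one token position.

    `scale` < 1 compresses positions; this is linear position
    interpolation, the simplest way to stretch a context window
    beyond what the model saw in training.
    """
    inv_freq = [base ** (-2 * i / dim) for i in range(dim // 2)]
    return [pos * scale * f for f in inv_freq]

# Illustrative numbers: trained window 4096 tokens, target 16384.
# Linear interpolation rescales positions by 4096/16384 = 0.25, so
# far positions map back into the angle range seen during training.
orig_window, new_window = 4096, 16384
scale = orig_window / new_window

a_scaled = rope_angles(new_window - 1, scale=scale)

# The rescaled far position now falls inside the trained range.
print(a_scaled[0] < orig_window)  # True (4095.75 < 4096)
```

Naive interpolation like this degrades short-context quality, which is the gap that rescaling schemes such as LongRoPE target.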
  
 
==Extended Context==

==Retrieval beyond RAG==
 
See also: [[AI_tools#Retrieval_Augmented_Generation_.28RAG.29|AI tools: Retrieval Augmented Generation (RAG)]]
* 2024-10: Microsoft: [https://arxiv.org/abs/2410.10450 KBLaM: Knowledge Base augmented Language Model]
 
* 2024-12: [https://arxiv.org/abs/2412.11536 Let your LLM generate a few tokens and you will reduce the need for retrieval]
* 2024-12: [https://arxiv.org/abs/2412.11919 RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation]
* 2025-03: Microsoft: [https://www.microsoft.com/en-us/research/blog/introducing-kblam-bringing-plug-and-play-external-knowledge-to-llms/ Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs]
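The 2024-12 entry above argues that letting the LLM generate a few tokens first can reduce the need for retrieval. A toy sketch of that general idea, gating retrieval on how uncertain the model's early tokens look; the entropy criterion and threshold here are illustrative assumptions, not the paper's actual method:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def needs_retrieval(tok_dists, threshold=1.0):
    """Retrieve only when the model seems unsure of its own answer.

    `tok_dists`: next-token probability distributions for the first
    few generated tokens. High average entropy (flat distributions)
    suggests the model is guessing, so fall back to retrieval.
    """
    avg = sum(entropy(d) for d in tok_dists) / len(tok_dists)
    return avg > threshold

confident = [[0.97, 0.01, 0.01, 0.01]] * 3  # peaked: model knows this
unsure = [[0.25, 0.25, 0.25, 0.25]] * 3     # flat: model is guessing

print(needs_retrieval(confident))  # False
print(needs_retrieval(unsure))     # True
```

The appeal of such gating is that retrieval cost is paid only on the queries where parametric knowledge is insufficient.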
  
 
==Working Memory==

Revision as of 10:42, 26 March 2025