AI research trends

== Novel Tokenization and/or Sampling ==

== System 2 Reasoning ==

See: Increasing AI Intelligence

== Memory ==

=== LLM Weights Memory ===

=== Context Length ===
 
* [https://x.com/Alibaba_Qwen/status/1883557964759654608 2025-01-27]: [https://qwenlm.github.io/blog/qwen2.5-1m/ Qwen2.5-1M] ([https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf report])
 
* 2025-02-14: [https://arxiv.org/abs/2502.08910 InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU] 3M
 
* 2025-02-18: [https://arxiv.org/abs/2502.12962 Infinite Retrieval: Attention Enhanced LLMs in Long-Context Processing]
 
* 2025-02-19: [https://github.com/MoonshotAI/MoBA MoBA: Mixture of Block Attention for Long-Context LLMs] (see the sketch after this list)
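
Several of the entries above, MoBA in particular, rest on routing each query to a small set of key/value blocks instead of attending over the full context. The sketch below illustrates that pattern only; it is not MoBA's implementation, and the mean-pooled block summaries, top-k routing rule, and omission of causal masking are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def block_sparse_attention(q, k, v, block_size=64, top_k=2):
    """Each query attends only to the top_k key/value blocks whose
    mean-pooled key summary scores highest against it (one way to read
    'mixture of block attention'); causal masking is omitted."""
    n, d = k.shape
    n_blocks = n // block_size
    k_blocks = k[:n_blocks * block_size].reshape(n_blocks, block_size, d)
    v_blocks = v[:n_blocks * block_size].reshape(n_blocks, block_size, d)
    summaries = k_blocks.mean(axis=1)              # (n_blocks, d) block-level keys
    router = q @ summaries.T                       # (m, n_blocks) routing scores
    chosen = np.argsort(-router, axis=-1)[:, :top_k]
    out = np.empty_like(q)
    for i, qi in enumerate(q):
        ks = k_blocks[chosen[i]].reshape(-1, d)    # gather the selected blocks
        vs = v_blocks[chosen[i]].reshape(-1, d)
        w = softmax(qi @ ks.T / np.sqrt(d))        # dense attention inside them
        out[i] = w @ vs
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 32))
k = rng.standard_normal((1024, 32))
v = rng.standard_normal((1024, 32))
print(block_sparse_attention(q, k, v).shape)       # (4, 32)
</syntaxhighlight>

The point of the pattern is that per-query compute scales with top_k * block_size rather than with the full context length.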
 

==== Extended Context ====

=== Retrieval beyond RAG ===

See also: AI tools: Retrieval Augmented Generation (RAG)
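
For contrast, the classic RAG loop that "beyond RAG" methods aim to move past fits in a few lines. The sketch below is only a baseline illustration: the hash-bucket embed() is a stand-in for a real embedding model, and the corpus, query, and all names are toy assumptions.

<syntaxhighlight lang="python">
import numpy as np

def embed(text, dim=64):
    """Toy embedding: count words into hash buckets, then L2-normalize.
    A stand-in for a real embedding model (salted str hashing means the
    vectors vary across Python runs)."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

corpus = [
    "MoBA uses a mixture of block attention for long contexts",
    "InfiniteHiP extends context to 3 million tokens on one GPU",
    "Qwen2.5-1M supports million-token context windows",
]
doc_vecs = np.stack([embed(d) for d in corpus])    # pre-indexed documents

query = "single GPU long context"
scores = doc_vecs @ embed(query)                   # cosine similarity (unit vectors)
retrieved = corpus[int(np.argmax(scores))]         # passage to prepend to the prompt
print(retrieved)
</syntaxhighlight>

The papers in this area replace this external nearest-neighbor step with retrieval mechanisms inside the model's own attention.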

=== Working Memory ===

=== Episodic Memory ===

== Updating Weights at Inference-time ==
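
One concrete reading of this heading is test-time training: take a few gradient steps on a self-supervised loss over the test input itself before predicting. The sketch below shows only that reading, with a linear map and a reconstruction loss standing in for an LLM and its training objective.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1      # "pretrained" weights (toy)
x = rng.standard_normal(8)                 # the test-time input

def adapt(W, x, steps=5):
    """Gradient-descend ||W x - x||^2, a toy self-supervised objective,
    using only the test input; returns temporarily updated weights."""
    lr = 0.25 / float(x @ x)               # step size chosen to be stable here
    for _ in range(steps):
        residual = W @ x - x
        W = W - lr * 2.0 * np.outer(residual, x)   # exact gradient of the loss
    return W

print("loss before:", float(np.sum((W @ x - x) ** 2)))
print("loss after :", float(np.sum((adapt(W, x) @ x - x) ** 2)))
</syntaxhighlight>

In a real system the updated weights would serve one query or session and then be discarded or merged, which is one of the design questions tracked here.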

=== Parameters as Tokens ===
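
One published shape this idea takes is a layer whose weight matrix is replaced by learnable "parameter tokens" that the input attends over (cf. TokenFormer's Pattention), so capacity can grow by appending tokens rather than resizing matrices. The sketch below is an illustrative reduction of that pattern, not the paper's code; all dimensions and names are assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d, n_param_tokens = 16, 32
P_key = rng.standard_normal((n_param_tokens, d))   # learnable parameter keys
P_val = rng.standard_normal((n_param_tokens, d))   # learnable parameter values

def param_attention(x):
    """The input attends over parameter tokens instead of being
    multiplied by a fixed dense weight matrix."""
    scores = P_key @ x / np.sqrt(d)                # (n_param_tokens,)
    w = np.exp(scores - scores.max())
    w /= w.sum()                                   # softmax over parameter tokens
    return w @ P_val                               # (d,) layer output

x = rng.standard_normal(d)
print(param_attention(x).shape)                    # (16,)
</syntaxhighlight>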

== Internal Thought Representation Space ==

=== Visual Thinking ===

=== Neural (non-token) Latent Representation ===
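
A representative proposal under this heading feeds the model's final hidden state back in as the next input embedding instead of decoding a token, so intermediate "thoughts" never pass through the vocabulary (as in chain-of-continuous-thought work). A shape-level sketch, with a random map standing in for the transformer:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 16, 100
W_step = rng.standard_normal((d, d)) / np.sqrt(d)  # stand-in for the transformer
W_out = rng.standard_normal((vocab, d))            # output projection

h = rng.standard_normal(d)         # hidden state after reading the prompt
for _ in range(4):                 # four latent "thought" steps, no tokens emitted
    h = np.tanh(W_step @ h)        # hidden state re-enters as the next input
logits = W_out @ h                 # only now project into token space
print(int(np.argmax(logits)))      # first token actually emitted
</syntaxhighlight>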

== Diffusion Language Models ==
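
Most current text diffusion models use absorbing-state (masked) discrete diffusion: generation starts from an all-MASK sequence and the reverse process iteratively fills positions in, rather than decoding left to right. The toy sketch below shows only that control flow; the random sampler stands in for a trained denoiser, which would also rank positions by confidence.

<syntaxhighlight lang="python">
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]
MASK = "<mask>"

def denoise_step(tokens):
    """Fill one still-masked position (a trained model would sample from
    p(token | all currently visible tokens) and pick confident slots first)."""
    masked = [i for i, t in enumerate(tokens) if t == MASK]
    i = random.choice(masked)
    tokens[i] = random.choice(VOCAB)
    return tokens

seq = [MASK] * 6                   # x_T: the fully masked sequence
while MASK in seq:                 # reverse process, T steps down to 0
    seq = denoise_step(seq)
    print(" ".join(seq))
</syntaxhighlight>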

== See Also ==