Difference between revisions of "AI research trends"

From GISAXS
==Neural (non-token) Latent Representation==
 
* 2025-01: [https://arxiv.org/abs/2501.19201 Efficient Reasoning with Hidden Thinking] ([https://github.com/shawnricecake/Heima code])
* 2025-02: [https://huggingface.co/tomg-group-umd/huginn-0125 Huginn-0125]: [https://arxiv.org/abs/2502.05171 Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach] ([https://github.com/seal-rg/recurrent-pretraining code], [https://huggingface.co/tomg-group-umd/huginn-0125 model])
* 2025-02: Meta: [https://arxiv.org/abs/2502.08524 LLM Pretraining with Continuous Concepts]
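The Huginn entry above scales test-time compute by iterating a block in latent space rather than emitting chain-of-thought tokens. A minimal toy sketch of that idea follows; it uses random untrained weights and a hypothetical latent width, and is only an illustration of the recurrence pattern, not the actual Huginn-0125 architecture:

```python
# Toy sketch of recurrent-depth latent reasoning (assumption: illustrative
# only; random weights stand in for a trained prelude / core / coda).
# Instead of emitting intermediate reasoning tokens, a small "core" block is
# iterated r times in latent space, so more compute (larger r) can be spent
# at test time without producing a longer output.
import numpy as np

rng = np.random.default_rng(0)
D = 16  # latent width (hypothetical)

W_in = rng.normal(scale=0.1, size=(D, D))        # prelude: embed the input
W_core = rng.normal(scale=0.1, size=(2 * D, D))  # recurrent core block
W_out = rng.normal(scale=0.1, size=(D, D))       # coda: decode the latent

def forward(x, r):
    """Embed once, iterate the core r times in latent space, then decode."""
    e = np.tanh(x @ W_in)          # input embedding (computed once)
    s = np.zeros_like(e)           # latent "thought" state
    for _ in range(r):             # r = test-time compute knob
        s = np.tanh(np.concatenate([s, e], axis=-1) @ W_core)
    return s @ W_out

x = rng.normal(size=(1, D))
y_shallow = forward(x, r=1)   # little latent "thinking"
y_deep = forward(x, r=32)     # more latent "thinking", same output shape
print(y_shallow.shape, y_deep.shape)
```

Note the design point this illustrates: the output interface is unchanged regardless of `r`, so the depth of latent reasoning can be chosen per query at inference time.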
  
 
=See Also=

* [[Increasing AI Intelligence]]

Revision as of 10:38, 13 February 2025

=Novel Tokenization and/or Sampling=

=System 2 Reasoning=

See: [[Increasing AI Intelligence]]

=Memory=

==LLM Weights Memory==

==Context Length==

==Extended Context==

==Retrieval beyond RAG==

See also: AI tools: Retrieval Augmented Generation (RAG)

==Working Memory==

==Episodic Memory==

==Updating Weights at Inference-time==

=Internal Thought Representation Space=

==Visual Thinking==

==Neural (non-token) Latent Representation==

=See Also=