Increasing AI Intelligence

====Reinforcement Learning====
* 2025-04: [https://arxiv.org/abs/2504.20571 Reinforcement Learning for Reasoning in Large Language Models with One Training Example]
* 2025-05: [https://arxiv.org/abs/2505.03335 Absolute Zero: Reinforced Self-play Reasoning with Zero Data]
* 2025-09: [https://www.nature.com/articles/s41586-025-09422-z DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning]
* 2025-09: [https://github.com/NVlabs/RLP/blob/main/pdf/RLP_Reinforcement_as_a_Pretraining_Objective.pdf RLP: Reinforcement Learning Pre-training] (Nvidia)
 
====Optimize Confidence/Entropy====

====Exceed humans, using human-level data====
 
* 2025-03: [https://tecunningham.github.io/posts/2023-09-05-model-of-ai-imitation.html An AI Which Imitates Humans Can Beat Humans]
* 2025-08: [https://arxiv.org/abs/2508.17669 A Taxonomy of Transcendence]

====Self-play====

* 2025-09: [https://arxiv.org/abs/2509.07414 Language Self-Play For Data-Free Training]
 
===Training Data (Data Refinement, Synthetic Data)===

===Multi-LLM (multiple comparisons, branching, etc.)===
 
* 2024-11: [https://arxiv.org/abs/2411.10440 LLaVA-o1: Let Vision Language Models Reason Step-by-Step] ([https://github.com/PKU-YuanGroup/LLaVA-o1 code])
* 2025-04: [https://arxiv.org/abs/2504.07081 Self-Steering Language Models]: a Planner generates a program, and Followers accomplish the sub-tasks (see the sketch after this list)
* 2025-09: [https://arxiv.org/abs/2508.21184 BED-LLM: Intelligent Information Gathering with LLMs and Bayesian Experimental Design]
* 2025-09: [https://arxiv.org/abs/2509.03918 MTQA: Matrix of Thought for Enhanced Reasoning in Complex Question Answering]
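The Planner/Follower split noted for Self-Steering Language Models lends itself to a simple orchestration loop: one model decomposes a task, cheaper models execute the pieces, and the results are recombined. The sketch below is only a hedged illustration of that generic pattern, not the paper's implementation; call_llm is a hypothetical stand-in for any LLM client, and the numbered-list plan format is an assumption made for the example.

<syntaxhighlight lang="python">
# Hypothetical planner/follower decomposition, in the spirit of Self-Steering
# Language Models. `call_llm` is a placeholder for a real LLM client; here it
# is stubbed so the script runs standalone.

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-completion call; replace with a real client."""
    return "1. Restate the problem\n2. Solve each part\n3. Combine the results"

def plan(task: str) -> list[str]:
    """Planner: ask one model to decompose the task into numbered sub-tasks."""
    response = call_llm(f"Break the following task into numbered sub-tasks:\n{task}")
    return [line.split(".", 1)[1].strip()
            for line in response.splitlines() if "." in line]

def follow(task: str, sub_task: str, context: str) -> str:
    """Follower: a (possibly smaller/cheaper) model handles one sub-task."""
    return call_llm(f"Overall task: {task}\nWork so far: {context}\nNow do: {sub_task}")

def solve(task: str) -> str:
    """Run the plan, accumulate intermediate results, then merge them."""
    context = ""
    for sub_task in plan(task):
        context += f"\n[{sub_task}] -> {follow(task, sub_task, context)}"
    return call_llm(f"Combine these partial results into a final answer:{context}")

if __name__ == "__main__":
    print(solve("Estimate how many piano tuners work in Chicago."))
</syntaxhighlight>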
  
 
===Iteration (e.g. neural-like layered blocks)===

===Monte Carlo Tree Search (MCTS)===
 
* 2024-10: [https://arxiv.org/abs/2410.01707 Interpretable Contrastive Monte Carlo Tree Search Reasoning]
* 2024-12: [https://arxiv.org/abs/2412.18319 Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search]

===Pathfinding===

* 2024-08: [https://arxiv.org/abs/2408.08152 DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search]
* 2025-06: [https://arxiv.org/abs/2506.01939 Beyond the 80/20 Rule: High-Entropy Minority Tokens Drive Effective Reinforcement Learning for LLM Reasoning]
* 2025-09: [https://arxiv.org/abs/2509.09284 Tree-OPO: Off-policy Monte Carlo Tree-Guided Advantage Optimization for Multistep Reasoning]
* 2025-09: [https://arxiv.org/abs/2509.06160v1 Reverse-Engineered Reasoning for Open-Ended Generation]
 
===Other Search===

===Chain-of-Thought Reasoning===
 
* 2025-03: [https://arxiv.org/abs/2503.23513 RARE: Retrieval-Augmented Reasoning Modeling]
* 2025-07: [https://arxiv.org/abs/2501.18858 BRiTE: Bootstrapping Reinforced Thinking Process to Enhance Language Model Reasoning]
* 2025-09: [https://arxiv.org/abs/2509.13351 Teaching LLMs to Plan: Logical Chain-of-Thought Instruction Tuning for Symbolic Planning]
 
===Inner Monologue===
 
=Interact with Environment=
 
* 2025-01: [https://arxiv.org/abs/2501.10893 Learn-by-interact: A Data-Centric Framework for Self-Adaptive Agents in Realistic Environments]
* 2025-09: [https://arxiv.org/abs/2509.24527 Training Agents Inside of Scalable World Models]
 
=Memory=

=Multi-agent Effort (and Emergent Intelligence)=
 
* 2025-01: [https://arxiv.org/abs/2501.13946 Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks]
* 2025-02: [https://arxiv.org/abs/2502.16111 PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving]
* 2025-09: [https://arxiv.org/abs/2509.15172 Internalizing Self-Consistency in Language Models: Multi-Agent Consensus Alignment]
 
==Competition==

Contents

Reviews

World Model

Prompt Engineering

Thought Templates

Automatic Prompt Optimization

Fine Tuning

Proactive Search

Compute expended after training, but before inference.

Reinforcement Learning

Optimize Confidence/Entropy

Exceed humans, using human-level data

Self-play

Training Data (Data Refinement, Synthetic Data)

Re-captioning

Pre-generate material

Generate consistent plans/thoughts

  • 2024-08: Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers (code)
    • (Microsoft) rStar is a self-play mutual reasoning approach: a small model augments MCTS using a set of defined reasoning heuristics, and mutually consistent trajectories are emphasized.
  • 2024-09: Self-Harmonized Chain of Thought
    • Produces refined chain-of-thought style solutions/prompts for diverse problems. Given a large set of problems/questions, first aggregate them semantically, then apply zero-shot chain-of-thought to each problem. Then cross-pollinate between proposed solutions to similar problems, looking for refined and generalized solutions (a rough sketch of this workflow appears after this list).
  • 2024-11: LLMs Do Not Think Step-by-step In Implicit Reasoning
    • They argue that models trained to reproduce CoT outputs do not, internally, perform stepwise reasoning (with intermediate representations); this suggests that explicit CoT could be superior to implicit CoT.
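The aggregate-then-cross-pollinate workflow described for Self-Harmonized Chain of Thought can be sketched roughly as follows. This is a hedged illustration under stated assumptions, not the paper's code: call_llm and embed are hypothetical stand-ins for a chat model and an embedding model, and the nearest-centroid grouping is a stand-in for proper semantic clustering.

<syntaxhighlight lang="python">
# Rough sketch of a Self-Harmonized CoT-style workflow: group similar questions,
# generate zero-shot chain-of-thought answers, then refine each answer using the
# solutions of its semantic neighbors. `call_llm` and `embed` are placeholders.
import numpy as np

def call_llm(prompt: str) -> str:
    return "Let's think step by step... (stubbed answer)"  # replace with a real client

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % 2**32)   # stub embedding
    return rng.normal(size=32)

def cluster(questions: list[str], k: int) -> list[list[str]]:
    """Greedy nearest-centroid grouping; a real system would use k-means or similar."""
    vecs = [embed(q) for q in questions]
    centroids = vecs[:k]
    groups: list[list[str]] = [[] for _ in range(len(centroids))]
    for q, v in zip(questions, vecs):
        idx = int(np.argmin([np.linalg.norm(v - c) for c in centroids]))
        groups[idx].append(q)
    return groups

def self_harmonize(questions: list[str], k: int = 3) -> dict[str, str]:
    answers: dict[str, str] = {}
    for group in cluster(questions, k):
        # Step 1: zero-shot CoT for every question in the cluster.
        for q in group:
            answers[q] = call_llm(f"{q}\nLet's think step by step.")
        # Step 2: cross-pollinate -- refine each answer using neighbors' solutions.
        for q in group:
            neighbors = "\n\n".join(answers[p] for p in group if p != q)
            answers[q] = call_llm(
                f"Question: {q}\nDraft solution: {answers[q]}\n"
                f"Solutions to similar questions:\n{neighbors}\n"
                "Rewrite the draft as a cleaner, more general step-by-step solution.")
    return answers
</syntaxhighlight>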

Sampling

Automated prompt generation

Distill inference-time-compute into model

CoT reasoning model

See also: AI tools > LLM > Open-weights LLM > Reasoning

Scaling

Inference Time Compute

Methods

Review

In-context learning (ICL), search, and other inference-time methods

Inference-time Sampling

Inference-time Gradient/Updating/RL/etc.

Self-prompting

Retrieval or Memory

In-context thought

Naive multi-LLM (verification, self-critique, majority voting, best-of-N, etc.)

Multi-LLM (multiple comparisons, branching, etc.)

Iteration (e.g. neural-like layered blocks)

Iterative reasoning via graphs

Monte Carlo Tree Search (MCTS)

Pathfinding

Other Search

Chain-of-Thought Reasoning

Inner Monologue

Model Merging

Meta-methods

Analysis

Scaling

(Optimal) Usage of Reasoning Compute

Usage of Training Data

  • 2025-02: LIMO: Less is More for Reasoning (surprisingly easy generalization from very few reasoning training examples; a model can go from knowledge retrieval to diverse reasoning using curated examples)

Theory

Expending compute works

[Image: Compute.png]

Pragmatics

Code for Inference-time Compute

  • optillm: Inference proxy that implements state-of-the-art inference-time techniques to improve the accuracy and performance of LLMs (improving reasoning on coding, logical, and mathematical queries); a usage sketch follows.
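A common way to use an inference proxy like optillm is to point an OpenAI-compatible client at the locally running server and select the technique per request. The snippet below is a hedged sketch, not taken from this wiki: the local port (8000) and the convention of prefixing the technique name onto the model string (e.g. "moa-" for mixture-of-agents) are assumptions that should be verified against the project's README.

<syntaxhighlight lang="python">
# Hypothetical usage of an inference proxy such as optillm: the proxy exposes an
# OpenAI-compatible endpoint, so the standard client works once base_url points
# at it. Port and model-prefix convention are assumptions; check optillm's docs.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                      # forwarded to the underlying provider
    base_url="http://localhost:8000/v1",   # the locally running proxy
)

response = client.chat.completions.create(
    # The technique prefix (here assumed "moa-" for mixture-of-agents) selects
    # the inference-time strategy the proxy applies before returning an answer.
    model="moa-gpt-4o-mini",
    messages=[{"role": "user", "content": "Which is larger: 9.11 or 9.9?"}],
)
print(response.choices[0].message.content)
</syntaxhighlight>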

Interact with Environment

Memory

Tool Use

Integrated

Multi-agent Effort (and Emergent Intelligence)

Competition

ML-like Optimization of LLM Setup

Self-modification

Limitations/Requirements

Creativity

See: AI creativity

See Also