Increasing AI Intelligence



Reviews

World Model

Prompt Engineering

Thought Templates

Automatic Prompt Optimization

Fine Tuning

Proactive Search

Compute expended after training, but before inference.

Reinforcement Learning

  • 2025-04: Reinforcement Learning for Reasoning in Large Language Models with One Training Example (https://arxiv.org/abs/2504.20571)
  • 2025-05: Absolute Zero: Reinforced Self-play Reasoning with Zero Data (https://arxiv.org/abs/2505.03335)

Optimize Confidence

  • Cf. 2025-02: Confidence Improves Self-Consistency in LLMs (https://arxiv.org/abs/2502.06233)
  • 2025-05: Learning to Reason without External Rewards (https://arxiv.org/abs/2505.19590; code: https://github.com/sunblaze-ucb/Intuitor; announcement: https://x.com/xuandongzhao/status/1927270931874910259): Reinforcement Learning from Internal Feedback (RLIF).
  • 2025-05: Maximizing Confidence Alone Improves Reasoning (https://rent-rl.github.io/; code: https://github.com/satrams/rent-rl; announcement: https://x.com/mihirp98/status/1927767453490172277); a.k.a. RENT: Reinforcement Learning via Entropy Minimization.
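Both 2025-05 entries replace the external reward signal with the model's own confidence in its sampled output. A minimal sketch of the core reward computation, in the spirit of RENT/RLIF but with illustrative function and tensor names not taken from either paper's code, assuming access to the policy's logits for a sampled sequence:

```python
import torch
import torch.nn.functional as F

def confidence_reward(logits: torch.Tensor, response_mask: torch.Tensor) -> torch.Tensor:
    """Reward = negative mean token entropy over the generated response.

    logits:        (seq_len, vocab_size) policy logits at each generated position.
    response_mask: (seq_len,) 1.0 where the token belongs to the model's response,
                   0.0 for prompt/padding tokens.
    """
    log_probs = F.log_softmax(logits, dim=-1)                    # (seq_len, vocab_size)
    token_entropy = -(log_probs.exp() * log_probs).sum(dim=-1)   # (seq_len,)
    mean_entropy = (token_entropy * response_mask).sum() / response_mask.sum().clamp(min=1.0)
    return -mean_entropy  # lower entropy (higher confidence) => higher reward

# Toy usage: random logits for a 10-token sequence whose last 6 tokens are the response.
logits = torch.randn(10, 32000)
mask = torch.tensor([0.0] * 4 + [1.0] * 6)
print(confidence_reward(logits, mask))
```

The returned scalar can stand in for the external reward in an ordinary policy-gradient loop (e.g. PPO or GRPO); the rest of the RL setup is unchanged.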

Exceed humans, using human-level data

Training Data (Data Refinement, Synthetic Data)

Re-captioning

Pre-generate material

Generate consistent plans/thoughts

  • 2024-08: Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers (code)
    • (Microsoft) rStar is a self-play mutual reasoning approach: a small model augments MCTS with a set of defined reasoning heuristics, and mutually consistent reasoning trajectories are emphasized (a minimal consistency-voting sketch follows this list).
  • 2024-09: Self-Harmonized Chain of Thought
    • Produces refined chain-of-thought solutions/prompts for diverse problems. Given a large set of problems/questions, first aggregate them semantically, then apply zero-shot chain-of-thought to each problem. Then cross-pollinate between proposed solutions to similar problems, seeking refined and generalized solutions.
  • 2024-11: LLMs Do Not Think Step-by-step In Implicit Reasoning
    • They argue that models trained to reproduce CoT outputs do not, internally, perform stepwise reasoning (with intermediate representations); this suggests that explicit CoT could be superior to implicit CoT.
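A minimal sketch of the consistency-voting idea these methods build on: sample several independent reasoning rollouts and keep the answer most of them agree on. This is plain self-consistency voting, a simplified stand-in for rStar's mutual-consistency check; the sampler below is a hypothetical placeholder for a temperature > 0 LLM call.

```python
from collections import Counter
from typing import Callable

def consistent_answer(question: str,
                      sample: Callable[[str], str],
                      n_samples: int = 16) -> str:
    """Pick the final answer that the most independent rollouts agree on.

    `sample` runs one stochastic reasoning rollout and returns its final answer.
    """
    answers = [sample(question).strip() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    import random
    # Hypothetical stand-in for an LLM reasoning call:
    toy_sampler = lambda q: random.choice(["42", "42", "41"])
    print(consistent_answer("What is 6 * 7?", toy_sampler))
```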

Sampling

Automated prompt generation

Distill inference-time-compute into model

CoT reasoning model

See also: AI tools > LLM > Open-weights LLM > Reasoning

Scaling

Inference Time Compute

Methods

Review

In-context learning (ICL), search, and other inference-time methods

Inference-time Sampling

Inference-time Gradient/Updating/RL/etc.

Self-prompting

Retrieval or Memory

In-context thought

Naive multi-LLM (verification, self-critique, majority voting, best-of-N, etc.)

Multi-LLM (multiple comparisons, branching, etc.)

Iteration (e.g. neural-like layered blocks)

Iterative reasoning via graphs

Monte Carlo Tree Search (MCTS)

Other Search

Chain-of-Thought Reasoning

Model Merging

Meta-methods

Analysis

Scaling

(Optimal) Usage of Reasoning Compute

Usage of Training Data

  • 2025-02: LIMO: Less is More for Reasoning (surprising generalization from very few reasoning training examples; a model can move from knowledge retrieval to diverse reasoning using a small set of curated examples)
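A minimal sketch of the LIMO-style recipe, assuming nothing beyond standard supervised fine-tuning on a handful of curated reasoning traces; the model name, data, and hyperparameters below are illustrative placeholders, not the paper's:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; LIMO fine-tunes a much larger base model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny illustrative stand-ins for curated reasoning traces (not LIMO's data):
curated = [
    ("Q: What is 12 * 13? Think step by step.",
     "12 * 13 = 12 * 10 + 12 * 3 = 120 + 36 = 156. Answer: 156"),
    ("Q: Is 91 prime? Think step by step.",
     "91 = 7 * 13, so 91 is not prime. Answer: no"),
]

opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):
    for prompt, solution in curated:
        text = prompt + "\n" + solution + tok.eos_token
        ids = tok(text, return_tensors="pt").input_ids
        loss = model(input_ids=ids, labels=ids).loss  # standard causal-LM loss
        loss.backward()
        opt.step()
        opt.zero_grad()
```

In practice one would mask the prompt tokens out of the loss and train a far larger base model; the point is only that the dataset can be tiny if the examples are carefully curated.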

Theory

Expending compute works

[Figure: Compute.png]

Pragmatics

Code for Inference-time Compute

  • optillm: an inference proxy that implements state-of-the-art inference-time techniques to improve the accuracy and performance of LLMs (improving reasoning for coding, logical, and mathematical queries)
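A usage sketch, assuming optillm's documented pattern of an OpenAI-compatible proxy that selects the inference-time technique via a model-name prefix; the port, prefix, and model slug below should be checked against the optillm README for your version:

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                     # your upstream provider key
    base_url="http://localhost:8000/v1",  # optillm proxy endpoint (assumed default)
)

response = client.chat.completions.create(
    # The "moa-" prefix asks optillm to apply mixture-of-agents on top of the
    # underlying model; other prefixes select other techniques.
    model="moa-gpt-4o-mini",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)
print(response.choices[0].message.content)
```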

Interact with Environment

Memory

Tool Use

Integrated

Multi-agent Effort (and Emergent Intelligence)

ML-like Optimization of LLM Setup

Limitations/Requirements

Creativity

See: AI creativity

See Also