Increasing AI Intelligence

Reviews

Prompt Engineering

Thought Templates

Automatic Prompt Optimization

  • 2025-02: A Systematic Survey of Automatic Prompt Optimization Techniques (https://arxiv.org/abs/2502.16923)
  • 2025-02: Automatic Prompt Optimization via Heuristic Search: A Survey (https://arxiv.org/abs/2502.18746)

Automatic Optimization

Analogous to Gradient Descent

  • 2024-06: TextGrad: Automatic "Differentiation" via Text (https://arxiv.org/abs/2406.07496)
  • 2024-06: Symbolic Learning Enables Self-Evolving Agents (https://arxiv.org/abs/2406.18532)
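
The following is a minimal sketch of the textual-gradient idea that TextGrad popularized: natural-language feedback plays the role of a gradient on a textual parameter such as a prompt. This illustrates the principle only and is not the TextGrad library's actual API; llm is a hypothetical completion function you would replace with a real client.

  # Minimal sketch of "textual gradient descent" over a prompt (an
  # illustration of the idea, not the TextGrad library's real API).
  # llm(prompt) -> str is a hypothetical stand-in for any LLM client.

  def llm(prompt: str) -> str:
      raise NotImplementedError("replace with a real LLM call")

  def textual_gradient_step(system_prompt: str, example: str, target: str) -> str:
      """One step: critique the output (the 'gradient'), then edit the
      prompt (the 'parameter') in the direction of that critique."""
      output = llm(f"{system_prompt}\n\nInput: {example}")
      # "Backward pass": natural-language feedback stands in for a gradient.
      feedback = llm(
          "Critique this output against the target; be specific and concise.\n"
          f"Output: {output}\nTarget: {target}"
      )
      # "Update": rewrite the prompt to incorporate the feedback.
      return llm(
          "Rewrite the system prompt to address the critique. "
          "Return only the new prompt.\n"
          f"Prompt: {system_prompt}\nCritique: {feedback}"
      )

  # Example (requires a real llm() implementation):
  #   prompt = "Answer the question step by step."
  #   for _ in range(3):
  #       prompt = textual_gradient_step(prompt, "What is 17 * 24?", "408")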

Fine Tuning

Proactive Search

Compute expended after training, but before inference.

Training Data (Data Refinement, Synthetic Data)

Generate consistent plans/thoughts

  • 2024-08: Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers (code)
    • (Microsoft) rStar is a self-play mutual-reasoning approach: a small model augments MCTS using defined reasoning heuristics, and mutually consistent trajectories are emphasized (see the sketch after this list).
  • 2024-09: Self-Harmonized Chain of Thought
    • Produces refined chain-of-thought solutions/prompts for diverse problems. Given a large set of problems/questions, first aggregate them semantically, then apply zero-shot chain-of-thought to each problem, and finally cross-pollinate between proposed solutions to similar problems to obtain refined and generalized solutions.
  • 2024-11: LLMs Do Not Think Step-by-step In Implicit Reasoning
    • They argue that models trained to reproduce CoT outputs do not internally perform stepwise reasoning (with intermediate representations); this suggests that explicit CoT may be superior to implicit CoT.
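
As referenced in the rStar item above, here is a minimal sketch of the mutual-consistency selection idea, under stated assumptions: gen_a and gen_b are hypothetical callables wrapping two different models, and agreement between their sampled answers serves as the selection signal. This illustrates the principle only; the paper's actual method runs MCTS over reasoning actions.

  # Mutual consistency (illustrative sketch, not the rStar code): sample
  # answers from two models and keep the answer they most often agree on.
  from collections import Counter

  def mutually_consistent_answer(gen_a, gen_b, question: str, n: int = 8):
      """gen_a, gen_b: hypothetical callables, question -> answer string.
      Returns model A's most frequent answer that model B also reaches,
      or None if the two models never agree."""
      answers_a = [gen_a(question) for _ in range(n)]
      answers_b = {gen_b(question) for _ in range(n)}
      agreed = [a for a in answers_a if a in answers_b]
      return Counter(agreed).most_common(1)[0][0] if agreed else None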

Sampling

Automated prompt generation

Distill inference-time compute into the model (see the sketch below)

CoT reasoning model

See also: AI tools > LLM > Open-weights LLM > Reasoning
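
A common recipe for the distillation named above is to spend compute once, offline: oversample chain-of-thought traces, keep only those that reach verified answers, and fine-tune the model on the survivors. Below is a minimal sketch under stated assumptions; sample_cot and is_correct are hypothetical helpers, and the JSONL layout is a generic fine-tuning format rather than any specific vendor's.

  # Sketch of distilling inference-time compute into training data:
  # oversample CoT traces, filter by answer correctness, save for SFT.
  import json

  def distill(problems, sample_cot, is_correct, k: int = 32,
              out_path: str = "distilled_cot.jsonl"):
      """sample_cot(q) -> (trace, answer); is_correct(q, answer) -> bool.
      Both are hypothetical helpers standing in for a sampler/grader."""
      with open(out_path, "w") as f:
          for q in problems:
              for _ in range(k):  # expensive sampling, done once, offline
                  trace, answer = sample_cot(q)
                  if is_correct(q, answer):  # keep only verified traces
                      f.write(json.dumps({"prompt": q,
                                          "completion": trace}) + "\n")
                      break  # one verified trace per problem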

Scaling

Inference Time Compute

Methods

Review

In-context learning (ICL), search, and other inference-time methods

Inference-time Sampling

Inference-time Gradient

Self-prompting

Retrieval or Memory

In-context thought

Naive multi-LLM (verification, majority voting, best-of-N, etc.)
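
These naive aggregation schemes reduce to a few lines. A minimal sketch under stated assumptions: generate is a hypothetical sampler and verifier an optional hypothetical scorer; with a verifier this is best-of-N, and without one it falls back to majority voting (self-consistency).

  # Naive multi-LLM aggregation: sample N candidates, then pick either
  # the highest-scoring one (best-of-N) or the most common one (voting).
  from collections import Counter

  def best_of_n(generate, question: str, n: int = 16, verifier=None):
      """generate: hypothetical callable, question -> answer.
      verifier: optional hypothetical (question, answer) -> float score."""
      candidates = [generate(question) for _ in range(n)]
      if verifier is not None:  # best-of-N on verifier score
          return max(candidates, key=lambda a: verifier(question, a))
      return Counter(candidates).most_common(1)[0][0]  # majority vote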

Multi-LLM (multiple comparisons, branching, etc.)

Iteration (e.g. neural-like layered blocks)

Iterative reasoning via graphs

Monte Carlo Tree Search (MCTS)

Other Search

Chain-of-Thought Reasoning

Meta-methods

Analysis

Scaling

Usage of Reasoning Compute

Usage of Training Data

  • 2025-02: LIMO: Less is More for Reasoning (surprisingly strong generalization from very few reasoning training examples; a model can shift from knowledge retrieval to diverse reasoning using a small curated set of examples)

Theory

Expending compute works

[Image: Compute.png]

Pragmatics

Code for Inference-time Compute

  • optillm: an inference proxy that implements state-of-the-art techniques to improve the accuracy and performance of LLMs (improving reasoning for coding, logical, and mathematical queries)
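
Since optillm exposes an OpenAI-compatible endpoint, an existing client can simply be pointed at the proxy, with the optimization technique selected via a prefix on the model name. The port and the "moa-" slug below follow the project's README at the time of writing and should be treated as assumptions to verify against your installed version.

  # Hedged sketch: calling optillm as an OpenAI-compatible proxy. The
  # base_url/port and the "moa-" (mixture-of-agents) model prefix follow
  # the optillm README; confirm them against your installed version.
  from openai import OpenAI

  client = OpenAI(api_key="sk-local", base_url="http://localhost:8000/v1")
  resp = client.chat.completions.create(
      model="moa-gpt-4o-mini",  # prefix selects the optimization technique
      messages=[{"role": "user",
                 "content": "How many primes are there below 100?"}],
  )
  print(resp.choices[0].message.content)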

Interact with Environment

Memory

Tool Use

Integrated

Multi-agent Effort (and Emergent Intelligence)

ML-like Optimization of LLM Setup

Limitations/Requirements

Creativity

See Also