AI benchmarks
Revision as of 13:38, 20 March 2025
General
- Models Table (lifearchitect.ai): https://lifearchitect.ai/models-table/
Methods
- AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions (paper: https://openreview.net/pdf?id=fz969ahcvJ; code: https://github.com/aidanmclaughlin/AidanBench)
- ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning. Assess reasoning using puzzles of tunable complexity.
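The "puzzles of tunable complexity" idea behind ZebraLogic can be illustrated with a minimal sketch. This is not ZebraLogic's actual generator or solver; it is a hypothetical brute-force solver for a tiny zebra-style puzzle, where complexity is tuned simply by varying the number of houses (the real benchmark's grid sizes and clue types are richer).

```python
from itertools import permutations

def solve(n_houses, clues):
    """Enumerate all assignments of two attributes (colors, pets) to houses
    that satisfy every clue. colors[i] / pets[i] give the house index holding
    color i / pet i. Brute force: O((n_houses!)^2), fine for small puzzles."""
    houses = range(n_houses)
    solutions = []
    for colors in permutations(houses):
        for pets in permutations(houses):
            if all(clue(colors, pets) for clue in clues):
                solutions.append((colors, pets))
    return solutions

# Hypothetical 3-house instance: colors = (red, green, blue), pets = (cat, dog, fish).
# Clues: the red house is house 0; the cat lives in the green house; the dog is in house 2.
clues = [
    lambda c, p: c[0] == 0,     # red house is house 0
    lambda c, p: p[0] == c[1],  # cat is in the green house
    lambda c, p: p[1] == 2,     # dog is in house 2
]
print(solve(3, clues))  # unique solution: colors (0, 1, 2), pets (1, 2, 0)
```

Tuning `n_houses` (and the number of clues) scales the search space factorially, which is the sense in which such puzzles give a complexity dial for probing reasoning.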
Task Length
- 2020-09: Ajeya Cotra: Draft report on AI timelines
- 2025-03: Measuring AI Ability to Complete Long Tasks
Assess Specific Attributes
Various
- LMSYS: Human preference ranking leaderboard
- Tracking AI: "IQ" leaderboard
- Vectara Hallucination Leaderboard
- LiveBench: A Challenging, Contamination-Free LLM Benchmark
Software/Coding
Creativity
Reasoning
- ENIGMAEVAL: "reasoning" leaderboard (paper)
Assistant/Agentic
- GAIA: a benchmark for General AI Assistants
- Galileo AI Agent Leaderboard
- Smolagents LLM Leaderboard: LLMs powering agents
Science
See: Science Benchmarks