AI benchmarks
General
- Models Table (lifearchitect.ai)
- Artificial Analysis
- Epoch AI
  - Notable AI models
Methods
- AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions (code)
  - Leaderboard
  - Suggestion to use Borda count (a minimal aggregation sketch follows this list)
  - 2025-04: added Quasar Alpha, Optimus Alpha, Llama-4 Scout, and Llama-4 Maverick
- ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning. Assesses reasoning using puzzles of tunable complexity.
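The Borda-count suggestion above amounts to letting each benchmark question rank the models and summing positional points. A minimal illustrative sketch in Python, with hypothetical model names and rankings (not AidanBench's actual scoring code):

from collections import defaultdict

def borda_scores(rankings):
    # Each ranking lists models best-to-worst for one question; a model
    # earns one point per model ranked below it, summed over all rankings.
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, model in enumerate(ranking):
            scores[model] += n - 1 - position
    return dict(scores)

rankings = [
    ["model-a", "model-b", "model-c"],
    ["model-b", "model-a", "model-c"],
    ["model-a", "model-c", "model-b"],
]
print(sorted(borda_scores(rankings).items(), key=lambda kv: -kv[1]))
# -> [('model-a', 5), ('model-b', 3), ('model-c', 1)]

Borda count rewards consistently high placement rather than occasional wins, which is why it is attractive for aggregating many per-question rankings.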
Task Length
- 2020-09: Ajeya Cotra: Draft report on AI timelines
- 2025-03: Measuring AI Ability to Complete Long Tasks (see the time-horizon sketch below)
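As a rough illustration of the time-horizon idea in the METR paper (model success modeled against how long a task takes a skilled human, then summarized as the task length at which success is 50%), here is a hedged sketch on made-up data; the paper's actual estimation procedure is more involved.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up illustration: human completion time per task (minutes) and whether
# a hypothetical model succeeded on that task.
task_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])
succeeded    = np.array([1, 1, 1, 1, 1,  0,  1,   0,   0,   0])

# Fit success probability as a logistic function of log task length.
X = np.log(task_minutes).reshape(-1, 1)
clf = LogisticRegression().fit(X, succeeded)

# The "50% time horizon" is the task length t where w*log(t) + b = 0.
w, b = clf.coef_[0][0], clf.intercept_[0]
print(f"50% time horizon ~ {np.exp(-b / w):.0f} minutes")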
Assess Specific Attributes
Various
- LMSYS: human preference ranking leaderboard (an Elo-style aggregation sketch follows this list)
- Tracking AI: "IQ" leaderboard
- LiveBench: A Challenging, Contamination-Free LLM Benchmark
- LLM Thematic Generalization Benchmark
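Arena-style leaderboards such as LMSYS aggregate pairwise human preference votes into a single rating per model. A minimal Elo-style sketch of that aggregation, illustrative only; the live leaderboard's statistical model (e.g. a Bradley-Terry fit) differs in detail:

def expected_win(r_a, r_b):
    # Probability that a model rated r_a beats one rated r_b under Elo.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def record_vote(ratings, winner, loser, k=32):
    # Move both ratings toward the observed outcome.
    p = expected_win(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - p)
    ratings[loser] -= k * (1 - p)

ratings = {"model-a": 1000.0, "model-b": 1000.0, "model-c": 1000.0}
votes = [("model-a", "model-b"), ("model-a", "model-c"), ("model-b", "model-c")]
for winner, loser in votes:
    record_vote(ratings, winner, loser)
print(ratings)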
Hallucination
Software/Coding
Visual
- 2025-03: Can Large Vision Language Models Read Maps Like a Human? MapBench
Creativity
- 2024-10: AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text
- 2024-11: AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions (code)
- 2024-12: LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context
- LLM Creative Story-Writing Benchmark
Reasoning
- ENIGMAEVAL: "reasoning" leaderboard (paper)
- Sober Reasoning Leaderboard
  - 2025-04: A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility
Assistant/Agentic
- GAIA: a benchmark for General AI Assistants
- Galileo AI Agent Leaderboard
- Smolagents LLM Leaderboard: LLMs powering agents
- OpenAI PaperBench: Evaluating AI’s Ability to Replicate AI Research (paper, code)
Science
See: Science Benchmarks