AI benchmarks
=General=
* [https://lifearchitect.ai/models-table/ Models Table] (lifearchitect.ai)
* [https://artificialanalysis.ai/models Artificial Analysis]
* [https://epoch.ai/ Epoch AI]
** [https://epoch.ai/data/notable-ai-models Notable AI models]

=Methods=
* [https://openreview.net/pdf?id=fz969ahcvJ AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions] ([https://github.com/aidanmclaughlin/AidanBench code])
** [https://x.com/scaling01/status/1897301054431064391 Suggestion to use] [https://en.wikipedia.org/wiki/Borda_count Borda count] (a short sketch follows this list)
* [https://arxiv.org/abs/2502.01100 ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning]. Assesses reasoning using puzzles of tunable complexity.
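For AidanBench-style evaluations, a Borda count gives each model points for every competitor it outranks on a given question, then sums the points across questions. A minimal illustrative sketch (hypothetical model names and data, not the AidanBench scoring code):
<syntaxhighlight lang="python">
# Minimal Borda-count sketch (illustrative; hypothetical data, not the
# AidanBench implementation). Each "ballot" is one question's ranking of
# models, best first; a model earns one point per model it outranks.
from collections import defaultdict

def borda_count(ballots):
    """Aggregate per-question rankings into a single overall ordering."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, model in enumerate(ranking):
            scores[model] += n - position - 1  # first place earns n-1 points
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Three questions, three hypothetical models:
ballots = [
    ["model_a", "model_b", "model_c"],
    ["model_b", "model_a", "model_c"],
    ["model_a", "model_c", "model_b"],
]
print(borda_count(ballots))  # [('model_a', 5), ('model_b', 3), ('model_c', 1)]
</syntaxhighlight>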
+ | |||
+ | ==Task Length== | ||
+ | * 2020-09: Ajeya Cotra: [https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines Draft report on AI timelines] | ||
+ | * 2025-03: [https://arxiv.org/abs/2503.14499 Measuring AI Ability to Complete Long Tasks Measuring AI Ability to Complete Long Tasks] | ||
+ | [[Image:GmZHL8xWQAAtFlF.jpeg|450px]] | ||
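The headline metric in the 2025-03 paper is a "time horizon": the human task length at which a model's success rate drops to 50%, estimated roughly by fitting a logistic curve of success against log task length. An illustrative sketch of that kind of calculation (the data and fitting details below are assumptions, not the authors' code):
<syntaxhighlight lang="python">
# Sketch of a "50% time horizon" estimate (illustrative only; hypothetical
# results, not the paper's data or code). Fit P(success) against log(task
# length) and solve for the length where the fitted probability is 0.5.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical outcomes: human completion time (minutes) and model success (0/1)
task_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])
success      = np.array([1, 1, 1, 1, 1,  0,  1,   0,   0,   0])

X = np.log(task_minutes).reshape(-1, 1)
clf = LogisticRegression().fit(X, success)

# sigmoid(w*x + b) = 0.5  =>  x = -b/w, where x is log-minutes
w, b = clf.coef_[0][0], clf.intercept_[0]
horizon_minutes = np.exp(-b / w)
print(f"50% time horizon ~= {horizon_minutes:.1f} minutes")
</syntaxhighlight>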

=Assess Specific Attributes=
==Various==
* [https://lmsys.org/ LMSYS]: Human preference ranking leaderboard
* [https://trackingai.org/home Tracking AI]: "IQ" leaderboard
* [https://livebench.ai/#/ LiveBench: A Challenging, Contamination-Free LLM Benchmark]
* [https://github.com/lechmazur/generalization/ LLM Thematic Generalization Benchmark]

==Hallucination==
* [https://www.vectara.com/ Vectara] [https://github.com/vectara/hallucination-leaderboard Hallucination Leaderboard]
* [https://github.com/lechmazur/confabulations/ LLM Confabulation (Hallucination) Leaderboard for RAG] (a toy rate calculation is sketched below)
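These leaderboards typically report a hallucination rate: the fraction of model outputs (e.g. summaries) that a factual-consistency judge flags as unsupported by the source text. A toy sketch of that bookkeeping, with a deliberately crude word-overlap heuristic standing in for the trained judge model such leaderboards actually use:
<syntaxhighlight lang="python">
# Toy hallucination-rate sketch (illustrative only; not the Vectara/HHEM or
# confabulations-benchmark pipeline). A real leaderboard uses a trained
# factual-consistency model; a crude word-overlap heuristic stands in here.

def faithfulness_score(source: str, summary: str) -> float:
    """Toy judge: fraction of summary words that also appear in the source."""
    src_words = set(source.lower().split())
    summ_words = summary.lower().split()
    if not summ_words:
        return 1.0
    return sum(w in src_words for w in summ_words) / len(summ_words)

def hallucination_rate(pairs, threshold=0.5):
    """Fraction of (source, summary) pairs judged unfaithful."""
    scores = [faithfulness_score(src, summ) for src, summ in pairs]
    return sum(s < threshold for s in scores) / len(scores)

pairs = [
    ("The meeting was moved to Tuesday at 3pm.", "The meeting is now on Tuesday at 3pm."),
    ("The meeting was moved to Tuesday at 3pm.", "Budget approval was delayed until next quarter."),
]
print(hallucination_rate(pairs))  # 0.5 with this toy judge
</syntaxhighlight>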

==Software/Coding==
* 2025-02: [https://arxiv.org/abs/2502.12115 SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?] ([https://github.com/openai/SWELancer-Benchmark code])
+ | |||
+ | ==Visual== | ||
+ | * 2025-03: [https://arxiv.org/abs/2503.14607 Can Large Vision Language Models Read Maps Like a Human?] MapBench | ||

==Creativity==
* 2024-10: [https://arxiv.org/abs/2410.04265 AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text]
* 2024-11: [https://openreview.net/pdf?id=fz969ahcvJ AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions] ([https://github.com/aidanmclaughlin/AidanBench code])
* 2024-12: [https://arxiv.org/abs/2412.17596 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context]
* [https://github.com/lechmazur/writing/ LLM Creative Story-Writing Benchmark]

==Reasoning==
* ENIGMAEVAL: "reasoning" leaderboard (paper)

==Assistant/Agentic==
* GAIA: a benchmark for General AI Assistants
* Galileo AI Agent Leaderboard
* Smolagents LLM Leaderboard: LLMs powering agents

==Science==
See: Science Benchmarks