AI benchmarks

General
- Models Table (lifearchitect.ai)
- Artificial Analysis
- Epoch AI
Methods
- AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions (code)
- ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning. Assess reasoning using puzzles of tunable complexity.
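
The "tunable complexity" idea behind ZebraLogic can be made concrete with a small sketch: difficulty grows with the size of the assignment grid and shrinks with the number of clues revealed. The generator and clue format below are illustrative stand-ins, not ZebraLogic's actual specification.

  import itertools
  import random

  def make_instance(n_houses, n_attrs, seed=0):
      # Hidden ground truth: solution[a][h] = value of attribute a in house h.
      rng = random.Random(seed)
      solution = [rng.sample(range(n_houses), n_houses) for _ in range(n_attrs)]
      # "Same house" clues: value va of attribute a co-occurs with value vb of attribute b.
      clues = [(a, solution[a][h], b, solution[b][h])
               for a, b in itertools.combinations(range(n_attrs), 2)
               for h in range(n_houses)]
      rng.shuffle(clues)
      return solution, clues

  def count_consistent(n_houses, n_attrs, clues):
      # Brute-force count of candidate grids consistent with the clues.
      count = 0
      for grid in itertools.product(itertools.permutations(range(n_houses)),
                                    repeat=n_attrs):
          if all(any(grid[a][h] == va and grid[b][h] == vb
                     for h in range(n_houses))
                 for (a, va, b, vb) in clues):
              count += 1
      return count

  solution, clues = make_instance(n_houses=3, n_attrs=3)
  # Fewer revealed clues -> more grids remain consistent -> harder puzzle.
  print(count_consistent(3, 3, clues), count_consistent(3, 3, clues[:4]))
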
Task Length
- 2020-09: Ajeya Cotra: Draft report on AI timelines
- 2025-03: Measuring AI Ability to Complete Long Tasks
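
The METR paper's headline metric is a time horizon: fit success probability against (log) human task length, then report the length at which the fitted curve crosses 50%. A sketch of that calculation, using invented trial data and a least-squares logistic fit as a simple stand-in for the paper's regression:

  import math
  import numpy as np
  from scipy.optimize import curve_fit

  def p_success(log_minutes, a, b):
      # Logistic curve: success probability vs. log task length.
      return 1.0 / (1.0 + np.exp(-(a + b * log_minutes)))

  # (human task length in minutes, agent succeeded?) -- invented numbers.
  trials = [(1, 1), (2, 1), (4, 1), (8, 1), (15, 0), (30, 1),
            (60, 0), (120, 0), (240, 0), (480, 0)]
  x = np.log([t for t, _ in trials])
  y = np.array([s for _, s in trials], dtype=float)

  (a, b), _ = curve_fit(p_success, x, y, p0=(0.0, -1.0))
  horizon = math.exp(-a / b)  # length where the fitted curve crosses p = 0.5
  print(f"50% time horizon ~ {horizon:.0f} minutes")
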
Assess Specific Attributes
Various
- LMSYS: Human preference ranking leaderboard (ratings from pairwise votes; see the sketch after this list)
- Tracking AI: "IQ" leaderboard
- LiveBench: A Challenging, Contamination-Free LLM Benchmark
- LLM Thematic Generalization Benchmark
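
Preference leaderboards like LMSYS rank models from pairwise human votes rather than fixed test sets. A minimal sketch of the rating step, using online Elo updates; the arena has also published Bradley-Terry maximum-likelihood fits, which are order-independent.

  from collections import defaultdict

  K = 32  # Elo update step size

  def expected(r_a, r_b):
      # Predicted win probability of A against B under Elo.
      return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

  def update(ratings, model_a, model_b, winner):
      e_a = expected(ratings[model_a], ratings[model_b])
      s_a = 1.0 if winner == model_a else 0.5 if winner == "tie" else 0.0
      ratings[model_a] += K * (s_a - e_a)
      ratings[model_b] += K * ((1.0 - s_a) - (1.0 - e_a))

  ratings = defaultdict(lambda: 1000.0)
  votes = [("model-x", "model-y", "model-x"),   # hypothetical votes
           ("model-y", "model-z", "tie"),
           ("model-x", "model-z", "model-x")]
  for a, b, w in votes:
      update(ratings, a, b, w)
  print(sorted(ratings.items(), key=lambda kv: -kv[1]))
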
Hallucination
Software/Coding
Visual
- 2025-03: Can Large Vision Language Models Read Maps Like a Human? MapBench
Creativity
- 2024-10: AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text
- 2024-11: AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions (code)
- 2024-12: LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context
- LLM Creative Story-Writing Benchmark
Reasoning
- ENIGMAEVAL: "reasoning" leaderboard (paper)
Assistant/Agentic
- GAIA: a benchmark for General AI Assistants (see the evaluation-loop sketch after this list)
- Galileo AI Agent Leaderboard
- Smolagents LLM Leaderboard: LLMs powering agents
- OpenAI PaperBench: Evaluating AI’s Ability to Replicate AI Research (paper, code)
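
Most agentic benchmarks in this list reduce to the same loop: pose a task, let the agent run, score the final answer. A minimal GAIA-style harness is sketched below; run_agent is a placeholder for whatever agent stack is under test, and the normalization is deliberately simplified compared to GAIA's real scorer.

  from typing import Callable

  def normalize(text: str) -> str:
      # Simplified answer normalization: case, whitespace.
      return " ".join(text.strip().lower().split())

  def evaluate(run_agent: Callable[[str], str], tasks: list[dict]) -> float:
      # Exact-match accuracy over the task set.
      correct = 0
      for task in tasks:
          answer = run_agent(task["question"])
          correct += normalize(answer) == normalize(task["answer"])
      return correct / len(tasks)

  # Hypothetical task and trivial agent, for illustration only.
  tasks = [{"question": "What is 2 + 2?", "answer": "4"}]
  print(evaluate(lambda q: "4", tasks))
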
Science
See: Science Benchmarks