AI benchmarks
=Methods=
* [https://openreview.net/pdf?id=fz969ahcvJ AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions] ([https://github.com/aidanmclaughlin/AidanBench code])
* ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning. Assesses reasoning using puzzles of tunable complexity (a toy sketch of this idea appears below the list).
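A minimal sketch of the "tunable complexity" idea behind logic-grid benchmarks of this kind (an illustration only: the clue types and function names are assumptions, this is not the ZebraLogic generator, and a real benchmark generator would additionally filter for puzzles with exactly one solution):

<syntaxhighlight lang="python">
import itertools
import random

def make_puzzle(n_houses=3, n_categories=3, seed=0):
    """Build a random logic-grid instance plus a small set of clues."""
    rng = random.Random(seed)
    # Ground truth: solution[c][h] = which value of category c sits in house h.
    solution = [rng.sample(range(n_houses), n_houses) for _ in range(n_categories)]

    clues = []
    # Positional clues: value v of category c is located in house h.
    for c in range(n_categories):
        h = rng.randrange(n_houses)
        clues.append(("at", c, solution[c][h], h))
    # Cross-category clues: a value of category 0 shares a house with a value of category c.
    for c in range(1, n_categories):
        h = rng.randrange(n_houses)
        clues.append(("same", 0, solution[0][h], c, solution[c][h]))
    return solution, clues

def count_consistent_grids(clues, n_houses, n_categories):
    """Brute-force count of assignments consistent with the clues (small sizes only)."""
    count = 0
    perms = list(itertools.permutations(range(n_houses)))
    for grids in itertools.product(perms, repeat=n_categories):
        ok = True
        for clue in clues:
            if clue[0] == "at":
                _, c, v, h = clue
                ok = (grids[c][h] == v)
            else:  # ("same", c1, v1, c2, v2): v1 and v2 occupy the same house
                _, c1, v1, c2, v2 = clue
                ok = (grids[c1].index(v1) == grids[c2].index(v2))
            if not ok:
                break
        count += ok
    return count

if __name__ == "__main__":
    for n in (2, 3, 4):  # difficulty knob: more houses -> larger search space
        solution, clues = make_puzzle(n_houses=n, n_categories=3, seed=42)
        n_ok = count_consistent_grids(clues, n, 3)
        print(f"{n} houses, {len(clues)} clues, {n_ok} consistent grid(s)")
</syntaxhighlight>

Scaling the number of houses and categories grows the candidate space as (N!)^M, which is the kind of knob such benchmarks use to make instances progressively harder.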
=Assess Specific Attributes=
==Various==
* [https://lmsys.org/ LMSYS]: Human preference ranking leaderboard (see the rating sketch below)
* [https://trackingai.org/home Tracking AI]: "IQ" leaderboard
* [https://www.vectara.com/ Vectara] [https://github.com/vectara/hallucination-leaderboard Hallucination Leaderboard]
* [https://livebench.ai/#/ LiveBench: A Challenging, Contamination-Free LLM Benchmark]
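For context on how pairwise-preference leaderboards of this kind are typically scored, the sketch below applies simple online Elo updates to a tiny hypothetical vote log (the model names and K-factor are illustrative assumptions, not any leaderboard's actual pipeline):

<syntaxhighlight lang="python">
import random

def expected_score(r_a, r_b):
    """Win probability of A over B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(ratings, model_a, model_b, outcome, k=4.0):
    """outcome: 1.0 if A is preferred, 0.0 if B is preferred, 0.5 for a tie."""
    e_a = expected_score(ratings[model_a], ratings[model_b])
    ratings[model_a] += k * (outcome - e_a)
    ratings[model_b] += k * ((1.0 - outcome) - (1.0 - e_a))

if __name__ == "__main__":
    # Hypothetical vote log: (model_a, model_b, human preference).
    votes = [("model-x", "model-y", 1.0),
             ("model-y", "model-z", 0.5),
             ("model-x", "model-z", 1.0)] * 10
    random.Random(0).shuffle(votes)

    ratings = {m: 1000.0 for m in ("model-x", "model-y", "model-z")}
    for a, b, outcome in votes:
        elo_update(ratings, a, b, outcome)

    for name, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {r:.1f}")
</syntaxhighlight>

Because online Elo is order-dependent, arena-style leaderboards typically refit ratings over the full vote history (e.g. with a Bradley-Terry-style model) rather than relying on a single sequential pass.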
==Software/Coding==
* 2025-02: [https://arxiv.org/abs/2502.12115 SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?] ([https://github.com/openai/SWELancer-Benchmark code])
==Creativity==
* 2024-10: [https://arxiv.org/abs/2410.04265 AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text]

==Reasoning==
* [https://scale.com/leaderboard/enigma_eval ENIGMAEVAL]: "reasoning" leaderboard ([https://static.scale.com/uploads/654197dc94d34f66c0f5184e/EnigmaEval%20v4.pdf paper])

==Assistant/Agentic==
* [https://arxiv.org/abs/2311.12983 GAIA: a benchmark for General AI Assistants]
* [https://www.galileo.ai/blog/agent-leaderboard Galileo AI] [https://huggingface.co/spaces/galileo-ai/agent-leaderboard Agent Leaderboard]
* [https://huggingface.co/spaces/smolagents/smolagents-leaderboard Smolagents LLM Leaderboard]: LLMs powering agents