=Methods=
 
* [https://openreview.net/pdf?id=fz969ahcvJ AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions] ([https://github.com/aidanmclaughlin/AidanBench code])
** [https://aidanbench.com/ Leaderboard]
** [https://x.com/scaling01/status/1897301054431064391 Suggestion to use] [https://en.wikipedia.org/wiki/Borda_count Borda count] to aggregate per-question rankings (see the sketch after this list)
* [https://arxiv.org/abs/2502.01100 ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning]: assesses reasoning using constraint-satisfaction ("zebra") puzzles of tunable complexity (see the solver sketch below).
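
Aggregating AidanBench's per-question orderings with a Borda count means each question awards a model one point for every competitor it outranks, and points are summed across questions. A minimal sketch; the model names and rankings are invented, not actual benchmark output:

<syntaxhighlight lang="python">
from collections import defaultdict

def borda_scores(rankings):
    """Aggregate best-to-worst rankings into total Borda points per model."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, model in enumerate(ranking):
            scores[model] += n - 1 - position  # one point per competitor beaten
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

# Invented per-question rankings, not actual AidanBench output:
rankings = [
    ["model-a", "model-b", "model-c"],
    ["model-b", "model-a", "model-c"],
    ["model-a", "model-c", "model-b"],
]
print(borda_scores(rankings))  # {'model-a': 5, 'model-b': 3, 'model-c': 1}
</syntaxhighlight>

Unlike a plurality count, which only tallies first places, Borda totals reward consistently strong placement across many questions.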
  
 
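The puzzles behind ZebraLogic are constraint-satisfaction ("zebra"/Einstein) puzzles: several houses, several attribute categories, and clues that admit exactly one assignment, with difficulty tuned by scaling the grid and clue set. A toy brute-force solver for a 3-house instance; the clues here are invented for illustration:

<syntaxhighlight lang="python">
from itertools import permutations

HOUSES = [1, 2, 3]

# Toy 3-house zebra puzzle (clues invented for illustration):
# 1. The red house's owner drinks tea.
# 2. The green house is immediately right of the red house.
# 3. Coffee is drunk in house 2.
def solve():
    for colors in permutations(["red", "green", "blue"]):
        for drinks in permutations(["tea", "coffee", "milk"]):
            pos = {c: i for i, c in enumerate(colors)}
            if drinks[pos["red"]] != "tea":     # clue 1
                continue
            if pos["green"] != pos["red"] + 1:  # clue 2
                continue
            if drinks[1] != "coffee":           # clue 3
                continue
            yield dict(zip(HOUSES, zip(colors, drinks)))

for solution in solve():
    print(solution)  # {1: ('red', 'tea'), 2: ('green', 'coffee'), 3: ('blue', 'milk')}
</syntaxhighlight>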

=Assess Specific Attributes=

==Various==
* [https://lmsys.org/ LMSYS]: Human preference ranking leaderboard, built from pairwise votes (see the rating sketch below)
* [https://trackingai.org/home Tracking AI]: "IQ" leaderboard
* [https://www.vectara.com/ Vectara] [https://github.com/vectara/hallucination-leaderboard Hallucination Leaderboard]
* [https://livebench.ai/#/ LiveBench: A Challenging, Contamination-Free LLM Benchmark]
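
Human-preference leaderboards of this sort collect pairwise "which response is better?" votes; a simple way to turn those votes into ratings is an Elo-style update, sketched below (newer arena leaderboards fit a Bradley-Terry model to the same data, but the idea is similar). The vote data is invented:

<syntaxhighlight lang="python">
def elo_update(ratings, winner, loser, k=32.0):
    """One Elo rating update from a single pairwise preference vote."""
    expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1.0 - expected_win)
    ratings[loser]  -= k * (1.0 - expected_win)

# Invented votes, not real arena data:
ratings = {"model-a": 1000.0, "model-b": 1000.0, "model-c": 1000.0}
votes = [("model-a", "model-b"), ("model-a", "model-c"), ("model-b", "model-c")]
for winner, loser in votes:
    elo_update(ratings, winner, loser)
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
</syntaxhighlight>

With a fixed K, the resulting ratings depend on vote order; fitting a Bradley-Terry model to the full vote matrix removes that order dependence.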
 
==Software/Coding==
* 2025-02: [https://arxiv.org/abs/2502.12115 SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?] ([https://github.com/openai/SWELancer-Benchmark code]): each task carries the payout of the real freelance job it came from, and models are scored by total dollars earned (see the sketch below).
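
Because the headline number is dollars earned rather than a pass rate, scoring reduces to summing the payouts of resolved tasks. A minimal sketch; the task records and the passes() check are stand-ins for the benchmark's actual evaluation harness:

<syntaxhighlight lang="python">
# Hypothetical task records; real SWE-Lancer tasks carry actual Upwork payouts.
tasks = [
    {"id": "bugfix-001", "payout_usd": 250.0},
    {"id": "feature-002", "payout_usd": 1000.0},
    {"id": "manager-003", "payout_usd": 500.0},
]

def passes(task_id: str) -> bool:
    """Stand-in for the benchmark's end-to-end test harness."""
    return task_id in {"bugfix-001", "manager-003"}  # pretend results

earned = sum(t["payout_usd"] for t in tasks if passes(t["id"]))
total = sum(t["payout_usd"] for t in tasks)
print(f"Earned ${earned:,.0f} of ${total:,.0f} ({100 * earned / total:.0f}%)")
</syntaxhighlight>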
 
==Creativity==
* 2024-10: [https://arxiv.org/abs/2410.04265 AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text]: scores creativity by how little of a text can be attributed to existing web text (see the sketch below).
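
A loose sketch of that idea is to score the fraction of a text's word n-grams that are absent from a reference corpus; the paper's actual attribution is far more sophisticated and runs against web-scale corpora, so treat this purely as illustration:

<syntaxhighlight lang="python">
def ngrams(text, n):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def novelty_score(candidate, corpus, n=3):
    """Fraction of the candidate's word n-grams absent from a reference corpus.

    A crude proxy for linguistic creativity: text stitched together from
    existing sources scores low; genuinely novel phrasing scores high.
    """
    corpus_grams = set()
    for doc in corpus:
        corpus_grams |= ngrams(doc, n)
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand - corpus_grams) / len(cand)

# Tiny illustrative corpus; a real measurement needs web-scale text.
corpus = ["the quick brown fox jumps over the lazy dog"]
print(novelty_score("the quick brown fox naps beside a patient dog", corpus))  # ~0.71
</syntaxhighlight>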

==Reasoning==
* [https://scale.com/leaderboard/enigma_eval ENIGMAEVAL]: "reasoning" leaderboard ([https://static.scale.com/uploads/654197dc94d34f66c0f5184e/EnigmaEval%20v4.pdf paper])

==Assistant/Agentic==
* [https://arxiv.org/abs/2311.12983 GAIA: a benchmark for General AI Assistants]: scored by quasi-exact match of short answers against references (see the sketch after this list)
* [https://www.galileo.ai/blog/agent-leaderboard Galileo AI] [https://huggingface.co/spaces/galileo-ai/agent-leaderboard Agent Leaderboard]
* [https://huggingface.co/spaces/smolagents/smolagents-leaderboard Smolagents LLM Leaderboard]: LLMs powering agents
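
GAIA answers are short strings, numbers, or lists, so evaluation reduces to a normalized ("quasi-exact") comparison against the reference. A simplified sketch; this normalizer is an assumption, and GAIA's official scorer additionally special-cases numbers and comma-separated lists:

<syntaxhighlight lang="python">
import re

def normalize(answer: str) -> str:
    """Lowercase, trim, and collapse whitespace/punctuation for comparison."""
    answer = answer.strip().lower()
    answer = re.sub(r"[^\w\s.]", "", answer)  # drop punctuation except decimals
    return re.sub(r"\s+", " ", answer)

def quasi_exact_match(prediction: str, reference: str) -> bool:
    return normalize(prediction) == normalize(reference)

# Invented predictions, not GAIA data:
examples = [("  Paris ", "paris"), ("3.5 hours", "3.5"), ("42", "42")]
accuracy = sum(quasi_exact_match(p, r) for p, r in examples) / len(examples)
print(f"accuracy = {accuracy:.2f}")  # 2 of 3 match under this toy normalizer
</syntaxhighlight>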
