AI benchmarks

From GISAXS
=General=

* [https://lifearchitect.ai/models-table/ Models Table] (lifearchitect.ai)
* [https://artificialanalysis.ai/models Artificial Analysis]
* [https://epoch.ai/ Epoch AI]
** [https://epoch.ai/data/notable-ai-models Notable AI models]
 
=Methods=
 
** [https://aidanbench.com/ Leaderboard]
** [https://x.com/scaling01/status/1897301054431064391 Suggestion to use] [https://en.wikipedia.org/wiki/Borda_count Borda count]
** 2025-04: [https://x.com/scaling01/status/1910499781601874008 added] Quasar Alpha, Optimus Alpha, Llama-4 Scout, and Llama-4 Maverick
 
* [https://arxiv.org/abs/2502.01100 ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning]. Assesses reasoning using puzzles of tunable complexity.
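The Borda-count suggestion above can be sketched in a few lines of Python: each benchmark task produces a best-to-worst ranking of the models, a model in position i of an n-model ranking earns n − 1 − i points, and points are summed across tasks to give one aggregate leaderboard. The model names and rankings below are placeholders, not real benchmark results.

```python
from collections import defaultdict

def borda_count(rankings):
    """Aggregate per-task model rankings into one leaderboard.

    rankings: list of lists, each ordered best-to-worst.
    A model in position i of an n-model ranking earns n - 1 - i points.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, model in enumerate(ranking):
            scores[model] += n - 1 - i
    # Sort by total points, highest first (ties keep first-seen order)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical per-task rankings of three models
rankings = [
    ["model-a", "model-b", "model-c"],
    ["model-b", "model-a", "model-c"],
    ["model-a", "model-c", "model-b"],
]
print(borda_count(rankings))
# → [('model-a', 5), ('model-b', 3), ('model-c', 1)]
```

Unlike averaging raw scores, this only uses relative order within each task, so one task with an unusual score scale cannot dominate the aggregate.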
  
==Visual==
 
* 2025-03: [https://arxiv.org/abs/2503.14607 Can Large Vision Language Models Read Maps Like a Human?] MapBench

==Conversation==
* 2025-01: [https://arxiv.org/abs/2501.17399 MultiChallenge: A Realistic Multi-Turn Conversation Evaluation Benchmark Challenging to Frontier LLMs] ([https://scale.com/research/multichallenge project], [https://github.com/ekwinox117/multi-challenge code], [https://scale.com/leaderboard/multichallenge leaderboard])
  
 
==Creativity==
 
==Reasoning==
* [https://scale.com/leaderboard/enigma_eval ENIGMAEVAL]: "reasoning" leaderboard ([https://static.scale.com/uploads/654197dc94d34f66c0f5184e/EnigmaEval%20v4.pdf paper])
* [https://bethgelab.github.io/sober-reasoning/ Sober Reasoning Leaderboard]
** 2025-04: [https://arxiv.org/abs/2504.07086 A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility]
  
 
==Assistant/Agentic==
See: [[AI_Agents#Optimization|AI Agents: Optimization]]
* [https://arxiv.org/abs/2311.12983 GAIA: a benchmark for General AI Assistants]
* [https://www.galileo.ai/blog/agent-leaderboard Galileo AI] [https://huggingface.co/spaces/galileo-ai/agent-leaderboard Agent Leaderboard]
* [https://huggingface.co/spaces/smolagents/smolagents-leaderboard Smolagents LLM Leaderboard]: LLMs powering agents
* OpenAI [https://openai.com/index/paperbench/ PaperBench: Evaluating AI’s Ability to Replicate AI Research] ([https://cdn.openai.com/papers/22265bac-3191-44e5-b057-7aaacd8e90cd/paperbench.pdf paper], [https://github.com/openai/preparedness/tree/main/project/paperbench code])
  
 
==Science==
See: [[Science_Agents#Science_Benchmarks|Science Benchmarks]]

Revision as of 16:28, 14 April 2025
