AI benchmarks
Leaderboards
- LMSYS: human preference ranking from pairwise model comparisons (see the rating sketch after this list)
- Tracking AI
- Vectara Hallucination Leaderboard
- LiveBench: A Challenging, Contamination-Free LLM Benchmark
- ENIGMAEVAL (paper)
- Galileo AI Agent Leaderboard
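
Leaderboards in the LMSYS style rank models from many pairwise human votes rather than from a fixed test set. Below is a minimal Python sketch of one way such votes can be turned into ratings, assuming a simple sequential Elo update; the K-factor, base rating, and vote format are illustrative choices, not LMSYS's actual pipeline.

    from collections import defaultdict

    def expected_score(r_a: float, r_b: float) -> float:
        """Probability that model A beats model B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def update_ratings(votes, k: float = 32.0, base: float = 1000.0):
        """votes: iterable of (model_a, model_b, winner), winner in {'a', 'b', 'tie'}."""
        ratings = defaultdict(lambda: base)
        for model_a, model_b, winner in votes:
            e_a = expected_score(ratings[model_a], ratings[model_b])
            s_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
            ratings[model_a] += k * (s_a - e_a)
            ratings[model_b] += k * ((1.0 - s_a) - (1.0 - e_a))
        return dict(ratings)

    # Hypothetical votes for illustration only.
    votes = [
        ("model-x", "model-y", "a"),
        ("model-y", "model-z", "tie"),
        ("model-x", "model-z", "a"),
    ]
    for name, rating in sorted(update_ratings(votes).items(), key=lambda kv: -kv[1]):
        print(f"{name}: {rating:.0f}")

Sequential updates like this are order-dependent; batch fits over the full vote set are another common way to estimate the same kind of preference ranking.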
Methods
- AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions (code); see the novelty-scoring sketch below.
- ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning. Assesses reasoning using logic-grid puzzles of tunable complexity; see the puzzle sketch below.
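
AidanBench probes novel idea generation by repeatedly posing the same open-ended question and measuring how long the answers stay distinct. The following is a minimal sketch of one way to score that, assuming a stand-in bag-of-words embedding and an arbitrary 0.8 similarity cutoff; it is not AidanBench's actual scoring code.

    from collections import Counter
    from math import sqrt

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two bag-of-words vectors."""
        dot = sum(a[t] * b[t] for t in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def count_novel_answers(answers, similarity_threshold: float = 0.8) -> int:
        """Count consecutive answers that stay dissimilar to all earlier ones."""
        kept = []
        for text in answers:
            emb = Counter(text.lower().split())
            if any(cosine(emb, prev) >= similarity_threshold for prev in kept):
                break  # the model has started repeating itself
            kept.append(emb)
        return len(kept)

    # Hypothetical answers to one open-ended question.
    answers = [
        "use drones to plant trees in deforested areas",
        "turn food waste into building insulation",
        "plant trees in deforested areas using drones",  # near-duplicate of answer 1
    ]
    print(count_novel_answers(answers))  # -> 2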
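
ZebraLogic's puzzles are logic-grid (zebra) problems whose difficulty can be dialed up by enlarging the grid. Here is a minimal sketch of a tunable puzzle of this kind, with clue types and sampling that are simplified assumptions rather than the benchmark's actual generator.

    import random
    from math import factorial

    def make_puzzle(n_houses: int, n_attributes: int, n_clues: int, seed: int = 0):
        rng = random.Random(seed)
        # Ground truth: house index of value v of attribute a is solution[a][v].
        solution = [rng.sample(range(n_houses), n_houses) for _ in range(n_attributes)]
        clues = []
        while len(clues) < n_clues:
            a, b = rng.randrange(n_attributes), rng.randrange(n_attributes)
            v, w = rng.randrange(n_houses), rng.randrange(n_houses)
            if a == b:
                continue
            if solution[a][v] == solution[b][w]:
                clues.append(("same_house", a, v, b, w))
            elif solution[a][v] + 1 == solution[b][w]:
                clues.append(("left_of", a, v, b, w))
        return solution, clues

    def satisfies(grid, clues) -> bool:
        """Check a candidate grid (same layout as the solution) against all clues."""
        for kind, a, v, b, w in clues:
            if kind == "same_house" and grid[a][v] != grid[b][w]:
                return False
            if kind == "left_of" and grid[a][v] + 1 != grid[b][w]:
                return False
        return True

    solution, clues = make_puzzle(n_houses=4, n_attributes=3, n_clues=6)
    print("solution space size:", factorial(4) ** 3)
    print("ground truth satisfies clues:", satisfies(solution, clues))

The solution-space size (n_houses!)^n_attributes makes the complexity knob explicit: a 4x3 grid already yields 13,824 candidate assignments, and the count grows rapidly with either dimension.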