AI benchmarks
Revision as of 12:59, 18 February 2025 by KevinYager
Leaderboards
- LMSYS: Human preference ranking
- Tracking AI
- Vectara Hallucination Leaderboard
- LiveBench: A Challenging, Contamination-Free LLM Benchmark
- ENIGMAEVAL (paper)
- Galileo AI Agent Leaderboard
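The LMSYS leaderboard above ranks models from pairwise human votes using an Elo-style rating. A minimal sketch of the update rule, assuming a standard Elo model with K-factor 32 (the actual leaderboard uses a more refined statistical model):

```python
def expected_score(r_a, r_b):
    # Probability that model A beats model B under the Elo model
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, score_a, k=32):
    # score_a: 1.0 if A wins, 0.0 if B wins, 0.5 for a tie
    e_a = expected_score(r_a, r_b)
    return (r_a + k * (score_a - e_a),
            r_b + k * ((1 - score_a) - (1 - e_a)))

# Two models start at 1000; A wins one pairwise comparison.
r_a, r_b = elo_update(1000, 1000, 1.0)  # A gains rating, B loses the same amount
```

After many such pairwise comparisons, the ratings converge toward a ranking consistent with the observed human preferences.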
Methods
- AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions (code)
- ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning. Assesses reasoning using puzzles of tunable complexity.
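ZebraLogic's puzzles are logic-grid ("zebra") puzzles whose difficulty scales with the size of the grid. A toy 3-house instance, solved by brute force, illustrates the puzzle type (the clues here are hypothetical, not drawn from the benchmark):

```python
from itertools import permutations

# Toy zebra puzzle: assign a color and a pet to each of 3 houses.
# Hypothetical clues:
# 1. The red house is somewhere to the left of the green house.
# 2. The dog lives in the first house.
# 3. The fish owner lives in the blue house.
# 4. The cat lives in the last house.
solutions = []
for colors in permutations(["red", "green", "blue"]):
    for pets in permutations(["cat", "dog", "fish"]):
        if (colors.index("red") < colors.index("green")   # clue 1
                and pets[0] == "dog"                      # clue 2
                and pets[colors.index("blue")] == "fish"  # clue 3
                and pets[2] == "cat"):                    # clue 4
            solutions.append(list(zip(colors, pets)))

# A well-posed puzzle has exactly one consistent assignment.
assert len(solutions) == 1
```

Scaling the number of houses and attribute categories grows the search space combinatorially, which is what lets the benchmark tune puzzle complexity.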