Difference between revisions of "AI benchmarks"
=Assess Specific Attributes=
==Software/Coding==
* 2025-02: [https://arxiv.org/abs/2502.12115 SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?]

==Creativity==
* 2024-10: [https://arxiv.org/abs/2410.04265 AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text]
Revision as of 12:59, 18 February 2025
=Leaderboards=
* LMSYS: Human preference ranking
* Tracking AI
* Vectara Hallucination Leaderboard
* LiveBench: A Challenging, Contamination-Free LLM Benchmark
* ENIGMAEVAL (paper)
* Galileo AI Agent Leaderboard
=Methods=
* AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions (code)
* ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning. Assesses reasoning using puzzles of tunable complexity.