=Leaderboards=
* [https://lmsys.org/ LMSYS]: Human preference ranking
* [https://trackingai.org/home Tracking AI]
* [https://www.vectara.com/ Vectara] [https://github.com/vectara/hallucination-leaderboard Hallucination Leaderboard]

=Methods=
* [https://openreview.net/pdf?id=fz969ahcvJ AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions] ([https://github.com/aidanmclaughlin/AidanBench code])
* [https://arxiv.org/abs/2502.01100 ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning]. Assesses reasoning using logic puzzles of tunable complexity (see the sketch after this list).
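The scaling knob in a ZebraLogic-style evaluation is the puzzle grid itself: difficulty grows with the number of houses and attributes. Below is a minimal sketch of that idea, assuming a toy grid generator and a cell-level grader; the function names and grading scheme here are illustrative, not taken from the paper, which builds natural-language clue sets around such grids.

<syntaxhighlight lang="python">
import random

def make_grid(n_houses: int, n_attrs: int, seed: int = 0) -> dict:
    """Toy zebra-style ground truth: for each attribute, a random
    permutation assigning one value to each house. The grid size
    (n_houses x n_attrs) is the tunable-complexity knob.
    (Illustrative sketch; not the paper's actual generator.)"""
    rng = random.Random(seed)
    return {f"attr{a}": rng.sample(range(n_houses), n_houses)
            for a in range(n_attrs)}

def cell_accuracy(pred: dict, truth: dict) -> float:
    """Fraction of grid cells a model's predicted assignment gets right.
    A stricter puzzle-level score would require every cell to match."""
    cells = [(a, h) for a, row in truth.items() for h in range(len(row))]
    hits = sum(
        a in pred and h < len(pred[a]) and pred[a][h] == truth[a][h]
        for a, h in cells
    )
    return hits / len(cells)

# Example: a perfect prediction scores 1.0; corrupting one attribute lowers it.
truth = make_grid(n_houses=4, n_attrs=3, seed=42)
pred = {k: list(v) for k, v in truth.items()}
pred["attr0"] = pred["attr0"][::-1]  # scramble one attribute's assignment
print(cell_accuracy(pred, truth))
</syntaxhighlight>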