Science Agents

From GISAXS
 
=Science Benchmarks=

* 2024-07: [https://arxiv.org/abs/2407.13168 SciCode: A Research Coding Benchmark Curated by Scientists]
* 2024-11: [https://openreview.net/pdf?id=fz969ahcvJ AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions] ([https://github.com/aidanmclaughlin/AidanBench code])
* 2024-12: [https://arxiv.org/abs/2412.17596 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context]
* 2025-01: [https://agi.safe.ai/ Humanity's Last Exam]

Revision as of 09:31, 3 February 2025

=AI Use-cases for Science=

==Literature==

===LLM extract data from papers===

===AI finding links in literature===

==Autonomous Ideation==

==Adapting LLMs to Science==

=AI/ML Methods tailored to Science=

==Regression (Data Fitting)==

==Tabular Classification/Regression==

==Symbolic Regression==

==Literature Discovery==

==Commercial==

=AI/ML Methods co-opted for Science=

==Mechanistic Interpretability==

Train a large model on science data, then apply mechanistic interpretability techniques (e.g., sparse autoencoders, SAEs) to the model's feature/activation space.
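As an illustrative sketch (not from the source): a minimal sparse autoencoder trained with plain NumPy on synthetic stand-in "activations", using a ReLU encoder, a linear decoder, and an L1 sparsity penalty, with manually derived gradients. All sizes, names, and hyperparameters here are hypothetical; real SAE work on large-model activations uses much wider latent dimensions and GPU frameworks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for model activations: sparse mixtures of a few
# ground-truth feature directions (all sizes are hypothetical).
n_true, d_model, n_latent, n_samples = 8, 16, 32, 1024
true_feats = rng.normal(size=(n_true, d_model))
coeffs = rng.random((n_samples, n_true)) * (rng.random((n_samples, n_true)) < 0.2)
X = coeffs @ true_feats  # (n_samples, d_model) "activations"

# SAE parameters: ReLU encoder, linear decoder (decoder tied at init only).
W_enc = rng.normal(scale=0.1, size=(d_model, n_latent))
b_enc = np.zeros(n_latent)
W_dec = W_enc.T.copy()
b_dec = np.zeros(d_model)

lr, lam = 1e-2, 1e-3
for step in range(500):
    pre = X @ W_enc + b_enc
    z = np.maximum(pre, 0.0)           # sparse latent code
    X_hat = z @ W_dec + b_dec          # reconstruction
    r = X_hat - X
    # Per-example reconstruction error plus L1 sparsity penalty.
    loss = (r ** 2).sum(axis=1).mean() + lam * np.abs(z).sum(axis=1).mean()
    if step == 0:
        loss0 = loss
    # Manual gradients (mean over the batch dimension).
    g_hat = 2.0 * r / n_samples
    g_W_dec = z.T @ g_hat
    g_b_dec = g_hat.sum(axis=0)
    g_z = g_hat @ W_dec.T + lam * np.sign(z) / n_samples
    g_pre = g_z * (pre > 0.0)          # ReLU mask
    g_W_enc = X.T @ g_pre
    g_b_enc = g_pre.sum(axis=0)
    W_enc -= lr * g_W_enc
    b_enc -= lr * g_b_enc
    W_dec -= lr * g_W_dec
    b_dec -= lr * g_b_dec

sparsity = (z > 0).mean()  # fraction of active latents in the final pass
```

After training, the rows of W_dec play the role of candidate "feature directions" in activation space, and the sparsity of z is what makes the decomposition interpretable.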

==Uncertainty==

=Science Benchmarks=

=Science Agents=

==Reviews==

==Specific==

==Science Multi-Agent Setups==

=AI Science Systems=

==Inorganic Materials Discovery==

==Chemistry==

=Impact of AI in Science=

=Related Tools=

==Literature Search==

==Data Visualization==

=See Also=