Difference between revisions of "Science Agents"

From GISAXS
* 2019-07: [https://doi.org/10.1038/s41586-019-1335-8  Unsupervised word embeddings capture latent knowledge from materials science literature]
 
* 2024-11: [https://doi.org/10.1038/s41562-024-02046-9  Large language models surpass human experts in predicting neuroscience results]
 
==Explanation==
* [https://tiger-ai-lab.github.io/TheoremExplainAgent/ TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding] ([https://arxiv.org/abs/2502.19400 preprint])
==Autonomous Ideation==
 

Revision as of 11:48, 3 March 2025

=AI Use-cases for Science=

==Literature==

===LLM extract data from papers===

===AI finding links in literature===

==Explanation==

==Autonomous Ideation==

==Adapting LLMs to Science==

=AI/ML Methods tailored to Science=

==Regression (Data Fitting)==

===Tabular Classification/Regression===

===Symbolic Regression===

==Literature Discovery==

===Commercial===

=AI/ML Methods in Science=

==Chemistry==

==Biology==

=AI/ML Methods co-opted for Science=

==Mechanistic Interpretability==

Train a large model on science data, then apply mechanistic interpretability methods (e.g. sparse autoencoders, SAEs) to the model's feature/activation space.
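A minimal sketch of the SAE step above, in NumPy. This is an illustrative toy, not any particular paper's implementation: the "activations" are synthetic (generated from hypothetical sparse ground-truth features rather than taken from a trained science model), and all dimensions, learning rates, and the L1 coefficient are assumptions chosen so the example runs quickly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for model activations: 512 samples of a 16-dim hidden
# state that is secretly a sparse combination of 32 latent features.
n, d_model, d_dict = 512, 16, 32
true_feats = rng.normal(size=(d_dict, d_model))
codes = rng.random((n, d_dict)) * (rng.random((n, d_dict)) < 0.1)
X = codes @ true_feats

# Sparse autoencoder: h = relu(x W_e + b_e), x_hat = h W_d + b_d
W_e = rng.normal(scale=0.1, size=(d_model, d_dict))
b_e = np.zeros(d_dict)
W_d = rng.normal(scale=0.1, size=(d_dict, d_model))
b_d = np.zeros(d_model)

lr, l1 = 1e-2, 1e-3  # assumed hyperparameters
for step in range(2000):
    pre = X @ W_e + b_e
    h = np.maximum(pre, 0.0)      # sparse latent code
    X_hat = h @ W_d + b_d
    err = X_hat - X
    # loss = mean reconstruction MSE + l1 * mean |h|; manual gradients
    d_Xhat = 2.0 * err / n
    d_h = d_Xhat @ W_d.T + l1 * np.sign(h) / n
    d_pre = d_h * (pre > 0)
    W_d -= lr * (h.T @ d_Xhat)
    b_d -= lr * d_Xhat.sum(axis=0)
    W_e -= lr * (X.T @ d_pre)
    b_e -= lr * d_pre.sum(axis=0)

mse = float(((X_hat - X) ** 2).mean())
sparsity = float((h > 0).mean())  # fraction of active latents
print(f"mse={mse:.4f} active={sparsity:.2f}")
```

After training, each column of W_d is a candidate "feature direction" in activation space; in real interpretability work these directions would then be inspected against the inputs that activate them.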

==Uncertainty==

=Science Benchmarks=

=Science Agents=

==Reviews==

==Specific==

==Science Multi-Agent Setups==

=AI Science Systems=

==Inorganic Materials Discovery==

==Chemistry==

=Impact of AI in Science=

=Related Tools=

==Literature Search==

==Data Visualization==

=See Also=