Science Agents

From GISAXS
 
* 2024-06: [https://arxiv.org/abs/2406.14546 Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data]: training on (x,y) pairs enables inferring underlying function (define it in code, invert it, compose it)
 
* 2024-12: [https://arxiv.org/abs/2402.14547 OmniPred: Language Models as Universal Regressors]
 
===Tabular Classification/Regression===
* 2025-01: [https://www.nature.com/articles/s41586-024-08328-6 Accurate predictions on small data with a tabular foundation model]
  
 
===Symbolic Regression===
 

Revision as of 09:35, 9 January 2025

AI Use-cases for Science

Literature

AI finding links in literature

Autonomous Ideation

Adapting LLMs to Science

AI/ML Methods tailored to Science

Regression (Data Fitting)

Tabular Classification/Regression

Symbolic Regression

Literature Discovery

Commercial

AI/ML Methods co-opted for Science

Mechanistic Interpretability

Train a large model on science data, then apply mechanistic interpretability methods (e.g. sparse autoencoders, SAEs) to the resulting feature/activation space to surface interpretable features.
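The idea above can be sketched minimally: a sparse autoencoder maps model activations into an overcomplete, non-negative feature space and reconstructs them, with an L1 penalty encouraging sparse, interpretable features. All shapes, hyperparameters, and the random "activations" below are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: activation dim, overcomplete dictionary size.
d_act, d_dict = 16, 64
W_enc = rng.normal(0, 0.1, (d_act, d_dict))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(0, 0.1, (d_dict, d_act))

def sae_forward(acts):
    """Encode activations into sparse (ReLU, non-negative) features, then reconstruct."""
    feats = np.maximum(acts @ W_enc + b_enc, 0.0)
    recon = feats @ W_dec
    return feats, recon

def sae_loss(acts, l1=1e-3):
    """Reconstruction error plus L1 sparsity penalty on the features."""
    feats, recon = sae_forward(acts)
    mse = np.mean((recon - acts) ** 2)
    sparsity = l1 * np.mean(np.abs(feats))
    return mse + sparsity

# Stand-in for hidden-state activations of a model trained on science data.
acts = rng.normal(size=(128, d_act))
feats, recon = sae_forward(acts)
print(feats.shape, recon.shape)  # (128, 64) (128, 16)
```

In practice the weights would be trained by gradient descent on `sae_loss` over large batches of real activations; the sparse features are then inspected for human-interpretable structure.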

Uncertainty

Science Agents

Science Multi-Agent Setups

AI Science Systems

Inorganic Materials Discovery

Chemistry

Impact of AI in Science

Related Tools

Literature Search

Data Visualization

See Also