Science Agents

From GISAXS
==Literature==

===AI finding links in literature===
* 2019-07: [https://doi.org/10.1038/s41586-019-1335-8  Unsupervised word embeddings capture latent knowledge from materials science literature]
 
* 2024-11: [https://doi.org/10.1038/s41562-024-02046-9 Large language models surpass human experts in predicting neuroscience results]
 
* 2024-10: [https://github.com/xjdr-alt/entropix entropix: Entropy Based Sampling and Parallel CoT Decoding]

* 2024-10: [https://arxiv.org/abs/2410.09724 Taming Overconfidence in LLMs: Reward Calibration in RLHF]
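The "entropy based sampling" idea referenced above can be illustrated with a rough sketch: gate the decoding strategy on the entropy of the model's next-token distribution. This is not the entropix implementation; the thresholds, function names, and the "branch" signal for parallel decoding paths are all invented for illustration.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def token_entropy(logits):
    """Shannon entropy (in nats) of the next-token distribution."""
    p = softmax(logits)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_gated_sample(logits, low=0.5, high=3.0, rng=None):
    """Illustrative decoding rule (thresholds are made up):
    greedy when the model is confident (low entropy), temperature
    sampling when uncertain, and a 'branch' signal (to spawn a
    parallel decoding path) when entropy is very high."""
    rng = rng or np.random.default_rng()
    H = token_entropy(logits)
    if H < low:                        # confident: take the argmax
        return int(np.argmax(logits)), "greedy"
    p = softmax(logits / 1.2)          # uncertain: mild-temperature sampling
    tok = int(rng.choice(len(logits), p=p))
    return tok, ("branch" if H > high else "sample")
```

A peaked logits vector yields a greedy step, while a flat distribution over a large vocabulary crosses the high-entropy threshold and triggers branching.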
 
  
 
=Science Agents=

Revision as of 15:25, 13 December 2024

=AI Use-cases for Science=

==Literature==

===AI finding links in literature===

===Autonomous Ideation===

==Adapting LLMs to Science==

==AI/ML Methods tailored to Science==

===Symbolic Regression===

===Literature Discovery===

====Commercial====

==AI/ML Methods co-opted for Science==

===Mechanistic Interpretability===

Train a large model on science data, then apply mechanistic interpretability methods (e.g. sparse autoencoders, SAEs) to the model's feature/activation space.
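As a minimal sketch of the SAE step (not from this page; all dimensions and hyperparameters are illustrative, and the random matrix stands in for cached model activations), a sparse autoencoder fits a ReLU dictionary to activations with a reconstruction-plus-L1 objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for cached activations; in practice X would hold
# activations from a large model trained on science data.
N, d, m = 256, 16, 64            # samples, activation dim, dictionary size
X = rng.normal(size=(N, d))

W_e = rng.normal(scale=0.1, size=(d, m)); b_e = np.zeros(m)   # encoder
W_d = rng.normal(scale=0.1, size=(m, d)); b_d = np.zeros(d)   # decoder
lam, lr = 1e-3, 1e-2             # L1 sparsity weight, learning rate

def sae_loss(X):
    h = np.maximum(X @ W_e + b_e, 0.0)           # sparse ReLU features
    X_hat = h @ W_d + b_d                        # reconstruction
    rec = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))
    return rec + lam * np.mean(np.sum(np.abs(h), axis=1))

loss_before = sae_loss(X)
for _ in range(200):                             # plain gradient descent
    pre = X @ W_e + b_e
    h = np.maximum(pre, 0.0)
    X_hat = h @ W_d + b_d
    dXhat = (X_hat - X) / N                      # grad of reconstruction term
    gW_d, gb_d = h.T @ dXhat, dXhat.sum(0)
    dh = dXhat @ W_d.T + (lam / N) * np.sign(h)  # add L1 subgradient
    dpre = dh * (pre > 0)                        # ReLU gate
    gW_e, gb_e = X.T @ dpre, dpre.sum(0)
    W_e -= lr * gW_e; b_e -= lr * gb_e
    W_d -= lr * gW_d; b_d -= lr * gb_d
loss_after = sae_loss(X)
```

After training, the rows of the decoder matrix W_d are the learned dictionary directions, which are the candidate interpretable "features" one then inspects.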

===Uncertainty===

=Science Agents=

==AI Science Systems==

===Inorganic Materials Discovery===

===Chemistry===

=Impact of AI in Science=

=Related Tools=

==Data Visualization==

=See Also=