Difference between revisions of "Science Agents"

From GISAXS
 
* 2019-07: [https://doi.org/10.1038/s41586-019-1335-8 Unsupervised word embeddings capture latent knowledge from materials science literature]

* 2024-11: [https://doi.org/10.1038/s41562-024-02046-9 Large language models surpass human experts in predicting neuroscience results]
===(Pre) Generate Articles===

* 2022-12: [https://aclanthology.org/2022.emnlp-main.296/ Re3: Generating Longer Stories With Recursive Reprompting and Revision]

* 2023-01: Journalism: [https://journals.sagepub.com/doi/10.1177/10776958221149577 Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education]

* 2023-03: English essays: [https://journal.unnes.ac.id/sju/index.php/elt/article/view/64069 Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of ChatGPT in writing English essay]

* 2023-07: Science writing: [https://www.rbmojournal.com/article/S1472-6483(23)00219-5/fulltext Artificial intelligence in scientific writing: a friend or a foe?]

* 2024-02: Wikipedia style: [https://arxiv.org/abs/2402.14207 Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models]

* 2024-08: [https://arxiv.org/abs/2408.07055 LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs] ([https://github.com/THUDM/LongWriter code])

* 2024-08: Scientific papers: [https://arxiv.org/abs/2408.06292 The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery]

* 2024-09: PaperQA2: [https://paper.wikicrow.ai/ Language Models Achieve Superhuman Synthesis of Scientific Knowledge] ([https://x.com/SGRodriques/status/1833908643856818443 𝕏 post], [https://github.com/Future-House/paper-qa code])

* 2025-03: [https://arxiv.org/abs/2503.18866 Reasoning to Learn from Latent Thoughts]

* 2025-03: [https://arxiv.org/abs/2503.19065 WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation]
 
==Explanation==
===Mechanistic Interpretability===
Train a large model on science data, then apply mechanistic-interpretability methods (e.g. sparse autoencoders, SAEs) to the model's feature/activation space.
* [https://www.markov.bio/ Markov Bio]: [https://www.markov.bio/research/mech-interp-path-to-e2e-biology Through a Glass Darkly: Mechanistic Interpretability as the Bridge to End-to-End Biology] ([https://x.com/adamlewisgreen/status/1853206279499751531 quick description], [https://markovbio.github.io/biomedical-progress/ background info on recent bio progress])

* 2023-01: [https://arxiv.org/abs/2301.05062 Tracr: Compiled Transformers as a Laboratory for Interpretability] ([https://github.com/google-deepmind/tracr code])
* 2024-10: [https://arxiv.org/abs/2410.03334 An X-Ray Is Worth 15 Features: Sparse Autoencoders for Interpretable Radiology Report Generation]
 
* 2024-12: [https://www.arxiv.org/abs/2412.16247 Towards scientific discovery with dictionary learning: Extracting biological concepts from microscopy foundation models]

* 2024-12: [https://arxiv.org/abs/2412.12101 InterPLM: Discovering Interpretable Features in Protein Language Models via Sparse Autoencoders]

* 2025-01: [https://arxiv.org/abs/2501.00089 Insights on Galaxy Evolution from Interpretable Sparse Feature Networks]

* 2025-02: [https://www.biorxiv.org/content/10.1101/2025.02.06.636901v1 From Mechanistic Interpretability to Mechanistic Biology: Training, Evaluating, and Interpreting Sparse Autoencoders on Protein Language Models]
* 2025-02: [https://www.goodfire.ai/blog/interpreting-evo-2 Interpreting Evo 2: Arc Institute's Next-Generation Genomic Foundation Model]
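Several of the items above share one recipe: record a model's activations, then train a sparse autoencoder (SAE) over them so that individual learned features become inspectable. A minimal NumPy sketch of that recipe; the dimensions, hyperparameters, and the synthetic "activations" are illustrative assumptions, not taken from any one paper:

```python
# Minimal sparse-autoencoder (SAE) sketch: learn an overcomplete, sparse
# dictionary of features from a model's activation space.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_feat, n = 32, 128, 512      # activation dim, SAE features, samples
acts = rng.normal(size=(n, d_model))   # stand-in for recorded model activations

W_e = rng.normal(scale=0.1, size=(d_model, d_feat))  # encoder weights
b_e = np.zeros(d_feat)
W_d = rng.normal(scale=0.1, size=(d_feat, d_model))  # decoder weights
b_d = np.zeros(d_model)

lr, l1 = 0.1, 1e-3
losses = []
for step in range(100):
    h = np.maximum(acts @ W_e + b_e, 0.0)   # sparse feature code (ReLU)
    x_hat = h @ W_d + b_d                   # reconstruction of the activations
    err = x_hat - acts
    losses.append((err ** 2).mean() + l1 * np.abs(h).mean())

    # Manual full-batch gradients for the MSE + L1-sparsity objective
    g_xhat = 2.0 * err / err.size
    g_Wd = h.T @ g_xhat
    g_bd = g_xhat.sum(axis=0)
    g_h = g_xhat @ W_d.T + l1 * np.sign(h) / h.size
    g_h[h <= 0.0] = 0.0                     # gradient gated by the ReLU
    g_We = acts.T @ g_h
    g_be = g_h.sum(axis=0)
    W_e -= lr * g_We
    b_e -= lr * g_be
    W_d -= lr * g_Wd
    b_d -= lr * g_bd

sparsity = float((h > 0).mean())  # fraction of features active per sample
```

In the papers listed here, the activations would come from a science foundation model (a protein language model, a microscopy model, a genomic model), and individual SAE features would then be checked against known biological or physical concepts.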
  
 
===Uncertainty===
 
* 2022-11: [https://arxiv.org/abs/2211.09085 Galactica: A Large Language Model for Science]

* 2025-03: [https://arxiv.org/abs/2503.17604 OmniScience: A Domain-Specialized LLM for Scientific Reasoning and Discovery]
* 2025-03: Google [https://huggingface.co/collections/google/txgemma-release-67dd92e931c857d15e4d1e87 TxGemma] (2B, 9B, 27B): [https://developers.googleblog.com/en/introducing-txgemma-open-models-improving-therapeutics-development/ drug development]
  
 
=Impact of AI in Science=

Latest revision as of 10:40, 29 March 2025
