===Mechanistic Interpretability===

Train a large model on science data, then apply mechanistic interpretability methods (e.g. sparse autoencoders, SAEs) to the model's feature/activation space (a short code sketch is given after the list below).
 
* Mechanistic interpretability for protein language models ([https://interprot.com/ visualizer], [https://github.com/etowahadams/interprot/tree/main code], [https://huggingface.co/liambai/InterProt-ESM2-SAEs SAE])
* [https://www.markov.bio/ Markov Bio]: [https://www.markov.bio/research/mech-interp-path-to-e2e-biology Through a Glass Darkly: Mechanistic Interpretability as the Bridge to End-to-End Biology] ([https://x.com/adamlewisgreen/status/1853206279499751531 quick description], [https://markovbio.github.io/biomedical-progress/ background info on recent bio progress])
* 2023-01: [https://arxiv.org/abs/2301.05062 Tracr: Compiled Transformers as a Laboratory for Interpretability] ([https://github.com/google-deepmind/tracr code])
* 2024-12: [https://www.arxiv.org/abs/2412.16247 Towards scientific discovery with dictionary learning: Extracting biological concepts from microscopy foundation models]
* 2024-12: [https://arxiv.org/abs/2412.12101 InterPLM: Discovering Interpretable Features in Protein Language Models via Sparse Autoencoders]
* 2025-01: [https://arxiv.org/abs/2501.00089 Insights on Galaxy Evolution from Interpretable Sparse Feature Networks]
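
The recipe above can be illustrated with a short, self-contained sketch. This is not code from any of the papers listed: the activations are random stand-ins for hidden states that would, in practice, be collected from a trained science model (e.g. a protein language model), and the dictionary size and sparsity penalty are arbitrary choices.

<syntaxhighlight lang="python">
# Minimal sparse-autoencoder (SAE) sketch for probing a model's activation space.
# The "activations" below are random stand-ins; in practice they would be hidden
# states collected from a trained science foundation model.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)  # activations -> dictionary features
        self.decoder = nn.Linear(d_dict, d_model)  # dictionary features -> reconstruction

    def forward(self, x):
        f = torch.relu(self.encoder(x))   # non-negative, ideally sparse feature activations
        x_hat = self.decoder(f)
        return x_hat, f

d_model, d_dict, l1_coeff = 512, 4096, 1e-3
sae = SparseAutoencoder(d_model, d_dict)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

# Stand-in for a batch of model activations, shape (batch, d_model).
activations = torch.randn(1024, d_model)

for step in range(100):
    x_hat, f = sae(activations)
    recon_loss = (x_hat - activations).pow(2).mean()  # reconstruct the activations
    sparsity_loss = f.abs().mean()                    # L1 penalty encourages sparse features
    loss = recon_loss + l1_coeff * sparsity_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Features that activate strongly for particular inputs can then be inspected and
# assigned candidate scientific interpretations.
</syntaxhighlight>

In the works above, the analogous workflow would be to collect activations from a trained foundation model (protein language model, microscopy model, etc.), train the SAE on them, and then examine which inputs most strongly activate each learned feature in order to assign candidate interpretations.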
  
 
===Uncertainty===
 

