Difference between revisions of "Science Agents"

 
* 2024-02: Wikipedia style: [https://arxiv.org/abs/2402.14207 Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models]
 
* 2024-08: [https://arxiv.org/abs/2408.07055 LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs] ([https://github.com/THUDM/LongWriter code])
 
* 2024-08: Scientific papers: [https://arxiv.org/abs/2408.06292 The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery]
 
* 2024-09: PaperQA2: [https://paper.wikicrow.ai/ Language Models Achieve Superhuman Synthesis of Scientific Knowledge] ([https://x.com/SGRodriques/status/1833908643856818443 𝕏 post], [https://github.com/Future-House/paper-qa code])
 
* 2025-03: [https://arxiv.org/abs/2503.18866 Reasoning to Learn from Latent Thoughts]
 
* 2025-03: [https://arxiv.org/abs/2503.19065 WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation]
 
* 2025-04: [https://arxiv.org/abs/2504.13171 Sleep-time Compute: Beyond Inference Scaling at Test-time]
  
 
==Explanation==
 
 
* 2024-12: [https://arxiv.org/abs/2412.18161 VISION: A Modular AI Assistant for Natural Human-Instrument Interaction at Scientific User Facilities]
 
* 2025-01: [https://www.science.org/doi/10.1126/sciadv.adr4173 Large language models for human-machine collaborative particle accelerator tuning through natural language]
 
* 2025-04: [https://openreview.net/forum?id=iA9UN1dEgJ Operating Robotic Laboratories with Large Language Models and Teachable Agents]
  
 
==AI/ML Methods tailored to Science==
 
 
* [https://www.radical-ai.com/ Radical AI]: Material simulation/design
 
* [https://www.autoscience.ai/ Autoscience] ([https://www.autoscience.ai/blog/meet-carl-the-first-ai-system-to-produce-academically-peer-reviewed-research Carl])
 
====Bio====
* [https://www.bioptimus.com/ Bioptimus]
* [https://www.evolutionaryscale.ai/ EvolutionaryScale]
  
 
==AI/ML Methods in Science==
 
 
* 2024-10: [https://arxiv.org/abs/2410.12771 Open Materials 2024 (OMat24) Inorganic Materials Dataset and Models] ([https://github.com/FAIR-Chem/fairchem code], [https://huggingface.co/datasets/fairchem/OMAT24 datasets], [https://huggingface.co/fairchem/OMAT24 checkpoints], [https://ai.meta.com/blog/fair-news-segment-anything-2-1-meta-spirit-lm-layer-skip-salsa-sona/ blogpost])
 
* 2025-01: [https://www.nature.com/articles/s41586-025-08628-5 A generative model for inorganic materials design]
 
* 2025-04: [https://arxiv.org/abs/2504.14110 System of Agentic AI for the Discovery of Metal-Organic Frameworks]
  
 
===Chemistry===
 
 
==LLMs Optimized for Science==
 
* 2022-11: [https://arxiv.org/abs/2211.09085 Galactica: A Large Language Model for Science]
 
* 2025-02: [https://arxiv.org/abs/2502.13107 MatterChat: A Multi-Modal LLM for Material Science]
 
* 2025-03: [https://arxiv.org/abs/2503.17604 OmniScience: A Domain-Specialized LLM for Scientific Reasoning and Discovery]
 
* 2025-03: Google [https://huggingface.co/collections/google/txgemma-release-67dd92e931c857d15e4d1e87 TxGemma] (2B, 9B, 27B): [https://developers.googleblog.com/en/introducing-txgemma-open-models-improving-therapeutics-development/ drug development]
 

Latest revision as of 09:59, 10 May 2025

Section outline of the latest revision:
* AI Use-cases for Science
* Literature
* LLM extract data from papers
* AI finding links in literature
* (Pre) Generate Articles
* Explanation
* Autonomous Ideation
* Adapting LLMs to Science
* AI/LLM Control of Scientific Instruments/Facilities
* AI/ML Methods tailored to Science
* Regression (Data Fitting)
* Tabular Classification/Regression
* Symbolic Regression
* Literature Discovery
* Commercial
* Bio
* AI/ML Methods in Science
* Chemistry
* Biology
* Medicine (see AI_Agents#Medicine)
* Successes
* AI/ML Methods co-opted for Science
* Mechanistic Interpretability: train a large model on science data, then apply mechanistic interpretability methods (e.g., sparse autoencoders, SAEs) to the feature/activation space; a minimal sketch follows this outline.
* Uncertainty
* Science Benchmarks
* Science Agents
* Reviews
* Specific
* Science Multi-Agent Setups
* AI Science Systems
* Inorganic Materials Discovery
* Chemistry
* LLMs Optimized for Science
* Impact of AI in Science
* Related Tools
* Literature Search
* Data Visualization
* Generative
* Chemistry
* Science Datasets
* See Also
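
The mechanistic-interpretability note in the outline above only states the recipe at a high level. As a rough illustration (not taken from any of the cited works), the following is a minimal sketch of training a sparse autoencoder on cached activations, assuming PyTorch and a hypothetical tensor of activations collected by hooking one layer of the science-trained model; the names (SparseAutoencoder, train_sae) and hyperparameters are placeholders.

<pre>
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder; an L1 penalty on the code encourages sparse, interpretable features."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = F.relu(self.encoder(x))   # sparse feature activations
        x_hat = self.decoder(z)       # reconstruction of the original activation
        return x_hat, z

def train_sae(activations, d_hidden=4096, l1_coeff=1e-3, epochs=5, lr=1e-4, batch_size=256):
    """activations: (n_samples, d_model) tensor of hidden activations
    cached from the science-trained model (placeholder data below)."""
    d_model = activations.shape[1]
    sae = SparseAutoencoder(d_model, d_hidden)
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(activations),
        batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for (batch,) in loader:
            x_hat, z = sae(batch)
            # Reconstruction loss plus sparsity penalty on the feature code
            loss = F.mse_loss(x_hat, batch) + l1_coeff * z.abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return sae

if __name__ == "__main__":
    # Placeholder activations; in practice these would come from forward hooks
    # on a chosen layer of the model trained on science data.
    acts = torch.randn(10_000, 768)
    sae = train_sae(acts, epochs=1)
    # Columns of sae.decoder.weight are candidate "feature directions"
    # to inspect against scientific concepts in the training corpus.
</pre>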