Science Agents
Contents
- 1 AI Use-cases for Science
- 2 Science Benchmarks
- 3 Science Agents
- 4 AI Science Systems
- 5 Impact of AI in Science
- 6 Related Tools
- 7 See Also
AI Use-cases for Science
Literature
- alphaXiv | Explore: Understand arXiv papers
LLM extract data from papers
- 2024: From text to insight: large language models for chemical data extraction
AI finding links in literature
- 2019-07: Unsupervised word embeddings capture latent knowledge from materials science literature
- 2024-11: Large language models surpass human experts in predicting neuroscience results
(Pre) Generate Articles
- 2022-12: Re3: Generating Longer Stories With Recursive Reprompting and Revision
- 2023-03: English essays: Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of ChatGPT in writing English essay
- 2023-01: Journalism: Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education
- 2023-07: Science writing: Artificial intelligence in scientific writing: a friend or a foe?
- 2024-02: Wikipedia style: Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models
- 2024-02: LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs (code)
- 2024-08: Scientific papers: The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
- 2024-09: PaperQA2: Language Models Achieve Superhuman Synthesis of Scientific Knowledge (𝕏 post, code)
- 2025-03: Reasoning to Learn from Latent Thoughts
- 2025-03: WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation
Explanation
- TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding (preprint)
Autonomous Ideation
- 2024-09: Mining Causality: AI-Assisted Search for Instrumental Variables
- 2024-12: Thinking Fast and Laterally: Multi-Agentic Approach for Reasoning about Uncertain Emerging Events
- 2024-12: LLMs can realize combinatorial creativity: generating creative ideas via LLMs for scientific research
- 2024-12: LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context
- 2025-02: Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks
Adapting LLMs to Science
- 2023-06: Domain-specific chatbots for science using embeddings (a minimal retrieval sketch follows this list)
- 2024-10: Personalization of Large Language Models: A Survey
- 2024-11: Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation
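The embeddings-based grounding idea in the first entry above boils down to: embed document chunks once, embed the user query, retrieve the nearest chunks, and place them in the prompt. Below is a minimal sketch of that loop; the chunking, the `all-MiniLM-L6-v2` embedding model, and the prompt template are illustrative assumptions, not choices taken from the paper.

```python
# Sketch of embeddings-based retrieval for a domain-specific science chatbot.
# Assumptions: documents are pre-chunked strings; the embedding model name and
# the "put retrieved chunks into the prompt" strategy are illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text-embedding model could be used

def build_index(chunks):
    """Embed all document chunks once; normalized vectors make dot product = cosine similarity."""
    emb = model.encode(chunks, normalize_embeddings=True)
    return np.asarray(emb)

def retrieve(query, chunks, index, k=3):
    """Return the k chunks most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = index @ q
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def make_prompt(query, chunks, index):
    """Assemble a grounded prompt from the retrieved chunks."""
    context = "\n---\n".join(retrieve(query, chunks, index))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

# Usage: prompt = make_prompt("What is the glass transition of PS thin films?", chunks, build_index(chunks))
# then pass `prompt` to any chat/completions LLM endpoint.
```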
AI/ML Methods tailored to Science
Regression (Data Fitting)
- 2024-06: Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data: training on (x, y) pairs lets the model infer the underlying function (define it in code, invert it, compose it); see the sketch after this list
- 2024-12: OmniPred: Language Models as Universal Regressors
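Both entries above treat regression as a text task: serialize (x, y) pairs as strings, let the model infer the latent function, and read predictions off its completions. A rough sketch of that serialization is below; the prompt format and rounding are assumptions for illustration, not the schemes used by OmniPred or the Connecting-the-Dots paper.

```python
# Sketch: turning numeric (x, y) pairs into text so a language model can act as a
# regressor. The prompt format and rounding are illustrative assumptions only.
import math

def make_examples(f, xs, precision=3):
    """Serialize (x, f(x)) pairs as 'x -> y' lines for few-shot prompting or fine-tuning."""
    return [f"x = {round(x, precision)} -> y = {round(f(x), precision)}" for x in xs]

def build_prompt(examples, x_query, precision=3):
    """Few-shot prompt asking the model to continue the pattern at a new x."""
    return "\n".join(examples) + f"\nx = {round(x_query, precision)} -> y ="

if __name__ == "__main__":
    latent_fn = lambda x: 2.0 * math.sin(x) + 0.5 * x   # hidden function the model must infer
    xs = [0.1 * i for i in range(30)]
    prompt = build_prompt(make_examples(latent_fn, xs), x_query=3.3)
    print(prompt)  # send to an LLM; parse the numeric completion as the prediction
```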
Tabular Classification/Regression
Symbolic Regression
Literature Discovery
- FutureHouse
- Lumina
- Automated-AI-Web-Researcher-Ollama
- 2025-01: Search-o1: Agentic Search-Enhanced Large Reasoning Models (project, code)
Commercial
- Sakana AI
- Cusp AI: Materials/AI
- Lila AI: Life sciences
- Radical AI: Material simulation/design
- Autoscience (Carl)
AI/ML Methods in Science
Chemistry
- 2025-01: Large language models for reticular chemistry
- 2025-02: Image-based generation for molecule design with SketchMol
- 2025-02: Large language models for scientific discovery in molecular property prediction
- 2025-03: Vant AI Neo-1: atomistic foundation model (small molecules, proteins, etc.)
Biology
- 2018: AlphaFold
- 2021-07: AlphaFold 2
- 2024-05: AlphaFold 3
- 2023-03: Evolutionary-scale prediction of atomic-level protein structure with a language model (ESMFold)
- 2023-11: Illuminating protein space with a programmable generative model (Chroma)
- 2024-11: Sequence modeling and design from molecular to genome scale with Evo (Evo)
- 2025-01: Targeting protein–ligand neosurfaces with a generalizable deep learning tool
- 2025-01: Simulating 500 million years of evolution with a language model (ESM 3 model)
- 2025-02: Genome modeling and design across all domains of life with Evo 2
- 2025-02: Exploring the structural changes driving protein function with BioEmu-1
- 2025-02: Protein Large Language Models: A Comprehensive Survey
- 2025-03: Vant AI Neo-1: atomistic foundation model (small molecules, proteins, etc.)
- 2025-03: Lyra: An Efficient and Expressive Subquadratic Architecture for Modeling Biological Sequences
Successes
- 2025-02: Site-Decorated Model for Unconventional Frustrated Magnets: Ultranarrow Phase Crossover and Spin Reversal Transition
AI/ML Methods co-opted for Science
Mechanistic Interpretability
Train a large model on science data, then apply mechanistic interpretability methods (e.g. sparse autoencoders, SAEs) to the resulting feature/activation space, as sketched below.
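A minimal sketch of the SAE step, assuming activations have already been extracted from the science model into a single tensor; the dictionary size, L1 penalty, and training loop are illustrative defaults, not settings from the papers listed below.

```python
# Minimal sparse-autoencoder (SAE) sketch for probing a science model's activations.
# Assumptions (not from any cited paper): activations are a pre-extracted
# [n_samples, d_model] tensor; dictionary size and L1 weight are illustrative.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))  # sparse feature activations
        recon = self.decoder(features)          # reconstruction of the input
        return recon, features

def train_sae(activations, d_dict=4096, l1_weight=1e-3, epochs=10, lr=1e-3):
    """Fit an SAE with reconstruction loss plus an L1 sparsity penalty on the features."""
    sae = SparseAutoencoder(activations.shape[1], d_dict)
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, feats = sae(activations)
        loss = ((recon - activations) ** 2).mean() + l1_weight * feats.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sae

# Usage: feats = torch.relu(sae.encoder(new_activations)) gives sparse features whose
# top-activating inputs (sequences, images, spectra) can then be inspected by hand.
```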
- Mechanistic interpretability for protein language models (visualizer, code, SAE)
- Markov Bio: Through a Glass Darkly: Mechanistic Interpretability as the Bridge to End-to-End Biology (quick description, background info on recent bio progress)
- 2023-01: Tracr: Compiled Transformers as a Laboratory for Interpretability (code)
- 2024-12: Towards scientific discovery with dictionary learning: Extracting biological concepts from microscopy foundation models
- 2024-12: InterPLM: Discovering Interpretable Features in Protein Language Models via Sparse Autoencoders
- 2025-01: Insights on Galaxy Evolution from Interpretable Sparse Feature Networks
- 2025-02: From Mechanistic Interpretability to Mechanistic Biology: Training, Evaluating, and Interpreting Sparse Autoencoders on Protein Language Models
Uncertainty
- 2024-10: entropix: Entropy Based Sampling and Parallel CoT Decoding (an entropy-computation sketch follows this list)
- 2024-10: Taming Overconfidence in LLMs: Reward Calibration in RLHF
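The entropix entry above centers on one cheap quantity: the entropy of the next-token distribution, computed from the logits at each decoding step. The sketch below shows that computation plus a toy decision rule; the thresholds and the "branch" action are assumptions for illustration, not the actual entropix heuristics.

```python
# Sketch: entropy of the next-token distribution as a signal for adaptive decoding.
# The 0.1 / 3.0 thresholds and the "branch" action are illustrative assumptions,
# not the actual heuristics used by entropix.
import torch
import torch.nn.functional as F

def next_token_entropy(logits: torch.Tensor) -> float:
    """Shannon entropy (in nats) of the softmax distribution over the vocabulary."""
    log_p = F.log_softmax(logits, dim=-1)
    return float(-(log_p.exp() * log_p).sum())

def choose_action(logits: torch.Tensor, low: float = 0.1, high: float = 3.0) -> str:
    """Toy decision rule: confident -> greedy; uncertain -> explore parallel continuations."""
    h = next_token_entropy(logits)
    if h < low:
        return "greedy"   # model is confident: take the argmax token
    if h > high:
        return "branch"   # model is uncertain: sample several branches / inject CoT
    return "sample"       # in between: ordinary temperature sampling

# Usage: logits = model(input_ids).logits[0, -1]; action = choose_action(logits)
```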
Science Benchmarks
- 2024-07: SciCode: A Research Coding Benchmark Curated by Scientists (project)
- 2024-11: AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions (code)
- 2024-12: LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context
- 2025-01: Humanity's Last Exam
- ScienceAgentBench
- 2025-02: EAIRA: Establishing a Methodology for Evaluating AI Models as Scientific Research Assistants
- 2025-03: BixBench: Novel hypotheses (accept/reject)
Science Agents
Reviews
- 2024-10: Empowering biomedical discovery with AI agents
- 2025-01: A review of large language models and autonomous agents in chemistry (github)
Specific
- 2024-01-13: ORGANA: A Robotic Assistant for Automated Chemistry Experimentation and Characterization (video)
- 2024-06-19: LLMatDesign: Autonomous Materials Discovery with Large Language Models
- 2024-08-12: Sakana AI: AI Scientist; The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery (code)
- 2024-09-09: SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning (code)
- 2024-09-11: PaperQA2: Language Models Achieve Superhuman Synthesis of Scientific Knowledge (𝕏 post, code)
- 2024-10-17: Rapid and Automated Alloy Design with Graph Neural Network-Powered LLM-Driven Multi-Agent Systems
- 2024-10-28: Large Language Model-Guided Prediction Toward Quantum Materials Synthesis
- 2024-12-06: The Virtual Lab: AI Agents Design New SARS-CoV-2 Nanobodies with Experimental Validation (writeup: Virtual lab powered by ‘AI scientists’ super-charges biomedical research: Could human–AI collaborations be the future of interdisciplinary studies?)
- 2024-12-30: Aviary: training language agents on challenging scientific tasks
- See also: AI Agents > Deep Research
Science Multi-Agent Setups
AI Science Systems
- 2025-01: Dolphin: Closed-loop Open-ended Auto-research through Thinking, Practice, and Feedback
- 2025-02: Towards an AI co-scientist (Google blog post: Accelerating scientific breakthroughs with an AI co-scientist)
Inorganic Materials Discovery
- 2023-11: Scaling deep learning for materials discovery
- 2023-11: An autonomous laboratory for the accelerated synthesis of novel materials
- 2024-10: Open Materials 2024 (OMat24) Inorganic Materials Dataset and Models (code, datasets, checkpoints, blogpost)
- 2025-01: A generative model for inorganic materials design
Chemistry
- 2023-12: Autonomous chemical research with large language models (Coscientist)
- 2024-11: An automatic end-to-end chemical synthesis development platform powered by large language models
LLMs Optimized for Science
- 2022-11: Galactica: A Large Language Model for Science
- 2025-03: OmniScience: A Domain-Specialized LLM for Scientific Reasoning and Discovery
- 2025-03: Google TxGemma (2B, 9B, 27B): drug development
Impact of AI in Science
- 2024-11: Artificial Intelligence, Scientific Discovery, and Product Innovation
- 2025-02: Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation
Related Tools
Literature Search
Data Visualization
- 2024-10: Microsoft Data Formulator: Create Rich Visualization with AI iteratively (video, code)
- Julius AI: Analyze your data with computational AI
Generative
- 2025-03: StarVector 1B, 8B: text or image to SVG
Chemistry
- 2025-03: Rxn-INSIGHT: fast chemical reaction analysis using bond-electron matrices (docs)
See Also
- AI agents
- Nanobot.chat: Intelligent AI for the labnetwork @ mtl.mit.edu forum