=AI Use-cases for Science=

==Literature==

* [https://www.alphaxiv.org/explore alphaXiv | Explore]: Understand arXiv papers

===LLM extract data from papers===

* 2024-14: [https://pubs.rsc.org/en/content/articlelanding/2025/cs/d4cs00913d From text to insight: large language models for chemical data extraction]

===AI finding links in literature===

* 2019-07: [https://doi.org/10.1038/s41586-019-1335-8 Unsupervised word embeddings capture latent knowledge from materials science literature]

* 2024-11: [https://doi.org/10.1038/s41562-024-02046-9 Large language models surpass human experts in predicting neuroscience results]

===(Pre) Generate Articles===

* 2022-12: [https://aclanthology.org/2022.emnlp-main.296/ Re3: Generating Longer Stories With Recursive Reprompting and Revision]

* 2023-03: English essays: [https://journal.unnes.ac.id/sju/index.php/elt/article/view/64069 Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of ChatGPT in writing English essay]

* 2023-01: Journalism: [https://journals.sagepub.com/doi/10.1177/10776958221149577 Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education]

* 2023-07: Science writing: [https://www.rbmojournal.com/article/S1472-6483(23)00219-5/fulltext Artificial intelligence in scientific writing: a friend or a foe?]

* 2024-02: Wikipedia style: [https://arxiv.org/abs/2402.14207 Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models]

* 2024-08: [https://arxiv.org/abs/2408.07055 LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs] ([https://github.com/THUDM/LongWriter code])

* 2024-08: Scientific papers: [https://arxiv.org/abs/2408.06292 The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery]

* 2024-09: PaperQA2: [https://paper.wikicrow.ai/ Language Models Achieve Superhuman Synthesis of Scientific Knowledge] ([https://x.com/SGRodriques/status/1833908643856818443 𝕏 post], [https://github.com/Future-House/paper-qa code])

* 2025-03: [https://arxiv.org/abs/2503.18866 Reasoning to Learn from Latent Thoughts]

* 2025-03: [https://arxiv.org/abs/2503.19065 WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation]

* 2025-04: [https://arxiv.org/abs/2504.13171 Sleep-time Compute: Beyond Inference Scaling at Test-time]

==Explanation==

* 2025-02: [https://tiger-ai-lab.github.io/TheoremExplainAgent/ TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding] ([https://arxiv.org/abs/2502.19400 preprint])

* 2025-04: [https://arxiv.org/abs/2504.02822 Do Two AI Scientists Agree?]
  
 
==Autonomous Ideation==

* 2024-04: [https://arxiv.org/abs/2404.07738 ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models]

* 2024-09: [https://arxiv.org/abs/2409.14202 Mining Causality: AI-Assisted Search for Instrumental Variables]

* 2024-12: [https://arxiv.org/abs/2412.07977 Thinking Fast and Laterally: Multi-Agentic Approach for Reasoning about Uncertain Emerging Events]

* 2024-12: [https://arxiv.org/abs/2412.14141 LLMs can realize combinatorial creativity: generating creative ideas via LLMs for scientific research]

* 2024-12: [https://arxiv.org/abs/2412.17596 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context]

* 2025-01: [https://arxiv.org/abs/2501.13299 Hypothesis Generation for Materials Discovery and Design Using Goal-Driven and Constraint-Guided LLM Agents]

* 2025-02: [https://arxiv.org/abs/2502.13025 Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks]

* 2025-06: [https://arxiv.org/abs/2506.00794 Predicting Empirical AI Research Outcomes with Language Models]

* 2025-06: [https://arxiv.org/abs/2506.20803 The Ideation-Execution Gap: Execution Outcomes of LLM-Generated versus Human Research Ideas]

==Adapting LLMs to Science==

* 2024-10: [https://arxiv.org/abs/2411.00027 Personalization of Large Language Models: A Survey]

* 2024-11: [https://arxiv.org/abs/2411.00412 Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation]

==AI/LLM Control of Scientific Instruments/Facilities==

* 2023-12: [https://www.nature.com/articles/s41524-024-01423-2 Opportunities for retrieval and tool augmented large language models in scientific facilities]

* 2023-12: [https://arxiv.org/abs/2312.17180 Virtual Scientific Companion for Synchrotron Beamlines: A Prototype]

* 2023-12: [https://www.nature.com/articles/s41586-023-06792-0 Autonomous chemical research with large language models]

* 2024-01: [https://iopscience.iop.org/article/10.1088/2632-2153/ad52e9 Synergizing Human Expertise and AI Efficiency with Language Model for Microscopy Operation and Automated Experiment Design]

* 2024-06: [https://pubs.rsc.org/en/content/articlelanding/2025/dd/d4dd00143e From Text to Test: AI-Generated Control Software for Materials Science Instruments]

* 2024-12: [https://arxiv.org/abs/2412.18161 VISION: A Modular AI Assistant for Natural Human-Instrument Interaction at Scientific User Facilities]

* 2025-01: [https://www.science.org/doi/10.1126/sciadv.adr4173 Large language models for human-machine collaborative particle accelerator tuning through natural language]

* 2025-04: [https://openreview.net/forum?id=iA9UN1dEgJ Operating Robotic Laboratories with Large Language Models and Teachable Agents]

==AI/ML Methods tailored to Science==
  
 
===Commercial===

* [https://sakana.ai/ai-scientist/ Sakana AI]

* [https://www.cusp.ai/ Cusp AI]: Materials/AI

* [https://www.lila.ai/ Lila AI]: Life sciences

* [https://www.radical-ai.com/ Radical AI]: Material simulation/design

* [https://www.autoscience.ai/ Autoscience] ([https://www.autoscience.ai/blog/meet-carl-the-first-ai-system-to-produce-academically-peer-reviewed-research Carl])

====Bio====

* [https://www.bioptimus.com/ Bioptimus]

* [https://www.evolutionaryscale.ai/ EvolutionaryScale]

==AI/ML Methods in Science==

===Imaging===

* 2025-05: [https://arxiv.org/abs/2505.08176 Behind the Noise: Conformal Quantile Regression Reveals Emergent Representations] (blog: [https://phzwart.github.io/behindthenoise/ Behind the Noise])

===Materials===

* 2024-12: [https://www.nature.com/articles/s41467-024-54639-7 Crystal structure generation with autoregressive large language modeling]

* 2025-03: [https://arxiv.org/abs/2503.03965 All-atom Diffusion Transformers: Unified generative modelling of molecules and materials]

===Chemistry===

* 2025-01: [https://www.nature.com/articles/s41578-025-00772-8 Large language models for reticular chemistry]

* 2025-02: [https://www.nature.com/articles/s42256-025-00982-3 Image-based generation for molecule design with SketchMol]

* 2025-02: [https://www.nature.com/articles/s42256-025-00994-z Large language models for scientific discovery in molecular property prediction]

* [https://x.com/vant_ai/status/1903070297991110657 2025-03]: [https://www.vant.ai/ Vant AI] [https://www.vant.ai/neo-1 Neo-1]: atomistic foundation model (small molecules, proteins, etc.)

* 2025-07: [https://arxiv.org/abs/2507.07456 General purpose models for the chemical sciences]

===Biology===

* 2018: [https://alphafold.ebi.ac.uk/ AlphaFold]

* 2021-07: [https://www.nature.com/articles/s41586-021-03819-2 AlphaFold 2]

* 2024-05: [https://www.nature.com/articles/s41586-024-07487-w AlphaFold 3]

* 2023-03: [https://www.science.org/doi/10.1126/science.ade2574 Evolutionary-scale prediction of atomic-level protein structure with a language model] ([https://esmatlas.com/resources?action=fold ESMFold])

* 2023-11: [https://www.nature.com/articles/s41586-023-06728-8 Illuminating protein space with a programmable generative model] (Chroma)

* 2024-11: [https://www.science.org/doi/10.1126/science.ado9336 Sequence modeling and design from molecular to genome scale with Evo] (Evo)

* 2025-01: [https://www.nature.com/articles/s41586-024-08435-4 Targeting protein–ligand neosurfaces with a generalizable deep learning tool]

* 2025-01: [https://www.science.org/doi/10.1126/science.ads0018 Simulating 500 million years of evolution with a language model] ([https://github.com/evolutionaryscale/esm ESM] 3 model)

* 2025-02: [https://arcinstitute.org/manuscripts/Evo2 Genome modeling and design across all domains of life with Evo 2]

* 2025-02: [https://www.microsoft.com/en-us/research/blog/exploring-the-structural-changes-driving-protein-function-with-bioemu-1/ Exploring the structural changes driving protein function with BioEmu-1]

* 2025-02: [https://arxiv.org/pdf/2502.18449 Protein Large Language Models: A Comprehensive Survey]

* [https://x.com/vant_ai/status/1903070297991110657 2025-03]: [https://www.vant.ai/ Vant AI] [https://www.vant.ai/neo-1 Neo-1]: atomistic foundation model (small molecules, proteins, etc.)

* 2025-03: [https://arxiv.org/abs/2503.16351 Lyra: An Efficient and Expressive Subquadratic Architecture for Modeling Biological Sequences]

===Medicine===

See: [[AI_Agents#Medicine]]

===Successes===

* 2025-02: [https://arxiv.org/abs/2502.11270 Site-Decorated Model for Unconventional Frustrated Magnets: Ultranarrow Phase Crossover and Spin Reversal Transition]

==AI/ML Methods co-opted for Science==

===Mechanistic Interpretability===

Train a large model on science data, then apply mechanistic interpretability (e.g. sparse autoencoders, SAEs) to the feature/activation space.
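
A minimal, illustrative sketch of this recipe (Python/PyTorch; the layer sizes, L1 penalty, and activation file below are hypothetical placeholders, not taken from any of the papers listed here): cache activations from a model trained on science data, then fit a sparse autoencoder whose learned features can be inspected.

<pre>
# Hypothetical sketch: train a sparse autoencoder (SAE) on cached activations.
# Sizes, penalty, and file names are placeholders, not from any cited work.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(f), f        # reconstruction + features

def train_sae(acts, d_hidden=8192, l1_coeff=1e-3, lr=1e-4, epochs=10, batch=1024):
    """acts: (N, d_model) tensor of activations cached from the science model."""
    sae = SparseAutoencoder(acts.shape[1], d_hidden)
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        for i in range(0, acts.shape[0], batch):
            x = acts[i:i + batch]
            recon, f = sae(x)
            # Reconstruction loss plus L1 sparsity penalty on the features
            loss = ((recon - x) ** 2).mean() + l1_coeff * f.abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return sae

# Usage (hypothetical file): acts = torch.load("activations.pt"); sae = train_sae(acts)
</pre>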
 
* [https://www.markov.bio/ Markov Bio]: [https://www.markov.bio/research/mech-interp-path-to-e2e-biology Through a Glass Darkly: Mechanistic Interpretability as the Bridge to End-to-End Biology] ([https://x.com/adamlewisgreen/status/1853206279499751531 quick description], [https://markovbio.github.io/biomedical-progress/ background info on recent bio progress])

* 2023-01: [https://arxiv.org/abs/2301.05062 Tracr: Compiled Transformers as a Laboratory for Interpretability] ([https://github.com/google-deepmind/tracr code])

* 2024-10: [https://arxiv.org/abs/2410.03334 An X-Ray Is Worth 15 Features: Sparse Autoencoders for Interpretable Radiology Report Generation]

* 2024-12: [https://www.arxiv.org/abs/2412.16247 Towards scientific discovery with dictionary learning: Extracting biological concepts from microscopy foundation models]

* 2024-12: [https://arxiv.org/abs/2412.12101 InterPLM: Discovering Interpretable Features in Protein Language Models via Sparse Autoencoders]

* 2025-01: [https://arxiv.org/abs/2501.00089 Insights on Galaxy Evolution from Interpretable Sparse Feature Networks]

* 2025-02: [https://www.biorxiv.org/content/10.1101/2025.02.06.636901v1 From Mechanistic Interpretability to Mechanistic Biology: Training, Evaluating, and Interpreting Sparse Autoencoders on Protein Language Models]

* 2025-02: [https://www.goodfire.ai/blog/interpreting-evo-2 Interpreting Evo 2: Arc Institute's Next-Generation Genomic Foundation Model]

===Uncertainty===

==Science Benchmarks==
 
* 2025-01: [https://agi.safe.ai/ Humanity's Last Exam]

* [https://github.com/OSU-NLP-Group/ScienceAgentBench ScienceAgentBench]

* 2025-02: [https://arxiv.org/abs/2502.20309 EAIRA: Establishing a Methodology for Evaluating AI Models as Scientific Research Assistants]

* 2025-03: [https://huggingface.co/datasets/futurehouse/BixBench BixBench]: Novel hypotheses (accept/reject)

* 2025-04: [https://research.google/blog/evaluating-progress-of-llms-on-scientific-problem-solving/ Google: Evaluating progress of LLMs on scientific problem-solving]

** 2025-03: [https://arxiv.org/abs/2503.13517 CURIE: Evaluating LLMs On Multitask Scientific Long Context Understanding and Reasoning]

** 2024-07: [https://arxiv.org/abs/2407.09413 SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers]

** 2024-10: [https://neurips.cc/virtual/2024/98540 FEABench: Evaluating Language Models on Real World Physics Reasoning Ability]

=Science Agents=

==Reviews==
 
* 2024-10: [https://www.cell.com/cell/fulltext/S0092-8674(24)01070-5?target=_blank Empowering biomedical discovery with AI agents]

* 2025-01: [https://pubs.rsc.org/en/content/articlehtml/2024/sc/d4sc03921a A review of large language models and autonomous agents in chemistry] ([https://github.com/ur-whitelab/LLMs-in-science github])

* 2025-07: [https://arxiv.org/abs/2507.01903 AI4Research: A Survey of Artificial Intelligence for Scientific Research]

==Specific==
 
* 2024-10-28: [https://arxiv.org/abs/2410.20976 Large Language Model-Guided Prediction Toward Quantum Materials Synthesis]

* 2024-12-06: [https://www.biorxiv.org/content/10.1101/2024.11.11.623004v1 The Virtual Lab: AI Agents Design New SARS-CoV-2 Nanobodies with Experimental Validation] (writeup: [https://www.nature.com/articles/d41586-024-01684-3 Virtual lab powered by ‘AI scientists’ super-charges biomedical research: Could human–AI collaborations be the future of interdisciplinary studies?])

* 2024-12-30: [https://arxiv.org/abs/2412.21154 Aviary: training language agents on challenging scientific tasks]

* See also: [[AI_Agents#Deep_Research|AI Agents > Deep Research]]

* 2025-04-08: Sakana: [https://pub.sakana.ai/ai-scientist-v2/paper/paper.pdf The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search] ([https://github.com/SakanaAI/AI-Scientist-v2 code])

* 2025-07: [https://arxiv.org/abs/2507.14267 DREAMS: Density Functional Theory Based Research Engine for Agentic Materials Simulation]

==Science Multi-Agent Setups==
 
* 2025-01: [https://arxiv.org/abs/2501.04227 Agent Laboratory: Using LLM Agents as Research Assistants]

* 2025-04: [https://www.nature.com/articles/s41551-025-01363-2 Coordinated AI agents for advancing healthcare] ([https://www.nature.com/articles/s41551-025-01363-2.epdf?sharing_token=CIYP3J8LZE4BX31fV3WxUdRgN0jAjWel9jnR3ZoTv0O9iD-yhgqzRaz_7VASayWRePPhWDD2xFyfuOpSXbdPaOtt7oH4nfXo7telALzNwY3V1p9SxoqBEJy2OuaJ_cA35-CYQC1XgjCNTZUw46dh1KX-Dj8e7-1Vk_RlZKFLrc8%3D pdf])

=AI Science Systems=

* 2025-01: [https://arxiv.org/abs/2501.03916 Dolphin: Closed-loop Open-ended Auto-research through Thinking, Practice, and Feedback]

* 2025-01: [https://arxiv.org/abs/2501.13299 Hypothesis Generation for Materials Discovery and Design Using Goal-Driven and Constraint-Guided LLM Agents]

* 2025-02: [https://storage.googleapis.com/coscientist_paper/ai_coscientist.pdf Towards an AI co-scientist] (Google blog post: [https://research.google/blog/accelerating-scientific-breakthroughs-with-an-ai-co-scientist/ Accelerating scientific breakthroughs with an AI co-scientist])

* 2025-06: [https://zenodo.org/records/15693353 The Discovery Engine]

** 2025-07: [https://arxiv.org/abs/2507.00964 Benchmarking the Discovery Engine] ([https://www.leap-labs.com/blog/how-we-replicated-five-peer-reviewed-papers-in-five-hours blog])

===Inorganic Materials Discovery===
 
* 2023-11: [https://doi.org/10.1038/s41586-023-06735-9 Scaling deep learning for materials discovery]

* 2023-11: [https://doi.org/10.1038/s41586-023-06734-w An autonomous laboratory for the accelerated synthesis of novel materials]

* 2024-09: [https://arxiv.org/abs/2409.00135 HoneyComb: A Flexible LLM-Based Agent System for Materials Science]

* 2024-10: [https://arxiv.org/abs/2410.12771 Open Materials 2024 (OMat24) Inorganic Materials Dataset and Models] ([https://github.com/FAIR-Chem/fairchem code], [https://huggingface.co/datasets/fairchem/OMAT24 datasets], [https://huggingface.co/fairchem/OMAT24 checkpoints], [https://ai.meta.com/blog/fair-news-segment-anything-2-1-meta-spirit-lm-layer-skip-salsa-sona/ blogpost])

* 2025-01: [https://www.nature.com/articles/s41586-025-08628-5 A generative model for inorganic materials design]

* 2025-04: [https://arxiv.org/abs/2504.14110 System of Agentic AI for the Discovery of Metal-Organic Frameworks]

* 2025-05: [https://arxiv.org/abs/2505.08762 The Open Molecules 2025 (OMol25) Dataset, Evaluations, and Models]

===Chemistry===
 
* 2023-12: [https://doi.org/10.1038/s41586-023-06792-0 Autonomous chemical research with large language models] (Coscientist)

* 2024-11: [https://www.nature.com/articles/s41467-024-54457-x An automatic end-to-end chemical synthesis development platform powered by large language models]

* 2025-06: [https://paper.ether0.ai/ Training a Scientific Reasoning Model for Chemistry]

* 2025-06: [https://arxiv.org/abs/2506.06363 ChemGraph: An Agentic Framework for Computational Chemistry Workflows] ([https://github.com/argonne-lcf/ChemGraph code])

===Bio===

* 2025-07: [https://arxiv.org/abs/2507.01485 BioMARS: A Multi-Agent Robotic System for Autonomous Biological Experiments]

==LLMs Optimized for Science==

* 2022-11: [https://arxiv.org/abs/2211.09085 Galactica: A Large Language Model for Science]

* 2024-12: [https://www.nature.com/articles/s41467-024-54639-7 Crystal structure generation with autoregressive large language modeling]

* 2025-02: [https://arxiv.org/abs/2502.13107 MatterChat: A Multi-Modal LLM for Material Science]

* 2025-03: [https://arxiv.org/abs/2503.17604 OmniScience: A Domain-Specialized LLM for Scientific Reasoning and Discovery]

* 2025-03: Google [https://huggingface.co/collections/google/txgemma-release-67dd92e931c857d15e4d1e87 TxGemma] (2B, 9B, 27B): [https://developers.googleblog.com/en/introducing-txgemma-open-models-improving-therapeutics-development/ drug development]
  
 
=Impact of AI in Science=
 
* 2024-11: <strike>[https://aidantr.github.io/files/AI_innovation.pdf Artificial Intelligence, Scientific Discovery, and Product Innovation]</strike>

** 2025-05: Retraction: [https://economics.mit.edu/news/assuring-accurate-research-record Assuring an accurate research record]

* 2025-02: [https://arxiv.org/abs/2502.05151 Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation]
  
=Related Tools=
  
 
==Data Visualization==
 
* 2024-10: Microsoft [https://www.microsoft.com/en-us/research/blog/data-formulator-exploring-how-ai-can-help-analysts-create-rich-data-visualizations/ Data Formulator: Create Rich Visualization with AI iteratively] ([https://www.microsoft.com/en-us/research/video/data-formulator-create-rich-visualization-with-ai-iteratively/ video], [https://github.com/microsoft/data-formulator code])

* [https://julius.ai/ Julius AI]: Analyze your data with computational AI

==Generative==

* 2025-03: [https://huggingface.co/collections/starvector/starvector-models-6783b22c7bd4b43d13cb5289 StarVector] 1B, 8B: text or image to SVG

==Chemistry==

* 2025-03: [https://jcheminf.biomedcentral.com/articles/10.1186/s13321-024-00834-z Rxn-INSIGHT: fast chemical reaction analysis using bond-electron matrices] ([https://rxn-insight.readthedocs.io/en/latest/ docs])

=Science Datasets=

* [https://github.com/blaiszik/awesome-matchem-datasets/ Awesome Materials & Chemistry Datasets]

=See Also=
 
* [[AI agents]]
 
* [https://nanobot.chat/ Nanobot.chat]: Intelligent AI for the labnetwork @ mtl.mit.edu forum
