AI and Humans

=AI in Education=

==Survey/study of==
 
* 2023-10: [https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/10/31/most-students-outrunning-faculty-ai-use?utm_source=Inside+Higher+Ed&utm_campaign=23419446b9-DNU_2021_COPY_02&utm_medium=email&utm_term=0_1fcbc04421-23419446b9-236889242&mc_cid=23419446b9&mc_eid=dae49d931a Survey] shows students using AI more than professors.
* 2023-11: [https://www.nature.com/articles/d41586-023-03507-3 ChatGPT has entered the classroom: how LLMs could transform education]
* 2025-04: [https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude Anthropic Education Report: How University Students Use Claude]
 
==AI improves learning/education==
 
* [https://arxiv.org/abs/2409.09047 AI Meets the Classroom: When Do Large Language Models Harm Learning?]
** Outcomes depend on usage
* [https://www.deeplearning.ai/the-batch/gpt-4-boosts-remote-tutors-performance-in-real-time-study-finds/ LLM Support for Tutors: GPT-4 boosts remote tutors’ performance in real time, study finds]
** [https://arxiv.org/abs/2410.03017 Tutor CoPilot: A Human-AI Approach for Scaling Real-Time Expertise]
 
==AI harms learning==
 
==Software/systems==
 
* [https://notebooklm.google.com/ NotebookLM]: Enables one to "chat with documents".
* Google [https://learning.google.com/experiments/learn-about/signup Learn About]

===Systems===

* [https://www.anthropic.com/news/introducing-claude-for-education Anthropic] [https://www.anthropic.com/education Claude for Education]
 
==AI for grading==
 
=AI/human=

==Capabilities==

===Writing===
 
* 2024-09: PaperQA2: [https://paper.wikicrow.ai/ Language Models Achieve Superhuman Synthesis of Scientific Knowledge] ([https://x.com/SGRodriques/status/1833908643856818443 𝕏 post], [https://github.com/Future-House/paper-qa code])
* 2025-03: [https://arxiv.org/abs/2503.19065 WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation]
* 2025-03: [https://arxiv.org/abs/2503.22828 Learning to Reason for Long-Form Story Generation]
 
==AI out-performs humans==
 
* 2024-11: [https://doi.org/10.1038/s41598-024-76900-1 AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably]
* 2024-11: [https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing How Did You Do On The AI Art Turing Test?]

===Business & Marketing===

* 2023-11: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4597899 The power of generative marketing: Can generative AI create superhuman visual marketing content?]
* 2024-02: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4714776 Generative Artificial Intelligence and Evaluating Strategic Decisions]
 
===Professions===
 
* [https://agi.safe.ai/submit Humanity's Last Exam]
** [https://x.com/alexandr_wang/status/1835738937719140440 Effort to build] a dataset of challenging (but resolvable) questions in specific domain areas, to act as a benchmark to test whether AIs are improving in these challenging topics.

====Coding====

* 2025-02: [https://arxiv.org/abs/2502.06807 Competitive Programming with Large Reasoning Models]
 
====Medical====
 
* 2024-12: [https://www.arxiv.org/abs/2412.10849 Superhuman performance of a large language model on the reasoning tasks of a physician]
* 2024-12: [https://arxiv.org/abs/2412.18925 HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs]
* 2025-02: Media coverage:
** NY Times: [https://www.nytimes.com/2025/02/02/opinion/ai-doctors-medicine.html The Robot Doctor Will See You Now]
** [https://erictopol.substack.com/p/when-doctors-with-ai-are-outperformed When Doctors With A.I. Are Outperformed by A.I. Alone]
* 2025-02: [https://www.nature.com/articles/s41591-024-03456-y GPT-4 assistance for improvement of physician performance on patient care tasks: a randomized controlled trial]
* 2025-02: [https://www.nature.com/articles/s41591-025-03517-w Artificial intelligence for individualized treatment of persistent atrial fibrillation: a randomized controlled trial]
** 2024-01: [https://arxiv.org/abs/2401.05654 Towards Conversational Diagnostic AI] ([https://research.google/blog/amie-a-research-ai-system-for-diagnostic-medical-reasoning-and-conversations/ blog]: Articulate Medical Intelligence Explorer, AMIE)
** 2025-03: [https://www.gstatic.com/amie/towards_conversational_ai_for_disease_management.pdf Towards Conversational AI for Disease Management] ([https://research.google/blog/from-diagnosis-to-treatment-advancing-amie-for-longitudinal-disease-management/ blog])
* 2025-02: [https://arxiv.org/abs/2502.19655 Med-RLVR: Emerging Medical Reasoning from a 3B base model via reinforcement Learning]
* 2025-03: [https://arxiv.org/abs/2503.13939 Med-R1: Reinforcement Learning for Generalizable Medical Reasoning in Vision-Language Models]
* 2025-04: [https://www.acpjournals.org/doi/10.7326/ANNALS-24-03283 Comparison of Initial Artificial Intelligence (AI) and Final Physician Recommendations in AI-Assisted Virtual Urgent Care Visits]
* 2025-04: [https://www.nature.com/articles/s41586-025-08866-7?linkId=13898052 Towards conversational diagnostic artificial intelligence]
* 2025-04: [https://www.nature.com/articles/s41586-025-08869-4?linkId=13898054 Towards accurate differential diagnosis with large language models]

====Bio====

* 2025-04: [https://www.virologytest.ai/vct_paper.pdf Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark]
** Time: [https://time.com/7279010/ai-virus-lab-biohazard-study/ Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears]
** AI Frontiers: [https://www.ai-frontiers.org/articles/ais-are-disseminating-expert-level-virology-skills AIs Are Disseminating Expert-Level Virology Skills]
 
====Therapy====
 
* 2025-02: [https://journals.plos.org/mentalhealth/article?id=10.1371/journal.pmen.0000145 When ELIZA meets therapists: A Turing test for the heart and mind]
* 2025-03: Therabot: [https://ai.nejm.org/doi/full/10.1056/AIoa2400802 Randomized Trial of a Generative AI Chatbot for Mental Health Treatment]
 
====Financial====
 
==AI improves human work==
 
===Medical===
 
* 2025-03: [https://www.medrxiv.org/content/10.1101/2025.02.28.25323115v1.full Medical Hallucination in Foundation Models and Their Impact on Healthcare]
* 2025-03: [https://journals.lww.com/international-journal-of-surgery/fulltext/2025/03000/chatgpt_s_role_in_alleviating_anxiety_in_total.20.aspx ChatGPT’s role in alleviating anxiety in total knee arthroplasty consent process: a randomized controlled trial pilot study]
 
===Translation===
 
* 2025-01: [https://simonwillison.net/2025/Feb/2/workflow-for-translation/ A professional workflow for translation using LLMs] ([https://news.ycombinator.com/item?id=42897856 based on this])

===Customer service===

* 2023-11: [https://www.nber.org/papers/w31161 Generative AI at Work]: Improvements for workers and clients (though also a ceiling to improvement)
 
===Creativity===
 
===Counter loneliness===
 
* 2024-07: [https://arxiv.org/abs/2407.19096 AI Companions Reduce Loneliness]
* 2025-03: [https://dam-prod2.media.mit.edu/x/2025/03/21/Randomized_Control_Study_on_Chatbot_Psychosocial_Effect.pdf How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Controlled Study]

==AI worse than humans==

* 2025-04: [https://spinup-000d1a-wp-offload-media.s3.amazonaws.com/faculty/wp-content/uploads/sites/27/2025/03/AI-debt-collection-20250331.pdf How Good is AI at Twisting Arms? Experiments in Debt Collection]
 
==Human Perceptions of AI==
 
===AI passes Turing Test===

'''Text Dialog'''
 
* 2024-05: [https://arxiv.org/abs/2405.08007 People cannot distinguish GPT-4 from a human in a Turing test]
* 2024-07: [https://arxiv.org/abs/2407.08853 GPT-4 is judged more human than humans in displaced and inverted Turing tests]
* 2025-03: [https://arxiv.org/abs/2503.23674 Large Language Models Pass the Turing Test]
* 2025-04: [https://www.sciencedirect.com/science/article/abs/pii/S0022103117303980 A Minimal Turing Test]
 
'''Art'''
 
* 2024-11: [https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing How Did You Do On The AI Art Turing Test?] Differentiation was only slightly above random (60%). AI art was often ranked higher than human-made.

===Psychological Effects of AI Usage===
 
* 2025-03: [https://cdn.openai.com/papers/15987609-5f71-433c-9972-e91131f399a1/openai-affective-use-study.pdf Investigating Affective Use and Emotional Well-being on ChatGPT]
* 2025-03: [https://dam-prod2.media.mit.edu/x/2025/03/21/Randomized_Control_Study_on_Chatbot_Psychosocial_Effect.pdf How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Controlled Study]
* 2025-03: [https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/ The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers]
 
=Uptake=
 
==Usage For==
 
* 2024-12: [https://assets.anthropic.com/m/7e1ab885d1b24176/original/Clio-Privacy-Preserving-Insights-into-Real-World-AI-Use.pdf Clio: A system for privacy-preserving insights into real-world AI use] (Anthropic [https://www.anthropic.com/research/clio Clio])
* 2025-03: [https://learn.filtered.com/hubfs/The%202025%20Top-100%20Gen%20AI%20Use%20Case%20Report.pdf How People are Really Using Generative AI Now] ([https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025 writeup])
* 2025-04: [https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude Anthropic Education Report: How University Students Use Claude]

=Sentiment=

* 2025-04: Pew Research: [https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/ How the U.S. Public and AI Experts View Artificial Intelligence]
 
=Persuasion=
 
(AI can update beliefs, change opinions, tackle conspiracy theories, etc.)
 
* 2024-09: [https://www.science.org/doi/10.1126/science.adq1814 Durably reducing conspiracy beliefs through dialogues with AI]
* 2025-03: [https://www.pnas.org/doi/10.1073/pnas.2413443122 Scaling language model size yields diminishing returns for single-message political persuasion]

=Simulate Humans=

* See also: [[Human brain]]
* 2021-10: [https://www.doi.org/10.1007/s10588-021-09351-y Explaining and predicting human behavior and social dynamics in simulated virtual worlds: reproducibility, generalizability, and robustness of causal discovery methods]
* 2023-12: Google: [https://arxiv.org/abs/2312.03664 Generative agent-based modeling with actions grounded in physical, social, or digital space using Concordia]
* 2024-02: [https://arxiv.org/abs/2402.12620 Are Large Language Models (LLMs) Good Social Predictors?]
* 2024-04: [https://arxiv.org/abs/2404.11794 Automated Social Science: Language Models as Scientist and Subjects]
* 2024-07: [https://academic.oup.com/pnasnexus/article/3/7/pgae245/7712371 Perils and opportunities in using large language models in psychological research]
* 2024-08: [https://samim.io/dl/Predicting%20results%20of%20social%20science%20experiments%20using%20large%20language%20models.pdf Predicting Results of Social Science Experiments Using Large Language Models]
* 2024-10: [https://www.pnas.org/doi/10.1073/pnas.2407639121 Large Language Models based on historical text could offer informative tools for behavioral science]
* 2025-04: [https://arxiv.org/abs/2504.02234 LLM Social Simulations Are a Promising Research Method]
* 2025-04: [https://www.nber.org/papers/w33662 Measuring Human Leadership Skills with AI Agents]
* 2025-04: [https://arxiv.org/abs/2504.10157 SocioVerse: A World Model for Social Simulation Powered by LLM Agents and A Pool of 10 Million Real-World Users]
 
=See Also=
 
* [https://www.google.com/books/edition/_/cKnYEAAAQBAJ?hl=en&gbpv=1&pg=PA2 UNESCO. Guidance for Generative AI in Education and Research]
* [[AI]]
