AI and Humans
AI in Education
Surveys/studies of usage
- 2023-08: Perception, performance, and detectability of conversational artificial intelligence across 32 university courses
- 2023-10: Employees secretly using AI at work.
- 2023-10: Survey shows students using AI more than professors.
- 2023-11: ChatGPT has entered the classroom: how LLMs could transform education
- 2025-04: Anthropic Education Report: How University Students Use Claude
AI improves learning/education
- Mollick, Ethan R. and Mollick, Lilach and Bach, Natalie and Ciccarelli, LJ and Przystanski, Ben and Ravipinto, Daniel, AI Agents and Education: Simulated Practice at Scale (June 17, 2024). The Wharton School Research Paper. doi: 10.2139/ssrn.4871171
- Can enable personalized education.
- Generative AI for Programming Education: Benchmarking ChatGPT, GPT-4, and Human Tutors
- GPT-4 can outperform human tutors.
- Keppler, Samantha and Sinchaisri, Wichinpong and Snyder, Clare, Backwards Planning with Generative AI: Case Study Evidence from US K12 Teachers (August 13, 2024). doi: 10.2139/ssrn.4924786
- Teachers benefit from using AI as a co-pilot to aid in tasks (planning, deciding how to teach a topic, exploring ideas).
- There is less utility in using AI purely as a text generator (to make quizzes, workbooks, etc.).
- Effective and Scalable Math Support: Evidence on the Impact of an AI-Tutor on Math Achievement in Ghana
- AI Tutoring Outperforms Active Learning
- From chalkboards to chatbots: Transforming learning in Nigeria, one prompt at a time
- 6 weeks of after-school AI tutoring = 2 years of typical learning gains
- Outperforms 80% of other educational interventions
- AI Meets the Classroom: When Do Large Language Models Harm Learning?
- Outcomes depend on how the LLM is used
- LLM Support for Tutors: GPT-4 boosts remote tutors’ performance in real time, study finds
AI harms learning
- A real-world test of artificial intelligence infiltration of a university examinations system: A “Turing Test” case study
- Current grading systems cannot detect AI-generated answers.
- Bastani, Hamsa and Bastani, Osbert and Sungu, Alp and Ge, Haosen and Kabakcı, Özge and Mariman, Rei, Generative AI Can Harm Learning (July 15, 2024). The Wharton School Research Paper. doi: 10.2139/ssrn.4895486
- Access to ChatGPT harmed math education outcomes.
- 2024-09: AI Meets the Classroom: When Does ChatGPT Harm Learning?
Software/systems
- GPTutor (code)
- EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
- Eureka Labs (founded by Andrej Karpathy) aims to create AI-driven courses (first course is Intro to LLMs)
LLMs
Individual tools
- Chatbot (OpenAI ChatGPT, Anthropic Claude, Google Gemini)
- NotebookLM: Enables one to "chat with documents".
- Google Learn About
Systems
AI for grading
- Can Large Language Models Make the Grade? An Empirical Study Evaluating LLMs Ability To Mark Short Answer Questions in K-12 Education (preprint)
Detection
- Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays
- GenAI can simulate student writing in a way that teachers cannot detect.
- AI essays are assessed more positively than student-written.
- Teachers are overconfident in their source identification.
- Neither novice nor experienced teachers could distinguish texts generated by ChatGPT from those written by students
AI Text Detectors Don't Work
- 2024-05: RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors
- 2024-06: Testing of Detection Tools for AI-Generated Text
AI/human
Capabilities
Writing
- 2022-12: Re3: Generating Longer Stories With Recursive Reprompting and Revision
- 2023-03: English essays: Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of ChatGPT in writing English essay
- 2023-01: Journalism: Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education
- 2023-07: Science writing: Artificial intelligence in scientific writing: a friend or a foe?
- 2024-02: Wikipedia style: Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models
- 2024-02: LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs (code)
- 2024-08: Scientific papers: The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
- 2024-09: PaperQA2: Language Models Achieve Superhuman Synthesis of Scientific Knowledge (𝕏 post, code)
- 2025-03: WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation
- 2025-03: Learning to Reason for Long-Form Story Generation
AI outperforms humans
Tests
- 2023-07: SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models
- 2024-06: A real-world test of artificial intelligence infiltration of a university examinations system: A “Turing Test” case study
- AI scores higher than the median student.
Creativity
- 2023-07: Ideas Are Dimes A Dozen: Large Language Models For Idea Generation In Innovation
- 2023-09: Best humans still outperform artificial intelligence in a creative divergent thinking task
- The best humans outperform AI at creativity. (By implication, median humans may not.)
- 2024-02: The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks
- 2024-02: Felin, Teppo and Holweg, Matthias, Theory Is All You Need: AI, Human Cognition, and Causal Reasoning (February 24, 2024). doi: 10.2139/ssrn.4737265
- Argues that human "theory-based" creativity is better than AI's "data-based" creativity.
- 2024-07: Pron vs Prompt: Can Large Language Models already Challenge a World-Class Fiction Author at Creative Text Writing?
- Top human (professional author) outperforms GPT-4.
- 2024-09: Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers
- LLMs can be creative
- 2024-09: Creative and Strategic Capabilities of Generative AI: Evidence from Large-Scale Experiments
Art
- 2024-11: AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably
- 2024-11: How Did You Do On The AI Art Turing Test?
Marketing
- 2023-11: The power of generative marketing: Can generative AI create superhuman visual marketing content?
Professions
- Humanity's Last Exam
- An effort to build a dataset of challenging (but answerable) questions in specific domain areas, to serve as a benchmark for whether AIs are improving on these challenging topics.
Medical
- 2024-03: Influence of a Large Language Model on Diagnostic Reasoning: A Randomized Clinical Vignette Study
- GPT-4 improves medical practitioners' work; surprisingly, GPT-4 alone scored better than a human using GPT-4 as an aid (on selected tasks).
- 2024-10: Perspectives on Artificial Intelligence–Generated Responses to Patient Messages
- 2024-10: Large Language Model Influence on Diagnostic Reasoning; A Randomized Clinical Trial
- Use of ChatGPT does not strongly improve medical experts' work, but AI alone outscores both the human and the human+AI combination
- 2024-11: Large language models surpass human experts in predicting neuroscience results (writeup: AI can predict neuroscience study results better than human experts, study finds)
- 2024-12: Superhuman performance of a large language model on the reasoning tasks of a physician
- 2024-12: HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs
- 2025-02: Media:
  - NY Times: The Robot Doctor Will See You Now
  - When Doctors With A.I. Are Outperformed by A.I. Alone
- 2025-02: GPT-4 assistance for improvement of physician performance on patient care tasks: a randomized controlled trial
- 2025-02: Artificial intelligence for individualized treatment of persistent atrial fibrillation: a randomized controlled trial
- Google AI Clinician:
- 2024-01: Towards Conversational Diagnostic AI (blog: Articulate Medical Intelligence Explorer, AMIE)
- 2025-03: Towards Conversational AI for Disease Management (blog)
- 2025-02: Med-RLVR: Emerging Medical Reasoning from a 3B base model via reinforcement Learning
- 2025-03: Med-R1: Reinforcement Learning for Generalizable Medical Reasoning in Vision-Language Models
- 2025-04: Comparison of Initial Artificial Intelligence (AI) and Final Physician Recommendations in AI-Assisted Virtual Urgent Care Visits
- 2025-04: Towards conversational diagnostic artificial intelligence
- 2025-04: Towards accurate differential diagnosis with large language models
Therapy
- 2025-02: When ELIZA meets therapists: A Turing test for the heart and mind
- 2025-03: Therabot: Randomized Trial of a Generative AI Chatbot for Mental Health Treatment
Financial
AI improves human work
- 2023-07: Experimental evidence on the productivity effects of generative artificial intelligence
- 2023-09: Dell'Acqua, Fabrizio and McFowland III, Edward and Mollick, Ethan R. and Lifshitz-Assaf, Hila and Kellogg, Katherine and Rajendran, Saran and Krayer, Lisa and Candelon, François and Lakhani, Karim R., Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (September 15, 2023). Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013, The Wharton School Research Paper. doi: 10.2139/ssrn.4573321
- 2023-11: Generative AI at Work (National Bureau of Economic Research)
- 2023-12: The Uneven Impact of Generative AI on Entrepreneurial Performance (doi: 10.31219/osf.io/hdjpk)
- 2023-12: Artificial Intelligence in the Knowledge Economy: Non-autonomous AI (chatbots) benefits the least knowledgeable workers; autonomous agents benefit the most knowledgeable workers
- 2024-07: Generative AI in Real-World Workplaces: The Second Microsoft Report on AI and Productivity Research
- 2025-03: The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise
- 2025-03: Ethan Mollick: Cybernetic Teammate: Having an AI on your team can increase performance, provide expertise, and improve your experience
Coding
- 2023-02: The Impact of AI on Developer Productivity: Evidence from GitHub Copilot
- 2024-09: Cui, Zheyuan and Demirer, Mert and Jaffe, Sonia and Musolff, Leon and Peng, Sida and Salz, Tobias, The Effects of Generative AI on High Skilled Work: Evidence from Three Field Experiments with Software Developers (September 03, 2024). doi: 10.2139/ssrn.4945566
- 2024-11: Hoffmann, Manuel and Boysel, Sam and Nagle, Frank and Peng, Sida and Xu, Kevin, Generative AI and the Nature of Work (October 27, 2024). Harvard Business School Strategy Unit Working Paper No. 25-021, Harvard Business Working Paper No. 25-021, doi: 10.2139/ssrn.5007084
Forecasting
Finance
- 2024-12: AI, Investment Decisions, and Inequality: Novices see improvements in investment performance; sophisticated investors see even greater improvements.
Law
Medical
Translation
- 2025-01: A professional workflow for translation using LLMs
Customer service
- 2023-11: Generative AI at Work: Improvements for workers and clients (though also a ceiling to improvement)
Creativity
- 2024-07: Generative AI enhances individual creativity but reduces the collective diversity of novel content
- 2024-08: An empirical investigation of the impact of ChatGPT on creativity
- 2024-08: The Crowdless Future? Generative AI and Creative Problem-Solving
- 2024-10: Human Creativity in the Age of LLMs
- 2024-11: Artificial Intelligence, Scientific Discovery, and Product Innovation: a diffusion model increases "innovation" (patents) and boosts the best performers, but also removes some enjoyable tasks.
- 2024-12: Using AI to Generate Visual Art: Do Individual Differences in Creativity Predict AI-Assisted Art Quality? (preprint): shows that more creative humans produce more creative genAI outputs
- 2025-01: One Does Not Simply Meme Alone: Evaluating Co-Creativity Between LLMs and Humans in the Generation of Humor
Equity
Countering loneliness
- 2024-07: AI Companions Reduce Loneliness
- 2025-03: How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Controlled Study
AI worse than humans
- 2025-04: How Good is AI at Twisting Arms? Experiments in Debt Collection
Human Perceptions of AI
- 2023-09: AI and science: what 1,600 researchers think. A Nature survey finds that scientists are concerned, as well as excited, by the increasing use of artificial-intelligence tools in research.
- 2024-11: Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: an international cross-sectional survey (Nature commentary: Quest for AI literacy)
- 2025-03: Users Favor LLM-Generated Content -- Until They Know It's AI
AI passes Turing Test
Text Dialog
- 2023-05: Human or Not? A Gamified Approach to the Turing Test
- 2023-10: Does GPT-4 pass the Turing test?
- 2024-05: People cannot distinguish GPT-4 from a human in a Turing test
- 2024-07: GPT-4 is judged more human than humans in displaced and inverted Turing tests
- 2025-03: Large Language Models Pass the Turing Test
Art
- 2024-11: How Did You Do On The AI Art Turing Test? Differentiation was only slightly above random (60%). AI art was often ranked higher than human-made.
- 2024-11: AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably
Psychological Effects of AI Usage
- 2025-03: Investigating Affective Use and Emotional Well-being on ChatGPT
- 2025-03: How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Controlled Study
- 2025-03: The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers
Uptake
- 2023-07: ChatGPT: Early Adopters, Teething Issues and the Way Forward
- 2024-03: Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews
- 2024-05: Humlum, Anders and Vestergaard, Emilie, The Adoption of ChatGPT. IZA Discussion Paper No. 16992. doi: 10.2139/ssrn.4827166
- 2024-06: Kellogg, Katherine and Lifshitz-Assaf, Hila and Randazzo, Steven and Mollick, Ethan R. and Dell'Acqua, Fabrizio and McFowland III, Edward and Candelon, Francois and Lakhani, Karim R., Don't Expect Juniors to Teach Senior Professionals to Use Generative AI: Emerging Technology Risks and Novice AI Risk Mitigation Tactics (June 03, 2024). Harvard Business School Technology & Operations Mgt. Unit Working Paper 24-074, Harvard Business Working Paper No. 24-074, The Wharton School Research Paper doi: 10.2139/ssrn.4857373
- 2024-06: Delving into ChatGPT usage in academic writing through excess vocabulary
- 2024-09: The Rapid Adoption of Generative AI
- 2024-10: Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report (executive summary, full report)
- 72% of leaders use genAI at least once a week (cf. 23% in 2023); 90% agree AI enhances skills (cf. 80% in 2023)
- Spending on genAI is up 130% (most companies plan to invest going forward)
- 2024-12: The unequal adoption of ChatGPT exacerbates existing inequalities among workers
- Higher adoption among young and less experienced
- Lower adoption among women and lower-earning workers
- 2025-02: The Widespread Adoption of Large Language Model-Assisted Writing Across Society: 10-25% adoption across a range of contexts
- 2025-02: Local Heterogeneity in Artificial Intelligence Jobs Over Time and Space
Usage For
- 2024-12: Clio: A system for privacy-preserving insights into real-world AI use (Anthropic Clio)
- 2025-03: How People are Really Using Generative AI Now (writeup)
- 2025-04: Anthropic Education Report: How University Students Use Claude
Sentiment
- 2025-04: Pew Research: How the U.S. Public and AI Experts View Artificial Intelligence
Persuasion
(AI can update beliefs, change opinions, tackle conspiracy theories, etc.)
- 2022-11: Fine-tuning language models to find agreement among humans with diverse preferences
- 2024-08: Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews
- 2024-04: Just the facts: How dialogues with AI reduce conspiracy beliefs
- 2024-09: Durably reducing conspiracy beliefs through dialogues with AI
- 2025-03: Scaling language model size yields diminishing returns for single-message political persuasion
Simulate Humans
- See also: Human brain
- 2021-10: Explaining and predicting human behavior and social dynamics in simulated virtual worlds: reproducibility, generalizability, and robustness of causal discovery methods
- 2023-12: Google: Generative agent-based modeling with actions grounded in physical, social, or digital space using Concordia
- 2024-02: Are Large Language Models (LLMs) Good Social Predictors?
- 2024-04: Automated Social Science: Language Models as Scientist and Subjects
- 2024-07: Perils and opportunities in using large language models in psychological research
- 2024-08: Predicting Results of Social Science Experiments Using Large Language Models
- 2024-10: Large Language Models based on historical text could offer informative tools for behavioral science
- 2025-04: LLM Social Simulations Are a Promising Research Method
- 2025-04: Measuring Human Leadership Skills with AI Agents
See Also
- UNESCO. Guidance for Generative AI in Education and Research
- AI