=AI in Education=

==Survey/study of==
* 2023-10: [https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/10/31/most-students-outrunning-faculty-ai-use?utm_source=Inside+Higher+Ed&utm_campaign=23419446b9-DNU_2021_COPY_02&utm_medium=email&utm_term=0_1fcbc04421-23419446b9-236889242&mc_cid=23419446b9&mc_eid=dae49d931a Survey] shows students using AI more than professors.
* 2023-11: [https://www.nature.com/articles/d41586-023-03507-3 ChatGPT has entered the classroom: how LLMs could transform education]
* 2025-04: [https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude Anthropic Education Report: How University Students Use Claude]

==AI improves learning/education==
* [https://arxiv.org/abs/2409.09047 AI Meets the Classroom: When Do Large Language Models Harm Learning?]
** Outcomes depend on usage
* [https://www.deeplearning.ai/the-batch/gpt-4-boosts-remote-tutors-performance-in-real-time-study-finds/ LLM Support for Tutors GPT-4 boosts remote tutors’ performance in real time, study finds]
** [https://arxiv.org/abs/2410.03017 Tutor CoPilot: A Human-AI Approach for Scaling Real-Time Expertise]

==AI harms learning==

=Software/systems=

==LLMs==

===Individual tools===
* [https://notebooklm.google.com/ NotebookLM]: Enables one to "chat with documents".
* Google [https://learning.google.com/experiments/learn-about/signup Learn About]

===Systems===
* [https://www.anthropic.com/news/introducing-claude-for-education Anthropic] [https://www.anthropic.com/education Claude for Education]

==AI for grading==
  
 
=AI/human=

==Capabilities==

===Writing===
* 2022-12: [https://aclanthology.org/2022.emnlp-main.296/ Re3: Generating Longer Stories With Recursive Reprompting and Revision]
* 2023-01: Journalism: [https://journals.sagepub.com/doi/10.1177/10776958221149577 Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education]
* 2023-03: English essays: [https://journal.unnes.ac.id/sju/index.php/elt/article/view/64069 Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of ChatGPT in writing English essay]
* 2023-07: Science writing: [https://www.rbmojournal.com/article/S1472-6483(23)00219-5/fulltext Artificial intelligence in scientific writing: a friend or a foe?]
* 2024-02: Wikipedia style: [https://arxiv.org/abs/2402.14207 Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models]
* 2024-08: [https://arxiv.org/abs/2408.07055 LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs] ([https://github.com/THUDM/LongWriter code])
* 2024-08: Scientific papers: [https://arxiv.org/abs/2408.06292 The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery]
* 2024-09: PaperQA2: [https://paper.wikicrow.ai/ Language Models Achieve Superhuman Synthesis of Scientific Knowledge] ([https://x.com/SGRodriques/status/1833908643856818443 𝕏 post], [https://github.com/Future-House/paper-qa code])
* 2025-03: [https://arxiv.org/abs/2503.19065 WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation]
* 2025-03: [https://arxiv.org/abs/2503.22828 Learning to Reason for Long-Form Story Generation]
 
==AI out-performs humans==

===Tests===
* 2024-11: [https://doi.org/10.1038/s41598-024-76900-1 AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably]
* 2024-11: [https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing How Did You Do On The AI Art Turing Test?]

===Business & Marketing===
* 2023-11: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4597899 The power of generative marketing: Can generative AI create superhuman visual marketing content?]
* 2024-02: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4714776 Generative Artificial Intelligence and Evaluating Strategic Decisions]
  
 
===Professions===
* [https://agi.safe.ai/submit Humanity's Last Exam]
** [https://x.com/alexandr_wang/status/1835738937719140440 Effort to build] a dataset of challenging (but resolvable) questions across specific domains, serving as a benchmark for whether AIs are improving on these difficult topics.

====Coding====
* 2025-02: [https://arxiv.org/abs/2502.06807 Competitive Programming with Large Reasoning Models]
  
 
====Medical====
* 2024-12: [https://www.arxiv.org/abs/2412.10849 Superhuman performance of a large language model on the reasoning tasks of a physician]
* 2024-12: [https://arxiv.org/abs/2412.18925 HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs]
* 2025-02: Media coverage:
** NY Times: [https://www.nytimes.com/2025/02/02/opinion/ai-doctors-medicine.html The Robot Doctor Will See You Now]
** [https://erictopol.substack.com/p/when-doctors-with-ai-are-outperformed When Doctors With A.I. Are Outperformed by A.I. Alone]
* 2025-02: [https://www.nature.com/articles/s41591-024-03456-y GPT-4 assistance for improvement of physician performance on patient care tasks: a randomized controlled trial]
* 2025-02: [https://www.nature.com/articles/s41591-025-03517-w Artificial intelligence for individualized treatment of persistent atrial fibrillation: a randomized controlled trial]
** 2024-01: [https://arxiv.org/abs/2401.05654 Towards Conversational Diagnostic AI] ([https://research.google/blog/amie-a-research-ai-system-for-diagnostic-medical-reasoning-and-conversations/ blog]: Articulate Medical Intelligence Explorer, AMIE)
** 2025-03: [https://www.gstatic.com/amie/towards_conversational_ai_for_disease_management.pdf Towards Conversational AI for Disease Management] ([https://research.google/blog/from-diagnosis-to-treatment-advancing-amie-for-longitudinal-disease-management/ blog])
* 2025-02: [https://arxiv.org/abs/2502.19655 Med-RLVR: Emerging Medical Reasoning from a 3B base model via reinforcement Learning]
* 2025-03: [https://arxiv.org/abs/2503.13939 Med-R1: Reinforcement Learning for Generalizable Medical Reasoning in Vision-Language Models]
* 2025-04: [https://www.acpjournals.org/doi/10.7326/ANNALS-24-03283 Comparison of Initial Artificial Intelligence (AI) and Final Physician Recommendations in AI-Assisted Virtual Urgent Care Visits]
* 2025-04: [https://www.nature.com/articles/s41586-025-08866-7?linkId=13898052 Towards conversational diagnostic artificial intelligence]
* 2025-04: [https://www.nature.com/articles/s41586-025-08869-4?linkId=13898054 Towards accurate differential diagnosis with large language models]

====Bio====
* 2025-04: [https://www.virologytest.ai/vct_paper.pdf Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark]
** Time: [https://time.com/7279010/ai-virus-lab-biohazard-study/ Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears]
** AI Frontiers: [https://www.ai-frontiers.org/articles/ais-are-disseminating-expert-level-virology-skills AIs Are Disseminating Expert-Level Virology Skills]
  
 
====Therapy====
* 2025-02: [https://journals.plos.org/mentalhealth/article?id=10.1371/journal.pmen.0000145 When ELIZA meets therapists: A Turing test for the heart and mind]
* 2025-03: Therabot: [https://ai.nejm.org/doi/full/10.1056/AIoa2400802 Randomized Trial of a Generative AI Chatbot for Mental Health Treatment]

====Financial====

==AI improves human work==

===Medical===
* 2025-03: [https://www.medrxiv.org/content/10.1101/2025.02.28.25323115v1.full Medical Hallucination in Foundation Models and Their Impact on Healthcare]
* 2025-03: [https://journals.lww.com/international-journal-of-surgery/fulltext/2025/03000/chatgpt_s_role_in_alleviating_anxiety_in_total.20.aspx ChatGPT’s role in alleviating anxiety in total knee arthroplasty consent process: a randomized controlled trial pilot study]
  
 
===Translation===
* 2025-02: [https://simonwillison.net/2025/Feb/2/workflow-for-translation/ A professional workflow for translation using LLMs] ([https://news.ycombinator.com/item?id=42897856 based on this])

===Customer service===
* 2023-11: [https://www.nber.org/papers/w31161 Generative AI at Work]: Improvements for workers and clients (though gains appear to have a ceiling)
 
===Creativity===

===Counter loneliness===
* 2024-07: [https://arxiv.org/abs/2407.19096 AI Companions Reduce Loneliness]
* 2025-03: [https://dam-prod2.media.mit.edu/x/2025/03/21/Randomized_Control_Study_on_Chatbot_Psychosocial_Effect.pdf How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Controlled Study]

==AI worse than humans==
* 2025-04: [https://spinup-000d1a-wp-offload-media.s3.amazonaws.com/faculty/wp-content/uploads/sites/27/2025/03/AI-debt-collection-20250331.pdf How Good is AI at Twisting Arms? Experiments in Debt Collection]
  
 
==Human Perceptions of AI==
* 2023-09: [https://www.nature.com/articles/d41586-023-02980-0 AI and science: what 1,600 researchers think. A Nature survey finds that scientists are concerned, as well as excited, by the increasing use of artificial-intelligence tools in research.]
* 2024-11: [https://doi.org/10.1016/S2589-7500(24)00202-4 Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: an international cross-sectional survey] (Nature commentary: [https://www.nature.com/articles/s41592-024-02369-5 Quest for AI literacy])
* 2025-03: [https://www.arxiv.org/abs/2503.16458 Users Favor LLM-Generated Content -- Until They Know It's AI]

===AI passes Turing Test===

'''Text Dialog'''
* 2024-05: [https://arxiv.org/abs/2405.08007 People cannot distinguish GPT-4 from a human in a Turing test]
* 2024-07: [https://arxiv.org/abs/2407.08853 GPT-4 is judged more human than humans in displaced and inverted Turing tests]
* 2025-03: [https://arxiv.org/abs/2503.23674 Large Language Models Pass the Turing Test]
* 2025-04: [https://www.sciencedirect.com/science/article/abs/pii/S0022103117303980 A Minimal Turing Test]

'''Art'''
* 2024-11: [https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing How Did You Do On The AI Art Turing Test?] Participants' ability to distinguish AI from human art was only slightly above chance (60%), and AI art was often rated higher than human-made works.
  
 
==Psychological Effects of AI Usage==
* 2025-03: [https://cdn.openai.com/papers/15987609-5f71-433c-9972-e91131f399a1/openai-affective-use-study.pdf Investigating Affective Use and Emotional Well-being on ChatGPT]
* 2025-03: [https://dam-prod2.media.mit.edu/x/2025/03/21/Randomized_Control_Study_on_Chatbot_Psychosocial_Effect.pdf How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Controlled Study]
* 2025-03: [https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/ The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers]
  
 
=Uptake=
 
** 72% of leaders use genAI at least once a week (cf. 23% in 2023); 90% agree AI enhances skills (cf. 80% in 2023)
** Spending on genAI is up 130% (most companies plan to invest going forward)
* 2024-12: [https://www.pnas.org/doi/10.1073/pnas.2414972121 The unequal adoption of ChatGPT exacerbates existing inequalities among workers]
** Higher adoption among younger and less-experienced workers
** Lower adoption among women and lower-earning workers
* 2025-02: [https://arxiv.org/abs/2502.09747 The Widespread Adoption of Large Language Model-Assisted Writing Across Society]: 10-25% adoption across a range of contexts
* 2025-02: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5078805 Local Heterogeneity in Artificial Intelligence Jobs Over Time and Space]
 
==Usage For==
* 2024-12: [https://assets.anthropic.com/m/7e1ab885d1b24176/original/Clio-Privacy-Preserving-Insights-into-Real-World-AI-Use.pdf Clio: A system for privacy-preserving insights into real-world AI use] (Anthropic [https://www.anthropic.com/research/clio Clio])
* 2025-03: [https://learn.filtered.com/hubfs/The%202025%20Top-100%20Gen%20AI%20Use%20Case%20Report.pdf How People are Really Using Generative AI Now] ([https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025 writeup])
* 2025-04: [https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude Anthropic Education Report: How University Students Use Claude]

=Sentiment=
* 2025-04: Pew Research: [https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/ How the U.S. Public and AI Experts View Artificial Intelligence]
  
 
=Persuasion=
(AI can update beliefs, change opinions, tackle conspiracy theories, etc.)
* 2024-09: [https://www.science.org/doi/10.1126/science.adq1814 Durably reducing conspiracy beliefs through dialogues with AI]
* 2025-03: [https://www.pnas.org/doi/10.1073/pnas.2413443122 Scaling language model size yields diminishing returns for single-message political persuasion]
* 2025-04: [https://drive.google.com/file/d/1Eo4SHrKGPErTzL1t_QmQhfZGU27jKBjx/edit Can AI Change Your View? Evidence from a Large-Scale Online Field Experiment]
** [https://www.404media.co/researchers-secretly-ran-a-massive-unauthorized-ai-persuasion-experiment-on-reddit-users/ Researchers Secretly Ran a Massive, Unauthorized AI Persuasion Experiment on Reddit Users]

=Simulate Humans=
* See also: [[Human brain]]
* 2021-10: [https://www.doi.org/10.1007/s10588-021-09351-y Explaining and predicting human behavior and social dynamics in simulated virtual worlds: reproducibility, generalizability, and robustness of causal discovery methods]
* 2023-12: Google: [https://arxiv.org/abs/2312.03664 Generative agent-based modeling with actions grounded in physical, social, or digital space using Concordia]
* 2024-02: [https://arxiv.org/abs/2402.12620 Are Large Language Models (LLMs) Good Social Predictors?]
* 2024-04: [https://arxiv.org/abs/2404.11794 Automated Social Science: Language Models as Scientist and Subjects]
* 2024-07: [https://academic.oup.com/pnasnexus/article/3/7/pgae245/7712371 Perils and opportunities in using large language models in psychological research]
* 2024-08: [https://samim.io/dl/Predicting%20results%20of%20social%20science%20experiments%20using%20large%20language%20models.pdf Predicting Results of Social Science Experiments Using Large Language Models]
* 2024-10: [https://www.pnas.org/doi/10.1073/pnas.2407639121 Large Language Models based on historical text could offer informative tools for behavioral science]
* 2025-04: [https://arxiv.org/abs/2504.02234 LLM Social Simulations Are a Promising Research Method]
* 2025-04: [https://www.nber.org/papers/w33662 Measuring Human Leadership Skills with AI Agents]
* 2025-04: [https://arxiv.org/abs/2504.10157 SocioVerse: A World Model for Social Simulation Powered by LLM Agents and A Pool of 10 Million Real-World Users]
  
 
=See Also=
* [https://www.google.com/books/edition/_/cKnYEAAAQBAJ?hl=en&gbpv=1&pg=PA2 UNESCO. Guidance for Generative AI in Education and Research]
* [[AI]]
