AI creativity

From GISAXS

=Research=
* 2024-01: [https://arxiv.org/abs/2402.01727 Prompting Diverse Ideas: Increasing AI Idea Variance]
* 2024-01: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4686415 Creativity and AI]
* 2024-09: [https://arxiv.org/abs/2409.04109 Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers]
* 2024-11: <strike>[https://aidantr.github.io/files/AI_innovation.pdf Artificial Intelligence, Scientific Discovery, and Product Innovation]</strike>
* 2024-12: [https://arxiv.org/abs/2412.14141 LLMs can realize combinatorial creativity: generating creative ideas via LLMs for scientific research]
* 2024-12: [https://arxiv.org/abs/2412.17596 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context]
* 2025-03: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5188231 The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise]
* 2025-03: Midjourney: [https://www.arxiv.org/abs/2503.17126 Modifying Large Language Model Post-Training for Diverse Creative Writing]
* 2025-04: [https://arxiv.org/abs/2504.12320 Has the Creativity of Large-Language Models peaked? An analysis of inter- and intra-LLM variability]
* 2025-05: [https://arxiv.org/abs/2505.14442 Creative Preference Optimization]
* 2025-05: [https://arxiv.org/abs/2505.17241 Generative AI and Creativity: A Systematic Literature Review and Meta-Analysis]
* 2025-10: [https://arxiv.org/abs/2510.01171 Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity]
* 2025-10: [https://arxiv.org/abs/2510.20635 Why Did Apple Fall To The Ground: Evaluating Curiosity In Large Language Model]
* 2025-10: [https://arxiv.org/abs/2510.22954 Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)]
* 2025-12: [https://www.nature.com/articles/s41562-025-02331-1 A large-scale comparison of divergent creativity in humans and large language models]
* 2026-01: [https://www.arxiv.org/abs/2601.01576 OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment]
* 2026-01: [https://www.nature.com/articles/s41598-025-25157-3 Divergent creativity in humans and large language models]
  
 
=Benchmarks=

See: [[AI_benchmarks#Creativity| AI benchmarks > Creativity]]
=Collapse=
* 2024-07: [https://www.nature.com/articles/s41586-024-07566-y AI models collapse when trained on recursively generated data]
* 2024-07: [https://arxiv.org/abs/2407.02209 Generative Monoculture in Large Language Models]
* 2024-12: [https://arxiv.org/abs/2412.17646 Rate of Model Collapse in Recursive Training]

==Analysis==
* 2024-02: [https://arxiv.org/abs/2402.04376 Scaling laws for learning with real and surrogate data]
* 2024-02: [https://arxiv.org/abs/2402.07712 Model Collapse Demystified: The Case of Regression]

==LLM==
* 2023-05: [https://arxiv.org/abs/2305.17493 The Curse of Recursion: Training on Generated Data Makes Models Forget]
* 2024-04: [https://arxiv.org/abs/2404.03502 AI and the Problem of Knowledge Collapse]
* 2025-10: [https://arxiv.org/abs/2510.13928 LLMs Can Get "Brain Rot"!]

==Image Models==
* 2023-07: [https://arxiv.org/abs/2307.01850 Self-Consuming Generative Models Go MAD]
* 2023-10: [https://arxiv.org/abs/2310.00429 On the Stability of Iterative Retraining of Generative Models on their own Data]

==Solutions==
* 2024-03: [https://arxiv.org/abs/2403.04706 Common 7B Language Models Already Possess Strong Math Capabilities]
* 2024-04: [https://arxiv.org/abs/2404.01413 Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data]
* 2024-06: [https://arxiv.org/abs/2406.07515 Beyond Model Collapse: Scaling Up with Synthesized Data Requires Verification]
* 2024-07: [https://arxiv.org/abs/2407.01490 LLM See, LLM Do: Guiding Data Generation to Target Non-Differentiable Objectives]
* 2024-08: [https://arxiv.org/abs/2408.14960 Multilingual Arbitrage: Optimizing Data Pools to Accelerate Multilingual Progress]
 
=See Also=

Latest revision as of 08:56, 4 February 2026