AI creativity
=Research=
* 2024-01: Prompting Diverse Ideas: Increasing AI Idea Variance
* 2024-01: Creativity and AI
* 2024-09: Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers
* 2024-11: Artificial Intelligence, Scientific Discovery, and Product Innovation
** 2025-05: Retraction: Assuring an accurate research record
* 2024-12: Surveying the Effects of Quality, Diversity, and Complexity in Synthetic Data From Large Language Models
* 2024-12: LLMs can realize combinatorial creativity: generating creative ideas via LLMs for scientific research
* 2024-12: LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context
* 2025-03: The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise
* 2025-03: Midjourney: Modifying Large Language Model Post-Training for Diverse Creative Writing
* 2025-04: Has the Creativity of Large-Language Models peaked? An analysis of inter- and intra-LLM variability
* 2025-05: Creative Preference Optimization
* 2025-05: Generative AI and Creativity: A Systematic Literature Review and Meta-Analysis
* 2025-10: Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity
* 2025-10: Why Did Apple Fall To The Ground: Evaluating Curiosity In Large Language Models
* 2025-10: Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)
* 2025-12: A large-scale comparison of divergent creativity in humans and large language models
* 2026-01: OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment
* 2026-01: Divergent creativity in humans and large language models
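
Many of the diversity-oriented entries above (the idea-variance and homogeneity studies in particular) rest on some quantitative measure of how similar repeated model outputs are to one another. The sketch below is purely illustrative and not taken from any listed paper: it scores a set of hypothetical generated "ideas" by mean pairwise Jaccard similarity of their word sets, where values near 1 indicate homogeneous output and values near 0 indicate diverse output.

<syntaxhighlight lang="python">
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

def mean_pairwise_similarity(texts: list[str]) -> float:
    """Average Jaccard similarity over all pairs; higher = more homogeneous."""
    pairs = list(combinations(texts, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical outputs standing in for repeated LLM generations.
ideas = [
    "use solar panels to power desalination plants",
    "use solar panels to power water purification plants",
    "train bees to detect chemical leaks in pipelines",
]
print(f"mean pairwise Jaccard: {mean_pairwise_similarity(ideas):.3f}")
</syntaxhighlight>

Real evaluations use embedding-based or human similarity judgments rather than word overlap, but the shape of the computation is the same: sample many completions, score pairwise similarity, aggregate.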
=Benchmarks=
See: [[AI_benchmarks#Creativity| AI benchmarks > Creativity]]
=Collapse=
* 2024-07: [https://www.nature.com/articles/s41586-024-07566-y AI models collapse when trained on recursively generated data]
* 2024-07: [https://arxiv.org/abs/2407.02209 Generative Monoculture in Large Language Models]
* 2024-12: [https://arxiv.org/abs/2412.17646 Rate of Model Collapse in Recursive Training]
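
The recursive-training failure mode documented above can be reproduced in miniature without any neural network. The following toy simulation is an illustration in the spirit of these results, not code from any listed paper: each "generation" fits a Gaussian to the previous generation's samples and then the next generation is trained only on draws from that fit, so estimation error compounds and the fitted variance drifts toward zero.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 50                            # samples per generation (small n makes the drift fast)
data = rng.normal(0.0, 1.0, n)    # generation 0: "real" data

for gen in range(201):
    # "Train" the current model: fit a Gaussian to the available data.
    mu, sigma = data.mean(), data.std(ddof=1)
    if gen % 25 == 0:
        print(f"generation {gen:3d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
    # The next generation sees only samples drawn from the current model.
    data = rng.normal(mu, sigma, n)
</syntaxhighlight>

Increasing n slows the variance loss but does not eliminate it; the Analysis papers below characterize the rate of this collapse quantitatively.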
==Analysis==
* 2024-02: [https://arxiv.org/abs/2402.04376 Scaling laws for learning with real and surrogate data]
* 2024-02: [https://arxiv.org/abs/2402.07712 Model Collapse Demystified: The Case of Regression]

==LLM==
* 2025-10: [https://arxiv.org/abs/2510.13928 LLMs Can Get "Brain Rot"!]

==Image Models==
* 2023-07: [https://arxiv.org/abs/2307.01850 Self-Consuming Generative Models Go MAD]
* 2023-10: [https://arxiv.org/abs/2310.00429 On the Stability of Iterative Retraining of Generative Models on their own Data]

=See Also=