AI creativity

=Research=
 
* 2025-10: [https://arxiv.org/abs/2510.20635 Why Did Apple Fall To The Ground: Evaluating Curiosity In Large Language Model]
 
* 2025-10: [https://arxiv.org/abs/2510.22954 Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)]
 
* 2025-12: [https://www.nature.com/articles/s41562-025-02331-1 A large-scale comparison of divergent creativity in humans and large language models]
 
* 2026-01: [https://www.arxiv.org/abs/2601.01576 OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment]
 
* 2026-01: [https://www.nature.com/articles/s41598-025-25157-3 Divergent creativity in humans and large language models]
  
 
=Benchmarks=
 
See: [[AI_benchmarks#Creativity| AI benchmarks > Creativity]]
 
=Collapse=

* 2024-07: [https://www.nature.com/articles/s41586-024-07566-y AI models collapse when trained on recursively generated data]

* 2024-07: [https://arxiv.org/abs/2407.02209 Generative Monoculture in Large Language Models]

* 2024-12: [https://arxiv.org/abs/2412.17646 Rate of Model Collapse in Recursive Training]

==Analysis==

* 2024-02: [https://arxiv.org/abs/2402.04376 Scaling laws for learning with real and surrogate data]

* 2024-02: [https://arxiv.org/abs/2402.07712 Model Collapse Demystified: The Case of Regression]

==LLM==

* 2023-05: [https://arxiv.org/abs/2305.17493 The Curse of Recursion: Training on Generated Data Makes Models Forget]

* 2024-04: [https://arxiv.org/abs/2404.03502 AI and the Problem of Knowledge Collapse]

* 2025-10: [https://arxiv.org/abs/2510.13928 LLMs Can Get "Brain Rot"!]

==Image Models==

* 2023-07: [https://arxiv.org/abs/2307.01850 Self-Consuming Generative Models Go MAD]

* 2023-10: [https://arxiv.org/abs/2310.00429 On the Stability of Iterative Retraining of Generative Models on their own Data]

==Solutions==

* 2024-03: [https://arxiv.org/abs/2403.04706 Common 7B Language Models Already Possess Strong Math Capabilities]

* 2024-04: [https://arxiv.org/abs/2404.01413 Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data]

* 2024-06: [https://arxiv.org/abs/2406.07515 Beyond Model Collapse: Scaling Up with Synthesized Data Requires Verification]

* 2024-07: [https://arxiv.org/abs/2407.01490 LLM See, LLM Do: Guiding Data Generation to Target Non-Differentiable Objectives]

* 2024-08: [https://arxiv.org/abs/2408.14960 Multilingual Arbitrage: Optimizing Data Pools to Accelerate Multilingual Progress]
  
 
=See Also=
 