AI video
KevinYager (talk | contribs)
Revision as of 16:00, 9 December 2024
==Evolution of Capabilities==

===Early===
* Nov 2016: [https://arxiv.org/abs/1611.10314 Sync-Draw]
* April 2021: [https://arxiv.org/abs/2104.14806 GODIVA]
* Oct 2022: [https://makeavideo.studio/ Meta Make-a-video]
* Oct 2022: [https://imagen.research.google/video/ Google Imagen video]

===2023===
* April 2023: [https://www.youtube.com/watch?v=XQr4Xklqzw8 Will Smith eating spaghetti]
* April 2023: [https://x.com/mrjonfinger/status/1645953033636048896?cxt=HHwWgMDT7YfkzNctAAAA Runway Gen 2]
* April 2023: [https://research.nvidia.com/labs/toronto-ai/VideoLDM/ Nvidia latents]
* December 2023: [https://www.threads.net/@luokai/post/C0vvEnTP4Oj Fei-Fei Li]

===2024===
* January 2024: [https://sites.research.google/videopoet/ Google VideoPoet]
* January 2024: [https://lumiere-video.github.io/ Google Lumiere]
* February 2024: OpenAI Sora
* April 2024: Vidu
* May 2024: Veo
* May 2024: Kling
* June 2024: Luma DreamMachine
* June 2024: RunwayML Gen-3 Alpha
* July 2024: Examples:
* July 2024: haiper.ai
* August 2024: Hotshot (examples, more examples)
* August 2024: Luma Dream Machine v1.5
* August 2024: Examples:
** Runway Gen3 music video
** Runway Gen3 for adding FX to live action (another example)
** Midjourney + Runway Gen3: Hey It’s Snowing
** Flux/LoRA image + Runway Gen3 woman presenter
** McDonald’s AI commercial
** Sora used by Izanami AI Art to create dreamlike video and by Alexia Adana to create sci-fi film concept
* September 2024: Hailuo Minimax (examples)
* September 2024: Examples:
** Space colonization
** Consistent characters
** Sea monsters
** Music video
** Animated characters
** AI influencer
** Ten short examples
** Seven examples
** Clip from horror film
** "Gone" featuring astronaut and something ethereal
** Two dancers (surprisingly good consistency despite movement)
** Music video about flying
** The Paperclip Maximizer
** La Baie Aréa
** "To Dear Me" by Gisele Tong (winner of AI shorts film festival)
** Various scenes
** Directing emotions
* September 2024: Kling 1.5 (examples, showing emotions)
* September 2024: Examples:
** Runway video-to-video to restyle classic video games
** Realistic presenter
** Skateboarding (demonstrates getting closer to meaningfully simulating motion/physics)
** Examples of short clips with cinematic feel
** Short: 4 Minutes to Live
** Short: Neon Nights (Arcade)
** Random Access Memories: AI-generated, but then projected onto Kodak film stock, giving the final output some of the dreamy analog quality we associate with nostalgic footage
** Sora used to make a sort of weird dreamlike video
* October 2024: Pika v1.5, including Pikaffects (explode, melt, inflate, and cake-ify; examples: 1, 2, 3, 4, 5, 6)
* October 2024: Examples:
** AI avatar with good lip-sync
** Battalion: 5 minute short about war
** Short film: To Wonderland (credits)
** 9 to 5: Created with Luma Dream Machine keyframes and camera features; music by Suno
* October 2024: Meta Movie Gen
* October 2024: Examples:
** AI Avatar (using HeyGen)
** Generic Movies
** Pyramid-flow (open source) model: examples
** Building the Pyramids
** People showing realistic emotion (using Hailuo AI)
** Keyframes and Luma AI to make novel speed-ramp motion
* October 2024: Genmo Mochi 1 (open source)
* October 2024: Examples:
** Meta Movie Gen examples
** Emotional range of Minimax
** Car commercial: Bear
** Diner conversation
** Loved and Lost (a meditation on grief)
* November 2024: Examples:
** Pasta Doble
** Bird protecting young
** Camera moving around sushi
** Various examples of Hailuo AI
** Trains
** Light of Imagination
** Bringing historic images to life
** Plants dancing
** Insect on tree
** Trailers for The Silmarillion and The Fall of Gondolin (by Abandoned Films)
** Moody sci-fi
** Migration (made by combining Runway ML Gen3-Alpha and traditional animation)
** After the Winter (music made using Suno v4)
** Horror: Ridge to Southwest
** The Gardener (by Machine Mythos)
** Coca-Cola holiday ad and parody thereof
** A Dream Within A Dream (by PZF, selected for the Czech International AI Film Festival)
** Making Friends (by Everett World; see also Childhood Dream and City Echoes)
** Anime: test shots, Ultimate Ceremony, Echoes of Love
** Echoes of Grace (KakuDrop using Sora)
** Morphing hands, hands and faces (Vibeke Bertelsen)
** Dragon Ball Z live action
** Pitch Black (abstract and dark)
** Animals Running (zoomed-in ultra-wide camera)
** Dreams of Tomorrow (panning shots of high-tech car, Scottish manor)
** Desert Planet Cinematics
* November 2024: Leaked Sora turbo model; examples, Dog chasing Cat in snow
* December 2024: Examples:
** Realistic (Minimax by Hailuo AI)
** Trailer for Paradise Lost (to be released on Sandwatch AI)
** Music video example with consistent characters
** Human expressions (u/Kind_Distance9504 on Reddit, using Hailuo)
** Vodafone ad: The Rhythm Of Life
** [https://www.reddit.com/r/midjourney/comments/1h5u2gw/we_made_a_10_minute_gen_ai_batman_film/ 10 minute Batman film]
* December 2024: Tencent [https://aivideo.hunyuan.tencent.com/ Hunyuan Video] open-source video model ([https://x.com/CharaspowerAI/status/1863862585554010530 example])
* December 2024: [https://sora.com/ Sora] release ([https://x.com/CharaspowerAI/status/1866203050982916532 examples])
* December 2024: Examples:
** [https://www.youtube.com/watch?v=c_kKKRQ5gYw Synthetic Youth: Takenoko Zoku · Made by Emi Kusano with Sora]
===2025===
* January 2025: TBD