AI video
Evolution of Capabilities
- Nov 2016: Sync-Draw
- April 2021: GODIVA
- Oct 2022: Meta Make-A-Video
- Oct 2022: Google Imagen Video
- April 2023: Will Smith eating spaghetti
- April 2023: Runway Gen 2
- April 2023: Nvidia Video LDM ("Align your Latents")
- December 2023: W.A.L.T (Fei-Fei Li's group)
- January 2024: Google VideoPoet
- January 2024: Google Lumiere
- February 2024: OpenAI Sora
- April 2024: Vidu
- May 2024: Veo
- May 2024: Kling
- June 2024: Luma Dream Machine
- June 2024: RunwayML Gen-3 Alpha
- July 2024: Examples:
- July 2024: haiper.ai
- August 2024: Hotshot (examples)
- August 2024: Examples:
  - Runway Gen3 music video
  - Runway Gen3 for adding FX to live action (another example)
  - Midjourney + Runway Gen3: "Hey It's Snowing"
  - Flux/LoRA image + Runway Gen3 woman presenter
  - McDonald's AI commercial
  - Sora used by Izanami AI Art to create a dreamlike video, and by Alexia Adana to create a sci-fi film concept