Build and scale creative products with the world's most popular and intuitive video generation models through the Dream Machine API
Access state-of-the-art Dream Machine models through an intuitive API and open-source SDKs.
Scale your creative AI products with higher rate limits and hands-on support from the Luma team.
Dream Machine's intuitive text instructions mean users do not need to learn prompt engineering, so you can build generative products that reach new markets.
Build workflows that take static images and instantly create magical, high-quality animations. Instruct Dream Machine in natural language to create narratives.
Control the narrative that Dream Machine generates with start and end image keyframes…
…and extend these narratives into stories. All without any pixel editing complexity in your apps.
Create seamless loops for engaging UIs, product marketing, and backgrounds.
Groundbreaking generative camera control allows even your most inexperienced users to get videos looking just right with simple text instructions.
Your app can now produce content perfect for various platforms without complex video and image editing UIs.
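To make these controls concrete, the sketch below shows how a single request payload might combine a start image keyframe, an end frame, a loop flag, and a camera instruction written in plain language. Field names such as `keyframes`, `frame0`, `loop`, and `aspect_ratio` are illustrative assumptions rather than a guaranteed schema; the Dream Machine API reference and SDKs are authoritative.

```python
# A minimal payload sketch for the controls described above.
# Field names ("keyframes", "frame0", "frame1", "loop", "aspect_ratio") are
# illustrative assumptions; check the Dream Machine API docs for the real schema.
payload = {
    # Camera moves are requested in plain language, not with editing parameters.
    "prompt": "Camera slowly orbits the product as morning light sweeps across it",
    "keyframes": {
        # Start the video from a still image...
        "frame0": {"type": "image", "url": "https://example.com/product-shot.jpg"},
        # ...and optionally steer the narrative toward an end frame.
        "frame1": {"type": "image", "url": "https://example.com/product-hero.jpg"},
    },
    # Ask for a seamless loop, useful for UI backgrounds and marketing tiles.
    "loop": True,
    # Target different platforms by choosing an aspect ratio instead of editing pixels.
    "aspect_ratio": "9:16",
}
```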
Video is the carrier of culture, ideas, and connections around the world. Video is a universal language. Unlike text, creating video is a physical process, and editing high-dimensional video data is very difficult. This has meant that the few with the means of production create, and most of us just passively consume. They tell the stories and we all just listen.
Video can be the most effective medium of visual storytelling.
Video — if made as universally manipulable as text — can be the most effective medium of thought.
To make this happen we have been training Dream Machine, a family of generative AI models capable of producing and manipulating video. We have seen explosive adoption of Dream Machine v1, far more than we ever anticipated and in ways we never expected. This means we should go faster.
We are launching the Dream Machine API in beta today with the latest family of Dream Machine v1.6 models. It includes high-quality text-to-video, image-to-video, video extension, and loop creation, as well as Luma's groundbreaking camera control capabilities.
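As a minimal sketch of the request flow, the example below submits a text-to-video generation and polls until a video URL is available. The endpoint URL, payload fields, and response shape are assumptions for illustration based on the description above; consult the API reference and open-source SDKs for the exact contract.

```python
import os
import time

import requests

# Illustrative endpoint; confirm the exact URL and fields in the API reference.
API_URL = "https://api.lumalabs.ai/dream-machine/v1/generations"
HEADERS = {"Authorization": f"Bearer {os.environ['LUMA_API_KEY']}"}


def generate_video(prompt: str) -> str:
    """Submit a text-to-video generation and poll until a video URL is ready."""
    resp = requests.post(API_URL, headers=HEADERS, json={"prompt": prompt})
    resp.raise_for_status()
    generation_id = resp.json()["id"]  # assumed response field

    # Generations are asynchronous, so poll the generation resource until it resolves.
    while True:
        status = requests.get(f"{API_URL}/{generation_id}", headers=HEADERS)
        status.raise_for_status()
        body = status.json()
        if body["state"] == "completed":
            return body["assets"]["video"]  # assumed response fields
        if body["state"] == "failed":
            raise RuntimeError(body.get("failure_reason", "generation failed"))
        time.sleep(5)


if __name__ == "__main__":
    print(generate_video("A hot air balloon drifting over a misty valley at sunrise"))
```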
Through Luma's research and engineering we aim to bring about the age of abundance in visual exploration and creation so more ideas can be tried, better narratives can be built, and diverse stories can be told by those who never could before. We are pricing the API for such a future at mere cents per video and will work to build even better efficiencies. This is just the start.
You start by buying credits that are consumed as you use the API. For higher tiers of usage or for enterprises, please reach out to us so we can scale the service to fit your needs. Inputs you provide and the outputs you generate are not used in training unless you explicitly ask us to do so.
To build intelligence that can keep pace with humans, we are working on models that fuse video, images, audio, 3D, and language in pre-training, much as the human brain learns and works. With such rich context, these models exhibit an understanding of causality and follow user intent like intelligent partners.
Building with the Dream Machine API gives you access to this creative intelligence and helps you bring value to markets by accomplishing previously impossible things.
We are excited to learn and grow with you!
Video is a powerful and persuasive medium, and to prevent misuse, we have developed a multi-layered moderation system that combines AI filters with human oversight. Our API allows you to tailor this system to match the preferences of your market and users. With ongoing feedback and learning, we continuously refine our approach to ensure our models and products comply with legal standards and are used constructively.
API terms of service