Mastering AI Video: How New Tools Are Shaping the Future of Digital Content

I’ve been experimenting with AI video tools and covering them on this channel for a long time, and right now is the most exciting and fun time in AI video yet.

Runway and Luma Labs have been in the spotlight, and I’ll cover some immediate real-world use cases for them, but my favorite tool is not one of those making the headlines.

I’ll get to that a little later, plus some amazing lip-syncing tools and some open-source models.

Runway Gen-3

Dominating the Timeline

I’ve got to start with Runway Gen-3 since it’s been dominating my timeline. It is the best text-to-video model available to use right now.

Something Gen-3 is particularly good at is rendering text, like these title sequences. It’s amazing at this, and I’ll show a few of my favorite examples.

Real-World Use Cases

This is such a perfect title sequence for a cooking channel.

This one also showcases how good it is at fluid simulation.

The physics in a lot of these is really good.

Adding some sound design would take these to the next level and make them production-ready title sequences.

Creating a Title Sequence

So I’ll try one for Futurepedia.

I’ve logged in already, and here’s a whole suite of image and video tools that Runway offers, but for Gen-3, I’ll just click right here, then type in a prompt: a title screen with dynamic movement.

The scene starts with intricate neon circuitry patterns that light up and move on a dark background.

Suddenly, the circuits converge and form the word Futurepedia with a glowing pulsating effect.

Then I can choose between 5 seconds and 10 seconds.

I’ll lower it to five, then click generate.

That’s all there is to it right now.

This is how that came back, and that looks amazing.
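
Runway also has a developer API if you’d rather script this than click through the UI. It exposes Gen-3 Alpha Turbo as image-to-video, so this is the scripted cousin of the flow above rather than an exact match. Here’s a minimal sketch, assuming the runwayml Python SDK and an API key in RUNWAYML_API_SECRET (the image URL is a placeholder):

```python
# pip install runwayml
# Rough sketch of Runway's developer API (image-to-video with gen3a_turbo).
# Assumes RUNWAYML_API_SECRET is set in the environment; model names and
# parameters may have changed since this was written.
import time

from runwayml import RunwayML

client = RunwayML()

task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.com/first-frame.jpg",  # placeholder URL
    prompt_text=(
        "A title screen with dynamic movement: intricate neon circuitry "
        "patterns light up, converge, and form the word Futurepedia with "
        "a glowing, pulsating effect."
    ),
    duration=5,        # the same 5s/10s choice as in the web UI
    ratio="1280:768",  # landscape; "768:1280" for portrait
)
task_id = task.id

# Poll until the generation finishes, then print the output video URL(s).
while True:
    task = client.tasks.retrieve(task_id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

print(task.status, getattr(task, "output", None))
```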

Tips and Tricks

Since Futurepedia is a longer word, I have gotten some misspellings, but it gets it right a lot of the time.

This prompt structure has worked well. I took one of the sample prompts from the Gen-3 prompting guide and modified it heavily, really just keeping the basic structure.

So I’d recommend trying to use the prompt structure they give here when you’re starting: camera movement, establishing scene, then additional details.

Hopefully, that will help you cut down on re-rolls.

They also have a lot of solid keywords in this guide to help give you some inspiration.
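
If you want to template that structure, here’s a tiny helper that assembles a prompt in the [camera movement]: [establishing scene]. [additional details] shape. The function itself is just my own convenience, not anything from Runway’s guide:

```python
def gen3_prompt(camera_movement: str, establishing_scene: str,
                additional_details: str) -> str:
    """Assemble a Gen-3 prompt following the guide's suggested structure:
    [camera movement]: [establishing scene]. [additional details]."""
    return f"{camera_movement}: {establishing_scene}. {additional_details}."

# The Futurepedia title-screen prompt from earlier, rebuilt from the template:
print(gen3_prompt(
    "Dynamic camera movement",
    "a title screen where intricate neon circuitry patterns light up "
    "and move on a dark background",
    "suddenly, the circuits converge and form the word Futurepedia "
    "with a glowing, pulsating effect",
))
```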

Transforming Between Scenes

Another thing it’s good at is transforming between scenes.

This is another prompt you can modify for yourself that comes back with great results pretty consistently.

I will copy that and paste it in here.

I’ll change this to a wormhole and into an alien civilization.

Dream Machine from Luma Labs

Image-to-Video Excellence

The best image-to-video right now is Dream Machine from Luma Labs.

You can also do text-to-video, which gets some good results, but where Luma shines is image-to-video, and even better than that is with keyframes.

Keyframes Example

I’ll show a couple of straight image-to-video examples first.

It’s easy to use: upload an image, then add a prompt.

“Volcano erupting contained within a drinking glass surrounded by peaceful tranquility.”

This was the original image, then here’s the result, and that looks perfect.

I’ll show a few more straight image-to-video examples, then we’ll move on to keyframes.

Using Keyframes

These are all within one or two tries. It does weird things on occasion, like this astronaut growing an extra finger, or here’s a weirder one, or how about a much weirder one.

Most of the time, they do look good without having to do too many re-rolls.

The next step is adding an ending frame, and you can start doing some cool stuff.

You upload a starting frame, then also upload an ending frame.

Now add a prompt for it to use to create the whole clip in between those frames.
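
Luma also has a Dream Machine API, and keyframes map onto it directly: frame0 is the starting frame and frame1 the ending frame. Here’s a minimal sketch using the lumaai Python SDK, assuming an API key in LUMAAI_API_KEY and publicly reachable image URLs:

```python
# pip install lumaai
# Sketch of a keyframed Dream Machine generation: a start frame, an end
# frame, and a prompt describing the clip in between. URLs are placeholders.
import time

from lumaai import LumaAI

client = LumaAI()  # reads LUMAAI_API_KEY from the environment

generation = client.generations.create(
    prompt="The starting scene smoothly morphs into the ending scene",
    keyframes={
        "frame0": {"type": "image", "url": "https://example.com/start.jpg"},
        "frame1": {"type": "image", "url": "https://example.com/end.jpg"},
    },
)

# Poll until the clip is ready, then grab the video URL.
while generation.state not in ("completed", "failed"):
    time.sleep(5)
    generation = client.generations.get(id=generation.id)

if generation.state == "completed":
    print(generation.assets.video)
```

Dropping frame1 gives you the straight image-to-video case from earlier.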

LTX Studio

Most Control and Fastest Speed

LTX Studio has the most control out of any of these platforms and the fastest speed.

It lets you build out an entire short film in a few minutes.

Creating a Short Film

How it works: you can start from scratch, input an entire script, or start from a simple prompt.

I’ll go with the prompt option for this example.

I will paste this in: it’s about a futuristic city controlled by AI entities and a hacker that can communicate with them.

He joins a resistance; they battle, all that good stuff.

Hit next, and it starts working.

It will show the basic story and the cast, then you can select a style.

Customization Options

You can change anything about them that you want.

I think this will be better without a last name.

I could change the essence, the appearance, or the clothes.

I can test out different voices if I want.

Here’s one of the voice samples: “As an organizer, I start from where the world is, as it is, not as I would like it to be.”

You can even face-swap these with a face you upload.

Generating the Film

I want to change the title to Neon Flux.

This looks good here, so I’ll click Start, and just from that, it builds out an entire short film with fully editable scenes.

This whole thing was generated in less time than it took either of the others to generate a single clip.

Krea

Abstract Creations

The platform I’ve been having the most fun with out of all of these is Krea.

With this one, you can do quite a bit for free.

It’s a lot different than the other ones we’ve covered.

It’s more for abstract stuff, not so much for realism, so it focuses more on these trippy, morphing-type animations, which I like a lot.

Creating Animations

I’ll go up to Generate, then Video.

They also have a creative upscaler that’s useful; I’ll show that one next.

These are the three images I want to use: a translucent, kind of bioluminescent flower, a jellyfish, and a dragon.

I think they look cool together, so I’ll click Add Keyframe, select the flower, and then I’ll add another keyframe for the jellyfish and another for the dragon.

You can make these longer or shorter; I’ll lengthen it a little bit.

Keyframe Example

Then you can add a text prompt.

I just want these to morph into each other, so I don’t need any longer prompts.

I’ll just say what they are, then drag the length of the prompt, and that’s where the transitions will start.

At least that’s where they’re supposed to be.

It’s not perfect.
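
To make the timeline mechanics concrete, here’s a toy model of it. This is purely illustrative and has nothing to do with Krea’s actual internals or API: each keyframe sits at a position on the timeline, and dragging a prompt’s length marks where the morph toward the next keyframe should begin.

```python
# A toy model of the keyframe timeline described above -- purely
# illustrative, not Krea's API. Each keyframe has a position on the
# timeline, and each prompt's dragged length marks where the morph
# toward the next keyframe is supposed to start.
from dataclasses import dataclass

@dataclass
class Keyframe:
    prompt: str          # short description ("flower", "jellyfish", "dragon")
    start_s: float       # where the keyframe sits on the timeline (seconds)
    prompt_len_s: float  # prompt length; morphing starts when it ends

def transition_starts(frames: list[Keyframe]) -> list[float]:
    """Return the time each morph into the next keyframe should begin."""
    return [kf.start_s + kf.prompt_len_s for kf in frames[:-1]]

timeline = [
    Keyframe("translucent bioluminescent flower", 0.0, 2.0),
    Keyframe("jellyfish", 3.0, 2.0),
    Keyframe("dragon", 6.0, 2.0),
]
print(transition_starts(timeline))  # [2.0, 5.0] -- in theory, anyway
```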

Adjusting the Animation

There are a couple of other settings for the aspect ratio and motion intensity.

I like it around 60 usually.

Then you can switch the looping on or off.

It defaults to where the end will morph back into the first frames, so it can just endlessly loop.

Then you have four styles to choose from.

I’ll start with Film, then click Generate Video, and in a second I’ll generate one in each of the styles for comparison.

Video Upscaler

Creative Upscaling

Now, I also want to demo their video upscaler.

This is not like a traditional upscale where it resembles the original video as much as possible, just at a higher resolution.

It does a creative upscale similar to how Magnific works but for video.

It stays close but kind of reimagines everything with AI.

Example Upscaling

So I’ll use a clip from an LTX video I made.

This one has a face that was just completely warped.

We’ll see if it can fix that.

I don’t need it upscaled that much, so I’ll leave that at 1.5 and the frame rate at 30 FPS. It writes out a prompt of what it thinks is in the video.

That looks good, so I’ll leave the strength and resemblance at the defaults first and start with Cinematic.

I’ll turn the loop off and hit Enhance.

This took around 30 seconds, and here’s the result.

You can see that there’s a face on here now.

That’s pretty amazing; it was able to fix that.

Lip Syncing Tools

Hedra and Live Portrait

Lip syncing has made some big improvements recently.

There are tons of impressive demos that have come out that we don’t have access to, but there are two platforms I want to show that we do have access to: one that’s completely free and one that’s free for five uses per day.

They’re Hedra and Live Portrait.

Hedra’s Expressive Avatars

Hedra has some of the most expressive talking avatars I’ve seen.

It’s pretty easy to use: either generate the audio or upload some of your own.

I’ll use the classic Fight Club line.

Now I’ll upload a character.

You can also generate one here if you need to.

Now I just generate the video. It works pretty quickly, and here’s what I got:

“The first rule of Fight Club is you do not talk about Fight Club. The second rule of Fight Club is you do not talk about Fight Club.”

Live Portrait Mapping

Live Portrait takes a different route.

You upload a reference video, and it will map that onto the avatar.

This gives you more control over the avatar’s expressiveness.

It’s on Hugging Face, so you can use it for free.

Upload a source portrait; I’ll give my face a try, then upload a driving video.

It works best with a straight-on shot like the ones they have in these examples.
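
Since it’s a public Space (and the underlying KwaiVGI repo is open source), you can also drive it from a script with gradio_client. The endpoint name and argument order below are my assumptions, not documented guarantees; run client.view_api() first and match whatever the Space actually exposes:

```python
# pip install gradio_client
from gradio_client import Client, handle_file

client = Client("KwaiVGI/LivePortrait")  # the public Space for the model
print(client.view_api())  # prints the real endpoint names and parameters

# Hypothetical call -- check the view_api() output and adjust api_name
# and the arguments to match what the Space actually exposes.
result = client.predict(
    handle_file("my_portrait.jpg"),  # source portrait (straight-on works best)
    handle_file("driving.mp4"),      # driving video whose motion is mapped on
    api_name="/execute_video",       # assumed endpoint name
)
print(result)  # path(s) to the rendered clip(s)
```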
