Text to Video

In DiffSynth Studio, we can use several video diffusion models to generate videos from text prompts.

Example: Text-to-Video using CogVideoX-5B (Experimental)

See cogvideo_text_to_video.py.

First, we generate a video using the prompt "an astronaut riding a horse on Mars".

(Video: 1_video_1.mp4)
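
A minimal sketch of this first step, based on the ModelManager / CogVideoPipeline pattern used across DiffSynth Studio. The model path and the sampling parameters below are assumptions, not the exact values in cogvideo_text_to_video.py.

```python
# Sketch: text-to-video with CogVideoX-5B.
# The model path and sampling parameters are assumptions; see
# cogvideo_text_to_video.py for the script's exact settings.
import torch
from diffsynth import ModelManager, CogVideoPipeline, save_video

model_manager = ModelManager(torch_dtype=torch.bfloat16, device="cuda")
model_manager.load_models(["models/CogVideo/CogVideoX-5b"])  # local weights (assumed path)
pipe = CogVideoPipeline.from_model_manager(model_manager)

video = pipe(
    prompt="an astronaut riding a horse on Mars",
    height=480, width=720,      # CogVideoX-5B's native resolution
    num_inference_steps=50,
    cfg_scale=7.0,
)
save_video(video, "1_video_1.mp4", fps=8)
```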

Then, we edit the video to turn the astronaut into a robot.

(Video: 1_video_2.mp4)
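
The editing step can be sketched as a video-to-video pass: continuing from the snippet above, the generated frames are fed back into the same pipeline with a new prompt and partial denoising. The edited prompt, the frame count, and the `input_video` / `denoising_strength` parameter names here are assumptions.

```python
# Sketch: video-to-video editing with the same pipeline.
# `pipe` is the CogVideoPipeline from the previous snippet; the edited
# prompt and the input_video / denoising_strength parameters are assumptions.
from diffsynth import VideoData, save_video

source = VideoData(video_file="1_video_1.mp4")
video = pipe(
    prompt="a robot riding a horse on Mars",
    input_video=[source[i] for i in range(49)],  # CogVideoX generates 49 frames
    denoising_strength=0.7,  # strong enough to change the subject, weak enough to keep the motion
    height=480, width=720,
)
save_video(video, "1_video_2.mp4", fps=8)
```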

Next, we upscale the video using the model itself.

(Video: 1_video_3.mp4)
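
Self-upscaling can follow the same recipe, continuing from the snippets above: reload the edited video at the target resolution and run another video-to-video pass with a low denoising strength, so the model refines detail without changing the content. The target resolution and strength values are illustrative assumptions.

```python
# Sketch: self-upscaling via a low-strength video-to-video pass.
# The target resolution and denoising strength are illustrative assumptions.
low_res = VideoData(video_file="1_video_2.mp4", height=960, width=1440)  # resized on load
video = pipe(
    prompt="a robot riding a horse on Mars",
    input_video=[low_res[i] for i in range(49)],
    denoising_strength=0.4,  # low strength: add detail, preserve content
    height=960, width=1440,
)
save_video(video, "1_video_3.mp4", fps=8)
```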

Finally, we make the video smoother by interpolating frames.

(Video: 1_video_4.mp4)
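
The script most likely uses a learned interpolation model for this step; the cross-fade below is only a naive stand-in that illustrates the idea of doubling the frame rate by inserting intermediate frames.

```python
# Sketch: naive frame interpolation by cross-fading adjacent frames.
# A learned interpolator (e.g. RIFE) produces motion-aware in-between
# frames; this simple blend only illustrates doubling the frame rate.
import imageio
from PIL import Image
from diffsynth import save_video

frames = [Image.fromarray(f) for f in imageio.mimread("1_video_3.mp4", memtest=False)]

smooth = []
for a, b in zip(frames, frames[1:]):
    smooth.append(a)
    smooth.append(Image.blend(a, b, alpha=0.5))  # midpoint frame
smooth.append(frames[-1])

save_video(smooth, "1_video_4.mp4", fps=16)  # twice the original frame rate
```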

Here is another example.

First, we generate a video using the prompt "a dog is running".

(Video: video_1.mp4)

Then, we add a blue collar to the dog.

(Video: video_2.mp4)

Next, we upscale the video using the model itself.

(Video: video_3.mp4)

Finally, we make the video smoother by interpolating frames.

(Video: video_4.mp4)

Example: Text-to-Video using AnimateDiff

Generate a video using a Stable Diffusion model together with an AnimateDiff motion module. This approach lets us break the usual limit on the number of frames. See sd_text_to_video.py; a sketch follows the clip below.

(Video: lightning.mp4)
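
A minimal sketch of the idea, assuming the SDVideoPipeline pattern in DiffSynth Studio: an AnimateDiff motion module is loaded alongside a Stable Diffusion checkpoint, and long videos are denoised in overlapping sliding windows. The model paths, the prompt, and the `animatediff_batch_size` / `animatediff_stride` parameters are assumptions modeled on sd_text_to_video.py.

```python
# Sketch: long text-to-video with Stable Diffusion + AnimateDiff.
# Model paths, the prompt, and the sliding-window parameters
# (animatediff_batch_size / animatediff_stride) are assumptions.
import torch
from diffsynth import ModelManager, SDVideoPipeline, save_video

model_manager = ModelManager(torch_dtype=torch.float16, device="cuda")
model_manager.load_models([
    "models/stable_diffusion/v1-5-pruned-emaonly.safetensors",  # SD base model
    "models/AnimateDiff/mm_sd_v15_v2.ckpt",                     # motion module
])
pipe = SDVideoPipeline.from_model_manager(model_manager)

video = pipe(
    prompt="lightning storm over a city at night, cinematic",
    negative_prompt="blurry, low quality",
    num_frames=128,             # far beyond AnimateDiff's native 16-frame window
    animatediff_batch_size=16,  # frames denoised per sliding window
    animatediff_stride=8,       # overlapping windows are blended together
    height=512, width=512,
    num_inference_steps=25,
    cfg_scale=7.5,
)
save_video(video, "lightning.mp4", fps=24)
```

The overlap between windows is what keeps long clips temporally consistent; without it, each 16-frame chunk would be generated independently.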