Big-sleep video #23
Unanswered
kingdrippa asked this question in Q&A
-
Hey! Huge fan of this repo and all the hard work that went into making it! I've been messing around with the notebook nonstop...
I'm coming at this code from somewhat of an outsider's perspective, looking to make video art that uses semantically aware rendering...
For the most part I know my way around Python and the basic components of a GAN, and I'm generally familiar with the concept of latent-space transformations.
I'm wondering if someone can help me answer a few questions:
Is there a way to grab the latent vector for each trained image and then manipulate it? For example, fading between the z latents for two different prompts (see the first sketch below)...
Is there a way to use the above to cut down on rendering time per completed 'frame' in the transition? In other words, is there some part of the process that only needs to be done once, training-wise, so that subsequent runs just use a cached encoded vector?
What's a good way to mess around with making class-level latent changes, like rotation etc. (see the second sketch below)?
Lastly, I'm pretty sure this caps out at 512px, but is there a way to use a 1024px BigGAN model?
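To make the first two questions concrete, here's a rough sketch of what I have in mind. It assumes a BigGAN-style `generator(z, class_vec, truncation)` callable and latent pairs saved from two separate prompt optimizations; the names are just placeholders, not big-sleep's actual API:

```python
import torch

def lerp(a, b, t):
    # plain linear interpolation; slerp can look smoother for z vectors
    return (1 - t) * a + t * b

@torch.no_grad()
def render_transition(generator, latents_a, latents_b, n_frames=60, truncation=1.0):
    # latents_a / latents_b are (z, class_vector) pairs saved after optimizing
    # each prompt once; no further CLIP-guided training happens per frame
    z_a, cls_a = latents_a
    z_b, cls_b = latents_b
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)
        z = lerp(z_a, z_b, t)
        cls_vec = lerp(cls_a, cls_b, t)
        # each frame is a single generator forward pass (cheap), not a
        # full optimization run (expensive, done once per prompt)
        frames.append(generator(z, cls_vec, truncation))
    return frames
```

The idea being that the expensive optimization would happen once per prompt, and each transition frame would then just be one forward pass.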
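And for the class-level question, something along these lines: again just a hypothetical sketch of nudging the optimized class vector toward another ImageNet class while keeping the noise z fixed:

```python
import torch

@torch.no_grad()
def blend_in_class(generator, z, cls_vec, other_class_idx, amount=0.3, truncation=1.0):
    # mix a one-hot vector for another ImageNet class into the optimized
    # class vector while keeping the noise z fixed
    other = torch.zeros_like(cls_vec)
    other[..., other_class_idx] = 1.0
    mixed = (1 - amount) * cls_vec + amount * other
    return generator(z, mixed, truncation)
```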
Anyways, keep up the good work, this is amazing!
Replies: 1 comment
-
@lucidrains would love any insight you can throw my way!!