[-] x0x7 0 (+0|0)

They said the predecessor YouAnimate couldn't do the job despite using a 50GB GPU, but they never mention the GPU memory their own result needed.

I also noticed that the mesh model can sometimes end up with its center of gravity not over its feet. I don't know if they could ever constrain something like that.
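For what a constraint like that might even look like: here's a minimal sketch (entirely my own construction, not anything from the paper) that checks whether the character's center of mass, projected onto the ground plane, falls inside the support polygon formed by the foot contacts. The function names, the y-up convention, and the ray-casting test are all assumptions for illustration.

```python
# Hypothetical sketch of a "center of gravity over the feet" check.
# Nothing here comes from ExAvatar; it's a generic balance test.

def point_in_polygon(pt, poly):
    """Ray-casting test: is 2D point pt inside polygon poly (list of (x, y))?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def com_over_support(com_3d, foot_contact_2d):
    """Project the center of mass onto the ground plane (assuming y-up,
    so drop the y component) and test it against the support polygon."""
    com_ground = (com_3d[0], com_3d[2])
    return point_in_polygon(com_ground, foot_contact_2d)

# A roughly foot-sized support rectangle on the ground:
support = [(-0.1, -0.05), (0.1, -0.05), (0.1, 0.25), (-0.1, 0.25)]
com_over_support((0.0, 0.9, 0.1), support)   # balanced -> True
com_over_support((0.5, 0.9, 0.1), support)   # leaning way off -> False
```

A solver could penalize frames where this returns False, though whether that's tractable inside their optimization is exactly the open question.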

Also, sometimes it doesn't catch small hops. So if a dancer is using very small jumps to rotate, the end result is that you just start rotating. https://mks0601.github.io/ExAvatar/images/dr_hong/compressed.mp4
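A back-of-the-envelope check on why tiny hops get dropped (my own illustrative numbers, not anything measured from their videos): a very quick hop may only occupy a frame or two at typical capture frame rates, which is easy for a video-driven pipeline to miss entirely.

```python
# How many whole frames does a hop of a given duration span at a given fps?
# The 80 ms duration below is a made-up example of a quick pivot bounce.

def frames_in_hop(hop_duration_s, fps):
    """Number of whole frames a hop of the given duration occupies."""
    return int(hop_duration_s * fps)

frames_in_hop(0.08, 24)    # -> 1 frame at 24 fps: nearly invisible
frames_in_hop(0.08, 120)   # -> 9 frames at 120 fps: plenty to track
```

If that's the failure mode, it would support the FPS speculation in the reply below rather than a flaw in the model itself.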

The political deepfakes are going to get crazy. Thankfully it also seems to struggle with painting open mouths. But if someone wants to make someone talk, there are other methods you might be able to compose together.

The missing micro movements may be a camera and FPS issue; not that I'm defending their project, just my speculation. For faces, LivePortrait is 100% gas fire, fr fr. You can do video with LivePortrait as well, so I think the future workflow would be: you hire talent that is all-dancing, all-singing, and then this is essentially a poor man's mocap.

Now, if the rendered SMPL-X mesh is exposed, you could then address these micro movements and clean up the animation, which is how you do it with mocap anyway. Then, once that's cleaned up and good, you finish out the render with your subject and use something like LivePortrait to do the facial animations.
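The cleanup pass described above can be sketched very simply. This is my own minimal version (real mocap tools use far more sophisticated filters, often per-channel Butterworth or spline fits): a centered moving average over one joint-rotation channel, the most basic jitter-removal step you'd apply to raw capture data before hand-keying fixes.

```python
# Minimal mocap-style cleanup sketch (assumption: animation data is exposed
# as per-joint channels of floats, one value per frame).

def smooth_channel(values, window=5):
    """Centered moving average over one animation channel (list of floats).
    Window shrinks near the ends so the output has the same length."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        seg = values[lo:hi]
        out.append(sum(seg) / len(seg))
    return out

# Jittery channel in, smoother channel out; a flat channel passes through
# unchanged, which is the sanity check you want from any cleanup filter.
smooth_channel([0.0, 1.0, 0.0, 1.0, 0.0], window=3)
```

The trade-off is the usual one: a wider window kills more jitter but also erases exactly the micro movements you were trying to recover, so in practice you'd smooth selectively rather than globally.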