Everybody Dance Now


“This paper presents a simple method for “do as I do” motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis…”

Source: https://carolineec.github.io/everybody_dance_now/

GitHub: https://github.com/carolineec/EverybodyDanceNow
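The pose-as-intermediate-representation idea hinges on mapping source keypoints into the target subject's coordinate frame before the pose-to-appearance network sees them. A minimal sketch of such a global pose normalization follows; the linear scale-and-translate between head and ankle heights is a simplified, hypothetical version of this step, not the authors' exact implementation:

```python
import numpy as np

def normalize_pose(src_pts, src_ankle_y, src_head_y, tgt_ankle_y, tgt_head_y):
    """Map source keypoints (N x 2 array of x, y) into the target's frame.

    Matches the vertical head-to-ankle span of the source skeleton to the
    target's, and pins the scaled skeleton to the target's ankle line.
    A hypothetical simplification of the paper's global pose normalization.
    """
    # Ratio of target body height to source body height.
    scale = (tgt_ankle_y - tgt_head_y) / (src_ankle_y - src_head_y)
    out = src_pts.astype(float).copy()
    # Scale y about the source ankle line, then translate to the target's.
    out[:, 1] = (out[:, 1] - src_ankle_y) * scale + tgt_ankle_y
    # Scale x by the same factor to preserve the skeleton's aspect ratio.
    out[:, 0] = out[:, 0] * scale
    return out
```

A taller target simply yields a scale greater than one, stretching the source skeleton while keeping the feet anchored at the target's ankle height.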

October 9, 2020