Facial Animation Retargeting by Unsupervised Learning of Graph Convolutional Networks

NICOGRAPH International 2024, full paper

Authors: Yuhao Dou and Tomohiko Mukai

Download: [preprint] [slides]

Abstract: This paper proposes an unsupervised framework for retargeting human facial animations to different characters. Our method uses a branching structure of two parallel autoencoders and a variant of generative adversarial networks. The two autoencoder branches, composed of graph convolutional networks, share a common latent space through which retargeting between different mesh structures can be performed. The shared latent codes are obtained by graph pooling operators, and the character face is reconstructed from the latent codes by unpooling operators. The graph pooling and unpooling operators are designed based on the multiple landmarks used in optical facial motion capture systems. The GAN-based unsupervised learning method requires no paired training animation data between source and target characters. Our experimental results demonstrate that the proposed framework provides a reasonable estimation of a target facial expression that mimics a source character.
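The retargeting path described in the abstract, source encoder to shared latent space to target decoder, can be sketched as follows. This is a minimal illustration only: the dimensions are arbitrary, and the landmark-based graph pooling and unpooling operators (which the paper learns from motion-capture landmarks within GCN autoencoders) are stood in for here by hypothetical fixed linear operators.

```python
import numpy as np

# Hypothetical sizes: source mesh with Ns vertices, target mesh with Nt
# vertices, and a shared latent space of K landmark-based codes.
rng = np.random.default_rng(0)
Ns, Nt, K = 12, 8, 4

# Pooling/unpooling sketched as row-normalized linear operators; in the
# paper these are derived from facial mocap landmarks, not random.
pool_src = rng.random((K, Ns))
pool_src /= pool_src.sum(axis=1, keepdims=True)
unpool_tgt = rng.random((Nt, K))
unpool_tgt /= unpool_tgt.sum(axis=1, keepdims=True)

def encode_source(verts):
    """Source branch encoder: source mesh -> shared latent codes."""
    return pool_src @ verts

def decode_target(latent):
    """Target branch decoder: shared latent codes -> target mesh."""
    return unpool_tgt @ latent

source_verts = rng.random((Ns, 3))    # one source facial frame (xyz per vertex)
latent = encode_source(source_verts)  # shared latent codes, shape (K, 3)
target_verts = decode_target(latent)  # retargeted target frame, shape (Nt, 3)
```

Because both branches read from and write to the same latent space, a frame encoded by one character's branch can be decoded by the other's, which is what allows retargeting between meshes with different vertex counts and connectivity.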

© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
