MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation. ECCV (21), volume 12366 of Lecture Notes in Computer Science, pages 700–717. Springer, 2020.


Other publications of authors with the same name

StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-Based Generator. CVPR, pages 1505–1515. IEEE, 2023.

ReSyncer: Rewiring Style-based Generator for Unified Audio-Visually Synced Facial Performer. CoRR, 2024.

Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation. CoRR, 2023.

ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces. ICCV, pages 21707–21717. IEEE, 2023.

EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model. SIGGRAPH (Conference Paper Track), pages 61:1–61:10. ACM, 2022.

Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation. ICMI, pages 388–396. ACM, 2023.

Robust Video Portrait Reenactment via Personalized Representation Quantization. AAAI, pages 2564–2572. AAAI Press, 2023.

Audio-Driven Emotional Video Portraits. CVPR, pages 14080–14089. Computer Vision Foundation / IEEE, 2021.

Efficient Video Portrait Reenactment via Grid-based Codebook. SIGGRAPH (Conference Paper Track), pages 66:1–66:9. ACM, 2023.