JIFF: Jointly-aligned Implicit Face Function for
High Quality Single View Clothed Human Reconstruction
Yukang Cao1
Guanying Chen2
Kai Han1
Wenqi Yang1
Kwan-Yee K. Wong1
1The University of Hong Kong, 2The Future Network of Intelligence Institute (FNii), CUHK-Shenzhen
CVPR 2022 · [Paper] · [Code]

Abstract

This paper addresses the problem of single-view 3D human reconstruction. Recent implicit function based methods have shown impressive results, but they fail to recover fine face details in their reconstructions. This largely degrades user experience in applications like 3D telepresence. In this paper, we focus on improving the quality of the face in the reconstruction and propose a novel Jointly-aligned Implicit Face Function (JIFF) that combines the merits of the implicit function based approach and the model based approach. We employ a 3D morphable face model as our shape prior and compute space-aligned 3D features that capture detailed face geometry information. Such space-aligned 3D features are combined with pixel-aligned 2D features to jointly predict an implicit face function for high quality face reconstruction. We further extend our pipeline and introduce a coarse-to-fine architecture to predict high quality texture for our detailed face model. Extensive evaluations have been carried out on public datasets, and our proposed JIFF demonstrates superior performance (both quantitatively and qualitatively) over existing state-of-the-art methods.
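
To make the core idea concrete, below is a minimal PyTorch sketch of how space-aligned 3D features and pixel-aligned 2D features might be fused by an MLP to predict occupancy for 3D query points. This is an illustrative assumption, not the released implementation; the class name JointImplicitFunction, the feature dimensions, and the inputs feat_2d, feat_3d, and z are all hypothetical.

import torch
import torch.nn as nn

class JointImplicitFunction(nn.Module):
    # Hypothetical sketch: fuse pixel-aligned 2D features with
    # space-aligned 3D features (e.g., sampled around a fitted 3DMM)
    # to predict inside/outside occupancy for 3D query points.
    def __init__(self, dim_2d=256, dim_3d=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim_2d + dim_3d + 1, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),  # occupancy probability in [0, 1]
        )

    def forward(self, feat_2d, feat_3d, z):
        # feat_2d: (B, N, dim_2d) image features sampled at each point's 2D projection
        # feat_3d: (B, N, dim_3d) volumetric features sampled at each 3D point
        # z:       (B, N, 1) depth of each query point in camera space
        return self.mlp(torch.cat([feat_2d, feat_3d, z], dim=-1))

# Usage: occupancy for 1024 query points in a batch of 2 images.
model = JointImplicitFunction()
occ = model(torch.randn(2, 1024, 256), torch.randn(2, 1024, 64), torch.randn(2, 1024, 1))
print(occ.shape)  # torch.Size([2, 1024, 1])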

Video will be released soon.


Method


Results

Paper

Y. Cao, G. Chen, K. Han, W. Yang, K.-Y. K. Wong.
JIFF: Jointly-aligned Implicit Face Function for High Quality Single View Clothed Human Reconstruction.
In CVPR, 2022. [pdf]

@inproceedings{cao2022jiff,
author    = {Yukang Cao and Guanying Chen and Kai Han and Wenqi Yang and Kwan-Yee K. Wong},
title     = {JIFF: Jointly-aligned Implicit Face Function for High Quality Single View Clothed Human Reconstruction},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year      = {2022},
}



Code and models will be available on GitHub!


Acknowledgements

This work was partially supported by a Hong Kong RGC GRF grant (project no. 17203119), the National Key R&D Program of China (No. 2018YFB1800800), and Basic Research Project No. HZQB-KCZYZ2021067 of the Hetao Shenzhen-HK S&T Cooperation Zone. We thank Yuanlu Xu for providing the ARCH and ARCH++ results.


Webpage template borrowed from Split-Brain Autoencoders, CVPR 2017.