Reconstructing and reenacting digital human heads has applications in VR/AR, teleconferencing, games, and the movie industry. A recent paper on arXiv.org presents Neural Head Avatars, an explicit shape and appearance representation of the complete human head.
Coordinate-based multi-layer perceptrons are employed to predict 3D meshes and dynamic textures depending on the facial expression and pose of the subject. The explicit head representation can be optimized from a short monocular RGB video sequence using color-dependent and color-independent energy terms. The optimization allows the disentanglement of surface shape and color detail.
The resulting controllable avatar generates novel poses and expressions while preserving high photo-realism, and produces photorealistic results even under large viewpoint changes.
From the paper's abstract: We present Neural Head Avatars, a novel neural representation that explicitly models the surface geometry and appearance of an animatable human avatar that can be used for teleconferencing in AR/VR or other applications in the movie or games industry that rely on a digital human. Our representation can be learned from a monocular RGB portrait video that features a range of different expressions and views. Specifically, we propose a hybrid representation consisting of a morphable model for the coarse shape and expressions of the face, and two feed-forward networks, predicting vertex offsets of the underlying mesh as well as a view- and expression-dependent texture. We demonstrate that this representation is able to accurately extrapolate to unseen poses and view points, and generates natural expressions while providing sharp texture details. Compared to previous works on head avatars, our method provides a disentangled shape and appearance model of the complete human head (including hair) that is compatible with the standard graphics pipeline. Moreover, it quantitatively and qualitatively outperforms current state of the art in terms of reconstruction quality and novel-view synthesis.
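To make the hybrid representation concrete, here is a minimal NumPy sketch of the overall idea: a coarse morphable-model mesh is refined by one feed-forward network predicting per-vertex offsets from an expression/pose code, while a second network predicts a view- and expression-dependent color. All names, layer sizes, and input encodings here are illustrative assumptions, not the paper's actual architecture or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Simple feed-forward network: linear layers with ReLU on hidden layers."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU
    return x

def init_mlp(sizes):
    """Randomly initialized weights; in practice these would be optimized."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

V = 100                                    # toy vertex count (real meshes are larger)
coarse_verts = rng.normal(0, 1, (V, 3))    # coarse morphable-model geometry
expr_code = rng.normal(0, 1, 16)           # hypothetical expression/pose code
view_dir = np.array([0.0, 0.0, 1.0])       # viewing direction

# Geometry network: (vertex position, expression code) -> 3D vertex offset.
geo_net = init_mlp([3 + 16, 64, 3])
geo_in = np.concatenate([coarse_verts, np.tile(expr_code, (V, 1))], axis=1)
refined_verts = coarse_verts + mlp(geo_net, geo_in)  # refined explicit mesh

# Texture network: (vertex position, expression code, view dir) -> RGB color.
tex_net = init_mlp([3 + 16 + 3, 64, 3])
tex_in = np.concatenate(
    [refined_verts, np.tile(expr_code, (V, 1)), np.tile(view_dir, (V, 1))],
    axis=1)
colors = 1.0 / (1.0 + np.exp(-mlp(tex_net, tex_in)))  # sigmoid maps to [0, 1]
```

Because the output is an explicit mesh plus per-vertex (or per-texel) colors, the avatar remains compatible with a standard rasterization pipeline, unlike purely implicit neural representations.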
Research paper: Grassal, P.-W., Prinzler, M., Leistner, T., Rother, C., Nießner, M., and Thies, J., “Neural Head Avatars from Monocular RGB Videos”, 2021. Link to the article: https://arxiv.org/abs/2112.01554
Link to the project site: https://philgras.github.io/neural_head_avatars/neural_head_avatars.html