DiffusionAvatars: Deferred Diffusion for High-fidelity 3D Head Avatars

Tobias Kirschstein, Simon Giebenhain, Matthias Nießner
Technical University of Munich

[Teaser video: Animated DiffusionAvatars]

Abstract

DiffusionAvatars synthesizes a high-fidelity 3D head avatar of a person, offering intuitive control over both pose and expression.

We propose a diffusion-based neural renderer that leverages generic 2D priors to produce compelling images of faces. For coarse guidance of the expression and head pose, we render a neural parametric head model (NPHM) from the target viewpoint, which acts as a proxy geometry of the person. Additionally, to enhance the modeling of intricate facial expressions, we condition DiffusionAvatars directly on the expression codes obtained from NPHM via cross-attention. Finally, to synthesize consistent surface details across different viewpoints and expressions, we rig learnable spatial features to the head’s surface via TriPlane lookup in NPHM’s canonical space.
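For intuition, the following sketch shows how a TriPlane lookup of surface features at canonical coordinates could work. This is not the authors' implementation; the plane resolution, feature dimension, and the choice to sum the three plane features are assumptions for illustration.

import torch
import torch.nn.functional as F

def triplane_lookup(planes: torch.Tensor, x_can: torch.Tensor) -> torch.Tensor:
    """Sample per-point features from three axis-aligned feature planes.

    planes: (3, C, R, R) learnable feature planes for the XY, XZ, and YZ planes.
    x_can:  (N, 3) canonical coordinates in [-1, 1]^3, e.g. obtained by
            rasterizing the NPHM mesh into a canonical-coordinate image.
    Returns (N, C) surface features, aggregated here by summation (an assumption).
    """
    # Project each 3D point onto the three coordinate planes.
    coords_2d = torch.stack(
        [x_can[:, [0, 1]], x_can[:, [0, 2]], x_can[:, [1, 2]]], dim=0
    )                                                   # (3, N, 2)
    grid = coords_2d.unsqueeze(1)                       # (3, 1, N, 2) for grid_sample
    feats = F.grid_sample(planes, grid, mode="bilinear",
                          align_corners=True)           # (3, C, 1, N)
    return feats.squeeze(2).sum(dim=0).transpose(0, 1)  # (N, C)

# Example: 32-channel planes at resolution 128, queried at 1000 surface points.
planes = torch.randn(3, 32, 128, 128, requires_grad=True)
x_can = torch.rand(1000, 3) * 2 - 1
features = triplane_lookup(planes, x_can)  # (1000, 32)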

We train DiffusionAvatars on RGB videos and corresponding tracked NPHM meshes of a person and test the obtained avatars in both self-reenactment and animation scenarios. Our experiments demonstrate that DiffusionAvatars generates temporally consistent and visually appealing videos for novel poses and expressions of a person, outperforming existing approaches.

Video

Avatar Animation via Expression Transfer

A DiffusionAvatar can be animated by expression transfer. We obtain the expression code zexp by fitting NPHM to a video sequence of a different person. Our diffusion-based neural renderer then transfers the expression to the avatar.


[Video: source sequence (left), animated DiffusionAvatar (right)]

Avatar Animation via NPHM

A DiffusionAvatar can also be controlled via the underlying NPHM. We obtain expression codes zexp by interpolating between several target expressions. Using rasterization and our diffusion-based neural renderer, the expression codes are translated into a realistic avatar with viewpoint control.
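A minimal sketch of how such an interpolation trajectory over expression codes could be generated; the code dimension and the number of in-between frames are illustrative assumptions, not values from the paper.

import numpy as np

def interpolate_expressions(targets: np.ndarray, steps_per_segment: int = 30) -> np.ndarray:
    """Linearly interpolate between consecutive NPHM expression codes.

    targets: (K, D) array of K target expression codes z_exp.
    Returns a sequence of (K - 1) * steps_per_segment + 1 codes that can be
    fed frame by frame to the neural renderer.
    """
    frames = []
    for a, b in zip(targets[:-1], targets[1:]):
        t = np.linspace(0.0, 1.0, steps_per_segment, endpoint=False)[:, None]
        frames.append((1.0 - t) * a + t * b)
    frames.append(targets[-1:])
    return np.concatenate(frames, axis=0)

# Example: four target expressions with a hypothetical 100-dimensional code.
z_targets = np.random.randn(4, 100)
z_sequence = interpolate_expressions(z_targets)  # (91, 100)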


Method Overview

DiffusionAvatars decodes an NPHM expression code zexp along two routes to obtain a realistic image:

  1. We first extract an NPHM mesh and rasterize it from the desired viewpoint, giving us canonical coordinate, depth, and normal renderings of the head mesh.
  2. The canonical coordinates xcan are used to look up spatial features in a TriPlane structure, rigging the features onto the mesh surface. Together with the rasterizer output, these mapped features form the input to the ControlNet part of DiffusionAvatars.
  3. The second route for the expression code goes through a linear layer. It yields expression tokens that are used in a newly added cross-attention layer inside the pre-trained latent diffusion model. Intuitively, the rasterized inputs encode pose, shape, and coarse expression, while the direct expression conditioning captures finer facial details (see the sketch after this list).
  4. The final image is synthesized by iteratively denoising Gaussian noise.
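The sketch below illustrates how the two conditioning routes could be assembled: the rasterized maps and the rigged TriPlane features are stacked channel-wise into the ControlNet input, while a linear layer turns zexp into cross-attention tokens for the latent diffusion U-Net. It is a schematic under assumed tensor shapes, token counts, and channel layouts, not the released code.

import torch
import torch.nn as nn

class ExpressionTokenizer(nn.Module):
    """Maps an NPHM expression code z_exp to a set of cross-attention tokens.

    The number of tokens and their dimension are assumptions for illustration.
    """
    def __init__(self, z_dim: int = 100, n_tokens: int = 8, token_dim: int = 768):
        super().__init__()
        self.n_tokens, self.token_dim = n_tokens, token_dim
        self.proj = nn.Linear(z_dim, n_tokens * token_dim)

    def forward(self, z_exp: torch.Tensor) -> torch.Tensor:  # (B, z_dim)
        return self.proj(z_exp).view(-1, self.n_tokens, self.token_dim)

def build_controlnet_input(x_can, depth, normals, surface_feats):
    """Stack the rasterizer outputs and the rigged TriPlane features channel-wise.

    All inputs are (B, C_i, H, W) image-space maps obtained by rasterizing the
    NPHM mesh from the target viewpoint; the exact channel layout is an assumption.
    """
    return torch.cat([x_can, depth, normals, surface_feats], dim=1)

# Example shapes: batch of 1, 512x512 renderings, 32 rigged feature channels.
B, H, W = 1, 512, 512
cond = build_controlnet_input(
    torch.rand(B, 3, H, W),   # canonical coordinates
    torch.rand(B, 1, H, W),   # depth
    torch.rand(B, 3, H, W),   # normals
    torch.rand(B, 32, H, W),  # TriPlane features looked up at x_can
)
tokens = ExpressionTokenizer()(torch.randn(B, 100))  # (1, 8, 768)
# `cond` would guide the ControlNet branch and `tokens` the added cross-attention
# layers of the latent diffusion U-Net, which then iteratively denoises Gaussian
# noise in latent space to produce the final image.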

Animate a DiffusionAvatar yourself

Here is an interactive viewer that interpolates between four different expressions. Drag the blue cursor to change zexp, which animates the avatar on the right.

[Interactive viewer: latent expression coordinates on the left (bilinear interpolation between the four corner expressions), animated DiffusionAvatar on the right]
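A minimal sketch of the bilinear blending such a viewer could perform, assuming the four corner codes are ordered top-left, top-right, bottom-left, bottom-right and the cursor position is given as (u, v) in [0, 1]^2.

import numpy as np

def bilinear_expression(corners: np.ndarray, u: float, v: float) -> np.ndarray:
    """Bilinearly blend four corner expression codes from a 2D cursor position.

    corners: (4, D) codes ordered [top-left, top-right, bottom-left, bottom-right]
             (the ordering is an assumption); u, v in [0, 1] are cursor coordinates.
    """
    top = (1.0 - u) * corners[0] + u * corners[1]
    bottom = (1.0 - u) * corners[2] + u * corners[3]
    return (1.0 - v) * top + v * bottom

# Example: a cursor in the middle yields the mean of the four corner codes.
corners = np.random.randn(4, 100)
z_exp = bilinear_expression(corners, u=0.5, v=0.5)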

BibTeX

@article{kirschstein2023diffusionavatars,
  title={DiffusionAvatars: Deferred Diffusion for High-fidelity 3D Head Avatars},
  author={Kirschstein, Tobias and Giebenhain, Simon and Nie{\ss}ner, Matthias},
  journal={arXiv preprint arXiv:2311.18635},
  year={2023}
}