Avat3r
Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars

1Technical University of Munich, 2Meta Reality Labs

Avat3r takes 4 input images of a person's face and generates an animatable 3D head avatar in a single forward pass. The resulting 3D head representation can be animated at interactive rates. The entire avatar creation process, from capturing 4 smartphone pictures to the final result, takes only minutes.

Abstract

Traditionally, creating photo-realistic 3D head avatars requires a studio-level multi-view capture setup and expensive optimization during test-time, limiting the use of digital human doubles to the VFX industry or offline renderings.

To address this shortcoming, we present Avat3r, which regresses a high-quality and animatable 3D head avatar from just a few input images, vastly reducing compute requirements during inference. More specifically, we make Large Reconstruction Models animatable and learn a powerful prior over 3D human heads from a large multi-view video dataset. For better 3D head reconstructions, we employ position maps from DUSt3R and generalized feature maps from the human foundation model Sapiens. To animate the 3D head, our key discovery is that simple cross-attention to an expression code is already sufficient. Finally, we increase robustness by feeding input images with different expressions to our model during training, enabling the reconstruction of 3D head avatars from inconsistent inputs, e.g., an imperfect phone capture with accidental movement, or frames from a monocular video.

We compare Avat3r with current state-of-the-art methods for few-input and single-input scenarios, and find that our method has a competitive advantage in both tasks. Finally, we demonstrate the wide applicability of our proposed model by creating 3D head avatars from images of various sources: smartphone captures, single images, and even out-of-domain inputs like antique busts.

Video

Method Overview

Avat3r follows the framework of Large Reconstruction Models (LRMs) to build a Vision Transformer that predicts a pixel-aligned 3D Gaussian for each pixel in the 4 input images. The final 3D representation is then simply the aggregation of all predicted 3D Gaussians:

  1. In addition to the 4 input images, we provide 3D position maps from DUSt3R and feature maps from Sapiens to simplify the cross-view matching task of the subsequent Vision Transformer.
  2. All inputs are patchified into tokens and fed into the Vision Transformer backbone, where dense self-attention learns to infer 3D structure.
  3. To make the head animatable, we use simple cross-attention layers that attend to a short expression code sequence. The sequence is decoded from the provided expression code with a simple MLP. The expression code itself can be any descriptor of the facial state, such as an expression code from the FLAME face model (see the cross-attention sketch after this list).
  4. The resulting intermediate feature maps are decoded into Gaussian position, scale, rotation, color, and opacity maps, and then upsampled to the original input image resolution.
  5. In addition, we add skip connections to the predicted position and color maps to further simplify the model's task.
  6. Gaussians that belong to pixels where DUSt3R's confidence exceeds a pre-defined threshold are collected and can be rendered from arbitrary viewpoints (see the gathering sketch after this list). This thresholding naturally adapts the number of Gaussians to the identity: people with a larger silhouette, e.g., due to voluminous hair, will be modelled with more Gaussians than a bald person.
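For concreteness, below is a minimal PyTorch sketch of the expression cross-attention from step 3. Module names, dimensions, and the number of expression tokens are illustrative assumptions and are not taken from the Avat3r code; the sketch only shows the idea of decoding an expression code into a short token sequence and letting the ViT tokens cross-attend to it.

import torch
import torch.nn as nn

class ExpressionCrossAttention(nn.Module):
    # Injects expression information into ViT patch tokens via cross-attention.
    def __init__(self, token_dim=768, expr_dim=100, n_expr_tokens=8, n_heads=8):
        super().__init__()
        # MLP that decodes the expression code (e.g., FLAME) into a short token sequence.
        self.expr_mlp = nn.Sequential(
            nn.Linear(expr_dim, 512),
            nn.GELU(),
            nn.Linear(512, n_expr_tokens * token_dim),
        )
        self.n_expr_tokens = n_expr_tokens
        self.norm = nn.LayerNorm(token_dim)
        self.cross_attn = nn.MultiheadAttention(token_dim, n_heads, batch_first=True)

    def forward(self, image_tokens, expr_code):
        # image_tokens: (B, N, token_dim) patch tokens from the Vision Transformer
        # expr_code:    (B, expr_dim) descriptor of the facial state
        B = expr_code.shape[0]
        expr_tokens = self.expr_mlp(expr_code).view(B, self.n_expr_tokens, -1)
        # Image tokens query the expression tokens; the residual keeps the static path intact.
        attended, _ = self.cross_attn(self.norm(image_tokens), expr_tokens, expr_tokens)
        return image_tokens + attended

# Example: condition 4 views' worth of tokens on a FLAME-style expression code.
tokens = torch.randn(2, 4 * 1024, 768)
expression = torch.randn(2, 100)
tokens = ExpressionCrossAttention()(tokens, expression)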
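The confidence-based gathering from step 6 reduces to a boolean mask over the per-pixel attribute maps. The tensor layout and threshold value below are assumptions for illustration.

import torch

def gather_gaussians(position, scale, rotation, color, opacity, confidence, threshold=1.5):
    # All attribute maps are (B, C, H, W); confidence is (B, 1, H, W) from DUSt3R.
    mask = (confidence > threshold).squeeze(1)         # (B, H, W) boolean mask
    def select(x):
        # Move channels last, then keep only confident pixels -> (N_kept, C).
        return x.permute(0, 2, 3, 1)[mask]
    return {
        "positions": select(position),   # (N, 3)
        "scales":    select(scale),      # (N, 3)
        "rotations": select(rotation),   # (N, 4) quaternions
        "colors":    select(color),      # (N, 3)
        "opacities": select(opacity),    # (N, 1)
    }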

Due to the use of DUSt3R, Sapiens, and dense self-attention, a full pass through the model is expensive. However, for animating the head, one can simply cache all activations up to the first cross-attention layer, since none of the preceding network blocks depend on the expression. Doing so yields an animation speed of ~8 fps on a single RTX 3090 GPU.
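A minimal sketch of this caching scheme, assuming the network can be split into an expression-independent backbone and an expression-dependent head (the split and the names are illustrative, not taken from the released code):

import torch

class CachedAvatar:
    def __init__(self, static_backbone, expression_head):
        # static_backbone: everything up to (but excluding) the first cross-attention layer;
        #                  its output does not depend on the expression code.
        # expression_head: cross-attention layers + Gaussian decoder heads.
        self.static_backbone = static_backbone
        self.expression_head = expression_head
        self.cached_tokens = None

    @torch.no_grad()
    def reconstruct(self, input_images):
        # Expensive pass, executed once per identity (the 4 input views).
        self.cached_tokens = self.static_backbone(input_images)

    @torch.no_grad()
    def animate(self, expression_code):
        # Cheap pass, executed per animation frame.
        return self.expression_head(self.cached_tokens, expression_code)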

Few-shot Results

Single-view Applications of Avat3r

BibTeX

@misc{kirschstein2025avat3r,
      title={Avat3r: Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars},
      author={Tobias Kirschstein and Javier Romero and Artem Sevastopolsky and Matthias Nie\ss{}ner and Shunsuke Saito},
      year={2025},
      eprint={2502.20220},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.20220},
}