GGHead
Fast and Generalizable 3D Gaussian Heads

Technical University of Munich, Independent Researcher
GGHead generates high-quality and view-consistent 3D heads at lightning speed. The results below are generated and rendered at 1K resolution in real time on a consumer-grade GPU.


Abstract

Learning 3D head priors from large 2D image collections is an important step towards high-quality 3D-aware human modeling. A core requirement is an efficient architecture that scales well to large-scale datasets and large image resolutions. Unfortunately, existing 3D GANs struggle to generate samples at high resolutions due to their relatively slow training and rendering speeds, and typically have to rely on 2D super-resolution networks at the expense of global 3D consistency.

To address these challenges, we propose Generative Gaussian Heads (GGHead), which adopts the recent 3D Gaussian Splatting representation within a 3D GAN framework. To generate a 3D representation, we employ a powerful 2D CNN generator to predict Gaussian attributes in the UV space of a template head mesh. This way, GGHead exploits the regularity of the template’s UV layout, substantially facilitating the challenging task of predicting an unstructured set of 3D Gaussians. We further improve the geometric fidelity of the generated 3D representations with a novel total variation loss on rendered UV coordinates. Intuitively, this regularization encourages neighboring rendered pixels to stem from neighboring Gaussians in the template’s UV space.

Taken together, our pipeline efficiently generates 3D heads while being trained only on single-view 2D image observations. Our proposed framework matches the quality of existing 3D head GANs on FFHQ while being both substantially faster and fully 3D consistent. As a result, we demonstrate real-time generation and rendering of high-quality, 3D-consistent heads at 1024² resolution for the first time.

Video



3D Head Generation on FFHQ-512

Uncurated samples from GGHead on FFHQ (seeds 0-15)


3D Head Generation on AFHQ-512

Curated samples from GGHead on AFHQ (seeds 6-13)

Method Overview

GGHead adopts a 3D GAN framework to learn an unconditional 3D head generation model from 2D image datasets. The generator creates a 3D Gaussian representation whose renderings are supervised by the discriminator:

  1. We employ a StyleGAN2 generator to predict Gaussian attributes (position, scale, rotation, color, opacity) in the UV space of a template head mesh and then sample the UV space uniformly to spawn 3D Gaussian primitives (see the first sketch after this list).
  2. The 3D Gaussian representation is rendered with a differentiable rasterizer and fed into the discriminator for supervision.
  3. To improve stability, especially in the early stages of adversarial training, we employ regularization terms on the predicted 2D maps to punish large Gaussians, discourage semi-transparent Gaussians, and ensure that the 3D Gaussians do not drift too far from the template mesh (see the second sketch after this list).
  4. Finally, we propose a novel UV total variation loss to improve the geometric fidelity of generated 3D heads. Intuitively, this regularization encourages neighboring rendered pixels to stem from Gaussians that are close in UV space (see the third sketch after this list).
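
The following PyTorch-style sketch illustrates step 1: per-Gaussian attributes are obtained by bilinearly sampling the predicted 2D attribute maps at a uniform grid of UV coordinates, and positions are offsets added to the template surface looked up at the same UV coordinates. All tensor names, map channels, and the helper sample_gaussians_from_uv_maps are illustrative assumptions rather than the exact implementation:

import torch
import torch.nn.functional as F

def sample_gaussians_from_uv_maps(attr_maps, template_positions, n_per_side=256):
    # attr_maps: dict of [B, C, H, W] maps predicted by the 2D generator, e.g.
    #   'offset' [B,3,H,W], 'scale' [B,3,H,W], 'rotation' [B,4,H,W],
    #   'color' [B,3,H,W], 'opacity' [B,1,H,W]  (names are assumptions).
    # template_positions: [B, 3, H, W] 3D positions of the template head surface
    #   rasterized into the same UV layout.
    B = template_positions.shape[0]

    # Uniform grid of UV coordinates in [-1, 1]; one 3D Gaussian is spawned per sample.
    lin = torch.linspace(-1, 1, n_per_side, device=template_positions.device)
    u, v = torch.meshgrid(lin, lin, indexing='xy')
    grid = torch.stack([u, v], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)  # [B, n, n, 2]

    def sample(uv_map):  # bilinear lookup of a UV-space map at the grid positions
        return F.grid_sample(uv_map, grid, align_corners=True).flatten(2).transpose(1, 2)

    gaussians = {
        'position': sample(template_positions) + sample(attr_maps['offset']),  # [B, N, 3]
        'scale':    sample(attr_maps['scale']),
        'rotation': sample(attr_maps['rotation']),
        'color':    sample(attr_maps['color']),
        'opacity':  sample(attr_maps['opacity']),
    }
    return gaussians, grid  # grid holds the per-Gaussian UV coordinates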
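
For step 3, the regularizers can be sketched as follows; the thresholds and exact penalty forms are assumptions chosen only to convey the intent (penalize overly large Gaussians, push opacities towards 0 or 1, and keep Gaussians near the template surface):

def gaussian_regularizers(gaussians, offsets, max_scale=0.02, max_offset=0.05):
    # gaussians: dict as returned above; 'scale' and 'opacity' are assumed to be in
    # world units and [0, 1] respectively. offsets: [B, N, 3] displacements from the
    # template surface. max_scale / max_offset are illustrative values, not the paper's.
    loss_scale = torch.relu(gaussians['scale'] - max_scale).square().mean()      # punish large Gaussians
    opacity = gaussians['opacity']
    loss_opacity = (opacity * (1.0 - opacity)).mean()                            # discourage semi-transparency
    loss_offset = torch.relu(offsets.norm(dim=-1) - max_offset).square().mean()  # stay close to the template
    return loss_scale + loss_opacity + loss_offset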
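
For step 4, the UV total variation loss amounts to a standard total-variation penalty applied to an image in which each pixel stores the UV coordinate of the Gaussians it was rendered from (obtained by rasterizing the per-Gaussian UV coordinates as if they were colors). Names and shapes below are again illustrative:

def uv_total_variation_loss(rendered_uv):
    # rendered_uv: [B, 2, H, W] image whose pixels hold UV coordinates, produced by
    # rendering the Gaussians with their UV coordinates in place of colors.
    du = rendered_uv[:, :, :, 1:] - rendered_uv[:, :, :, :-1]  # horizontal neighbor differences
    dv = rendered_uv[:, :, 1:, :] - rendered_uv[:, :, :-1, :]  # vertical neighbor differences
    return du.abs().mean() + dv.abs().mean()

Minimizing this loss pulls neighboring pixels towards similar UV coordinates, i.e. towards being covered by Gaussians that are neighbors in the template's UV space.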

View-Consistency

GGHead is fully view-consistent since, unlike methods such as EG3D, it does not have to rely on 2D super-resolution networks.

Side-view Comparison

GGHead maintains high rendering quality even for extreme side views.

BibTeX

@article{kirschstein2024gghead,
  title={GGHead: Fast and Generalizable 3D Gaussian Heads},
  author={Kirschstein, Tobias and Giebenhain, Simon and Tang, Jiapeng and Georgopoulos, Markos and Nie{\ss}ner, Matthias},
  journal={arXiv preprint arXiv:2406.09377},
  year={2024}
}