
DiffLocks

DiffLocks is a method that generates realistic 3D hair strands from a single image.
It was trained on a large dataset of 40K synthetic hairstyles created in Blender.

Please register to download the DiffLocks dataset, checkpoints, and validation set. After registration you will be able to access the Downloads menu in the top menu bar.

Video

Authors

  • Radu Alexandru Rosu
  • Keyu Wu
  • Yao Feng
  • Youyi Zheng
  • Michael J. Black

Abstract

We address the task of reconstructing 3D hair geometry from a single image, which is challenging due to the diversity of hairstyles and the lack of paired image-to-3D hair data. Previous methods are primarily trained on synthetic data and cope with the limited amount of such data by using low-dimensional intermediate representations, such as guide strands and scalp-level embeddings, that require post-processing to decode, upsample, and add realism. These approaches fail to reconstruct detailed hair, struggle with curly hair, or are limited to handling only a few hairstyles. To overcome these limitations, we propose DiffLocks, a novel framework that enables detailed reconstruction of a wide variety of hairstyles directly from a single image. First, we address the lack of 3D hair data by automating the creation of the largest synthetic hair dataset to date, containing 40K hairstyles. Second, we leverage the synthetic hair dataset to learn an image-conditioned diffusion-transformer model that reconstructs accurate 3D strands from a single frontal image. By using a pretrained image backbone, our method generalizes to in-the-wild images despite being trained only on synthetic data. Our diffusion model predicts a scalp texture map in which any point in the map contains the latent code for an individual hair strand. These codes are directly decoded to 3D strands without post-processing techniques. Representing individual strands, instead of guide strands, enables the transformer to model the detailed spatial structure of complex hairstyles. With this, DiffLocks can reconstruct highly curled hair, like afro hairstyles, from a single image for the first time. Qualitative and quantitative results demonstrate that DiffLocks outperforms existing state-of-the-art approaches. Data and code are available for research purposes.
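
To make the strand-decoding idea concrete, the snippet below is a minimal, hypothetical sketch of how per-strand latent codes sampled from a scalp texture map could be decoded directly into 3D strand polylines. It is not the released DiffLocks code: the latent dimension, texture resolution, number of points per strand, and the StrandDecoder MLP are all illustrative assumptions.

import torch
import torch.nn as nn

class StrandDecoder(nn.Module):
    """Maps one strand latent code to a fixed-length 3D polyline (assumed architecture)."""
    def __init__(self, latent_dim=64, points_per_strand=100, hidden=256):
        super().__init__()
        self.points_per_strand = points_per_strand
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, points_per_strand * 3),
        )

    def forward(self, latents):                       # (N, latent_dim)
        pts = self.net(latents)                       # (N, points_per_strand * 3)
        return pts.view(-1, self.points_per_strand, 3)

# Suppose the diffusion model has produced a scalp texture of latent codes.
latent_dim, tex_res = 64, 256
scalp_texture = torch.randn(tex_res, tex_res, latent_dim)

# Sample strand locations on the scalp (here: random texel coordinates);
# each sampled texel's latent code is decoded directly into one 3D strand,
# with no guide-strand upsampling or other post-processing.
num_strands = 10_000
u = torch.randint(0, tex_res, (num_strands,))
v = torch.randint(0, tex_res, (num_strands,))
strand_latents = scalp_texture[u, v]                  # (num_strands, latent_dim)

decoder = StrandDecoder(latent_dim=latent_dim)
with torch.no_grad():
    strands = decoder(strand_latents)                 # (num_strands, 100, 3)
print(strands.shape)

Because every texel carries its own latent code, the strand count is controlled simply by how densely the scalp texture is sampled, rather than by a fixed set of guide strands.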

Referencing DiffLocks

@inproceedings{difflocks2025,
  title = {{DiffLocks}: Generating 3D Hair from a Single Image using Diffusion Models},
  author = {Rosu, Radu Alexandru and Wu, Keyu and Feng, Yao and Zheng, Youyi and Black, Michael J.},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year = {2025}
}

Contact

For commercial licensing, please contact sales@meshcapade.com.

© 2025 Max-Planck-Gesellschaft