The Stable-Hair framework is a two-stage pipeline. In the first stage, a Bald Converter working alongside stable diffusion removes the hair from the user-provided face image; in the second stage, a Hair Extractor, a Latent IdentityNet, and Hair Cross-Attention Layers transfer the target hairstyle onto the resulting bald image. This approach enables highly detailed, high-fidelity hairstyle transfers that preserve the identity, content, and structure of the original face.
Key features of Stable-Hair include:
- Robust transfer of diverse and intricate hairstyles
- Highly detailed and high-fidelity transfers
- Preservation of original identity content and structure
- Ability to transfer hairstyles across diverse domains
- Two-stage pipeline built around the Bald Converter and Hair Extractor modules
- Use of stable diffusion and Hair Cross-Attention Layers for precise hairstyle transfer
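The two-stage flow described above can be sketched in code. This is a minimal structural outline, not the actual Stable-Hair implementation: the class and function names mirror the components named in the text, but their internals are placeholders (simple averaging stands in for the diffusion model, feature extraction, and cross-attention), and `transfer_hairstyle` is a hypothetical top-level function introduced here for illustration.

```python
import numpy as np


class BaldConverter:
    """Stage 1 (sketch): removes hair from the source face image.
    In Stable-Hair this conditions a stable diffusion model; here it
    simply passes the image through as a placeholder."""
    def convert(self, face_image):
        # A real implementation would run a diffusion denoising loop.
        return face_image.copy()


class HairExtractor:
    """Stage 2 (sketch): encodes hair features from the reference image."""
    def extract(self, reference_image):
        # Placeholder: per-channel mean stands in for learned hair features.
        return reference_image.mean(axis=(0, 1))


class LatentIdentityNet:
    """Stage 2 (sketch): encodes identity/structure from the bald image."""
    def encode(self, bald_image):
        # Placeholder: per-channel mean stands in for an identity latent.
        return bald_image.mean(axis=(0, 1))


def hair_cross_attention(identity_latent, hair_features):
    """Placeholder fusion step standing in for the Hair Cross-Attention
    Layers, which inject hair features into the generation process."""
    return 0.5 * identity_latent + 0.5 * hair_features


def transfer_hairstyle(face_image, reference_image):
    """Hypothetical top-level driver tying the two stages together."""
    bald = BaldConverter().convert(face_image)        # stage 1: bald proxy
    hair = HairExtractor().extract(reference_image)   # stage 2: hair features
    identity = LatentIdentityNet().encode(bald)       # stage 2: identity latent
    fused = hair_cross_attention(identity, hair)
    # A real pipeline would decode `fused` back to an image via diffusion.
    return fused


face = np.full((4, 4, 3), 0.2)       # dummy source face
reference = np.full((4, 4, 3), 0.8)  # dummy hairstyle reference
print(transfer_hairstyle(face, reference))
```

The sketch only shows how data flows between the named modules; each placeholder body would be replaced by the corresponding trained network in the actual system.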