MagicFace: High-Fidelity Facial Expression Editing with Action-Unit Control
Paper • 2501.02260 • Published
MagicFace is an efficient and effective facial expression editing model conditioned on facial action units (AUs). It provides a flexible, user-friendly, and highly interpretable way to edit expressions.

import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained("mengtingwei/magicface", dtype=torch.bfloat16, device_map="cuda")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
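AU conditioning comes from the Facial Action Coding System (FACS), where each action unit indexes one facial muscle movement (e.g. AU12, lip corner puller, for smiling). As a purely illustrative sketch (the helper and AU list below are assumptions, not MagicFace's actual API), target edits can be encoded as a fixed-order vector of relative AU intensity changes:

```python
# Hypothetical sketch: encode relative AU intensity changes as a dense vector.
# AU_ORDER is an assumed set of commonly annotated FACS action units.
AU_ORDER = [1, 2, 4, 5, 6, 9, 12, 15, 17, 20, 25, 26]

def au_delta_vector(changes):
    """Map {au_number: relative intensity change} to a list ordered by AU_ORDER."""
    return [float(changes.get(au, 0.0)) for au in AU_ORDER]

# Raise the cheeks (AU6) and pull the lip corners (AU12) to add a smile
vec = au_delta_vector({6: 2.0, 12: 3.0})
```

This is what makes AU control interpretable: each entry of the vector maps to one named muscle movement, so an edit can be read off directly.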
You can download the model files directly from this repository, or fetch them from a Python script:
# Download a specific file
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="mengtingwei/magicface", filename="79999_iter.pth", local_dir="./utils")
# Download all files
from huggingface_hub import snapshot_download
snapshot_download(repo_id="mengtingwei/magicface", local_dir="./")
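After either download call finishes, the files live under the `local_dir` you passed. A small sketch for sanity-checking that expected files actually landed on disk (the helper name is an assumption; `79999_iter.pth` is the checkpoint downloaded above):

```python
from pathlib import Path

def check_downloaded(local_dir, expected):
    """Return the subset of expected filenames missing under local_dir."""
    root = Path(local_dir)
    return [name for name in expected if not (root / name).exists()]

# e.g. after hf_hub_download(..., filename="79999_iter.pth", local_dir="./utils"):
missing = check_downloaded("./utils", ["79999_iter.pth"])
if missing:
    print("missing files:", missing)
```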