Double-LoRA Digital Human: Unified Character Animation and Replacement Based on Wan2.2, with Holistic Replication, Multi-Scale Adaptability, and Long-Term Temporal Coherence.

Digital Human Team

Communication University of China
*Course Work Corresponding Author

We trained an image-to-video generation model that converts static pictures into dynamic video content by analyzing the image's content, structure, and latent motion patterns. We also designed and implemented a dual-stage LoRA fine-tuning strategy to improve the Wan2.2-Animate 14B model's performance on 2D character animation and long-range motion generation.
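The core mechanism behind a dual-stage LoRA strategy is a frozen pretrained layer augmented with a small trainable low-rank update. The following is a minimal sketch of that idea in PyTorch; `LoRALinear`, the rank/alpha values, and the two-stage comments are illustrative assumptions, not the actual Wan2.2-Animate training code.

```python
import torch
from torch import nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update.

    Hypothetical illustration of the adapter pattern; not the
    actual Wan2.2-Animate implementation.
    """
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # freeze pretrained weights
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# A dual-stage schedule would train one adapter per stage, e.g. stage 1
# for 2D character animation, stage 2 for long-range motion/relighting.
layer = nn.Linear(64, 64)                    # stand-in pretrained layer
stage1 = LoRALinear(layer, rank=8)
x = torch.randn(2, 64)
# With `up` zero-initialized, the wrapped layer reproduces the base:
assert torch.allclose(stage1(x), layer(x))
```

Only the `down`/`up` matrices receive gradients, so each stage touches a tiny fraction of the 14B parameters while the base model stays intact.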

Abstract

We introduce Wan-Animate, a unified framework for character animation and replacement. Given a character image and a reference video, Wan-Animate can animate the character by precisely replicating the expressions and movements of the character in the video to generate high-fidelity character videos. Alternatively, it can integrate the animated character into the reference video to replace the original character, replicating the scene's lighting and color tone to achieve seamless environmental integration. Wan-Animate is built upon the Wan model. To adapt it for character animation tasks, we employ a modified input paradigm to differentiate between reference conditions and regions for generation. This design unifies multiple tasks into a common symbolic representation. We use spatially-aligned skeleton signals to replicate body motion and implicit facial features extracted from source images to reenact expressions, enabling the generation of character videos with high controllability and expressiveness. Furthermore, to enhance environmental integration during character replacement, we develop an auxiliary Relighting LoRA. This module preserves the character's appearance consistency while applying the appropriate environmental lighting and color tone. Experimental results demonstrate that Wan-Animate achieves state-of-the-art performance. We are committed to open-sourcing the model weights and source code.

Method Overview


Overview of Double-LoRA Digital Human (Wan-Animate), which is built upon Wan-I2V. We modify its input formulation to unify reference image input, temporal frame guidance, and environmental information (for dual-mode compatibility) under a common symbolic representation. For body motion control, we use skeleton signals that are merged via spatial alignment. For facial expression control, we leverage implicit features extracted from face images as the driving signal. Additionally, for character replacement, we train an auxiliary Relighting LoRA to enhance the character's integration with the new environment.
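"Merged via spatial alignment" means the rendered skeleton video shares the same temporal and spatial grid as the generation latent, so its encoding can be combined by simple element-wise addition. A minimal sketch of that merge is below; `SkeletonCondition`, the tensor shapes, and the single-conv encoder are assumptions for illustration, not Wan-Animate's actual architecture.

```python
import torch
from torch import nn

class SkeletonCondition(nn.Module):
    """Hypothetical sketch: encode a rendered skeleton video and add it
    to the video latent, relying on shared (T, H, W) alignment."""
    def __init__(self, channels: int = 16):
        super().__init__()
        # Toy encoder; the real model would use a deeper network.
        self.encode = nn.Conv3d(3, channels, kernel_size=3, padding=1)

    def forward(self, video_latent, skeleton_frames):
        # Because both tensors share the same spatiotemporal grid,
        # addition places each joint's signal at the matching location.
        return video_latent + self.encode(skeleton_frames)

latent = torch.randn(1, 16, 4, 32, 32)    # (B, C, T, H, W) noise latent
skeleton = torch.randn(1, 3, 4, 32, 32)   # RGB-rendered skeleton frames
cond = SkeletonCondition(channels=16)
out = cond(latent, skeleton)
assert out.shape == latent.shape          # merge preserves latent shape
```

The design choice is that spatial alignment makes the conditioning pathway nearly parameter-free: no cross-attention is needed to tell the model *where* the pose applies.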


Diversity-Styled Videos

Our model is trained on a cultural database and excels at generating videos in diverse cultural styles.

Advantages

BibTeX

@article{luo2025dreamactor,
  title={Double-Lora Digital Human: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance},
  author={Luo, Yuxuan and Rong, Zhengkun and Wang, Lizhen and Zhang, Longhao and Hu, Tianshu and Zhu, Yongming},
  journal={arXiv preprint arXiv:2504.01724},
  year={2025}
}