Transforming 3D-rendered animation frames into temporally coherent anime style using a novel GAN pipeline.
This work by Cole Feuer presents a two-stage approach that requires no paired data: a perceptual pretraining phase on anime datasets, followed by CycleGAN-based domain adaptation on 8-channel inputs (RGB, depth, edges, and blurred prior frames) that enforces both visual and temporal fidelity. It is aimed at studios seeking automated stylization at scale.
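As an illustration, here is a minimal PyTorch sketch of how the 8-channel generator input could be assembled. This is not the repository's actual code; the channel split of 3 (RGB) + 1 (depth) + 1 (edges) + 3 (a single blurred prior frame), the function name, and the blur settings are all assumptions:

```python
# Hypothetical sketch of the 8-channel input described above:
# 3 (RGB) + 1 (depth) + 1 (edge map) + 3 (blurred prior frame) = 8.
import torch
from torchvision.transforms.functional import gaussian_blur

def build_generator_input(
    rgb: torch.Tensor,          # (B, 3, H, W) current rendered frame
    depth: torch.Tensor,        # (B, 1, H, W) depth map from the 3D renderer
    edges: torch.Tensor,        # (B, 1, H, W) edge map (e.g. Sobel/Canny)
    prior_frame: torch.Tensor,  # (B, 3, H, W) previous stylized frame
) -> torch.Tensor:
    # Blurring the prior frame gives the network a soft temporal hint rather
    # than a hard pixel constraint; kernel size and sigma are assumptions.
    blurred_prior = gaussian_blur(prior_frame, kernel_size=5, sigma=2.0)
    return torch.cat([rgb, depth, edges, blurred_prior], dim=1)  # (B, 8, H, W)
```

The exact preprocessing (edge detector, blur strength, number of prior frames) lives in the repository's training scripts; the sketch only shows the general shape of the conditioning tensor.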
View the complete implementation, training scripts, and setup instructions on GitHub:
Planned enhancements include explicit optical-flow losses, multi-style conditional transfer, and user-driven style parameter controls; a sketch of the flow-loss idea appears below.
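For context, an explicit optical-flow loss is typically a warping penalty: the previous stylized frame is warped along the estimated flow and compared with the current one. Since this enhancement is only planned, the following is a generic formulation rather than this project's code, and it omits the occlusion masking that production losses usually include:

```python
# Hedged sketch of a standard flow-based temporal consistency loss.
import torch
import torch.nn.functional as F

def backward_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Sample `frame` (B, C, H, W) at positions shifted by `flow` (B, 2, H, W)."""
    h, w = frame.shape[-2:]
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    # Shift the sampling grid by the flow, then normalize coordinates
    # to [-1, 1] as grid_sample requires.
    grid_x = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
    grid_y = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def temporal_warping_loss(stylized_t, stylized_prev, flow_t_to_prev):
    """L1 distance between frame t and frame t-1 warped into frame t's view."""
    return F.l1_loss(stylized_t, backward_warp(stylized_prev, flow_t_to_prev))
```

Compared to the blurred-prior-frame channels used today, a loss like this penalizes temporal flicker directly along motion trajectories instead of only hinting at the previous frame's appearance.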
Questions, feedback, or collaboration? Reach out to coledfeuer@gmail.com or connect on LinkedIn.