Introduction
This work presents SkinningNet, a Two-Stream Graph Convolutional Neural Network that automatically generates skinning weights for an input mesh and its associated skeleton. The SkinningNet architecture is built on the novel Multi-Aggregator Graph Convolution layer, which allows the network to generalize better to unseen topologies. Moreover, the proposed joint-based skin binding and Mesh-Skeleton network learns the best relationships between the mesh and the skeleton, improving the final skinning weight predictions. The proposed architecture outperforms current approaches, reducing mesh deformation error by more than 20%, and also generalizes better to complex characters from unseen domains.
If you find this work useful, please consider citing:
Albert Mosella-Montoro, Javier Ruiz-Hidalgo, SkinningNet: Two-Stream Graph Convolutional Neural Network for Skinning Prediction of Synthetic Characters, CVPR, 2022
@inproceedings{MOSELLAMONTORO2022,
  author    = {Albert Mosella-Montoro and Javier Ruiz-Hidalgo},
  title     = {SkinningNet: Two-Stream Graph Convolutional Neural Network for Skinning Prediction of Synthetic Characters},
  booktitle = {CVPR - CVF/IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2022}
}
Check our paper here.
Architecture
The SkinningNet architecture is composed of four main stages. Stage 1 builds the required graphs from the input mesh and its associated skeleton. Stage 2 extracts features independently for the mesh and the skeleton. Stage 3 combines these mesh and skeleton features into a descriptor that relates both structures. Stage 4 predicts the skinning weights. A sketch of this pipeline is shown below.
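The following is a minimal, illustrative sketch of the four-stage pipeline in plain PyTorch; it is not the authors' implementation, and the module names (SkinningNetSketch, mesh_stream, skel_stream, fusion, head) are hypothetical placeholders for the stages described above.

```python
# Hypothetical sketch of the four-stage SkinningNet pipeline (not the official code).
import torch
import torch.nn as nn


class SkinningNetSketch(nn.Module):
    """Illustrative pipeline: graph construction, per-stream feature
    extraction, mesh-skeleton fusion, and skinning-weight prediction."""

    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        # Stage 2: independent feature extractors for mesh vertices and skeleton joints.
        self.mesh_stream = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.skel_stream = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Stage 3: combine mesh and skeleton descriptors into a joint representation.
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        # Stage 4: predict one score per (vertex, joint) pair.
        self.head = nn.Linear(hidden, 1)

    def forward(self, vertices, joints):
        # Stage 1 (graph construction) is omitted here; in practice the mesh
        # graph comes from mesh connectivity and the skeleton graph from the rig.
        v = self.mesh_stream(vertices)               # (V, hidden)
        j = self.skel_stream(joints)                 # (J, hidden)
        # Pair every vertex with every joint and fuse their descriptors.
        pairs = torch.cat(
            [v.unsqueeze(1).expand(-1, j.size(0), -1),
             j.unsqueeze(0).expand(v.size(0), -1, -1)], dim=-1)
        fused = self.fusion(pairs)                   # (V, J, hidden)
        logits = self.head(fused).squeeze(-1)        # (V, J)
        return torch.softmax(logits, dim=-1)         # per-vertex skinning weights


# Usage: 100 vertices and 22 joints, with 3-D positions as input features.
weights = SkinningNetSketch()(torch.randn(100, 3), torch.randn(22, 3))
print(weights.shape)  # torch.Size([100, 22]); each row sums to 1
```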
The Multi-Aggregator Graph Convolution (MAGC) is an extension of the Message-Passing scheme [11] in which multiple aggregators allow the graph convolution layer to distinguish between neighbourhoods with identical features but different cardinalities.
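The sketch below illustrates the multi-aggregator idea with a plain PyTorch message-passing step that combines mean, max, and sum aggregations; it is a hedged illustration under assumed layer names (MultiAggregatorConvSketch), not the authors' MAGC implementation, and it assumes PyTorch >= 1.12 for scatter_reduce.

```python
# Illustrative multi-aggregator message passing (hypothetical, not the official MAGC).
import torch
import torch.nn as nn


class MultiAggregatorConvSketch(nn.Module):
    def __init__(self, in_dim, out_dim, aggregators=("mean", "amax", "sum")):
        super().__init__()
        self.aggregators = aggregators
        self.message = nn.Linear(2 * in_dim, out_dim)                    # per-edge message
        self.update = nn.Linear(in_dim + len(aggregators) * out_dim, out_dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                                            # (E,), (E,)
        # Message: combine destination and source node features for each edge.
        m = self.message(torch.cat([x[dst], x[src]], dim=-1))            # (E, out_dim)
        idx = dst.unsqueeze(-1).expand_as(m)
        aggregated = []
        for reduce in self.aggregators:
            # Aggregate incoming messages per destination node with one operator.
            out = torch.zeros(x.size(0), m.size(1), device=x.device)
            out = out.scatter_reduce(0, idx, m, reduce=reduce, include_self=False)
            aggregated.append(out)
        # Update: fuse each node's own features with all aggregated neighbourhoods.
        return self.update(torch.cat([x] + aggregated, dim=-1))


# Usage: 4 nodes with 3-D features and a small directed edge list (src -> dst).
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
layer = MultiAggregatorConvSketch(3, 8)
print(layer(x, edge_index).shape)  # torch.Size([4, 8])
```

Using several aggregators in parallel is what lets two neighbourhoods with the same features but different numbers of neighbours produce different outputs (e.g. their sums differ even when their means coincide).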
Results
Acknowledgements
This work has been partially supported by the project PID2020-117142GB-I00, funded by MCIN/AEI/10.13039/501100011033. The authors would like to thank Denis Tome for his technical advice during the development of this project.