Learning Deformation Patterns of Surface Meshes of Different Sizes
Sara Hahner (1, 2)
1: Fraunhofer SCAI
2: University of Bonn

We analyze the deformation of three-dimensional shapes that are represented by surface meshes. To detect underlying dynamics in the deformation behavior, we represent the data in a low-dimensional space using non-linear dimensionality reduction methods.

The area of geometry processing offers different ways to obtain these low-dimensional representations. The surface meshes can be projected onto Laplacian eigenvectors, which form a basis of smooth functions on the mesh. Autoencoders are a type of neural network for unsupervised representation learning that projects the data to a learned low-dimensional space. These networks have been applied successfully to surface meshes by combining graph convolutional filters with pooling operators that reduce the mesh density.
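As an illustration of the spectral route, the following is a minimal sketch and not the method of [2]; the helper laplacian_basis and the toy two-triangle mesh are made up for this example. It projects the vertex coordinates onto the first k eigenvectors of the combinatorial graph Laplacian:

    import numpy as np

    def laplacian_basis(vertices, faces, k):
        # First k eigenvectors of the combinatorial graph Laplacian L = D - A.
        n = len(vertices)
        adj = np.zeros((n, n))
        for a, b, c in faces:
            adj[a, b] = adj[b, a] = 1.0
            adj[b, c] = adj[c, b] = 1.0
            adj[c, a] = adj[a, c] = 1.0
        lap = np.diag(adj.sum(axis=1)) - adj
        _, eigvecs = np.linalg.eigh(lap)      # eigenvalues in ascending order
        return eigvecs[:, :k]                 # the k "smoothest" functions on the mesh

    # Toy mesh: two triangles forming a flat quad (stand-in for a real surface mesh).
    vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
    faces = np.array([[0, 1, 2], [1, 3, 2]])

    basis = laplacian_basis(vertices, faces, k=3)
    coeffs = basis.T @ vertices               # low-dimensional spectral representation (k x 3)
    reconstruction = basis @ coeffs           # smooth approximation of the original shape

The k spectral coefficients serve as a low-dimensional representation of the deforming shape; the autoencoder route replaces this fixed basis with a learned, non-linear encoding.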

The state-of-the-art methods generally depend on the adjacency matrix of the surface mesh. Therefore, knowledge and patterns learned on the training data cannot be transferred to surface data with a different mesh representation. In addition, a method that can handle meshes of different sizes and shapes has more training data available and can, in particular, be applied in scenarios where there are too few examples to train a separate network for each mesh.

Based on this observation, we developed a novel approach to handle meshes of different shapes and sizes: the Mesh Convolutional Autoencoder for Semi-Regular Meshes of Different Sizes [2]. We calculate an alternative discrete approximation of the surface data based on semi-regular meshes. Semi-regular meshes consist of regular regional patches, which means that every vertex inside a patch has exactly six neighbors. This local regularity allows us to reuse learned convolutional filters and to define a pooling operation, based on the local mesh regularity, that is the same for all meshes of this type (see the sketch below). Since the convolutional neural networks learn local features, we feed the regular regional patches, which all share the same meshing, separately to the network. The global context is not lost but is fed to the network via padding. In this way, one network suffices to analyze the deformation of shapes that have different mesh representations.
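The following minimal sketch is not code from [2]; the name pool_regular_patch and the square-grid storage are assumptions made for illustration. It shows one plausible form of such a regularity-based pooling: if the per-vertex features of a regular patch are stored on a grid with side length 2^level + 1, keeping every second grid vertex reverses one midpoint-subdivision step, independent of which mesh the patch came from.

    import numpy as np

    def pool_regular_patch(patch_features):
        # patch_features: (r, r, c) array of per-vertex features of one regular patch,
        # stored on a square grid with r = 2**level + 1 (a triangular patch would only
        # occupy the lower-triangular half; the square grid keeps the sketch simple).
        # Keeping every second grid vertex undoes one midpoint-subdivision step, so the
        # same pooling applies to every patch of every semi-regular mesh.
        return patch_features[::2, ::2, :]

    # Toy example: one patch at subdivision level 3 (a 9 x 9 grid) with x, y, z channels.
    level = 3
    patch = np.random.rand(2**level + 1, 2**level + 1, 3)
    pooled = pool_regular_patch(patch)        # shape (5, 5, 3), i.e. subdivision level 2

Because this operation only depends on the patch's internal regularity, it can be shared across all patches and all meshes, in contrast to pooling operators tied to one fixed adjacency matrix.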

We apply the same mesh autoencoder to diverse classes of datasets, including moving humans and animals as well as deforming car components. Our reconstruction error is more than 50% lower than the error from state-of-the-art models [1,3], which have to be trained for every mesh separately. Additionally, we visualize the underlying dynamics of unseen mesh sequences with an autoencoder trained on different classes of meshes.


Joint work with: Jochen Garcke.

References:

[1] G. Bouritsas, S. Bokhnyak, et al. Neural 3D Morphable Models: Spiral Convolutional Networks for 3D Shape Representation Learning and Generation. IEEE International Conference on Computer Vision (ICCV), 2019.

[2] S. Hahner and J. Garcke. Mesh Convolutional Autoencoder for Semi-Regular Meshes of Different Sizes. IEEE Winter Conference on Applications of Computer Vision (WACV), 2022.

[3] A. Ranjan, T. Bolkart, et al. Generating 3D Faces Using Convolutional Mesh Autoencoders. European Conference on Computer Vision (ECCV), 2018.

