Open Access Journal ArticleDOI

MeshCNN: a network with an edge

TLDR
This paper utilizes the unique properties of the mesh for a direct analysis of 3D shapes using MeshCNN, a convolutional neural network designed specifically for triangular meshes, and demonstrates the effectiveness of MeshCNN on various learning tasks applied to 3D meshes.
Abstract
Polygonal meshes provide an efficient representation for 3D shapes. They explicitly capture both shape surface and topology, and leverage non-uniformity to represent large flat regions as well as sharp, intricate features. This non-uniformity and irregularity, however, inhibits mesh analysis efforts using neural networks that combine convolution and pooling operations. In this paper, we utilize the unique properties of the mesh for a direct analysis of 3D shapes using MeshCNN, a convolutional neural network designed specifically for triangular meshes. Analogous to classic CNNs, MeshCNN combines specialized convolution and pooling layers that operate on the mesh edges, by leveraging their intrinsic geodesic connections. Convolutions are applied on edges and the four edges of their incident triangles, and pooling is applied via an edge collapse operation that retains surface topology, thereby generating new mesh connectivity for the subsequent convolutions. MeshCNN learns which edges to collapse, thus forming a task-driven process where the network exposes and expands the important features while discarding the redundant ones. We demonstrate the effectiveness of MeshCNN on various learning tasks applied to 3D meshes.
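
To make the edge-based convolution described above concrete, below is a minimal PyTorch sketch of the convolution step only; it is not the authors' released implementation. It assumes per-edge features x of shape (num_edges, in_channels) and a precomputed index array nbrs of shape (num_edges, 4) holding, for each edge, the four edges of its two incident triangles; the class name EdgeConv and both array names are illustrative.

import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    """Convolution over an edge and the four edges of its incident triangles."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # One learned weight per entry of the 5-element stencil:
        # the edge itself plus four symmetric combinations of its neighbors.
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=(1, 5))

    def forward(self, x: torch.Tensor, nbrs: torch.Tensor) -> torch.Tensor:
        # x:    (num_edges, in_channels) per-edge features
        # nbrs: (num_edges, 4) indices of the neighboring edges (a, b, c, d)
        a, b, c, d = (x[nbrs[:, i]] for i in range(4))
        # Symmetric combinations make the result invariant to the ambiguous
        # ordering of each incident triangle's edge pair.
        stencil = torch.stack([x, (a - c).abs(), a + c, (b - d).abs(), b + d], dim=-1)
        stencil = stencil.permute(1, 0, 2).unsqueeze(0)   # (1, C_in, E, 5)
        out = self.conv(stencil)                          # (1, C_out, E, 1)
        return out.squeeze(0).squeeze(-1).t()             # (E, C_out)

# Toy usage: 750 edges, 5 input channels, 32 output channels; the
# connectivity here is random placeholder data, not a real mesh.
x = torch.randn(750, 5)
nbrs = torch.randint(0, 750, (750, 4))
print(EdgeConv(5, 32)(x, nbrs).shape)   # torch.Size([750, 32])

The learned edge-collapse pooling, which would rebuild nbrs (and drop edges) after each layer, is not shown in this sketch.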


Citations
Proceedings ArticleDOI

Multi-Scale Progressive Fusion Network for Single Image Deraining

TL;DR: This work explores the multi-scale collaborative representation for rain streaks from the perspective of input image scales and hierarchical deep features in a unified framework, termed multi-scale progressive fusion network (MSPFN), for single image rain streak removal.
Proceedings Article

Learning Mesh-Based Simulation with Graph Networks

TL;DR: MeshGraphNets is introduced, a framework for learning mesh-based simulations using graph neural networks that can be trained to pass messages on a mesh graph and to adapt the mesh discretization during forward simulation, and can accurately predict the dynamics of a wide range of physical systems.
Proceedings ArticleDOI

Local Implicit Grid Representations for 3D Scenes

TL;DR: Local Implicit Grid Representations (LIGR) as mentioned in this paper is a 3D shape representation designed for scalability and generality, which can be used to reconstruct 3D objects from partial or noisy data.
Journal ArticleDOI

MeshCNN: A Network with an Edge

TL;DR: MeshCNN as discussed by the authors combines specialized convolution and pooling layers that operate on the mesh edges, by leveraging their intrinsic geodesic connections, and learns which edges to collapse, thus forming a task-driven process where the network exposes and expands the important features while discarding the redundant ones.
Posted Content

Local Implicit Grid Representations for 3D Scenes

TL;DR: This paper introduces Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality and demonstrates the value of this proposed approach for 3D surface reconstruction from sparse point observations, showing significantly better results than alternative approaches.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: As discussed by the authors, a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art image classification performance.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Book ChapterDOI

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. as discussed by the authors proposed a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently, which can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Journal ArticleDOI

ImageNet classification with deep convolutional neural networks

TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Trending Questions (1)
What is a mesh convolutional network?

The paper describes MeshCNN, a convolutional neural network designed specifically for triangular meshes. It combines specialized convolution and pooling layers that operate on the mesh edges, leveraging their intrinsic geodesic connections.
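
As a complement to the answer above, the following is a hedged sketch of how the four-edge geodesic neighborhood used by such an edge-based convolution could be derived from a triangle list; the function name edge_neighborhoods and the ordering conventions are illustrative and not taken from the paper.

from collections import defaultdict
import numpy as np

def edge_neighborhoods(faces: np.ndarray) -> dict:
    """Map each edge index to the other edges of its incident triangles."""
    edge_ids = {}        # (min_vertex, max_vertex) -> edge index
    face_edges = []      # per triangle, its three edge indices
    for tri in faces:
        ids = []
        for i in range(3):
            key = tuple(sorted((int(tri[i]), int(tri[(i + 1) % 3]))))
            ids.append(edge_ids.setdefault(key, len(edge_ids)))
        face_edges.append(ids)

    nbrs = defaultdict(list)
    for ids in face_edges:
        for e in ids:
            nbrs[e].extend(i for i in ids if i != e)
    # Interior edges have two incident triangles and therefore four neighbors;
    # boundary edges end up with only two, and how to pad them is left open here.
    return dict(nbrs)

# Toy usage: two triangles sharing one edge; the shared edge gets four neighbors.
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(edge_neighborhoods(faces))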