Book Chapter

Introduction to PyTorch

01 Jan 2017, pp. 195-208
TL;DR: In this chapter, the authors cover PyTorch, a more recent addition to the ecosystem of deep learning frameworks that has fairly good Graphics Processing Unit (GPU) support and is a fast-maturing framework.
Abstract: In this chapter, we will cover PyTorch, a more recent addition to the ecosystem of deep learning frameworks. PyTorch can be seen as a Python front end to the Torch engine (which initially only had Lua bindings), which at its heart provides the ability to define mathematical functions and compute their gradients. PyTorch has fairly good Graphics Processing Unit (GPU) support and is a fast-maturing framework.
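The core capability the abstract describes, defining mathematical functions and computing their gradients, can be sketched in a few lines. This is a minimal illustration, not code taken from the chapter:

```python
import torch

# Create a scalar tensor and ask autograd to track operations on it.
x = torch.tensor(3.0, requires_grad=True)

# Define f(x) = x^2 + 2x; autograd records the computation graph.
y = x ** 2 + 2 * x

# Backpropagate to compute df/dx = 2x + 2.
y.backward()

print(x.grad)  # df/dx at x = 3 is 8
```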
Citations
Journal Article
TL;DR: This contribution focuses on mechanical problems and analyzes the energetic form of the PDE, where the energy of a mechanical system appears to be the natural loss function for a machine learning method approaching a mechanical problem.

721 citations


Cites methods from "Introduction to PyTorch"

  • ...Libraries such as Tensorflow [5] and PyTorch [6] provide the building blocks to devise learning machines for very different problems....


  • ...We do this by using machine learning tools available in open-source libraries like TensorFlow and PyTorch....

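The idea of using a system's energy as the loss function can be illustrated with a toy sketch (hypothetical, not from the cited paper): minimizing the potential energy of a linear spring under a constant force with PyTorch's optimizer, where the energy itself serves as the loss.

```python
import torch

# Potential energy of a linear spring under a constant force:
# E(u) = 0.5 * k * u^2 - F * u, minimized at u = F / k = 0.5.
k, F = 4.0, 2.0
u = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.SGD([u], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    energy = 0.5 * k * u ** 2 - F * u  # the energy acts as the loss
    energy.backward()
    opt.step()

print(round(u.item(), 3))  # converges to F / k = 0.5
```

The same pattern scales up to the papers' setting by replacing the scalar `u` with a neural network approximating the displacement field and the spring energy with a discretized energy functional of the PDE.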

Journal Article
TL;DR: This study proposes an approach that uses deep transfer learning to automatically classify normal and abnormal brain MR images, achieving a 5-fold cross-validation accuracy of 100% on 613 MR images.

339 citations

Proceedings Article
20 Apr 2020
TL;DR: An unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder is developed; it outperforms state-of-the-art unsupervised counterparts and sometimes even exceeds the performance of supervised ones.
Abstract: The richness in the content of various information networks such as social networks and communication networks provides the unprecedented potential for learning high-quality expressive representations without external supervision. This paper investigates how to preserve and extract the abundant information from graph-structured data into embedding space in an unsupervised manner. To this end, we propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations. GMI generalizes the idea of conventional mutual information computations from vector space to the graph domain where measuring mutual information from two aspects of node features and topological structure is indispensable. GMI exhibits several benefits: First, it is invariant to the isomorphic transformation of input graphs—an inevitable constraint in many existing graph representation learning algorithms; Besides, it can be efficiently estimated and maximized by current mutual information estimation methods such as MINE; Finally, our theoretical analysis confirms its correctness and rationality. With the aid of GMI, we develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder. Considerable experiments on transductive as well as inductive node classification and link prediction demonstrate that our method outperforms state-of-the-art unsupervised counterparts, and even sometimes exceeds the performance of supervised ones.

320 citations


Cites methods from "Introduction to PyTorch"

  • ...All experiments are implemented in PyTorch [23] with Glorot initialization [13] and conducted on a single Tesla P40 GPU....

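The experimental setup quoted above, PyTorch with Glorot initialization, can be sketched as follows (the layer sizes here are hypothetical, chosen only for illustration):

```python
import torch
import torch.nn as nn

# Apply Glorot (Xavier) initialization to a linear layer's weights.
layer = nn.Linear(128, 64)
nn.init.xavier_uniform_(layer.weight)
nn.init.zeros_(layer.bias)

# Xavier-uniform samples from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)),
# so every weight magnitude stays within that bound.
bound = (6.0 / (128 + 64)) ** 0.5
print(layer.weight.abs().max().item() <= bound)  # True
```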

Proceedings Article
01 Jun 2018
TL;DR: This paper presents a multi-task perspective, not embraced by any existing work, that jointly learns shadow detection and shadow removal in an end-to-end fashion so that the two tasks mutually benefit from each other.
Abstract: Understanding shadows from a single image consists of two types of task in previous studies, containing shadow detection and shadow removal. In this paper, we present a multi-task perspective, which is not embraced by any existing work, to jointly learn both detection and removal in an end-to-end fashion that aims at enjoying the mutually improved benefits from each other. Our framework is based on a novel STacked Conditional Generative Adversarial Network (ST-CGAN), which is composed of two stacked CGANs, each with a generator and a discriminator. Specifically, a shadow image is fed into the first generator, which produces a shadow detection mask. That shadow image, concatenated with its predicted mask, goes through the second generator in order to recover its shadow-free image. In addition, the two corresponding discriminators are very likely to model higher-level relationships and global scene characteristics for the detected shadow region and for reconstruction via removing shadows, respectively. More importantly, for multi-task learning, our design of the stacked paradigm provides a novel view which is notably different from the commonly used multi-branch version. To fully evaluate the performance of our proposed framework, we construct the first large-scale benchmark with 1870 image triplets (shadow image, shadow mask image, and shadow-free image) under 135 scenes. Extensive experimental results consistently show the advantages of ST-CGAN over several representative state-of-the-art methods on two large-scale publicly available datasets and our newly released one.

222 citations

Journal Article
TL;DR: A comprehensive survey on state-of-the-art deep learning, IoT security, and big data technologies is conducted and a thematic taxonomy is derived from the comparative analysis of technical studies of the three aforementioned domains.

193 citations


Cites background from "Introduction to PyTorch"

  • ...PyTorch: PyTorch is a Python-based deep learning framework that acts as a replacement for NumPy to use the power of GPUs and as a deep learning research platform that provides maximum flexibility and speed [93]....

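The NumPy-replacement role described in this excerpt can be sketched as follows (a minimal example; the tensor shapes are arbitrary):

```python
import torch

# PyTorch tensors mirror much of the NumPy API.
a = torch.arange(6, dtype=torch.float32).reshape(2, 3)
b = torch.ones(2, 3)
c = a + b  # elementwise addition, as with NumPy arrays

# The same code can run on a GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
c = c.to(device)

print(c.shape, c.sum().item())  # torch.Size([2, 3]) 21.0
```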