scispace - formally typeset

Artem Sevastopolsky

Researcher at Samsung

Publications -  15
Citations -  1038

Artem Sevastopolsky is an academic researcher from Samsung. The author has contributed to research on topics including rendering (computer graphics) and point clouds. The author has an h-index of 9 and has co-authored 13 publications receiving 666 citations. Previous affiliations of Artem Sevastopolsky include the Skolkovo Institute of Science and Technology and Moscow State University.

Papers
Journal ArticleDOI

Optic Disc and Cup Segmentation Methods for Glaucoma Detection with Modification of U-Net Convolutional Neural Network

TL;DR: This work presents a universal deep-learning approach for automatic optic disc and cup segmentation, based on a modification of the U-Net convolutional neural network, which achieves quality comparable to current state-of-the-art methods while outperforming them in prediction time.
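Segmentation quality in work like this is conventionally scored with overlap metrics such as the Dice coefficient. As an illustration only (not the paper's code, and the toy masks below are invented for the example), a minimal Dice score in NumPy:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (True/1 = foreground)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy optic-disc-style masks: prediction overlaps ground truth partially.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True          # 16 ground-truth pixels
pr = np.zeros((8, 8), dtype=bool)
pr[2:6, 3:7] = True          # 16 predicted pixels, 12 overlapping
print(round(dice_score(pr, gt), 4))  # 2*12 / (16+16) = 0.75
```

A Dice of 1.0 means perfect overlap; 0.0 means none. The same formula applies per class (disc and cup) when the masks are multi-label.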
Posted Content

Neural Point-Based Graphics

TL;DR: In this article, a deep rendering network is learned in parallel with per-point neural descriptors, so that new views of the scene can be obtained by passing the rasterizations of a point cloud from new viewpoints through this network.
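The rasterization step the summary mentions can be sketched as a z-buffered splat: each 3D point is projected with a pinhole camera and writes its descriptor into the pixel it lands on, keeping only the nearest point per pixel. This is a simplified illustration under assumed camera conventions, not the paper's implementation (which splats at multiple resolutions and learns the descriptors):

```python
import numpy as np

def rasterize_points(points, descriptors, f, size):
    """Z-buffered splat of per-point descriptors onto a (size x size) raster.

    points: (N, 3) camera-space coordinates with z > 0
    descriptors: (N, D) one feature vector per point
    f: focal length in pixels; principal point at the image centre
    """
    n, d = descriptors.shape
    image = np.zeros((size, size, d))
    zbuf = np.full((size, size), np.inf)
    cx = cy = size / 2.0
    for p, desc in zip(points, descriptors):
        x, y, z = p
        u = int(f * x / z + cx)   # pinhole projection to pixel coords
        v = int(f * y / z + cy)
        if 0 <= u < size and 0 <= v < size and z < zbuf[v, u]:
            zbuf[v, u] = z        # keep the nearest point only
            image[v, u] = desc
    return image

pts = np.array([[0.0, 0.0, 2.0],   # nearer point
                [0.0, 0.0, 4.0]])  # same ray, occluded by the first one
desc = np.array([[1.0, 0.0], [0.0, 1.0]])
img = rasterize_points(pts, desc, f=8.0, size=8)
print(img[4, 4])  # descriptor of the nearer point: [1. 0.]
```

The resulting descriptor image is what a rendering network would consume to produce the final RGB view.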
Proceedings ArticleDOI

Coordinate-Based Texture Inpainting for Pose-Guided Human Image Generation

TL;DR: A new deep learning approach to pose-guided resynthesis of human photographs, combining a fully convolutional architecture with deformable skip connections guided by an estimated correspondence field and a new inpainting method that completes the texture of the human body.
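The coordinate-based idea — predicting, for each output pixel, where in the source texture to look — can be illustrated with a simple nearest-neighbour resampler. This is a hypothetical sketch, not the paper's pipeline, which predicts the coordinate field with a network and uses a differentiable (bilinear) sampler:

```python
import numpy as np

def warp_texture(texture, coord_field):
    """Resample `texture` at the (row, col) locations given by `coord_field`.

    texture: (H, W) source texture
    coord_field: (h, w, 2) per-output-pixel texture coordinates in [0, 1]
    Nearest-neighbour lookup for clarity; a learned pipeline would use
    bilinear sampling so gradients flow through the coordinates.
    """
    H, W = texture.shape
    rows = np.clip((coord_field[..., 0] * (H - 1)).round().astype(int), 0, H - 1)
    cols = np.clip((coord_field[..., 1] * (W - 1)).round().astype(int), 0, W - 1)
    return texture[rows, cols]

tex = np.arange(16, dtype=float).reshape(4, 4)
# Identity field: every output pixel samples its own texture coordinate.
r, c = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4), indexing="ij")
field = np.stack([r, c], axis=-1)
out = warp_texture(tex, field)
print(np.array_equal(out, tex))  # True: identity warp reproduces the texture
```

Inpainting in this formulation amounts to completing the coordinate field in unobserved regions rather than hallucinating colours directly, which is what makes the texture completion coordinate-based.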
Book ChapterDOI

Neural Point-Based Graphics

TL;DR: This work presents a new point-based approach for modeling the appearance of real scenes that uses a raw point cloud as the geometric representation of a scene, and augments each point with a learnable neural descriptor that encodes local geometry and appearance.