Oleksii Hrinchuk
Researcher at Skolkovo Institute of Science and Technology
Publications - 20
Citations - 480
Oleksii Hrinchuk is an academic researcher at the Skolkovo Institute of Science and Technology. He has contributed to research topics including computer science and language modeling, has an h-index of 8, and has co-authored 15 publications receiving 268 citations. Previous affiliations of Oleksii Hrinchuk include NVIDIA and the Moscow Institute of Physics and Technology.
Papers
Posted Content
NeMo: a toolkit for building AI applications using Neural Modules.
Oleksii Kuchaiev, Jason Li, Huyen Nguyen, Oleksii Hrinchuk, Ryan Leary, Boris Ginsburg, Samuel Kriman, Stanislav Beliaev, Vitaly Lavrukhin, Jack Cook, Patrice Castonguay, Mariya Popova, Jocelyn Huang, Jonathan Cohen, +13 more
TL;DR: NeMo (Neural Modules) is a Python framework-agnostic toolkit for creating AI applications through reusability, abstraction, and composition, with built-in support for distributed training and mixed precision on the latest NVIDIA GPUs.
Posted Content
Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks
Boris Ginsburg, Patrice Castonguay, Oleksii Hrinchuk, Oleksii Kuchaiev, Vitaly Lavrukhin, Ryan Leary, Jason Li, Huyen Nguyen, Jonathan Cohen, +8 more
TL;DR: NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay, performs on par with or better than well-tuned SGD with momentum, Adam, and AdamW in experiments on neural networks.
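The two ingredients named in the summary can be illustrated with a minimal NumPy sketch: the second moment `v` is a single scalar per layer (computed from the layer-wise gradient norm, unlike Adam's per-weight moments), and the weight-decay term is added after gradient normalization rather than folded into the gradient. Function names, default hyperparameters, and the toy problem below are illustrative, not taken from the paper's code.

```python
import numpy as np

def novograd_step(params, grads, state, lr=0.01, beta1=0.95, beta2=0.98,
                  weight_decay=0.001, eps=1e-8):
    """One NovoGrad-style update over per-layer parameter/gradient arrays.

    state["v"] holds one scalar second moment per layer (layer-wise, not
    per-weight), and state["m"] holds the per-layer momentum arrays.
    """
    for i, (w, g) in enumerate(zip(params, grads)):
        g_norm_sq = float(np.sum(g * g))           # layer-wise ||g||^2, a scalar
        if state["v"][i] is None:                  # first step: plain initialization
            state["v"][i] = g_norm_sq
            state["m"][i] = g / (np.sqrt(g_norm_sq) + eps) + weight_decay * w
        else:
            state["v"][i] = beta2 * state["v"][i] + (1 - beta2) * g_norm_sq
            # decoupled weight decay: added after normalizing the gradient
            state["m"][i] = beta1 * state["m"][i] + (
                g / (np.sqrt(state["v"][i]) + eps) + weight_decay * w)
        params[i] = w - lr * state["m"][i]
    return params, state

# Toy check: minimize f(w) = ||w||^2 for a single "layer".
params = [np.array([1.0, -2.0])]
state = {"v": [None], "m": [None]}
start_norm = float(np.linalg.norm(params[0]))
for _ in range(10):
    grads = [2.0 * params[0]]
    params, state = novograd_step(params, grads, state)
final_norm = float(np.linalg.norm(params[0]))
```

Because the gradient is normalized by a per-layer scalar, the update magnitude is largely independent of the gradient scale in each layer, which is what makes the method robust across very different layer types.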
Journal ArticleDOI
Pervasive Agriculture: IoT-Enabled Greenhouse for Plant Growth Control
Andrey Somov, Dmitry Shadrin, Ilia Fastovets, Artyom Nikitin, Sergey A. Matveev, Ivan Oseledets, Oleksii Hrinchuk, +6 more
TL;DR: The IoT enabling technologies applied in this deployment comprise a wireless sensor network, cloud computing, and artificial intelligence, which help monitor and control both the plants and the greenhouse conditions, as well as predict the growth rate of tomatoes.
Proceedings ArticleDOI
Correction of Automatic Speech Recognition with Transformer Sequence-To-Sequence Model
TL;DR: This work introduces a simple yet efficient post-processing model for automatic speech recognition: a Transformer-based encoder-decoder that "translates" acoustic model output into grammatically and semantically correct text.
Posted Content
Tensorized Embedding Layers for Efficient Model Compression
TL;DR: This work introduces a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance.
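The compression idea can be sketched in NumPy for the simplest two-core case: factor the vocabulary size V and embedding dimension d into two modes each, store two small TT-cores instead of the dense (V, d) table, and contract them on demand to materialize any single row. The sizes, rank, and function names below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: vocab V = v1 * v2 and embedding dim d = d1 * d2
# are each factored into two modes; r is the TT-rank.
v1, v2, d1, d2, r = 100, 100, 16, 16, 8    # V = 10_000, d = 256

# Two small TT-cores stand in for the full (V, d) embedding table.
G1 = 0.1 * rng.normal(size=(v1, d1, r))
G2 = 0.1 * rng.normal(size=(r, v2, d2))

def tt_embed(token_id):
    """Materialize a single embedding row from the TT-cores on demand."""
    i1, i2 = divmod(token_id, v2)          # split the row index across modes
    # (d1, r) @ (r, d2) -> (d1, d2), flattened row-major into a d-dim vector
    return (G1[i1] @ G2[:, i2, :]).reshape(-1)

full_params = v1 * v2 * d1 * d2            # 2,560,000 weights in a dense table
tt_params = G1.size + G2.size              # 25,600 weights in TT form
```

With these (made-up) sizes the two cores hold 100x fewer parameters than the dense table, and the compression ratio is controlled by the TT-rank r.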