Institution
Naver Corporation
Company • Seongnam-si, South Korea
About: Naver Corporation is a company based in Seongnam-si, South Korea. It is known for its research contributions in the topics: Terminal (electronics) & Computer science. The organization has 4038 authors who have published 4294 publications receiving 35045 citations. The organization is also known as: NAVER Corporation & NAVER.
Papers
01 Jul 2020
TL;DR: This paper aims to improve the quality of each phrase embedding by augmenting it with a contextualized sparse representation (Sparc) and shows 4%+ improvement in CuratedTREC and SQuAD-Open.
Abstract: Open-domain question answering can be formulated as a phrase retrieval problem, in which we can expect huge scalability and speed benefits but often suffer from low accuracy due to the limitations of existing phrase representation models. In this paper, we aim to improve the quality of each phrase embedding by augmenting it with a contextualized sparse representation (Sparc). Unlike previous sparse vectors that are term-frequency-based (e.g., tf-idf) or directly learned (only a few thousand dimensions), we leverage rectified self-attention to indirectly learn sparse vectors in n-gram vocabulary space. By augmenting the previous phrase retrieval model (Seo et al., 2019) with Sparc, we show a 4%+ improvement on CuratedTREC and SQuAD-Open. Our CuratedTREC score is even better than that of the best known retrieve & read model, with at least 45x faster inference speed.
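The core idea, producing a sparse vocabulary-space vector by rectifying attention scores, can be sketched in a few lines of NumPy. Everything here is illustrative: the dimensions, the random weights, and the single dot-product attention stand in for the paper's trained rectified self-attention over a real n-gram vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64           # dense phrase-embedding size (illustrative)
vocab = 1000     # n-gram vocabulary size (illustrative; real ones are far larger)

phrase_dense = rng.normal(size=d)         # dense phrase embedding
ngram_emb = rng.normal(size=(vocab, d))   # one row per n-gram

# Rectified attention: score the phrase against every n-gram, then
# clip negative scores to exactly zero with ReLU, which makes the
# vocabulary-space vector sparse by construction.
scores = ngram_emb @ phrase_dense / np.sqrt(d)
sparse = np.maximum(scores, 0.0)

# Augment the dense embedding with the sparse component.
augmented = np.concatenate([phrase_dense, sparse])

sparsity = float(np.mean(sparse == 0.0))
```

With random weights roughly half the entries land at zero; a trained model would concentrate the remaining mass on the few n-grams relevant to the phrase.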
23 citations
TL;DR: In this paper, the authors investigate the factors that impact the resistance to in-vehicle infotainment (IVI) systems in the Korean market and show that the technographics, subjective norm, and prior similar experience are direct and powerful antecedents for resistance.
23 citations
07 Feb 2019
TL;DR: In this article, the authors propose two end-to-end loss functions for speaker verification using the concept of speaker bases, which are trainable parameters, enabling hard negative mining and calculation of between-speaker variations with all speakers taken into account.
Abstract: In recent years, speaker verification has primarily been performed using deep neural networks that are trained to output embeddings from input features such as spectrograms or Mel-filterbank energies. Studies that design various loss functions, including metric learning, have been widely explored. In this study, we propose two end-to-end loss functions for speaker verification using the concept of speaker bases, which are trainable parameters. One loss function is designed to further increase the inter-speaker variation, and the other applies the same concept through hard negative mining. Each speaker basis is designed to represent the corresponding speaker in the process of training deep neural networks. In contrast to conventional loss functions that can consider only the limited number of speakers included in a mini-batch, the proposed loss functions can consider all the speakers in the training set regardless of the mini-batch composition. In particular, the proposed loss functions enable hard negative mining and calculation of between-speaker variations with all speakers taken into account. Through experiments on the VoxCeleb1 and VoxCeleb2 datasets, we confirmed that the proposed loss functions could supplement conventional softmax and center loss functions.
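A rough sketch of the speaker-basis idea, with hypothetical shapes and random values in place of trained parameters: each speaker owns a trainable basis vector, the utterance embedding is scored against all bases (not only those in the mini-batch), and a margin term penalizes the hardest wrong speaker.

```python
import numpy as np

rng = np.random.default_rng(1)

n_speakers, d = 50, 32                     # illustrative sizes
bases = rng.normal(size=(n_speakers, d))   # one trainable basis per speaker
emb = rng.normal(size=d)                   # utterance embedding from the network
label = 7                                  # true speaker id

# Score against ALL speaker bases, regardless of mini-batch composition.
logits = bases @ emb

# Softmax cross-entropy against the true speaker (numerically stabilized).
shifted = logits - logits.max()
probs = np.exp(shifted) / np.exp(shifted).sum()
ce_loss = -np.log(probs[label])

# Hard negative mining: penalize similarity to the closest wrong speaker
# with a hinge margin (margin value is an assumption for illustration).
neg = np.delete(logits, label)
hn_loss = max(neg.max() - logits[label] + 1.0, 0.0)

total = ce_loss + hn_loss
```

This is only a stand-in for the paper's two losses; the point it illustrates is that both terms range over the full speaker set rather than a mini-batch.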
23 citations
01 Mar 2020
TL;DR: MoVNect, a lightweight deep neural network that captures 3D human pose from a single RGB camera, is presented; knowledge distillation based on teacher-student learning is applied to improve the overall performance.
Abstract: We present MoVNect, a lightweight deep neural network to capture 3D human pose using a single RGB camera. To improve the overall performance of the model, we apply knowledge distillation based on teacher-student learning to 3D human pose estimation. Real-time post-processing makes the CNN output yield temporally stable 3D skeletal information, which can be used directly in applications. We implement a 3D avatar application running on mobile in real time to demonstrate that our network achieves both high accuracy and fast inference time. Extensive evaluations show the advantages of our lightweight model with the proposed training method over previous 3D pose estimation methods on the Human3.6M dataset and on mobile devices.
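The teacher-student distillation objective can be sketched as a blended regression loss: the lightweight student matches both the ground-truth joints and the larger teacher's predictions. The joint count, noise levels, and blend weight `alpha` below are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_joints = 17  # illustrative skeleton size

gt = rng.normal(size=(n_joints, 3))                    # ground-truth 3D joints
teacher = gt + 0.01 * rng.normal(size=(n_joints, 3))   # accurate large model
student = gt + 0.10 * rng.normal(size=(n_joints, 3))   # lightweight model output

def mse(a, b):
    """Mean squared error between two joint sets."""
    return float(np.mean((a - b) ** 2))

# Distillation loss: the student is supervised by both the labels and
# the teacher's (softer, easier-to-fit) predictions, blended by alpha.
alpha = 0.5
loss = alpha * mse(student, gt) + (1 - alpha) * mse(student, teacher)
```

The design intuition is that the teacher term provides an easier intermediate target for a small network than the ground truth alone.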
23 citations
TL;DR: The authors propose Background Suppression Network (BaS-Net), which introduces an auxiliary class for background and has a two-branch weight-sharing architecture with an asymmetrical training strategy.
Abstract: Weakly-supervised temporal action localization is a very challenging problem because frame-wise labels are not given in the training stage while the only hint is video-level labels: whether each video contains action frames of interest. Previous methods aggregate frame-level class scores to produce video-level prediction and learn from video-level action labels. This formulation does not fully model the problem in that background frames are forced to be misclassified as action classes to predict video-level labels accurately. In this paper, we design Background Suppression Network (BaS-Net) which introduces an auxiliary class for background and has a two-branch weight-sharing architecture with an asymmetrical training strategy. This enables BaS-Net to suppress activations from background frames to improve localization performance. Extensive experiments demonstrate the effectiveness of BaS-Net and its superiority over the state-of-the-art methods on the most popular benchmarks - THUMOS'14 and ActivityNet. Our code and the trained model are available at this https URL.
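A toy sketch of the two ideas named in the abstract: an auxiliary background class appended to the action classes, and two branches that share the same classifier and aggregation while one of them down-weights background frames with learned attention. All shapes, the top-k aggregation, and the random scores here are assumptions for illustration, not the BaS-Net implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_actions = 100, 20
C = n_actions + 1      # auxiliary background class appended

# Per-frame class logits from a classifier shared by both branches.
frame_logits = rng.normal(size=(T, C))

def video_score(logits, k=8):
    """Aggregate the top-k frame scores per class into a video-level score."""
    topk = np.sort(logits, axis=0)[-k:]
    return topk.mean(axis=0)

# Base branch: all frames; trained to ALSO predict the background class.
base = video_score(frame_logits)

# Suppression branch: a sigmoid attention weight per frame suppresses
# background frames before the same (weight-shared) aggregation; it is
# trained NOT to predict the background class -- the asymmetry.
attention = 1.0 / (1.0 + np.exp(-rng.normal(size=(T, 1))))
suppressed = video_score(attention * frame_logits)
```

The asymmetric targets push the attention weights toward zero exactly on background frames, which is what improves localization at test time.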
23 citations
Authors
Name | H-index | Papers | Citations |
---|---|---|---|
Andrea Vedaldi | 89 | 305 | 63305 |
Sunghun Kim | 51 | 115 | 12994 |
Eric Gaussier | 41 | 231 | 8203 |
Un Ju Jung | 39 | 98 | 5696 |
Hyun-Soo Kim | 37 | 421 | 5650 |
Gabriela Csurka | 37 | 145 | 10959 |
Nojun Kwak | 34 | 234 | 6026 |
Young-Jin Park | 31 | 257 | 3759 |
Sung Joo Kim | 31 | 196 | 3078 |
Jae-Hoon Kim | 30 | 323 | 5847 |
Jung-Ryul Lee | 29 | 222 | 3322 |
Joon Son Chung | 28 | 73 | 4900 |
Ok-Hwan Lee | 27 | 163 | 2896 |
Diane Larlus | 27 | 69 | 4722 |
Jung Goo Lee | 26 | 142 | 1917 |