
Jaesik Choi

Researcher at KAIST

Publications - 118
Citations - 2202

Jaesik Choi is an academic researcher from KAIST. The author has contributed to research in topics: Inference & Computer science. The author has an h-index of 20 and has co-authored 107 publications receiving 1385 citations. Previous affiliations of Jaesik Choi include University of California & Lawrence Berkeley National Laboratory.

Papers
Journal ArticleDOI

XAI—Explainable artificial intelligence

TL;DR: This research presents a meta-modelling architecture that automates the labor-intensive, time-consuming, and therefore expensive process of manually cataloging artificial intelligence applications.
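As a loose illustration of the kind of explanation XAI systems produce, the sketch below computes permutation feature importance: the drop in a model's accuracy when one feature column is shuffled. This is a generic technique chosen for illustration, not the meta-modelling architecture the paper describes; the `model.predict` interface and all names are assumptions.

```python
import numpy as np

# Generic XAI illustration: permutation feature importance.
# Assumes a fitted classifier exposing model.predict(X); this interface
# and everything else here is illustrative, not from the paper.
def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Return the mean accuracy drop when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = (model.predict(X) == y).mean()
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's relationship to y
            drops[j] += baseline - (model.predict(Xp) == y).mean()
    return drops / n_repeats  # larger drop => feature mattered more
```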
Proceedings ArticleDOI

SecureCore: A multicore-based intrusion detection architecture for real-time embedded systems

TL;DR: The SecureCore framework, coupled with novel monitoring techniques, improves the security of real-time embedded systems: it detects malicious activity by observing the inherent properties of the real-time system and applying statistical analyses to its execution profiles.
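To make the idea concrete, here is a minimal sketch of statistical execution-profile monitoring in the spirit of SecureCore: learn the timing distribution of a task from clean runs, then flag executions that deviate beyond a threshold. The Gaussian model, the 3-sigma threshold, and all names are illustrative assumptions, not the paper's exact method.

```python
import statistics

def learn_profile(clean_times):
    """Fit a simple timing profile (mean, stdev) from clean executions."""
    return statistics.mean(clean_times), statistics.stdev(clean_times)

def is_anomalous(exec_time, mean, stdev, k=3.0):
    """Flag an execution whose time falls outside mean +/- k*stdev."""
    return abs(exec_time - mean) > k * stdev

# Usage: profile a real-time task from trusted runs, then monitor online.
mu, sigma = learn_profile([1.02, 0.98, 1.01, 1.00, 0.99, 1.03])
print(is_anomalous(1.01, mu, sigma))  # False: consistent with the profile
print(is_anomalous(1.65, mu, sigma))  # True: suspiciously long execution
```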
Proceedings ArticleDOI

Spatio-temporal pyramid matching for sports videos

TL;DR: This paper addresses the problem of content-based matching of video shots and proposes spatio-temporal pyramid matching (STPM), which considers temporal information in conjunction with spatial location to match objects in video shots.
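A rough sketch of the pyramid-matching idea follows, under the assumption that each shot is reduced to labeled feature points with normalized (x, y, t) coordinates; the grid sizes, level weights, and feature format are illustrative, not the paper's exact formulation.

```python
import numpy as np

def cell_histograms(features, level, n_labels=16):
    """Histogram feature labels over a 2^level grid of spatio-temporal cells.
    features: rows of (x, y, t, label) with x, y, t in [0, 1)."""
    n = 2 ** level
    hists = np.zeros((n, n, n, n_labels))
    for x, y, t, label in features:
        hists[int(x * n), int(y * n), int(t * n), int(label)] += 1
    return hists.ravel()

def stpm_similarity(feats_a, feats_b, levels=3):
    """Weighted histogram intersection across pyramid levels; finer
    spatio-temporal grids receive larger weights, as in pyramid matching."""
    score = 0.0
    for level in range(levels):
        ha = cell_histograms(feats_a, level)
        hb = cell_histograms(feats_b, level)
        weight = 1.0 / 2 ** (levels - 1 - level)
        score += weight * np.minimum(ha, hb).sum()
    return score
```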
Journal ArticleDOI

Safe Navigation of a Mobile Robot Considering Visibility of Environment

TL;DR: It is experimentally verified that a robot navigates safely in a dynamic indoor environment by adopting the proposed scheme, which gives a structured procedure for modeling and exploiting the risk of navigation.
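One way to picture "modeling and exploiting the risk of navigation" is a visibility-aware cost that penalizes moving fast near occluded regions. The toy sketch below is an assumption-laden illustration, not the paper's actual scheme; its risk model and weights are made up for clarity.

```python
def cell_cost(dist_to_goal, dist_to_occlusion, robot_speed, risk_weight=5.0):
    """Combine progress toward the goal with a visibility-based risk term.
    Risk grows when the robot moves fast close to an occluded corner,
    where an unseen obstacle could appear inside its stopping distance."""
    risk = robot_speed / max(dist_to_occlusion, 1e-3)
    return dist_to_goal + risk_weight * risk

# A grid planner (e.g. A*) expanding cells by this cost makes the robot
# slow down or detour around blind corners instead of cutting past them.
print(cell_cost(dist_to_goal=4.0, dist_to_occlusion=0.5, robot_speed=1.2))
```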
Proceedings Article

HetPipe: Enabling Large DNN Training on (Whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism

TL;DR: HetPipe integrates pipelined model parallelism with data parallelism to enable training of large DNN models on a heterogeneous GPU cluster that may include whimpy GPUs which, on their own, could not be used for training.
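The partitioning intuition can be sketched as follows: give each GPU a contiguous range of layers sized roughly in proportion to its throughput, so slow ("whimpy") GPUs get shorter pipeline stages. The greedy split and the numbers below are illustrative assumptions, not HetPipe's actual partitioning algorithm.

```python
def partition_layers(layer_costs, gpu_throughputs):
    """Split layers into contiguous pipeline stages, one per GPU,
    sized so each GPU's work is roughly proportional to its throughput."""
    total_cost = sum(layer_costs)
    total_tp = sum(gpu_throughputs)
    stages, start = [], 0
    for i, tp in enumerate(gpu_throughputs):
        budget = total_cost * tp / total_tp  # this GPU's fair share of work
        acc, end = 0.0, start
        last_gpu = i == len(gpu_throughputs) - 1
        while end < len(layer_costs) and (acc < budget or last_gpu):
            acc += layer_costs[end]
            end += 1
        stages.append((start, end))
        start = end
    return stages

# Four GPUs of mixed capability: the fast GPU absorbs most of the layers.
costs = [4, 4, 4, 4, 2, 2, 2, 2]              # per-layer compute cost
print(partition_layers(costs, [1, 1, 2, 4]))  # -> [(0, 1), (1, 2), (2, 4), (4, 8)]
```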