Hyuckchul Jung
Researcher at Florida Institute for Human and Machine Cognition
Publications - 42
Citations - 1346
Hyuckchul Jung is an academic researcher from the Florida Institute for Human and Machine Cognition. The author has contributed to research topics including KAoS and constraint satisfaction problems. The author has an h-index of 18 and has co-authored 42 publications receiving 1,307 citations. Previous affiliations of Hyuckchul Jung include Nuance Communications and AT&T.
Papers
Book Chapter
Toward trustworthy adjustable autonomy in KAoS
Jeffrey M. Bradshaw, Hyuckchul Jung, Shri Kulkarni, Matthew Johnson, Paul J. Feltovich, James F. Allen, Larry Bunch, Nathanael Chambers, Lucian Galescu, R. Jeffers, Niranjan Suri, William Taysom, Andrzej Uszok +12 more
TL;DR: This chapter describes several important dimensions of autonomy and gives examples of how these dimensions might be adjusted to enhance the performance of human-agent teams.
Book Chapter
Dynamic Distributed Resource Allocation: A Distributed Constraint Satisfaction Approach
TL;DR: In this article, the authors propose a formalization of distributed resource allocation that represents both the dynamic and distributed aspects of the problem, and define four problem categories of differing difficulty.
Proceedings Article
Policy-based coordination in joint human-agent activity
Jeffrey M. Bradshaw, Paul J. Feltovich, Hyuckchul Jung, Shri Kulkarni, James F. Allen, Larry Bunch, Nathanael Chambers, Lucian Galescu, R. Jeffers, Matthew Johnson, M. Sierhuis, William Taysom, Andrzej Uszok, R. Van Hoof +13 more
TL;DR: This paper outlines an approach to policy-based coordination in joint human-agent activity grounded in a theory of joint activity originally developed in the context of discourse, and now applied to the broader realm of human-agent interaction.
Proceedings Article
ATT1: Temporal Annotation Using Big Windows and Rich Syntactic and Semantic Features
Hyuckchul Jung, Amanda Stent +1 more
TL;DR: It is shown that models using only lexical features can approach the performance of models using rich syntactic and semantic feature sets.
Proceedings Article
MVA: The Multimodal Virtual Assistant
Michael V. Johnston, John Chen, Patrick Ehlen, Hyuckchul Jung, Jay Henry Lieske, Aarthi M. Reddy, Ethan Selfridge, Svetlana Stoyanchev, Brant J. Vasilieff, Jay G. Wilpon +9 more
TL;DR: This demonstration will highlight incremental recognition, multimodal speech and gesture input, contextually-aware language understanding, and the targeted clarification of potentially incorrect segments within user input.