Author
Hyunggon Park
Other affiliations: University of California, Los Angeles, École Polytechnique Fédérale de Lausanne
Bio: Hyunggon Park is an academic researcher from Ewha Womans University. The author has contributed to research on topics including linear network coding and decoding methods, has an h-index of 14, and has co-authored 91 publications receiving 757 citations. Previous affiliations of Hyunggon Park include University of California, Los Angeles and École Polytechnique Fédérale de Lausanne.
Papers published on a yearly basis
Papers
••
TL;DR: This work proposes to deploy the well-known game-theoretic concept of bargaining to allocate bandwidth fairly and optimally among multiple collaborative users, and considers two bargaining solutions for the resource management problem: the Nash bargaining solution (NBS) and the Kalai-Smorodinsky bargaining solution (KSBS).
Abstract: Multiuser multimedia applications such as enterprise streaming, surveillance, and gaming are recently emerging, and they are often deployed over bandwidth-constrained network infrastructures. To ensure the quality of service (QoS) required by the delay-sensitive and bandwidth intensive multimedia data for these applications, efficient resource (bandwidth) management becomes paramount. We propose to deploy the well-known game theoretic concept of bargaining to allocate the bandwidth fairly and optimally among multiple collaborative users. Specifically, we consider two bargaining solutions for our resource management problem: the Nash bargaining solution (NBS) and the Kalai-Smorodinsky bargaining solution (KSBS). We provide interpretations for the two investigated bargaining solutions for multiuser resource allocation: the NBS can be used to maximize the system utility, while the KSBS ensures that all users incur the same utility penalty relative to the maximum achievable utility. The bargaining strategies and solutions are implemented in the network using a resource manager, which explicitly considers the application-specific distortion for the bandwidth allocation. We show that the bargaining solutions exhibit important properties (axioms) that can be used for effective multimedia resource allocation. Moreover, we propose several criteria for determining bargaining powers for these solutions, which enable us to provide additional flexibility in choosing solution by taking into consideration the visual quality impact, the deployed spatiotemporal resolutions, etc. We also determine the complexity of these solutions for our application and quantify the performance of the proposed bargaining-based resource strategies for different scenarios.
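To make the comparison concrete, here is a minimal sketch of the two bargaining solutions for two users splitting a fixed bandwidth budget. The logarithmic utilities, zero disagreement points, and equal bargaining powers are illustrative assumptions, not the paper's rate-distortion model.

```python
import numpy as np

B = 10.0                             # total bandwidth to divide (arbitrary units)
u1 = lambda b: np.log1p(b)           # user 1's assumed concave utility
u2 = lambda b: np.log1p(0.5 * b)     # user 2 is assumed less bandwidth-efficient

b = np.linspace(0.0, B, 100001)      # candidate allocations for user 1

# Nash bargaining solution: with zero disagreement points, maximize the
# product of utility gains u1(b) * u2(B - b).
nbs = b[np.argmax(u1(b) * u2(B - b))]

# Kalai-Smorodinsky: the Pareto-efficient point where both users keep the
# same fraction of their maximum achievable utility (u1(B), u2(B)).
ksbs = b[np.argmin(np.abs(u1(b) / u1(B) - u2(B - b) / u2(B)))]

print(f"NBS : user1={nbs:.2f}, user2={B - nbs:.2f}")
print(f"KSBS: user1={ksbs:.2f}, user2={B - ksbs:.2f}")
```

With these asymmetric utilities, the NBS tilts the split toward maximizing the joint utility product, while the KSBS equalizes each user's utility penalty relative to what it could achieve with the whole budget, matching the interpretation given in the abstract.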
154 citations
••
TL;DR: Efficient (delay-tolerant and delay-intolerant) data sharing mechanisms in P2P and current video coding trends are elaborated in detail, and the paper concludes with key challenges and open issues related to video streaming over P2P.
Abstract: A robust real-time video communication service over the Internet in a distributed manner is an important challenge, as it influences not only the current Internet structure but also the future Internet evolution. In this context, Peer-to-Peer (P2P) networks play an important role in providing efficient video transmission over the Internet. Recently, several P2P video transmission systems have been proposed for live video streaming services or video-on-demand services over the Internet. In this paper, we describe and discuss existing video streaming systems over P2P. Efficient (delay-tolerant and delay-intolerant) data sharing mechanisms in P2P and current video coding trends are elaborated in detail. Moreover, video streaming solutions (live and on-demand) over P2P are explained from the perspective of tree-based and mesh-based systems. Finally, the conclusion is drawn with key challenges and open issues related to video streaming over P2P.
91 citations
••
TL;DR: Cross-correlating the templates extracted during the registration and authentication stages can reduce the time required to achieve the target false acceptance rate (FAR) and false rejection rate (FRR).
Abstract: We propose a practical system design for biometric authentication based on electrocardiogram (ECG) signals collected from mobile or wearable devices. The ECG signals from such devices can be corrupted by noise as a result of movement, signal acquisition type, etc., which leads to a tradeoff between captured signal quality and ease of use. We propose the use of cross correlation of the templates extracted during the registration and authentication stages. The proposed approach can reduce the time required to achieve the target false acceptance rate (FAR) and false rejection rate (FRR). The proposed algorithms are implemented in a wearable watch to verify feasibility. In the experimental results, the FAR and FRR are 5.2% and 1.9%, respectively, at approximately 3 s of authentication and 30 s of registration.
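A minimal sketch of the template-matching step described above, using normalized cross-correlation. Real systems would first detect R-peaks and segment beats; here `template` and `probe` are assumed to be pre-segmented, equal-length ECG snippets, and the 0.85 threshold is an illustrative value that would in practice be tuned against the target FAR/FRR.

```python
import numpy as np

def ncc_peak(template: np.ndarray, probe: np.ndarray) -> float:
    """Peak of the normalized cross-correlation between two 1-D signals."""
    t = (template - template.mean()) / (template.std() * len(template))
    p = (probe - probe.mean()) / probe.std()
    return float(np.max(np.correlate(p, t, mode="full")))

def authenticate(template, probe, threshold=0.85) -> bool:
    # Accept when the probe aligns closely enough with the enrolled template;
    # the threshold trades FAR against FRR, as discussed in the abstract.
    return ncc_peak(template, probe) >= threshold

# Example: a noisy copy of the template should pass, random noise should not.
rng = np.random.default_rng(0)
beat = np.sin(np.linspace(0, 4 * np.pi, 400))     # stand-in for an ECG beat
assert authenticate(beat, beat + 0.1 * rng.standard_normal(400))
assert not authenticate(beat, rng.standard_normal(400))
```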
81 citations
••
TL;DR: In this paper, the authors propose a novel strategy for a building energy management system (BEMS) that efficiently controls energy flows in a building so as to minimize the total cost of energy over a finite period.
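As a rough illustration of this kind of finite-horizon cost minimization, the sketch below schedules a battery against time-varying prices as a linear program. The price and load vectors, battery capacity, and rate limit are made-up values, and the paper's actual BEMS model and energy flows are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([0.10, 0.10, 0.30, 0.30, 0.30, 0.10])   # $/kWh per hour
load  = np.array([2.0,  2.0,  3.0,  3.0,  3.0,  2.0])    # kWh building demand
cap, rate, soc0 = 4.0, 2.0, 0.0    # battery capacity, rate limit, initial state

T = len(price)
L = np.tril(np.ones((T, T)))       # cumulative-sum matrix: SOC_t = soc0 + (L b)_t

# Decision variable b_t: battery charging power (negative = discharging).
# Total cost = sum_t price_t * (load_t + b_t); the load term is constant.
res = linprog(
    c=price,
    A_ub=np.vstack([L, -L]),                         # 0 <= SOC_t <= cap
    b_ub=np.concatenate([cap - soc0 + np.zeros(T), soc0 + np.zeros(T)]),
    bounds=[(max(-rate, -load[t]), rate) for t in range(T)],  # rate & no export
)
b = res.x
print("battery schedule:", np.round(b, 2))
print("baseline cost: %.2f, optimized cost: %.2f"
      % (price @ load, price @ (load + b)))
```

The optimizer charges during cheap hours and discharges during expensive ones, which is the essential behavior a cost-minimizing building controller exhibits over a finite horizon.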
48 citations
••
TL;DR: This paper proposes a distributed solution for designing a multi-hop ad hoc Internet of Things network in which mobile IoT devices strategically determine their wireless transmission ranges based on a deep reinforcement learning approach, building a network with higher performance than current state-of-the-art solutions in terms of system goodput and connectivity ratio.
Abstract: In this paper, we propose a distributed solution to design a multi-hop ad hoc Internet of Things (IoT) network in which mobile IoT devices strategically determine their wireless transmission ranges based on a deep reinforcement learning approach. We consider scenarios where only a limited networking infrastructure is available but a large number of IoT devices are deployed to build a multi-hop ad hoc network that delivers source data to the destination. Each IoT device is considered a decision-making agent that strategically determines its transmission range so as to maximize network throughput while minimizing the corresponding transmission power consumption. Each IoT device collects information from its partial observations and learns its environment through a sequence of experiences. Hence, the proposed solution requires only a minimal amount of information from the system. We show that the actions each IoT device takes under its learned policy reduce to activating or deactivating its transmission: only necessary relay nodes are activated with the maximum transmit power, while nonessential nodes are deactivated to minimize power consumption. Using extensive experiments, we confirm that the proposed solution builds a network with higher performance than current state-of-the-art solutions in terms of system goodput and connectivity ratio.
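A toy sketch of the activate/deactivate decision described above. The paper uses deep reinforcement learning on partial observations; to keep this example dependency-free, it substitutes tabular Q-learning on a two-state abstraction (the node is, or is not, a needed relay), with made-up reward constants.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.zeros((2, 2))           # Q[state, action]; action 0 = sleep, 1 = transmit
alpha, gamma, eps = 0.1, 0.9, 0.1
POWER_COST, RELAY_GAIN = 1.0, 5.0   # assumed per-step cost/reward constants

def reward(state: int, action: int) -> float:
    # Transmitting earns throughput only when this node is a needed relay,
    # and always burns power; sleeping earns nothing.
    return (RELAY_GAIN * state - POWER_COST) if action == 1 else 0.0

for step in range(20000):
    s = rng.integers(2)                              # is the node on the path?
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    r = reward(s, a)
    s_next = rng.integers(2)                         # i.i.d. toy dynamics
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# The learned policy matches the behavior reported above: transmit only when
# the node is a necessary relay, sleep otherwise.
print("policy:", {s: ("transmit" if np.argmax(Q[s]) == 1 else "sleep")
                  for s in (0, 1)})
```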
40 citations
Cited by
••
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
13,246 citations
••
TL;DR: This work surveys the current state-of-the-art of information fusion by presenting the known methods, algorithms, architectures, and models, and discussing their applicability in the context of wireless sensor networks.
Abstract: Wireless sensor networks produce a large amount of data that needs to be processed, delivered, and assessed according to the application objectives. The way these data are manipulated by the sensor nodes is a fundamental issue. Information fusion arises as a response to process data gathered by sensor nodes and benefits from their processing capability. By exploiting the synergy among the available data, information fusion techniques can reduce the amount of data traffic, filter noisy measurements, and make predictions and inferences about a monitored entity. In this work, we survey the current state-of-the-art of information fusion by presenting the known methods, algorithms, architectures, and models of information fusion, and discuss their applicability in the context of wireless sensor networks.
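As one concrete instance from the family of techniques surveyed above, the sketch below fuses noisy readings of the same quantity by inverse-variance weighting, which reduces the variance of the estimate below that of any single sensor; the sensor variances are assumed known, e.g. from calibration.

```python
import numpy as np

def fuse(readings: np.ndarray, variances: np.ndarray):
    """Return the minimum-variance unbiased combination of the readings."""
    w = 1.0 / variances                 # noisier sensors get smaller weights
    estimate = np.sum(w * readings) / np.sum(w)
    fused_var = 1.0 / np.sum(w)         # always <= min(variances)
    return estimate, fused_var

# Example: three temperature sensors observing a true value of 20.0.
rng = np.random.default_rng(1)
var = np.array([0.5, 1.0, 4.0])
obs = 20.0 + rng.standard_normal(3) * np.sqrt(var)
est, v = fuse(obs, var)
print(f"readings={np.round(obs, 2)}, fused={est:.2f} (variance {v:.2f})")
```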
606 citations
••
TL;DR: This paper analyzes the motion estimation (ME) structure in HEVC and proposes a parallel framework that decouples ME for different partitions on many-core processors, achieving more than 30× and 40× speedups for 1920 × 1080 and 2560 × 1600 video sequences, respectively.
Abstract: High Efficiency Video Coding (HEVC) provides higher coding efficiency than previous video coding standards at the cost of increased encoding complexity. The complexity increase of the motion estimation (ME) procedure is rather significant, especially when considering the complicated partitioning structure of HEVC. Fully exploiting the coding efficiency brought by HEVC requires a huge amount of computation. In this paper, we analyze the ME structure in HEVC and propose a parallel framework to decouple ME for different partitions on many-core processors. Based on the local parallel method (LPM), we first use a directed acyclic graph (DAG)-based order to parallelize coding tree units (CTUs) and adopt an improved LPM (ILPM) within each CTU (DAGILPM), which exploits CTU-level and prediction unit (PU)-level parallelism. Then, we find that there exist completely independent PUs (CIPUs) and partially independent PUs (PIPUs). When the degree of parallelism (DP) is smaller than the maximum DP of DAGILPM, we process the CIPUs and PIPUs, which further increases the DP. The data dependencies and coding efficiency stay the same as LPM. Experiments show that on a 64-core system, compared with serial execution, our proposed scheme achieves more than 30× and 40× speedups for 1920 × 1080 and 2560 × 1600 video sequences, respectively.
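A minimal sketch of DAG-ordered CTU parallelism in the spirit of the scheme above. Assuming each CTU depends on its left, top, top-left, and top-right neighbors (the usual wavefront pattern), every CTU with the same level x + 2y is independent and can be encoded concurrently; `encode_ctu` is a hypothetical placeholder for the real motion-estimation work.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

W, H = 8, 4                      # CTU grid of an illustrative frame

def encode_ctu(x: int, y: int) -> str:
    return f"CTU({x},{y})"       # stand-in for ME + mode decision on one CTU

# Group CTUs by wavefront level; within a level there are no dependencies,
# since all four neighbor dependencies have strictly smaller levels.
levels = defaultdict(list)
for y in range(H):
    for x in range(W):
        levels[x + 2 * y].append((x, y))

with ThreadPoolExecutor(max_workers=8) as pool:
    for lvl in sorted(levels):
        # All CTUs on this anti-diagonal run in parallel; the barrier between
        # levels enforces the left / top / top-right dependencies.
        done = list(pool.map(lambda xy: encode_ctu(*xy), levels[lvl]))
        print(f"level {lvl:2d}: {len(done)} CTUs in parallel")
```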
366 citations
••
TL;DR: A use case of fully autonomous driving is presented to show how 6G supports massive IoT, and some breakthrough technologies in 6G, such as machine learning and blockchain, are introduced, where the motivations, applications, and open issues of these technologies for massive IoT are summarized.
Abstract: Nowadays, many disruptive Internet-of-Things (IoT) applications emerge, such as augmented/virtual reality online games, autonomous driving, and smart everything, which are massive in number, data intensive, computation intensive, and delay sensitive. Due to the mismatch between the fifth generation (5G) and the requirements of such massive IoT-enabled applications, there is a need for technological advancements and evolutions for wireless communications and networking toward the sixth-generation (6G) networks. 6G is expected to deliver extended 5G capabilities at a very high level, such as Tbps data rate, sub-ms latency, cm-level localization, and so on, which will play a significant role in supporting massive IoT devices to operate seamlessly with highly diverse service requirements. Motivated by the aforementioned facts, in this article, we present a comprehensive survey on 6G-enabled massive IoT. First, we present the drivers and requirements by summarizing the emerging IoT-enabled applications and the corresponding requirements, along with the limitations of 5G. Second, visions of 6G are provided in terms of core technical requirements, use cases, and trends. Third, a new network architecture provided by 6G to enable massive IoT is introduced, i.e., space–air–ground–underwater/sea networks enhanced by edge computing. Fourth, some breakthrough technologies, such as machine learning and blockchain, in 6G are introduced, where the motivations, applications, and open issues of these technologies for massive IoT are summarized. Finally, a use case of fully autonomous driving is presented to show how 6G supports massive IoT.
263 citations