
Showing papers in "IEEE Transactions on Consumer Electronics in 2018"


Journal ArticleDOI
TL;DR: An emotion-based music recommendation framework that learns the emotion of a user from signals obtained via wearable physiological sensors and can be integrated into any recommendation engine.
Abstract: Most existing music recommendation systems use collaborative or content-based recommendation engines. However, a user's music choice depends not only on historical preferences or music content, but also on the user's mood. This paper proposes an emotion-based music recommendation framework that learns the emotion of a user from signals obtained via wearable physiological sensors. In particular, the emotion of a user is classified by a wearable computing device integrated with galvanic skin response (GSR) and photoplethysmography (PPG) sensors. This emotion information is fed to any collaborative or content-based recommendation engine as supplementary data, so the performance of existing recommendation engines can be improved. Accordingly, the emotion recognition problem is treated as arousal and valence prediction from multi-channel physiological signals. Experimental results are obtained on 32 subjects' GSR and PPG signal data, with and without feature fusion, using decision tree, random forest, support vector machine, and k-nearest neighbors algorithms. Comprehensive experiments on real data confirm the accuracy of the proposed emotion classification system, which can be integrated into any recommendation engine.
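The classification stage described above can be sketched as a standard feature-based pipeline. The snippet below is a minimal illustration, not the authors' implementation: it assumes GSR/PPG recordings have already been reduced to fixed-length per-window feature vectors (the placeholder features and labels are hypothetical) and compares the four classifier families named in the abstract using scikit-learn.

```python
# Hypothetical sketch: arousal/valence classification from GSR/PPG features.
# X stands in for per-window statistics (e.g., GSR mean/std, PPG-derived heart
# rate); y stands in for binary high/low arousal labels from self-reports.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(320, 12))       # placeholder feature matrix (32 subjects x 10 windows)
y = rng.integers(0, 2, size=320)     # placeholder high/low arousal labels

classifiers = {
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "svm": SVC(kernel="rbf", C=1.0),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```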

153 citations


Journal ArticleDOI
TL;DR: A novel 5-layer perceptron neural network and a Bayesian network-based accurate meal prediction algorithm are presented in this paper to advance the state of the art in smart healthcare.
Abstract: A correct balance of nutrient intake is very important, particularly in infants. When the body is deprived of essential nutrients, it can lead to serious disease and organ deterioration, which can cause serious health issues in adulthood. Automated monitoring of the nutritional content of food provided to infants, not only at home but also in daycare facilities, is essential for their healthy development. To address this challenge, this paper presents a new Internet of Things (IoT)-based fully automated nutrition monitoring system, called Smart-Log, to advance the state of the art in smart healthcare. For the realization of Smart-Log, a novel 5-layer perceptron neural network and a Bayesian network-based accurate meal prediction algorithm are presented. Smart-Log is prototyped as a consumer electronics product which consists of WiFi-enabled sensors for food nutrition quantification and a smartphone application that collects nutritional facts of the food ingredients. The Smart-Log prototype uses an open IoT platform for data analytics and storage. Experimental results consisting of 8172 food items for 1000 meals show that the prediction accuracy of Smart-Log is 98.6%.
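As a rough illustration of the neural component, the sketch below builds a 5-layer perceptron (input, three hidden layers, output) of the kind the abstract mentions. The layer widths, the ingredient-quantity features, and the meal categories are assumptions for illustration, not the authors' topology or data.

```python
# Hypothetical 5-layer perceptron for meal prediction, assuming the input is a
# vector of sensed ingredient quantities and the output is a meal category.
# Layer widths are illustrative; the paper's exact topology is not reproduced.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((1000, 20))              # placeholder: 20 ingredient-quantity features per meal
y = rng.integers(0, 5, size=1000)       # placeholder: 5 meal categories

# Input + 3 hidden layers + output = 5 layers in total.
model = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```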

96 citations


Journal ArticleDOI
TL;DR: Results demonstrate the ability of the proposed NILM algorithm to accurately identify and allocate individual energy signatures in a computationally efficient manner, which makes it suitable for inexpensive home energy management.
Abstract: This paper presents a nonintrusive load monitoring (NILM) algorithm based on mixed-integer linear programming. The formulation deals with the problem of multiple switching that arises when disaggregating individual appliances' consumption from a compound power measurement. Mixed-integer linear constraints are used to efficiently represent the load signatures of each appliance. Also, a window-based strategy is used to enhance the computational performance of the proposed NILM algorithm. The disaggregation can be made using only active power measurements at a low sampling rate, which is available in most energy meters. Moreover, if available, other signatures can be added to the model to improve its accuracy, such as reactive power signatures or harmonics. The performance of the algorithm is evaluated using three test cases from the Almanac of Minutely Power dataset. The proposed method is also compared with a disaggregation method called aided linear integer programming. Results demonstrate the ability of the proposed method to accurately identify and allocate individual energy signatures in a computationally efficient manner, which makes it suitable for inexpensive home energy management.
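The core idea of disaggregation by mixed-integer programming can be illustrated with a toy model, shown below under simplified assumptions: each appliance is reduced to a single rated active power and a binary on/off variable per time step, and the window objective minimizes the absolute error against the aggregate measurement. The appliance ratings and aggregate trace are invented, and the paper's richer signature constraints (reactive power, harmonics, switching limits) are omitted.

```python
# Toy MILP disaggregation sketch using PuLP: choose on/off states per appliance
# and time step so that the summed rated powers track the aggregate measurement.
import pulp

aggregate = [0, 150, 210, 2150, 2210, 60, 0]        # watts, made-up aggregate trace
ratings = {"fridge": 150, "kettle": 2000, "lamp": 60}

prob = pulp.LpProblem("nilm_window", pulp.LpMinimize)
T = range(len(aggregate))
on = pulp.LpVariable.dicts("on", (ratings.keys(), T), cat="Binary")
err = pulp.LpVariable.dicts("err", T, lowBound=0)

for t in T:
    estimate = pulp.lpSum(ratings[a] * on[a][t] for a in ratings)
    # err[t] >= |aggregate[t] - estimate|
    prob += aggregate[t] - estimate <= err[t]
    prob += estimate - aggregate[t] <= err[t]

prob += pulp.lpSum(err[t] for t in T)                # minimise total absolute error
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for a in ratings:
    print(a, [int(on[a][t].value()) for t in T])
```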

84 citations


Journal ArticleDOI
TL;DR: The proposed drowsiness-fatigue-detection system is composed of a pair of wearable smart glasses, an in-vehicle infotainment telematics platform, an on-board diagnostics-II-based automotive diagnostic bridge, an active vehicle rear light alert mechanism, and a cloud-based management platform.
Abstract: This paper proposes a drowsiness-fatigue-detection system based on wearable smart glasses to increase road safety. The proposed system is composed of a pair of wearable smart glasses, an in-vehicle infotainment telematics platform, an on-board diagnostics-II-based automotive diagnostic bridge, an active vehicle rear light alert mechanism, and a cloud-based management platform. A dedicated miniature bandpass infrared (IR) light sensor is also proposed and implemented for the low-cost, lightweight, wearable smart glasses, which can provide a higher signal-to-noise ratio than a general commercial IR light sensor, minimize the effect of ambient environmental light, and efficiently increase the accuracy of detection. The proposed system can detect the status of the vehicle driver with respect to drowsiness or fatigue conditions in real time. When drowsiness or fatigue is detected, the active vehicle rear light alert mechanism will automatically flash to alert following vehicles. The related information will also be concurrently transmitted to a cloud-based management platform. As a result, the proposed system can lead to increased road safety.
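A drowsiness decision of this kind ultimately reduces to detecting prolonged eye closure in the IR sensor signal. The sketch below is a schematic illustration only, not the authors' algorithm: the threshold, sampling rate, and closure duration are assumed values, and a closed eye is assumed to raise the normalized IR reading.

```python
# Illustrative drowsiness check from a sampled IR eye-blink signal.
# Threshold values and the signal itself are hypothetical.
CLOSED_THRESHOLD = 0.7      # normalised sensor reading treated as "eye closed"
SAMPLE_RATE_HZ = 50
MAX_CLOSURE_S = 1.5         # closure longer than this triggers an alert

def detect_drowsiness(ir_samples):
    """Return True if the eye stays closed longer than MAX_CLOSURE_S."""
    max_closed_samples = int(MAX_CLOSURE_S * SAMPLE_RATE_HZ)
    closed_run = 0
    for value in ir_samples:
        closed_run = closed_run + 1 if value > CLOSED_THRESHOLD else 0
        if closed_run >= max_closed_samples:
            return True
    return False

# Example: 2 s of open eye followed by 2 s of closed eye at 50 Hz.
samples = [0.2] * 100 + [0.9] * 100
print(detect_drowsiness(samples))   # True -> flash rear light, notify cloud platform
```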

49 citations


Journal ArticleDOI
Jinqiang Bai, Shiguo Lian, Zhaoxiang Liu, Kai Wang, Dijun Liu
TL;DR: A novel scheme that uses a dynamic subgoal selection strategy to guide users to the destination while helping them bypass obstacles is proposed; it has been tested on a group of individuals and proved effective on indoor navigation tasks.
Abstract: To help blind people walk to a destination efficiently and safely in indoor environments, a novel wearable navigation device is presented in this paper. Locating, way-finding, route-following, and obstacle-avoiding modules are the essential components of a navigation system, yet it remains challenging to handle obstacle avoidance during route following, as indoor environments are complex, changeable, and may contain dynamic objects. To address this issue, we propose a novel scheme which utilizes a dynamic subgoal selection strategy to guide the users to the destination and help them bypass obstacles at the same time. This scheme serves as the key component of a complete navigation system deployed on a pair of wearable optical see-through glasses for ease of use in blind people's daily walks. The proposed navigation device has been tested on a collection of individuals and proved to be effective on indoor navigation tasks. The sensors embedded are of low cost, small volume, and easy integration, making it possible for the glasses to be widely used as a wearable consumer device.
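The general flavor of dynamic subgoal selection can be shown with a purely geometric toy, sketched below: among candidate waypoints, choose the one nearest the destination whose straight-line segment from the user clears all currently sensed obstacles. This is an illustration under made-up assumptions (circular obstacles, fixed clearance, hand-picked candidates), not the paper's algorithm.

```python
# Illustrative subgoal selection: pick the candidate closest to the destination
# that can be reached in a straight line without passing near any obstacle.
# Obstacles are modelled as (x, y, radius) circles; values are hypothetical.
import math

def segment_clear(p, q, obstacles, clearance=0.5):
    """True if segment p->q stays at least `clearance` away from every obstacle."""
    (px, py), (qx, qy) = p, q
    dx, dy = qx - px, qy - py
    seg_len_sq = dx * dx + dy * dy
    for (ox, oy, r) in obstacles:
        if seg_len_sq == 0:
            t = 0.0
        else:
            # Projection of the obstacle centre onto the segment, clamped to [0, 1].
            t = max(0.0, min(1.0, ((ox - px) * dx + (oy - py) * dy) / seg_len_sq))
        cx, cy = px + t * dx, py + t * dy
        if math.hypot(ox - cx, oy - cy) < r + clearance:
            return False
    return True

def select_subgoal(user, goal, candidates, obstacles):
    """Return the reachable candidate nearest to the goal, or None."""
    reachable = [c for c in candidates if segment_clear(user, c, obstacles)]
    return min(reachable, key=lambda c: math.hypot(goal[0] - c[0], goal[1] - c[1]), default=None)

user, goal = (0.0, 0.0), (10.0, 0.0)
obstacles = [(5.0, 0.0, 1.0)]                      # one obstacle on the direct path
candidates = [(4.0, 2.0), (4.0, -2.0), (5.0, 0.5)]
print(select_subgoal(user, goal, candidates, obstacles))   # (4.0, 2.0)
```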

49 citations


Journal ArticleDOI
Jinqiang Bai, Shiguo Lian, Zhaoxiang Liu, Kai Wang, Dijun Liu
TL;DR: In this paper, a garbage pickup robot that operates on grass is presented; it detects garbage accurately and autonomously by using a deep neural network for garbage recognition.
Abstract: This paper presents a novel garbage pickup robot which operates on grass. The robot is able to detect garbage accurately and autonomously by using a deep neural network for garbage recognition. In addition, with ground segmentation using a deep neural network, a novel navigation strategy is proposed to guide the robot to move around. With the garbage recognition and automatic navigation functions, the robot can clean garbage on the ground in places like parks or schools efficiently and autonomously. Experimental results show that the garbage recognition accuracy can reach as high as 95%, and even without path planning, the navigation strategy can reach almost the same cleaning efficiency as traditional methods. Thus, the proposed robot can serve as a good assistant to relieve cleaning staff of physical labor in garbage cleaning tasks.
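The recognition step can be approximated with an off-the-shelf CNN fine-tuned as a binary garbage/background classifier. The sketch below is a stand-in, not the network used in the paper: the ResNet-18 backbone, the two-class head, the commented-out checkpoint name, and the placeholder camera frame are all assumptions.

```python
# Hypothetical garbage-recognition inference using a fine-tuned ResNet-18.
# The paper's actual network architecture and training data are not reproduced.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)        # classes: background, garbage
# model.load_state_dict(torch.load("garbage_resnet18.pt"))  # hypothetical checkpoint
model.eval()

image = Image.new("RGB", (640, 480))    # placeholder frame; replace with the robot camera image
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    is_garbage = logits.argmax(dim=1).item() == 1
print("garbage detected:", is_garbage)
```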

46 citations


Journal ArticleDOI
TL;DR: This paper proposes a software-defined networking (SDN)-based firewall platform that is capable of detecting horizontal port scans in home networks and uses FleXight, the proposed new information channel between SDN controller and data path elements to access packet-level information.
Abstract: Internet-connected consumer electronics marketed as smart devices (also known as Internet-of-Things devices) usually lack essential security protection mechanisms. This puts user privacy and security in great danger. One of the essential steps in compromising vulnerable devices is locating them through horizontal port scans. In this paper, we focus on the problem of detecting horizontal port scans in home networks. We propose a software-defined networking (SDN)-based firewall platform that is capable of detecting horizontal port scans. Current SDN implementations (e.g., OpenFlow) do not provide access to packet-level information, which is essential for network security applications, due to performance limitations. Our platform uses FleXight, our proposed new information channel between the SDN controller and data path elements, to access packet-level information. FleXight uses per-flow sampling and dynamic sampling rate adjustment to provide the necessary information to the controller while keeping the overhead very low. We evaluate our solution on a large real-world packet trace from an ISP and show that our system can identify all attackers and 99% of susceptible victims with only 0.75% network overhead. We also present a detailed usability analysis of our system.
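Independently of how flow records reach the controller, horizontal scan detection itself can be reduced to a simple fan-out count, as in the toy sketch below: a source that contacts many distinct destination hosts on the same port within a window is flagged. The record format and threshold are assumptions, and this is not the FleXight mechanism itself.

```python
# Toy horizontal port-scan detector over sampled flow records.
# A "flow" here is (src_ip, dst_ip, dst_port); threshold is made up.
from collections import defaultdict

SCAN_THRESHOLD = 20   # distinct destination hosts on the same port

def detect_horizontal_scans(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) seen within one time window."""
    targets = defaultdict(set)
    for src_ip, dst_ip, dst_port in flows:
        targets[(src_ip, dst_port)].add(dst_ip)
    return [(src, port, len(hosts))
            for (src, port), hosts in targets.items()
            if len(hosts) >= SCAN_THRESHOLD]

# Example: one host probing port 23 across a home-like /24 subnet.
flows = [("10.0.0.99", f"192.168.1.{i}", 23) for i in range(1, 40)]
flows += [("192.168.1.5", "8.8.8.8", 53)]
print(detect_horizontal_scans(flows))   # [('10.0.0.99', 23, 39)]
```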

34 citations


Journal ArticleDOI
TL;DR: An efficient pipelined hardware implementation of the adaptive multiple transform (AMT) is presented as a new approach to the transform core design, with larger and more flexible partitioning block sizes.
Abstract: Versatile video coding is the next generation video coding standard expected by the end of 2020. Several new contributions have been proposed to enhance the coding efficiency beyond the high efficiency video coding standard. One of these tools is the adaptive multiple transform (AMT), a new approach to the transform core design. The AMT involves five discrete cosine transform/discrete sine transform types with larger and more flexible partitioning block sizes. However, the AMT coding efficiency comes at the cost of higher computational complexity, especially at the encoder side. In this paper, an efficient pipelined hardware implementation of the AMT covering the five types for sizes $4\times 4$, $8\times 8$, $16\times 16$, and $32\times 32$ is proposed. The architecture design takes advantage of the internal software/hardware resources of the target field-programmable gate array device, such as library of parameterized modules core intellectual properties and digital signal processing blocks. The proposed 1-D 32-point AMT design can process 4K video at 44 frames/s. A unified 2-D implementation of the 4-, 8-, 16-, and 32-point AMT design is also presented. The implementation takes into account all the asymmetric 2-D block size combinations from 4 to 32. The 2-D architecture design is able to sustain 2K video coding at 50 frames/s with an operational frequency of up to 147 MHz.
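For reference, two of the AMT kernels can be written down directly as orthonormal floating-point matrices and applied as a 1-D matrix multiply, as in the sketch below. This is only an illustration of the transform mathematics: the standard uses scaled integer approximations, DCT-V, DCT-VIII, and DST-I are omitted, and the DST-VII indexing follows one common formulation.

```python
# Floating-point sketch of two AMT transform kernels (DCT-II and DST-VII)
# applied as a 1-D matrix multiply on a 4-sample residual row.
import numpy as np

def dct2_matrix(N):
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)                     # DC row normalisation
    return C

def dst7_matrix(N):
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.sqrt(4.0 / (2 * N + 1)) * np.sin(np.pi * (k + 1) * (2 * n + 1) / (2 * N + 1))

x = np.array([10.0, 12.0, 9.0, 7.0])               # one 4-sample residual row
for name, T in (("DCT-II", dct2_matrix(4)), ("DST-VII", dst7_matrix(4))):
    coeffs = T @ x                                  # forward 1-D transform
    print(name, np.round(coeffs, 3), "orthonormal:", np.allclose(T @ T.T, np.eye(4)))
```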

31 citations


Journal ArticleDOI
TL;DR: This paper proposes an efficient charging/discharging scheduling mechanism for electric vehicles in a common parking lot shared by multiple smart-household prosumers; the mechanism has low complexity and ensures energy satisfaction for all consumers.
Abstract: Plug-in electric vehicles are becoming an indispensable prosumer electronics component of smart households, and their cost-efficient energy scheduling is therefore one of the main challenges. In current schemes, the charging and discharging intervals of the vehicles are normally announced by the owners in advance, leading to suboptimal profit in some situations and hence consumer dissatisfaction. In this paper, we propose an efficient charging/discharging scheduling mechanism for electric vehicles in a common parking lot shared by multiple smart-household prosumers. The proposed mechanism takes into account the optimal interval allocation considering the instantaneous electricity load and the vehicles' request patterns. Based on the data from the vehicles, a mixed optimization model is formulated by the central scheduler, which aims to maximize the profit of consumers, and is then solved using an effective algorithm. The optimization results are then sent to the system controller, which determines the interval and energy trading patterns between the power grid and the vehicles. The proposed algorithm has low complexity and ensures energy satisfaction for all consumers. The performance of the scheduling scheme is verified through multiple simulation scenarios.

31 citations


Journal ArticleDOI
TL;DR: The results showed that the proposed system reduced energy consumption up to 43% by replacing the existing fluorescent lights with the proposed lighting control systems.
Abstract: Lighting consumes the largest amount of energy in buildings. Recently, many studies of energy-efficient lighting systems with a variety of sensor and communication technologies have been conducted as a way to increase the cost efficiency of lighting. However, earlier studies have mostly focused on energy efficiency and have not given significant consideration to occupant satisfaction. Therefore, this paper proposes an energy-saving lighting control system that considers occupant satisfaction. The proposed system improves energy efficiency and occupant satisfaction by adjusting lighting control parameters according to the characteristics of the space and the occupants' behavior patterns. Moreover, the proposed lighting systems were deployed in a building and operated in a real work environment to evaluate their performance. The results showed that the proposed system reduced energy consumption by up to 43% when the existing fluorescent lights were replaced with the proposed lighting control systems.

29 citations


Journal ArticleDOI
TL;DR: This is the first implementation of an architecture for the FVC Adaptive Multiple Core Transform supporting ${4 \times 4}$, ${8 \times 8}$, ${16 \times 16}$, and ${32 \times 32}$ transform sizes.
Abstract: Future video coding (FVC) will be the next generation video coding standard. Although still in the first stages of the standardization process, it is expected to replace high efficiency video coding beyond 2020. One of the enhancements in discussion is the adaptive multiple core transform, which uses five different types of 2-D discrete sine/cosine transforms (DCT-II, DCT-V, DCT-VIII, DST-I, and DST-VII) and up to ${64 \times 64}$ transform unit sizes. This scheme increases the computational complexity of both the encoder and the decoder. In this paper, a deeply pipelined high performance architecture implementing the five transforms for ${4 \times 4}$, ${8 \times 8}$, ${16 \times 16}$, and ${32 \times 32}$ sizes is proposed. The design has been described in VHDL and synthesized for different field-programmable gate array chips, and it is able to process up to 182 frames/s at $3840\times2160$ for ${4 \times 4}$ transform sizes. To the best of our knowledge, this is the first implementation of an architecture for the FVC Adaptive Multiple Core Transform supporting ${4 \times 4}$, ${8 \times 8}$, ${16 \times 16}$, and ${32 \times 32}$ sizes.

Journal ArticleDOI
TL;DR: Two hardware architectures of the modified LZ4 algorithm (MLZ4), comprising both compressors and decompressors, are implemented on an FPGA evaluation kit; results show that the proposed compressor architecture achieves a throughput and compression ratio higher than all previous LZ algorithm designs implemented on FPGAs.
Abstract: Data compression is commonly used in NAND flash-based solid state drives (SSDs) to increase their storage performance and lifetime, as it can reduce the amount of data written to and read from NAND flash memory. Software-based data compression reduces SSD performance significantly and, as such, hardware-based data compression designs are required. This paper studies the latest lossless data compression algorithm, i.e., the Lempel-Ziv 4 (LZ4) algorithm, which is one of the fastest compression algorithms reported to date. A data compression FPGA prototype based on the LZ4 lossless compression algorithm is studied. The original LZ4 compression algorithm is modified for real-time hardware implementation. Two hardware architectures of the modified LZ4 algorithm (MLZ4) are proposed, with both compressors and decompressors, and implemented on an FPGA evaluation kit. The implementation results show that the proposed compressor architecture can achieve a high throughput of up to 1.92 Gb/s with a compression ratio of up to 2.05, which is higher than all previous LZ algorithm designs implemented on FPGAs. The compression device can be used in high-end SSDs to further increase their storage performance and lifetime.

Journal ArticleDOI
TL;DR: A super-fast attitude solution for a consumer electronics accelerometer-magnetometer combination is obtained, and it is proven that the proposed algorithm is equivalent to two recent representative methods.
Abstract: A super-fast attitude solution is obtained for the consumer electronics accelerometer-magnetometer combination. The quaternion parameterizing the orientation is analytically derived from a least-squares optimization and maintains a very simple form. Like previously developed approaches, this algorithm does not require a predetermined magnetometer reference vector. It is proven in the paper that the proposed algorithm is equivalent to two recent representative methods. Computational complexity analysis shows that the proposed algorithm requires the fewest floating-point operations. Comparisons with recent and classical methods indicate the definite superiority of the proposed algorithm in execution time in real embedded applications.
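To make the problem concrete, the sketch below shows a conventional (non-optimized) accelerometer-magnetometer attitude computation: roll and pitch from gravity, tilt-compensated yaw from the magnetometer, packed into a unit quaternion. It is a reference baseline, not the paper's closed-form least-squares solution, and the axis conventions and sensor readings are assumptions.

```python
# Conventional reference computation (not the paper's algorithm):
# roll/pitch from the accelerometer, tilt-compensated yaw from the magnetometer,
# converted to a quaternion. Axis conventions vary between consumer devices.
import math

def attitude_quaternion(acc, mag):
    ax, ay, az = acc
    mx, my, mz = mag
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Tilt-compensate the magnetometer before taking the heading.
    xh = mx * math.cos(pitch) + my * math.sin(roll) * math.sin(pitch) + mz * math.cos(roll) * math.sin(pitch)
    yh = my * math.cos(roll) - mz * math.sin(roll)
    yaw = math.atan2(-yh, xh)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,   # w
            sr * cp * cy - cr * sp * sy,   # x
            cr * sp * cy + sr * cp * sy,   # y
            cr * cp * sy - sr * sp * cy)   # z

# Device lying roughly flat, magnetometer pointing toward magnetic north.
print(attitude_quaternion(acc=(0.0, 0.0, 9.81), mag=(30.0, 0.0, -40.0)))
```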

Journal ArticleDOI
TL;DR: A novel analysis method implements real-time detection of gait events (heel strike, toe off, and mid-stance phase) and immediately provides detailed spatiotemporal parameters, which is applicable to health-status management and injury prevention.
Abstract: The background of this paper is to apply advanced real-time gait analysis to walking interventions in daily-life settings. A vast number of wearable devices provide gait information, but offer little more than pedometer functions such as step counting, displacement, and velocity. This paper suggests a real-time gait analysis method based on a head-worn inertial measurement unit. A novel analysis method implements real-time detection of gait events (heel strike, toe off, and mid-stance phase) and immediately provides detailed spatiotemporal parameters. The reliability of this method was proven by a measurement with over 11 000 steps from seven participants on a 400-m outdoor track. The advanced gait analysis was conducted without any limitation of a fixed reference frame (e.g., indoor stage and infrared cameras). The mean absolute error in step counting was 0.24%. Compared to a pedometer, additional gait parameters were obtained, such as foot-ground contact time (CT) and CT ratio. The gait monitoring system can be used for real-time and long-term feedback, which is applicable to health-status management and injury prevention.
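A much simplified version of the event detection idea is peak-picking on vertical acceleration, sketched below on a synthetic walking-like signal. The sampling rate, signal, and thresholds are made-up assumptions, and the snippet is not the paper's head-worn algorithm.

```python
# Toy heel-strike detection by peak-picking on vertical acceleration.
# The signal is synthetic (about 2 steps/s); thresholds are illustrative.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                            # Hz, assumed IMU sampling rate
t = np.arange(0, 10, 1 / fs)                        # 10 s of walking
acc_z = 9.81 + 3.0 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)

# Heel strikes appear as sharp peaks; enforce a refractory gap of 0.3 s between steps.
peaks, _ = find_peaks(acc_z, height=11.5, distance=int(0.3 * fs))
step_times = t[peaks]
step_intervals = np.diff(step_times)                # crude per-step timing

print("steps detected:", len(peaks))
print("mean step interval: %.2f s" % step_intervals.mean())
```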

Journal ArticleDOI
TL;DR: This paper presents the design and implementation of a novel low-power, low-cost, hand-held wireless device called a SensePod, which a consumer can use to interact with a smart home through simple gestures like rubbing, tapping, or rolling the device on any home surface, such as a dining table.
Abstract: Low-cost sensors and ubiquitous wireless networking are enabling novel ways in which homeowners can interact with their smart homes. Many complementary approaches, such as using voice commands, direct interaction by touch or weight, or body gestures, are emerging. This paper shows the design and implementation of a novel low-power, low-cost, hand-held wireless device called a SensePod. A SensePod can be used by a consumer to interact with a smart home using simple gestures like rubbing, tapping, or rolling the device on any home surface, such as a dining table. The device is only 4.5 cm long, forms an ad-hoc wireless network using the ZigBee protocol, and can be easily interfaced to existing home management systems using a universal serial bus port. The gestures in each device can be programmed to control various objects of a smart home, such as smart curtains. Hidden Markov models were used to train the device to recognize various gestures. The device was tested with a variety of gestures and has a recognition rate of over 99.7% and a response time of less than two milliseconds.
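The HMM-based recognition step can be sketched as one model per gesture, with classification by maximum log-likelihood, as below. The hmmlearn library, the synthetic accelerometer sequences, and the gesture names are assumptions for illustration, not the authors' setup.

```python
# Hypothetical HMM gesture recogniser: one GaussianHMM per gesture class,
# classification by maximum log-likelihood over synthetic 3-axis sequences.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)

def fake_sequences(offset, n_seq=20, length=40):
    """Placeholder accelerometer sequences for one gesture class."""
    return [offset + rng.normal(scale=0.2, size=(length, 3)) for _ in range(n_seq)]

train = {"rub": fake_sequences(np.array([0.5, 0.0, 0.0])),
         "tap": fake_sequences(np.array([0.0, 0.8, 0.0])),
         "roll": fake_sequences(np.array([0.0, 0.0, 1.0]))}

models = {}
for gesture, seqs in train.items():
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[gesture] = m

test = fake_sequences(np.array([0.0, 0.8, 0.0]), n_seq=1)[0]   # should look like "tap"
scores = {g: m.score(test) for g, m in models.items()}
print(max(scores, key=scores.get))
```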

Journal ArticleDOI
TL;DR: A novel framework for NFC secure element (SE)-based mutual authentication and attestation for IoT access with a user device, such as a mobile device, using the NFC-based Host Card Emulation (HCE) mode for the first time.
Abstract: Certain resourceful and powered Internet of Things (IoT) devices can be compromised and used to launch cyber attacks. Near field communication (NFC) can be used for their secure on-demand access. In this paper, we present a novel framework for NFC secure element (SE)-based mutual authentication and attestation for IoT access with a user device, such as a mobile device, using the NFC-based Host Card Emulation (HCE) mode for the first time. HCE is robust compared to the other NFC modes. A cloud-based Trusted Certified Authority (TCA) manages all cryptographic credentials and stores them in the tamper-resistant SE and Trusted Platform Module (TPM)-based attestation modules on the devices. It uses a newly proposed NFC SE-based mutual authentication and attestation (NSE-AA) protocol for proof-of-locality, end-to-end anonymous mutual authentication between the SEs, and an associated remote attestation for trust. The protocol is robust and lightweight compared to existing schemes. We provide its informal and formal security analysis using the Real-Or-Random (ROR) model. A simulation on the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool proves its safety. We also briefly present the details of a prototype with a commercial mid-range priced mobile device and a Single Board Computer (SBC)-based IoT device.

Journal ArticleDOI
TL;DR: A novel ECG signal analysis method using the discrete cosine Stockwell transform for feature extraction and artificial bee colony-optimized least-squares twin support vector machines as the classifier is developed and prototyped on a commercially available 32-bit microcontroller test platform.
Abstract: With the advancement in personalized healthcare technology, the usage of wearable devices for continuous monitoring and analysis of long-term biomedical signals, such as electrocardiography (ECG), has shown explosive growth. However, existing ECG monitoring devices exhibit limited performance: they only store the ECG data, have low accuracy, and are unable to perform event-by-event diagnosis at the point of data acquisition. Therefore, personalized healthcare demands an efficient method and point-of-care platform capable of providing real-time feedback to consumers as well as subjects. In this paper, a novel ECG signal analysis method using the discrete cosine Stockwell transform for feature extraction and artificial bee colony-optimized least-squares twin support vector machines as the classifier is developed and prototyped on a commercially available 32-bit microcontroller test platform. The prototype is evaluated under two schemes, i.e., the class-based and personalized schemes, and validated on the benchmark MIT-BIH arrhythmia data. The prototype achieves higher overall accuracies of 96.14% and 86.5%, respectively, in these two evaluation schemes than existing works. The platform can be utilized as an early warning system for detecting abnormal ECG in a home care environment, complementing state-of-the-art diagnosis.

Journal ArticleDOI
TL;DR: Results show that the system can acquire and classify ECG signals and the classification algorithm is verified with data from the MIT/BIH arrhythmia database.
Abstract: Electrocardiography (ECG) is a fundamental method not only commonly used in hospitals for clinical requirements but also widely adopted in home and personal healthcare systems to obtain the electrical activity of the heart. An arrhythmia monitoring system is proposed and used in a clinical trial. The proposed system has three parts. The first is a high-resolution, low-power analog front-end circuit for implementing bio-signal sensing circuits. This part is developed with a chopper-based pre-amplifier and a high-pass sigma–delta modulator. The features of the circuits are low complexity, high resolution, and low power consumption. The second part is a digital signal processor with a decimation filter and a universal asynchronous receiver/transmitter package generator. The last part realizes a software interface on a smartphone for ECG signal recording, display, and classification. A wavelet-based classification method is also proposed to classify the rhythm. The chip used in the system is fabricated through the 0.18 $\boldsymbol {\mu }\text{m}$ standard complementary metal–oxide–semiconductor process, and the operation voltage is 1.2 V. The classification algorithm is verified with data from the MIT/BIH arrhythmia database. The accuracy of beat detection and arrhythmia classification is 99.4% and 95.83%, respectively. Eight patients were enrolled in a human study to verify the performance of the proposed arrhythmia monitoring system. Results show that the system can acquire and classify ECG signals.

Journal ArticleDOI
TL;DR: An ML control scheme is presented that enables dynamic evaluation of a match-line by effectively activating or deactivating ML sections to improve energy efficiency, and it improves the energy-delay relative to the compared designs.
Abstract: Hardware search engines are widely used in network routers for high-speed lookup and parallel data processing. Content addressable memory (CAM) is such an engine that performs high-speed search at the expense of large energy dissipation. Match-line (ML) power dissipation is one of the critical concerns in designing low-power CAM architectures. NOR-MLs make this issue more severe due to the higher number of short-circuit discharge paths during search. In this paper, an ML control scheme is presented that enables dynamic evaluation of a match-line by effectively activating or deactivating ML sections to improve the energy efficiency. $128\times32$-bit memory arrays have been designed using 45-nm CMOS technology and verified under different process-voltage-temperature and frequency variations to test the performance improvements. A search frequency of 100 MHz under a 1-V supply at 27 °C applied to the proposed CAM results in 48.25%, 52.55%, and 54.80% reductions in energy per search compared to a conventional CAM, an early predict and terminate ML precharge CAM (EPTP-CAM), and an ML selective charging scheme CAM, respectively. ML partitioning also minimizes precharge activity between subsequent searches to reduce total precharge power in the proposed scheme. An approximately 2.5-fold reduction in precharge dissipation is observed relative to the conventional and EPTP schemes. Besides low search power, the proposed design improves the energy-delay by 42% to 88% over the compared designs.

Journal ArticleDOI
TL;DR: A pseudonym-based secure authentication protocol (PSAP) for NFC applications is proposed, which is effective over the pseudonym lifetime and includes both a time-synchronization-based method and a nonce-based method; analysis proves that PSAP can provide traceability and more security features at a slightly higher cost.
Abstract: Nowadays, near field communication (NFC) has been widely used in electronic payment, ticketing, and many other areas. The NFC security standard requires the use of public key infrastructure (PKI) to implement mutual authentication and session key negotiation in order to ensure communication security. In traditional PKI-based schemes, every user uses a fixed public/private key pair to implement authentication and key agreement. An attacker can create a profile based on a user's public key to track the user and compromise the user's privacy. Recently, He et al. and Odelu et al. successively proposed pseudonym-based authentication and key agreement protocols for NFC after Eun et al.'s protocol (2013), which was the first claimed to provide conditional privacy for NFC. They respectively claimed that their schemes satisfy the security requirements. In this paper, we first prove that their protocols still have security flaws, including confusion between the user's identity and the random identity. Then, we propose a pseudonym-based secure authentication protocol (PSAP) for NFC applications, which is effective over the pseudonym lifetime and includes both a time-synchronization-based method and a nonce-based method. In our scheme, the trusted service manager issues pseudonyms but does not need to maintain verification tables, and it can reveal the real identity of internal attackers. Furthermore, security and performance analysis proves that PSAP can provide traceability and more security features at a slightly higher cost.

Journal ArticleDOI
TL;DR: This paper proposes a framework for the scalable development of ADASs from the consumer level to different automotive safety levels; it provides unified access to algorithm building blocks and a multi-sensor real-time environment, and allows easy integration of algorithms, thus enabling shorter development times.
Abstract: Modern technologies lead to more sophisticated hardware, while software is becoming more complex. These trends are widely present in consumer electronics and do not bypass automotive electronics either. There is evident recent growth in in-vehicle infotainment, telematics, advanced driver assistance system (ADAS) and cluster development. The number of electronic control units (ECUs) in a vehicle constantly grows. Since a typical vehicle ECU provides one function per vehicle, it becomes harder for manufacturers to manage these ECUs due to the diverse nature of the system; hence, there is a rising demand for ECU consolidation. With the availability of sophisticated hardware, powerful system-on-chips (SoCs) can be used for multiple functions inside a vehicle. The transition toward fewer ECUs is an ongoing process, in which software needs to be aligned first and then transferred to the same SoC. This paper presents a software platform for heterogeneous immersive in-vehicle environments, providing a step in software consolidation by allowing the same abstractions for diverse applications executing on various hardware platforms. It proposes a framework for the scalable development of ADASs from the consumer level to different automotive safety levels, provides unified access to algorithm building blocks and a multi-sensor real-time environment, and allows easy integration of algorithms, thus enabling shorter development times.

Journal ArticleDOI
TL;DR: A new cloud computing model for context-aware Internet of Things services that supports intelligent service-context management using a supervised and reinforcement learning-based machine learning framework is presented.
Abstract: This paper presents a new cloud computing model for context-aware Internet of Things services. The proposed computing model is hierarchically composed of two layers: a cloud control layer (CCL) and a user control layer (UCL). The CCL manages cloud resource allocation, service scheduling, service profile, and service adaptation policy from a system performance point of view. Meanwhile, the UCL manages end-to-end service connection and service context from a user performance point of view. The proposed model can support nonuniform service binding and its real-time adaptation using meta-objects. Furthermore, it supports intelligent service-context management using a supervised and reinforcement learning-based machine learning framework. We implemented a lightweight prototype of the proposed computing model. Evaluations confirm that the proposed computing model offers enhanced performance compared with legacy uniform computing models.

Journal ArticleDOI
TL;DR: A novel obfuscated JPEG compression/decompression (CODEC) IP core design methodology, suitable for use in reusable IP core designs, which incorporates structural obfuscation to hide the architecture or structure from an adversary in order to maximize the complexity of reverse engineering.
Abstract: A novel approach for an obfuscated JPEG compression/decompression (CODEC) IP core design methodology, suitable for use in reusable IP core designs, is presented. This incorporates structural obfuscation for architecture or structure hiding from an adversary in order to maximize the complexity of reverse engineering. Additionally, the proposed methodology performs the entire compression and decompression through a single dedicated hardware IP core. To obtain a lightweight (low-cost) version of the proposed obfuscated JPEG CODEC IP, particle swarm optimization-driven design space exploration is employed. Results of the proposed low-cost, obfuscated JPEG CODEC IP core design, when compared to the un-protected (un-obfuscated) design, yielded an enhancement in strength of obfuscation of 76%, as well as a reduction of 5% compared to the un-optimized design.

Journal ArticleDOI
TL;DR: This paper proposes a low-power ME algorithm and architecture for HEVC for consumer applications, which utilizes sub-sampling, data reuse, pixel truncation, and adaptive search range techniques to reduce computational power.
Abstract: High-quality videos such as high-definition (HD) and ultra HD have become an essential requirement in recent applications such as security surveillance and television systems. However, due to the increase in resolution of the videos, the volume of visual information data increases significantly, which has become a challenge for storing, transmitting, and processing HD video data. The new video compression standard, high efficiency video coding (HEVC), achieved a two-fold improvement in coding efficiency compared to H.264/AVC using efficient compression techniques. Motion estimation (ME) is one of the most computationally intensive blocks in a video CODEC. In HEVC, the complexity of ME further increases due to a large processing unit and flexible partitioning of the prediction unit (PU). In this paper, we propose a low-power ME algorithm and architecture for HEVC for consumer applications. The proposed algorithm and architecture utilize sub-sampling, data reuse, pixel truncation, and adaptive search range techniques to reduce the computational power. Simulation results show that the proposed ME algorithm requires an average of 53.82% fewer search points than the reference software HM, with a small degradation in PSNR and a small increase in bit rate. The proposed architecture is simulated and synthesized using standard 90 nm technology. The proposed ME architecture can process $3840\times2160$ @ 30 fps video sequences with only 45193 mm2 of area and 8192 KB of SRAM. The operating frequency of the proposed architecture is 250 MHz with 1517619 mW of power.

Journal ArticleDOI
Kwanwoo Park, Soohwan Yu, Seonhee Park, Sangkeun Lee, Joonki Paik
TL;DR: Experimental results show that the proposed method can generate an optimal set of LDR images using a neural network and provide improved HDR images, and it can be implemented as a preprocessing step in most existing HDR frameworks.
Abstract: This paper presents a neural network-based method to generate multiple images with different exposures from a single input low dynamic range (LDR) image for improved high dynamic range (HDR) imaging. The proposed algorithm consists of three steps: 1) 2-D histogram estimation; 2) neural network-based LDR image estimation; and 3) generation of an optimal set of differently exposed images. The proposed method first generates image features by estimating a patch-based 2-D histogram. The extracted features are used in the input layer of the neural network, which serves to select an optimal set of LDR images. A set of LDR images is generated using a curvature-based contrast enhancement method. Experimental results show that the proposed method can generate an optimal set of LDR images using the neural network and provide improved HDR images. In addition, the proposed method can be implemented as a preprocessing step in most existing HDR frameworks. The proposed HDR approach is a single-input method that gives almost the same performance as multiple-image-based HDR methods.

Journal ArticleDOI
TL;DR: A parallel strategy for in-loop filtering in the HEVC encoder on a graphics processing unit (GPU) that achieves about 47% (up to 67%) time saving on average for test sequences, improves the degree of parallelism, and eases the computational burden on the CPU.
Abstract: In-loop filtering is an important part of high efficiency video coding (HEVC), which consists of a deblocking filter and a sample adaptive offset (SAO) filter. It can not only improve the compression efficiency of HEVC, but also improve the visual quality of the reconstructed videos significantly. However, the high computational complexity hampers its applications for real-time encoding scenarios. In this paper, we propose a parallel strategy for in-loop filtering in the HEVC encoder on a graphics processing unit (GPU). In the proposed strategy, the pipeline structure for HEVC encoding by parallel processing of the deblocking filter and SAO on the GPU is described first. Then, the joint optimization of the deblocking filter and SAO on the GPU is detailed by parallel processing of the deblocking filter and parallel processing of SAO separately. The joint optimization can improve the degree of parallelism and ease the computational burden on the CPU. Experimental results demonstrate that the proposed method can achieve about 47% (up to 67%) time saving on average for test sequences.

Journal ArticleDOI
TL;DR: Experimental results illustrate that the proposed DBF and SAO architecture decreases the processing cycles required for processing each largest coding unit compared with the state-of-the-art literature, at the cost of an increased gate count including memory.
Abstract: This paper aims to design an efficient mixed serial five-stage pipelined hardware architecture of the deblocking filter (DBF) and sample adaptive offset (SAO) filter for the high efficiency video coding decoder. The proposed hardware is designed to increase the throughput and reduce the number of clock cycles by processing the pixels in a stream of ${4 \times 36}$ samples, in which edge filters are applied vertically in a parallel fashion for processing of luma/chroma samples. Subsequently, these filtered pixels are transposed and reprocessed through the vertical filter for horizontal filtering in a pipelined fashion. Finally, the filtered block is transposed back to the original orientation and forwarded to a three-stage pipelined SAO filter. The proposed architecture is implemented on field-programmable gate array and application-specific integrated circuit platforms using a 90-nm library. Experimental results illustrate that the proposed DBF and SAO architecture decreases the processing cycles (172) required for processing each ${64 \times 64}$ largest coding unit compared with the state-of-the-art literature, with an increase in gate count (593.32K) including memory. The results show that the throughput of the proposed filter can successfully decode ultrahigh definition video sequences at 200 frames/s at 341 MHz.

Journal ArticleDOI
TL;DR: The subjective quality of the decoded HDR images was improved by reducing color artifacts, and a new rate control method was implemented in an HEVC real-time encoding system.
Abstract: This paper describes a 4K/60p high efficiency video coding (HEVC, the latest coding standard) real-time encoding system with high-quality high dynamic range (HDR) color representations. HDR technology provides attractive video content with a wider dynamic range of luminance, which can make shadows and brighter details appear clearly without any blown-out highlights or blocked-up shadows. We propose a new rate control method for HDR video coding. The method consists of two algorithms for improving perceptual quality by reducing degradation from HDR-specific color artifacts. The first one, adaptive block size and prediction mode decision, is conducted to suppress perceptual degradation due to prediction error in high-chroma areas. The second, local quantization parameter control, is carried out to improve the visual quality with appropriate bit allocation in low-chroma areas. The method was implemented in an HEVC real-time encoding system, and its performance was assessed by measuring color difference metrics, which improved by 3.86% at the most degraded frame, while the peak signal-to-noise ratio remained almost the same. The subjective quality of the decoded HDR images was improved by reducing color artifacts.

Journal ArticleDOI
TL;DR: A technique for delay reduction that predicts the next channel the user will select is presented; it is applicable to DVB-C/S/T as well as to IPTV.
Abstract: The channel zapping delay problem is a well-known problem in digital television. It has been researched quite extensively in the context of IPTV. In this paper, we present a technique for delay reduction that predicts the next channel that will be selected by the user. The technique is applicable to DVB-C/S/T as well as to IPTV. The prediction is based on an adaptive model, which is built during a training process in which the history of channel changes is collected. The model is then constantly updated. The problem is approached as a classification problem, and C4.5 decision trees have shown the best performance among the tested algorithms. The proposed technique has been evaluated using channel history datasets obtained by simulation. The results of the evaluation show an increase of 5.67% in the probability that the next selected channel is already pre-joined, compared to the case when only neighboring channels are pre-joined. The application of the proposed technique is explained in detail.
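The prediction step can be sketched with an ordinary decision tree trained on recent channel-change history, as below. scikit-learn's CART tree is used as a stand-in for C4.5 (which scikit-learn does not implement), and the feature encoding and synthetic history are assumptions, not the paper's dataset.

```python
# Toy next-channel predictor: a decision tree (CART, standing in for C4.5)
# trained on (previous two channels, hour of day) -> next channel.
from sklearn.tree import DecisionTreeClassifier

# Each synthetic record: (channel_t-2, channel_t-1, hour_of_day, channel_t)
history = [
    (1, 2, 20, 3), (2, 3, 20, 4), (3, 4, 20, 5),
    (1, 2, 21, 3), (2, 3, 21, 4), (5, 4, 21, 3),
    (1, 2, 8, 7), (2, 7, 8, 9), (7, 9, 8, 1),
]
X = [row[:3] for row in history]
y = [row[3] for row in history]

model = DecisionTreeClassifier(max_depth=4)
model.fit(X, y)

# Viewer just moved from channel 1 to 2 at 20:00 -> pre-join the predicted channel.
predicted = model.predict([(1, 2, 20)])[0]
print("pre-join channel:", predicted)
```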

Journal ArticleDOI
TL;DR: Two approximate HEVC fractional interpolation filters are proposed that can be used in consumer electronics products requiring a low-energy HEVC encoder, with negligible PSNR loss and bit rate increase.
Abstract: The high efficiency video coding (HEVC) fractional interpolation algorithm has very high computational complexity. In this paper, two approximate HEVC fractional interpolation filters are proposed. They significantly reduce the computational complexity of HEVC fractional interpolation with negligible PSNR loss and bit rate increase. In addition, two approximate HEVC fractional interpolation hardware designs are proposed. In the worst case, they can process 45 quad full HD (${3840\times 2160}$) frames per second. They consume up to 67.1% less energy than the original HEVC fractional interpolation hardware. Therefore, they can be used in consumer electronics products that require a low-energy HEVC encoder.