
Showing papers by "Shun-Feng Su published in 2016"


Journal ArticleDOI
01 May 2016
TL;DR: These methods can be applied to real-time energy-saving transformation of any displayed media, including video playback on OLEDs, saving up to 20% of display power consumption at a predicted SSIM of 0.9, which preserves very good image quality after the transformation.
Abstract: This paper investigates how to precisely transform color images displayed on organic light-emitting diodes (OLEDs) in real time for the purpose of energy saving, while meeting personal viewing-quality requirements based on the structural similarity (SSIM) assessment. These methods can be applied to real-time energy-saving transformation of any displayed media, including video playback on OLEDs, and save up to 20% of display power consumption at a predicted SSIM of 0.9, which preserves very good image quality after the transformation.
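The trade-off the abstract describes can be sketched as a search for the strongest uniform dimming whose SSIM against the original stays above the viewer's quality target. This is an illustrative sketch, not the paper's actual transformation: it uses a simplified single-window SSIM, and the assumption that OLED power scales roughly linearly with pixel luminance is ours.

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    # Simplified single-window SSIM (the paper uses the standard SSIM index,
    # which averages local windows; one global window keeps the sketch short).
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

def dim_to_ssim_target(img, target=0.9):
    # OLED power scales roughly with emitted luminance (our assumption), so
    # uniform dimming by factor s saves about (1 - s) of the display power.
    # Search for the strongest dimming whose SSIM stays above the target.
    best_s = 1.0
    for s in np.arange(0.99, 0.0, -0.01):
        if global_ssim(img, img * s) >= target:
            best_s = s
        else:
            break
    return best_s, img * best_s
```

A real implementation would apply a content-dependent (non-uniform) transformation per the paper, rather than a single global scale factor.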

23 citations


Book
01 Apr 2016
TL;DR: The distance-based approach performs better than forsaking the nearest reader with normalized weights; the new approach is based on distance, whereas the traditional one is based on power.
Abstract: The LANDMARC system is a radio frequency identification (RFID) based location system that has attracted great attention recently. In our implementation of the LANDMARC system, we observed that unusually large errors occur in the traditional LANDMARC. Thus, in this study, we propose ways of resolving these problems. Two major ideas are proposed. The first is to forsake the nearest reader and use normalized weights, to reduce the effects of the nonlinear relationship between tag distance and received signal strength difference. The second is to use a completely different calculation algorithm: the new approach is based on distance, whereas the traditional one is based on power. Both new methods clearly improve accuracy, and the distance-based approach performs better than forsaking the nearest reader with normalized weights.
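A minimal sketch of the LANDMARC-style weighted k-nearest-neighbor estimate, with an optional distance-based variant in the spirit of the paper's second idea. The log-distance path-loss constants `p0` and `n` are illustrative assumptions, not values from the study.

```python
import numpy as np

def landmarc_locate(rss_track, rss_ref, ref_pos, k=4, use_distance=False,
                    p0=-40.0, n=2.5):
    # rss_track: (n_readers,) RSS of the tracking tag.
    # rss_ref:   (n_tags, n_readers) RSS of the reference tags.
    # ref_pos:   (n_tags, 2) known reference-tag positions.
    if use_distance:
        # Distance-based variant: convert RSS to range via an (assumed)
        # log-distance path-loss model before comparing.
        track = 10 ** ((p0 - rss_track) / (10 * n))
        ref = 10 ** ((p0 - rss_ref) / (10 * n))
    else:
        track, ref = rss_track, rss_ref          # power-based (traditional)
    e = np.linalg.norm(ref - track, axis=1)      # signal-space distance
    idx = np.argsort(e)[:k]                      # k nearest reference tags
    w = 1.0 / (e[idx] ** 2 + 1e-9)
    w /= w.sum()                                 # normalized weights
    return w @ ref_pos[idx]                      # weighted centroid
```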

15 citations


Journal ArticleDOI
TL;DR: Granular/symbolic data processing hinges on a general computation theory that effectively uses granules such as classes, clusters, subsets, groups, and intervals to build an efficient computational model for complex applications realized in the presence of huge amounts of data, information, and knowledge.
Abstract: Granular/symbolic data processing is an emerging conceptual and computing paradigm of information processing. In the era of big data, the emergence of granular/symbolic processing has been motivated by the urgent need for intelligent transformation of empirical data, now commonly available in vast quantities, into human-manageable knowledge. In such an aggregation process, we hope to retain as much information as possible while making the findings easily understood and well supported by the existing experimental evidence. These aggregated entities are often referred to as symbolic or granular data. Research areas referred to as symbolic data analysis in statistics and multivariate data analysis address some of the fundamental or applied facets of granular computing. The theoretical fundamentals of granular/symbolic data processing are well established. They involve set theory (interval mathematics), fuzzy sets, rough sets, and random sets, linked together in a highly comprehensive treatment of this emerging paradigm. In addition to the interval-based formalism of information granules, we also encounter histograms, distributions, lists of values, etc. Hence, granular/symbolic data processing hinges on a general computation theory that effectively uses granules such as classes, clusters, subsets, groups, and intervals to build an efficient computational model for complex applications realized in the presence of huge amounts of data, information, and knowledge. This research represents a substantial shift from the current machine-centric approach to a human-centric approach to information and knowledge.
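As a concrete illustration of the interval and histogram formalisms the abstract mentions, a granule can summarize a group of raw observations in a human-manageable form. This is a generic sketch, not tied to any particular system described in the text.

```python
import numpy as np

def interval_granule(values):
    # Represent a group of raw observations by the interval [min, max],
    # the simplest set-theoretic (interval mathematics) granule.
    v = np.asarray(values, float)
    return (v.min(), v.max())

def histogram_granule(values, bins=5):
    # Represent a group by a normalized histogram: relative frequencies
    # plus bin edges, retaining more distributional detail than an interval.
    counts, edges = np.histogram(np.asarray(values, float), bins=bins)
    return counts / counts.sum(), edges
```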

7 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: In this study, an enhanced edge detection method is proposed for Laplacian of Gaussian (LoG) based SIFT, and it is evident that the proposed method can find more feature matches in the video image.
Abstract: This paper reports our study on moving object detection from surveillance images. For motion detection, some existing methods are used to find specific features between images and then to determine the moving speeds of objects. However, human-created features may be difficult to define and to acquire, especially when the objects are unknown. In this paper, the scale-invariant feature transform (SIFT) method is adopted to define features for motion detection. SIFT captures image properties that are invariant to scale and rotation. Even if the foreground target is partially obscured and the image is taken at a different angle and distance, SIFT can still achieve good matching performance. However, when applied to detecting moving objects, SIFT does not work well because it finds incorrect features in the match. In this study, an enhanced edge detection method is proposed for Laplacian of Gaussian (LoG) based SIFT. From the simulation results, it is evident that our proposed method can find more feature matches in the video image.
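The LoG edge response at the core of the enhanced method can be sketched with plain numpy. The kernel size, sigma, and relative threshold below are illustrative choices, and the full pipeline of the paper (feeding these edges into SIFT matching) is not reproduced here.

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    # Discrete Laplacian-of-Gaussian kernel; the mean is subtracted so the
    # kernel sums to zero and flat image regions give exactly zero response.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def convolve2d(img, kern):
    # Naive same-size 2-D convolution with edge padding (fine for a sketch).
    kh, kw = kern.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kern).sum()
    return out

def log_edges(img, rel_thresh=0.3):
    # Mark pixels whose |LoG response| exceeds a fraction of the peak response.
    resp = convolve2d(np.asarray(img, float), log_kernel())
    peak = np.abs(resp).max()
    if peak == 0:
        return np.zeros(img.shape, dtype=bool)
    return np.abs(resp) > rel_thresh * peak
```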

5 citations


Journal ArticleDOI
TL;DR: The high correlation between the CFS measurements with the proposed RBFN and the AFM measurements revealed the potential of implementing the radial basis learning kernel in optical metrology to achieve intelligent lithography.
Abstract: This paper applied a radial basis function network (RBFN) in coherent Fourier scatterometry (CFS) to reconstruct the linewidth of periodic line/space (L/S) patterns. The fast, nondestructive, and repeatable measurement capability of CFS enables its integration with intelligent lithography systems. Two steps were performed to reconstruct the linewidth of the L/S patterns. The first step was to use the finite difference time domain numerical electromagnetic tool to rigorously establish the library of modeled diffraction signatures for the L/S patterns. Each modeled signature was converted to an intensity vector used as training data to construct the RBFN. The trained RBFN has a simple architecture consisting of three layers: input, hidden, and output. The second step was to collect the experimental signatures and feed them into the trained RBFN model to predict the linewidth of the L/S patterns. This paper used a transverse electric polarized incident beam at a wavelength of 632 nm in the experimental setup of the CFS. Five L/S patterns were used to test the constructed RBFN. The experimental results indicated that the maximal difference between the CFS and atomic force microscopy (AFM) measurements was 13 nm, for sample D with an L/S of 200 nm. The minimum difference was 2 nm, for sample A with an L/S of 140 nm. The correlation coefficient between the CFS and AFM metrology measurements across the five samples was 0.972. The high correlation between the CFS measurements with the proposed RBFN and the AFM measurements revealed the potential of implementing the radial basis learning kernel in optical metrology to achieve intelligent lithography.
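The three-layer architecture described (input, Gaussian hidden units, linear output) can be sketched in a few lines. Placing one hidden unit at each training signature and solving the output weights by least squares is a common RBFN construction; the paper's actual training procedure and width selection are not specified here, so `sigma` is an assumption.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    # Gaussian hidden-layer activations for every (sample, center) pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class RBFN:
    # Input layer -> Gaussian hidden layer -> linear output layer, matching
    # the three-layer structure in the abstract; center placement and the
    # least-squares output solve are our illustrative choices.
    def __init__(self, sigma=0.4):
        self.sigma = sigma

    def fit(self, X, y):
        self.centers = X.copy()                  # one unit per training signature
        Phi = rbf_design(X, self.centers, self.sigma)
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return rbf_design(X, self.centers, self.sigma) @ self.w
```

In the paper's setting, `X` would hold the intensity vectors of the modeled diffraction signatures and `y` the corresponding linewidths; here a 1-D toy function stands in.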

3 citations


Journal ArticleDOI
TL;DR: This paper successfully integrates an image processing module with local collision-free path planning and applies them to the collision-free path planning of a mobile robot; the measurement method uses only a webcam and four laser projectors.
Abstract: Monocular image-based local collision-free path planning for autonomous robots is presented in this paper. According to a pre-set pair of parallel lines, transformation equations from the image domain to the real-world domain are easily defined. Moreover, the distances to obstacles in the robot's visual domain can be estimated. Our proposed method can not only easily identify obstacles and wall edges, but also estimate the distances and plan a collision-free path. In addition, this paper successfully integrates an image processing module with local collision-free path planning and applies them to the collision-free path planning of a mobile robot. For the proposed local collision-free path planning, the webcam can be placed in two different configurations: mounted on the ceiling, or mounted on the mobile robot. The measurement method uses only a webcam and four laser projectors, so no expensive equipment is needed to accomplish the desired results. The experimental results show that our proposed method can be effectively applied to local collision-free path planning.
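The image-to-world mapping that the pre-set parallel lines make possible can be sketched as a planar (ground-plane) homography estimated from four known correspondences via the classic direct linear transform. This is a generic calibration sketch, not the paper's exact transformation equations.

```python
import numpy as np

def homography(src, dst):
    # Direct linear transform: each point pair (x,y) -> (u,v) contributes
    # two homogeneous linear equations in the 9 entries of H.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)          # null-space vector, up to scale

def pixel_to_world(H, px):
    # Map an image pixel to ground-plane coordinates and dehomogenize.
    p = H @ np.array([px[0], px[1], 1.0])
    return p[:2] / p[2]
```

With the mapping in hand, the distance to an obstacle is just the Euclidean distance between the robot's floor position and the mapped pixel at the obstacle's base.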

2 citations


Proceedings ArticleDOI
24 Jul 2016
TL;DR: An improved radial basis function network is proposed to reduce the influence of heteroscedastic noise in the training data, employing the Box-Cox transformation and LTS-SVR to address this problem.
Abstract: This paper presents an improved radial basis function network to reduce the influence of heteroscedastic noise in the training data. A general-purpose learning algorithm can be regarded as a statistical nonlinear regression model that assumes a constant noise level. However, heteroscedastic noise often exists in real data. The transformation-based least trimmed squares-support vector regression radial basis function network (LTS-SVR RBFN) employs the Box-Cox transformation and LTS-SVR to address this problem. From the experimental results, it is evident that our proposed method can handle more realistic data.
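The Box-Cox step can be sketched directly from its definition. The profile-likelihood grid search below is the textbook way of picking the transformation parameter and is an illustrative stand-in for however the paper selects it; the LTS-SVR part is not reproduced.

```python
import numpy as np

def boxcox(y, lam):
    # Box-Cox power transform; y must be strictly positive.
    # lam = 0 is the log limit of (y**lam - 1) / lam.
    y = np.asarray(y, float)
    return np.log(y) if abs(lam) < 1e-12 else (y ** lam - 1.0) / lam

def best_lambda(y, lams=None):
    # Grid search over the standard Box-Cox profile log-likelihood:
    # -n/2 * log(var(z)) + (lam - 1) * sum(log y).
    y = np.asarray(y, float)
    if lams is None:
        lams = np.linspace(-2.0, 2.0, 81)
    n, logy_sum = len(y), np.log(y).sum()
    def loglik(lam):
        z = boxcox(y, lam)
        return -n / 2.0 * np.log(z.var()) + (lam - 1.0) * logy_sum
    return max(lams, key=loglik)
```

For multiplicative (log-normal) noise — a common heteroscedastic case — the selected lambda is near 0, i.e. the log transform, after which the noise level is approximately constant and a standard learner can be applied.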

1 citation


Proceedings ArticleDOI
07 Jul 2016
TL;DR: The design utilizes image processing methods to record human breath fluctuation, calculate the breath rate, and predict the peak inhaling time of the next breath cycle so that the X-ray image can be captured automatically.
Abstract: The quality of an X-ray image plays an important role in diagnostic results. The traditional method for taking an X-ray image depends on the operator's judgment and requires a highly skilled radiologist. An X-ray image of the chest region is of the best quality when the ribcage is full of air, which corresponds to the peak of the inhaling cycle. In this paper, we use the depth camera of a Microsoft Kinect to record the patient's breathing and find the time of the deepest inhalation to improve X-ray image quality. Our design utilizes image processing methods to record human breath fluctuation, calculate the breath rate, and predict the peak inhaling time of the next breath cycle so that the X-ray image can be captured automatically.
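The prediction step can be sketched on a 1-D breathing signal standing in for the Kinect depth measurements: detect inhale peaks, estimate the breath rate from the mean peak-to-peak interval, and extrapolate the next peak time. The local-maximum detector and mean-interval extrapolation are illustrative simplifications of the paper's method.

```python
import numpy as np

def find_peaks(signal):
    # Indices of simple local maxima in the breathing waveform.
    s = np.asarray(signal, float)
    return [i for i in range(1, len(s) - 1) if s[i - 1] < s[i] >= s[i + 1]]

def predict_next_peak(signal, fs):
    # Estimate the breath rate (breaths per minute) from the mean
    # peak-to-peak interval and extrapolate the next inhale peak time (s).
    peaks = find_peaks(signal)
    if len(peaks) < 2:
        raise ValueError("need at least two breath cycles")
    intervals = np.diff(peaks)
    period = intervals.mean() / fs
    rate_bpm = 60.0 / period
    next_peak_t = (peaks[-1] + intervals.mean()) / fs
    return rate_bpm, next_peak_t
```

Triggering the X-ray slightly before `next_peak_t` (to absorb system latency) would then capture the frame at the deepest inhalation.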

1 citation


Book ChapterDOI
01 Jan 2016
TL;DR: This chapter deals with the selection of feature genes and the classification of microarray data under a support vector machine (SVM) approach; multi-class support vector classification and cross-validation methods are applied to achieve high prediction accuracy and low computing time.
Abstract: The microarray data analysis approach has become a widely used tool for disease detection. It uses tens of thousands of genes as the input dimension, which poses a huge computational problem for data analysis. In this chapter, the proposed approach deals with the selection of feature genes and the classification of microarray data under a support vector machine (SVM) approach. Feature genes are found using the adjustable epsilon-support vector regression (epsilon-SVR), and high-ranked genes are then selected from all microarray data. Moreover, multi-class support vector classification (multi-class SVC) and cross-validation methods are applied to achieve high prediction accuracy and low computing time.
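A scikit-learn sketch of the pipeline the chapter describes, assuming scikit-learn is available: a linear epsilon-SVR ranks genes by weight magnitude, then a multi-class SVC is cross-validated on the selected subset. The synthetic dataset, the epsilon value, and the number of kept genes are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR, SVC
from sklearn.model_selection import cross_val_score

def rank_genes(X, y, epsilon=0.5):
    # Fit a linear epsilon-SVR with the class labels as regression target
    # and rank genes (columns of X) by the magnitude of their weights.
    svr = SVR(kernel="linear", epsilon=epsilon).fit(X, y)
    return np.argsort(-np.abs(svr.coef_.ravel()))

def select_and_score(X, y, n_genes=10, cv=3):
    # Keep the top-ranked genes, then cross-validate a multi-class SVC
    # (libsvm handles multi-class internally via one-vs-one).
    top = rank_genes(X, y)[:n_genes]
    clf = SVC(kernel="linear")
    return top, cross_val_score(clf, X[:, top], y, cv=cv).mean()
```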
Abstract: Microarray data analysis approach has became a widely used tool for disease detection. It uses tens of thousands of genes as input dimension that would be a huge computational problem for data analysis. In this chapter, the proposed approach deals with selection of feature genes and classification of microarray data under support vector machine (SVM) approach. Feature genes can be finding out according to the adjustable epsilon-support vector regression (epsilon-SVR) and then to select high ranked genes after all microarray data. Moreover, multi-class support vector classification (multi-class SVC) and cross-validation methods apply to acquire great prediction classification accuracy and less computing time.