
Showing papers in "Frontiers of Computer Science in 2019"


Journal ArticleDOI
TL;DR: No significant difference in student performance between online and face-to-face (F2F) learners was found overall, with respect to gender, or with respect to class rank, demonstrating that environmental science concepts can be conveyed to non-STEM majors equally well on traditional and online platforms irrespective of gender or class rank.
Abstract: A growing number of students are now opting for online classes. They find the traditional classroom modality restrictive, inflexible, and impractical. In this age of technological advancement, schools can now provide effective classroom teaching via the Web. This shift in pedagogical medium is forcing academic institutions to rethink how they want to deliver their course content. The overarching purpose of this research was to determine which teaching method proved more effective over an eight-year period. The scores of 548 students, 401 traditional students and 147 online students, in an environmental science class were used to determine which instructional modality generated better student performance. In addition to the overarching objective, we also examined score variabilities between genders and classifications to determine if teaching modality had a greater impact on specific groups. No significant difference in student performance between online and face-to-face (F2F) learners was found overall, with respect to gender, or with respect to class rank. These data demonstrate the ability to similarly translate environmental science concepts for non-STEM majors in both traditional and online platforms irrespective of gender or class rank. A potential exists for increasing the number of non-STEM majors engaged in citizen science using the flexibility of online learning to teach environmental science core concepts.

172 citations


Journal ArticleDOI
TL;DR: This article reviews research progress on safe semi-supervised learning, focusing on three types of safeness issues: data quality, where the training data is risky or of low quality; model uncertainty, where the learning algorithm fails to handle the uncertainty during training; and measure diversity, where the safe performance could be adapted to diverse measures.
Abstract: Semi-supervised learning constructs the predictive model by learning from a few labeled training examples and a large pool of unlabeled ones. It has a wide range of application scenarios and has attracted much attention in the past decades. However, it is noteworthy that although the learning performance is expected to be improved by exploiting unlabeled data, some empirical studies show that there are situations where the use of unlabeled data may degenerate the performance. Thus, it is advisable to be able to exploit unlabeled data safely. This article reviews some research progress of safe semi-supervised learning, focusing on three types of safeness issue: data quality, where the training data is risky or of low-quality; model uncertainty, where the learning algorithm fails to handle the uncertainty during training; measure diversity, where the safe performance could be adapted to diverse measures.
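The fall-back intuition behind much of this line of work can be sketched in a few lines. This is a toy illustration with a made-up 1D threshold learner and invented data, not any specific algorithm from the survey: train with and without the unlabeled data, and keep the semi-supervised model only when it is no worse on held-out labeled data.

```python
# Toy "safe" semi-supervised learning sketch: fall back to the supervised-only
# model whenever self-training on unlabeled data does not help on a held-out
# labeled validation set. The learner and all data here are illustrative.

def fit_threshold(xs, ys):
    """1D classifier: predict True iff x >= t; pick t minimizing training error."""
    candidates = sorted(xs) + [max(xs) + 1.0]
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum((x >= t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(t, xs, ys):
    return sum((x >= t) == y for x, y in zip(xs, ys)) / len(xs)

def safe_self_training(lx, ly, ux, vx, vy):
    t_sup = fit_threshold(lx, ly)                  # supervised-only baseline
    pseudo = [x >= t_sup for x in ux]              # pseudo-label unlabeled data
    t_ssl = fit_threshold(lx + ux, ly + pseudo)    # retrain on labeled + pseudo
    # Safeness check: keep the semi-supervised model only if it is no worse
    # on the labeled validation set than the supervised baseline.
    if accuracy(t_ssl, vx, vy) >= accuracy(t_sup, vx, vy):
        return t_ssl
    return t_sup
```

The validation comparison is the "safety" ingredient: in the degenerate case where pseudo-labels mislead the learner, the supervised baseline is returned and performance cannot drop below it on the held-out data.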

80 citations


Journal ArticleDOI
Kang Li, Fazhi He, Haiping Yu, Xiao Chen
TL;DR: A novel tracking algorithm that integrates two complementary trackers is presented; it is very robust and effective in comparison to state-of-the-art trackers, and it calculates projected coordinates using maximum posterior probability, which yields a more accurate reconstruction error than the traditional subspace learning tracker.
Abstract: This paper presents a novel tracking algorithm which integrates two complementary trackers. Firstly, an improved Bayesian tracker (B-tracker) with an adaptive learning rate is presented. The classification score of B-tracker reflects tracking reliability, and a low score usually results from a large appearance change. Therefore, if the score is low, we decrease the learning rate to update the classifier fast so that B-tracker can adapt to the variation, and vice versa. In this way, B-tracker is more suitable than its traditional version for handling appearance change. Secondly, we present an improved incremental subspace learning tracker (S-tracker). We propose to calculate projected coordinates using maximum posterior probability, which results in a more accurate reconstruction error than the traditional subspace learning tracker. Instead of updating at every frame, we present a stop-strategy to deal with occlusion. Finally, we present an integrated framework (BAST), in which the pair of trackers run in parallel and return two candidate target states separately. For each candidate state, we define a tracking reliability metric to measure whether the candidate state is reliable, and the reliable candidate state is chosen as the target state at the end of each frame. Experimental results on challenging sequences show that the proposed approach is very robust and effective in comparison to state-of-the-art trackers.

61 citations


Journal ArticleDOI
TL;DR: In this article, the authors formulated an Ising model to control a large number of automated guided vehicles in a factory without collision, and evaluated the efficiency of their formulation with an actual factory in Japan.
Abstract: Recent advances in quantum devices have realized an artificial quantum spin system known as the D-Wave 2000Q, which implements the Ising model with a tunable transverse field. In this system, we perform a specific protocol of quantum annealing to attain the ground state, the minimizer of the energy; the device is therefore often called a quantum annealer. However, the resulting spin configurations are not always in the ground state. Rather, the device can quickly generate many spin configurations following the Gibbs-Boltzmann distribution. In the present study, we formulate an Ising model to control a large number of automated guided vehicles in a factory without collision. We deal with an actual factory in Japan, in which such vehicles run, and assess the efficiency of our formulation. Compared to conventional, powerful techniques on digital computers, the quantum annealer does not yet show an outstanding advantage on this practical problem. Our study nevertheless demonstrates the potential of the quantum annealer to contribute to solving industrial problems.
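The flavor of such a formulation can be sketched as a toy energy function (the routes, costs, and penalty weight below are invented, not the paper's factory model): one binary choice per vehicle-route pair, with a large penalty whenever two vehicles would occupy the same cell at the same time step, so the minimum-energy state is a cheap, collision-free assignment. A brute-force minimizer stands in for the annealer here.

```python
# Toy collision-avoidance energy in the spirit of an Ising/QUBO formulation.
# An annealer samples low-energy states; we brute-force the ground state.
from itertools import product

ROUTES = {  # vehicle -> candidate routes, each a tuple of cells per time step
    "A": [(1, 2, 3), (1, 4, 3)],
    "B": [(2, 2, 5), (6, 7, 5)],
}
COST = {"A": [3, 4], "B": [3, 5]}  # shorter/faster routes are cheaper
PENALTY = 100                       # must dominate any cost difference

def energy(choice):
    """choice maps each vehicle to the index of its chosen route."""
    e = sum(COST[v][r] for v, r in choice.items())
    vehicles = list(choice)
    for i, v in enumerate(vehicles):
        for w in vehicles[i + 1:]:
            for cell_v, cell_w in zip(ROUTES[v][choice[v]], ROUTES[w][choice[w]]):
                if cell_v == cell_w:       # same cell at the same time step
                    e += PENALTY           # collision penalty term
    return e

def ground_state():
    assignments = [dict(zip(ROUTES, rs))
                   for rs in product(*(range(len(ROUTES[v])) for v in ROUTES))]
    return min(assignments, key=energy)
```

Here the cheapest pair of routes collides at time step 1, so the ground state trades a slightly more expensive route for collision freedom.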

52 citations


Journal ArticleDOI
TL;DR: This work proposes a joint neural network model to predict the stance and sentiment of a post simultaneously, and uses a neural stacking model to leverage sentimental information for the stance detection task.
Abstract: Stance detection aims to automatically determine whether the author is in favor of or against a given target. In principle, the sentiment information of a post highly influences the stance. In this study, we aim to leverage the sentiment information of a post to improve the performance of stance detection. However, conventional discrete models with sentimental features can cause error propagation. We thus propose a joint neural network model to predict the stance and sentiment of a post simultaneously, because the neural network model can learn both representation and interaction between the stance and sentiment collectively. Specifically, we first learn a deep shared representation between stance and sentiment information, and then use a neural stacking model to leverage sentimental information for the stance detection task. Empirical studies demonstrate the effectiveness of our proposed joint neural model.

38 citations


Journal ArticleDOI
TL;DR: In this paper, a soft video parsing method with segmental regular grammars is proposed to construct a tree structure for the video, in which each leaf stands for a video clip of background or sub-action.
Abstract: In this paper, we tackle the problem of segmenting out a sequence of actions from videos. The videos contain background and actions which are usually composed of ordered sub-actions. We refer to the sub-actions and the background as semantic units. Considering the possible overlap between two adjacent semantic units, we propose a bidirectional sliding window method to generate the label distributions for various segments in the video. The label distribution covers a certain number of semantic unit labels, representing the degree to which each label describes the video segment. The mapping from a video segment to its label distribution is then learned by a Label Distribution Learning (LDL) algorithm. Based on the LDL model, a soft video parsing method with segmental regular grammars is proposed to construct a tree structure for the video. Each leaf of the tree stands for a video clip of background or sub-action. The proposed method shows promising results on the THUMOS'14, MSR-II and UCF101 datasets, and its computational complexity is much lower than that of the compared state-of-the-art video parsing method.
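The overlap idea behind the label distributions can be illustrated with a toy timeline (the annotation boundaries below are made up, and this is not the paper's exact bidirectional sliding-window procedure): a segment's distribution gives each semantic-unit label a mass proportional to how much of the segment that unit covers, so segments near a boundary naturally carry mixed labels.

```python
# Toy label-distribution generation: mass per label is proportional to the
# overlap of the segment with each annotated semantic unit.

# Annotated semantic units: (label, start_frame, end_frame), end exclusive.
UNITS = [("background", 0, 40), ("sub_action_1", 40, 90), ("sub_action_2", 90, 120)]

def label_distribution(seg_start, seg_end):
    masses = {}
    for label, s, e in UNITS:
        overlap = max(0, min(seg_end, e) - max(seg_start, s))  # frames covered
        if overlap:
            masses[label] = masses.get(label, 0) + overlap
    total = sum(masses.values())
    return {label: m / total for label, m in masses.items()}
```

A segment straddling the boundary at frame 40 gets an even split between background and the first sub-action, which is exactly the soft supervision a Label Distribution Learning model consumes.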

37 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel algorithm called CLS-Miner, which utilizes the utility-list structure to directly compute the utilities of itemsets without producing candidates, and introduces three novel strategies to reduce the search space, namely chain-estimated utility co-occurrence pruning, lower branch pruning, and pruning by coverage.
Abstract: High-utility itemset mining (HUIM) is a popular data mining task with applications in numerous domains. However, traditional HUIM algorithms often produce a very large set of high-utility itemsets (HUIs). As a result, analyzing HUIs can be very time consuming for users. Moreover, a large set of HUIs also makes HUIM algorithms less efficient in terms of execution time and memory consumption. To address this problem, closed high-utility itemsets (CHUIs), concise and lossless representations of all HUIs, were proposed recently. Although mining CHUIs is useful and desirable, it remains a computationally expensive task. This is because current algorithms often generate a huge number of candidate itemsets and are unable to prune the search space effectively. In this paper, we address these issues by proposing a novel algorithm called CLS-Miner. The proposed algorithm utilizes the utility-list structure to directly compute the utilities of itemsets without producing candidates. It also introduces three novel strategies to reduce the search space, namely chain-estimated utility co-occurrence pruning, lower branch pruning, and pruning by coverage. Moreover, an effective method for checking whether an itemset is a subset of another itemset is introduced to further reduce the time required for discovering CHUIs. To evaluate the performance of the proposed algorithm and its novel strategies, extensive experiments have been conducted on six benchmark datasets having various characteristics. Results show that the proposed strategies are highly efficient and effective, that the proposed CLS-Miner algorithm outperforms the current state-of-the-art CHUD and CHUI-Miner algorithms, and that CLS-Miner scales linearly.
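To make the mining task concrete, here is what "utility of an itemset" means in miniature, on an invented three-transaction database. This brute-force enumeration is precisely what CLS-Miner is designed to avoid via utility lists and its pruning strategies; it only illustrates the quantity being computed.

```python
# Toy high-utility itemset mining: an itemset's utility is the summed utility
# of its items over the transactions that contain the whole itemset.
from itertools import combinations

# transaction -> {item: utility of that item in the transaction
#                 (e.g., quantity * unit profit)}
DB = [
    {"a": 5, "b": 2, "c": 1},
    {"a": 10, "c": 6},
    {"b": 4, "c": 3},
]

def utility(itemset):
    return sum(sum(t[i] for i in itemset)
               for t in DB if set(itemset) <= set(t))

def high_utility_itemsets(min_util):
    """Brute force: enumerate every itemset and keep those above min_util."""
    items = sorted({i for t in DB for i in t})
    return {frozenset(s): utility(s)
            for n in range(1, len(items) + 1)
            for s in combinations(items, n)
            if utility(s) >= min_util}
```

Even on three items this enumerates 7 candidate itemsets; on realistic alphabets the count is exponential, which is why candidate-free utility-list methods and tight pruning bounds matter.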

36 citations


Journal ArticleDOI
TL;DR: This work presents how research in the field of Human-Computer Interaction can provide a user-centered design approach to co-create innovative ideas around the future of food and eating in space, balancing functional and experiential factors.
Abstract: Given the increasing possibilities of short- and long-term space travel to the Moon and Mars, it is essential not only to design nutritious foods but also to make eating an enjoyable experience. To date, though, most research on space food design has emphasized the functional and nutritional aspects of food, and there are no systematic studies that focus on the human experience of eating in space. It is known, however, that food has a multi-dimensional and multisensorial role in societies and that sensory, hedonic, and social features of eating and food design should not be underestimated. Here, we present how research in the field of Human-Computer Interaction (HCI) can provide a user-centered design approach to co-create innovative ideas around the future of food and eating in space, balancing functional and experiential factors. Based on our research and inspired by advances in human-food interaction design, we have developed three design concepts that integrate and tackle the functional, sensorial, emotional, social, and environmental/atmospheric aspects of “eating experiences in space”. We can particularly capitalize on recent technological advances around digital fabrication, 3D food printing technology, and virtual and augmented reality to enable the design and integration of multisensory eating experiences. We also highlight that in future space travel, the target users will diversify. In relation to such future users, we need to consider not only astronauts (current users, paid to do the job) but also paying customers (non-astronauts) who will be able to book a space holiday to the Moon or Mars. To create the right conditions for space travel and satisfy those users, we need to innovate beyond the initial excitement of designing an “eating like an astronaut” experience.
To do so, we can draw upon prior HCI research in human-food interaction design and build on insights from food science and multisensory research, particularly research showing that the environments in which we eat and drink, and their multisensory components, can be crucial for an enjoyable food experience.

35 citations


Journal ArticleDOI
TL;DR: This study employs penalties for items with high similarity being placed next to each other in the list, transforms the item listing problem into a quadratic assignment problem (QAP), and proposes a problem decomposition method based on the structure of the item listing problem.
Abstract: For e-commerce websites, deciding the manner in which items are listed on webpages is an important issue because it can dramatically affect item sales. One of the simplest strategies of listing items to improve the overall sales is to do so in a descending order of sales or sales numbers. However, in lists generated using this strategy, items with high similarity are often placed consecutively. In other words, the generated item list might be biased toward a specific preference. Therefore, this study employs penalties for items with high similarity being placed next to each other in the list and transforms the item listing problem to a quadratic assignment problem (QAP). The QAP is well-known as an NP-hard problem that cannot be solved in polynomial time. To solve the QAP, we employ quantum annealing (QA), which exploits the quantum tunneling effect to efficiently solve an optimization problem. In addition, we propose a problem decomposition method based on the structure of the item listing problem because the quantum annealer we use (i.e., D-Wave 2000Q) has a limited number of quantum bits. Our experimental results indicate that we can create an item list that considers both sales and diversity. In addition, we observe that using the problem decomposition method based on a problem structure can lead to a better solution with the quantum annealer in comparison with the existing problem decomposition method.
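The objective being optimized can be sketched with invented scores (the item names, sales scores, similarity values, and penalty weight below are illustrative, and a brute-force search over permutations stands in for the quantum annealer): reward placing high-sales items in prominent positions, but penalize near-duplicate items sitting in adjacent slots.

```python
# Toy item-listing objective: position-weighted sales score minus a penalty
# for similar items placed next to each other. The paper casts this as a QAP
# solved on a D-Wave annealer; here we brute-force all permutations.
from itertools import permutations

SALES = {"shirt_red": 9, "shirt_blue": 8, "mug": 7}
SIM = {frozenset({"shirt_red", "shirt_blue"}): 0.9}  # near-duplicate pair
POS_WEIGHT = [3, 2, 1]    # earlier list positions attract more attention
PENALTY = 10.0

def listing_score(order):
    score = sum(w * SALES[i] for w, i in zip(POS_WEIGHT, order))
    for a, b in zip(order, order[1:]):            # adjacent pairs only
        score -= PENALTY * SIM.get(frozenset({a, b}), 0.0)
    return score

def best_listing():
    return max(permutations(SALES), key=listing_score)
```

Sorting purely by sales would put the two similar shirts first and second; the penalty makes it worthwhile to slot the mug between them, which is the diversity effect the paper reports.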

33 citations


Journal ArticleDOI
TL;DR: This paper proposes a supervised learning approach for jointly addressing the salient object detection and existence prediction problems; adopting the structural SVM framework, it formulates the two problems jointly in a single integrated objective function.
Abstract: Recent advances in supervised salient object detection modeling have resulted in significant performance improvements on benchmark datasets. However, most of the existing salient object detection models assume that at least one salient object exists in the input image. Such an assumption often leads to less appealing saliency maps on background images with no salient object at all. Therefore, handling those cases can reduce the false positive rate of a model. In this paper, we propose a supervised learning approach for jointly addressing the salient object detection and existence prediction problems. Given a set of background-only images and images with salient objects, as well as their salient object annotations, we adopt the structural SVM framework and formulate the two problems jointly in a single integrated objective function: saliency labels of superpixels are involved in a classification term conditioned on the salient object existence variable, which in turn depends on both global image and regional saliency features and saliency label assignments. The loss function also considers both image-level and region-level misclassifications. Extensive evaluation on benchmark datasets validates the effectiveness of our proposed joint approach compared to the baseline and state-of-the-art models.

30 citations


Journal ArticleDOI
TL;DR: This article proposes a novel framework for the recognition of six universal facial expressions based on three sets of features extracted from a face image: entropy, brightness, and local binary pattern; it also validates the performance of the saliency detection algorithm against the human visual system.
Abstract: This article proposes a novel framework for the recognition of six universal facial expressions. The framework is based on three sets of features extracted from a face image: entropy, brightness, and local binary pattern. First, saliency maps are obtained using the state-of-the-art saliency detection algorithm “frequency-tuned salient region detection”. The idea is to use saliency maps to determine appropriate weights or values for the extracted features (i.e., brightness and entropy). We have performed a visual experiment to validate the performance of the saliency detection algorithm against the human visual system. Eye movements of 15 subjects were recorded using an eye-tracker in free-viewing conditions while they watched a collection of 54 videos selected from the Cohn-Kanade facial expression database. The results of the visual experiment demonstrated that the obtained saliency maps are consistent with the data on human fixations. Finally, the performance of the proposed framework is demonstrated via satisfactory classification results achieved with the Cohn-Kanade database, FG-NET FEED database, and Dartmouth database of children’s faces.

Journal ArticleDOI
TL;DR: A dynamic patch-attentive deep network is proposed, called D-PAttNet, for AU detection that controls for 3D head and face rotation, learns mappings of patches to AUs, and models spatiotemporal dynamics.
Abstract: Facial action units (AUs) relate to specific local facial regions. Recent efforts in automated AU detection have focused on learning facial patch representations to detect specific AUs. These efforts have encountered three hurdles. First, they implicitly assume that facial patches are robust to head rotation; yet non-frontal rotation is common. Second, mappings between AUs and patches are defined a priori, which ignores co-occurrences among AUs. Third, the dynamics of AUs are either ignored or modeled sequentially rather than simultaneously as in human perception. Inspired by recent advances in human perception, we propose a dynamic patch-attentive deep network, called D-PAttNet, for AU detection that (i) controls for 3D head and face rotation, (ii) learns mappings of patches to AUs, and (iii) models spatiotemporal dynamics. The D-PAttNet approach significantly improves upon the existing state of the art.

Journal ArticleDOI
TL;DR: This short note reviews the properties of high-reliability organizations and draws implications for the development of AI technology and the safe application of that technology in high-risk applications.
Abstract: Every AI system is deployed by a human organization. In high risk applications, the combined human plus AI system must function as a high-reliability organization in order to avoid catastrophic errors. This short note reviews the properties of high-reliability organizations and draws implications for the development of AI technology and the safe application of that technology.

Journal ArticleDOI
TL;DR: It is shown how domain-specific information can be leveraged by expressing it as long-range interactions in a graph partitioning problem known as the lifted multicut problem, demonstrating significant improvement in segmentation accuracy for four challenging boundary-based segmentation problems from neuroscience and developmental biology.
Abstract: The throughput of electron microscopes has increased significantly in recent years, enabling detailed analysis of cell morphology and ultrastructure in fairly large tissue volumes. Analysis of neural circuits at single-synapse resolution remains the flagship target of this technique, but applications to cell and developmental biology are also starting to emerge at scale. On the light microscopy side, continuous development of light-sheet microscopes has led to a rapid increase in imaged volume dimensions, making Terabyte-scale acquisitions routine in the field. The amount of data acquired in such studies makes manual instance segmentation, a fundamental step in many analysis pipelines, impossible. While automatic segmentation approaches have improved significantly thanks to the adoption of convolutional neural networks, their accuracy still lags behind human annotations and requires additional manual proof-reading. A major hindrance to further improvements is the limited field of view of the segmentation networks preventing them from learning to exploit the expected cell morphology or other prior biological knowledge which humans use to inform their segmentation decisions. In this contribution, we show how such domain-specific information can be leveraged by expressing it as long-range interactions in a graph partitioning problem known as the lifted multicut problem. Using this formulation, we demonstrate significant improvement in segmentation accuracy for four challenging boundary-based segmentation problems from neuroscience and developmental biology.
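A greatly simplified sketch can show how a long-range "lifted" constraint changes a partition (the toy graph, weights, and greedy merging below are invented; real lifted multicut instances are solved with specialized ILP or heuristic solvers): merge segments along attractive local edges, but veto any merge that would fuse two nodes joined by a long-range repulsive edge encoding prior knowledge, e.g. "these two seeds belong to different cells".

```python
# Toy "lifted" partitioning: greedy agglomeration over local attractive edges,
# with long-range repulsive (lifted) edges acting as merge vetoes.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def segment(n_nodes, local_edges, lifted_repulsive):
    """local_edges: (weight, u, v), higher weight = stronger attraction."""
    uf = UnionFind(n_nodes)
    for _, u, v in sorted(local_edges, reverse=True):
        # Merging u and v would fuse their two components; refuse the merge
        # if that would join any long-range repulsive pair.
        roots_after = {uf.find(u), uf.find(v)}
        if any({uf.find(a), uf.find(b)} <= roots_after
               for a, b in lifted_repulsive):
            continue                      # lifted constraint: keep them apart
        uf.union(u, v)
    return [uf.find(i) for i in range(n_nodes)]
```

Without the lifted edge, the chain 0-1-2-3 would collapse into one segment; the repulsive pair (0, 3) forces the cut to fall on the weakest local edge between them, which is the kind of biological prior the paper injects.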

Journal ArticleDOI
TL;DR: This paper presents a framework for signature-based key feature construction, and proposes a frequency-based feature elimination algorithm to select the key features and construct the fingerprints of ten malware families, including twenty key features in three categories.
Abstract: The domination of the Android operating system in the market share of smart terminals has engendered increasing threats of malicious applications (apps). Research on Android malware detection has received considerable attention in academia and the industry. In particular, studies on malware families have been beneficial to malware detection and behavior analysis. However, identifying the characteristics of malware families and the features that can describe a particular family have been less frequently discussed in existing work. In this paper, we are motivated to explore the key features that can classify and describe the behaviors of Android malware families to enable fingerprinting the malware families with these features. We present a framework for signature-based key feature construction. In addition, we propose a frequency-based feature elimination algorithm to select the key features. Finally, we construct the fingerprints of ten malware families, including twenty key features in three categories. Results of extensive experiments using Support Vector Machine demonstrate that the malware family classification achieves an accuracy of 92% to 99%. The typical behaviors of malware families are analyzed based on the selected key features. The results demonstrate the feasibility and effectiveness of the presented algorithm and fingerprinting method.
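The frequency-based selection idea can be illustrated on invented data (the apps, permission-like features, and thresholds below are made up, not the paper's dataset or exact algorithm): keep features that are frequent within one family but rare outside it, so the survivors can serve as that family's fingerprint.

```python
# Toy frequency-based key-feature selection for a malware family fingerprint.
# Each "app" is just its set of extracted features.

FAMILY = [{"SEND_SMS", "READ_CONTACTS"},
          {"SEND_SMS", "READ_CONTACTS", "INTERNET"},
          {"SEND_SMS"}]
OTHERS = [{"INTERNET"},
          {"INTERNET", "ACCESS_FINE_LOCATION"},
          {"READ_CONTACTS"}]

def key_features(family, others, min_in=0.8, max_out=0.4):
    """Keep features frequent inside the family and rare everywhere else."""
    feats = set().union(*family)
    def freq(f, apps):
        return sum(f in a for a in apps) / len(apps)
    return {f for f in feats
            if freq(f, family) >= min_in and freq(f, others) <= max_out}
```

Here only SEND_SMS appears in (nearly) every family sample while being absent outside the family, so it alone survives as the fingerprint feature; features common across all apps, like INTERNET, are eliminated.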

Journal ArticleDOI
TL;DR: It is shown that it is possible to directly generate synthetic 3D cell images using GANs, but limitations are excessive training times, dependence on high-quality segmentations of 3D images, and that the number of z-slices cannot be freely adjusted without retraining the network.
Abstract: Generative adversarial networks (GANs) have recently been successfully used to create realistic synthetic microscopy cell images in 2D and predict intermediate cell stages. In the current paper we highlight that GANs can not only be used for creating synthetic cell images optimized for different fluorescent molecular labels, but that by using GANs for augmentation of training data involving scaling or other transformations the inherent length scale of biological structures is retained. In addition, GANs make it possible to create synthetic cells with specific shape features, which can be used, for example, to validate different methods for feature extraction. Here, we apply GANs to create 2D distributions of fluorescent markers for F-actin in the cell cortex of Dictyostelium cells (ABD), a membrane receptor (cAR1), and a cortex-membrane linker protein (TalA). The recent more widespread use of 3D lightsheet microscopy, where obtaining sufficient training data is considerably more difficult than in 2D, creates significant demand for novel approaches to data augmentation. We show that it is possible to directly generate synthetic 3D cell images using GANs, but limitations are excessive training times, dependence on high-quality segmentations of 3D images, and that the number of z-slices cannot be freely adjusted without retraining the network. We demonstrate that in the case of molecular labels that are highly correlated with cell shape, like F-actin in our example, 2D GANs can be used efficiently to create pseudo-3D synthetic cell data from individually generated 2D slices. Because high quality segmented 2D cell data are more readily available, this is an attractive alternative to using less efficient 3D networks.

Journal ArticleDOI
TL;DR: This paper proposes a novel architecture, called the Dual-Channel Parallel Broadcast model (DCPB), which addresses the inefficient transaction processing speed of BC through three methods: dual communication channels, parallel pipeline processing, and a block broadcast strategy.
Abstract: Blockchain (BC), an emerging distributed database technology with advanced security and reliability, has attracted much attention from experts devoted to e-finance, intellectual property protection, the Internet of Things (IoT), and so forth. However, the inefficient transaction processing speed that hinders BC's widespread adoption has not yet been well tackled. In this paper, we propose a novel architecture, called the Dual-Channel Parallel Broadcast model (DCPB), which addresses this problem to a great extent through three methods: dual communication channels, parallel pipeline processing, and a block broadcast strategy. In the dual-channel model, one channel processes transactions, and the other engages in the execution of BFT. The parallel pipeline processing allows the system to operate asynchronously. The block broadcast strategy improves the efficiency and speed of processing. Extensive experiments on BeihangChain, a simplified prototype BC system, illustrate that its transaction processing speed can be improved to 16K transactions per second, which could well support many real-world scenarios such as BC-based energy trading systems and the Micro-film copyright trading system in CCTV.

Journal ArticleDOI
TL;DR: An augmented reality app for helping interpret the nutritional information about carbohydrates in real packaged foods shaped like boxes or cans is presented, and it is shown that users had a statistically significant increase in knowledge about carb choices when using the app.
Abstract: Healthy eating habits involve controlling your diet. It is important to know how to interpret the nutritional information of the packaged foods that you consume. These packaged foods are usually processed and contain carbohydrates and fats. Monitoring carbohydrates intake is particularly important for weight-loss diets and for some pathologies such as diabetes. In this paper, we present an augmented reality app for helping interpret the nutritional information about carbohydrates in real packaged foods with the shape of boxes or cans. The app tracks the full object and guides the user in finding the surface or area of the real package where the information about carbohydrates is located using augmented reality and helps the user to interpret this information. The portions of carbohydrates (also called carb choices or carb servings) that correspond to the visualized food are shown. We carried out a study to check the effectiveness of our app regarding learning outcomes, usability, and perceived satisfaction. A total of 40 people participated in the study (20 men and 20 women). The participants were between 14 and 55 years old. The results reported that their initial knowledge about carb choices was very low. This indicates that education about nutritional information in packaged foods is needed. An analysis of the pre-knowledge and post-knowledge questionnaires showed that the users had a statistically significant increase in knowledge about carb choices using our app. Gender and age did not influence the knowledge acquired. The participants were highly satisfied with our app. In conclusion, our app and similar apps could be used to effectively learn how to interpret the nutritional information on the labels of real packaged foods and thus help users acquire healthy life habits.
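The arithmetic the app surfaces is small but worth stating: a "carb choice" (carb serving) conventionally corresponds to about 15 g of carbohydrate in diabetes education, and label values are usually given per 100 g. The function below is an illustrative sketch of that conversion, not code from the paper's app; the serving figures in the test are invented.

```python
# Carb-choice conversion: grams of carbohydrate in the portion divided by the
# conventional 15 g per choice, rounded to the nearest half choice.

GRAMS_PER_CHOICE = 15.0

def carb_choices(carbs_g_per_100g, serving_g):
    grams = carbs_g_per_100g * serving_g / 100.0   # carbs in this portion
    return round(grams / GRAMS_PER_CHOICE * 2) / 2  # nearest 0.5 choice
```

For example, a 50 g portion of a product with 60 g of carbohydrate per 100 g contains 30 g of carbohydrate, i.e. 2 carb choices, which is the kind of figure the app overlays next to the package label.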

Journal ArticleDOI
TL;DR: The experimental results show that the proposed Inforence method outperforms state-of-the-art techniques.
Abstract: In this paper, a novel approach, Inforence, is proposed to isolate the suspicious code that likely contains faults. Inforence employs a feature selection method, based on mutual information, to identify those bug-related statements that may cause the program to fail. Because the majority of program faults may be revealed as an undesired joint effect of the program statements on each other and on the program termination state, unlike the state-of-the-art methods, Inforence tries to identify and select groups of interdependent statements which altogether may affect the program failure. The interdependence amongst the statements is measured according to their mutual effect on each other and on the program termination state. To provide the context of failure, the selected bug-related statements are chained to each other, considering the program's static structure. Eventually, the resultant cause-effect chains are ranked according to their combined causal effect on program failure. To validate Inforence, we present the results of our experiments with seven sets of programs: the Siemens suite, gzip, grep, sed, space, make, and bash. The experimental results are then compared with those provided by different fault localization techniques for both single-fault and multi-fault programs. The comparison shows that the proposed method outperforms the state-of-the-art techniques.
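A hedged sketch of the ingredient Inforence builds on (the coverage matrix and outcomes below are toy data): the mutual information between a statement's coverage indicator and the pass/fail outcome measures how much executing that statement tells us about failure. Inforence's contribution is extending this from single statements to groups of interdependent statements; that extension is not shown here.

```python
# Toy mutual-information fault localization: rank statements by MI between
# "was this statement executed in the run" and "did the run fail".
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    n = len(xs)
    pxy = Counter(zip(xs, ys))            # joint counts
    px, py = Counter(xs), Counter(ys)     # marginal counts
    return sum(c / n * log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

# One column per test run: covered[s][i] = 1 if statement s executed in run i.
coverage = {"s1": [1, 1, 1, 0], "s2": [1, 1, 0, 0], "s3": [0, 1, 0, 1]}
failed = [1, 1, 0, 0]                     # outcome of each run

suspicious = max(coverage, key=lambda s: mutual_information(coverage[s], failed))
```

Statement s2 is executed exactly in the failing runs, so its coverage carries one full bit of information about failure and it is ranked most suspicious; s3's coverage is independent of the outcome and scores zero.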

Journal ArticleDOI
TL;DR: The proposed edge-assisted privacy-preserving outsourced computing framework for image processing, including image retrieval and classification schemes, greatly reduces the computational, communication, and storage burden of IoT terminal devices while ensuring image data security.
Abstract: Internet of Things (IoT) has drawn much attention in recent years. However, the image data captured by IoT terminal devices are closely related to users' personal information, which is sensitive and should be protected. Though traditional privacy-preserving outsourced computing solutions such as homomorphic cryptographic primitives can support privacy-preserving computing, they consume a significant amount of computation and storage resources, which becomes a heavy burden on IoT terminal devices with limited resources. In order to reduce the resource consumption of terminal devices, we propose an edge-assisted privacy-preserving outsourced computing framework for image processing, including image retrieval and classification. The edge nodes cooperate with the terminal device to protect data and support privacy-preserving computing on the semi-trusted cloud server. Under this framework, edge-assisted privacy-preserving image retrieval and classification schemes are proposed in this paper. The security analysis and performance evaluation show that the proposed schemes greatly reduce the computational, communication, and storage burden of IoT terminal devices while ensuring image data security.

Journal ArticleDOI
TL;DR: A fast registration algorithm for rock mass point clouds is proposed based on the improved iterative closest point (ICP) algorithm; it achieves similar accuracy and better registration efficiency than the ICP algorithm and other algorithms.
Abstract: Point cloud registration is an essential step in the process of 3D reconstruction. In this paper, a fast registration algorithm for rock mass point clouds is proposed based on the improved iterative closest point (ICP) algorithm. In our proposed algorithm, the point cloud data of a single-station scanner are first transformed into digital images via spherical polar coordinates; image features are then extracted with the scale-invariant feature transform (SIFT), and edge points are removed. By analyzing the correspondence between the digital images and the 3D points, 3D feature points are extracted, from which two-way correspondences are searched as candidates. After false matches are eliminated by an exhaustive search based on random sampling, the transformation is computed via the Levenberg-Marquardt iterative closest point (LM-ICP) algorithm. Experiments on real rock mass data show that the proposed algorithm achieves similar accuracy and better registration efficiency compared with the ICP algorithm and other algorithms.
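The first step above, mapping a scan to a digital image through spherical polar coordinates, can be sketched as a simple range-image projection; the grid resolution and the keep-last-point rule below are illustrative choices, not the paper's:

```python
import math

def spherical_projection(points, width=360, height=180):
    """Project 3D points onto a 2D grid indexed by azimuth/elevation.

    Each cell stores the range (distance) of the point that fell into it,
    producing a range image that a feature detector such as SIFT could
    run on. Resolution and cell policy here are illustrative.
    """
    image = [[0.0] * width for _ in range(height)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0:
            continue
        azimuth = math.atan2(y, x)            # [-pi, pi]
        elevation = math.asin(z / r)          # [-pi/2, pi/2]
        col = min(int((azimuth + math.pi) / (2 * math.pi) * width), width - 1)
        row = min(int((elevation + math.pi / 2) / math.pi * height), height - 1)
        image[row][col] = r                   # keep last range value per cell
    return image

img = spherical_projection([(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)])
```

The inverse of this mapping is what lets 2D feature locations be lifted back to 3D points for correspondence search.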

Journal ArticleDOI
TL;DR: This work presents a comprehensive review of JND estimation technology, and describes the visual mechanism and its corresponding computational modules, which include luminance adaptation, contrast masking, pattern masking, and the contrast sensitivity function.
Abstract: The concept of just noticeable difference (JND), which accounts for the visibility threshold (visual redundancy) of the human visual system, is useful in perception-oriented signal processing systems. In this work, we present a comprehensive review of JND estimation technology. First, the visual mechanism and its corresponding computational modules are illustrated. These include luminance adaptation, contrast masking, pattern masking, and the contrast sensitivity function. Next, the existing pixel domain and subband domain JND models are presented and analyzed. Finally, the challenges associated with JND estimation are discussed.
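As a concrete example of the luminance-adaptation module, classic pixel-domain JND models (in the style of Chou and Li) use a visibility threshold that is high in dark regions, minimal at mid-grey, and grows slowly toward bright regions. The sketch below follows the commonly quoted piecewise form; the constants vary slightly across papers, so treat them as indicative:

```python
import math

def luminance_adaptation_threshold(bg):
    """Approximate background-luminance JND threshold (Chou-Li style).

    `bg` is the local background luminance in [0, 255]. Distortions
    smaller than the returned value are assumed invisible at this
    luminance. Constants are the commonly quoted ones, not canonical.
    """
    if bg <= 127:
        return 17.0 * (1.0 - math.sqrt(bg / 127.0)) + 3.0
    return 3.0 / 128.0 * (bg - 127.0) + 3.0

# Dark regions tolerate larger distortion than mid-grey ones.
t_dark = luminance_adaptation_threshold(0)
t_mid = luminance_adaptation_threshold(127)
t_bright = luminance_adaptation_threshold(255)
```

A full pixel-domain JND model would combine this threshold with the contrast-masking term, typically via a max or a nonlinear sum.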

Journal ArticleDOI
TL;DR: A secure method to exchange resources (SMER) between heterogeneous IoT devices is proposed, which adopts a compensable mechanism for resource exchange and a series of security mechanisms to ensure the security of resource exchanges.
Abstract: The number of IoT (Internet of Things) connected devices is increasing rapidly. These devices run different operating systems and therefore cannot communicate with each other directly; as a result, the data they collect remain confined within their own platforms. Besides, IoT devices have very constrained resources, such as weak MCUs (micro control units) and limited storage. Therefore, they need a direct communication method to cooperate with each other, possibly with the help of nearby devices with rich resources. In this paper, we propose a secure method to exchange resources (SMER) between heterogeneous IoT devices. In order to exchange resources among devices, SMER adopts a compensable mechanism for resource exchange and a series of security mechanisms to ensure the security of resource exchanges. Besides, SMER uses a smart contract based scheme to supervise resource exchange, which guarantees the safety and benefits of IoT devices. We also introduce a prototype system and provide a comprehensive discussion.

Journal ArticleDOI
TL;DR: This paper explores a recently proposed heuristic algorithm named the fireworks algorithm (FWA), which is a swarm intelligence algorithm, adopted for the combinatorial CVRP problem with several modifications of the original FWA.
Abstract: The capacitated vehicle routing problem (CVRP), which aims at minimizing travel costs, is a well-known NP-hard combinatorial optimization problem. Owing to its hardness, many heuristic search algorithms have been proposed to tackle it. This paper explores a recently proposed heuristic algorithm named the fireworks algorithm (FWA), which is a swarm intelligence algorithm. We adapt FWA to the combinatorial CVRP with several modifications of the original FWA: it employs a new method to generate “sparks” according to the selection rule, and it uses a new method to determine the explosion amplitude for each firework. The proposed algorithm is compared with several heuristic search methods on classical benchmark CVRP instances. The experimental results show a promising performance of the proposed method. We also discuss the strengths and weaknesses of our algorithm in contrast to traditional algorithms.
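The paper's spark-generation and amplitude rules are not reproduced here; as background on how a combinatorial FWA candidate is evaluated, the sketch below encodes a solution as a giant tour of customers, splits it greedily into capacity-feasible routes, and computes its travel cost (coordinates, demands, and capacity are made up):

```python
import math

depot = (0.0, 0.0)
coords = {1: (0.0, 3.0), 2: (4.0, 0.0), 3: (0.0, 4.0)}   # hypothetical customers
demand = {1: 2, 2: 2, 3: 3}
capacity = 4

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def split_and_cost(giant_tour):
    """Greedily split a giant tour into capacity-feasible routes and cost it.

    A firework or spark in a combinatorial FWA can be encoded as such a
    permutation and evaluated with a decoder like this one.
    """
    routes, current, load = [], [], 0
    for c in giant_tour:
        if load + demand[c] > capacity:       # start a new vehicle route
            routes.append(current)
            current, load = [], 0
        current.append(c)
        load += demand[c]
    if current:
        routes.append(current)
    total = 0.0
    for route in routes:
        prev, cost = depot, 0.0
        for c in route:
            cost += dist(prev, coords[c])
            prev = coords[c]
        total += cost + dist(prev, depot)     # return to the depot
    return routes, total

routes, cost = split_and_cost([1, 2, 3])
```

With this decoder, explosion operators only need to perturb permutations; feasibility is restored by the split step.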

Journal ArticleDOI
TL;DR: An investigation of existing works on the analysis of Android apps finds that various program analysis approaches with techniques in other fields are applied in analyzing Android apps; however, they can be improved with more precise techniques to be more applicable.
Abstract: Android applications (apps) are in widespread use and have enriched our lives. To ensure the quality and security of these apps, many approaches have been proposed in recent years for detecting bugs and defects, of which program analysis is a major one. This paper makes an investigation of existing works on the analysis of Android apps. We summarize the purposes and proposed techniques of existing approaches and build a taxonomy of these works, based on which we point out the trends and challenges of research in this field. From our survey, we draw four main findings: (1) program analysis in the Android security field has gained particular attention in past years; the fields of functionality and performance should also gain proper attention, and the infrastructure that supports detection of various defects should be enriched to meet the industry's needs; (2) many kinds of defects result from developers' misunderstanding or misuse of the characteristics and mechanisms of the Android system, so works that can systematically collect and formalize Android recommendations are in demand; (3) various program analysis approaches that incorporate techniques from other fields are applied to Android apps, but they could be improved with more precise techniques to be more applicable; (4) the fragmentation and evolution of the Android system limit the usability of existing tools, which should be taken into consideration when developing new approaches.

Journal ArticleDOI
TL;DR: The proposed Traso framework treats synthetic minority class samples as an additional data source and exploits transfer learning to transfer knowledge from them to the minority class; it outperforms many popular class-imbalance learning methods.
Abstract: The problem of limited minority class data is encountered in many class-imbalanced applications but has received little attention. Synthetic over-sampling, a popular family of class-imbalance learning methods, can introduce considerable noise when the minority class has limited data, since the synthetic samples are not i.i.d. samples of the minority class. Most sophisticated synthetic sampling methods tackle this problem by denoising or by generating samples more consistent with the ground-truth data distribution, but their assumptions about the true noise or the ground-truth distribution may not hold. To adapt synthetic sampling to the problem of limited minority class data, the proposed Traso framework treats synthetic minority class samples as an additional data source and exploits transfer learning to transfer knowledge from them to the minority class. As an implementation, the TrasoBoost method first generates synthetic samples to balance class sizes. Then, in each boosting iteration, the weights of synthetic samples decrease and the weights of original data increase when they are misclassified, and remain unchanged otherwise. Misclassified synthetic samples are potential noise and thus have smaller influence in the following iterations. Besides, the weights of minority class instances change more than those of majority class instances, making them more influential, and only original data are used to estimate the error rate so that it is immune to noise. Finally, since the synthetic samples are highly related to the minority class, all of the weak learners are aggregated for prediction. Experimental results show that TrasoBoost outperforms many popular class-imbalance learning methods.
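The asymmetric weight update described above can be sketched as follows; the factor `beta` is a placeholder for the error-rate-dependent multiplier in the paper, which is omitted here:

```python
def update_weights(weights, is_synthetic, misclassified, beta=0.5):
    """One TrasoBoost-style weight update (schematic, not the paper's formula).

    Misclassified synthetic samples are down-weighted (they are suspected
    noise); misclassified original samples are up-weighted as in boosting;
    correctly classified samples keep their weights. `beta` is a made-up
    constant standing in for the error-rate-dependent factor.
    """
    new_weights = []
    for w, syn, miss in zip(weights, is_synthetic, misclassified):
        if not miss:
            new_weights.append(w)
        elif syn:
            new_weights.append(w * beta)       # suspected noise: shrink
        else:
            new_weights.append(w / beta)       # hard original sample: grow
    total = sum(new_weights)
    return [w / total for w in new_weights]    # renormalize to sum to 1

w = update_weights([0.25, 0.25, 0.25, 0.25],
                   is_synthetic=[True, True, False, False],
                   misclassified=[True, False, True, False])
```

After one round, the misclassified original sample dominates the distribution while the misclassified synthetic sample fades, which is the intended noise-damping behavior.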

Journal ArticleDOI
TL;DR: This work introduces a dynamic partition method that gathers the important vertices for high locality, and then uses a priority-based scheduling algorithm to assign them a higher priority for an effective processing order, thereby reducing the number of updates and the convergence time.
Abstract: Although many graph processing systems have been proposed, real-world graphs are often dynamic, and it is important to keep the results of graph computation up-to-date. Incremental computation has been demonstrated to be an efficient solution for updating calculated results. Recently, many incremental graph processing systems have been proposed to handle dynamic graphs in an asynchronous way, and they achieve better performance than those that process graphs synchronously. However, these solutions still suffer from suboptimal convergence speed due to their slow propagation of vertex states that matter to convergence, and due to poor locality. To solve these problems, we propose a novel graph processing framework. It introduces a dynamic partition method to gather the important vertices for high locality, and then uses a priority-based scheduling algorithm to assign them a higher priority for an effective processing order. By such means, it reduces the number of updates and increases locality, thereby reducing the convergence time. Experimental results show that our method reduces the number of updates by 30% and the total execution time by 35%, compared with state-of-the-art systems.
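The priority-based scheduling idea can be illustrated with a small delta-propagation loop: vertices with the largest pending change are processed first, which tends to cut the number of updates before convergence. The accumulation rule below mimics delta-based PageRank and is illustrative, not the paper's system:

```python
import heapq

def propagate(graph, delta, damping=0.85, eps=1e-6):
    """Priority-driven incremental propagation (schematic).

    `delta` maps vertices to pending change; the vertex with the largest
    pending change is popped first. Stale heap entries are skipped lazily
    (their delta has already been drained to zero).
    """
    rank = {v: 0.0 for v in graph}
    heap = [(-abs(d), v) for v, d in delta.items()]
    heapq.heapify(heap)
    updates = 0
    while heap:
        _, v = heapq.heappop(heap)
        d = delta.pop(v, 0.0)
        if abs(d) < eps:
            continue                     # stale or negligible entry
        rank[v] += d
        updates += 1
        out = graph[v]
        if out:
            share = damping * d / len(out)
            for u in out:
                delta[u] = delta.get(u, 0.0) + share
                heapq.heappush(heap, (-abs(delta[u]), u))
    return rank, updates

graph = {"a": ["b"], "b": ["c"], "c": []}
rank, updates = propagate(graph, {"a": 1.0})
```

In a real system the priority queue is combined with partitioning so that high-priority vertices are also co-located for cache and memory locality.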

Journal ArticleDOI
TL;DR: A novel Dyna variant, called Dyna-LSTD-PA, is proposed to handle problems with continuous action spaces; it outperforms two representative methods in terms of convergence rate, success rate, and stability on four benchmark RL problems.
Abstract: Dyna is an effective reinforcement learning (RL) approach that combines value function evaluation with model learning. However, existing works on Dyna mostly discuss its efficiency only in RL problems with discrete action spaces. This paper proposes a novel Dyna variant, called Dyna-LSTD-PA, which stands for Dyna based on least-squares temporal difference (LSTD) and policy approximation, aiming to handle problems with continuous action spaces. Dyna-LSTD-PA consists of two simultaneous, interacting processes. The learning process determines the probability distribution over the action space using the Gaussian distribution; estimates the underlying value function, policy, and model by linear representation; and updates their parameter vectors online by LSTD(λ). The planning process updates the parameter vector of the value function again by offline LSTD(λ). Dyna-LSTD-PA also uses the Sherman–Morrison formula to improve the efficiency of LSTD(λ), and weights the parameter vector of the value function to bring the two processes together. Theoretically, the global error bound is derived by considering approximation, estimation, and model errors. Experimentally, Dyna-LSTD-PA outperforms two representative methods in terms of convergence rate, success rate, and stability on four benchmark RL problems.
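The Sherman–Morrison trick mentioned above is what makes LSTD incremental: the matrix A accumulates a rank-1 outer product each step, so its inverse can be updated in O(n²) instead of re-inverted. A minimal sketch with plain lists (2×2 for brevity):

```python
def sherman_morrison(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, via the Sherman-Morrison formula.

    In LSTD, A accumulates terms of the form phi_t (phi_t - gamma*phi_{t+1})^T,
    so maintaining A^{-1} with this update avoids repeated matrix inversion.
    """
    n = len(A_inv)
    Au = [sum(A_inv[i][k] * u[k] for k in range(n)) for i in range(n)]   # A^{-1} u
    vA = [sum(v[k] * A_inv[k][j] for k in range(n)) for j in range(n)]   # v^T A^{-1}
    denom = 1.0 + sum(v[k] * Au[k] for k in range(n))                    # 1 + v^T A^{-1} u
    return [[A_inv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]

# Start from A = I (whose inverse is I) and add the rank-1 term u v^T,
# so A becomes [[1, 1], [0, 1]] and its inverse should be [[1, -1], [0, 1]].
I = [[1.0, 0.0], [0.0, 1.0]]
u, v = [1.0, 0.0], [0.0, 1.0]
A_plus_inv = sherman_morrison(I, u, v)
```

The update fails (denominator near zero) only when the perturbed matrix is singular, which practical LSTD implementations avoid by initializing A with a small ridge term.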

Journal ArticleDOI
TL;DR: The experimental results on the dataset of weed species images show that the proposed method is effective for weed species identification, and can preliminarily meet the requirements of multi-row crop spraying based on machine vision.
Abstract: Weed species identification is the premise of weed control in smart agriculture. It is a challenging task because weeds in the field are varied and irregular, with complex backgrounds. An identification method for weed species in crop fields is proposed based on Grabcut and a local discriminant projections (LWMDP) algorithm. First, Grabcut is used to remove most of the background, and K-means clustering (KMC) is utilized to segment weeds from the whole image. Then, LWMDP is employed to extract low-dimensional discriminant features. Finally, a support vector machine (SVM) classifier is adopted to identify the weed species. The characteristics of the method are that (1) Grabcut and KMC utilize the texture (color) information and boundary (contrast) information in the image to remove most of the background and obtain a clean weed image, which reduces the burden of subsequent feature extraction; (2) LWMDP seeks a transformation from the training samples such that, in the low-dimensional feature subspace, different-class data points are mapped as far apart as possible while within-class data points are projected as close together as possible, and the matrix inverse computation is avoided in the generalized eigenvalue problem, so the small sample size (SSS) problem is naturally avoided. The experimental results on a dataset of weed species images show that the proposed method is effective for weed species identification and can preliminarily meet the requirements of multi-row crop spraying based on machine vision.
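The KMC segmentation step clusters pixels by color so that plant pixels separate from soil; as a stand-in for that step, here is a tiny 1-D k-means on pixel intensities (real pixels would be RGB triples, and the intensities below are made up):

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means, an illustrative stand-in for color-based KMC.

    Centers are initialized at the min and max values; each iteration
    assigns every value to its nearest center and recomputes the means.
    """
    centers = [min(values), max(values)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Hypothetical intensities: dark soil pixels vs. bright weed pixels.
pixels = [12, 15, 10, 200, 210, 205]
centers = sorted(kmeans_1d(pixels))
```

Thresholding at the midpoint of the two centers would then yield the weed mask that Grabcut's output is refined into.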

Journal ArticleDOI
TL;DR: A novel non-negative matrix factorization (NMF) based modeling and training algorithm that learns from both the adjacencies of the instances and the labels of the training set, and takes advantage of the smoothness assumption, so that the labels are properly propagated.
Abstract: Multi-label learning is more complicated than single-label learning since the semantics of the instances usually overlap and are not identical. The effectiveness of many algorithms often fails when the correlations in the feature and label spaces are not fully exploited. To this end, we propose a novel non-negative matrix factorization (NMF) based modeling and training algorithm that learns from both the adjacencies of the instances and the labels of the training set. In the modeling process, a set of generators is constructed, and the associations among generators, instances, and labels are set up, with which label prediction is conducted. In the training process, the parameters involved in the modeling are determined. Specifically, an NMF based algorithm is proposed to determine the associations between generators and instances, and a non-negative least squares optimization algorithm is applied to determine the associations between generators and labels. The proposed algorithm takes full advantage of the smoothness assumption, so that the labels are properly propagated. The experiments were carried out on six sets of benchmarks. The results demonstrate the effectiveness of the proposed algorithms.
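The NMF component referred to above is typically fitted with multiplicative updates; the sketch below shows the standard Lee-Seung updates for V ≈ W H on a tiny rank-1 example (it illustrates generic NMF, not the paper's specific generator-instance factorization):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf_step(V, W, H, eps=1e-9):
    """One pair of Lee-Seung multiplicative updates for V ~ W H.

    All factors stay non-negative, and the squared reconstruction
    error is non-increasing under these updates.
    """
    WtV, WtWH = matmul(transpose(W), V), matmul(transpose(W), matmul(W, H))
    H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(len(H[0]))]
         for i in range(len(H))]
    VHt, WHHt = matmul(V, transpose(H)), matmul(W, matmul(H, transpose(H)))
    W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(len(W[0]))]
         for i in range(len(W))]
    return W, H

def error(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

V = [[1.0, 2.0], [2.0, 4.0]]          # rank-1 matrix, exactly factorable
W, H = [[1.0], [1.0]], [[1.0, 1.0]]   # rank-1 initial factors
e0 = error(V, W, H)
for _ in range(50):
    W, H = nmf_step(V, W, H)
e1 = error(V, W, H)
```

In the paper's setting the analogous factorization ties instances to generators, with a non-negative least squares step handling the generator-label associations.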