Showing papers presented at "Parallel and Distributed Computing: Applications and Technologies in 2018"


Book ChapterDOI
20 Aug 2018
TL;DR: This paper proposes an autoencoder (AE) based deep-learning anomaly detection method with intrusion scoring for smart factory environments, and compares F-score and accuracy against Density-Based Spatial Clustering of Applications with Noise (DBSCAN) on the KDD data set.
Abstract: Industry 4.0 and the Industrial IoT are leading a new industrial revolution. Industrial IoT technologies make products more reliable and sustainable than traditional products in the automation industry. Industrial IoT devices transfer data between one another, a concept that requires advanced connectivity and intelligent security services. We focus on security threats in the Industrial IoT. General security systems can detect known security threats; however, it is not easy to detect anomalous threats, network intrusions, or new hacking methods. In this paper, we propose an autoencoder (AE) based deep-learning anomaly detection method with intrusion scoring for smart factory environments. We analyze F-score and accuracy for Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and the autoencoder using the KDD data set. We also use real data from Korean steel companies; the collected data consists of general measurements such as temperature, stream flow, and machine shocks. Finally, experiments show that the proposed autoencoder model outperforms DBSCAN.
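The core idea of the comparison can be sketched as follows: an autoencoder is trained to reconstruct normal traffic, its reconstruction error is used as an intrusion score, and DBSCAN's noise labels serve as the clustering baseline. This is a minimal illustration under assumed data and hyperparameters (synthetic features, network size, score threshold), not the authors' implementation.

```python
# Minimal sketch: reconstruction-error anomaly scoring vs. a DBSCAN baseline.
# The feature matrix, model sizes, and thresholds are placeholders for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import DBSCAN
from sklearn.metrics import f1_score, accuracy_score

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(500, 10))   # stand-in for normal sensor records
X_anom = rng.normal(4.0, 1.0, size=(25, 10))       # stand-in for intrusions
X = np.vstack([X_normal, X_anom])
y = np.hstack([np.zeros(500, dtype=int), np.ones(25, dtype=int)])

# Autoencoder: an MLP trained to reproduce its input, fitted on normal data only.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(X_normal, X_normal)
recon_error = np.mean((ae.predict(X) - X) ** 2, axis=1)   # intrusion score
threshold = np.percentile(recon_error[:500], 95)           # score cut-off (placeholder)
y_ae = (recon_error > threshold).astype(int)

# DBSCAN baseline: points labeled -1 (noise) are treated as anomalies.
y_db = (DBSCAN(eps=3.5, min_samples=5).fit_predict(X) == -1).astype(int)

for name, pred in [("AE", y_ae), ("DBSCAN", y_db)]:
    print(name, "F1:", f1_score(y, pred), "acc:", accuracy_score(y, pred))
```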

7 citations


Book ChapterDOI
Xizi Wang, Li Wang
20 Aug 2018
TL;DR: The findings show that the more structural social capital the fundraisers have, the higher the success rate the project will achieve, and that frequent updates on project progress can attract more supporters and larger donations.
Abstract: Charitable crowdfunding platforms, as a new model for donation, enable fund seekers to solicit funds from the public over the Internet. Despite the rapid development of crowdfunding platforms, knowledge about charitable crowdfunding remains limited, and few studies have investigated the determinants of charitable crowdfunding projects’ success. In this paper, we adopt data mining techniques to collect data from the ZhongChou platform, an important crowdfunding website in China. The theoretical foundation of our research model is social capital theory, and our question is what factors affect the success of charitable crowdfunding projects. The findings show that the more structural social capital the fundraisers have, the higher the success rate the project will achieve. Besides, frequent updates on project progress can attract more supporters and larger donations. More comments and more followers can attract more supporters, but have no significant impact on attracting larger donations.

5 citations


Book ChapterDOI
20 Aug 2018
TL;DR: This paper considers the design requirements for the online deep learning based intelligent video analytics service INCUVAS, a platform that continuously enhances video analysis performance by updating a real-time dataset for the deep neural network in a cloud environment.
Abstract: Recently, deep neural network and cloud computing based intelligent video surveillance technologies have attracted growing interest in industry and academia. The synergy between the two technologies plays a key role in public safety and video surveillance in the field. Reflecting these trends, we have been studying a cloud-based intelligent video analytics service using deep learning technology. INCUVAS (cloud-based INCUbating platform for Video Analytic Service) is a platform that continuously enhances video analysis performance by updating a real-time dataset for the deep neural network in a cloud environment. The goal of this cloud service is to provide continuous performance enhancement and management using image datasets from real environments. In this paper, we consider the design requirements for an online deep learning based intelligent video analytics service.

4 citations


Book ChapterDOI
20 Aug 2018
TL;DR: The GOB Chain connects every Blockchain in order to facilitate the exchange of data between Blockchains based on W3C standard metadata, thus guaranteeing interoperability and use, as well as searches of transaction information.
Abstract: The GOB Chain is a universal Blockchain platform designed to play the role of a perfect Blockchain hub for every Blockchain. First, it connects every Blockchain in order to facilitate the exchange of data between Blockchains based on W3C standard metadata, thus guaranteeing interoperability and use, as well as searches of transaction information. In addition, it links Blockchain technology to various existing legacy systems so that it can be immediately applied to businesses already in service. This allows the GOB Chain to apply Blockchain technology to existing businesses at the sites of the Fourth Industrial Revolution, enabling new productivity and competitiveness and improving profitability to a remarkable extent. The GOB Chain cooperates with Blockchain technologies, companies, and researchers around the world to overcome the limitations of existing Blockchain technology, and implements a Blockchain-platform ecology chain and ecosystem, making our lives more convenient while revolutionizing every industry. In addition, it automates blockchain systems and proposes a system with which anyone can easily provide, operate, and maintain blockchain services.

3 citations


Book ChapterDOI
20 Aug 2018
TL;DR: This paper designs a heuristic rendezvous point selection strategy to determine the trajectory of the MS, devises a routing protocol based on hops and energy, and demonstrates that the GES-DGS scheme not only significantly extends network lifespan compared with existing MS-based data gathering schemes, but also proactively adapts to application-specific delay requirements.
Abstract: Wireless sensor networks based on a mobile sink (MS) can significantly alleviate network congestion and the energy-hole problem, but the restricted moving speed of the MS results in large delays, and the limited communication time can lead to data loss. In this paper, a grid-based efficient scheduling and data gathering scheme (GES-DGS) is proposed to maximize the amount of data collected while reducing energy consumption within the delay tolerance of the network. The main challenges of our scheme are how to optimize the trajectory and sojourn times of the MS and how to deliver the sensed data to the MS in an energy-efficient way. To deal with these problems, we first divide the monitoring field into multiple grids and construct the hop gradient of each grid. Second, we design a heuristic rendezvous point selection strategy to determine the trajectory of the MS and devise a routing protocol based on hops and energy. With extensive simulation, we demonstrate that the GES-DGS scheme not only significantly extends network lifespan compared with existing MS-based data gathering schemes, but also proactively adapts to application-specific delay requirements.
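The grid partitioning and hop-gradient construction step can be illustrated with a simple breadth-first search over grid cells. The field size, grid dimensions, and starting cell below are assumptions for illustration; the paper's rendezvous-point selection and routing protocol are not reproduced.

```python
# Minimal sketch of the grid hop-gradient construction via BFS over grid cells.
from collections import deque

def hop_gradient(rows, cols, start):
    """BFS over a rows x cols grid: hop count of each cell from the start cell."""
    hops = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in hops:
                hops[(nr, nc)] = hops[(r, c)] + 1
                queue.append((nr, nc))
    return hops

# Example: a 4x4 grid with the mobile sink's initial cell at (0, 0).
grad = hop_gradient(4, 4, (0, 0))
for r in range(4):
    print([grad[(r, c)] for c in range(4)])
```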

3 citations


Book ChapterDOI
20 Aug 2018
TL;DR: The behavioral verification problem for SCDL/WS-BPEL service-component architectures is discussed, with the Wright formal ADL and the Ada concurrent language used as target models, and a set of systematic translation rules is proposed.
Abstract: Web systems verification is a crucial activity throughout the systems development life cycle, especially in the phase of service-component architectural design. Indeed, this activity allows the detection, and consequently the correction, of errors early in the Web systems development life cycle. In this paper, we discuss the behavioral verification problem for SCDL/WS-BPEL service-component architectures. To do so, the Wright formal ADL and the Ada concurrent language are used as target models, and a set of systematic translation rules is proposed. This allows the verification of standard behavioral properties using the Wr2fdr tool. In addition, using an Ada dynamic analysis tool, we can detect potential behavioral problems such as deadlock in an Ada concurrent program.

2 citations


Book ChapterDOI
20 Aug 2018
TL;DR: This paper applies data mining techniques (association rule mining) to data recorded in the context of massive learning processes in order to extend the process model with additional knowledge that provides more details for future process improvement.
Abstract: In today’s learning environments (MOOCs), learning processes are highly dynamic. This dynamicity generates a large volume of data describing the handling of such processes. Mining that data is a powerful way of getting valuable insights, so big data mining is becoming more and more an integral part of handling a business process management (BPM) project. Furthermore, using data mining outputs to extend the process model with additional knowledge provides more details for future process improvement. In this paper we apply data mining techniques (association rule mining) to data recorded in the context of massive learning processes.
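A toy example of the underlying association-rule idea on learner event logs follows. The sessions, activities, and the support/confidence thresholds are invented for illustration; the paper applies the same technique to recorded MOOC process data.

```python
# Toy sketch of association-rule mining (support and confidence) over learner sessions.
from itertools import combinations

sessions = [  # each session = set of activities performed by one learner
    {"watch_video", "take_quiz", "post_forum"},
    {"watch_video", "take_quiz"},
    {"watch_video", "read_pdf"},
    {"watch_video", "take_quiz", "read_pdf"},
]

def support(itemset):
    return sum(itemset <= s for s in sessions) / len(sessions)

items = sorted(set().union(*sessions))
for a, b in combinations(items, 2):
    for lhs, rhs in (({a}, {b}), ({b}, {a})):
        supp = support(lhs | rhs)
        if supp >= 0.5:                     # minimum support (placeholder)
            conf = supp / support(lhs)      # confidence of the rule lhs -> rhs
            if conf >= 0.8:                 # minimum confidence (placeholder)
                print(f"{lhs} -> {rhs}  support={supp:.2f} confidence={conf:.2f}")
```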

2 citations


Book ChapterDOI
20 Aug 2018
TL;DR: A new memory contention aware (MC-aware) power management scheme reduces the power consumption of the GPU with little impact on performance and increases power efficiency (IPC per watt) by up to 31.4% compared to the conventional architecture.
Abstract: To improve the performance of the GPU, more parallelism should be exploited and the GPU should be operated at a higher clock frequency. However, high parallelism and high clock frequency cause serious memory contention problems, resulting in significant power consumption and increased idle cycles in the GPU. This paper proposes a new memory contention aware (MC-aware) power management scheme to reduce the power consumption of the GPU with little impact on performance. When serious memory contention occurs in the GPU, the proposed MC-aware scheme changes the mode of the SM (Streaming Multiprocessor) to a power-saving mode with little performance degradation. The proposed scheme monitors the degree of memory contention, since severe memory contention causes serious performance degradation. The proposed GPU architecture includes an SM management unit that generates control signals based on the estimated degree of memory contention. According to our simulation results, the proposed MC-aware scheme can increase power efficiency, IPC per watt, by up to 31.4% compared to the conventional architecture.
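The control decision can be pictured as a simple threshold rule: when the estimated memory contention of an SM exceeds a limit, the SM is switched to a power-saving mode. The contention metric (stalled-cycle ratio) and the hysteresis thresholds below are assumptions for illustration, not the paper's hardware heuristics.

```python
# Conceptual sketch of an MC-aware mode decision per SM, with hysteresis.

def sm_mode(stalled_cycles, total_cycles, enter=0.6, leave=0.4, current="normal"):
    """Return 'power_save' or 'normal' based on the measured stall ratio."""
    contention = stalled_cycles / max(total_cycles, 1)
    if current == "normal" and contention > enter:
        return "power_save"   # heavy memory contention: extra SM activity is wasted
    if current == "power_save" and contention < leave:
        return "normal"       # contention relieved: restore full-performance mode
    return current

# Example trace of per-interval samples (stalled cycles, total cycles).
mode = "normal"
for stalled, total in [(100, 1000), (700, 1000), (650, 1000), (300, 1000)]:
    mode = sm_mode(stalled, total, current=mode)
    print(f"stall ratio {stalled / total:.2f} -> SM mode: {mode}")
```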

2 citations


Book ChapterDOI
20 Aug 2018
TL;DR: This paper proposes a spectrum-centric differential privacy scheme for hypergraph clustering based on the exponential mechanism and stochastic matrix perturbation; the results reveal that the proposed mechanism achieves better performance in both data utility and efficiency.
Abstract: In the real world, most complex networks can be represented by hypergraphs. Hypergraph spectral clustering has drawn wide attention in recent years because it is more suitable for describing high-order relations between objects. In this paper, we focus on the spectrum of hypergraph-based complex networks and propose a spectrum-centric differential privacy scheme using stochastic matrix perturbation. The main idea is to project the hypergraph Laplacian matrix into a low-dimensional space and perturb the eigenvectors with random noise. We present a differential privacy mechanism for hypergraph clustering based on the exponential mechanism. We evaluate the computational efficiency and data utility of existing methods on both synthetic and real datasets, and the experimental results reveal that our proposed mechanism achieves better performance in both data utility and efficiency.
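A simplified sketch of the spectrum-centric idea: project the Laplacian into its low-dimensional eigenvector space, perturb the eigenvectors with random noise, then cluster. Plain additive Laplace noise and an ordinary graph Laplacian are used here purely for illustration; the paper's hypergraph construction, exponential-mechanism design, and privacy accounting are not reproduced.

```python
# Illustrative sketch: noisy spectral embedding followed by k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Toy graph: two 5-node cliques joined by one edge (stand-in for a hypergraph Laplacian).
A = np.zeros((10, 10))
A[:5, :5] = 1
A[5:, 5:] = 1
np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1
L = np.diag(A.sum(axis=1)) - A

k, epsilon = 2, 1.0
eigvals, eigvecs = np.linalg.eigh(L)       # spectral decomposition
U = eigvecs[:, :k]                         # low-dimensional spectral embedding
U_noisy = U + rng.laplace(scale=1.0 / epsilon, size=U.shape) * 0.05  # perturbation

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U_noisy)
print("cluster labels:", labels)
```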

2 citations


Book ChapterDOI
20 Aug 2018
TL;DR: A deep spatio-temporal approach is proposed that merges a temporal normalization method, energy binary motion information (EBMI), with deep learning based on a stacked auto-encoder (SAE) for emotional body gesture recognition in job interviews; the results demonstrate the efficiency of the proposed approach.
Abstract: Social psychologists have long studied job interviews with the aim of understanding the relationships between behaviors, interview outcomes, and job performance. Several companies attach great importance to psychological tests based on observation of the candidate's behavior rather than their answers, especially for sensitive positions in trade, marketing, investigation, etc. Our work combines two research topics that have attracted much interest in recent decades: social psychology and affective computing. Several techniques have been proposed to automatically analyze a candidate's non-verbal behavior. This paper concentrates on body gestures, an important non-verbal expression channel during affective communication that has been studied far less than facial expressions. In this work we propose a deep spatio-temporal approach that merges a temporal normalization method, energy binary motion information (EBMI), with deep learning based on a stacked auto-encoder (SAE) for emotional body gesture recognition in job interviews; the results demonstrate the efficiency of the proposed approach.
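The temporal-normalization idea behind an energy binary motion representation can be sketched as thresholding successive frame differences and averaging them into a single motion map. The frame data, threshold, and exact definition are assumptions for illustration; the paper's EBMI computation and the SAE classifier are not reproduced.

```python
# Rough sketch of a binary motion-energy map summarizing a gesture clip.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(12, 64, 64)).astype(float)  # stand-in video clip

def energy_binary_motion(frames, thresh=30.0):
    diffs = np.abs(np.diff(frames, axis=0))   # frame-to-frame differences
    binary = (diffs > thresh).astype(float)   # binary motion masks
    return binary.mean(axis=0)                # normalized motion-energy map

ebmi = energy_binary_motion(frames)
print("EBMI shape:", ebmi.shape, "mean energy:", round(float(ebmi.mean()), 3))
# The resulting 64x64 map would then be flattened and fed to the stacked auto-encoder.
```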

2 citations


Book ChapterDOI
20 Aug 2018
TL;DR: Through continuous algorithm optimization and experimental verification on real trajectory data, the proposed model and algorithm are shown to effectively protect privacy under the (k, δ) security constraint.
Abstract: Since Abul et al. first proposed k-anonymity based privacy protection for trajectory data, researchers have proposed a variety of trajectory privacy-preserving methods. These methods mainly adopt static anonymization algorithms that anonymize and publish the data directly after initialization, and they do not take into account the real application scenarios of moving trajectory data. The objective of this paper is to realize dynamic data publishing for the privacy protection of high-dimensional vehicle trajectory data under (k, δ) security constraints. First, we propose partitioned storage and computation for trajectory data. According to the spatial and temporal characteristics of vehicle trajectory data, we choose the sample point \( (x_{2}, y_{2}, t) \) at time \( t_{i} \) as the partition field, and the trajectory data are partitioned and stored according to the time sequence and the location of the running vehicle as \( Region(m,n)\_(x_{i}, y_{i}, t_{i}) \). The computational cost of data scanning in trajectory clustering and privacy processing is greatly reduced by this method. Second, a dynamic clustering method is used to cluster the regional data. According to the characteristics of vehicle trajectory data, with \( (x_{i}, y_{i}, t_{m-n}) \) as the released data identifier and the trajectory attributes of the vehicle as the sensitive attributes, we use the Data Partitioning and Cartesian Product (DPCP) method to cluster trajectory data under the (k, δ) security constraints. Third, the anonymization function \( f_{DPCP} \) is used to preserve the privacy of the clustered trajectory data: in each sampling time slice, \( f_{DPCP} \) is used to generalize the location data within each group. Through continuous algorithm optimization and experimental verification on real trajectory data, this model and algorithm effectively protect privacy under the (k, δ) security constraint. Data simulation and data availability evaluation show that the data processed by the anonymization method retain a certain usability under the threshold δ. At the same time, the experimental results are compared with the classical NWA and DLBG algorithms, and the method in this paper is shown to be superior in time cost and data availability.
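The partitioned-storage step can be illustrated by mapping each trajectory sample (x, y, t) to a spatial grid cell within its time slice, so that later clustering only scans the relevant partition. The cell size and slice length are placeholders; the DPCP clustering and the generalization function are not shown.

```python
# Illustrative sketch of partitioning trajectory samples into Region(m, n) cells per time slice.
from collections import defaultdict

CELL = 100.0    # spatial cell size (metres, assumed)
SLICE = 60.0    # time-slice length (seconds, assumed)

def partition(samples):
    regions = defaultdict(list)
    for x, y, t in samples:
        key = (int(x // CELL), int(y // CELL), int(t // SLICE))  # (m, n, time slice)
        regions[key].append((x, y, t))
    return regions

samples = [(120.5, 80.2, 5.0), (130.1, 95.7, 65.0), (560.0, 40.0, 10.0)]
for key, pts in partition(samples).items():
    print("Region(m=%d, n=%d, slice=%d):" % key, pts)
```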

Book ChapterDOI
20 Aug 2018
TL;DR: The specification and design of an automated tool that manages and maintains the information needed to estimate security risk, supported by the risk assessment model, is discussed.
Abstract: In previous work, we presented a quantitative cyber security risk assessment model that quantifies the security of a system in financial terms. Our model assesses the cost of the failure of an information system's security with regard to threat dimensions. In this assessment, we consider that the threat world can be divided into several threat dimensions and perspectives. In this paper, we discuss the specification and design of an automated tool that manages and maintains the information needed to estimate security risk, supported by our risk assessment model.

Book ChapterDOI
20 Aug 2018
TL;DR: This paper proposes to optimize deep convolutional neural networks for real-time video processing for detecting faces and facial landmarks by utilizing the strengths of two previous powerful algorithms that have shown the best performance, and proposes a grid-based one-shot detection method.
Abstract: This paper proposes to optimize deep convolutional neural networks for real-time video processing for detecting faces and facial landmarks. For that, we have to reduce the existing weight size and the duplication of weight parameters. By utilizing the strengths of two previous powerful algorithms that have shown the best performance, we overcome the weaknesses of existing methods. Instead of using old-fashioned search methods such as the sliding window, we propose a grid-based one-shot detection method. Furthermore, instead of forwarding one image frame through a very deep CNN, we divide the process into 3 stages of incremental detection improvements to overcome the limitations of grid-based detection. After many experiments with different frameworks, the most suitable deep learning framework is chosen for the integration of the 3-stage DCNN. By using transfer learning, we remove unnecessary convolution layers from the existing DCNN and retrain the hidden layers repeatedly, finally obtaining the best speed and accuracy achievable on the embedded platform. Performance in finding small faces is better than that of YOLOv2.

Book ChapterDOI
20 Aug 2018
TL;DR: The aim of this paper is to identify and evaluate the different types of patterns proposed in the literature through a systematic review, and to identify the patterns most suitable for the area of process improvement.
Abstract: The continuous evolution of information technology, together with various other changes, has a significant impact on Business Processes (BP) and their performance. To deal effectively with these changes, several solutions that introduce new ways to better control a BP have been proposed. Among them emerges the concept of business process improvement (BPI), which focuses on the continuous improvement and evolution of BPs by adopting a number of techniques. Nowadays, a technique for modeling and executing a BP has regained a lot of attention: it is based on the concept of patterns, defined as reusable solutions for dealing with problems occurring in a certain context concerning a given BP. Several studies have been interested in this concept, proposing pattern approaches for modeling, executing, and improving a BP. The aim of this paper is therefore to identify and evaluate the different types of patterns proposed in the literature through a systematic review. The result of the review is analyzed using a number of criteria that enable us to identify the patterns most suitable for the area of process improvement, thus positioning our work in this area. This set of business process improvement patterns (BPIP) can later be iteratively corrected and completed in order to obtain a continuously improved set.

Book ChapterDOI
20 Aug 2018
TL;DR: By adjusting the scheduling policy dynamically, performance and cache efficiency are improved significantly compared with LRR and GTO; the proposed technique provides IPC improvements of 19% and 3.8% over LRR and GTO, respectively.
Abstract: The warp scheduling policy of a GPU has a significant impact on performance, since the order in which warps are executed determines the degree of data cache locality. Greedy warp scheduling policies such as GTO show better performance than fair scheduling policies for numerous applications. However, cache locality across multiple warps is underutilized when GTO is adopted, resulting in overall performance degradation. In this paper, we propose a dynamic selective warp scheduling policy that exploits the data locality of the workload. Inter-warp locality and intra-warp locality are determined based on the access history information of the L1 data cache. By adjusting the scheduling policy dynamically, performance and cache efficiency are improved significantly compared with LRR and GTO. According to our experimental results, the proposed technique provides IPC improvements of 19% and 3.8% over LRR and GTO, respectively.
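The selection logic can be pictured as a comparison of locality counters derived from the L1 data cache access history: if intra-warp reuse dominates, a greedy policy is preferred; if inter-warp reuse dominates, round-robin is preferred. The counter names, margin, and decision rule below are assumptions for illustration, not the paper's hardware design.

```python
# Conceptual sketch of dynamic policy selection between GTO and LRR.

def choose_policy(intra_warp_hits, inter_warp_hits, margin=1.2):
    """Return 'GTO' when intra-warp reuse dominates, 'LRR' when inter-warp reuse does."""
    if intra_warp_hits > margin * inter_warp_hits:
        return "GTO"   # greedy-then-oldest keeps one warp running to exploit its own reuse
    if inter_warp_hits > margin * intra_warp_hits:
        return "LRR"   # loose round-robin spreads accesses so warps share cached lines
    return "GTO"       # tie-break: default to the greedy policy

for counters in [(800, 200), (150, 900), (500, 480)]:
    print(counters, "->", choose_policy(*counters))
```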

Book ChapterDOI
20 Aug 2018
TL;DR: This paper offers an alternative cloud integration solution centered on user data privacy, its main purpose being to help software services providers and public institutions to comply with the General Data Protection Regulation.
Abstract: With the everlasting development of technology and society, data privacy has proven to grow into a pressing issue. The bureaucratic state system seems to expand the number of personal documents required for any kind of request. Therefore, it becomes obvious that the number of people having access to information that should be private is on the rise as well. This paper offers an alternative cloud integration solution centered on user data privacy, its main purpose being to help software services providers and public institutions to comply with the General Data Protection Regulation. Throughout this proposal we describe how data confidentiality can be achieved by transitioning complex human procedures into a coordinated and decoupled swarm system, whose core lies within the “Privacy by Design” principles.

Book ChapterDOI
20 Aug 2018
TL;DR: PINUS, an indoor weighted centroid localization (WCL) method with crowdsourced calibration, relies on crowdsourcing to do the calibration for WCL to improve localization accuracy without the device diversity problem.
Abstract: PINUS, an indoor weighted centroid localization (WCL) method with crowdsourced calibration, is proposed in this paper. It relies on crowdsourcing to do the calibration for WCL to improve localization accuracy without the device diversity problem. Smartphones and Bluetooth Low Energy (BLE) beacon devices are applied to realize PINUS for the sake of design validation and performance evaluation.
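The basic weighted centroid localization step can be sketched as an RSSI-weighted average of the beacon coordinates. The beacon layout, RSSI values, and the RSSI-to-weight conversion are illustrative assumptions; PINUS's crowdsourced calibration of these weights is not reproduced here.

```python
# Minimal sketch of weighted centroid localization (WCL) from BLE beacon RSSI.

def wcl(beacons, g=1.0):
    """beacons: list of ((x, y), rssi_dBm); weight = linear power derived from RSSI."""
    weights = [((x, y), 10 ** (rssi / (10.0 * g))) for (x, y), rssi in beacons]
    total = sum(w for _, w in weights)
    x = sum(px * w for (px, _), w in weights) / total
    y = sum(py * w for (_, py), w in weights) / total
    return x, y

beacons = [((0.0, 0.0), -60.0), ((10.0, 0.0), -70.0), ((0.0, 10.0), -75.0)]
print("estimated position:", wcl(beacons))
```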

Book ChapterDOI
20 Aug 2018
TL;DR: This paper evaluates a method for reducing the number of glyphs and its efficiency, arguing that it is important to reduce their number using the structural properties of syllables and factored font drawing.
Abstract: It is not easy to develop fonts for individual syllables because the scientific principles of Hunminjeongeum generate a huge number of syllables, so syllable fonts need to be composed by combining three phonemic glyphs. Developing glyphs for each syllable structure also requires great effort, so we think it is important to reduce their number using the structural properties of syllables and factored font drawing. Therefore, this paper aims to evaluate the glyph-number reduction method and its efficiency.

Book ChapterDOI
20 Aug 2018
TL;DR: A direction for improving network operation is provided by determining the exact number of nodes in the smart grid service environment based on the correlations revealed.
Abstract: There are a series of nodes in a Smart Grid environment, and for them to work efficiently, their tasks should be adequately scheduled. For the scheduling method, this study considers two scenarios: use of the greedy algorithm or of the Floyd-Warshall algorithm, each of which has its own merits and demerits. The effectiveness of a scheduling algorithm differs depending on the number of nodes. There are also two kinds of nodes: mobile nodes and non-mobile nodes; a good example of a node that moves easily is a person. Performing a headcount of people using their personal information, such as images or whereabouts, is not an easy task due to ever-strengthening civil rights, and it is also difficult to select an effective scheduling algorithm due to the dynamic number of nodes. Thus, to determine an efficient scheduling method, meaningful correlations between the number of AP accesses, which can be regarded as a proxy for the number of people, and the actual number of people in a certain space have been studied using the AP access records of smart devices (smartphones, tablets, etc.) that most people carry these days, instead of using personal information. This study then provides a direction for improving network operation by determining the exact number of nodes in the smart grid service environment based on the correlations revealed.
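For reference, one of the two scheduling scenarios mentioned, the Floyd-Warshall all-pairs shortest-path computation, is sketched below. The node count and edge weights are arbitrary; how the resulting distance matrix feeds the task schedule is not covered by this sketch.

```python
# Compact sketch of the Floyd-Warshall all-pairs shortest-path algorithm.
INF = float("inf")

def floyd_warshall(w):
    """w: adjacency matrix with INF for missing edges; returns all-pairs distances."""
    n = len(w)
    dist = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

w = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
for row in floyd_warshall(w):
    print(row)
```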

Book ChapterDOI
20 Aug 2018
TL;DR: This paper proposes an efficient SDN-based architecture for Wi-Fi networks, which can exploit MPTCP to enable simultaneous transmission for a specific application and achieve fine-grained path assignment for a single subflow.
Abstract: Wi-Fi technologies have attracted great attention due to their low cost of construction and maintenance. However, the explosive growth of mobile applications brings new challenges in terms of demanding service quality requirements, such as ultra-high bandwidth, seamless mobility, and high reliability. In this paper, we propose an efficient SDN-based architecture for Wi-Fi networks, which can exploit MPTCP to enable simultaneous transmission for a specific application and achieve fine-grained path assignment for a single subflow. To evaluate our design, we built a real-world testbed that consists of commodity AP devices; the experimental results demonstrate that our system can significantly improve wireless throughput and reduce handover delay at the same time.

Book ChapterDOI
20 Aug 2018
TL;DR: A method of recognizing 8 commercial protocols and their transformed variants using deep learning techniques is suggested; when the proposed method is applied prior to the APRE (Automatic Protocol Reverse Engineering) process, useful information can be obtained beforehand.
Abstract: Protocol reverse-engineering techniques can be used to extract the specification of an unknown protocol. However, there is no standardized method, and in most cases the extraction process is done manually or semi-automatically. Since only frequently seen values are extracted as fields from the messages of a protocol, it is difficult to understand the complete specification of the protocol. Therefore, if information about the structure of the unknown protocol could be acquired in advance, it would be easier to conduct reverse engineering. This paper suggests a method of recognizing 8 commercial protocols and their transformed variants using deep learning techniques. When the proposed method is applied prior to the APRE (Automatic Protocol Reverse Engineering) process, useful information can be obtained beforehand whenever similarities exist between unknown protocols and the learned protocols.

Book ChapterDOI
20 Aug 2018
TL;DR: A novel covert timing channel is presented that considers the actual execution time distribution of tasks and controls execution time to leak data between conspirators, demonstrating that it is possible to leak data in real-time systems.
Abstract: Unlike a general-purpose system, a real-time system requires stringent timing guarantees. While existing offline analysis techniques can provide timing guarantees using the worst-case execution time (WCET) of individual tasks, variation in actual execution times makes it difficult to build a covert timing channel. In this paper, we first present a novel covert timing channel that considers the actual execution time distribution of tasks and controls execution time to leak data between conspirators; we demonstrate that it is possible to leak data in real-time systems. Second, we suggest two enhancing techniques, called S-R LCM (sender-receiver least common multiple) and noise area, to reduce noise in communication. Through simulations, we demonstrate that our covert timing channel can trade off transmission speed against accuracy: it shows average accuracies of 50.2%, 54.6%, and 51.3% over 100 test cases with thresholds of 0, 1.4, and 2.8, respectively. An average accuracy of 58.4% is achieved with the best threshold values for the 100 test cases, and the maximum accuracy for a single test case is 100.0%.

Book ChapterDOI
20 Aug 2018
TL;DR: The proposed adaptive nonlinear ARX fuzzy C-means (NARXNF) clustering technique obtains both improved convergence and error reduction relative to the traditional fuzzy C-means clustering algorithm.
Abstract: This paper proposes a reliable, intelligent, model-based (hybrid) fault detection and diagnosis (FDD) technique for wireless sensor networks (WSNs) in the presence of noise and uncertainties. A wireless sensor network is a network formed by a large number of sensor nodes in which each node is equipped with a sensor to detect physical phenomena such as light, heat, pressure, and temperature. Increasing the number of sensor nodes can cause an increase in the number of faulty nodes, which adversely affects the quality of service (QoS). Herein, the WSN modeling is based on an adaptive method that combines the fuzzy C-means clustering algorithm with the modified auto-regressive eXternal (ARX) model and is utilized for fault identification in WSNs. The proposed adaptive nonlinear ARX fuzzy C-means (NARXNF) clustering technique obtains both an improved convergence and error reduction relative to that of the traditional fuzzy C-means clustering algorithm. In addition, the proportional integral (PI) distributed observation is used for diagnosing multiple faults, where the convergence, robustness, and stability are validated by a fuzzy linear matrix inequality (FLMI). To test the proposed method, this technique was implemented through simulation using Omnet++ and MATLAB.

Book ChapterDOI
20 Aug 2018
TL;DR: Experimental results show that the proposed quantization-based semi-fragile watermarking algorithm can distinguish high-quality JPEG compression and small amounts of noise from shear attacks, classify tampering as intentional or incidental, and has good localization ability.
Abstract: In this paper, we propose a novel quantization-based semi-fragile watermarking algorithm for image authentication and tamper localization based on the discrete wavelet transform (DWT). In this algorithm, the watermark is generated by extracting image features from the approximation subband in the wavelet domain. A linear congruential pseudo-random number generator is used to control the watermark embedding locations and realize adaptive embedding. Both the generation and the embedding of the watermark are carried out in the original image, and authentication requires neither the original image nor any additional information, which improves the security and confidentiality of the watermark. The scheme can tolerate mild modifications of the digital image and is able to detect malicious modifications precisely. Experimental results show that the proposed algorithm can distinguish high-quality JPEG compression and small amounts of noise from shear attacks, classify tampering as intentional or incidental, and has good localization ability.
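A simplified sketch of quantization-based embedding in the DWT approximation subband follows: selected coefficients are quantized so that their parity encodes watermark bits, and a linear congruential generator picks the embedding positions. It uses the PyWavelets library; the step size, LCG constants, and image are placeholders, and the feature-based watermark generation and authentication checks are omitted.

```python
# Illustrative sketch of DWT-domain, quantization-based watermark embedding.
import numpy as np
import pywt

def lcg_positions(seed, count, limit):
    """LCG-driven selection of `count` distinct coefficient indices in [0, limit)."""
    x, out = seed, []
    while len(out) < count:
        x = (1103515245 * x + 12345) % (2 ** 31)
        p = x % limit
        if p not in out:
            out.append(p)
    return out

image = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
q = 8.0                                         # quantization step (placeholder)

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")     # approximation subband carries the mark
flat = cA.flatten()
for bit, pos in zip(bits, lcg_positions(42, len(bits), flat.size)):
    k = np.round(flat[pos] / q)
    if int(k) % 2 != bit:                       # force the parity of the quantized level
        k += 1
    flat[pos] = k * q
watermarked = pywt.idwt2((flat.reshape(cA.shape), (cH, cV, cD)), "haar")
print("watermarked image shape:", watermarked.shape)
```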

Book ChapterDOI
20 Aug 2018
TL;DR: This paper proposes an autonomous control method for drones to crack down on traffic law violations by flying autonomously and photographing violations rather than being manually controlled.
Abstract: Recently, drones have been used for monitoring in various fields. In particular, pilots manually fly drones in order to monitor traffic law violations that occur at unspecified locations. However, to enforce traffic laws with drones, it is necessary for them to fly autonomously and photograph violations rather than be manually controlled. This paper proposes an autonomous control method for drones to crack down on traffic law violations. The pilot first collects flight records used for traffic enforcement. From the collected flight records, flight paths for autonomous flight are generated, and the optimal flight path for the drone is selected. A control signal is generated considering obstacles and the flight path, and the drone flies autonomously based on this control signal. With the proposed method, the drone can fly autonomously and crack down on traffic law violations.

Book ChapterDOI
20 Aug 2018
TL;DR: The aim is to sensitize end-users of Erle-Copters by describing the security vulnerabilities and to show how the Erle-Copter can be secured from unauthorized access, providing instructions to secure the drone's Wi-Fi connection and its operation with the official smartphone app or third-party PC software.
Abstract: In this paper we describe the security vulnerabilities of the Erle-Copter quadcopters. Because it is promoted as a toy with low acquisition costs, it may end up being used by many individuals, which makes it a target for harmful attacks. In addition, the video stream of the drone could be of interest to a potential attacker due to its ability to reveal confidential information. Therefore, we perform a security threat analysis on this particular family of drones. We focus mainly on obvious security vulnerabilities such as the unencrypted Wi-Fi connection and the user management of the GNU/Linux operating system that runs on the drone. We show how the drone can be hacked in order to hijack the Erle-Copter. Our aim is to sensitize end-users of Erle-Copters by describing the security vulnerabilities and to show how the Erle-Copter can be secured from unauthorized access. We provide instructions to secure the drone's Wi-Fi connection and its operation with the official smartphone app or third-party PC software.

Book ChapterDOI
20 Aug 2018
TL;DR: This paper presents a programming framework that enables the expression of pipeline patterns at a high-level of abstraction by adding pragma directives to sequential C++ codes and presents experimental results for a real-world face-detection application indicating that a performance competitive with low-level programming approaches can be achieved.
Abstract: Task-based runtime systems have gained a lot of interest in recent years since they support separating the specification of parallel computations from the concrete mapping onto a parallel architecture. This separation of concerns is considered key to coping with the increased complexity, performance variability, and heterogeneity of future parallel systems and to facilitating portability of applications across different architectures. In this paper we present our work on a programming framework that enables the expression of pipeline patterns at a high-level of abstraction by adding pragma directives to sequential C++ codes. Such high-level abstractions are then transformed to a runtime coordination layer, which utilizes different task-based runtime systems including StarPU and OCR to realize efficient parallel execution on single-node multi-core architectures. We describe the major aspects of our approach for mapping pipeline patterns to task-based runtimes and present experimental results for a real-world face-detection application indicating that a performance competitive with low-level programming approaches can be achieved.

Book ChapterDOI
20 Aug 2018
TL;DR: This paper presents a partition-based AVF calculation methodology from local to global that combines the local AVF of each component with Input-Output Masking calculations between components to compute the global AVF quickly, using probability theory in a cost-effective way.
Abstract: Soft-error-induced bit upsets have received increasing attention in reliable processor design. To measure processor reliability, the Architectural Vulnerability Factor (AVF) is often calculated by fast Architectural Correct Execution (ACE) analysis or by accurate fault injection in a CPU core (e.g., Alpha, x86, ARM) or GPU. However, AVF calculation for an entire multicore system composed of several CPU cores, a GPU, caches, and memory banks mostly depends on time-consuming realistic laser tests or complex fault injection (days or years). To shorten the evaluation time, this paper presents a partition-based AVF calculation methodology from local to global. This approach combines the local AVF of each component with Input-Output Masking (IOM) calculations between components to compute the global AVF quickly, using probability theory in a cost-effective way. Comprehensive simulation results for a 7-way parallelized motion detection application demonstrate that the error location and error propagation path affect global AVF values. The probability-theory-driven global AVF estimation time is only on the order of seconds.
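The local-to-global combination can be illustrated with a generic probabilistic model: a fault visible in one component reaches the system output only if it is not masked by the downstream components. The component list, bit counts, local AVFs, masking values, and the chain-style combination rule below are all invented for illustration and are not the paper's exact formulation.

```python
# Illustrative sketch of combining local AVFs with inter-component masking probabilities.

# (name, bits, local_AVF, probability that this component masks a fault passing through it)
components = [
    ("cpu_core", 4096, 0.25, 0.30),
    ("l2_cache", 8192, 0.10, 0.50),
    ("memory", 16384, 0.05, 0.20),
]

def global_avf(chain):
    total_bits = sum(bits for _, bits, _, _ in chain)
    acc = 0.0
    for i, (_, bits, avf, _) in enumerate(chain):
        # Probability that a visible fault in this component is not masked by any
        # downstream component before reaching the system output.
        passthrough = 1.0
        for _, _, _, mask in chain[i + 1:]:
            passthrough *= (1.0 - mask)
        acc += bits * avf * passthrough
    return acc / total_bits

print(f"estimated global AVF: {global_avf(components):.4f}")
```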

Book ChapterDOI
Xizi Wang, Li Wang
20 Aug 2018
TL;DR: Results derived from the data suggest that simply offering a higher return rate is less attractive to investors, and that platforms which possess high registered capital and a successful financing history along with low risk are more favored by investors.
Abstract: The financial industry has experienced a wide range of changes and innovations brought about by technology and the Internet. The online Peer-to-Peer (P2P) lending platform is a relatively new phenomenon that has thoroughly changed the finance and e-commerce industries. In fact, the number of P2P online lending platforms in China has grown rapidly and will probably continue to increase over the next decade. However, research in this field is still in its infancy, and a deep understanding of this business pattern has significant implications both theoretically and practically. We studied investors’ investment intention towards P2P platforms based on perceived value theory. Unlike most empirical studies that employ questionnaires to collect data, this paper crawled data from WDZJ (www.wdzj.com), a third-party online loan information platform. We collected data on 517 platforms to investigate the relationship between platform characteristics and investment intention. The results suggest that simply offering a higher return rate is less attractive to investors; platforms that possess high registered capital and a successful financing history along with low risk are more favored by investors.

Book ChapterDOI
20 Aug 2018
TL;DR: An automated software effort measurement method that can be applied during the entire software development life cycle is proposed to overcome problems in continuous and consistent measurement and to improve effort measurement outcomes.
Abstract: Software companies have adopted project management methodologies suitable for their organizations and have made significant efforts to apply them successfully in order to improve the quality of software. In particular, technology that can measure and analyze software project data is essential for effective project management and productivity improvement. Among software project data, software effort is the key metric to be measured, given its direct relation to process improvement and quality as well as its general management interest. In practice, however, there have been many difficulties in actually measuring effort data because of problems in continuous and consistent measurement. Therefore, in this paper, we propose an automated software effort measurement method that can be applied during the entire software development life cycle, to overcome these problems and to improve effort measurement outcomes. Experiments are performed to evaluate the proposed method from the viewpoint of effort measurement accuracy. The results indicate that the proposed method shows a significant improvement compared to existing methods.