
Showing papers presented at "Computational Science and Engineering in 2019"


Journal ArticleDOI
01 Jul 2019
TL;DR: This paper describes the evolution of Pegasus over time, provides the motivations behind its design decisions, and concludes with selected lessons learned.
Abstract: Since 2001, the Pegasus Workflow Management System has evolved into a robust and scalable system that automates the execution of a number of complex applications running on a variety of heterogeneous, distributed high-throughput, and high-performance computing environments. Pegasus was built on the principle of separation between the workflow description and workflow execution, providing the ability to port and adapt the workflow based on the target execution environment. Through its user-driven research and development, it has adapted to the needs of a number of scientific communities, utilizing and developing novel algorithms and software engineering solutions. This paper describes the evolution of Pegasus over time and provides motivations behind the design decisions. It concludes with selected lessons learned.

66 citations
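Pegasus's core principle — the separation between the abstract workflow description and its execution — can be illustrated with a minimal sketch (our own toy example with hypothetical task names, not Pegasus's actual API): the workflow is declared purely as a dependency graph, and a separate planning step derives a valid execution order for whatever target environment is chosen.

```python
def topological_order(workflow):
    """Derive a valid execution order from an abstract workflow DAG."""
    order, resolved = [], set()
    pending = {task: set(deps) for task, deps in workflow.items()}
    while pending:
        # Tasks whose dependencies have all been resolved are ready to run.
        ready = sorted(t for t, deps in pending.items() if deps <= resolved)
        if not ready:
            raise ValueError("workflow contains a cycle")
        for task in ready:
            order.append(task)
            resolved.add(task)
            del pending[task]
    return order

# Abstract workflow description, independent of where and how it runs.
workflow = {
    "extract": [],
    "transform_a": ["extract"],
    "transform_b": ["extract"],
    "merge": ["transform_a", "transform_b"],
}

print(topological_order(workflow))
# → ['extract', 'transform_a', 'transform_b', 'merge']
```

A real planner additionally maps each task to a concrete site, stages data, and generates executable jobs; the point here is only that the description never changes when the target environment does.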


Journal ArticleDOI
01 Jan 2019
TL;DR: It is concluded that these models execute about 100–250 times too slow for operational throughput rates at a horizontal resolution of 1 km, even when executed on a full petascale system with nearly 5000 state-of-the-art hybrid GPU-CPU nodes.
Abstract: We present a roadmap towards exascale computing based on true application performance goals. It is based on two state-of-the art European numerical weather prediction models (IFS from ECMWF and COSMO from MeteoSwiss) and their current performance when run at very high spatial resolution on present-day supercomputers. We conclude that these models execute about 100–250 times too slow for operational throughput rates at a horizontal resolution of 1 km, even when executed on a full petascale system with nearly 5000 state-of-the-art hybrid GPU-CPU nodes. Our analysis of the performance in terms of a metric that assesses the efficiency of memory use shows a path to improve the performance of hardware and software in order to meet operational requirements early next decade.

61 citations


Journal ArticleDOI
01 Jan 2019
TL;DR: The U.S. is a long-time international leader in HPC, rooted in a strong and innovative computing industry that is complemented by partnerships with and among federal agencies, academia, and industries whose success relies on HPC.
Abstract: The U.S. is a long-time international leader in HPC, rooted in a strong and innovative computing industry that is complemented by partnerships with and among federal agencies, academia, and industries whose success relies on HPC. The advent of exascale computing brings challenges in traditional simulation as well as in areas colloquially referred to as “Big Data.” Within this context, we describe the U.S. exascale computing strategy: 1) the National Strategic Computing Initiative, a multiple U.S. federal agency effort comprehensively addressing computing and computational science requirements in the U.S.; 2) the Exascale Computing Initiative, a DOE effort to acquire, develop, and deploy exascale computing platforms within DOE laboratories on a given timeline; and, 3) the Exascale Computing Project (a component of the Exascale Computing Initiative), dedicated to the creation and enhancement of applications, software, and hardware technologies for exascale computers, focused on vital U.S. national security and science needs.

38 citations


Journal ArticleDOI
01 Jul 2019
TL;DR: In this article, a building-block approach to the design of scientific workflow systems is described, and a set of building blocks that enable multiple points of integration, "unifying" conceptual reasoning across otherwise very different tools and systems, is discussed.
Abstract: This paper describes a building-block approach to the design of scientific workflow systems. We discuss RADICAL-Cybertools as one implementation of the building-block concept, showing how they are designed and developed in accordance with this approach. This paper offers three main contributions: (i) showing the relevance of the design principles underlying the building-block approach to supporting scientific workflows on high performance computing platforms; (ii) illustrating a set of building blocks that enable multiple points of integration, “unifying” conceptual reasoning across otherwise very different tools and systems; and (iii) case studies discussing how RADICAL-Cybertools are integrated with existing workflow, workload, and general purpose computing systems, and used to develop domain-specific workflow systems.

27 citations


Journal ArticleDOI
01 May 2019
TL;DR: It is concluded that the HTS maglev train offers a clear advantage in ride comfort.
Abstract: This paper builds a six degree-of-freedom dynamic model of the high-temperature superconducting (HTS) maglev vehicle/bridge coupled system and simulates its dynamic responses under different conditions. The influences of velocity, air spring dynamics parameters, and guideway irregularity on the ride quality of the vehicle are also systematically studied. Results show that the resonant frequency of the carbody acceleration is about 0.5–1.5 Hz, and that the main vibration frequency of the carbody is related to the periodic configuration of rigid pillars when the HTS maglev train travels over the flexible elevated guideway. Generally, the maximum carbody acceleration is less than 0.3 m/s², and the maximum elevated-guideway acceleration is also at a low level. In summary, this paper develops and computes a dynamic model of the HTS maglev vehicle/bridge coupled system, and it is concluded that the HTS maglev train offers a clear advantage in ride comfort.

27 citations


Proceedings ArticleDOI
05 Dec 2019
TL;DR: This paper introduces an uplink VLC system based on an infrared transmitter with beam steering to provide high data rates; a four-branch angle diversity receiver (ADR) is used, and the resultant delay spread and SNR are examined.
Abstract: Providing high uplink data rates is a major concern in visible light communication (VLC) systems. This paper introduces an uplink VLC system based on an infrared transmitter with beam steering to provide high data rates. In this work, a four-branch angle diversity receiver (ADR) is used, and the resultant delay spread and SNR are examined. The proposed system achieves data rates up to 3.57 Gb/s using simple on-off keying (OOK).

26 citations


Journal ArticleDOI
01 Mar 2019
TL;DR: Software is the key crosscutting technology that enables advances in mathematics, computer science, and domain-specific science and engineering to achieve robust simulations and analysis for science, engineering, and other research fields.
Abstract: Software is the key crosscutting technology that enables advances in mathematics, computer science, and domain-specific science and engineering to achieve robust simulations and analysis for science, engineering, and other research fields. However, software itself has not traditionally received focused attention from research communities; rather, software has evolved organically and inconsistently, with its development largely as by-products of other initiatives. Moreover, challenges in scientific software are expanding due to disruptive changes in computer hardware, increasing scale and complexity of data, and demands for more complex simulations involving multiphysics, multiscale modeling and outer-loop analysis. In recent years, community members have established a range of grass-roots organizations and projects to address these growing technical and social challenges in software productivity, quality, reproducibility, and sustainability. This article provides an overview of such groups and discusses opportunities to leverage their synergistic activities while nurturing work toward emerging software ecosystems.

25 citations


Proceedings ArticleDOI
01 Aug 2019
TL;DR: This work shows that under normal conditions, the packet count distance between the TCP control plane and the corresponding data plane falls within a range of values, and that it exceeds these values during anomalous activities.
Abstract: Analyzing network traffic behavior is essential for detecting network anomalies. However, it remains a challenge to effectively analyze this behavior for anomaly diagnosis. One promising approach is to decompose network traffic into control and data planes and statistically analyze each plane's packet features. Both planes behave similarly under benign traffic, so any difference in their behavior may indicate an anomaly. In this work, we show that under normal conditions, the packet count distance between the two planes falls within a range of values; consecutive outliers to these values may reveal the presence of anomalies. We exploit Dynamic Time Warping (DTW) to obtain the best alignment of the two planes and measure the Euclidean distance between their corresponding instances. We investigate our approach using recent Internet traffic captured at King Saud University. Results support our argument and show that the distance between the TCP control plane and the corresponding data plane falls within a certain range of values for benign applications and exceeds these values during anomalous activities.

23 citations
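The alignment step described above can be sketched with a textbook DTW implementation (an illustrative assumption, not the authors' code): per-window packet counts from the two planes are warped into the best alignment, and the accumulated distance between matched points serves as the anomaly score.

```python
def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW between two 1-D series."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Per-window packet counts for the two planes (toy data).
control = [3, 5, 5, 9, 4]
data    = [3, 5, 9, 9, 4]      # same traffic, burst shifted by one window
print(dtw_distance(control, data))         # → 0.0: warping absorbs the shift

anomalous = [3, 5, 20, 9, 4]   # injected burst on the data plane
print(dtw_distance(control, anomalous))    # → 11.0: the spike cannot be warped away
```

This is why DTW is attractive here: a benign timing shift between the planes costs nothing after warping, while genuinely unmatched traffic still produces a large distance.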


Proceedings ArticleDOI
01 Aug 2019
TL;DR: CANDY CREAM is an attack in two parts: CANDY, which exploits a vulnerability exposed by an Android-based infotainment system connected to the vehicle's CAN bus, and CREAM, a post-exploitation script that injects customized CAN frames to alter the behaviour of the vehicle.
Abstract: Modern vehicle functionalities are regulated by Electronic Control Units (ECUs), from a few tens to a hundred, commonly interconnected through the Controller Area Network (CAN) communication protocol. CAN is not secure-by-design: authentication, integrity, and confidentiality were not considered in the design and implementation of the protocol. This represents one of the main vulnerabilities of modern vehicles: gaining access (physical or remote) to CAN communication allows a malicious entity to inject unauthorised messages on the CAN bus. These messages may lead to unexpected and possibly very dangerous behaviour of the target vehicle. In this paper, we present CANDY CREAM, an attack made of two parts: CANDY, aimed at exploiting a vulnerability exposed by an infotainment system based on the Android operating system connected to the vehicle's CAN bus network, and CREAM, a post-exploitation script that injects customized CAN frames to alter the behaviour of the vehicle.

16 citations


Proceedings ArticleDOI
Sukun Li, Sonal Savaliya, Leonard Marino, Avery Leider, Charles C. Tappert
01 Aug 2019
TL;DR: This study determines whether active portions of the brain are influenced by the presence of Virtual Reality and whether those brain signals can be used for user authentication; a new feature extraction method, combined with an EEG authentication model, is also proposed.
Abstract: The purpose of this research was to determine whether active portions of the brain are influenced by the presence of Virtual Reality (VR) and whether those brain signals can be used for user authentication. Electroencephalography (EEG) signals are individually unique and non-trivial to collect; because of this, EEG-based biometrics is one of the most reliable and most secure forms of biometric data. For this study, EEG signals were collected under two conditions: subjects viewed video using a VR headset and then using a laptop. During the collection of EEG data, volunteers started in a resting stage and then went into an active stage. In addition, a new feature extraction method is proposed in this paper, combined with an EEG authentication model. Brain waves collected for the VR and non-VR stages are compared for analysis purposes. Pre-processing and feature extraction of VR and non-VR EEG data were performed, and distance computations were carried out in an attempt to authenticate the identity of subjects.

16 citations


Proceedings ArticleDOI
01 Aug 2019
TL;DR: The work presented in this paper is expected to contribute to better alignment of big data terminology with its characteristics across researchers and to provide a standard set of characteristics for big data.
Abstract: Today, big data is considered an important area in the IT and business fields. Big data is the driving force behind the effective running and competitiveness of several companies and organizations. Based on a survey of the research literature, there is a lack of a standard definition and agreed-upon characteristics for big data from scientific and business perspectives. This paper provides a review of big data definitions by various researchers and attempts to standardize a definition. In doing so, the paper identifies some issues related to big data definitions and characteristics and raises some research questions. The paper categorizes these characteristics into those relevant to big data itself versus those related to the processing and tools of big data. It is expected that the work presented in this paper will contribute to better alignment of big data terminology with its characteristics across researchers, and that it will provide a standard set of characteristics for big data.

Journal ArticleDOI
01 Jan 2019
TL;DR: The effectiveness of the proposed similarity-based remaining-useful-life prediction method, which accounts for degradation degree, is demonstrated using turbofan engine data from NASA and a real case study.
Abstract: To address the problems of degradation index construction and prediction lag in similarity-based methods, a similarity-based remaining-useful-life prediction method that accounts for degradation degree is proposed. The effectiveness of the proposed method is demonstrated using turbofan engine data from NASA and a real case study.

Journal ArticleDOI
01 Jun 2019
TL;DR: In this article, a suitable probability density function is selected to model average daily wind speed data recorded over 10 years in the Gaza Strip; a Weibull probability distribution has been fitted for Gaza based on the 10-year average wind speed.
Abstract: The need for clean and renewable energy, the power shortage in the Gaza Strip, and the scarcity of wind energy studies conducted in Palestine motivate this paper. Probability density functions are commonly used to represent wind speed frequency distributions when evaluating the wind energy potential of a specific area. This study analyzes the climatology of the wind profile over the State of Palestine; selecting a suitable probability density function decreases the wind power estimation error. A probability density function is used to model average daily wind speed data recorded over 10 years in the Gaza Strip, and a Weibull distribution has been fitted to these data. The assessment analyzes the wind data using the Weibull probability function to determine the characteristics of wind energy conversion. Wind speed data measured from January 1996 to December 2005 in Gaza are used as a sample of actual data for this study. The main aim is to use the Weibull representation of the Gaza Strip wind data to build a statistical model over the ten-year period. The Weibull parameters were determined, based on a previous study, using seven numerical methods: the shape parameter is 1.7848 and the scale parameter is 4.3642 m/s. The average wind speed in the Gaza Strip based on the 10 years of actual data is 2.95 m/s, and the behavior of the wind velocity based on the probability density function shows that energy can be produced in the Gaza Strip.
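The standard Weibull relations behind such an assessment can be made concrete with the shape and scale parameters quoted in the abstract (a generic sketch of the textbook formulas; the air density value is our assumption, not from the paper):

```python
import math

k, c = 1.7848, 4.3642   # shape and scale (m/s) from the abstract

def weibull_pdf(v):
    """Weibull probability density of wind speed v (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

# Mean wind speed implied by the fitted distribution: c * Gamma(1 + 1/k).
mean_speed = c * math.gamma(1.0 + 1.0 / k)

# Mean wind power density per unit rotor area: 0.5 * rho * c^3 * Gamma(1 + 3/k).
rho = 1.225  # kg/m^3 — standard sea-level air density (assumed)
power_density = 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)
```

Comparing `mean_speed` with the measured long-term average is a quick sanity check on the fitted parameters, and `power_density` is the quantity that ultimately decides whether a site is worth exploiting.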

Journal ArticleDOI
01 Jan 2019
TL;DR: Discusses the dangers of fake news or misinformation being disseminated via online channels and how to protect yourself from it.
Abstract: Discusses the dangers of fake news or misinformation being disseminated via online channels.

Proceedings ArticleDOI
01 Aug 2019
TL;DR: The built-in front-facing camera of the smartphone is leveraged to continuously track the driver's facial features and to recognize early the dangerous states of drowsiness and distraction.
Abstract: Real-time driving behavior monitoring plays a significant role in intelligent transportation systems. Such monitoring increases traffic safety by reducing and eliminating the risk of potential traffic accidents. The vision-based approach, using video cameras for dangerous situation detection, is undoubtedly one of the most promising and commonly used ways of sensing the driver environment. In this case, the images of a driver captured with video cameras can describe facial features such as head movements, eye state, and mouth state, and thereby identify the current level of fatigue. In this paper, we leverage the built-in front-facing camera of the smartphone to continuously track the driver's facial features and to recognize early the dangerous states of drowsiness and distraction. Dangerous state recognition is divided into online and offline modes. Owing to the efficiency and performance of smartphones, in online mode the dangerous driving states are determined in real time on the mobile device with the aid of the computer vision libraries OpenCV and Dlib while driving. The offline mode, in contrast, is based on the results of statistical analysis provided by a cloud service, utilizing not only the statistics accumulated in real time but also data previously collected, stored, and processed by machine learning tools.
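One widely used facial feature for the eye-state part of such pipelines is the eye aspect ratio (EAR) computed over six eye landmarks. The sketch below is illustrative only — the paper does not state that it uses EAR, and the landmark coordinates and threshold are made up — but it shows how a closed eye separates from an open one with a simple ratio.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|) over six eye landmarks."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Hypothetical landmark pixel coordinates: eye corner, two upper-lid points,
# opposite corner, two lower-lid points, for an open and a nearly closed eye.
open_eye   = eye_aspect_ratio((0, 0), (10, 8), (20, 8), (30, 0), (20, -8), (10, -8))
closed_eye = eye_aspect_ratio((0, 0), (10, 1), (20, 1), (30, 0), (20, -1), (10, -1))

DROWSY_THRESHOLD = 0.25   # example cutoff (assumed)
print(open_eye > DROWSY_THRESHOLD, closed_eye > DROWSY_THRESHOLD)  # → True False
```

In a real pipeline the six points would come from a facial landmark detector such as Dlib's, and drowsiness would be flagged only when the ratio stays below the threshold for several consecutive frames.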

Proceedings ArticleDOI
01 Aug 2019
TL;DR: The paper investigates cloud vulnerabilities and security controls in terms of secure cryptography, analyzes technological security vulnerabilities and operational weaknesses, and suggests secure practices for cloud systems.
Abstract: Cloud computing is growing rapidly as a means of achieving convenient computing, but it carries inherent security vulnerabilities in software and hardware. To achieve secure cloud computing, these vulnerabilities should be analyzed systematically from the perspectives of the service designer, the service operator, the designer of cloud security, and certifiers of cloud systems. This paper investigates the vulnerabilities and security controls in terms of secure cryptography. To this end, it analyzes technological security vulnerabilities and operational weaknesses, and suggests secure practices for cloud systems.

Proceedings ArticleDOI
01 Aug 2019
TL;DR: Evaluation results show that the proposed abnormal behavior detection accurately identifies normal and aggressive driving patterns with an optimal number of abnormal behavior detection instances.
Abstract: In the real world, normal and abnormal behavior patterns vary depending on a given environment, which means that the abnormal behavior detection model should be customized. To address this issue, in this paper, we employ OS-ELM (Online Sequential Extreme Learning Machine) and an Autoencoder for adaptive abnormal behavior detection. First, state-transition probability tables of a target during an initial learning period are learned as normal behaviors. Then, Autoencoder-based anomaly detection is performed on the state-transition probability tables of subsequent time frames. The abnormal behavior detection model is updated using the OS-ELM algorithm every time a new probability table or behavior arrives. The number of abnormal behavior detection instances is dynamically tuned to reflect the recent normal patterns or modes, and the table is compressed to reduce the computation cost. Evaluation results using a driving dataset of cars show that the proposed abnormal behavior detection accurately identifies normal and aggressive driving patterns with an optimal number of abnormal behavior detection instances.
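The autoencoder half of the scheme can be sketched with a basic batch ELM autoencoder — an illustrative stand-in for the paper's OS-ELM updates, with toy data of our own: a fixed random hidden layer encodes each flattened state-transition probability table, least squares fits the decoding weights on normal data, and a high reconstruction error on a new table flags abnormal behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "normal" behavior: flattened 3x3 state-transition probability tables.
base = rng.dirichlet(np.ones(9), size=30)      # each row sums to 1
X = base + rng.normal(0.0, 0.01, base.shape)   # small observation noise

# ELM autoencoder: random fixed encoder, least-squares decoder.
W = rng.normal(size=(9, 50))                   # random input weights
b = rng.normal(size=50)                        # random hidden biases
H = np.tanh(X @ W + b)                         # hidden-layer activations
beta = np.linalg.pinv(H) @ X                   # decoder fit by least squares

train_err = np.linalg.norm(X - H @ beta, axis=1)

def reconstruction_error(x):
    """Anomaly score: how badly the autoencoder reconstructs table x."""
    h = np.tanh(x @ W + b)
    return float(np.linalg.norm(x - h @ beta))

anomaly = np.full(9, 5.0)                      # nothing like a probability table
print(reconstruction_error(anomaly) > train_err.max())   # → True
```

The online variant in the paper differs in that `beta` is updated incrementally as new tables arrive rather than refit from scratch, which is what makes the model adapt to drifting normal behavior.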

Proceedings ArticleDOI
01 Aug 2019
TL;DR: This paper develops a classification algorithm for process signal data based on segmented dynamic time warping using the maximum overlap discrete wavelet transform (MODWT) and random sample consensus (RANSAC), and validates it.
Abstract: The semiconductor manufacturing process is divided into the fabrication process and the packaging process. The fabrication process is the core process for manufacturing semiconductors and consists of about 700 unit processes. These unit processes accumulate vast amounts of data, and many manufacturing companies apply data-based algorithms to their manufacturing systems to improve process yield and quality. Data generated by the process equipment during semiconductor manufacturing is called fault detection and classification (FDC) trace data, and it has time-series characteristics with different patterns depending on the sensor type and the recipe. Therefore, it is necessary to develop a classification algorithm suited to the signal patterns for process monitoring. In this paper, we develop a segmented dynamic time warping technique specialized for process signal classification. Dynamic time warping (DTW) is generally known to have superior classification performance for time series data; however, standard DTW is limited in its ability to reflect the characteristics of semiconductor process signals. Therefore, we developed a classification algorithm for process signal data through segmented DTW using the maximum overlap discrete wavelet transform (MODWT) and random sample consensus (RANSAC), and validated it.

Proceedings ArticleDOI
01 Aug 2019
TL;DR: This paper proposes an architecture that securely stores and distributes attack signatures in real time for prompt detection, protecting them against malicious injection, manipulation, or deletion by leveraging the distributed ledger, data immutability, and tamper-proof properties of blockchain technology.
Abstract: The proliferation of cloud databases has increased their vulnerability to cyberattacks. Despite several proposed methods of securing databases, malicious intruders find ways to exploit their vulnerabilities and gain access to data, because cyberattacks are becoming more sophisticated and harder to detect. As a result, it is becoming very difficult for a single or isolated intrusion detection system (IDS) node to detect all attacks. With the adoption of a cooperative intrusion detection system, all attacks can be detected by an IDS node with the help of other IDS nodes. In cooperative intrusion detection, IDS nodes exchange attack signatures with the aim of promptly detecting any attack that has been detected by other IDSs. Therefore, the security of the database that houses these shared attack signatures becomes a significant problem; more specifically, detecting and/or preventing malicious signature injection, manipulation, or deletion becomes important. This paper proposes an architecture that securely stores and distributes these attack signatures in real time for the purpose of prompt detection. Our proposed architecture leverages the distributed ledger technology, data immutability, and tamper-proof abilities of blockchain technology. The performance of our system was examined by measuring the latency of the blockchain network.
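The tamper-evidence property the architecture relies on can be sketched in a few lines (hypothetical field names; real blockchain platforms add consensus, digital signatures, and Merkle trees on top): each block stores the hash of its predecessor, so altering any stored attack signature invalidates every later link.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, signature):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "signature": signature})

def verify(chain):
    """Check that every block still points at the true hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
for sig in ["sig-A", "sig-B", "sig-C"]:
    append_block(chain, sig)

print(verify(chain))               # → True: untampered ledger checks out
chain[1]["signature"] = "evil"     # malicious signature manipulation...
print(verify(chain))               # → False: ...breaks the hash link behind it
```

Because every IDS node can run `verify` independently against its replica, a manipulated signature is detected without trusting any single storage node.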

Proceedings ArticleDOI
Lizong Zhang, Shen Xiang, Fengming Zhang, Ren Minghui, Ge Binbin, Bo Li
01 Aug 2019
TL;DR: A time series anomaly detection model is proposed, based on the periodic extraction method of the discrete Fourier transform; it determines the sequence position of each element in the period by periodic overlapping mapping, thereby accurately describing the timing relationship between network messages.
Abstract: In the process of the informatization and networking of smart grids, the original physical isolation was broken, potential risks increased, and an increasingly serious cyber security situation is faced. It is therefore critical to develop accurate and efficient anomaly detection methods to disclose various threats. However, mainstream industrial security devices such as firewalls are not able to detect and resist some advanced behavior attacks. In this paper, we propose a time series anomaly detection model based on the periodic extraction method of the discrete Fourier transform. The model determines the sequence position of each element in the period by periodic overlapping mapping, thereby accurately describing the timing relationship between network messages. The experiments demonstrate that our model can detect cyber attacks such as man-in-the-middle, malicious injection, and DoS in a highly periodic network.
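The periodic-extraction step can be sketched with a plain DFT (our illustrative reading of the method, not the authors' code): the strongest non-DC frequency bin of a per-window message-count series gives the dominant period, and each message is then mapped to its position within that period.

```python
import cmath

def dominant_period(x):
    """Dominant period (in samples) of a series, via a naive DFT."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]               # remove the DC component
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2 + 1):
        coeff = sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return n / best_k

# Toy traffic: a polling cycle of four windows with counts 3, 2, 1, 0.
signal = [3, 2, 1, 0] * 16
period = dominant_period(signal)
print(period)                               # → 4.0

# Periodic overlapping mapping: window m sits at position m % period of its
# cycle; a message that breaks this alignment is a candidate anomaly.
```

A production implementation would use an FFT rather than this O(n²) loop, but the idea — period first, then per-cycle position, then deviation scoring — is the same.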

Journal ArticleDOI
01 Jan 2019
TL;DR: Discusses why MT is an appropriate testing technique for scientists and engineers who are not primarily trained as software developers, and how it can be used to conduct systematic and effective testing of programs that lack test oracles without requiring additional testing tools.
Abstract: Testing scientific software is a difficult task due to its inherent complexity and the lack of test oracles. In addition, these software systems are usually developed by end-user developers who are not normally trained as professional software developers or testers. These factors often lead to inadequate testing. Metamorphic testing (MT) is a simple yet effective technique for testing such applications. Even though MT is a well-known technique in the software testing community, it is not well utilized by scientific software developers. The objective of this paper is to present MT as an effective technique for testing scientific software. To this end, we discuss why MT is an appropriate testing technique for scientists and engineers who are not primarily trained as software developers, and specifically how it can be used to conduct systematic and effective testing of programs that do not have test oracles, without requiring additional testing tools.
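A small self-contained illustration of MT (our example, not one from the paper): the program under test is a sample standard deviation routine, for which no practical oracle exists on arbitrary floating-point inputs, yet several metamorphic relations must hold between related runs.

```python
import math
import random

def std_dev(xs):
    """The 'program under test': sample standard deviation."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

random.seed(1)
data = [random.uniform(0.0, 100.0) for _ in range(200)]

# MR1: shifting every input by a constant must not change the output.
assert math.isclose(std_dev(data), std_dev([x + 17.0 for x in data]))

# MR2: scaling every input by c must scale the output by |c|.
assert math.isclose(3.0 * std_dev(data), std_dev([3.0 * x for x in data]))

# MR3: reordering the inputs must not change the output.
assert math.isclose(std_dev(data), std_dev(sorted(data, reverse=True)))

print("all metamorphic relations hold")
```

Each relation checks the program against itself on transformed inputs rather than against a known answer, which is exactly what makes MT attractive when the "correct" output is unknowable.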

Proceedings ArticleDOI
01 Aug 2019
TL;DR: This paper proposes a new geographic routing strategy of applying NDN in vehicular networks with Delay Tolerant Networking (DTN) support, called GeoDTN-NDN, and introduces a hybrid geographic routing solution with restricted greedy, greedy, perimeter, and DTN modes in packet forwarding.
Abstract: The Vehicular Ad Hoc Network (VANET) is used for communication among vehicles to provide traffic and other important information critical for smart transportation. Named Data Networking (NDN) is a recently proposed future Internet architecture that focuses on what the content is rather than where the host is. In this paper, we propose a new geographic routing strategy of applying NDN in vehicular networks with Delay Tolerant Networking (DTN) support, called GeoDTN-NDN. One challenge of using NDN in VANET is that in addition to the flooding problem of interest forwarding, data forwarding and delivery may also experience disruption because of the high mobility of vehicles in VANETs. We adapt geographical routing mechanisms to deal with the flooding problem of interest forwarding and the disruption problem of data forwarding in NDN. We introduce a hybrid geographic routing solution with restricted greedy, greedy, perimeter, and DTN modes in packet forwarding. To evaluate the performance, we compare the results of our solution with the original Vehicular Inter-Networking via Named Data Networking (V-NDN). Our hybrid geographic routing solution that deals with both interest forwarding and data delivery results in better performance.
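The forwarding decision at the heart of such hybrid schemes can be sketched as follows (the mode names follow the abstract; the logic and coordinates are our simplification): greedy mode hands the packet to the neighbor geographically closest to the destination, and when no neighbor makes progress the packet falls back to perimeter/DTN handling instead of being dropped.

```python
import math

def next_hop(me, neighbors, dest):
    """Greedy geographic forwarding with a perimeter/DTN fallback."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    best = min(neighbors, key=lambda n: dist(n, dest), default=None)
    if best is not None and dist(best, dest) < dist(me, dest):
        return ("greedy", best)
    # Local maximum: no neighbor is closer to the destination than we are,
    # so carry the packet (DTN mode) or route around the void (perimeter mode).
    return ("perimeter_or_dtn", None)

# Toy vehicle positions (x, y) in meters.
me, dest = (0.0, 0.0), (100.0, 0.0)
print(next_hop(me, [(30.0, 10.0), (-20.0, 0.0)], dest))  # greedy hop forward
print(next_hop(me, [(-20.0, 0.0)], dest))                # no progress: fall back
```

The DTN fallback is what makes the scheme robust to the intermittent connectivity of VANETs: rather than failing at a void, the vehicle carries the packet until a better neighbor appears.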

Journal ArticleDOI
Xun Wu, Tefang Chen, Yating Chen, Chaoqun Xiang, Zhi Liu, Kaidi Li
01 May 2019
TL;DR: A noninvasive diagnosis method for open-circuit faults of a locomotive inverter: two voltage transients are defined according to output voltage values, and their ratio is used for diagnosis.
Abstract: A noninvasive diagnosis method for open-circuit faults of a locomotive inverter is proposed. Two different voltage transients are defined according to output voltage values, and their ratio is used for diagnosis. Faults can be detected immediately when they occur. The method and its robustness are verified on dSPACE.

Journal ArticleDOI
01 Mar 2019
TL;DR: The lazy refactoring technique, which calls for code with clearly defined interfaces and sharply delimited scopes to maximize reuse and integrability, helps reduce total development time and accelerates the production of scientific results.
Abstract: A critical challenge in scientific computing is balancing developing high-quality software with the need for immediate scientific progress. We present a flexible approach that emphasizes writing specialized code that is refactored only when future immediate scientific goals demand it. Our lazy refactoring technique, which calls for code with clearly defined interfaces and sharply delimited scopes to maximize reuse and integrability, helps reduce total development time and accelerates the production of scientific results. We offer guidelines for how to implement such code, as well as criteria to aid in the evaluation of existing tools. To demonstrate their application, we showcase the development progression of tools for particle simulations originating from the Glotzer Group at the University of Michigan. We emphasize the evolution of these tools into a loosely integrated software stack of highly reusable software that can be maintained to ensure the long-term stability of established research workflows.

Journal ArticleDOI
01 Nov 2019
TL;DR: The experimental analysis shows that the proposed HFRNN algorithm significantly outperforms current leading algorithms, including fuzzy–rough nearest-neighbor, vaguely quantified rough sets, similarity nearest-neighbor, and aggregated-similarity nearest-neighbor.
Abstract: The fusion of the hesitant fuzzy set (HFS) and the fuzzy–rough set (FRS) is explored and applied to the task of classification due to its capability of conveying hesitancy and uncertainty information. In this paper, building on the equivalence relations between hesitant fuzzy elements and HFS operation updating, the target instances are classified by employing the lower and upper approximations of hesitant FRS theory. Extensive performance analysis has been conducted, covering classification accuracy, execution time, and the impact of the k parameter, to evaluate the proposed hesitant fuzzy–rough nearest-neighbor (HFRNN) algorithm. The experimental analysis shows that the proposed HFRNN algorithm significantly outperforms current leading algorithms, including fuzzy–rough nearest-neighbor, vaguely quantified rough sets, similarity nearest-neighbor, and aggregated-similarity nearest-neighbor.

Proceedings ArticleDOI
01 Aug 2019
TL;DR: This paper is about extracting live Twitter data regarding any topic, converting it from unstructured to structured form, extracting opinions from the text data, and assigning a polarity to each tweet.
Abstract: Opinion mining and extracting the sentiments of people is a pressing need today, in the era of Big Data. Social networking sites make it easy to analyze people's sentiments. Sentiment analysis is a technique to extract people's opinions regarding any product, issue, or personality. This paper is about extracting live Twitter data regarding any topic and converting it from unstructured to structured form. Opinions are extracted from the text data, and a polarity is assigned to each tweet. The polarity can be positive, negative, or neutral. The most recent and popular opinions can be extracted. This is useful for both market analysts and customers: customers get honest reviews about any product, and companies learn their customers' interests. Furthermore, predictions are also given based on the classified data.
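The polarity-assignment step can be sketched with a tiny lexicon-based scorer (a toy lexicon and scoring rule of our own, not the paper's classifier):

```python
# Toy sentiment lexicon (hypothetical; real systems use lexicons with
# thousands of scored words, or trained classifiers).
POSITIVE = {"good", "great", "love", "honest", "useful", "happy"}
NEGATIVE = {"bad", "poor", "hate", "fake", "useless", "angry"}

def polarity(tweet):
    """Assign positive/negative/neutral polarity by lexicon word counts."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this product, great battery"))  # → positive
print(polarity("hate the update, useless app"))        # → negative
print(polarity("the parcel arrived on Tuesday"))       # → neutral
```

Real pipelines additionally strip punctuation, handle negation ("not good"), and weight words by strength, but the structured output — one polarity label per tweet — is the same.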

Proceedings ArticleDOI
01 Aug 2019
TL;DR: A generic, protocol-independent approach for the detection of network storage covert channels is proposed using a supervised machine learning technique and can lead to a reduction of necessary techniques to prevent covert channel communication in network traffic.
Abstract: Network covert channels are used in various cyberattacks, including disclosure of sensitive information and enabling stealth tunnels for botnet commands. With time and technology, covert channels are becoming more prevalent, complex, and difficult to detect. Current detection methods are protocol- and pattern-specific, which requires investing significant time and resources in applying various techniques to catch the different types of covert channels. This paper reviews several patterns of network storage covert channels, describes the generation of a network traffic dataset with covert channels, and proposes a generic, protocol-independent approach for detecting network storage covert channels using a supervised machine learning technique. Implementing the proposed generic detection model can reduce the number of techniques needed to prevent covert channel communication in network traffic. The datasets we have generated for experimentation represent storage covert channels in the IP, TCP, and DNS protocols and are available upon request for future research in this area.
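The supervised-learning setup can be sketched as follows. The paper does not name its classifier or feature set, so the random forest, the three header-derived features, and the synthetic data below are all illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for header-derived features (hypothetical examples:
# IP ID entropy, TCP ISN variance, DNS label length ratio), labelled
# 0 = benign traffic, 1 = storage covert channel.
rng = np.random.default_rng(0)
benign = rng.normal(loc=0.3, scale=0.1, size=(500, 3))
covert = rng.normal(loc=0.7, scale=0.1, size=(500, 3))
X = np.vstack([benign, covert])
y = np.array([0] * 500 + [1] * 500)

# A generic, protocol-independent detector: the same model is trained on
# features extractable from any of the IP, TCP, or DNS header fields.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
suspect = clf.predict([[0.72, 0.69, 0.71]])  # features resembling a covert flow
```

The point of the generic model is that only the feature extraction is per-protocol; the trained classifier itself is shared, which is what reduces the number of detection techniques to maintain.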

Proceedings ArticleDOI
01 Aug 2019
TL;DR: KLog-Home project is being carried out as an initiative deployed to an elderly care home to monitor the elderly inhabitants continuously to help caregivers get notified of critical events and act instantly to avoid fatal incidents.
Abstract: Advances in medicine and healthcare research have increased human life expectancy, growing the aging population in many countries. Owing to the rapidly improving cost-to-performance ratio of the Internet of Things (IoT), many sectors are adopting IoT, from agriculture to healthcare and daily-life applications; monitoring elderly patients pervasively with IoT can improve elderly care while minimizing caregivers' effort. Furthermore, transparent and uninterrupted monitoring helps caregivers find patterns in elderly patients before precarious incidents, which are far more costly to care for. Ambient Assisted Living (AAL) denotes the concept of improving the wellness and health conditions of older adults; in the current scenario, AAL extends to caregivers learning elderly behavior to avoid future accidents. In this context, the KLog-Home project is being carried out as an initiative deployed in an elderly care home to monitor the elderly inhabitants continuously, so that caregivers are notified of critical events and can act instantly to avoid fatal incidents. KLog-Home enriches the concept of AAL with video, enabling Video Assisted Ambient Living (VAAL). This multi-modal scenario is achieved by leveraging IoT and communication technologies, envisioning a solution that provides ubiquitous care to senior people while enabling them to live an independent, safe, and secure life.

Journal ArticleDOI
01 Mar 2019
TL;DR: CIG's best practices, lessons learned, and community practices are discussed, and how development of high-quality, reusable scientific software has accelerated scientific discovery by enabling simulations of the dynamics of Earth's surface and interior across a wide spectrum of problems using resources from laptops to leadership-class supercomputers are highlighted.
Abstract: The domain of geophysics has historically been a driver of scientific software development due to the size, complexity, and societal importance of the research questions. Geophysical computation complements field observation, laboratory analysis, experiment, and theory. Specialized scientific software is regularly developed by geophysicists in collaboration with computational scientists and applied mathematicians; in this cross-disciplinary environment, reusability is critically important both to preserve the intellectual investment and to ensure the quality of the research and its replicability. The Computational Infrastructure for Geodynamics (CIG) is a “community of practice” that advances Earth science by developing and disseminating software for geophysics and related fields. We discuss CIG's best practices, lessons learned, and community practices, and highlight how development of high-quality, reusable scientific software has accelerated scientific discovery by enabling simulations of the dynamics of Earth's surface and interior across a wide spectrum of problems using resources from laptops to leadership-class supercomputers.

Proceedings ArticleDOI
01 Aug 2019
TL;DR: Test4Deep is presented, a white-box testing framework based on a single DNN that avoids mistakes of multiple DNNs by inducing inconsistencies between predicted labels of original inputs and that of generated test inputs and improves neuron coverage to capture more diversity.
Abstract: Current testing for Deep Neural Networks (DNNs) focuses on the quantity of test cases but ignores diversity. To the best of our knowledge, DeepXplore is the first white-box framework for deep learning testing; it triggers differential behaviors between multiple DNNs and increases neuron coverage to improve diversity. However, because it relies on multiple DNNs, it faces two problems: (1) the framework does not apply to a single DNN, and (2) when all DNNs simultaneously make the same incorrect prediction, DeepXplore cannot generate test cases. This paper presents Test4Deep, a white-box testing framework based on a single DNN. Test4Deep avoids the pitfalls of multiple DNNs by inducing inconsistencies between the predicted labels of original inputs and those of generated test inputs. Meanwhile, Test4Deep improves neuron coverage, capturing more diversity, by attempting to activate more inactivated neurons. The proposed method was evaluated on three popular datasets with nine DNNs. Compared to DeepXplore, Test4Deep produced on average 4.59% (maximum 10.49%) more test cases, all of which exposed errors and faults in the DNNs. These test cases achieved a 19.57% greater diversity increment and a 25.88% increment in neuron coverage. Test4Deep can further be used to improve the accuracy of DNNs by an average of up to 5.72% (maximum 7.0%).