
Showing papers presented at "Computational Science and Engineering in 2016"


Journal ArticleDOI
25 May 2016
TL;DR: OpenPNM, as discussed by the authors, is a framework for modeling and simulating pore networks, written in Python using NumPy and SciPy for most mathematical operations, thus combining Python's ease of use with the performance necessary for large simulations.
Abstract: Pore network modeling is a widely used technique for simulating multiphase transport in porous materials, but there are very few software options available. This work outlines the OpenPNM package that was jointly developed by several porous media research groups to help address this gap. OpenPNM is written in Python using NumPy and SciPy for most mathematical operations, thus combining Python's ease of use with the performance necessary to perform large simulations. The package assists the user with managing and interacting with all the topological, geometrical, and thermophysical data. It also includes a suite of commonly used algorithms for simulating percolation and performing transport calculations on pore networks. Most importantly, it was designed to be highly flexible to suit any application and be easily customized to include user-specified pore-scale physics models. The framework is fast, powerful, and concise. An illustrative example is included that determines the effective diffusivity through a partially water-saturated porous material with just 29 lines of code.

215 citations
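The linear transport solve at the heart of pore network modeling can be illustrated without the OpenPNM API (whose calls are not reproduced here): a minimal NumPy sketch of diffusion through a chain of pores with assumed throat conductances.

```python
import numpy as np

# Minimal sketch (NOT the OpenPNM API): steady diffusion through a 1-D
# chain of pores connected by throats with given diffusive conductances.
# Interior pores satisfy mass balance; boundary pores hold fixed values.
def solve_chain(g, c_in=1.0, c_out=0.0):
    """g: conductances of the n-1 throats linking n pores in series."""
    n = len(g) + 1
    A = np.zeros((n, n))
    b = np.zeros(n)
    for t, gt in enumerate(g):              # assemble the Laplacian-like system
        i, j = t, t + 1
        A[i, i] += gt; A[j, j] += gt
        A[i, j] -= gt; A[j, i] -= gt
    A[0, :] = 0; A[0, 0] = 1; b[0] = c_in   # Dirichlet boundary conditions
    A[-1, :] = 0; A[-1, -1] = 1; b[-1] = c_out
    c = np.linalg.solve(A, b)
    rate = g[0] * (c[0] - c[1])             # flux through the first throat
    return c, rate

conc, flux = solve_chain(np.array([2.0, 1.0, 2.0]))
```

For three throats in series the effective conductance is the harmonic combination 1/(1/2 + 1/1 + 1/2) = 0.5, so a unit concentration difference drives a flux of 0.5.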


Journal ArticleDOI
21 Jun 2016
TL;DR: By discussing the board game (Go) and the recent successful computer system (AlphaGo), associate EIC Jim X. Chen presents an overview of the evolution of computing and hardware efficiency, which has a significant impact on artificial intelligence.
Abstract: By discussing the board game (Go) and the recent successful computer system (AlphaGo), associate EIC Jim X. Chen presents an overview of the evolution of computing and hardware efficiency. The emphasis is on sharing information about the evolution of computing, which has a significant impact on artificial intelligence.

126 citations


Book ChapterDOI
04 Oct 2016
TL;DR: This article provides an overview of hp-version inverse estimates and approximation results for general polytopic elements, which are sharp with respect to element facet degeneration, and derives a priori error bounds for the hp-DGFEM approximation of both second-order elliptic and first-order hyperbolic PDEs.
Abstract: The numerical approximation of partial differential equations (PDEs) posed on complicated geometries, which include a large number of small geometrical features or microstructures, represents a challenging computational problem. Indeed, the use of standard mesh generators, employing simplices or tensor product elements, for example, naturally leads to very fine finite element meshes, and hence the computational effort required to numerically approximate the underlying PDE problem may be prohibitively expensive. As an alternative approach, in this article we present a review of composite/agglomerated discontinuous Galerkin finite element methods (DGFEMs) which employ general polytopic elements. Here, the elements are typically constructed as the union of standard element shapes; in this way, the minimal dimension of the underlying composite finite element space is independent of the number of geometrical features. In particular, we provide an overview of hp-version inverse estimates and approximation results for general polytopic elements, which are sharp with respect to element facet degeneration. On the basis of these results, a priori error bounds for the hp-DGFEM approximation of both second-order elliptic and first-order hyperbolic PDEs will be derived. Finally, we present numerical experiments which highlight the practical application of DGFEMs on meshes consisting of general polytopic elements.

77 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: It is shown that even though ESNs cannot outperform state-of-the-art convolutional networks, they achieve low error thanks to suitable preprocessing of the images, and committees of smaller reservoirs are also appealing and might be further investigated.
Abstract: Reservoir computing is an attractive paradigm of recurrent neural network architecture, due to the ease of training and existing neuromorphic implementations. Although it has been successfully applied to speech recognition and time series forecasting, few works have so far studied the behavior of such networks on computer vision tasks. We therefore investigate the ability of Echo State Networks (ESNs) to classify the digits of the MNIST database. We show that even though ESNs cannot outperform state-of-the-art convolutional networks, they achieve low error thanks to suitable preprocessing of the images. The best performance is obtained with a large reservoir of 4,000 neurons, but committees of smaller reservoirs are also appealing and might be further investigated.

52 citations
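The ESN architecture the paper evaluates can be sketched in a few lines of NumPy; this is an illustrative toy (a one-step-memory task with hypothetical hyperparameters), not the authors' MNIST setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal echo state network: a fixed random reservoir whose linear
# readout is trained by ridge regression. For MNIST the inputs would be
# pixel columns fed in sequence; a toy task keeps this example small.
class ESN:
    def __init__(self, n_in, n_res=100, rho=0.9, alpha=1e-4):
        self.Win = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # fix spectral radius
        self.W, self.alpha = W, alpha

    def _states(self, X):
        h = np.zeros(len(self.W))
        out = []
        for x in X:                          # recurrent reservoir update
            h = np.tanh(self.Win @ x + self.W @ h)
            out.append(h.copy())
        return np.array(out)

    def fit(self, X, Y):
        H = self._states(X)
        A = H.T @ H + self.alpha * np.eye(H.shape[1])
        self.Wout = np.linalg.solve(A, H.T @ Y).T   # ridge-regression readout
        return self

    def predict(self, X):
        return self._states(X) @ self.Wout.T

# Toy task: output the previous input (requires reservoir memory).
u = rng.uniform(-1, 1, (300, 1))
target = np.roll(u, 1, axis=0)
esn = ESN(n_in=1).fit(u, target)
pred = esn.predict(u)
```

Only the readout is trained, which is what makes ESNs cheap compared with backpropagation through a convolutional network.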


Journal ArticleDOI
13 May 2016
TL;DR: Findings suggest that, in order to capitalize on the talent and perspective offered by the LGBTQ community, the field of computing should be especially attentive to LGBTQ students' sense of fit in the computing community.
Abstract: The field of computing is rapidly developing, requiring a strong and diverse labor force. The authors' work assessed the relationship between lesbian, gay, bisexual, transgender, and queer (LGBTQ) students' sense of belonging in computing and thoughts about leaving the field. The results of two studies indicated that among undergraduate students (study 1) and graduate students (study 2), thoughts about leaving a computing program were associated with feeling a low sense of belonging in the computing community. Importantly, LGBTQ women reported the lowest sense of belonging among all student groups in the samples. These findings suggest that, in order to capitalize on the talent and perspective offered by the LGBTQ community, the field of computing should be especially attentive to LGBTQ students' sense of fit in the computing community.

49 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: In this article, the "Google Cluster Trace", a trace of a cluster of 11k machines released by Google containing cell information covering about 29 days, is analyzed, and a statistical profile of resource usage and requirements in the trace is provided.
Abstract: Cloud computing has gained interest amongst commercial organizations, research communities, developers, and other individuals during the past few years. In order to move ahead with research in the field of data management and to enable the processing of such data, we need benchmark datasets and freely available, publicly accessible data. In May 2011, Google released a trace of a cluster of 11k machines referred to as the "Google Cluster Trace". This trace contains cell information covering about 29 days. This paper provides an analysis of resource usage and requirements in this trace and attempts to give insight into this kind of production trace, similar to those in cloud environments. The major contributions of this paper include a statistical profile of jobs based on resource usage, clustering of workload patterns, and classification of jobs into different types based on k-means clustering. Although there have been earlier analyses of this trace, ours provides several new findings, such as that jobs in a production trace are trimodal and that the tasks within a long job type exhibit symmetry.

43 citations
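The k-means step of the analysis can be sketched as follows; the job profiles below are synthetic stand-ins, since the trace itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cluster jobs by their (CPU, memory) usage with k-means (Lloyd's
# algorithm). In practice k-means++ seeding or random restarts would be
# used; a simple deterministic seeding keeps this sketch reproducible.
def kmeans(X, k, iters=50):
    centers = X[:: len(X) // k][:k].copy()   # evenly strided initial centers
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)            # assign each job to nearest center
        for j in range(k):
            if np.any(labels == j):          # leave empty clusters in place
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Three synthetic job types: small, medium, and large resource footprints.
jobs = np.vstack([rng.normal(m, 0.05, (100, 2)) for m in (0.1, 0.4, 0.8)])
labels, centers = kmeans(jobs, k=3)
```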


Journal ArticleDOI
01 Mar 2016
TL;DR: In this article, a study of Black engineering PhD students and postdoctoral scholars investigates their career decision-making processes concerning the professoriate, and a new approach develops a mentoring curriculum that raises racial and gender consciousness by utilizing the expertise of scholars from various social science disciplines.
Abstract: Engineering faculty members play a multifaceted role in the profession in that they help discover, promote, and disseminate advancements in technology, and they engage in capacity-building by training a future workforce of engineers. However, many potential faculty members are dissuaded from academia. A study of Black engineering PhD students and postdoctoral scholars investigates their career decision-making processes concerning the professoriate. The racial and gendered experiences of these students and scholars have impacted their desires and choices to pursue an academic career. Programmatic innovation is needed within graduate mentoring programs to address racial, gender, and other identity-based biases within engineering and academia, in addition to traditional content that focuses on presentation skills, networking, and other professional development areas. A new approach develops a mentoring curriculum that raises racial and gender consciousness by utilizing the expertise of scholars from various social science disciplines.

40 citations


Journal ArticleDOI
01 Jul 2016
TL;DR: Molecular simulation is an excellent tool to predict gas solubilities in solvents and may be used as a screening tool to navigate through the large number of theoretically possible ILs.
Abstract: Monte Carlo simulations are used to calculate the solubility of natural gas components in ionic liquids (ILs) and Selexol, which is a mixture of poly(ethylene glycol) dimethyl ethers. The solubility of the pure gases carbon dioxide (CO2), methane (CH4), ethane (C2H6), and sulfur dioxide (SO2) in the ILs 1-alkyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([Cnmim][Tf2N], n = 4, 6), 1-ethyl-3-methylimidazolium diethylphosphate ([emim][dep]), and Selexol (CH3O[CH2CH2O]nCH3, n = 4, 6) has been computed at 313.15 K and several pressures. The gas solubility trend observed in the experiments and simulations is SO2 > CO2 > C2H6 > CH4. Overall, the Monte Carlo simulation results are in quantitative agreement with existing experimental data. Molecular simulation is an excellent tool for predicting gas solubilities in solvents and may be used as a screening tool to navigate the large number of theoretically possible ILs.

35 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: Evaluation of the proposed Bitcoin Clustering Based Super Node protocol as a mechanism to speed up information propagation in the Bitcoin network is presented, and results show that the presented clustering protocol is able to reduce the transaction propagation delay by a reasonable proportion.
Abstract: Bitcoin is a digital currency based on a peer-to-peer network that propagates and verifies transactions. Bitcoin differs from traditional currencies in that it does not rely on a centralised authority. In this paper, we present an event-based simulation model of the Bitcoin peer-to-peer network. Large-scale measurements of the real Bitcoin network are performed in order to enable a precise parameterisation of the presented simulation model. In addition, validation results reveal that the presented simulation model closely matches the behaviour of the real Bitcoin network. Based on the developed simulation model, an evaluation of our proposed Bitcoin Clustering Based Super Node (BCBSN) protocol as a mechanism to speed up information propagation in the Bitcoin network is presented. Evaluation results show that the presented clustering protocol is able to reduce the transaction propagation delay by a reasonable proportion.

28 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: An approach to identifying Arabic dialects within social media in an unsupervised manner is proposed, using an Algerian dialect lexicon based on the Algiers dialect and an algorithm that performs three kinds of identification.
Abstract: The identification of Arabic dialects is considered to be the first pre-processing component of any NLP problem. This identification is useful for automatic translation, information retrieval, opinion mining, and sentiment analysis. Most work on dialect identification treats this issue as a standard classification problem and is based on supervised learning. The majority of it targets the EGYPTIAN (EGY), TUNISIAN (TUN), and IRAQI dialects, among others, while omitting the Algerian dialect. The purpose of this paper is to identify Arabic dialects within social media in an unsupervised manner. To do so, we use an Algerian dialect lexicon based on the Algiers dialect. We also propose an approach based on an algorithm that performs three kinds of identification: 1) total (when the term is identified in full), 2) partial (when the term is identified only partially, via prefix and suffix), and 3) approximate, using an improved Levenshtein distance (based on the classical Levenshtein distance and taking into account the number of characters in the words compared). We applied our algorithm to a corpus of 100 messages collected using the Facebook API, obtaining an identification rate exceeding 60%.

27 citations
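The classical Levenshtein distance underlying the paper's improved variant can be sketched as follows; the length-normalised score is a hypothetical illustration of a length-aware adjustment, not the paper's exact formula.

```python
# Classical Levenshtein (edit) distance via the standard two-row
# dynamic-programming recurrence.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def normalised_distance(a: str, b: str) -> float:
    """Hypothetical length-aware score in [0, 1]; 0 means identical."""
    return levenshtein(a, b) / max(len(a), len(b), 1)
```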


Proceedings ArticleDOI
01 Aug 2016
TL;DR: In this article, the authors share their experience in troubleshooting coexistence problems in operational IIoT networks by reporting on examples that show the possible hurdles in carrying out failure analysis, and outline an architecture for a system that can observe multiple communication standards and unknown sources of interference.
Abstract: The ever-growing proliferation of wireless devices and technologies used for Internet of Things (IoT) applications, such as patient monitoring, military surveillance, and industrial automation and control, has created an increasing need for methods and tools for connectivity prediction, information flow monitoring, and failure analysis to increase the dependability of the wireless network. Indeed, in a safety-critical Industrial IoT (IIoT) setting, such as a smart factory, harsh signal propagation conditions combined with interference from coexisting radio technologies operating in the same frequency band may lead to poor network performance or even application failures despite precautionary measures. Analyzing and troubleshooting such failures on a large scale is often difficult and time-consuming. In this paper, we share our experience in troubleshooting coexistence problems in operational IIoT networks by reporting on examples that show the possible hurdles in carrying out failure analysis. Our experience motivates the need for a user-friendly, automated failure analysis system, and we outline an architecture for such a system that makes it possible to observe multiple communication standards and unknown sources of interference.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: DockerCap is proposed, a software-level power capping orchestrator for Docker containers that follows an Observe-Decide-Act loop structure, which allows it to react quickly to changes that impact power consumption by managing the resources of each container at run time to ensure the desired power cap.
Abstract: The Internet of Things (IoT) is experiencing a huge hype these days, thanks to the increasing capabilities of embedded devices, which enable their adoption in new fields of application (e.g. wireless sensor networks, connected cars, health care, etc.). On the one hand, this is leading to an increasing adoption of multi-tenancy solutions for cloud and fog computing to analyze and store the data produced. On the other hand, power consumption has become a major concern for almost every digital system, from the smallest embedded circuits to the biggest computer clusters, with all the shades in between. Fine-grained control mechanisms are therefore needed to cap power consumption at each level of the stack while still guaranteeing Service Level Agreements (SLAs) to the hosted applications. In this work, we propose DockerCap, a software-level power capping orchestrator for Docker containers that follows an Observe-Decide-Act loop structure: this allows it to react quickly to changes that impact power consumption by managing the resources of each container at run time, ensuring the desired power cap. We show that we are able to obtain results comparable with the state-of-the-art power capping solution provided by Intel RAPL, while still being able to tune the performance of the containers and even guarantee SLA constraints.

Book ChapterDOI
01 Jan 2016
TL;DR: The Riemannian BFGS method converges globally to a stationary point without assuming that the objective function is convex and superlinearly to a nondegenerate minimizer.
Abstract: In this paper, a Riemannian BFGS method is defined for minimizing a smooth function on a Riemannian manifold endowed with a retraction and a vector transport. The method is based on a Riemannian generalization of a cautious update and a weak line search condition. It is shown that the Riemannian BFGS method converges (i) globally to a stationary point without assuming that the objective function is convex and (ii) superlinearly to a nondegenerate minimizer. The weak line search condition completely removes the need to consider the differentiated retraction. The joint diagonalization problem is used to demonstrate the performance of the algorithm with various parameters, line search conditions, and pairs of retraction and vector transport.
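For reference, the Euclidean cautious BFGS update that the paper generalizes to manifolds has the following form; the threshold function θ below is schematic, and the paper's exact Riemannian condition may differ.

```latex
% Classical BFGS update of the Hessian approximation B_k,
% with s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k):
B_{k+1} = B_k
        - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
        + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}.
% Cautious variant: perform this update only when the curvature is
% sufficient,
%   \frac{y_k^{\top} s_k}{\|s_k\|^2} \;\ge\; \theta\big(\|\nabla f(x_k)\|\big),
% and otherwise keep B_{k+1} = B_k. The Riemannian version replaces the
% vector differences by a retraction and a vector transport.
```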

Journal ArticleDOI
Jianlong Zhou1, M. Asif Khawaja1, Zhidong Li1, Jinjun Sun1, Yang Wang1, Fang Chen1 
01 Jan 2016
TL;DR: The study showed that revealing the internal states of the ML process can help improve the ease of understanding the data analysis process, make real-time status updates more meaningful, and make ML results more convincing.
Abstract: Machine learning (ML) techniques are often found difficult to apply effectively in practice because of their complexity. Therefore, making ML usable has recently emerged as an active research field. Furthermore, an ML algorithm is still a 'black box', which makes it difficult for users to understand complicated ML models. As a result, the user is uncertain about the usefulness of ML results, and this affects the effectiveness of ML methods. This paper focuses on making a 'black-box' ML process transparent by presenting real-time internal status updates of the ML process to users explicitly. A user study was performed to investigate the impact of revealing internal status updates to users on the ease of understanding the data analysis process, the meaningfulness of real-time status updates, and the convincingness of ML results. The study showed that revealing the internal states of the ML process can help improve the ease of understanding the data analysis process, make real-time status updates more meaningful, and make ML results more convincing.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: The proposed statistical and spectral feature based method for automated apnea detection from single-lead electrocardiogram outperforms state-of-the-art methods in terms of accuracy.
Abstract: An automatic sleep apnea detection algorithm is essential not only for alleviating the burden on physicians of analyzing a high volume of data but also for making a portable sleep quality evaluation device feasible. Most prior studies are either multi-lead based or yield poor accuracy, which hinders the aforementioned goals. In this work, we propound a statistical and spectral feature based method for automated apnea detection from a single-lead electrocardiogram. The efficacy of the selected features is demonstrated by intuitive, graphical, and statistical validation. RUSBoost is introduced for sleep apnea classification. Moreover, most of the existing works focus on the feature extraction part, while the effect of different classification models is poorly studied. Besides propounding an automated sleep apnea screening method, we study the performance of eight well-known classifiers with our feature extraction scheme. The optimal choices of parameters for RUSBoost are also inspected. The results of our experiments show that the proposed method outperforms the state-of-the-art ones in terms of accuracy.
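RUSBoost's distinguishing ingredient, random under-sampling of the majority class before each boosting round, can be isolated in a short sketch (synthetic data; the boosting loop itself is standard AdaBoost-style and omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Before each boosting round, RUSBoost randomly under-samples the
# majority class so the weak learner trains on balanced data. This is
# that sampling step in isolation.
def random_undersample(X, y):
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

# Synthetic imbalanced dataset: ~10% positive (apnea) epochs.
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.1).astype(int)
Xb, yb = random_undersample(X, y)
```

Under-sampling discards information on each round, but because boosting draws a fresh random subset every round, the ensemble as a whole still sees most of the majority class.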

Journal ArticleDOI
13 May 2016
TL;DR: An iterative and incremental development methodology for simulation models in network engineering projects is presented, with which network simulation models, along with enhanced modeling capabilities and improved simulation performance, are developed in a robust yet flexible way.
Abstract: The authors present an iterative and incremental development methodology for simulation models in network engineering projects. Driven by the DEVS (Discrete Event Systems Specification) formal framework for modeling and simulation, they aim to assist network design, test, analysis, and optimization processes. A practical application of the methodology is presented for a case study in the data acquisition system of the ATLAS particle physics experiment at CERN's Large Hadron Collider. By adopting the DEVS formal framework in combination with software engineering best practices, the authors develop network simulation models, along with enhanced modeling capabilities and improved simulation performance, in a robust yet flexible way.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A scalable and memory-efficient radix-trie-based name component encoding scheme, named RaCE, is proposed to implement the NDN FIB; experimental results show that RaCE reduces memory consumption by 89.95% and 26.07% compared to the original data size and the NCE scheme, respectively, on a 29-million-name dataset.
Abstract: Named Data Networking (NDN) is a promising future Internet architecture which retrieves content using names. Content names, composed of strings separated by '/', are stored in the NDN Forwarding Information Base (FIB) to forward incoming packets. Retrieving content through names poses two main challenges for the NDN FIB: high memory consumption and high lookup time. Therefore, an efficient and scalable data structure is required to store names in the FIB. Encoding the components of all names with unique integers can reduce memory consumption as well as lookup time. In this paper, we propose a scalable and memory-efficient radix-trie-based name component encoding scheme, named RaCE, to implement the NDN FIB. Our experimental results show that the RaCE scheme reduces memory consumption by 89.95% and 26.07% compared to the original data size and the NCE [4] scheme, respectively, on the 29-million-name dataset.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A multi-sensor process for in-situ monitoring of water pollution in rivers and lakes is presented, in which real-time water quality data are acquired using a multiparametric sensor probe for quantitative data and a crowd sensor via a mobile app for qualitative data, and these data are integrated onto a cloud platform (Bluemix) that enables interactive visualization of the data as a heatmap combined with geographical mapping.
Abstract: Sensor-based environmental monitoring is beginning to gain traction given the recent advancements in sensor development technology. Sensor platforms offer several advantages in comparison to traditional monitoring approaches based on discrete sampling methods, as they offer the capability of providing high-resolution data. Access to high-frequency spatial and temporal information facilitates real-time event detection and understanding of the impact of pollution on the water quality of natural water resources. In this paper, we report a multi-sensor process that we developed for in-situ monitoring of water pollution in rivers and lakes, in which we acquire real-time water quality data using (a) a multiparametric sensor probe for quantitative data and (b) a crowd sensor via a mobile app for qualitative data, and integrate these data onto a cloud platform, i.e., Bluemix, which enables interactive visualization of the data as a heatmap combined with geographical mapping. This type of visualization technique not only facilitates effective handling of high-resolution data but also allows large-scale data-driven inspection to identify affected/polluted zones and detect pollution violations, thereby making it an important tool for enabling decision-making. Data analysis based on clustering techniques is also presented. We compare our techniques to traditional data collection methods. Furthermore, to support our efforts in water quality monitoring, we have also developed several web-based applications that are aimed at incorporating sensing data as well as data from various other sources onto a common online platform. We demonstrate the capabilities of our tools through a case study on the Yamuna river in New Delhi, where we monitor the river pollution in real time.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This paper extends the traditional battery-life estimation equations to the proposed sensor node, enabling a quick initial trade-off using simple equations in an early stage of development, and shows that the proposal can support attractive WSN applications with a long battery life by greatly reducing power consumption compared with conventional sensor nodes.
Abstract: The wireless sensor network (WSN) is a promising technology for monitoring infrastructure, industrial structures, home security, industrial security, elderly people, and so on. An attractive characteristic of the WSN is that it can be easily installed in conventional places using battery-driven sensor nodes. The equipment cost is rather low, since heavy construction work for wiring power rails is not needed. On the other hand, the management cost of changing the batteries attached to many sensor nodes may be large. Prior research has reduced the power consumption of the WSN, i.e., extended the battery life, to reduce this management cost. Some works have attempted to keep the sensor node in standby mode as much as possible and to reduce the standby power consumption. In contrast, we have previously proposed a sensor node that completely eliminates standby power consumption. However, the proposed sensor node wakes up from a state without any power, whereas a conventional node wakes up from a state with some power, even if the standby current is very small. That is, the wake-up time of the proposed sensor node may be longer than that of the conventional node. The longer the wake-up time, the larger the power consumption, and this duration may affect the power consumption of a WSN using the proposed sensor nodes. This paper extends the traditional battery-life estimation equations to our sensor node in order to perform an initial trade-off quickly, using simple equations in an early stage of development. The estimation calculates the battery life, taking the wake-up time into account, using measured parameters. The estimation shows that our proposal can realize attractive WSN applications with a long battery life by greatly reducing power consumption compared with conventional sensor nodes.
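The kind of battery-life estimate the paper extends can be sketched with a duty-cycle calculation; all numbers below are hypothetical, not the paper's measured parameters.

```python
# Back-of-the-envelope battery-life model: average current is the
# duty-cycle-weighted sum of the wake-up and active phases (standby
# draws zero current for the paper's zero-standby node), and
# lifetime = capacity / average current.
def battery_life_hours(cap_mAh, period_s, phases):
    """phases: list of (current_mA, duration_s) per cycle; the remainder
    of each period is standby at 0 mA."""
    charge_per_cycle = sum(i * t for i, t in phases)   # mA*s per cycle
    avg_mA = charge_per_cycle / period_s
    return cap_mAh / avg_mA if avg_mA else float("inf")

# Hypothetical node: 20 mA for a 0.5 s wake-up, then 15 mA for a 1 s
# measurement, once every 10 minutes, on a 2400 mAh battery.
life = battery_life_hours(2400, 600, [(20, 0.5), (15, 1.0)])
```

A longer wake-up phase enters the sum like any other phase, which is exactly the trade-off the paper's extended equations quantify.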

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A grade prediction model is proposed to automatically predict students' grades based on their previous performance, utilizing regression and back-propagation neural network methods with feature selection that comprehensively considers educational theories.
Abstract: Over the past few years, we have witnessed the rapid growth of Massive Open Online Courses (MOOCs). More and more research focuses on MOOCs, especially grade prediction, due to the low completion rate of MOOCs. In this paper, we propose a grade prediction model to automatically predict students' grades based on their previous performance. We utilize regression and back-propagation neural network methods with feature selection that comprehensively considers educational theories. The model was tested on a real MOOC data set, and the results showed that the model performed well in predicting students' grades, with high accuracy. The most important contribution of this paper is the discussion of several valuable applications based on this simple grade prediction model, including applications for teachers, students, MOOC platforms, traditional classes, and so on.

Journal ArticleDOI
26 Aug 2016
TL;DR: WaveformECG supports interactive invocation of multiple analysis algorithms on selected ECG datasets, visualization of ECG waveforms, manual annotation of time points and intervals in ECG Waveforms, and automated annotation of analysis results using standard medical terminology.
Abstract: The electrocardiogram (ECG) is the most commonly collected data in cardiovascular research because of the ease with which it can be measured and the fact that changes in ECG waveforms reflect underlying aspects of heart disease. Despite its ubiquity, there are no open, noncommercial platforms for interactive management, sharing, and analysis of these data. WaveformECG addresses this unmet need. Accessed through a browser, WaveformECG extracts ECGs from vendor files, storing them as a time series with other analysis results and annotations in an open source time-series database. It supports interactive invocation of multiple analysis algorithms on selected ECG datasets, visualization of ECG waveforms, manual annotation of time points and intervals in ECG waveforms, and automated annotation of analysis results using standard medical terminology. Integration with the I2B2 clinical data warehouse system enables bidirectional exchange of data between these two platforms.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This work studies how to apply support vector machines to forecast the energy consumption of buildings, using an alternative version of the sequential minimal optimisation algorithm to reduce the execution time.
Abstract: This work studies how to apply support vector machines in order to forecast the energy consumption of buildings. Usually, support vector regression is implemented using the sequential minimal optimisation algorithm. In this work, an alternative version of that algorithm is used to reduce the execution time. Several experiments were carried out taking into account data measured during one year. The weather conditions were used as independent variables, and the amount of electricity consumed was the parameter to predict. The model was trained using the first six months of the dataset, validated using the following three months, and tested on the last three months of measurements. The results obtained show good model performance.
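Since an SMO solver is too long to sketch here, a kernel ridge regressor stands in for the SVR to show the same pipeline shape: weather features in, consumed electricity out, with a train/test split over time (synthetic data; not the paper's model or dataset).

```python
import numpy as np

rng = np.random.default_rng(3)

# Kernel ridge regression with an RBF kernel, as a compact stand-in for
# SMO-trained support vector regression.
def rbf(X1, X2, gamma=0.5):
    d = ((X1[:, None] - X2[None]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-gamma * d)

class KernelRidge:
    def __init__(self, lam=1e-2, gamma=0.5):
        self.lam, self.gamma = lam, gamma

    def fit(self, X, y):
        self.X = X
        K = rbf(X, X, self.gamma)
        # closed-form dual coefficients: (K + lam I)^-1 y
        self.a = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self

    def predict(self, X):
        return rbf(X, self.X, self.gamma) @ self.a

# Synthetic "temperature -> load" relation as a placeholder dataset;
# first 120 samples play the role of the training months.
temp = rng.uniform(-1, 1, (200, 1))
load = np.sin(2 * temp[:, 0]) + 0.05 * rng.normal(size=200)
model = KernelRidge().fit(temp[:120], load[:120])
pred = model.predict(temp[120:])
```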

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This paper looks into a few issues in designing web applications and recommends design criteria for creating efficient web applications.
Abstract: Web applications differ from web sites in that they have a wide range of interactive features/functionalities and dynamic content. The content delivered via thin-client or server-driven architectures varies in size, structure, and visual design. To accommodate such rich content, care should be taken when designing the user interface. Various methodologies exist to deliver content to users in an efficient manner. Due to the versatility and diversity of the information to be delivered via web applications, the focus shifts to user satisfaction. Hence, a user experience design aimed at user satisfaction becomes the main focus for such applications. This paper looks into a few issues in designing web applications and recommends design criteria for creating efficient web applications.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A qualitative survey of different attempts at memory and device disaggregation is conducted, and alternative future directions for device disaggregation are proposed in the context of the work planned in the H2020 dRedBox project.
Abstract: Traditionally, HPC workloads are characterized by different requirements in CPU and memory resources, which in addition vary over time in an unpredictable manner. For this reason, HPC system designs that assume physical co-location of CPU and memory on a single motherboard strongly limit scalability while leading to inefficient over-provisioning of resources. Also, peripherals available in the system need to be globally accessible to allow optimal usage. In this context, modern HPC designs tend to support disaggregated memory, compute nodes, remote peripherals, and hardware extensions to support virtualization techniques. In this paper, a qualitative survey of different attempts at memory and device disaggregation is conducted. In addition, alternative future directions for device disaggregation are proposed in the context of the work planned in the H2020 dRedBox project.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: A System Migration Life Cycle (SMLC) framework is proposed, which includes a step-by-step migration strategy that is descriptive at the business-analyst level and based on quality-metrics modelling at the technical level, to estimate the potential computational needs, risks, and costs for an organisation.
Abstract: Cloud computing has emerged as a viable alternative to in-house computing resources for many organisations. It offers an alternative solution for many enterprise applications, particularly large-scale legacy applications. In addition, it can offer a cost-effective strategy for small and medium-sized enterprises (SMEs), where the high set-up and maintenance cost of computing resources can be prohibitive. Thus, in this paper a System Migration Life Cycle (SMLC) framework is proposed, which includes a step-by-step migration strategy that is descriptive at the business-analyst level and based on quality-metrics modelling at the technical level, to estimate the potential computational needs, risks, and costs for an organisation. The proposed framework is generic and adaptable in order to accommodate various organisational requirements, thus covering a wide range of enterprise applications and following a number of novel software requirements and quality engineering principles.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: An efficient model for multivariate data reduction is proposed, based on periodic data aggregation at two sensor levels combined with polynomial regression functions; it achieves an 84% reduction rate and 93% approximation accuracy after reduction.
Abstract: Sensor networks are collections of sensor nodes that cooperatively transmit sensed data to a base station. One of the well-known characteristics of Wireless Sensor Networks (WSNs) is their limited resources, and the energy consumption of a network's nodes is considered one of the major challenges faced by researchers today. Data aggregation helps reduce the redundant data transferred through a WSN, which makes it a crucial technique for lowering energy consumption across the network. Local aggregation and prefix filtering are two such methods; they utilize a tree-based, bi-level periodic data aggregation approach implemented at the source-node and aggregator levels. In this paper, an efficient model for multivariate data reduction is proposed, based on periodic data aggregation at two sensor levels combined with polynomial regression functions. The performance of the model was evaluated using the SensorScope network deployed at the Grand-St-Bernard pass between Switzerland and Italy. The results show the advantages of the proposed model: it achieves an 84% reduction rate and 93% approximation accuracy after reduction. The simulations were done using the R software.
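The core idea of regression-based reduction, sending the coefficients of a polynomial fitted to each period of readings instead of the raw samples, can be illustrated with a minimal sketch. The function names, polynomial degree, and synthetic data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def reduce_period(readings, degree=3):
    """Fit a polynomial to one period of sensor readings and
    return its coefficients instead of the raw samples."""
    t = np.arange(len(readings))
    return np.polyfit(t, readings, degree)

def reconstruct(coeffs, n):
    """Approximate the original period from the coefficients."""
    return np.polyval(coeffs, np.arange(n))

# Example: 100 temperature-like samples reduced to 4 coefficients.
rng = np.random.default_rng(0)
raw = 20 + 0.05 * np.arange(100) + rng.normal(0, 0.1, 100)
coeffs = reduce_period(raw)

reduction_rate = 1 - len(coeffs) / len(raw)  # fraction of data not transmitted
approx = reconstruct(coeffs, len(raw))
accuracy = 1 - np.mean(np.abs(approx - raw) / np.abs(raw))

print(f"reduction rate: {reduction_rate:.0%}")  # 96%
```

In a real deployment the source node would transmit only `coeffs` each period, and the reported reduction rate and approximation accuracy depend on the period length, polynomial degree, and noise in the data.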

Journal ArticleDOI
01 Jan 2016
TL;DR: The aim of this paper is to develop and implement an advanced approach capable of accurately detecting, identifying, and tracking people within a smart home environment, to be used as infrastructure for various application domains.
Abstract: Research has been directed toward the automatic identification and tracking of people within the home environment to support smart home services, such as care services that enable and prolong the independent living of elderly and disadvantaged people. Although various approaches have been proposed to deal with this problem, it remains unsolved for various reasons, e.g., user acceptance. The aim of this paper is to develop and implement an advanced approach capable of accurately detecting, identifying, and tracking people within a smart home environment, to be used as infrastructure for various application domains. A novel multimodal approach for non-tagged human identification and tracking within the smart home environment is proposed. The proposed approach combines pyroelectric infrared (PIR) sensors and floor pressure sensors through a uniquely designed integration strategy that aims to merge the advantages of the two sensor types and overcome their limitations.
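A toy version of such multimodal fusion might confirm a floor-pressure footstep only when a PIR event agrees in zone and time. The event model, zone names, and one-second window below are illustrative assumptions, not the paper's integration strategy:

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float      # timestamp in seconds
    zone: str     # room zone reported by the sensor

def fuse(pir_events, pressure_events, window=1.0):
    """Keep a pressure footstep only if a PIR event in the same
    zone occurs within `window` seconds, using one modality to
    filter false positives from the other."""
    return [p for p in pressure_events
            if any(e.zone == p.zone and abs(e.t - p.t) <= window
                   for e in pir_events)]

pir = [Event(0.2, "hall"), Event(5.0, "kitchen")]
steps = [Event(0.5, "hall"), Event(9.0, "hall")]
print([e.t for e in fuse(pir, steps)])  # [0.5]
```

The second footstep is discarded because no PIR event in the hall falls within the window, which is the kind of cross-checking that lets the two sensor types compensate for each other's limitations.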

Journal ArticleDOI
01 May 2016
TL;DR: Findings establish empirical support for the Students & Technology in Academia, Research, and Service Computing Corps model of engagement, a flexible approach that can be applied across a variety of institutional types to positively impact underrepresented students in computing.
Abstract: In this article, the authors examine the impact of participation in a national community for broadening participation in computing that engages college students in computing-related service projects. Results of their study show many benefits for undergraduate computing students who engage in such projects, including academic, career, and personal benefits, with students who are underrepresented in computing benefitting more than others. Results also suggest that an annual conference centered on training and reflection on service-learning projects can help build a strong sense of community among students who otherwise wouldn't have access to a similar group of peers. These findings establish empirical support for the Students & Technology in Academia, Research, and Service Computing Corps model of engagement, a flexible approach that can be applied across a variety of institutional types to positively impact underrepresented students in computing.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This paper evaluates the accuracy of the Android smart battery interface and presents a new, portable, and inexpensive microcontroller-based power-monitoring device; both can be used to estimate the energy consumed by applications running on the phone.
Abstract: Computational offloading has been shown to be a promising approach to prolonging the battery life of smartphones. To come up with energy-efficient offloading strategies, it is crucial to understand how much energy is used not only for local computation but also for network communication. Reliable methods to measure the energy consumption of phones are therefore needed as a fundamental basis. In this paper, we evaluate the accuracy of the Android smart battery interface. As a second contribution, we also designed a new, portable, and inexpensive microcontroller-based power-monitoring device. We conduct energy-measurement experiments for a set of typical applications, including local computations as well as WiFi and 3G operations. Our results show that both our device and the smart battery interface can be used to estimate the energy consumed by applications running on the phone; the smart battery interface, however, clearly underestimates consumption in the idle state. We see our measurements as an important step toward the development of more accurate offloading strategies for smartphones.
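The basic computation behind both measurement methods is integrating instantaneous power over time. A minimal sketch, with a hypothetical sampling interval and voltage/current readings rather than values from the paper:

```python
def energy_joules(samples, interval_s):
    """Estimate energy from periodic (voltage_V, current_A) samples,
    e.g. read from a battery interface or an external power monitor,
    by summing V * I * dt over the measurement window."""
    return sum(v * i * interval_s for v, i in samples)

# Hypothetical 1 Hz samples: 3.8 V at ~200 mA during a WiFi transfer.
samples = [(3.8, 0.20)] * 10
print(energy_joules(samples, 1.0))  # 7.6 J over 10 s
```

In practice the accuracy of such an estimate depends on the sampling rate and on the calibration of the voltage and current readings, which is exactly what the paper's comparison between the battery interface and the external device probes.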

Proceedings ArticleDOI
13 Nov 2016
TL;DR: This study collects qualitative data on the use of a Software Engineering (SE)-inspired development process, Document Driven Design (DDD), for developing Scientific Computing Software (SCS); the results suggest that further empirical study is warranted.
Abstract: This study collects qualitative data on the use of a Software Engineering (SE)-inspired development process, Document Driven Design (DDD), for developing Scientific Computing Software (SCS). Five SCS projects were redeveloped using DDD and SE best practices. Interviews with the code owners were conducted to assess the impact of the redevelopment. After redevelopment, the code owners agreed that a systematic development process can be beneficial, and they had a positive or neutral response to the software artifacts produced during redevelopment. The code owners, however, felt that the documentation produced by the DDD process requires too great a time commitment and too much up-front effort. The concerns expressed by the study participants may be partly a consequence of a delay in ethics approval, which resulted in imperfect communication with the study participants and misunderstandings with respect to the process for creating, and the purpose of, the DDD artifacts. Although the DDD style of documentation has been successful in other domains, those claims may not carry over to the SCS environment. This study is a first step toward measuring the impact of DDD on SCS. The results of the study are not definitive, but they certainly suggest that further empirical study is warranted.