
Showing papers presented at "Computational Science and Engineering in 2014"


Journal ArticleDOI
01 Sep 2014
TL;DR: XSEDE's integrated, comprehensive suite of advanced digital services federates with other high-end facilities and with campus-based resources, serving as the foundation for a national e-science infrastructure ecosystem.
Abstract: Computing in science and engineering is now ubiquitous: digital technologies underpin, accelerate, and enable new, even transformational, research in all domains. Access to an array of integrated and well-supported high-end digital services is critical for the advancement of knowledge. Driven by community needs, the Extreme Science and Engineering Discovery Environment (XSEDE) project substantially enhances the productivity of a growing community of scholars, researchers, and engineers (collectively referred to as "scientists" throughout this article) through access to advanced digital services that support open research. XSEDE's integrated, comprehensive suite of advanced digital services federates with other high-end facilities and with campus-based resources, serving as the foundation for a national e-science infrastructure ecosystem. XSEDE's e-science infrastructure has tremendous potential for enabling new advancements in research and education. XSEDE's vision is a world of digitally enabled scholars, researchers, and engineers participating in multidisciplinary collaborations to tackle society's grand challenges.

2,856 citations


Journal ArticleDOI
17 Apr 2014
TL;DR: A new dictionary learning algorithm is proposed by extending the classical K-SVD method; when each new batch of data samples is added to the training process, a number of new atoms are selectively introduced into the dictionary.
Abstract: A large group of dictionary learning algorithms focus on adaptive sparse representation of data. Almost all of them fix the number of atoms across iterations and use infeasible schemes to update atoms during the dictionary learning process. It is therefore difficult for them to train a dictionary from Big Data. A new dictionary learning algorithm is proposed here by extending the classical K-SVD method. In the proposed method, when each new batch of data samples is added to the training process, a number of new atoms are selectively introduced into the dictionary. Furthermore, only a small group of new atoms, treated as a subspace, controls the current orthogonal matching pursuit, the construction of the error matrix, and the SVD decomposition in every training cycle. The information from both old and new samples is exploited in the proposed incremental K-SVD (IK-SVD) algorithm, but only the current atoms are adaptively updated. This lets the dictionary better represent all the samples without the influence of redundant information from old samples.

84 citations
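The classical K-SVD atom update that IK-SVD builds on can be sketched as follows (an illustrative NumPy sketch, not the authors' implementation; function and variable names are hypothetical):

```python
import numpy as np

def ksvd_atom_update(D, X, Gamma, k):
    """Classical K-SVD update of atom k (hypothetical names):
    D (n x K) dictionary, X (n x m) data, Gamma (K x m) sparse codes.
    The atom and its coefficients are refit via a rank-1 SVD of the
    error restricted to the samples that actually use the atom."""
    users = np.nonzero(Gamma[k])[0]            # samples using atom k
    if users.size == 0:
        return D, Gamma
    Gamma[k, users] = 0.0                      # remove atom k's contribution
    E = X[:, users] - D @ Gamma[:, users]      # restricted error matrix
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                          # new unit-norm atom
    Gamma[k, users] = s[0] * Vt[0]             # refit coefficients
    return D, Gamma
```

IK-SVD's twist, per the abstract, is to apply this cycle only to the newly introduced atoms for each incoming batch, leaving older atoms fixed.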


Proceedings ArticleDOI
19 Dec 2014
TL;DR: The results clearly demonstrate that Node.js is quite lightweight and efficient, making it an ideal fit for I/O-intensive websites among the three, while PHP is only suitable for small and middle-scale applications, and Python-Web is developer friendly and good for large web architectures.
Abstract: Large scale, high concurrency, and vast amounts of data are important trends for the new generation of websites. Node.js has become popular and successful for building data-intensive web applications. To study and compare the performance of Node.js, Python-Web and PHP, we used benchmark tests and scenario tests. The experimental results yield some valuable performance data, showing that PHP and Python-Web handle far fewer requests than Node.js in a given time. In conclusion, our results clearly demonstrate that Node.js is quite lightweight and efficient, making it an ideal fit for I/O-intensive websites among the three, while PHP is only suitable for small and middle-scale applications, and Python-Web is developer friendly and good for large web architectures. To the best of our knowledge, this is the first paper to evaluate these Web programming technologies with both objective systematic tests (benchmark) and realistic user behavior tests (scenario), especially taking Node.js as the main topic of discussion.

72 citations


Proceedings ArticleDOI
08 Aug 2014
TL;DR: This research paper intends to use data mining classification modeling techniques, namely Decision Trees, Naive Bayes and Neural Networks, along with the weighted association Apriori algorithm and the MAFIA algorithm, in heart disease prediction in healthcare.
Abstract: The successful application of data mining in highly visible fields like e-business, marketing and retail has led to its application in other industries and sectors. Among the sectors just beginning to adopt it is healthcare. The healthcare industry is generally “information rich”, but unfortunately not all of the data required for discovering hidden patterns and making effective decisions are mined, and the discovery of hidden patterns and relationships often goes unexploited. Advanced data mining modeling techniques can help remedy this situation. This research paper intends to use data mining classification modeling techniques, namely Decision Trees, Naive Bayes and Neural Networks, along with the weighted association Apriori algorithm and the MAFIA algorithm, in heart disease prediction. Using medical profiles such as age, sex, blood pressure and blood sugar, it can predict the likelihood of patients getting heart disease.

68 citations


Journal ArticleDOI
02 Dec 2014
TL;DR: The article describes the active flow control application; then summarizes the main features in the implementation of a massively parallel turbulent flow solver, PHASTA; and finally demonstrates the method's strong scalability at extreme scale.
Abstract: Massively parallel computation provides an enormous capacity to perform simulations on a timescale that can change the paradigm of how scientists, engineers, and other practitioners use simulations to address discovery and design. This work considers an active flow control application on a realistic and complex wing design that could be leveraged by a scalable, fully implicit, unstructured flow solver and access to high-performance computing resources. The article describes the active flow control application; then summarizes the main features in the implementation of a massively parallel turbulent flow solver, PHASTA; and finally demonstrates the method's strong scalability at extreme scale. Scaling studies were performed with unstructured meshes of 11 and 92 billion elements on the Argonne Leadership Computing Facility's Blue Gene/Q Mira machine with up to 786,432 cores and 3,145,728 MPI processes.

61 citations


Proceedings ArticleDOI
19 Dec 2014
TL;DR: This paper proposes a resource scheduling approach for the container virtualized cloud environments to reduce response time of customers' jobs and improve providers' resource utilization rate.
Abstract: Cloud computing is a new paradigm for delivering computing resources to customers in a pay-as-you-go model. In this paper, the recently emerged container-based virtualization technology is adopted for building the infrastructure of a cloud data center. Cloud providers are concerned with resource usage in a multi-type resource sharing environment, while cloud customers desire higher quality of service. Accordingly, we propose a resource scheduling approach for container-virtualized cloud environments to reduce the response time of customers' jobs and improve providers' resource utilization rate. Stable matching theory is applied to generate an optimal mapping from containers to physical servers. Simulations are implemented to evaluate our resource scheduling approach. The results show that our approach achieves better performance for customers and maximal profits for cloud providers by improving resource utilization.

54 citations
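The stable matching step can be illustrated with the Gale-Shapley algorithm, here in a minimal one-container-per-server form (a sketch under assumed data structures, not the paper's actual scheduler):

```python
def stable_match(c_prefs, s_prefs):
    """One-to-one Gale-Shapley: containers propose to servers in
    preference order; each server keeps the proposer it ranks highest.
    c_prefs/s_prefs map each container/server to an ordered preference list."""
    rank = {s: {c: i for i, c in enumerate(p)} for s, p in s_prefs.items()}
    nxt = {c: 0 for c in c_prefs}      # next server each container will try
    free = list(c_prefs)               # currently unmatched containers
    match = {}                         # server -> container
    while free:
        c = free.pop(0)
        s = c_prefs[c][nxt[c]]
        nxt[c] += 1
        if s not in match:
            match[s] = c
        elif rank[s][c] < rank[s][match[s]]:   # server prefers the newcomer
            free.append(match[s])
            match[s] = c
        else:
            free.append(c)             # rejected; will try its next choice
    return match
```

In the paper's setting the preference lists would presumably be derived from job response-time estimates on the container side and resource-utilization goals on the server side.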


Proceedings ArticleDOI
Yang Qifan1, Tang Hao1, Zhao Xuebing1, Li Yin1, Zhang Sanfeng1 
19 Dec 2014
TL;DR: A special sensor-independent in-air gesture recognition method named Dolphin is proposed in this paper which can be applied to off-the-shelf smart devices directly and recognizes a rich set of pre-defined gestures with high accuracy in real time.
Abstract: User experience of smart mobile devices can be improved in numerous scenarios with the assistance of in-air gesture recognition. Most existing methods proposed by industry and academia are based on special sensors. On the contrary, a special-sensor-independent in-air gesture recognition method named Dolphin is proposed in this paper which can be applied to off-the-shelf smart devices directly. The only sensors Dolphin needs are the loudspeaker and microphone embedded in the device. Dolphin emits a continuous 21 kHz tone through the loudspeaker and receives the gesture-reflected ultrasonic wave through the microphone. The gesture performed is encoded into the reflected ultrasonic wave in the form of a Doppler shift. By combining manual recognition and machine learning methods, Dolphin extracts features from the Doppler shift and recognizes a rich set of pre-defined gestures with high accuracy in real time. Parameter selection strategy and gesture recognition under several scenarios are discussed and evaluated in detail. Dolphin can be adapted to multiple devices and users through training with machine learning methods.

53 citations
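The Doppler relationship Dolphin exploits is easy to sketch: a 21 kHz tone reflected off a hand moving at velocity v toward the microphone shifts by roughly 2·v·f₀/c. The snippet below (an illustrative sketch with hypothetical helper names, not the authors' pipeline) computes the expected shift and estimates it from a recorded signal via an FFT peak:

```python
import numpy as np

C_SOUND = 343.0    # speed of sound in air at ~20 °C, m/s
F_TONE = 21000.0   # frequency of the emitted tone, Hz (as in the paper)

def doppler_shift(v_hand):
    """Expected two-way Doppler shift (Hz) for a reflector moving at
    v_hand m/s toward the microphone: the wave is shifted on the way
    out and again on the way back, hence the factor of 2."""
    return 2.0 * v_hand * F_TONE / C_SOUND

def detect_shift(signal, fs):
    """Estimate the Doppler shift of a recorded signal as the offset of
    its spectral peak from the emitted tone (hypothetical helper)."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spec)] - F_TONE
```

A hand moving at 0.5 m/s thus produces a shift of about 61 Hz, comfortably resolvable by a one-second FFT at typical 48 kHz sampling rates.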


Proceedings ArticleDOI
19 Dec 2014
TL;DR: This paper introduces an electrocardiogram beat classification method based on deep belief networks that can improve the recognition performance for some types of electrocardiogram beats.
Abstract: This paper introduces an electrocardiogram beat classification method based on deep belief networks. This method includes two parts: feature extraction and classification. In the feature extraction part, features are extracted from the original electrocardiogram signal, including features extracted by deep belief networks and timing interval features. Several classifiers are selected to classify the electrocardiogram beats, and a nonlinear support vector machine with a Gaussian kernel achieves the best classification accuracy, reaching 98.49%. Compared with other similar methods for electrocardiogram beat classification, our method can improve the recognition performance for some types of electrocardiogram beats.

47 citations


Journal ArticleDOI
30 Jan 2014
TL;DR: This article presents a novel, high-performance computational approach allowing simulations of 3D cell colony dynamics at a previously unavailable tissue scale and achieves simulation of cell colonies composed of 10^9 cells, which allows for describing tumor growth in its early clinical stage.
Abstract: Biological processes are inherently complex and involve many unknown relationships and mechanisms at different scales. Despite many efforts, researchers still can't explain all of the observed phenomena and, if necessary, make any desirable changes in the dynamics. Recently, it has become apparent that a research opportunity lies in complementing the traditional, heuristic experimental approach with mathematical modeling and computer simulations. Achieving a simulation scale that corresponds, for instance, to clinically detectable tumor sizes is still a huge challenge; however, this scale is necessary to understand and control complex biological processes. This article presents a novel, high-performance computational approach allowing simulations of 3D cell colony dynamics at a previously unavailable tissue scale. Due to its high parallel scalability, the method achieves simulation of cell colonies composed of 10^9 cells, which allows for describing tumor growth in its early clinical stage.

44 citations


Journal ArticleDOI
15 Oct 2014
TL;DR: The accurate calculation of electronic stopping power is demonstrated, which characterizes the rate of energy transfer from a high-energy particle to electrons in materials, as a representative example of material properties that derive from quantum dynamics of electrons.
Abstract: Advancement in high-performance computing allows us to calculate properties of increasingly complex materials with unprecedented accuracy. At the same time, to take full advantage of modern leadership-class supercomputers, the calculations need to scale well on hundreds of thousands of processing cores. We demonstrate such high scalability of our recently developed implementation of Ehrenfest non-adiabatic electron-ion dynamics up to 1 million floating-point processing units on two different leadership-class computing architectures. As a representative example of material properties that derive from quantum dynamics of electrons, we demonstrate the accurate calculation of electronic stopping power, which characterizes the rate of energy transfer from a high-energy particle to electrons in materials. We discuss the specific case of crystalline gold with a hydrogen atom as the high-energy particle, and we illustrate detailed scientific insights that can be obtained from the quantum dynamics simulation at the electronic structure level. Please note that two animation videos of the time evolution for Figure 3 are available as Web extras at http://youtu.be/WxiMZ2DVBbM and http://youtu.be/bAcaxF9ARzM.

39 citations


Journal ArticleDOI
01 Apr 2014
TL;DR: This work proposes a taxonomy-based approach to describe and model the complex security space characterising OSNs, and introduces a systematic approach to define the 'problem space' of an OSN.
Abstract: Social environments were already present in the original web vision, but nowadays are mainly available through online social networks (OSNs), which are a real cultural phenomenon. However, their actual deployment is very heterogeneous, resulting in different development choices and functional architectures. Such aspects, jointly with the intrinsic sharing of personal information, lead to severe risks in terms of both security and privacy. In this perspective, our work proposes a taxonomy-based approach to describe and model the complex security space characterising OSNs. The contributions of this paper are: (1) to introduce a systematic approach to define the 'problem space' of an OSN; (2) to showcase basic models for organising the engineering and the needed checking procedures.

Proceedings ArticleDOI
19 Dec 2014
TL;DR: A novel routing algorithm based on virtual topology snapshot has been proposed and it inherits the advantage of lower delay in topology snapshots as well as solves the problems of poor robustness and adaptability.
Abstract: Nowadays, communication technology is changing rapidly and satellite communication, as an important communication pattern, is playing a more and more important role. Due to lower system delay and lower power consumption in mobile terminals, LEO satellite communication will undoubtedly become an important development direction for future satellite communication systems. In satellite communication networks, routing algorithms determine the average delay and the average traffic load, and thereby have a significant influence on system performance. In this paper, a novel routing algorithm based on virtual topology snapshots is proposed; it inherits the lower delay of topology snapshots while solving their problems of poor robustness and adaptability. Simulations show that this algorithm outperforms both topology snapshot routing and distributed routing algorithms.

Book ChapterDOI
01 Jan 2014
TL;DR: In this article, the authors demonstrate the use of advanced linear stability tools developed for the spectral-element code Nek5000 to investigate the dynamics of nonlinear flows in moderately complex geometries.
Abstract: We demonstrate the use of advanced linear stability tools developed for the spectral-element code Nek5000 to investigate the dynamics of nonlinear flows in moderately complex geometries. The aim of stability calculations is to identify the driving mechanism as well as the region most sensitive to the instability: the wavemaker. We concentrate on global linear stability analysis, which considers the linearised Navier–Stokes equations and searches for growing small disturbances, i.e. so-called linear global modes. In the structural sensitivity analysis these modes are associated with the eigenmodes of the direct and adjoint linearised Navier–Stokes operators, and the wavemaker is defined as the overlap of the strongest direct and adjoint eigenmodes. The large eigenvalue problems are solved using matrix-free methods adopting the time-stepping Arnoldi approach. We present here our implementation in Nek5000 with the ARPACK library on a number of test cases.
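The matrix-free time-stepping Arnoldi approach mentioned above can be sketched in a few lines: only matrix-vector products are needed, so the linearised operator never has to be formed explicitly (an illustrative NumPy sketch, not the Nek5000/ARPACK implementation):

```python
import numpy as np

def arnoldi(matvec, v0, m):
    """Matrix-free Arnoldi iteration: builds an orthonormal Krylov basis Q
    and a Hessenberg matrix H using only matrix-vector products, which is
    exactly what a time-stepper can supply without forming the operator."""
    n = len(v0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = matvec(Q[:, j])
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```

The eigenvalues of the small matrix H (the Ritz values) approximate the leading eigenvalues of the large operator, which is how the dominant global modes are extracted.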

Proceedings ArticleDOI
Cui Yang, Bo Yuan, Ye Tian, Zongming Feng, Wei Mao 
19 Dec 2014
TL;DR: A novel smart home architecture based on Resource Name Service (RNS) is proposed, which defines the participating parties of smart home and the relationships between them, and designs the specific process of smartHome services.
Abstract: A smart home contains centrally controlled lights, Heating, Ventilation and Air Conditioning (HVAC), appliances, security locks of gates and doors, and other intelligent equipment, to provide convenient, comfortable, energy-efficient and secure services to users. Although many countries and organizations have designed various smart home architectures, there are still some problems in these existing solutions. One problem is the interoperability among multiple appliances in the home environment. Another rising problem is the difference in identifiers, operating platforms and programming languages accepted by smart home systems. Heterogeneous services and devices need to be interoperable with each other in order to perform joint execution of tasks. Therefore, the heterogeneity of appliances and identifiers leads to poor compatibility in smart home systems. This paper proposes a novel smart home architecture based on Resource Name Service (RNS), which defines the participating parties of a smart home and the relationships between them, and designs the specific process of smart home services. This architecture can achieve interoperability among various vendors and improve compatibility without requiring many changes to existing smart home appliances.

Proceedings ArticleDOI
Yingying She1, Qian Wang1, Yunzhe Jia1, Ting Gu1, Qun He1, Baorong Yang1 
19 Dec 2014
TL;DR: A precise tracing of feature points including palm center, fingertips and joints by using Kinect is presented and a novel recognition method based on precise motion features of these feature points is also presented.
Abstract: Dynamic hand gesture recognition enables people to communicate with computers naturally without any mechanical devices. Due to the spread of depth sensors such as Microsoft Kinect and Leap Motion, dynamic hand gesture recognition has become capable of capturing meticulous gesture information in real time. However, most existing methods recognize hand gestures from fuzzy features such as contour size, which causes imprecise recognition. This paper presents precise tracking of feature points, including the palm center, fingertips and joints, using Kinect. A novel recognition method based on precise motion features of these feature points is also presented. Having been tested with a series of applications, our method proves robust and effective, and suitable for further application in real-time HCI systems.

Journal ArticleDOI
18 Mar 2014
TL;DR: The process of collecting and analyzing the complete record of more than 10 years of Web hits and SQL queries to SkyServer is described.
Abstract: SkyServer is the primary catalog data portal of the Sloan Digital Sky Survey that makes multiple terabytes of astronomy data available to the world. Here, we describe the process of collecting and analyzing the complete record of more than 10 years of Web hits and SQL queries to SkyServer.

Proceedings ArticleDOI
19 Dec 2014
TL;DR: A new FER system is proposed, which uses the active shape model (ASM) algorithm to align the faces, then extracts local binary pattern (LBP) features and uses a support vector machine (SVM) classifier to predict the facial emotion.
Abstract: Automatic facial expression recognition has drawn much attention in both computer vision and artificial intelligence (AI) over the past decades. Although much progress has been made, facial expression recognition (FER) is still a challenging and interesting problem. In this paper, we propose a new FER system, which uses the active shape model (ASM) algorithm to align the faces, then extracts local binary pattern (LBP) features and uses a support vector machine (SVM) classifier to predict the facial emotion. Experiments on the JAFFE database show that the proposed method has promising performance and increases the recognition rate by 5.2% compared to the method using Gabor features.
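The LBP feature-extraction step can be sketched as follows: each pixel is encoded by thresholding its 8 neighbours against it, and a histogram of the codes forms the feature vector fed to the SVM (an illustrative sketch, not the paper's code):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code, one
    bit per neighbour, set when the neighbour is >= the centre pixel."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:],  img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # each bit is set at most once, so the sum never overflows uint8
        code += (n >= c).astype(np.uint8) * np.uint8(1 << bit)
    return code

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes: the kind of feature
    vector that would be fed to an SVM classifier."""
    h, _ = np.histogram(lbp_image(img), bins=256, range=(0, 256))
    return h / h.sum()
```

In practice the face is usually divided into a grid of regions with one histogram per region, and the concatenated histograms form the final descriptor.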

Proceedings ArticleDOI
19 Dec 2014
TL;DR: This paper first finds a prior CDS and then uses the Minimum-Weight Spanning Tree (MST) to optimize the result, and applies effective degree, the new term introduced in the algorithm, combining with ID to determine dominators.
Abstract: A Connected Dominating Set (CDS) is a subset V′ of V for a graph G(V, E) that induces a connected subgraph, such that each node in V − V′ is adjacent to at least one node in V′. CDSs have been proposed to form virtual backbones in wireless ad-hoc sensor networks in order to design routing protocols that alleviate the serious broadcast storm problem. It is not easy to construct the Minimum Connected Dominating Set (MCDS) due to the NP-hard nature of the problem. In this paper, our algorithm first finds a prior CDS and then uses a Minimum-Weight Spanning Tree (MST) to optimize the result. Our algorithm applies effective degree, a new term introduced in our algorithm, combined with node ID to determine dominators. A default event is triggered to recalculate and update a node's effective degree after a predetermined amount of time. Through 3-hop message relay, each node can learn the paths leading to the other dominators within a 3-hop distance, and paths selected by certain rules are converted into new weighted edges by counting the number of nodes along those paths. An MST is then found from the new weighted graph induced by the prior CDS to further reduce the CDS size. Our algorithm performs well in terms of CDS size and Average Hop Distance (AHD) compared with existing algorithms. The simulation results also show that our algorithm is more energy efficient than others.

Journal ArticleDOI
01 Jan 2014
TL;DR: In this article, the authors present a parallel solution using the MapReduce paradigm in the cloud that can deliver a high-performance, fault-tolerant, and flexible solution.
Abstract: Contaminant source characterization (CSC) in a water distribution system (WDS) is a computationally intensive problem. Traditional solutions to the CSC problem can't fulfill its quality-of-service (QoS) requirements. The authors present a parallel solution using the MapReduce paradigm in the cloud that can deliver a high-performance, fault-tolerant, and flexible solution.

Book ChapterDOI
01 Jan 2014
TL;DR: The Quantum Monte Carlo (QMC) method is used to study physical problems which are analytically intractable due to many-body interactions and strong coupling strengths as discussed by the authors.
Abstract: The Quantum Monte Carlo (QMC) method is used to study physical problems which are analytically intractable due to many-body interactions and strong coupling strengths. This makes QMC a natural choice in the warm dense matter (WDM) regime where both the Coulomb coupling parameter \(\varGamma \equiv {e}^{2}/(r_{s}k_{B}T)\) and the electron degeneracy parameter \(\varTheta \equiv T/T_{F}\) are close to unity. As a truly first-principles simulation method, it affords superior accuracy while still maintaining reasonable scaling, emphasizing its role as a benchmark tool. Here we give an overview of QMC methods, including diffusion MC, path integral MC, and coupled electron-ion MC. We then provide several examples of their use in the WDM regime, reviewing applications to the electron gas, hydrogen plasma, and first-row elements. We conclude with a comparison of QMC to other existing methods, touching specifically on QMC's range of applicability.

Proceedings ArticleDOI
19 Dec 2014
TL;DR: This paper presents a data-centric framework that offers a structured substrate for abstracting heterogeneous sensing sources and enables the collection, storage and discovery of observation and measurement data from both static and mobile sensing sources.
Abstract: The Internet of Things (IoT) paradigm connects everyday objects to the Internet and enables a multitude of applications with the real world data collected from those objects. In the city environment, real world data sources include fixed installations of sensor networks by city authorities as well as mobile sources, such as citizens' smartphones, taxis and buses equipped with sensors. This kind of data varies not only along the temporal but also the spatial axis. For handling such frequently updated, time-stamped and structured data from a large number of heterogeneous sources, this paper presents a data-centric framework that offers a structured substrate for abstracting heterogeneous sensing sources. More importantly, it enables the collection, storage and discovery of observation and measurement data from both static and mobile sensing sources.

Proceedings ArticleDOI
04 Apr 2014
TL;DR: In this paper, the authors focus on the task of extracting product aspects and users' opinions, extracting all possible aspects and opinions from reviews using natural language, ontology, and frequent “tag” sets.
Abstract: Text is the main method of communicating information in the digital age. Messages, blogs, news articles, reviews, and opinionated information abound on the Internet. People commonly purchase products online and post their opinions about purchased items. This feedback is displayed publicly to assist others with their purchasing decisions, creating the need for a mechanism with which to extract and summarize useful information for enhancing the decision-making process. Our contribution is to improve the accuracy of extraction by combining different techniques from three major areas, namely Data Mining, Natural Language Processing, and Ontologies. The proposed framework sequentially mines products' aspects and users' opinions, groups representative aspects by similarity, and generates an output summary. This paper focuses on the task of extracting product aspects and users' opinions by extracting all possible aspects and opinions from reviews using natural language, ontology, and frequent “tag” sets. The proposed framework, when compared with an existing baseline model, yielded promising results.

Proceedings ArticleDOI
19 Dec 2014
TL;DR: Surprisingly enough, a sentiment lexicon generated from a large news corpus exhibits satisfactory performance when tested on annotated product and movie reviews even though the latter two have different contexts, bearing similarity to the notion of Transfer Learning reported in the literature.
Abstract: Sentiment lexicons are useful for automatically extracting sentiment from text. In this paper, we generate several Norwegian sentiment lexicons by extracting sentiment information from two different types of Norwegian text corpus, namely, a news corpus and discussion forums. The methodology is based on Pointwise Mutual Information (PMI). We introduce a modification of the PMI that considers small "blocks" of the text instead of the text as a whole. The rationale of this modification is to counter the detrimental effect of the length of the text on the PMI and on the quality of the lexicon. The high computational cost, due to the huge amount of textual information to be processed in addition to our modified PMI formula, is tackled efficiently by relying heavily on parallelization using Map-Reduce and MongoDB shards. Movie and product reviews are used to evaluate how well the generated sentiment lexicons classify review ratings. The lexicons exhibit satisfactory performance when evaluated, in particular considering the context change between the corpora. In fact, surprisingly enough, the sentiment lexicon generated from the large news corpus exhibits satisfactory performance when tested on annotated product and movie reviews even though the latter two have different contexts, bearing similarity to the notion of Transfer Learning [1] reported in the literature. Some suggestions on how to increase the performance are proposed. All the sentiment lexicons are publicly available for those who are interested.
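The block-level PMI idea can be sketched like this: each word is scored by how much more strongly it co-occurs with positive seed words than with negative ones across small text blocks (an illustrative sketch; the seed sets and the exact PMI variant here are assumptions, not the paper's formula):

```python
import math
from collections import Counter

def pmi_lexicon(blocks, pos_seeds, neg_seeds):
    """Score each word as PMI(word, positive seeds) - PMI(word, negative
    seeds), with co-occurrence counted per small text block rather than
    over the whole text. blocks: list of token lists; seeds: sets."""
    n_blocks = len(blocks)
    df = Counter()                       # number of blocks containing w
    co_pos, co_neg = Counter(), Counter()
    for words in blocks:
        ws = set(words)
        has_pos, has_neg = bool(ws & pos_seeds), bool(ws & neg_seeds)
        for w in ws:
            df[w] += 1
            if has_pos: co_pos[w] += 1
            if has_neg: co_neg[w] += 1
    n_pos = sum(1 for b in blocks if set(b) & pos_seeds)
    n_neg = sum(1 for b in blocks if set(b) & neg_seeds)
    def pmi(co, n_seed, w):
        if co[w] == 0 or n_seed == 0:
            return 0.0
        return math.log2(co[w] * n_blocks / (df[w] * n_seed))
    return {w: pmi(co_pos, n_pos, w) - pmi(co_neg, n_neg, w) for w in df}
```

Counting per block rather than over the whole corpus is what keeps document length from dominating the co-occurrence statistics, which is the motivation the abstract gives for the modification.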

Proceedings ArticleDOI
Zhen Yang1, Deshi Li1
19 Dec 2014
TL;DR: An improved binary-coded genetic algorithm (GA) is used as the global optimization method, and the selection of elite candidate services is discussed through the modeling and evaluation of IoT quality of service (QoS).
Abstract: Several kinds of sensors are being widely deployed with the development of the Internet of Things (IoT). Each IoT information service provides individual sensory data. When a complex service request is submitted, composing multiple information services to satisfy customers' comprehensive demands efficiently becomes an important research issue. To address the issue of IoT information service composition, we propose an efficient strategy from the perspective of sensory data selection and aggregation. The selection of elite candidate services is discussed through the modeling and evaluation of IoT quality of service (QoS). An improved binary-coded genetic algorithm (GA) is used as the global optimization method. The optimal solution is defined as the service composition scheme with the maximum overall function value. The experimental results verify the effectiveness of the proposed method in IoT.

Proceedings ArticleDOI
19 Dec 2014
TL;DR: An auto-configurable LDPC-based method is proposed and integrated with the Hadoop system under various configurations of LDPC codes; simulations show great improvement in encoding and repair latencies compared with Reed-Solomon.
Abstract: Current distributed storage systems mainly rely on data replication to ensure a certain level of data availability and reliability. A recent trend is to introduce erasure codes into distributed storage. Inspired by the RAID system, early attempts focused on designing Reed-Solomon (RS) based solutions and other block codes, including Low Density Parity Check (LDPC) codes. This paper investigates in detail the usage of system resources when different configurations of LDPC codes are applied to distributed storage systems. An auto-configurable LDPC-based method is proposed and integrated with the Hadoop system under various configurations of LDPC codes. Simulations show great improvement in encoding and repair latencies compared with Reed-Solomon. Simulations also show that trade-offs among different system resources are achieved through different configurations.

Book ChapterDOI
01 Jan 2014
TL;DR: In this article, a survey of the results for constraint-based non-interacting free energy functionals and exchange-correlation free-energy functionals is presented, including comparisons with novel finite-temperature Hartree-Fock calculations and also presents progress on both pertinent exact results and matters of computational technique.
Abstract: Reliable, tractable computational characterization of warm dense matter is a challenging task because of the wide range of important aggregation states and effective interactions involved. Contemporary best practice is to do ab initio molecular dynamics on the ion constituents with the forces from the electronic population provided by density functional calculations. Issues with that approach include the lack of reliable approximate density functionals and the computational bottleneck intrinsic to Kohn-Sham calculations. Our research is aimed at both problems, via the so-called orbital-free approach to density functional theory. After a sketch of the relevant properties of warm dense matter to motivate our research, we give a survey of our results for constraint-based non-interacting free energy functionals and exchange-correlation free-energy functionals. That survey includes comparisons with novel finite-temperature Hartree-Fock calculations and also presents progress on both pertinent exact results and matters of computational technique.

Journal ArticleDOI
01 Jan 2014
TL;DR: The quality of solutions and the speed-ups achieved by the PBHS are significantly better than those of the HS; a parallelisation method for the proposed PBHS using graphics processing units (GPUs) allows multiple function evaluations at the same time.
Abstract: This work presents a new evolutionary algorithm based on the standard harmony search (HS) strategy, called population-based harmony search (PBHS). It also provides a parallelisation method for the proposed PBHS using graphics processing units (GPUs), allowing multiple function evaluations at the same time. Experiments were done on a hard scientific benchmark problem: protein structure prediction with the AB 2D off-lattice model. Performance and solution quality were evaluated and compared using four implementations: two of the standard HS, one running on CPU and another on GPU, and two of the PBHS, also running on CPU and on GPU. Results show that the quality of solutions and the speed-ups achieved by the PBHS are significantly better than those of the HS.
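A minimal serial harmony search loop, against which a population-based variant can be contrasted, might look like the following. Parameter values and names are hypothetical; this is the standard HS on a toy sphere function, not the paper's PBHS or its GPU parallelisation:

```python
# Minimal harmony search (HS) sketch on the 2-D sphere function.
# Each iteration improvises ONE new harmony; a population-based
# variant would improvise many per iteration, which is what makes
# GPU-parallel function evaluation attractive.
import random

def harmony_search(f, dim=2, lo=-5.0, hi=5.0,
                   hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # harmony memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                    # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                        # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

sphere = lambda v: sum(x * x for x in v)
best, val = harmony_search(sphere)
```

Because the worst harmony is only ever replaced by a better one, the best score in memory is monotonically non-increasing over iterations.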

Proceedings ArticleDOI
19 Dec 2014
TL;DR: A Cubature Kalman Filter (CKF) based algorithm is proposed for estimating vehicle velocity, yaw rate, and side slip angle using steering wheel angle, longitudinal acceleration, and lateral acceleration sensors.
Abstract: The vehicle state is significant for examining and controlling vehicle performance. However, some vehicle states, such as vehicle velocity and side slip angle, are vital to active-safety applications yet cannot be measured directly and must be estimated instead. In this paper, a Cubature Kalman Filter (CKF) based algorithm is proposed for estimating vehicle velocity, yaw rate, and side slip angle from steering wheel angle, longitudinal acceleration, and lateral acceleration measurements. The estimator is designed on a three-degree-of-freedom (3DOF) vehicle model. The effectiveness of the estimation is examined by comparing the outputs of the estimator with the responses of the vehicle model in CarSim under double-lane-change and slalom conditions.
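The spherical-radial cubature rule that distinguishes the CKF from other sigma-point filters can be sketched as follows. The toy linear state here is only for illustration and is unrelated to the paper's 3DOF vehicle model:

```python
# CKF cubature rule: 2n equally weighted points X_i = m +/- sqrt(n)*S*e_i
# with S @ S.T = P. The nonlinear propagation step then averages f over
# these points; for a linear f the propagated mean is exact.
import numpy as np

def cubature_points(mean, cov):
    n = mean.size
    S = np.linalg.cholesky(cov)                  # square root of covariance
    offsets = np.sqrt(n) * np.hstack([S, -S])    # n x 2n, columns sum to zero
    return mean[:, None] + offsets               # each column is one point

def propagate_mean(f, mean, cov):
    pts = cubature_points(mean, cov)
    return np.mean([f(pts[:, i]) for i in range(pts.shape[1])], axis=0)

m = np.array([1.0, 2.0])
P = np.array([[0.5, 0.1],
              [0.1, 0.3]])
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])
# For a linear map the cubature mean recovers A @ m exactly.
approx = propagate_mean(lambda x: A @ x, m, P)
```

The same point set, pushed through the 3DOF vehicle dynamics instead of a linear map, is what yields the predicted state and covariance in a CKF time update.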

Proceedings ArticleDOI
19 Dec 2014
TL;DR: A general fault-tolerance property is established for hierarchical cubic networks when a linear number of vertices are removed from such a network; several connectivity results follow, including restricted connectivity, cyclic vertex-connectivity, component connectivity, and conditional diagnosability.
Abstract: We establish a general fault-tolerance property for hierarchical cubic networks when a linear number of vertices are removed from such a network. As an application, we derive several connectivity results for the underlying graph, including its restricted connectivity, cyclic vertex-connectivity, component connectivity, and conditional diagnosability. These results demonstrate several fault-tolerance properties of hierarchical cubic networks.
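The kind of vertex-connectivity statement discussed here can be checked by brute force on a small building block. The sketch below verifies that the 3-dimensional hypercube Q3 has vertex connectivity 3; it is a toy stand-in for the hierarchical cubic networks analysed in the paper:

```python
# Brute-force connectivity check on the 3-cube Q3: removing any 2
# vertices leaves it connected, while removing the 3 neighbours of a
# vertex disconnects it, so its vertex connectivity is exactly 3.
from itertools import combinations

def q_cube(n):
    """Adjacency of the n-dimensional hypercube on labels 0 .. 2^n - 1."""
    return {v: {v ^ (1 << b) for b in range(n)} for v in range(2 ** n)}

def connected(adj, removed):
    """Depth-first search: is the graph minus `removed` still connected?"""
    left = set(adj) - set(removed)
    if not left:
        return True
    seen, stack = set(), [next(iter(left))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] - set(removed) - seen)
    return seen == left

adj = q_cube(3)
# No 2-vertex cut exists ...
survives_all_pairs = all(connected(adj, c) for c in combinations(adj, 2))
# ... but removing the neighbourhood of vertex 0 isolates it.
cut_found = not connected(adj, adj[0])
```

The same exhaustive style does not scale to hierarchical cubic networks, which is precisely why general structural results like those in the paper are needed.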

Proceedings ArticleDOI
04 Apr 2014
TL;DR: The main objective of this paper is to present an effort estimation model for mobile applications and to discuss the applicability of traditional estimation models to the development of systems in the context of mobile computing.
Abstract: The rise of mobile technologies such as smartphones and tablets connected to mobile networks is changing old habits and creating new ways for society to access information and interact with computer systems. Traditional information systems are thus undergoing a process of adaptation to this new computing context. It is important to note, however, that the characteristics of this new context are different: there are new features and, therefore, new possibilities, as well as restrictions that did not exist before. Systems developed for this environment have different requirements and characteristics than traditional information systems. For this reason, there is a need to reassess the current knowledge about the processes of planning and building systems in this new environment. One area in particular that demands such adaptation is software estimation. Estimation processes are, in general, based on characteristics of the systems, trying to quantify the complexity of implementing them. Hence, the main objective of this paper is to present an effort estimation model for mobile applications, as well as to discuss the applicability of traditional estimation models to the development of systems in the context of mobile computing.