
Showing papers in "Computer and Information Science in 2015"


Journal ArticleDOI
TL;DR: The study findings show that Perceived Ease of Use is the most important factor in predicting Jordanian citizens’ adoption of e-government services.
Abstract: While e-Government has the potential to improve public administration effectiveness as well as efficiency by increasing the convenience, performance and accessibility of different government services to citizens, the success of these initiatives depends not only on government support, but also on citizens’ willingness to accept and adopt those e-government services. Although there is a great body of literature that discusses e-Government in developed countries, e-Government in developing countries in general, and Arab countries in particular, has not received equal attention. The objective of this study is to determine the factors that influence the adoption of e-government services in a developing country, namely Jordan. An extended version of the Technology Acceptance Model (TAM) is utilized as the theoretical base of this study. Overall, the study proposes that citizens’ perceptions about e-Government services influence their attitude towards adopting e-government initiatives. A survey collected data from 853 online users of Jordan’s e-government services. Using the partial least squares (PLS) structural equation modeling (SEM) analysis technique, the results show that all four factors, namely Perceived Credibility, Perceived Usefulness, Perceived Ease of Use and Computer Self-Efficacy, have a significant effect on the adoption of e-government services in Jordan. Moreover, the study findings show that Perceived Ease of Use is the most important factor in predicting Jordanian citizens’ adoption of e-government services. The research limitations and implications for research and practice are discussed.

26 citations


Book ChapterDOI
TL;DR: The preprocess uses a topic model to extract related information from SHR to support software maintenance, thus improving the effectiveness of traditional SHR-based techniques.
Abstract: Mining software historical repositories (SHR) has emerged as a research direction over the past decade and has achieved substantial success in both research and practice in supporting various software maintenance tasks. Using different types of SHR, or even different versions of the software project, may yield different results for the same technique or approach to a maintenance task. Inclusion of unrelated information in SHR-based techniques may lead to decreased effectiveness or even wrong results. To the best of our knowledge, little attention has been paid to this issue in the SE community. This paper attempts to bridge this gap and proposes a preprocess to facilitate selection of related SHR to support various software maintenance tasks. The preprocess uses a topic model to extract related information from SHR to support software maintenance, thus improving the effectiveness of traditional SHR-based techniques. Empirical results show the effectiveness of our approach.
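To make the selection step concrete, here is a minimal sketch (not the paper's implementation, which is only described at a high level): it treats SHR documents as commit messages, fits scikit-learn's LatentDirichletAllocation, and keeps documents whose topic distribution is similar to that of the maintenance task. The corpus, task text and threshold are illustrative assumptions.

```python
# Minimal sketch: score SHR documents (e.g. commit messages) by topic similarity
# to a maintenance task and keep only the most related ones. Illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

shr_docs = [
    "fix null pointer in parser module",
    "update user login page styling",
    "refactor parser error handling",
    "add unit tests for authentication",
]
task = "bug in parser crashes on malformed input"

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(shr_docs + [task])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)              # per-document topic distributions

task_topics = topics[-1]
doc_topics = topics[:-1]
# Cosine similarity between each SHR document and the task in topic space.
sims = doc_topics @ task_topics / (
    np.linalg.norm(doc_topics, axis=1) * np.linalg.norm(task_topics) + 1e-12)

related = [d for d, s in zip(shr_docs, sims) if s > 0.5]  # threshold is arbitrary
print(related)
```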

26 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to develop a model for evaluating the benefits of IS for SMEs in Saudi Arabia as a case of a developing country, and to provide critical insights to policy makers and managers on assessing the benefits.
Abstract: The influence of information systems (IS) on small and medium-sized enterprises (SMEs) has enjoyed much attention from managers and policy makers. Despite the hype and eagerness to commit extensive levels of investment, very little research has focused on assessing the benefits of IS for SMEs in developing countries. Existing literature has been skewed towards developed countries and large organizations. Consequently, the purpose of this paper is to develop a model for evaluating the benefits of IS for SMEs in Saudi Arabia as a case of a developing country. In order to achieve this, the study builds on and extends past IS-impact literature. Based on quantitative results from 365 SME responses, the model comprises 44 measures across five dimensions: ‘Individual impact’, ‘Organisational impact’, ‘System quality’, ‘Information quality’ and ‘Vendor quality’. Applying confirmatory factor analysis and structural equation modeling, the validated model contributes to theory development of IS impact within the context of SMEs in developing countries. Additionally, it provides critical insights to policy makers and managers on assessing the benefits of IS for SMEs in developing countries.

21 citations


Journal ArticleDOI
TL;DR: This paper motivates developers to adopt this methodology in order to develop software that meets their changing needs, and describes the benefits and limitations of agile methodology.
Abstract: Software development has become a highly consequential activity for society and many businesses, with most of them investing substantial resources. They employ various methods to develop software that can maximize their profits while minimizing operating costs. However, most projects have failed because they are not able to respond to changing user needs, despite the heavy investment incurred. This has encouraged software engineers to propose flexible and effective techniques, including agile methodologies, that help to develop quality software. The methodology impacts software development because it results in quality products. It influences developers positively, thus enabling them to commit their efforts to achieving the project objectives. The project managers motivate the team members, which increases the creativity and innovativeness necessary for the success of the project. The methodology also employs effective communication strategies, which enable the teams and stakeholders to realize quality software. The increased level of stakeholder engagement helps to determine and address the faults of the project in good time, thus reducing the cost incurred. The paper explains various ways in which agile methodology impacts software development. It also describes the benefits and limitations of agile methodology. This paper motivates developers to adopt this methodology in order to develop software that meets their changing needs.

20 citations


Journal ArticleDOI
TL;DR: An empirical classification analysis of new UCI datasets that have different imbalance ratios, sizes and complexities finds that SVM outperforms the other two classifiers in terms of Sensitivity (or Specificity) for all the datasets, and is more accurate when classifying large datasets.
Abstract: SVM has been given top consideration for addressing the challenging problem of data imbalance learning. Here, we conduct an empirical classification analysis of new UCI datasets that have different imbalance ratios, sizes and complexities. The experimentation consists of comparing the classification results of SVM with two other popular classifiers, Naive Bayes and the decision tree C4.5, to explore their pros and cons. To make the comparative experiments more comprehensive and to get a better idea of the learning performance of each classifier, we employ in total four performance metrics: Sensitivity, Specificity, G-means and time-based efficiency. For each benchmark dataset, we perform an empirical search of the learning model through numerous trainings of the three classifiers under different parameter settings and performance measurements. This paper exposes the most significant results, i.e. the highest performance achieved by each classifier for each dataset. In summary, SVM outperforms the other two classifiers in terms of Sensitivity (or Specificity) for all the datasets, and is more accurate in terms of G-means when classifying large datasets.
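As a rough illustration of the comparison protocol (not the paper's UCI experiments), the sketch below trains SVM, Naive Bayes and an entropy-based decision tree (a C4.5-like stand-in) on a synthetic imbalanced dataset and reports sensitivity, specificity and G-mean; all data and parameters are assumptions for the example.

```python
# Compare three classifiers on an imbalanced dataset with imbalance-aware metrics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "SVM": SVC(C=1.0, gamma="scale"),
    "NaiveBayes": GaussianNB(),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    g_mean = np.sqrt(sensitivity * specificity)   # geometric mean of the two rates
    print(f"{name}: sens={sensitivity:.3f} spec={specificity:.3f} G-mean={g_mean:.3f}")
```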

19 citations


Journal ArticleDOI
TL;DR: In this study, the standard IEEE 754–2008 and modulo-based floor functions for rounding non-integers and their effects on the accuracy of the bilinear interpolation algorithm have been demonstrated.
Abstract: In this study, the standard IEEE 754–2008 and modulo-based floor functions for rounding non-integers have been presented. Their effects on the accuracy of the bilinear interpolation algorithm have been demonstrated. The improved floor uses the modulo operator in an effort to make each non-integer addend an integer when the remainder between the multiplicative inverse of the fractional factor and the numerator is greater than zero. The experiments demonstrated relatively positive effects with the improved floor function and its potential as an alternative to the standard round function.
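For reference, a minimal sketch of baseline bilinear interpolation using the standard floor function to locate neighbouring pixels is given below; it does not reproduce the paper's modulo-based improved floor, only the baseline operation the paper modifies. The image and coordinates are illustrative.

```python
# Baseline bilinear interpolation: floor() picks the neighbouring pixel indices.
import math
import numpy as np

def bilinear(img, x, y):
    """Interpolate img (2-D array) at real-valued coordinates (x, y)."""
    x0, y0 = math.floor(x), math.floor(y)            # rounding step under study
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0                          # fractional parts
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear(img, 1.5, 2.25))   # value between the four surrounding pixels
```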

11 citations


Journal ArticleDOI
TL;DR: There is a lack of understanding on how to define soft skills and thus provide those soft skills for computer science graduating students in Egypt, and a simplified systematic literature review approach is presented.
Abstract: Soft skills for software engineers have turned out to be a very important factor in the success of any project, helping the team’s dynamics and performance. However, computer science undergraduates are possibly not aware of the importance of soft skills for their careers. Accordingly, this paper’s main purpose is to highlight the gaps that exist for computer science graduates in Egypt. In this paper we present a simplified systematic literature review approach for this topic. A survey was conducted at Helwan University, Cairo, Egypt, in which 136 computer and software engineering graduating students participated. The survey’s purpose was to uncover how students rate the importance of soft skills, how far they believe they have attained these skills, and how much they think the university helps develop them. One outcome of our analysis is that there is a lack of understanding of how to define, and thus provide, those soft skills for computer science graduating students in Egypt.

10 citations


Journal ArticleDOI
TL;DR: A call admission control policy to reduce redundant handovers in the system and an improved process to create a neighboring list containing the best cells for successful and efficient handover are presented.
Abstract: Femtocell technology is a quick and intelligent solution for improving system coverage and capacity to meet the growing demand for broadband wireless access services; deploying femtocell access points offloads a large quantity of traffic from the LTE macrocellular network to the femtocellular network. Switching seamlessly between macrocell and femtocell base stations is a major challenge of the LTE femtocell-macrocell integrated system; it is performed by the handover procedure, which must guarantee an efficient transfer of UEs to, from, and between femtocells, and also offer effective management of handover calls in the system. The intelligent LTE integrated system architecture, the generation and optimization of a neighboring list that contains the best cells for handover, an efficient call admission control algorithm, and an optimized handover procedure are the most important research topics. The intelligent LTE integrated system architecture must satisfy different criteria, such as minimizing handover interruption and guaranteeing minimum signaling overhead due to frequent and unnecessary handovers, so a single handover decision-making policy is not sufficient to improve performance. Thus, in this paper we propose a call admission control policy to reduce redundant handovers in the system and an improved process to create a neighboring list with the best cells for successful and efficient handover. First, we present the LTE femtocell-macrocell integrated network architecture with a large deployment of femtocells. We then propose detailed signaling flows for the handover procedure in different scenarios and the proposed CAC scheme, based on an optimized neighboring list, to minimize unnecessary handovers.
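As a toy illustration of the kind of decision logic involved (not the paper's CAC scheme or signaling flows), the sketch below combines a hysteresis check on signal strength with an admission check on the target femtocell's load; all thresholds and field names are assumptions.

```python
# Toy handover/admission decision: hand over only to a clearly stronger,
# non-overloaded target cell, to avoid ping-pong and redundant handovers.
from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: str
    rsrp_dbm: float        # signal strength measured by the UE
    load: float            # fraction of occupied resources (0..1)

HYSTERESIS_DB = 3.0        # target must beat the serving cell by this margin
MAX_LOAD = 0.8             # admission threshold at the target cell

def admit_handover(serving: Cell, target: Cell) -> bool:
    stronger = target.rsrp_dbm > serving.rsrp_dbm + HYSTERESIS_DB
    has_capacity = target.load < MAX_LOAD
    return stronger and has_capacity

serving = Cell("macro-1", rsrp_dbm=-98.0, load=0.6)
neighbors = [Cell("femto-7", -92.0, 0.4), Cell("femto-9", -96.5, 0.9)]
targets = [c.cell_id for c in neighbors if admit_handover(serving, c)]
print(targets)   # only femto-7 passes both checks
```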

10 citations


Journal ArticleDOI
TL;DR: A multi-view software architecture design process is explained with the help of a mission-critical defense system development case study, and a novel architectural style, named the “star-controller architectural style”, is introduced.
Abstract: An architecture outlines what a system can or cannot do. Attention to software architecture is essential for successful product development. Therefore, software architecture development is a crucial phase in the software development process. As software-intensive systems become complex, software architects face the challenge of dealing with multiple, sometimes conflicting, concerns at the same time. Satisfaction of quality requirements can be achieved via a good software architecture design. Since the quality requirements are multi-faceted, software architects have to consider many diverse aspects and provide a software architectural solution that can optimally satisfy functional and quality requirements. Such a solution requires a multi-view software architecture design as the result of a systematic architecture development process. Case studies are helpful in bridging the gap between academia and industry. Research studies including carefully designed case studies will help practitioners understand the theoretical concepts and apply novel research findings in their practice. Hence, in this study, we explain a multi-view software architecture design process with the help of a mission-critical defense system development case study. In the study, we explain the multi-view software architecture design step by step, starting with identifying the system context, requirements, constraints, and quality expectations. We further outline the strategies, techniques, designs, and rationales used to satisfy a diverse set of requirements with a particular software architecture pattern. We also introduce a novel architectural style, named the “star-controller architectural style”, and explain its use with a related discussion.

9 citations


Book ChapterDOI
TL;DR: The initial experiments have indicated that the proposed fusion approach outperforms individual classifiers and the global fusion method and the performance of the proposed approach with related fusion methods is compared.
Abstract: This paper aims to propose a novel approach to automatically detect verbal offense in social network comments. It relies on a local approach that adapts the fusion method to different regions of the feature space in order to classify comments from social networks as insulting or not. The proposed algorithm is formulated mathematically through the minimization of an objective function, which combines context identification and multi-algorithm fusion criteria into a joint objective function. This optimization is intended to produce contexts as compact clusters in subspaces of the high-dimensional feature space via possibilistic unsupervised learning and feature weighting. Our initial experiments have indicated that the proposed fusion approach outperforms individual classifiers and the global fusion method. Also, in order to validate the obtained results, we compared the performance of the proposed approach with related fusion methods.

9 citations


Journal ArticleDOI
TL;DR: This article has been retracted on July 19, 2016.
Abstract: The editorial board announced this article has been retracted on July 19, 2016. If you have any further questions, please contact us at: cis@ccsenet.org

Book ChapterDOI
Thanh Cuong Nguyen1, Wenfeng Shen1, Zhaokai Luo1, Zhou Lei1, Weimin Xu1 
TL;DR: This paper surveys the related results and proposes two alternative schemes, called DIV-I and DIV-II, which fully support dynamic operations as well as public verification.
Abstract: In order to persuade users to adopt cloud storage widely, one critical challenge that must be solved is finding a way to determine whether data has been illegally modified on the cloud server or not. Although the topic has been addressed in several works, there is a lack of schemes that meet all the demands of supporting dynamic operations, public verification, low computation cost, etc. This paper surveys the related results and proposes two alternative schemes, called DIV-I and DIV-II. Compared to S-PDP introduced by Ateniese et al., both DIV-I and DIV-II use less time to generate tags and to verify. In addition, the proposed schemes fully support dynamic operations as well as public verification.
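For orientation, here is a toy sketch of the general challenge-response workflow behind remote data-integrity checking; it uses plain block hashes and is not the DIV-I/DIV-II tag construction (which supports public verification and dynamic operations), so everything below is an illustrative assumption.

```python
# Toy integrity check: client keeps per-block tags, server proves it still holds
# randomly challenged blocks. A real scheme would return a compact proof instead.
import hashlib
import os
import random

def make_tags(blocks):
    """Client-side: compute one tag (hash) per data block before outsourcing."""
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def server_prove(blocks, challenged_indices):
    """Server-side: return the challenged blocks."""
    return {i: blocks[i] for i in challenged_indices}

def client_verify(tags, proof):
    """Client-side: recompute hashes of returned blocks and compare with stored tags."""
    return all(hashlib.sha256(data).hexdigest() == tags[i] for i, data in proof.items())

blocks = [os.urandom(1024) for _ in range(16)]    # data split into fixed-size blocks
tags = make_tags(blocks)
challenge = random.sample(range(len(blocks)), 4)  # spot-check a random subset
print(client_verify(tags, server_prove(blocks, challenge)))   # True if unmodified
```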

Journal ArticleDOI
TL;DR: A novel evaluation model is proposed to evaluate user acceptance of software and system technology by modifying the dimensions of the Technology Acceptance Model (TAM) and adding success dimensions for expert users; the results indicate that expert users have a strong, significant influence in supporting evaluation.
Abstract: Effective evaluation is necessary in order to ensure that systems adequately meet the requirements and information processing needs of the users and the scope of the system. The Technology Acceptance Model is one of the most popular and effective models for evaluation. A number of studies have proposed evaluation frameworks to aid in evaluation work. End users evaluating the acceptance of a new technology or system often lack the knowledge to examine and evaluate some of its features, which can distort the evaluation results of the new technology's acceptance. This paper proposes a novel evaluation model to evaluate user acceptance of software and system technology by modifying the dimensions of the Technology Acceptance Model (TAM) and adding success dimensions for expert users. The proposed model has been validated by an empirical study based on a questionnaire. The results indicate that expert users have a strong, significant influence in supporting evaluation and pay attention to features that end users lack the knowledge to evaluate.

Book ChapterDOI
TL;DR: Encouraging simulation results reveal that the idea of using the proposed structure for identification of nonlinear systems is feasible and very appealing.
Abstract: In this paper a new dynamic neural network structure based on the Elman Neural Network (ENN) for identification of nonlinear systems is introduced. The proposed structure has feedbacks from the outputs to the inputs and, at the same time, connections from the hidden layer to the output layer, so it is called the Output-to-Input Feedback, Hidden-to-Output Elman Neural Network (OIFHO ENN). The capability of the proposed structure for representing nonlinear systems is shown analytically. Stability of the learning algorithms is analyzed and established. Encouraging simulation results reveal that the idea of using the proposed structure for identification of nonlinear systems is feasible and very appealing.
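A rough NumPy sketch of the described structure is given below: the previous output is fed back into the input, the hidden layer has a recurrent context connection, and the hidden layer feeds the output. Dimensions, initialisation and the exact forward equations are assumptions, not the paper's formulation.

```python
# Elman-style recurrent network with an extra output-to-input feedback path.
import numpy as np

class FeedbackElman:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.1
        # input and previous output together feed the hidden layer
        self.W_in = rng.normal(0, scale, (n_hidden, n_in + n_out))
        self.W_ctx = rng.normal(0, scale, (n_hidden, n_hidden))   # context (recurrent)
        self.W_out = rng.normal(0, scale, (n_out, n_hidden))      # hidden -> output
        self.h = np.zeros(n_hidden)
        self.y = np.zeros(n_out)

    def step(self, x):
        z = np.concatenate([x, self.y])               # output-to-input feedback
        self.h = np.tanh(self.W_in @ z + self.W_ctx @ self.h)
        self.y = self.W_out @ self.h
        return self.y

net = FeedbackElman(n_in=1, n_hidden=8, n_out=1)
u = np.sin(np.linspace(0, 4 * np.pi, 50))             # sample input signal
outputs = np.array([net.step(np.array([u_t])) for u_t in u])
print(outputs.shape)                                   # (50, 1)
```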

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed ShillFree1 auction system keeps the auction users secured from shill bidding and therefore establishes trust among online auction users.
Abstract: Human cheating has been a barrier to establishing trust among e-commerce users throughout the last two decades. In particular, in online auctions, since all the transactions occur among anonymous users, trust is difficult to establish and maintain. Shill bidding happens when bidders bid exclusively to inflate (in forward auctions) or deflate (in reverse auctions) prices in online auctions. At present, shill bidding is the most severe and persistent form of cheating in online auctions, yet there are still few or no established techniques for shill defense at run-time. In this paper, I evaluate the strengths and weaknesses of existing approaches to combating shill bidding. I also propose the ShillFree1 auction system to secure and protect auction systems from shill bidders for both forward and reverse auctions. More precisely, by using a variety of bidding behaviors and user history, the proposed auction system prevents, monitors and detects shill activities in real time. Moreover, to detect shilling thoroughly, I propose IP tracking techniques. The system also takes necessary actions against shill activities at run-time. The experimental results demonstrate that, through its prevention, detection and response mechanisms, the proposed auction system keeps auction users secure from shill bidding and therefore establishes trust among online auction users.

Journal ArticleDOI
TL;DR: A new greedy sequence pattern mining algorithm for data streams is introduced to find strongly supported sequences; experiments show that the proposed algorithm is more efficient than the PrefixSpan algorithm for patterns with any support below 30% in terms of CPU time and below 60% in terms of memory usage.
Abstract: Sequential pattern mining in a data stream environment is an interesting data mining problem. The problem of finding sequential patterns in static databases has been studied extensively in past years; however, mining sequential patterns in data streams is still an active research field. In this research, a new greedy sequence pattern mining algorithm for data streams is introduced to find strongly supported sequences. The proposed algorithm is built on the sequence tree, which is used to find sequential patterns in static databases. The proposed algorithm divides the stream into patches, or windows, and each patch updates the sequence tree built from the previous windows. An example is introduced to explain how this algorithm works. We also show the efficiency and effectiveness of the proposed algorithm on a synthetic dataset and how it is suited for the data stream environment. We show experimentally that the proposed algorithm is more efficient than the PrefixSpan algorithm for patterns with any support below 30% in terms of CPU time and below 60% in terms of memory usage.
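To illustrate the windowed mining idea (but not the paper's sequence-tree algorithm), the naive sketch below counts contiguous subsequences over a sliding set of windows and keeps those whose support exceeds a threshold; the data, window policy and threshold are assumptions.

```python
# Naive windowed sequence mining: recount short contiguous subsequences over the
# most recent windows and report those with support above min_support.
from collections import Counter, deque

def subsequences(seq, max_len=3):
    """All contiguous subsequences of seq up to max_len items."""
    for i in range(len(seq)):
        for j in range(i + 1, min(i + max_len, len(seq)) + 1):
            yield tuple(seq[i:j])

def mine_stream(windows, min_support=0.4, keep_windows=2):
    recent = deque(maxlen=keep_windows)
    for window in windows:                      # each window is a batch of sequences
        recent.append(window)
        counts = Counter()
        for w in recent:                        # recount over the retained windows
            for seq in w:
                for pat in set(subsequences(seq)):
                    counts[pat] += 1
        total = sum(len(w) for w in recent)
        yield {p: c / total for p, c in counts.items() if c / total >= min_support}

stream = [
    [["a", "b", "c"], ["a", "b"]],              # window 1
    [["b", "c"], ["a", "b", "c"]],              # window 2
]
for frequent in mine_stream(stream):
    print(frequent)
```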

Journal ArticleDOI
TL;DR: Some key concepts of uncertainty in database such as join processing, query selection, and indexing of uncertain data are described and a survey of the database management systems dealing with uncertain data is provided, presenting their features and comparing them.
Abstract: In recent years, uncertainty management has become an important aspect as the presence of uncertain data has increased rapidly. Several advanced technologies have been developed to record large quantities of data continuously, and the resulting data may contain errors or be only partially complete. Instead of dealing with data uncertainty by removing it, we must treat it as a source of information. To deal with such data, database management systems should have special features to handle uncertain data. The aim of this paper is twofold: on the one hand, we describe some key concepts of uncertainty in databases; on the other, we discuss different techniques for managing uncertain data such as join processing, query selection, and indexing of uncertain data. We also provide a survey of the database management systems dealing with uncertain data, presenting their features and comparing them.

Journal ArticleDOI
TL;DR: Two virtual network embedding algorithms are proposed, which coarsen virtual networks using the Heavy Edge Matching (HEM) technique and embed the coarsened virtual networks on best-fit sub-substrate networks, increasing the acceptance ratio and the revenue.
Abstract: One of the main objectives of cloud computing providers is increasing the revenue of their cloud datacenters by accommodating as many virtual network requests as possible. However, the arrival and departure of virtual network requests fragment the physical network’s resources and reduce the possibility of accepting more virtual network requests. To increase the number of virtual network requests accommodated by fragmented physical networks, we propose two virtual network embedding algorithms, which coarsen virtual networks using the Heavy Edge Matching (HEM) technique and embed the coarsened virtual networks on best-fit sub-substrate networks. The performance of the proposed algorithms is evaluated and compared with existing algorithms using extensive simulations, which show that the proposed algorithms increase the acceptance ratio and the revenue.
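As one illustrative piece of the pipeline, the sketch below performs a single Heavy Edge Matching (HEM) coarsening pass on a small weighted graph, collapsing the endpoints of heavy edges and merging parallel edges; the graph, visit order and data structures are assumptions, and the embedding step itself is not shown.

```python
# One Heavy Edge Matching (HEM) coarsening pass over a weighted graph.
import random
from collections import defaultdict

def hem_coarsen(edges, nodes, seed=0):
    """edges: {(u, v): weight}. Returns (node -> coarse node mapping, coarse edges)."""
    adj = defaultdict(dict)
    for (u, v), w in edges.items():
        adj[u][v] = w
        adj[v][u] = w
    order = list(nodes)
    random.Random(seed).shuffle(order)           # random visit order, as in HEM
    matched, mapping, coarse_id = set(), {}, 0
    for u in order:
        if u in matched:
            continue
        # match u with the unmatched neighbour joined by the heaviest edge, if any
        candidates = [(w, v) for v, w in adj[u].items() if v not in matched]
        if candidates:
            _, v = max(candidates)
            matched.update({u, v})
            mapping[u] = mapping[v] = coarse_id
        else:
            matched.add(u)
            mapping[u] = coarse_id
        coarse_id += 1
    coarse_edges = defaultdict(float)
    for (u, v), w in edges.items():
        cu, cv = mapping[u], mapping[v]
        if cu != cv:
            coarse_edges[tuple(sorted((cu, cv)))] += w   # merge parallel edges
    return mapping, dict(coarse_edges)

nodes = ["a", "b", "c", "d"]
edges = {("a", "b"): 5.0, ("b", "c"): 1.0, ("c", "d"): 4.0, ("a", "d"): 2.0}
print(hem_coarsen(edges, nodes))
```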

Journal ArticleDOI
TL;DR: This paper examines some of the technology solutions available to permit this group to remain productive and active in the workforce, noting that some of these cohorts have several age-related limitations that may be of concern to employers.
Abstract: Many Baby Boomers (born between 1946 and 1964) are reaching retirement age at the rate of 8000 a day [AARP 2014]. Yet, they still have a desire to remain in the workforce and remain active in their professional environment. Over the years they have developed the strong skills and expertise that industry needs. However, some of these cohorts have several age-related limitations that may be of concern to employers. This paper examines some of the technology solutions available to permit this group to remain productive and active in the workforce.

Journal ArticleDOI
TL;DR: The results show that the back propagation network with 11 neurons in the hidden layer, 0.0025 as training goal and Levenberg-Marquardt learning algorithm is well enough for the curve fitting.
Abstract: Curve fitting of oscillometric waveforms is vital to the maximum amplitude algorithm (MAA) in blood pressure measurement. Popular methods in recent years, such as asymmetric Gaussian or Lorentzian functions, perform well when the profile of the oscillometric waveform (OMW) is close to them, but they have a relatively large mean square error (MSE) when the oscillometric pulse amplitude envelopes are not so regularly shaped. In this contribution, an artificial neural network (ANN) is implemented instead for the curve fitting. Aided by LabVIEW and MATLAB, the number of neurons in the hidden layer, the weight initialization algorithm, the training goal and the learning algorithm are implemented or carefully considered after some necessary preliminary work. An experiment with 48 subjects ranging in age from 18 to 60 years is included in this research. The results show that a back propagation network with 11 neurons in the hidden layer, 0.0025 as the training goal and the Levenberg-Marquardt learning algorithm is good enough for the curve fitting. The ANN with the proposed optimum parameters is then compared with the asymmetric Gaussian/Lorentzian functions. After properly adjusting the maximum number of epochs, the ANN can finish computing the envelope in less than a second (3.3 GHz CPU and 4 GB RAM) in all of our experiments, like the other two methods, while its MSE is still much lower. Their performance in measuring blood pressure is also compared, and the ANN shows greater robustness.
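A rough Python analogue of the curve-fitting idea is sketched below (the paper trains in MATLAB with the Levenberg-Marquardt algorithm; scikit-learn's MLPRegressor offers LBFGS/Adam instead, so that part is a substitution). The synthetic envelope and all parameters other than the 11 hidden neurons are assumptions.

```python
# Fit a noisy pulse-amplitude envelope with a small one-hidden-layer network.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic oscillometric envelope: skewed bell shape plus noise (illustrative).
cuff_pressure = np.linspace(40, 180, 200)
envelope = np.exp(-((cuff_pressure - 100) / 30.0) ** 2) * (1 + 0.003 * (cuff_pressure - 100))
noisy = envelope + np.random.default_rng(0).normal(0, 0.02, envelope.shape)

# Small network: one hidden layer of 11 neurons, mirroring the paper's choice.
net = MLPRegressor(hidden_layer_sizes=(11,), solver="lbfgs", max_iter=5000,
                   tol=1e-7, random_state=0)
net.fit(cuff_pressure.reshape(-1, 1), noisy)

fitted = net.predict(cuff_pressure.reshape(-1, 1))
mse = np.mean((fitted - envelope) ** 2)
print(f"MSE against the clean envelope: {mse:.5f}")
```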

Journal Article
TL;DR: A method that systematically defines, analyzes and designs a domain to enhance reusability effectively in Mobile Business Domain Modeling (MBDM) in the AHMS (Adaptive Human Management Systems) requirements phase is suggested.
Abstract: Software development projects tend to grow larger and more time consuming over time. Many companies have turned to software generation techniques to save time and costs. Software generation techniques take information from one area of the application and make intelligent decisions to automatically generate a different area. Considerable achievements have been made in the areas of object-relational mappers to generate business objects from their relational database equivalents, and vice versa. There are also many products that can generate business objects and databases using the domain model of the application. Domain engineering is the foundation for emerging “product line” software development approaches and affects the maintainability, understandability, usability, and reusability characteristics of a family of similar systems [1]. In this paper, we suggest a method that systematically defines, analyzes and designs a domain to enhance reusability effectively in Mobile Business Domain Modeling (MBDM) in the Adaptive Human Management Systems (AHMS) requirements phase. For this, we objectively extract information that can be reused in a domain from the requirement analysis phase. We sustain and refine the information, and match it to the artifacts of each phase in domain engineering. Through this method, reusable domain components and a malleable domain architecture can be produced. In addition, we show the practical applicability and features of our approach.

Book ChapterDOI
TL;DR: This paper demonstrates a research effort to evaluate the effectiveness and efficiency of different unsupervised detection techniques for anomaly detection in MANETs and indicates that K-means and C-Means deliver the best performance overall.
Abstract: A mobile ad hoc network (MANET) is vulnerable to numerous attacks due to its intrinsic characteristics such as the lack of fixed infrastructure, limited bandwidth and battery power, and dynamic topology. Recently, several unsupervised machine-learning detection techniques have been proposed for anomaly detection in MANETs. As the number of these detection techniques continues to grow, there is a lack of evidence to support the use of one unsupervised detection algorithm over the others. In this paper, we demonstrate a research effort to evaluate the effectiveness and efficiency of different unsupervised detection techniques. Different types of experiments were conducted, with each experiment involving different parameters such as the number of nodes, speed, and pause time, among others. The results indicate that K-means and C-means deliver the best performance overall. On the other hand, K-means requires the least resource usage while C-means requires the most resource usage among all algorithms being evaluated. The proposed evaluation methodology provides empirical evidence on the choice of unsupervised learning algorithms, and could shed light on the future development of novel intrusion detection techniques for MANETs.
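As a simple illustration of unsupervised anomaly detection of the K-means flavour (not the paper's experimental setup), the sketch below clusters synthetic "normal" MANET traffic features and flags records that fall far from every cluster centre; the features and threshold are assumptions.

```python
# K-means based anomaly flagging on synthetic traffic features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic per-node features, e.g. [packets/s, route changes/s, avg hop count].
normal = rng.normal([50, 2, 3], [5, 0.5, 0.3], size=(300, 3))
attack = rng.normal([120, 15, 6], [10, 2, 0.5], size=(10, 3))
X = np.vstack([normal, attack])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)   # model normal behaviour

def dist_to_nearest_centre(points):
    d = np.linalg.norm(points[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    return d.min(axis=1)

threshold = np.percentile(dist_to_nearest_centre(normal), 99)      # tolerate 1% of normal traffic
flagged = np.where(dist_to_nearest_centre(X) > threshold)[0]
print(f"{len(flagged)} of {len(X)} records flagged as anomalous")
```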

Journal ArticleDOI
TL;DR: Both quantitative and qualitative comparisons were performed on Amazon EC2 and Amazon EMR, including a study of their pricing models, and measures are suggested for future studies and research.
Abstract: Cloud computing is a relatively new form of computing which uses virtualized resources, is dynamically scalable, and is often provided as a pay-per-use service over the Internet or an Intranet, or both. With increasing demand for data storage in the cloud, the study of data-intensive applications is becoming a primary focus. Data-intensive applications are those which involve high CPU usage and process large volumes of data, typically hundreds of gigabytes, terabytes, or petabytes in size. This study was conducted on Amazon's Elastic Compute Cloud (EC2) and Amazon Elastic MapReduce (EMR) using the HiBench Hadoop Benchmark Suite. HiBench is a Hadoop benchmark suite used for performing and evaluating Hadoop-based data-intensive computation on both of these cloud platforms. Both quantitative and qualitative comparisons were performed on Amazon EC2 and Amazon EMR, including a study of their pricing models, and measures are suggested for future studies and research.

Journal ArticleDOI
TL;DR: An innovative technique is proposed for spectrum sensing based on principal component analysis and neural networks in frequency domain and the designed blocks are described using VHSIC Hardware Description Language (VHDL).
Abstract: The cognitive radio system is proposed as an optimal way to address frequency underutilization. Spectrum sensing is the first and essential function in this approach. A cognitive user must sense its environment to detect unused channels, and can then use a free channel without causing any interference to the primary user. In this article, an innovative technique is proposed for spectrum sensing based on principal component analysis (PCA) and neural networks in the frequency domain. The designed blocks are described using the VHSIC Hardware Description Language (VHDL). The suggested application consists of extracting features from the captured signals by PCA; the classification is done by a Multi-Layer Perceptron (MLP). The neural network training and principal component computation are done in the MATLAB environment, while the hardware implementation is created on an FPGA DE2-70 board.
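A software-only sketch of the PCA-plus-MLP classification chain is given below (the paper targets a VHDL/FPGA implementation, which is not reproduced here); the synthetic signals and every parameter are assumptions made for the example.

```python
# PCA feature extraction on frequency-domain samples, MLP occupancy classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, length = 400, 128

noise_only = rng.normal(0, 1, (n, length))                        # free channel
tone = np.sin(2 * np.pi * 0.2 * np.arange(length))
occupied = rng.normal(0, 1, (n, length)) + 2.0 * tone             # primary user present

X_time = np.vstack([noise_only, occupied])
y = np.array([0] * n + [1] * n)                                   # 0 = free, 1 = busy
X_freq = np.abs(np.fft.rfft(X_time, axis=1))                      # frequency-domain features

X_tr, X_te, y_tr, y_te = train_test_split(X_freq, y, random_state=0)
pca = PCA(n_components=8).fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)
print("accuracy:", clf.score(pca.transform(X_te), y_te))
```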

Journal ArticleDOI
TL;DR: This research developed a method which achieves a recognition rate of 100% on the training set and 83% on the testing set, using the authors’ own database.
Abstract: This paper proposes a human identification method based on gait. Human gait is a biometric feature related to the physiological and behavioral characteristics of a person. In this paper, a feature vector of gait motion parameters is extracted from each frame using image segmentation methods and categorized into different categories. Two of these categories were used to form the gait motion trajectories. Category one, gait angular velocity: the angular velocities of the hip, knee, thigh and shank. Category two, gait angular acceleration: the angular accelerations of the hip, knee, thigh and shank for each image sequence. Finally, the TDNN method with different training algorithms is used for recognition. This experiment is done on our own database. The developed method achieves a recognition rate of 100% on the training set and 83% on the testing set. Category one establishes gait motion features to be used in human gait identification applications using different training algorithms, while category two achieved a higher recognition rate with the trainrb algorithm.

Book ChapterDOI
TL;DR: The results show that the service downtime can be significantly reduced by dynamically resizing memory size according to the working sets of running tasks.
Abstract: The quality of services is a major concern for users of cloud computing platforms. Migrating non-stop services across hosts on a cloud is important for many critical applications. Live migration is a useful mechanism for virtual machines to minimize service downtime. Thus, the costs of iterative pre-copy and the downtime experienced by a real-time application should be as low as possible. In this paper, a general model for live migration is presented, and an effective strategy for optimizing the service downtime under this model is suggested. The performance of live migration is evaluated for virtual machines with resizable memory. Our results show that the service downtime can be significantly reduced by dynamically resizing memory according to the working sets of running tasks.
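To make the iterative pre-copy reasoning concrete, here is a simple sketch of a generic pre-copy model (not necessarily the paper's model): each round transfers the data dirtied during the previous round, and downtime is the time to copy the final residual set; all numbers are illustrative.

```python
# Generic iterative pre-copy model: rounds shrink the residual set, and the
# final stop-and-copy phase determines the downtime.
def precopy_downtime(memory_mb, dirty_rate_mbps, bandwidth_mbps,
                     max_rounds=30, stop_threshold_mb=50):
    """Return (number of pre-copy rounds, estimated downtime in seconds)."""
    remaining = memory_mb                            # data to send in the next round
    for rounds in range(1, max_rounds + 1):
        round_time = remaining / bandwidth_mbps
        remaining = dirty_rate_mbps * round_time     # pages dirtied while copying
        if remaining <= stop_threshold_mb:
            break
    return rounds, remaining / bandwidth_mbps        # final stop-and-copy phase

# Shrinking the active memory footprint (e.g. resizing guest memory toward the
# working set) reduces the residual set and hence the downtime.
print(precopy_downtime(memory_mb=4096, dirty_rate_mbps=100, bandwidth_mbps=1000))
print(precopy_downtime(memory_mb=1024, dirty_rate_mbps=100, bandwidth_mbps=1000))
```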

Journal ArticleDOI
TL;DR: This paper proposes three different heuristic algorithms for the k-splittable flow problem, A1, A2 and A3, and compares them on test instances, showing that choosing a suitable initial feasible flow is important for obtaining good results.
Abstract: In the k-splittable flow problem, each commodity can use at most k paths, and the key point is to find suitable transmitting paths for each commodity. To guarantee the efficiency of the network, minimizing congestion is important, but it is not enough; the cost consumed by the network also needs to be minimized. Most studies restrict attention to congestion or cost, but not both. In this paper, we consider the bi-objective (minimize congestion, minimize cost) k-splittable problem. We propose three different heuristic algorithms for this problem: A1, A2 and A3. Each algorithm finds paths for each commodity in a feasible splittable flow, and the only difference between these algorithms is the initial feasible flow. We compare the three algorithms on test instances, showing that choosing a suitable initial feasible flow is important for obtaining good results.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed scalable SVM remains scalable as the number of features increases and has higher accuracy compared to SVM and GA-SVM.
Abstract: Support Vector Machines (SVM) is a state-of-the-art, powerful algorithm in machine learning which has strong regularization properties; regularization refers to the model's generalization to new data. Therefore, SVM can be very efficient for spam detection. Although experimental results show that the performance of SVM is usually better than that of other algorithms, its efficiency decreases as the number of spam features increases. In this paper, a scalable SVM using a J48 tree is proposed for spam detection. In the proposed method, the dataset is first partitioned using a J48 tree; then feature selection is applied in each partition in parallel, and the selected features are used in the SVM training phase. The proposed method is evaluated on several benchmark datasets and the results are compared with other algorithms such as SVM and GA-SVM. The experimental results show that the proposed method remains scalable as the number of features increases and has higher accuracy compared to SVM and GA-SVM.
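A rough scikit-learn sketch of the described pipeline is given below (the paper uses Weka's J48; a shallow DecisionTreeClassifier stands in for it here): the tree's leaves define partitions, feature selection runs per partition, and one SVM is trained per partition. The data and all parameters are assumptions.

```python
# Tree-partitioned SVM: partition with a shallow tree, select features and train
# an SVM per partition, then route test samples through the same leaves.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=60, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
leaves_tr, leaves_te = tree.apply(X_tr), tree.apply(X_te)

models = {}
for leaf in np.unique(leaves_tr):
    idx = leaves_tr == leaf
    if len(np.unique(y_tr[idx])) < 2:                 # degenerate leaf: one class only
        models[leaf] = ("const", y_tr[idx][0])
        continue
    selector = SelectKBest(f_classif, k=20).fit(X_tr[idx], y_tr[idx])   # per-partition selection
    svm = SVC(gamma="scale").fit(selector.transform(X_tr[idx]), y_tr[idx])
    models[leaf] = ("svm", (selector, svm))

correct = 0
for leaf, (kind, model) in models.items():
    idx = leaves_te == leaf
    if not idx.any():
        continue
    if kind == "const":
        preds = np.full(idx.sum(), model)
    else:
        selector, svm = model
        preds = svm.predict(selector.transform(X_te[idx]))
    correct += int((preds == y_te[idx]).sum())
print("accuracy:", correct / len(y_te))
```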

Journal ArticleDOI
TL;DR: This study investigates WiFi usage and WiFi security in Hong Kong to identify whether any knowledge gap exists in using and setting up WiFi systems, and to use the findings to help policy makers, WiFi security advisory bodies and service providers devise appropriate WiFi security measures and education programmes.
Abstract: WiFi connectivity is a necessity as most mobile devices nowadays come with built-in WiFi adaptors. WiFi enables your electronic gadgets to ‘talk’ to other gadgets and to control and be controlled by other gadgets via the Internet without complicated cabling systems. You can use your mobile phone to talk, to download, to control your TV or home security camera, or even to open the door for you. WiFi connectivity makes life easier and more convenient. But in that convenience lies vulnerability if the WiFi network is not properly secured. A secured WiFi network is an important way of protecting your data, personal information and other tangible or intangible things in life. Building on the quantitative data collected from 207 respondents in 2014 and comparing the findings with the data collected in the previous year, this paper investigates WiFi usage and WiFi security in Hong Kong. This study has two objectives: to identify whether any knowledge gap exists in using and setting up WiFi systems, and to use the findings to help policy makers, WiFi security advisory bodies and service providers devise appropriate WiFi security measures and education programmes.

Book ChapterDOI
TL;DR: A model that is intuitive to clinicians for evaluating medication treatment options is introduced, and therefore has the advantage of engaging clinicians actively in a collaborative development of clinical Decision Support Systems (DSS).
Abstract: One of the key areas of clinical decision making in the field of clinical medicine involves choosing the most appropriate treatment option for a given patient, out of many alternative treatment options. This paper introduces a model that is intuitive to clinicians for evaluating medication treatment options, and therefore has the advantage of engaging clinicians actively in a collaborative development of clinical Decision Support Systems (DSS). This paper also extends the previously introduced models of medical diagnostic reasoning, and case formulation (in psychiatry). Whilst the proposed model is already implemented as a DSS in psychiatry, it can also be applied in other branches of clinical medicine.