
Showing papers in "International Journal of High Performance Computing and Networking in 2017"


Journal ArticleDOI
TL;DR: The experiments on the MovieLens dataset provide a reliable model which is precise and generates more personalised movie recommendations than other models.
Abstract: Over the last decade, there has been a burgeoning of data due to social media, e-commerce and the overall digitisation of enterprises. The data is exploited to make informed choices and to predict marketplace trends and patterns in consumer preferences. Recommendation systems have become ubiquitous after the penetration of internet services among the masses. The idea is to use filtering and clustering techniques to suggest items of interest to users. For a media commodity like movies, suggestions are made to users by finding user profiles of individuals with similar tastes. Initially, user preference is obtained by letting users rate movies of their choice. With continued usage, the recommender system is able to understand the user better and suggest movies that are more likely to be rated higher. The experiments on the MovieLens dataset provide a reliable model which is precise and generates more personalised movie recommendations than other models.
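The abstract does not give implementation details, but the core idea of recommending by user-profile similarity can be illustrated with a minimal sketch; the toy rating matrix, cosine similarity measure and neighbourhood size below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def recommend(ratings, user, k=2, top_n=3):
    """User-based collaborative filtering sketch.

    ratings: 2-D array, rows = users, cols = movies, 0 = unrated.
    Returns indices of top_n unseen movies predicted for `user`.
    """
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(ratings, axis=1) + 1e-9
    sims = ratings @ ratings[user] / (norms * norms[user])
    sims[user] = -1.0                      # exclude the user themself
    neighbours = np.argsort(sims)[-k:]     # k most similar users

    # Predict scores as a similarity-weighted average of neighbour ratings.
    weights = sims[neighbours][:, None]
    scores = (weights * ratings[neighbours]).sum(axis=0) / (np.abs(weights).sum() + 1e-9)
    scores[ratings[user] > 0] = -np.inf    # do not re-recommend seen movies
    return np.argsort(scores)[::-1][:top_n]

# Toy example: 4 users x 5 movies.
R = np.array([[5, 4, 0, 0, 1],
              [4, 5, 0, 1, 0],
              [0, 0, 5, 4, 0],
              [1, 0, 4, 5, 0]])
print(recommend(R, user=0))
```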

70 citations


Journal ArticleDOI
TL;DR: Results of experiments show the validity of the proposed approach in terms of imperceptibility and its efficiency in reliably detecting strong attacks.
Abstract: In this paper, we propose a region of interest (ROI) based fragile watermarking scheme for medical image tamper detection. The proposed methodology is inspired by network transmission, where the transmitted message is divided into packets and redundant information is added to handle errors. In fact, the cyclic redundancy check (CRC) code is one of the most widely used error-detecting tools in digital communication systems. Consequently, the region of interest to be protected is treated as a message to be transmitted without errors. The CRC code is based on the standard CRC-32 generator polynomial, chosen for its mathematical properties, and is computed over each packet to generate a watermark that is embedded in the spatial domain. At the receiving end, the watermark is extracted to detect errors. Results of experiments show the validity of the proposed approach in terms of imperceptibility and its efficiency in reliably detecting strong attacks.
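The paper's exact embedding procedure is not reproduced here; the sketch below only illustrates the packet-plus-CRC-32 idea it builds on, using zlib's standard CRC-32, a stand-in ROI byte stream and an illustrative packet size.

```python
import zlib

def packet_watermarks(roi_pixels, packet_size=64):
    """Split an ROI byte stream into packets and compute a CRC-32 per packet.

    The CRC values would serve as the fragile watermark: at verification
    time, a recomputed CRC that no longer matches flags a tampered packet.
    """
    marks = []
    for i in range(0, len(roi_pixels), packet_size):
        packet = bytes(roi_pixels[i:i + packet_size])
        marks.append(zlib.crc32(packet))          # standard CRC-32 polynomial
    return marks

roi = bytes(range(256))                           # stand-in for ROI pixel data
original = packet_watermarks(roi)

tampered = bytearray(roi)
tampered[10] ^= 0x01                              # simulate a one-bit tamper
damaged = packet_watermarks(tampered)
print([i for i, (a, b) in enumerate(zip(original, damaged)) if a != b])  # -> [0]
```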

33 citations


Journal ArticleDOI
TL;DR: Through an in-depth investigation of several recently proposed schemes and by simulating the keyword guessing attack on them, it is shown that none of these schemes can resist this attack; a comprehensive security definition is also given.
Abstract: Multi-user searchable encryption enables the client to perform keyword search over encrypted data while supporting authorisation management. Most of these schemes are constructed using public key encryption. However, public key encryption with keyword search is vulnerable to the keyword guessing attack. Consequently, a secure channel is required for transmitting secret information, which imposes a severe extra burden. This vulnerability is recognised in traditional searchable encryption, but it was previously undecided whether it also exists in the multi-user setting. In this paper, we first point out that the keyword guessing attack is also a problem in multi-user searchable encryption when the supposed secure channel is absent. Through an in-depth investigation of several recently proposed schemes and by simulating the keyword guessing attack on them, we show that none of these schemes can resist this attack. We give a comprehensive security definition and propose some open problems.
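The paper's target schemes are not reimplemented here; the sketch below only illustrates the shape of a keyword guessing attack, with hypothetical make_trapdoor/test functions standing in for the public test algorithm of a concrete searchable encryption scheme.

```python
import hashlib

# Toy stand-in for a scheme's public Test algorithm: in a real PEKS-style
# scheme the attacker would encrypt each guessed keyword under the public
# key and run Test against the captured trapdoor.
def make_trapdoor(keyword: str) -> bytes:
    return hashlib.sha256(keyword.encode()).digest()

def test(trapdoor: bytes, keyword: str) -> bool:
    return make_trapdoor(keyword) == trapdoor

def keyword_guessing_attack(trapdoor: bytes, dictionary):
    """Brute-force the low-entropy keyword space against a captured trapdoor."""
    for guess in dictionary:
        if test(trapdoor, guess):
            return guess
    return None

captured = make_trapdoor("diagnosis")               # trapdoor seen on the wire
print(keyword_guessing_attack(captured, ["salary", "diagnosis", "invoice"]))
```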

10 citations


Journal ArticleDOI
TL;DR: Results indicate that there is a significant relationship between customer experience of smartphone users and mobile financial services users and their customer behavioural intentions of advocacy, churn, cross-sell, up-sell and complaints.
Abstract: This research paper proposes to establish the relationship between customer experience and customer behavioural intentions of churn, advocacy, cross-sell, up-sell and complaint for cellular service providers for end user devices and technologies like smartphones, mobile internet and mobile financial services. The method adopted incorporates various determinants across the customer lifecycle which are sufficient to define customer experience holistically. A primary survey was conducted on 5,231 respondents by means of a questionnaire along with personal interviews. Data was analysed using descriptive analysis as well as with the statistical backing of logistic regression tests. Results indicate that there is a significant relationship between customer experience of smartphone users and mobile financial services users and their customer behavioural intentions of advocacy, churn, cross-sell, up-sell and complaints. The implications of this research can prove useful for cellular service providers in formulating their marketing strategy, cross-sell and up-sell strategy, churn management strategy and customer acquisition/retention strategy.

7 citations


Journal ArticleDOI
TL;DR: Results show that balancing the workload between processes and threads per process is the key factor to maintain high performance with reasonable cost.
Abstract: The use of high-performance computing (HPC) applications has increased progressively in scientific research and industry. Cloud computing attracts HPC users because of its extreme cost efficiency. The reduced cost is the result of the successful employment of multilayer virtualisation, enabling dynamic elastic resource-sharing between different tenants. In this paper, we evaluate the impact of using multiple levels of parallelism on computationally intensive parallel tasks hosted on a cloud-virtualised HPC cluster. We exercise multiple levels of parallelism through a set of experiments employing both message passing and multi-threading techniques. Our evaluation addresses two main perspectives: the performance of applications and the cost of running HPC applications on clouds. We use millions of operations per second (MOPS) and speed-up to evaluate computational performance. To evaluate cost we use US dollars per MOPS (USD/MOPS). The experiments on two different clouds are compared against each other and against published results for the Amazon EC2 cloud. Results show that balancing the workload between processes and threads per process is the key factor in maintaining high performance at reasonable cost.
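As a simple illustration of the evaluation metrics (not the paper's measured data), the sketch below computes MOPS, speed-up and USD/MOPS from a run time and an hourly instance price; all numbers are made up.

```python
def evaluate_run(total_ops, runtime_s, serial_runtime_s, price_per_hour_usd):
    """Compute the performance and cost metrics used in this kind of evaluation."""
    mops = total_ops / runtime_s / 1e6          # millions of operations per second
    speedup = serial_runtime_s / runtime_s      # relative to a serial run
    cost_usd = price_per_hour_usd * runtime_s / 3600.0
    usd_per_mops = cost_usd / mops              # lower is better
    return mops, speedup, usd_per_mops

# Illustrative numbers only: 2e12 ops, 250 s parallel vs 3000 s serial, $0.50/h.
print(evaluate_run(2e12, 250.0, 3000.0, 0.50))
```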

6 citations


Journal ArticleDOI
TL;DR: An enhanced low latency queuing (ELLQ) algorithm is proposed in which an additional SPQ is introduced along with the existing SPQ, and the QoS is improved by integrating a congestion avoidance algorithm with ELLQ.
Abstract: The wide use of low-cost broadband services in recent years has increased the demand for multimedia applications. Many of these applications require different quality of service (QoS) in terms of throughput and delay. In current resource-constrained wireless networks, resource management plays a vital role. In order to handle the resources effectively and to increase QoS, proper packet scheduling algorithms need to be developed. Low latency queuing (LLQ) is a packet scheduling algorithm which combines strict priority queuing (SPQ) with class-based weighted fair queuing (CBWFQ). In this paper, an enhanced low latency queuing (ELLQ) algorithm is proposed in which an additional SPQ is introduced along with the existing SPQ. Further, the QoS is improved by integrating a congestion avoidance algorithm with ELLQ. Simulation results using the OPNET modeller show that the proposed algorithm outperforms the existing algorithm in terms of throughput and delay for multimedia applications.
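The exact ELLQ rules are specified in the paper; the sketch below only illustrates the general dequeue order the description implies: two strict-priority queues served first, then the remaining traffic classes by weight (a crude weighted round-robin stands in for CBWFQ). Queue names and weights are illustrative.

```python
from collections import deque
import itertools

class EllqSketch:
    """Two strict-priority queues served ahead of weighted class queues."""

    def __init__(self, class_weights):
        self.spq1 = deque()                          # e.g., voice (existing SPQ)
        self.spq2 = deque()                          # e.g., video (the added SPQ)
        self.classes = {c: deque() for c in class_weights}
        self._order = [c for c, w in class_weights.items() for _ in range(w)]
        self._rr = itertools.cycle(self._order)      # weighted round-robin over classes

    def dequeue(self):
        if self.spq1:
            return self.spq1.popleft()               # strict priority 1 always first
        if self.spq2:
            return self.spq2.popleft()               # then strict priority 2
        for _ in range(len(self._order)):            # weighted share for the rest
            c = next(self._rr)
            if self.classes[c]:
                return self.classes[c].popleft()
        return None

q = EllqSketch({"streaming": 3, "best_effort": 1})
q.spq2.append("video-pkt")
q.classes["best_effort"].append("web-pkt")
print(q.dequeue(), q.dequeue())                      # video-pkt web-pkt
```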

6 citations


Journal ArticleDOI
TL;DR: A new task scheduling algorithm for heterogeneous computing platforms, called communication-aware earliest finish time (CEFT), which combines the features of list-scheduling and task-duplication, where task priority is assigned according to communication ratio (CR).
Abstract: In this paper, we propose a new task scheduling algorithm for heterogeneous computing platforms, called communication-aware earliest finish time (CEFT). It combines the features of list-scheduling and task-duplication, with task priority assigned according to the communication ratio (CR), a notion defined to represent communication cost. We also present a duplication mechanism, which cooperates with CR to reduce the communication overhead. The time complexity of the algorithm is O(v² · p) for v tasks and p processors. Experimental results show that the CEFT algorithm improves performance by 11% compared with the state-of-the-art list-based algorithm PEFT, and by 15.6% compared with the duplication-based algorithm HDCPD, in terms of scheduling length ratio. CEFT also outperforms PEFT, HDCPD and HEFT in terms of efficiency and average surplus time.
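CEFT's full listing-plus-duplication procedure is in the paper; the sketch below only shows one plausible reading of the communication-ratio idea, ranking tasks by the ratio of outgoing communication cost to average computation cost. The small DAG and the exact CR formula are assumptions.

```python
def communication_ratio(task, comp_cost, comm_cost, succ):
    """CR of a task: total outgoing edge cost over its average computation cost.

    comp_cost[t]      -> average computation cost of task t across processors
    comm_cost[(t, s)] -> data-transfer cost on edge t -> s
    succ[t]           -> successor tasks of t
    (This formula is an illustrative assumption, not the paper's exact definition.)
    """
    out = sum(comm_cost[(task, s)] for s in succ[task])
    return out / comp_cost[task]

comp = {"A": 10, "B": 8, "C": 12}
comm = {("A", "B"): 20, ("A", "C"): 5, ("B", "C"): 15}
succ = {"A": ["B", "C"], "B": ["C"], "C": []}

# Communication-heavy tasks get higher priority in the schedule list.
order = sorted(comp, key=lambda t: communication_ratio(t, comp, comm, succ), reverse=True)
print(order)   # ['A', 'B', 'C']
```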

6 citations


Journal ArticleDOI
TL;DR: This paper explores the soft error susceptibility of three common sorting algorithms at the software layer and uses a software fault injection tool to place faults with fine precision during the execution of these algorithms.
Abstract: Soft errors are becoming an important issue in computing systems. Near-threshold voltage (NTV), reduced circuit sizes, high performance computing (HPC), and high altitude computing all present interesting challenges in this area. Much of the existing literature has focused on hardware techniques to mitigate and measure soft errors at the hardware level. Instead, in this paper, we explore the soft error susceptibility of three common sorting algorithms at the software layer. We focus on the comparison operator and use our software fault injection tool to place faults with fine precision during the execution of these algorithms. We explore how the algorithm susceptibilities vary based on input and bit position and relate these faults back to the source code to study how algorithmic decisions impact the reliability of the codes. Finally, we look at the question of the number of fault injections required for statistical significance. Using equations that are standard practice in hardware fault injection experiments, we calculate the number of injections that should be required to achieve confidence in our results. Then we show, empirically, that more fault injections are required before we gain confidence in our experiments.
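The paper uses a dedicated software fault injection tool; as a rough stand-in, the sketch below flips one comparison result during a simple insertion sort and checks whether the output is still ordered. The injection point and the choice of sort are illustrative assumptions.

```python
import random

def insertion_sort(a, inject_at=None):
    """Insertion sort whose comparison result is corrupted once if requested."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            result = a[j - 1] > a[j]
            if comparisons == inject_at:
                result = not result          # single soft error in the comparator
            comparisons += 1
            if not result:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a

data = [random.randint(0, 99) for _ in range(20)]
baseline = insertion_sort(data)
faulty = insertion_sort(data, inject_at=7)
print("silent corruption" if faulty != baseline else "fault masked")
```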

5 citations


Journal ArticleDOI
TL;DR: This paper designs an efficient communication-aware VM migration algorithm and seamlessly combines three types of resources: CPU, memory and bandwidth; the proposed method outperforms the state-of-the-art RIAL and Sandpiper in terms of the number of migrations, the inter-VM communication cost, and the migration network cost.
Abstract: The virtual machine (VM) migration technology is widely used in cloud data centres for achieving load balance or saving energy consumption. Despite the potential benefits of VM migrations, relatively little work has taken inter-VM communication into account, with researchers mainly focusing on reducing the network cost of migration or the energy cost. In this paper, we focus on the VM migration problem in the context of large volumes of inter-VM communication traffic. To this end, we make decisions on three problems: when, which, and to where the VMs shall be migrated. Specifically, we formulate an optimisation that minimises the number of migrations as well as the cost of both inter-VM traffic and migration traffic. To solve the optimisation, we design an efficient communication-aware VM migration algorithm and seamlessly combine three types of resources: CPU, memory and bandwidth. Finally, we conduct comprehensive experiments based on the CloudSim simulator. Extensive simulation results show that the proposed method can outperform the state-of-the-art RIAL and Sandpiper in terms of the following three metrics: the number of migrations, the inter-VM communication cost, and the migration network cost.
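The paper's formulation and algorithm are not reproduced here; the sketch below only illustrates the flavour of a communication-aware choice: among VMs on an overloaded host, prefer the candidate whose move minimises a weighted sum of migration traffic (memory size) and the inter-VM traffic that would start crossing hosts. The weights and data are invented.

```python
def migration_cost(vm, src_host, placement, mem, traffic, alpha=1.0, beta=2.0):
    """Cost of migrating `vm` off `src_host`: migration traffic plus the
    inter-VM traffic that would then cross hosts. Coefficients are illustrative."""
    cross = sum(rate for (a, b), rate in traffic.items()
                if (a == vm) != (b == vm)                 # edges incident to vm
                and placement[b if a == vm else a] == src_host)
    return alpha * mem[vm] + beta * cross

placement = {"vm1": "h1", "vm2": "h1", "vm3": "h1"}
mem = {"vm1": 4096, "vm2": 1024, "vm3": 2048}             # MB to copy
traffic = {("vm1", "vm2"): 800}                           # Mbps between VM pairs

# Host h1 is overloaded: pick the cheapest VM to evict.
candidate = min(placement, key=lambda v: migration_cost(v, "h1", placement, mem, traffic))
print(candidate)   # vm3: moderate memory and no traffic to co-located VMs
```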

5 citations


Journal ArticleDOI
TL;DR: A new efficient CPA-secure variant of the McEliece cryptosystem is proposed, whose advantage is that it can enlarge the plain-text space while keeping the cipher-text space unchanged; the security of the scheme is formally proved.
Abstract: More and more data has to be handled in today's network computing. An extremely efficient encryption algorithm can greatly improve the efficiency and security of processing in large-data environments. As a significant candidate post-quantum cryptosystem, the McEliece public-key cryptosystem (PKC) has one remarkable advantage: a very fast and efficient encryption process. In this paper, we put forward a new efficient CPA-secure variant of the McEliece cryptosystem whose advantage is that we can enlarge the plain-text space while keeping the cipher-text space unchanged. We formally prove the security of the scheme; our proof is based on the learning parity with noise (LPN) problem. We also extend our scheme to a CCA-secure cryptosystem and a signcryption scheme.

4 citations


Journal ArticleDOI
TL;DR: This paper presents an optimised approach for migration of a virtual machine along with its local storage by considering the locality of storage access and shows the improvement in downtime and reduction in overhead.
Abstract: Processing large volumes of data to drive their core business has become the primary objective of many firms and scientific applications nowadays. Cloud computing, being a large-scale distributed computing paradigm, can cater for the needs of data-intensive applications. There are various approaches for managing the workload on a data-intensive cloud; live migration of a virtual machine is the most prominent. Existing approaches to live migration use network-attached storage, where only the runtime state needs to be transferred. Live migration of virtual machines with local persistent storage has been shown to have advantages such as security, availability and privacy. This paper presents an optimised approach for migration of a virtual machine along with its local storage by considering the locality of storage access. A count map combined with a restricted block transfer mechanism is used to minimise the downtime and overhead. The proposed solution is tested against various parameters such as bandwidth, write access patterns and threshold. Results show an improvement in downtime and a reduction in overhead.
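The paper describes its count map and restricted block transfer only at a high level; the sketch below shows one plausible interpretation: count writes per storage block during pre-copy and defer frequently rewritten ("hot") blocks to the final round. The threshold and block layout are assumptions.

```python
from collections import Counter

def plan_transfer(write_log, all_blocks, hot_threshold=3):
    """Split blocks into pre-copy and final-round sets using a write-count map.

    write_log: iterable of block ids written during the monitoring window.
    Blocks rewritten at least `hot_threshold` times are likely to be dirtied
    again, so retransmitting them early would be wasted work.
    """
    count_map = Counter(write_log)
    hot = {b for b, n in count_map.items() if n >= hot_threshold}
    precopy = [b for b in all_blocks if b not in hot]
    final_round = sorted(hot)
    return precopy, final_round

blocks = list(range(8))
writes = [1, 1, 1, 1, 5, 5, 6]            # block 1 is hot, blocks 5 and 6 are not
print(plan_transfer(writes, blocks))      # ([0, 2, 3, 4, 5, 6, 7], [1])
```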

Journal ArticleDOI
TL;DR: A fault-tolerant architecture to improve the dependability of infrastructure-based vehicular networks and a fail silence enforcement mechanism for road-side units (RSUs) that can be adapted to any wireless communications system.
Abstract: This paper presents a fault-tolerant architecture to improve the dependability of infrastructure-based vehicular networks. For that purpose, a fail silence enforcement mechanism for road-side units (RSUs) was designed, implemented and tested. Vehicular communications based on IEEE 802.11p are inherently non-deterministic. The presence of RSUs and a backhauling network adds a degree of determinism that is useful to enforce real-time behaviour and dependability, both by providing global knowledge and by supporting the operation of collision-free deterministic MAC protocols. One such protocol is V-FTT, for which the proposed mechanism was designed as a case study. Note, however, that this mechanism is protocol-independent and can be adapted to any wireless communications system. The proposed technique is capable of validating a frame for transmission by identifying faults in both the value and time domains. Experimental results show that the fail silence enforcement mechanism has low latency and consumes few FPGA resources.

Journal ArticleDOI
TL;DR: This paper develops a portable, high-level paradigm for such systems to run big data applications, more specifically, graph analytics applications popular in the big data and machine learning communities and utilises the MapReduce framework, Apache Spark, in conjunction with CUDA.
Abstract: HPC offers tremendous potential to process large amounts of data often termed as big data. Distributing data efficiently and leveraging specialised hardware (e.g., accelerators) are critical in order to best utilise HPC platforms constituting of heterogeneous and distributed systems. In this paper, we develop a portable, high-level paradigm for such systems to run big data applications, more specifically, graph analytics applications popular in the big data and machine learning communities. Using our paradigm, we accelerate three real-world, compute and data intensive, graph analytics applications: a function call graph similarity application, a triangle enumeration subroutine, and a graph assaying application. Our paradigm utilises the MapReduce framework, Apache Spark, in conjunction with CUDA and simultaneously takes advantage of automatic data distribution and accelerator on each node of the system. We demonstrate scalability and parameter space exploration and offer a portable solution to leverage almost any legacy, current, or next-generation HPC or cloud-based system.
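The paper's Spark-plus-CUDA pipeline is not reproduced here; the sketch below only illustrates the paradigm's shape with PySpark: data is partitioned across workers and each partition is handed to a node-local compute routine. The plain-Python Jaccard similarity and the toy call-graph data are stand-ins for the paper's CUDA kernels and real inputs.

```python
from pyspark import SparkContext

def similarity_kernel(pairs):
    """Per-partition routine; in the described paradigm this is where each
    worker would hand its slice to a node-local CUDA kernel. Here a plain
    Python Jaccard similarity over edge sets stands in for the GPU code."""
    for name_a, edges_a, name_b, edges_b in pairs:
        inter = len(edges_a & edges_b)
        union = len(edges_a | edges_b) or 1
        yield (name_a, name_b, inter / union)

sc = SparkContext("local[*]", "callgraph-similarity-sketch")
g1 = frozenset({("main", "f"), ("f", "g")})
g2 = frozenset({("main", "f"), ("f", "h")})
g3 = frozenset({("main", "x")})
pairs = [("g1", g1, "g2", g2), ("g1", g1, "g3", g3), ("g2", g2, "g3", g3)]
results = sc.parallelize(pairs, numSlices=3).mapPartitions(similarity_kernel).collect()
print(results)
sc.stop()
```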

Journal ArticleDOI
TL;DR: A novel algorithm and architecture for efficiently mining frequent patterns from big data in distributed many-task computing environments is proposed and it is shown that the proposed method delivers excellent execution time.
Abstract: Many studies have tried to efficiently discover frequent patterns in large databases. The algorithms used in these studies fall into two main categories: apriori algorithms and frequent pattern growth (FP-growth) algorithms. Apriori algorithms operate according to a generate-and-test approach, so performance suffers from testing too many candidate itemsets. Therefore, most recent studies have applied an FP-growth approach to the discovery of frequent patterns. The rapid growth of data, however, has introduced new challenges for the mining of frequent patterns, in terms of both execution efficiency and scalability. Big data often contains a large number of items, a large number of transactions and a long average transaction length, which result in large FP-trees. In addition to its dependence on data characteristics, FP-tree size is also sensitive to the minimum support threshold, because a small support threshold is likely to produce many branches per node, greatly enlarging the FP-tree and the number of reconstructed conditional pattern-based trees. In this paper, we propose a novel algorithm and architecture for efficiently mining frequent patterns from big data in distributed many-task computing environments. Through empirical evaluation under various simulation conditions, we show that the proposed method delivers excellent execution time.
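The paper's many-task architecture is not reproduced here; the sketch below only illustrates the map-reduce-style first pass that any distributed frequent-pattern miner needs: counting item support per partition, merging the partial counts, and pruning by the minimum support threshold the abstract identifies as critical to FP-tree size.

```python
from collections import Counter
from functools import reduce

def local_counts(partition):
    """Map step: count item occurrences in one partition of transactions."""
    c = Counter()
    for transaction in partition:
        c.update(set(transaction))        # count each item once per transaction
    return c

def frequent_items(partitions, min_support):
    """Reduce step: merge partial counts and keep items meeting min_support."""
    total = reduce(lambda a, b: a + b, (local_counts(p) for p in partitions), Counter())
    return {item: n for item, n in total.items() if n >= min_support}

# Two partitions of a toy transaction database; min_support is absolute here.
partitions = [
    [["milk", "bread"], ["milk", "eggs"], ["bread", "eggs"]],
    [["milk", "bread", "eggs"], ["milk"]],
]
print(frequent_items(partitions, min_support=3))   # {'milk': 4, 'bread': 3, 'eggs': 3}
```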

Journal ArticleDOI
Cong Xu, Jiahai Yang, Jianping Weng, Wang Ye, Hui Yu
TL;DR: The impacts of both application and platform features on the VM instance installation rate are studied, and a novel mechanism to optimise the deployment of VM image replicas in cloud storage clusters is proposed.
Abstract: With the immense proliferation of cloud-based applications, the diversity of cloud services exacerbates the complexity of virtual machine provisioning. Addressing the performance issue, a series of solutions have been proposed to optimise the VM replica deployment. However, the majority of the existing mechanisms concentrate on optimising the locations of VM replicas or resizing the scales of different replica sets, without an overall consideration of both platform and application features. The purpose of this paper is to optimise the deployment of VM image replicas in cloud storage clusters. We study the impacts of both application and platform features on the VM instance installation rate, and further propose a novel mechanism to optimise the deployment of VM image replicas. Our deployment mechanism has been implemented in a real cloud storage cluster. Experimental results show that our mechanism improves the overall throughput of a storage cluster and speeds up the image installation operations.

Journal ArticleDOI
TL;DR: An ameliorated design of pvFPGA, a novel system design solution for virtualising an FPGA-based hardware accelerator by a virtual machine monitor (VMM), is presented, along with a technique, hyper-requesting, which enables portions of two requests targeting different accelerator applications to be processed on the FPGA accelerator simultaneously through DMA context switches, achieving request-level parallelism.
Abstract: This paper presents an ameliorated design of pvFPGA, a novel system design solution for virtualising an FPGA-based hardware accelerator by a virtual machine monitor (VMM). The accelerator design on the FPGA can be used to accelerate various applications, regardless of the application computation latencies. In the implementation, we adopt the Xen VMM to build a paravirtualised environment, and a Xilinx Virtex-6 as the FPGA accelerator. Data is transferred between the x86 server and the FPGA accelerator through direct memory access (DMA), and a streaming pipeline technique is adopted to improve the efficiency of data transfer. Several solutions for resolving streaming pipeline hazards are discussed in this paper. In addition, we propose a technique, hyper-requesting, which enables portions of two requests targeting different accelerator applications to be processed on the FPGA accelerator simultaneously through DMA context switches, to achieve request-level parallelism. The experimental results show that hyper-requesting reduces request turnaround time by up to 80%.

Journal ArticleDOI
TL;DR: Results show that the V-CTP protocol can effectively solve the load-imbalance problem while maintaining a data delivery rate as high as CTP's; it also achieves desirable energy efficiency, prolongs network lifetime and improves overall network performance.
Abstract: As the state-of-the-art data collection protocol for wireless sensor networks (WSNs), the collection tree protocol (CTP) has been applied in many practical applications. However, it can lead to load imbalance and data congestion because each node chooses the single optimal path in CTP. To address the problem of load imbalance, this paper presents and evaluates V-CTP, a novel data collection protocol for wireless sensor networks based on CTP. V-CTP considers the number of child nodes and uses a virtual metric (v-eetx) to choose a path. We implement V-CTP and evaluate its performance on an indoor test-bed with 30 to 100 TelosB nodes. The experimental results show that the V-CTP protocol can effectively solve the load-imbalance problem while maintaining a data delivery rate as high as CTP's. In addition, V-CTP achieves desirable energy efficiency, prolongs network lifetime and improves overall network performance.
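The paper does not spell out the v-eetx definition in the abstract; the sketch below shows one plausible shape of such a metric: an ETX-style path cost inflated by a load term based on the prospective parent's child count, so heavily loaded parents look less attractive. The weighting is an assumption.

```python
def v_eetx(link_etx, parent_path_etx, parent_children, load_weight=0.5):
    """Virtual path metric: ETX-style cost plus a child-count load penalty.

    link_etx         -> expected transmissions on the link to the candidate parent
    parent_path_etx  -> the parent's own advertised path cost to the sink
    parent_children  -> how many children that parent already serves
    """
    return link_etx + parent_path_etx + load_weight * parent_children

# Candidate parents: (link ETX, advertised path ETX, current child count).
candidates = {"A": (1.2, 2.0, 6), "B": (1.5, 2.2, 1)}
best = min(candidates, key=lambda p: v_eetx(*candidates[p]))
print(best)   # B: slightly worse links, but far less loaded than A
```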

Journal ArticleDOI
TL;DR: An architecture and algorithm for self-repairing of design bugs in the data path using FPGA is proposed, which is re-configured during the run-time to take over the functions of the faulty component.
Abstract: In recent times, with increased transistor density, it is impossible to verify all components exhaustively for different scenarios. This results in design bugs, also known as extrinsic hardware faults, escaping into the processor chip in spite of various levels of testing. Hence, handling design bugs efficiently in the field is a necessity in modern multi-core processors. In this paper, an architecture and algorithm for self-repairing of design bugs in the data path using an FPGA is proposed. The FPGA is re-configured at run-time to take over the functions of the faulty component. To verify the effectiveness of the proposed design, a representative sample of five faults is injected and handled. The area overhead and time overhead of the proposed design are calculated using Cadence ncverilog and the gem5 simulator, respectively. The area overhead of the proposed design is less than 1% and the performance improvement is around 2.5% compared to existing techniques.

Journal ArticleDOI
TL;DR: In this article, a non-quiescent reliable broadcast algorithm for anonymous distributed systems with fair lossy communication channels is proposed, which tolerates an arbitrary number of crashed processes.
Abstract: Reliable broadcast (RB) is a basic abstraction in distributed systems, because it allows processes to communicate consistently and reliably with each other. This abstraction has been extensively investigated in eponymous distributed systems (i.e., all processes have different identifiers), in contrast to its study in anonymous systems (i.e., processes have no IDs). Hence, this paper aims to study RB in anonymous distributed systems with fair lossy communication channels. Firstly, a non-quiescent RB algorithm tolerating an arbitrary number of crashed processes is given. Then, we introduce an anonymous perfect failure detector AP*. Finally, we propose an extended and quiescent RB algorithm using AP*, in which eventually no process sends messages.

Journal ArticleDOI
TL;DR: This paper presents a lower limb rehabilitation control system designed to train stroke patients by providing variable damping at the knee joint; the system is robust, dependable, works in real time, and is easy to upgrade and expand.
Abstract: This paper presents a lower limb rehabilitation control system designed to train stroke patients by providing variable damping at the knee joint. The hardware, based on the TMS320F28335 digital signal processor (DSP), and the software, based on the DSP/BIOS embedded operating system, are designed. Effective control of the brushless DC (BLDC) motor is implemented by combining the upper-level impedance control algorithm with the underlying field-oriented control (FOC) algorithm, which improves the performance of the rehabilitation robot control system. The proposed integrated control system is robust, dependable, works in real time, and is easy to upgrade and expand.

Journal ArticleDOI
TL;DR: This paper proposes a novel public auditing scheme for data sharing with multi-user modification that not only supports public checking and efficient user revocation but also provides backward security, and is provably secure under the bilinear Diffie-Hellman problem.
Abstract: In most data sharing protocols for cloud storage, the operation of updating the outsourced data is executed only by the data owner. Obviously, this is far from practical owing to the tremendous computational cost placed on the data owner. Until now, there have been only a few protocols in which multiple cloud users are allowed to update the outsourced data with integrity assurance, and these protocols do not consider the collusion attack between misbehaving cloud servers and revoked users, which is an important challenge for data sharing protocols. To support multi-user modification and resist collusion attack, we propose a novel public auditing scheme for data sharing with multi-user modification in this paper. Our scheme not only supports public checking and efficient user revocation, but also provides backward security. At the same time, our scheme is provably secure under the bilinear Diffie-Hellman problem. To increase the efficiency of the auditor's verification, an improved protocol is given, in which only one pairing operation is required in the auditor's verification phase. By comparison with other protocols, our scheme has lower computation cost on the auditor and stronger security.

Journal ArticleDOI
TL;DR: An experiment to evaluate ten data-race detection techniques on 100 small-scale or middle-scale C/C++ programs and give suggestions of which technique is the most suitable one to use when the target program exhibits particular characteristics.
Abstract: Many techniques for dynamically detecting data races in multithreaded programs have been proposed. However, it is unclear how these techniques compare in terms of precision, overhead and scalability. This paper presents an experiment to evaluate ten data-race detection techniques on 100 small-scale or middle-scale C/C++ programs. The selected ten techniques, implemented in the same Maple framework, cover not only classical but also state-of-the-art dynamic data-race detection. We compare the ten techniques and try to give reasonable explanations for why some techniques are weaker or stronger than others. Evaluation results show that no one technique performs perfectly for all programs according to the three criteria. Based on the evaluation and comparison, we give suggestions on which technique is the most suitable to use when the target program exhibits particular characteristics. Later researchers can also build on our results to construct better detection techniques.

Journal ArticleDOI
TL;DR: This study attempts to use the high-speed computing capability of cloud computing and smart phones to achieve a vehicle management system running on moving cars to achieve real-time services, motorcade managements and caddy caring services.
Abstract: This study attempts to use the high-speed computing capability of cloud computing and smartphones to achieve a vehicle management system running on moving cars. Our research exploits smartphones or tablet PCs as a car machine that provides location-based services on the mobile device with the global positioning system (GPS), a wireless camcorder, Google Maps, visualisation information and graphical presentation to provide personalised services (Zhang et al., 2010; Sultan, 2010). This study allows users to instantly access the information and manage the movement of cars. Additionally, the location of golf carts, the surrounding environment and personnel information are transmitted through a wireless network to the monitoring centre. Thus, the monitoring centre can provide real-time services, motorcade management and caddy caring services. If the proposed mechanism performs well, this study will be extended to other applications in the near future.

Journal ArticleDOI
TL;DR: This paper proposes a hierarchy-based privacy-preserving access control scheme in which data is encrypted and decrypted by the cloud using a blinded encryption and decryption technique; the computation cost for the owner and the user remains constant for any file size, making the scheme suitable for energy-deficient devices.
Abstract: Cloud computing is a term that has evolved drastically over the years. It involves deploying groups of remote servers and software that are networked to allow centralised data storage and online access to computer services or resources. However, an important problem in public clouds is how to selectively share data based on fine-grained attribute-based access control policies while at the same time assuring the confidentiality of the data and preserving the privacy of users from the cloud. With the confidentiality of data in mind, many schemes and encryption algorithms have been implemented, but most of them have high computational costs and overload the data owner. In this paper, we propose a hierarchy-based privacy-preserving access control scheme in which data is encrypted and decrypted by the cloud using a blinded encryption and decryption technique. The computation cost for the owner and the user remains constant for any file size, which makes our scheme suitable for energy-deficient devices. The experimental results show that our scheme is more suitable for energy-deficient devices without compromising the security of the data.

Journal ArticleDOI
TL;DR: A similarity graph is built to describe the relationships between those reviewers who post reviews on the same products, and an iterative algorithm is proposed to calculate the spam score for each reviewer using the edge weight and key features of adjacent reviewers in the graph.
Abstract: In recent years, e-commerce has become so popular that many consumers make transactions online. To make more profit, some merchants hire spammers to give high ratings to promote certain products, or to give malicious negative reviews to defame competitors' products. Such misleading reviews are destructive to the fairness of the e-commerce environment. Therefore, it is very important to detect spammers who keep posting deceptive reviews. However, existing methods have low recognition rates for detecting spam reviews. In this paper, we first propose to use SCTD to reduce the whole dataset, so that we can focus on the periods when spam activity is more likely to occur. Then, a similarity graph is built to describe the relationships between reviewers who post reviews on the same products. Finally, we propose an iterative algorithm to calculate the spam score for each reviewer using the edge weights and key features of adjacent reviewers in the graph. Experimental results show that our proposed method is much more effective at detecting spammers.
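The paper's exact update rule and features are not given in the abstract; the sketch below only illustrates the iterative idea: each reviewer's spam score is repeatedly recomputed as a mix of its own prior (from individual key features) and the edge-weighted scores of neighbours in the reviewer similarity graph. The mixing factor and toy graph are assumptions.

```python
def propagate_spam_scores(graph, prior, damping=0.5, iterations=20):
    """Iteratively blend each reviewer's prior score with neighbours' scores.

    graph: {reviewer: {neighbour: edge_weight}}  (reviewer similarity graph)
    prior: {reviewer: initial spam score from individual key features, in [0, 1]}
    """
    score = dict(prior)
    for _ in range(iterations):
        new = {}
        for r, nbrs in graph.items():
            total_w = sum(nbrs.values())
            if total_w == 0:
                new[r] = prior[r]
                continue
            neighbour_avg = sum(w * score[n] for n, w in nbrs.items()) / total_w
            new[r] = (1 - damping) * prior[r] + damping * neighbour_avg
        score = new
    return score

graph = {"r1": {"r2": 0.9}, "r2": {"r1": 0.9, "r3": 0.1}, "r3": {"r2": 0.1}}
prior = {"r1": 0.8, "r2": 0.3, "r3": 0.1}
print(propagate_spam_scores(graph, prior))   # r2 is pulled up by suspicious r1
```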

Journal ArticleDOI
TL;DR: Evaluations show that the proposed design of integrating a message-passing engine into each router of the network-on-chip as well as the programming-friendly message passing interface for these engines can decrease the message passing latency by one or two orders of magnitude.
Abstract: Compared with the traditional shared-memory programming model, message passing models for chip multiprocessors (CMPs) have distinct advantages due to the relative ease of validation and the fact that they are more portable. This paper proposes a design that integrates a message-passing engine into each router of the network-on-chip, together with a programming-friendly message passing interface for these engines. Combined with the DMA mechanism, the proposed design uses the on-chip RAM as an intermediary message buffer and frees the CPU core from message-passing operations to a large extent. The detailed design and implementation, including the register-transfer-level (RTL) descriptions of the engine, are presented. Evaluations show that, compared with a software-based solution, the design can decrease message-passing latency by one to two orders of magnitude. Co-simulation also demonstrates that the proposed design effectively boosts the performance of point-to-point on-chip communications, while power and chip-area consumption are both limited.

Journal ArticleDOI
TL;DR: The main aim of this work is to broadcast safety messages while avoiding packet collisions and reducing packet loss, thereby improving network efficiency; the behaviour is analysed using different protocols in vehicular ad hoc networks.
Abstract: In vehicular ad hoc networks, messages are broadcast spontaneously by active mobile nodes to all neighbourhood nodes within connectivity range. These important messages are time sensitive and may suffer severe delay. The main aim of this work is to broadcast safety messages while avoiding packet collisions and reducing packet loss, thereby improving the efficiency of the network; this is analysed using different protocols in vehicular ad hoc networks. Typical carrier sense multiple access, in which users contend for channel access, does not seem suitable for this application. Instead, we follow a protocol sequence method to broadcast the safety message: protocol sequences are binary sequences of 0s and 1s, and each user transmits its packet in a time slot whenever its sequence reads 1. The method does not require time synchronisation between the mobile nodes. We compare the delay performance with the dedicated short range communication protocol, an ALOHA-type random access scheme, and the zone routing protocol. By arranging the data packets, a hard delay guarantee might be achieved. The network delay is reduced with the zone routing protocol.
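The concrete sequences used in the paper are not reproduced here; the sketch below only illustrates the transmission rule the abstract describes: each user holds a binary protocol sequence and transmits its packet exactly in the slots where its sequence reads 1. The toy sequences (and the zero starting offset) are assumptions.

```python
def transmissions(sequences, num_slots):
    """Return, per slot, which users transmit according to their 0/1 sequences.

    sequences: {user: list of 0/1}; each user cycles through its own sequence
    (here all users start at offset 0 for simplicity).
    """
    schedule = []
    for slot in range(num_slots):
        senders = [u for u, seq in sequences.items() if seq[slot % len(seq)] == 1]
        schedule.append(senders)
    return schedule

seqs = {"carA": [1, 0, 0, 1], "carB": [0, 1, 1, 0], "carC": [0, 1, 0, 0]}
for slot, senders in enumerate(transmissions(seqs, 4)):
    status = "collision" if len(senders) > 1 else (senders[0] if senders else "idle")
    print(slot, status)   # slot 1 shows why the sequences must be chosen carefully
```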

Journal ArticleDOI
TL;DR: This paper constructs a master key leakage-resilient anonymous hierarchical identity-based encryption scheme based on dual system encryption techniques and considers security for public-key encryption with multi-keyword ranked search (PEMKRS) in the presence of secret key leakage in the trapdoor generation algorithm.
Abstract: Hierarchical identity-based encryption can be used to protect the sensitive data in cloud system. However, as the traditional security model does not capture side-channel attacks, many hierarchical identity-based encryption schemes do not resist this kind of attack, which can exploit various forms of unintended information leakage. Inspired by these, leakage-resilience cryptography formalises some models of side-channel attacks. In this paper, we consider the memory leakage resilience in anonymous hierarchical identity-based encryption schemes. By applying Lewko et al.'s tools, we construct a master key leakage-resilient anonymous hierarchical identity-based encryption scheme based on dual system encryption techniques. As an interesting application of our scheme, we consider security for public-key encryption with multi-keyword ranked search (PEMKRS) in the presence of secret key leakage in the trapdoor generation algorithm, and provide a generic construction of leakage-resilient secure PEMKRS from a master key leakage-resilient anonymous hierarchical identity-based encryption scheme.

Journal ArticleDOI
TL;DR: Motivated by the weakness of current app packing models, DexSplit keeps the protected dex file split into several pieces throughout the application's entire lifecycle, making it difficult to dump.
Abstract: With the increasing popularity and adoption of Android-based smartphones, there is more and more Android malware in app marketplaces. Moreover, most malware consists of repackaged versions of legitimate applications. Existing solutions have mostly focused on post-mortem detection of repackaged applications. Lately, packing mechanisms have been proposed to enable self-defence for Android apps against repackaging. However, since current app packing systems all load the executable file into process memory intact and in plaintext, it can easily be dumped, which would enable repackaging again. To address this problem, we propose a more effective protection model, DexSplit, to prevent app repackaging. Motivated by the weakness of current app packing models, DexSplit keeps the protected dex file split into several pieces throughout the application's entire lifecycle, which makes it difficult to dump. Experiments with a DexSplit prototype using six typical apps show that DexSplit effectively defends against app repackaging threats with reasonable performance overhead.

Journal ArticleDOI
TL;DR: This paper presents a comprehensive solution using application-independent metrics consisting of different types of vulnerability measures to prevent brute force attacks in the cloud-based software-as-a-service (SaaS) model.
Abstract: The cloud model of computing will be widely adopted by different organisations if it can support a higher level of data privacy than is currently supported. This higher level of data privacy is mandatory for storing and querying sensitive data in cloud-based information system applications such as customer relationship management (CRM) systems. Identity-based homomorphic encryption and tokenisation have proved their efficiency in providing privacy while simultaneously allowing encrypted data to be queried. However, in the cloud-based software-as-a-service (SaaS) model, an adversary can run brute force attacks which can reveal the attribute values by colluding with the service provider. It is a significant challenge to detect and prevent such attacks. This paper presents a comprehensive solution using application-independent metrics consisting of different types of vulnerability measures. This paper also presents the detailed design of a system that uses application-independent metrics to prevent brute force attacks.