
Showing papers in "International Journal of Grid and Utility Computing in 2018"


Journal ArticleDOI
TL;DR: A performance study indicates that the proposed negotiation-based heuristic for dynamic pricing, which considers the behaviour of both the service provider and the customer and tries to satisfy both optimally, performs well and is a potential candidate for implementation in a real cloud.
Abstract: Over the years, cloud computing has emerged as a good business platform for IT-related services. In the cloud, the prices of computing resources act as a lever to control their utilisation. That is why, as the number of cloud customers increased, cloud service providers started offering resources with various pricing schemes to attract customers. This work proposes a negotiation-based heuristic for dynamic pricing that considers the behaviour of both the service provider and the customer and tries to satisfy both optimally. Both customer and provider are reluctant to reveal information about their utility to each other. The designed utility function for the provider considers the payment offered by the customer and the provider's opinion of the customer. Similarly, the utility function for the customer considers the price offered by the provider and the customer's opinion of the provider. This encourages both to offer their true value. A performance study indicates that the proposed method performs well and is a potential candidate for implementation in a real cloud.
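
As a rough illustration of the negotiation idea described above, the following sketch pits a conceding provider against a conceding customer, each with a private utility that combines price with an opinion score of the other party. The utility shapes, opinion values and concession step are illustrative assumptions, not the paper's actual functions.

```python
# Minimal sketch of bilateral price negotiation with private utilities.
# The utility shapes, opinion scores and concession step are illustrative
# assumptions, not the paper's actual functions.

def provider_utility(price, opinion_of_customer, reserve=2.0):
    """Higher price and a better opinion of the customer raise utility."""
    return max(0.0, price - reserve) * opinion_of_customer

def customer_utility(price, opinion_of_provider, budget=5.0):
    """Lower price and a better opinion of the provider raise utility."""
    return max(0.0, budget - price) * opinion_of_provider

def negotiate(ask=6.0, bid=1.0, step=0.25, max_rounds=50):
    """Both sides concede until offers cross; neither reveals its utility."""
    for _ in range(max_rounds):
        mid = (ask + bid) / 2
        if bid >= ask and provider_utility(mid, 0.9) > 0 \
                and customer_utility(mid, 0.8) > 0:
            return round(mid, 2)       # deal struck at the midpoint
        ask -= step                    # provider lowers its asking price
        bid += step                    # customer raises its bid
    return None                        # no agreement within the round limit

print(negotiate())   # 3.5 with these illustrative parameters
```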

12 citations


Journal ArticleDOI
TL;DR: An improved SMURF scheme is proposed in two respects: the first is based on dynamic tags and the second considers the influence of data redundancy; the scheme sizes the sliding window reasonably in dynamic settings and improves accuracy.
Abstract: RFID technology is widely used in the Internet of Things (IoT) environment for object tracking. With the expansion of its application areas, the reliability of business data has become increasingly important. To fulfil the needs of upper-level applications, data cleaning is essential, as it directly affects the correctness and completeness of the business data, so RFID data must be filtered and handled. The traditional statistical smoothing for unreliable RFID data (SMURF) algorithm is aimed only at constant-speed data flows during data cleaning. In this paper, we overcome this shortcoming of the SMURF algorithm and propose an improved scheme in two respects: the first is based on dynamic tags, and the second considers the influence of data redundancy. Experiments verify that the improved scheme sizes the sliding window reasonably in dynamic settings and also improves accuracy.
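
To make the window-sizing idea concrete, here is a minimal sketch of SMURF-style adaptive smoothing for a single tag: the window grows just enough that a tag present with estimated read probability p is detected with probability at least 1 - delta. The trace, delta and the growth cap are illustrative; the paper's dynamic-tag and redundancy extensions are not reproduced.

```python
import math

# Sketch of SMURF-style adaptive window sizing for one RFID tag.
# p_est is the tag's observed per-epoch read probability inside the
# current window; delta is the tolerated probability of missing a
# tag that is actually present. Both values here are illustrative.

def required_window(p_est, delta=0.05):
    """Smallest window w with (1 - p_est)**w <= delta, i.e. a present
    tag is detected with probability at least 1 - delta."""
    if p_est >= 1:
        return 1
    if p_est <= 0:
        return float("inf")        # tag unreadable: no finite window helps
    return math.ceil(math.log(delta) / math.log(1 - p_est))

def smooth(readings, delta=0.05):
    """Grow/shrink the window per epoch based on the estimated read rate."""
    w = 1
    for epoch, seen in enumerate(readings):
        lo = max(0, epoch - w + 1)
        window = readings[lo:epoch + 1]
        p_est = sum(window) / len(window)
        if p_est > 0:
            w = min(required_window(p_est, delta), 25)   # cap window growth
        yield epoch, w, p_est > 0      # True: tag reported present

# 1 = tag read this epoch, 0 = missed read
for epoch, w, present in smooth([1, 0, 1, 1, 0, 0, 1, 0, 0, 0]):
    print(epoch, w, present)
```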

12 citations


Journal ArticleDOI
TL;DR: The goal of this model is to present the stages of data processing (extraction, storage, processing and visualisation) as well as the types of tools needed for each phase.
Abstract: The massive amount of data generated in projects focused on smart cities creates a degree of complexity in managing all this information. To address this problem, several approaches have been developed in recent years. In this paper we propose a big data infrastructure model for a smart city project. The goal of this model is to present the stages of data processing (extraction, storage, processing and visualisation) as well as the types of tools needed for each phase. To implement our proposed model, we used ParticipACT Brazil, a smart city project. This project uses different databases to compose its big data and uses these data to solve urban problems. We observe that our model provides a structured vision of the software to be used in the big data server of ParticipACT Brazil.

12 citations


Journal ArticleDOI
TL;DR: An ontology-based cloud infrastructure service discovery and selection system that defines functional and non-functional concepts, attributes and relations of infrastructure services is designed and it is shown how the system enables one to discover appropriate services optimally as requested by consumers.
Abstract: In recent years, owing to the global economic downturn, many organisations have resorted to cutting their Information Technology (IT) expenses by adopting innovative computing models such as cloud computing, which allows businesses to reduce their fixed IT costs through a greener, scalable, cost-effective alternative for utilising IT resources. A growing number of pay-per-use cloud services are now available on the web in the form of Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). With the increase in the number of services, there has also been an increase in the demand for and adoption of cloud services, making cloud service identification and discovery a challenging task. This is due to varied service descriptions, non-standardised naming conventions, and heterogeneity in the types and features of cloud services. Thus, selecting an appropriate cloud service according to consumer requirements is a daunting task, especially for applications that use a composition of different cloud services. In this paper, we design an ontology-based cloud infrastructure service discovery and selection system that defines functional and non-functional concepts, attributes and relations of infrastructure services. We show how the system enables one to discover appropriate services optimally as requested by consumers.

11 citations


Journal ArticleDOI
TL;DR: A revised algorithm based on a weighted visual dictionary built by K-means clustering is proposed; it describes image information efficiently and improves image classification performance.
Abstract: Image classification constitutes an important issue in large-scale, cluster-based image data processing systems. In this context, a significant number of methods relying on BoW models and SVMs have been proposed for image fusion systems. Some works classify these methods into a generative mode and a discriminative mode. Very few works deal with a classifier based on the fusion of these modes when building an image classification system. In this paper, we propose a revised algorithm based on a weighted visual dictionary built by K-means clustering. First, it clusters SIFT and Laplace-spectrum features separately to obtain low-dimensional local characteristics of the images (sub-visual dictionaries); then it clusters these low-dimensional characteristics to obtain the super visual dictionaries of the two features; finally, the parent visual dictionaries are weighted so that each plays a balanced role, and merged into the final visual dictionary. Experimental results show that the algorithm and this model describe image information efficiently and improve image classification performance, and they can be applied in unmanned navigation, machine vision and other fields.
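
The two-stage dictionary construction can be sketched as follows: cluster each feature type into its own sub-dictionary, then combine the quantisation histograms with fixed weights. Random vectors stand in for real SIFT and Laplace-spectrum descriptors, and the 0.6/0.4 weights are assumptions rather than the paper's learned balance.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of building a weighted visual dictionary from two feature types.
# Real systems would extract SIFT and Laplacian-spectrum descriptors;
# here random vectors stand in, and the 0.6/0.4 weights are assumptions.

rng = np.random.default_rng(0)
sift_feats = rng.normal(size=(500, 128))   # stand-in for SIFT descriptors
spec_feats = rng.normal(size=(500, 64))    # stand-in for Laplace spectrum

def sub_dictionary(features, k):
    """Cluster one feature type into k visual words (a sub-dictionary)."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)

sift_dict = sub_dictionary(sift_feats, k=32)
spec_dict = sub_dictionary(spec_feats, k=32)

def bow_histogram(feats, km, weight):
    """Quantise descriptors against one sub-dictionary, weight the counts."""
    words = km.predict(feats)
    hist = np.bincount(words, minlength=km.n_clusters).astype(float)
    return weight * hist / max(hist.sum(), 1)

# Final image representation: weighted concatenation of both histograms.
image_vec = np.concatenate([
    bow_histogram(sift_feats[:40], sift_dict, weight=0.6),
    bow_histogram(spec_feats[:40], spec_dict, weight=0.4),
])
print(image_vec.shape)   # (64,)
```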

10 citations


Journal ArticleDOI
TL;DR: An Enhanced Cuckoo Search (ECS) algorithm is proposed to address VM placement with a focus on energy consumption; comparison results illustrate that the proposed ECS algorithm consumes less energy than the competing algorithms while maintaining steady SLA and VM-migration performance.
Abstract: In order to enhance resource utilisation and power efficiency in cloud data centres, it is important to perform Virtual Machine (VM) placement in an optimal manner. VM placement maps virtual machines to physical machines (PMs). Cloud computing researchers have recently introduced various meta-heuristic algorithms for VM placement that consider optimised energy consumption. However, these algorithms do not meet the optimal energy consumption requirements. This paper proposes an Enhanced Cuckoo Search (ECS) algorithm to address VM placement with a focus on energy consumption. The performance of the proposed algorithm is evaluated using three different workloads in the CloudSim tool. The evaluation includes a comparison of the proposed algorithm against the existing Genetic Algorithm (GA), Optimised Firefly Search (OFS) algorithm, and Ant Colony (AC) algorithm. The comparison results illustrate that the proposed ECS algorithm consumes less energy than the competing algorithms while maintaining steady SLA and VM-migration performance. The ECS algorithm consumes around 25% less energy than GA, 27% less than OFS, and 26% less than AC.
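
A generic cuckoo-search loop for this kind of placement problem might look like the sketch below; it is not the paper's exact ECS. Energy is proxied by the number of active PMs plus an overload penalty, and the "Levy flight" is simplified to a small random reassignment.

```python
import random

# Generic cuckoo-search sketch for VM placement (not the paper's exact ECS).
# Energy is proxied by the number of active PMs plus an overload penalty;
# all sizes, capacities and parameters below are illustrative assumptions.

VM_CPU = [20, 35, 10, 45, 25, 30, 15, 40]     # CPU demand per VM
PM_CAP = 100                                  # identical PM capacity
N_PMS, N_NESTS, ITERS, PA = 4, 15, 300, 0.25  # PA: fraction of nests replaced

def energy(assign):
    """Proxy objective: active PMs + heavy penalty for overloaded PMs."""
    load = [0] * N_PMS
    for vm, pm in enumerate(assign):
        load[pm] += VM_CPU[vm]
    active = sum(1 for l in load if l > 0)
    overload = sum(max(0, l - PM_CAP) for l in load)
    return active + 10 * overload

def mutate(assign):
    """'Levy flight', simplified: move a few random VMs to random PMs."""
    new = list(assign)
    for _ in range(random.randint(1, 3)):
        new[random.randrange(len(new))] = random.randrange(N_PMS)
    return new

nests = [[random.randrange(N_PMS) for _ in VM_CPU] for _ in range(N_NESTS)]
for _ in range(ITERS):
    cuckoo = mutate(random.choice(nests))          # generate a new solution
    victim = random.randrange(N_NESTS)             # compare with random nest
    if energy(cuckoo) < energy(nests[victim]):
        nests[victim] = cuckoo
    nests.sort(key=energy)                         # abandon worst PA fraction
    for i in range(int((1 - PA) * N_NESTS), N_NESTS):
        nests[i] = mutate(nests[i])

best = min(nests, key=energy)
print(best, energy(best))   # converges towards 3 active PMs for this demand
```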

9 citations


Journal ArticleDOI
TL;DR: A dynamic virtual machine migration (DVMM) algorithm is proposed to reduce the total electric energy consumption of servers as well as the average execution time of processes.
Abstract: Virtual machines are widely used to support applications with virtual services in server clusters. Here, a virtual machine can migrate from a host server to a guest server. In this paper, we consider a cluster where virtual machines are dynamically created and dropped depending on the number of processes. We propose a dynamic virtual machine migration (DVMM) algorithm to reduce the total electric energy consumption of servers. If an application issues a process to a cluster, the most energy-efficient host server is first selected, and the process is then performed on a virtual machine of that server. A virtual machine subsequently migrates from a host server to a guest server so that the total electric energy consumption of servers can be reduced. In the evaluation, we show that the total electric energy consumption and active time of servers and the average execution time of processes can be reduced by the DVMM algorithm.
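
The host-selection step can be illustrated with a simple marginal-energy rule: place the new process on the server whose power draw grows least, which naturally avoids waking idle machines. The linear power model below is an assumed stand-in for the paper's energy model.

```python
# Sketch of the energy-aware selection idea behind DVMM: place a new
# process on the host whose estimated energy increase is smallest, and
# migrate a VM when doing so lets its current server go idle. The linear
# power model (idle + per-process cost) is an illustrative assumption.

IDLE_POWER, PER_PROC_POWER = 60.0, 8.0   # watts, assumed

def power(n_procs):
    return 0.0 if n_procs == 0 else IDLE_POWER + PER_PROC_POWER * n_procs

def energy_increase(server_procs):
    """Marginal power of adding one process to a server."""
    return power(server_procs + 1) - power(server_procs)

def select_host(servers):
    """servers: {name: current process count}; pick min marginal energy."""
    return min(servers, key=lambda s: energy_increase(servers[s]))

servers = {"s1": 3, "s2": 0, "s3": 1}
print(select_host(servers))   # 's1' (8 W marginal) beats waking idle 's2' (68 W)
```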

9 citations


Journal ArticleDOI
TL;DR: This paper presents a framework that enables the adoption of a per-service SLA model, supporting the automatic implementation of cloud security SLAs tailored to the needs of each customer for specific service instances.
Abstract: In the cloud computing context, Service Level Agreements (SLAs) 'tailored' to specific Cloud Service Customers (CSCs) still seem to be a utopia, and things are even worse as regards the security terms to be guaranteed. In fact, existing cloud SLAs focus on only a few service terms, and Cloud Service Providers (CSPs) mainly provide uniform guarantees for all offered services and for all customers, regardless of any particular service characteristics or customer-specific needs. This paper presents a framework that enables the adoption of a per-service SLA model, supporting the automatic implementation of cloud security SLAs tailored to the needs of each customer for specific service instances. In particular, the process and the software architecture for per-service SLA implementation are shown. A case study application, related to the provisioning of a secure web container service, is presented and discussed to demonstrate the feasibility and effectiveness of the proposed solution.

7 citations


Journal ArticleDOI
TL;DR: A fuzzy energy-aware algorithm is proposed to estimate host resource usage; it generates rules and membership functions dynamically to adapt to workload changes and optimise the energy-performance trade-off.
Abstract: Placement of virtual machines (VMs) on physical nodes, as a sub-problem of dynamic VM consolidation, has been driven mainly by energy-efficiency and performance objectives. However, due to varying workloads in VMs, placement of the VMs can cause a violation of the Service Level Agreement (SLA). In this paper, VM placement is regarded as a bin packing problem, and a fuzzy energy-aware algorithm is proposed to estimate host resource usage. The estimated resource usage is used to find the most energy-efficient host to which to reallocate the VMs. The fuzzy algorithm generates rules and membership functions dynamically to adapt to workload changes. The main objective of the proposed algorithm is to optimise the energy-performance trade-off. The effectiveness of the proposed algorithm is evaluated through simulations on random and real-world PlanetLab workloads. Simulation results demonstrate that the proposed algorithm reduces energy consumption while providing a high level of adherence to the SLAs.
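
A minimal sketch of the fuzzy estimation step, assuming fixed triangular membership functions (the paper generates rules and memberships dynamically from the workload): recent CPU readings are fuzzified into low/medium/high rules, defuzzified into a predicted utilisation, and the VM goes to the most consolidated host that stays under an assumed SLA threshold.

```python
# Sketch of fuzzy usage estimation for VM placement. Membership functions
# and rules here are fixed triangles for brevity; the paper generates them
# dynamically from the observed workload.

def tri(x, a, b, c):
    """Triangular membership with peak at b, support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_usage(recent_cpu):
    """Estimate next-interval CPU use via centre-of-gravity defuzzification."""
    x = sum(recent_cpu) / len(recent_cpu)
    rules = [   # (membership degree, rule output: predicted utilisation)
        (tri(x, -0.4, 0.0, 0.4), 0.2),   # low usage    -> predict 20%
        (tri(x,  0.2, 0.5, 0.8), 0.5),   # medium usage -> predict 50%
        (tri(x,  0.6, 1.0, 1.4), 0.9),   # high usage   -> predict 90%
    ]
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else x

def place(vm_demand, hosts):
    """Pick the host that stays under 80% predicted use with least headroom."""
    feasible = []
    for name, recent in hosts.items():
        predicted = fuzzy_usage(recent) + vm_demand
        if predicted <= 0.8:                  # SLA safety threshold (assumed)
            feasible.append((predicted, name))
    return max(feasible)[1] if feasible else None   # most consolidated host

hosts = {"h1": [0.3, 0.4, 0.35], "h2": [0.6, 0.7, 0.65], "h3": [0.1, 0.05, 0.1]}
print(place(0.2, hosts))   # 'h2': fullest host that still fits safely
```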

6 citations


Journal ArticleDOI
TL;DR: The results show that the proposed PUFSIRS architecture can enhance the accuracy and efficiency of gathering appropriate web records, collected from various fields such as food, education, news and healthcare, compared with current schemes.
Abstract: In this work, an ontology-based Semantic Supported Information Retrieval System (SIRS) is introduced, in which the user's input query is parsed by a Hypertext Markup Language (HTML) parser, and a Probabilistic Latent Semantic Indexing (PLSI) algorithm is then utilised to gather the details effectively. These queries are processed with the assistance of the field concepts of pre-existing domain ontologies and a mediator thesaurus, and the semantic associations among them are determined. The proposed work concentrates on resolving web search issues, and in particular on personalised web search. The SIRS is therefore combined with personalised search over item features, termed the Personalised User Preference and Feature based Semantic Information Retrieval System (PUFSIRS) architecture. PUFSIRS operates through a particle agent, which carries out the SIRS according to the user's curiosity via Multi-Criteria Particle Swarm Optimisation (MCPSO), thereby accounting for the user's personal interest. The appropriate details for the semantic query are gathered and categorised based on the pertinence computed by the MCPSO procedure. The results show that the proposed PUFSIRS architecture can enhance the accuracy and efficiency of gathering appropriate web records, collected from various fields such as food, education, news and healthcare, compared with current schemes. The results of the proposed PUFSIRS architecture are measured in terms of precision, recall, F-measure, accuracy and processing time.

6 citations


Journal ArticleDOI
TL;DR: In this work, simulation results show that the proposed EEUNM model keeps the same security level while achieving better energy utilisation of nodes for data packet delivery.
Abstract: This paper introduces a new routing model called the Effective Energy Utilisation of Node Model (EEUNM), created for energy-efficient and secure data delivery in wireless sensor networks. EEUNM addresses wireless sensor applications such as military and forest fields. The aim of this model is to find the ideal path based on the energy consumption of nodes. The focus of EEUNM is a new approach that simultaneously factors the energy and trustworthiness of routes into the routing model. EEUNM finds and selects the path on the basis of maximum node capability, while incurring additional overhead cost compared with the common protocols AODV-EHA and LTB-AODV. In this work, simulation results show that the proposed EEUNM model keeps the same security level while achieving better energy utilisation of nodes for data packet delivery.

Journal ArticleDOI
TL;DR: The simulation results show that the throughput and delay increase, while the fairness index decreases, with increasing transmission rate, and that the throughput of Hybrid WMN is higher than that of Infrastructure/Backbone (I/B) WMN.
Abstract: In this paper, we evaluate the performance of two Wireless Mesh Network (WMN) architectures considering throughput, delay and fairness index metrics. For the simulations, we used a Genetic Algorithm (GA)-based simulation system (called WMN-GA) and ns-3. We compare the performance of the two architectures considering normal and uniform distributions, different transmission rates and the OLSR protocol. The simulation results show that the throughput and delay increase, while the fairness index decreases, with increasing transmission rate. The throughput of Hybrid WMN is higher than that of Infrastructure/Backbone (I/B) WMN, but the delay of I/B WMN is higher than that of Hybrid WMN for the normal distribution. The fairness index for the normal distribution is higher than for the uniform distribution.

Journal ArticleDOI
TL;DR: This paper examines how to generate the sub-tree of matching keywords that the user can access under the access control rules for the user's role, and a corresponding algorithm is proposed to achieve safe and efficient keyword search.
Abstract: With the increasing rate at which users store and share information in the cloud, data storage brings new challenges to Extensible Markup Language (XML) databases in big data environments. The efficient retrieval of data, with protection and privacy guarantees for accessing mass data in the cloud, is more and more important. Most existing research on XML data query and retrieval focuses on efficiency, index construction, and so on. However, these methods and algorithms do not take into account the safety of the data and data structures themselves. Furthermore, traditional access control rules read XML document nodes in a dynamic environment, and little work addresses the data security and privacy protection requirements of dynamic keyword-based queries. In order to improve search efficiency under security constraints, this paper examines how to generate the sub-tree of matching keywords that the user can access under the access control rules for the user's role. A corresponding algorithm is proposed to achieve safe and efficient keyword search.
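
The prune-then-search idea can be sketched with Python's standard xml.etree module: drop every subtree the role may not read, then match the keyword only against what survives. The document, roles and tag-set rules below are hypothetical simplifications of real access control policies.

```python
import xml.etree.ElementTree as ET

# Sketch: prune an XML tree to the nodes a role may read, then return the
# subtrees whose text matches the keyword. The rule format (allowed tags
# per role) is a simplification of real access control policies.

DOC = """<hospital>
  <patient><name>Alice</name><diagnosis>flu</diagnosis></patient>
  <patient><name>Bob</name><diagnosis>asthma</diagnosis></patient>
  <billing><invoice>Alice 120</invoice></billing>
</hospital>"""

RULES = {"nurse": {"hospital", "patient", "name", "diagnosis"},
         "accountant": {"hospital", "billing", "invoice"}}

def prune(node, allowed):
    """Drop every child subtree whose tag the role may not access."""
    for child in list(node):
        if child.tag not in allowed:
            node.remove(child)
        else:
            prune(child, allowed)

def keyword_search(xml_text, role, keyword):
    root = ET.fromstring(xml_text)
    prune(root, RULES[role])
    return [ET.tostring(el, encoding="unicode").strip()
            for el in root.iter()
            if el.text and keyword in el.text]

print(keyword_search(DOC, "nurse", "flu"))        # diagnosis node visible
print(keyword_search(DOC, "accountant", "flu"))   # [] - pruned away
```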

Journal ArticleDOI
TL;DR: An iterative regularisation strategy with a second-order derivative penalty term is presented for the reconstruction of undersampled image datasets, together with an implementation of the proposed iterative algorithm on a massively parallel architecture, a multi-Graphics Processing Unit (GPU) system.
Abstract: A recent trend in the Magnetic Resonance Imaging (MRI) research field is to design and adopt machines that are able to acquire undersampled clinical data, reducing the time for which the patient is lying in the body scanner. Unfortunately, the missing information in these undersampled acquired datasets leads to artefacts in the reconstructed image; therefore, computationally expensive image reconstruction techniques are required. In this paper, we present an iterative regularisation strategy with a second-order derivative penalty term for the reconstruction of undersampled image datasets. Moreover, we compare this approach with other constrained minimisation methods, resulting in improved accuracy. Finally, an implementation on a massively parallel architecture environment, a multi Graphics Processing Unit (GPU) system, of the proposed iterative algorithm is presented. The resulting performance gives clinically-feasible reconstruction run times, speed-up and improvements in terms of reconstruction accuracy of the undersampled MRI images.
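
A standard way to write such a reconstruction problem, consistent with the abstract's description though not necessarily the paper's exact notation, is a least-squares data term plus a second-order derivative penalty:

```latex
% Least-squares data fit plus second-order derivative penalty:
\min_{x}\; \tfrac{1}{2}\,\lVert F_u\,x - y \rVert_2^2
         \;+\; \tfrac{\lambda}{2}\,\lVert D_2\,x \rVert_2^2
% F_u : undersampled Fourier (acquisition) operator
% y   : acquired k-space samples;  x : image to reconstruct
% D_2 : discrete second-order derivative operator
% One gradient-descent iteration for this objective:
x^{(k+1)} = x^{(k)} - \tau \left( F_u^{*}\!\left(F_u\,x^{(k)} - y\right)
          + \lambda\, D_2^{*} D_2\, x^{(k)} \right)
```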

Journal ArticleDOI
TL;DR: This research helps the teaching community to realise the influence of students' self-esteem on their performance and helps students to grow positively in their knowledge, emotions and skills.
Abstract: Educationalists have formulated a taxonomy called Bloom's taxonomy. It states that during education a student progresses not only in knowledge (cognitive), but also in emotions (attitude/behaviour) and skill sets (psychomotor). In general, only the knowledge of students is assessed. This paper investigates the influence of students' self-esteem (attitude) on their academic performance. Self-esteem is the emotional evaluation of one's self-worth, positive or negative. Rosenberg's self-esteem scale is used to evaluate an individual's self-esteem. The students are categorised on self-esteem and performance scales using supervised and unsupervised learning, respectively. The relation between self-esteem and performance is established using predictive and descriptive modelling. The study reveals a positive association between self-esteem and performance. This research helps the teaching community to realise the influence of students' self-esteem on their performance and helps students to grow positively in their knowledge, emotions and skills.

Journal ArticleDOI
TL;DR: This paper presents an energy-efficient scheduling algorithm for a cloud computing DC using the dynamic voltage frequency scaling technique and shows that the proposed scheduling algorithm can efficiently reduce the energy consumption for executing jobs by increasing resource utilisation.
Abstract: Cloud computing is a fascinating and profitable area in modern distributed computing. Aside from providing millions of users the means to use offered services through their own computers, terminals, and mobile devices, cloud computing presents an environment with low cost, a simple user interface, and low power consumption by employing server virtualisation in its offered services (e.g., Infrastructure as a Service). The pool of virtual machines found in a cloud computing data centre (DC) must run through an efficient task scheduling algorithm to achieve resource utilisation and good quality of service, thus ensuring the positive effect of low energy consumption in the cloud computing environment. In this paper, we present an energy-efficient scheduling algorithm for a cloud computing DC using the dynamic voltage frequency scaling technique. The proposed scheduling algorithm can efficiently reduce the energy consumed in executing jobs by increasing resource utilisation. The GreenCloud simulator is used to simulate our algorithm. Experimental results show that, compared with other algorithms, our algorithm can increase server utilisation, reduce energy consumption, and reduce execution time.
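
The energy saving behind DVFS can be illustrated with the usual scaling argument: dynamic power is roughly P = C·V²·f, and since voltage tracks frequency, P grows like f³ while runtime shrinks like 1/f, so energy per job grows like f². The sketch below (with illustrative constants) picks the lowest frequency that still meets a deadline.

```python
# Sketch of the DVFS intuition: dynamic power scales roughly as
# P = C * V^2 * f, and since voltage scales with frequency, P ~ f^3,
# while a job's runtime scales as 1/f. Numbers below are illustrative.

C = 1.0                                   # lumped capacitance constant
FREQS = [0.6, 0.8, 1.0]                   # available frequencies (GHz)

def energy_and_time(cycles, f):
    power = C * f ** 3                    # dynamic power at frequency f
    time = cycles / f                     # runtime shrinks with frequency
    return power * time, time             # energy = power * time ~ f^2

def pick_frequency(cycles, deadline):
    """Lowest frequency that still meets the deadline minimises energy."""
    for f in FREQS:                       # ascending order
        energy, time = energy_and_time(cycles, f)
        if time <= deadline:
            return f, energy
    return None, None                     # infeasible even at max frequency

for deadline in (2.0, 1.3, 1.0):
    f, e = pick_frequency(cycles=1.0, deadline=deadline)
    print(f"deadline={deadline}: run at {f} GHz, energy={e:.2f}")
```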

Journal ArticleDOI
TL;DR: Interactions of theoretical computer science with computer and network technologies are examined, analysing how agent technology emerged, matured and progressed in mathematical models of computation, and demonstrating how these models are used in the novel distributed intelligent managed element (DIME) network architecture (DNA).
Abstract: Agents and agent systems are becoming more and more important in the development of a variety of fields such as ubiquitous computing, ambient intelligence, autonomous computing, data analytics, machine learning, intelligent systems and intelligent robotics. In this paper, we examine the interactions of theoretical computer science with computer and network technologies, analysing how agent technology emerged, matured and progressed in mathematical models of computation. We demonstrate how these models are used in the novel distributed intelligent managed element (DIME) network architecture (DNA), which extends the conventional computational model of information-processing networks, allowing improvement of the efficiency and resiliency of computational processes. Two implementations of DNA described in the paper illustrate how the application of agent technology radically improves the current state of the art of cloud computing. The first example demonstrates the live migration of a database from a laptop to a cloud without losing transactions and without using containers or moving virtual machine images. The second example exhibits the implementation of cloud-agnostic computing over a network of public and private clouds, where live computing process workflows are migrated from one cloud to another without losing transactions. Both implementations demonstrate the power of scientific thought for dramatically extending the current state of the art of cloud and grid computing practice.

Journal ArticleDOI
TL;DR: A model for the formal specification of the concept of trust and a trust management system for opportunistic cloud services platforms are presented, and their applicability is demonstrated for cloud computing by applying them to software-as-a-service and infrastructure-as-a-service usage scenarios in the context of opportunistic cloud services environments.
Abstract: This paper presents a model for the concept of trust and a trust management system for opportunistic cloud services platforms. Results from applying the systematic review methodology to review trust …

Journal ArticleDOI
TL;DR: Both static and dynamic schemes are presented – the former are selected before the algorithm execution, whereas the latter are dynamically updated on the fly to better respond to the optimisation progress.
Abstract: Designing and implementing cooperation schemes for parallel algorithms has become a very important task recently. The scheme, which defines the cooperation topology, frequency and strategies for handling transferred solutions, has a tremendous influence on the algorithm search capabilities, and can help balance the exploration and exploitation of the vast solution space. In this paper, we present both static and dynamic schemes – the former are selected before the algorithm execution, whereas the latter are dynamically updated on the fly to better respond to the optimisation progress. To understand the impact of such cooperation approaches, we applied them in the parallel memetic algorithms for solving rich routing problems, and performed an extensive experimental study using well-known benchmark sets. This experimental analysis is backed with the appropriate statistical tests to verify the importance of the retrieved results.

Journal ArticleDOI
TL;DR: In this work, the problem of virtual machine placement is modelled as a three-slot bin packing problem, and a polynomial-time approximation scheme that provides a means to control the quality of approximation is designed for this problem.
Abstract: A virtual machine is a logical framework for providing services in the cloud. These virtual machines form a logical partition of the physical infrastructure present in the cloud centre. Virtual machines are not only prone to cost escalation but also result in huge power consumption. Hence, the cloud centre needs to optimise cost and power consumption by migrating these virtual machines from their current physical machines to other, more suitable physical machines. Until now, this problem has been addressed by considering the virtual machines present in a single-location cloud centre. However, cloud centres now span multiple locations, all synchronised to provide cloud services. In this complex paradigm, it is important to differentiate between virtual machines at the same location and those at different locations, and to provide suitable placement algorithms. In this work, the problem of virtual machine placement is modelled as a three-slot bin packing problem. The problem is shown to be NP-complete and suitable approximation algorithms are proposed. In addition, a polynomial-time approximation scheme (PTAS) that provides a means to control the quality of approximation is designed for this problem. Empirical studies performed through simulation confirm the theoretical bounds obtained.
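
For orientation, the classic first-fit-decreasing heuristic is the usual baseline for this bin-packing view of VM placement; the paper's three-slot variant and PTAS refine it. Sizes and capacity here are illustrative.

```python
# First-fit-decreasing (FFD) sketch for the bin-packing view of VM
# placement: the classic baseline that the paper's three-slot variant
# and PTAS refine. VM sizes and PM capacity below are illustrative.

def first_fit_decreasing(vm_sizes, capacity):
    """Pack VMs into as few PMs as possible; returns lists of VM sizes."""
    bins = []                                   # each bin = one active PM
    for size in sorted(vm_sizes, reverse=True): # biggest VMs first
        for b in bins:
            if sum(b) + size <= capacity:       # first PM with enough room
                b.append(size)
                break
        else:
            bins.append([size])                 # open a new PM
    return bins

pms = first_fit_decreasing([45, 30, 25, 20, 40, 35, 15, 10], capacity=100)
print(len(pms), pms)   # FFD is guaranteed to use at most (11/9)*OPT + 6/9 bins
```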

Journal ArticleDOI
TL;DR: A hybrid model that uses a genetic algorithm and graph theory concepts is proposed for this purpose, and the results reveal the benefits achieved with the proposed model for energy-optimal resource allocation in the grid.
Abstract: Computational grids help in the faster execution of compute-intensive jobs. Resource allocation for job execution in a computational grid demands that many characteristic parameters be optimised, but in the process the green aspect is ignored. Reducing the energy consumption of computational grids is a major recent concern among researchers. Conventional systems that offer energy-efficient scheduling strategies ignore other quality-of-service parameters while scheduling jobs. The proposed work tries to optimise the energy used in resource allocation while making no compromise on the other related characteristic parameters. A hybrid model that uses a genetic algorithm and graph theory concepts is proposed for this purpose. In this model, an energy-saving mechanism is implemented using a dynamic threshold method, followed by a genetic algorithm to consolidate the savings further. Finally, the graph-theoretic concept of the Minimum Spanning Tree (MST) is applied. The performance of the proposed model has been studied through simulation. The results reveal the benefits achieved with the proposed model for energy-optimal resource allocation in the grid.

Journal ArticleDOI
TL;DR: Various distributed models developed in the domain of e-learning are studied with respect to the learning environment, learning objects and learning style.
Abstract: Electronic learning systems have been an area of interest for researchers in the past few decades, as they offer flexible, reusable, scalable, available and affordable services to a wide variety of users, including content generators, content users and content managers. An e-learning system involves digital content development, administration, and the delivery of learning content that can be made available even to remote locations. With the advent of huge amounts of digital content on the internet, an effective e-learning system requires collaboration among content creators, users and administrators; furthermore, it requires huge computational power and storage capacity. Presently, such systems focus mostly on the delivery of content to learners rather than on the effective storage and efficient retrieval of content based on the skills of individual learners. Several approaches have been attempted towards personalised content delivery based on the basic skills of learners. However, these systems maintain a repository of verified content rather than validated content. In order to maintain validated content for efficient retrieval, distributed models addressing different perspectives of validation may be fruitful. This paper studies various distributed models developed in the domain of e-learning with respect to the learning environment, learning objects and learning style.

Journal ArticleDOI
TL;DR: A new approach is proposed based on the recently introduced 'Evolutionary Petri Net' (EPN) formalism, an extension of Petri nets enriched with two genetic operators, crossover and mutation, to minimise the workflow application completion time (makespan) as well as the cost incurred by using cloud resources.
Abstract: Nowadays, many evolutionary algorithms for workflow scheduling in cloud computing are available. Most of these algorithms focus on effectiveness, discarding the issue of flexibility. Research on Petri nets addresses the issue of flexibility; many extensions have been proposed to facilitate the modelling of complex systems. Typical extensions are the addition of 'colour', 'time' and 'hierarchy'. By mapping scheduling problems onto Petri nets, we are able to use standard Petri net theory. In this case, the scheduling problem can be reduced to finding an optimal sequence of transitions leading from an initial marking to a final one. To find the optimal schedule, we propose a new approach based on the recently proposed 'Evolutionary Petri Net' (EPN) formalism, an extension of Petri nets enriched with two genetic operators, crossover and mutation. The objectives of our research are to minimise the workflow application completion time (makespan) as well as the cost incurred by using cloud resources. Numerical experiments are carried out to demonstrate the usefulness of our algorithm.

Journal ArticleDOI
TL;DR: This paper investigates low-cost, off-the-shelf sensors that can be installed inside street lighting luminaires for traffic sensing; the study revealed that an HC-SR501 PIR motion detector could count traffic with 73% accuracy at a low cost and may be suitable for intelligent lighting applications if its accuracy can be further improved.
Abstract: Real-time traffic monitoring is essential to the development of smart cities, as well as to their potential for energy savings. However, real-time traffic monitoring is a task that requires sophisticated and expensive hardware. Owing to the prohibitive cost of specialised sensors, accurate traffic counts are typically limited to intersections, where traffic information is used for signalling purposes. This sparse arrangement of traffic detection points does not provide adequate information for intelligent lighting applications such as adaptive dimming. This paper investigates low-cost, off-the-shelf sensors that can be installed inside street lighting luminaires for traffic sensing. A luminaire-mounted sensor test-bed installed on a moderately busy road trialled three non-invasive presence-detection sensors: Passive Infrared (PIR), sonar (UVD) and lidar. The proof-of-concept study revealed that an HC-SR501 PIR motion detector could count traffic with 73% accuracy at a low cost and may be suitable for intelligent lighting applications if its accuracy can be further improved.
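
A hedged sketch of how such a PIR-based counter can work: poll the sensor and apply a lockout period so one vehicle is not counted twice. The simulated trace stands in for reading the HC-SR501's digital output pin; the lockout length is an assumption.

```python
import itertools

# Sketch of counting vehicle passes from a PIR motion detector by polling
# with a lockout period so one vehicle is not counted twice. The simulated
# sensor trace stands in for a real GPIO read of the HC-SR501 output pin.

LOCKOUT = 2.0          # assumed minimum gap (s) between distinct vehicles
POLL = 0.05            # polling interval (s)

# Simulated output: high for ~0.5 s per vehicle, three vehicles passing.
_trace = itertools.chain(
    [0] * 20, [1] * 10,            # vehicle 1 around t = 1.0 s
    [0] * 60, [1] * 10,            # vehicle 2 around t = 4.5 s
    [0] * 50, [1] * 10, [0] * 20,  # vehicle 3 around t = 7.5 s
)

def read_pir():
    """Stand-in for reading the sensor's digital output pin."""
    return next(_trace, 0) == 1

def count_passes():
    count, last_hit, t = 0, -LOCKOUT, 0.0
    for _ in range(180):               # ~9 s of polling
        if read_pir() and t - last_hit >= LOCKOUT:
            count += 1                 # new activity outside lockout window
            last_hit = t
        t += POLL                      # simulated clock; real code sleeps
    return count

print(count_passes())   # 3
```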

Journal ArticleDOI
TL;DR: A test has shown that voice and motion pictures at a rather small size of 320 × 240 pixels on a teacher's PC were presented at a rate of 5 frames per second, without any perceptible delays, on web browsers running on 38 client devices for students under a local WiFi network.
Abstract: A software system is developed for casting the screen images and voice from a host PC to client web browsers on many other PCs in real time. This system is intended to be used in classrooms. Students have only to bring their own PCs and connect to the teacher's host PC with a web browser via a wireless network to see and listen to the teaching materials presented on the host PC. The client web browsers are organised in the shape of a binary tree, along which the video and audio data are relayed in multihop fashion by the Web Real-Time Communication (WebRTC) protocol. This binary multihop relay structure is adopted so as not to burden the host PC with communication load. A test has shown that voice and motion pictures at a rather small size of 320 × 240 pixels on a teacher's PC were presented at a rate of 5 frames per second, without any perceptible delays, on web browsers running on 38 client devices for students under a local WiFi network. To host more client devices, the frame rate has to be lowered, down to that of a slide show of still pictures.
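
The arithmetic behind the binary-tree relay is easy to check: the host feeds only the root, every client uploads to at most two children, and the hop count grows logarithmically with the class size. A small sketch with the experiment's 38 clients:

```python
import math

# Sketch of the binary multihop relay: the host feeds only the tree root,
# each client re-sends the stream to at most two children, so the host's
# load stays constant while latency grows with the tree depth.

def relay_tree(n_clients):
    """Client i relays to clients 2i+1 and 2i+2 (heap-style numbering)."""
    return {i: [c for c in (2 * i + 1, 2 * i + 2) if c < n_clients]
            for i in range(n_clients)}

n = 38                                     # class size from the experiment
tree = relay_tree(n)
max_uplink = max(len(kids) for kids in tree.values())
hops_to_deepest = math.floor(math.log2(n)) + 1   # host->root->...->leaf

print("host upload streams: 1 (root only)")
print(f"per-client upload streams: <= {max_uplink}")                  # 2
print(f"relay hops to the deepest of {n} clients: {hops_to_deepest}") # 6
```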

Journal ArticleDOI
Jun Iio
TL;DR: An overview of a novel framework for attendance management that consists of a mobile device and a web application and the results of an evaluation after a trial period are reported.
Abstract: There have been multiple proposals to optimise student attendance management in higher education. However, each method has pros and cons and we have not yet found a perfect solution. In this study, a novel framework for attendance management is proposed that consists of a mobile device and a web application. During lectures, students participating in the lecture can register their attendance on the mobile device with their selfie or their signature. After the lecture is finished, the registration data are sent to the database and they are added to the 'RollSheet'. This paper reports an overview of this system and the results of an evaluation after a trial period, which was conducted in the second semester of the 2015 fiscal year.

Journal ArticleDOI
TL;DR: To enhance the performance of the image encryption technique, the proposed method first scrambles the locations of the pixels and then applies a chaotic map, using a 32-bit symmetric key, to change the pixel values of the image.
Abstract: To enhance the performance of the image encryption technique, the proposed method first scrambles the locations of the pixels and then applies a chaotic map, using a 32-bit symmetric key, to change the pixel values of the image. Traditional confusion-diffusion algorithms are insecure against known-plaintext attacks. To make the plain image unidentifiable, we apply a scrambling operation based on a one-dimensional vector technique that changes the correlations of all adjacent pixels. The chaotic map algorithm then produces the cipher image by changing the pixel values of the given image. To increase the security level, we apply keys in both the encryption and decryption processes. For any large image, this encryption and decryption process is simple and provides high security. The proposed encryption method has been tested on different grey-scale images and showed good results; the security level of image encryption and decryption is further increased.
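
The scramble-then-chaos pattern the abstract describes can be sketched as a key-seeded pixel permutation (confusion) followed by XOR with a logistic-map keystream (diffusion). The key derivation and map parameters below are simplified assumptions, not the paper's exact scheme.

```python
import numpy as np

# Sketch of the scramble-then-chaos pattern: a key-seeded permutation
# relocates pixels (confusion), then a logistic-map keystream XORs the
# values (diffusion). The key handling here is simplified for brevity.

def logistic_keystream(n, x0, r=3.99):
    """Chaotic logistic map x <- r*x*(1-x), quantised to bytes."""
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        ks[i] = int(x * 256) % 256
    return ks

def encrypt(img, key32):
    flat = img.flatten()
    rng = np.random.default_rng(key32)        # key-seeded permutation
    perm = rng.permutation(flat.size)
    scrambled = flat[perm]                    # step 1: relocate pixels
    x0 = (key32 % 100_000) / 100_000 + 1e-6   # derive map seed from key
    cipher = scrambled ^ logistic_keystream(flat.size, x0)  # step 2: XOR
    return cipher.reshape(img.shape), perm, x0

def decrypt(cipher, perm, x0):
    flat = cipher.flatten() ^ logistic_keystream(cipher.size, x0)
    out = np.empty_like(flat)
    out[perm] = flat                          # invert the permutation
    return out.reshape(cipher.shape)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
cipher, perm, x0 = encrypt(img, key32=0xDEADBEEF)
assert np.array_equal(decrypt(cipher, perm, x0), img)
```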

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the system SER decreases markedly when the cooperative communication system adopts directional transmitting and directional receiving, and the OPA is also shown to be superior to EPA (Equal Power Allocation).
Abstract: Considering a cooperative communication system with directional antennas, this paper studies the SER (Symbol Error Rate) performance under the AF (Amplify-and-Forward) protocol. A model of the AF cooperative communication system using directional antennas is first established, from which the closed-form expression of the SER and its upper bound are derived. Then, OPA (Optimum Power Allocation) is analysed with the aim of minimising the SER. Using specific numerical simulation values, the SER performance of the established model is thoroughly investigated. Simulation results demonstrate that the system SER decreases markedly when the cooperative communication system adopts directional transmitting and directional receiving. Each node's directional gain, the channel quality and the power allocation method all have a great influence on the system's overall performance, and OPA is shown to be superior to EPA (Equal Power Allocation).
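
For context, the end-to-end SNR commonly used in SER analyses of two-hop AF relaying is the combination below; the paper's closed-form SER additionally folds in the directional antenna gains, which are not reproduced here.

```latex
% End-to-end SNR of a two-hop amplify-and-forward link with a direct path,
% as commonly used in SER analyses of AF cooperation:
\gamma_{\mathrm{AF}} \;=\; \gamma_{sd}
   \;+\; \frac{\gamma_{sr}\,\gamma_{rd}}{\gamma_{sr} + \gamma_{rd} + 1}
% gamma_{sd}, gamma_{sr}, gamma_{rd}: instantaneous SNRs of the
% source-destination, source-relay and relay-destination channels.
% With directional antennas, each SNR is further scaled by the product
% of the transmit and receive antenna gains on that hop.
```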

Journal ArticleDOI
TL;DR: A tuple-sampling-based tenant duplication protection mechanism, MT-DIPS (Duplication Integrity Protection Scheme for Multi-Tenants), is presented; it uses homomorphic labels with an auxiliary authentication structure to allow trusted third-party verification without disclosing tenant data, relieving the verification burden on tenants' client sides.
Abstract: In SaaS, the shared data storage mode and the tenant isolation requirement present new challenges to traditional remote data duplication protection schemes. This paper addresses the new requirement of tenant data duplication protection in SaaS and presents a tuple-sampling-based tenant duplication protection mechanism, MT-DIPS (Duplication Integrity Protection Scheme for Multi-Tenants). Instead of sampling data blocks, MT-DIPS accommodates the data isolation requirement of different tenants by sampling tenants' physical data tuples. Through periodic random sampling, MT-DIPS reduces the complexity of verification object construction on the service provider side and eliminates resource waste. Analysis and experimental results show that if the damage rate of tenant data tuples is about 1%, the number of randomly sampled tuples is about 5% of the total. MT-DIPS makes use of homomorphic labels with an auxiliary authentication structure to allow trusted third-party verification without disclosing tenant data, relieving the verification burden on tenants' client sides.

Journal ArticleDOI
TL;DR: The proposed algorithm significantly improves the performance of the MIML classifier compared with state-of-the-art methods.
Abstract: Multi-Instance Multi-Label Learning (MIML) has been successfully used in image and text classification problems. It is noteworthy that few of the previous studies consider the pattern-label relations. Inevitably, there are some useless instances in a bag which reduce the accuracy of the annotation. In this paper we focus on this problem. Firstly, an instance selection method via a joint l2,1-norm constraint is employed to eliminate the useless instances and select representative instances by modelling the instance correlation. Then, bags are mapped to these representative instances. Finally, the classifier is trained by an optimisation algorithm based on label correlations. Experimental results on an image dataset, text datasets and a bird song audio dataset show that the proposed algorithm significantly improves the performance of the MIML classifier compared with state-of-the-art methods.
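
A typical form of the joint l2,1-norm instance-selection objective the abstract refers to (the paper's exact weighting may differ) is:

```latex
% Row-sparse instance selection via the l_{2,1} norm:
\min_{W}\; \lVert X^{\top} W - Y \rVert_F^{2} \;+\; \lambda\,\lVert W \rVert_{2,1},
\qquad
\lVert W \rVert_{2,1} = \sum_{i=1}^{n} \Bigl( \sum_{j} W_{ij}^{2} \Bigr)^{1/2}
% X: instance matrix, Y: bag labels, W: selection weights. The l_{2,1}
% term drives whole rows of W to zero; instances with non-zero rows are
% kept as the representatives to which bags are then mapped.
```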