Showing papers in "The Computer Journal in 2010"


Journal ArticleDOI
TL;DR: This paper reviews the methods and technologies currently used for energy-efficient operation of computer hardware and network infrastructure, and identifies some of the remaining key research challenges that arise when such energy-saving techniques are extended for use in cloud computing environments.
Abstract: Energy efficiency is increasingly important for future information and communication technologies (ICT), because the increased usage of ICT, together with increasing energy costs and the need to reduce greenhouse gas emissions, calls for energy-efficient technologies that decrease the overall energy consumption of computation, storage and communications. Cloud computing has recently received considerable attention as a promising approach for delivering ICT services by improving the utilization of data centre resources. In principle, cloud computing can be an inherently energy-efficient technology for ICT, provided that its potential for significant energy savings, so far explored mainly at the hardware level, is also exploited at the level of system operation and networking. This paper therefore reviews, in the context of cloud computing, the methods and technologies currently used for energy-efficient operation of computer hardware and network infrastructure. After surveying current best practice and relevant literature in this area, it identifies some of the remaining key research challenges that arise when such energy-saving techniques are extended for use in cloud computing environments.

682 citations


Journal ArticleDOI
TL;DR: Approximation Algorithms for Facility Dispersion Greedy Algorithms for Metric Facility Location Problems Prize-Collecting Traveling Salesman and Related Problems A Development and Deployment Framework for Distributed Branch and Bound Approximations for Steiner Minimum Trees Practical Approximations of Steiner Trees in Uniform Orientation Metrics Approximation Schemes.
Abstract: PREFACE BASIC METHODOLOGIES Introduction, Overview, and Notation Basic Methodologies and Applications Restriction Methods Greedy Methods Recursive Greedy Methods Linear Programming LP Rounding and Extensions On Analyzing Semidefinite Programming Relaxations of Complex Quadratic Optimization Problems Polynomial-Time Approximation Schemes Rounding, Interval Partitioning, and Separation Asymptotic Polynomial-Time Approximation Schemes Randomized Approximation Techniques Distributed Approximation Algorithms via LP-Duality and Randomization Empirical Analysis of Randomized Algorithms Reductions that Preserve Approximability Differential Ratio Approximation Hardness of Approximation LOCAL SEARCH, NEURAL NETWORKS, AND METAHEURISTICS Local Search Stochastic Local Search Very Large-Scale Neighborhood Search: Theory, Algorithms, and Applications Reactive Search: Machine Learning for Memory-Based Heuristics Neural Networks Principles of Tabu Search Evolutionary Computation Simulated Annealing Ant Colony Optimization Memetic Algorithms MULTIOBJECTIVE OPTIMIZATION, SENSITIVITY ANALYSIS, AND STABILITY Approximation in Multiobjective Problems Stochastic Local Search Algorithms for Multiobjective Combinatorial Optimization: A Review Sensitivity Analysis in Combinatorial Optimization Stability of Approximation TRADITIONAL APPLICATIONS Performance Guarantees for One-Dimensional Bin Packing Variants of Classical One-Dimensional Bin Packing Variable-Sized Bin Packing and Bin Covering Multidimensional Packing Problems Practical Algorithms for Two-Dimensional Packing A Generic Primal-Dual Approximation Algorithm for an Interval Packing and Stabbing Problem Approximation Algorithms for Facility Dispersion Greedy Algorithms for Metric Facility Location Problems Prize-Collecting Traveling Salesman and Related Problems A Development and Deployment Framework for Distributed Branch and Bound Approximations for Steiner Minimum Trees Practical Approximations of Steiner Trees in Uniform Orientation Metrics Approximation Algorithms for Imprecise Computation Tasks with 0/1 Constraint Scheduling Malleable Tasks Vehicle Scheduling Problems in Graphs Approximation Algorithms and Heuristics for Classical Planning Generalized Assignment Problem Probabilistic Greedy Heuristics for Satisfiability Problems COMPUTATIONAL GEOMETRY AND GRAPH APPLICATIONS Approximation Algorithms for Some Optimal 2D and 3D Triangulations Approximation Schemes for Minimum-Cost k-Connectivity Problems in Geometric Graphs Dilation and Detours in Geometric Networks The Well-Separated Pair Decomposition and its Applications Minimum-Edge Length Rectangular Partitions Partitioning Finite d-Dimensional Integer Grids with Applications Maximum Planar Subgraph Edge-Disjoint Paths and Unsplittable Flow Approximating Minimum-Cost Connectivity Problems Optimum Communication Spanning Trees Approximation Algorithms for Multilevel Graph Partitioning Hypergraph Partitioning and Clustering Finding Most Vital Edges in a Graph Stochastic Local Search Algorithms for the Graph Coloring Problem On Solving the Maximum Disjoint Paths Problem with Ant Colony Optimization LARGE-SCALE AND EMERGING APPLICATIONS Cost-Efficient Multicast Routing in Ad Hoc and Sensor Networks Approximation Algorithm for Clustering in Ad Hoc Networks Topology Control Problems for Wireless Ad Hoc Networks Geometrical Spanner for Wireless Ad Hoc Networks Multicast Topology Inference and its Applications Multicast Congestion in Ring Networks QoS Multimedia Multicast Routing Overlay Networks for Peer-to-Peer Networks Scheduling Data Broadcasts on Wireless Channels: Exact Solutions and Heuristics Combinatorial and Algorithmic Issues for Microarray Analysis Approximation Algorithms for the Primer Selection, Planted Motif Search, and Related Problems Dynamic and Fractional Programming-Based Approximation Algorithms for Sequence Alignment with Constraints Approximation Algorithms for the Selection of Robust Tag SNPs Sphere Packing and Medical Applications Large-Scale Global Placement Multicommodity Flow Algorithms for Buffered Global Routing Algorithmic Game Theory and Scheduling Approximate Economic Equilibrium Algorithms Approximation Algorithms and Algorithm Mechanism Design Histograms, Wavelets, Streams, and Approximation Digital Reputation for Virtual Communities Color Quantization INDEX

261 citations


Journal ArticleDOI
TL;DR: A distributed simulation tool which addresses the unique needs for the simulation of emergency response scenarios using the multi-agent paradigm and operates in a distributed fashion to reduce the simulation time required for such large-scale systems.
Abstract: We describe a distributed simulation tool which addresses the unique needs for the simulation of emergency response scenarios. The simulation tool adopts the multi-agent paradigm, so as to facilitate the modelling of diverse and autonomous agents, and it provides mechanisms for the interaction of the entities that are being simulated. It operates in a distributed fashion to reduce the simulation time required for such large-scale systems. The simulation tool represents the individuals that need to be evacuated, the resources that contribute to the evacuation including human rescuers, and other active resources and entities which may include robots and which can autonomously interact with the environment and with each other and take individual or collaborative decisions. We illustrate the tool with an application and compare the results for both centralized and distributed execution. Our results also show the significant reduction in execution time that is achieved for different degrees of distribution of the simulator on multiple servers.

162 citations


Journal ArticleDOI
TL;DR: A novel decentralized solution to the coalition formation process that pervades disaster management is provided, using the state-of-the-art Max-Sum algorithm, a completely decentralized message-passing approach, together with a novel algorithm (F-Max-Sum) that avoids sending redundant messages and efficiently adapts to changes in the environment.
Abstract: Emergency responders are faced with a number of significant challenges when managing major disasters. First, the number of rescue tasks posed is usually larger than the number of responders (or agents) and the resources available to them. Second, each task is likely to require a different level of effort in order to be completed by its deadline. Third, new tasks may continually appear or disappear from the environment, thus requiring the responders to quickly recompute their allocation of resources. Fourth, forming teams or coalitions of multiple agents from different agencies is vital since no single agency will have all the resources needed to save victims, unblock roads and extinguish the fires which might erupt in the disaster space. Given this, coalitions have to be efficiently selected and scheduled to work across the disaster space so as to maximize the number of lives and the portion of the infrastructure saved. In particular, it is important that the selection of such coalitions should be performed in a decentralized fashion in order to avoid a single point of failure in the system. Moreover, it is critical that responders communicate only locally given they are likely to have limited battery power or minimal access to long-range communication devices. Against this background, we provide a novel decentralized solution to the coalition formation process that pervades disaster management. More specifically, we model the emergency management scenario defined in the RoboCup Rescue disaster simulation platform as a coalition formation with spatial and temporal constraints (CFST) problem where agents form coalitions to complete tasks, each with different demands. To design a decentralized algorithm for CFST, we formulate it as a distributed constraint optimization problem and show how to solve it using the state-of-the-art Max-Sum algorithm that provides a completely decentralized message-passing solution. We then provide a novel algorithm (F-Max-Sum) that avoids sending redundant messages and efficiently adapts to changes in the environment. In empirical evaluations, our algorithm is shown to generate better solutions than other decentralized algorithms used for this problem.
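
For reference, Max-Sum operates on a factor graph connecting agent variables x_i to task utility functions U_j; in the standard formulation (generic notation, not necessarily the paper's), the messages exchanged are

```latex
q_{i \to j}(x_i) = \alpha_{ij} + \sum_{k \in \mathcal{M}_i \setminus \{j\}} r_{k \to i}(x_i),
\qquad
r_{j \to i}(x_i) = \max_{\mathbf{x}_j \setminus x_i} \Bigl[ U_j(\mathbf{x}_j) + \sum_{k \in \mathcal{N}_j \setminus \{i\}} q_{k \to j}(x_k) \Bigr],
```

where M_i is the set of functions adjacent to variable i, N_j is the set of variables in the scope of U_j and α_ij is a normalizing constant; each agent then selects the value of x_i that maximizes the sum of its incoming r messages.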

158 citations


Journal ArticleDOI
TL;DR: A formal notion for process-oriented contracts is proposed, together with a criterion for accordance between a private view and its public view; accordance guarantees that the overall implemented process is deadlock-free and can always terminate properly.
Abstract: To implement an interorganizational process between different enterprises, one needs to agree on the 'rules of engagement'. These can be specified in terms of a contract that describes the overall intended process and the duties of all parties involved. We propose to use such a process-oriented contract, which can be seen as the composition of the public views of all participating parties. Based on this contract, each party may locally implement its part of the contract such that the implementation (the private view) agrees with the contract. In this paper, we propose a formal notion for such process-oriented contracts and give a criterion for accordance between a private view and its public view. The public view of a party can be substituted by a private view if and only if the private view accords with the public view. Using the notion of accordance, the overall implemented process is guaranteed to be deadlock-free and it is always possible to terminate properly. In addition, we present a technique for automatically checking our accordance criterion. A case study illustrates how our proposed approach can be used in practice.

150 citations


Journal ArticleDOI
TL;DR: This discussion aims to identify the trends in DoS attacks, the weaknesses of protection approaches and the qualities that modern ones should exhibit, so as to suggest new directions that DoS research can follow.
Abstract: Denial of service (DoS) is a prevalent threat in today's networks because DoS attacks are easy to launch, while defending a network resource against them is disproportionately difficult. Despite the extensive research in recent years, DoS attacks continue to cause harm, as the attackers adapt to the newer protection mechanisms. For this reason, we start our survey with a historical timeline of DoS incidents, where we illustrate the variety of types, targets and motives for such attacks and how they evolved during the last two decades. We then provide an extensive literature review of the existing research on DoS protection with an emphasis on the research of recent years and the most demanding aspects of defence. These include traceback, detection, classification of incoming traffic, response in the presence of an attack and mathematical modelling of attack and defence mechanisms. Our discussion aims to identify the trends in DoS attacks, the weaknesses of protection approaches and the qualities that modern ones should exhibit, so as to suggest new directions that DoS research can follow.

134 citations


Journal ArticleDOI
TL;DR: A review of the theory, extension models, learning algorithms and applications of the RNN, which has been applied in a variety of areas including pattern recognition, classification, image processing, combinatorial optimization and communication systems.
Abstract: The random neural network (RNN) is a recurrent neural network model inspired by the spiking behaviour of biological neuronal networks. Contrary to most artificial neural network models, neurons in the RNN interact by probabilistically exchanging excitatory and inhibitory spiking signals. The model is described by analytical equations, has a low complexity supervised learning algorithm and is a universal approximator for bounded continuous functions. The RNN has been applied in a variety of areas including pattern recognition, classification, image processing, combinatorial optimization and communication systems. It has also inspired research activity in modelling interacting entities in various systems such as queueing and gene regulatory networks. This paper presents a review of the theory, extension models, learning algorithms and applications of the RNN.
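
For context, the analytical description mentioned above consists of the RNN's steady-state equations (as commonly stated in the literature): the probability q_i that neuron i is excited satisfies

```latex
q_i = \frac{\lambda_i^+}{r_i + \lambda_i^-},
\qquad
\lambda_i^+ = \sum_j q_j w_{ji}^+ + \Lambda_i,
\qquad
\lambda_i^- = \sum_j q_j w_{ji}^- + \lambda_i,
```

where w_{ji}^+ and w_{ji}^- are the excitatory and inhibitory weights from neuron j to neuron i, Λ_i and λ_i are the external excitatory and inhibitory spike arrival rates, and r_i = Σ_j (w_{ij}^+ + w_{ij}^-) is the firing rate of neuron i.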

101 citations


Journal ArticleDOI
TL;DR: A new sequential algorithm for making robust predictions in the presence of changepoints; unlike previous approaches, which focus on detecting and locating changepoints, it focuses on making predictions even when such changes might be present, introduces nonstationary covariance functions for Gaussian process prediction that model such changes, and demonstrates how to effectively manage the hyperparameters associated with those covariance functions.
Abstract: We introduce a new sequential algorithm for making robust predictions in the presence of changepoints. Unlike previous approaches, which focus on the problem of detecting and locating changepoints, our algorithm focuses on the problem of making predictions even when such changes might be present. We introduce nonstationary covariance functions to be used in Gaussian process prediction that model such changes, and then proceed to demonstrate how to effectively manage the hyperparameters associated with those covariance functions. We further introduce covariance functions to be used in situations where our observation model undergoes changes, as is the case for sensor faults. By using Bayesian quadrature, we can integrate out the hyperparameters, allowing us to calculate the full marginal predictive distribution. Furthermore, if desired, the posterior distribution over putative changepoint locations can be calculated as a natural byproduct of our prediction algorithm.
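
As a hedged illustration of the idea (one simple member of the family, not the paper's exact construction; the function names are ours): a 'drastic change' covariance treats observations on opposite sides of a changepoint location c, itself a hyperparameter, as independent, while behaving like a stationary base kernel elsewhere:

```python
import numpy as np

def rbf(x1, x2, ell=1.0, sf2=1.0):
    """Stationary squared-exponential base kernel."""
    return sf2 * np.exp(-0.5 * (x1 - x2) ** 2 / ell ** 2)

def drastic_change_cov(x1, x2, c, base=rbf):
    """Nonstationary covariance modelling a drastic change at c:
    points separated by the changepoint are uncorrelated, so data
    from before the change do not contaminate predictions after it."""
    if (x1 < c) == (x2 < c):   # both inputs on the same side of c
        return base(x1, x2)
    return 0.0
```

The resulting covariance matrix is block-diagonal, which is what makes the induced Gaussian process 'forget' pre-changepoint observations.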

73 citations


Journal ArticleDOI
TL;DR: This paper presents an efficient and secure ID-based mutual authentication and key exchange protocol using bilinear pairings that is well suited for a client–server environment with low-power mobile devices.
Abstract: The identity (ID)-based public-key system using bilinear pairings defined on elliptic curves offers a flexible approach to simplifying certificate management. In 2006, the IEEE P1363.3 committee defined the ID-based public-key system with bilinear pairings as one of its public-key cryptography standards. In this setting, an authenticated key agreement (AKA) protocol, which provides mutual authentication and key exchange between two parties, is an important issue. Owing to the fast growth of mobile networks, the computational cost on the client side, with its low-power computing devices, is a critical factor in designing an AKA protocol suited for mobile networks. In this paper, we present an efficient and secure ID-based mutual authentication and key exchange protocol using bilinear pairings. Performance analysis and experimental data are given to demonstrate that our proposed protocol is well suited for a client–server environment with low-power mobile devices. In comparison with recently proposed ID-based protocols, ours has the best performance on the client side.
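
For background (a standard fact, not specific to this paper): an admissible bilinear pairing is a map e : G_1 × G_1 → G_2 between groups of prime order q, such as the Weil or Tate pairing on an elliptic curve, satisfying

```latex
e(aP, bQ) = e(P, Q)^{ab} \quad \text{for all } P, Q \in G_1 \text{ and } a, b \in \mathbb{Z}_q^*.
```

This bilinearity is what allows parties whose public keys are obtained by hashing their identities to curve points to derive a common session-key ingredient without certificates.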

61 citations


Journal ArticleDOI
TL;DR: This work formalizes the typical steps of the development process, expressing and justifying them directly in logic, and treats three types of refinement steps: horizontal refinement, which stays within one level of abstraction; vertical refinement, addressing the transition from one level of abstraction to another; and implementation by glass box refinement.
Abstract: A theory for the systematic development of distributed interactive software systems constructed in terms of components requires a basic system model and description techniques supporting specific views and abstractions of systems. Typical system views are the interface, the distribution, or the state transition view. We show how to represent these views by mathematics and logics. The development of systems consists in working out these views leading step by step to implementations in terms of sets of distributed, concurrent, interacting state machines. For large systems, the development is carried out by refinement through several levels of abstraction. We formalize the typical steps of the development process and express and justify them directly in logic. In particular, we treat three types of refinement steps: horizontal refinement which stays within one level of abstraction, vertical refinement addressing the transition from one level of abstraction to another, and implementation by glass box refinement. We introduce refinement relations to capture these three dimensions of the development space. We derive verification rules for the refinement steps and show the modularity of the approach.

60 citations


Journal ArticleDOI
TL;DR: This paper shows that another tradeoff with similar properties can be obtained by Fibonacci codes: fixed codeword sets based on binary representations of integers using Fibonacci numbers of order m ≥ 2.
Abstract: Recent publications advocate the use of various variable length codes for which each codeword consists of an integral number of bytes in compression applications using large alphabets. This paper shows that another tradeoff with similar properties can be obtained by Fibonacci codes. These are fixed codeword sets, using binary representations of integers based on Fibonacci numbers of order m ≥ 2. Fibonacci codes have been used before, and this paper extends previous work presenting several novel features. In particular, the compression efficiency is analyzed and compared to that of dense codes, and various table-driven decoding routines are suggested.
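
As a hedged illustration of the order-2 case (the paper treats general order m ≥ 2, and the function name here is ours): every positive integer has a unique Zeckendorf representation as a sum of non-consecutive Fibonacci numbers, and appending a final 1 gives codewords ending in '11', a pattern that never occurs inside a codeword:

```python
def fib_encode(n):
    """Order-2 Fibonacci code of a positive integer: Zeckendorf
    representation written least-significant term first, plus a
    trailing '1' so every codeword ends in '11'."""
    assert n >= 1
    fibs = [1, 2]                    # Fibonacci numbers F(2), F(3), ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    for f in reversed(fibs[:-1]):    # greedy choice = Zeckendorf
        if f <= n:
            bits.append('1')
            n -= f
        else:
            bits.append('0')
    return ''.join(reversed(bits)) + '1'
```

For example, fib_encode(4) returns '1011' (4 = 3 + 1). Because '11' only appears at codeword boundaries, a decoder can resynchronize after transmission errors, one of the robustness properties for which these codes are valued.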

Journal ArticleDOI
TL;DR: A comprehensive survey is provided of the cognitive packet network, which provides QoS-driven routing and performs self-improvement in a distributed manner by learning from the experience of special packets that gather on-line QoS measurements and discover new routes.
Abstract: Current and future multimedia networks require connections under specific quality of service (QoS) constraints which can no longer be provided by the best-effort Internet. Therefore, ‘smarter’ networks have been proposed in order to cover this need. The cognitive packet network (CPN) is a routing protocol that provides QoS-driven routing and performs self-improvement in a distributed manner, by learning from the experience of special packets, which gather on-line QoS measurements and discover new routes. The CPN was first introduced in 1999 and has been used in several applications since then. Here we provide a comprehensive survey of its variations, applications and experimental performance evaluations.

Journal ArticleDOI
TL;DR: This work considers in detail the applications of generalized distance functions in giving a uniform treatment of several important semantics for logic programs, including acceptable programs and natural generalizations of them.
Abstract: We discuss a number of distance functions encountered in the theory of computation, including metrics, ultra-metrics, quasi-metrics, generalized ultra-metrics, partial metrics, d-ultra-metrics and generalized metrics. We consider their properties, associated fixed-point theorems and some general applications they have within the theory of computation. We consider in detail the applications of generalized distance functions in giving a uniform treatment of several important semantics for logic programs, including acceptable programs and natural generalizations of them, and also the supported model and the stable model in the context of locally stratified extended disjunctive logic programs and databases.
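
To fix the terminology used above (standard definitions): a metric satisfies the triangle inequality, while an ultrametric strengthens it to the 'strong' form:

```latex
d(x, z) \le d(x, y) + d(y, z) \quad \text{(metric)},
\qquad
d(x, z) \le \max\{d(x, y),\, d(y, z)\} \quad \text{(ultrametric)}.
```

A quasi-metric drops the symmetry axiom (d(x, y) may differ from d(y, x)), and a partial metric allows d(x, x) > 0, which is useful for modelling partially defined objects such as non-terminating computations.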

Journal ArticleDOI
TL;DR: It is shown that the degeneracy of the genetic code is a p-adic phenomenon, and a hypothesis on the evolution of the genetic code is put forward, assuming that the primitive code was based on single nucleotides and the chronologically first four amino acids.
Abstract: This paper presents the foundations of p-adic modelling in genomics. Considering nucleotides, codons, DNA and RNA sequences, amino acids and proteins as information systems, we have formulated the corresponding p-adic formalisms for their investigation. Each of these systems has its characteristic prime number used for construction of the related information space. The relevance of this approach is illustrated by some examples. In particular, it is shown that the degeneracy of the genetic code is a p-adic phenomenon. We have also put forward a hypothesis on the evolution of the genetic code, assuming that the primitive code was based on single nucleotides and the chronologically first four amino acids. This formalism of p-adic genomic information systems can be implemented in computer programs and applied to various concrete cases.
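
As a minimal, hedged sketch of the machinery involved (the generic p-adic distance on integers; the paper's codon-space constructions, such as 5-adic encodings of nucleotide sequences, are more specific, and the function names here are ours):

```python
def p_adic_valuation(n, p):
    """Largest v such that p**v divides the nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_distance(x, y, p):
    """d_p(x, y) = p**(-v_p(x - y)): two integers are 'close' when
    their difference is divisible by a high power of p."""
    if x == y:
        return 0.0
    return float(p) ** -p_adic_valuation(abs(x - y), p)
```

For instance, p_adic_distance(1, 26, 5) is 5^-2 = 0.04, since 25 divides the difference; this inverted notion of closeness is what lets p-adic distances group together codons that share their leading nucleotides.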

Journal ArticleDOI
TL;DR: It is shown that all monitors with WC places in all cases presented are redundant and can be removed while maintaining the maximal number of good states; the conditions and examples for a WC place to be redundant are also identified.
Abstract: Huang et al. propose a more permissive siphon-based algorithm for deadlock prevention of a subclass of Petri nets, S3PMR. It iteratively (based on a mixed integer programming (MIP) technique) adds two kinds of control places, called ordinary control (OC) places and weighted control (WC) places, to the original model to prevent siphons from becoming unmarked. Numerical experiments indicate that the proposed policy appears to be more permissive than closely related approaches in the literature. The presence of WC places renders the net a generalized Petri net, which is harder to analyze, and it is unclear how the above traditional MIP must be modified. We show that all monitors with WC places in all cases presented are redundant and can be removed while maintaining the maximal number of good states. We also (1) show that OC places and WC places are associated with resource and mixture siphons, respectively; (2) identify the conditions and examples for a WC place to be redundant; (3) explore different types of problematic siphons; and (4) identify the correct sequence of adding monitors to avoid redundant monitors.

Journal ArticleDOI
TL;DR: Preliminary evaluations show that the combination of an ASP-based reasoning component and a WSN is a good solution for creating a home-based healthcare system.
Abstract: This paper describes an intelligent home healthcare system characterized by a wireless sensor network (WSN) and a reasoning component. The aim of the system is to allow constant and unobtrusive monitoring of a patient in order to enhance autonomy and increase quality of life. Data collected by the sensor network are used to support a reasoning component, which is based on answer set programming (ASP), in performing three main reasoning tasks: (i) continuous contextualization of the physical, mental and social state of a patient, (ii) prediction of possibly risky situations and (iii) identification of plausible causes for the worsening of a patient's health. Starting from different data sources (sensor data, test results, inference results) the reasoning component applies expressive logic rules aimed at correct interpretation of incomplete or inconsistent contextual information, and evaluates correlation rules expressed by clinicians. The expressive power of ASP allows efficient enough reasoning to support prevention, while declarativity simplifies rule-specification and allows automatic encoding of knowledge. Preliminary evaluations show that the combination of an ASP-based reasoning component and a WSN is a good solution for creating a home-based healthcare system.

Journal ArticleDOI
TL;DR: In a series of experiments, it is shown how agents can support human planners, ease their cognitive burden by giving advice on the correct use of policies and catch possible violations.
Abstract: In this paper, we describe how agents can support collaborative planning within international coalitions, formed in an ad hoc fashion as a response to military and humanitarian crises. As these coalitions are formed rapidly and without much lead time or co-training, human planners may be required to observe a plethora of policies that direct their planning effort. In a series of experiments, we show how agents can support human planners, ease their cognitive burden by giving advice on the correct use of policies and catch possible violations. The experiments show that agents can effectively prevent policy violations with no significant extra cost.

Journal ArticleDOI
TL;DR: The evaluation indicates that the proposed RNN-based algorithm is better in terms of performance than the greedy heuristic, consistently achieving on average results within 5% of the cost obtained by the optimal solution for all problem cases considered.
Abstract: We investigate the assignment of assets to tasks where each asset can potentially execute any of the tasks, but assets execute tasks with a probabilistic outcome of success. There is a cost associated with each possible assignment of an asset to a task, and if a task is not executed, there is also a cost associated with the non-execution of the task. Thus, any assignment of assets to tasks will result in an expected overall cost which we wish to minimize. We formulate the allocation of assets to tasks in order to minimize this expected cost, as a nonlinear combinatorial optimization problem. A neural network approach for its approximate solution is proposed based on selecting parameters of a random neural network (RNN), solving the network in equilibrium, and then identifying the assignment by selecting the neurons whose probability of being active is the highest. Evaluations of the proposed approach are conducted by comparison with the optimum (enumerative) solution as well as with a greedy approach over a large number of randomly generated test cases. The evaluation indicates that the proposed RNN-based algorithm is better in terms of performance than the greedy heuristic, consistently achieving on average results within 5% of the cost obtained by the optimal solution for all problem cases considered. The RNN-based approach is fast and is of low polynomial complexity in the size of the problem, while it can be used for decentralized decision making.

Journal ArticleDOI
TL;DR: A lower bound is given, showing that any comparison-based algorithm for the k-clustering problem that produces clusters of average size more than 2 in the worst case takes Ω(diam) time, where diam is the diameter of the network.
Abstract: A silent self-stabilizing asynchronous distributed algorithm is given for constructing a k-dominating set, and hence a k-clustering, of a connected network of processes with unique IDs and no designated leader. The algorithm is comparison-based, takes O(k) time and uses O(k log n) space per process, where n is the size of the network. It is known that finding a minimum k-dominating set is NP-hard. A lower bound is given, showing that any comparison-based algorithm for the k-clustering problem that produces clusters of average size more than 2 in the worst case takes Ω(diam) time, where diam is the diameter of the network.
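
As a hedged illustration of the target property (a simple centralized checker, not the self-stabilizing algorithm itself; the names are ours): a set D is k-dominating when every process lies within distance k of some member of D, which a multi-source BFS truncated at depth k verifies directly:

```python
from collections import deque

def is_k_dominating(adj, D, k):
    """Check that every node of the graph (adjacency dict: node -> iterable
    of neighbours) is within distance k of some node of the set D."""
    dist = {v: None for v in adj}
    queue = deque()
    for v in D:                      # multi-source BFS started from D
        dist[v] = 0
        queue.append(v)
    while queue:
        u = queue.popleft()
        if dist[u] == k:             # no need to search beyond depth k
            continue
        for w in adj[u]:
            if dist[w] is None:
                dist[w] = dist[u] + 1
                queue.append(w)
    return all(d is not None for d in dist.values())
```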

Journal ArticleDOI
TL;DR: To solve the generalized test derivation problem, sufficient conditions for test suite completeness weaker than the existing ones are formulated and used to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation.
Abstract: In this paper, we consider a classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can even be smaller than that of the specification FSM. Previous work deals only with the case when the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization provides more options to the test designer: when traditional methods trigger a test explosion for large specification machines, tests with a lower, but yet guaranteed, fault coverage can still be generated. The second generalization is that tests can be generated starting with a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. Solving the generalized test derivation problem, we formulate sufficient conditions for test suite completeness weaker than the existing ones and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation. We present the experimental results that indicate that the proposed algorithm allows obtaining a trade-off between the length and fault coverage of test suites.

Journal ArticleDOI
TL;DR: The results show that the technique performs comparably to a centralized task scheduler (within 6% on average), and also, unlike its centralized counterpart, it is robust to restrictions on the agents’ communication and observation ranges.
Abstract: This paper reports on a novel decentralized technique for planning agent schedules in dynamic task allocation problems. Specifically, we use a stochastic game formulation of these problems in which tasks have varying hard deadlines and processing requirements. We then introduce a new technique for approximating this game using a series of static potential games, before detailing a decentralized method for solving the approximating games that uses the distributed stochastic algorithm. Finally, we discuss an implementation of our approach to a task allocation problem in the RoboCup Rescue disaster management simulator. The results show that our technique performs comparably to a centralized task scheduler (within 6% on average), and also, unlike its centralized counterpart, it is robust to restrictions on the agents’ communication and observation ranges.
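
For background, a generic, hedged sketch of one round of the distributed stochastic algorithm that the decentralized method builds on (the DSA variant, activation probability and helper names here are illustrative, not the paper's):

```python
import random

def dsa_step(domains, assignment, utility, p=0.7):
    """One synchronous round of the distributed stochastic algorithm:
    each agent, independently with activation probability p, switches to
    the value in its domain that maximizes its local utility against the
    others' current assignment, if that is a strict improvement.
    domains: {agent: values}; utility(agent, value, assignment) -> float."""
    new = dict(assignment)
    for agent, domain in domains.items():
        if random.random() < p:
            best = max(domain, key=lambda v: utility(agent, v, assignment))
            if utility(agent, best, assignment) > utility(agent, assignment[agent], assignment):
                new[agent] = best
    return new
```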

Journal ArticleDOI
TL;DR: An architectural model based on policy-based self-managed cells for engineering ubiquitous computing systems is presented, indicating the need to learn adaptive behaviour from users and the importance of formal methods within the engineering design process.
Abstract: The advent of miniaturized sensors that can be carried on the body or embedded in the environment, together with ubiquitous ‘smartphones’ with various sensors means that ubiquitous computing systems already pervade our lives. However, for them to ‘disappear’ in the background, they need to be adaptive, autonomous and self-managing. We present an architectural model based on policy-based self-managed cells for engineering ubiquitous computing systems, and discuss issues of security and fault management. We indicate the need for learning adaptive behaviour from users and the importance of formal methods within the engineering design process.

Journal ArticleDOI
TL;DR: A novel methodology grounded in the ethical issues associated with a project of this nature is created, and ambient user interfaces integrated into familiar home artefacts, such as televisions and digital picture frames, are developed.
Abstract: In this paper, we present a case study on the development of interfaces for elderly and disabled users. The domain of the case study was situated in the home environment, where we focused on producing affordable technologies to enable users to interact with and to control home appliances. We have developed ambient user interfaces that are integrated in familiar home artefacts, such as televisions and digital picture frames. These interfaces are connected remotely to a home network and are adaptive to users’ expected increasing physical and cognitive needs. To support the development of the project, we created a novel methodology that is grounded in the ethical issues associated with a project of this nature. Our success with it has led to us presenting it here as a practical approach to developing user interfaces for a range of interactive applications, especially where there may be diverse user populations. This paper describes our journey through this project, how the methodology has been used throughout and the development of our user interfaces and their evaluation.

Journal ArticleDOI
TL;DR: This paper addresses context in intelligent context-aware systems to support personalised service provision and cooperative computing using RDF/S with OWL and Jena provide an effective basis for autonomous decision making using processing rules.
Abstract: This paper addresses context in intelligent context-aware systems to support personalised service provision and cooperative computing. Context processing, context modelling, ontology and OWL are introduced, and a context reasoning ontology is presented. Context implementation reduces to a decision problem: selecting from a number of potential options based on the relationship between the values that describe the input and the solution. The modelling school of decision analysis attempts to construct an explicit model of such relationships, usually in the form of decision trees, and an overview of decision trees with parametric design considerations is presented. Comparisons with related research are drawn, and an evaluation and simulation of Smart-Context is presented. RDF/S with OWL and Jena provide an effective basis for autonomous decision making using processing rules; the issue is one of implementation in adaptable and tractable solutions. A conclusion with open research questions is presented, with consideration of potential directions for future research.

Journal ArticleDOI
TL;DR: A CVE categorization framework termed CVE Classifier is proposed that transforms the dictionary into a classifier that not only categorizes CVEs with respect to diverse taxonomic features but can also evaluate general trends in the evolution of vulnerabilities.
Abstract: The dictionary of common vulnerabilities and exposures (CVEs) is a compilation of known security loopholes whose objective is to both facilitate the exchange of security-related information and expedite vulnerability analysis of computer systems. Its lack of categorization and generalization capability renders the dictionary ineffective when it comes to developing defense strategies for clustered vulnerabilities instead of individual exploits. To address this issue, we propose a CVE categorization framework termed CVE Classifier that transforms the dictionary into a classifier that not only categorizes CVEs with respect to diverse taxonomic features but can also evaluate general trends in the evolution of vulnerabilities. With the help of support vector machines, CVE Classifier builds learning models for taxonomic features based on training data automatically extracted from pertinent vulnerability databases, including BID, X-Force and Secunia, and CVE entries containing telltale keywords unique to taxonomic features. We use word-stemming and stopword-removal techniques to reduce the dimensions of the feature space formed by CVEs and develop a data fusion and cleansing process to eliminate data inconsistencies to improve classification performance. The CVE classification produced by the proposed framework reveals that the majority of the Internet security loopholes are harbored by a small set of services. Moreover, it becomes evident that the widespread deployment of security devices provides many additional attack points, as such devices exhibit a large number of vulnerabilities. Finally, the CVE Classifier points out that remotely exploitable security loopholes continue to dominate the CVE landscape.
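
As a hedged sketch of this kind of pipeline (the paper's exact tooling is not stated; scikit-learn, the sample data and the labels here are our stand-ins, and the paper additionally applies word stemming):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training data: CVE descriptions with a taxonomic label.
descriptions = ["buffer overflow in the image parser ...",
                "SQL injection in the login form ..."]
labels = ["memory-corruption", "input-validation"]

clf = make_pipeline(
    TfidfVectorizer(stop_words="english"),  # stopword removal + term weighting
    LinearSVC(),                            # linear support vector machine
)
clf.fit(descriptions, labels)
print(clf.predict(["heap overflow in the font renderer"]))
```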

Journal ArticleDOI
TL;DR: A multicast routing infrastructure is proposed as a core feature of SpiNNaker, a massively parallel computer for the real-time simulation of large-scale spiking neural networks, which focuses on neural modelling flexibility, power-efficiency, fault-tolerance and the communication throughput of the router.
Abstract: A multicast routing infrastructure is proposed as a core feature of SpiNNaker, a massively parallel computer for the real-time simulation of large-scale spiking neural networks. The infrastructure is implemented using a communications router, based on an event-driven routing scheme, on each multicore processing node in the system. The design considerations emphasize the difference between the requirements of neural network communications and those of conventional computer networks and on-chip networks. The focus of the design is on neural modelling flexibility, power-efficiency, fault-tolerance and the communication throughput of the router.

Journal ArticleDOI
TL;DR: A novel technique is proposed, the ‘Randomized Re-Routing Algorithm (RRR)’, which detects the presence of novel events in a distributed manner, and dynamically disperses the background traffic towards secondary paths in the network, while creating a ‘fast track path’ which provides better delay and better quality of service (QoS) for the high priority traffic which is carrying the new information.
Abstract: Sensor networks (SNs) consist of spatially distributed sensors which monitor an environment, and which are connected to some sinks or backbone system to which the sensor data is being forwarded. In many cases, the sensor nodes themselves can serve as intermediate nodes for data coming from other nodes, on the way to the sinks. Much of the traffic carried by SNs will originate from routine measurements or observations by sensors that monitor a particular situation, such as the temperature and humidity in a room or the infrared observation of the perimeter of a house, so that the volume of routine traffic resulting from such observations may be quite high. When important and unusual events occur, such as a sudden fire breaking out or the arrival of an intruder, it will be necessary to convey this new information very urgently through the network to a designated set of sink nodes where this information can be processed and dealt with. This paper addresses the important challenge of preventing the routine background traffic from creating delays or bottlenecks that impede the rapid delivery of high-priority traffic resulting from the unusual events. Specifically, we propose a novel technique, the 'Randomized Re-Routing Algorithm (RRR)', which detects the presence of novel events in a distributed manner and dynamically disperses the background traffic towards secondary paths in the network, while creating a 'fast track path' which provides better delay and better quality of service (QoS) for the high-priority traffic carrying the new information. When the surge of new information has subsided, this is again detected by the nodes, and the nodes progressively revert to best-QoS or shortest-path routing for all the ongoing traffic. The proposed technique is evaluated using a mathematical model as well as simulations, and is also compared with a standard node-by-node priority scheduling technique.

Journal ArticleDOI
TL;DR: This paper provides a formal definition of cheating for visual cryptography and new (2, n)-threshold and (n, n)-threshold schemes that are immune to deterministic cheating.
Abstract: In this paper, we consider the problem of cheating for visual cryptography schemes. Although the problem of cheating has been extensively studied for secret sharing schemes, little work has been done for visual secret sharing. We provide a formal definition of cheating for visual cryptography and new (2, n)-threshold and (n, n)-threshold schemes that are immune to deterministic cheating.
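
For background, the classic Naor–Shamir (2, 2) construction on which such threshold schemes build (a sketch of the standard scheme, not of the paper's cheating-immune variants; the names are ours):

```python
import random

def share_pixel_2of2(secret_bit):
    """Visual secret sharing of one pixel with two-subpixel expansion.
    secret_bit: 0 = white, 1 = black. Returns the subpixel pairs printed
    on the two transparencies."""
    s1 = random.choice([(0, 1), (1, 0)])                  # random pattern
    s2 = s1 if secret_bit == 0 else tuple(1 - b for b in s1)
    return s1, s2

# Stacking the transparencies is a pixelwise OR: a white secret pixel
# reconstructs as half black, a black one as fully black.
s1, s2 = share_pixel_2of2(1)
stacked = tuple(a | b for a, b in zip(s1, s2))            # (1, 1): black
```

Either share alone is a uniformly random pair and so reveals nothing about the secret; cheating arises when a malicious participant stacks a forged share to make others reconstruct a wrong image, which is the attack the paper's schemes are designed to resist.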

Journal ArticleDOI
TL;DR: A new routing algorithm called ant-based energy-aware disjoint multipath routing algorithm (AEADMRA) is proposed, based on swarm intelligence and especially on the ant-colony-based metaheuristic; it extends GRID to enable path accumulation in route request/reply packets and discovers multiple energy-aware routing paths with a low routing overhead.
Abstract: Ant-based routing protocols for mobile ad hoc networks (MANETs) have been widely explored, but most of them are essentially single-path routing methods that tend to impose a heavy burden on the hosts along the shortest path from source to destination. In this paper, we combine swarm intelligence and node-disjoint multipath routing to alleviate these problems. A novel approach called ant-based energy-aware disjoint multipath routing algorithm (AEADMRA) is proposed. AEADMRA is based on swarm intelligence and especially on the ant-colony-based metaheuristic. AEADMRA can discover multiple energy-aware node-disjoint routing paths with a low routing overhead. Simulation results indicate that AEADMRA outperforms other pertinent algorithms.
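
For background, ant-colony metaheuristics such as the one underlying AEADMRA steer route discovery through pheromone tables updated each round; the generic update rule (standard ACO notation, not the paper's exact variant) is

```latex
\tau_{ij} \leftarrow (1 - \rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k},
```

where ρ is the evaporation rate and Δτ_ij^k is the pheromone deposited on link (i, j) by ant k, typically proportional to the quality (here, the energy-awareness and length) of the route the ant found.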

Journal ArticleDOI
TL;DR: In application to policy decision making, it is shown how a Euclidean embedding of different information spaces based on cross-tabulation counts is provided by correspondence analysis and how to focus analysis in a small number of dimensions.
Abstract: We study two aspects of information semantics: (i) the collection of all relationships, (ii) tracking and spotting anomaly and change. The first is implemented by endowing all relevant information spaces with a Euclidean metric in a common projected space. The second is modelled by an induced ultrametric. A very general way to achieve a Euclidean embedding of different information spaces based on cross-tabulation counts (and from other input data formats) is provided by correspondence analysis. From there, the induced ultrametric that we are particularly interested in takes a sequential—e.g. temporal—ordering of the data into account. We employ such a perspective to look at narrative, ‘the flow of thought and the flow of language’ (Chafe). In application to policy decision making, we show how we can focus analysis in a small number of dimensions.
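
As a hedged sketch of the Euclidean embedding step (standard correspondence analysis via an SVD of standardized residuals; numpy and the function name are our choices):

```python
import numpy as np

def correspondence_analysis(N):
    """Correspondence analysis of a contingency table N of nonnegative
    counts: returns principal row and column coordinates, a Euclidean
    embedding of the row and column profiles in a common space."""
    P = N / N.sum()                                       # correspondence matrix
    r = P.sum(axis=1)                                     # row masses
    c = P.sum(axis=0)                                     # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U * s / np.sqrt(r)[:, None]                    # principal row coords
    cols = Vt.T * s / np.sqrt(c)[:, None]                 # principal column coords
    return rows, cols
```

Plotting the first two columns of rows and cols gives the low-dimensional map in which both sets of profiles live, the 'common projected space' referred to above; restricting attention to the leading dimensions is how the analysis is focused on a small number of dimensions.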