
Showing papers in "The Computer Journal in 2007"


Journal ArticleDOI
TL;DR: This article describes a new technique for 'hedging' the predictions output by many such algorithms, including support vector machines, kernel ridge regression, kernel nearest neighbours and by many other state-of-the-art methods.
Abstract: Recent advances in machine learning make it possible to design efficient prediction algorithms for data sets with huge numbers of parameters. This article describes a new technique for 'hedging' the predictions output by many such algorithms, including support vector machines, kernel ridge regression, kernel nearest neighbours and by many other state-of-the-art methods. The hedged predictions for the labels of new objects include quantitative measures of their own accuracy and reliability. These measures are provably valid under the assumption of randomness, traditional in machine learning: the objects and their labels are assumed to be generated independently from the same probability distribution. In particular, it becomes possible to control (up to statistical fluctuations) the number of erroneous predictions by selecting a suitable confidence level. Validity being achieved automatically, the remaining goal of hedged prediction is efficiency: taking full account of the new objects' features and other available information to produce as accurate predictions as possible. This can be done successfully using the powerful machinery of modern machine learning.
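
To make the idea of hedged prediction concrete, the following is a minimal sketch (an illustration, not the authors' implementation) of a transductive conformal predictor with a 1-nearest-neighbour nonconformity measure: for each candidate label it computes a p-value and, at confidence level 1 - epsilon, outputs every label whose p-value exceeds epsilon. Under the randomness assumption, the long-run fraction of prediction sets that miss the true label is at most epsilon. The data below are toy values.

```python
# Toy transductive conformal predictor (illustration only, not the authors' code).
# Nonconformity score: distance to the nearest neighbour with the same label
# divided by distance to the nearest neighbour with a different label.
import numpy as np

def nonconformity(X, y, i):
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                         # exclude the example itself
    return d[y == y[i]].min() / d[y != y[i]].min()

def conformal_predict(X_train, y_train, x_new, labels, epsilon=0.05):
    """Return the set of labels with p-value > epsilon (confidence 1 - epsilon)."""
    prediction_set = []
    for lab in labels:
        X = np.vstack([X_train, x_new])   # tentatively extend the training set
        y = np.append(y_train, lab)       # ... with the candidate label
        scores = np.array([nonconformity(X, y, i) for i in range(len(y))])
        p_value = np.mean(scores >= scores[-1])   # how 'strange' is the new example?
        if p_value > epsilon:
            prediction_set.append(lab)
    return prediction_set

X_train = np.array([[0.0], [0.1], [1.0], [1.1]])
y_train = np.array([0, 0, 1, 1])
print(conformal_predict(X_train, y_train, np.array([[0.05]]), labels=[0, 1]))
```

With only four training examples the predictor returns both labels at the 95% level, which is exactly the intended behaviour: the prediction set widens rather than overstating confidence.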

120 citations


Journal ArticleDOI
TL;DR: This paper proposes a complementary approach that obtains efficient event dissemination by reorganizing the overlay network topology through a self-organizing algorithm executed by brokers whose aim is to directly connect, through overlay links, pairs of brokers matching the same events.
Abstract: Recently, many scalable and efficient solutions for event dissemination in publish/subscribe (pub/sub) systems have appeared in the literature. This dissemination is usually done over an overlay network of brokers and its cost can be measured as the number of messages sent over the overlay to allow the event to reach all intended subscribers. Efficient solutions to this problem are often obtained through smart dissemination algorithms that avoid flooding events on the overlay. In this paper, we propose a complementary approach that obtains efficient event dissemination by reorganizing the overlay network topology. More specifically, this reorganization is done through a self-organizing algorithm executed by brokers whose aim is to directly connect, through overlay links, pairs of brokers matching the same events. In this way, on average, the number of brokers involved in an event dissemination decreases, thus reducing its cost. Even though the paradigm of the self-organizing algorithm is general and thus applicable to any overlay-based pub/sub system, its concrete implementation depends on the specific system. As a consequence, we studied the effect of the introduction of the self-organizing algorithm in the context of a specific system implementing a tree-based routing strategy, namely SIENA, showing the actual performance benefits through an extensive simulation study. In particular, performance results point out the capacity of the algorithm to converge to an overlay topology accommodating efficient event dissemination with respect to (w.r.t.) a specific scenario. Moreover, the algorithm shows a significant capacity to adapt the overlay network topology to continuously changing scenarios while keeping an efficient behavior w.r.t. event dissemination.
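
The self-organizing step can be pictured with a small sketch (a deliberate simplification, not the SIENA-based algorithm evaluated in the paper): each broker counts how often other brokers match the same events it does and proposes a direct overlay link to frequent co-matchers that are not yet neighbours. Broker names and the threshold are hypothetical.

```python
# Simplified self-organization sketch (not the paper's SIENA-based algorithm):
# a broker counts how often each other broker matched the same events and
# proposes a direct overlay link once a non-neighbour co-matches often enough.
from collections import Counter

class Broker:
    def __init__(self, name, threshold=3):
        self.name = name
        self.neighbours = set()        # current direct overlay links
        self.co_matches = Counter()    # how often others matched "my" events
        self.threshold = threshold

    def record_event(self, matching_brokers):
        """matching_brokers: names of all brokers that matched this event."""
        for other in matching_brokers:
            if other != self.name:
                self.co_matches[other] += 1

    def links_to_propose(self):
        return [b for b, n in self.co_matches.items()
                if n >= self.threshold and b not in self.neighbours]

b1 = Broker("B1")
for _ in range(3):
    b1.record_event({"B1", "B4"})      # B4 keeps matching the same events
print(b1.links_to_propose())           # ['B4'] -> connect B1 and B4 directly
```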

93 citations


Journal ArticleDOI
TL;DR: A verification approach based on an abstraction of the OR-join semantics; the relaxed soundness property; and transition invariants can be used to successfully detect errors in YAWL models and can be easily transferred to other workflow languages allowing for advanced constructs such as cancellations and OR-joins.
Abstract: The YAWL (Yet Another Workflow Language) workflow language supports the most frequent control-flow patterns found in current workflow practice. As a result, most workflow languages can be mapped onto YAWL without the loss of control-flow details, even languages allowing for advanced constructs such as cancellation regions and OR-joins. Hence, a verification approach for YAWL is desirable, because such an approach could be used for any workflow language that can be mapped onto YAWL. Unfortunately, cancellation regions and OR-joins are 'non-local' properties, and in general we cannot even decide whether the desired final state is reachable if both patterns are present. This paper proposes a verification approach based on (i) an abstraction of the OR-join semantics; (ii) the relaxed soundness property; and (iii) transition invariants. This approach is correct (errors reported are really errors), but not necessarily complete (not every error might get reported). This incompleteness can be explained because, on the one hand, the approach abstracts from the OR-join semantics and, on the other hand, it may use only transition invariants, which are structural properties. Nevertheless, our approach can be used to successfully detect errors in YAWL models. Moreover, the approach can be easily transferred to other workflow languages allowing for advanced constructs such as cancellations and OR-joins.

77 citations


Journal ArticleDOI
TL;DR: Three main techniques to develop fixed-parameter algorithms are surveyed, namely: kernelization (data reduction with provable performance guarantee), depth-bounded search trees and a new technique called iterative compression.
Abstract: The fixed-parameter approach is an algorithm design technique for solving combinatorially hard (mostly NP-hard) problems. For some of these problems, it can lead to algorithms that are efficient and at the same time guaranteed to find optimal solutions. Focusing on their application to solving NP-hard problems in practice, we survey three main techniques to develop fixed-parameter algorithms, namely: kernelization (data reduction with provable performance guarantee), depth-bounded search trees and a new technique called iterative compression. Our discussion is circumstantiated by several concrete case studies and provides pointers to various current challenges in the field.
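
As an illustration of the depth-bounded search tree technique surveyed here, the sketch below solves the textbook k-Vertex Cover problem: branching on the two endpoints of an uncovered edge bounds the tree depth by k, giving an O(2^k · m) algorithm. This is a generic example, not code from the survey.

```python
# Depth-bounded search tree for k-Vertex Cover (generic illustration, not code
# from the survey). Any cover must contain u or v for every edge (u, v), so we
# branch on the two endpoints of some uncovered edge; the tree depth is <= k.
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k as a set, or None if none exists."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for w in (u, v):                              # two branches per tree node
        remaining = [e for e in edges if w not in e]
        sub = vertex_cover(remaining, k - 1)
        if sub is not None:
            return sub | {w}
    return None

# A 4-cycle with a chord has a cover of size 2, namely {1, 3}.
print(vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)], 2))
```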

71 citations


Journal ArticleDOI
TL;DR: This paper presents a simple framework unifying a family of consensus algorithms that can tolerate process crash failures and asynchronous periods of the network, also called indulgent consensus algorithms, and introduces a new abstraction, called Alpha, which precisely captures consensus safety.
Abstract: This paper presents a simple framework unifying a family of consensus algorithms that can tolerate process crash failures and asynchronous periods of the network, also called indulgent consensus algorithms. Key to the framework is a new abstraction we introduce here, called Alpha, which precisely captures consensus safety. Implementations of Alpha in shared memory, storage area network, message passing and active disk systems are presented, leading to directly derived consensus algorithms suited to these communication media. The paper also considers the case where the number of processes is unknown and can be arbitrarily large.

60 citations


Journal ArticleDOI
TL;DR: A DoS detection approach is proposed which uses the maximum likelihood criterion with the random neural network (RNN), fusing the information gathered from the individual input features through likelihood averaging and different RNN architectures.
Abstract: Due to the simplicity of the concept and the availability of attack tools, launching a DoS attack is relatively easy, while defending a network resource against it is disproportionately difficult. The first step of a protection scheme against DoS must be the detection of its existence, ideally before the destructive traffic build-up. In this paper we propose a DoS detection approach which uses the maximum likelihood criterion with the random neural network (RNN). Our method is based on measuring various instantaneous and statistical variables describing the incoming network traffic, acquiring a likelihood estimation and fusing the information gathered from the individual input features using likelihood averaging and different architectures of RNNs. We present and compare seven variations of it and evaluate our experimental results obtained in a large networking testbed.
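
The fusion step can be illustrated with a simplified sketch. The paper estimates likelihoods with random neural networks; here, hypothetical Gaussian feature models stand in for the RNN outputs, and the per-feature evidence is combined by likelihood averaging before thresholding.

```python
# Simplified likelihood-averaging fusion (the paper uses random neural networks
# to estimate the likelihoods; the Gaussian feature models below are
# hypothetical stand-ins). Each feature contributes an attack-vs-normal
# evidence value in [0, 1]; the fused statistic is their average.
import math

def gaussian(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

MODEL = {  # hypothetical class-conditional parameters (mean, std) per feature
    "pkt_rate":  {"normal": (200.0, 50.0), "attack": (900.0, 200.0)},
    "syn_ratio": {"normal": (0.1, 0.05),   "attack": (0.6, 0.15)},
}

def attack_score(sample):
    evidences = []
    for feature, x in sample.items():
        l_attack = gaussian(x, *MODEL[feature]["attack"])
        l_normal = gaussian(x, *MODEL[feature]["normal"])
        evidences.append(l_attack / (l_attack + l_normal))
    return sum(evidences) / len(evidences)       # likelihood averaging

sample = {"pkt_rate": 850.0, "syn_ratio": 0.55}  # instantaneous measurements
print("raise DoS alarm:", attack_score(sample) > 0.5)
```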

58 citations


Journal ArticleDOI
TL;DR: This paper develops an approach to symbolic CKLn model checking via ordered binary decision diagrams (OBDDs), implements the corresponding symbolic model checker MCTK, and models the Dining Cryptographers protocol and the five-hands protocol for the Russian Cards problem in MCTK.
Abstract: Model checking is a promising approach to automatic verification, which has concentrated on specifications expressed in temporal logics. Comparatively little attention has been given to temporal logics of knowledge, although such logics have been proven to be very useful in the specifications of protocols for distributed systems. In this paper, we address the model checking problem for a temporal logic of knowledge (Halpern and Vardi's logic of CKLn). Based on the semantics of interpreted systems with local propositions, we develop an approach to symbolic CKLn model checking via ordered binary decision diagrams (OBDDs) and implement the corresponding symbolic model checker MCTK. In our approach to model checking specifications involving agents' knowledge, the knowledge modalities are eliminated via quantifiers over agents' non-observable variables. We then model the Dining Cryptographers protocol and the five-hands protocol for the Russian Cards problem in MCTK. Via these two examples, we compare MCTK's empirical performance with two different state-of-the-art epistemic model checkers, MCK and MCMAS.

52 citations


Journal ArticleDOI
Wei Li1
TL;DR: The properties reachability, soundness and completeness of the R-calculus are formally defined and proved.
Abstract: First-order languages have been introduced to describe beliefs formally and the concept of revision is defined for model as well as proof. A logical inference system named R-calculus is defined to derive all maximal contractions of a base of belief set for its given refutations. The R-calculus consists of the structural rules, an axiom, a cut rule and the rules for logical connectives and quantifiers. Some examples are given to demonstrate how to use the R-calculus. Furthermore, the properties reachability, soundness and completeness of the R-calculus are formally defined and proved.

42 citations


Journal ArticleDOI

41 citations


Journal ArticleDOI
TL;DR: Genetic regulatory mechanisms that control cellular responses to low-dose ionizing radiation (IR) are elucidated by developing powerful graph algorithms that exploit the innovative principles of fixed parameter tractability in order to generate distilled gene sets.
Abstract: Tools of molecular biology and the evolving tools of genomics can now be exploited to study the genetic regulatory mechanisms that control cellular responses to a wide variety of stimuli. These responses are highly complex, and involve many genes and gene products. The main objectives of this paper are to describe a novel research program centered on understanding these responses by (i) developing powerful graph algorithms that exploit the innovative principles of fixed parameter tractability in order to generate distilled gene sets; (ii) producing scalable, high performance parallel and distributed implementations of these algorithms utilizing cutting-edge computing platforms and auxiliary resources; (iii) employing these implementations to identify gene sets suggestive of co-regulation; and (iv) performing sequence analysis and genomic data mining to examine, winnow and highlight the most promising gene sets for more detailed investigation. As a case study, we describe our work aimed at elucidating genetic regulatory mechanisms that control cellular responses to low-dose ionizing radiation (IR). A low-dose exposure, as defined here, is an exposure of at most 10 cGy (rads). While the consequences of high doses of radiation are well known, the net outcome of low-dose exposures continues to be debated, with support in the literature for both detrimental and beneficial effects. We use genome-scale gene expression data collected in response to low-dose IR exposure in vivo to identify the pathways that are activated or repressed as a tissue responds to the radiation insult. The driving motivation is that knowledge of these pathways will help clarify and interpret physiological responses to IR, which will advance our understanding of the health consequences of low-dose radiation exposures.

40 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the proposed protocol is a real contributory group key agreement one and provides forward secrecy as well as implicit key authentication and is provably secure against passive adversaries and impersonator's attacks.
Abstract: With the rapid growth of mobile wireless networks, many mobile applications have received significant attention. However, security will be an important factor for their full adoption. Most security technologies currently deployed in wired networks are not fully applicable to wireless networks involving resource-limited mobile nodes because of their low-power computing capability and limited energy. The design of secure group key agreement protocols is one of many important security issues in wireless networks. A group key agreement protocol involves all participants cooperatively establishing a group key, which is used to encrypt/decrypt transmitted messages among participants over an open channel. Unfortunately, most previously proposed group key agreement protocols are too expensive computationally to be employed in mobile wireless networks. Recently, Bresson et al. proposed an authenticated group key agreement protocol suitable for a mobile wireless network. This mobile wireless network is an asymmetric wireless one that consists of many mobile nodes with limited computing capability and a powerful node with less restriction. However, this protocol does not satisfy some important security properties such as forward secrecy and contributory key agreement. In this paper, we propose a new authenticated group key agreement protocol, which is well suited for this asymmetric wireless network. The proposed protocol is not only efficient but also meets strong security requirements. We demonstrate that the proposed protocol is a real contributory group key agreement one and provides forward secrecy as well as implicit key authentication. The proposed protocol is provably secure against passive adversaries and impersonator's attacks. A simulation result shows that the proposed protocol is well suited for mobile devices with limited computing capability.

Book ChapterDOI
TL;DR: This chapter surveys the use of fixed-parameter algorithms in phylogenetics, which involves the construction of a likely phylogeny (genealogical tree) for a set of species based on observed differences in the phenotype, Differences in the genotype, or given partial phylogenies.
Abstract: We survey the use of fixed-parameter algorithms in the field of phylogenetics, which is the study of evolutionary relationships. The central problem in phylogenetics is the reconstruction of the evolutionary history of biological species, but its methods also apply to linguistics, philology or architecture. A basic computational problem is the reconstruction of a likely phylogeny (genealogical tree) for a set of species based on observed differences in the phenotype like color or form of limbs, based on differences in the genotype like mutated nucleotide positions in the DNA sequence, or based on given partial phylogenies. Ideally, one would like to construct so-called perfect phylogenies, which arise from a very simple evolutionary model, but in practice one must often be content with phylogenies whose ‘distance from perfection’ is as small as possible. The computation of phylogenies has applications in seemingly unrelated areas such as genomic sequencing and finding and understanding genes. The numerous computational problems arising in phylogenetics often are NP-complete, but for many natural parametrizations they can be solved using fixed-parameter algorithms.

Journal ArticleDOI
TL;DR: This paper describes the development of a technical demonstrator system, the AKTiveSA TDS, which integrates a variety of knowledge technologies and visualization components within the context of a unitary application framework, and describes the approach to scenario development, knowledge acquisition, ontology engineering, and system design.
Abstract: The issue of improved situation awareness is a key concern for military agencies, promising to deliver strategic advantages in a variety of conflict and non-conflict scenarios. Improved situation awareness can benefit operational effectiveness by facilitating the planning process, improving the quality and timeliness of decisions, and providing better feedback regarding the strategic consequences of military actions. In this paper, we aim to show how a combination of semantic technologies and user interface design initiatives can be used to improve situation awareness in a simulated humanitarian relief scenario. We describe the development of a technical demonstrator system, the AKTiveSA TDS, which integrates a variety of knowledge technologies and visualization components within the context of a unitary application framework. We also describe our approach to scenario development, knowledge acquisition, ontology engineering, and system design. Some specific problems encountered during system development are discussed, e.g. the performance overheads associated with rules-based processing, and potential solution strategies for these problems are presented alongside a description of future development activities.

Journal ArticleDOI
TL;DR: A technique to describe use cases by means of sequence diagrams, which compares sequence diagrams in order to define sequence-diagram relationships for identifying and defining use-case relationships is explained.
Abstract: One of the key tools of the unified modelling language for behaviour modelling is the use-case model. The behaviour of a use case can be described by means of interaction diagrams (sequence and collaboration diagrams), activity charts and state diagrams or by pre-conditions and post-conditions, as well as natural language text, where appropriate. This article explains a technique to describe use cases by means of sequence diagrams. It compares sequence diagrams in order to define sequence-diagram relationships for identifying and defining use-case relationships. This article uses an automatic teller machine system case study to illustrate our proposal.

Journal ArticleDOI
TL;DR: An introductory overview of the novel field of quantum programming languages (QPLs) from a pragmatic perspective and some of the goals and design issues are surveyed, which motivate the research in this area.
Abstract: The present article gives an introductory overview of the novel field of quantum programming languages (QPLs) from a pragmatic perspective. First, after a short summary of basic notations of quantum mechanics, some of the goals and design issues are surveyed, which motivate the research in this area. Then, several of the approaches are described in more detail. The article concludes with a brief survey of current research activities and a tabular summary of a selection of QPLs, which have been published so far.

Journal ArticleDOI
TL;DR: Two experiments are reported that investigated the relationship between information access and the selection of cognitive strategy, particularly the effect on learning and memory, and the potential benefits of a novel approach to display design that manipulates information access depending on task performance criteria.
Abstract: Recent developments in technology have meant that operators of complex systems, such as those found in the modern aircraft cockpit, now have access to an unprecedented volume of information. Significant focus within computer science and engineering has therefore been placed upon developing methods by which task-relevant information can be made as accessible as possible within the interface, thus reducing operator workload—a goal of traditional human factors approaches to interface design. Recent theoretical perspectives of adaptive human cognition [Gray, W.D. and Fu, W.-T. (2004). Soft constraints in interactive behaviour: the case of ignoring perfect knowledge in the world for imperfect knowledge in the head. Cogn. Sci., 28, 359–383] suggest that there is a delicate balance and a series of trade-offs between information access at the interface and resulting human interactive behaviour. A distinction has been drawn between display-and memory-based strategies. Two experiments are reported that investigated the relationship between information access and the selection of cognitive strategy, particularly the effect on learning and memory. Experiment 1 varied the cost of accessing information in a routine copying task, and found that higher access costs (e.g. mouse movement plus 1-s delay) led to the adoption of a memory-based strategy, and better retention of visual-spatial information. Experiment 2 manipulated the availability of fused information in a low-fidelity flight simulation task. Temporarily removing onscreen information similarly improved recall of site location information. The potential benefits are discussed of a novel approach to display design that manipulates information access depending on task performance criteria.

Journal ArticleDOI
TL;DR: This article contests the Wegner and Eberbach claims that there are fundamental limitations to Turing Machines as a foundation of computability and that these can be overcome by so-called super-Turing models such as interaction machines, the π-Calculus and the $-calculus.
Abstract: Wegner and Eberbach have argued that there are fundamental limitations to Turing Machines as a foundation of computability and that these can be overcome by so-called super-Turing models such as interaction machines, the π-calculus and the $-calculus. In this article, we contest the Wegner and Eberbach claims.

Journal ArticleDOI
TL;DR: Computer simulations demonstrate that the proposed approach is robust in joint initiation/termination and tracking of multiple manoeuvring targets even when the environment is hostile with high-clutter rates and low target detection probabilities.
Abstract: In this paper, we present an online approach for joint initiation/termination and tracking for multiple targets with multiple sensors using sequential Monte Carlo (SMC) methods. There are several main contributions in the paper. The first contribution is the extension of the deterministic initiation and termination method proposed by the authors' previous publications to a full SMC context in which track initiation/termination are executed with sampling methods. In effect, the dimensions of the particles are variable. In addition, we also integrate a Markov random field (MRF) motion model with the framework to enable efficient and accurate tracking for interacting targets and to avoid potential track coalescence problems. With the employment of multiple sensors, a centralized tracking strategy is adopted, where the observations from all active sensors are fused together for target initiation/termination and tracking and a set of global tracks is maintained. Intra-and inter-sensor clusters are constructed, comprised of closely spaced observations either in time for single sensors or from distinct sensors at a single time, that can increase the reliability when proposing new tracks for initiation. Computer simulations demonstrate that the proposed approach is robust in joint initiation/termination and tracking of multiple manoeuvring targets even when the environment is hostile with high-clutter rates and low target detection probabilities. The integration of the MRF framework into the proposed methods improves robustness in handling close target interactions when the observation noise is high.
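
For readers unfamiliar with the SMC machinery the paper builds on, here is a minimal bootstrap particle filter for a single 1-D target with a constant-velocity model. It is only the basic building block, not the authors' multi-sensor, variable-dimension initiation/termination scheme or the MRF interaction model; all noise parameters are hypothetical.

```python
# Minimal bootstrap particle filter for one 1-D target with a constant-velocity
# model -- the basic SMC building block, not the authors' multi-target scheme.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n=500, q=0.1, r=0.5):
    x = np.zeros((n, 2))                           # state: [position, velocity]
    x[:, 0] = observations[0] + rng.normal(0, r, n)
    estimates = []
    for z in observations:
        x[:, 0] += x[:, 1] + rng.normal(0, q, n)   # predict (constant velocity)
        x[:, 1] += rng.normal(0, q, n)
        w = np.exp(-0.5 * ((z - x[:, 0]) / r) ** 2)  # Gaussian measurement weights
        w /= w.sum()
        x = x[rng.choice(n, n, p=w)]               # resample
        estimates.append(x[:, 0].mean())
    return estimates

true_pos = np.arange(0, 10, 0.5)                   # target moving at 0.5 per step
obs = true_pos + rng.normal(0, 0.5, len(true_pos))
print(particle_filter(obs)[-1])                    # close to the true final position 9.5
```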

Journal ArticleDOI
TL;DR: This paper presents a detailed analysis of the propagation of errors to the output in the hardware implementation of SHA-512, and proposes an error detection scheme based on parity codes and hardware redundancy.
Abstract: The Secure Hash Algorithm SHA-512 is a dedicated cryptographic hash function widely considered for use in data integrity assurance and data origin authentication security services. Reconfigurable hardware devices such as Field Programmable Gate Arrays (FPGAs) offer a flexible and easily upgradeable platform for implementation of cryptographic hash functions. Owing to the iterative structure of SHA-512, even a single transient error at any stage of the hash value computation will result in a large number of errors in the final hash value. Hence, detection of errors becomes a key design issue. In this paper, we present a detailed analysis of the propagation of errors to the output in the hardware implementation of SHA-512. Included in this analysis are single, transient as well as permanent faults that may appear at any stage of the hash value computation. We then propose an error detection scheme based on parity codes and hardware redundancy. We report performance metrics such as area, memory, and throughput for the implementation of SHA-512 with error detection capability on an ALTERA FPGA. We achieved 100% fault coverage in the case of single faults, with an area overhead of 21% and a throughput reduction of 11.6% when the error detection circuit is included.
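
A toy example of the parity-code idea (not the paper's FPGA design): for the XOR/rotate operations in a SHA-512 round, the parity of the result can be predicted directly from the input word, so a single-bit transient fault in the computed value flips the parity and is caught by the check.

```python
# Toy sketch of parity prediction for the XOR/rotate part of a SHA-512 round
# (illustration only, not the paper's implementation). Rotation preserves the
# parity of a 64-bit word and parity(a ^ b) = parity(a) ^ parity(b), so the
# parity of Sigma0(x) equals the parity of x; a single-bit fault flips it.
MASK = (1 << 64) - 1

def rotr(x, n):
    return ((x >> n) | (x << (64 - n))) & MASK

def parity(x):
    return bin(x).count("1") & 1

def big_sigma0(x):                      # SHA-512 Sigma0: ROTR 28 ^ ROTR 34 ^ ROTR 39
    return rotr(x, 28) ^ rotr(x, 34) ^ rotr(x, 39)

x = 0x6A09E667F3BCC908                  # a SHA-512 initial hash word
result = big_sigma0(x)
predicted = parity(x) ^ parity(x) ^ parity(x)    # parity of three rotations of x
assert parity(result) == predicted               # fault-free: parities agree

faulty = result ^ (1 << 17)             # inject a single-bit transient fault
print("fault detected:", parity(faulty) != predicted)   # True
```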

Journal ArticleDOI
TL;DR: A key establishment protocol closely based on the Yahalom protocol is presented and proven secure in the Bellare and Rogaway model; a simplified version of the Yahalom protocol was only recently proven secure by Backes and Pfitzmann in 2006.
Abstract: Although the Yahalom protocol, proposed by Burrows, Abadi, and Needham in 1990, is one of the most prominent key establishment protocols analysed by researchers from the computer security community (using automated proof tools), a simplified version of the protocol is only recently proven secure by Backes and Pfitzmann [(2006) On the Cryptographic Key Secrecy of the Strengthened Yahalom Protocol. Proc. IFIP SEC 2006] in their cryptographic library framework. We present a protocol for key establishment that is closely based on the Yahalom protocol. We then present a security proof in the Bellare, M. and Rogaway, P. [(1993a). Entity Authentication and Key Distribution. Proc. of CRYPTO 1993, Santa Barbara, CA, August 22–26, LNCS, Vol. 773, pp. 110–125. Springer-Verlag, Berlin] model and the random oracle model. We also observe that no partnering mechanism is specified within the Yahalom protocol. We then present a brief discussion on the role and the possible construct of session identifiers (SIDs) as a form of partnering mechanism, which allows the right session key to be identified in concurrent protocol executions. We then recommend that SIDs should be included within protocol specification rather than consider SIDs as artefacts in protocol proof.
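
One common way to realise such a session identifier, shown purely as an assumption rather than a construct mandated by the paper, is to derive the SID from the nonces exchanged in the protocol run, so that exactly the partners of the same run compute the same value.

```python
# Possible session-identifier construct (an assumption for illustration, not
# specified by the paper): hash the concatenation of the nonces exchanged in
# the run, so both partners of the same run derive the same SID.
import hashlib

def session_id(*nonces):
    return hashlib.sha256(b"||".join(nonces)).hexdigest()

# hypothetical nonces contributed by the two principals and the server
print(session_id(b"nonce_A", b"nonce_B", b"nonce_S")[:16])
```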

Journal ArticleDOI
TL;DR: The book Web Data Management Practices: Emerging Techniques and Technologies is recommended as a useful source of solutions.
Abstract: Solutions to a problem can come from many sources, and books remain among the most useful references. Web Data Management Practices: Emerging Techniques and Technologies is one book we recommend reading for further solutions to the problems it addresses.

Journal ArticleDOI
TL;DR: This paper investigates an alternative less-studied approach called user-oriented feature selection by exploiting the domain-specific semantic information to construct a subset of features that is most consistent with the user requirements.
Abstract: The effectiveness of any machine learning algorithm depends, to a large extent, on the selection of a good subset of features or attributes. Most existing methods use the syntactic or statistical information of the data, relying on a heuristic criterion to select features. In this paper, we investigate an alternative less-studied approach called user-oriented feature selection by exploiting the domain-specific semantic information. Given any two features, a user is able to express which one is more important based on the semantic consideration. Such user requirements are formally described by a preference relation on the set of features. Algorithms are proposed to construct a subset of features that is most consistent with the user requirements. Their properties and computational complexity are analysed. User-oriented feature selection offers a new view for machine learning and its potentials need to be further investigated and explored.
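
A minimal sketch of the idea (a simplification for illustration, not the paper's algorithms): the user's pairwise importance judgements form a preference relation, and a feature subset consistent with it can be obtained by taking the top-k features of a topological order of that relation. Feature names and preferences below are hypothetical.

```python
# User-oriented feature selection sketch (illustration only): pairwise
# preferences "a is more important than b" define a relation; any topological
# order of it is consistent with the user's requirements.
from graphlib import TopologicalSorter   # Python 3.9+

def select_features(features, preferences, k):
    """preferences: iterable of (more_important, less_important) pairs."""
    predecessors = {f: set() for f in features}
    for better, worse in preferences:
        predecessors[worse].add(better)  # 'worse' must come after 'better'
    # static_order raises CycleError if the stated preferences are inconsistent
    order = list(TopologicalSorter(predecessors).static_order())
    return order[:k]

features = ["age", "income", "zip_code", "num_purchases"]      # hypothetical
prefs = [("income", "zip_code"), ("num_purchases", "age"), ("income", "age")]
print(select_features(features, prefs, 2))
```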

Journal ArticleDOI
TL;DR: The design of OMEGA is explained and its advantages and application to code generation are discussed and the open questions are explained and extensively discuss related work.
Abstract: This article describes the ontological metamodel extension for generative architectures (OMEGA), an extension to the Meta Object Facility metamodel which allows non-linear metamodeling. The article begins with a discussion of non-linear metamodeling in general and briefly describes several options available to the designers of a metamodeling framework supporting orthogonal metamodeling. In the light of these options, we then explain the design of OMEGA and discuss its advantages and application to code generation. We also explain the open questions and extensively discuss related work.

Journal ArticleDOI
TL;DR: An evolutionary IMS AKA (E-IMS AKA) is proposed to replace the IMS AKA in IMS authentication; it maintains the efficiency of the one-pass authentication method, which by itself has some security problems and loses the mutual AKA capabilities.
Abstract: In a Universal Mobile Telecommunications System IP Multimedia Subsystem (IMS), even when an IMS subscriber has passed the packet-switch domain authentication, the IMS subscriber's identity must be confirmed by the IMS authentication again before accessing IMS services. Both the packet-switch domain and the IMS authentications are necessary for the IMS subscriber. This is referred to as a two-pass authentication. However, the packet-switch domain authentication is carried out by the authentication and key agreement (AKA) of the 3rd Generation Partnership Project (3GPP), called 3GPP AKA; the IMS authentication is carried out by IMS AKA. Since IMS AKA is based on 3GPP AKA, almost all of the operations are the same. It is inefficient because almost all the involved steps in the two-pass authentication are duplicated. Hence, the one-pass authentication was proposed to increase the efficiency of the IMS authentication. Unfortunately, the one-pass authentication has some security problems and loses the mutual AKA capabilities. Therefore, in this paper, an evolutionary IMS AKA (E-IMS AKA) for IMS authentication is proposed to replace the IMS AKA. E-IMS AKA not only adheres to the security requirements of IMS AKA, but also maintains the efficiency of the one-pass authentication method.

Journal ArticleDOI
TL;DR: This paper reviews some of the techniques under development within the Hyperion sub-projects and the results achieved to date.
Abstract: The future digital battlespace will be a fast-paced and frenetic environment that stresses information communication technology systems to the limit. The challenges are most acute in the tactical and operational domains where bandwidth is severely limited, security of information is paramount, the network is under physical and cyber attack and administrative support is minimal. Hyperion is a cluster of research projects designed to provide an automated and adaptive information management capability embedded in defence networks. The overall system architecture is designed to improve the situational awareness of field commanders by providing the ability to fuse and compose information services in real time. The key technologies adopted to enable this include: autonomous software agents, self-organizing middleware, a smart data filtering system and a 3-D battlespace simulation environment. This paper reviews some of the specific techniques under development within the Hyperion sub-projects and the results achieved to date.

Journal ArticleDOI
TL;DR: The analysis of the M/G/1 queueing model of the vacationing server model (VSM) is more accurate and yet simpler than a previous analysis, and it also takes into account the effect of disk zoning explicitly.
Abstract: RAID5 tolerates single disk failures by exclusive-ORing (XORing) the blocks corresponding to a requested block on the failed disk to reconstruct it. This results in increased loads on surviving disks and degraded disk response times with respect to normal mode operation. Provided a spare disk is available, a rebuild process systematically reads successive disk tracks, XORs them to recreate lost tracks and writes them onto a spare disk, thus returning the system to its original state. Rebuild time is important since RAID5 disk arrays with a single disk failure are susceptible to data loss if a second disk fails. According to the vacationing server model (VSM), rebuild read requests on surviving disks are given a lower priority than external user requests, so as to have less impact on their response time. Given that disk loads are balanced due to striping, rebuild time can be approximated by the time to read the contents of any one of the surviving disks. The analysis of the M/G/1 queueing model of VSM, given in this article, is more accurate and yet simpler than a previous analysis, but it also takes into account the effect of disk zoning explicitly. We also present a heuristic method to estimate rebuild time, which can be combined with the new analysis. The ability to quickly and accurately estimate rebuild time is useful in computing the reliability of RAID5 systems, especially during design tradeoff studies. The accuracy of the various analyses to estimate rebuild time are checked against detailed simulation results.
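
As a back-of-envelope companion to the analysis (not the article's M/G/1 model, and ignoring zoning): under VSM, rebuild reads only consume the idle fraction of a surviving disk, so a first-order estimate of rebuild time is the full-disk read time divided by (1 - rho), where rho is the disk utilisation due to user requests in degraded mode. The numbers used below are hypothetical.

```python
# First-order rebuild-time estimate under VSM (a rough approximation, not the
# article's analysis): rebuild reads use only the idle fraction of the disk.
def rebuild_time_hours(capacity_gb, transfer_mb_s, rho):
    full_read_s = capacity_gb * 1024 / transfer_mb_s   # sequential end-to-end read
    return full_read_s / (1.0 - rho) / 3600

# hypothetical 146 GB disk, 60 MB/s average transfer rate, 45% utilisation
print(round(rebuild_time_hours(146, 60, 0.45), 2), "hours")
```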

Journal ArticleDOI
TL;DR: This paper presents a systematic model, two-phase optimization algorithms (TPOA), for Mastermind that is not only able to efficiently obtain approximate results but also effectively discover results that are getting closer to the optima.
Abstract: This paper presents a systematic model, two-phase optimization algorithms (TPOA), for Mastermind. TPOA is not only able to efficiently obtain approximate results but also effectively discover results that are getting closer to the optima. This systematic approach could be regarded as a general improver for heuristics. That is, given a constructive heuristic, TPOA has a higher chance to obtain results better than those obtained by the heuristic. Moreover, it sometimes can achieve optimal results that are difficult to find by the given heuristic. Experimental results show that (i) TPOA with parameter setting (k, d) = (1, 1) is able to obtain the optimal result for the game in the worst case, where k is the branching factor and d is the exploration depth of the search space. (ii) Using a simple heuristic, TPOA achieves the optimal result for the game in the expected case with (k, d) = (180, 2). This is the first approximate approach to achieve the optimal result in the expected case.
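
Any search over Mastermind guesses, including TPOA, relies on the feedback function that scores a guess against the secret; a small illustration of that building block (not the paper's algorithm) is given below.

```python
# Mastermind feedback computation -- the scoring primitive a guess search
# relies on; shown as a generic illustration, not the paper's code.
# Returns (black pegs, white pegs).
from collections import Counter

def feedback(secret, guess):
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

print(feedback("1234", "1325"))   # (1, 2): '1' exact; '2' and '3' misplaced
```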

Journal ArticleDOI
TL;DR: A real-time information delivery system based on an event's context and content that focuses on the delivery of advertisements relevant to notified events as a specific case of delivery of information relevant to the events is proposed.
Abstract: In mobile environments, transmitting information relevant to an event along with notification of the event has been proven to be an effective means of providing revenue enhancing services. For example, a relevant advertisement can be displayed just before event notification; for instance, a product promotion by Beckham can be shown just before the notification of a goal scored by him. Challenges in achieving real-time search and delivery of information relevant to events as they occur include: (i) predicting the next event so that the appropriate information can be kept ready; (ii) finding information relevant to the context and content of the event; (iii) searching for and bringing the potentially needed information closer to the user location; and (iv) disseminating and displaying relevant information just before the actual event is notified to the user. In this paper, we propose a real-time information delivery system based on an event's context and content. The key features of our system are: (i) representing domains with event-based scenarios using statecharts; (ii) using a novel combination of history information and state transitions to predict events; and (iii) real-time delivery of information relevant to an event by prefetching and caching information based on the events predicted to occur in the near future. To illustrate our approach, we focus on the delivery of advertisements relevant to notified events as a specific case of delivery of information relevant to the events. Using an experimental setup, we measured the effectiveness of our approach as compared to a context-unaware approach to event prediction. Experiments demonstrate that our approach has a superior performance, resulting in: (i) better prediction accuracy; (ii) the delivery of the most relevant information most of the time; and (iii) an effective cache management in mobile devices.
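
The prediction-and-prefetch idea can be sketched in a few lines (a simplified stand-in for the statechart-plus-history model in the paper): a first-order transition-frequency model predicts the next event from the current one, and the advertisement mapped to the predicted event is prefetched into a local cache before the notification arrives. Event names and the ad mapping are hypothetical.

```python
# Simplified next-event prediction and ad prefetching (not the authors'
# statechart-based system; events and the ad mapping are hypothetical).
from collections import defaultdict, Counter

class EventPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, prev_event, next_event):
        self.transitions[prev_event][next_event] += 1

    def predict(self, current_event):
        counts = self.transitions[current_event]
        return counts.most_common(1)[0][0] if counts else None

ads = {"goal": "sports_drink_ad", "red_card": "insurance_ad"}
history = [("kickoff", "shot"), ("shot", "goal"), ("kickoff", "shot")]

predictor = EventPredictor()
for prev, nxt in history:
    predictor.observe(prev, nxt)

cache = {}
predicted = predictor.predict("shot")        # -> "goal"
if predicted in ads:
    cache[predicted] = ads[predicted]        # prefetch before the notification
print(predicted, cache)
```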


Journal ArticleDOI
TL;DR: The Science of Programming has made comparable advances by the discovery of faster and more general algorithms, and by the development of a wide range of specific application programs, spreading previously unimaginable benefits into almost all aspects of human life.
Abstract: Introduction. Computer Science owes its existence to the invention of the stored-program digital computer. It derives continuously renewed inspiration from the constant stream of new computer applications, which are still being opened up by half a century of continuous reduction in the cost of computer chips, and by spectacular increases in their reliability, performance and capacity. The Science of Programming has made comparable advances by the discovery of faster and more general algorithms, and by the development of a wide range of specific application programs, spreading previously unimaginable benefits into almost all aspects of human life.