
Showing papers in "ACM Computing Surveys in 2011"


Journal ArticleDOI
TL;DR: This article provides a detailed overview of various state-of-the-art research papers on human activity recognition, discussing both the methodologies developed for simple human actions and those for high-level activities.
Abstract: Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas.

2,084 citations


Journal ArticleDOI
TL;DR: The survey outlines fundamental results about multiprocessor real-time scheduling that hold independent of the scheduling algorithms employed, and provides a taxonomy of the different scheduling methods, and considers the various performance metrics that can be used for comparison purposes.
Abstract: This survey covers hard real-time scheduling algorithms and schedulability analysis techniques for homogeneous multiprocessor systems. It reviews the key results in this field from its origins in the late 1960s to the latest research published in late 2009. The survey outlines fundamental results about multiprocessor real-time scheduling that hold independent of the scheduling algorithms employed. It provides a taxonomy of the different scheduling methods, and considers the various performance metrics that can be used for comparison purposes. A detailed review is provided covering partitioned, global, and hybrid scheduling algorithms, approaches to resource sharing, and the latest results from empirical investigations. The survey identifies open issues, key research challenges, and likely productive research directions.
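To make the partitioned approach concrete, the sketch below (a rough illustration, not taken from the survey; the task set and parameters are invented) assigns periodic tasks to processors first-fit-decreasing and admits a task onto a processor only while the classic Liu-and-Layland rate-monotonic utilization bound still holds there.

def rm_bound(n):
    """Liu & Layland utilization bound for n tasks under rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def partition_first_fit_decreasing(tasks, num_cpus):
    """tasks: list of (wcet, period) pairs; returns per-CPU task lists, or None if the heuristic fails."""
    cpus = [[] for _ in range(num_cpus)]
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        for cpu in cpus:
            if sum(c / p for c, p in cpu) + wcet / period <= rm_bound(len(cpu) + 1):
                cpu.append((wcet, period))   # admitted: the per-processor test still holds
                break
        else:
            return None                      # no processor could take this task
    return cpus

if __name__ == "__main__":
    tasks = [(1, 4), (2, 5), (2, 10), (3, 8), (1, 20)]   # hypothetical (WCET, period) pairs
    print(partition_first_fit_decreasing(tasks, num_cpus=2))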

910 citations


Journal ArticleDOI
TL;DR: Previous work on CT is reviewed, the evolution of CT is highlighted, important issues, methods, and applications of CT are identified, and the growing trend of CT research is presented.
Abstract: Combinatorial Testing (CT) can detect failures triggered by interactions of parameters in the Software Under Test (SUT) with a covering array test suite generated by some sampling mechanisms. It has been an active field of research in the last twenty years. This article reviews previous work on CT, highlights the evolution of CT, and identifies important issues, methods, and applications of CT, with the goal of supporting and directing future practice and research in this area. First, we present the basic concepts and notations of CT. Second, we classify the research on CT into the following categories: modeling for CT, test suite generation, constraints, failure diagnosis, prioritization, metric, evaluation, testing procedure and the application of CT. For each of the categories, we survey the motivation, key issues, solutions, and the current state of research. Then, we review the contribution from different research groups, and present the growing trend of CT research. Finally, we recommend directions for future CT research, including: (1) modeling for CT, (2) improving existing test suite generation algorithms, (3) improving the analysis of testing results, (4) exploring the application of CT to different levels of testing and additional types of systems, (5) conducting more empirical studies to fully understand limitations and strengths of CT, and (6) combining CT with other testing techniques.
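As a concrete illustration of the covering-array idea (a toy sketch with invented parameters, not an algorithm from the article), the snippet below checks whether a candidate test suite covers every value pair of every pair of parameters, which is what a pairwise (strength-2) covering array guarantees.

from itertools import combinations, product

def uncovered_pairs(parameters, suite):
    """parameters: dict name -> list of values; suite: list of dicts name -> value."""
    missing = []
    for p1, p2 in combinations(sorted(parameters), 2):
        for v1, v2 in product(parameters[p1], parameters[p2]):
            if not any(t[p1] == v1 and t[p2] == v2 for t in suite):
                missing.append(((p1, v1), (p2, v2)))
    return missing

if __name__ == "__main__":
    params = {"os": ["linux", "windows"], "browser": ["ff", "chrome"], "db": ["pg", "mysql"]}
    # Four tests instead of the eight exhaustive combinations; check what pairs are missed.
    suite = [
        {"os": "linux",   "browser": "ff",     "db": "pg"},
        {"os": "linux",   "browser": "chrome", "db": "mysql"},
        {"os": "windows", "browser": "ff",     "db": "mysql"},
        {"os": "windows", "browser": "chrome", "db": "pg"},
    ]
    print(uncovered_pairs(params, suite))   # [] means the suite is pairwise-covering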

615 citations


Journal ArticleDOI
TL;DR: This article summarizes and classifies research on end-user software engineering activities, defining the area of End-User Software Engineering (EUSE) and related terminology, and addresses several crosscutting issues in the design of EUSE tools.
Abstract: Most programs today are written not by professional software developers, but by people with expertise in other domains working towards goals for which they need computational support. For example, a teacher might write a grading spreadsheet to save time grading, or an interaction designer might use an interface builder to test some user interface design ideas. Although these end-user programmers may not have the same goals as professional developers, they do face many of the same software engineering challenges, including understanding their requirements, as well as making decisions about design, reuse, integration, testing, and debugging. This article summarizes and classifies research on these activities, defining the area of End-User Software Engineering (EUSE) and related terminology. The article then discusses empirical research about end-user software engineering activities and the technologies designed to support them. The article also addresses several crosscutting issues in the design of EUSE tools, including the roles of risk, reward, domain complexity, and self-efficacy, and the potential of educating users about software engineering principles.

562 citations


Journal ArticleDOI
TL;DR: This article surveys research progress made to address various coverage problems in sensor networks, states the basic coverage problems in each category, and reviews representative solution approaches in the literature.
Abstract: Sensor networks, which consist of sensor nodes each capable of sensing environment and transmitting data, have lots of applications in battlefield surveillance, environmental monitoring, industrial diagnostics, etc. Coverage which is one of the most important performance metrics for sensor networks reflects how well a sensor field is monitored. Individual sensor coverage models are dependent on the sensing functions of different types of sensors, while network-wide sensing coverage is a collective performance measure for geographically distributed sensor nodes. This article surveys research progress made to address various coverage problems in sensor networks. We first provide discussions on sensor coverage models and design issues. The coverage problems in sensor networks can be classified into three categories according to the subject to be covered. We state the basic coverage problems in each category, and review representative solution approaches in the literature. We also provide comments and discussions on some extensions and variants of these basic coverage problems.
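For readers new to coverage models, the following toy sketch (hypothetical sensor positions, Boolean disk sensing model, grid sampling) checks whether every sampled point of a field is covered by at least k sensors; real analyses use more refined models, as the survey explains.

import math

def coverage_level(point, sensors, sensing_range):
    """Number of sensors whose sensing disk contains the point."""
    return sum(math.dist(point, s) <= sensing_range for s in sensors)

def field_is_k_covered(width, height, sensors, sensing_range, k=1, step=1.0):
    """Sample the field on a grid and report whether every sample point is k-covered."""
    x = 0.0
    while x <= width:
        y = 0.0
        while y <= height:
            if coverage_level((x, y), sensors, sensing_range) < k:
                return False, (x, y)      # first insufficiently covered sample point
            y += step
        x += step
    return True, None

if __name__ == "__main__":
    sensors = [(2, 2), (2, 7), (7, 2), (7, 7), (5, 5)]   # hypothetical deployment
    print(field_is_k_covered(9, 9, sensors, sensing_range=4.0, k=1))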

507 citations


Journal ArticleDOI
TL;DR: This article presents a taxonomy of WSN programming approaches that captures the fundamental differences among existing solutions, and uses the taxonomy to provide an exhaustive classification of existing approaches.
Abstract: Wireless sensor networks (WSNs) are attracting great interest in a number of application domains concerned with monitoring and control of physical phenomena, as they enable dense and untethered deployments at low cost and with unprecedented flexibility. However, application development is still one of the main hurdles to a wide adoption of WSN technology. In current real-world WSN deployments, programming is typically carried out very close to the operating system, therefore requiring the programmer to focus on low-level system issues. This not only distracts the programmer from the application logic, but also requires a technical background rarely found among application domain experts. The need for appropriate high-level programming abstractions, capable of simplifying the programming chore without sacrificing efficiency, has long been recognized, and several solutions have hitherto been proposed, which differ along many dimensions. In this article, we survey the state of the art in programming approaches for WSNs. We begin by presenting a taxonomy of WSN applications, to identify the fundamental requirements programming platforms must deal with. Then, we introduce a taxonomy of WSN programming approaches that captures the fundamental differences among existing solutions, and constitutes the core contribution of this article. Our presentation style relies on concrete examples and code snippets taken from programming platforms representative of the taxonomy dimensions being discussed. We use the taxonomy to provide an exhaustive classification of existing approaches. Moreover, we also map existing approaches back to the application requirements, therefore providing not only a complete view of the state of the art, but also useful insights for selecting the programming abstraction most appropriate to the application at hand.
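The abstraction gap the authors describe can be hinted at with a toy sketch: the simulated Node class and query function below are invented for illustration and correspond to no real WSN platform, but they contrast an explicit node-by-node style with a single declarative, network-wide query.

import random

class Node:
    """Low-level view: each node only knows how to sample its own sensor."""
    def __init__(self, node_id):
        self.node_id = node_id
    def read_temperature(self):
        return 20.0 + random.uniform(-2.0, 2.0)    # simulated reading

def low_level_average(nodes):
    # Node-centric style: application code itself iterates over nodes,
    # collects raw readings, and aggregates them.
    readings = [n.read_temperature() for n in nodes]
    return sum(readings) / len(readings)

def query(nodes, select):
    # Higher-level style: one declarative entry point; collection and aggregation
    # are hidden behind the abstraction (trivially simulated here).
    if select == "avg(temperature)":
        return low_level_average(nodes)
    raise ValueError("unsupported query")

if __name__ == "__main__":
    network = [Node(i) for i in range(10)]
    print("explicit loop:", low_level_average(network))
    print("declarative  :", query(network, "avg(temperature)"))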

402 citations


Journal ArticleDOI
TL;DR: This article surveys ambient intelligence (AmI), including its applications, some of the technologies it uses (for example planning, learning, event-condition-action rules, temporal reasoning, and agent-oriented technologies), and its social and ethical implications.
Abstract: In this article we survey ambient intelligence (AmI), including its applications, some of the technologies it uses, and its social and ethical implications. The applications include AmI at home, care of the elderly, healthcare, commerce, and business, recommender systems, museums and tourist scenarios, and group decision making. Among technologies, we focus on ambient data management and artificial intelligence; for example planning, learning, event-condition-action rules, temporal reasoning, and agent-oriented technologies. The survey is not intended to be exhaustive, but to convey a broad range of applications, technologies, and technical, social, and ethical challenges.

373 citations


Journal ArticleDOI
TL;DR: The emerging field of digital image forensics is introduced, covering the main topic areas of source camera identification, forgery detection, and steganalysis, together with a critical analysis of the state of the art and recommendations for the direction of future research.
Abstract: Digital images are everywhere—from our cell phones to the pages of our online news sites. How we choose to use digital image processing raises a surprising host of legal and ethical questions that we must address. What are the ramifications of hiding data within an innocent image? Is this an intentional security practice when used legitimately, or intentional deception? Is tampering with an image appropriate in cases where the image might affect public behavior? Does an image represent a crime, or is it simply a representation of a scene that has never existed? Before action can even be taken on the basis of a questionable image, we must detect something about the image itself. Investigators from a diverse set of fields require the best possible tools to tackle the challenges presented by the malicious use of today's digital image processing techniques. In this survey, we introduce the emerging field of digital image forensics, including the main topic areas of source camera identification, forgery detection, and steganalysis. In source camera identification, we seek to identify the particular model of a camera, or the exact camera, that produced an image. Forgery detection's goal is to establish the authenticity of an image, or to expose any potential tampering the image might have undergone. With steganalysis, the detection of hidden data within an image is performed, with a possible attempt to recover any detected data. Each of these components of digital image forensics is described in detail, along with a critical analysis of the state of the art, and recommendations for the direction of future research.
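As a minimal illustration of the data hiding that steganalysis targets (a toy sketch on a flat list of pixel values, not a forensic tool), the snippet below embeds a message in the least significant bits of a cover signal and reads it back.

def embed_lsb(pixels, message):
    """Overwrite the least significant bit of successive pixel values with message bits."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, num_bytes):
    """Read the message back out of the least significant bits."""
    out = bytearray()
    for b in range(num_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    cover = [120, 121, 119, 130, 128, 127, 125, 126] * 16   # hypothetical pixel values
    secret = b"hi"
    stego = embed_lsb(cover, secret)
    print(extract_lsb(stego, len(secret)))                   # b'hi'
    # A classic steganalysis cue: LSB embedding tends to equalize the frequencies of
    # adjacent even/odd pixel values, which statistical tests can flag.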

266 citations


Journal ArticleDOI
TL;DR: An overview of techniques reported in the literature for making DHT-based systems resistant to the three most important attacks that can be launched by malicious nodes participating in the DHT: the Sybil attack, the Eclipse attack, and the routing and storage attacks is presented.
Abstract: Peer-to-peer networks based on distributed hash tables (DHTs) have received considerable attention ever since their introduction in 2001. Unfortunately, DHT-based systems have been shown to be notoriously difficult to protect against security attacks. Various reports have been published that discuss or classify general security issues, but so far a comprehensive survey describing the various proposed defenses has been lacking. In this article, we present an overview of techniques reported in the literature for making DHT-based systems resistant to the three most important attacks that can be launched by malicious nodes participating in the DHT: (1) the Sybil attack, (2) the Eclipse attack, and (3) routing and storage attacks. We review the advantages and disadvantages of the proposed solutions and, in doing so, confirm how difficult it is to secure DHT-based systems in an adversarial environment.
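A toy sketch of the key-to-node mapping used by DHTs helps explain why these attacks matter: the node names and the 32-bit ring below are arbitrary, but they show that whoever controls identifiers near a key becomes responsible for it, which is exactly what a Sybil attacker exploits by joining under many identities.

import hashlib, bisect

RING_BITS = 32

def ring_id(name):
    """Hash a name onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** RING_BITS)

class TinyDHT:
    def __init__(self, node_names):
        self.nodes = sorted((ring_id(n), n) for n in node_names)
    def lookup(self, key):
        """Return the node responsible for the key: first node clockwise from its ID."""
        ids = [i for i, _ in self.nodes]
        pos = bisect.bisect_left(ids, ring_id(key)) % len(self.nodes)
        return self.nodes[pos][1]

if __name__ == "__main__":
    dht = TinyDHT([f"node-{i}" for i in range(8)])
    print(dht.lookup("some-file.txt"))
    # A Sybil attacker simply joins under many names, grabbing a large share of the ring:
    dht_after_attack = TinyDHT([f"node-{i}" for i in range(8)] +
                               [f"sybil-{i}" for i in range(100)])
    print(dht_after_attack.lookup("some-file.txt"))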

228 citations


Journal ArticleDOI
TL;DR: A survey of the research literature that has addressed this topic in the period 1996–2006, with a particular focus on empirical analyses, is provided, along with a new classification framework that represents an abstracted and synthesized view of the types of factors that have been asserted as influencing project outcomes.
Abstract: Determining the factors that have an influence on software systems development and deployment project outcomes has been the focus of extensive and ongoing research for more than 30 years. We provide here a survey of the research literature that has addressed this topic in the period 1996–2006, with a particular focus on empirical analyses. On the basis of this survey we present a new classification framework that represents an abstracted and synthesized view of the types of factors that have been asserted as influencing project outcomes.

214 citations


Journal ArticleDOI
TL;DR: This article surveys the literature on auctions from a computer science perspective, primarily from the viewpoint of computer scientists interested in learning about auction theory, and provides pointers into the economics literature for those who want a deeper technical understanding.
Abstract: There is a veritable menagerie of auctions—single-dimensional, multi-dimensional, single-sided, double-sided, first-price, second-price, English, Dutch, Japanese, sealed-bid—and these have been extensively discussed and analyzed in the economics literature. The main purpose of this article is to survey this literature from a computer science perspective, primarily from the viewpoint of computer scientists who are interested in learning about auction theory, and to provide pointers into the economics literature for those who want a deeper technical understanding. In addition, since auctions are an increasingly important topic in computer science, we also look at work on auctions from the computer science literature. Overall, our aim is to identify what both these bodies of work tell us about creating electronic auctions.
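One member of the menagerie, sketched with invented bids purely for illustration: a sealed-bid second-price (Vickrey) auction, in which the highest bidder wins but pays the second-highest bid.

def vickrey_auction(bids):
    """bids: dict bidder -> bid amount; returns (winner, price paid)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]          # the second-highest bid sets the price
    return winner, price

if __name__ == "__main__":
    print(vickrey_auction({"alice": 120, "bob": 95, "carol": 110}))   # ('alice', 110)
    # Under this rule, bidding one's true valuation is a weakly dominant strategy,
    # one reason second-price auctions receive so much attention.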

Journal ArticleDOI
TL;DR: A systematic survey of various analysis techniques that use discrete wavelet transformation (DWT) in time series data mining is provided, and the benefits of this approach, as demonstrated by previous studies on diverse application domains including image classification, multimedia retrieval, and computer network anomaly detection, are outlined.
Abstract: Time series are recorded values of an interesting phenomenon such as stock prices, household incomes, or patient heart rates over a period of time. Time series data mining focuses on discovering interesting patterns in such data. This article introduces a wavelet-based time series data analysis to interested readers. It provides a systematic survey of various analysis techniques that use discrete wavelet transformation (DWT) in time series data mining, and outlines the benefits of this approach demonstrated by previous studies performed on diverse application domains, including image classification, multimedia retrieval, and computer network anomaly detection.
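A minimal sketch of the decomposition these techniques build on (a single-level Haar DWT on an invented series, not code from any surveyed study): averages capture the coarse trend, differences capture local fluctuations, and keeping only the largest coefficients gives a compact representation of the series.

import math

def haar_dwt_step(series):
    """One Haar DWT level; series length must be even."""
    approx = [(series[i] + series[i + 1]) / math.sqrt(2) for i in range(0, len(series), 2)]
    detail = [(series[i] - series[i + 1]) / math.sqrt(2) for i in range(0, len(series), 2)]
    return approx, detail

def haar_idwt_step(approx, detail):
    """Inverse of one Haar DWT level."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

if __name__ == "__main__":
    series = [2.0, 2.0, 4.0, 4.0, 8.0, 8.0, 1.0, 9.0]
    approx, detail = haar_dwt_step(series)
    print("approximation:", approx)     # coarse trend
    print("detail       :", detail)     # local changes; zero where the series is flat
    print("reconstructed:", haar_idwt_step(approx, detail))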

Journal ArticleDOI
TL;DR: A comparison of existing decision-making techniques, aimed at guiding architects in their selection, is provided; the results show that there is no “best” decision-making technique, although some techniques are more susceptible to specific difficulties than others.
Abstract: The architecture of a software-intensive system can be defined as the set of relevant design decisions that affect the qualities of the overall system functionality; therefore, architectural decisions are eventually crucial to the success of a software project. The software engineering literature describes several techniques to choose among architectural alternatives, but it gives no clear guidance on which technique is more suitable than another, and in which circumstances. As such, there is no systematic way for software engineers to choose among decision-making techniques for resolving tradeoffs in architecture design. In this article, we provide a comparison of existing decision-making techniques, aimed to guide architects in their selection. The results show that there is no “best” decision-making technique; however, some techniques are more susceptible to specific difficulties. Hence architects should choose a decision-making technique based on the difficulties that they wish to avoid. This article represents a first attempt to reason on meta-decision-making, that is, the issue of deciding how to decide.

Journal ArticleDOI
TL;DR: A meta-study of the empirical literature on trust in e-commerce systems is conducted, and a qualitative model incorporating the various factors that have been empirically found to influence consumer trust in E-commerce is proposed.
Abstract: Trust is at once an elusive, imprecise concept, and a critical attribute that must be engineered into e-commerce systems. Trust conveys a vast number of meanings, and is deeply dependent upon context. The literature on engineering trust into e-commerce systems reflects these ambiguous meanings; there are a large number of articles, but there is as yet no clear theoretical framework for the investigation of trust in e-commerce. E-commerce, however, is predicated on trust; indeed, any e-commerce vendor that fails to establish a trusting relationship with their customers is doomed. There is a very clear need for specific guidance on e-commerce system attributes and business operations that will effectively promote consumer trust. To address this need, we have conducted a meta-study of the empirical literature on trust in e-commerce systems. This area of research is still immature, and hence our meta-analysis is qualitative rather than quantitative. We identify the major theoretical frameworks that have been proposed in the literature, and propose a qualitative model incorporating the various factors that have been empirically found to influence consumer trust in e-commerce. As this model is too complex to be of practical use, we explore subsets of this model that have the strongest support in the literature, and discuss the implications of this model for Web site design. Finally, we outline key conceptual and methodological needs for future work on this topic.

Journal ArticleDOI
TL;DR: This survey reviews the key methodologies introduced in the transliteration literature, categorizes them based on the resources and algorithms used, and compares their effectiveness.
Abstract: Machine transliteration is the process of automatically transforming the script of a word from a source language to a target language, while preserving pronunciation. The development of algorithms specifically for machine transliteration began over a decade ago based on the phonetics of source and target languages, followed by approaches using statistical and language-specific methods. In this survey, we review the key methodologies introduced in the transliteration literature. The approaches are categorized based on the resources and algorithms used, and the effectiveness is compared.

Journal ArticleDOI
Yuhui Deng1
TL;DR: The architectural design of disk drives has reached a turning point which should allow their performance to advance further, while still maintaining high reliability and energy efficiency, according to the authors.
Abstract: Disk drives have experienced dramatic development to meet performance requirements since the IBM 1301 disk drive was announced in 1961. However, the performance gap between memory and disk drives has widened to 6 orders of magnitude and continues to widen by about 50% per year. Furthermore, energy efficiency has become one of the most important challenges in designing disk drive storage systems. The architectural design of disk drives has reached a turning point which should allow their performance to advance further, while still maintaining high reliability and energy efficiency. This article explains how disk drives have evolved over five decades to meet challenging customer demands. First of all, it briefly introduces the development of disk drives, and deconstructs disk performance and power consumption. Secondly, it describes the design constraints and challenges that traditional disk drives are facing. Thirdly, it presents some innovative disk drive architectures discussed in the community. Fourthly, it introduces some new storage media types and the impacts they have on the architecture of the traditional disk drives. Finally, it discusses two important evolutions of disk drives: hybrid disk and solid state disk. The article highlights the challenges and opportunities facing these storage devices, and explores how we can expect them to affect storage systems.
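A back-of-the-envelope reading of the growth figure quoted above, assuming a steady 50% widening per year (purely arithmetic, no drive data involved):

import math

growth_per_year = 1.5
years_per_order_of_magnitude = math.log(10) / math.log(growth_per_year)
print(f"one extra order of magnitude roughly every {years_per_order_of_magnitude:.1f} years")
print(f"gap growth over 10 years: about x{growth_per_year ** 10:.0f}")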

Journal ArticleDOI
TL;DR: A conceptual reference model is presented as the article's first contribution, centrally capturing the basic design concepts of Aspect-Oriented Modeling and their interrelationships in terms of a UML class diagram, and an evaluation framework has been designed, resembling the second contribution, by deriving a detailed and well-defined catalogue of evaluation criteria, thereby operationalizing the conceptualreference model.
Abstract: Aspect-orientation provides a new way of modularization by clearly separating crosscutting concerns from noncrosscutting ones. While aspect-orientation originally emerged at the programming level, it now also stretches over other development phases. There are, for example, already several proposals for Aspect-Oriented Modeling (AOM), most of them pursuing distinguished goals, providing different concepts as well as notations, and showing various levels of maturity. Consequently, there is an urgent need to provide an in-depth survey, clearly identifying commonalities and differences between current AOM approaches. Existing surveys in this area focus more on comprehensibility with respect to development phases or evaluated approaches rather than on comparability on the basis of a detailed evaluation framework. This article tries to fill this gap focusing on aspect-oriented design modeling. As a prerequisite for an in-depth evaluation, a conceptual reference model is presented as the article's first contribution, centrally capturing the basic design concepts of AOM and their interrelationships in terms of a UML class diagram. Based on this conceptual reference model, an evaluation framework has been designed, representing the second contribution, by deriving a detailed and well-defined catalogue of evaluation criteria, thereby operationalizing the conceptual reference model. This criteria catalogue is employed together with a running example in order to evaluate a carefully selected set of eight design-level AOM approaches, representing the third contribution of the article. This per-approach evaluation is complemented with an extensive report on lessons learned, summarizing the approaches' strengths and shortcomings.

Journal ArticleDOI
TL;DR: It is shown that the ongoing research in many of these domains requires complex representations of data entities, and state-of-the-art techniques for efficient (fast) nonmetric similarity search are reviewed, concerning both exact and approximate search.
Abstract: The task of similarity search is widely used in various areas of computing, including multimedia databases, data mining, bioinformatics, social networks, etc. In fact, retrieval of semantically unstructured data entities requires a form of aggregated qualification that selects entities relevant to a query. A popular type of such a mechanism is similarity querying. For a long time, the database-oriented applications of similarity search employed the definition of similarity restricted to metric distances. Due to its topological properties, metric similarity can be effectively used to index a database which can then be queried efficiently by so-called metric access methods. However, together with the increasing complexity of data entities across various domains, in recent years there appeared many similarities that were not metrics—we call them nonmetric similarity functions. In this article we survey domains employing nonmetric functions for effective similarity search, and methods for efficient nonmetric similarity search. First, we show that the ongoing research in many of these domains requires complex representations of data entities. Simultaneously, such complex representations also allow us to model complex and computationally expensive similarity functions (often represented by various matching algorithms). However, the more complex similarity function one develops, the more likely it will be a nonmetric. Second, we review state-of-the-art techniques for efficient (fast) nonmetric similarity search, concerning both exact and approximate search. Finally, we discuss some open problems and possible future research trends.
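A common concrete example of a nonmetric similarity is Dynamic Time Warping (DTW) between sequences; the short sketch below (invented sequences) computes classic DTW and exhibits a case where the triangle inequality fails, which is exactly what rules out standard metric access methods.

def dtw(a, b):
    """Classic O(len(a) * len(b)) DTW with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

if __name__ == "__main__":
    x, y, z = [0], [1], [1, 2, 1]
    print("dtw(x, z)             =", dtw(x, z))              # expected: 4.0
    print("dtw(x, y) + dtw(y, z) =", dtw(x, y) + dtw(y, z))  # expected: 2.0
    # 4 > 1 + 1, so DTW violates the triangle inequality on these sequences.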

Journal ArticleDOI
TL;DR: This survey integrates the vast amount of research efforts that have been produced in comparison-based diagnosis, from the earliest theoretical models to new promising applications, and includes the quantitative evaluation of a relevant reliability metric—the diagnosability— of several popular interconnection network topologies.
Abstract: The growing complexity and dependability requirements of hardware, software, and networks demand efficient techniques for discovering disruptive behavior in those systems. Comparison-based diagnosis is a realistic approach to detect faulty units based on the outputs of tasks executed by system units. This survey integrates the vast amount of research efforts that have been produced in this field, from the earliest theoretical models to new promising applications. Key results also include the quantitative evaluation of a relevant reliability metric—the diagnosability—of several popular interconnection network topologies. Relevant diagnosis algorithms are also described. The survey aims at clarifying and uncovering the potential of this technology, which can be applied to improve the dependability of diverse complex computer systems.

Journal ArticleDOI
TL;DR: This article illustrates how failure detectors can factor out timing assumptions to detect failures in distributed agreement algorithms and surveys the weakest failure detector question.
Abstract: A failure detector is a fundamental abstraction in distributed computing. This article surveys this abstraction through two dimensions. First we study failure detectors as building blocks to simplify the design of reliable distributed algorithms. In particular, we illustrate how failure detectors can factor out timing assumptions to detect failures in distributed agreement algorithms. Second, we study failure detectors as computability benchmarks. That is, we survey the weakest failure detector question and illustrate how failure detectors can be used to classify problems. We also highlight some limitations of the failure detector abstraction along each of the dimensions.
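A minimal sketch of one way such an abstraction is typically realized in practice (simulated clock values and an invented class name; not an algorithm from the article): suspect a process when its heartbeat is late, and grow the timeout whenever a suspicion proves wrong, the usual route to an eventually perfect detector.

class EventuallyPerfectDetector:
    def __init__(self, processes, initial_timeout=2.0):
        self.timeout = {p: initial_timeout for p in processes}
        self.last_heartbeat = {p: 0.0 for p in processes}
        self.suspected = set()

    def heartbeat(self, process, now):
        self.last_heartbeat[process] = now
        if process in self.suspected:          # premature suspicion: back off the timeout
            self.suspected.discard(process)
            self.timeout[process] *= 2

    def check(self, now):
        for p, last in self.last_heartbeat.items():
            if now - last > self.timeout[p]:
                self.suspected.add(p)
        return set(self.suspected)

if __name__ == "__main__":
    fd = EventuallyPerfectDetector(["p1", "p2"])
    fd.heartbeat("p1", now=1.0)
    fd.heartbeat("p2", now=1.0)
    print(fd.check(now=2.5))    # set(): nobody suspected yet
    print(fd.check(now=4.0))    # both suspected: no heartbeat for more than 2.0 time units
    fd.heartbeat("p2", now=4.5) # p2 was alive after all; unsuspect it and double its timeout
    print(fd.check(now=5.0))    # only p1 remains suspected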

Journal ArticleDOI
TL;DR: An up-to-date review of recent work on craniofacial superimposition is presented, along with a discussion of the advantages and drawbacks of the existing approaches, with an emphasis on the automatic ones.
Abstract: Craniofacial superimposition is a forensic process in which a photograph of a missing person is compared with a skull that has been found, in order to determine whether the skull belongs to that person. After one century of development, craniofacial superimposition has become an interdisciplinary research field where computer sciences have acquired a key role as a complement of forensic sciences. Moreover, the availability of new digital equipment (such as computers and 3D scanners) has resulted in a significant advance in the applicability of this forensic identification technique. The purpose of this contribution is twofold. On the one hand, we aim to clearly define the different stages involved in the computer-aided craniofacial superimposition process. On the other hand, we aim to clarify the role played by computers in the methods considered. In order to accomplish these objectives, an up-to-date review of the recent works is presented along with a discussion of advantages and drawbacks of the existing approaches, with an emphasis on the automatic ones. Future case studies will be easily categorized by identifying which stage is tackled and which kind of computer-aided approach is chosen to face the identification problem. Remaining challenges are indicated and some directions for future research are given.

Journal ArticleDOI
TL;DR: An integrated view is introduced that is useful when comparing XML data clustering approaches, when developing a new clustering algorithm, and when implementing an XML clustering component.
Abstract: In the last few years we have observed a proliferation of approaches for clustering XML documents and schemas based on their structure and content. The presence of such a huge amount of approaches is due to the different applications requiring the clustering of XML data. These applications need data in the form of similar contents, tags, paths, structures, and semantics. In this article, we first outline the application contexts in which clustering is useful, then we survey approaches so far proposed relying on the abstract representation of data (instances or schema), on the identified similarity measure, and on the clustering algorithm. In this presentation, we aim to draw a taxonomy in which the current approaches can be classified and compared. We aim at introducing an integrated view that is useful when comparing XML data clustering approaches, when developing a new clustering algorithm, and when implementing an XML clustering component. Finally, the article moves into the description of future trends and research issues that still need to be faced.

Journal ArticleDOI
TL;DR: Fault-handling methods not requiring modification of the FPGA device architecture or user intervention to recover from faults are examined and evaluated against overhead-based and sustainability-based performance metrics such as additional resource requirements, throughput reduction, fault capacity, and fault coverage.
Abstract: This survey of the capabilities of current fault-handling techniques for Field Programmable Gate Arrays (FPGAs) develops a descriptive classification ranging from simple passive techniques to robust dynamic methods. Fault-handling methods not requiring modification of the FPGA device architecture or user intervention to recover from faults are examined and evaluated against overhead-based and sustainability-based performance metrics such as additional resource requirements, throughput reduction, fault capacity, and fault coverage. This classification alongside these performance metrics forms a standard for confident comparisons.

Journal ArticleDOI
TL;DR: This survey compares generic music constraint programming systems according to a number of criteria such as the range of music theories these systems support, and introduces the field and its problems in general.
Abstract: Constraint programming is well suited for the computational modeling of music theories and composition: its declarative and modular approach shares similarities with the way music theory is traditionally expressed, namely by a set of rules which describe the intended result. Various music theory disciplines have been modeled, including counterpoint, harmony, rhythm, form, and instrumentation. Because modeling music theories “from scratch” is a complex task, generic music constraint programming systems have been proposed that predefine the required building blocks for modeling a range of music theories. After introducing the field and its problems in general, this survey compares these generic systems according to a number of criteria such as the range of music theories these systems support.
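A toy example of the rule-as-constraint style (the rule set, ranges, and note values are invented and deliberately much simpler than any real counterpoint model): given a fixed lower voice, a small backtracking search finds an upper voice that stays consonant, avoids parallel fifths and octaves, and moves in small steps.

CONSONANT = {0, 3, 4, 7, 8, 9}            # pitch-class intervals treated as consonant

def satisfies_rules(lower, upper):
    """Check the constraints for the most recently added note of the upper voice."""
    i = len(upper) - 1
    harmonic = (upper[i] - lower[i]) % 12
    if upper[i] <= lower[i] or harmonic not in CONSONANT:
        return False                       # upper voice must stay above, on a consonance
    if i > 0:
        if abs(upper[i] - upper[i - 1]) > 5:
            return False                   # melodic leap larger than a fourth
        prev = (upper[i - 1] - lower[i - 1]) % 12
        if harmonic in (0, 7) and harmonic == prev:
            return False                   # parallel octaves/unisons or parallel fifths
    return True

def solve(lower, domain, upper=()):
    """Depth-first backtracking over candidate pitches for the upper voice."""
    if len(upper) == len(lower):
        return list(upper)
    for pitch in domain:
        candidate = upper + (pitch,)
        if satisfies_rules(lower, candidate):
            result = solve(lower, domain, candidate)
            if result is not None:
                return result
    return None

if __name__ == "__main__":
    cantus_firmus = [60, 62, 64, 62, 60]            # C D E D C as MIDI pitches
    print(solve(cantus_firmus, domain=range(60, 80)))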

Journal ArticleDOI
TL;DR: The goal of this tutorial is to present a comprehensive review of the literature on protocol engineering techniques and to discuss difficulties imposed by the characteristics of WSONs on the protocol engineering community.
Abstract: Wireless self-organizing networks (WSONs) have attracted considerable attention from the network research community; however, the key to their success is the rigorous validation of the properties of the network protocols. Applications involving risk or demanding precision (like alert-based systems) require a rigorous and reliable validation of deployed network protocols. While the main goal is to ensure the reliability of the protocols, validation techniques also allow the establishment of their correctness regarding the related protocols' requirements. Nevertheless, even if different communities have carried out intensive research activities on the validation domain, WSONs still raise new issues for and challenging constraints to these communities. We thus advocate the use of complementary techniques coming from different research communities to efficiently address the validation of WSON protocols. The goal of this tutorial is to present a comprehensive review of the literature on protocol engineering techniques and to discuss difficulties imposed by the characteristics of WSONs on the protocol engineering community. Following the formal and nonformal classification of techniques, we provide a discussion about components and similarities of existing protocol validation approaches. We also investigate how to take advantage of such similarities to obtain complementary techniques and outline new challenges.

Journal ArticleDOI
TL;DR: This article reviews the various implementation techniques available in static typing and in the three cases of single inheritance, multiple inheritance, and multiple subtyping, and presents an experimental compiler-linker, where separate compilation implies the OWA, whereas the whole program is finally linked under the CWA.
Abstract: Object-oriented programming represents an original implementation issue due to its philosophy of making the program behavior depend on the dynamic type of objects. This is expressed by the late binding mechanism, aka message sending. The underlying principle is that the address of the actually called procedure is not statically determined at compile-time, but depends on the dynamic type of a distinguished parameter known as the receiver. A similar issue arises with attributes, because their position in the object layout may also depend on the object's dynamic type. Furthermore, subtyping introduces another original feature (i.e., runtime subtype checks). All three mechanisms need specific implementations and data structures. In static typing, late binding is generally implemented with so-called virtual function tables. These tables reduce method calls to pointers to functions via a small fixed number of extra indirections. It follows that object-oriented programming yields some overhead, as compared to the usual procedural languages. The different techniques and their resulting overhead depend on several parameters. First, inheritance and subtyping may be single or multiple, and even a mixing is possible, as in Java and .NET, which present single inheritance for classes and multiple subtyping for interfaces. Multiple inheritance is a well-known complication. Second, the production of executable programs may involve various schemes, from global compilation, which implies the closed-world assumption (CWA), as the whole program is known at compile time, to separate compilation and dynamic loading, where each program unit is compiled and loaded independently of any usage, hence under the open-world assumption (OWA). Global compilation is well-known to facilitate optimization. This article reviews the various implementation techniques available in static typing and in the three cases of single inheritance, multiple inheritance, and multiple subtyping. This language-independent survey focuses on separate compilation and dynamic loading, as they represent the most commonly used and the most demanding framework. However, many works have been undertaken in the global compilation framework, mostly for dynamically typed languages, but also applied to the Eiffel language. Hence, we also examine global techniques and how they can improve implementation efficiency. Finally, mixed frameworks that combine open and closed world assumptions are considered. For instance, just-in-time (JIT) compilers work under provisional CWA, at the expense of possible recompilations. In contrast, we present an experimental compiler-linker, where separate compilation implies the OWA, whereas the whole program is finally linked under the CWA.
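The virtual-function-table mechanism mentioned above can be mimicked explicitly; the sketch below (invented class and method names, Python used only as executable pseudocode) gives each "class" a table of function pointers and dispatches every call through a fixed slot index, independently of the receiver's dynamic type.

# Method slots are assigned fixed indices at "compile time".
AREA, DESCRIBE = 0, 1

def circle_area(obj):      return 3.14159 * obj["radius"] ** 2
def circle_describe(obj):  return "circle"
def square_area(obj):      return obj["side"] ** 2
def square_describe(obj):  return "square"

CIRCLE_VTABLE = [circle_area, circle_describe]
SQUARE_VTABLE = [square_area, square_describe]

def make_circle(radius): return {"vtable": CIRCLE_VTABLE, "radius": radius}
def make_square(side):   return {"vtable": SQUARE_VTABLE, "side": side}

def call(obj, slot):
    # Late binding: one extra indirection through the receiver's vtable, then a jump
    # to whatever function the receiver's dynamic type put in that slot.
    return obj["vtable"][slot](obj)

if __name__ == "__main__":
    shapes = [make_circle(1.0), make_square(2.0)]
    for s in shapes:
        print(call(s, DESCRIBE), call(s, AREA))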

Journal ArticleDOI
TL;DR: This article will show the usefulness and elegance of strict intersection types for the Lambda Calculus, that are strict in the sense that they are the representatives of equivalence classes of types in the BCD-system.
Abstract: This article will show the usefulness and elegance of strict intersection types for the Lambda Calculus, that are strict in the sense that they are the representatives of equivalence classes of types in the BCD-system [Barendregt et al. 1983]. We will focus on the essential intersection type assignment; this system is almost syntax directed, and we will show that all major properties hold that are known to hold for other intersection systems, like the approximation theorem, the characterization of (head/strong) normalization, completeness of type assignment using filter semantics, strong normalization for cut-elimination and the principal pair property. In part, the proofs for these properties are new; we will briefly compare the essential system with other existing systems.
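For readers unfamiliar with these systems, a sketch in common notation (which may differ in detail from the article's): strict types allow intersections only on the left of an arrow, and the standard motivating example is self-application, which simple types cannot handle.

% Grammar of strict types: intersections occur only as the left-hand side of an arrow.
\[
  \tau ::= \varphi \mid \sigma \to \tau
  \qquad
  \sigma ::= \tau_1 \cap \cdots \cap \tau_n \quad (n \ge 1)
\]
% The classic example: the bound variable is used both as a function and as its argument.
\[
  \lambda x.\, x\,x \;:\; \bigl((\varphi \to \varphi) \cap \varphi\bigr) \to \varphi
\]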

Journal ArticleDOI
TL;DR: The current multisource literature is surveyed from the viewpoint of central questions regarding how to partition the robots during search to ensure that all sources are located in minimal time, how to avoid obstacles and other robots, and how to proceed after each source is found.
Abstract: The problem of time-varying, multisource localization using robotic swarms has received relatively little attention when compared to single-source localization. It involves distinct challenges regarding how to partition the robots during search to ensure that all sources are located in minimal time, how to avoid obstacles and other robots, and how to proceed after each source is found. Unfortunately, no common set of validation problems and reference algorithms has evolved, and there are no general theoretical foundations that guarantee progress, convergence, and termination. This article surveys the current multisource literature from the viewpoint of these central questions.

Journal ArticleDOI
TL;DR: The evidence suggests that Shibboleth meets many of the demands of the research community in accessing and using grid resources, in particular through its use to support federated authentication and authorization for interinstitutional sharing of remote grid resources that are subject to access control.
Abstract: Grid computing facilitates resource sharing typically to support distributed virtual organizations (VO). The multi-institutional nature of a grid environment introduces challenging security issues, especially with regard to authentication and authorization. This article presents a state-of-the-art review of major grid authentication and authorization technologies. In particular we focus upon the Internet2 Shibboleth technologies and their use to support federated authentication and authorization to support interinstitutional sharing of remote grid resources that are subject to access control. We outline the architecture, features, advantages, limitations, projects, and applications of Shibboleth in a grid environment. The evidence suggests that Shibboleth meets many of the demands of the research community in accessing and using grid resources.

Journal ArticleDOI
TL;DR: The conclusion is that the basic results are clearly important, but in practice much less striking than generally thought, and the differences between random failures and attacks are not so huge and can be explained with simple facts.
Abstract: It has appeared recently that the underlying degree distribution of networks may play a crucial role concerning their robustness. Previous work insisted on the fact that power-law degree distributions induce high resilience to random failures but high sensitivity to attack strategies, while Poisson degree distributions are quite sensitive in both cases. Then much work has been done to extend these results. We aim here at studying in depth these results, their origin, and limitations. We review in detail previous contributions in a unified framework, and identify the approximations on which these results rely. We then present new results aimed at clarifying some important aspects. We also provide extensive rigorous experiments which help evaluate the relevance of the analytic results. We reach the conclusion that, even if the basic results are clearly important, they are in practice much less striking than generally thought. The differences between random failures and attacks are not so huge and can be explained with simple facts. Likewise, the differences in the behaviors induced by power-law and Poisson distributions are not as striking as often claimed.
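The comparison being revisited is easy to reproduce in rough form; the sketch below (toy graph sizes and parameters, requires the networkx library) removes a fraction of nodes either uniformly at random or highest-degree first from a power-law and a Poisson-like random graph and reports the relative size of the largest surviving component.

import random
import networkx as nx

def largest_component_fraction(graph):
    """Fraction of the surviving nodes that sit in the largest connected component."""
    if graph.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(graph), key=len)) / graph.number_of_nodes()

def remove_and_measure(graph, fraction, attack):
    g = graph.copy()
    k = int(fraction * g.number_of_nodes())
    if attack:   # target the highest-degree nodes first
        victims = [n for n, _ in sorted(g.degree, key=lambda nd: nd[1], reverse=True)[:k]]
    else:        # uniform random failures
        victims = random.sample(list(g.nodes()), k)
    g.remove_nodes_from(victims)
    return largest_component_fraction(g)

if __name__ == "__main__":
    n = 2000
    power_law = nx.barabasi_albert_graph(n, 2)
    poisson = nx.gnp_random_graph(n, 4 / n)      # similar average degree
    for name, g in [("power-law", power_law), ("poisson", poisson)]:
        for attack in (False, True):
            frac = remove_and_measure(g, fraction=0.05, attack=attack)
            label = "attack " if attack else "failure"
            print(f"{name:9s} {label} -> giant component: {frac:.2f}")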