
Showing papers in "Software - Practice and Experience in 2008"


Journal IssueDOI
TL;DR: It is shown that a mutual reinforcement relationship between ranking and Web-snippet clustering does exist, and the better the ranking of the underlying search engines, the more relevant the results from which SnakeT distills the hierarchy of labeled folders, and hence the more useful this hierarchy is to the user.
Abstract: We propose a (meta-)search engine, called SnakeT (SNippet Aggregation for Knowledge ExtracTion), which queries more than 18 commodity search engines and offers two complementary views on their returned results. One is the classical flat ranked list; the other is a hierarchical organization of these results into folders created on-the-fly at query time and labeled with intelligible sentences that capture the themes of the results contained in them. Users can browse this hierarchy with various goals: knowledge extraction, query refinement and personalization of search results. In this novel form of personalization, the user is requested to interact with the hierarchy by selecting the folders whose labels (themes) best fit her query needs. SnakeT then personalizes on-the-fly the original ranked list by filtering out those results that do not belong to the selected folders. Consequently, this form of personalization is carried out by the users themselves and is thus fully adaptive, privacy-preserving, scalable and non-intrusive for the underlying search engines. We have extensively tested SnakeT and compared it against the best available Web-snippet clustering engines. SnakeT is efficient and effective, and shows that a mutual reinforcement relationship between ranking and Web-snippet clustering does exist. In fact, the better the ranking of the underlying search engines, the more relevant the results from which SnakeT distills the hierarchy of labeled folders, and hence the more useful this hierarchy is to the user. Conversely, the more intelligible the folder hierarchy, the more effective the personalization offered by SnakeT on the ranking of the query results. Copyright © 2007 John Wiley & Sons, Ltd. This work was done while the second author was a PhD student at the Dipartimento di Informatica, University of Pisa.
The work contains the complete description and a full set of experiments on the software system SnakeT, which was partially published in the Proceedings of the 14th International World Wide Web Conference, Chiba, Japan, 2005
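As an illustrative sketch of the personalization step described above (not SnakeT's actual clustering algorithm, which distills folder labels from snippets), the toy Python code below groups results into keyword folders and filters the original ranked list by the folders a user selects; the keyword-matching rule and all names are invented for illustration.

```python
# Toy sketch of SnakeT-style personalization (hypothetical, not the real
# algorithm): results are grouped into labeled folders, and the original
# ranked list is filtered down to results in the user-selected folders.

def build_folders(results, labels):
    """Assign each result index to every label found in its snippet."""
    folders = {label: [] for label in labels}
    for i, snippet in enumerate(results):
        for label in labels:
            if label.lower() in snippet.lower():
                folders[label].append(i)
    return folders

def personalize(results, folders, selected):
    """Keep the original ranking, dropping results outside selected folders."""
    keep = set()
    for label in selected:
        keep.update(folders.get(label, []))
    return [r for i, r in enumerate(results) if i in keep]

results = [
    "Jaguar cars for sale",
    "Jaguar habitat and diet",
    "Jaguar XK engine specs",
    "Big cats: the jaguar in the wild",
]
folders = build_folders(results, ["cars", "engine", "wild", "habitat"])
personalized = personalize(results, folders, ["habitat", "wild"])
```

Because filtering preserves the original order, a better underlying ranking directly yields a better personalized list, which is the mutual-reinforcement point made above.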

120 citations


Journal IssueDOI
TL;DR: The software library STXXL is presented, an implementation of the C++ standard template library (STL) for processing huge data sets that can fit only on hard disks, and it is the first I/O-efficient algorithm library that supports the pipelining technique, which can save more than half of the I/Os.
Abstract: We present the software library STXXL, an implementation of the C++ standard template library (STL) for processing huge data sets that can fit only on hard disks. It supports parallel disks and the overlapping of disk I/O with computation, and it is the first I/O-efficient algorithm library that supports the pipelining technique, which can save more than half of the I/Os. STXXL has been applied in both academic and industrial environments for a range of problems including text processing, graph algorithms, computational geometry, Gaussian elimination, visualization and analysis of microscopic images, differential cryptographic analysis, etc. The performance of STXXL and its applications is evaluated on synthetic and real-world inputs. We present the design of the library, explain how its performance features are supported, and demonstrate how the library integrates with STL. Copyright © 2007 John Wiley & Sons, Ltd. Now at mental images GmbH, Berlin, Germany.
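STXXL itself is a C++ library; as a language-neutral sketch of the I/O-efficient pattern it implements, the Python fragment below sorts data that would not fit in memory by sorting fixed-size runs and k-way merging them. Here the "disk" is an ordinary list, so this only illustrates the access pattern, not real disk I/O.

```python
# Sketch of external-memory sorting, the kind of primitive STXXL provides
# for C++: sort bounded runs that fit in "memory", then stream-merge them.
import heapq

def external_sort(data, run_size):
    """Sort `data` by sorting runs of `run_size` items and k-way merging."""
    runs = [sorted(data[i:i + run_size]) for i in range(0, len(data), run_size)]
    return list(heapq.merge(*runs))
```

In STXXL the same shape appears, roughly, as `stxxl::sort` over an `stxxl::vector`, with the merge phase overlapped with disk I/O.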

65 citations


Journal IssueDOI
TL;DR: This paper presents the research results of an ongoing technology transfer project carried out in cooperation between the University of Salerno and a small software company, aimed at developing and transferring migration technology to the industrial partner.
Abstract: This paper presents the research results of an ongoing technology transfer project carried out in cooperation between the University of Salerno and a small software company. The project is aimed at developing and transferring migration technology to the industrial partner. The partner should be enabled to migrate monolithic multi-user COBOL legacy systems to a multi-tier Web-based architecture. The assessment of the legacy systems of the partner company revealed that these systems had a very low level of decomposability with spaghetti-like code and embedded control flow and database accesses within the user interface descriptions. For this reason, it was decided to adopt an incremental migration strategy based on the reengineering of the user interface using Web technology, on the transformation of interactive legacy programs into batch programs, and the wrapping of the legacy programs. A middleware framework links the new Web-based user interface with the Wrapped Legacy System. An Eclipse plug-in, named MELIS (migration environment for legacy information systems), was also developed to support the migration process. Both the migration strategy and the tool have been applied to two essential subsystems of the most business critical legacy system of the partner company. Copyright © 2008 John Wiley & Sons, Ltd.

63 citations


Journal IssueDOI
TL;DR: A grammar-driven technique to build a debugging tool generation framework from existing DSL grammars is described, which addresses a long-term goal of empowering end-users with development tools for particular DSL problem domains at the proper level of abstraction without depending on a specific GPL.
Abstract: Domain-specific languages (DSLs) assist a software developer (or end-user) in writing a program using idioms that are similar to the abstractions found in a specific problem domain. Tool support for DSLs is lacking when compared with the capabilities provided for standard general-purpose languages (GPLs), such as Java and C++. For example, support for debugging a program written in a DSL is often non-existent. The lack of a debugger at the proper abstraction level limits an end-user's ability to discover and locate faults in a DSL program. This paper describes a grammar-driven technique to build a debugging tool generation framework from existing DSL grammars. The DSL grammars are used to generate the hooks needed to interface with a supporting infrastructure constructed for an integrated development environment that assists in debugging a program written in a DSL. The contribution represents a coordinated approach to bring essential software tools (e.g. debuggers) to different types of DSLs (e.g. imperative, declarative, and hybrid). This approach hides from the end-users the accidental complexities associated with expanding the focus of a language environment to include debuggers. The research described in this paper addresses a long-term goal of empowering end-users with development tools for particular DSL problem domains at the proper level of abstraction without depending on a specific GPL. Copyright © 2007 John Wiley & Sons, Ltd.
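One concrete piece of such a framework can be pictured as a source map: the generator records which DSL line each generated GPL line came from, and DSL-level breakpoints are translated through that map. The code below is a hypothetical illustration of this mapping idea, not the paper's actual infrastructure.

```python
# Hypothetical sketch: translate DSL-level breakpoints into breakpoints on
# the generated general-purpose-language (GPL) code via a line map that a
# grammar-driven generator could emit alongside the code.

def gpl_breakpoints(line_map, dsl_breakpoints):
    """line_map maps generated GPL line -> originating DSL line."""
    wanted = set(dsl_breakpoints)
    return sorted(g for g, d in line_map.items() if d in wanted)

# One DSL statement may expand to several GPL lines.
line_map = {10: 1, 11: 1, 12: 2, 20: 3, 21: 3}
```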

54 citations


Journal IssueDOI
TL;DR: ANTLRWorks is described, a complete development environment for ANTLR grammars that attempts to resolve difficulties and, in general, make grammar development more accessible to the average programmer.
Abstract: Programmers tend to avoid using language tools, resorting to ad hoc methods, because tools can be hard to use, their parsing strategies can be difficult to understand and debug, and their generated parsers can be opaque black-boxes. In particular, there are two very common difficulties encountered by grammar developers: understanding why a grammar fragment results in a parser non-determinism and determining why a generated parser incorrectly interprets an input sentence. This paper describes ANTLRWorks, a complete development environment for ANTLR grammars that attempts to resolve these difficulties and, in general, make grammar development more accessible to the average programmer. The main components are a grammar editor with refactoring and navigation features, a grammar interpreter, and a domain-specific grammar debugger. ANTLRWorks' primary contributions are a parser non-determinism visualizer based on syntax diagrams and a time-traveling debugger that pays special attention to parser decision-making by visualizing lookahead usage and speculative parsing during backtracking. Copyright © 2008 John Wiley & Sons, Ltd.
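The first difficulty mentioned above, parser non-determinism, can be seen in miniature when two alternatives of a rule begin with the same token, so one token of lookahead cannot decide between them. The toy check below (terminals only, no epsilon or follow-set handling) flags such overlaps; it illustrates the concept only and is not ANTLR's analysis.

```python
# Toy LL(1) overlap check: report alternative pairs whose first tokens clash.

def first_tokens(alternative):
    return {alternative[0]} if alternative else set()

def ll1_conflicts(alternatives):
    conflicts = []
    for i in range(len(alternatives)):
        for j in range(i + 1, len(alternatives)):
            if first_tokens(alternatives[i]) & first_tokens(alternatives[j]):
                conflicts.append((i, j))
    return conflicts

# stat : ID '=' expr ';'  |  ID '(' args ')' ';'   -- both start with ID
alts = [["ID", "=", "expr", ";"], ["ID", "(", "args", ")", ";"]]
```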

45 citations


Journal IssueDOI
TL;DR: Jgroup-ARM, a distributed object group platform with autonomous replication management, is presented along with a novel measurement-based assessment technique that is used to validate its fault-handling capability.
Abstract: This paper presents the design and implementation of Jgroup-ARM, a distributed object group platform with autonomous replication management along with a novel measurement-based assessment technique that is used to validate the fault-handling capability of Jgroup-ARM. Jgroup extends Java RMI through the group communication paradigm and has been designed specifically for application support in partitionable systems. ARM aims at improving the dependability characteristics of systems through a fault-treatment mechanism. Hence, ARM focuses on deployment and operational aspects, where the gain in terms of improved dependability is likely to be the greatest. The main objective of ARM is to localize failures and to reconfigure the system according to application-specific dependability requirements. Combining Jgroup and ARM can significantly reduce the effort necessary for developing, deploying and managing dependable, partition-aware applications. Jgroup-ARM is evaluated experimentally to validate its fault-handling capability; the recovery performance of a system deployed in a wide area network is evaluated. In this experiment multiple nearly coincident reachability changes are injected to emulate network partitions separating the service replicas. The results show that Jgroup-ARM is able to recover applications to their initial state in several realistic failure scenarios, including multiple, concurrent network partitionings. Copyright © 2007 John Wiley & Sons, Ltd.

44 citations


Journal IssueDOI
TL;DR: This work surveys and categorizes existing techniques for random word generation before presenting the authors' own syllable-based algorithm, which produces higher-quality results; the results are also applicable elsewhere, in areas such as password generation, username generation, and even computer-generated poetry.
Abstract: Automatically generating ‘good’ domain names that are random yet pronounceable is a problem harder than it first appears. The problem is related to random word generation, and we survey and categorize existing techniques before presenting our own syllable-based algorithm that produces higher-quality results. Our results are also applicable elsewhere, in areas such as password generation, username generation, and even computer-generated poetry. Copyright © 2008 John Wiley & Sons, Ltd.
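A minimal version of the syllable idea: alternate consonants and vowels so every generated name is pronounceable. The paper's algorithm is more sophisticated (it draws on real syllable structure); this sketch with a fixed alphabet and a seeded RNG only shows the shape of the approach.

```python
# Toy syllable-based name generator: each syllable is consonant + vowel,
# so the output is always pronounceable. Alphabets are invented here.
import random

def random_name(syllables, rng):
    consonants = "bdfgklmnprstv"
    vowels = "aeiou"
    return "".join(rng.choice(consonants) + rng.choice(vowels)
                   for _ in range(syllables))

rng = random.Random(42)   # seeded for reproducibility
name = random_name(3, rng)
```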

39 citations


Journal IssueDOI
TL;DR: SwingStates is a Java toolkit designed to facilitate the development of graphical user interfaces and bring advanced interaction techniques to the Java platform, and the results demonstrate that SwingStates can be used by non-expert developers with little training to successfully implement advanced interaction techniques.
Abstract: This article describes SwingStates, a Java toolkit designed to facilitate the development of graphical user interfaces and bring advanced interaction techniques to the Java platform. SwingStates is based on the use of finite-state machines specified directly in Java to describe the behavior of interactive systems. State machines can be used to redefine the behavior of existing Swing widgets or, in combination with a new canvas widget that features a rich graphical model, to create brand new widgets. SwingStates also supports arbitrary input devices to implement novel interaction techniques based, for example, on bi-manual or pressure-sensitive input. We have used SwingStates in several Master's-level classes over the past two years and have developed a benchmark approach to evaluate the toolkit in this context. The results demonstrate that SwingStates can be used by non-expert developers with little training to successfully implement advanced interaction techniques. Copyright © 2007 John Wiley & Sons, Ltd.
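The state-machine style SwingStates promotes can be sketched abstractly: a drag interaction as states plus input-event transitions. SwingStates expresses such machines directly in Java against Swing widgets; the Python transition table below mirrors only the concept.

```python
# Toy finite-state machine for a drag interaction, in the spirit of
# SwingStates (which specifies such machines directly in Java):
# press arms the drag, move while pressed drags, release returns to idle.

TRANSITIONS = {
    ("idle", "press"): "pressed",
    ("pressed", "move"): "dragging",
    ("dragging", "move"): "dragging",
    ("pressed", "release"): "idle",
    ("dragging", "release"): "idle",
}

def run(events, state="idle"):
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # ignore irrelevant events
    return state
```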

38 citations


Journal IssueDOI
TL;DR: The design of the Gridbus Grid resource broker is presented that allows users to create applications and specify different objectives through different interfaces without having to deal with the complexity of Grid infrastructure.
Abstract: Grids provide uniform access to aggregations of heterogeneous resources and services such as computers, networks and storage owned by multiple organizations. However, such a dynamic environment poses many challenges for application composition and deployment. In this paper, we present the design of the Gridbus Grid resource broker that allows users to create applications and specify different objectives through different interfaces without having to deal with the complexity of Grid infrastructure. We present the unique requirements that motivated our design and discuss how these provide flexibility in extending the functionality of the broker to support different low-level middlewares and user interfaces. We evaluate the broker with different job profiles and Grid middleware and conclude with the lessons learnt from our development experience. Copyright © 2007 John Wiley & Sons, Ltd.

38 citations


Journal IssueDOI
TL;DR: An elegant method based on tabled logic programming (TLP) is presented that simplifies the specification of dynamic programming solutions by introducing a new mode declaration for tabled predicates; experimental results show that mode declarations improve performance in solving dynamic programming problems on TLP systems.
Abstract: In the dynamic programming paradigm the value of an optimal solution is recursively defined in terms of optimal solutions to subproblems. Such dynamic programming definitions can be tricky and error-prone to specify. This paper presents an elegant method based on tabled logic programming (TLP) that simplifies the specification of such dynamic programming solutions. Our method introduces a new mode declaration for tabled predicates. The arguments of each tabled predicate are divided into indexed and non-indexed arguments so that tabled predicates can be regarded as functions: indexed arguments represent input values and non-indexed arguments represent output values. The non-indexed arguments in a tabled predicate can be further declared to be aggregated, for example, the minimum, so that while generating answers, the global table will dynamically maintain the smallest value for that argument. This mode-declaration scheme, coupled with recursion, provides an easy-to-use method for dynamic programming: there is no need to define the value of an optimal solution recursively, as the definition of a general solution suffices. The optimal value as well as its corresponding concrete solution can be derived implicitly and automatically using tabled logic programming systems. Our experimental results show that mode declarations improve performance in solving dynamic programming problems on TLP systems. Copyright © 2007 John Wiley & Sons, Ltd. This is an expanded version of the authors' paper ‘Simplifying Dynamic Programming via Tabling’ that appeared in the Proceedings of the 6th International Symposium on Practical Aspects of Declarative Languages, 2004, pp. 163–177
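In non-Prolog terms, the scheme resembles memoizing a recursive definition while aggregating answers with a minimum: the table is keyed on the "indexed" argument and keeps only the best output, so writing the general recursive definition suffices. A minimum-coin-change toy (invented here, not the paper's example):

```python
# Memoized recursion with min-aggregation, a Python analogue of the paper's
# mode-declared tabled predicates: the cache plays the role of the global
# table, and min() plays the role of the aggregated output argument.
from functools import lru_cache

COINS = (1, 3, 4)

@lru_cache(maxsize=None)          # the "global table" of the TLP system
def min_coins(amount):
    """Fewest coins from COINS summing to `amount`."""
    if amount == 0:
        return 0
    candidates = [min_coins(amount - c) + 1 for c in COINS if c <= amount]
    return min(candidates)        # the aggregated (min) output
```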

33 citations


Journal IssueDOI
TL;DR: The importance of timer design is motivated, and the techniques and methodologies developed to accurately time HPC kernel routines for the authors' well-known empirical tuning framework, ATLAS, are discussed.
Abstract: Key computational kernels must run near their peak efficiency for most high-performance computing (HPC) applications. Getting this level of efficiency has always required extensive tuning of the kernel on a particular platform of interest. The success or failure of an optimization is usually measured by invoking a timer. Understanding how to build reliable and context-sensitive timers is one of the most neglected areas in HPC, and this results in a host of HPC software that looks good when reported in the papers, but delivers only a fraction of the reported performance when used by actual HPC applications. In this paper, we motivate the importance of timer design and then discuss the techniques and methodologies we have developed in order to accurately time HPC kernel routines for our well-known empirical tuning framework, ATLAS. Copyright © 2008 John Wiley & Sons, Ltd.
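One basic point of timer design can be sketched simply: a single measurement is noisy, so a kernel is run several times and a robust statistic (here the minimum) is reported. ATLAS's real timers additionally control cache state, clock resolution and calling context, all of which this toy omits.

```python
# Minimal repetition-based timer sketch (not ATLAS's timers): report the
# minimum wall-clock time over several runs to suppress transient noise.
import time

def time_kernel(kernel, repetitions=5):
    """Return the minimum elapsed wall-clock time over several runs."""
    best = float("inf")
    for _ in range(repetitions):
        start = time.perf_counter()
        kernel()
        best = min(best, time.perf_counter() - start)
    return best

elapsed = time_kernel(lambda: sum(range(10000)))
```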

Journal IssueDOI
TL;DR: This paper describes a structural complexity measure for a CBSS written in Unified Modelling Language (UML) from a system analyst's point of view and identifies three factors, interface, constraints and interaction, as primary contributors to the complexity of a CBSS.
Abstract: A component-based system (CBS) is integration centric with a focus on assembling individual components to build a software system. In a CBS, component source code information is usually unavailable. Each component also introduces added properties such as constraints associated with its use, interactions with other components and customizability properties. Recent research suggests that most faults are found in only a few system components. A complexity measure at the specification phase can identify these components. However, traditional complexity metrics are not adequate for a CBS as they focus mainly on either lines of code (LOC) or information based on object and class properties. There is therefore a need to develop a new technique for measuring the complexity of a CBS specification (CBSS). This paper describes a structural complexity measure for a CBSS written in Unified Modelling Language (UML) from a system analyst's point of view. A CBSS consists of individual component descriptions characterized by their syntactic, semantic and interaction properties. We identify three factors, interface, constraints and interaction, as primary contributors to the complexity of a CBSS. We also present an application of our technique to a university course registration system. Copyright © 2006 John Wiley & Sons, Ltd.
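As a purely illustrative rendering of the three-factor idea (the weights and the scoring rule below are invented, not the paper's measure), a component's score could combine its interface, constraint and interaction counts, with the system score summed over components:

```python
# Hypothetical weighted scoring over the three factors the paper identifies.
# The weights are invented for illustration only.

WEIGHTS = {"interface": 1.0, "constraints": 2.0, "interactions": 1.5}

def component_complexity(c):
    return sum(WEIGHTS[k] * c.get(k, 0) for k in WEIGHTS)

def system_complexity(components):
    return sum(component_complexity(c) for c in components)

# Toy stand-in for a course-registration specification's components.
registration = [
    {"interface": 4, "constraints": 2, "interactions": 3},
    {"interface": 2, "constraints": 1, "interactions": 1},
]
```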

Journal IssueDOI
TL;DR: SUMLOW is a unified modelling language (UML) diagramming tool that uses an electronic whiteboard and sketching-based user interface to support collaborative software design and allows designers to sketch UML constructs, mixing different UML diagram elements, diagram annotations, and hand-drawn text.
Abstract: Most visual diagramming tools provide point-and-click construction of computer-drawn diagram elements using a conventional desktop computer and mouse. SUMLOW is a unified modelling language (UML) diagramming tool that uses an electronic whiteboard (E-whiteboard) and sketching-based user interface to support collaborative software design. SUMLOW allows designers to sketch UML constructs, mixing different UML diagram elements, diagram annotations, and hand-drawn text. A key novelty of the tool is the preservation of hand-drawn diagrams and support for manipulation of these sketches using pen-based actions. Sketched diagrams can be automatically ‘formalized’ into computer-recognized and -drawn UML diagrams and then exported to a third party CASE tool for further extension and use. We describe the motivation for SUMLOW, illustrate the use of the tool to sketch various UML diagram types, describe its key architecture abstractions and implementation approaches, and report on two evaluations of the toolset. We hope that our experiences will be useful for others developing sketching-based design tools or those looking to leverage pen-based interfaces in software applications. Copyright © 2007 John Wiley & Sons, Ltd.

Journal IssueDOI
TL;DR: A tool, called Oto, that provides support for submission and marking of assignments and aims at reducing the workload associated with the marking task and providing timely feedback to the students, including feedback before the final submission.
Abstract: Marking programming assignments in programming courses involves a lot of work: each program must be tested, the source code must be read and evaluated, etc. With the large classes encountered nowadays, the feedback provided to students through marking is thus rather limited, and often late. Tools providing support for marking programming assignments do exist, ranging from support for administrative aspects through automation of program testing or support for source code evaluation based on metrics. In this paper, we introduce a tool, called Oto, that provides support for submission and marking of assignments. Oto aims at reducing the workload associated with the marking task. Oto also aims at providing timely feedback to the students, including feedback before the final submission. Furthermore, the tool has been designed to be generic and extensible, so that the marking process for a specific assignment can easily be customized and the tool can be extended with various marking components (modules) that allow it to deal with various aspects of marking (testing, style, structure, etc.) and with programs written in various programming languages. Copyright © 2007 John Wiley & Sons, Ltd.

Journal IssueDOI
TL;DR: This paper surveys some of the recent efforts in providing tools for easy gridification of applications and proposes several taxonomies to identify approaches followed in the materialization of such tools, and describes common features among the proposed approaches.
Abstract: The Grid shows itself as a globally distributed computing environment, in which hardware and software resources are virtualized to transparently provide applications with vast capabilities. Just like the electrical power grid, the Grid aims at offering a powerful yet easy-to-use computing infrastructure to which applications can be easily ‘plugged’ and efficiently executed. Unfortunately, it is still very difficult to Grid-enable applications, since current tools force users to take into account many details when adapting applications to run on the Grid. In this paper, we survey some of the recent efforts in providing tools for easy gridification of applications and propose several taxonomies to identify approaches followed in the materialization of such tools. We conclude this paper by describing common features among the proposed approaches, and by pointing out open issues and future directions in the research and development of gridification methods. Copyright © 2007 John Wiley & Sons, Ltd.

Journal IssueDOI
TL;DR: This work describes the XML word-replacing transform (XML-WRT), a fast and fully reversible XML transform which, when combined with commonly used LZ77-style compression algorithms, allows one to attain high compression ratios, comparable to those achieved by the current state-of-the-art XML compressors.
Abstract: The innate verbosity of the extensible markup language (XML) remains one of its main weaknesses, especially when large documents are concerned. This problem can be solved with the aid of dedicated XML compression algorithms. In this work, we describe the XML word-replacing transform (XML-WRT), a fast and fully reversible XML transform which, when combined with commonly used LZ77-style compression algorithms, allows one to attain high compression ratios, comparable to those achieved by the current state-of-the-art XML compressors. The resulting compression scheme is asymmetric in the sense that its decoder is much faster than the coder. This is a desirable practical property, as in many XML applications data are read much more often than written. The key features of the transform are dictionary-based encoding of both document structure and content, separation of different content types into multiple streams, and dedicated encoding of specific patterns, including numbers and dates. The test results show that the proposed transform improves the XML compression efficiency of general-purpose compressors on average by 35% in the case of gzip and 17% in the case of LZMA. Compared with the current state-of-the-art SCMPPM algorithm, XML-WRT with LZMA attains an over 2% better compression ratio, while being 55% faster. Copyright © 2007 John Wiley & Sons, Ltd.
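The word-replacing idea can be shown in miniature: replace frequent tokens with short reserved codes, then hand the result to a general-purpose compressor, keeping the transform fully reversible. This toy (with an invented three-entry dictionary) omits XML-WRT's multiple streams and its number/date handling.

```python
# Toy reversible word-replacing transform ahead of a general-purpose
# compressor, in the spirit of XML-WRT. The dictionary is invented.
import zlib

DICT = {"<item>": "\x01", "</item>": "\x02", "value": "\x03"}
INV = {v: k for k, v in DICT.items()}

def transform(text):
    for word, code in DICT.items():
        text = text.replace(word, code)
    return text

def inverse(text):
    for code, word in INV.items():
        text = text.replace(code, word)
    return text

doc = "<item>value</item>" * 100
packed = zlib.compress(transform(doc).encode("latin-1"))
restored = inverse(zlib.decompress(packed).decode("latin-1"))
```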

Journal IssueDOI
TL;DR: This paper designs adaptive variants of the recently proposed family of dense compression codes, showing that they are much simpler and faster than dynamic Huffman codes and reach almost the same compression effectiveness.
Abstract: Semistatic byte-oriented word-based compression codes have been shown to be an attractive alternative for compressing natural language text databases, because of the combination of speed, effectiveness, and direct searchability they offer. In particular, our recently proposed family of dense compression codes has been shown to be superior to the more traditional byte-oriented word-based Huffman codes in most aspects. In this paper, we focus on the problem of transmitting texts among peers that do not share the vocabulary. This is the typical scenario for adaptive compression methods. We design adaptive variants of our semistatic dense codes, showing that they are much simpler and faster than dynamic Huffman codes and reach almost the same compression effectiveness. We show that our variants have a very compelling trade-off between compression-decompression speed, compression ratio, and search speed compared with most of the state-of-the-art general compressors. Copyright © 2008 John Wiley & Sons, Ltd. A preliminary partial version of this work appeared in [1].
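The dense-code family includes the End-Tagged Dense Code, in which a word's frequency rank is encoded as a byte sequence whose final byte has its top bit set, making codes self-delimiting and directly searchable. A sketch of that semistatic encoding follows (the paper's contribution, the adaptive variants, additionally maintains the vocabulary on the fly, which this omits):

```python
# End-Tagged Dense Code sketch: continuer bytes use values 0-127, and the
# last (stopper) byte has the high bit set (values 128-255), so a code's
# end is recognizable without a length prefix.

def etdc_encode(rank):
    """Encode a 0-based word rank; the last byte has its top bit set."""
    out = [0x80 | (rank & 0x7F)]       # stopper byte
    rank >>= 7
    while rank > 0:
        rank -= 1
        out.append(rank & 0x7F)        # continuer bytes
        rank >>= 7
    return bytes(reversed(out))

def etdc_decode(code):
    """Invert etdc_encode."""
    val = 0
    for b in code[:-1]:
        val = (val << 7) + b + 1
    return (val << 7) | (code[-1] & 0x7F)
```

More frequent words get shorter codes: ranks 0-127 take one byte, the next 128 x 128 ranks take two, and so on.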

Journal IssueDOI
TL;DR: An intuitive algorithm for performing automatic conversions between quantities measured in different units is developed, which both requires fewer conversion operations than current approaches and makes more sensible choices about which quantities to convert.
Abstract: When physical quantities are used in programs they are typically represented as raw numbers, with the units in which they were measured only being given in comments, if at all. This can lead to errors from the use of dimensionally inconsistent expressions, or the comparison of two quantities of the same dimension but measured in different units, which are not discovered until run time. Any program working with the physical world has this issue, with scientific modelling being a major application. Implementors of models have the time-consuming and error-prone task of adding in dynamic units checks and conversions manually. Most existing programming languages do not provide support for representing units explicitly (although extensions to some have been proposed). With the advent of domain-specific modelling languages, incorporating code generation techniques, we propose checking physical units at the level of the modelling language, removing the need for such support in the underlying implementation language. We present our work in the context of one such modelling language: CellML, developed at the University of Auckland with a focus on modelling biological systems. We have developed an intuitive algorithm for performing automatic conversions between quantities measured in different units. It both requires fewer conversion operations than current approaches and makes more sensible choices about which quantities to convert. Uniquely, by using partial evaluation techniques it is also capable of dealing robustly with quantities raised to arbitrary powers, even where the exponent is given by an expression. We demonstrate our algorithm on various examples. Copyright © 2007 John Wiley & Sons, Ltd.
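The essence of units checking and conversion can be sketched with quantities that carry base-unit exponents: addition requires identical dimensions, multiplication adds exponents, and conversion between units of one dimension is a scale factor. This is a generic illustration, not CellML's algorithm (which additionally minimizes conversions and handles symbolic exponents via partial evaluation).

```python
# Generic sketch of dimension checking (hypothetical, not CellML's
# algorithm): a quantity is a value plus base-unit exponents.

class Quantity:
    def __init__(self, value, dims):
        self.value = value
        self.dims = {d: e for d, e in dims.items() if e}

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError("dimension mismatch")   # caught before run time
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = dict(self.dims)
        for d, e in other.dims.items():
            dims[d] = dims.get(d, 0) + e
        return Quantity(self.value * other.value, dims)

SCALE = {"m": 1.0, "km": 1000.0}        # length units, scaled in metres

def convert(value, src, dst):
    """Convert between two units of the same dimension via their scales."""
    return value * SCALE[src] / SCALE[dst]

length = Quantity(2.0, {"m": 1}) + Quantity(3.0, {"m": 1})
speed = Quantity(6.0, {"m": 1}) * Quantity(0.5, {"s": -1})
```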


Journal IssueDOI
TL;DR: This paper shows how the competitors of *sendmail* improved on its design in response to the increased need for security.
Abstract: As the Internet matured, security became more important and formerly adequate designs became inadequate. One of the victims of the increased need for security was *sendmail*. This paper shows how its competitors improved on its design in response to the increased need for security. The designers of *qmail* and *Postfix* used well-known patterns to achieve better security without affecting performance; these patterns can be used by the designers of other systems with an increased need for security. Copyright © 2008 John Wiley & Sons, Ltd.

Journal IssueDOI
TL;DR: This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications and demonstrates the performance gains possible by adaptive behavior and the low overhead introduced by ASF.
Abstract: Server applications augmented with behavioral adaptation logic can react to environmental changes, creating self-managing server applications with improved quality of service at runtime. However, developing adaptive server applications is challenging due to the complexity of the underlying server technologies and highly dynamic application environments. This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications. ASF provides a clear separation between the implementation of adaptive behavior and the business logic of the server application. This means a server application can be extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality to scale an image based on the load of the server and network connection speed. The experimental evaluation demonstrates the performance gains possible by adaptive behavior and the low overhead introduced by ASF. Copyright © 2007 John Wiley & Sons, Ltd.
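The separation ASF provides can be pictured as a pluggable policy: the business logic serves an image, while a control component maps observed load and link speed to a quality setting. The names and thresholds below are invented for illustration and do not come from the paper.

```python
# Hypothetical adaptation policy of the kind a control component could
# implement: degrade image quality under high CPU load or slow links,
# leaving the image-serving business logic untouched.

def choose_quality(cpu_load, bandwidth_kbps):
    """Map observed conditions to an output quality level."""
    if cpu_load > 0.8 or bandwidth_kbps < 256:
        return "low"
    if cpu_load > 0.5 or bandwidth_kbps < 1024:
        return "medium"
    return "high"
```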

Journal IssueDOI
TL;DR: This paper presents a new parallel programming model and a library, VCluster, which implements it; VCluster is based on migrating virtual threads instead of processes to support clusters of SMP machines more efficiently.
Abstract: Clusters, composed of symmetric multiprocessor (SMP) machines and heterogeneous machines, have become increasingly popular for high-performance computing. Message-passing libraries, such as the message-passing interface (MPI) and parallel virtual machine (PVM), are the de facto parallel programming libraries for clusters, which usually consist of homogeneous uni-processor machines. For SMP machines, MPI is combined with multithreading libraries such as POSIX Threads and OpenMP to take advantage of the architecture. In addition to existing parallel programming libraries in C/C++ and Fortran, the Java programming language presents itself as another alternative with its object-oriented framework, platform-neutral byte code, and ever-increasing performance. This paper presents a new parallel programming model and a library, VCluster, which implements this model. VCluster is based on migrating virtual threads instead of processes to support clusters of SMP machines more efficiently. The implementation uses thread migration, which can be used in dynamic load balancing. VCluster was developed in pure Java, utilizing the portability of Java to support clusters of heterogeneous machines. Several applications are developed to illustrate the use of this library and compare the usability and performance of VCluster with other approaches. Copyright © 2007 John Wiley & Sons, Ltd.
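A virtual thread can only migrate if its execution state is capturable. One common way to achieve this in pure Java, since the JVM gives no access to the native stack, is to keep all state in serializable fields and run the computation as explicit steps. This is a minimal sketch of that idea, not VCluster's actual API:

```java
import java.io.Serializable;

public class MigratableTask implements Serializable, Runnable {
    // All execution state lives in serializable fields rather than on a
    // native stack, so the task can be checkpointed between steps,
    // shipped to another node, and resumed there.
    int pc = 0;    // next step to execute
    long acc = 0;  // partial result

    // Execute one step; returns false once the task is done.
    boolean step() {
        if (pc >= 10) return false;
        acc += pc;
        pc++;
        return true;
    }

    public void run() {
        while (step()) { /* a scheduler could checkpoint and migrate here */ }
    }

    public static void main(String[] args) {
        MigratableTask t = new MigratableTask();
        t.step(); t.step(); t.step();  // ...serialize and send to another node...
        t.run();                       // resume where it left off
        System.out.println("acc=" + t.acc);
    }
}
```

Migration then reduces to serializing the object between steps and resuming `run()` on the destination machine, which also gives the load balancer natural preemption points.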

Journal IssueDOI
TL;DR: The proposed algorithm, called TLSF (two-level segregated fit), has an asymptotic constant cost, O(1), maintaining a fast response time (less than 200 processor instructions on an x86 processor) and a low level of memory usage (low fragmentation).
Abstract: This paper describes the design criteria and implementation details of a dynamic storage allocator for real-time systems. The main requirements that have to be considered when designing a new allocator are concerned with temporal and spatial constraints. The proposed algorithm, called TLSF (two-level segregated fit), has an asymptotic constant cost, O(1), maintaining a fast response time (less than 200 processor instructions on an x86 processor) and a low level of memory usage (low fragmentation). TLSF uses two levels of segregated lists to arrange free memory blocks and an incomplete search policy. This policy is implemented with word-size bitmaps and logical processor instructions. Therefore, TLSF can be categorized as a good-fit allocator. The incomplete search policy is also shown to be a good policy in terms of fragmentation. The fragmentation caused by TLSF is slightly smaller (better) than that caused by best fit (which is one of the best allocators regarding memory fragmentation). In order to evaluate the proposed allocator, three analyses are presented in this paper. The first one is based on worst-case scenarios. The second one provides a detailed consideration of the execution cost of the internal operations of the allocator and its fragmentation. The third analysis is a comparison with other well-known allocators from the temporal (number of cycles and processor instructions) and spatial (fragmentation) points of view. In order to compare them, a task model has been presented. Copyright © 2007 John Wiley & Sons, Ltd.
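The O(1) cost comes from mapping a block size to its two-level list indices with a couple of bit instructions: the first-level index is the position of the size's most significant bit, and the second-level index is the next few bits, which subdivide each power-of-two range linearly. A sketch of that mapping (with a hypothetical 32-way second level, valid for sizes >= 32):

```java
public class TlsfIndex {
    // Log2 of the number of second-level lists per first-level class.
    // 5 -> 32 linear subdivisions of each power-of-two size range.
    static final int SL_LOG2 = 5;

    // First-level index: position of the most significant set bit,
    // i.e. floor(log2(size)). One "find last set" instruction.
    static int firstIndex(int size) {
        return 31 - Integer.numberOfLeadingZeros(size);
    }

    // Second-level index: the SL_LOG2 bits just below the MSB, selecting
    // a linear sub-range of [2^fl, 2^(fl+1)). Valid for size >= 32.
    static int secondIndex(int size, int fl) {
        return (size >> (fl - SL_LOG2)) & ((1 << SL_LOG2) - 1);
    }

    public static void main(String[] args) {
        int size = 460;
        int fl = firstIndex(size);       // 8  -> range [256, 512)
        int sl = secondIndex(size, fl);  // 25 -> sub-range [456, 464)
        System.out.println("fl=" + fl + " sl=" + sl);
    }
}
```

Together with a word-size bitmap of non-empty lists per level, finding a suitable free list is a fixed, short instruction sequence regardless of heap state, which is what makes the allocator predictable for real-time use.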

Journal IssueDOI
TL;DR: The process illustrates the use of a collection of refactorings for aspect-oriented source code, covering the extraction of scattered implementation elements to aspects, the internal reorganization of the extracted aspects and the extractionof commonalities to super-aspects.
Abstract: This paper describes a refactoring process that transforms a Java source code base into a functionally equivalent AspectJ source code base. The process illustrates the use of a collection of refactorings for aspect-oriented source code, covering the extraction of scattered implementation elements to aspects, the internal reorganization of the extracted aspects and the extraction of commonalities to super-aspects. Copyright © 2007 John Wiley & Sons, Ltd.

Journal IssueDOI
TL;DR: A new approach for automatic content recommendation is presented, based on Semantic Web technologies, that significantly reduces deficiencies in the current content recommenders and performs better than other existing approaches.
Abstract: Digital Television will bring a significant increase in the amount of channels and programs available to end users, with many more difficulties to find contents appealing to them among a myriad of irrelevant information. Thus, automatic content recommenders should receive special attention in the following years to improve their assistance to users. The current content recommenders have important deficiencies that hamper their wide acceptance. In this paper, we present a new approach for automatic content recommendation that significantly reduces those deficiencies. This approach, based on Semantic Web technologies, has been implemented in the AVATAR (AdVAnced Telematic search of Audiovisual contents by semantic Reasoning) tool, a hybrid content recommender that makes extensive use of well-known standards, such as Multimedia Home Platform, TV-Anytime and OWL. Also, we have carried out an experimental evaluation, the results of which show that our proposal performs better than other existing approaches. Copyright © 2007 John Wiley & Sons, Ltd.
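The advantage of semantic reasoning over plain keyword matching is that two programs can be related through ancestors in an ontology even when their metadata share no terms. The toy genre hierarchy below stands in for an OWL ontology over TV-Anytime metadata; the hierarchy and names are invented for illustration, not taken from the paper:

```java
import java.util.*;

public class SemanticRecommender {
    // Toy genre hierarchy standing in for an OWL ontology
    // (child genre -> parent genre).
    static final Map<String, String> parent = Map.of(
            "football", "sports",
            "tennis", "sports",
            "sitcom", "fiction",
            "drama", "fiction");

    // Walk up to the root, collecting the genre and all its ancestors.
    static List<String> ancestors(String genre) {
        List<String> path = new ArrayList<>();
        for (String cur = genre; cur != null; cur = parent.get(cur))
            path.add(cur);
        return path;
    }

    // Semantic match: two programs are related if their genres share an
    // ancestor -- exactly the link keyword matching would miss.
    static boolean related(String a, String b) {
        Set<String> as = new HashSet<>(ancestors(a));
        for (String anc : ancestors(b))
            if (as.contains(anc)) return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(related("football", "tennis")); // true, via "sports"
        System.out.println(related("football", "sitcom")); // false
    }
}
```

A recommender built this way can suggest a tennis broadcast to a football fan even though "tennis" never appears in the user's profile, which is the kind of inference the paper attributes to its semantic approach.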

Journal IssueDOI
TL;DR: It is shown how one can use MiGaLs to very efficiently compare two RNAs of any size at different levels of detail.
Abstract: We formally introduce a new data structure, called MiGaL for ‘Multiple Graph Layer’, composed of various graphs linked together by relations of abstraction-refinement. The new structure is useful for representing information that can be described at different levels of abstraction, each level corresponding to a graph. We then propose an algorithm for comparing two MiGaLs. The algorithm performs a step-by-step comparison starting with the most ‘abstract’ level. The result of the comparison at a given step is communicated to the next step using a special colouring scheme. MiGaLs represent a very natural model for comparing RNA secondary structures that may be seen at different levels of detail, going from the sequence of nucleotides, single or paired with another to participate in a helix, to the network of multiple loops that is believed to represent the most conserved part of RNAs having similar function. We therefore show how one can use MiGaLs to very efficiently compare two RNAs of any size at different levels of detail. Copyright © 2007 John Wiley & Sons, Ltd.
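The step-by-step comparison can be illustrated with a minimal two-layer structure: abstract nodes are matched first, and only pairs matched at the abstract level (in the paper's terms, given the same colour) have their refined contents compared. This sketch invents its own tiny representation and is not the MiGaL data structure itself:

```java
import java.util.*;

public class LayeredCompare {
    // Toy two-layer node: an abstract label plus the refined elements it
    // expands into (think: a helix and its base pairs).
    static class Node {
        final String label;
        final List<String> refined;
        Node(String label, List<String> refined) {
            this.label = label;
            this.refined = refined;
        }
    }

    // Compare the abstract layer first; refined layers are compared only
    // inside abstract pairs that matched, pruning the search space.
    static int sharedRefined(List<Node> a, List<Node> b) {
        Map<String, Node> byLabel = new HashMap<>();
        for (Node n : b) byLabel.put(n.label, n);
        int shared = 0;
        for (Node n : a) {
            Node m = byLabel.get(n.label);
            if (m == null) continue;          // no abstract match: pruned
            Set<String> r = new HashSet<>(m.refined);
            for (String s : n.refined)
                if (r.contains(s)) shared++;
        }
        return shared;
    }

    public static void main(String[] args) {
        List<Node> x = List.of(new Node("helix", List.of("AU", "GC")),
                               new Node("loop", List.of("AAA")));
        List<Node> y = List.of(new Node("helix", List.of("AU")),
                               new Node("bulge", List.of("AAA")));
        System.out.println(sharedRefined(x, y)); // only the helix pair is refined
    }
}
```

The payoff is the same as in MiGaL: an expensive fine-grained comparison is never run on regions that already disagree at a coarser level, which is why whole RNAs of any size can be compared efficiently.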

Journal IssueDOI
TL;DR: Two mechanisms are described for automatically tuning two performance-related parameters of Apache web servers: *KeepAliveTimeout* and *MaxClients*.
Abstract: Apache web servers are widely used as stand-alone servers or front-ends in multi-tiered web servers. Despite the wide availability of software, it is quite difficult for many administrators to properly configure their web servers. In particular, setting the performance-related parameters is an error-prone and time-consuming task because their values heavily depend on the server environment. In this paper, two mechanisms are described for automatically tuning two performance-related parameters of Apache web servers: *KeepAliveTimeout* and *MaxClients*. These mechanisms are easy to deploy because no modifications to the server or the operating system are required. Moreover, they are parameter specific. Although interference between *KeepAliveTimeout* and *MaxClients* is inevitable, the tuning mechanisms minimize the correlation by using almost completely independent metrics. Experimental results show that these mechanisms work well for two different workloads; the parameter values are close to optimal and can adapt to workload changes. Copyright © 2007 John Wiley & Sons, Ltd.
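A tuner like the *MaxClients* mechanism can be pictured as a feedback loop that adjusts the worker pool between measurement intervals. The hill-climbing rule, bounds and step size below are a hypothetical stand-in for the paper's actual controller, which uses its own server metrics:

```java
public class MaxClientsTuner {
    // Hypothetical bounds and step size; real values depend on server
    // memory and workload.
    static final int MIN = 16, MAX = 512, STEP = 8;

    // One hill-climbing move per measurement interval: keep growing the
    // worker pool while throughput improves, back off once it degrades.
    static int next(int current, double prevTput, double curTput) {
        if (curTput >= prevTput) return Math.min(MAX, current + STEP);
        return Math.max(MIN, current - STEP);
    }

    public static void main(String[] args) {
        int mc = 100;
        mc = next(mc, 50.0, 60.0);  // throughput improved: grow
        mc = next(mc, 60.0, 55.0);  // throughput dropped: shrink
        System.out.println("MaxClients=" + mc);
    }
}
```

Crucially, a loop like this only observes the server from outside (throughput per interval), which matches the paper's claim that no modifications to Apache or the operating system are required.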

Journal IssueDOI
TL;DR: The paper presents JnJVM, a full Java virtual machine (JVM) that satisfies these needs by using dynamic aspect weaving techniques and a component architecture; it supports adding or replacing its own code while it is running, with no overhead on unmodified code execution.
Abstract: Dynamic flexibility is a major challenge in modern system design, required to react to evolutions in context or application requirements. Adapting behaviors may impose substantial code modification across the whole system, in the field, without service interruption and without state loss. This paper presents JnJVM, a full Java virtual machine (JVM) that satisfies these needs by using dynamic aspect weaving techniques and a component architecture. It supports adding or replacing its own code, while it is running, with no overhead on unmodified code execution. Our measurements reveal similar performance when compared with the monolithic JVM Kaffe. Three illustrative examples show different extension scenarios: (i) modifying the JVM's behavior; (ii) adding capabilities to the JVM; and (iii) modifying applications' behavior. Copyright © 2008 John Wiley & Sons, Ltd.
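Replacing a running component without service interruption generally requires an indirection point: callers go through a reference, and the swap is an atomic update of that reference. This is a generic sketch of the pattern, expressed in plain Java rather than JnJVM's actual component model (the `Codec` interface and version strings are invented):

```java
import java.util.concurrent.atomic.AtomicReference;

public class HotSwap {
    interface Codec {
        String encode(String s);
    }

    // Indirection point: running code always calls through this reference,
    // so the component behind it can be replaced atomically, with no
    // interruption and no state loss in the callers.
    static final AtomicReference<Codec> codec =
            new AtomicReference<>(s -> "v1:" + s);

    static String encode(String s) {
        return codec.get().encode(s);
    }

    public static void main(String[] args) {
        System.out.println(encode("x"));  // served by the v1 component
        codec.set(s -> "v2:" + s);        // hot-swap while "running"
        System.out.println(encode("x"));  // served by the v2 component
    }
}
```

JnJVM's contribution is doing this inside the JVM itself via aspect weaving, so that unmodified code pays no indirection cost; the sketch above only conveys the swap-through-a-reference idea.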

Journal IssueDOI
TL;DR: The Viuva Negra (VN) crawler was developed and operated for four years to feed a search engine and a Web archive for the Portuguese Web.
Abstract: This paper documents hazardous situations on the Web that crawlers must address. This knowledge was accumulated while developing and operating the Viuva Negra (VN) crawler, which fed a search engine and a Web archive for the Portuguese Web for four years. The design, implementation and evaluation of the VN crawler are also presented as a case study of Web crawler design. The case study provides crawling techniques that may be useful for the further development of crawlers. Copyright © 2007 John Wiley & Sons, Ltd.
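Among the hazards such crawlers face are spider traps: URL spaces that look infinite, such as ever-deepening paths generated by misconfigured sites. A common defensive heuristic is to reject URLs that exceed structural limits before fetching them. The limits and rules below are generic illustrations, not VN's actual heuristics:

```java
import java.util.*;

public class TrapGuard {
    // Hypothetical limits; real crawlers tune these per deployment.
    static final int MAX_DEPTH = 8;
    static final int MAX_URL_LEN = 256;

    // Reject URL paths that look like crawler traps: excessive depth,
    // very long paths, or repeated segments (/a/b/a/b/a/b/...).
    static boolean looksLikeTrap(String path) {
        if (path.length() > MAX_URL_LEN) return true;
        String[] segs = path.split("/");
        if (segs.length > MAX_DEPTH) return true;
        Map<String, Integer> counts = new HashMap<>();
        for (String s : segs)
            if (!s.isEmpty() && counts.merge(s, 1, Integer::sum) > 3)
                return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeTrap("/news/2008/01/story.html")); // false
        System.out.println(looksLikeTrap("/a/b/a/b/a/b/a/b/a"));       // true
    }
}
```

Filters like this run on the frontier before any network I/O, so a single pathological site cannot consume the crawl budget meant for the rest of the Web.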