
Showing papers in "The Computer Journal in 2005"


Journal ArticleDOI
TL;DR: An explanation of video compression techniques and of the H.264/MPEG-4 AVC (Part 10) coding standard, by Iain Richardson.
Abstract: H.264 and MPEG-4 Video Compression: Video Coding for Next-generation Multimedia, by Iain Richardson, is an explanation of video compression techniques centred on the H.264/MPEG-4 AVC (Part 10) coding standard.

479 citations


Journal ArticleDOI
TL;DR: The foundation of this work is the topological theory of drawings of graphs on surfaces and the results regarding the relation of the size of the largest grid minor in terms of treewidth in bounded-genus graphs and more generally in graphs excluding a fixed graph H as a minor.
Abstract: Our newly developing theory of bidimensional graph problems provides general techniques for designing efficient fixed-parameter algorithms and approximation algorithms for NP-hard graph problems in broad classes of graphs. This theory applies to graph problems that are bidimensional in the sense that (1) the solution value for the k × k grid graph (and similar graphs) grows with k, typically as Ω(k^2), and (2) the solution value goes down when contracting edges and optionally when deleting edges. Examples of such problems include feedback vertex set, vertex cover, minimum maximal matching, face cover, a series of vertex-removal parameters, dominating set, edge dominating set, r-dominating set, connected dominating set, connected edge dominating set, connected r-dominating set, and unweighted TSP tour (a walk in the graph visiting all vertices). Bidimensional problems have many structural properties; for example, any graph embeddable in a surface of bounded genus has treewidth bounded above by the square root of the problem's solution value. These properties lead to efficient---often subexponential---fixed-parameter algorithms, as well as polynomial-time approximation schemes, for many minor-closed graph classes. One type of minor-closed graph class of particular relevance has bounded local treewidth, in the sense that the treewidth of a graph is bounded above in terms of the diameter; indeed, we show that such a bound is always at most linear. The bidimensionality theory unifies and improves several previous results. The theory is based on algorithmic and combinatorial extensions to parts of the Robertson-Seymour Graph Minor Theory, in particular initiating a parallel theory of graph contractions. The foundation of this work is the topological theory of drawings of graphs on surfaces and our results regarding the relation (the linearity) of the size of the largest grid minor in terms of treewidth in bounded-genus graphs and more generally in graphs excluding a fixed graph H as a minor. In this thesis, we also develop the algorithmic theory of vertex separators, and its relation to the embeddings of certain metric spaces. Unlike in the edge case, we show that embeddings into L1 (and even Euclidean embeddings) are insufficient, but that the additional structure provided by many embedding theorems does suffice for our purposes. We obtain an O(√(log n)) approximation for min-ratio vertex cuts in general graphs, based on a new semidefinite relaxation of the problem, and a tight analysis of the integrality gap, which is shown to be Θ(√(log n)). We also prove various approximate max-flow/min-vertex-cut theorems, which in particular give a constant-factor approximation for min-ratio vertex cuts in any excluded-minor family of graphs. Previously, this was known only for planar graphs, and for general excluded-minor families the best-known ratio was O(log n). These results have a number of applications. We exhibit an O(√(log n)) pseudo-approximation for finding balanced vertex separators in general graphs. Furthermore, we obtain improved approximation ratios for treewidth: in any graph of treewidth k, we show how to find a tree decomposition of width at most O(k√(log k)), whereas previous algorithms yielded O(k log k).
For graphs excluding a fixed graph as a minor, we give a constant-factor approximation for the treewidth; via the bidimensionality theory, this can be used to obtain the first polynomial-time approximation schemes for problems like minimum feedback vertex set and minimum connected dominating set in such graphs.

239 citations
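
For intuition about the bound driving the subexponential algorithms described above, the following is a standard illustrative statement of the bidimensionality parameter-treewidth relationship, written here as a sketch with constants and exact graph-class conditions omitted (not quoted from the abstract):

```latex
% A standard form of the parameter-treewidth bound in bidimensionality theory
% (illustrative; constants and precise graph-class hypotheses omitted).
% If a parameter P is bidimensional and P(k x k grid) = Omega(k^2), then for the
% graph classes covered by the theory
\[
  \mathrm{tw}(G) \;=\; O\!\left(\sqrt{P(G)}\,\right).
\]
% Dynamic programming over a width-w tree decomposition typically runs in
% 2^{O(w)} n^{O(1)} time, so deciding whether P(G) <= k takes
\[
  2^{O(\sqrt{k})} \cdot n^{O(1)},
\]
% which is the subexponential fixed-parameter running time mentioned above.
```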


Journal ArticleDOI
TL;DR: The conference on grand challenges, held in Newcastle on 30 and 31 March 2004, occurred at a particularly opportune time and the strand on the educational aspects was particularly relevant and the idea innovative.
Abstract: The conference on grand challenges, held in Newcastle on 30 and 31 March 2004, occurred at a particularly opportune time. The strand on the educational aspects was particularly relevant and the idea innovative in the sense that this was the first occasion on which a grand challenge event with a focus on educational issues in computing had taken place. This paper provides some of the background and includes a distillation of the educational challenges that emerged from that event.

139 citations


Journal ArticleDOI
TL;DR: The goal of this paper is to develop matching and scheduling algorithms which account for both the execution time and the failure probability and can trade off execution time against the failure probabilities of the application.
Abstract: A heterogeneous computing (HC) system is composed of a suite of geographically distributed high-performance machines interconnected by a high-speed network, thereby providing high-speed execution of computationally intensive applications with diverse demands. In HC systems, however, there is a possibility of machine and network failures and this can have an adverse impact on applications running on the system. In order to decrease the impact of failures on an application, matching and scheduling algorithms must be devised which minimize not only the execution time but also the failure probability of the application. However, because of the conflicting requirements, it is not possible to minimize both at the same time. Thus, the goal of this paper is to develop matching and scheduling algorithms which account for both the execution time and the failure probability and can trade off execution time against the failure probability of the application. In order to attain these goals, a biobjective scheduling problem is first formulated and then two different algorithms, the biobjective dynamic level scheduling algorithm and the biobjective genetic algorithm, are developed. Unique to both algorithms is the expression used for computing the failure probability of an application with precedence constraints. The simulation results confirm that the proposed algorithms can be used for producing task assignments where the execution time is weighed against the failure probability.

129 citations
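
As a rough illustration of the trade-off described above, the sketch below scores a task-to-machine assignment by both its makespan and a failure probability, under the simplifying assumption of independent, exponentially distributed machine failures; the data, the scalarization weight and all names are hypothetical and are not the paper's biobjective dynamic level scheduling or genetic algorithm.

```python
import math

# Hypothetical inputs: task runtimes per machine and per-machine failure rates.
runtime = {("t1", "m1"): 4.0, ("t1", "m2"): 6.0,
           ("t2", "m1"): 5.0, ("t2", "m2"): 3.0}
failure_rate = {"m1": 0.010, "m2": 0.002}   # failures per time unit

def evaluate(assignment, alpha=0.5):
    """Return (makespan, failure probability, weighted score) for an assignment.

    Failure model (an assumption, not the paper's): each machine fails
    independently at an exponential rate, and the schedule fails if any
    machine fails while one of its tasks is running.
    """
    busy = {}
    for task, machine in assignment.items():
        busy[machine] = busy.get(machine, 0.0) + runtime[(task, machine)]
    makespan = max(busy.values())
    p_ok = math.prod(math.exp(-failure_rate[m] * t) for m, t in busy.items())
    p_fail = 1.0 - p_ok
    # Simple scalarization of the two objectives: smaller is better.
    return makespan, p_fail, alpha * makespan + (1 - alpha) * p_fail

print(evaluate({"t1": "m1", "t2": "m2"}))  # spread across both machines
print(evaluate({"t1": "m2", "t2": "m2"}))  # slower but more reliable machine
```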


Journal ArticleDOI
TL;DR: CAM is a new component and aspect model that defines components and aspects as first-order entities, together with a non-intrusive composition mechanism to plug aspects into components.
Abstract: Component-based software development (CBSD) represents a significant advance towards assembling systems by plugging in independent and (re)usable components. On the other hand, aspect-oriented software development (AOSD) is presently considered as a possible technology to improve the modularity and adaptability of complex and large-scale distributed systems. Both are complementary technologies, so it would be helpful to have models that combine them to take advantage of all their mutual benefits. Thus recent research has tried to combine CBSD and AOSD by considering aspects as reusable parts that can be woven and then attached to the individual components. Our contribution to the integration of these technologies is CAM, a new component and aspect model that defines components and aspects as first-order entities, together with a non-intrusive composition mechanism to plug aspects into components. The underlying infrastructure supporting CAM is the dynamic aspect-oriented platform (DAOP), a component and aspect platform that provides the usual services of distributed applications, as well as a composition mechanism to perform the plugging of software aspects into components at runtime.

110 citations
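
The toy sketch below illustrates the general idea of plugging an aspect into a component through a non-intrusive, interception-style composition mechanism; the class and method names are invented for illustration and do not reflect the actual CAM/DAOP interfaces.

```python
class LoggingAspect:
    """A first-order aspect: advice run around a component operation."""
    def before(self, component, op, args):
        print(f"[aspect] {component.name}.{op} called with {args}")
    def after(self, component, op, result):
        print(f"[aspect] {component.name}.{op} returned {result}")

class Component:
    def __init__(self, name):
        self.name = name
    def add(self, a, b):
        return a + b

class Composer:
    """Non-intrusive composition: aspects are plugged in at invocation time,
    without modifying the component's code."""
    def __init__(self):
        self.aspects = {}          # component name -> list of aspects
    def plug(self, component_name, aspect):
        self.aspects.setdefault(component_name, []).append(aspect)
    def invoke(self, component, op, *args):
        for asp in self.aspects.get(component.name, []):
            asp.before(component, op, args)
        result = getattr(component, op)(*args)
        for asp in self.aspects.get(component.name, []):
            asp.after(component, op, result)
        return result

composer = Composer()
calc = Component("calculator")
composer.plug("calculator", LoggingAspect())   # plugging happens at runtime
print(composer.invoke(calc, "add", 2, 3))
```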


Journal ArticleDOI
TL;DR: This paper presents a classification of these state-of-the-art tools, and describes and compares them, and the rationale behind the different approaches is presented.
Abstract: To counteract the factors that negatively affect the programming learning process and the teaching of programming, different supporting software tools are used nowadays. This paper presents a classification of these state-of-the-art tools, and describes and compares them. The rationale behind the different approaches is presented. Some challenges that the approaches and tools described could face are also pointed out.

93 citations


Journal ArticleDOI
TL;DR: A taxonomy of distributed event-based programming systems is presented, structured as a hierarchy of the properties of a distributed event system and may be used as a framework to describe such a system according to its properties.
Abstract: Event-based middleware is currently being applied for application component integration in a range of application domains. As a result, a variety of event services has been proposed to address different requirements. In order to aid the understanding of the relationships between these systems, this paper presents a taxonomy of distributed event-based programming systems. The taxonomy is structured as a hierarchy of the properties of a distributed event system and may be used as a framework to describe such a system according to its properties. The taxonomy identifies a set of fundamental properties of event systems and categorizes them according to the event model supported and the structure of the event service. Event services are further classified according to their organization and their interaction models, as well as other functional and non-functional features.

73 citations
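
As a concrete reference point for the properties such a taxonomy classifies (event model, subscription, delivery), here is a minimal topic-based publish/subscribe sketch; it is a generic illustration, not one of the surveyed event services.

```python
from collections import defaultdict

class EventService:
    """Minimal topic-based publish/subscribe event service."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Push-style delivery to all current subscribers of the topic.
        for callback in self.subscribers[topic]:
            callback(event)

bus = EventService()
bus.subscribe("stock/ACME", lambda e: print("trader saw", e))
bus.subscribe("stock/ACME", lambda e: print("logger saw", e))
bus.publish("stock/ACME", {"price": 42.0})
```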


Journal ArticleDOI
TL;DR: A report on an exercise by the Computing Research Community in the UK to identify the major research challenges that face the world of computing today, including a summary of the outcomes of a BCS-sponsored conference held in Newcastle-upon-Tyne from 29 to 31 March this year.
Abstract: What are the major research challenges that face the world of computing today? Are there any of them that match the grandeur of well-known challenges in other branches of science? This article is a report on an exercise by the Computing Research Community in the UK to answer these questions, and includes a summary of the outcomes of a BCS-sponsored conference held in Newcastle-upon-Tyne from 29 to 31 March this year.

73 citations


Journal ArticleDOI
TL;DR: This paper describes a series of extensions to an existing performance-aware grid management system (TITAN) that provide additional support for workflow prediction and scheduling using a multi-domain performance management infrastructure.
Abstract: Grid middleware development has advanced rapidly over the past few years to support component-based programming models and service-oriented architectures. This is most evident with the forthcoming release of the Globus toolkit (GT4), which represents a convergence of concepts (and standards) from both the grid and web-services communities. Grid applications are increasingly modular, composed of workflow descriptions that feature both resource and application dynamism. Understanding the performance implications of scheduling grid workflows is critical in providing effective resource management and reliable service quality to users. This paper describes a series of extensions to an existing performance-aware grid management system (TITAN). These extensions provide additional support for workflow prediction and scheduling using a multi-domain performance management infrastructure.

72 citations


Journal ArticleDOI
TL;DR: It is proved theoretically that execution times using advanced reservations display less variance than those without, and it is shown that the costs of advanced reservations can be reduced by providing the system with more accurate performance models.
Abstract: Unpredictable job execution environments pose a significant barrier to the widespread adoption of the Grid paradigm, because of the innate risk of jobs failing to execute at the time specified by the user. We demonstrate that predictability can be enhanced with a supporting infrastructure consisting of three parts: Performance modelling and monitoring, scheduling which exploits application structure and an advanced reservation resource management service. We prove theoretically that execution times using advanced reservations display less variance than those without. We also show that the costs of advanced reservations can be reduced by providing the system with more accurate performance models. Following the theoretical discussion, we describe the implementation of a fully functional workflow enactment framework that supports advanced reservations and performance modelling thereby providing predictable execution behavior. We further provide experimental results confirming our theoretical models.

71 citations
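
The variance claim above can be illustrated with a toy simulation: without a reservation the completion time includes a random queue wait, while a reservation fixes the start time, so only the runtime itself varies. The distributions and parameters below are arbitrary choices for illustration, not the paper's model.

```python
import random
import statistics

random.seed(0)
N = 10000
runtimes = [random.gauss(100, 5) for _ in range(N)]           # job runtime
queue_waits = [random.expovariate(1 / 30) for _ in range(N)]  # wait without a reservation

no_reservation = [w + r for w, r in zip(queue_waits, runtimes)]
with_reservation = [20 + r for r in runtimes]   # fixed, pre-agreed start offset

print("variance without reservation:", statistics.variance(no_reservation))
print("variance with reservation:   ", statistics.variance(with_reservation))
```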


Journal ArticleDOI
TL;DR: A clustering oriented approach for facing the problem of source code plagiarism, designed such that it may be easily adapted over any keyword-based programming language and it is quite beneficial when compared with earlier plagiarism detection approaches.
Abstract: Efficient detection of plagiarism in programming assignments of students is of great importance to the educational process. This paper presents a clustering oriented approach for facing the problem of source code plagiarism. The implemented software, called PDetect, accepts as input a set of program sources and extracts subsets (the clusters of plagiarism) such that each program within a particular subset has been derived from the same original. PDetect proposes the use of an appropriate measure for evaluating plagiarism detection performance and supports the idea of combining different plagiarism detection schemes. Furthermore, a cluster analysis is performed in order to provide information beneficial to the plagiarism detection process. PDetect is designed such that it may be easily adapted over any keyword-based programming language and it is quite beneficial when compared with earlier (state-of-the-art) plagiarism detection approaches.
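
To make the cluster-extraction idea concrete, the sketch below builds crude keyword-frequency fingerprints, compares them pairwise, and groups programs whose similarity exceeds a threshold into suspected copy clusters; the similarity measure, keywords and threshold are illustrative assumptions, not PDetect's actual metric.

```python
import math
from itertools import combinations

def keyword_profile(source, keywords=("if", "else", "for", "while", "return", "int")):
    """Count keyword occurrences; a crude, keyword-based program fingerprint."""
    tokens = source.split()
    return [tokens.count(k) for k in keywords]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def plagiarism_clusters(programs, threshold=0.95):
    """Group programs whose pairwise similarity exceeds the threshold
    (single linkage via union-find); each cluster is a suspected copy group."""
    parent = list(range(len(programs)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    profiles = [keyword_profile(p) for p in programs]
    for i, j in combinations(range(len(programs)), 2):
        if cosine(profiles[i], profiles[j]) >= threshold:
            parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(programs)):
        clusters.setdefault(find(i), []).append(i)
    return [c for c in clusters.values() if len(c) > 1]

sources = ["int main for if else return", "int main for if else return",
           "while while return"]
print(plagiarism_clusters(sources))   # -> [[0, 1]]
```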

Journal ArticleDOI
TL;DR: The aDORe repository architecture, as described in this paper, is designed and implemented for ingesting, storing, and accessing a vast collection of Digital Objects at the Research Library of the Los Alamos National Laboratory.
Abstract: This paper describes the aDORe repository architecture designed and implemented for ingesting, storing, and accessing a vast collection of Digital Objects at the Research Library of the Los Alamos National Laboratory. The aDORe architecture is highly modular and standards-based. In the architecture, the MPEG-21 Digital Item Declaration Language is used as the XML-based format to represent Digital Objects that can consist of multiple datastreams as Open Archival Information System Archival Information Packages (OAIS AIPs). Through an ingestion process, these OAIS AIPs are stored in a multitude of autonomous repositories. A Repository Index keeps track of the creation and location of all the autonomous repositories, whereas an Identifier Locator reflects in which autonomous repository a given Digital Object or OAIS AIP resides. A front-end to the complete environment---the OAI-PMH Federator---is introduced for requesting OAIS Dissemination Information Packages (OAIS DIPs). These OAIS DIPs can be the stored OAIS AIPs themselves, or transformations thereof. This front-end allows OAI-PMH harvesters to recurrently and selectively collect batches of OAIS DIPs from aDORe, and hence to create multiple, parallel services using the collected objects. Another front-end---the OpenURL Resolver---is introduced for requesting OAIS Result Sets. An OAIS Result Set is a dissemination of an individual Digital Object or of its constituent datastreams. Both front-ends make use of an MPEG-21 Digital Item Processing engine to apply those services to OAIS AIPs, Digital Objects, or constituent datastreams that were specified in a dissemination request.

Journal ArticleDOI
TL;DR: A newly developed jumping genes evolutionary paradigm is proposed for optimizing the multiobjective resource management problem in direct sequence--wideband code division multiple access systems that enables both total transmission power and total transmission rate to be simultaneously optimized.
Abstract: In this paper, a newly developed jumping genes evolutionary paradigm is proposed for optimizing the multiobjective resource management problem in direct sequence--wideband code division multiple access systems. This formulation enables both total transmission power and total transmission rate to be simultaneously optimized. Since these two objectives are conflicting in nature, a set of trade-off non-dominated solutions could be obtained without violating the quality of service. This new algorithm has been statistically tested and compared with a number of multiobjective evolutionary algorithms, including the use of the binary ε-indicator for classifying the capability of generating quality non-dominated solution sets. In addition, the capacity to find a number of extreme solutions is an extra indication of its ability to maintain diversity along the Pareto-optimal solution front in a unique fashion.
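
To make the "non-dominated set" notion for the two conflicting objectives (minimize total transmission power, maximize total transmission rate) concrete, here is a generic Pareto filter; it illustrates dominance only and is not the jumping genes operator itself, and the candidate values are invented.

```python
def dominates(a, b):
    """a dominates b if a uses no more power, achieves no less rate,
    and is strictly better in at least one objective.
    Each solution is a (power, rate) tuple."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

def non_dominated(solutions):
    front = []
    for s in solutions:
        if not any(dominates(o, s) for o in solutions if o is not s):
            front.append(s)
    return front

# Hypothetical (power, rate) outcomes of candidate resource allocations.
candidates = [(10.0, 3.0), (12.0, 3.0), (11.0, 5.0), (15.0, 6.0), (15.0, 5.5)]
print(non_dominated(candidates))   # the power/rate trade-off front
```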

Journal ArticleDOI
TL;DR: This paper gives a summary of some of the work of the Performance Evaluation Process Algebra (PEPA) project, which was awarded the 2004 Roger Needham Award from the BCS.
Abstract: This paper gives a summary of some of the work of the Performance Evaluation Process Algebra (PEPA) project, which was awarded the 2004 Roger Needham Award from the BCS. Centred on the PEPA modelling formalism, the project has sought to balance theory and practice. Theoretical developments have been tested and validated by application to a wide range of problems and such case studies have provided the stimulus for new directions in theory. Both aspects of the work are presented in summary as well as some current and future research topics.

Journal ArticleDOI
TL;DR: A non-authenticated multi-party key agreement protocol resistant to malicious participants is proposed and it is shown that the proposed protocol is provably secure against passive adversaries and malicious participants.
Abstract: By its very nature, a non-authenticated multi-party key agreement protocol cannot provide participant and message authentication, so it must rely on an authenticated network channel. This paper presents the inability of two famous multi-party key agreement protocols to withstand malicious participant attacks, even though their protocols are based on the authenticated network channel. This attack involves a malicious participant disrupting the multi-party key agreement among honest participants. In this case, other honest participants do not correctly agree on a common key. Obviously, the malicious participant cannot obtain the common key either, and the communication confidentiality among participants is not breached. However, in some emergency situations or applications, a multi-party key agreement protocol design that is resistant to malicious participants is useful. Therefore, in this paper, a non-authenticated multi-party key agreement protocol resistant to malicious participants is proposed. The proposed robust protocol requires constant rounds to establish a common key. Each participant broadcasts a constant number of messages. Under the assumption of the Decision Diffie--Hellman problem and the random oracle model, we will show that the proposed protocol is provably secure against passive adversaries and malicious participants.

Journal ArticleDOI
TL;DR: This paper introduces two honeycomb graphical models in which the voxels are hexagonal prisms, and shows that these are the only possible models under certain reasonable conditions.
Abstract: In this paper we investigate the advantages of using hexagonal grids in raster and volume graphics. In 2D, we present a hexagonal graphical model based on a hexagonal grid. In 3D, we introduce two honeycomb graphical models in which the voxels are hexagonal prisms, and we show that these are the only possible models under certain reasonable conditions. In the framework of the proposed models we design the 2D and 3D analytical honeycomb geometry of linear objects as well as of circles and spheres. We demonstrate certain advantages of the honeycomb models and address algorithmic and complexity issues.
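
For readers unfamiliar with hexagonal rasters, the sketch below computes cell centres and neighbours for a pointy-top hexagonal grid using standard axial coordinates; it is generic background on hexagonal grids, not the analytical honeycomb geometry developed in the paper.

```python
import math

def hex_centre(q, r, size=1.0):
    """Cartesian centre of the hexagonal cell with axial coordinates (q, r)
    on a pointy-top hexagonal grid with circumradius `size`."""
    x = size * math.sqrt(3) * (q + r / 2)
    y = size * 1.5 * r
    return x, y

def hex_neighbours(q, r):
    """The six neighbours of cell (q, r); every hexagonal pixel has exactly
    six equidistant neighbours, one often-cited advantage over square grids."""
    offsets = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
    return [(q + dq, r + dr) for dq, dr in offsets]

print(hex_centre(2, 1))
print(hex_neighbours(0, 0))
```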

Journal ArticleDOI
TL;DR: The boundary line reuse latency (BLRL) as discussed by the authors was proposed to address the problem of the unknown hardware state at the beginning of each sample by warming up the hardware state before each sample.
Abstract: Current computer architecture research relies heavily on architectural simulation to obtain insight into the cycle-level behavior of modern microarchitectures. Unfortunately, such architectural simulations are extremely time-consuming. Sampling is an often-used technique to reduce the total simulation time. This is achieved by selecting a limited number of samples from a complete benchmark execution. One important issue with sampling, however, is the unknown hardware state at the beginning of each sample. Several approaches have been proposed to address this problem by warming up the hardware state before each sample. This paper presents the boundary line reuse latency (BLRL) which is an accurate and efficient warmup strategy. BLRL considers reuse latencies (between memory references to the same memory location) that cross the boundary line between the pre-sample and the sample to compute the warmup that is required for each sample. This guarantees a nearly perfect warmup state at the beginning of a sample. Our experimental results obtained using detailed processor simulation of SPEC CPU2000 benchmarks show that BLRL significantly outperforms the previously proposed memory reference reuse latency (MRRL) warmup strategy. BLRL achieves a warmup that is only half the warmup for MRRL on average for the same level of accuracy.
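
The sketch below conveys the flavour of reuse-latency-based warmup on a toy address trace: for each reference inside the sample, find its previous use in the pre-sample, and begin warming up at the earliest such previous use. This is a simplified illustration of the idea, not the exact BLRL algorithm (which, among other things, works with a percentile of the cross-boundary reuse latencies).

```python
def warmup_start(trace, sample_start):
    """trace: list of memory addresses; the sample starts at index sample_start.
    Returns the pre-sample index at which warmup should begin so that every
    cross-boundary reuse in the sample finds its earlier reference warmed up."""
    last_use = {}
    for i, addr in enumerate(trace[:sample_start]):
        last_use[addr] = i                  # most recent pre-sample use of addr
    earliest = sample_start                 # default: no warmup needed
    for addr in trace[sample_start:]:
        if addr in last_use:
            earliest = min(earliest, last_use[addr])
    return earliest

trace = [1, 2, 3, 2, 4, 5, 2, 3, 6]   # hypothetical address stream
print(warmup_start(trace, sample_start=5))   # -> 2 (earliest cross-boundary reuse)
```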

Journal ArticleDOI
TL;DR: The main idea is to employ a refined encoding scheme, which transforms large itemsets into large 2-itemsets and thereby makes the application of perfect hashing feasible, and the results demonstrate that the new method is also efficient, and scalable when the database size increases.
Abstract: Hashing schemes are widely used to improve the performance of data mining association rules, as in the DHP algorithm that utilizes the hash table in identifying the validity of candidate itemsets according to the number of the table's bucket accesses. However, since the hash table used in DHP is plagued by the collision problem, the process of generating large itemsets at each level requires two database scans, which leads to poor performance. In this paper we propose perfect hashing schemes to avoid collisions in the hash table. The main idea is to employ a refined encoding scheme, which transforms large itemsets into large 2-itemsets and thereby makes the application of perfect hashing feasible. Our experimental results demonstrate that the new method is also efficient (about three times faster than DHP), and scalable when the database size increases. We also propose another variant of the perfect hash scheme with reduced memory requirements. The properties and performances of several perfect hashing schemes are also investigated and compared.
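
The key enabler described above, mapping 2-itemsets to buckets without collisions, can be illustrated with the standard triangular pair encoding over n items; this is a generic collision-free pair index shown for intuition, not necessarily the exact encoding used in the paper.

```python
def pair_index(i, j, n):
    """Collision-free index of the 2-itemset {i, j} with 0 <= i < j < n.
    Enumerates all n*(n-1)/2 pairs exactly once (row-major triangular order)."""
    assert 0 <= i < j < n
    return i * n - i * (i + 1) // 2 + (j - i - 1)

n = 5
counts = [0] * (n * (n - 1) // 2)        # one counter per candidate 2-itemset
transactions = [[0, 1, 3], [1, 3, 4], [0, 1, 4]]
for t in transactions:
    for a in range(len(t)):
        for b in range(a + 1, len(t)):
            counts[pair_index(t[a], t[b], n)] += 1   # direct, collision-free count

print(counts[pair_index(1, 3, n)])   # support of itemset {1, 3} -> 2
```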

Journal ArticleDOI
TL;DR: The paper illustrates this using the branch coverage adequacy criterion and develops a branch adequacy equivalence relation and a testability transformation for restructuring, and presents a proof that the transformation preserves branch adequacy.
Abstract: Test data generation by hand is a tedious, expensive and error-prone activity, yet testing is a vital part of the development process. Several techniques have been proposed to automate the generation of test data, but all of these are hindered by the presence of unstructured control flow. This paper addresses the problem using testability transformation. Testability transformation does not preserve the traditional meaning of the program, rather it deals with preserving test-adequate sets of input data. This requires new equivalence relations which, in turn, entail novel proof obligations. The paper illustrates this using the branch coverage adequacy criterion and develops a branch adequacy equivalence relation and a testability transformation for restructuring. It then presents a proof that the transformation preserves branch adequacy.

Journal ArticleDOI
TL;DR: The problems of state assignment, flip-flop polarity selection and output polarity selection have been integrated into a single genetic algorithmic formulation, and the quality of the solution obtained and the high rate of convergence have established the effectiveness of the GA in solving this difficult problem.
Abstract: This paper presents a genetic algorithm (GA)-based approach for the synthesis of a finite state machine (FSM). Three aspects---state assignment, choice of polarity for the state bits and the polarities of the primary outputs---significantly affect the cost of the combinational logic synthesized for an FSM. Thus, the problems of state assignment, flip-flop polarity selection and output polarity selection have been integrated into a single genetic algorithmic formulation. The experiments performed on a large suite of benchmarks have established the fact that this tool outperforms the existing two-level state assignment algorithms. The quality of the solution obtained and the high rate of convergence have established the effectiveness of the GA in solving this difficult problem.
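
One compact way to see the integrated encoding is a chromosome that concatenates a binary code per state with polarity bits. The toy fitness below (weighted Hamming distance between codes of frequently co-transitioning states) is a classical state-assignment heuristic used here only for illustration; the transition weights, chromosome layout and evolutionary loop are all assumptions, not the paper's formulation.

```python
import random

STATES = ["s0", "s1", "s2", "s3"]
BITS = 2                                  # state code length
# Hypothetical transition frequencies between pairs of states.
EDGES = {("s0", "s1"): 5, ("s1", "s2"): 3, ("s2", "s3"): 4, ("s0", "s3"): 1}

def decode(chrom):
    """Chromosome layout: one BITS-bit code per state, then one flip-flop
    polarity bit per state register (output polarities omitted for brevity)."""
    codes = {s: tuple(chrom[i * BITS:(i + 1) * BITS]) for i, s in enumerate(STATES)}
    polarity = chrom[len(STATES) * BITS:]
    return codes, polarity

def fitness(chrom):
    """Toy cost: states that exchange transitions often should receive adjacent
    codes (small Hamming distance), a common state-assignment heuristic."""
    codes, _ = decode(chrom)
    if len(set(codes.values())) < len(STATES):
        return float("inf")               # invalid assignment: codes must differ
    return sum(w * sum(a != b for a, b in zip(codes[u], codes[v]))
               for (u, v), w in EDGES.items())

def random_chrom():
    return [random.randint(0, 1) for _ in range(len(STATES) * BITS + BITS)]

random.seed(1)
pop = [random_chrom() for _ in range(20)]
for _ in range(100):                      # crude evolutionary loop: keep the best, mutate copies
    pop.sort(key=fitness)
    pop = pop[:10] + [[b ^ int(random.random() < 0.1) for b in p] for p in pop[:10]]
print(fitness(pop[0]), decode(pop[0])[0])
```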

Journal ArticleDOI
TL;DR: The development of a broker is presented; the broker is designed within the SNAP (Service Negotiation and Acquisition Protocol) framework, focuses on applications that require resources on demand, and uses a three-phase commit protocol.
Abstract: Resource brokering is an essential component in building effective Grid systems. Existing mechanisms employ a traditional approach for resource allocation, which is likely to run into performance problems. This paper presents the development of a broker that is designed within the SNAP (Service Negotiation and Acquisition Protocol) framework and focuses on applications that require resources on demand. The broker uses a three-phase commit protocol as the traditional advance reservation facilities cannot cater to such needs due to the prior time that it requires to schedule the reservation. Experiments have been carried out on a Grid testbed, supported by mathematical modelling and simulation. The experimental results show that the inclusion of the three-phase commit protocol results in a performance enhancement in terms of the time taken from submission of user requirements until a job begins execution. The broker is a viable contender for use in future Grid resource broker implementations.

Journal ArticleDOI
TL;DR: This work derives cost functions for the processing requirements of clustered RAID in normal and degraded modes of operation and quantifies the level of load increase in order to determine the value of G which maintains an acceptable level of performance in degraded mode operation.
Abstract: RAID5 (resp. RAID6) are two popular RAID designs, which can tolerate one (resp. two) disk failures, but the load of surviving disks doubles (resp. triples) when failures occur. Clustered RAID5 (resp. RAID6) disk arrays utilize a parity group size G, which is smaller than the number of disks N, so that the redundancy level is 1/G (resp. 2/G). This enables the array to sustain a peak throughput closer to normal mode operation; e.g. the load increase for RAID5 in processing read requests is given by α = (G - 1)/(N - 1). Three methods to realize clustered RAID are balanced incomplete block designs and nearly random permutations, which are applicable to RAID5 and RAID6, and RM2, where each data block is protected by two parity disks. We derive cost functions for the processing requirements of clustered RAID in normal and degraded modes of operation. For given disk characteristics, the cost functions can be translated into disk service times, which can be used for the performance analysis of disk arrays. Numerical results are used to quantify the level of load increase in order to determine the value of G which maintains an acceptable level of performance in degraded mode operation.
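
Plugging numbers into the load-increase expression quoted above, α = (G - 1)/(N - 1), shows how a smaller parity group softens degraded-mode load; the N and G values below are illustrative only.

```python
def read_load_increase(G, N):
    """Fraction of extra read load per surviving disk in degraded-mode
    clustered RAID5, using the expression alpha = (G - 1)/(N - 1)."""
    return (G - 1) / (N - 1)

N = 10
for G in (10, 6, 4):
    print(f"N={N}, G={G}: alpha = {read_load_increase(G, N):.2f}")
# G = N reproduces plain RAID5 (alpha = 1.0, i.e. the surviving disks' read load
# doubles); smaller G trades extra parity capacity for a smaller load increase.
```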

Journal ArticleDOI
TL;DR: The IPSW algorithm (in-place sliding window) is introduced and experiments are presented that indicate that it compares favorably with traditional practical approaches, even those that do not decode in-place, while at the same time having low encoding complexity and extremely low decoding complexity.
Abstract: We present algorithms for in-place differential file compression, where a target file T of size n is compressed with respect to a source file S of size m using no space in addition to that used to replace S by T; that is, it is possible to encode using m + n + O(1) space and decode using MAX(m,n) + O(1) space (so that when decoding the source file is overwritten by the decompressed target file). From a theoretical point of view, an optimal solution (best possible compression) to this problem is known to be NP-hard, and in previous work we have presented a factor of 4 approximation (not in-place) algorithm based on a sliding window approach. Here we consider practical in-place algorithms based on sliding window compression where our focus is on decoding; that is, although in-place encoding is possible, we will allow O(m + n) space for the encoder so as to improve its speed and present very fast decoding with only MAX(m, n) + O(1) space. Although NP-hardness implies that these algorithms cannot always be optimal, the asymptotic optimality of sliding window methods along with their ability for constant-factor approximation is evidence that they should work well for this problem in practice. We introduce the IPSW algorithm (in-place sliding window) and present experiments that indicate that it compares favorably with traditional practical approaches, even those that do not decode in-place, while at the same time having low encoding complexity and extremely low decoding complexity. IPSW is most effective when S and T are reasonably well aligned (most large common substrings occur in approximately the same order). We also present a preprocessing step for string alignment that can be employed when the encoder determines significant gains will be achieved.
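
A minimal (neither in-place nor optimized) copy/literal encoder conveys the basic operation that sliding-window differencing builds on: emit copies of substrings found in the source, literals otherwise. It is a brute-force illustration of differential compression in general, not the IPSW algorithm.

```python
def diff_encode(source, target, min_match=4):
    """Encode target as ('copy', offset, length) and ('lit', bytes) operations
    that reference substrings of source (greedy longest match, brute force)."""
    ops, i, lit = [], 0, bytearray()
    while i < len(target):
        best_len, best_off = 0, 0
        for off in range(len(source)):
            length = 0
            while (off + length < len(source) and i + length < len(target)
                   and source[off + length] == target[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, off
        if best_len >= min_match:
            if lit:
                ops.append(("lit", bytes(lit)))
                lit = bytearray()
            ops.append(("copy", best_off, best_len))
            i += best_len
        else:
            lit.append(target[i])
            i += 1
    if lit:
        ops.append(("lit", bytes(lit)))
    return ops

def diff_decode(source, ops):
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            _, off, length = op
            out += source[off:off + length]
        else:
            out += op[1]
    return bytes(out)

S = b"the quick brown fox jumps over the lazy dog"
T = b"the quick red fox jumps over the lazy cat"
ops = diff_encode(S, T)
assert diff_decode(S, ops) == T
print(ops)
```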

Journal ArticleDOI
TL;DR: A generalized bottom up parser in which non-embedded recursive rules are handled directly by the underlying automaton, thus limiting stack activity to the activation of rules displaying embedded recursion, which leads to parsers that are correct for all context-free grammars, including those with hidden left recursion.
Abstract: We describe a generalized bottom-up parser in which non-embedded recursive rules are handled directly by the underlying automaton, thus limiting stack activity to the activation of rules displaying embedded recursion. Our strategy is motivated by Aycock and Horspool's approach, but uses a different automaton construction and leads to parsers that are correct for all context-free grammars, including those with hidden left recursion. The automaton features edges which directly connect states containing reduction actions with their associated goto state: hence we call the approach reduction incorporated generalized LR parsing. Our parser constructs shared packed parse forests in a style similar to that of Tomita parsers. We give formal proofs of the correctness of our algorithm, and compare it with Tomita's algorithm in terms of the space and time requirements of the running parsers and the size of the parsers' tables. Experimental results are given for standard grammars for ANSI-C and ISO-Pascal, for a non-deterministic grammar for IBM VS-COBOL, and for a small grammar that triggers asymptotic worst case behaviour in our parser.

Journal ArticleDOI
TL;DR: An approach in which the future application behaviour is constrained by the use of algorithmic skeletons, facilitating modelling with a performance oriented process algebra, and future grid resource performance is predicted by the Network Weather Service (NWS) tool is described.
Abstract: Any scheduling scheme for grid applications must make implicit or explicit assumptions about both the future behaviour of the application and the future availability and performance of grid resources. This paper describes an approach in which the future application behaviour is constrained by the use of algorithmic skeletons, facilitating modelling with a performance oriented process algebra, and future grid resource performance is predicted by the Network Weather Service (NWS) tool. The concept is illustrated through a case study involving Pipeline and Deal skeletons. A tool is presented which automatically generates and solves a set of models which are parameterised with information obtained from NWS. Some numerical results and timing information on the use of the tool are provided, illustrating the efficacy of this approach.
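
The scheduling decision sketched above, choosing a mapping of pipeline stages to resources given predicted resource performance, can be illustrated by minimizing the bottleneck stage time; the per-stage work figures and predicted host speeds below stand in for NWS-style forecasts and are invented for the example.

```python
from itertools import permutations

# Hypothetical per-stage work (operations) and predicted host speeds (ops/s),
# the latter standing in for NWS-style performance forecasts.
stage_work = {"stage1": 120.0, "stage2": 300.0, "stage3": 180.0}
predicted_speed = {"hostA": 50.0, "hostB": 120.0, "hostC": 80.0}

def bottleneck(mapping):
    """Steady-state period of a pipeline is set by its slowest stage."""
    return max(stage_work[s] / predicted_speed[h] for s, h in mapping.items())

best = min((dict(zip(stage_work, hosts))
            for hosts in permutations(predicted_speed)),
           key=bottleneck)
print(best, "period:", bottleneck(best))
```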

Journal ArticleDOI
TL;DR: A technique for detecting plagiarism in computer code is presented, which is easier to implement than existing methods and has the advantage of distinguishing between the originator and the copiers.
Abstract: We present a technique for detecting plagiarism in computer code, which is easier to implement than existing methods and has the advantage of distinguishing between the originator and the copiers. We record our experience using it to monitor a large group studying Java programming in an automated learning environment.

Journal ArticleDOI
TL;DR: The techniques in this paper have achieved significant performance improvements on the industry standard SPEC* OMPM2001 and SPEC*OMPL2001 benchmarks, and these performance results are presented for Intel® Pentium® and Itanium® processor based systems.
Abstract: State-of-the-art multiprocessor systems pose several difficulties: (i) the user has to parallelize the existing serial code; (ii) explicitly threaded programs using a thread library are not portable; (iii) writing efficient multi-threaded programs requires intimate knowledge of the machine's architecture and micro-architecture. Thus, well-tuned parallelizing compilers are in high demand to leverage state-of-the-art advances such as NUMA-based multiprocessors, simultaneous multi-threading processors and chip-multiprocessor systems, in response to the performance demands of the high-performance computing community. On the other hand, OpenMP* has emerged as the industry standard parallel programming model. Applications can be parallelized using OpenMP with less effort in a way that is portable across a wide range of multiprocessor systems. In this paper, we present several practical compiler optimization techniques and discuss their effect on the performance of OpenMP programs. We elaborate on the major design considerations in a high performance OpenMP compiler and present experimental data based on the implementation of the optimizations in the Intel® C++ and Fortran compilers. Interactions of the OpenMP transformation with other sequential optimizations in the compiler are discussed. The techniques in this paper have achieved significant performance improvements on the industry standard SPEC* OMPM2001 and SPEC* OMPL2001 benchmarks, and these performance results are presented for Intel® Pentium® and Itanium® processor based systems.

Journal ArticleDOI
TL;DR: This paper proves that static slicing algorithms produce dataflow minimal end slices for programs which can be represented as schemas which are free and liberal.
Abstract: Program slicing is an automated source code extraction technique that has been applied to a number of problems including testing, debugging, maintenance, reverse engineering, program comprehension, reuse and program integration. In all these applications the size of the slice is crucial; the smaller the better. It is known that statement minimal slices are not computable, but the question of dataflow minimal slicing has remained open since Weiser posed it in 1979. This paper proves that static slicing algorithms produce dataflow minimal end slices for programs which can be represented as schemas which are free and liberal.

Journal ArticleDOI
TL;DR: This paper improves previous approximate DBQ algorithms by applying a combination of the approximation techniques in the same query algorithm (hybrid approximation scheme) and investigates the performance of these improvements for one of the most representative DBQs in high-dimensional data spaces.
Abstract: In modern database applications the similarity or dissimilarity of complex objects is examined by performing distance-based queries (DBQs) on data of high dimensionality. The R-tree and its variations are commonly cited multidimensional access methods that can be used for answering such queries. Although the related algorithms work well for low-dimensional data spaces, their performance degrades as the number of dimensions increases (dimensionality curse). In order to obtain acceptable response time in high-dimensional data spaces, algorithms that obtain approximate solutions can be used. Approximation techniques, like N-consider (based on the tree structure), α-allowance and ε-approximate (based on distance), or Time-consider (based on time) can be applied in branch-and-bound algorithms for DBQs in order to control the trade-off between cost and accuracy of the result. In this paper, we improve previous approximate DBQ algorithms by applying a combination of the approximation techniques in the same query algorithm (hybrid approximation scheme). We investigate the performance of these improvements for one of the most representative DBQs (the K-closest pairs query, K-CPQ) in high-dimensional data spaces, as well as the influence of the algorithmic parameters on the control of the trade-off between the response time and the accuracy of the result. The outcome of the experimental evaluation, using synthetic and real datasets, is the identification of the best-performing approximate DBQ algorithm for large high-dimensional point datasets.
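
To make the distance-based approximation concrete, the sketch below shows ε-approximate pruning in a simple branch-and-bound over pre-grouped points for a closest-pair style query: a pair of groups is pruned when even its minimum possible distance, inflated by (1 + ε), cannot beat the current best, so the answer is within a factor (1 + ε) of optimal. This is a generic illustration of the technique over flat groups, not the paper's R-tree algorithm.

```python
import math
from itertools import product

def mindist(box_a, box_b):
    """Minimum possible distance between two axis-aligned boxes ((lo, hi) per dim)."""
    d = 0.0
    for (alo, ahi), (blo, bhi) in zip(box_a, box_b):
        gap = max(alo - bhi, blo - ahi, 0.0)
        d += gap * gap
    return math.sqrt(d)

def bbox(points):
    return tuple((min(p[i] for p in points), max(p[i] for p in points))
                 for i in range(len(points[0])))

def approx_closest_pair(groups_p, groups_q, eps=0.2):
    """Branch-and-bound over point groups with epsilon-approximate pruning:
    a group pair is skipped if mindist * (1 + eps) >= best found so far."""
    best = math.inf
    for gp, gq in product(groups_p, groups_q):
        if mindist(bbox(gp), bbox(gq)) * (1 + eps) >= best:
            continue                      # approximate pruning step
        for p, q in product(gp, gq):
            best = min(best, math.dist(p, q))
    return best

P = [[(0, 0), (1, 1)], [(10, 10), (11, 12)]]      # points of set P, pre-grouped
Q = [[(2, 2), (3, 1)], [(10.5, 10.2), (20, 20)]]  # points of set Q, pre-grouped
print(approx_closest_pair(P, Q))
```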

Journal ArticleDOI
TL;DR: An alternative approach to classical results on aperiodic tilings is discussed in the hope of providing additional intuition, not apparent in classical works.
Abstract: Classical results on aperiodic tilings are rather complicated and not widely understood. In the present article, an alternative approach to these results is discussed in the hope of providing additional intuition, not apparent in classical works.