
Showing papers in "IEEE Computer in 1995"


Journal ArticleDOI
TL;DR: The Query by Image Content (QBIC) system as discussed by the authors allows queries on large image and video databases based on example images, user-constructed sketches and drawings, selected color and texture patterns, camera and object motion, and other graphical information.
Abstract: Research on ways to extend and improve query methods for image databases is widespread. We have developed the QBIC (Query by Image Content) system to explore content-based retrieval methods. QBIC allows queries on large image and video databases based on example images, user-constructed sketches and drawings, selected color and texture patterns, camera and object motion, and other graphical information. Two key properties of QBIC are (1) its use of image and video content (computable properties of color, texture, shape and motion of images, videos and their objects) in the queries, and (2) its graphical query language, in which queries are posed by drawing, selecting and other graphical means. This article describes the QBIC system and demonstrates its query capabilities. QBIC technology is part of several IBM products.

3,957 citations
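
As a rough illustration of the kind of query-by-example retrieval the QBIC abstract describes (this is not the QBIC implementation; the bin count, image names, and database layout below are hypothetical), the following Python sketch ranks database images by the L1 distance between quantized color histograms:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize an RGB image (H x W x 3, values 0-255) into a normalized
    color histogram with `bins` levels per channel."""
    quantized = (image // (256 // bins)).reshape(-1, 3)
    # Map each pixel's (r, g, b) bin triple to a single histogram index.
    idx = quantized[:, 0] * bins * bins + quantized[:, 1] * bins + quantized[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def query_by_example(example, database, k=5):
    """Return the k database keys whose histograms are closest (L1 distance)
    to the example image's histogram."""
    q = color_histogram(example)
    scores = {name: np.abs(q - color_histogram(img)).sum()
              for name, img in database.items()}
    return sorted(scores, key=scores.get)[:k]

# Hypothetical usage with random stand-in images:
rng = np.random.default_rng(0)
db = {f"img{i}": rng.integers(0, 256, (64, 64, 3)) for i in range(20)}
print(query_by_example(db["img3"], db, k=3))
```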


Journal ArticleDOI
TL;DR: Dynamic instrumentation lets us defer insertion until the moment it is needed (and remove it when it is no longer needed); Paradyn's Performance Consultant decides when and where to insert instrumentation.
Abstract: Paradyn is a tool for measuring the performance of large-scale parallel programs. Our goal in designing a new performance tool was to provide detailed, flexible performance information without incurring the space (and time) overhead typically associated with trace-based tools. Paradyn achieves this goal by dynamically instrumenting the application and automatically controlling this instrumentation in search of performance problems. Dynamic instrumentation lets us defer insertion until the moment it is needed (and remove it when it is no longer needed); Paradyn's Performance Consultant decides when and where to insert instrumentation.

864 citations
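
Paradyn instruments running binaries; as a loose, language-level analogy only (not Paradyn's mechanism), the Python sketch below defers inserting a timing probe until it is wanted and removes it afterwards. The function name and the decision to instrument are hypothetical stand-ins for what Paradyn's Performance Consultant automates:

```python
import time
from collections import defaultdict

metrics = defaultdict(float)   # accumulated time per instrumented function
_originals = {}                # saved uninstrumented versions

def insert_probe(namespace, name):
    """Dynamically wrap namespace[name] with a timing probe; insertion is
    deferred until the data is actually wanted."""
    original = namespace[name]
    _originals[name] = original
    def probed(*args, **kwargs):
        start = time.perf_counter()
        try:
            return original(*args, **kwargs)
        finally:
            metrics[name] += time.perf_counter() - start
    namespace[name] = probed

def remove_probe(namespace, name):
    """Remove the probe once the data is no longer needed."""
    namespace[name] = _originals.pop(name)

def work(n):
    return sum(i * i for i in range(n))

insert_probe(globals(), "work")   # a performance tool would decide this point
work(100_000)
remove_probe(globals(), "work")
print(metrics["work"])
```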


Journal ArticleDOI
TL;DR: As discussed by the authors, a content-based image retrieval (CBIR) system is required to use information from these image repositories effectively and efficiently; such a system helps users (even those unfamiliar with the database) retrieve relevant images based on their contents.
Abstract: Images are being generated at an ever-increasing rate by sources such as defence and civilian satellites, military reconnaissance and surveillance flights, fingerprinting and mug-shot-capturing devices, scientific experiments, biomedical imaging, and home entertainment systems. For example, NASA's Earth Observing System will generate about 1 terabyte of image data per day when fully operational. A content-based image retrieval (CBIR) system is required to effectively and efficiently use information from these image repositories. Such a system helps users (even those unfamiliar with the database) retrieve relevant images based on their contents. Application areas in which CBIR is a principal activity are numerous and diverse. With the recent interest in multimedia systems, CBIR has attracted the attention of researchers across several disciplines.

854 citations


Journal ArticleDOI
TL;DR: This work presents an approach that integrates a relational database retrieval system with a color analysis technique, and shows how using a coarse granularity for content analysis improves the ability to retrieve images efficiently.
Abstract: Selecting from a large, expanding collection of images requires carefully chosen search criteria. We present an approach that integrates a relational database retrieval system with a color analysis technique. The Chabot project was initiated at our university to study storage and retrieval of a vast collection of digitized images. These images are from the State of California Department of Water Resources. The goal was to integrate a relational database retrieval system with content analysis techniques that would give our querying system a better method for handling images. Our simple color analysis method, if used in conjunction with other search criteria, improves our ability to retrieve images efficiently. The best result is obtained when text-based search criteria are combined with content-based criteria and when a coarse granularity is used for content analysis.

780 citations
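
As a sketch of the combination the Chabot abstract describes, with text-based relational criteria narrowing candidates before a coarse color test refines them, here is a minimal Python example; the schema, the "mostly blue" criterion, and the data are invented for illustration:

```python
import sqlite3

def is_mostly_blue(pixels, threshold=0.5):
    """Coarse color test: a hypothetical 'mostly blue' criterion that checks
    whether at least `threshold` of the pixels have blue as the dominant channel."""
    blue = sum(1 for (r, g, b) in pixels if b > r and b > g)
    return blue / len(pixels) >= threshold

# A toy relational catalog standing in for the photo metadata.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (id INTEGER, region TEXT, year INTEGER)")
conn.executemany("INSERT INTO photos VALUES (?, ?, ?)",
                 [(1, "Lake Tahoe", 1993), (2, "Delta", 1994), (3, "Lake Tahoe", 1994)])

# Hypothetical pixel data keyed by photo id.
pixel_store = {1: [(10, 20, 200)] * 80 + [(200, 20, 10)] * 20,
               2: [(200, 180, 90)] * 100,
               3: [(30, 40, 220)] * 100}

# Text-based criteria narrow the candidates; the coarse color test refines them.
candidates = conn.execute(
    "SELECT id FROM photos WHERE region = ? AND year >= ?", ("Lake Tahoe", 1993))
hits = [pid for (pid,) in candidates if is_mostly_blue(pixel_store[pid])]
print(hits)   # -> [1, 3]
```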


Journal ArticleDOI
TL;DR: Open questions in code cognition models are identified; they relate to the scalability of existing experimental results with small programs, the validity and credibility of results based on experimental procedures, and the challenges of data availability.
Abstract: Code cognition models examine how programmers understand program code. The authors survey the current knowledge in this area by comparing six program comprehension models: the Letovsky (1986) model; the Shneiderman and Mayer (1979) model; the Brooks (1983) model; Soloway, Adelson and Ehrlich's (1988) top-down model; Pennington's (1987) bottom-up model; and the integrated metamodel of von Mayrhauser and Vans (1994). While these general models can foster a complete understanding of a piece of code, they may not always apply to specialized tasks that more efficiently employ strategies geared toward partial understanding. We identify open questions, particularly considering the maintenance and evolution of large-scale code. These questions relate to the scalability of existing experimental results with small programs, the validity and credibility of results based on experimental procedures, and the challenges of data availability.

606 citations


Journal Article
TL;DR: Application areas in which CBIR is a principal activity are numerous and diverse, and with the recent interest in multimedia systems, CBIR has attracted the attention of researchers across several disciplines.

551 citations


Journal ArticleDOI
TL;DR: This tutorial highlights the unique issues and data storage characteristics that concern designers in the real-time processing of multimedia data.
Abstract: Real-time processing of multimedia data is required of those who offer audio and video on-demand. This tutorial highlights the unique issues and data storage characteristics that concern designers.

465 citations


Journal ArticleDOI
TL;DR: Implementing the Responsive Workbench required close attention to several important elements: its components, a typical setup, the user interface, feedback speed and real-time rendering.
Abstract: In this virtual environment, customized scientific visualization tools offer specialists new ways to work cooperatively, which opens the door to more comprehensive analysis and, perhaps, reduced costs. Implementing the Responsive Workbench required close attention to several important elements: its components, a typical setup, the user interface, feedback speed and real-time rendering.

437 citations


Journal ArticleDOI
TL;DR: In this paper, the authors divide scheduling theory between uniprocessor and multiprocessor results, further divide the multiprocessor work between static and dynamic algorithms, and survey the complexity, fundamental limits and performance bounds known for many scheduling problems.
Abstract: Knowledge of complexity, fundamental limits and performance bounds (well known for many scheduling problems) helps real-time designers choose a good design and algorithm and avoid poor ones. The scheduling problem has so many dimensions that it has no accepted taxonomy. We divide scheduling theory between uniprocessor and multiprocessor results. In the uniprocessor section, we begin with independent tasks and then consider shared resources and overload. In the multiprocessor section, we divide the work between static and dynamic algorithms.

387 citations
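
One classic example of the kind of uniprocessor performance bound the abstract refers to is the Liu and Layland utilization bound for independent periodic tasks under rate-monotonic priorities. The Python sketch below applies it to a hypothetical task set (an illustrative example, not code from the article):

```python
def rm_utilization_test(tasks):
    """Sufficient (not necessary) schedulability test for independent periodic
    tasks under rate-monotonic priorities on a uniprocessor:
    sum(C_i / T_i) <= n * (2**(1/n) - 1)   (the Liu & Layland bound)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization, bound, utilization <= bound

# Hypothetical task set: (worst-case execution time, period) pairs.
tasks = [(1, 4), (1, 5), (2, 10)]
u, bound, ok = rm_utilization_test(tasks)
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable by this test: {ok}")
```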


Journal ArticleDOI
TL;DR: The Terasys project, as discussed by the authors, uses processor-in-memory (PIM) chips, in which a single-bit ALU controls each column of memory, to run data parallel bit C (dbC) applications.
Abstract: SRC researchers have designed and fabricated a processor-in-memory (PIM) chip, a standard 4-bit memory augmented with a single-bit ALU controlling each column of memory. In principle, PIM chips can replace the memory of any processor, including a supercomputer. To validate the notion of integrating SIMD computing into conventional processors on a more modest scale, we have built a half dozen Terasys workstations, which are Sun Microsystems Sparcstation-2 workstations in which 8 megabytes of address space consist of PIM memory holding 32K single-bit ALUs. We have designed and implemented a high-level parallel language, called data parallel bit C (dbC), for Terasys and demonstrated that dbC applications using the PIM memory as a SIMD array run at the speed of multiple Cray-YMP processors. Thus, we can deliver supercomputer performance for a small fraction of supercomputer cost. Since the successful creation of the Terasys research prototype, we have begun work on processing in memory in a supercomputer setting. In a collaborative research project, we are working with Cray Computer to incorporate a new Cray-designed implementation of the PIM chips into two octants of Cray-3 memory.

367 citations
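
To convey the bit-serial, column-parallel style of computation the Terasys abstract describes (this simulates the idea in software; it is not dbC and makes no claims about the real PIM hardware), the following Python sketch adds two sets of numbers stored bitwise across memory columns, one bit row per step:

```python
import numpy as np

def bit_serial_add(a_bits, b_bits):
    """Simulate column-parallel, bit-serial addition: a_bits and b_bits are
    (n_bits, n_columns) arrays of 0/1, least significant bit first; every
    column's single-bit ALU processes one bit per step."""
    n_bits, n_cols = a_bits.shape
    result = np.zeros((n_bits + 1, n_cols), dtype=np.uint8)
    carry = np.zeros(n_cols, dtype=np.uint8)
    for i in range(n_bits):                      # one memory row per step
        s = a_bits[i] ^ b_bits[i] ^ carry        # sum bit, all columns at once
        carry = (a_bits[i] & b_bits[i]) | (carry & (a_bits[i] ^ b_bits[i]))
        result[i] = s
    result[n_bits] = carry
    return result

def to_bits(values, n_bits):
    return np.array([[(v >> i) & 1 for v in values] for i in range(n_bits)],
                    dtype=np.uint8)

a, b = [3, 10, 7, 255], [5, 6, 9, 1]
out = bit_serial_add(to_bits(a, 8), to_bits(b, 8))
print([sum(int(out[i, c]) << i for i in range(9)) for c in range(4)])  # [8, 16, 16, 256]
```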


Journal ArticleDOI
TL;DR: This work discusses the central issues in similar-shape retrieval and explains how these issues are resolved in a shape retrieval scheme called FIBSSR (Feature Index-Based Similar-Shape Retrieval).
Abstract: Addresses the problem of similar-shape retrieval, where shapes or images in a shape database that satisfy specified shape-similarity constraints with respect to the query shape or image must be retrieved from the database. In its simplest form, the similar-shape retrieval problem can be stated as, "retrieve or select all shapes or images that are visually similar to the query shape or the query image's shape". We focus on databases of 2D shapes or, equivalently, databases of images of flat or almost flat objects. (We use the terms "object" and "shape" interchangeably). Two common types of 2D objects are rigid objects, which have a single rigid component called a link, and articulated objects, which have two or more rigid components joined by movable (rotating or sliding) joints. An ideal similar-shape retrieval technique must be general enough to handle images of articulated as well as rigid objects. It must be flexible enough to handle simple query images, which have isolated shapes, and complex query images, which have partially visible, overlapping or touching objects. We discuss the central issues in similar-shape retrieval and explain how these issues are resolved in a shape retrieval scheme called FIBSSR (Feature Index-Based Similar-Shape Retrieval). This new similar-shape retrieval system effectively models real-world applications.
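
The article's FIBSSR scheme defines its own feature index; purely as a generic sketch of feature-based similar-shape retrieval (the features and shapes below are invented for illustration), this Python example ranks shapes by distance between simple scale-invariant descriptors:

```python
import math

def shape_features(polygon):
    """Compute simple scale- and translation-invariant features from a polygon
    given as a list of (x, y) vertices: compactness and bounding-box aspect ratio.
    (These stand in for whatever feature set a real system would index.)"""
    n = len(polygon)
    area = abs(sum(polygon[i][0] * polygon[(i + 1) % n][1]
                   - polygon[(i + 1) % n][0] * polygon[i][1] for i in range(n))) / 2
    perim = sum(math.dist(polygon[i], polygon[(i + 1) % n]) for i in range(n))
    compactness = 4 * math.pi * area / (perim ** 2)      # 1.0 for a circle
    xs, ys = zip(*polygon)
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    aspect = min(w, h) / max(w, h)
    return (compactness, aspect)

def retrieve_similar(query, database, k=2):
    """Rank database shapes by Euclidean distance in feature space."""
    qf = shape_features(query)
    dist = {name: math.dist(qf, shape_features(p)) for name, p in database.items()}
    return sorted(dist, key=dist.get)[:k]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
rect   = [(0, 0), (6, 0), (6, 1), (0, 1)]
tri    = [(0, 0), (4, 0), (2, 3)]
print(retrieve_similar([(0, 0), (3, 0), (3, 3), (0, 3)],     # a larger square
                       {"square": square, "rectangle": rect, "triangle": tri}))
```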

Journal ArticleDOI
TL;DR: Through observations of many recently completed and in-progress projects, the authors have come up with ten guidelines that, if adhered to, greatly increase a project's chances for success.
Abstract: Producing correct, reliable software in systems of ever increasing complexity is a problem with no immediate end in sight. The software industry suffers from a plague of bugs on a near-biblical scale. One promising technique in alleviating this problem is the application of formal methods that provide a rigorous mathematical basis to software development. When correctly applied, formal methods produce systems of the highest integrity and thus are especially recommended for security- and safety-critical systems. Unfortunately, although projects based on formal methods are proliferating, the use of these methods is still more the exception than the rule, which results from many misconceptions regarding their costs, difficulties, and payoffs. Surveys of formal methods applied to large problems in industry help dispel these misconceptions and show that formal methods projects can be completed on schedule and within budget. Moreover, these surveys show that formal methods projects produce correct software (and hardware) that is well structured, maintainable, and satisfies customer requirements. Through observations of many recently completed and in-progress projects we have come up with ten guidelines that, if adhered to, greatly increase a project's chances for success.

Journal ArticleDOI
TL;DR: This work surveys several fault injection studies and discusses tools such as React (Reliable Architecture Characterization Tool) that facilitate its application.
Abstract: A fault tolerant computer system's dependability must be validated to ensure that its redundancy has been correctly implemented and the system will provide the desired level of reliable service. Fault injection (the deliberate insertion of faults into an operational system to determine its response) offers an effective solution to this problem. We survey several fault injection studies and discuss tools such as React (Reliable Architecture Characterization Tool) that facilitate its application.
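
As a minimal sketch of software-implemented fault injection (not the React tool itself; the "system under test" and fault model below are hypothetical), the Python example flips single bits in stored data and counts how many injected faults actually alter the system's observable output:

```python
import random

def system_output(data):
    """Toy 'system under test': report whether any sensor reading exceeds a limit."""
    return max(data) > 200

def inject_bit_flip(data, index, bit):
    """Deliberately flip one bit of one stored word, emulating a transient fault."""
    faulty = list(data)
    faulty[index] ^= (1 << bit)
    return faulty

random.seed(1)
data = list(range(100))          # fault-free readings
golden = system_output(data)     # golden-run result

changed = 0
trials = 1000
for _ in range(trials):
    faulty = inject_bit_flip(data, random.randrange(len(data)), random.randrange(8))
    if system_output(faulty) != golden:
        changed += 1
print(f"{changed}/{trials} injected faults altered the output; the rest were masked")
```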

Journal ArticleDOI
TL;DR: The article examines extant Web access patterns with the aim of developing more efficient file-caching and prefetching strategies.
Abstract: To support continued growth, WWW servers must manage a multigigabyte (in some instances a multiterabyte) database of multimedia information while concurrently serving multiple request streams. This places demands on the servers' underlying operating systems and file systems that lie far outside today's normal operating regime. Simply put, WWW servers must become more adaptive and intelligent. The first step on this path is understanding extant access patterns and responses. The article examines extant Web access patterns with the aim of developing more efficient file-caching and prefetching strategies.
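
As a sketch of how access patterns feed into cache-policy evaluation (the trace below is invented, not the article's measured workload), this Python example replays a skewed request stream through an LRU file cache and reports hit rates at several cache sizes:

```python
from collections import OrderedDict

def simulate_lru(trace, capacity):
    """Replay a request trace through an LRU cache of `capacity` documents
    and report the hit rate (a first step toward evaluating caching policies)."""
    cache = OrderedDict()
    hits = 0
    for doc in trace:
        if doc in cache:
            hits += 1
            cache.move_to_end(doc)
        else:
            cache[doc] = True
            if len(cache) > capacity:
                cache.popitem(last=False)    # evict the least recently used document
    return hits / len(trace)

# Hypothetical trace with the skewed popularity typical of Web workloads:
# a few hot documents requested far more often than the rest.
trace = (["index.html", "logo.gif"] * 300
         + [f"paper{i}.ps" for i in range(200)]
         + ["index.html"] * 100)
for capacity in (1, 10, 50):
    print(capacity, round(simulate_lru(trace, capacity), 3))
```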

Journal ArticleDOI
TL;DR: In this article, the authors developed an automatic indexing system for captioned pictures of people; the indexing information and other textual information is subsequently used in a content-based image retrieval system.
Abstract: The interaction of textual and photographic information in an integrated text/image database environment is being explored. Specifically, our research group has developed an automatic indexing system for captioned pictures of people; the indexing information and other textual information is subsequently used in a content-based image retrieval system. Our approach presents an alternative to traditional face identification systems; it goes beyond a superficial combination of existing text-based and image-based approaches to information retrieval. By understanding the caption accompanying a picture, we can extract information that is useful both for retrieving the picture and for identifying the faces shown. In designing a pictorial database system, two major issues are (1) the amount and type of processing required when inserting new pictures into the database and (2) efficient retrieval schemes for query processing. Our research has focused on developing a computational model for understanding pictures based on accompanying descriptive text. Understanding a picture can be informally defined as the process of identifying relevant people and objects. Several current vision systems employ the idea of top-down control in picture understanding. We carry the notion of top-down control one step further, exploiting not only general context but also picture-specific context.

Journal ArticleDOI
TL;DR: A pilot study is described that used virtual reality graded exposure techniques to treat acrophobia, the fear of heights, and that addresses the extent to which subjects felt they were actually present in height situations.
Abstract: Can virtual environments help elicit fearful feelings so they can be treated? This article shows how therapists and computer experts used them to do just that. We describe a pilot study that used virtual reality graded exposure techniques to treat acrophobia, the fear of heights. We specifically address two issues: the extent to which we were able to make subjects feel that they were actually present in height situations, and the efficacy of the treatment conducted using virtual height situations.

Journal ArticleDOI
TL;DR: The article identifies and examines extensions to the basic client/server model that provide explicit support for coordinating multiserver interactions.
Abstract: A major limitation in the basic client/server model is its focus on clients requesting individual services. Clients often need to invoke multiple services, coordinated to reflect how those services interrelate and contribute to the overall application. Important examples include task allocation and event notification in collaborative workgroup systems, and task sequencing and routing in workflow applications. Failure to address control requirements for such interactions has impeded development of uniform methods and tools for building many types of distributed systems with client/server architectures. The article identifies and examines extensions to the basic client/server model that provide explicit support for coordinating multiserver interactions.

Journal ArticleDOI
TL;DR: This work examines software and hardware approaches to implementing collective communication operations and describes the major classes of algorithms proposed to solve problems arising in this research area.
Abstract: Most MPC networks use wormhole routing to reduce the effect of path length on communication time. Researchers have exploited this by designing ingenious algorithms to speed collective communication. Many projects have addressed the design of efficient collective communication algorithms for wormhole-routed systems. By exploiting the relative distance-insensitivity of wormhole routing, these new algorithms often differ fundamentally from their store-and-forward counterparts. We examine software and hardware approaches to implementing collective communication operations. Although we emphasize methods in which the underlying architecture is a direct network, such as a hypercube or mesh, as opposed to an indirect switch-based network, several approaches apply to systems of either type. We illustrate several issues arising in this research area and describe the major classes of algorithms proposed to solve these problems.
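
One classic software approach of the kind surveyed is recursive-doubling broadcast on a hypercube, which completes in log2(p) steps. The Python sketch below simulates the communication schedule (an illustration of the general idea, not a wormhole-specific algorithm taken from the article):

```python
def hypercube_broadcast(num_nodes, root=0):
    """Simulate a recursive-doubling broadcast on a hypercube of num_nodes
    (a power of two): in step k, every node that already has the message sends
    it to its neighbor across dimension k, so the broadcast finishes in
    log2(num_nodes) steps."""
    dims = num_nodes.bit_length() - 1
    has_msg = {root}
    schedule = []                         # (step, sender, receiver) triples
    for k in range(dims):
        sends = [(node, node ^ (1 << k)) for node in sorted(has_msg)]
        for sender, receiver in sends:
            schedule.append((k, sender, receiver))
            has_msg.add(receiver)
    assert has_msg == set(range(num_nodes))
    return schedule

for step, src, dst in hypercube_broadcast(8):
    print(f"step {step}: {src} -> {dst}")
```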

Journal ArticleDOI
TL;DR: The Paradigm (PARAllelizing compiler for DIstributed-memory, General-purpose Multicomputers) project at the University of Illinois addresses the problem of massively parallel distributed-memory multicomputers by developing automatic methods for the efficient parallelization of sequential programs.
Abstract: To harness the computational power of massively parallel distributed-memory multicomputers, users must write efficient software. This process is laborious because of the absence of global address space. The programmer must manually distribute computations and data across processors and explicitly manage communication. The Paradigm (PARAllelizing compiler for DIstributed-memory, General-purpose Multicomputers) project at the University of Illinois addresses this problem by developing automatic methods for the efficient parallelization of sequential programs. A unified approach efficiently supports regular and irregular computations using data and functional parallelism.
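
Two of the ingredients such a compiler automates are data distribution and the owner-computes rule. As a simplified, hypothetical illustration (not Paradigm's actual algorithms), the Python sketch below block-distributes an array over p processors and derives the loop iterations each processor would execute:

```python
def block_owner(i, n, p):
    """Owner of element i when an array of length n is block-distributed
    over p processors (the last block absorbs any remainder)."""
    block = -(-n // p)                     # ceil(n / p)
    return min(i // block, p - 1)

def owner_computes(n, p, my_rank):
    """Indices of the loop  for i in range(n): a[i] = f(b[i])  that processor
    `my_rank` executes under the owner-computes rule for a block-distributed a."""
    return [i for i in range(n) if block_owner(i, n, p) == my_rank]

n, p = 10, 4
for rank in range(p):
    print(rank, owner_computes(n, p, rank))
# 0 [0, 1, 2]   1 [3, 4, 5]   2 [6, 7, 8]   3 [9]
```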

Journal ArticleDOI
TL;DR: User expectations for multimedia applications are imposing enormous demands on network delivery, and proper resource management is crucial to high throughput, fast rates, and time and space guarantees.
Abstract: User expectations for multimedia applications are imposing enormous demands on network delivery. Proper resource management is crucial to high throughput, fast rates, and time and space guarantees.

Journal ArticleDOI
TL;DR: Agentsheets, as discussed by the authors, is a tool for creating domain oriented visual programming languages; the authors illustrate how it supports collaborative design by examining experiences from a real language design project, then summarize the contributions of their approach and discuss its viability in industrial design projects.
Abstract: Customized visual representations enable end users to achieve their programming goals. Here, designers work with users to tailor visual programming languages to specific problem domains. We describe a design methodology and a tool for creating domain oriented, end user programming languages that effectively use visualization. We first describe a collaborative design methodology involving end users and designers. We then present Agentsheets, a tool for creating domain oriented visual programming languages, and illustrate how it supports collaborative design by examining experiences from a real language design project. Finally, we summarize the contributions of our approach and discuss its viability in industrial design projects.

Journal ArticleDOI
TL;DR: The authors show that adaptive parallelism has the potential to integrate heterogeneous platforms seamlessly into a unified computing resource and to permit more efficient sharing of traditional parallel processors than is possible with current systems.
Abstract: Desktop computers are idle much of the time. Ongoing trends make aggregate LAN "waste" (idle compute cycles) an increasingly attractive target for recycling. Piranha, a software implementation of adaptive parallelism, allows these waste cycles to be recaptured by putting them to work running parallel applications. Most parallel processing is static: programs execute on a fixed set of processors throughout a computation. Adaptive parallelism allows for dynamic processor sets, which means that the number of processors working on a computation may vary, depending on availability. With adaptive parallelism, instead of parceling out jobs to idle workstations, a single job is distributed over many workstations. Adaptive parallelism is potentially valuable on dedicated multiprocessors as well, particularly on massively parallel processors. One key Piranha advantage is that task descriptors, not processes, are the basic movable, remappable computation unit. The task descriptor approach supports strong heterogeneity. A process image representing a task in mid-computation can't be moved to a machine of a different type, but a task descriptor can be. Thus, a task begun on a Sun computer can be completed by an IBM machine. The authors show that adaptive parallelism has the potential to integrate heterogeneous platforms seamlessly into a unified computing resource and to permit more efficient sharing of traditional parallel processors than is possible with current systems.
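
As a loose sketch of adaptive parallelism with task descriptors as the movable unit (threads stand in for workstations here; this is not Piranha or Linda code), the Python example below lets workers join and retreat while a shared bag of descriptors is drained:

```python
import queue, threading, time

tasks = queue.Queue()          # shared bag of task descriptors
results = {}

def worker(stop_event):
    """A worker on an idle machine claims task descriptors until its host is
    reclaimed; because the movable unit is a descriptor, not a process image,
    unclaimed work simply stays in the bag for other (possibly different) machines."""
    while not stop_event.is_set():
        try:
            desc = tasks.get_nowait()
        except queue.Empty:
            return
        results[desc] = sum(i * i for i in range(desc))   # the task itself
        time.sleep(0.01)                                   # pretend it takes a while

for n in range(1, 41):
    tasks.put(n)

stop_a = threading.Event()
a = threading.Thread(target=worker, args=(stop_a,)); a.start()
time.sleep(0.05)                                   # a second machine becomes idle
b = threading.Thread(target=worker, args=(threading.Event(),)); b.start()
time.sleep(0.05)
stop_a.set()                                       # machine A's owner returns; A retreats
a.join(); b.join()
print(f"{len(results)} of 40 tasks completed by a changing worker set")
```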

Journal ArticleDOI
TL;DR: The Graphical Rapid Prototyping Environment (Grape-II) automates the prototyping methodology for general-purpose hardware systems by offering tools for resource estimation, partitioning, assignment, routing, scheduling, code generation, and parameter modification.
Abstract: We propose a rapid-prototyping setup to minimize development cost and a structured-prototyping methodology to reduce programming effort. The general-purpose hardware consists of commercial DSP processors, bond-out versions of core processors, and field-programmable gate arrays (FPGAs) linked to form a powerful, heterogeneous multiprocessor, such as the Paradigm RP developed within the Retides (Real-Time DSP Emulation System) Esprit project. Our Graphical Rapid Prototyping Environment (Grape-II) automates the prototyping methodology for these hardware systems by offering tools for resource estimation, partitioning, assignment, routing, scheduling, code generation, and parameter modification. Grape-II has been used successfully in three real-world DSP applications.

Journal ArticleDOI
TL;DR: The scheduling techniques discussed might be used by a compiler writer to optimize the code that comes out of a parallelizing compiler: the compiler would produce grains of sequential code, and the optimizer would schedule these grains such that the program runs in the shortest time.
Abstract: The complex problem of assigning tasks to processing elements in order to optimize a performance measure has resulted in numerous heuristics aimed at approximating an optimal solution. This article addresses the task scheduling problem in many of its variations and surveys the major solutions. The scheduling techniques we discuss might be used by a compiler writer to optimize the code that comes out of a parallelizing compiler. The compiler would produce grains of sequential code, and the optimizer would schedule these grains such that the program runs in the shortest time.
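
One representative heuristic family from such surveys is list scheduling, which orders ready tasks by a priority such as their static level and greedily assigns them to processors. The Python sketch below implements that idea for a small hypothetical task graph, ignoring communication costs:

```python
def static_levels(succ, cost):
    """Length of the longest path (in task cost) from each task to an exit task."""
    level = {}
    def visit(t):
        if t not in level:
            level[t] = cost[t] + max((visit(s) for s in succ[t]), default=0)
        return level[t]
    for t in succ:
        visit(t)
    return level

def list_schedule(succ, cost, num_procs):
    """Greedy list scheduling: repeatedly place the highest-level ready task on
    the processor where it can start earliest (communication costs ignored)."""
    pred = {t: set() for t in succ}
    for t, ss in succ.items():
        for s in ss:
            pred[s].add(t)
    level = static_levels(succ, cost)
    finish, proc_free, schedule = {}, [0.0] * num_procs, []
    unscheduled = set(succ)
    while unscheduled:
        ready = [t for t in unscheduled if pred[t].issubset(finish)]
        task = max(ready, key=lambda t: level[t])
        data_ready = max((finish[p] for p in pred[task]), default=0.0)
        proc = min(range(num_procs), key=lambda i: max(proc_free[i], data_ready))
        start = max(proc_free[proc], data_ready)
        finish[task] = start + cost[task]
        proc_free[proc] = finish[task]
        schedule.append((task, proc, start, finish[task]))
        unscheduled.remove(task)
    return schedule

# Hypothetical task graph: successors and per-task costs.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cost = {"A": 2, "B": 3, "C": 1, "D": 2}
for task, proc, s, f in list_schedule(succ, cost, 2):
    print(f"{task} on P{proc}: {s}-{f}")
```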

Journal ArticleDOI
TL;DR: The authors explored the utility of custom computing machinery for accelerating the development, testing, and prototyping of a diverse set of image processing applications and developed a real time image processing system called VTSplash, based on the Splash-2 general-purpose platform.
Abstract: The authors explore the utility of custom computing machinery for accelerating the development, testing, and prototyping of a diverse set of image processing applications. We chose an experimental custom computing platform called Splash-2 to investigate this approach to prototyping real time image processing designs. Custom computing platforms are emerging as a class of computers that can provide near application specific computational performance. We developed a real time image processing system called VTSplash, based on the Splash-2 general-purpose platform. Splash-2 is an attached processor featuring programmable processing elements (PEs) and communication paths. The Splash-2 system uses arrays of RAM based field programmable gate arrays (FPGAs), crossbar networks, and distributed memory to accomplish the needed flexibility and performance tasks. Such platforms let designers customize specific operations for function and size, and data paths for individual applications.

Journal ArticleDOI
TL;DR: The article outlines the concepts and identifies the activities that comprise event based monitoring, describing several representative monitoring systems, and focuses on four areas: dependability, performance enhancement, correctness checking, and security.
Abstract: Online monitoring can complement formal techniques to increase application dependability. The article outlines the concepts and identifies the activities that comprise event based monitoring, describing several representative monitoring systems. It focuses on four areas: dependability, performance enhancement, correctness checking, and security.

Journal ArticleDOI
TL;DR: The scaling-up problem is how to expand applicability without sacrificing the goals of better logic expression and understanding; nine major subproblems are discussed, and emerging solutions from existing VPL systems are described.
Abstract: The directness, immediacy, and simplicity of visual programming languages are appealing. The question is, can VPLs be effectively applied to large scale programming problems while retaining these characteristics? In scaling up, the problem is how to expand applicability without sacrificing the goals of better logic expression and understanding. From a size standpoint, scaling up refers to the programmer's ability to apply VPLs in larger programs. Such programs range from those requiring several days' work by a single programmer to programs requiring months of work, large programming teams, and large data structures. From a problem domain standpoint, scaling up refers to suitability for many kinds of problems. These range from visual application domains, such as user interface design or scientific visualization, to general purpose programming in such diverse areas as financial planning, simulations, and real time applications with explicit timing requirements. To illustrate the scaling up problem, we discuss nine major subproblems and describe emerging solutions from existing VPL systems. First, we examine representation issues, including static representation, screen real estate, and documentation. Next, we examine programming language issues: procedural abstraction, interactive visual data abstraction, type checking, persistence, and efficiency. Finally, we look at issues beyond the coding process.

Journal ArticleDOI
TL;DR: The article examines the general technical concepts underlying compound documents and component software, which promise to simplify the design and implementation of complex software applications and, equally important, to simplify human-computer interactive work models for application end users.
Abstract: Component software benefits include reusability and interoperability, among others. What are the similarities and differences between the competing standards for this new technology, and how will they interoperate? Object-oriented technology is steadily gaining acceptance for commercial and custom application development through programming languages such as C++ and Smalltalk, object-oriented CASE tools, databases, and operating systems such as Next Computer's NextStep. Two emerging technologies, called compound documents and component software, will likely accelerate the spread of object-oriented concepts across system-level services, development tools, and application-level behaviours. Tied closely to the popular client/server architecture for distributed computing, compound documents and component software define object-based models that facilitate interactions between independent programs. These new approaches promise to simplify the design and implementation of complex software applications and, equally important, simplify human-computer interactive work models for application end users. Following unfortunate tradition, major software vendors have developed competing standards to support and drive compound document and component software technologies. These incompatible standards specify distinct object models, data storage models, and application interaction protocols. The incompatibilities have generated confusion in the market, as independent software vendors, system integrators, in-house developers, and end users struggle to sort out the standards' relative merits, weaknesses, and chances for commercial success. Let's take a look now at the general technical concepts underlying compound documents and component software. Then we examine the OpenDoc, OLE 2, COM, and CORBA standards being proposed for these two technologies. Finally, we'll review the work being done to extend the standards and to achieve interoperability across them.

Journal ArticleDOI
TL;DR: This article proposes potential solutions for modifying existing systems to support the new functions required by large, distributed multimedia systems, and discusses related matters.
Abstract: Computing, communication, and relevant standards are on the brink of enabling thousands of people to enjoy the services offered by large, distributed multimedia systems in their own homes. Collectively, these services will include TV (basic, subscription, and pay-per-view), service navigator, interactive entertainment, digital audio, video-on-demand, home shopping, financial transactions, interactive single- and multiuser games, digital multimedia libraries, and electronic versions of newspapers, magazines, TV program guides, and yellow pages. It is obvious that current TV systems and architectures must be redesigned to support such services. In this article, we propose potential solutions to modify existing systems to support these new functions, and we discuss related matters.

Journal ArticleDOI
TL;DR: A team of researchers and educators has introduced a computer science curriculum into Israeli high schools that combines conceptual and practical issues in a zipper-like fashion.
Abstract: A team of researchers and educators has introduced a computer science curriculum into Israeli high schools. This curriculum combines conceptual and practical issues in a zipper-like fashion. Its emphasis is on the basics of algorithmics, and it teaches programming as a way to get a computer to execute an algorithm. It has been proposed by a committee formed in 1990 by the Israel Ministry of Education.