
Showing papers in "IEEE Computer in 2001"


Journal ArticleDOI
TL;DR: This survey and taxonomy of location systems for mobile-computing applications describes a spectrum of current products and explores the latest research in the field; the taxonomy helps developers of location-aware applications better evaluate their options when choosing a location-sensing system.
Abstract: This survey and taxonomy of location systems for mobile-computing applications describes a spectrum of current products and explores the latest in the field. To make sense of this domain, we have developed a taxonomy to help developers of location-aware applications better evaluate their options when choosing a location-sensing system. The taxonomy may also aid researchers in identifying opportunities for new location-sensing techniques.

3,237 citations


Journal ArticleDOI
TL;DR: Model-integrated computing (MIC), an approach to model-based engineering that helps compose domain-specific design environments rapidly and cost effectively, is particularly relevant for specialized computer-based systems domains, perhaps even single projects.
Abstract: Domain-specific integrated development environments can help capture specifications in the form of domain models. These tools support the design process by automating analysis and simulating essential system behavior. In addition, they can automatically generate, configure, and integrate target application components. The high cost of developing domain-specific, integrated modeling, analysis, and application-generation environments prevents their penetration into narrower engineering fields that have limited user bases. Model-integrated computing (MIC), an approach to model-based engineering that helps compose domain-specific design environments rapidly and cost effectively, is particularly relevant for specialized computer-based systems domains, perhaps even single projects. The authors describe how MIC provides a way to compose such environments cost effectively and rapidly by using a metalevel architecture to specify the domain-specific modeling language and integrity constraints. They also discuss the toolset that implements MIC and describe a practical application in which using the technology in a tool environment for the process industry led to significant reductions in development and maintenance costs.
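As a rough illustration of the metalevel idea, the following Python sketch (with entirely hypothetical domain concepts for a process-industry-style language) shows how a metamodel can declare domain concepts plus integrity constraints and then check models against them:

```python
# Hypothetical metamodel sketch: each domain concept declares required
# attributes and an integrity constraint; models are checked against it.
metamodel = {
    "Tank": {"attrs": {"capacity"}, "constraint": lambda m: m["capacity"] > 0},
    "Pipe": {"attrs": {"from", "to"}, "constraint": lambda m: m["from"] != m["to"]},
}

def check_model(kind, model):
    """Validate one model element against the metamodel's rules."""
    spec = metamodel[kind]
    missing = spec["attrs"] - model.keys()
    if missing:
        return f"{kind}: missing attributes {missing}"
    return f"{kind}: ok" if spec["constraint"](model) else f"{kind}: constraint violated"

print(check_model("Tank", {"capacity": 500}))        # Tank: ok
print(check_model("Pipe", {"from": "t1", "to": "t1"}))  # Pipe: constraint violated
```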

1,394 citations


Journal ArticleDOI
TL;DR: The rise and fall of the dotcom-driven Internet economy shouldn't distract us from seeing that the business environment continues to change at a dramatically increasing pace, and Agile software development approaches view change from a perspective that mirrors today's turbulent business and technology environment.
Abstract: The rise and fall of the dotcom-driven Internet economy shouldn't distract us from seeing that the business environment continues to change at a dramatically increasing pace. To thrive in this turbulent environment, we must confront the business need for relentless innovation and forge the future workforce culture. Agile software development approaches, such as extreme programming, Crystal methods, lean development, Scrum, adaptive software development (ASD) and others, view change from a perspective that mirrors today's turbulent business and technology environment.

1,210 citations


Journal ArticleDOI
TL;DR: The effects of working in an agile style are described; a previous article introduced the problem agile development addresses and the way in which it addresses that problem.
Abstract: In a previous article (2001), we introduced agile software development through the problem it addresses and the way in which it addresses the problem. Here, we describe the effects of working in an agile style.

1,085 citations


Journal ArticleDOI
TL;DR: The article presents 10 techniques that can help reduce flaws in code and improve software developers' ability to predict and control software projects.
Abstract: Software's complexity and accelerated development schedules make avoiding defects difficult. We have found, however, that researchers have established objective and quantitative data, relationships, and predictive models that help software developers avoid predictable pitfalls and improve their ability to predict and control efficient software projects. The article presents 10 techniques that can help reduce the flaws in your code.

766 citations


Journal ArticleDOI
Bashar Nuseibeh
TL;DR: The spiral life-cycle model addresses many drawbacks of a waterfall development process by providing an incremental process in which developers repeatedly evaluate changing project risks to manage unstable requirements and funding.
Abstract: Software development organizations often choose between alternative starting points: requirements or architectures. This invariably results in a waterfall development process that produces artificially frozen requirements documents for use in the next step in the development life cycle. Alternatively, this process creates systems with constrained architectures that restrict users and handicap developers by resisting inevitable and desirable changes in requirements. The spiral life-cycle model addresses many drawbacks of the waterfall model by providing an incremental development process, in which developers repeatedly evaluate changing project risks to manage unstable requirements and funding. An even finer-grained spiral life cycle reflects both the realities and necessities of modern software development. Such a life cycle acknowledges the need to develop software architectures that are stable, yet adaptable, in the presence of changing requirements. The cornerstone of this process is that developers craft a system's requirements and its architecture concurrently, and interleave their development.

542 citations


Journal ArticleDOI
TL;DR: Researchers at University of California, Berkeley and Carnegie Mellon University have designed, implemented, and evaluated SILK (Sketching Interfaces Like Krazy), an informal sketching tool that combines many of the benefits of paper-based sketching with the merits of current electronic tools.
Abstract: Researchers at University of California, Berkeley and Carnegie Mellon University have designed, implemented, and evaluated SILK (Sketching Interfaces Like Krazy), an informal sketching tool that combines many of the benefits of paper-based sketching with the merits of current electronic tools. With SILK, designers can quickly sketch an interface using an electronic pad and stylus, and SILK recognizes widgets and other interface elements as the designer draws them. Unlike paper-based sketching, however, designers can exercise these elements in their sketchy state. For example, a sketched scroll-bar is likely to contain an elevator or thumbnail, the small rectangle a user drags with a mouse. In a paper sketch, the elevator would just sit there, but in a SILK sketch, designers can drag it up and down, which lets them test component or widget behavior. SILK also supports the creation of storyboards: the arrangement of sketches to show how design elements behave, such as how a dialog box appears when the user activates a button. Storyboards are important because they give designers a way to show colleagues, customers, or end users early on how an interface will behave.

532 citations


Journal ArticleDOI
TL;DR: An enhanced version of AT&T Laboratories Cambridge's sentient computing system, which uses sensors to update a model of the real world, is installed throughout an office building.
Abstract: Sentient computing systems, which can change their behaviour based on a model of the environment they construct using sensor data, may hold the key to managing tomorrow's device-rich mobile networks. At AT&T Laboratories Cambridge, we have built a system that uses sensors to update a model of the real world. We designed the model's terms (object positions, descriptions and state, and so forth) to be immediately familiar to users. Thus, the model describes the world much as users themselves would. We can use this model to write programs that react to changes in the environment according to the user's preferences. We call this sentient computing because the applications appear to share the user's perception of the environment. Treating the current state of the environment as common ground between computers and users provides new ways of interacting with information systems. A sentient computing system doesn't need to be intelligent or capable of forming new concepts about the world; it only needs to act as though its perceptions duplicate the user's. In earlier work, we described a prototype of this system and stated our intention to deploy it on a large scale. We have now installed an enhanced version throughout an office building. Over the past year, approximately 50 staff members have used the system daily with a set of trial applications.
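The following Python sketch, with invented sensor and object names, illustrates the shape of such a system: sensor readings update a shared world model, and applications subscribe so they can react to changes in the environment:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical world-model sketch: sensors update object state, and
# applications subscribe to changes so they can react to the environment.
@dataclass
class WorldModel:
    positions: dict = field(default_factory=dict)   # object name -> (x, y)
    listeners: list = field(default_factory=list)   # callbacks fired on change

    def subscribe(self, callback: Callable[[str, tuple], None]) -> None:
        self.listeners.append(callback)

    def sensor_update(self, obj: str, position: tuple) -> None:
        """Apply a sensor reading and notify interested applications."""
        if self.positions.get(obj) != position:
            self.positions[obj] = position
            for notify in self.listeners:
                notify(obj, position)

model = WorldModel()
# An application that reacts when a tracked person moves.
model.subscribe(lambda obj, pos: print(f"{obj} moved to {pos}"))
model.sensor_update("alice", (12.5, 3.0))   # prints: alice moved to (12.5, 3.0)
```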

531 citations


Journal ArticleDOI
TL;DR: Power is a design constraint not only for portable computers and mobile communication devices but also for high-end systems, and the design process should not subordinate it to performance.
Abstract: Power is a design constraint not only for portable computers and mobile communication devices but also for high-end systems, and the design process should not subordinate it to performance.

531 citations


Journal ArticleDOI
TL;DR: It is argued that assisted-GPS technology offers superior accuracy, availability, and coverage at a reasonable cost.
Abstract: Currently in development, numerous geolocation technologies can pinpoint a person's or object's position on the Earth. Knowledge of the spatial distribution of wireless callers will facilitate the planning, design, and operation of next generation broadband wireless networks. Mobile users will gain the ability to get local traffic information and detailed directions to gas stations, restaurants, hotels, and other services. Police and rescue teams will be able to quickly and precisely locate people who are lost or injured but cannot give their precise location. Companies will use geolocation based applications to track personnel, vehicles, and other assets. The driving force behind the development of this technology is a US Federal Communications Commission (FCC) mandate stating that by 1 October 2001 all wireless carriers must provide the geolocation of an emergency 911 caller to the appropriate public safety answering point. Location technologies requiring new, modified, or upgraded mobile stations must determine the caller's longitude and latitude within 50 meters for 67 percent of emergency calls, and within 150 meters for 95 percent of the calls. Otherwise, they must do so within 100 meters and 300 meters, respectively, for the same percentage of calls. Currently deployed wireless technology can locate 911 calls within an area no smaller than 10 to 15 square kilometers. It is argued that assisted-GPS technology offers superior accuracy, availability, and coverage at a reasonable cost.
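The mandate's accuracy thresholds lend themselves to a simple check. Below is a minimal Python sketch, using made-up error samples, that tests whether a batch of geolocation errors meets the percentile limits quoted above:

```python
# Sketch: check a batch of geolocation errors (in meters) against the FCC
# E911 accuracy thresholds quoted in the abstract. Error values are invented.
def percentile(errors, fraction):
    ordered = sorted(errors)
    index = max(0, int(round(fraction * len(ordered))) - 1)
    return ordered[index]

def meets_fcc_mandate(errors, handset_based=True):
    # New/modified/upgraded mobile stations: 50 m @ 67%, 150 m @ 95%.
    # Otherwise (network-based): 100 m @ 67%, 300 m @ 95%.
    limit_67, limit_95 = (50, 150) if handset_based else (100, 300)
    return (percentile(errors, 0.67) <= limit_67
            and percentile(errors, 0.95) <= limit_95)

sample_errors = [12, 25, 30, 41, 48, 55, 60, 90, 120, 140]  # meters
# False: the 67th-percentile error (60 m) exceeds the 50 m limit.
print(meets_fcc_mandate(sample_errors, handset_based=True))
```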

469 citations


Journal ArticleDOI
TL;DR: Multidimensional database technology will increasingly be applied where analysis results feed directly into other systems, eliminating humans from the loop; coupled with the need for continuous updates, this poses stringent performance requirements.
Abstract: Multidimensional database technology is a key factor in the interactive analysis of large amounts of data for decision making purposes. In contrast to previous technologies, these databases view data as multidimensional cubes that are particularly well suited for data analysis. Multidimensional models categorize data either as facts with associated numerical measures or as textual dimensions that characterize the facts. Queries aggregate measure values over a range of dimension values to provide results such as total sales per month of a given product. Multidimensional database technology is being applied to distributed data and to new types of data that current technology often cannot adequately analyze. For example, classic techniques such as preaggregation cannot ensure fast query response times when data-such as that obtained from sensors or GPS-equipped moving objects-changes continuously. Multidimensional database technology will increasingly be applied where analysis results are fed directly into other systems, thereby eliminating humans from the loop. When coupled with the need for continuous updates, this context poses stringent performance requirements not met by current technology.
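A minimal Python sketch of the cube view described here, with invented sales facts, shows a query that aggregates a measure (sales) over a dimension (month) for a given product:

```python
from collections import defaultdict

# Sketch of the multidimensional view: facts carry numeric measures, and
# textual dimensions (product, month) characterize them. Data is invented.
facts = [
    {"product": "widget", "month": "2001-01", "sales": 120.0},
    {"product": "widget", "month": "2001-01", "sales": 80.0},
    {"product": "widget", "month": "2001-02", "sales": 95.0},
    {"product": "gadget", "month": "2001-01", "sales": 40.0},
]

def total_sales_per_month(facts, product):
    """Aggregate the 'sales' measure over the 'month' dimension."""
    totals = defaultdict(float)
    for fact in facts:
        if fact["product"] == product:
            totals[fact["month"]] += fact["sales"]
    return dict(totals)

print(total_sales_per_month(facts, "widget"))
# {'2001-01': 200.0, '2001-02': 95.0}
```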

Journal ArticleDOI
Joan G. Dyer, Mark Lindemann, Ronald Perez, Reiner Sailer, L. van Doorn, Sean W. Smith
TL;DR: The 4758 is a lifetime-secure tamper-responding device: a multipurpose programmable device based on a 99-MHz 486 CPU, with a real operating system, a C-language development environment, and relatively high-speed cryptography.
Abstract: Meeting the challenge of building a user-configurable secure coprocessor provided several lessons in hardware and software development and continues to spur further research. In developing the 4758, we met our major research security goals and provided the following features: (1) a lifetime-secure tamper-responding device, rather than one that is secure only between resets that deployment-specific security officers perform; (2) a secure booting process in which each layer progressively validates the next less-trusted layer, with hardware restricting access to its secrets before passing control to that layer; (3) an actual manufacturable product - a nontrivial accomplishment considering that we designed the device so that it does not have a personality until configured in the field; (4) the first FIPS 140-1 Level 4 validation, arguably the only general-purpose computational platform validated at this level so far; and (5) a multipurpose programmable device based on a 99-MHz 486 CPU internal environment, with a real operating system, a C language development environment and relatively high-speed cryptography.
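Point (2) can be illustrated with a toy Python sketch of a chained boot, where each layer holds the expected hash of the next, less-trusted layer and validates it before passing control. (The real device also restricts hardware access to each layer's secrets before handoff, which this sketch omits; all layer contents are invented.)

```python
import hashlib

def layer_hash(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

def secure_boot(layers):
    """layers: list of (code, expected hash of the next layer, or None)."""
    for i, (code, expected_next) in enumerate(layers):
        if expected_next is not None:
            next_code = layers[i + 1][0]
            if layer_hash(next_code) != expected_next:
                raise RuntimeError(f"layer {i + 1} failed validation; halting boot")
    print("all layers validated; control passed to application layer")

app = b"application code"
os_layer = b"operating system code"
layers = [
    (b"boot ROM", layer_hash(os_layer)),  # hardware-trusted root of the chain
    (os_layer, layer_hash(app)),          # OS validates the application layer
    (app, None),                          # least-trusted layer; nothing below it
]
secure_boot(layers)
```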

Journal ArticleDOI
TL;DR: An approach that provides a theoretical foundation for the use of object-oriented databases and object-relational databases in data warehouse, multidimensional database, and online analytical processing applications and introduces a set of minimal constraints and extensions to the Unified Modeling Language for representing multidimensional modeling properties for these applications.
Abstract: The authors propose an approach that provides a theoretical foundation for the use of object-oriented databases and object-relational databases in data warehouse, multidimensional database, and online analytical processing applications. This approach introduces a set of minimal constraints and extensions to the Unified Modeling Language for representing multidimensional modeling properties for these applications. Multidimensional modeling offers two benefits. First, the model closely parallels how data analyzers think and, therefore, helps users understand data. Second, multidimensional modeling helps predict what final users want to do, thereby facilitating performance improvements. The authors are using their approach to create an automatic implementation of a multidimensional model. They plan to integrate commercial online-analytical-processing tool facilities within their GOLD model case tool as well, a task that involves data warehouse prototyping and sample data generation issues.

Journal ArticleDOI
TL;DR: This work proposes a solution based on trust management that involves developing a security policy, assigning credentials to entities, verifying that the credentials fulfill the policy, delegating trust to third parties, and reasoning about users' access rights.
Abstract: Traditionally, stand-alone computers and small networks rely on user authentication and access control to provide security. These physical methods use system-based controls to verify the identity of a person or process, explicitly enabling or restricting the ability to use, change, or view a computer resource. However, these strategies are inadequate for the increased flexibility that distributed networks such as the Internet and pervasive computing environments require because such systems lack central control and their users are not all predetermined. Mobile users expect to access locally hosted resources and services anytime and anywhere, leading to serious security risks and access control problems. We propose a solution based on trust management that involves developing a security policy, assigning credentials to entities, verifying that the credentials fulfill the policy, delegating trust to third parties, and reasoning about users' access rights. This architecture is generally applicable to distributed systems but geared toward pervasive computing environments.
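A minimal Python sketch of the proposed flow, with hypothetical entities, rights, and issuers, checks a credential against a policy while following delegation chains:

```python
# Trust-management sketch: a policy names the issuers trusted for each
# right, credentials bind an entity to a right, and issuers may delegate
# trust to third parties. All names and rights here are hypothetical.
policy = {"print": {"admin"}}                 # right -> directly trusted issuers
delegations = {"admin": {"dept-authority"}}   # issuer -> parties it delegates to

credentials = [
    {"issuer": "dept-authority", "subject": "alice", "right": "print"},
]

def issuer_trusted(issuer, right):
    """An issuer is trusted directly or via a chain of delegations."""
    trusted = set(policy.get(right, ()))
    frontier = list(trusted)
    while frontier:
        for delegate in delegations.get(frontier.pop(), ()):
            if delegate not in trusted:
                trusted.add(delegate)
                frontier.append(delegate)
    return issuer in trusted

def access_allowed(subject, right):
    return any(c["subject"] == subject and c["right"] == right
               and issuer_trusted(c["issuer"], right) for c in credentials)

print(access_allowed("alice", "print"))  # True: admin delegated to dept-authority
```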

Journal ArticleDOI
TL;DR: This work proposes using an active disk storage device that combines on-drive processing and memory with software downloadability to allow disks to execute application-level functions directly at the device.
Abstract: As processor performance increases and memory cost decreases, system intelligence continues to move away from the CPU and into peripherals. Storage system designers use this trend toward excess computing power to perform more complex processing and optimizations inside storage devices. To date, such optimizations take place at relatively low levels of the storage protocol. Trends in storage density, mechanics, and electronics eliminate the hardware bottleneck and put pressure on interconnects and hosts to move data more efficiently. We propose using an active disk storage device that combines on-drive processing and memory with software downloadability to allow disks to execute application-level functions directly at the device. Moving portions of an application's processing to a storage device significantly reduces data traffic and leverages the parallelism already present in large systems, dramatically reducing the execution time for many basic data mining tasks.
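The traffic-reduction argument can be sketched in a few lines of Python: in the conventional path every block crosses the interconnect and the host filters, while the active-disk path runs a downloaded application-level filter at the device and ships only matches (all data and names invented):

```python
def host_side_scan(blocks, predicate):
    # Conventional path: all blocks cross the interconnect; the host filters.
    return [rec for block in blocks for rec in block if predicate(rec)]

def active_disk_scan(blocks, downloaded_filter):
    # Active-disk path: the filter executes at the device; only results move.
    for block in blocks:
        yield from downloaded_filter(block)

blocks = [[{"id": 1, "value": 5}, {"id": 2, "value": 40}],
          [{"id": 3, "value": 70}]]
wanted = lambda rec: rec["value"] > 30

print(host_side_scan(blocks, wanted))
print(list(active_disk_scan(blocks, lambda b: (r for r in b if wanted(r)))))
# Both return the same two records; only the second moves them pre-filtered.
```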

Journal ArticleDOI
TL;DR: The paper discusses the advantages of P2P networks (load balancing, dynamic information repositories, fault tolerance, content-based addressing, and improved searches) as well as the disadvantages of P2P.
Abstract: Peer-to-peer networking offers unique advantages that will make it a more effective alternative to several existing client-server e-commerce applications, if it can mature into a secure and reliable technology. The paper discusses the advantages of P2P networks: load balancing; dynamic information repositories; fault tolerance; content-based addressing and improved searches. It also considers the disadvantages of P2P.

Journal ArticleDOI
TL;DR: This work has developed a method that extends and transforms traditional author co-citation analysis by extracting structural patterns from the scientific literature and representing them in a 3D knowledge landscape.
Abstract: To make knowledge visualizations clear and easy to interpret, we have developed a method that extends and transforms traditional author co-citation analysis by extracting structural patterns from the scientific literature and representing them in a 3D knowledge landscape.
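The first step of author co-citation analysis is easy to sketch in Python: count how often pairs of authors are cited together by the same papers. The citing lists below are fabricated; in the method described above, such counts would feed the layout of the 3D knowledge landscape:

```python
from itertools import combinations
from collections import Counter

# Each entry is the set of authors cited by one paper (invented data).
citing_papers = [
    {"Salton", "van Rijsbergen"},
    {"Salton", "van Rijsbergen", "Robertson"},
    {"Robertson", "Salton"},
]

# Count co-citations: authors cited together in the same paper.
cocitations = Counter()
for cited_authors in citing_papers:
    for pair in combinations(sorted(cited_authors), 2):
        cocitations[pair] += 1

for (a, b), count in cocitations.most_common():
    print(f"{a} / {b}: co-cited {count} times")
```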

Journal ArticleDOI
TL;DR: It is argued that although few critical resources have been lost to date, new strategies to manage Internet resources and improved citation practices are necessary to minimize the future loss of information.
Abstract: The lack of persistence of Web references has called into question the increasingly common practice of citing URLs in scientific papers. It is argued that although few critical resources have been lost to date, new strategies to manage Internet resources and improved citation practices are necessary to minimize the future loss of information.

Journal ArticleDOI
TL;DR: The 1:1 Pro system constructs personal profiles based on customers' transactional histories, using data mining techniques to discover a set of rules describing customers' behavior and supporting human experts in validating the rules.
Abstract: This paper describes the 1:1 Pro system, which constructs personal profiles based on customers' transactional histories. The system uses data mining techniques to discover a set of rules describing customers' behavior and supports human experts in validating the rules.
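A small Python sketch of the kind of behavioral rule such a system might mine, using invented transactions and the standard support/confidence measures; a human expert would then validate rules that clear the thresholds:

```python
# Invented transaction histories: each set is one customer purchase.
transactions = [
    {"bread", "milk"}, {"bread", "milk", "eggs"},
    {"bread", "eggs"}, {"milk", "eggs"},
]

def rule_stats(antecedent, consequent, transactions):
    """Support and confidence for the rule: antecedent => consequent."""
    both = sum(1 for t in transactions if antecedent in t and consequent in t)
    ante = sum(1 for t in transactions if antecedent in t)
    support = both / len(transactions)
    confidence = both / ante if ante else 0.0
    return support, confidence

support, confidence = rule_stats("bread", "milk", transactions)
print(f"bread => milk: support={support:.2f}, confidence={confidence:.2f}")
# An expert might keep the rule only if support >= 0.3 and confidence >= 0.6.
```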

Journal ArticleDOI
TL;DR: This work analyses techniques for generating custom tours for electronic city-guide systems that take into account multiple contextual triggers and user preferences; the authors believe that the majority of the work is relevant to location-based systems in general.
Abstract: In a study that provided unique insights into the challenges associated with developing location-based applications, the Lancaster Guide project used members of the general public to test a network-centric electronic tourist guide. We discuss two main topics. The first is our choice of positioning technology: beacons that broadcast using an IEEE 802.11 wireless network, combined with user input. The second topic concerns techniques for generating custom tours for electronic city-guide systems. Guide generates these custom tours by taking into account multiple contextual triggers and user preferences. In practice, producing good tours and, indeed, assessing the quality of a tour are difficult tasks. While our analysis of techniques for producing custom tours is somewhat specific to the city-guide domain, we believe that the majority of our work is relevant to location-based systems in general.
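A toy Python sketch, with invented attractions and weights, illustrates one plausible way to combine contextual triggers (such as opening hours and distance) with user preferences when ranking stops for a tour; it is a sketch of the general idea, not Guide's actual algorithm:

```python
# Invented attraction data; "open" stands in for a contextual trigger.
attractions = [
    {"name": "Castle", "tags": {"history"}, "open": True,  "distance_km": 0.4},
    {"name": "Gallery", "tags": {"art"},    "open": False, "distance_km": 0.2},
    {"name": "Priory", "tags": {"history"}, "open": True,  "distance_km": 1.1},
]

def score(attraction, preferences):
    """Combine a contextual trigger with preference match and distance."""
    if not attraction["open"]:          # closed right now: skip it
        return 0.0
    interest = len(attraction["tags"] & preferences)
    return interest / (1.0 + attraction["distance_km"])

def custom_tour(attractions, preferences):
    ranked = sorted(attractions, key=lambda a: score(a, preferences), reverse=True)
    return [a["name"] for a in ranked if score(a, preferences) > 0]

print(custom_tour(attractions, {"history"}))   # ['Castle', 'Priory']
```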


Journal ArticleDOI
TL;DR: A COTS-based system (CBS) software defect-reduction list is presented as hypotheses, rather than results, that also serve as software challenges for enhancing the empirical understanding of CBSs.
Abstract: Presents a COTS-based system (CBS) software defect-reduction list as hypotheses, rather than results, that also serve as software challenges for enhancing our empirical understanding of CBSs. The hypotheses are: (1) more than 99% of all executing computer instructions come from COTS products (each instruction passed a market test for value); (2) more than half the features in large COTS software products go unused; (3) the average COTS software product undergoes a new release every 8-9 months, with active vendor support for only its latest three releases; (4) CBS development and post-deployment efforts can scale as high as the square of the number of independently developed COTS products targeted for integration; (5) CBS post-deployment costs exceed CBS development costs; (6) although glue-code development usually accounts for less than half the total CBS software development effort, the effort per line of glue code averages about three times the effort per line of developed applications code; (7) non-development costs, such as licensing fees, are significant, and projects must plan for and optimize them; (8) CBS assessment and tailoring efforts vary significantly by COTS product class (operating system, database management system, user interface, device driver, etc.); (9) personnel capability and experience remain the dominant factors influencing CBS development productivity; and (10) CBS is currently a high-risk activity, with effort and schedule overruns exceeding non-CBS software overruns, yet many systems have used COTS successfully for cost reduction and early delivery.
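Hypotheses (4) and (6) invite a back-of-the-envelope calculation. The Python sketch below, with entirely invented unit costs, shows how quadratic integration scaling and the roughly 3x glue-code factor would play out:

```python
# Illustrative reading of hypotheses (4) and (6); all unit costs invented.
def integration_effort(num_cots_products, unit_effort=1.0):
    # Hypothesis 4: effort can scale as high as the square of product count.
    return unit_effort * num_cots_products ** 2

def glue_code_effort(glue_loc, effort_per_app_loc=0.01):
    # Hypothesis 6: effort per glue-code line averages ~3x an application line.
    return glue_loc * 3 * effort_per_app_loc

print(integration_effort(4))    # 16 effort units, vs. 4 under linear scaling
print(glue_code_effort(2000))   # 60 units for 2,000 lines of glue code
```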

Journal ArticleDOI
TL;DR: The Artemis modeling and simulation environment aims to efficiently explore the design space of heterogeneous embedded-systems architectures at multiple abstraction levels and for a wide range of applications targeting these architectures.
Abstract: Because embedded systems mostly target mass production and often run on batteries, they should be cheap to realize and power efficient. In addition, they require a high degree of programmability to provide real-time performance for multiple applications and standards. However, performance requirements as well as cost and power-consumption constraints demand that substantial parts of these systems be implemented in dedicated hardware blocks. As a result, their heterogeneous system architecture consists of components ranging from fully programmable processors to fully dedicated hardware blocks for time-critical application tasks. Increasingly, these designs yield heterogeneous embedded multiprocessor systems that reside together on a single chip. The heterogeneity of these highly programmable systems and the varying demands of their target applications greatly complicate system design. The increasing complexity of embedded-system architectures makes predicting performance behavior more difficult. Therefore, having the appropriate tools to explore different choices at an early design stage is increasingly important. The Artemis modeling and simulation environment aims to efficiently explore the design space of heterogeneous embedded-systems architectures at multiple abstraction levels and for a wide range of applications targeting these architectures. The authors describe their use of this methodology in two studies that showed promising results, providing useful feedback on a wide range of design decisions involving the architectures for the two applications.

Journal ArticleDOI
TL;DR: This work is organized into two categories according to the distinct ways values factor into it: one in which values themselves are not the controversy's central subject, and one in which technology's values form part of the controversy.
Abstract: The story of how information technology has radically altered our lives and even ourselves has been told many times, in many versions. The radical effects of the process have extended to institutions, social processes, relationships, power structures, work, play, education, and beyond. Although the changes have been varied, affecting the economy, the shape and functioning of organizations, artistic expression, and even conceptions of identity, some of us have focused on changes with an ethical dimension. I've found it useful to organize this work into two categories according to the distinct ways values factor into it. In one category I place work in which values themselves are not the controversy's central subject. In the other, however, technology's values form part of the controversy.

Journal ArticleDOI
TL;DR: Facing dynamic modifications in distributed systems technology, middleware developers are striving to support applications that meet the technical challenges of ubiquitous computing.
Abstract: Middleware research and development has reached the end of its first major phase, and new requirements are arising that are so fundamentally different that they will lead to new-generation middleware systems. Facing dynamic modifications in distributed systems technology, middleware developers are striving to support applications that meet the technical challenges of ubiquitous computing.

Journal ArticleDOI
TL;DR: 4G networks will also feature IP interoperability for seamless mobile Internet access and bit rates of 50 Mbps or more, and developers will hopefully have time to resolve issues involving multiple heterogeneous networks.
Abstract: Researchers and vendors are expressing a growing interest in 4G wireless networks that support global roaming across multiple wireless and mobile networks. With this feature, users will have access to different services, increased coverage, the convenience of a single device, one bill with a reduced total access cost, and more reliable wireless access even with the failure or loss of one or more networks. 4G networks will also feature IP interoperability for seamless mobile Internet access and bit rates of 50 Mbps or more. Because deployment of 4G wireless technology is not expected until 2006 or even later, developers will hopefully have time to resolve issues involving multiple heterogeneous networks.

Journal ArticleDOI
TL;DR: This work proposes three mobile computing services: user virtual environment (UVE), mobile virtual terminal (MVT), and virtual resource management (VRM). UVE provides users with a uniform view of their working environments independent of current location and specific terminals.
Abstract: Mobile computing requires an advanced infrastructure that integrates suitable support protocols, mechanisms, and tools. This mobility middleware should dynamically reallocate and trace mobile users and terminals and permit communication and coordination of mobile entities. In addition, open and untrusted environments must overcome system heterogeneity and grant the appropriate security level. Solutions to these issues require compliance with standards to interoperate with different systems and legacy components and a reliable security infrastructure based on standard cryptographic mechanisms and tools. Many proposals suggest using mobile agent technology middleware to address these issues. A mobile agent moves entities in execution together with code and achieved state, making it possible to upgrade distributed computing environments without suspending service. We propose three mobile computing services: user virtual environment (UVE), mobile virtual terminal (MVT), and virtual resource management (VRM). UVE provides users with a uniform view of their working environments independent of current locations and specific terminals. MVT extends traditional terminal mobility by preserving the terminal execution state for restoration at new locations, including active processes and subscribed services. VRM permits mobile users and terminals to maintain access to resources and services by automatically requalifying the bindings and moving specific resources or services to permit load balancing and replication.
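The MVT idea of capturing and restoring terminal execution state can be caricatured in Python with simple serialization; a real system would capture active processes and subscribed services, whereas this toy just round-trips a hypothetical session dictionary:

```python
import pickle

# Hypothetical terminal session state to carry to a new location.
session = {
    "open_documents": ["report.txt"],
    "clipboard": "hello",
    "subscribed_services": ["mail"],
}

frozen = pickle.dumps(session)    # suspend: capture state at the old location
restored = pickle.loads(frozen)   # resume: restore state at the new location
assert restored == session
print("terminal state restored:", restored)
```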

Journal ArticleDOI
TL;DR: The Quakebot uses Soar, an engine for making and executing decisions, as its underlying AI engine for controlling a single player.
Abstract: Building software agents that can survive in the harsh environment of a popular computer game (Quake II) provides fresh insight into the study of artificial intelligence (AI). Our Quakebot uses Soar, an engine for making and executing decisions, as its underlying AI engine for controlling a single player. We chose Soar as our AI engine because our real research goal is to understand and develop general integrated intelligent agents.
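Soar itself is a full production system; the Python sketch below is only a minimal rule-based decision cycle in its spirit, showing how if-then rules can select a bot's next tactic from its current state (rules and state invented):

```python
import random

# Ordered production-style rules: the first matching condition fires.
rules = [
    (lambda s: s["health"] < 30,                     "collect-health"),
    (lambda s: s["enemy_visible"] and s["ammo"] > 0, "attack"),
    (lambda s: s["enemy_visible"],                   "retreat"),
    (lambda s: True,                                 "explore"),   # default
]

def decide(state):
    """Fire the first rule whose condition matches the current state."""
    for condition, action in rules:
        if condition(state):
            return action

state = {"health": 80, "ammo": 12, "enemy_visible": random.choice([True, False])}
print(decide(state))   # 'attack' or 'explore', depending on visibility
```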

Journal ArticleDOI
TL;DR: As the demand for more flexible, adaptable, extensible, and robust Web-based enterprise application systems accelerates, adopting new software engineering methodologies and development strategies becomes critical.
Abstract: As the demand for more flexible, adaptable, extensible, and robust Web-based enterprise application systems accelerates, adopting new software engineering methodologies and development strategies becomes critical. These strategies must support the construction of enterprise software systems that assemble highly flexible software components written at different times by various developers. Traditional software development strategies and engineering methodologies, which require development of software systems from scratch, fall short in this regard. Component-based software engineering (CBSE) offers an attractive alternative for building Web-based enterprise application systems. CBSE works by developing and evolving software from selected reusable software components, then assembling them within appropriate software architectures. By promoting the use of software components that commercial vendors or in-house developers build, the component-based software development approach promises large-scale software reuse.

Journal ArticleDOI
TL;DR: This overview of open source licensing and development models describes some of the movement's main principles and provides pointers to essential information about the movement and its general licensing structures.
Abstract: Although some challenge the value of open source software development, its popularity cannot be disputed. This overview of open source licensing and development models describes some of the movement's main principles. We also clarify some of the main principles underlying the resulting software. Because so much has already been written about open source software, we only touch on some of its major themes and provide pointers to essential information about the movement and its general licensing structures.