
Showing papers in "IEEE Computer in 2006"


Journal Article•DOI•
TL;DR: Model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.
Abstract: Model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.

1,883 citations


Journal Article•DOI•
L. von Ahn•
TL;DR: "Games with a purpose" have a vast range of applications in areas as diverse as security, computer vision, Internet accessibility, adult content filtering, and Internet search, and any game designed to address these and other problems must ensure that game play results in a correct solution and, at the same time, is enjoyable.
Abstract: Through online games, people can collectively solve large-scale computational problems. Such games constitute a general mechanism for using brain power to solve open problems. In fact, designing such a game is much like designing an algorithm - it must be proven correct, its efficiency can be analyzed, a more efficient version can supersede a less efficient one, and so on. "Games with a purpose" have a vast range of applications in areas as diverse as security, computer vision, Internet accessibility, adult content filtering, and Internet search. Any game designed to address these and other problems must ensure that game play results in a correct solution and, at the same time, is enjoyable. People will play such games to be entertained, not to solve a problem - no matter how laudable the objective

1,057 citations


Journal Article•DOI•
TL;DR: For concurrent programming to become mainstream, threads must be discarded as a programming model; nondeterminism should be introduced judiciously and carefully where needed, and it should be explicit in programs.
Abstract: For concurrent programming to become mainstream, we must discard threads as a programming model. Nondeterminism should be judiciously and carefully introduced where needed, and it should be explicit in programs. In general-purpose software engineering practice, we have reached a point where one approach to concurrent programming dominates all others, namely threads: sequential processes that share memory. They represent a key concurrency model supported by modern computers, programming languages, and operating systems. In scientific computing, where performance requirements have long demanded concurrent programming, data-parallel language extensions and message-passing libraries such as PVM, MPI, and OpenMP dominate over threads for concurrent programming. Computer architectures intended for scientific computing often differ significantly from so-called general-purpose architectures.
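
As a rough illustration of this argument (a sketch of ours, not code from the article), the following Python snippet contrasts two threads interleaving updates to shared state, where the ordering is implicit and scheduler-dependent, with a message-passing style in which a single owner consumes explicit messages, so the remaining nondeterminism is confined to the visible arrival order on the queue:

import threading
import queue

# Shared-memory threads: the interleaving of appends is implicit and scheduler-dependent.
shared_log = []

def worker(name, n):
    for i in range(n):
        shared_log.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 5))
t2 = threading.Thread(target=worker, args=("B", 5))
t1.start(); t2.start(); t1.join(); t2.join()
print(shared_log)  # the merge order can differ from run to run

# Message passing: a single owner consumes explicit messages; the only nondeterminism
# left is the arrival order on the queue, which the program makes explicit.
msgs = queue.Queue()

def producer(name, n):
    for i in range(n):
        msgs.put((name, i))
    msgs.put((name, None))  # explicit end-of-stream marker

owner_log = []
p1 = threading.Thread(target=producer, args=("A", 5))
p2 = threading.Thread(target=producer, args=("B", 5))
p1.start(); p2.start()
finished = 0
while finished < 2:
    name, i = msgs.get()
    if i is None:
        finished += 1
    else:
        owner_log.append((name, i))
p1.join(); p2.join()
print(owner_log)

Here the state is owned by one consumer, and any nondeterminism is introduced deliberately at the queue rather than pervading every memory access.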

956 citations


Journal Article•DOI•
Marc Levoy•
TL;DR: A survey of the theory and practice of light field imaging emphasizes the devices researchers in computer graphics and computer vision have built to capture light fields photographically and the techniques they have developed to compute novel images from them.
Abstract: A survey of the theory and practice of light field imaging emphasizes the devices researchers in computer graphics and computer vision have built to capture light fields photographically and the techniques they have developed to compute novel images from them

615 citations


Journal Article•DOI•
TL;DR: This article focuses on the two most popular biometric techniques, fingerprints and iris scans, which are increasingly used as a hedge against identity theft.
Abstract: In this age of digital impersonation, biometric techniques are being used increasingly as a hedge against identity theft. The premise is that a biometric - a measurable physical characteristic or behavioral trait - is a more reliable indicator of identity than legacy systems such as passwords and PINs. There are three general ways to identify yourself to a computer system, based on what you know, what you have, or who you are. Biometrics belong to the "who you are" class and can be subdivided into behavioral and physiological approaches. Behavioral approaches include signature recognition, voice recognition, keystroke dynamics, and gait analysis. Physiological approaches include fingerprints; iris and retina scans; hand, finger, face, and ear geometry; hand vein and nail bed recognition; DNA; and palm prints. In this article, we focus on the two most popular biometric techniques: fingerprints and iris scans.

416 citations


Journal Article•DOI•
TL;DR: Although ongoing improvements will not eliminate most device limitations or alter the mobility context, they make it easier to create and experiment with alternative approaches, and building more sophisticated mobile visualizations becomes easier due to new, possibly standard, software APIs and increasingly powerful devices.
Abstract: Visualization can make a wide range of mobile applications more intuitive and productive. The mobility context and technical limitations such as small screen size make it impossible to simply port visualization applications from desktop computers to mobile devices, but researchers are starting to address these challenges. From a purely technical point of view, building more sophisticated mobile visualizations becomes easier due to new, possibly standard, software APIs such as OpenGL ES and increasingly powerful devices. Although ongoing improvements will not eliminate most device limitations or alter the mobility context, they make it easier to create and experiment with alternative approaches.

331 citations


Journal Article•DOI•
TL;DR: This work designed the smart camera as a fully embedded system, focusing on power consumption, QoS management, and limited resources, and combined several smart cameras to form a distributed embedded surveillance system that supports cooperation and communication among cameras.
Abstract: Recent advances in computing, communication, and sensor technology are pushing the development of many new applications. This trend is especially evident in pervasive computing, sensor networks, and embedded systems. Smart cameras, one example of this innovation, are equipped with a high-performance onboard computing and communication infrastructure, combining video sensing, processing, and communications in a single embedded device. By providing access to many views through cooperation among individual cameras, networks of embedded cameras can potentially support more complex and challenging applications - including smart rooms, surveillance, tracking, and motion analysis - than a single camera. We designed our smart camera as a fully embedded system, focusing on power consumption, QoS management, and limited resources. The camera is a scalable, embedded, high-performance, multiprocessor platform consisting of a network processor and a variable number of digital signal processors (DSPs). Using the implemented software framework, our embedded cameras offer system-level services such as dynamic load distribution and task reconfiguration. In addition, we combined several smart cameras to form a distributed embedded surveillance system that supports cooperation and communication among cameras.

302 citations


Journal Article•DOI•
TL;DR: Simulation is useful for evaluating protocol performance and operation; however, the lack of rigor with which it is applied threatens the credibility of published research within the MANET research community.
Abstract: Simulation is useful for evaluating protocol performance and operation. However, the lack of rigor with which it is applied threatens the credibility of the published research within the MANET research community. Mobile ad hoc networks (MANETs) allow rapid deployment because they don't depend on a fixed infrastructure. MANET nodes can participate as the source, the destination, or an intermediate router. This flexibility is attractive for military applications, disaster-response situations, and academic environments where fixed networking infrastructures might not be available.

235 citations


Journal Article•DOI•
TL;DR: It is shown that the adequacy of thin-client computing is highly variable and depends on both the application and available network quality, and that the combination of worst anticipated network quality and most tightly coupled tasks determines whether a thin-client approach is satisfactory for an organization.
Abstract: We describe an approach to quantifying the impact of network latency on interactive response and show that the adequacy of thin-client computing is highly variable and depends on both the application and available network quality. If near-ideal network conditions (low latency and high bandwidth) can be guaranteed, thin clients offer a good computing experience. As network quality degrades, interactive performance suffers. It is latency - not bandwidth - that is the greater challenge. Tightly coupled tasks such as graphics editing suffer more than loosely coupled tasks such as Web browsing. The combination of worst anticipated network quality and most tightly coupled tasks determines whether a thin-client approach is satisfactory for an organization.
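
A back-of-the-envelope sketch (with assumed, illustrative numbers rather than the article's measurements) shows why latency dominates: a tightly coupled operation that requires several synchronous round trips cannot respond faster than the number of round trips times the round-trip time, no matter how much bandwidth is available.

# Illustrative lower bound on interactive response for a tightly coupled task
# that needs several synchronous client-server round trips (numbers assumed).
def response_floor_ms(round_trips, rtt_ms):
    return round_trips * rtt_ms

for rtt_ms in (5, 50, 150):  # roughly LAN, cross-country, and intercontinental RTTs
    print(f"RTT {rtt_ms} ms -> at least {response_floor_ms(6, rtt_ms)} ms per operation")

At 150 ms of round-trip latency, six round trips already push a single interactive operation close to the one-second mark regardless of throughput.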

233 citations


Journal Article•DOI•
TL;DR: This work proposes a distributed navigation algorithm for emergency situations that quickly separates hazardous areas from safe areas so that the sensors can establish escape paths.
Abstract: In an emergency, wireless network sensors combined with a navigation algorithm could help safely guide people to a building exit while helping them avoid hazardous areas. We propose a distributed navigation algorithm for emergency situations. During normal operation, the sensors monitor the environment. When the sensors detect emergency events, our protocol quickly separates hazardous areas from safe areas, and the sensors establish escape paths. Simulation and implementation results show that our scheme achieves navigation safety and quick convergence of the navigation directions. We based our protocol on the temporally ordered routing algorithm (TORA) for mobile ad hoc networks. TORA assigns mobile nodes temporally ordered sequence numbers to support multipath routing from a source to a specific destination node.
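
The following Python sketch conveys the general idea in a highly simplified, centralized form; the article's protocol is distributed and TORA-based, and the ring topology, exit, and hazard below are assumptions made only for illustration. Exits act as distance-zero sinks, hazardous nodes are excluded from paths, and each sensor points toward a neighbor strictly closer to an exit.

from collections import deque

def escape_directions(neighbors, exits, hazards):
    # Breadth-first search from all exits over the graph with hazardous nodes
    # removed, so escape paths steer around danger.
    dist = {n: float("inf") for n in neighbors}
    dq = deque()
    for e in exits:
        dist[e] = 0
        dq.append(e)
    while dq:
        u = dq.popleft()
        for v in neighbors[u]:
            if v not in hazards and dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                dq.append(v)
    # Each reachable node escapes toward its neighbor closest to an exit;
    # hazardous or cut-off nodes get no direction in this simplified sketch.
    directions = {}
    for n in neighbors:
        if n in exits:
            directions[n] = n
        elif n not in hazards and dist[n] != float("inf"):
            directions[n] = min(neighbors[n], key=lambda v: dist[v])
        else:
            directions[n] = None
    return directions

# Six sensors in a ring, an exit at node 0, and a hazard detected at node 1.
ring = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
print(escape_directions(ring, exits={0}, hazards={1}))
# Nodes 2 and 3 are routed the long way around the hazard, reaching node 0 via 4 and 5.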

233 citations


Journal Article•DOI•
TL;DR: SOC reinvents the way enterprises work together: common tasks in a business process or supply chain can be easily outsourced to external service providers for both performance and cost reasons.
Abstract: Service-oriented computing using Web services has emerged as a major research topic in recent years. Strong support from major computer companies including IBM, Microsoft, Hewlett-Packard, Oracle, and SAP has accelerated the acceptance and adoption of SOC using Web services. With SOC, enterprises can define and execute transactions across multiple businesses. Companies can use SOC's plug-and-play interoperability to compose business processes and integrate different information systems on the fly to enable ad hoc cooperation between new partners. Thus, SOC reinvents the way enterprises work together: common tasks in a business process or supply chain can be easily outsourced to external service providers for both performance and cost reasons.

Journal Article•DOI•
TL;DR: Model-driven development is an emerging paradigm that solves numerous problems associated with the composition and integration of large-scale systems while leveraging advances in software development technologies such as component-based middleware.
Abstract: Historically, software development methodologies have focused more on improving tools for system development than on developing tools that assist with system composition and integration. Component-based middleware like Enterprise JavaBeans (EJB), Microsoft .NET, and the CORBA Component Model (CCM) has helped improve software reusability through component abstraction. However, as developers have adopted these commercial off-the-shelf technologies, a wide gap has emerged between the availability and sophistication of standard software development tools like compilers and debuggers, and the tools that developers use to compose, analyze, and test a complete system or system of systems. As a result, developers continue to accomplish system integration using ad hoc methods without the support of automated tools. Model-driven development is an emerging paradigm that solves numerous problems associated with the composition and integration of large-scale systems while leveraging advances in software development technologies such as component-based middleware. MDD elevates software development to a higher level of abstraction than is possible with third-generation programming languages.

Journal Article•DOI•
TL;DR: The Object Management Group initiated the Unified Modeling Language 2.0 effort to address significant problems in earlier versions, but its size and complexity can present a problem to users, tool developers, and working groups charged with evolving the standard.
Abstract: Experience indicates that effective complexity management mechanisms automate mundane development tasks and provide strong support for separation of concerns. For example, current high-level programming languages and integrated development environments provide abstractions that shield developers from intricate lower-level details and offer automated support for transforming abstract representations of source code into faithful machine-executable forms. The Object Management Group initiated the Unified Modeling Language 2.0 effort to address significant problems in earlier versions. While UML 2.0 improves over earlier versions in some aspects, its size and complexity can present a problem to users, tool developers, and OMG working groups charged with evolving the standard.

Journal Article•
TL;DR: In this article, a method for inhomogeneous 2D texture mapping guided by a feature mask that preserves some regions of the image, such as foreground objects or other prominent parts, is presented.
Abstract: We present a method for inhomogeneous 2D texture mapping guided by a feature mask that preserves some regions of the image, such as foreground objects or other prominent parts. The method is able to arbitrarily warp a given image while preserving the shape of its features by constraining their deformation to be a similarity transformation. In particular, our method allows global or local changes to the aspect ratio of the texture without causing undesirable shearing to the features. The algorithmic core of our method is a particular formulation of the Laplacian editing technique, suited to accommodate similarity constraints on parts of the domain. The method is useful in digital imaging, texture design, and other applications involving image warping where parts of the image have high familiarity and should retain their shape after modification.
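
For readers unfamiliar with the formulation, constrained Laplacian editing of this kind is commonly written as a least-squares problem along the following lines (a generic sketch in LaTeX notation, not necessarily the authors' exact energy):

\min_{V'} \; \sum_i \big\| L(\mathbf{v}'_i) - \boldsymbol{\delta}_i \big\|^2 \;+\; \lambda \sum_k \sum_{j \in F_k} \big\| \mathbf{v}'_j - (s_k R_k \mathbf{v}_j + \mathbf{t}_k) \big\|^2

where L is the discrete Laplacian of the texture-domain mesh, the \boldsymbol{\delta}_i are the differential coordinates of the original vertex positions \mathbf{v}_i, and each masked feature region F_k may move only by a single similarity transform (uniform scale s_k, rotation R_k, translation \mathbf{t}_k) whose parameters are solved for together with the new positions V'; the features can therefore rotate, translate, and scale but not shear.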

Journal Article•DOI•
Shree K. Nayar•
TL;DR: Using a controllable optical system to form the image and a programmable light source as the camera's flash can further enhance the capabilities of these cameras.
Abstract: Computational cameras use unconventional optics and software to produce new forms of visual information, including wide field-of-view images, high dynamic range images, multispectral images, and depth images. Using a controllable optical system to form the image and a programmable light source as the camera's flash can further enhance the capabilities of these cameras.

Journal Article•DOI•
TL;DR: A simple extension of item-level RFID in supermarkets would embed RFID devices in consumers' loyalty or frequent-shopper cards to identify individuals and charge the shopping cost directly to the customer's account, a scenario that raises numerous privacy concerns.
Abstract: The past two years have witnessed an explosion of interest in radio-frequency identification and supporting technologies, due primarily to their rapidly expanding use in tracking grocery products through the supply chain. Currently such applications monitor stock-keeping units (SKUs) rather than individual goods, as the relatively high cost of RFID deployment and the very low profit margin of supermarket products make item-level tagging impractical. Yet, economic and technical concerns aside, it is easy to envision a supermarket in which each item is tagged with an RFID label and all shopping carts feature RFID readers. The carts could potentially include onboard computers that recognize products placed inside and that display information and promotions retrieved wirelessly from the system back end. RFID-enabled smart phones, which are commercially available today and becoming increasingly popular, could carry out the same function. Item-level deployment of RFID technology would also allow for quick checkout aisles that scan all products at once and thus eliminate queues, which are consistently reported as one of the most negative aspects of supermarket shopping. A simple extension of this system would be to embed RFID devices in consumers' loyalty or frequent-shopper cards to identify individuals. This could expedite system login and charge the shopping cost directly to the customer's account at the point of sale. Unless removed at the POS, item-level tags will inevitably follow the consumer home. This scenario undoubtedly raises numerous privacy concerns.

Journal Article•DOI•
TL;DR: The closed-world assumption behind traditional software development no longer holds in today's unpredictable open-world settings, which demand techniques that let software react to changes by self-organizing its structure and self-adapting its behavior.
Abstract: Traditional software development is based on the closed-world assumption that the boundary between system and environment is known and unchanging. However, this assumption no longer works within today's unpredictable open-world settings, which demand techniques that let software react to changes by self-organizing its structure and self-adapting its behavior.

Journal Article•DOI•
TL;DR: Singularity, the most radical approach, uses a type-safe language, a single address space, and formal contracts to carefully limit what each module can do in the microkernel.
Abstract: Microkernels long discarded as unacceptable because of their lower performance compared with monolithic kernels might be making a comeback in operating systems due to their potentially higher reliability, which many researchers now regard as more important than performance. Each of the four different attempts to improve operating system reliability focuses on preventing buggy device drivers from crashing the system. In the Nooks approach, each driver is individually hand wrapped in a software jacket to carefully control its interactions with the rest of the operating system, but it leaves all the drivers in the kernel. The paravirtual machine approach takes this one step further and moves the drivers to one or more machines distinct from the main one, taking away even more power from the drivers. Both of these approaches are intended to improve the reliability of existing (legacy) operating systems. In contrast, two other approaches replace legacy operating systems with more reliable and secure ones. The multiserver approach runs each driver and operating system component in a separate user process and allows them to communicate using the microkernel's IPC mechanism. Finally, Singularity, the most radical approach, uses a type-safe language, a single address space, and formal contracts to carefully limit what each module can do.

Journal Article•DOI•
TL;DR: Given the power-law distribution of problem sizes, the authors argue that about half of funding agency resources should be spent on tier-1 centers at the petascale level and the other half dedicated to tier-2 and tier-3 centers on a cost-sharing basis.
Abstract: A balanced cyberinfrastructure is necessary to meet growing data-intensitive scientific needs. We believe that available resources should be allocated to benefit the broadest cross-section of the scientific community. Given the power-law distribution of problem sizes, this means that about half of funding agency resources should be spent on tier-1 centers at the petascale level and the other half dedicated to tier-2 and tier-3 centers on a cost-sharing basis. Funding agencies should support balanced systems, not just CPU farms, as well as petascale IO and networking. They should also allocate resources for a balanced tier-1 through tier-3 cyberinfrastructure.

Journal Article•DOI•
TL;DR: A proposed four-phase design flow assists with computations by transforming a quantum algorithm from a high-level language program into precisely scheduled physical actions.
Abstract: Compilers and computer-aided design tools are essential for fine-grained control of nanoscale quantum-mechanical systems. A proposed four-phase design flow assists with computations by transforming a quantum algorithm from a high-level language program into precisely scheduled physical actions.

Journal Article•DOI•
TL;DR: Given the shortage of caregivers and the increase in an aging US population, the future of US healthcare quality does not look promising and definitely is unlikely to be cheaper.
Abstract: Given the shortage of caregivers and the increase in an aging US population, the future of US healthcare quality does not look promising and definitely is unlikely to be cheaper. Advances in health information systems and healthcare technology offer a tremendous opportunity for improving the quality of care while reducing costs. The development and production of medical device software and systems is a crucial issue, both for the US economy and for ensuring safe advances in healthcare delivery. As devices become increasingly smaller in physical terms but larger in software terms, the design, testing, and eventual Food and Drug Administration (FDA) device approval are becoming much more expensive for medical device manufacturers in terms of both time and cost. Furthermore, the number of devices that have recently been recalled due to software and hardware problems is increasing at an alarming rate. As medical devices become increasingly networked, ensuring even the same level of safety is a challenge.

Journal Article•DOI•
TL;DR: Developers must understand the trade-offs when designing reliable embedded systems, as many techniques to improve reliability can incur performance, energy, or cost penalties.
Abstract: Embedded computing systems have become a pervasive part of daily life, used for tasks ranging from providing entertainment to assisting the functioning of key human organs. As technology scales, designing a dependable embedded system atop a less reliable hardware platform poses great challenges for designers. Cost and energy sensitivity, as well as real-time constraints, make some fault-tolerant techniques unviable for embedded system design. Many techniques to improve reliability can incur performance, energy, or cost penalties. Further, some solutions targeted at a specific failure mechanism could negatively affect other mechanisms. For example, lowering operational voltage can help mitigate thermal problems but increases vulnerability to soft errors. Developers must understand the trade-offs when designing reliable embedded systems.
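
As one classic illustration of such a trade-off (an example of ours, not a technique discussed in the article), software-level triple modular redundancy masks a single faulty result by paying roughly a threefold cost in time and energy; a minimal Python sketch:

def run_with_tmr(compute, x):
    # Triple modular redundancy in software: execute the computation three times
    # and return the majority result, masking one faulty execution at roughly
    # triple the time and energy cost. (If all three disagree, no majority exists.)
    results = [compute(x) for _ in range(3)]
    return max(set(results), key=results.count)

print(run_with_tmr(lambda v: v * v, 7))  # 49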

Journal Article•DOI•
TL;DR: The University of Oldenburg's TrustSoft Graduate School aims to provide a holistic view of trustworthiness in software - one that considers system construction, evaluation/analysis, and certification - in an interdisciplinary setting and integrates the disciplines of computer science and computer law.
Abstract: Industry initiatives such as the Microsoft-backed Trusted Computing Group and the Sun Microsystems-led Liberty Alliance are currently leading the debate on "trustworthy computing." However, these and other initiatives primarily focus on security, and trustworthiness depends on many other attributes. To address this problem, the University of Oldenburg's TrustSoft Graduate School aims to provide a holistic view of trustworthiness in software - one that considers system construction, evaluation/analysis, and certification - in an interdisciplinary setting. Component technology is the foundation of our research program. The choice of a component architecture greatly influences the resulting software systems' nonfunctional properties. We are developing new methods for the rigorous design of trustworthy software systems with predictable, provable, and ultimately legally certifiable system properties. We are well aware that it is impossible to build completely error-free complex software systems. We therefore complement fault-prevention and fault-removal techniques with fault-tolerance methods that introduce redundancy and diversity into software systems. Quantifiable attributes such as availability, reliability, and performance call for analytical prediction models, which require empirical studies for calibration and validation. To consider the legal aspects of software certification and liability, TrustSoft integrates the disciplines of computer science and computer law.

Journal Article•DOI•
TL;DR: While the DARPA Grand Challenge has revitalized interest in intelligent highway systems, autonomous vehicles, and sensing technology, a host of other novel issues afford interesting design and computer-engineering challenges for the future.
Abstract: While the DARPA Grand Challenge has revitalized interest in intelligent highway systems, autonomous vehicles, and sensing technology, a host of other novel issues afford interesting design and computer-engineering challenges for the future.

Journal Article•DOI•
TL;DR: Services science is interested in both relatively simple service businesses such as fast-food restaurants and more sophisticated operations such as healthcare companies, and calls on the resources of social sciences such as psychology and sociology, as well as anthropology, to provide useful information about the way people and groups work and interact.
Abstract: Agriculture and manufacturing used to be the major elements of the modern world's economies. Now, services are a critical element, a trend also affecting the developing world. Universities throughout the world - most notably in North America, Europe, and Australia - are offering courses and graduate-level certification in services science, with the long-term goal of establishing degree programs. Services science is interested in both relatively simple service businesses such as fast-food restaurants and more sophisticated operations such as healthcare companies. Businesses could use technologies such as knowledge management and data mining to get targeted analytical information they can use to evaluate their operations. Companies can utilize technology to find patterns in the way they have successfully delivered services and interacted with customers. The companies could then repeat those patterns with multiple customers. Although technology is a key element of services science, a better understanding of human behavior is critical. The field calls on the resources of social sciences such as psychology and sociology, as well as anthropology, which could provide useful information about the way people and groups work and interact. Understanding these factors is an important aspect of services science. Businesses are also employing services science principles in their operations

Journal Article•DOI•
TL;DR: A verifiable set of well-chosen coding rules could, however, assist in analyzing critical software components for properties that go well beyond compliance with the set of rules itself.
Abstract: Existing coding guidelines offer limited benefit, even for critical applications. A verifiable set of well-chosen coding rules could, however, assist in analyzing critical software components for properties that go well beyond compliance with the set of rules itself. To be effective, though, the set of rules must be small, and it must be clear enough that users can easily understand and remember it. In addition, the rules must be specific enough that users can check them thoroughly and mechanically. To put an upper bound on the number of rules, the set is restricted to no more than 10 rules, which together provide an effective guideline. Although such a small set of rules cannot be all-encompassing, following it can achieve measurable effects on software reliability and verifiability.

Journal Article•DOI•
TL;DR: The authors revisit their ten maxims to answer how formal methods commandments fared over the past decade and whether they are still valid in the current industrial setting.
Abstract: How have the formal methods commandments fared over the past decade? Are they still valid in the current industrial setting, and have attitudes toward formal methods improved? The authors revisit their ten maxims to answer these questions.

Journal Article•DOI•
TL;DR: A proposed conceptual framework for analyzing Web services interoperability issues provides a context for studying existing standards and specifications and for identifying new opportunities to provide automated support for this technology.
Abstract: A proposed conceptual framework for analyzing Web services interoperability issues provides a context for studying existing standards and specifications and for identifying new opportunities to provide automated support for this technology. Web services are becoming the technology of choice for realizing service-oriented architectures (SOAs). Web services simplify interoperability and, therefore, application integration. They provide a means for wrapping existing applications so developers can access them through standard languages and protocols.

Journal Article•DOI•
TL;DR: The Global Environment for Network Innovations is a major planned initiative of the US National Science Foundation to build an open, large-scale, realistic experimental facility for evaluating new network architectures to change the way networked and distributed systems are designed.
Abstract: The Global Environment for Network Innovations is a major planned initiative of the US National Science Foundation to build an open, large-scale, realistic experimental facility for evaluating new network architectures. The facility's goal is to change the way we design networked and distributed systems, creating over time new paradigms that integrate rigorous theoretical understanding with compelling and thorough experimental validation. The research that GENI enables can lead to a future Internet that is more secure, available, manageable, and efficient, and better at handling mobile nodes. GENI is intended to support two general kinds of activities: running controlled experiments to evaluate design, implementation, and engineering choices; and deploying prototype systems and learning from observations of how they behave under real usage

Journal Article•DOI•
TL;DR: Near-field communication (NFC) could be used in many ways, including merchandise and service payments, event ticketing, and facility- and computer-access control.
Abstract: Near-field communication (NFC) is a new wireless technology that could unite various standards and proprietary technologies found in the millions of standalone contactless cards. Contactless technology lets users pay for transactions by simply holding cards close to, rather than swiping them through, a reader. NFC is a short-range wireless technology that lets devices communicate when in close proximity. The technology allows for the development of devices, including mobile phones, that can be used like contactless cards. A shorter transmission range and slower data rates distinguish NFC from other short-range wireless technologies such as Bluetooth, radio-frequency identification (RFID), and Wi-Fi. NFC could be used in many ways, including merchandise and service payments, event ticketing, and facility- and computer-access control. The technology could even enter information from a buyer's NFC phone into a suitably equipped PC for e-commerce transactions.