
Showing papers in "Communications of The ACM in 1997"


Journal ArticleDOI
TL;DR: This special section includes descriptions of five recommender systems, in which people provide recommendations as inputs that the system aggregates and directs to appropriate recipients; some of the systems also combine evaluations with content analysis.
Abstract: Recommender systems assist and augment this natural social process. In a typical recommender system people provide recommendations as inputs, which the system then aggregates and directs to appropriate recipients. In some cases the primary transformation is in the aggregation; in others the system’s value lies in its ability to make good matches between the recommenders and those seeking recommendations. The developers of the first recommender system, Tapestry [1], coined the phrase “collaborative filtering” and several others have adopted it. We prefer the more general term “recommender system” for two reasons. First, recommenders may not explicitly collaborate with recipients, who may be unknown to each other. Second, recommendations may suggest particularly interesting items, in addition to indicating those that should be filtered out. This special section includes descriptions of five recommender systems. A sixth article analyzes incentives for provision of recommendations. Figure 1 places the systems in a technical design space defined by five dimensions. First, the contents of an evaluation can be anything from a single bit (recommended or not) to unstructured textual annotations. Second, recommendations may be entered explicitly, but several systems gather implicit evaluations: GroupLens monitors users’ reading times; PHOAKS mines Usenet articles for mentions of URLs; and Siteseer mines personal bookmark lists. Third, recommendations may be anonymous, tagged with the source’s identity, or tagged with a pseudonym. The fourth dimension, and one of the richest areas for exploration, is how to aggregate evaluations. GroupLens, PHOAKS, and Siteseer employ variants on weighted voting. Fab takes that one step further to combine evaluations with content analysis. ReferralWeb combines suggested links between people to form longer referral chains. Finally, the (perhaps aggregated) evaluations may be used in several ways: negative recommendations may be filtered out, the items may be sorted according to numeric evaluations, or evaluations may accompany items in a display. Figures 2 and 3 identify dimensions of the domain space: the kinds of items being recommended and the people among whom evaluations are shared. Consider, first, the domain of items. The sheer volume is an important variable: detailed textual reviews of restaurants or movies may be practical, but applying the same approach to thousands of daily Netnews messages would not. Ephemeral media such as netnews (most news servers throw away articles after one or two weeks) place a premium on gathering and distributing evaluations quickly, while evaluations for 19th century books can be gathered at a more leisurely pace. The last dimension describes the cost structure of choices people make about the items. Is it very costly to miss … It is often necessary to make choices without sufficient personal experience of the alternatives. In everyday life, we rely on

3,993 citations
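The aggregation dimension above is easiest to see with a small sketch. The following Python fragment is illustrative only (none of the systems in the special section is implemented this way, and the names, weights, and scores are invented): each recommender's evaluation of an item is weighted by how much the recipient trusts that recommender, and items are ranked by their aggregate score.

    # Hedged sketch: weighted-vote aggregation of recommendations.
    # Evaluations are (recommender, item, score) triples; weights reflect how
    # much each recommender's opinion counts for the person seeking advice.
    from collections import defaultdict

    def aggregate(evaluations, weights):
        """Return items sorted by weighted average score (highest first)."""
        totals = defaultdict(float)   # sum of weight * score per item
        norms = defaultdict(float)    # sum of weights per item
        for recommender, item, score in evaluations:
            w = weights.get(recommender, 1.0)
            totals[item] += w * score
            norms[item] += w
        return sorted(((totals[i] / norms[i], i) for i in totals), reverse=True)

    if __name__ == "__main__":
        evals = [("alice", "restaurant-A", 1.0), ("bob", "restaurant-A", 0.0),
                 ("bob", "restaurant-B", 1.0), ("carol", "restaurant-B", 1.0)]
        print(aggregate(evals, weights={"alice": 2.0, "bob": 1.0, "carol": 1.0}))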


Journal ArticleDOI
TL;DR: It is explained how a hybrid system can incorporate the advantages of both methods while inheriting the disadvantages of neither, and how the particular design of the Fab architecture brings two additional benefits.
Abstract: The problem of recommending items from some fixed database has been studied extensively, and two main paradigms have emerged. In content-based recommendation one tries to recommend items similar to those a given user has liked in the past, whereas in collaborative recommendation one identifies users whose tastes are similar to those of the given user and recommends items they have liked. Our approach in Fab has been to combine these two methods. Here, we explain how a hybrid system can incorporate the advantages of both methods while inheriting the disadvantages of neither. In addition to what one might call the “generic advantages” inherent in any hybrid system, the particular design of the Fab architecture brings two additional benefits. First, two scaling problems common to all Web services are addressed—an increasing number of users and an increasing number of documents. Second, the system automatically identifies emergent communities of interest in the user population, enabling enhanced group awareness and communications. Here we describe the two approaches for content-based and collaborative recommendation, explain how a hybrid system can be created, and then describe Fab, an implementation of such a system. For more details on both the implemented architecture and the experimental design the reader is referred to [1]. The content-based approach to recommendation has its roots in the information retrieval (IR) community, and employs many of the same techniques. Text documents are recommended based on a comparison between their content and a user profile. Data

3,175 citations
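A minimal sketch of the hybrid idea described above, assuming term-weight user profiles and a user-to-document rating table (the function names, sample data, and equal-weight blend are illustrative assumptions, not Fab's actual architecture): the content-based part scores a document against the user's own profile, and the collaborative part weights other users' ratings by profile similarity.

    # Hedged sketch of a hybrid recommender in the spirit described above
    # (not Fab's code): content-based scoring against a user profile, blended
    # with ratings from users whose profiles are similar.
    import math

    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def hybrid_score(user, doc_id, doc_terms, profiles, ratings, alpha=0.5):
        # Content-based part: similarity between document and user's profile.
        content = cosine(profiles[user], doc_terms)
        # Collaborative part: other users' ratings, weighted by profile similarity.
        num = den = 0.0
        for other, rated in ratings.items():
            if other == user or doc_id not in rated:
                continue
            sim = cosine(profiles[user], profiles[other])
            num += sim * rated[doc_id]
            den += abs(sim)
        collab = num / den if den else 0.0
        return alpha * content + (1 - alpha) * collab

    profiles = {"u1": {"python": 1.0, "ml": 0.5}, "u2": {"python": 0.8, "ml": 0.7}}
    ratings = {"u2": {"docA": 1.0}}
    print(hybrid_score("u1", "docA", {"python": 0.9, "web": 0.3}, profiles, ratings))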


Journal ArticleDOI
TL;DR: The combination of high volume and personal taste made Usenet news a promising candidate for collaborative filtering, and the potential predictive utility for Usenet news was very high.
Abstract: newsgroups carry a wide enough spread of messages to make most individuals consider Usenet news to be a high noise information resource. Furthermore, each user values a different set of messages. Both taste and prior knowledge are major factors in evaluating news articles. For example, readers of the rec.humor newsgroup, a group designed for jokes and other humorous postings, value articles based on whether they perceive them to be funny. Readers of technical groups, such as comp.lang.c++, value articles based on interest and usefulness to them—introductory questions and answers may be uninteresting to an expert C++ programmer just as debates over subtle and advanced language features may be useless to the novice. The combination of high volume and personal taste made Usenet news a promising candidate for collaborative filtering. More formally, we determined the potential predictive utility for Usenet news was very high. The GroupLens project started in 1992 and completed a pilot study at two sites to establish the feasibility of using collaborative filtering for Usenet news [8]. Several critical design decisions were made as part of that pilot study, including:

2,657 citations
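The abstract does not spell out the prediction step, so the sketch below uses a common collaborative-filtering formulation associated with this line of work, hedged as an assumption rather than the article's algorithm: predict a user's rating of an article as that user's mean rating plus a correlation-weighted average of other users' deviations from their own means. The ratings, names, and 1-to-5 scale are made up.

    # Hedged sketch of correlation-weighted rating prediction.
    # Ratings: user -> {article: score on a 1..5 scale}.
    from statistics import mean

    def pearson(ra, rb):
        common = set(ra) & set(rb)
        if len(common) < 2:
            return 0.0
        ma, mb = mean(ra[i] for i in common), mean(rb[i] for i in common)
        num = sum((ra[i] - ma) * (rb[i] - mb) for i in common)
        da = sum((ra[i] - ma) ** 2 for i in common) ** 0.5
        db = sum((rb[i] - mb) ** 2 for i in common) ** 0.5
        return num / (da * db) if da and db else 0.0

    def predict(user, article, ratings):
        mu = mean(ratings[user].values())
        num = den = 0.0
        for other, r in ratings.items():
            if other == user or article not in r:
                continue
            w = pearson(ratings[user], r)
            num += w * (r[article] - mean(r.values()))
            den += abs(w)
        if den == 0:
            return mu
        return mu + num / den

    ratings = {"ann": {"a1": 5, "a2": 1, "a3": 4},
               "bob": {"a1": 4, "a2": 2, "a3": 5, "a4": 5},
               "cal": {"a1": 1, "a2": 5, "a4": 2}}
    print(predict("ann", "a4", ratings))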


Journal ArticleDOI
TL;DR: A new study reveals businesses are defining data quality with the consumer in mind; within the larger context of information systems, data is collected from multiple data sources, stored in databases, and used to generate information for organizational decision-making.
Abstract: Data-quality (DQ) problems are increasingly evident, particularly in organizational databases. Indeed, 50% to 80% of computerized criminal records in the U.S. were found to be inaccurate, incomplete, or ambiguous. The social and economic impact of poor-quality data costs billions of dollars [5-7, 10]. Organizational databases, however, reside in the larger context of information systems (IS). Within this larger context, data is collected from multiple data sources and stored in databases. From this stored data, useful information is generated for organizational decision-making. A new study reveals businesses are defining data quality with the consumer in mind.

1,296 citations


Journal ArticleDOI
TL;DR: ReferralWeb as mentioned in this paper is an interactive system for reconstructing, visualizing, and searching social networks on the World Wide Web, which is based on the six degrees of separation phenomenon.
Abstract: Part of the success of social networks can be attributed to the “six degrees of separation” phenomenon, which means the distance between any two individuals in terms of direct personal relationships is relatively small. An equally important factor is that there are limits to the amount and kinds of information a person is able or willing to make available to the public at large. For example, an expert in a particular field is almost certainly unable to write down all he or she knows about the topic, and is likely to be unwilling to make letters of recommendation he or she has written for various people publicly available. Thus, searching for a piece of information in this situation becomes a matter of searching the social network for an expert on the topic together with a chain of personal referrals from the searcher to the expert. The referral chain serves two key functions: It provides a reason for the expert to agree to respond to the requester by making their relationship explicit (for example, they have a mutual collaborator), and it provides a criterion for the searcher to use in evaluating the trustworthiness of the expert. Nonetheless, manually searching for a referral chain can be a frustrating and time-consuming task. One is faced with the trade-off of contacting a large number of individuals at each step, and thus straining both the time and goodwill of the possible respondents, or of contacting a smaller, more focused set, and being more likely to fail to locate an appropriate expert. In response to these problems we are building ReferralWeb, an interactive system for reconstructing, visualizing, and searching social networks on the World-Wide Web. Simulation experiments we ran before we began construction of ReferralWeb showed that automatically generated referrals can be highly

1,094 citations
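Once a social network has been reconstructed as a graph, finding a referral chain reduces to a shortest-path search from the searcher to someone who matches the expertise query. The sketch below is a generic breadth-first search, not ReferralWeb's implementation; the graph, the expertise table, and the query are invented.

    # Hedged illustration: a referral chain as a shortest path in a social graph.
    from collections import deque

    def referral_chain(graph, searcher, is_expert):
        """graph: name -> set of names with a direct personal relationship."""
        parent = {searcher: None}
        queue = deque([searcher])
        while queue:
            person = queue.popleft()
            if is_expert(person) and person != searcher:
                chain = []
                while person is not None:     # walk back to the searcher
                    chain.append(person)
                    person = parent[person]
                return list(reversed(chain))
            for neighbor in graph.get(person, ()):
                if neighbor not in parent:
                    parent[neighbor] = person
                    queue.append(neighbor)
        return None  # no expert reachable through the network

    graph = {"me": {"ana"}, "ana": {"me", "raj"}, "raj": {"ana", "lee"}, "lee": {"raj"}}
    expertise = {"lee": {"simulated annealing"}}
    print(referral_chain(graph, "me",
                         lambda p: "simulated annealing" in expertise.get(p, ())))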


Journal ArticleDOI
TL;DR: A body of work on computational immune systems that behave analogously to the natural immune system, and that in some cases have been used to solve practical engineering problems such as computer security, is described.
Abstract: This review describes a body of work on computational immune systems that behave analogously to the natural immune system. These artificial immune systems (AIS) simulate the behavior of the natural immune system and in some cases have been used to solve practical engineering problems such as computer security. AIS have several strengths that can complement wet lab immunology. It is easier to conduct simulation experiments and to vary experimental conditions, for example, to rule out hypotheses; it is easier to isolate a single mechanism to test hypotheses about how it functions; agent-based models of the immune system can integrate data from several different experiments into a single in silico experimental system.

1,021 citations
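One frequently cited technique from this body of work is negative selection, used for anomaly and intrusion detection: random detectors are generated, any detector that matches normal ("self") patterns is discarded, and the survivors flag unfamiliar patterns. The sketch below is illustrative only; the matching rule, string encoding, and parameters are assumptions, not taken from the article.

    # Hedged sketch of negative selection: generate random detectors, discard
    # any that match "self" patterns, then flag patterns matched by survivors.
    import random

    def matches(detector, pattern, r=3):
        """r-contiguous-bits rule: match if any r consecutive positions agree."""
        run = 0
        for d, p in zip(detector, pattern):
            run = run + 1 if d == p else 0
            if run >= r:
                return True
        return False

    def train(self_set, length=8, n_detectors=50, seed=0):
        rng = random.Random(seed)
        detectors = []
        while len(detectors) < n_detectors:
            cand = "".join(rng.choice("01") for _ in range(length))
            if not any(matches(cand, s) for s in self_set):  # negative selection
                detectors.append(cand)
        return detectors

    self_set = {"00000000", "00001111"}          # "normal" behavior (toy data)
    detectors = train(self_set)
    print(any(matches(d, "11110000") for d in detectors))  # likely flagged anomalous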


Journal ArticleDOI
TL;DR: Object-oriented (OO) application frameworks are a promising technology for reifying proven software designs and implementations in order to reduce the cost and improve the quality of software.
Abstract: Computing power and network bandwidth have increased dramatically over the past decade. However, the design and implementation of complex software remains expensive and error-prone. Much of the cost and effort stems from the continuous rediscovery and re-invention of core concepts and components across the software industry. In particular, the growing heterogeneity of hardware architectures and diversity of operating system and communication platforms makes it hard to build correct, portable, efficient, and inexpensive applications from scratch. Object-oriented (OO) application frameworks are a promising technology for reifying proven software designs and implementations in order to reduce the cost and improve the quality of software. A framework is a reusable, “semi-complete” application that can be specialized to produce custom applications [Johnson:88]. In contrast to earlier OO reuse techniques based on class libraries, frameworks are targeted for particular business units (such as data processing or cellular communications) and application domains (such as user interfaces or real-time avionics). Frameworks like MacApp, ET++, InterViews, ACE, Microsoft's MFC and DCOM, JavaSoft's RMI, and implementations of OMG's CORBA play an increasingly important role in contemporary software development.

857 citations


Journal ArticleDOI
Peter Wegner

791 citations


Journal ArticleDOI
TL;DR: The notion of a worldwide computer, now taking shape through the Legion project, distributes computation like the World-Wide Web distributes multimedia, creating the illusion for users of a very, very powerful desktop computer.
Abstract: Long a vision of science fiction writers and distributed systems researchers, the notion of a worldwide computer, now taking shape through the Legion project, distributes computation like the World-Wide Web distributes multimedia, creating the illusion for users of a very, very powerful desktop computer. Today’s dramatic increase in available network bandwidth will qualitatively change how the world computes, communicates, and collaborates. The rapid expansion of the World-Wide Web and the changes it has wrought are just the beginning. As high-bandwidth connections become available, they shrink distances and change our modes of computation, storage, and interaction. Inevitably, users will operate in a wide-area environment transparently consisting of workstations, PCs, graphics-rendering engines, supercomputers, and nontraditional computing devices, such as televisions. The relative physical locations of users and their resources are increasingly irrelevant. Realization of such an environment, sometimes called a “metasystem,” is not without problems. Today’s experimental high-speed networks, such as the Very

768 citations


Journal ArticleDOI
TL;DR: A historical perspective is provided; “community” is defined as a social network; studies of how computer-mediated communication (CMC) affects community interaction are summarized; examples of different kinds of communities communicating through the Internet are surveyed; and look at asynchronous learning networks (ALNs) as an example of an online community.
Abstract: Computer-mediated communication can enable people with shared interests to form and sustain relationships and communities. Compared to communities offline, computer-supported communities tend to be larger, more dispersed in space and time, more densely knit, and to have members with more heterogeneous social characteristics but with more homogeneous attitudes. Despite earlier fears to the contrary by those who worry about the possible dehumanizing effects of computers, online communities provide emotional support and sociability as well as information and instrumental aid related to shared tasks. Online virtual classrooms combine the characteristics of online communities and computer-supported workgroups. New software tools and systems for coordinating interaction may alleviate some of the problems of interacting online, like information overload and normless behavior. What kinds of communities are most suited to the virtual environment of computer networks? How does the medium affect interaction in online communities and the types of social structures emerging in postindustrial societies, like North America and Europe? To address these questions, we provide historical perspective; define “community” as a social network; summarize studies of how computer-mediated communication (CMC) affects community interaction; survey examples of different kinds of communities communicating through the Internet; and look at asynchronous learning networks (ALNs) as an example of an online community. With development of computer networks, the standalone computer was transformed into a technology that sustains the social networks of work and community. However, some of the debates about the nature of the Internet have continued the longstanding exchange between computerphiles and computerphobes. For example, John Perry Barlow, cofounder of the Electronic Frontier Foundation, has proclaimed, “With the development of the Internet and with the increasing pervasiveness of communication between networked computers, we are in the middle of the most transforming technological event since the capture of fire” [1, p. 40]. On the other side, an ad for Mark Slouka’s 1995 book War of the Worlds warned, “Face-to-face communication is

657 citations



Journal ArticleDOI
TL;DR: The feasibility of automatic recognition of recommendations is supported by empirical results; some resources are recommended by more than one person, and these multiconfirmed recommendations appear to be significant resources for the relevant community.
Abstract: The feasibility of automatic recognition of recommendations is supported by empirical results. First, Usenet messages are a significant source of recommendations of Web resources: 23% of Usenet messages mention Web resources, and about 30% of these mentions are recommendations. Second, recommendation instances can be machine-recognized with nearly 90% accuracy. Third, some resources are recommended by more than one person. These multiconfirmed recommendations appear to be significant resources for the relevant community. Finally, the number of distinct recommenders of a resource is a
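In the spirit of the mining step described above, the sketch below extracts URL mentions from messages, counts one vote per distinct poster per resource, and ranks resources by the number of distinct recommenders. It is illustrative only: it skips the categorization step that separates genuine recommendations from mere mentions, and the regular expression and sample messages are assumptions, not PHOAKS itself.

    # Hedged sketch: mine messages for URL mentions, keep one vote per distinct
    # poster per resource, and rank resources by number of distinct recommenders.
    import re
    from collections import defaultdict

    URL_RE = re.compile(r"https?://\S+")

    def rank_resources(messages):
        """messages: list of (poster, text) pairs from a newsgroup."""
        recommenders = defaultdict(set)
        for poster, text in messages:
            for url in URL_RE.findall(text):
                recommenders[url.rstrip(".,)")].add(poster)
        return sorted(recommenders.items(), key=lambda kv: len(kv[1]), reverse=True)

    messages = [("ann@a", "Try http://www.faqs.org for the FAQ."),
                ("bob@b", "Seconded: http://www.faqs.org is great."),
                ("cal@c", "My page: http://example.org/me")]
    for url, who in rank_resources(messages):
        print(len(who), url)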


Journal ArticleDOI
TL;DR: A framework is a reusable design of all or part of a system that is represented by a set of abstract classes and the way their instances interact as discussed by the authors, and a framework is the skeleton of an application that can be customized by application developers.
Abstract: The definition we use most is “a framework is a reusable design of all or part of a system that is represented by a set of abstract classes and the way their instances interact.” Another common definition is “a framework is the skeleton of an application that can be customized by an application developer.” These are not conflicting definitions; the first describes the structure of a framework while the second describes its purpose. Nevertheless, they point out the difficulty of defining frameworks clearly. Frameworks are important, and continually become more important. Systems like OLE, OpenDoc, and DSOM are frameworks; Java is spreading new frameworks like AWT and Beans. Most commercially available frameworks seem to be for technical domains such as user interfaces or distribution, and most application-specific frameworks are proprietary. But the steady rise of frameworks means every software developer should know what they are and how to deal with them. The ideal reuse technology provides components that can be easily connected to make a new system. The software developer does not have to know how the component is implemented, and it is easy for the developer to learn how to use it. The resulting system will be efficient, easy to maintain, and reliable. The electric power system is like that; you can buy a toaster from one store and a television from another, and they will both work at either your home or office. Most people do not know Ohm’s Law, yet they have no trouble connecting a new toaster to the power system. Unfortunately, software is not nearly as composable as the electric power system. The original vision of software reuse was based on components. In the beginning, commercial interest in object-oriented technology also focused on reusable
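Both definitions can be made concrete with a toy example (a generic sketch, not drawn from the article): the framework supplies the fixed control flow and a set of abstract classes, and an application developer customizes it by subclassing and filling in the hooks the framework calls back into.

    # Hedged illustration of a tiny framework: the skeleton owns control flow
    # and calls back into hooks supplied by the application developer.
    from abc import ABC, abstractmethod

    class Application(ABC):
        """Framework class: owns the control flow, calls back into the subclass."""
        def run(self):                       # the "semi-complete" part
            self.load()
            for item in self.items():
                self.process(item)
            self.report()

        def load(self): pass                 # optional hook with a default
        def report(self): print("done")

        @abstractmethod
        def items(self): ...                 # required hooks: the custom part
        @abstractmethod
        def process(self, item): ...

    class WordCounter(Application):          # application developer's code
        def items(self): return ["some", "input", "words"]
        def process(self, item): print(item, len(item))

    WordCounter().run()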

Journal ArticleDOI
TL;DR: The CMM adopted the opposite of the quick-fix silver bullet philosophy, intended to be a coherent, ordered set of incremental improvements, all having experienced success in the field, packaged into a roadmap that showed how effective practices could be built on one another in a logical progression.
Abstract: About the time Fred Brooks was warning us there was not likely to be a single, “silver bullet” solution to the essential difficulties of developing software [3], Watts Humphrey and others at the Software Engineering Institute (SEI) were busy putting together the set of ideas that was to become the Capability Maturity Model (CMM) for Software. The CMM adopted the opposite of the quick-fix silver bullet philosophy. It was intended to be a coherent, ordered set of incremental improvements, all having experienced success in the field, packaged into a roadmap that showed how effective practices could be built on one another in a logical progression (see “The Capability Maturity Model for Software” sidebar). Far from a quick fix, it was


Journal ArticleDOI
Andries van Dam
TL;DR: It is argued in this essay that the status quo does not suffice—that the newer forms of computing and computing devices available today necessitate new thinking.
Abstract: characterized by four distinguishable interface styles, each lasting for many years and optimized to the hardware available at the time. In the first period, the early 1950s and 1960s, computers were used in batch mode with punched-card input and line-printer output; there were essentially no user interfaces because there were no interactive users (although some of us were privileged to be able to do console debugging using switches and lights as our “user interface”). The second period in the evolution of interfaces (early 1960s through early 1980s) was the era of timesharing on mainframes and minicomputers using mechanical or “glass” teletypes (alphanumeric displays), when for the first time users could interact with the computer by typing in commands with parameters. Note that this era persisted even through the age of personal microcomputers with such operating systems as DOS and Unix with their command line shells. During the 1970s, timesharing and manual command lines remained deeply entrenched, but at Xerox PARC the third age of user interfaces dawned. Raster graphics-based networked workstations and “point-and-click” WIMP GUIs (graphical user interfaces based on windows, icons, menus, and a pointing device, typically a mouse) are the legacy of Xerox PARC that we’re still using today. WIMP GUIs were popularized by the Macintosh in 1984 and later copied by Windows on the PC and Motif on Unix workstations. Applications today have much the same look and feel as the early desktop applications (except for the increased “realism” achieved through the use of drop shadows for buttons and other UI widgets); the main advance lies in the shift from monochrome displays to color and in a large set of software-engineering tools for building WIMP interfaces. I find it rather surprising that the third generation of WIMP user interfaces has been so dominant for more than two decades; they are apparently sufficiently good for conventional desktop tasks that the field is stuck comfortably in a rut. I argue in this essay that the status quo does not suffice—that the newer forms of computing and computing devices available today necessitate new thinking.

Journal ArticleDOI
TL;DR: The results of this study suggest that IT investments have begun to show results in proving they can make a positive contribution to firm output and labor productivity, however, various measures of IT investment do not appear to have a positive relationship with administrative productivity, showing inconsistent results in terms of business performance.
Abstract: relationships between measures of information technology (IT) investment and facets of corporate business performance. The results of our study suggest that IT investments have begun to show results in proving they can make a positive contribution to firm output and labor productivity. However, various measures of IT investment do not appear to have a positive relationship with administrative productivity, showing inconsistent results in terms of business performance. Our analysis suggests that while IT is likely to improve organizational efficiency, its effect on administrative productivity and business performance might depend on such other factors as the quality of a firm’s management processes and IT-strategy links, which can vary significantly across organizations. Measurement of the business value of IT investment has been the subject of considerable debate by academics and practitioners. The term productivity paradox is gaining increasing notoriety as several studies point toward falling productivity and rising IT expenditure in the service sector. Loveman [9] summarizes the research that provides evidence suggesting IT investment produces negligible benefits. Other studies [3] take the position that the “shortfall of evidence” is not “evidence of a shortfall” [3]. Brynjolfsson [3] argues that lack of positive evidence is due to mismeasurements of outputs and inputs, lags in learning and adjustment, redistribution and dissipation of profits, and mismanagement of IT. Our first objective was to reexamine the performance effects of IT investment in light of data collected up to 1994 (see the sidebar, “How the Study Was Done”). We are uncomfortable making such a statement as we have not conducted similar systematic scientific analysis with data later than 1994. We included three measures of IT investments: aggregate IT, client/server systems, including Internet-related systems, and IT infrastructure. We studied firm performance in terms of firm output, measured using value added by the organization and total sales; business results, assessed using return on assets (ROA) and return on equity (ROE) measures of financial performance; and intermediate performance, assessed using labor productivity and administrative productivity. Older studies examining the value of IT investment treat such investment as a monolithic entity. It is reasonable to argue that how investment dollars are differentially allocated among various elements of the IT infrastructure should be examined in tandem with how many dollars are spent cumulatively. Our second objective was to examine the relationships between investments in
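For readers who want the performance measures pinned down, the sketch below computes them from hypothetical firm data using standard textbook definitions; the article's exact operationalizations (especially of administrative productivity) may differ, so treat the formulas and the sample numbers as assumptions.

    # Hedged illustration of the performance measures named above.
    from dataclasses import dataclass

    @dataclass
    class Firm:
        value_added: float      # output measure
        total_sales: float
        net_income: float
        total_assets: float
        equity: float
        employees: int
        admin_staff: int

    def measures(f: Firm) -> dict:
        return {
            "labor_productivity": f.value_added / f.employees,   # value added per employee
            "admin_productivity": f.total_sales / f.admin_staff, # illustrative proxy only
            "ROA": f.net_income / f.total_assets,
            "ROE": f.net_income / f.equity,
        }

    print(measures(Firm(value_added=120e6, total_sales=300e6, net_income=15e6,
                        total_assets=250e6, equity=100e6, employees=800, admin_staff=90)))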

Journal ArticleDOI
TL;DR: The nature of the most urgent threats to patient information privacy is put in perspective, as well as the new threats that almost certainly will arise because of the technologies of digital information.
Abstract: We are well into the digital information age. Digital communications and information resources affect almost every aspect of our lives—business, finance, education, government, and entertainment. Clinical medicine is highly information intensive, but it is one of the few areas of our society where computer access to information has had only limited success in selected areas such as billing and scheduling, laboratory result reporting, and diagnostic instrument systems (such as radiology and cardiology). The move to widely accepted electronic patient records (EPRs) is accelerating, however, and is inevitable because of many pressures. Among those pressures are the desire to improve health care through timely access to information and decision-support aids; the need for simultaneous access to records by doctors, nurses, and administrators in modern, integrated provider and referral systems; meeting the needs of highly mobile patients; the push toward improved cost effectiveness based on analyses of outcomes and utilization information; the need for better support of clinical research; and the growing use of telemedicine and telecare [5]. We are, of course, motivated by the great benefits to patient care and medicine that can derive from this effort. But almost daily we hear about network computer break-ins—often close to home—arousing vivid fears [4]. By putting our personal medical records online, might we be increasing the risk of exposing highly private and sensitive information to outsiders? In this article we take a systems view of privacy and information security in health care. We will put the nature of the most urgent threats to patient information privacy in perspective, the new threats that almost certainly will arise because of the technologies of digital information,

Journal ArticleDOI
TL;DR: The algorithms discussed here for automatically generating video abstracts first decompose the input video into semantic units, called “shot clusters” or “scenes,” then detect and extract semantically rich pieces, especially text from the title sequence and special events, such as dialog, gunfire, and explosions.
Abstract: Although the automatically generated abstracts were quite different from those made directly by humans, the subjects could not tell which were better [8]. For browsing and searching large information archives, many users are familiar with Web interfaces. Therefore, our abstracting tool can compile its results into an html page, including the anchors for playing short video clips (see Figure 5). The top of the page in Figure 5 shows the film title, an animated gif image constructed from the text bitmaps (including the title), and the title sequence as a video clip. This information is followed by a randomly selected subset of special events, which are followed by a temporally ordered list of the scenes constructed by our shot-clustering algorithms. The bottom part of the page lists the creation parameters of the abstract, such as creation time, length, and statistics. Video abstracting is a young research field. We would like to mention two other systems suitable for creating abstracts of long videos. The first is video skimming [2], which mainly seeks to abstract documentaries and newscasts. It assumes that a transcript of the video is available; the video and the transcript are then aligned by word spotting. The audio track of the video skim is constructed by using language analysis to identify important words in the transcript; audio clips around those words are then cut out. Based on detected faces [10], text, and camera operation, video clips are selected from the surrounding frames. The second system is based on the image track only, generating not a video abstract but a static scene graph of thumbnail images on a 2D “canvas.” The scene graph represents the flow of the story in the form of keyframes, allowing users to interactively descend into the story by selecting a story unit of the graph [11]. Using the algorithms discussed here for automatically generating video abstracts, we first decompose the input video into semantic units, called “shot clusters” or “scenes.” Then we detect and extract semantically rich pieces, especially text from the title sequence and special events, such as dialog, gunfire, and explosions. Video clips, audio clips, images, and text are extracted and composed into an abstract. The output can then be compiled into an html page for easy access through browsers. We expect our tools to be used for large multimedia archives in which video abstracts would be a much more powerful browsing technique than textual abstracts. For example, broadcast stations today sit on a gold mine of archived difficult-to-access video material. Another application of our technique could be to create an online TV guide on the Web, with short abstracts of upcoming shows, documentaries, and feature films. Just how well the generated abstracts capture the essentials of all kinds of videos remains to be seen in a larger series of practical experiments.
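The first step described above, decomposing the video into shots before clustering them into scenes, is commonly done by comparing color histograms of successive frames. The sketch below is a deliberately simplified version of that idea; the distance measure, threshold, and toy frames are assumptions, not the article's algorithms.

    # Hedged sketch: shot-boundary detection by comparing successive frame
    # color histograms; a large change between frames is treated as a hard cut.
    def histogram_distance(h1, h2):
        """L1 distance between two normalized color histograms."""
        return sum(abs(a - b) for a, b in zip(h1, h2))

    def detect_shots(frame_histograms, cut_threshold=0.5):
        """Return (start, end) frame index pairs, one per detected shot."""
        shots, start = [], 0
        for i in range(1, len(frame_histograms)):
            if histogram_distance(frame_histograms[i - 1], frame_histograms[i]) > cut_threshold:
                shots.append((start, i - 1))   # large change => cut before frame i
                start = i
        shots.append((start, len(frame_histograms) - 1))
        return shots

    # Toy example: three "frames" of one shot, then a clearly different shot.
    frames = [[0.8, 0.2, 0.0], [0.79, 0.21, 0.0], [0.78, 0.22, 0.0], [0.1, 0.1, 0.8]]
    print(detect_shots(frames))   # -> [(0, 2), (3, 3)]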

Journal ArticleDOI
TL;DR: Siteseer learns each user’s preferences and the categories through which they view the world, and at the same time it learns for each Web page how different communities or affinity-based clusters of users regard it.
Abstract: Over time, Siteseer learns each user’s preferences and the categories through which they view the world, and at the same time it learns for each Web page how different communities or affinity-based clusters of users regard it. Siteseer then delivers personalized recommendations of online content, Web pages, organized according to each user’s folders. Bookmarks (including Hotlists and Favorites) are a desirable mechanism for gathering preference information as they are already maintained by the user, and thus require no additional behavior for the purpose of informing the recommendation system. In contrast to a click, which can be inadvertently done and rarely takes much effort or investment, bookmarks are the result of a very intentional act, something which (especially if the bookmark is placed in a folder) takes some degree of thought and effort, making them a less “noisy” input for inference. Bookmarks also have specific limitations. A voluntary survey of free response and multiple choice questions, posted to various Usenet news groups and drawing 40 respondents, indicated that users typically bookmark fewer than half of the sites/pages they find interesting, often because a site is easily accessible
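A minimal sketch of folder-level, overlap-based recommendation in the spirit of the description above (not Siteseer's algorithm; the similarity measure, threshold, and sample data are assumptions): users whose bookmark folders overlap with one of mine form an affinity group for that folder, and their other bookmarks become candidate recommendations for it.

    # Hedged sketch: recommend URLs from other users' folders that overlap mine.
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    def recommend(user, folder, bookmarks, min_overlap=0.2):
        """bookmarks: user -> {folder_name: set of URLs}."""
        mine = bookmarks[user][folder]
        scores = {}
        for other, folders in bookmarks.items():
            if other == user:
                continue
            for theirs in folders.values():
                sim = jaccard(mine, theirs)
                if sim < min_overlap:
                    continue
                for url in theirs - mine:    # only pages I don't already have
                    scores[url] = scores.get(url, 0.0) + sim
        return sorted(scores, key=scores.get, reverse=True)

    bookmarks = {"me":  {"python": {"python.org", "pypi.org"}},
                 "ann": {"dev":    {"python.org", "pypi.org", "peps.python.org"}},
                 "bob": {"cars":   {"edmunds.com"}}}
    print(recommend("me", "python", bookmarks))   # -> ['peps.python.org']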

Journal ArticleDOI
TL;DR: The 1990s have witnessed a shift from administrative systems to clinical information systems used by providers in delivering patient care, and demands for information have intensified due to changes in reimbursement for Medicare patients; the increase in prepaid contracts; and a focus on cost-effectiveness, quality assurance, and clinical outcomes.
Abstract: For almost three decades computer-based information systems have been developed and implemented in health care settings. Under cost-based reimbursement, hospital information systems were designed primarily for administrative purposes to ensure that all charges were billed and collected. Physicians on the whole wrote orders in their traditional manner while ward clerks entered these written orders into the information system. The 1990s have witnessed a shift from administrative systems to clinical information systems used by providers in delivering patient care. Health care institutions are information-intensive. Demands for information have intensified due to changes in reimbursement for Medicare patients; the increase in prepaid contracts; and a focus on cost-effectiveness, quality assurance, and clinical outcomes. The current goals of cost containment and outcomes measurement can not be met by the older administrative systems. These older systems ignored the fact that as much as 75% of health care costs are determined by the provider. Moreover, most of these information systems were incapable of providing data on patient outcomes and instead relied on manual chart reviews. In recognition of the critical importance that medical records play in the delivery of health care, the Institute of Medicine (IOM) called for the development and implementation of computer-based patient records (CPRs). The IOM broadly defined a CPR as “. . . an electronic patient record that resides in a system specifically designed to support users through availability of complete and accurate data, practitioner reminders and alerts, clinical decision support systems, links to bodies of medical knowledge, and

Journal ArticleDOI
TL;DR: It is apparent that such cases are an industry-wide problem despite the significant progress made in IS development methodologies and tools since the early days of business computing almost four decades ago.
Abstract: A company with plans for becoming a billion-dollar company in the near future realized it needed to develop a new information system (IS) to cope with the increasing workload it was experiencing—a situation expected to continue. After more than two years of investing in information technology resources—including new IS staff and new management to develop the system—the company was forced to cancel the project. Later, they would start again with a new IS staff and management [3]. In 1988, the Intrico consortium was formed by Hilton Hotels, Marriott, Budget Rent-A-Car and American Airlines Information Services (AMRIS), a subsidiary of American Airlines (AMR). They teamed up to develop and market what was intended to be “the most advanced reservation system in the combined industry of travel, lodging, and car rental.” Five years later, after a number of lawsuits and millions of dollars in cost overruns, the Confirm project was finally canceled amid bitter accusations from some of the top executives involved [10]. PC Week [11] reported, based on a study by the Standish Group, that 31% of new IS projects are canceled before completion at an estimated combined cost of $81 billion. Furthermore, 52.7% of the projects completed are 189% over budget at an additional cost of $59 billion. These problems have led others to characterize the software industry as being in a state of crisis. Our research shows that such cases are not isolated incidents; rather they occur with some regularity in companies of all sizes [3, 4]. Therefore, it is apparent that such cases are an industry-wide problem despite the significant progress made in IS development methodologies and tools since the early days of business computing almost four decades ago. These incidents lead us to ask: What is it about IS development projects that make them so vulnerable to such fiascoes? This article highlights issues critical to aban-

Journal ArticleDOI
TL;DR: Concerns over the Internet's social impact will increase as similar stories arise about Internet friendships going awry, or even of these “friendships” being malicious cons in the first place.
Abstract: Stories of friendships found and forged on the Internet appear in headlines every day. These stories may not always have happy endings, but as a recent survey illustrates, there are definite patterns of friendship and community involvement among Internet users. Readers of New York tabloid newspapers may have been shocked earlier this year by a front-page photograph showing a local computer expert being led away in handcuffs, having been arrested on charges of raping a woman he had met via the Internet. But troubles with Internet acquaintances are by no means unique. Stories appear in the news media with disturbing frequency about young boys or girls running away from their homes with adults they met through computer bulletin boards or chat groups. In one of the more bizarre events in America's experience with cyberspace, a Virginia woman met a man through the Internet, and after several dates and visits decided to get married. Only later did the Virginia woman discover that she had actually married another woman who through various ruses had tricked her into believing that she was a he. Consequently she is suing the former “husband” for a variety of harms. As similar stories arise about Internet friendships going awry, or even of these “friendships” being malicious cons in the first place, concerns over the Internet's social impact will increase. Of course the concern is by no means limited to the one-on-one level of interpersonal friendships. National and international bodies are grappling with questions about what to do about various extremist political or religious groups who are aiming to suborn or recruit large groups of people. The mass suicide of the Heaven's Gate cult, which had a presence on the Internet, was a ready target for those who fear the way the Internet is changing society. But the Internet situation is not unique. Every new technology finds dour critics (as well as ebullient proponents). Communication technologies in particular can be seen as opening the doors to all varieties of social ills. When the telegraph, telephone and the automobile were in their infancy, each of these three earlier “communication” technologies found vitriolic critics who said these “instruments of the devil” would drastically alter society (which they did) with disastrous consequences for the quality of life and the moral order (readers may judge for themselves about this point) [1, 3, …

Journal ArticleDOI
TL;DR: The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National Information Infrastructure, whose influence reaches not only to the technical fields of computer communications but throughout society as the authors move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.
Abstract: The Internet also represents one of the most successful examples of sustained investment and commitment to research and development in information infrastructure. Beginning with early research in packet switching, the government, industry, and academia have been partners in evolving and deploying this exciting new technology. Today, terms like “leiner@mcc.com” and “http://www.acm.org” trip lightly off the tongue of random people on the street. The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects—technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.

Journal ArticleDOI
TL;DR: The emerging use of the TCP/IP communications protocol suite for internetworking has led to a global system of interconnected hosts and networks that is commonly referred to as the Internet.
Abstract: The emerging use of the TCP/IP communications protocol suite for internetworking has led to a global system of interconnected hosts and networks that is commonly referred to as the Internet. During the last decade, the Internet has experienced a triumphant advance. Projections based on its current rate of growth suggest there will be over one million computer networks and well over one billion users by the end of the century. Therefore, the Internet is seen as the first incarnation of a national information infrastructure (NII) as promoted by the U.S. government. But the initial, research-oriented Internet and its communications protocol suite were designed for a more benign environment than now exists. It could, perhaps, best be described as a collegial environment, where the users and hosts were mutually trusting and interested in a free and open exchange of information. In this environment, the people on the Internet were the people who actually built the Internet. As time went on, the Internet became more useful and reliable, and these people were joined by others. With fewer goals in common and more people, the Internet steadily twisted away from its original intent. Today, the Internet environment is much less collegial and trustworthy. It contains all the dangerous situations, nasty people, and risks that one can find in society as a whole. In this new environment, the openness of the Internet has turned out to be a double-edged sword.
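The article's title points to firewalls as the response to this less trustworthy environment. As a generic illustration of the packet-filtering idea (the rule fields, sample rules, and default-deny policy below are assumptions, not taken from the article), a first-match rule set decides whether each packet is allowed:

    # Hedged illustration of packet filtering: the first matching rule decides,
    # with a default-deny fallback if nothing matches.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rule:
        action: str                       # "allow" or "deny"
        src: str = "*"                    # source address prefix, or "*" for any
        dst_port: Optional[int] = None    # destination port, or None for any
        proto: str = "*"                  # "tcp", "udp", or "*"

    def decide(packet, rules):
        for r in rules:
            if ((r.src == "*" or packet["src"].startswith(r.src)) and
                (r.dst_port is None or packet["dst_port"] == r.dst_port) and
                (r.proto == "*" or packet["proto"] == r.proto)):
                return r.action
        return "deny"                     # default deny

    rules = [Rule("allow", dst_port=80, proto="tcp"),    # inbound web traffic
             Rule("allow", src="10.1.", proto="*"),      # trusted internal prefix
             Rule("deny")]
    print(decide({"src": "192.0.2.7", "dst_port": 23, "proto": "tcp"}, rules))  # deny
    print(decide({"src": "192.0.2.7", "dst_port": 80, "proto": "tcp"}, rules))  # allow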

Journal ArticleDOI
TL;DR: The emergence of both EPSS and UCD can be seen as a shift toward consideration of a broader use of context in the development of usable systems, which is increasingly seen as requiring attention to the environment into which the system must fit.
Abstract: Electronic Performance Support Systems share something in common with User-Centered Design, in that both have become prominent during the last 10 years. As we have turned increasingly to designing systems to support work, it has been necessary to expand our thinking about system requirements and about usability. One question is whether EPSS and UCD represent discoveries of something new, or (possibly related) evolutionary developments. I suggest the emergence of both can be seen as a shift toward consideration of a broader use of context in the development of usable systems. By this I mean that system design is increasingly seen as requiring attention to the environment into which the system must fit. Before EPSS or UCD we developed systems to meet requirements, often using human-factors techniques as a part of the process. I believe we develop toward the same goal now, though the ways we think about the usability of a system and the approaches we take for ensuring system acceptance have changed and continue to change. We don’t consider usability as limited to the display and keyboard interface between human and machine, but rather we recognize that it encompasses how any artifact fits into a complex work or home environment. Similarly, we don’t design systems merely to replace human work, but to enhance human capabilities to do productive work. Designing usable software involves more than user input, it requires many astute perspectives to attain a balanced view.

Journal ArticleDOI
TL;DR: With the tremendous amount of visual information becoming on-line, how does one find visual information from distributed repositories efficiently, at least to the same extent as that of existing information retrieval systems?
Abstract: Digital images and video are becoming an integral part of human communications. The ease of capturing and creating digital images has caused most on-line information sources to look more “visual”. We use more and more visual content in expressing ideas, reporting, education, and entertainment. With the tremendous amount of visual information becoming on-line, how does one find visual information from distributed repositories efficiently, at least to the same extent as with existing information retrieval systems? With the growing number of on-line users, how does one design a system with performance scalable to a large extent?
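As one concrete (and deliberately simple) answer to the first question, content-based retrieval systems often index images by global color histograms and rank them by similarity to a query image. The sketch below uses histogram intersection; the index contents and the three-bin histograms are invented, and the article surveys the problem rather than prescribing this method.

    # Hedged sketch of content-based image retrieval by color-histogram similarity.
    def normalize(hist):
        total = float(sum(hist)) or 1.0
        return [v / total for v in hist]

    def intersection(h1, h2):
        """Histogram intersection: 1.0 means identical color distributions."""
        return sum(min(a, b) for a, b in zip(h1, h2))

    def search(query_hist, index, top_k=3):
        q = normalize(query_hist)
        ranked = sorted(index.items(),
                        key=lambda kv: intersection(q, normalize(kv[1])),
                        reverse=True)
        return ranked[:top_k]

    index = {"sunset.jpg": [90, 30, 10], "forest.jpg": [10, 100, 20], "beach.jpg": [70, 40, 30]}
    print(search([80, 35, 15], index, top_k=2))   # most color-similar images first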


Journal ArticleDOI
TL;DR: The development process at Microsoft determines what works and what doesn't through frequent builds that completely recompile the source code and execute automated tests.
Abstract:
• 20,500 employees
• 250 products
• Windows 95: 11 million lines of code; 200 designers, programmers, and testers
• What development process do they use?

Main philosophy
• Does not adopt too many of the structured software-engineering practices
• “Scaled-up” a loosely structured small-team style (hacker philosophy?)
  – Small parallel teams of 3 to 8 developers each, or individual programmers, working together as a large team

Philosophy
• Each team has the freedom to evolve its design
  – Evolve features and whole products incrementally
  – Occasionally introduce new concepts and technologies
• However, since teams have so much freedom, there is a danger that products may become incompatible, so they synchronize their changes frequently

Synch-and-stabilize
• Terms describing the process: “daily build,” “nightly build,” “zero defect,” “milestone”
• Build: putting together partially completed or finished pieces of the software
  – Goal: to determine what works and what doesn't
  – Done by completely recompiling the source code and executing automated tests
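A minimal sketch of the daily-build loop at the heart of synch-and-stabilize, as outlined above. The build and test commands are placeholders (make and pytest stand in for whatever tooling a project actually uses, not Microsoft's tools); the point is the shape of the loop: fully rebuild, run the automated tests, and report whether the synchronized code base currently works.

    # Hedged sketch of a "daily build" driver: rebuild everything, run the
    # automated tests, and report whether the code base is in a working state.
    import subprocess, sys, datetime

    BUILD_CMD = ["make", "clean", "all"]         # placeholder full-rebuild command
    TEST_CMD = ["python", "-m", "pytest", "-q"]  # placeholder automated test suite

    def run(cmd):
        print(">>", " ".join(cmd))
        return subprocess.run(cmd).returncode == 0

    def daily_build():
        stamp = datetime.date.today().isoformat()
        if not run(BUILD_CMD):
            print(f"[{stamp}] BUILD BROKEN -- fixing it takes priority")
            return 1
        if not run(TEST_CMD):
            print(f"[{stamp}] build ok, automated tests failing")
            return 2
        print(f"[{stamp}] build ok, tests ok -- safe to synchronize on this build")
        return 0

    if __name__ == "__main__":
        sys.exit(daily_build())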