
Showing papers in "Communications of The ACM in 1993"


Journal ArticleDOI
Mark D. Weiser
TL;DR: What is new and different about the computer science in ubiquitous computing is explained, and a series of examples drawn from various subdisciplines of computer science are outlined.
Abstract: Ubiquitous computing is the method of enhancing computer use by making many computers available throughout the physical environment, but making them effectively invisible to the user. Since we started this work at Xerox PARC in 1988, a number of researchers around the world have begun to work in the ubiquitous computing framework. This paper explains what is new and different about the computer science in ubiquitous computing. It starts with a brief overview of ubiquitous computing, and then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g. chips), network protocols, interaction substrates (e.g. software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science.

2,662 citations


Journal ArticleDOI
TL;DR: The increased interest in the "productivity paradox," as it has become known, has engendered a significant amount of research, but thus far, this has only deepened the mystery.
Abstract: The relationship between information technology (IT) and productivity is widely discussed but little understood. Delivered computing power in the U.S. economy has increased by more than two orders of magnitude since 1970 (Figure 1), yet productivity, especially in the service sector, seems to have stagnated (Figure 2). Given the enormous promise of IT to usher in "the biggest technological revolution men have known" [29], disillusionment and even frustration with the technology is increasingly evident in statements like "No, computers do not boost productivity, at least not most of the time" [13]. The increased interest in the "productivity paradox," as it has become known, has engendered a significant amount of research, but thus far, this has only deepened the mystery.

2,419 citations


Journal ArticleDOI
Pierre Wellner
TL;DR: The DigitalDesk is built around an ordinary physical desk and can be used as such, but it has extra capabilities, including a video camera mounted above the desk that can detect where the user is pointing, and it can read documents that are placed on the desk.

1,127 citations





Journal ArticleDOI
TL;DR: Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.
Abstract: The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.
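
The virtual synchrony guarantee is easiest to see in miniature: every member of a process group observes the same totally ordered stream of messages and membership changes. Below is a toy sketch of that delivery property only (a single in-process "sequencer"; the names are illustrative, and this is not the ISIS protocol, which distributes ordering and tolerates faults):

# Toy illustration of virtually synchronous delivery (not the ISIS protocol):
# all group members see multicasts and view changes in the same total order.
class ProcessGroup:
    def __init__(self):
        self.members = {}   # member name -> delivered event log
        self.view = 0       # current membership view number

    def join(self, name):
        self.view += 1
        self.members[name] = []
        self._deliver(("view_change", self.view, sorted(self.members)))

    def multicast(self, sender, payload):
        # A central sequencer trivially guarantees a single total order.
        self._deliver(("msg", self.view, sender, payload))

    def _deliver(self, event):
        for log in self.members.values():
            log.append(event)   # every current member logs the same event

group = ProcessGroup()
group.join("a"); group.join("b")
group.multicast("a", "update-1")
group.join("c")                  # "c" sees nothing from before its join
group.multicast("b", "update-2")
# Histories agree on their common suffix: same events, same order.
assert group.members["a"][-len(group.members["b"]):] == group.members["b"]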

856 citations


Journal ArticleDOI
TL;DR: An experimental system, the Information Visualizer, is presented, which explores a UI paradigm that goes beyond the desktop metaphor to exploit the emerging generation of graphical personal computers and to support the emerging application demand to retrieve, store, manipulate, and understand large amounts of information.
Abstract: UI innovations are often driven by a combination of technology advances and application demands. On the technology side, advances in interactive computer graphics hardware, coupled with low-cost mass storage, have created new possibilities for information retrieval systems in which UIs could play a more central role. On the application side, increasing masses of information confronting a business or an individual have created a demand for information management applications. In the 1980s, text editing forced the shaping of the desktop metaphor and the now-standard GUI paradigm. In the 1990s, it is likely that information access will be a primary force in shaping the successor to the desktop metaphor. This article presents an experimental system, the Information Visualizer (see Figure 1), which explores a UI paradigm that goes beyond the desktop metaphor to exploit the emerging generation of graphical personal computers and to support the emerging application demand to retrieve, store, manipulate, and understand large amounts of information. The basic problem is how to utilize advancing graphics technology to lower the cost of finding information and accessing it once found (the information's "cost structure"). We take four broad strategies: making the user's immediate workspace larger, enabling user interaction with multiple agents, increasing the real-time interaction rate between user and system, and using visual abstraction to shift information to the perceptual system to speed information assimilation and retrieval.

769 citations



Journal ArticleDOI
TL;DR: An application that uses hand gesture input to control a computer while giving a presentation and an interaction model, a notation for gestures, and a set of guidelines to design gestural command sets are presented.
Abstract: This paper presents an application that uses hand gesture input to control a computer while giving a presentation. In order to develop a prototype of this application, we have defined an interaction model, a notation for gestures, and a set of guidelines to design gestural command sets. This work aims to define interaction styles that work in computerized reality environments. In our application, gestures are used for interacting with the computer as well as for communicating with other people or operating other devices.

566 citations


Journal ArticleDOI
TL;DR: The goal is to go a step further by grounding and situating the information in a physical context to provide additional understanding of the organization of the space and to improve user orientation.
Abstract: … article in this issue) will further these abilities and cause the generation of short-range and global electronic information spaces to appear throughout our everyday environments. How will this information be organized, and how will we interact with it? Wherever possible, we should look for ways of associating electronic information with physical objects in our environment. This means that our information spaces will be 3D. The SemNet system [4] is an example of a tool that offers users access to large, complicated 3D information spaces. Our goal is to go a step further by grounding and situating the information in a physical context to provide additional understanding of the organization of the space and to improve user orientation. As an example of ubiquitous computing and situated information spaces, consider a fax machine. The electronic data associated with a fax machine should be collected, associated, and colocated with the physical device (see Figure 1). This means that your personal electronic phone book, a log of your incoming and outgoing calls, and fax messages could be accessible by browsing a situated 3D electronic information space surrounding the fax machine. The information would be organized by the layout of the physical device. Incoming calls would be located near the earpiece of the hand receiver while outgoing calls would be situated near the mouthpiece. The phone book could be found near the keypad. A log of the outgoing fax messages would be found near the fax paper feeder while a log of the incoming faxes would be located at the paper dispenser tray. These logical information hot spots on the physical device can be moved and customized by users according to their personal organizations. The key idea is that the physical object anchors the information, provides a logical means of partitioning and organizing the associated information space, and serves as a retrieval cue for users. A major design requirement of situated information spaces is the ability for users to visualize, browse, and manipulate the 3D space using a portable, palmtop computer. That is, instead of a large fixed display on a desk, we want a small, mobile display to act as a window onto the information space. Since the information spaces will consist of multimedia data, the display of the palmtop should be able to handle all forms of data including text, graphics, video, and audio. Moreover, the desire to merge the physical and …
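
The fax-machine example boils down to a mapping from regions of the physical device to the information anchored there. A minimal sketch of such an "information hot spot" table, with hypothetical names not taken from the paper's system:

# Hypothetical layout of a situated information space around a fax machine:
# each physical region of the device anchors one category of electronic data.
fax_hotspots = {
    "hand_receiver_earpiece": "log of incoming calls",
    "hand_receiver_mouthpiece": "log of outgoing calls",
    "keypad": "personal electronic phone book",
    "paper_feeder": "log of outgoing fax messages",
    "dispenser_tray": "log of incoming fax messages",
}

def browse(region):
    """Palmtop 'window' onto the space: look up what is anchored at a region."""
    return fax_hotspots.get(region, "nothing anchored here")

print(browse("keypad"))  # -> personal electronic phone book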

563 citations


Journal Article
TL;DR: In this article, the authors examine how knowledge acquisition, sharing, and integration activities unfolded over time inside an actual software design team and find that the levels of participation in these activities differ across team members.
Abstract: More than half the cost of the development of complex computer-based information systems (IS) is attributable to decisions made in the upstream portion of the software development process; namely, requirements specification and design [20]. There is growing recognition that research on how teams actually go about making requirement determinations and design decisions can provide valuable insights for improving the quality and productivity of large-scale computer-based IS development efforts [9, 12, 23]. Traditional models of group dynamics, group decision making, and group development are not rich enough to thoroughly explain the real-world complexities faced by software design teams. Most of this research was performed on tasks that were shorter, less complex and did not require the extensive integration of knowledge domains that characterizes software systems design [9, 12]. Knowledge is the raw material of software design teams. For complex projects, knowledge from multiple technical and functional domains is a necessity [12]. Ideally, a software design team is staffed so that both the levels and the distribution of knowledge within the team match those required for the successful completion of the project. Because of knowledge shortfalls such as the thin spread of application domain knowledge in most organizations, however, this is seldom the case [12]. In general, individual team members do not have all of the knowledge required for the project and must acquire additional information before accomplishing productive work. The sources of this information can be relevant documentation, formal training sessions, the results of trial-and-error behavior, and other team members. Group meetings are an important environment for learning, since they allow team members to share information and learn about other domains relevant to their work. Productive design activities need to revolve around the integration of the various knowledge domains. This integration leads to shared models of the problem under consideration and potential solutions. A software design team seldom starts its life with shared models of the system to be built. Instead, these models develop over time as team members learn from one another about the expected behavior of the application and the computational structures required to produce this behavior. This means that team members need to be speaking the same language (or, at least, dialects whose semantics are similar enough to facilitate communication and understanding) in order to share knowledge about the system. Knowledge acquisition, knowledge sharing, and knowledge integration are significant, time-consuming activities that precede the development of a design document. The purpose of this article is to examine how these activities unfolded over time inside an actual software design team. Two related questions with respect to this team will be resolved: 1) How do the group members acquire, share, and integrate project-relevant knowledge? 2) Do the levels of participation in these activities differ across team members? The findings reported here challenge some of the conventional wisdom and common practices of managing software design teams. An initial caveat is that the design team studied here worked in a research and development environment where knowledge acquisition, sharing, and integration activities are accentuated. However, to varying degrees, these activities characterize most software projects [12].
A better understanding of the role and process of knowledge acquisition, sharing, and integration in software design has very real implications for managing large software projects, particularly in the areas of planning, staffing, and training.



Journal ArticleDOI
TL;DR: While modern methods for information system development generally accept that users should be involved in some way, the form of the involvement differs considerably.
Abstract: While modern methods for information system development generally accept that users should be involved in some way [15], the form of the involvement differs considerably. Mostly, users are viewed as relatively passive sources of information, and the involvement is regarded as "functional," in the sense that it should yield better system requirements and increased acceptance by users.

Journal ArticleDOI
TL;DR: Prior empirical evidence linking software complexity to software maintenance costs is relatively weak, and several researchers have noted that such results must be applied cautiously to the large-scale commercial application systems that account for most software maintenance expenditures.
Abstract: While the link between the difficulty in understanding computer software and the cost of maintaining it is appealing, prior empirical evidence linking software complexity to software maintenance costs is relatively weak [21]. Many of the attempts to link software complexity to maintainability are based on experiments involving small pieces of code, or are based on analysis of software written by students. Such evidence is valuable, but several researchers have noted that such results must be applied cautiously to the large-scale commercial application systems that account for most software maintenance expenditures [13, 17].

Journal ArticleDOI
TL;DR: This paper investigates how to support work and in particular cooperation in large-scale technical projects in a specific Danish engineering company and it uncovers challenges to Computer Supported Cooperative Work (CSCW) in this setting.
Abstract: This paper investigates how to support work and in particular cooperation in large-scale technical projects. The investigation is based on a case study of a specific Danish engineering company and it uncovers challenges to Computer Supported Cooperative Work (CSCW) in this setting. The company is responsible for management and supervision of one of the world's largest tunnel/bridge construction projects. Our original goal was to determine requirements for CSCW as they unfold in this specific setting as opposed to survey and laboratory investigations. The requirements provide feedback to product development both on specific functionality and as a long term vision for CSCW in such settings. As it turned out, developing our cooperative design techniques in a product development setting also became a major issue. The initial cooperative analysis identified a number of bottlenecks in daily work, where support for cooperation is needed. Examples of bottlenecks are: sharing materials, issuing tasks, and keeping track of task status. Grounded in the analysis, cooperative design workshops based on scenarios of future work situations were established to investigate the potential of different CSCW technologies in this setting. In the workshops, mock-ups and prototypes were used to support end-users in assessing CSCW technologies based on concrete, hands-on experiences. The workshops uncovered several challenges. First, support for sharing materials would require a huge body of diverse materials to be integrated, for example into a hypermedia network. Second, tasks are closely coupled to materials being processed thus a coordination tool should integrate facilities for managing materials. Third, most daily work tasks are event driven and plans change too rapidly for people to register them on a computer. Without meeting these challenges, new CSCW tools are likely to introduce too much overhead to be really useful.

Journal ArticleDOI
TL;DR: Two binary translators are among the migration tools available for Alpha AXP computers: VEST translates OpenVMS VAX binary images to OpenVMS AXP images; mx translates ULTRIX MIPS images to DEC OSF/1 AXP images.
Abstract: Binary translation is a technique used to change an executable program for one computer architecture and operating system into an executable program for a different computer architecture and operating system. Two binary translators are among the migration tools available for Alpha AXP computers: VEST translates OpenVMS VAX binary images to OpenVMS AXP images; mx translates ULTRIX MIPS images to DEC OSF/1 AXP images. In both cases, translated code usually runs on Alpha AXP computers as fast or faster than the original code runs on the original architecture. In contrast to other migration efforts in the industry, the VAX translator reproduces subtle CISC behavior on a RISC machine, and both open-ended translators provide good performance on dynamically modified programs. Alpha AXP binary translators are important migration tools: hundreds of translated OpenVMS VAX and ULTRIX MIPS images currently run on Alpha AXP systems.
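
As a toy illustration of the basic idea only (in no way the VEST or mx implementations), the sketch below rewrites a hypothetical two-operand, memory-to-memory "CISC-style" instruction into an equivalent load/operate/store "RISC-style" sequence:

# Toy binary translation: one CISC-like instruction that operates directly on
# memory becomes a RISC-like load/operate/store sequence. The opcode names
# are invented; real translators work on actual executable images.
def translate(instr):
    op, dst, src = instr                 # e.g. ("ADDM", "mem_a", "mem_b")
    if op == "ADDM":                     # CISC: mem[dst] += mem[src]
        return [
            ("LOAD", "r1", dst),         # load both operands into registers,
            ("LOAD", "r2", src),
            ("ADD", "r1", "r1", "r2"),   # operate in registers,
            ("STORE", "r1", dst),        # store the result back to memory
        ]
    return [instr]                       # pass through anything else

cisc_image = [("ADDM", "mem_a", "mem_b"), ("JMP", "label", None)]
risc_image = [out for ins in cisc_image for out in translate(ins)]
print(risc_image)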

Journal ArticleDOI
TL;DR: A new formalism called Gamma is presented in which programs are described in terms of multiset transformations, with the possibility of expressing algorithms in a very abstract way, without any artificial sequentiality.
Abstract: We present a new formalism called Gamma in which programs are described in terms of multiset transformations. A distinguishing property of Gamma is the possibility of expressing algorithms in a very abstract way, without any artificial sequentiality. The expressive power of the formalism is illustrated through a series of examples chosen from a wide range of domains (string processing problems, graph problems, geometric problems...).
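
The flavor of the formalism is easy to convey: a Gamma program is a reaction condition plus an action, applied to a multiset until no elements react, with no ordering imposed. A minimal interpreter sketch assuming a simplified pairwise form (the paper's formalism is more general), with the classic maximum and prime-sieve examples:

import random

def gamma(bag, reacts, action):
    """Repeatedly replace a reacting pair until the multiset is stable."""
    bag = list(bag)
    while True:
        pairs = [(i, j) for i in range(len(bag)) for j in range(len(bag))
                 if i != j and reacts(bag[i], bag[j])]
        if not pairs:
            return bag
        i, j = random.choice(pairs)      # no imposed order: any pair may react
        x, y = bag[i], bag[j]
        bag = [e for k, e in enumerate(bag) if k not in (i, j)] + action(x, y)

# Maximum of a multiset: two elements react by keeping only the larger.
print(gamma([4, 1, 7, 3], lambda x, y: x >= y, lambda x, y: [x]))   # -> [7]

# Prime sieve: y is erased whenever some x divides it, leaving the primes.
print(sorted(gamma(range(2, 20),
                   lambda x, y: y % x == 0,
                   lambda x, y: [x])))   # -> [2, 3, 5, 7, 11, 13, 17, 19]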


Journal ArticleDOI
TL;DR: In this paper, the authors discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice.
Abstract: Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of such means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet this improvement is not such as to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software.
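
The central quantitative obstacle can be made concrete with a standard back-of-the-envelope calculation (consistent with the paper's argument, though not copied from it): demonstrating a failure rate of lambda from failure-free operation alone requires test time on the order of 1/lambda.

import math

def test_hours_required(target_rate_per_hour, confidence=0.99):
    """Failure-free test time needed so that, under an exponential failure
    model, observing zero failures rules out rates above the target at the
    given confidence: solve exp(-rate * t) <= 1 - confidence for t."""
    return -math.log(1.0 - confidence) / target_rate_per_hour

# Even 10^-4 failures/hour needs years of failure-free testing; 10^-9
# (ultra-high dependability) needs on the order of half a million years.
for rate in (1e-4, 1e-9):
    print(f"rate {rate:.0e}/h -> {test_hours_required(rate):,.0f} h "
          f"(~{test_hours_required(rate) / 8766:,.0f} years)")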

Journal ArticleDOI
TL;DR: In this issue, Fitzmaurice and Feiner describe two different augmented-reality systems that require highly capable head and object trackers to create an effective illusion of virtual objects coexisting with the real world.
Abstract: In this issue, Fitzmaurice and Feiner describe two different augmented-reality systems. Such systems require highly capable head and object trackers to create an effective illusion of virtual objects coexisting with the real world. For ordinary virtual environments that completely replace the real world with a virtual world, it suffices to know the approximate position and orientation of the user's head. Small errors are not easily discernible because the user's visual sense tends to override the conflicting signals from his or her vestibular and proprioceptive systems. But in augmented reality, virtual objects supplement rather than supplant the real world. Preserving the illusion that the two coexist requires proper alignment and registration of the virtual objects to the real world. Even tiny errors in registration are easily detectable by the human visual system. What does augmented reality require from trackers to avoid such errors? First, a tracker must be accurate to a small fraction of a degree in orientation and a few millimeters (mm) in position. (Figure 1: conceptual drawing of sensors viewing beacons in the ceiling.) Errors in measured head orientation usually cause larger registration offsets than object orientation errors do, making this requirement more critical for systems based on Head-Mounted Displays (HMDs). Try the following simple demonstration. Take out a dime and hold it at arm's length. The diameter of the dime covers approximately 1.5 degrees of arc. In comparison, a full moon covers 1/2 degree of arc. Now imagine a virtual coffee cup sitting on the corner of a real table two meters away from you. An angular error of 1.5 degrees in head orientation moves the cup by about 52 mm. Clearly, small orientation errors could result in a cup suspended in midair or interpenetrating the table. Similarly, if we want the cup to stay within 1 to 2 mm of its true position, then we cannot tolerate tracker positional errors of more than 1 to 2 mm. Second, the combined latency of the tracker and the graphics engine must be very low. Combined latency is the delay from the time the tracker subsystem takes its measurements to the time the corresponding images appear in the display devices. Many HMD-based systems have a combined latency over 100 ms. At a moderate head or object rotation rate of 50 degrees per second, 100 milliseconds (ms) of latency causes 5 degrees of angular error. At a rapid rate …
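
The dime demonstration is simple trigonometry, and the article's two numbers are easy to verify; a small sketch reproducing both calculations:

import math

# Registration offset caused by an angular head-tracking error:
# an object at distance d appears displaced by d * tan(error).
distance_mm = 2000
error_deg = 1.5
offset_mm = distance_mm * math.tan(math.radians(error_deg))
print(f"{error_deg} deg at {distance_mm} mm -> {offset_mm:.0f} mm")  # ~52 mm

# Angular error accumulated from end-to-end latency during head rotation:
rotation_deg_per_s = 50
latency_s = 0.100
print(f"{latency_s * 1000:.0f} ms latency -> "
      f"{rotation_deg_per_s * latency_s:.0f} deg of error")          # 5 deg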

Journal ArticleDOI
TL;DR: The next generation of UIs may move beyond the standard WIMP paradigm to involve elements such as virtual realities, head-mounted displays, sound and speech, pen and gesture recognition, animation and multimedia, limited artificial intelligence, and highly portable computers with cellular or other wireless communication capabilities.
Abstract: Most current UIs are fairly similar and belong to one of two common types: either the traditional alphanumeric full-screen terminals with a keyboard and function keys, or the more modern WIMP workstations with windows, icons, menus, and a pointing device. In fact, most UI standards released since 1983 have been remarkably similar, and it is that category of canonical window system that is referred to as "current" throughout this article. In contrast, the next generation of UIs may move beyond the standard WIMP paradigm to involve elements such as virtual realities, head-mounted displays, sound and speech, pen and gesture recognition, animation and multimedia, limited artificial intelligence, and highly portable computers with cellular or other wireless communication capabilities. It is difficult to envision the use of this hodgepodge of technologies in a single, united UI design, and indeed, it may be one of the defining characteristics of next-generation UIs that they abandon the principle of conforming to a canonical interface style and instead become more radically tailored to the requirements of individual tasks. In any case, all previous generations of UIs, whether batch, line-oriented, full-screen, or WIMP, have all had one defining characteristic in common: they were all …

Journal ArticleDOI
TL;DR: This article discusses second-order computing facilities and CSILE (computer-supported intentional learning environments), a system that aims to foster and support knowledge building in school.
Abstract: There are pervasive strategies for school work that may be broadly characterized as knowledge reproduction strategies. They have limited potential for advancing knowledge, and often are not even very effective for purposes of memorization and organization of knowledge. Their most conspicuous failure, however, is in the development of understanding. Knowledge building strategies are, by contrast, focused centrally on the development of understanding. These strategies, however, are comparatively rare among school children [6]. Worse yet, they seem destined to remain so because school discourse effectively excludes them. Educational computing, unfortunately, tends to support knowledge reproduction strategies rather than knowledge-building ones. While this is obvious regarding much of the courseware on the market, in a more subtle way it is equally true of the software tools that are popularly thought to encourage more active learning. An explanation may be found in the origins of these software tools and in the evolution of the personal computer as a workstation. This evolution has been toward meeting the needs of a business community concerned with storing and retrieving information (hence, the saliency of files and folders), transferring it (hence, cut-and-paste, import-export, and communications software), displaying it (hence, graphing, graphics, desktop publishing, and multimedia presentation software), and making plans and decisions based on it (hence, spreadsheets, accounting, and project-management software). Put it all together, and you have the desktop metaphor. It is not a metaphor for the construction and advancement of understanding. It represents activities that are important in any kind of information processing environment. We propose that these activities (copying, deleting, storing, retrieving, entering, displaying, and sending) be thought of as first-order knowledge-processing activities. In order to serve the purposes of knowledge building, however, they must be subordinated to a second-order system of activities that has understanding as its primary purpose. In this article we discuss second-order computing facilities and a system we are developing that aims to foster and support knowledge building in school. The system is computer-supported intentional learning environments (CSILE). It aims to engage students in the same sorts of intellectual and cultural processes that sustain real-world scientists in efforts at knowledge advancement.

Journal ArticleDOI
TL;DR: An abstract work model is developed that brings together data from all customers, keeping good ideas, fixing problems, and using technology to combine and remove steps; the redesigned work is then validated against data from current and new customers.
Abstract: … work models and our system, so all customers can take advantage of it. Once we have this consolidated model, we study it for problems and inefficiencies. We develop an abstract work model that brings together data from all customers, keeping good ideas, fixing problems, and using technology to combine and remove steps. When done, we have a statement of how our users will work, if we can implement the system to support it. We validate our redesign of the work by checking it against the data from customers we have visited and through Contextual Inquiry with new customers. When interviewing new customers, we look for aspects of their work our redesigned work model cannot account for. These refine and extend the redesigned work model. Making the work redesign conversation explicit ensures we do not do silly things unintentionally. For example, in creating a presentation, ideas move from slides to handout notes and back again as the creator tries different approaches to presenting the ideas. So a presentation system should support modifying slides and notes in parallel. Providing a notes facility that does not allow the slide to be changed, as some commercial systems do, is not enough. We verify any design idea against the redesigned work model to ensure that it fits into the users' jobs well. We use it to see that the new work practice our system will support hangs together. We anticipate new problems our changes may cause, and …
Figure 1. Context model: This and the following models are examples of work models describing the use of email. This partial model shows that the company's administrative groups constrain everyone by requiring certain reports and actions. The boss is also influenced by management requirements, and in turn sets requirements on the employees. The boss will not touch the computer, which affects what the secretary must do. The secretary asks the boss to change his work style to make the secretary's job easier; for example, to keep the paper mail in the order in which it is given to him to make it easier for the secretary to enter his replies on-line.
Figure 2. Physical model: This is the physical environment for our boss and secretary. Only the aspects relevant to our mail problem are shown, not the whole physical layout. The secretary has a printer in her own office. She shares a VAX with others. The VAX is overloaded and slow. The boss has no connection to the VAX; even if there is a terminal in his office, he never uses it, so it is not shown. The VAX is networked with other computers at remote sites. This network does go down, so the secretary uses a store-and-forward mail system. The VAX also has links to public networks which can only handle plain text messages.
Figure 3. Flow model: This model shows the communication between people in the organization. Messages sent to the boss are intercepted by the secretary, who prints them and passes them to the boss on paper. The boss writes replies and gives them back to the secretary, who sends the replies and other messages to the original sender and sometimes to others. Because the secretary uses store-and-forward mail, she has no way of knowing if the replies ever get through. We know the secretary is communicating with people by phone, but we do not know why (perhaps to set up meetings); we also do not know how she coordinates with the boss offline.
Figure 4. Sequence model: This model shows the steps the secretary takes to answer the boss's mail from his handwritten replies. The secretary gets the stack of printed messages with the boss's replies written on them and works through the stack. The secretary may write a reply from the boss's written reply, get further clarification from him, take other action such as calling the sender, or delete the mail without doing anything. When the secretary has dealt with a message she marks the paper copy so she knows it is done.
Figure 5. Abstract flow model: This is an abstraction of the work of communicating, incorporating the boss and secretary as well as other data. The secretary's role in helping the boss communicate has been named "communication coordinator." Looking at other customers, we discovered group communication can break down when no one is handling it for the group. We borrow the idea of a communication coordinator from the secretary, and use it to solve the group's problem. The coordinator can manage a group's communications in the same way that secretaries manage their boss's. When supporting a group, the coordinator intercepts messages sent to the group as a whole and distributes them to individual members. Messages can still be sent to specific group members. The coordinator may be a group member playing both roles. (Notes: a principal is any person or group whose mail is handled by another; communication between coordinator and principal may be paper or electronic; receipts may be for any message.)
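
A flow model of this kind is naturally encoded as a directed graph with roles as nodes and communications as labeled edges. A hypothetical sketch along the lines of Figure 3 (the paper's models are drawings, not code; all names here are invented):

# Hypothetical encoding of the Figure 3 flow model as a directed graph:
# keys are (source role, destination role), values describe the medium.
flow_model = {
    ("sender", "secretary"): "electronic message (intercepted for the boss)",
    ("secretary", "boss"): "printed messages on paper",
    ("boss", "secretary"): "handwritten replies",
    ("secretary", "sender"): "reply via store-and-forward mail (no receipt)",
    ("secretary", "others"): "phone calls (purpose unknown)",
}

def flows_from(role):
    """List everything a role sends; useful for spotting breakdowns such as
    the missing delivery confirmation on outgoing replies."""
    return [(dst, medium) for (src, dst), medium in flow_model.items()
            if src == role]

print(flows_from("secretary"))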


Journal ArticleDOI
TL;DR: The model, which makes extensive use of object-oriented techniques such as inheritance and polymorphism, is presented, followed by development of the concurrent programming method.
Abstract: While there have been many attempts to provide object-oriented languages with a model of concurrency, few have dealt with reusability and methodology. Here we propose a concurrent model that takes into account such important concerns. Concept unifications are a necessity, and underlie the need to make object-oriented programming adaptable to concurrency. The model characteristics, especially reusability, permit definition of a concurrent object-oriented design method. The model, which makes extensive use of object-oriented techniques such as inheritance and polymorphism, is presented, followed by development of the concurrent programming method.
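
One common way to unify objects and concurrency, in the spirit of (though not necessarily identical to) the paper's model, is the active-object pattern: each object owns a thread and serializes incoming requests through a queue, so methods inherited by subclasses remain reusable without explicit locking. A minimal sketch:

import queue
import threading

class ActiveObject:
    """Each instance owns a thread; method requests are queued and executed
    one at a time, so subclasses inherit thread-safety for free."""
    def __init__(self):
        self._requests = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            method, args = self._requests.get()
            method(*args)

    def send(self, method, *args):
        self._requests.put((method, args))   # asynchronous invocation

class Counter(ActiveObject):                 # reuse via ordinary inheritance
    def __init__(self):
        super().__init__()
        self.value = 0
    def increment(self, n):
        self.value += n

c = Counter()
for _ in range(1000):
    c.send(c.increment, 1)   # no data race: one queue, one executing thread
# Requests are processed asynchronously by c's own thread.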

Journal ArticleDOI
TL;DR: The author explains how he came to see that concurrency requires a fresh approach, not merely an extension of the repertoire of entities and constructions which explain sequential computing, and outlines a new basic calculus for concurrency.
Abstract: … this award, bearing the name of Alan Turing. Perhaps Turing would be pleased that it should go to someone educated at his old college, King's College at Cambridge. While there in 1956 I wrote my first computer program; it was on the EDSAC. Of course EDSAC made history. But I am ashamed to say it did not lure me into computing, and I ignored computers for four years. In 1960 I thought that computers might be more peaceful to handle than schoolchildren (I was then a teacher), so I applied for a job at Ferranti in London, at the time of Pegasus. I was asked at the interview whether I would like to devote my life to computers. This daunting notion had never crossed my mind. Well, here I am still, and I have had the lucky chance to grow alongside computer science. This award gives an unusual opportunity, and I hope a license, to reflect on a line of research from a personal point of view. I thought I should seize the opportunity, because among my interests there is one thread which has preoccupied me for 20 years. Describing this kind of experience can surely yield insight, provided one remembers that it is a personal thread; science is woven from many such threads and is all the stronger when each thread is hard to trace in the finished fabric. The thread which I want to pick up is the semantic basis of concurrent computation. I shall begin by explaining how I came to see that concurrency requires a fresh approach, not merely an extension of the repertoire of entities and constructions which explain sequential computing. Then I shall talk about my efforts to find basic constructions for concurrency, guided by experience with sequential semantics. This is the work which led to a Calculus for Communicating Systems (CCS). At that point I shall briefly discuss the extent to which these constructions may be understood mathematically, in the way that sequential computing may be understood in terms of functions. Finally, I shall outline a new basic calculus for concurrency; it gives prominence to the old idea of naming or reference, which has hitherto been treated as a second-class citizen by theories of computing. I make a disclaimer. I reject the idea that there can be a unique conceptual model, or one preferred formalism, for all aspects of something as large as concurrent …
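
The heart of CCS fits in two lines; the following is a standard textbook example, not drawn from the lecture itself: complementary actions on a name a synchronize into the silent action tau.

% Minimal CCS example (standard notation, not taken from the lecture):
% complementary actions a and \bar{a} synchronize into the silent action \tau.
\[
  A \stackrel{\mathrm{def}}{=} a.A' \qquad
  B \stackrel{\mathrm{def}}{=} \bar{a}.B' \qquad
  A \mid B \xrightarrow{\ \tau\ } A' \mid B'
\]
% Restriction hides a from the environment, forcing the synchronization:
% (A \mid B) \backslash a can move only by the \tau-transition above.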


Journal ArticleDOI
Judith Perrolle
TL;DR: The new ACM Code of Ethics and Professional Conduct is presented as an aid to individual ethical decision making, illustrated through nine classes of cases covering intellectual property, privacy, confidentiality, professional quality, fairness, liability, software risks, conflicts of interest, and unauthorized access.
Abstract: … the public that they deserve to be self-regulating. Self-regulation depends on ways to deter unethical behavior of the members, and a code, combined with an ethics review board, was seen as the solution. Codes of ethics have tended to list possible violations and threaten sanctions for such violations. ACM's first code, the Code of Professional Conduct, was adopted in 1972 and followed this model. The latest ACM code, the Code of Ethics and Professional Conduct, was adopted in 1992 and takes a new direction. ACM and many other societies have had difficulties implementing an ethics review system and came to realize that self-regulation depends mostly on the consensus and commitment of its members to ethical behavior. Now the most important rationale for a code of ethics is an embodiment of a set of commitments of that association's members. Sometimes these commitments are expressed as rules and sometimes as ideals, but the essential social function is to clarify and formally state those ethical requirements that are important to the group as a professional association. The new ACM Code of Ethics and Professional Conduct follows this philosophy. Recent codes of ethics emphasize socialization or education rather than enforced compliance. A code can work toward the collective good even though it may be a mere distillation of collective experience and reflection. A major benefit of an educationally oriented code is its contribution to the group by clarifying the professionals' responsibility to society. … "are deserving of its confidence and respect, and of increased social and economic rewards" [5]. The final and most important function of a code of ethics is its role as an aid to individual decision making. In the interest of facilitating better ethical decision making, we have developed a set of nine classes that describe situations calling for ethical decision making. These cases address in turn the topics of intellectual property, privacy, confidentiality, professional quality, fairness or discrimination, liability, software risks, conflicts of interest, and unauthorized access to computer systems. Within each case we begin with a scenario to illustrate a typical ethical decision point and then lay out the different imperatives (principles) of the new Code of Ethics that pertain to that decision. There are 24 princi…

Journal ArticleDOI
TL;DR: A different approach is considered, which deduces shared-interest relationships between people based on the history of email communication, using a set of heuristic graph algorithms that are powerful and can threaten privacy.
Abstract: Ongoing increases in wide-area network connectivity promise vastly increased opportunities for collaboration and resource sharing. A fundamental problem confronting users of such networks is how to discover the existence of resources of interest, such as files, retail products, network services, or people. In this article we focus on the problem of discovering people who have particular interests or expertise. For an overview of the larger research project into which this work fits, the reader is referred to [16]. The typical approach to locating people is to build a directory from explicitly registered data. This approach is taken, for example, by the X.500 directory service standard [3]. While this approach provides good support for locating particular users (the "white-pages" problem), it does not easily support finding users who have particular interests or expertise (the "yellow-pages" problem). One could create special interest group lists, but doing so requires a significant amount of effort. For each group someone has to build and maintain a membership list. Moreover, building such lists assumes one knows which lists should be compiled and who should be included in each list. In a large network, the set of possible interest groups can be quite large and rapidly evolving. It is difficult to track the interests of such a community using explicitly registered data. We consider a different approach, which deduces shared-interest relationships between people based on the history of email communication. Using this approach, a user could search for people by requesting a list of people whose interests are similar to several people known to have the interest in question. This technique can support a fine-grained, dynamic means of locating people with related interests. The set of possible interests can be arbitrarily specialized, and the people located will be appropriate at the time of the search, rather than at some earlier time when a list was compiled. One might attempt to discern shared interests by analyzing subject lines and message bodies in electronic mail messages. Beyond the obvious privacy problems, doing this would pose difficult natural-language recognition problems. Instead, we approached the problem by analyzing the structure of the graph formed from "From:/To:" email logs, using a set of heuristic graph algorithms. We demonstrate the algorithms by applying them to email logs we collected from 15 sites around the world between December 1, 1988 and January 31, 1989. The graph generated from these logs contained approximately 50,000 people in 3,700 different sites worldwide. Using these algorithms, we were able to deduce shared-interest lists for people far beyond the data collection sites. Because the algorithms we present can deduce shared-interest relationships from any communication graph, they are powerful and can threaten privacy. We propose recommendations that we believe should underlie the ethical use of these algorithms and discuss several possible applications we believe to not threaten privacy.
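
A heuristic in the spirit of the article (the authors' actual graph algorithms differ in their details): build an undirected graph from "From:/To:" logs, then rank people by how much their correspondents overlap with those of a few seed people known to share the interest. All names below are invented for illustration:

from collections import defaultdict

def build_graph(email_log):
    """email_log: iterable of (sender, recipient) pairs from From:/To: headers."""
    neighbors = defaultdict(set)
    for sender, recipient in email_log:
        neighbors[sender].add(recipient)
        neighbors[recipient].add(sender)
    return neighbors

def shared_interest_candidates(neighbors, seeds, top=5):
    """Score each person by overlap between their correspondents and the
    seeds' correspondents (a simple Jaccard heuristic, invented here)."""
    seed_contacts = set().union(*(neighbors[s] for s in seeds))
    scores = {}
    for person, contacts in neighbors.items():
        if person in seeds:
            continue
        union = contacts | seed_contacts
        scores[person] = len(contacts & seed_contacts) / len(union) if union else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:top]

log = [("ann", "bob"), ("bob", "carol"), ("ann", "carol"),
       ("dave", "carol"), ("dave", "bob"), ("erin", "frank")]
print(shared_interest_candidates(build_graph(log), seeds={"ann"}))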