
Showing papers on "The Internet published in 1994"


Book
01 Oct 1994
TL;DR: In this paper, the authors present a look inside the development, inner workings and future of the Internet, and recommend the book as "a must-read for anyone hoping to understand the next wave of human culture and communication".
Abstract: From the Publisher: Praised as "one of the ten best books of the year" by Business Week, this lively and provocative look inside the development, inner workings and future of the Internet is a must-read for anyone hoping to understand the next wave of human culture and communication. "Read, learn, smile, weep, enjoy: managers, policy-makers, and fellow citizens, this book is worth your time." --Tom Peters

4,574 citations


01 Jun 1994
TL;DR: This memo discusses a proposed extension to the Internet architecture and protocols to provide integrated services, i.e., to support real- time as well as the current non-real-time service of IP.
Abstract: This memo discusses a proposed extension to the Internet architecture and protocols to provide integrated services, i.e., to support real- time as well as the current non-real-time service of IP. This extension is necessary to meet the growing need for real-time service for a variety of new applications, including teleconferencing, remote seminars, telescience, and distributed simulation.

3,114 citations


Journal ArticleDOI
Tim Berners-Lee1, Robert Cailliau1, Ari Luotonen1, Henrik Frystyk Nielsen1, Arthur Secret1 
TL;DR: The World Wide Web (W3), as discussed in this paper, was developed to be a pool of human knowledge that allows collaborators in remote sites to share their ideas and all aspects of a common project.
Abstract: Publisher Summary This chapter discusses the history and growth of World Wide Web (W3). The World-Wide Web was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project. Physicists and engineers at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, collaborate with many other institutes to build the software and hardware for high-energy physics research. The idea of the Web was prompted by positive experience of a small “home-brew” personal hypertext system used for keeping track of personal information on a distributed project. The Web was designed so that if it was used independently for two projects, and later relationships were found between the projects, then no major or centralized changes would have to be made, but the information could smoothly reshape to represent the new state of knowledge. This property of scaling has allowed the Web to expand rapidly from its origins at CERN across the Internet irrespective of boundaries of nations or disciplines.

1,065 citations


Patent
16 Sep 1994
TL;DR: In this article, a payment system for enabling a first Internet user to make a payment to a second Internet user, typically for the purchase of an information product deliverable over the Internet, was proposed.
Abstract: A payment system for enabling a first Internet user to make a payment to a second Internet user, typically for the purchase of an information product deliverable over the Internet. The payment system provides cardholder accounts for the first and second Internet users. When the second user sends the information product to the first user over the Internet, the second user also makes a request over the Internet to a front end portion of the payment system requesting payment from the first user. The front end portion of the payment system queries the first user over the Internet whether to proceed with payment to the second user. If the first user replies affirmatively, a charge to the first user is processed off the Internet; however if the first user replies negatively, the first user is not charged for the information product. The payment system informs the second user of the first user's decision and pays the second user upon collection of the charge from the first user. Security is maintained by isolating financial and credit information of users' cardholder accounts from the front end portion of the payment system and by isolating the account identifying information from the associated e-mail address.

816 citations


Book
01 Dec 1994
TL;DR: This publication explains how to use the GENESIS simulation/modeling software system available through the Internet file-server at the California Institute of Technology, Pasadena, California, USA.
Abstract: This publication explains how to use the GENESIS simulation/modeling software system available through the Internet file-server at the California Institute of Technology, Pasadena, California, USA. The first part of the book consists of edited contributions from an international team of neural networks researchers working with GENESIS. They show the user the kind of models/simulations which can be created by using the software. The second part is a step-by-step tutorial for all professionals, researchers and students working in the area of neural networks and the cognitive sciences.

815 citations


01 Mar 1994
TL;DR: This document describes address allocation for private internets and specifies an Internet Best Current Practice for the Internet community, requesting discussion and suggestions for improvements.
Abstract: This document describes address allocation for private internets. It specifies an Internet Best Current Practice for the Internet Community, and requests discussion and suggestions for improvements.

727 citations
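The private address blocks this document sets aside (10/8, 172.16/12, and 192.168/16) can be checked with Python's standard ipaddress module. The helper below is a modern illustration of the allocation, not part of the document itself.

```python
# Membership test against the three private blocks reserved for
# internets that do not require globally unique addresses.
import ipaddress

PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(addr: str) -> bool:
    """True if addr falls in one of the reserved private blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in PRIVATE_BLOCKS)

print(is_private("192.168.1.10"))  # a typical private-LAN address
print(is_private("8.8.8.8"))       # a globally routable address
```

Note that 172.16/12 spans 172.16.0.0 through 172.31.255.255, so 172.32.0.1 is public; this is a common off-by-one mistake when the block is misread as 172.16/16.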


Book
01 Jan 1994
TL;DR: This market leader, the first text of its kind to cover entrepreneurship in one entire text, takes a practical step-by-step approach that helps develop entrepreneurial skills.
Abstract: This market leader was the first of its kind to cover entrepreneurship in one entire text. Its practical step-by-step approach helps develop entrepreneurial skills. The revision of this successful text features the Internet and a chapter on "Quality and the Human Factor," as well as current management themes that will keep the text at the forefront of the market.

685 citations


Proceedings ArticleDOI
12 Jun 1994
TL;DR: The authors investigate the performance of four different algorithms for adaptively adjusting the playout delay of audio packets in an interactive packet-audio terminal application, and indicate that an adaptive algorithm which explicitly adjusts to the sharp, spike-like increases in packet delay can achieve a lower rate of lost packets.
Abstract: Recent interest in supporting packet-audio applications over wide area networks has been fueled by the availability of low-cost, toll-quality workstation audio and the demonstration that limited amounts of interactive audio can be supported by today's Internet. In such applications, received audio packets are buffered, and their playout delayed at the destination host in order to compensate for the variable network delays. The authors investigate the performance of four different algorithms for adaptively adjusting the playout delay of audio packets in an interactive packet-audio terminal application, in the face of such varying network delays. They evaluate the playout algorithms using experimentally-obtained delay measurements of audio traffic between several different Internet sites. Their results indicate that an adaptive algorithm which explicitly adjusts to the sharp, spike-like increases in packet delay which were observed in the traces can achieve a lower rate of lost packets for both a given average playout delay and a given maximum buffer size.

567 citations
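The family of playout algorithms the paper compares is built on running estimates of network delay and its variation, with the playout point set a few deviations above the mean. The sketch below shows that common skeleton only; the class name, smoothing constant, and safety factor are illustrative, not the paper's exact parameters, and the spike-detection variant the authors favor is omitted.

```python
# Sketch of an adaptive playout-delay estimator: exponentially weighted
# averages of packet delay (d_hat) and delay variation (v_hat), with the
# playout delay buffered a few deviations beyond the mean to absorb jitter.
ALPHA = 0.998002  # heavy smoothing, so single samples barely move the estimate

class PlayoutEstimator:
    def __init__(self, first_delay):
        self.d_hat = float(first_delay)  # smoothed delay estimate
        self.v_hat = 0.0                 # smoothed delay variation

    def update(self, delay):
        """Fold one measured packet delay into the running estimates."""
        self.d_hat = ALPHA * self.d_hat + (1 - ALPHA) * delay
        self.v_hat = ALPHA * self.v_hat + (1 - ALPHA) * abs(delay - self.d_hat)

    def playout_delay(self):
        # Play out later than the mean delay by a multiple of the variation.
        return self.d_hat + 4 * self.v_hat

est = PlayoutEstimator(first_delay=100.0)
for d in [102, 98, 101, 250, 99]:   # 250 ms models a delay spike
    est.update(d)
print(round(est.playout_delay(), 1))
```

Because ALPHA is close to 1, the 250 ms spike barely perturbs d_hat; the paper's point is that an algorithm which reacts explicitly to such spikes, rather than smoothing through them, loses fewer packets.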


Journal ArticleDOI
TL;DR: Etzioni, Lesh, and Segal, as discussed by the authors, developed the Internet Softbot (software robot), which uses a UNIX shell and the World Wide Web to interact with a wide range of internet resources.
Abstract: The Internet Softbot (software robot) is a fully implemented AI agent developed at the University of Washington (Etzioni, Lesh, & Segal 1993). The softbot uses a UNIX shell and the World-Wide Web to interact with a wide range of internet resources. The softbot's effectors include ftp, telnet, mail, and numerous file manipulation commands. Its sensors include internet facilities such as archie, gopher, netfind, and many more. The softbot is designed to incorporate new facilities into its repertoire as they become available. The softbot's "added value" is three-fold. First, it provides an integrated and expressive interface to the internet. Second, the softbot dynamically chooses which facilities to invoke, and in what sequence. For example, the softbot might use netfind to determine David McAllester's e-mail address. Since it knows that netfind requires a person's institution as input, the softbot would first search bibliographic databases for a technical report by McAllester which would reveal his institution, and then feed that information to netfind. Third, the softbot fluidly backtracks from one facility to another based on information collected at run time. As a result, the softbot's behavior changes in response to transient system conditions (e.g., the UUCP gateway is down). In this article, we focus on the ideas underlying the softbot-based interface.

553 citations


Book
01 Apr 1994
TL;DR: In this book, the authors present business models, cases, and a roadmap for business-to-business electronic commerce over the Internet.
Abstract: Key Features of Internet Electronic Commerce. Business Models for Electronic Commerce. Business-to-Business Electronic Commerce Cases. Markets and Competition. Marketing Strategies and Programmes. Roadmap for Business-to-Business Electronic Commerce. Bibliography. Endnotes. Subject Index.

526 citations


Journal ArticleDOI
TL;DR: This paper explores the issues involved in designing and developing network software architectures for large-scale virtual environments in the context of NPSNET-IV, the first 3-D virtual environment that incorporates both the IEEE 1278 distributed interactive simulation (DIS) application protocol and the IP multicast network protocol for multiplayer simulation over the Internet.
Abstract: This paper explores the issues involved in designing and developing network software architectures for large-scale virtual environments. We present our ideas in the context of NPSNET-IV, the first 3-D virtual environment that incorporates both the IEEE 1278 distributed interactive simulation (DIS) application protocol and the IP multicast network protocol for multiplayer simulation over the Internet.

Posted Content
TL;DR: In this article, a smart-market mechanism for pricing traffic on the Internet is proposed, and the authors discuss the components of an efficient pricing structure, including technology and costs relevant to pricing access to and usage of the Internet.
Abstract: This paper was prepared for the conference "Public Access to the Internet," JFK School of Government, May 26-27, 1993. We describe some of the technology and costs relevant to pricing access to and usage of the Internet, and discuss the components of an efficient pricing structure. We suggest a possible smart-market mechanism for pricing traffic on the Internet.

Journal ArticleDOI
TL;DR: The network concepts underlying MBone, the importance of bandwidth considerations, various application tools, MBone events, interesting MBone uses, and guidance on how to connect your Internet site to the MBone are described.
Abstract: Researchers have produced the Multicast Backbone (MBone), which provides audio and video connectivity from outer space to under water, and virtually everyplace in between. MBone is a virtual network that has been in existence since early 1992. It originated from an effort to multicast audio and video from meetings of the Internet Engineering Task Force. Today, hundreds of researchers use MBone to develop protocols and applications for group communication. Multicast provides one-to-many and many-to-many network delivery services for applications such as videoconferencing and audio where several hosts need to communicate simultaneously. This article describes the network concepts underlying MBone, the importance of bandwidth considerations, various application tools, MBone events, and interesting MBone uses, and provides guidance on how to connect your Internet site to the MBone.
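The multicast delivery the MBone relies on uses class D group addresses (224.0.0.0/4): a host sends to a group address rather than to individual receivers. Python's standard ipaddress module can classify candidate group addresses before an application tries to join them; the sample addresses below are illustrative, not MBone session addresses.

```python
# Distinguish multicast group addresses (class D, 224.0.0.0/4) from
# ordinary unicast addresses, as a receiver would before joining a group.
import ipaddress

def is_multicast_group(addr: str) -> bool:
    """True if addr is an IP multicast (class D) group address."""
    return ipaddress.ip_address(addr).is_multicast

for addr in ["224.2.0.1", "239.255.255.250", "192.0.2.1"]:
    print(addr, is_multicast_group(addr))
```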

Proceedings ArticleDOI
01 Oct 1994
TL;DR: The mechanism uses novel probing to solicit feedback information in a scalable manner and to estimate the number of receivers, and it separates the congestion signal from the congestion control algorithm so as to cope with heterogeneous networks.
Abstract: We describe a mechanism for scalable control of multicast continuous media streams. The mechanism uses a novel probing mechanism to solicit feedback information in a scalable manner and to estimate the number of receivers. In addition, it separates the congestion signal from the congestion control algorithm, so as to cope with heterogeneous networks. This mechanism has been implemented in the IVS video conference system using options within RTP to elicit information about the quality of the video delivered to the receivers. The H.261 coder of IVS then uses this information to adjust its output rate, the goal being to maximize the perceptual quality of the image received at the destinations while minimizing the bandwidth used by the video transmission. We find that our prototype control mechanism is well suited to the Internet environment. Furthermore, it prevents video sources from creating congestion in the Internet. Experiments are underway to investigate how the scalable probing mechanism can be used to facilitate multicast video distribution to a large number of participants.
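The scalability problem the probing mechanism addresses is feedback implosion: if every receiver reports, the source drowns. One standard way to avoid this, sketched below as a toy simulation, is probabilistic polling: each receiver replies only with probability p, and the source scales the reply count back up to estimate the audience size. The parameters and the simulation itself are illustrative; they are not the specific probing scheme implemented in IVS.

```python
# Toy simulation of probabilistic feedback polling: with reply
# probability p, the expected number of replies is p * N, so
# replies / p is an unbiased estimate of the receiver count N.
import random

def probe(num_receivers, p, rng):
    """Simulate one probe round; each receiver replies with probability p."""
    replies = sum(1 for _ in range(num_receivers) if rng.random() < p)
    return replies / p

rng = random.Random(42)  # fixed seed so the sketch is reproducible
estimates = [probe(10_000, 0.01, rng) for _ in range(50)]
avg = sum(estimates) / len(estimates)
print(round(avg))  # close to the true receiver count of 10,000
```

With p = 0.01 the source handles roughly 100 replies per round instead of 10,000, which is the point: feedback cost stays roughly constant as the audience grows.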

01 Dec 1994
TL;DR: The Internet Message Access Protocol, Version 4rev1 (IMAP4rev1) allows a client to access and manipulate electronic mail messages on a server in a way that is functionally equivalent to local mailboxes.
Abstract: The Internet Message Access Protocol, Version 4rev1 (IMAP4rev1) allows a client to access and manipulate electronic mail messages on a server. IMAP4rev1 permits manipulation of remote message folders, called "mailboxes", in a way that is functionally equivalent to local mailboxes. IMAP4rev1 also provides the capability for an offline client to resynchronize with the server (see also [IMAP-DISC]).
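IMAP4rev1 is a line-oriented protocol: the client prefixes each command with a unique tag, and the server answers with untagged ("*") data lines followed by a tagged completion result. The tiny framer below illustrates just that framing discipline; it is a sketch with invented class and method names, not a full client (Python's standard imaplib provides one).

```python
# Minimal illustration of IMAP4rev1 command tagging: the client generates
# a fresh tag per command, and matches the server's tagged completion
# line against it, treating "*" lines as untagged interim data.
import itertools

class ImapFramer:
    def __init__(self):
        self._tags = (f"A{n:04d}" for n in itertools.count(1))
        self.last_tag = None

    def command(self, name, *args):
        """Return the CRLF-terminated wire form of one tagged command."""
        self.last_tag = next(self._tags)
        return " ".join([self.last_tag, name, *args]) + "\r\n"

    def is_completion(self, line):
        """True if a server line is the tagged completion of the last command."""
        return self.last_tag is not None and line.startswith(self.last_tag + " ")

f = ImapFramer()
wire = f.command("SELECT", "INBOX")
print(wire.strip())
print(f.is_completion("A0001 OK [READ-WRITE] SELECT completed"))
print(f.is_completion("* 18 EXISTS"))   # untagged data, not a completion
```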

Journal ArticleDOI
Ari Luotonen1, Kevin Altis2
01 Nov 1994
TL;DR: This paper gives an overview of WWW proxies. A new feature is caching performed by the proxy, which yields shorter response times after the first document fetch and makes proxies useful even to people who have full Internet access and do not really need the proxy just to get out of their local subnet.
Abstract: A WWW proxy server, proxy for short, provides access to the Web for people on closed subnets who can only access the Internet through a firewall machine. The hypertext server developed at CERN, cern_httpd, is capable of running as a proxy, providing seamless external access to HTTP, Gopher, WAIS and FTP. cern_httpd has had gateway features for a long time, but only this spring they were extended to support all the methods in the HTTP protocol used by WWW clients. Clients do not lose any functionality by going through a proxy, except special processing they may have done for non-native Web protocols such as Gopher and FTP. A brand new feature is caching performed by the proxy, resulting in shorter response times after the first document fetch. This makes proxies useful even to the people who do have full Internet access and do not really need the proxy just to get out of their local subnet. This paper gives an overview of proxies and reports their current status.
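The caching behaviour described for cern_httpd can be sketched as a store keyed by URL: the first request pays the cost of the origin fetch, and repeats are answered locally. In the sketch below, CachingProxy and fetch_origin are invented names, and the callable stands in for a real HTTP request; no expiry or validation logic is modelled.

```python
# Toy caching proxy: a dictionary keyed by URL. Only the first request
# for a document reaches the origin server; later requests are served
# from the cache, which is what shortens response times after the
# first fetch.
class CachingProxy:
    def __init__(self, fetch_origin):
        self._fetch = fetch_origin   # stand-in for a real HTTP fetch
        self._cache = {}
        self.misses = 0

    def get(self, url):
        if url not in self._cache:
            self.misses += 1                  # first fetch goes to the origin
            self._cache[url] = self._fetch(url)
        return self._cache[url]               # repeats answered locally

proxy = CachingProxy(lambda url: f"<body of {url}>")
proxy.get("http://info.cern.ch/")
proxy.get("http://info.cern.ch/")   # served from the cache
print(proxy.misses)
```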

Journal ArticleDOI
TL;DR: This work surveys the technological issues for designing a large-scale, distributed, interactive multimedia system.
Abstract: Interactive multimedia systems are rapidly evolving from marketing hype and research prototypes to commercial deployments. We survey the technological issues for designing a large-scale, distributed, interactive multimedia system.

Book
01 Jan 1994
TL;DR: The 2-amino-3-bromoanthraquinone which is isolated may be used for the manufacture of dyes and is at least as pure as that obtained from purified 2-aminoanthraquinone by the process of the prior art.
Abstract: In a process for the manufacture of 2-amino-3-bromoanthraquinone by heating 2-aminoanthraquinone with bromine (in the molar ratio of 1:1) in sulfuric acid, while mixing, the improvement wherein crude 2-aminoanthraquinone, in sulfuric acid of from 60 to 90 percent strength by weight, which contains from 10 to 15% by weight of an alkanecarboxylic acid of 3 or 4 carbon atoms or a mixture of such acids, is heated with from 1 to 1.05 moles of bromine per mole of 2-aminoanthraquinone at from 130 to 150 °C. The 2-amino-3-bromoanthraquinone which is isolated may be used for the manufacture of dyes. It is at least as pure as that obtained from purified 2-aminoanthraquinone by the process of the prior art.

Book
01 Mar 1994
TL;DR: Covering today's latest technology, How the Internet Works, Fourth Edition explains the Internet and the technologies that make it work.
Abstract: From the Publisher: Covering today's latest technology, How the Internet Works, Fourth Edition explains the Internet and the technologies that make it work.

Journal ArticleDOI
TL;DR: The authors classify firewalls into three main categories: packet filtering, circuit gateways, and application gateways; their focus is on the TCP/IP protocol suite, especially as used on the Internet.
Abstract: Computer security is a hard problem. Security on networked computers is much harder. Firewalls (barriers between two networks), when used properly, can provide a significant increase in computer security. The authors classify firewalls into three main categories: packet filtering, circuit gateways, and application gateways. Commonly, more than one of these is used at the same time. Their examples and discussion relate to UNIX systems and programs. The majority of multiuser machines on the Internet run some version of the UNIX operating system. Most application-level gateways are implemented in UNIX. This is not to say that other operating systems are more secure; however, there are fewer of them on the Internet, and they are less popular as targets for that reason. But the principles and philosophy apply to network gateways built on other operating systems as well. Their focus is on the TCP/IP protocol suite, especially as used on the Internet.
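Of the three firewall categories the authors name, packet filtering is the simplest to sketch: each rule matches on packet header fields and carries an allow/deny action, and the first matching rule wins, with a default-deny posture if nothing matches. The rule fields and example ruleset below are illustrative, not taken from the paper.

```python
# Minimal first-match-wins packet filter over (protocol, source prefix,
# destination port), with "*" as a wildcard and default deny.
import ipaddress

class Rule:
    def __init__(self, action, proto, src, dport):
        self.action = action                  # "allow" or "deny"
        self.proto = proto                    # "tcp", "udp", or "*"
        self.src = ipaddress.ip_network(src)  # source address prefix
        self.dport = dport                    # destination port or "*"

    def matches(self, proto, src, dport):
        return (self.proto in ("*", proto)
                and ipaddress.ip_address(src) in self.src
                and self.dport in ("*", dport))

def filter_packet(rules, proto, src, dport, default="deny"):
    """Evaluate rules in order; fall through to a default-deny posture."""
    for rule in rules:
        if rule.matches(proto, src, dport):
            return rule.action
    return default

rules = [
    Rule("allow", "tcp", "0.0.0.0/0", 25),   # let SMTP in from anywhere
    Rule("deny",  "*",   "0.0.0.0/0", "*"),  # drop everything else
]
print(filter_packet(rules, "tcp", "192.0.2.7", 25))
print(filter_packet(rules, "tcp", "192.0.2.7", 23))
```

Real packet filters also match on destination address, TCP flags, and interface, which is exactly where the paper notes this category gets hard to configure correctly.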

Book
01 Sep 1994
TL;DR: ATM Theory and Applications explores the remarkable range of ATM applications, objectively compares ATM with competing technologies, and even offers some provocative predictions about how ATM technology might evolve.
Abstract: From the Publisher: The most current, most complete, and most "real-world" guide ever on ATM! This landmark reference is the ultimate "in print" database of ATM technology, services, and applications. From basic principles to detailed real-world examples, this text has it all. Emphasizing practice over theory in an engaging, fun-to-read style, the authors present high-level summaries followed up with detailed treatment of all key areas of ATM technology. It is the only book that provides in-depth treatment of such "hot" new protocols as: IP and Tag Switching, Private Network Network Interface (PNNI), LAN Emulation (LANE), Understandable View of ATM Signaling, MultiProtocol Over ATM (MPOA), Available Bit Rate (ABR) flow control, and Internet ReSerVation Protocol (RSVP). Traffic engineering and network design considerations are also extensively explained and illustrated. This is a must-have reference that will substantially enable any reader to make smarter technological and strategic business decisions regarding virtually every aspect of how, where, and why to apply ATM. In addition, ATM Theory and Applications explores the remarkable range of ATM applications, objectively compares ATM with competing technologies, and even offers some provocative predictions about how ATM technology might evolve. "This update to our best-selling book not only provides a working tool for someone wanting to learn ATM, but also provides a reference for the practicing professional applying real-world solutions to business and technology problems."

Journal ArticleDOI
01 Nov 1994
TL;DR: The methodology used at the National Center for Supercomputing Applications in building a scalable World Wide Web server is outlined, allowing for dynamic scalability by rotating through a pool of http servers that are alternately mapped to the hostname alias of the www server.
Abstract: While the World Wide Web (www) may appear to be intrinsically scalable through the distribution of files across a series of decentralized servers, there are instances where this form of load distribution is both costly and resource intensive. In such cases it may be necessary to administer a centrally located and managed http server. Given the exponential growth of the internet in general, and www in particular, it is increasingly difficult for persons and organizations to properly anticipate their future http server needs, both in human resources and hardware requirements. It is the purpose of this paper to outline the methodology used at the National Center for Supercomputing Applications in building a scalable World Wide Web server. The implementation described in the following pages allows for dynamic scalability by rotating through a pool of http servers that are alternately mapped to the hostname alias of the www server. The key components of this configuration include: (1) a cluster of identically configured http servers; (2) use of Round-Robin DNS for distributing http requests across the cluster; (3) use of a distributed file system mechanism for maintaining a synchronized set of documents across the cluster; and (4) a method for administering the cluster. The result of this design is that we are able to add any number of servers to the available pool, dynamically increasing the load capacity of the virtual server. Implementation of this concept has eliminated perceived and real vulnerabilities in our single-server model that had negatively impacted our user community. This particular design has also eliminated the single point of failure inherent in our single-server configuration, increasing the likelihood for continued and sustained availability. While the load is currently distributed in an unpredictable and, at times, deleterious manner, early implementation and maintenance of this configuration have proven promising and effective.
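The Round-Robin DNS component rotates the order of the pool's addresses on successive queries, so successive clients are steered to different servers behind one hostname alias. The sketch below models that rotation only; the class name and addresses are illustrative, and real DNS servers layer caching and TTLs on top of this behaviour.

```python
# Toy Round-Robin DNS answer list: each query returns the full address
# list rotated one step, so the first (preferred) address cycles
# through the pool of identically configured http servers.
from collections import deque

class RoundRobinPool:
    def __init__(self, addresses):
        self._pool = deque(addresses)

    def resolve(self):
        """Return the current answer list, then rotate for the next query."""
        answer = list(self._pool)
        self._pool.rotate(-1)
        return answer

pool = RoundRobinPool(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(pool.resolve()[0])
print(pool.resolve()[0])
print(pool.resolve()[0])
print(pool.resolve()[0])  # wraps back around the pool
```

Adding capacity is then just appending another server's address to the pool, which is the "dynamic scalability" the paper describes.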

01 Dec 1994
TL;DR: This document specifies a minimum set of requirements for a kind of Internet resource identifier known as Uniform Resource Names (URNs) and provides information for the Internet community.
Abstract: This document specifies a minimum set of requirements for a kind of Internet resource identifier known as Uniform Resource Names (URNs). This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind.

ReportDOI
01 Jul 1994
TL;DR: This paper introduces Harvest, a system that provides a set of customizable tools for gathering information from diverse repositories, building topic-specific content indexes, flexibly searching the indexes, widely replicating them, and caching objects as they are retrieved across the Internet.
Abstract: Rapid growth in data volume, user base, and data diversity renders Internet-accessible information increasingly difficult to use effectively. In this paper we introduce Harvest, a system that provides a set of customizable tools for gathering information from diverse repositories, building topic-specific content indexes, flexibly searching the indexes, widely replicating them, and caching objects as they are retrieved across the Internet. The system interoperates with Mosaic and with HTTP, FTP, and Gopher information resources. We discuss the design and implementation of each subsystem and provide measurements indicating that Harvest can reduce server load, network traffic and index space requirements significantly compared with previous indexing systems. We also discuss a half dozen indexes we have built using Harvest, underscoring both the customizability and scalability of the system.

Book
01 Apr 1994
TL;DR: This major new book by a leading researcher addresses pressing questions about children's internet use and reveals the complex dynamic between online opportunities and online risks, exploring this in relation to much debated issues such as digital in/exclusion, learning and literacy, peer networking and privacy, civic participation, and risk and harm.
Abstract: Is the internet really transforming children and young people's lives? Is the so-called digital generation genuinely benefiting from exciting new opportunities? And, worryingly, facing new risks? This major new book by a leading researcher addresses these pressing questions. It deliberately avoids a techno-celebratory approach and, instead, interprets children's everyday practices of internet use in relation to the complex and changing historical and cultural conditions of childhood in late modernity. Uniquely, Children and the Internet reveals the complex dynamic between online opportunities and online risks, exploring this in relation to much debated issues such as: digital in/exclusion; learning and literacy; peer networking and privacy; civic participation; risk and harm. Drawing on current theories of identity, development, education and participation, this book includes a refreshingly critical account of the challenging realities undermining the great expectations held out for the internet - from governments, teachers, parents and children themselves. It concludes with a forward-looking framework for policy and regulation designed to advance children's rights to expression, connection and play online as well as offline.

Journal ArticleDOI
TL;DR: The visions of SHARE are presented, along with the research and strategies undertaken to build an infrastructure toward its realization, and a preliminary prototype environment is used by designers working on a variety of industry sponsored design projects.
Abstract: The SHARE project seeks to apply information technologies in helping design teams gather, organize, re-access, and communicate both informal and formal design information to establish a "shared understanding" of the design and design process. This paper presents the visions of SHARE, along with the research and strategies undertaken to build an infrastructure toward its realization. A preliminary prototype environment is being used by designers working on a variety of industry sponsored design projects. This testbed continues to inform and guide the development of NoteMail, MovieMail, and Xshare, as well as other components of the next generation SHARE environment that will help distributed design teams work together more effectively on the Internet.

Patent
Ashar Aziz1
03 Jun 1994
TL;DR: In this article, a client workstation provides a login address as an anonymous ftp (file transfer protocol) request, and a password as a user's e-mail address.
Abstract: A client workstation provides a login address as an anonymous ftp (file transfer protocol) request, and a password as a user's e-mail address. A destination server compares the user's e-mail address provided as a password to a list of authorized users' addresses. If the user's e-mail address is located on the list of authorized users' addresses maintained by the destination server, the destination server generates a random number (X), and encrypts the random number in an ASCII representation using encryption techniques provided by the Internet Privacy Enhanced Mail (PEM) procedures. The encrypted random number is stored in a file as the user's anonymous directory. The server further establishes the encrypted random number as one-time password for the user. The client workstation initiates an ftp request to obtain the encrypted PEM random number as a file transfer (ftp) request from the destination server. The destination server then sends the PEM encrypted password random number, as an ftp file, over the Internet to the client workstation. The client workstation decrypts the PEM encrypted file utilizing the user's private RSA key, in accordance with established PEM decryption techniques. The client workstation then provides the destination server with the decrypted random number password, which is sent in the clear over the Internet, to login to the destination server. Upon receipt of the decrypted random number password, the destination server permits the user to login to the anonymous directory, thereby completing the user authentication procedure and accomplishing login.

Journal ArticleDOI
TL;DR: In this paper, the authors indicate trends in data volume, user base, and data diversity, survey problems these trends will create for current approaches, and suggest several promising directions of future resource discovery research, along with some initial results from projects carried out by members of the Internet Research Task Force Research Group on Resource Discovery and Directory Service.
Abstract: Over the past several years, a number of information discovery and access tools have been introduced in the Internet, including Archie, Gopher, Netfind, and WAIS. These tools have become quite popular, and are helping to redefine how people think about wide-area network applications. Yet, they are not well suited to supporting the future information infrastructure, which will be characterized by enormous data volume, rapid growth in the user base, and burgeoning data diversity. In this paper we indicate trends in these three dimensions and survey problems these trends will create for current approaches. We then suggest several promising directions of future resource discovery research, along with some initial results from projects carried out by members of the Internet Research Task Force Research Group on Resource Discovery and Directory Service.

Book
19 Sep 1994
TL;DR: In this paper, the authors propose a system approach to understand technological change, which combines old and new technologies to reshape systems: recombining large technical systems, Ingo Braun and Bernward Joerges changing embedded systems, 1876-1914, Steven W. Usselman integrating supple technologies into utility power systems, possibilities for reconfiguration, Alexandra Suchard the normal accident of July 1914, Arden Bucholz.
Abstract: Introduction - the systems approach to understanding technological change, Jane Summerton. Part 1 Combining old and new technologies to reshape systems: recombining large technical systems - the case of European organ transplantation, Ingo Braun and Bernward Joerges changing embedded systems - the economics and politics of innovation in American railroad signalling, 1876-1914, Steven W. Usselman integrating supple technologies into utility power systems - possibilities for reconfiguration, Alexandra Suchard the normal accident of July 1914, Arden Bucholz. Part 2 Crossing borders reconfigures systems: transformation through integration - the unification of German telecommunications, Tobias Robischon multinationals in transition - global technical integration and the role of corporate telecommunication networks, Volker Schneider the Australian electric power industry and the politics of radical reconfiguration, Stephen M. Salsbury economics of changing grid systems - competition in the electricity supply industry, Olivier Coutard. Part 3 Confronting incompatibilities between changing systems and their cultures: the Internet challenge - conflict and compromise in computer networking, Janet Abbate broken plowshare - system failure and the nuclear power industry, Gene I. Rochlin. Part 4 Controlling the car - will the system change?: car traffic at the crossroads - new technologies for cars, traffic systems and their interaction, Reiner Grundmann rethinking road traffic as social interaction, Oskar Juhlin. Part 5 The logic of systemic technology: four notes on systemic technology, Svante Beckman. Part 6 Concluding comments: conclusion, Jane Summerton.

Journal ArticleDOI
TL;DR: This paper introduces the digital persona model, traces its origins, and discusses the manner in which the digital persona contributes to an understanding of particular dataveillance techniques such as computer matching and profiling.
Abstract: The digital persona is a model of the individual established through the collection, storage, and analysis of data about that person. It is a useful and even necessary concept for developing an understanding of the behavior of the new, networked world. This paper introduces the model, traces its origins, and provides examples of its application. It is suggested that an understanding of many aspects of network behavior will be enabled or enhanced by applying this concept. The digital persona is also a potentially threatening, demeaning, and perhaps socially dangerous phenomenon. One area in which its more threatening aspects require consideration is in data surveillance, the monitoring of people through their data. Data surveillance provides an economically efficient means of exercising control over the behavior of individuals and societies. The manner in which the digital persona contributes to an understanding of particular dataveillance techniques such as computer matching and profiling is discussed, an...