
Showing papers on "Disaster recovery published in 2002"


Patent
28 Mar 2002
TL;DR: In this article, a computer system is configured to capture a state of a first virtual machine, the state corresponding to a point in time in its execution, and to copy at least a portion of that state to a destination separate from a storage device to which the virtual machine is suspendable; a carrier medium carries instructions for this backup.
Abstract: One or more computer systems, a carrier medium, and a method are provided for backing up virtual machines. The backup may occur, e.g., to a backup medium or to a disaster recovery site, in various embodiments. In one embodiment, an apparatus includes a computer system configured to execute at least a first virtual machine, wherein the computer system is configured to: (i) capture a state of the first virtual machine, the state corresponding to a point in time in the execution of the first virtual machine; and (ii) copy at least a portion of the state to a destination separate from a storage device to which the first virtual machine is suspendable. A carrier medium may include instructions which, when executed, cause the above operation on the computer system. The method may comprise the above highlighted operations.
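The claimed flow (capture a point-in-time state, then copy it to a destination separate from the VM's own storage) can be sketched roughly as follows. This is a hypothetical illustration only; the class and field names are invented, not taken from the patent.

```python
import copy

class VirtualMachine:
    """Minimal stand-in for a running VM (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.memory = {}      # guest memory pages
        self.disk = {}        # virtual disk blocks

    def capture_state(self):
        # Capture a point-in-time state of the VM's execution.
        return {"name": self.name,
                "memory": copy.deepcopy(self.memory),
                "disk": copy.deepcopy(self.disk)}

def backup(vm, destination):
    """Copy at least a portion of the captured state to a destination
    separate from the VM's own storage, e.g. a DR-site store."""
    state = vm.capture_state()
    destination[vm.name] = state
    return state

vm = VirtualMachine("app-server")
vm.memory[0] = b"page0"
dr_site = {}   # stands in for a backup medium or DR site
backup(vm, dr_site)
assert dr_site["app-server"]["memory"][0] == b"page0"
```

The deep copy matters: the backup must reflect the captured point in time even if the VM keeps mutating its memory and disk afterwards.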

243 citations


Journal ArticleDOI
TL;DR: Alesch et al. as mentioned in this paper studied the long-term economic impacts of disasters on the private sector, noting that most prior research has focused on short-term impacts rather than the longer-term consequences of disaster victimization.

235 citations


Proceedings Article
28 Jan 2002
TL;DR: SnapMirror, an asynchronous mirroring technology that leverages file system snapshots to ensure the consistency of the remote mirror and optimize data transfer, is presented; exploiting file system knowledge of deletions is shown to be critical to achieving any transfer reduction for no-overwrite file systems such as WAFL and LFS.
Abstract: Computerized data has become critical to the survival of an enterprise. Companies must have a strategy for recovering their data should a disaster such as a fire destroy the primary data center. Current mechanisms offer data managers a stark choice: rely on affordable tape but risk the loss of a full day of data and face many hours or even days to recover, or have the benefits of a fully synchronized on-line remote mirror, but pay steep costs in both write latency and network bandwidth to maintain the mirror. In this paper, we argue that asynchronous mirroring, in which batches of updates are periodically sent to the remote mirror, can let data managers find a balance between these extremes. First, by eliminating the write latency issue, asynchrony greatly reduces the performance cost of a remote mirror. Second, by storing up batches of writes, asynchronous mirroring can avoid sending deleted or overwritten data and thereby reduce network bandwidth requirements. Data managers can tune the update frequency to trade network bandwidth against the potential loss of more data. We present SnapMirror, an asynchronous mirroring technology that leverages file system snapshots to ensure the consistency of the remote mirror and optimize data transfer. We use traces of production filers to show that even updating an asynchronous mirror every 15 minutes can reduce data transferred by 30% to 80%. We find that exploiting file system knowledge of deletions is critical to achieving any reduction for no-overwrite file systems such as WAFL and LFS. Experiments on a running system show that using file system metadata can reduce the time to identify changed blocks from minutes to seconds compared to purely logical approaches. Finally, we show that using SnapMirror to update every 30 minutes increases the response time of a heavily loaded system by only 22%.
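The bandwidth saving described above comes from coalescing each batch of writes before transfer: only the final contents of a block are sent, and a block deleted within the interval need not be sent at all. A minimal sketch of that idea (the block/update representation is invented for illustration, not NetApp's):

```python
def coalesce_batch(updates):
    """Reduce a time-ordered batch of (block_id, data) writes and
    (block_id, None) deletions to the minimal set to transfer.
    A later write supersedes an earlier one; a deletion cancels the send."""
    final = {}
    for block_id, data in updates:
        final[block_id] = data        # last write (or deletion) wins
    # Only blocks that still hold data cross the network; deletions
    # travel as cheap tombstones rather than as data.
    to_send = {b: d for b, d in final.items() if d is not None}
    tombstones = [b for b, d in final.items() if d is None]
    return to_send, tombstones

updates = [(1, b"v1"), (2, b"v1"), (1, b"v2"), (2, None), (3, b"v1")]
to_send, tombstones = coalesce_batch(updates)
assert to_send == {1: b"v2", 3: b"v1"}   # overwrite of block 1 coalesced
assert tombstones == [2]                 # deleted block never transferred
```

The longer the batching interval, the more overwrites and deletions cancel out, which is why the paper's 15-minute mirror updates can cut transfer volume so sharply.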

194 citations


01 Jan 2002
TL;DR: The requirements and innovative technology for an integrated disaster management communication and information system are sketched, addressing in particular network, configuration, scheduling and data management issues during the response and recovery phases.
Abstract: Disaster response and recovery efforts require timely interaction and coordination of public emergency services in order to save lives and property. Today, IT is used in this field only to a limited extent, but there is a tremendous potential for increasing efficiency and effectiveness in coping with a disaster. In this paper we sketch requirements and innovative technology for an integrated disaster management communication and information system, addressing in particular network, configuration, scheduling and data management issues during the response and recovery phases.

191 citations


Journal ArticleDOI
TL;DR: Wireless and mobile networks are being used in diverse areas such as travel, education, stock trading, military, package delivery, disaster recovery, and medical emergency care.
Abstract: Wireless and mobile networks are being used in diverse areas such as travel, education, stock trading, military, package delivery, disaster recovery, and medical emergency care.

142 citations


Journal ArticleDOI
TL;DR: A mathematical programming model is presented which helps the decision maker select, from among competing subplans, a subset that maximizes the "value" of the recovery capability of a recovery strategy.

116 citations


Journal ArticleDOI
TL;DR: In this paper, the authors outline the key content of such a plan and the issues to be addressed in drawing one up to ensure it meets real business recovery needs and continue the plan through to the actions needed to handle an actual emergency.
Abstract: The 11 September tragedy in the USA has provided a wake-up call to remind businesses of the need for adequate disaster recovery and business continuity planning. A business continuity plan must be comprehensive and up to date. This paper outlines the key content of such a plan and the issues to be addressed in drawing one up to ensure it meets real business recovery needs. It then follows the plan through to the actions needed to handle an actual emergency.

113 citations


Journal ArticleDOI
TL;DR: The Community Vulnerability Assessment Tool (CVAT) as mentioned in this paper is a risk and vulnerability assessment methodology designed by the National Oceanic and Atmospheric Administration's Coastal Services Center to assist emergency managers and planners in their efforts to reduce hazard vulnerabilities through hazard mitigation, comprehensive land-use, and development planning.
Abstract: Communities must identify exposure to hazard impacts to proactively address emergency response, disaster recovery and hazard mitigation, and incorporate sustainable development practices into comprehensive planning. Hazard mitigation, an important part of sustainable development, eliminates or minimizes disaster-related damages and empowers communities to respond to and recover more quickly from disasters. The Community Vulnerability Assessment Tool (CVAT) is a risk and vulnerability assessment methodology designed by the National Oceanic and Atmospheric Administration's Coastal Services Center to assist emergency managers and planners in their efforts to reduce hazard vulnerabilities through hazard mitigation, comprehensive land-use, and development planning. CVAT analysis results provide a baseline to prioritize mitigation measures and to evaluate the effectiveness of those measures over time. This methodology is flexible, as results may be achieved using a geographic information system or static maps with overlays and handwritten data. This paper outlines how to engage stakeholders and explains the CVAT process. Several case studies also highlight some of the challenges/problems and best practices/opportunities associated with applying the CVAT methodology.

112 citations


Journal ArticleDOI
TL;DR: This paper views Business Continuity Management as a progression from more traditional Disaster Recovery Planning: while recovery presupposes an event that causes a failure, continuity suggests the avoidance, or at least the minimization, of the impact of a failure.
Abstract: This paper views Business Continuity Management as a progression from more traditional Disaster Recovery Planning. While recovery presupposes an event that causes a failure, continuity suggests the avoidance, or at least the minimization, of the impact of a failure. Business Continuity Management is not just about Information Systems. Rather, it is about ensuring that the critical business functions can continue. Business Continuity Management is a process, not an event, and should deal with any threat that could affect the business. For many organizations reliant on sophisticated Information Technology, adequate Business Continuity Management is a basic requirement.

110 citations


Patent
30 May 2002
TL;DR: In this paper, a disaster recovery virtual roll call and recovery management system and method allows any organization to locate their staff and allocate resources to their staff in the event of a disaster.
Abstract: A disaster recovery virtual roll call and recovery management system and method allows any organization to locate their staff and allocate resources to their staff in the event of a disaster. User information can be stored on remote, distributed computer networks to assure that the information is available during a disaster. The computer networks can be web networks and Interactive Voice Response (IVR) networks, to provide different methods of user interaction with the system. In case of disaster, the system can contact the users over one or more communications networks, such as by email or IVR message, and request the user provide their status. The users can send user status updates to the web network using internet enabled devices, such as personal computers, telephones, or handheld portable computers, or to the IVR network using standard or wireless telephones. The system can compile the information, and generate reports on group and individual status.
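The roll-call idea above (contact every registered user over the available channels, record responses, and compile group and individual status reports) can be sketched as follows. The channel details, statuses, and names are invented for illustration; the patent does not prescribe them.

```python
from collections import Counter

def run_roll_call(users, responses):
    """users: {user_id: contact info}; responses: {user_id: status}
    as reported back over web or IVR channels. Anyone who has not
    responded is flagged as unaccounted-for."""
    report = {uid: responses.get(uid, "unaccounted") for uid in users}
    summary = Counter(report.values())      # group-level roll-up
    return report, summary

users = {"alice": "alice@example.com",      # email channel
         "bob": "+1-555-0100",              # IVR / phone channel
         "carol": "+1-555-0101"}
responses = {"alice": "safe", "bob": "needs-assistance"}

report, summary = run_roll_call(users, responses)
assert report["carol"] == "unaccounted"
assert summary == {"safe": 1, "needs-assistance": 1, "unaccounted": 1}
```

Storing `users` on remote, distributed systems, as the patent describes, is what keeps the roll call runnable when the organization's own site is the thing that failed.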

105 citations


Journal ArticleDOI
TL;DR: The thoroughgoing approach to business continuity planning (BCP) that is presented is generic enough to have practical value in a wide range of IT-related organizations, and it is process-oriented, ensuring well-guided BCP efforts and tangible results.
Abstract: Last year's terrorist attacks in the US have forced many organizations to critically reevaluate the adequacy of their existing business continuity plans and disaster recovery arrangements. The tragedy highlighted how important it is for organizations to remain commercially operational under even the most exceptional circumstances. E-business, which relies heavily on IT, is particularly vulnerable, because IT failures directly limit the capability to generate revenue. The thoroughgoing approach to business continuity planning (BCP) that I present, called the BCP cycle, can help you avoid those pitfalls. The BCP cycle is generic enough to have practical value in a wide range of IT-related organizations, and it is process-oriented, ensuring well-guided BCP efforts and tangible results.

Journal ArticleDOI
TL;DR: In this paper, six case studies of UK libraries and archives were used to investigate the development and use of disaster plans and found that the most useful part of the plan for disaster response is its contact lists.
Abstract: The disaster plan is promoted as a central part of disaster management. Six case studies of UK libraries and archives were used to investigate the development and use of disaster plans. During a disaster, the key in any response is leadership, an experienced team of staff with knowledge of the collections and on‐site conservation expertise. The most useful part of the plan for disaster response is its contact lists. However, the plan is an important policy and training document. It requires continued managerial commitment and should be supported by an organisational culture of disaster awareness and prevention. Organisational issues are the major constraint on the effectiveness of disaster planning and response. There is a need to investigate current levels of planning in the UK in order to determine what still needs to be done in terms of awareness raising. Methods of testing the disaster plan and co‐operation in disaster management also require further research.

Journal ArticleDOI
TL;DR: The Pascagoula refinery as discussed by the authors implemented standard hurricane readiness plans and initiated a shutdown of the refinery when the storm's forecasted path changed from landfall at the mouth of the Mississippi River to landfall to the east along the Mississippi Gulf Coast.
Abstract: When Hurricane Georges hit the Florida Keys in 1998, deflecting its course and putting Pascagoula, Mississippi, USA within its potential path, the local refinery implemented standard hurricane readiness plans. A shutdown of the refinery was initiated when the storm's forecasted path changed from landfall at the mouth of the Mississippi River to landfall to the east along the Mississippi Gulf Coast. All processes were brought to a halt, and the entire refinery was brought to a stop. The four primary buildings, including facility maintenance, were flooded in up to 64 in of salt water. The destruction and disruption were almost overwhelming. The next week was spent in an effort to assess the scope of the damage and develop a strategy to recover from the disaster. Since nothing of this magnitude had ever been experienced, all existing planning was insufficient to undertake such a task. This paper discusses the development of the recovery plan and the scope of the work. Establishing priorities, equipment reliability, and economic considerations are discussed. Execution of the recovery plan is described, including communication. The key points outlined in this article can be of great assistance in identifying key areas in planning and executing a successful recovery.

Patent
23 Apr 2002
TL;DR: In this paper, a method for recovering from a failure of an information handling system may include monitoring a system using at least one item of software and an application level component of a recovery utility to correct the detected failure.
Abstract: The present invention is directed to BIOS level and application level recovery of an information handling system. A method for recovering from a failure of an information handling system may include monitoring an information handling system that utilizes at least one item of software. Failure of at least one item of software utilized by the information handling system is detected, and an application level component of a recovery utility is initiated to correct the detected failure. If the initiated application level component of the recovery utility is unsuccessful in correcting the detected failure or is unavailable, a BIOS level component of the recovery utility begins a recovery process.

Book
12 Nov 2002
TL;DR: This IBM Redbook explores the role that IBM Tivoli Storage Manager plays in disaster protection and recovery, from both the client and server side, and describes basic sample procedures for bare metal recovery of popular operating systems.
Abstract: Keeping your TSM server and clients safe from disaster. How and why to build a disaster recovery plan. Testing your disaster recovery plan. Disasters, by their very nature, cannot be predicted, in either their intensity, timing, or effects. However, all enterprises can and should prepare for whatever might happen in order to protect themselves against loss of data or, worse, their entire business. It is too late to start preparing after a disaster occurs. This IBM Redbook will help you protect against a disaster, taking you step by step through the planning stages, with templates for sample documents. It explores the role that IBM Tivoli Storage Manager plays in disaster protection and recovery, from both the client and server side. Plus, it describes basic sample procedures for bare metal recovery of some popular operating systems, such as Windows 2000, AIX, Solaris, and Linux. This book is written for any computing professional who is concerned about protecting their data and enterprise from disaster. It assumes you have basic knowledge of storage technologies and products, in particular, IBM Tivoli Storage Manager.

01 Jan 2002
TL;DR: The proposed classification system seeks to enable comparison of different computer systems in the dimensions of availability, data integrity, disaster recovery, and security.
Abstract: A number of the industrial partners of the IFIP WG 10.4 Dependability Benchmarking SIG (SIGDeB) have identified a set of standardized classes for characterizing the dependability of computer systems. The proposed classification system seeks to enable comparison of different computer systems in the dimensions of availability, data integrity, disaster recovery, and security. Different sets of criteria are proposed for computer systems that are used for different application types, e.g. transaction processing, process control, etc. This paper describes the classification system, and gives a progress report on the work to fill in the details of the classification criteria.

Journal ArticleDOI
TL;DR: The virtual issue process (VIP) as discussed by the authors is a strategic planning tool developed by Sandia to provide concise information from a community or group that can be used to resolve complex issues and problems.
Abstract: The Southwest Indiana Disaster Resistant Community Corporation (SWIDRCC) and Sandia National Laboratories formed a partnership in 1999 with the intent of developing and deploying a system that will significantly lessen the loss of human life and lower the cost of disaster recovery in a five-county region. Although this region currently has a response system in place that appears adequate to meet the challenges posed by a disaster, the partnership is considering substantial improvements that could significantly lessen the cost of disasters. As a result of the SWIDRCC-Sandia partnership, a policy portfolio for the SWIDRCC has been developed and a significant technology development activity has been structured using the virtual issue process (VIP). VIP is a strategic planning tool developed by Sandia to provide concise information from a community or group that can be used to resolve complex issues and problems. The disaster management system that was defined as the result of the VIP will integrate sensor technologies, modeling and simulation tools, telemetry systems, and computing platforms, in addition to nonautomated elements including increased community education and involvement. The system is expected to provide information (pre-event, during event, and in event recovery) to community leaders that will significantly enhance the ability of the community to manage disaster response. The value of the system will be manifest in both reducing the loss of human life and staying the economic well-being of the community. It is expected that the system will serve as a prototype for other communities throughout the country. To aid in system definition during the process, two dynamic simulation models were also developed: the policy portfolio analysis tool and the infrastructure modeling tool.

31 Jul 2002
TL;DR: In this article, the authors examined the ways in which natural resources management and environmental degradation affect natural hazard risk, and made a preliminary assessment of the importance of such linkages and the extent of their incorporation into disaster mitigation strategies and activities.
Abstract: This paper examines the ways in which natural resources management and environmental degradation affect natural hazard risk, and makes a preliminary assessment of the importance of such linkages and the extent of their incorporation into disaster mitigation strategies and activities. Our analysis is based upon case studies in three countries in the Caribbean: Dominica, the Dominican Republic and St. Lucia, which are all highly vulnerable to natural hazards. In these three countries, detailed comprehensive analyses of these linkages do not exist. Such detailed analyses are also beyond the scope of this paper, which is a desk study without benefit of direct on-site field surveys or experience. Nevertheless, we have found strong circumstantial evidence from documents and interviews to support the conclusion that natural resources and environmental management can have a significant influence on natural hazard risks. For instance, the degradation of mangroves, reefs and natural beaches affects storm surge and wave risk, and deforestation and unsustainable agricultural practices on mountain slopes lead to increases in flood and landslide risk, locally and downstream. These linkages are often recognized in the disaster management literature, but they have not been incorporated in appropriate strategies and activities.

Proceedings Article
08 Nov 2002
TL;DR: The resulting disaster recovery site allows off-line verification of disaster recovery procedures and quick recovery of critical data center services, and is more cost effective than a transactionally aware replication of the data center and more comprehensive than a commercial data replication solution used exclusively for data vaulting.
Abstract: This paper presents the results of a proof-of-concept implementation of an on-going project to create a cost-effective method to provide geographic distribution of critical portions of a data center, along with methods to make the transition to these backup services quick and accurate. The project emphasizes data integrity over timeliness and prioritizes services to be offered at the remote site. The paper explores the trade-off of using some common clustering techniques to distribute a backup system over a significant geographical area by relaxing the timing requirements of the cluster technologies at a cost of fidelity. The trade-off is that the fail-over node is not suitable for high availability use, as some loss of data is expected and fail-over time is measured in minutes, not in seconds. Asynchronous mirroring, exploitation of file commonality in file updates, IP Quality of Service, and network efficiency mechanisms are enabling technologies used to provide a low bandwidth solution for the communications requirements. Exploitation of file commonality in file updates decreases the overall communications requirement. IP Quality of Service mechanisms are used to guarantee a minimum available bandwidth to ensure successful data updates. Traffic shaping in conjunction with asynchronous mirroring is used to provide an efficient use of network bandwidth. Traffic shaping allows a maximum bandwidth to be set, minimizing the impact on the existing infrastructure, and provides a lower requirement for a service level agreement if shared media is used. The resulting disaster recovery site allows off-line verification of disaster recovery procedures and quick recovery times of critical data center services, and is more cost effective than a transactionally aware replication of the data center and more comprehensive than a commercial data replication solution used exclusively for data vaulting. The paper concludes with a discussion of the empirical results of a proof-of-concept implementation.
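Capping the mirror traffic at a configured maximum bandwidth, as the paper describes, is commonly implemented with a token-bucket style shaper; since asynchronous mirroring tolerates delay, an update that exceeds the budget can simply wait. A simplified sketch (the rate and burst parameters are invented for illustration):

```python
class TokenBucket:
    """Simple token-bucket shaper: a transfer proceeds only when
    enough byte-tokens have accumulated at the configured rate."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True    # send this mirror update now
        return False       # defer: async mirroring tolerates the delay

bucket = TokenBucket(rate_bytes_per_s=1000, burst_bytes=2000)
assert bucket.allow(1500, now=0.0)        # fits in the initial burst
assert not bucket.allow(1500, now=0.0)    # only 500 tokens remain
assert bucket.allow(1500, now=1.0)        # 1 s of refill adds 1000 tokens
```

Setting the rate below the link capacity is what keeps the mirror from degrading other traffic on shared media, at the cost of a longer window of unreplicated data.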

Proceedings ArticleDOI
TL;DR: The paper provides a detailed prognosis of the FSO and its complementing 60GHz RF technology, besides analyses of the total cost of ownership, which leads to some suggestions on how to improve the technical and economical viability of the technology for carriers' applications.
Abstract: The paper examines, from a carrier's perspective, the viability of free-space optical (FSO) technology as a cost-effective access alternative to fixed point-to-point applications. These include extension of metropolitan area edge networks, network backhaul, temporary deployment while awaiting fiber, disaster recovery, and low cost fiber protection circuits. The paper provides a detailed prognosis of the FSO and its complementing 60GHz RF technology, besides analyses of the total cost of ownership. This leads to some suggestions on how to improve the technical and economical viability of the technology for carriers' applications.

Book
01 Jan 2002
TL;DR: In this book, the authors walk through disaster preparation, response, and recovery, with sample IT solutions and basic safety practices.
Abstract: Acknowledgments. Assignment of Authors' Royalties. Preface. Introduction. Chapter 1: Preparation. Chapter 2: Response. Chapter 3: Recovery. Chapter 4: Sample IT Solutions. Epilogue. Appendix: Basic Safety Practices. Resources. Glossary. Index.

Patent
23 Jul 2002
TL;DR: In this paper, a method for backup of Home Location Register (HLR) comprising configuring a universal HLR as a disaster recovery center HLR which is to backup many HLRs, establishing network connection and loading user data to disaster recovery centre through uniform text files.
Abstract: The present invention discloses a method for backup of Home Location Registers (HLRs), comprising: configuring a universal HLR as a disaster recovery center HLR which backs up many HLRs, establishing network connections, and loading user data to the disaster recovery center through uniform text files; during operation, each active HLR synchronizes its user data to the disaster recovery center; and signaling is forwarded to the disaster recovery center for processing after an active HLR fails. The present invention can thus realize service backup compatible with equipment from different manufacturers, decrease cost, and be realized and managed easily, solving the problem of backing up characteristic service data across different HLRs.

Proceedings ArticleDOI
10 Dec 2002
TL;DR: In this paper, a disaster management plan should be consistent with company's loss control philosophy and it can be split into two distinct parts: (1) normal mode (loss control management) which encompasses planning and mitigation; and (2) crisis mode (crisis management) - an emergency response, incident management and production recovery following a disaster.
Abstract: Peter Drucker, the widely known management consultant, has stated, "The first duty of business is to survive, and the guiding principle of business economics is not maximization of profit - it is an avoidance of loss". Preparing a disaster management plan will increase the probability of survival as well as profitability; although events may never go exactly as planned, a lack of planning will surely make a disaster's effects worse. First, the disaster management plan shall be consistent with the company's loss control philosophy, and it can be split into two distinct parts: (1) normal mode (loss control management) - business as usual, which encompasses planning and mitigation; and (2) crisis mode (crisis management) - emergency response, incident management, and production recovery following a disaster. Good loss control management will avoid and reduce the probability of crisis.

Proceedings ArticleDOI
16 May 2002
TL;DR: A novel approach towards a fault-tolerant solution for disaster recovery of short-term PACS image data using an Application Service Provider model for service is described and the ASP back-up archive was able to recover two months of PACS images data for comparison studies with no complex operational procedures.
Abstract: A single point of failure in PACS during a disaster scenario is the main archive storage and server. When a major disaster occurs, it is possible to lose an entire hospital's PACS data. Few current PACS archives feature disaster recovery, and the designs are limited at best. Their drawbacks include the frequency with which the back-up is physically removed to an offsite facility, the operational costs associated with maintaining the back-up, the ease-of-use of performing the backup consistently and efficiently, and the ease-of-use of performing the PACS image data recovery. This paper describes a novel approach towards a fault-tolerant solution for disaster recovery of short-term PACS image data using an Application Service Provider (ASP) model for service. The ASP back-up archive provides instantaneous, automatic backup of acquired PACS image data and instantaneous recovery of stored PACS image data, all at a low operational cost. A back-up archive server and RAID storage device are implemented offsite from the main PACS archive location. In the example of this particular hospital, it was determined that at least two months' worth of PACS image exams were needed for back-up. Clinical data from the hospital PACS are sent to this ASP storage server in parallel to the exams being archived in the main server. A disaster scenario was simulated and the PACS exams were sent from the offsite ASP storage server back to the hospital PACS. Initially, connectivity between the main archive and the ASP storage server is established via a T-1 connection. In the future, other more cost-effective means of connectivity will be researched, such as Internet2. A disaster scenario was initiated, and the disaster recovery process using the ASP back-up archive server was successful in repopulating the clinical PACS within a short period of time. The ASP back-up archive was able to recover two months of PACS image data for comparison studies with no complex operational procedures. Furthermore, no image data loss was encountered during the recovery.

Book ChapterDOI
01 Jan 2002
TL;DR: A root cause analysis is reported following an information system failure that compromised the organization’s ability to capture clinical documentation for a 33-hour period.
Abstract: Preparedness for response and continued operation of a health care facility following an information systems disaster must encompass two facets: continuation of patient care delivery and continuation of business processes. This paper reports a root cause analysis following an information system failure that compromised the organization’s ability to capture clinical documentation for a 33-hour period.

Book
28 May 2002
TL;DR: This expert resource is crucial for keeping your network safe from any outside intrusions, and includes hands-on security checklists, design maps, and sample plans.
Abstract: From the Publisher: Proactively implement a successful security and disaster recovery plan—before a security breach occurs. Including hands-on security checklists,design maps,and sample plans,this expert resource is crucial for keeping your network safe from any outside intrusions.

Journal ArticleDOI
TL;DR: In this paper, the authors present cost-benefit analysis methods of catastrophe risk mitigation and summarize research issues to be solved in future, and claim that the disaster risk management methods can be classified into risk control and financing methods.
Abstract: Once a large-scale disaster hits a society, a large number of households and firms are simultaneously damaged. The traditional cost-benefit analysis of disaster mitigation mainly focuses upon expected loss reduction, neglecting the catastrophic aspects of disaster risks. In this paper, it is claimed that disaster risk management methods can be classified into risk control and risk financing methods. In order to cope with catastrophic disaster risks, it is necessary to build integrated disaster management systems and to extend the theoretical framework of cost-benefit analysis. The authors present cost-benefit analysis methods for catastrophe risk mitigation and summarize research issues to be solved in the future.
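The traditional expected-loss-reduction calculus that the authors extend can be illustrated with a toy computation; all probabilities and loss figures below are invented numbers, not from the paper.

```python
def expected_loss(scenarios):
    """scenarios: list of (annual probability, loss) pairs."""
    return sum(p * loss for p, loss in scenarios)

# Hypothetical hazard scenarios, losses in millions.
without_mitigation = [(0.01, 500.0), (0.10, 50.0)]
with_mitigation    = [(0.01, 200.0), (0.10, 20.0)]

annual_benefit = (expected_loss(without_mitigation)
                  - expected_loss(with_mitigation))
# (0.01*500 + 0.10*50) - (0.01*200 + 0.10*20) = 10.0 - 4.0 = 6.0
assert abs(annual_benefit - 6.0) < 1e-9
```

A mitigation whose annualized cost is below 6.0 passes this expected-value test; the paper's point is that the test treats the rare catastrophic event (the 0.01-probability scenario) and the frequent small one as interchangeable, which is exactly what risk financing methods are meant to address.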

Book
01 Jan 2002
TL;DR: This text contains real-life scenarios and problem-solving situations based on case studies that examine the intricacies of securing the data and knowledge base of an organization.
Abstract: Covering the basics of network security and disaster recovery, this text moves on to examine the intricacies of securing the data and knowledge base of an organization. It contains real-life scenarios and problem-solving situations based on case studies.


12 Dec 2002
TL;DR: This report assesses the impact of the September 11, 2001 attacks on public and private information infrastructures in the context of critical infrastructure protection, continuity of operations (COOP) planning, and homeland security.
Abstract: This report assesses the impact of the September 11, 2001 attacks on public and private information infrastructures in the context of critical infrastructure protection, continuity of operations (COOP) planning, and homeland security. Analysis of the effects of the terrorist attacks suggests various lessons learned. These lessons support three general principles. The first principle emphasizes the establishment and practice of comprehensive continuity and recovery plans. One lesson learned in this area is to augment disaster recovery plans. Businesses and agencies, which now must consider the possibility of complete destruction and loss of a building, may need to augment their disaster recovery plans to include the movement of people, the rapid acquisition of equipment and furniture, network connectivity, adequate workspace, and more. A corollary to this lesson learned is the need to assure that recovery procedures are well-documented and safeguarded so that they can be fully utilized when necessary. A second lesson is the need to back up data and applications. Without a comprehensive backup system that captures more than just an organization's data files, a significant amount of time can be lost trying to recreate applications, organize data, and reestablish user access. A corollary to this lesson learned is the need to fully and regularly test backup sites and media to ensure their reliability and functionality. The second principle focuses on the decentralization of operations and the effectiveness of distributed communications. Industry experts suggest recovery sites be located at least 20-50 miles away from the primary work site. Another lesson in this area is to ensure the ability to communicate with internal and external constituencies. The third principle involves the institutionalization of system redundancies to eliminate single points of weakness.
The lesson of employing redundant service providers is applied primarily to telecommunications services.