Showing papers in "AT&T Technical Journal in 1996"


Journal ArticleDOI
Dave K. Kythe
TL;DR: The advantages of replacing hand-crafted software with reusable components as a solution to the software crisis are discussed and object-oriented programming provides insights on how to build components.
Abstract: This paper discusses the advantages of replacing hand-crafted software with reusable components as a solution to the software crisis. Object-oriented programming provides insights on how to build components. Because components must have distributed implementations and well-defined interfaces, both the Microsoft Component Object Model (COM) and the Object Management Group (OMG) Common Object Request Broker Architecture (CORBA) models are described as distributed object architectures to support reusable components. A transaction processing monitor is also necessary for accessing business logic and the information contained in relational databases. Components are composed of object-oriented frameworks based on models of the problem domain or business.

46 citations
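
The component idea in the abstract above, a well-defined interface that hides its implementation from clients, can be pictured with a small sketch. The C++ fragment below is illustrative only: the Spellchecker interface, its single method, and the factory function are invented for this example and are not taken from the paper; a real COM or CORBA component would add IDL declarations, reference counting, or ORB plumbing on top of this shape.

```cpp
#include <iostream>
#include <memory>
#include <string>

// Illustrative component boundary: clients see only this abstract interface.
class Spellchecker {
public:
    virtual ~Spellchecker() = default;
    virtual bool check(const std::string& word) const = 0;
};

// One reusable implementation, hidden behind the interface.
class SimpleSpellchecker : public Spellchecker {
public:
    bool check(const std::string& word) const override {
        return !word.empty();   // placeholder rule, for illustration only
    }
};

// Factory: clients never name the concrete class, which is the role an
// object broker (COM's CoCreateInstance, a CORBA object reference) plays.
std::unique_ptr<Spellchecker> makeSpellchecker() {
    return std::make_unique<SimpleSpellchecker>();
}

int main() {
    auto component = makeSpellchecker();
    std::cout << std::boolalpha << component->check("reuse") << "\n";
}
```

Because product code never names the concrete class, one implementation can be swapped for another, or moved behind a distributed-object broker, without touching its clients.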


Journal ArticleDOI
TL;DR: Five reusable software components that provide automatic detection and restart of failed processes, checkpointing and recovery of data in memory, replication and synchronization of files, and software rejuvenation are described.
Abstract: Software fault tolerance is the task of detecting and recovering from failures that are not handled in the underlying hardware or operating system layers of an application. Software rejuvenation prevents failures by periodically, and gracefully, terminating an application and restarting it at a clean internal state. This paper describes five reusable software components that provide these capabilities. They perform automatic detection and restart of failed processes, checkpointing and recovery of data in memory, replication and synchronization of files, and software rejuvenation. These components, which have been ported to a number of UNIX∗ platforms, can be used in any application with minimal programming effort. The fault tolerance capabilities of several communication products and services in AT&T have been enhanced by incorporating these components. Experience with these products to date indicates that the components provide efficient, economical means to increase the level of fault tolerance in an application.

30 citations
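
As a rough picture of the first capability listed above, automatic detection and restart of failed processes, here is a minimal POSIX watchdog. It is not the AT&T component set described in the paper, just a sketch of the underlying idea: run the application as a child process, wait for it to exit, and restart it.

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }
    for (;;) {
        pid_t pid = fork();                // start (or restart) the monitored process
        if (pid < 0) {
            std::perror("fork");
            return 1;
        }
        if (pid == 0) {
            execvp(argv[1], &argv[1]);     // child: run the application
            std::perror("execvp");
            _exit(127);
        }
        int status = 0;
        waitpid(pid, &status, 0);          // parent: block until the child exits or fails
        std::fprintf(stderr, "monitored process ended (status %d); restarting\n", status);
        sleep(1);                          // brief back-off before the automatic restart
    }
}
```

A production component would add the other capabilities the paper lists, such as checkpointing application state so the restarted process can resume where it left off.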


Journal ArticleDOI
TL;DR: The four case studies presented in this paper demonstrate the technology's general applicability and its use within AT&T to address strategic business problems and motivate its guiding research principles.
Abstract: Visualization is an emerging technology for understanding large, complex, information-rich data sets. Just as spreadsheets revolutionized our ability to understand small amounts of data, visualization is revolutionizing the way we understand large data sets. AT&T has developed a suite of applications, based on a common software infrastructure, to analyze strategic data sets and solve key business problems. Even as these software tools are being used internally, AT&T is also selling them in the commercial marketplace. The four case studies presented in this paper demonstrate the technology's general applicability and its use within AT&T to address strategic business problems and motivate its guiding research principles.

27 citations


Journal ArticleDOI
John K. Whetzel
TL;DR: This paper describes experiences of application developers working at NCR, formerly AT&T Global Information Solutions, and analyzes the strengths and weaknesses of the Web/database combination and seeks to prove that this combination is a viable alternative for providing database-oriented solutions.
Abstract: The recent popularity of the World Wide Web (WWW, or the Web) has created a massive increase in both the supply and demand of Web-based technologies. However, the HyperText Markup Language (HTML) used to construct the Web has limitations that challenge information content providers who want to supply current, up-to-date information with minimal administrative overhead. A powerful, extensible solution to many of these challenges is the use of a database as a back end, or data source, for Web applications. Combining the Web with a database maximizes the strengths of its components. From the Web perspective, this combination offers user friendliness, cross-platform compatibility, and high-speed prototyping capabilities. From the database perspective, it offers relational data manipulation, high-speed search capabilities, and industrial-grade data input and retrieval. This paper describes experiences of application developers working at NCR, formerly AT&T Global Information Solutions. It also analyzes the strengths and weaknesses of the Web/database combination and seeks to prove that this combination is a viable alternative for providing database-oriented solutions.

12 citations
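
A sketch may help make the back-end pattern in the abstract concrete. In the C++ CGI program below, an in-memory map stands in for the relational database and the record contents are invented; a real deployment would issue SQL through the database vendor's client API and build the HTML from the result set.

```cpp
#include <cstdlib>
#include <iostream>
#include <map>
#include <string>

int main() {
    // In-memory stand-in for a product table in the back-end database.
    const std::map<std::string, std::string> catalog = {
        {"7401", "Teller terminal, model 7401"},
        {"5085", "Retail scanner, model 5085"}};

    // CGI passes the request's query string through the environment, e.g. "id=7401".
    const char* qs = std::getenv("QUERY_STRING");
    std::string id = qs ? std::string(qs) : "";
    if (id.rfind("id=", 0) == 0) id = id.substr(3);

    const auto hit = catalog.find(id);
    std::cout << "Content-Type: text/html\r\n\r\n"   // CGI response header, then the page
              << "<html><body><p>"
              << (hit != catalog.end() ? hit->second : std::string("No matching record"))
              << "</p></body></html>\n";
}
```

Because the page is generated on each request, its content is as current as the database, which is the administrative advantage the abstract points to over hand-maintained HTML.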


Journal ArticleDOI
TL;DR: The cognitive engineering activities of self-service information technology, including end-user perceptions, performance, and satisfaction, are described, along with practical design methods such as heuristic usability evaluations, “Wizard of Oz” investigations, and formal task-based evaluations.
Abstract: Self-service products such as automatic teller machines are becoming more complex as they support new services, are being used in new environments, and employ new technologies. End-users' expectations of self-service products also are being raised as they gain more experience with interactive technologies. This paper describes the cognitive engineering activities of self-service information technology, including end-user perceptions, performance, and satisfaction. Such issues as specifying usability at concept stages, integrating prototype evaluations, and incorporating design recommendations also are discussed. In addition, the paper presents practical methods to design self-service products, including heuristic usability evaluations, “Wizard of Oz” investigations, and formal task-based evaluations. The paper also draws upon a number of usability studies conducted by self-service product and advanced technology developers.

7 citations


Journal ArticleDOI
TL;DR: The Service Design and Inventory system — also known as the Attribute Design Database System (ADDS) — has achieved a high degree of reusability and customer-configurable adaptability through a unique application of object-oriented technology.
Abstract: Competition and the fast pace of technological evolution in today's global telecommunications industry are placing unique demands on the flexibility of operations support software systems. The industry must be capable of rapidly introducing new services, technologies, and organizational structures. Networks must be capable of being partitioned for various applications and administered based on complex ownership relationships among the various network components. User permissions must be readily adaptable to reflect various combinations of services, network partitions, and work functions. To meet the time constraints of the market, telecommunications providers require the ability to configure systems to meet their needs without relying on traditional software development intervals and external software development resources. Traditional software development methodologies generally do not provide the timeliness and flexibility required. The Service Design and Inventory (SDI) system — also known as the Attribute Design Database System (ADDS) — has achieved a high degree of reusability and customer-configurable adaptability through a unique application of object-oriented technology.

6 citations
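
The abstract does not say how the system is built, but one common way to get customer-configurable adaptability of the kind it describes is to model services as objects whose characteristics are named attributes loaded at run time rather than compiled-in members. The sketch below shows only that generic attribute-value pattern; the record type, attribute names, and values are invented and should not be read as the actual ADDS design.

```cpp
#include <iostream>
#include <map>
#include <string>

// A service instance whose characteristics are data, not compiled-in members.
struct ServiceRecord {
    std::string type;                               // e.g. "PrivateLine"
    std::map<std::string, std::string> attributes;  // configured at run time
};

int main() {
    ServiceRecord rec{"PrivateLine",
                      {{"Bandwidth", "DS1"}, {"AEndOffice", "NWRKNJ02"}}};

    // A provider could introduce a new attribute without a software release.
    rec.attributes["Diversity"] = "Route";

    for (const auto& [name, value] : rec.attributes)
        std::cout << rec.type << "." << name << " = " << value << "\n";
}
```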


Journal ArticleDOI
TL;DR: AT&T software professionals are now better able to keep their technical knowledge and skills up to date and remain prepared for technology changes as they occur, and AT&T's future leadership in IT and software development is ensured.
Abstract: Software is integral to every part of AT&T's business. More than 30,000 company employees are involved in some aspect of software development. Until recently, however, the AT&T information technology (IT) population was not recognized and supported as a professional community. In 1994, this situation began to change when a group called the People Team was established as part of a larger initiative to address AT&T's software competitiveness. The team comprised technical managers and human resources (HR) professionals appointed by the AT&T business units and divisions. Its objective was to develop different approaches to migrating the IT community to best-in-class HR practices. To guide its work, the People Team benchmarked HR practices for software professionals in selected high-technology companies. Key outputs from the team included a curriculum guide, recruiting and staffing strategies, and reward and recognition practices. Thus, AT&T software professionals are now better able to keep their technical knowledge and skills up to date and remain prepared for technology changes as they occur. In addition, AT&T's future leadership in IT and software development is ensured.

5 citations


Journal ArticleDOI
Larry Bernstein
TL;DR: Management must maintain a humanistic point of view to keep the project workers focused on their goals, as these workers typically are affiliated with different business units, work in different locations, and have different responsibilities.
Abstract: Large software projects — those that require more than 100 people to develop — are difficult to manage. They usually take more than one year to complete and only one in ten finish on time, within budget, and with the features users need. It is not the people, but how they are deployed, that is the critical issue in managing a large software project. One strategy is to partition the project into a collection of smaller ones, provide the technology and organizational structures to tie these parts together, employ common tools and processes, and schedule formal partial product delivery dates within the project. Management also must maintain a humanistic point of view to keep the project workers focused on their goals, as these workers typically are affiliated with different business units, work in different locations, and have different responsibilities.

5 citations


Journal ArticleDOI
TL;DR: This paper describes a design-partitioning process applied to the new Signal Operations Platforms-Provisioning (SOP-P) operations system and shows that it is feasible to identify large design components confined within a few architecture styles that are common to network management and operations software.
Abstract: The process of designing vertically integrated applications is enhanced if the distinct architectures, or architecture styles, and relevant performance constraints and interactions can first be identified. Applications, although running in varied environments, also may require specific architecture services: non-operational features, such as portability or fault tolerance, that might be common across several architectural styles. The application design process should be an iterative exercise of first understanding system requirements and then determining how they may be partitioned according to styles and services. An integral part of this process is to identify software components and subsystems that must be developed or can be reused from other systems. This paper describes a design-partitioning process applied to the new Signal Operations Platforms-Provisioning (SOP-P) operations system. The experiment shows that it is feasible to identify large design components confined within a few architecture styles that are common to network management and operations software.

5 citations


Journal ArticleDOI
Robert N. Sulgrove
TL;DR: The project-scoping process, which is being used successfully by software developers at AT&T Global Information Solutions, is discussed; it provides a basis for continuously monitoring risks during development to detect emerging problems at the earliest possible moment.
Abstract: The key to risk management is to be as complete as possible in identifying project risks. This paper discusses the project-scoping process, which is being successfully used by software developers at AT&T Global Information Solutions. Project scoping is a method or process used for identifying and assessing risks to determine a project's feasibility. Lists of requirement categories and risk factors are provided as facilitating tools. Project scoping provides a basis for defining a less risky project and for redefining or discontinuing projects that are too risky. The project-scoping process also provides a basis for continuously monitoring risks during development to detect emerging problems at the earliest possible moment — while there is still time to take effective corrective action. Thus, project management can focus on development problems in addition to tracking schedule compliance. The bottom line is that by implementing project scoping, management has better control over a project.

3 citations


Journal ArticleDOI
TL;DR: This paper illustrates the separation of concerns by examining its application to interfaces, a particularly difficult area in which these concerns are traditionally intertwined.
Abstract: Systematic software reuse, or multiuse, is a key to increasing the productivity and quality of software development. In the past 20 years, reuse has experienced many failures and few successes. Many technological, organizational, and cultural obstacles have been placed in its path. A critical step to increasing software reuse is to recognize that a new division of labor is required, one in which component developers create reusable components and product developers compose products from these components. Changing organizational structure and software development processes to nurture these roles is challenging. Once these roles are recognized and established, however, standard abstraction techniques and other software reuse technologies can help separate the concerns of component developers and product developers. This paper illustrates the separation of concerns by examining its application to interfaces, a particularly difficult area in which these concerns are traditionally intertwined.
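
One way to picture the division of labor described above is an interface through which the product developer hands product-specific decisions to a reusable component instead of having them baked in. The C++ sketch below is a generic illustration, not an example from the paper; the sortRecords function and the circuit identifiers are invented.

```cpp
#include <algorithm>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Component developer's side: reusable machinery parameterized by a policy.
void sortRecords(std::vector<std::string>& records,
                 const std::function<bool(const std::string&, const std::string&)>& before) {
    std::sort(records.begin(), records.end(), before);
}

// Product developer's side: only the product-specific ordering rule lives here.
int main() {
    std::vector<std::string> circuits = {"NJ-0042", "CA-0007", "NY-0100"};
    sortRecords(circuits, [](const std::string& a, const std::string& b) {
        return a.substr(3) < b.substr(3);    // order by numeric suffix, a product choice
    });
    for (const auto& c : circuits) std::cout << c << "\n";
}
```

The component developer owns the sorting machinery, the product developer writes only the ordering rule, and neither needs to see the other's code, which is the separation of concerns the paper examines at interfaces.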

Journal ArticleDOI
TL;DR: This paper traces the evolution of the ASCC, describes how it operates today, and provides current measures of the interval and quality of ASCC software development.
Abstract: The AT&T Network Systems Silver Bullet Project, launched in August 1990, represents a major breakthrough in software engineering. It optimizes and accelerates the software development process by incrementally improving business, organizational, and technical processes used by the Operations Systems (OS) Business Unit. In July 1991 OS opened the Advanced Software Construction Center (ASCC) to define and implement an organizational and business model based on the Silver Bullet processes and to expose the model to the stresses of developing products for OS. Since then, the ASCC has developed more than 14 products, achieved International Organization for Standardization (ISO) 9001 certification, and reduced its average product interval from 25 to 15 weeks. It has also been evaluated as one of the top three software organizations in AT&T, based on software process assessments. All this was achieved while keeping its costs one-third lower than the rest of the business unit. The application of the Silver Bullet processes to the OneOS Change Program — an AT&T initiative to create an integrated set of OS assets that can be delivered as an integrated product offering — illustrates the applicability of these processes to larger, more complex systems and organizations. This paper traces the evolution of the ASCC, describes how it operates today, and provides current measures of the interval and quality of ASCC software development.

Journal ArticleDOI
TL;DR: This paper describes how a team of developers in AT&T ISTEL used distributed objects and the Common Object Request Broker Architecture (CORBA) standard to implement the updated system in time for the change in legislation.
Abstract: There has been much talk of the potential of object (component-oriented) technology for building distributed systems, especially on-line transaction services, but few opportunities or imperatives to actually use it in production systems. One such opportunity arose in the United Kingdom (UK), when legislation covering the provision of life insurance quotations changed in January 1995, rendering obsolete the existing national quotations service provided by AT&T. The necessity for change, even radical change, in the system that produced these insurance quotations had become clear nine months earlier. At that time, managers and support staff of the existing service became aware that the changes required by the legislation could not be made rapidly enough, nor reliably enough, using conventional development techniques. This paper describes how a team of developers in AT&T ISTEL used distributed objects and the Common Object Request Broker Architecture (CORBA) standard to implement the updated system in time for the change in legislation. Running across more than 50 Windows NT∗ servers, the system has given distributed objects operational credibility and provided valuable lessons on the technology adoption process.
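
The shape of such a system can be suggested with a small, purely illustrative sketch. The interface, the premium formula, and the proxy below are invented for this example and are not the ISTEL design; in a CORBA deployment the interface would be declared in IDL and the proxy would be an ORB-generated stub forwarding the call to an object living on one of the remote servers.

```cpp
#include <iostream>
#include <memory>

// The contract clients program against; in CORBA this would be declared in IDL.
class QuoteService {
public:
    virtual ~QuoteService() = default;
    virtual double monthlyPremium(int ageYears, double coverage) = 0;
};

// Server-side implementation, which could run on any of the server hosts.
class QuoteEngine : public QuoteService {
public:
    double monthlyPremium(int ageYears, double coverage) override {
        return coverage * 0.0004 * (1.0 + ageYears / 100.0);  // invented formula
    }
};

// Client-side proxy: a real ORB stub would marshal the call and send it to a
// remote object reference; here it simply delegates so the sketch stays local.
class QuoteServiceProxy : public QuoteService {
public:
    explicit QuoteServiceProxy(std::shared_ptr<QuoteService> target)
        : target_(std::move(target)) {}
    double monthlyPremium(int ageYears, double coverage) override {
        return target_->monthlyPremium(ageYears, coverage);
    }
private:
    std::shared_ptr<QuoteService> target_;
};

int main() {
    QuoteServiceProxy quotes(std::make_shared<QuoteEngine>());
    std::cout << "premium: " << quotes.monthlyPremium(40, 250000.0) << "\n";
}
```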

Journal ArticleDOI
Stacey J. Gelman, W. Douglas Peck
TL;DR: The importance of WINS to the business management strategy of AT&T-NS, the WINS technical architecture, the status of WINS, and plans for future implementation are discussed.
Abstract: The concept of data warehousing originated from the observation that the systems used to run businesses on a daily basis differ fundamentally from those employed to help plan and develop future businesses. For example, operational systems are generally focused on specific functional views based on the needs of a single aspect of the business. However, managers need information that shows relationships, trends, and correlations about different kinds of data, integrating several functions into a broader view. Historically, systems and manual processes were established to gather management data from the various operational data sources — one for each kind of decision. Extracting and combining such data from different systems is time-consuming and often leads to inconsistent results. Users must accommodate printed reports, manual reentry of data into spreadsheets, and significant rework to produce summary reports that match the way they manage the business. Furthermore, by the time some of these reports are ready, the data are no longer current. The Warehouse of Information for Network Systems (WINS) provides needed information to AT&T Network Systems (AT&T-NS) managers worldwide. WINS transforms operational and financial data into consolidated business views that are used to analyze certain activities and to make management decisions. This paper discusses the importance of WINS to the business management strategy of AT&T-NS, the WINS technical architecture, the status of WINS, and plans for future implementation.
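
As a toy illustration of the consolidation step the abstract describes, the sketch below pulls rows from two invented "operational" sources and folds them into one per-product business view; the record layouts, product names, and figures are made up and bear no relation to the real WINS feeds.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct OrderRow   { std::string product; int units; };      // from an order-entry system
struct RevenueRow { std::string product; double dollars; };  // from a billing system

int main() {
    std::vector<OrderRow>   orders  = {{"5ESS", 3}, {"DACS", 7}, {"5ESS", 2}};
    std::vector<RevenueRow> revenue = {{"5ESS", 41.5}, {"DACS", 12.0}};

    // Consolidated business view: per-product totals drawn from both sources.
    std::map<std::string, std::pair<int, double>> view;
    for (const auto& o : orders)  view[o.product].first  += o.units;
    for (const auto& r : revenue) view[r.product].second += r.dollars;

    for (const auto& [product, totals] : view)
        std::cout << product << ": " << totals.first << " units, $"
                  << totals.second << "M revenue\n";
}
```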