
Showing papers in "Bell Labs Technical Journal in 1997"


Journal ArticleDOI
TL;DR: Various aspects of the system design of WaveLAN-II and characteristics of its antenna, radio-frequency (RF) front-end, digital signal processor (DSP) transceiver chip, and medium access controller (MAC) chip are discussed.
Abstract: In July 1997 the Institute of Electrical and Electronics Engineers (IEEE) completed standard 802.11 for wireless local area networks (LANs). WaveLAN®-II, to be released early in 1998, offers compatibility with the IEEE 802.11 standard for operation in the 2.4-GHz band. It is the successor to WaveLAN-I, which has been in the market since 1991. As a next-generation wireless LAN product, WaveLAN-II has many enhancements to improve performance in various areas. An IEEE 802.11 direct sequence spread spectrum (DSSS) product, WaveLAN-II supports the basic bit rates of 1 and 2 Mb/s, but it can also provide enhanced bit rates as high as 10 Mb/s. This paper discusses various aspects of the system design of WaveLAN-II and characteristics of its antenna, radio-frequency (RF) front-end, digital signal processor (DSP) transceiver chip, and medium access controller (MAC) chip.

1,353 citations


Journal ArticleDOI
TL;DR: The Inferno™ operating system facilitates the creation and support of distributed services in the new and emerging world of network environments, such as those typified by CATV and direct satellite broadcasting systems, as well as the Internet, and is intended for licensing in the marketplace and for use in conjunction with new Lucent offerings.
Abstract: The Inferno™ operating system facilitates the creation and support of distributed services in the new and emerging world of network environments, such as those typified by CATV and direct satellite broadcasting systems, as well as the Internet. In addition, as the entertainment, telecommunications, and computing industries converge and interconnect, different types of data networks are arising, each one as potentially useful and profitable as the telephone network. However, unlike the telephone system, which started with standard terminals and signaling, these new networks are developing in a world of diverse terminals, network hardware, and protocols. Inferno is designed so that it can insulate the diverse providers of content and services from the equally varied transport and presentation platforms. The Inferno Business Unit of Lucent Technologies and the Computing Sciences Research Center of Bell Labs, the R&D arm of Lucent, designed it specifically as a commercial product. It is intended for licensing in the marketplace and for use in conjunction with new Lucent offerings. Inferno incorporates many years of Bell Labs research in operating systems, languages, on-the-fly compilers, graphics, security, networking, and portability in providing an effective and economical network operating system.

91 citations


Journal ArticleDOI
TL;DR: Together, Mawl and TelePortal provide a new way to create integrated services, as well as IVR services that require access from multiple devices; the ability to develop such services in a single environment appears to be unique.
Abstract: We describe a system for creating, maintaining, and analyzing interactive services that require access from multiple devices. The system augments the infrastructure of the World Wide Web with Mawl, an application-oriented language for specifying form-based services in a device-independent manner, and TelePortal, a software/hardware platform that enables telephone access to Web content via standard interactive voice response (IVR) platforms. Service creators link service logic and presentation with templates written in an extension of the HyperText Markup Language (HTML). HTML is extended with marks for dynamic content (Meta-HTML) and with marks specific to a user interface or access device. Documents to be interpreted by TelePortal are written in the Phone Markup Language (PML). Together, Mawl and TelePortal provide a new way to create integrated services, as well as IVR services. The ability to develop such services in a single environment appears to be unique.

72 citations


Journal ArticleDOI
TL;DR: A new AAL, the AAL Type 2 (AAL-2), allows very high efficiency for carrying small packets; its principles are being used to define multiplexing protocols over the Internet and frame-relay networks.
Abstract: Asynchronous transfer mode (ATM) networks carry fixed-size cells within the network irrespective of the applications being supported. At the network edge or at the end equipment, an ATM adaptation layer (AAL) maps the services offered by the ATM network to the services required by the application. Many trunking applications that have voice compression and silence suppression require transmission of small delay-sensitive packets. Existing AALs are very inefficient for this purpose. In this paper, we discuss a new AAL called the AAL Type 2 (AAL-2), which allows very high efficiency for carrying small packets. We describe the basic principles and compare several alternatives with respect to transmission error performance, bandwidth efficiency, and delay/jitter performance. The results show that the AAL-2 adds significant value to packet telephony applications over ATM networks. We discuss the desirability of additional rebundling in the network and the need for a signaling protocol to communicate changes in native connections (voice calls) within the same ATM connection. We also describe how the principles of the AAL-2 are being used to define multiplexing protocols over the Internet and frame-relay networks.
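
The efficiency gain is easy to see with a rough calculation. The sketch below compares the number of cells needed to carry small voice packets one-per-cell (AAL-5 style, padded) against AAL-2-style multiplexing of several packets per cell; the overhead figures (48-byte cell payload, 1-byte AAL-2 start field per cell, 3-byte header per mini-packet, 8-byte AAL-5 trailer) are illustrative assumptions, not numbers taken from the paper.

```python
# Rough bandwidth-efficiency comparison for carrying small voice packets
# over ATM: one packet per cell (AAL-5-style, padded) versus AAL-2-style
# multiplexing of many mini-packets per cell. Header/trailer sizes are
# illustrative assumptions, not figures from the paper.

import math

CELL_PAYLOAD = 48  # octets of payload in a 53-octet ATM cell

def cells_aal5(pkt_len: int) -> int:
    """One voice packet per AAL-5 frame: 8-byte trailer, pad to whole cells."""
    return math.ceil((pkt_len + 8) / CELL_PAYLOAD)

def cells_aal2(pkt_len: int, npkts: int) -> int:
    """npkts voice packets multiplexed AAL-2 style: 3-byte header per
    packet, 1-byte start field per cell; packets may straddle cells."""
    total = npkts * (pkt_len + 3)
    return math.ceil(total / (CELL_PAYLOAD - 1))

if __name__ == "__main__":
    pkt, n = 10, 100  # 10-byte compressed-voice packets, 100 of them
    print("one-per-cell cells:", n * cells_aal5(pkt))   # 100 cells
    print("AAL-2-style cells: ", cells_aal2(pkt, n))    # 28 cells
```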

69 citations


Journal ArticleDOI
Sanjeev Khanna1
TL;DR: Building on the work of Schrijver, Seymour, and Winkler, it is shown that for any fixed ɛ > 0, there exists a polynomial time algorithm that computes a solution that requires bandwidth at most (1 + ɛ) times the optimal bandwidth.
Abstract: A rapidly emerging design paradigm for large-scale optical fiber networks involves connecting clusters of nodes through a synchronous optical network (SONET) ring and building a network over these rings. Although the fiber itself offers virtually unlimited bandwidth, the add/drop multiplexers (ADMs) determine the actual bandwidth available along any edge of the SONET ring. Consequently, there is a cost involved in supporting a high bandwidth along the ring. An important optimization problem arises in this context: Given a set of nodes connected along a bidirectional SONET ring, we must determine a routing scheme that minimizes the bandwidth required to satisfy all the pairwise traffic demands. This problem, known as the ring loading problem, has been studied extensively in recent years. S. Cosares and I. Saniee reported in Telecommunications Systems (Volume 3, 1994) that this problem is NP-hard, which makes it unlikely that a polynomial time algorithm exists to compute an optimal solution. We therefore shift our attention to polynomial time approximation algorithms. Towards this end, Cosares and Saniee presented a polynomial time algorithm that approximates the optimal solution value to within a multiplicative factor of two. More recently, A. Schrijver, P. Seymour, and P. Winkler developed an efficient algorithm (documented in the SIAM Journal on Discrete Mathematics, 1997) that can compute a solution that exceeds the optimum by at most an additive term of 1.5·d_max, where d_max is the largest traffic demand between any pair of nodes. While this additive term is relatively small, in many input instances it would represent a significant fraction of the optimal solution value and possibly a significant deviation from it. Is there a polynomial time approximation algorithm that allows us to achieve an error that is an arbitrarily small fraction of the optimal solution value? In this paper, we answer this question in the affirmative. More precisely, building on the work of Schrijver, Seymour, and Winkler, we show that for any fixed ɛ > 0, there exists a polynomial time algorithm that computes a solution that requires bandwidth at most (1 + ɛ) times the optimal bandwidth.
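
To make the objective concrete: each demand between a node pair can be routed clockwise or counterclockwise around the ring, the load on an edge is the sum of the demands routed across it, and the goal is to minimize the maximum edge load. The brute-force sketch below only spells out this load function for tiny instances; it is emphatically not the paper's approximation algorithm.

```python
# Minimal illustration of the ring loading objective on an n-node
# bidirectional ring: route each demand (i, j, d) clockwise or
# counterclockwise to minimize the maximum load on any ring edge.
# Brute force over 2^m routings -- tiny instances only.

from itertools import product

def edges_cw(i, j, n):
    """Ring edges (k, k+1 mod n), indexed by k, crossed clockwise i -> j."""
    k, out = i, []
    while k != j:
        out.append(k)
        k = (k + 1) % n
    return out

def min_ring_load(n, demands):
    best = float("inf")
    for choice in product((True, False), repeat=len(demands)):
        load = [0] * n
        for cw, (i, j, d) in zip(choice, demands):
            for e in (edges_cw(i, j, n) if cw else edges_cw(j, i, n)):
                load[e] += d
        best = min(best, max(load))
    return best

if __name__ == "__main__":
    # 4-node ring, three pairwise demands
    print(min_ring_load(4, [(0, 2, 3), (1, 3, 2), (0, 1, 1)]))
```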

67 citations


Journal ArticleDOI
TL;DR: It is shown that a significant portion of the design requirements can be captured in formalized message sequence charts (MSCs) using a set of tools that are built to reliably create, organize, and analyze such charts.
Abstract: Industrial software design projects often begin with a requirements capture and analysis phase. During this phase, the main architectural and behavioral requirements for a new system are collected, documented, and validated. To date, however, requirements engineers have had few reliable tools to guide and support this work. We show that a significant portion of the design requirements can be captured in formalized message sequence charts (MSCs) using a set of tools that we built to reliably create, organize, and analyze such charts.

43 citations


Journal ArticleDOI
TL;DR: An overview of CDPD's network architecture, its network performance issues, and wireless data applications that take advantage ofCDPD's seamless support of IP and CLNP data are presented.
Abstract: Cellular digital packet data (CDPD) is the first wide area wireless data network with open interfaces to enter the wireless services market. Operating in the 800-MHz cellular bands, CDPD offers native support of transmission control protocol/Internet protocol (TCP/IP) and connectionless network protocol (CLNP). This paper presents an overview of CDPD's network architecture, its network performance issues, and wireless data applications that take advantage of CDPD's seamless support of IP and CLNP data.

42 citations


Journal ArticleDOI
Anil K. Midha1
TL;DR: The paper focuses on the characteristics of SCM systems, the SCM challenges for Lucent Technologies, the principal SCM systems being used within the company, and the issues of choosing and successfully implementing the best SCM systems.
Abstract: The increasing complexity of both software systems and the environments in which they are produced is pressuring projects to improve the development process using innovative methods. The role of software configuration management (SCM) systems, policies, and procedures that help control and manage software development environments is being stretched beyond the conceptual boundaries it has had for the last decade. One of the key enablers of producing higher quality software is a better software development process. The SCM system must instantiate a quality process, allow tracking and monitoring of process metrics, and provide mechanisms for tailoring and continual improvement of the software development process. More than a dozen SCM systems are now available, each one having a distinct architecture and set of core functionalities. Currently, no single system provides all the key SCM functions in the best form. Thus, a project must assess its real needs and choose the right SCM system to meet its software development challenges. This paper focuses on the characteristics of SCM systems, the SCM challenges for Lucent Technologies, the principal SCM systems being used within the company, and the issues of choosing and successfully implementing the best SCM systems.

39 citations


Journal ArticleDOI
James T. Clemens1
TL;DR: A historical review of the microelectronics revolution — from the first integrated circuit to modern very large scale integration (VLSI) technology — is followed by a review of the development of present-day microelectronics manufacturing technology, based on the concept of the “planar process.”
Abstract: Two inventions — the bipolar transistor and the integrated circuit — have fundamentally revolutionized the technology of mankind. Within a period of fifty years, the microelectronics industry has increased the number of transistors fabricated on a single piece of semiconductor crystal by a factor of about 100 million, that is, 10^8, a productivity phenomenon unparalleled in the history of technology and mankind. This paper begins with a historical review of that revolution — from the first integrated circuit to modern very large scale integration (VLSI) technology — and then reviews the development of present-day microelectronics manufacturing technology, based on the concept of the “planar process.” The topics covered include silicon crystal technology, crystal dopant techniques, silicon oxidation development, lithography, materials deposition processes, pattern transfer mechanisms, metal interconnect technology, and material passivation technology. The paper concludes with a review of the major technical and economic issues that face the microelectronics industry today and discusses the future technical and economic paths that the industry may take.

31 citations


Journal ArticleDOI
TL;DR: The principal observation of the paper is that, while the mobile networks based on different standards are substantially dissimilar, it is still feasible to define one standard that would allow these dissimilar mobile networks to access the IN to provide telecommunications services globally and seamlessly.
Abstract: This paper summarizes the current International Telecommunication Union — Telecommunication Standardization Sector (ITU-T) intelligent network (IN) standards in the context of the wireless intelligent network (WIN) and relevant parts of the ANSI-41 family of standards developed in the United States. In addition, this paper outlines the concepts on which the standardization of the IN support of wireless networks should be developed internationally. The principal observation of the paper is that, while the mobile networks based on different standards are substantially dissimilar, it is still feasible to define one standard that would allow these dissimilar mobile networks to access the IN to provide telecommunications services globally and seamlessly. WIN sets the direction for such a standard.

26 citations


Journal ArticleDOI
TL;DR: The virtual path group (VPG) protection switching technique described in this paper offers both fast restoration and minimum processing; it forms a basis for emerging ATM network protection standards and will provide leading-edge network survivability features to future Lucent Technologies ATM solutions.
Abstract: A telecommunications network often has survivability capabilities to restore service following the occurrence of various service-affecting defects. To ensure high reliability over a wide range of services, the network must be able to restore service very quickly, on the order of 60 to 200 ms. This requirement can place extreme and costly processing demands on network elements (NEs). Many different survivability techniques are possible depending on an application's cost, service, and performance requirements. Furthermore, networks are now being developed based on asynchronous transfer mode (ATM) technology and carried within a synchronous digital hierarchy/synchronous optical network (SDH/SONET) physical layer. In these networks, the existing physical-layer protection switching capabilities (for example, SDH/SONET line or path protection switching) cannot detect ATM-specific defects (such as loss of ATM connection continuity) and thus cannot protect against them. While protection switching of individual ATM virtual connections (virtual path [VP] or virtual channel [VC]) could be added to accommodate ATM-specific defects, such a large-scale defect as a facility failure will cause a fault on each of hundreds or even thousands of separate ATM connections and thus will require as many simultaneous and independent restoration operations. The virtual path group (VPG) protection switching technique described in this paper offers both fast restoration (on the order of 60 ms) and minimum processing. This technique, useful in a broad range of ATM networking applications, provides high-performance network protection largely consistent with established ATM standards. It forms a basis for emerging ATM network protection standards, and it will provide leading-edge network survivability features to future Lucent Technologies ATM solutions.
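
A minimal sketch of the grouping idea follows: if every virtual path is a member of a group that carries a single working/protection decision, a facility failure triggers one switchover rather than thousands of per-connection restoration operations. The class and method names below are invented for illustration and do not reflect the standardized mechanism.

```python
# Sketch of the virtual path group (VPG) idea: many virtual paths share
# one protection decision. On a working-route defect, a single group
# switchover redirects every member VP to the protection route, instead
# of one restoration action per connection. Illustrative names only.

class VirtualPathGroup:
    def __init__(self, working_route, protection_route):
        self.routes = {"working": working_route, "protection": protection_route}
        self.active = "working"
        self.members = set()          # VPI values carried by this group

    def add_vp(self, vpi: int) -> None:
        self.members.add(vpi)

    def route_of(self, vpi: int):
        """Route currently used by a member VP."""
        assert vpi in self.members
        return self.routes[self.active]

    def on_defect(self, failed_route) -> None:
        """One group-level decision protects all member VPs at once."""
        if failed_route == self.routes[self.active]:
            self.active = "protection" if self.active == "working" else "working"

if __name__ == "__main__":
    vpg = VirtualPathGroup(working_route="fiber-A", protection_route="fiber-B")
    for vpi in range(1000):
        vpg.add_vp(vpi)
    vpg.on_defect("fiber-A")          # e.g. loss of continuity on fiber-A
    print(vpg.route_of(7))            # every member VP now rides fiber-B
```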

Journal ArticleDOI
TL;DR: The Dali system is a main memory storage manager designed to provide the persistence (that is, the retention of data after a crash), availability, and safety guarantees that users typically expect from a disk-resident database, including support for transactions.
Abstract: The performance needs of many database applications require that the entire database be stored in main memory. The Dali system is a main memory storage manager designed to provide the persistence (that is, the retention of data after a crash), availability, and safety guarantees that users typically expect from a disk-resident database, including support for transactions. Because it is tuned to support in-memory data, Dali offers very high performance. User processes map the entire database into their address space and access data directly, thereby avoiding expensive remote procedure calls and buffer manager interactions typical of accesses in the disk-resident commercial systems available today. Dali recovers from a system or process failure by restoring the database to a consistent state. It also provides unique concurrency control and memory protection features, as well as index management and a relational application programming interface.
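
The access pattern described above (map the database, touch records directly, no buffer manager or RPC layer in the way) can be illustrated with an ordinary memory-mapped file. This is only the general mmap idea, assuming a toy fixed-size record layout; it is not Dali's API, and transactions, locking, and recovery are all omitted.

```python
# Sketch of direct in-memory access to a file-backed "database": the
# whole file is mapped into the process address space and records are
# read and written in place. Toy record layout; not Dali's interface.

import mmap
import os
import struct

REC = struct.Struct("<i32s")          # a record: int key, 32-byte payload

with open("toy.db", "wb") as f:       # create a tiny fixed-size database
    f.write(b"\x00" * (REC.size * 100))

fd = os.open("toy.db", os.O_RDWR)
db = mmap.mmap(fd, 0)                 # map the whole file

def put(slot: int, key: int, payload: bytes) -> None:
    REC.pack_into(db, slot * REC.size, key, payload)

def get(slot: int):
    key, payload = REC.unpack_from(db, slot * REC.size)
    return key, payload.rstrip(b"\x00")

put(3, 42, b"in-memory record")
print(get(3))                          # direct access: no read() call
db.flush()                             # crude analogue of a persistence point
db.close()
os.close(fd)
```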

Journal ArticleDOI
TL;DR: The design methodology of active control, the class of problems that responds best to this solution, and the complementary nature of active and passive vibration reduction strategies are described.
Abstract: Recent advances in digital signal processing and actuation technology have opened the door to applying active control to reducing mechanical vibrations and acoustic noise. Because these techniques work best at lower frequency, they complement conventional passive damping techniques, which are most effective at higher frequencies. Engineers in Lucent Technologies' Advanced Technology Systems have been developing active vibration and noise control systems for the Defense Advanced Research Project Agency (DARPA) and the Department of Defense (DoD) since 1988. This paper describes the design methodology of active control, the class of problems that responds best to this solution, and the complementary nature of active and passive vibration reduction strategies. It also gives a guide for when, where, and how to use active control to solve practical vibration problems.

Journal ArticleDOI
TL;DR: An in-depth look at the technical aspects of ATM internetworking technologies for both local and wide area networks and describes the fundamental concepts and components of two approaches to internetworking: the overlay model and the peer model.
Abstract: The success of internetworking businesses in the next decade will be based largely on the design and implementation of network infrastructures that support multimedia services. These infrastructures are based in part on asynchronous transfer mode (ATM) technology, which is revolutionizing the telecommunications world by enabling the transmission of integrated voice, data, and video simultaneously at very high speeds. It is clear, however, that coexistence with legacy networks and transition to ATM networks must be addressed. Internetworking with existing networks is not only required for the acceptance of ATM, it is also key to the success of ATM in the next century. This paper provides an in-depth look at the technical aspects of ATM internetworking technologies for both local and wide area networks. It describes the fundamental concepts and components of two approaches to internetworking: the overlay model and the peer model. Specific approaches that are examined in detail include classical Internet protocol over ATM, routing over large clouds, ATM local area network emulation, multiprotocol over ATM, private network-network interface (PNNI) augmented routing, and integrated PNNI. This paper provides a critical assessment and competitive analysis of proposed technologies and concludes with a review of current and future directions in ATM internetworking technology.

Journal ArticleDOI
TL;DR: The paper describes the events that led to the invention of the point-contact transistor in December 1947, the development of the theory of the junction transistor in early 1948, and the fabrication of the first grown-junction transistor in 1950.
Abstract: The invention of the transistor almost fifty years ago was one of the most important technical developments of this century. It has had profound impact on the way we live and the way we work. This paper describes the events that led to the invention of the point-contact transistor in December of 1947. It continues with the development of the theory of the junction transistor in early 1948 and the fabrication of the first grown-junction transistor in 1950. The paper next covers the major hurdles that had to be overcome and the major breakthroughs that had to be made to turn an exciting invention into a far-reaching technical innovation. The final part of the paper suggests some of the reasons why such important technological progress could occur in a relatively short period of time.

Journal ArticleDOI
TL;DR: The architecture, design, and performance of the Burst Admission based on Load and Interference (BALI) high-speed data solution for CDMA is described and a new burst-mode capability is defined to allow better interference management and capacity utilization.
Abstract: Existing standards-based code division multiple access (CDMA) systems support circuit-mode and packet-mode data services at an effective rate limited to 8 to 13 kb/s. Lucent Technologies has taken the lead in addressing this limitation. Lucent's phase 1 solution offers the highest possible performance while conforming to current IS-95-A air interface design characteristics and thereby maintaining strict compatibility with existing base station hardware. The proposed standard will support a high data rate (64 kb/s) both to and from the mobile device. This paper describes the architecture, design, and performance of the Burst Admission based on Load and Interference (BALI) high-speed data solution for CDMA. High-speed service is provided through the aggregation of multiple codes. A new burst-mode capability is defined to allow better interference management and capacity utilization. In the burst mode, the number of codes that may be used by a mobile on the forward or reverse link for the duration of a burst is controlled by the infrastructure. Static allocation of multiple codes to a small number of users can result in inefficient use of CDMA air interface capacity. Dynamic infrastructure-controlled burst allocation makes it possible to share the bandwidth efficiently among several high-speed packet data mobiles.
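
As a hypothetical illustration of infrastructure-controlled burst admission, the sketch below grants a mobile some number of supplemental codes based on spare code capacity and measured interference, denying the burst when interference is too high. The threshold and scaling rule are invented for illustration; the actual BALI criteria are defined in the paper.

```python
# Illustrative burst-admission decision in the spirit of BALI: the
# infrastructure grants a mobile some number of codes for a bounded
# burst, based on current sector load and interference. The formula
# and thresholds are invented; the real algorithm is in the paper.

def grant_codes(requested: int, codes_in_use: int, total_codes: int,
                interference_db: float, max_interference_db: float = 6.0) -> int:
    """Return how many codes to grant for this burst (0 = deny)."""
    if interference_db >= max_interference_db:
        return 0                               # sector too hot: no burst
    spare = total_codes - codes_in_use
    # back off the grant as interference approaches the ceiling
    headroom = 1.0 - interference_db / max_interference_db
    return min(requested, max(0, int(spare * headroom)))

if __name__ == "__main__":
    # mobile asks for 7 extra codes; sector has 64 codes, 40 in use
    print(grant_codes(7, codes_in_use=40, total_codes=64, interference_db=2.0))
```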

Journal ArticleDOI
TL;DR: The paper describes the scope of the simulation platform and illustrates the use of the W tool to examine a next-generation channel assignment algorithm currently under development; initial experiments suggest a significant improvement in quality of service in terms of call blocking and dropping.
Abstract: Wireless service providers are continually looking for new features and products to improve quality of service, increase system capacity, and reduce administrative overhead. The simulation tool W provides a flexible platform for the exploration of a broad range of system-level design and performance issues in wireless networks. Investigation of network issues using W gives Lucent Technologies the ability to design new features and products more effectively by reducing development intervals and implementing a one-pass design on field-quality products. We describe the scope of the simulation platform and illustrate the specific use of the W tool for examination of a next-generation channel assignment algorithm currently under development. The algorithm's adaptive nature provides automatic configuration at system initialization, as well as adaptation to system expansion and traffic patterns with spatial or temporal variations, thus ensuring ease of operation for service providers. Initial simulation experiments confirm its self-organizing capability and suggest a significant improvement in quality of service in terms of call blocking and dropping.

Journal ArticleDOI
TL;DR: LIBRA is an integrated framework comprising mechanisms such as type of service-based adaptive routing and load sharing, targeted at improving network performance and alleviating congestion in current Internet protocol (IP) networks.
Abstract: The Internet's sustained explosive growth calls for a solution to several issues that affect access to the information disseminated over wide areas. Such growth also requires the delivery of real-time and traditional data traffic in an integrated fashion with increasing performance, reliability, and security. As recently as 1985, the Internet had only about 50 networks and 1,000 hosts. Now, however, these same numbers are well over 70,000 and 3,000,000, respectively, and they continue to increase at an astonishing rate. The current Internet architecture and service model are not well suited to support the increasing use of real-time applications, such as packet voice and video over the Internet, which demand bounded network delays and high bandwidth. The current Internet service model offers only a best-effort delivery service. It neither allows applications to request a certain quality of service (QoS) from the network nor includes mechanisms to enforce QoS characteristics, such as minimum delay or guaranteed bandwidth. The Internet Engineering Task Force (IETF) is developing a new and enhanced service model that can accommodate these elements. However, the current Internet architecture and service model provide some mechanisms that can be exploited to improve the performance offered to delay-sensitive traffic, thus reducing network congestion. LIBRA is an integrated framework that comprises some of these mechanisms, such as type of service-based adaptive routing, type of service-based packet scheduling, and load sharing, which are targeted to improve network performance and alleviate the congestion in current Internet protocol (IP) networks.
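
One of those mechanisms, type-of-service-based packet scheduling, can be suggested with a strict-priority queue in which delay-sensitive packets are always dequeued ahead of best-effort traffic. This is a deliberately minimal stand-in, not the scheduling discipline LIBRA itself specifies.

```python
# Minimal sketch of type-of-service-based packet scheduling: packets
# marked delay-sensitive (e.g. voice) are dequeued before best-effort
# ones. A strict-priority queue is the simplest possible stand-in.

import heapq
import itertools

LOW_DELAY, BEST_EFFORT = 0, 1          # smaller value = higher priority

class TosScheduler:
    def __init__(self):
        self._q = []
        self._seq = itertools.count()  # preserves FIFO order within a class

    def enqueue(self, tos: int, packet: bytes) -> None:
        heapq.heappush(self._q, (tos, next(self._seq), packet))

    def dequeue(self) -> bytes:
        return heapq.heappop(self._q)[2]

if __name__ == "__main__":
    s = TosScheduler()
    s.enqueue(BEST_EFFORT, b"ftp-1")
    s.enqueue(LOW_DELAY, b"voice-1")
    s.enqueue(BEST_EFFORT, b"ftp-2")
    print(s.dequeue())                 # b'voice-1' jumps the queue
```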

Journal ArticleDOI
TL;DR: The history of the microprocessor is presented in the context of the technology and applications that drove its continued advancements.
Abstract: Invented in 1971, the microprocessor evolved from the inventions of the transistor (1947) and the integrated circuit (1958). Essentially a computer on a chip, it is the most advanced application of the transistor. The influence of the microprocessor today is well known, but in 1971 the effect the microprocessor would have on everyday life was a vision beyond even those who created it. This paper presents the history of the microprocessor in the context of the technology and applications that drove its continued advancements.

Journal ArticleDOI
TL;DR: Key considerations that will be discussed in this paper are where new services belong, where the triggers or queries should be invoked, and how standards will guide these decisions.
Abstract: Wireless service providers are challenging equipment vendors to help them meet the rigorous demands placed on them from subscribers insisting on more functionality. Not only are subscriber bases growing at a tremendous rate, but as subscribers become increasingly accustomed to using wireless phones, they are becoming more mobile and requiring more services. Expectations for services have gone beyond the need for emergency assistance; people require the same functionality (such as messaging, message notification, and enhanced features) that they are using on their landline phones. The wireless intelligent network (WIN) paradigm is key to helping service providers offer new enhanced services, but equipment vendors have not been able to keep up with requests for new triggers and protocols that the market demands in order to provide enhanced services. To address the urgency for quicker time-to-market and ubiquitous service offerings, alternative means of providing enhanced services must be deployed while waiting for the standards to finalize and for equipment vendors to catch up. As the market places pressure on the WIN architecture to improve time-to-market requirements, many critical decisions will be made. Some key considerations that will be discussed in this paper are where new services belong, where the triggers or queries should be invoked, and how standards will guide these decisions.

Journal ArticleDOI
TL;DR: A vision of the long-term services architecture for a functional solution and several near-term product concepts designed to provide an integrated wireless/wireline service are described.
Abstract: Recent changes in the United States telecommunications industry — both regulatory and market driven — provide an environment for integrating wireless and wireline services and the infrastructure used to provide them. Changes in the industry have created competition in local access, deregulation that allows bundling of services and one-stop shopping, and multiple personal communication services (PCS) licensees, each with the desire to differentiate its service offering with value-added capabilities. Integrating wireless and wireline services enables service providers to offer a seamless wireless/wireline service and help control infrastructure and ongoing administrative costs. This paper highlights some advantages for integrating these systems in the U.S. market today. We describe a vision of the long-term services architecture for a functional solution and several near-term product concepts designed to provide an integrated wireless/wireline service.

Journal ArticleDOI
Kevin Houzhi Xu1
TL;DR: Three approaches — known as fast retransmissions, data interceptions, and packet interceptions — that can be used to improve the performance and reliability of transmission protocols in mobile computing environments are presented.
Abstract: The popularity and usefulness of the Internet seem to grow exponentially every day. As a result of this rapid growth, mobile users expect to access the Internet's information resources and to communicate with other Internet users. However, providing efficient and reliable stream transmissions in mobile computing environments presents a significant obstacle to the development of mobile computing systems over the Internet. This paper presents three approaches — known as fast retransmissions, data interceptions, and packet interceptions — that can be used to improve the performance and reliability of transmission protocols in mobile computing environments. The paper also discusses how packet interception has become the most effective approach of the three and describes its implementation.
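
As a sketch of the fast-retransmission idea: after a handoff the mobile immediately re-sends duplicate acknowledgments so that the sender's standard triple-duplicate-ACK fast-retransmit logic fires, instead of the connection waiting out a long retransmission timeout. The code below assumes standard TCP fast-retransmit behavior; the structure and names are illustrative, not the paper's implementation.

```python
# Sketch of fast retransmission for mobile hosts: after a cell handoff
# the mobile re-sends duplicate ACKs so the sender's standard
# triple-duplicate-ACK fast-retransmit fires immediately, rather than
# after a coarse retransmission timeout. Illustrative only.

DUP_ACK_THRESHOLD = 3                  # standard TCP fast-retransmit trigger

class SenderState:
    def __init__(self):
        self.dup_acks = 0
        self.last_ack = -1

    def on_ack(self, ack_no: int) -> bool:
        """Return True when a fast retransmit should be sent."""
        if ack_no == self.last_ack:
            self.dup_acks += 1
            return self.dup_acks == DUP_ACK_THRESHOLD
        self.last_ack, self.dup_acks = ack_no, 0
        return False

def mobile_after_handoff(last_in_order_ack: int):
    """ACKs the mobile emits right after handoff to wake the sender up."""
    return [last_in_order_ack] * DUP_ACK_THRESHOLD

if __name__ == "__main__":
    sender = SenderState()
    sender.on_ack(1000)                       # normal ACK before handoff
    for a in mobile_after_handoff(1000):      # handoff completes
        if sender.on_ack(a):
            print("fast retransmit of segment", a)
```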

Journal ArticleDOI
TL;DR: The RMTP+ protocol is described, its theoretical performance is analyzed, its implementation for Internet protocol (IP) networks is detailed, and factors that cause deviations between theoretical and actual performance are discussed.
Abstract: This paper presents RMTP+, a multicast transport protocol for reliably transmitting continuous data streams from a sender to an unknown number of receivers. The protocol uses a hierarchical grouping of receivers and special designated receivers to keep the sender from becoming overloaded by acknowledgments of data received. Placing the burden of detecting missing data on the receiver rather than the sender enables the protocol to multicast to very large groups of receivers without diminishing performance or reliability. Continuous data streams are transmitted by partitioning each stream into blocks of data and reliably multicasting each block. In this paper, we describe the RMTP+ protocol, analyze its theoretical performance, and detail its implementation for Internet protocol (IP) networks. We also report on experiments measuring RMTP+ performance on production machines in a real network and discuss factors that cause deviations between theoretical and actual performance. An application that uses RMTP+ to distribute call detail information in a large network is also described.
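
The fan-in idea can be sketched as follows: receivers report per-block reception status to a designated receiver, which repairs locally whatever some child still holds and forwards upstream only the sequence numbers missing everywhere below it, so the sender never sees per-receiver acknowledgments. All names and the status format here are invented for illustration.

```python
# Sketch of the RMTP+ fan-in idea: a designated receiver (DR) aggregates
# its children's per-block status and passes upstream only what no one
# below it can supply. Names and formats are invented for illustration.

class DesignatedReceiver:
    def __init__(self, block_size: int):
        self.block_size = block_size
        self.reports = []                      # one set of seqnos per child

    def child_status(self, received: set) -> None:
        """A downstream receiver reports which packets of the block it holds."""
        self.reports.append(received)

    def missing_upstream(self) -> set:
        """Seqnos held by no receiver below this DR: only these must be
        requested further toward the sender; the rest repair locally."""
        everything = set(range(self.block_size))
        held_somewhere = set().union(*self.reports) if self.reports else set()
        return everything - held_somewhere

if __name__ == "__main__":
    dr = DesignatedReceiver(block_size=8)
    dr.child_status({0, 1, 2, 3, 5, 6, 7})     # receiver A lost packet 4
    dr.child_status({0, 1, 2, 5, 6, 7})        # receiver B lost 3 and 4
    print(dr.missing_upstream())               # {4}: 3 repairs locally from A
```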

Journal ArticleDOI
TL;DR: The paper reviews the history of integrated circuit (IC) design and the major problems confronting IC designers, and examines current ideas on the directions that electronics system design may take so that the industry can continue to exploit the potential of silicon fabrication in the future.
Abstract: This paper reviews the history of integrated circuit (IC) design and the major problems confronting IC designers. It also looks at current ideas on the directions that electronics system design may take, so that the industry can continue to exploit the potential of silicon fabrication in the future. In describing the past, present, and future of IC design, this paper specifically focuses on the problems of managing the enormous complexity inherent in IC technology. It examines this topic from the perspective of the forces that have shaped the science and art of IC design during the past forty years, reviewing not only the current design tools and methods, but also the discontinuities that are challenging the designers of ICs today. In conclusion, it offers insights into the direction that IC design is likely to take in the next decade.

Journal ArticleDOI
TL;DR: This paper discusses the virtual finite-state machine (VFSM) design and implementation paradigm and the experience of introducing VFSM on software development projects for several Lucent Technologies products, and presents an overview of the VFSM toolset.
Abstract: This paper discusses the virtual finite-state machine (VFSM) design and implementation paradigm and our experience in introducing VFSM on software development projects for several Lucent Technologies products. VFSM, which allows software developers to specify the control behavior of a module as a finite-state machine, is supported by a toolset that automates many tasks associated with producing an implementation, including aspects of code generation, documentation, and testing. VFSM has been used in the design of more than 75 software modules, and its application has resulted in shorter development intervals and the elimination of defects prior to testing. In this paper, we present an overview of the VFSM design and implementation paradigms and the capabilities provided by the VFSM toolset. We also discuss the technical and nontechnical issues that have had an impact on the successful introduction of VFSM.
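
A table-driven state machine of the kind such a specification could be compiled into looks roughly like the sketch below. The states, events, and actions are invented (a toy call model); VFSM's actual notation, with its virtual inputs and outputs, is richer than this.

```python
# Generic table-driven state machine of the kind a VFSM-style
# specification could be compiled into. States, events, and actions
# below are a toy call model, invented for illustration.

TRANSITIONS = {
    # (state, event): (next_state, action)
    ("idle",    "off_hook"): ("dialing", "start_dial_tone"),
    ("dialing", "digits"):   ("ringing", "ring_far_end"),
    ("ringing", "answer"):   ("talking", "connect_path"),
    ("talking", "on_hook"):  ("idle",    "release_path"),
}

def run(events, state="idle"):
    for ev in events:
        state, action = TRANSITIONS[(state, ev)]
        print(f"{ev:>8} -> {state:<8} ({action})")
    return state

if __name__ == "__main__":
    run(["off_hook", "digits", "answer", "on_hook"])
```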

Journal ArticleDOI
Malathi Veeraraghavan1
TL;DR: This paper describes a new connection management algorithm called parallel connection control (PCC) that allows for fast connection setup in scaleable reliable asynchronous transfer mode (ATM) networks and proposes two models for handling complex communication configurations: an end-host-driven model and a network-value-added model using application-dependent route (ADR) servers.
Abstract: This paper describes a new connection management algorithm called parallel connection control (PCC) that allows for fast connection setup in scaleable reliable asynchronous transfer mode (ATM) networks. The two main concepts of PCC are execution of connection setup actions at multiple switches on the route of a connection in parallel, and the separation of the route computation function into servers distinct from switch processors. The associated benefits of these concepts are fast connection setup and increased per-switch call handling capacity relative to the conventional approach of setting up connections sequentially switch-by-switch with switches performing route computation. The PCC solution is designed to work in conjunction with the current ATM Forum signaling and routing standards solutions, offering a means to achieve improved call handling performance. This paper also introduces the concept of application-dependent routing for setting up logical connections, which consist of multiple network-layer connections routed through application-layer resources, such as bridges and converters. In this paper, we propose two models for handling complex communication configurations: an end-host-driven model and a network-value-added model using application-dependent route (ADR) servers.
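
The latency argument behind PCC can be demonstrated directly: once a route server has chosen the switches on the path, setup requests can be issued to all of them concurrently, so setup time approaches that of the slowest single switch rather than the sum over switches as in hop-by-hop signaling. The timings below are simulated stand-ins for per-switch setup work, not measurements.

```python
# Sketch of the latency argument behind parallel connection control:
# issue setup requests to every switch on the route at once, so setup
# time is about the slowest switch rather than the sum over switches.
# Delays are simulated; everything here is illustrative.

import concurrent.futures
import time

def setup_at_switch(name: str, delay_s: float) -> str:
    time.sleep(delay_s)                # stand-in for per-switch setup work
    return f"{name} ready"

ROUTE = [("sw1", 0.03), ("sw2", 0.05), ("sw3", 0.04)]

def sequential():
    return [setup_at_switch(n, d) for n, d in ROUTE]   # ~sum of delays

def parallel():
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(lambda sd: setup_at_switch(*sd), ROUTE))

if __name__ == "__main__":
    for f in (sequential, parallel):
        t0 = time.perf_counter()
        f()
        print(f"{f.__name__}: {time.perf_counter() - t0:.3f}s")
    # expect roughly 0.12 s sequential vs 0.05 s parallel
```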

Journal ArticleDOI
TL;DR: A brief history of advances in the design and implementation of shared-memory asynchronous transfer mode (ATM) switches for high-speed high-performance applications is described, focusing on key technological advances that have enabled significant prototyping achievements in research and development programs.
Abstract: In this paper, we describe a brief history of advances in the design and implementation of shared-memory asynchronous transfer mode (ATM) switches for high-speed high-performance applications, focusing on key technological advances that have enabled significant prototyping achievements in our research and development programs. We also discuss a series of gigabit switch prototypes and their unique technological characteristics, evolving from an early rudimentary design to today's sophisticated platform. Capable of expanding its capacity from 5 to 160 Gb/s and beyond, today's platform supports features such as traffic policing, traffic shaping, per-virtual channel (VC) queuing, multicast, and priority and errorless protection switching. Technological advances that have taken place in the past five years have enabled us to reduce the size of switch fabric hardware by a factor of four (from four boards to one board for a 20-Gb/s capacity design) and increase its total memory capacity by a factor of more than 250 (from 2k cells to over 500k cells).

Journal Article
TL;DR: The paper describes the architecture for consumer data deployed as part of a service-independent and intrinsically secure ATM access system such as SDBAS: an end-to-end high-speed data transport architecture coexisting with other services over the same access system.
Abstract: The Switched Digital Broadband Access System (SDBAS) from Lucent Technologies (co-developed with BroadBand Technologies, Inc.) offers residential and small business customers access services based on asynchronous transfer mode (ATM) technology. SDBAS is now being deployed by several carriers as a fiber-in-the-loop (FITL) application. With the advent of inexpensive devices from multiple vendors implementing the ATM segmentation and reassembly (SAR) function and interfacing with the peripheral component interconnect (PCI) bus on the personal computer, it is now possible to cost-effectively support general-purpose consumer data services such as Internet access coexisting with other multimedia services. This paper describes the architecture for consumer data deployed as part of a service-independent and intrinsically secure ATM access system such as SDBAS. Features of such a system include support for Internet access; work-at-home (as a service differentiated from Internet access); billing, access control, authentication, and transport layer protocol management using a session and resource management entity; protocols for end point Layer 3 management; and network interworking functions. The result is a multi-application, service-independent system architecture for end-to-end high-speed data transport coexisting with other services over the same access system.

Journal ArticleDOI
Mark Hansen1, David A. James1
TL;DR: The S-wafers environment formalizes engineers' observations about patterned wafer maps into implementable strategies by furnishing extensive tools for data visualization; identification of groups of similarly patterned wafers; spatial analysis of designed experiments; assessment of the impact of particle contamination; and much more.
Abstract: This paper describes a computing environment called S-wafers that is tailored for the analysis of spatial data collected during semiconductor manufacturing processes. At the core of S-wafers lies a new statistical methodology that systematically exploits the basic spatial nature of these data. The S-wafers environment builds on the experience of Lucent Technologies' Microelectronics engineers and Bell Labs researchers who have noticed that patterns in mapped wafer data can provide “signatures,” which can be used to help identify and correct process problems. The S-wafers environment provides the means to formalize these observations into implementable strategies by furnishing extensive tools for data visualization; identification of groups of similarly patterned wafers; spatial analysis of designed experiments; assessment of the impact of particle contamination; and much more. In addition to supporting statistical research into problems of process improvement in this area, the S-wafers environment has been used for more than two years by various groups in Microelectronics. Extensions to both the software environment and the underlying statistical methodology continue at a rapid pace.
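
A toy example shows why spatial structure matters: two wafer maps with identical yield can differ sharply in a simple adjacency ("join count") statistic, which flags clustered defect signatures. This sketch is in Python and is only a caricature of the S-based methodology the paper describes.

```python
# Crude illustration of spatial wafer-map analysis: a clustered defect
# pattern and a scattered one can have the same yield, but a simple
# adjacency ("join count") statistic separates them. A caricature of
# the idea only; the real S-wafers methodology is far more complete.

def adjacent_fail_pairs(wafer):
    """Count horizontally/vertically adjacent pairs of failing dies."""
    rows, cols = len(wafer), len(wafer[0])
    pairs = 0
    for r in range(rows):
        for c in range(cols):
            if wafer[r][c]:            # 1 = failing die
                if c + 1 < cols and wafer[r][c + 1]:
                    pairs += 1
                if r + 1 < rows and wafer[r + 1][c]:
                    pairs += 1
    return pairs

clustered = [[0, 0, 0, 0],             # an edge "signature": fails bunched
             [0, 0, 0, 0],
             [1, 1, 0, 0],
             [1, 1, 0, 0]]
scattered = [[1, 0, 0, 1],             # same yield, no spatial pattern
             [0, 0, 0, 0],
             [0, 1, 0, 0],
             [0, 0, 0, 1]]

print(adjacent_fail_pairs(clustered))  # 4: strong spatial clustering
print(adjacent_fail_pairs(scattered))  # 0: failures look random
```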

Journal ArticleDOI
TL;DR: This paper discusses ongoing experimental research involving characterization and modeling of the noise performance in silicon (Si) RF devices used in wireless applications.
Abstract: Mobile wireless communication links place stringent demands on the required signal-to-noise (S/N) ratio of terminal products. Link performance is strongly dependent on both the sensitivity and dynamic range of the transceiver circuit. Receiver sensitivity, in turn, is determined predominantly by the radio frequency (RF) small-signal and noise performance of the devices used in the front-end RF analog section. Two major design concerns are the noise figure of the low-noise amplifier (LNA), which is related to the RF noise of the devices, as well as the phase noise of the local oscillator (LO) and mixer circuits, which can be caused by up-conversion of low-frequency device noise (flicker, thermal, and shot noise). This paper discusses ongoing experimental research involving characterization and modeling of the noise performance in silicon (Si) RF devices used in wireless applications.
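
A worked example makes the point about the LNA quantitative. By the Friis cascade formula, F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1·G2) + ..., the noise contribution of each stage is divided by the gain preceding it, so the first device dominates the receiver's noise figure. The stage values below are invented, typical-looking numbers, not measurements from the paper.

```python
# Worked example of why the LNA dominates receiver sensitivity: the
# Friis cascade formula divides each later stage's noise contribution
# by the gain in front of it. Stage values are invented examples.

import math

def db_to_lin(x_db: float) -> float:
    return 10 ** (x_db / 10)

def cascade_nf_db(stages) -> float:
    """stages: list of (noise_figure_dB, gain_dB), antenna end first."""
    f_total, gain = 0.0, 1.0
    for i, (nf_db, g_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        f_total += f if i == 0 else (f - 1) / gain
        gain *= db_to_lin(g_db)
    return 10 * math.log10(f_total)

if __name__ == "__main__":
    # LNA (2 dB NF, 15 dB gain) followed by mixer (10 dB NF, -6 dB gain)
    print(f"{cascade_nf_db([(2, 15), (10, -6)]):.2f} dB")   # ~2.72 dB
    # swap the order and the cascade noise figure is far worse:
    print(f"{cascade_nf_db([(10, -6), (2, 15)]):.2f} dB")   # ~10.91 dB
```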