
Showing papers published by Bell Labs in 1992


Proceedings ArticleDOI
01 Jul 1992
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
Abstract: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.

11,211 citations
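
For intuition about margin maximization, the sketch below trains a plain linear classifier by subgradient descent on a regularized hinge loss. This is a soft-margin simplification of the idea, not the constrained quadratic optimization the paper actually solves; all function names, hyperparameters, and the toy data are illustrative.

```python
# Illustrative only: a linear large-margin classifier trained by subgradient
# descent on the regularized hinge loss. Patterns that end up inside the margin
# play the role of the "supporting patterns" mentioned in the abstract.
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """X: (n, d) patterns, y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # patterns violating the margin
        grad_w = lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage: two separable clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```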


Journal ArticleDOI
TL;DR: It is reported that visual stimulation produces an easily detectable (5-20%) transient increase in the intensity of water proton magnetic resonance signals in human primary visual cortex in gradient echo images at 4-T magnetic-field strength.
Abstract: We report that visual stimulation produces an easily detectable (5-20%) transient increase in the intensity of water proton magnetic resonance signals in human primary visual cortex in gradient echo images at 4-T magnetic-field strength. The observed changes predominantly occur in areas containing gray matter and can be used to produce high-spatial-resolution functional brain maps in humans. Reducing the image-acquisition echo time from 40 msec to 8 msec reduces the amplitude of the fractional signal change, suggesting that it is produced by a change in apparent transverse relaxation time T*2. The amplitude, sign, and echo-time dependence of these intrinsic signal changes are consistent with the idea that neural activation increases regional cerebral blood flow and concomitantly increases venous-blood oxygenation.

3,568 citations


Journal ArticleDOI
Diane Lambert1
TL;DR: Zero-inflated Poisson (ZIP) regression as discussed by the authors is a model for count data with excess zeros, which assumes that with probability p the only possible observation is 0, and with probability 1 − p, a Poisson(λ) random variable is observed.
Abstract: Zero-inflated Poisson (ZIP) regression is a model for count data with excess zeros. It assumes that with probability p the only possible observation is 0, and with probability 1 – p, a Poisson(λ) random variable is observed. For example, when manufacturing equipment is properly aligned, defects may be nearly impossible. But when it is misaligned, defects may occur according to a Poisson(λ) distribution. Both the probability p of the perfect, zero defect state and the mean number of defects λ in the imperfect state may depend on covariates. Sometimes p and λ are unrelated; other times p is a simple function of λ such as p = 1/(1 + λ^τ) for an unknown constant τ. In either case, ZIP regression models are easy to fit. The maximum likelihood estimates (MLE's) are approximately normal in large samples, and confidence intervals can be constructed by inverting likelihood ratio tests or using the approximate normality of the MLE's. Simulations suggest that the confidence intervals based on likelihood ratio test...

3,440 citations
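
The ZIP likelihood described above is easy to write down directly. The sketch below (numpy, with an assumed log link for λ and logistic link for p; names are illustrative) evaluates the log-likelihood one would hand to a generic numerical optimizer or maximize by EM.

```python
# Minimal sketch of the zero-inflated Poisson (ZIP) log-likelihood, with
# covariates entering lambda through a log link and p through a logistic link.
import numpy as np
from math import lgamma

def zip_loglik(beta, gamma, X, y):
    """X: (n, d) covariates, y: (n,) non-negative integer counts."""
    lam = np.exp(X @ beta)                      # lambda_i = exp(x_i' beta)
    p = 1.0 / (1.0 + np.exp(-(X @ gamma)))      # p_i = logistic(x_i' gamma)
    logfact = np.array([lgamma(int(k) + 1) for k in y])
    # P(Y=0) = p + (1-p) e^{-lam};  P(Y=k) = (1-p) e^{-lam} lam^k / k!  for k > 0
    ll_zero = np.log(p + (1 - p) * np.exp(-lam))
    ll_pos = np.log(1 - p) - lam + y * np.log(lam) - logfact
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

# Toy usage: intercept-plus-slope design with crudely zero-inflated counts.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.poisson(2.0, size=200) * (rng.random(200) > 0.3)
print(zip_loglik(np.array([0.5, 0.0]), np.array([-1.0, 0.0]), X, y))
```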


Journal ArticleDOI
Eric Betzig1, Jay K. Trautman1
10 Jul 1992-Science
TL;DR: The near-field optical interaction between a sharp probe and a sample of interest can be exploited to image, spectroscopically probe, or modify surfaces at a resolution inaccessible by traditional far-field techniques, resulting in a technique of considerable versatility.
Abstract: The near-field optical interaction between a sharp probe and a sample of interest can be exploited to image, spectroscopically probe, or modify surfaces at a resolution (down to ∼12 nm) inaccessible by traditional far-field techniques. Many of the attractive features of conventional optics are retained, including noninvasiveness, reliability, and low cost. In addition, most optical contrast mechanisms can be extended to the near-field regime, resulting in a technique of considerable versatility. This versatility is demonstrated by several examples, such as the imaging of nanometric-scale features in mammalian tissue sections and the creation of ultrasmall, magneto-optic domains having implications for high-density data storage. Although the technique may find uses in many diverse fields, two of the most exciting possibilities are localized optical spectroscopy of semiconductors and the fluorescence imaging of living cells.

1,743 citations


Journal ArticleDOI
Arthur Ashkin1
TL;DR: It is shown that good trapping requires high convergence beams from a high numerical aperture objective and a comparison is given of traps made using bright field or differential interference contrast optics and phase contrast optics.

1,609 citations


Proceedings ArticleDOI
04 May 1992
TL;DR: A combination of asymmetric (public-key) and symmetric (secret-key) cryptography that allow two parties sharing a common password to exchange confidential and authenticated information over an insecure network is introduced.
Abstract: Classic cryptographic protocols based on user-chosen keys allow an attacker to mount password-guessing attacks. A combination of asymmetric (public-key) and symmetric (secret-key) cryptography that allow two parties sharing a common password to exchange confidential and authenticated information over an insecure network is introduced. In particular, a protocol relying on the counter-intuitive notion of using a secret key to encrypt a public key is presented. Such protocols are secure against active attacks, and have the property that the password is protected against offline dictionary attacks.

1,571 citations
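
To make the message flow concrete, here is a toy sketch in the spirit of the Diffie-Hellman variant of the idea: each side encrypts its ephemeral public value under the shared password, so a passive eavesdropper gets nothing it can test password guesses against offline. The tiny Mersenne-prime group and the hash-stream "encryption" are stand-ins chosen for brevity; this is not a secure implementation and not the paper's exact protocol, which also adds a challenge/response phase to authenticate the exchanged key.

```python
# Toy sketch only -- do not use for real security.
import hashlib, secrets

P = 2**127 - 1          # toy modulus (a Mersenne prime); real systems use vetted groups
G = 3

def pw_encrypt(password: str, data: bytes) -> bytes:
    stream = hashlib.sha256(b"pw:" + password.encode()).digest()[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

pw_decrypt = pw_encrypt  # XOR with the same stream inverts it

password = "correct horse"

# Party A sends its ephemeral public value encrypted under the password.
x = secrets.randbelow(P - 2) + 1
msg_a = pw_encrypt(password, pow(G, x, P).to_bytes(16, "big"))

# Party B decrypts, replies likewise, and derives the session key.
y = secrets.randbelow(P - 2) + 1
gx = int.from_bytes(pw_decrypt(password, msg_a), "big")
msg_b = pw_encrypt(password, pow(G, y, P).to_bytes(16, "big"))
key_b = hashlib.sha256(pow(gx, y, P).to_bytes(16, "big")).digest()

# Party A derives the same session key from B's message.
gy = int.from_bytes(pw_decrypt(password, msg_b), "big")
key_a = hashlib.sha256(pow(gy, x, P).to_bytes(16, "big")).digest()
assert key_a == key_b
```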


Proceedings Article
12 Jul 1992
TL;DR: A greedy local search procedure called GSAT is introduced for solving propositional satisfiability problems and its good performance suggests that it may be advantageous to reformulate reasoning tasks that have traditionally been viewed as theorem-proving problems as model-finding tasks.
Abstract: We introduce a greedy local search procedure called GSAT for solving propositional satisfiability problems. Our experiments show that this procedure can be used to solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches such as the Davis-Putnam procedure or resolution. We also show that GSAT can solve structured satisfiability problems quickly. In particular, we solve encodings of graph coloring problems, N-queens, and Boolean induction. General application strategies and limitations of the approach are also discussed. GSAT is best viewed as a model-finding procedure. Its good performance suggests that it may be advantageous to reformulate reasoning tasks that have traditionally been viewed as theorem-proving problems as model-finding tasks.

1,410 citations
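
The procedure described above is simple enough to sketch directly: start from a random truth assignment and repeatedly flip the variable whose flip satisfies the most clauses, restarting a bounded number of times. Parameter names and the tiny example formula below are illustrative.

```python
# Minimal GSAT-style greedy local search over CNF formulas.
import random

def num_satisfied(clauses, assign):
    return sum(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses)

def gsat(clauses, n_vars, max_flips=1000, max_tries=20, seed=0):
    rng = random.Random(seed)
    for _ in range(max_tries):
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_satisfied(clauses, assign) == len(clauses):
                return assign                     # model found
            best_var, best_score = None, -1
            for v in range(1, n_vars + 1):        # greedily pick the best flip
                assign[v] = not assign[v]
                score = num_satisfied(clauses, assign)
                assign[v] = not assign[v]
                if score > best_score:
                    best_var, best_score = v, score
            assign[best_var] = not assign[best_var]
    return None                                   # give up

# Usage: clauses as lists of signed variable indices, e.g. [1, -2] means (x1 or not x2).
clauses = [[1, -2], [-1, 2], [2, 3], [-3, -1]]
print(gsat(clauses, n_vars=3))
```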


Journal ArticleDOI
Arnaud E. Jacquin1
TL;DR: The author proposes an independent and novel approach to image coding, based on a fractal theory of iterated transformations, that relies on the assumption that image redundancy can be efficiently exploited through self-transformability on a block-wise basis and approximates an original image by a fractal image.
Abstract: The author proposes an independent and novel approach to image coding, based on a fractal theory of iterated transformations. The main characteristics of this approach are that (i) it relies on the assumption that image redundancy can be efficiently exploited through self-transformability on a block-wise basis, and (ii) it approximates an original image by a fractal image. The author refers to the approach as fractal block coding. The coding-decoding system is based on the construction, for an original image to encode, of a specific image transformation, a fractal code, which, when iterated on any initial image, produces a sequence of images that converges to a fractal approximation of the original. It is shown how to design such a system for the coding of monochrome digital images at rates in the range of 0.5-1.0 b/pixel. The fractal block coder has performance comparable to state-of-the-art vector quantizers.

1,386 citations
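
A greatly simplified encoder sketch of the block-wise self-transformability idea: for each small "range" block, search larger "domain" blocks of the same image, downsample them, and fit a contractive scale-and-offset map by least squares. Block sizes, the search grid, and the clamp on the scale factor are illustrative choices; decoding (not shown) would iterate the stored maps from an arbitrary starting image.

```python
# Simplified fractal block encoder: one contractive affine map per range block.
import numpy as np

def encode(img, rb=8, step=8):
    h, w = img.shape
    code = []
    for ri in range(0, h - rb + 1, rb):
        for rj in range(0, w - rb + 1, rb):
            r = img[ri:ri+rb, rj:rj+rb].ravel()
            best = None
            for di in range(0, h - 2*rb + 1, step):
                for dj in range(0, w - 2*rb + 1, step):
                    d = img[di:di+2*rb, dj:dj+2*rb]
                    d = d.reshape(rb, 2, rb, 2).mean(axis=(1, 3)).ravel()  # 2x downsample
                    # least-squares scale/offset, scale clamped for contractivity
                    s = np.clip(np.cov(d, r, bias=True)[0, 1] / (d.var() + 1e-9), -0.9, 0.9)
                    o = r.mean() - s * d.mean()
                    err = np.sum((s * d + o - r) ** 2)
                    if best is None or err < best[0]:
                        best = (err, di, dj, s, o)
            code.append((ri, rj) + best[1:])
    return code

rng = np.random.default_rng(0)
toy = rng.random((32, 32))
print(len(encode(toy)), "range blocks encoded")
```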


Journal ArticleDOI
Raymond T. Tung1
TL;DR: Results suggest that the formation mechanism of the Schottky barrier is locally nonuniform at common, polycrystalline, metal-semiconductor interfaces.
Abstract: A dipole-layer approach is presented, which leads to analytic solutions to the potential and the electronic transport at metal-semiconductor interfaces with arbitrary Schottky-barrier-height profiles. The presence of inhomogeneities in the Schottky-barrier height is shown to lead to a coherent explanation of many anomalies in the experimental results. These results suggest that the formation mechanism of the Schottky barrier is locally nonuniform at common, polycrystalline, metal-semiconductor interfaces.

1,347 citations


Proceedings Article
12 Jul 1992
TL;DR: It is shown that by using the right distribution of instances, and appropriate parameter values, it is possible to generate random formulas that are hard, that is, for which satisfiability testing is quite difficult.
Abstract: We report results from large-scale experiments in satisfiability testing. As has been observed by others, testing the satisfiability of random formulas often appears surprisingly easy. Here we show that by using the right distribution of instances, and appropriate parameter values, it is possible to generate random formulas that are hard, that is, for which satisfiability testing is quite difficult. Our results provide a benchmark for the evaluation of satisfiability-testing procedures.

1,004 citations
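
A generator for the kind of fixed-clause-length random formulas used in such experiments takes only a few lines; the clause-to-variable ratio is the key parameter, with values around 4.3 lying in the empirically hard region for random 3-SAT. The exact distribution and parameter values used in the paper may differ.

```python
# Random 3-CNF generator parameterized by the clause-to-variable ratio.
import random

def random_3sat(n_vars, ratio=4.3, seed=0):
    rng = random.Random(seed)
    n_clauses = int(round(ratio * n_vars))
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.sample(range(1, n_vars + 1), 3)          # 3 distinct variables
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses

formula = random_3sat(n_vars=100)
print(len(formula), "clauses over 100 variables")
```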


Journal ArticleDOI
R.-H. Yan1, Abbas Ourmazd1, K.F. Lee1
TL;DR: In this article, the scaling of fully depleted SOI devices is considered and the concept of controlling horizontal leakage through vertical structures is highlighted, and several structural variations of conventional SOI structures are discussed in terms of a natural length scale to guide the design.
Abstract: Scaling the Si MOSFET is reconsidered. Requirements on subthreshold leakage control force conventional scaling to use high doping as the device dimension penetrates into the deep-submicrometer regime, leading to an undesirably large junction capacitance and degraded mobility. By studying the scaling of fully depleted SOI devices, the important concept of controlling horizontal leakage through vertical structures is highlighted. Several structural variations of conventional SOI structures are discussed in terms of a natural length scale to guide the design. The concept of vertical doping engineering can also be realized in bulk Si to obtain good subthreshold characteristics without large junction capacitance or heavy channel doping.
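
For reference, the "natural length scale" for a fully depleted, single-gate SOI device is commonly quoted in the form below in work following this analysis; the paper itself should be consulted for the exact expression and its bulk-silicon variants.

```latex
% Commonly quoted scale length for a fully depleted single-gate SOI MOSFET
% (silicon film thickness t_Si, gate-oxide thickness t_ox); an assumption here,
% not copied from the abstract above.
\lambda \;=\; \sqrt{\frac{\varepsilon_{\mathrm{Si}}}{\varepsilon_{\mathrm{ox}}}\, t_{\mathrm{Si}}\, t_{\mathrm{ox}}}
```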

Book
31 Mar 1992
TL;DR: An overview of the fundamental principles behind modeling and simulation of communication systems is presented, which include Monte Carlo simulation, discrete time representation, signals, and random-number generation.
Abstract: Simulation plays an important role in the design, analysis, and implementation of communication systems. During the design of complex communication systems it is often infeasible to conduct performance analysis and design tradeoff studies using closed-form mathematical formula techniques. Quite frequently, simulation is the only tool available for addressing important issues in the design, analysis, and implementation of communication systems. Simulation can be used to verify the functionality of communication systems, evaluate the performance of proposed systems, and generate specifications to guide their design. Since the early 1980s a variety of modeling and simulation techniques and tools have been developed and used to support the design and implementation of a broad range of communication systems and products, ranging from multi-million-dollar communication satellites to handsets for the next generation of personal communication systems. This article presents an overview of the fundamental principles behind modeling and simulation of communication systems. Keywords: communication systems; discrete time representation; signals; systems; modeling of functional blocks; simulation of functional blocks; Monte Carlo simulation; random-number generation; performance estimation
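
As a minimal illustration of the Monte Carlo performance estimation the overview discusses, the sketch below estimates the bit-error rate of BPSK over an AWGN channel; it is not tied to any particular system treated in the book, and all parameter values are illustrative.

```python
# Monte Carlo bit-error-rate estimation for BPSK over AWGN.
import numpy as np

def bpsk_awgn_ber(ebno_db, n_bits=200_000, seed=0):
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                      # map 0/1 -> -1/+1
    ebno = 10 ** (ebno_db / 10)
    noise_std = np.sqrt(1 / (2 * ebno))         # unit symbol energy
    received = symbols + noise_std * rng.standard_normal(n_bits)
    decisions = (received > 0).astype(int)
    return np.mean(decisions != bits)

for ebno_db in (0, 2, 4, 6, 8):
    print(f"Eb/N0 = {ebno_db} dB  estimated BER = {bpsk_awgn_ber(ebno_db):.4g}")
```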

Journal ArticleDOI
TL;DR: The string-matching problem is a very common problem; there are many extensions to this problem; for example, we may be looking for a set of patterns, a pattern with "wild cards," or a regular expression.
Abstract: The string-matching problem is a very common problem. We are searching for a string P = p_1 p_2 ... p_m inside a large text file T = t_1 t_2 ... t_n, both sequences of characters from a finite character set Σ. The characters may be English characters in a text file, DNA base pairs, lines of source code, angles between edges in polygons, machines or machine parts in a production schedule, music notes and tempo in a musical score, and so forth. We want to find all occurrences of P in T; namely, we are searching for the set of starting positions F = { i | 1 ≤ i ≤ n − m + 1 such that t_i t_{i+1} ... t_{i+m−1} = P }. The two most famous algorithms for this problem are the Boyer-Moore algorithm [3] and the Knuth-Morris-Pratt algorithm [10]. There are many extensions to this problem; for example, we may be looking for a set of patterns, a pattern with "wild cards," or a regular expression. String-matching tools are included in every reasonable text editor, word processor, and many other applications.
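
To make the problem statement concrete, here is a sketch of one of the classical algorithms the abstract cites, Knuth-Morris-Pratt, which reports every starting position of P in T in O(n + m) time; it is shown for orientation only and is not the paper's own method.

```python
# Knuth-Morris-Pratt search: find all starting positions of pattern in text.
def kmp_search(pattern, text):
    m, n = len(pattern), len(text)
    # failure[i]: length of the longest proper border of pattern[:i+1]
    failure = [0] * m
    k = 0
    for i in range(1, m):
        while k > 0 and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    # scan the text, reusing the failure function to avoid re-comparisons
    matches, k = [], 0
    for i in range(n):
        while k > 0 and text[i] != pattern[k]:
            k = failure[k - 1]
        if text[i] == pattern[k]:
            k += 1
        if k == m:
            matches.append(i - m + 1)           # 0-based starting position
            k = failure[k - 1]
    return matches

print(kmp_search("aba", "ababababa"))           # [0, 2, 4, 6]
```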

Proceedings ArticleDOI
David Yarowsky1
23 Aug 1992
TL;DR: A program that disambiguates English word senses in unrestricted text using statistical models of the major Roget's Thesaurus categories, enabling training on unrestricted monolingual text without human intervention.
Abstract: This paper describes a program that disambiguates English word senses in unrestricted text using statistical models of the major Roget's Thesaurus categories. Roget's categories serve as approximations of conceptual classes. The categories listed for a word in Roget's index tend to correspond to sense distinctions; thus selecting the most likely category provides a useful level of sense disambiguation. The selection of categories is accomplished by identifying and weighting words that are indicative of each category when seen in context, using a Bayesian theoretical framework.Other statistical approaches have required special corpora or hand-labeled training examples for much of the lexicon. Our use of class models overcomes this knowledge acquisition bottleneck, enabling training on unrestricted monolingual text without human intervention. Applied to the 10 million word Grolier's Encyclopedia, the system correctly disambiguated 92% of the instances of 12 polysemous words that have been previously studied in the literature.
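
A highly simplified sketch of the category-scoring step: each Roget-style category is scored by summing log-weights of context words indicative of it, and the top-scoring category is taken as the sense. The tiny hand-written category models below are invented placeholders; the real system estimates these weights from large corpora within a Bayesian framework.

```python
# Toy category-based sense scoring; the weights are invented for illustration.
category_models = {
    "ANIMAL":  {"cage": 2.3, "zoo": 2.0, "fur": 1.8, "keeper": 1.2},
    "MACHINE": {"assembly": 2.1, "factory": 1.9, "hydraulic": 2.4, "operator": 1.1},
}

def disambiguate(context_words):
    scores = {
        cat: sum(model.get(w, 0.0) for w in context_words)
        for cat, model in category_models.items()
    }
    return max(scores, key=scores.get), scores

context = "the crane was kept in a cage at the zoo near the keeper".split()
print(disambiguate(context))    # -> ('ANIMAL', ...)
```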

Journal ArticleDOI
TL;DR: A fundamental technique for designing a classifier that approaches the objective of minimum classification error in a more direct manner than traditional methods is given and is contrasted with several traditional classifier designs in typical experiments to demonstrate the superiority of the new learning formulation.
Abstract: A formulation is proposed for minimum-error classification, in which the misclassification probability is to be minimized based on a given set of training samples. A fundamental technique for designing a classifier that approaches the objective of minimum classification error in a more direct manner than traditional methods is given. The method is contrasted with several traditional classifier designs in typical experiments to demonstrate the superiority of the new learning formulation. The method can be applied to other classifier structures as well. Experimental results pertaining to a speech recognition task are provided to show the effectiveness of the technique.

Journal ArticleDOI
Joseph Abate, Ward Whitt1
TL;DR: This paper reviews the Fourier-series method for calculating cumulative distribution functions (cdf's) and probability mass functions (pmf's) by numerically inverting characteristic functions, Laplace transforms and generating functions and describes two methods for inverting Laplace transform based on the Post-Widder inversion formula.
Abstract: This paper reviews the Fourier-series method for calculating cumulative distribution functions (cdf's) and probability mass functions (pmf's) by numerically inverting characteristic functions, Laplace transforms and generating functions. Some variants of the Fourier-series method are remarkably easy to use, requiring programs of less than fifty lines. The Fourier-series method can be interpreted as numerically integrating a standard inversion integral by means of the trapezoidal rule. The same formula is obtained by using the Fourier series of an associated periodic function constructed by aliasing; this explains the name of the method. This Fourier analysis applies to the inversion problem because the Fourier coefficients are just values of the transform. The mathematical centerpiece of the Fourier-series method is the Poisson summation formula, which identifies the discretization error associated with the trapezoidal rule and thus helps bound it. The greatest difficulty is approximately calculating the infinite series obtained from the inversion integral. Within this framework, lattice cdf's can be calculated from generating functions by finite sums without truncation. For other cdf's, an appropriate truncation of the infinite series can be determined from the transform based on estimates or bounds. For Laplace transforms, the numerical integration can be made to produce a nearly alternating series, so that the convergence can be accelerated by techniques such as Euler summation. Alternatively, the cdf can be perturbed slightly by convolution smoothing or windowing to produce a truncation error bound independent of the original cdf. Although error bounds can be determined, an effective approach is to use two different methods without elaborate error analysis. For this purpose, we also describe two methods for inverting Laplace transforms based on the Post-Widder inversion formula. The overall procedure is illustrated by several queueing examples.
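
A compact sketch of the Fourier-series inversion with Euler summation is given below. The formula and the conventional parameter choices (A ≈ 18.4, with Euler summation over roughly a dozen partial sums) follow the widely cited "Euler" variant of this method; treat it as an illustration rather than a reference implementation.

```python
# Trapezoidal-rule (Fourier-series) Laplace inversion with Euler summation.
import math

def euler_inversion(fhat, t, A=18.4, N=15, M=11):
    """Approximate f(t) from its Laplace transform fhat(s), for t > 0."""
    def partial_sum(n):
        s = 0.5 * fhat(complex(A / (2 * t), 0)).real
        for k in range(1, n + 1):
            s += (-1) ** k * fhat(complex(A, 2 * k * math.pi) / (2 * t)).real
        return (math.exp(A / 2) / t) * s
    # Euler summation: binomially weighted average of consecutive partial sums
    return sum(math.comb(M, j) * 2.0 ** (-M) * partial_sum(N + j) for j in range(M + 1))

# Check against a known pair: fhat(s) = 1/(s+1)  <->  f(t) = exp(-t)
fhat = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, euler_inversion(fhat, t), math.exp(-t))
```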

Journal ArticleDOI
TL;DR: The Seesoft software visualization system as discussed by the authors allows one to analyze up to 50000 lines of code simultaneously by mapping each line of code into a thin row, and the color of each row indicates a statistic of interest, e.g., red rows are those most recently changed, and blue are those least recently changed.
Abstract: The Seesoft software visualization system allows one to analyze up to 50,000 lines of code simultaneously by mapping each line of code into a thin row. The color of each row indicates a statistic of interest, e.g., red rows are those most recently changed, and blue are those least recently changed. Seesoft displays data derived from a variety of sources, such as version control systems that track the age, programmer, and purpose of the code (e.g., control ISDN lamps, fix bug in call forwarding); static analyses (e.g., locations where functions are called); and dynamic analyses (e.g., profiling). By means of direct manipulation and high interaction graphics, the user can manipulate this reduced representation of the code in order to find interesting patterns. Further insight is obtained by using additional windows to display the actual code. Potential applications for Seesoft include discovery, project management, code tuning, and analysis of development methodologies.

Journal ArticleDOI
TL;DR: A new low-loss fast intracavity semiconductor Fabry-Perot saturable absorber operated at anti-resonance both to start and sustain stable mode locking of a cw-pumped Nd:YLF laser is introduced.
Abstract: We introduce a new low-loss fast intracavity semiconductor Fabry-Perot saturable absorber operated at anti-resonance both to start and sustain stable mode locking of a cw-pumped Nd:YLF laser. We achieved a 3.3-ps pulse duration at a 220-MHz repetition rate. The average output power was 700 mW with 2 W of cw pump power from a Ti:sapphire laser. At pump powers of less than 1.6 W the laser self-Q switches and produces 4-ps pulses within a 1.4-μs Q-switched pulse at an approximately 150-kHz repetition rate determined by the relaxation oscillation of the Nd:YLF laser. Both modes of operation are stable. In terms of coupled-cavity mode locking, the intra-cavity antiresonant Fabry-Perot saturable absorber corresponds to monolithic resonant passive mode locking.

Journal ArticleDOI
TL;DR: A group of practitioners and researchers discuss the role of parameter design and Taguchi's methodology for implementing it and the importance of parameter-design principles with well-established statistical techniques.
Abstract: It is more than a decade since Genichi Taguchi's ideas on quality improvement were inrroduced in the United States. His parameter-design approach for reducing variation in products and processes has generated a great deal of interest among both quality practitioners and statisticians. The statistical techniques used by Taguchi to implement parameter design have been the subject of much debate, however, and there has been considerable research aimed at integrating the parameter-design principles with well-established statistical techniques. On the other hand, Taguchi and his colleagues feel that these research efforts by statisticians are misguided and reflect a lack of understanding of the engineering principles underlying Taguchi's methodology. This panel discussion provides a forum for a technical discussion of these diverse views. A group of practitioners and researchers discuss the role of parameter design and Taguchi's methodology for implementing it. The topics covered include the importance of vari...

Journal ArticleDOI
Peter Adam Hoeher1
TL;DR: The computation of the tap gains of the discrete-time representation of a slowly time-varying multipath channel is investigated and a known Monte Carlo based method approximating the given scattering function is extended by including filtering and sampling.
Abstract: The computation of the tap gains of the discrete-time representation of a slowly time-varying multipath channel is investigated. Assuming the channel is wide-sense stationary with uncorrelated scattering (WSSUS), a known Monte Carlo based method approximating the given scattering function (which fully determines the WSSUS channel) is extended by including filtering and sampling. The result is a closed-form solution for the tap gains. This allows the efficient simulation of the continuous-time channel with, e.g., only one sample per symbol, and without explicit digital filtering.

Journal ArticleDOI
TL;DR: In this paper, the history of MCDM and MAUT is reviewed, and topics important for their continued development and usefulness to management science over the next decade are discussed, identifying exciting directions and promising areas for future research.
Abstract: Management science and decision science have grown exponentially since midcentury. Two closely-related fields central to this growth are multiple criteria decision making (MCDM) and multiattribute utility theory (MAUT). This paper comments on the history of MCDM and MAUT and discusses topics we believe are important in their continued development and usefulness to management science over the next decade. Our aim is to identify exciting directions and promising areas for future research.

Proceedings ArticleDOI
23 Feb 1992
TL;DR: An experiment confirmed the hypothesis that if a polysemous word such as sentence appears two or more times in a well-written discourse, it is extremely likely that they will all share the same sense and found that the tendency to share sense in the same discourse is extremely strong.
Abstract: It is well-known that there are polysemous words like sentence whose "meaning" or "sense" depends on the context of use. We have recently reported on two new word-sense disambiguation systems, one trained on bilingual material (the Canadian Hansards) and the other trained on monolingual material (Roget's Thesaurus and Grolier's Encyclopedia). As this work was nearing completion, we observed a very strong discourse effect. That is, if a polysemous word such as sentence appears two or more times in a well-written discourse, it is extremely likely that they will all share the same sense. This paper describes an experiment which confirmed this hypothesis and found that the tendency to share sense in the same discourse is extremely strong (98%). This result can be used as an additional source of constraint for improving the performance of the word-sense disambiguation algorithm. In addition, it could also be used to help evaluate disambiguation algorithms that did not make use of the discourse constraint.
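
One simple way to use the observation as a constraint is to pool per-occurrence sense scores from any disambiguator across a discourse and assign every occurrence the document-level winner, as in the sketch below; the function and variable names (and the toy scores) are illustrative, not taken from the paper.

```python
# Apply a "one sense per discourse" constraint to per-occurrence sense scores.
from collections import defaultdict

def apply_discourse_constraint(occurrence_scores):
    """occurrence_scores: list of dicts mapping sense -> score, one per occurrence."""
    totals = defaultdict(float)
    for scores in occurrence_scores:
        for sense, s in scores.items():
            totals[sense] += s
    best = max(totals, key=totals.get)
    return [best] * len(occurrence_scores)

scores = [{"legal": 2.1, "grammar": 0.3},
          {"legal": 0.4, "grammar": 0.6},   # locally ambiguous occurrence
          {"legal": 1.7, "grammar": 0.2}]
print(apply_discourse_constraint(scores))   # all three occurrences tagged "legal"
```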

Journal ArticleDOI
Thrasyvoulos N. Pappas1
TL;DR: The algorithm that is presented is a generalization of the K-means clustering algorithm to include spatial constraints and to account for local intensity variations in the image to preserve the most significant features of the originals, while removing unimportant details.
Abstract: The problem of segmenting images of objects with smooth surfaces is considered. The algorithm that is presented is a generalization of the K-means clustering algorithm to include spatial constraints and to account for local intensity variations in the image. Spatial constraints are included by the use of a Gibbs random field model. Local intensity variations are accounted for in an iterative procedure involving averaging over a sliding window whose size decreases as the algorithm progresses. Results with an 8-neighbor Gibbs random field model applied to pictures of industrial objects, buildings, aerial photographs, optical characters, and faces show that the algorithm performs better than the K-means algorithm and its nonadaptive extensions that incorporate spatial constraints by the use of Gibbs random fields. A hierarchical implementation is also presented that results in better performance and faster speed of execution. The segmented images are caricatures of the originals which preserve the most significant features, while removing unimportant details. They can be used in image recognition and as crude representations of the image.
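
A simplified sketch of the core update: labels are chosen to trade off fit to the class mean against agreement with the 8-neighborhood (a Gibbs-style smoothness term), and class means are re-estimated between sweeps. The sliding-window local means and the coarse-to-fine schedule of the full algorithm are omitted, and beta and the iteration count are illustrative choices.

```python
# K-means segmentation with an 8-neighborhood smoothness penalty (simplified).
import numpy as np

def segment(image, k=2, beta=5.0, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape
    labels = rng.integers(0, k, size=(h, w))
    for _ in range(n_iter):
        means = np.array([image[labels == c].mean() if np.any(labels == c)
                          else image.mean() for c in range(k)])
        for i in range(h):
            for j in range(w):
                costs = np.empty(k)
                for c in range(k):
                    data_cost = (image[i, j] - means[c]) ** 2
                    nb = labels[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                    smooth_cost = np.sum(nb != c) - (labels[i, j] != c)  # exclude center
                    costs[c] = data_cost + beta * smooth_cost
                labels[i, j] = np.argmin(costs)
    return labels

# Toy usage: a noisy two-region image.
rng = np.random.default_rng(1)
img = np.zeros((32, 32)) + rng.normal(0, 0.3, (32, 32))
img[:, 16:] += 1.0
print(np.unique(segment(img, k=2)))
```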

Journal ArticleDOI
Léon Bottou1, Vladimir Vapnik1
TL;DR: A single analysis suggests that neither kNN nor RBF, nor nonlocal classifiers, achieves the best compromise between locality and capacity.
Abstract: Very rarely are training data evenly distributed in the input space. Local learning algorithms attempt to locally adjust the capacity of the training system to the properties of the training set in...

Journal ArticleDOI
01 Jan 1992-Networks
TL;DR: A survey up to 1989 on the Steiner tree problems which include the four important cases of euclidean, rectilinear, graphic, phylogenetic and some of their generalizations.
Abstract: We give a survey up to 1989 on the Steiner tree problems which include the four important cases of euclidean, rectilinear, graphic, phylogenetic and some of their generalizations. We also provide a rather comprehensive and up-to-date bibliography which covers more than three hundred items.

Journal ArticleDOI
Stephen H. Lewis1, H.S. Fetterman1, George Gross1, R. Ramachandran1, T.R. Viswanathan1 
TL;DR: In this paper, a 10-b 20-Msample/s analog-to-digital converter fabricated in a 0.9-μm CMOS technology is described, which uses a pipelined nine-stage architecture with fully differential analog circuits and achieves an SNDR of 60 dB with a full-scale sinusoidal input at 5 MHz.
Abstract: A 10-b 20-Msample/s analog-to-digital converter fabricated in a 0.9-μm CMOS technology is described. The converter uses a pipelined nine-stage architecture with fully differential analog circuits and achieves a signal-to-noise-and-distortion ratio (SNDR) of 60 dB with a full-scale sinusoidal input at 5 MHz. It occupies 8.7 mm² and dissipates 240 mW.

Journal ArticleDOI
TL;DR: In this paper, the authors summarize the current theoretical and experimental understanding of clustering phenomena on surfaces, with an emphasis on dynamical properties, including surface diffusion coefficients and adatom binding energies.

Proceedings Article
30 Nov 1992
TL;DR: A new distance measure which can be made locally invariant to any set of transformations of the input and can be computed efficiently is proposed.
Abstract: Memory-based classification algorithms such as radial basis functions or K-nearest neighbors typically rely on simple distances (Euclidean, dot product...), which are not particularly meaningful on pattern vectors. More complex, better suited distance measures are often expensive and rather ad-hoc (elastic matching, deformable templates). We propose a new distance measure which (a) can be made locally invariant to any set of transformations of the input and (b) can be computed efficiently. We tested the method on large handwritten character databases provided by the Post Office and the NIST. Using invariances with respect to translation, rotation, scaling, shearing and line thickness, the method consistently outperformed all other systems tested on the same databases.
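
Below is a minimal sketch of a one-sided tangent distance: the distance from a pattern y to the linear approximation of the transformation manifold of x, spanned by precomputed tangent vectors (for example, finite-difference derivatives of x with respect to translation). The paper's measure is two-sided and uses a richer set of invariances; this only illustrates the core least-squares computation.

```python
# One-sided tangent distance via linear least squares.
import numpy as np

def tangent_distance(x, y, tangents):
    """x, y: flattened patterns (d,); tangents: (d, k) tangent vectors of x."""
    # Find the point x + tangents @ a closest to y.
    a, *_ = np.linalg.lstsq(tangents, y - x, rcond=None)
    residual = (x + tangents @ a) - y
    return np.linalg.norm(residual)

# Toy usage with a horizontal-translation tangent from central differences.
img = np.zeros((8, 8)); img[3:5, 2:6] = 1.0
shifted = np.roll(img, 1, axis=1)
tangent = ((np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)) / 2).reshape(-1, 1)
x, y = img.ravel(), shifted.ravel()
print("euclidean:", np.linalg.norm(x - y),
      " tangent:", tangent_distance(x, y, tangent))
```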

Book
01 Jan 1992
TL;DR: A review of the properties of hydrogen in crystalline semiconductors is presented in this paper, together with the reactions of atomic hydrogen with shallow and deep level impurities that passivate their electrical activity.
Abstract: A review of the properties of hydrogen in crystalline semiconductors is presented. The equilibrium lattice positions of the various states of hydrogen are detailed, together with the reactions of atomic hydrogen with shallow and deep level impurities that passivate their electrical activity. Evidence for several charge states of mobile hydrogen provides a consistent picture for both the temperature dependence of its diffusivity and the chemical reactions with shallow level dopants. The electrical and optical characteristics of hydrogen-related defects in both elemental and compound semiconductors are discussed, along with the surface damage caused by hydrogen bombardment. The bonding configurations of hydrogen on semiconductor surfaces and the prevalence of its incorporation during many benign processing steps are reviewed. We conclude by identifying the most important areas for future effort.

Patent
21 Feb 1992
TL;DR: In this paper, general methods and apparatus for mediating transactions, methods and apparatus for allowing information from one transaction to be used in other transactions, and methods and apparatus for performing credit card transactions in which the vendee need not disclose his credit card to the vendor are disclosed.
Abstract: Methods and apparatus for employing a communications system which actively connects communicating entities to mediate transactions. Disclosed are general methods and apparatus for mediating transactions, methods and apparatus permitting information from one transaction to be used in other transactions, and methods and apparatus for performing credit card transactions in which the vendee need not disclose his credit card to the vendor. An implementation of a system for performing credit card transactions in a stored program-controlled telephone switching network is also disclosed.