
Showing papers by "Bell Labs" published in 1980


Journal ArticleDOI
TL;DR: In this article, observed stock return data are employed to examine various methodologies used in event studies to measure security price performance; abnormal performance is introduced into these data, and misuse of any of the methodologies can result in false inferences about the presence of abnormal performance.
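
For orientation, one of the benchmarks typically compared in such simulations is the market-model abnormal return (the notation below is the standard textbook one, not necessarily the paper's):

$$ AR_{it} = R_{it} - (\hat{\alpha}_i + \hat{\beta}_i R_{mt}), \qquad CAR_i = \sum_{t} AR_{it}, $$

where $R_{it}$ is the return on security $i$ in period $t$, $R_{mt}$ is the market return, and $\hat{\alpha}_i$, $\hat{\beta}_i$ are estimated outside the event window; the study asks how reliably such measures detect abnormal performance that has been artificially introduced into actual return data.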

3,389 citations


Journal ArticleDOI
TL;DR: In this paper, the authors reported narrowing and splitting of 7-ps-duration pulses from a mode-locked color-center laser by a 700-m-long, single-mode silica-glass fiber, at a wavelength (1.55 μm) of low loss and large but negative group-velocity dispersion.
Abstract: This paper reports narrowing and splitting of 7-ps-duration pulses from a mode-locked color-center laser by a 700-m-long, single-mode silica-glass fiber, at a wavelength (1.55 μm) of low loss and large but negative group-velocity dispersion. At certain critical power levels, the observed behavior is characteristic of solitons.

1,844 citations


Journal ArticleDOI
Turner Whitted1
TL;DR: Consideration of all of these factors allows the shader to accurately simulate true reflection, shadows, and refraction, as well as the effects simulated by conventional shaders.
Abstract: To accurately render a two-dimensional image of a three-dimensional scene, global illumination information that affects the intensity of each pixel of the image must be known at the time the intensity is calculated. In a simplified form, this information is stored in a tree of “rays” extending from the viewer to the first surface encountered and from there to other surfaces and to the light sources. A visible surface algorithm creates this tree for each pixel of the display and passes it to the shader. The shader then traverses the tree to determine the intensity of the light received by the viewer. Consideration of all of these factors allows the shader to accurately simulate true reflection, shadows, and refraction, as well as the effects simulated by conventional shaders. Anti-aliasing is included as an integral part of the visibility calculations. Surfaces displayed include curved as well as polygonal surfaces.
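
To make the recursive structure concrete, here is a minimal Python sketch of the idea (hypothetical scene data; the shadows, refraction, and anti-aliasing of the paper are omitted, and only the reflection branch of the ray tree is followed):

# Minimal recursive ray-tracing sketch: each hit spawns a reflected ray,
# forming the tree of rays that the shader traverses. Not Whitted's code.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def hit_sphere(origin, direction, center, radius):
    # Nearest t > 0 with |origin + t*direction - center| = radius
    # (direction is assumed to be unit length).
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def trace(origin, direction, spheres, light_dir, depth=3):
    # Grey-scale intensity: local diffuse shading plus one reflected ray.
    if depth == 0:
        return 0.0
    nearest = None
    for center, radius, reflectivity in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, reflectivity)
    if nearest is None:
        return 0.1  # background intensity
    t, center, reflectivity = nearest
    point = add(origin, scale(direction, t))
    normal = normalize(sub(point, center))
    local = max(0.0, dot(normal, light_dir))  # Lambert term
    refl_dir = normalize(sub(direction, scale(normal, 2.0 * dot(direction, normal))))
    reflected = trace(point, refl_dir, spheres, light_dir, depth - 1)
    return (1.0 - reflectivity) * local + reflectivity * reflected

spheres = [((0.0, 0.0, -3.0), 1.0, 0.5), ((1.5, 0.0, -4.0), 1.0, 0.2)]
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), spheres, normalize((1.0, 1.0, 0.0))))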

1,559 citations


Journal ArticleDOI
David A. Rosenbaum1
TL;DR: A method for discovering how the defining values of forthcoming body movements are specified is presented, consistent with a distinctive-feature view, rather than a hierarchical view, of motor programming.
Abstract: This article presents a method for discovering how the defining values of forthcoming body movements are specified. In experiments using this movement precuing technique, information is given about some, none, or all of the defining values of a movement that will be required when a reaction signal is presented. It is assumed that the reaction time (RT) reflects the time to specify those values that were not precued. With RTs for the same movements in different precue conditions, it is possible to make detailed inferences about the value specification process for each of the movements under study. The present experiments were concerned with the specification of the arm, direction, and extent (or distance) of aimed hand movements. In Experiment 1 it appeared that (a) specification times during RTs were longest for arm, shorter for direction, and shortest for extent, and (b) these values were specified serially but not in an invariant order. Experiment 2 suggested that the precuing effects obtained in Experiment 1 were not attributable to stimulus identification. Experiment 3 suggested that subjects in Experiment 1 did not use precues to prepare sets of possible movements from which the required movement was later selected. The model of value specification supported by the data is consistent with a distinctive-feature view, rather than a hierarchical view, of motor programming.

925 citations


Journal ArticleDOI
TL;DR: In this paper, the interaction effects in disordered Fermi systems are considered in the metallic regime, and logarithmic corrections are obtained for conductivity, density of states, specific heat, and Hall constant.
Abstract: Interaction effects in disordered Fermi systems are considered in the metallic regime. In two dimensions, logarithmic corrections are obtained for conductivity, density of states, specific heat, and Hall constant. These results are compared with a recent theory of localization as well as some experiments.

743 citations


Journal ArticleDOI
TL;DR: In this paper, a weak magnetic field is found to have strong effects on the previously predicted logarithmic rise in resistivity at low temperatures, which is a strong indicator of disorder in two dimensions.
Abstract: Two models of disorder in two dimensions are discussed. The first is a localization theory that treats noninteracting particles by perturbation theory in the weak scattering limit. A weak magnetic field is found to have strong effects on the previously predicted logarithmic rise in resistivity at low temperatures. No logarithmic divergence is found for the Hall constant. A second model treats the disorder scattering by conventional diagrammatic technique but includes the effects of interactions. In a short communication it has previously been reported that the resistivity and Hall constant both show a logarithmic increase at low temperatures. The details of the calculation are reported here, together with an extension to thin wires which shows a $T^{-1/2}$ divergence in the resistivity.

726 citations


Proceedings ArticleDOI
James D. Johnston1
09 Apr 1980
TL;DR: This paper discusses a family of filters that have been designed for Quadrature Mirror Filter (QMF) Banks that provide a significant improvement over conventional optimal equiripple and window designs when used in QMF banks.
Abstract: This paper discusses a family of filters that have been designed for Quadrature Mirror Filter (QMF) banks. These filters provide a significant improvement over conventional optimal equiripple and window designs when used in QMF banks. The performance criteria for these filters differ from those usually used for filter design in a way which makes the usual filter design techniques difficult to apply. Two filters are actually designed simultaneously, with constraints on the stop band rejection, transition band width, and pass and transition band performance of the QMF filter structure made from those filters. Unlike most filter design problems, the behavior of the transition band is constrained, which places unusual requirements on the design algorithm. The requirement that the overall passband behavior of the QMF bank be constrained (which is a function of the passband and stop band behavior of the filter) also places very unusual requirements on the filter design. The filters were designed using a Hooke and Jeeves optimization routine with a Hanning window prototype. Theoretical results suggest that exactly flat frequency designs cannot be created for filter lengths greater than 2; however, using the discussed procedure, one can obtain QMF banks with as little as ±0.0015 dB ripple in their frequency response. Due to the nature of QMF filter applications, a small set of filters can be derived which will fit most applications.
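
As a rough numerical illustration (not Johnston's design program, and with a Hann-windowed lowpass standing in as a hypothetical prototype), the constrained quantities can be measured directly: the flatness of $|H_0(\omega)|^2 + |H_0(\omega+\pi)|^2$ across frequency, which sets the QMF bank's overall ripple up to a scaling convention, and the stopband rejection of the prototype:

# Sketch of the QMF performance measures for a stand-in prototype filter.
import numpy as np

N = 32                                   # even prototype length (illustrative)
n = np.arange(N)
fc = 0.25                                # half-band cutoff, cycles/sample
h0 = np.sinc(2 * fc * (n - (N - 1) / 2)) * np.hanning(N)
h0 /= np.sum(h0)                         # unity gain at DC
h1 = ((-1) ** n) * h0                    # the mirror filter of the pair

w = np.linspace(0, np.pi, 1024, endpoint=False)
H0 = np.exp(-1j * np.outer(w, n)) @ h0
H0s = np.exp(-1j * np.outer(w + np.pi, n)) @ h0   # H0 shifted by pi
H1 = np.exp(-1j * np.outer(w, n)) @ h1

recon = np.abs(H0) ** 2 + np.abs(H0s) ** 2        # ideally constant
print("reconstruction ripple : %.4f dB" % (10 * np.log10(recon.max() / recon.min())))
print("mirror check          : %s" % np.allclose(np.abs(H1), np.abs(H0s)))
stop = np.abs(H0[w > 0.6 * np.pi])                # crude stopband region
print("stopband attenuation  : %.1f dB" % (-20 * np.log10(stop.max())))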

724 citations


Journal ArticleDOI
TL;DR: In this article, a scaling theory of localization is based on an expression for the conductivity of a system of random elastic scatterers in terms of its scattering properties at a fixed energy; scaling leads to a well-behaved probability distribution of the scaling variable and to a simple scaling law not previously given in the literature.
Abstract: We base a scaling theory of localization on an expression for conductivity of a system of random elastic scatterers in terms of its scattering properties at a fixed energy. This expression, proposed by Landauer, is first derived and generalized to a system of indefinite size and number of scattering channels (a "wire"), and then an exact scaling theory for the one-dimensional chain is given. It is shown that the appropriate scaling variable is $f(\rho) = \ln(1+\rho)$, where $\rho$ is the dimensionless resistance, which has the property of "additive mean," and that scaling leads to a well-behaved probability distribution of this variable and to a very simple scaling law not previously given in the literature.
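
A small numerical check of the "additive mean" property (a sketch that assumes the standard series-composition law for two elastic scatterers with a random relative phase; that law is an outside assumption, not quoted from this abstract):

# For two scatterers in series with dimensionless resistances rho1, rho2 and
# random relative phase theta, a standard composition law (assumed here) is
#   rho12 = rho1 + rho2 + 2*rho1*rho2
#           + 2*sqrt(rho1*rho2*(1+rho1)*(1+rho2))*cos(theta).
# Averaging over theta, ln(1 + rho12) comes out additive, which is the
# scaling variable f(rho) = ln(1 + rho) of the abstract.
import numpy as np

rng = np.random.default_rng(0)
rho1, rho2 = 0.7, 1.9
theta = rng.uniform(0.0, 2.0 * np.pi, 2_000_000)
rho12 = (rho1 + rho2 + 2 * rho1 * rho2
         + 2 * np.sqrt(rho1 * rho2 * (1 + rho1) * (1 + rho2)) * np.cos(theta))
print(np.mean(np.log1p(rho12)))                  # Monte Carlo phase average
print(np.log1p(rho1) + np.log1p(rho2))           # additive-mean prediction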

691 citations


Journal ArticleDOI
TL;DR: The results suggest a new approach to dynamic time warping for isolated words in which both the reference and test patterns are linearly warped to a fixed length, and then a simplified dynamic time warping algorithm is used to handle the nonlinear component of the time alignment.
Abstract: The technique of dynamic programming for the time registration of a reference and a test pattern has found widespread use in the area of isolated word recognition. Recently, a number of variations on the basic time warping algorithm have been proposed by Sakoe and Chiba, and Rabiner, Rosenberg, and Levinson. These algorithms all assume that the test input is the time pattern of a feature vector from an isolated word whose endpoints are known (at least approximately). The major differences in the methods are the global path constraints (i.e., the region of possible warping paths), the local continuity constraints on the path, and the distance weighting and normalization used to give the overall minimum distance. The purpose of this investigation is to study the effects of such variations on the performance of different dynamic time warping algorithms for a realistic speech database. The performance measures that were used include: speed of operation, memory requirements, and recognition accuracy. The results show that both axis orientation and relative length of the reference and the test patterns are important factors in recognition accuracy. Our results suggest a new approach to dynamic time warping for isolated words in which both the reference and test patterns are linearly warped to a fixed length, and then a simplified dynamic time warping algorithm is used to handle the nonlinear component of the time alignment. Results with this new algorithm show performance comparable to or better than that of all other dynamic time warping algorithms that were studied.
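
For reference, a bare-bones dynamic time warping distance between two feature-vector sequences (a generic textbook sketch with unconstrained paths and a simple symmetric normalization, deliberately not any one of the specific constrained algorithms compared in the paper):

# Generic DTW sketch: symmetric local continuity, no global path constraint,
# path normalization by (len(ref) + len(test)). The algorithms studied in the
# paper differ precisely in these choices.
import numpy as np

def dtw_distance(ref, test):
    """ref, test: arrays of shape (n_frames, n_features)."""
    n, m = len(ref), len(test)
    d = np.linalg.norm(ref[:, None, :] - test[None, :, :], axis=-1)  # local distances
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j],      # insertion
                                            D[i, j - 1],      # deletion
                                            D[i - 1, j - 1])  # match
    return D[n, m] / (n + m)                                  # normalized distance

rng = np.random.default_rng(1)
ref = rng.normal(size=(40, 12))                     # e.g., 40 frames of 12 features
test = ref[::2] + 0.05 * rng.normal(size=(20, 12))  # a time-compressed version
print(dtw_distance(ref, test))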

618 citations


Journal ArticleDOI
Arthur Ashkin1
05 Dec 1980-Science
TL;DR: Use of lasers has revolutionized the study and applications of radiation pressure, and it is now possible to optically accelerate, slow, stably trap, and manipulate micrometer-sized dielectric particles and atoms.
Abstract: Use of lasers has revolutionized the study and applications of radiation pressure. Light forces have been achieved which strongly affect the dynamics of individual small particles. It is now possible to optically accelerate, slow, stably trap, and manipulate micrometer-sized dielectric particles and atoms. This leads to a diversity of new scientific and practical applications in fields where small particles play a role, such as light scattering, cloud physics, aerosol science, atomic physics, quantum optics, and high-resolution spectroscopy.

559 citations


Journal ArticleDOI
Arun N. Netravali1, J.O. Limb1
01 Mar 1980
TL;DR: This paper presents a review of techniques used for digital encoding of picture material, covering statistical models of picture signals and elements of psychophysics relevant to picture coding, followed by a description of the coding techniques.
Abstract: This paper presents a review of techniques used for digital encoding of picture material. Statistical models of picture signals and elements of psychophysics relevant to picture coding are covered first, followed by a description of the coding techniques. Detailed examples of three typical systems, which combine some of the coding principles, are given. A bright future for new systems is forecast, based on emerging new concepts, integrated-circuit technology, and the need to digitize in a variety of contexts.

Journal ArticleDOI
R.C. Cheung1
TL;DR: A user-oriented software reliability figure of merit is defined to measure the reliability of a software system with respect to a user environment, and the effects of the user profile, which summarizes the characteristics of the users of a system, on system reliability are discussed.
Abstract: A user-oriented reliability model has been developed to measure the reliability of service that a system provides to a user community. It has been observed that in many systems, especially software systems, reliable service can be provided to a user when it is known that errors exist, provided that the service requested does not utilize the defective parts. The reliability of service, therefore, depends both on the reliability of the components and the probabilistic distribution of the utilization of the components to provide the service. In this paper, a user-oriented software reliability figure of merit is defined to measure the reliability of a software system with respect to a user environment. The effects of the user profile, which summarizes the characteristics of the users of a system, on system reliability are discussed. A simple Markov model is formulated to determine the reliability of a software system based on the reliability of each individual module and the measured intermodular transition probabilities as the user profile. Sensitivity analysis techniques are developed to determine modules most critical to system reliability. The applications of this model to develop cost-effective testing strategies and to determine the expected penalty cost of failures are also discussed. Some future refinements and extensions of the model are presented.
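
A small numerical sketch of this kind of computation (the module reliabilities and transition profile below are made-up numbers, and the formulation is a simplified rendering of such a Markov model rather than the paper's exact one):

# Module i works with probability R[i]; given that it works, control moves to
# module j with probability P[i][j]. The last module, when it works, terminates
# correctly. With Q[i][j] = R[i] * P[i][j], the expected-visit matrix is
# S = (I - Q)^{-1}, and system reliability is S[0, n-1] * R[n-1].
import numpy as np

R = np.array([0.99, 0.98, 0.97, 0.995])          # per-module reliabilities (assumed)
P = np.array([[0.0, 0.6, 0.4, 0.0],              # intermodular transition profile
              [0.0, 0.0, 0.7, 0.3],              # (assumed user profile)
              [0.0, 0.2, 0.0, 0.8],
              [0.0, 0.0, 0.0, 0.0]])             # module 4 is terminal

Q = R[:, None] * P                               # transitions that occur without failure
S = np.linalg.inv(np.eye(len(R)) - Q)            # expected visit counts
system_reliability = S[0, -1] * R[-1]
print("system reliability: %.4f" % system_reliability)

# Crude sensitivity check: how much the answer moves when one module's
# reliability drops slightly flags the modules most critical to the system.
for k in range(len(R)):
    Rk = R.copy()
    Rk[k] -= 0.01
    Sk = np.linalg.inv(np.eye(len(R)) - Rk[:, None] * P)
    print("module %d: delta = %.4f" % (k + 1, system_reliability - Sk[0, -1] * Rk[-1]))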

Journal ArticleDOI
TL;DR: In this paper, the authors derived from quantum theory the optical force, its first-order velocity dependence, and its fluctuations for arbitrary light intensity, and applied the results to the problem of creating a stable optical trap for sodium atoms.
Abstract: The force exerted by optical-frequency radiation on neutral atoms can be quite substantial, particularly in the neighborhood of an atomic resonance line. In this paper we derive from quantum theory the optical force, its first-order velocity dependence, and its fluctuations for arbitrary light intensity, and apply the results to the problem of creating a stable optical trap for sodium atoms. New results include the position dependence of the velocity-dependent force, a complete expression for the momentum diffusion constant including the substantial contribution from fluctuations of the dipole force, and an estimate of trapping times in excess of 1 sec even in the absence of effective damping. The paper concludes with a discussion of the prospects and difficulties in providing sufficient damping to stabilize such a trap.

Proceedings ArticleDOI
01 Jul 1980
TL;DR: This paper describes a method whereby the object space is represented entirely by a hierarchical data structure consisting of bounding volumes, with no other form of representation, which allows the visible surface rendering to be performed simply and efficiently.
Abstract: Hierarchical representations of 3-dimensional objects are both time and space efficient. They typically consist of trees whose branches represent bounding volumes and whose terminal nodes represent primitive object elements (usually polygons). This paper describes a method whereby the object space is represented entirely by a hierarchical data structure consisting of bounding volumes, with no other form of representation. This homogeneity allows the visible surface rendering to be performed simply and efficiently. The bounding volumes selected for this algorithm are parallelepipeds oriented to minimize their size. With this representation, any surface can be rendered since in the limit the bounding volumes make up a point representation of the object. The advantage is that the visibility calculations consist only of a search through the data structure to determine the correspondence between terminal level bounding volumes and the current pixel. For ray tracing algorithms, this means that a simplified operation will produce the point of intersection of each ray with the bounding volumes. Memory requirements are minimized by expanding or fetching the lower levels of the hierarchy only when required. Because the viewing process has a single operation and primitive type, the software or hardware chosen to implement the search can be highly optimized for very fast execution.
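
A minimal Python sketch of the traversal idea (axis-aligned boxes and a plain recursive search rather than the oriented parallelepipeds and lazily expanded hierarchy described above; the scene data are hypothetical):

# A ray is tested against a node's bounding box; only on a hit does the search
# descend. Leaves here are small boxes standing in for primitive elements.
import math

class Node:
    def __init__(self, box, children=None):
        self.box = box                   # ((xmin, ymin, zmin), (xmax, ymax, zmax))
        self.children = children or []   # empty list => terminal node

def ray_hits_box(origin, inv_dir, box):
    # Standard slab test against an axis-aligned box.
    tmin, tmax = -math.inf, math.inf
    for o, inv, lo, hi in zip(origin, inv_dir, box[0], box[1]):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmax >= max(tmin, 0.0)

def visible_leaves(node, origin, inv_dir, hits):
    # Descend only through bounding volumes the ray actually intersects.
    if not ray_hits_box(origin, inv_dir, node.box):
        return
    if not node.children:
        hits.append(node.box)
        return
    for child in node.children:
        visible_leaves(child, origin, inv_dir, hits)

leaf_a = Node(((0, 0, 4), (1, 1, 5)))
leaf_b = Node(((3, 3, 4), (4, 4, 5)))
root = Node(((0, 0, 4), (4, 4, 5)), [leaf_a, leaf_b])

direction = (0.05, 0.05, 1.0)
inv_dir = tuple(1.0 / d for d in direction)      # assumes no zero components
hits = []
visible_leaves(root, (0.5, 0.5, 0.0), inv_dir, hits)
print(hits)                                      # only the leaf the ray can see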

Journal ArticleDOI
Ward Whitt1
TL;DR: This paper facilitates applications of the continuous mapping theorem by determining when several important functions and sequences of functions preserve convergence.
Abstract: Many useful descriptions of stochastic models can be obtained from functional limit theorems (invariance principles or weak convergence theorems for probability measures on function spaces). These descriptions typically come from standard functional limit theorems via the continuous mapping theorem. This paper facilitates applications of the continuous mapping theorem by determining when several important functions and sequences of functions preserve convergence. The functions considered are composition, addition, composition plus addition, multiplication, supremum, reflecting barrier, first passage time, and time reversal. These functions provide means for proving new functional limit theorems from previous ones. They are useful, for example, to establish the stability or continuity of queues and other stochastic models.
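
For orientation, the theorem being applied is the standard continuous mapping theorem (stated here in its usual textbook form rather than the paper's notation): if $X_n \Rightarrow X$ in a metric space $S$ and $h : S \to S'$ is measurable with

$$ \Pr[\, X \in \mathrm{Disc}(h) \,] = 0, $$

where $\mathrm{Disc}(h)$ is the set of discontinuity points of $h$, then $h(X_n) \Rightarrow h(X)$. The paper's contribution is verifying this kind of continuity condition (and its extension to sequences of functions) for the maps listed above, so that new limit theorems for queues and related models follow from standard functional limit theorems.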

Journal ArticleDOI
TL;DR: Efficient algorithms are described for computing congruence closures in the general case and in two special cases, with applications to testing expression equivalence and to testing losslessness of joins in relational databases.
Abstract: Let G be a directed graph such that for each vertex v in G, the successors of v are ordered. Let C be any equivalence relation on the vertices of G. The congruence closure C* of C is the finest equivalence relation containing C and such that any two vertices having corresponding successors equivalent under C* are themselves equivalent under C*. Efficient algorithms are described for computing congruence closures in the general case and in the following two special cases: (i) G under C* is acyclic, and (ii) G is acyclic and C identifies a single pair of vertices. The use of these algorithms to test expression equivalence (a problem central to program verification) and to test losslessness of joins in relational databases is described.
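
A naive Python sketch of the definition (a simple fixpoint over union-find, illustrating what is computed rather than the efficient algorithms of the paper; vertices with no successors are kept distinct here, as distinct labels would keep them in a term graph):

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def congruence_closure(successors, pairs):
    uf = UnionFind(len(successors))
    for a, b in pairs:                  # start from the given relation C
        uf.union(a, b)
    changed = True
    while changed:                      # merge until the congruence rule no longer applies
        changed = False
        for u in range(len(successors)):
            for v in range(u + 1, len(successors)):
                if uf.find(u) == uf.find(v):
                    continue
                su, sv = successors[u], successors[v]
                if su and len(su) == len(sv) and \
                        all(uf.find(x) == uf.find(y) for x, y in zip(su, sv)):
                    uf.union(u, v)      # corresponding successors are equivalent
                    changed = True
    return uf

# Vertices 0..4 encode the terms f(a, b), f(a, c), a, b, c; identifying b and c
# forces f(a, b) and f(a, c) into the same congruence class.
successors = [[2, 3], [2, 4], [], [], []]
uf = congruence_closure(successors, [(3, 4)])
print(uf.find(0) == uf.find(1))         # True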

Patent
David S. Jordan1, Roy P. Weber1
18 Jan 1980
TL;DR: In this paper, a method is disclosed of using the public switched telephone network, including the CCIS (Common Channel Interoffice Signaling) system, to provide nationwide personal locator telephone service for called subscribers.
Abstract: A method is disclosed of using the public switched telephone network, including the CCIS (Common Channel Interoffice Signaling) system, to provide nationwide personal locator telephone service for called subscribers. In accordance with the service, a calling party may reach a called subscriber wherever the subscriber may be located by merely keying from a standard push-button telephone set a personal number unique to the subscriber. A telephone office having access to CCIS sends the number to a centralized data base. A telephone number at which the subscriber is located is returned to the access office, and the call is automatically completed to the telephone number. Other information may be returned to the access office to provide enhanced features. Such features include called subscriber unavailability announcements to the calling party, storage of a caller's telephone number, automatic call-back by the subscriber, and paging of the subscriber. A subscriber may update the information stored at the centralized data base from any telephone.

Journal ArticleDOI
Roy Radner1
TL;DR: In a game of a finite number of repetitions of a Cournot-type model of an industry, if firms are satisfied to get close to (but not necessarily achieve) their optimal responses to other firms' sequential strategies, then in the resulting non-cooperative equilibria of the sequential market game, (1) if the lifetime of the industry is large compared to the number of firms, there are equilibria corresponding to any given duration of the cartel, whereas (2) if an industry's lifetime will be close (in some sense) to the

Journal ArticleDOI
TL;DR: In this paper, a simple, empirical relation has been found between MOSFET parameters and the minimum channel length for which long-channel subthreshold behavior will be observed.
Abstract: As MOSFET dimensions are reduced, lower voltages, shallower junctions, thinner oxides, and heavier doping help to maintain long-channel behavior. A simple, empirical relation has been found between these parameters and the minimum channel length for which long-channel subthreshold behavior will be observed. This approximate relation provides an estimate for MOSFET parameters not requiring reduction of all dimensions by the same scale factor.
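
A commonly cited form of this kind of relation, given here only to illustrate the scaling and not as the paper's exact expression or constant, is

$$ L_{\min} \;\propto\; \left[\, x_j\, d\, (W_S + W_D)^2 \,\right]^{1/3}, $$

where $x_j$ is the junction depth, $d$ the gate-oxide thickness, and $W_S$, $W_D$ the source and drain depletion widths; devices with channel length above $L_{\min}$ are expected to show long-channel subthreshold behavior.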

Journal ArticleDOI
TL;DR: In this paper, the authors studied the problem of tracking a standard Wiener process x + w_t by a nonanticipating process ξ_t having ξ_0 = 0 and finite total variation (fuel), so as to minimize the expected discounted cost.
Abstract: We find the explicit solution to several new problems in stochastic control, among them the finite-fuel problem of optimally tracking a standard Wiener process x + w_t, started at x, by a nonanticipating process ξ_t having ξ_0 = 0 and finite total variation (fuel), so as to minimize the expected discounted cost. In n dimensions, the optimal process ξ is given thus: fuel is expended in a singular way to force x + w_t − ξ_t to reach and stay in the region remaining at time t, and f′ is a Bessel (n even) or an elementary function (n odd). Except for a possible initial jump in ξ, the process is a degenerate diffusion that reflects at fixed angles off the boundary and is expressible in terms of the local times on the boundary components.

Journal ArticleDOI
R. Crochiere1
TL;DR: A new structure and a simplified interpretation of short-time Fourier synthesis using synthesis windows is presented and it is shown how this structure can be used for analysis/synthesis applications which require different analysis and synthesis rates, such as time compression or expansion.
Abstract: In this correspondence we present a new structure and a simplified interpretation of short-time Fourier synthesis using synthesis windows. We show that this approach can be interpreted as a modification of the overlap-add method in which the inverse Fourier transform is windowed by the synthesis window prior to overlap-adding. This simplified interpretation results in a more efficient structure for short-time synthesis when a synthesis window is desired. In addition, we show how this structure can be used for analysis/synthesis applications which require different analysis and synthesis rates, such as time compression or expansion.
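
A compact Python sketch of short-time analysis/synthesis with a synthesis window (an illustrative weighted overlap-add arrangement; the windows, hop size, and normalization are assumptions, not the correspondence's specific structure):

# Analyze with a window, (optionally) modify the spectrum, inverse transform,
# window with the synthesis window, and overlap-add; the accumulated window
# weight is divided out at the end.
import numpy as np

def stft_synthesis(x, frame_len=256, hop=64):
    analysis_win = np.hanning(frame_len)
    synthesis_win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    y = np.zeros(len(x))
    norm = np.zeros(len(x))
    for k in range(n_frames):
        start = k * hop
        frame = x[start:start + frame_len] * analysis_win
        spectrum = np.fft.rfft(frame)                       # (modify here if desired)
        out = np.fft.irfft(spectrum, frame_len) * synthesis_win
        y[start:start + frame_len] += out                   # overlap-add
        norm[start:start + frame_len] += analysis_win * synthesis_win
    nz = norm > 1e-8
    y[nz] /= norm[nz]                   # undo the combined window weighting
    return y

x = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000.0)
y = stft_synthesis(x)
print(np.max(np.abs(x[256:-256] - y[256:-256])))            # near-zero interior error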

Journal ArticleDOI
Judith E. Tschirgi1
TL;DR: This study investigated the asserted differences in reasoning between adults and second, fourth, and sixth graders in a manipulation-of-variables task using common everyday situations.
Abstract: Tschirgi, Judith E. Sensible Reasoning: A Hypothesis about Hypotheses. Child Development, 1980, 51, 1-10. This study investigated the asserted differences in reasoning between adults and second, fourth, and sixth graders in a manipulation-of-variables task using common everyday situations. It is proposed that hypothesis-testing skills used in this task do not necessarily develop from an understanding of the logic of classes. These skills may develop through experience in manipulating elements in those naturally occurring situations where it is desirable to eliminate negative outcomes or reproduce positive outcomes. Subjects were given 8 multivariate stories with different outcomes, either good or bad, and asked to choose a test to determine which element caused the outcome. Subjects were expected to choose a test systematically on the basis of sensible considerations of the outcome value of any situation, not on purely logical grounds. For all age groups the general pattern of responses was the same. A logical disconfirming test (vary only the hypothesized variable) was preferred in bad-outcome situations, and a logically inappropriate confirming test (repeat the hypothesized variable in a different context) was preferred in good-outcome stories. The main developmental trend was a shift in choice of strategy to eliminate the hypothesized variable in bad-outcome stories. There was no relation between children's performance on class-inclusion problems and their story-problem solutions. These results are discussed in terms of "sensible" reasoning and problem-solving skills.

Journal ArticleDOI
TL;DR: In this paper, Raman scattering from folded longitudinal phonons in a GaAs-AlAs superlattice is observed, and the resonant behavior for laser frequencies near the gap of this sample is studied.
Abstract: Raman scattering from folded longitudinal phonons in a GaAs-AlAs superlattice is observed, and the resonant behavior for laser frequencies near the gap of this sample is studied. A quantitative explanation of the data is given in terms of a simple theory which involves the Kronig-Penney model for the electrons and considers the phonons in the elastic continuum limit.

Journal ArticleDOI
Ron Graham1, Neil J. A. Sloane1
TL;DR: Several lower bounds for $A(n, 2\delta, w)$ are given, better than the "Gilbert bound" in most cases.
Abstract: Let $A(n, 2\delta, w)$ denote the maximum number of codewords in any binary code of length $n$, constant weight $w$, and Hamming distance $2\delta$. Several lower bounds for $A(n, 2\delta, w)$ are given. For $w$ and $\delta$ fixed, $A(n, 2\delta, w) \geq n^{w-\delta+1}/w!$ and $A(n, 4, w) \sim n^{w-1}/w!$ as $n \rightarrow \infty$. In most cases these are better than the "Gilbert bound." Revised tables of $A(n, 2\delta, w)$ are given in the range $n \leq 24$ and $\delta \leq 5$.

Journal ArticleDOI
B. Lengeler1, P. Eisenberger1
TL;DR: In this article, a procedure for determining interatomic distances, coordination numbers, and mean relative displacements from extended x-ray absorption fine structure (EXAFS) data on disordered metallic systems is described.
Abstract: A procedure is described for determining interatomic distances $r$, coordination numbers $N$, and mean relative displacements $\sigma$ from extended x-ray absorption fine structure (EXAFS) data on disordered metallic systems. This procedure includes the case when the spectrometer has finite energy resolution. It is shown how the absolute value of the EXAFS can be determined. A way has been developed to determine the inelastic loss factors of the photoelectrons from model compounds of known structure. The procedure has been checked on four metallic systems (Cu, $\alpha$-AlCu, $\theta'$-AlCu, and $\theta$-AlCu). The accuracy of the analysis is 1% for the interatomic distances, 15% for the coordination numbers, and 20% for $\sigma$. The reliability of the data analysis, which depends strongly on the number of different atoms in a shell, on the magnitude of their mean relative displacement, and on the difference in their backscattering power, will be discussed at length.
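
For context, analyses of this kind are usually based on the single-scattering EXAFS expression, written here in its standard textbook form rather than the paper's notation:

$$ \chi(k) \;=\; \sum_j \frac{N_j\, F_j(k)}{k R_j^{2}}\; e^{-2\sigma_j^{2}k^{2}}\, e^{-2R_j/\lambda(k)}\, \sin\!\bigl(2kR_j + \phi_j(k)\bigr), $$

where, for each coordination shell $j$, $N_j$ is the coordination number, $R_j$ the interatomic distance, $\sigma_j$ the mean relative displacement, $F_j(k)$ the backscattering amplitude, $\phi_j(k)$ the total phase shift, and $\lambda(k)$ accounts for inelastic losses of the photoelectron.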

Journal ArticleDOI
Dietrich Marcuse1
TL;DR: A theory is presented of the propagation of Gaussian pulses in single-mode optical fibers by expanding the propagation constant in a Taylor series that includes the third derivative with respect to frequency.
Abstract: A theory is presented of the propagation of Gaussian pulses in single-mode optical fibers by expanding the propagation constant in a Taylor series that includes the third derivative with respect to frequency. The light source is assumed to have a Gaussian spectral distribution whose width relative to the width of the Gaussian signal pulse is arbitrary. Formulas are derived for the spectrum of the ensemble average of the optical pulse, from which the shape of the average pulse itself is obtained by the fast Fourier transform. Also derived is an expression for the rms pulse width. The theory is applicable at all wavelengths including the vicinity of the zero first-order dispersion point.
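
The expansion referred to is the standard one about the carrier frequency $\omega_0$ (written here in the usual textbook notation, not necessarily the paper's):

$$ \beta(\omega) \;\approx\; \beta_0 + \beta_1(\omega-\omega_0) + \tfrac{1}{2}\beta_2(\omega-\omega_0)^2 + \tfrac{1}{6}\beta_3(\omega-\omega_0)^3, \qquad \beta_m = \left.\frac{d^m\beta}{d\omega^m}\right|_{\omega_0}, $$

with $\beta_1$ the inverse group velocity, $\beta_2$ the group-velocity dispersion, and $\beta_3$ the third-order term that dominates near the zero-dispersion wavelength (the "zero first-order dispersion point" of the abstract).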

Journal ArticleDOI
Peter Burt1, Bela Julesz1
09 May 1980-Science
TL;DR: It is found that nearby objects modify the absolute disparity limit for binocular fusion, a result that sheds new light on several enigmatic phenomena in stereopsis.
Abstract: Ever since Panum, it has been commonly assumed that there is an absolute disparity limit for binocular fusion. It is now found that nearby objects modify this disparity limit. This result sheds new light on several enigmatic phenomena in stereopsis.

Journal ArticleDOI
A. Hasegawa1, W. Brinkman1
TL;DR: In this article, a tunable coherent infrared (IR) source is presented, which utilizes the sideband produced by modulational instability of an optical signal in a fiber which results from a combination of anomalous dispersion and the nonlinear Kerr effect.
Abstract: A new tunable coherent infrared (IR) source is presented. It utilizes the sideband produced by modulational instability of an optical signal in a fiber, which results from a combination of anomalous dispersion and the nonlinear Kerr effect. The generated frequency is proportional to the square root of the optical pump power. A coherent IR signal is generated by extracting the sideband.
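
The standard expression for the modulational-instability peak-gain offset, quoted here as an illustration of the tuning law rather than as the paper's own formula, is

$$ \Omega_{\max} \;=\; \sqrt{\frac{2\gamma P_0}{|\beta_2|}}, $$

where $P_0$ is the pump power, $\gamma$ the fiber's Kerr nonlinearity coefficient, and $\beta_2 < 0$ the (anomalous) group-velocity dispersion; the $\sqrt{P_0}$ dependence is the tuning mechanism described above.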

Journal ArticleDOI
TL;DR: In this paper, a single-parameter model representing voids in an otherwise homogeneous medium is shown to account for the major discrepancies in the above-band-gap spectra for Au samples prepared in different ways.
Abstract: Reported optical data for Au are investigated to determine the origin of their differences. A single-parameter model representing voids in an otherwise homogeneous medium is shown to account for the major discrepancies in the above-band-gap ($E \gtrsim 2.5$ eV) $\epsilon_2$ spectra for Au samples prepared in different ways. Ellipsometric measurements on transmission-electron-microscopy (TEM) characterized thin-film samples over an energy range of 1.5-5.8 eV support the void model but show the importance of measuring both $\epsilon_1$ and $\epsilon_2$ to separate volume from surface film effects. Differences in below-band-gap data arise from at least two mechanisms: grain-size effects in samples with a large volume fraction of imperfections, and increased surface scattering, probably from thermal grooving, in annealed samples. Two mechanisms are required because the lowest values of $\epsilon_2$ in the Drude region are shown to occur for unannealed but smooth, moderately thick film samples evaporated on room-temperature substrates. Our best below-band-gap data, taken on electron-beam-evaporated samples, show directly the linearly increasing $d$-band to Fermi-level transition threshold near $X$ at 1.8 eV, unobstructed by the Drude tail. These data have an $\epsilon_2$ value at 1.5 eV equal, to within experimental error, to that calculated from the scattering lifetime derived from the known resistivity of the bulk metal, indicating a grain size and quality better than anything previously used for optical measurements in the Drude region.

Patent
Daniel Sheinbein1
18 Jan 1980
TL;DR: In this paper, a telephone system is disclosed for providing customized telephone services, such as selective call waiting, selective call forwarding, priority ringing, and many other customized services are contemplated.
Abstract: A telephone system is disclosed for providing customized telephone services. A called customer can screen calls incoming to his station based on the identity of the calling line. The calling line's identity is forwarded to the switching office containing the called customer's screening memory. The memory is interrogated to ascertain the call disposition based on information priorly put in the memory by the called customer. Selective call waiting, selective call forwarding, priority ringing, and many other customized services are contemplated.