
Showing papers by "Alcatel-Lucent published in 1997"


Journal ArticleDOI
TL;DR: The results show that on the United States postal service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system, and the SV approach is thus not only theoretically well-founded but also superior in a practical application.
Abstract: The support vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and threshold that minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering, and the weights are computed using error backpropagation. We consider three machines, namely, a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the United States postal service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system. The SV approach is thus not only theoretically well-founded but also superior in a practical application.
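As a rough illustration of the three machines being compared, a minimal scikit-learn/NumPy sketch on synthetic two-class data might look like the following; all parameter values and the data are illustrative assumptions, not those used in the paper, and least squares stands in for the backpropagation training of the output weights.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-in for the USPS digits: a synthetic two-class problem with +/-1 labels.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
y = 2 * y - 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def rbf_features(X, centers, gamma):
    # Gaussian basis functions exp(-gamma * ||x - c||^2), one column per center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

gamma = 0.05

# 1) Classical RBF machine: centers from k-means, output weights fitted to the data
#    (the paper trains the weights by error backpropagation; least squares is a stand-in).
centers = KMeans(n_clusters=40, n_init=10, random_state=0).fit(X_train).cluster_centers_
w, *_ = np.linalg.lstsq(rbf_features(X_train, centers, gamma), y_train, rcond=None)
acc_rbf = np.mean(np.sign(rbf_features(X_test, centers, gamma) @ w) == y_test)

# 2) SV machine with Gaussian kernel: centers (support vectors), weights and threshold
#    all fall out of the margin-maximization problem.
svm = SVC(kernel="rbf", gamma=gamma, C=10.0).fit(X_train, y_train)
acc_svm = svm.score(X_test, y_test)

# 3) Hybrid system: support vectors reused as RBF centers, weights retrained separately.
sv = svm.support_vectors_
w_sv, *_ = np.linalg.lstsq(rbf_features(X_train, sv, gamma), y_train, rcond=None)
acc_hyb = np.mean(np.sign(rbf_features(X_test, sv, gamma) @ w_sv) == y_test)

print(f"RBF {acc_rbf:.3f}  SVM {acc_svm:.3f}  hybrid {acc_hyb:.3f}")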

1,385 citations


Journal ArticleDOI
TL;DR: Various aspects of the system design of WaveLAN-II and characteristics of its antenna, radio-frequency (RF) front-end, digital signal processor (DSP) transceiver chip, and medium access controller (MAC) chip are discussed.
Abstract: In July 1997 the Institute of Electrical and Electronics Engineers (IEEE) completed standard 802.11 for wireless local area networks (LANs). WaveLAN®-II, to be released early in 1998, offers compatibility with the IEEE 802.11 standard for operation in the 2.4-GHz band. It is the successor to WaveLAN-I, which has been in the market since 1991. As a next-generation wireless LAN product, WaveLAN-II has many enhancements to improve performance in various areas. An IEEE 802.11 direct sequence spread spectrum (DSSS) product, WaveLAN-II supports the basic bit rates of 1 and 2 Mb/s, but it can also provide enhanced bit rates as high as 10 Mb/s. This paper discusses various aspects of the system design of WaveLAN-II and characteristics of its antenna, radio-frequency (RF) front-end, digital signal processor (DSP) transceiver chip, and medium access controller (MAC) chip.

1,353 citations


Journal ArticleDOI
Arthur Ashkin1
TL;DR: Early developments in the field leading to the demonstration of cooling and trapping of neutral atoms in atomic physics and to the first use of optical tweezers traps in biology are reviewed.
Abstract: The techniques of optical trapping and manipulation of neutral particles by lasers provide unique means to control the dynamics of small particles. These new experimental methods have played a revolutionary role in areas of the physical and biological sciences. This paper reviews the early developments in the field leading to the demonstration of cooling and trapping of neutral atoms in atomic physics and to the first use of optical tweezers traps in biology. Some further major achievements of these rapidly developing methods also are considered.

1,346 citations


Proceedings ArticleDOI
09 Jun 1997
TL;DR: The application of Bayesian regularization to the training of feedforward neural networks is described, using a Gauss-Newton approximation to the Hessian matrix to reduce the computational overhead.
Abstract: This paper describes the application of Bayesian regularization to the training of feedforward neural networks. A Gauss-Newton approximation to the Hessian matrix, which can be conveniently implemented within the framework of the Levenberg-Marquardt algorithm, is used to reduce the computational overhead. The resulting algorithm is demonstrated on a simple test problem and is then applied to three practical problems. The results demonstrate that the algorithm produces networks which have excellent generalization capabilities.
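A minimal NumPy sketch of the hyperparameter re-estimation at the heart of this scheme, assuming the error Jacobian J of the network has already been computed inside a Levenberg-Marquardt loop; the variable names and the tiny usage example are illustrative, not the authors' implementation.

import numpy as np

def bayes_reg_update(J, errors, weights, alpha, beta):
    # Objective F = beta*E_D + alpha*E_W, with E_D, E_W the data and weight error terms.
    E_D = 0.5 * np.sum(errors ** 2)
    E_W = 0.5 * np.sum(weights ** 2)
    # Gauss-Newton approximation to the Hessian of F, as used inside Levenberg-Marquardt.
    H = beta * (J.T @ J) + alpha * np.eye(weights.size)
    # Effective number of well-determined parameters.
    gamma = weights.size - alpha * np.trace(np.linalg.inv(H))
    # Evidence-maximizing updates of the regularization hyperparameters.
    return gamma / (2.0 * E_W), (errors.size - gamma) / (2.0 * E_D), gamma

# Tiny usage example with random placeholders for the Jacobian, residuals and weights.
rng = np.random.default_rng(0)
J, e, w = rng.normal(size=(100, 7)), rng.normal(size=100), rng.normal(size=7)
alpha, beta = bayes_reg_update(J, e, w, alpha=0.01, beta=1.0)[:2]
print(alpha, beta)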

1,338 citations


Journal ArticleDOI
TL;DR: It is shown that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries, and this exponential decrease holds for query learning of perceptrons.
Abstract: We analyze the “query by committee” algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of perceptrons.
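A toy sketch of the filtering rule for perceptrons: query a label only when the two committee members disagree. Properly sampling two hypotheses from the version space is the hard part; here the committee members are simply perceptrons updated on the queried examples, which is an illustrative simplification, not the paper's Gibbs-sampling assumption.

import numpy as np

rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)                          # unknown target perceptron
committee = [rng.normal(size=d) for _ in range(2)]   # crude stand-in for two version-space samples

labeled = []
for t in range(2000):
    x = rng.normal(size=d)                           # random input stream
    votes = [np.sign(w @ x) for w in committee]
    if votes[0] != votes[1]:                         # committee disagrees -> informative, ask for the label
        y = np.sign(w_true @ x)
        labeled.append((x, y))
        for i, w in enumerate(committee):            # simple perceptron updates on the queried example
            if np.sign(w @ x) != y:
                committee[i] = w + y * x

print(f"queried {len(labeled)} labels out of 2000 inputs")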

1,234 citations


Journal ArticleDOI
01 Jan 1997-Nature
TL;DR: In this article, it is shown that the slow sedimentation of colloidal particles onto a patterned substrate (or template) can direct the crystallization of bulk colloidal crystals, and so permits tailoring of the lattice structure, orientation and size of the resulting crystals.
Abstract: Colloidal crystals are three-dimensional periodic structures formed from small particles suspended in solution. They have important technological uses as optical filters [1–3], switches [4] and materials with photonic band gaps [5,6], and they also provide convenient model systems for fundamental studies of crystallization and melting [7–10]. Unfortunately, applications of colloidal crystals are greatly restricted by practical difficulties encountered in synthesizing large single crystals with adjustable crystal orientation [11]. Here we show that the slow sedimentation of colloidal particles onto a patterned substrate (or template) can direct the crystallization of bulk colloidal crystals, and so permit tailoring of the lattice structure, orientation and size of the resulting crystals: we refer to this process as 'colloidal epitaxy'. We also show that, by using silica spheres synthesized with a fluorescent core [12,13], the defect structures in the colloidal crystals that result from an intentional lattice mismatch of the template can be studied by confocal microscopy [14]. We suggest that colloidal epitaxy will open new ways to design and fabricate materials based on colloidal crystals and also allow quantitative studies of heterogeneous crystallization in real space.

1,148 citations


Journal ArticleDOI
TL;DR: Modifications that may be required both at the transport and network layers to provide good end-to-end performance over high-speed WANs are indicated.
Abstract: This paper examines the performance of TCP/IP, the Internet data transport protocol, over wide-area networks (WANs) in which data traffic could coexist with real-time traffic such as voice and video. Specifically, we attempt to develop a basic understanding, using analysis and simulation, of the properties of TCP/IP in a regime where: (1) the bandwidth-delay product of the network is high compared to the buffering in the network and (2) packets may incur random loss (e.g., due to transient congestion caused by fluctuations in real-time traffic, or wireless links in the path of the connection). The following key results are obtained. First, random loss leads to significant throughput deterioration when the product of the loss probability and the square of the bandwidth-delay product is larger than one. Second, for multiple connections sharing a bottleneck link, TCP is grossly unfair toward connections with higher round-trip delays. This means that a simple first in first out (FIFO) queueing discipline might not suffice for data traffic in WANs. Finally, while the Reno version of TCP produces less bursty traffic than the original Tahoe version, it is less robust than the latter when successive losses are closely spaced. We conclude by indicating modifications that may be required both at the transport and network layers to provide good end-to-end performance over high-speed WANs.
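The first result can be checked with a back-of-the-envelope calculation: throughput degrades significantly once the loss probability times the square of the bandwidth-delay product (in packets) exceeds one. The link parameters below are illustrative only.

# Illustrative numbers: a 45 Mb/s WAN path, 60 ms round-trip time, 1500-byte packets.
bandwidth_bps = 45e6
rtt_s = 0.060
packet_bits = 1500 * 8

bdp_packets = bandwidth_bps * rtt_s / packet_bits      # bandwidth-delay product W in packets
for p in (1e-6, 1e-5, 1e-4):
    criterion = p * bdp_packets ** 2                   # degradation severe when this exceeds 1
    print(f"loss p={p:.0e}: p*W^2 = {criterion:.2f} -> "
          f"{'significant throughput loss' if criterion > 1 else 'tolerable'}")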

979 citations


Proceedings ArticleDOI
Patrice Godefroid1
01 Jan 1997
TL;DR: This paper discusses how model checking can be extended to deal directly with "actual" descriptions of concurrent systems, e.g., implementations of communication protocols written in programming languages such as C or C++, and introduces a new search technique that is suitable for exploring the state spaces of such systems.
Abstract: Verification by state-space exploration, also often referred to as "model checking", is an effective method for analyzing the correctness of concurrent reactive systems (e.g., communication protocols). Unfortunately, existing model-checking techniques are restricted to the verification of properties of models, i.e., abstractions, of concurrent systems. In this paper, we discuss how model checking can be extended to deal directly with "actual" descriptions of concurrent systems, e.g., implementations of communication protocols written in programming languages such as C or C++. We then introduce a new search technique that is suitable for exploring the state spaces of such systems. This algorithm has been implemented in VeriSoft, a tool for systematically exploring the state spaces of systems composed of several concurrent processes executing arbitrary C code. As an example of application, we describe how VeriSoft successfully discovered an error in a 2500-line C program controlling robots operating in an unpredictable environment.

867 citations


Journal ArticleDOI
09 Jan 1997-Nature
TL;DR: It is shown that two-photon excitation laser scanning microscopy can penetrate the highly scattering tissue of the intact brain and is used to measure sensory stimulus-induced dendritic [Ca2+] dynamics of layer 2/3 pyramidal neurons of the rat primary vibrissa cortex in vivo.
Abstract: The dendrites of mammalian pyramidal neurons contain a rich collection of active conductances that can support Na+ and Ca2+ action potentials (for a review see ref. 1). The presence, site of initiation, and direction of propagation of Na+ and Ca2+ action potentials are, however, controversial, and seem to be sensitive to resting membrane potential, ionic composition, and degree of channel inactivation, and depend on the intensity and pattern of synaptic stimulation. This makes it difficult to extrapolate from in vitro experiments to the situation in the intact brain. Here we show that two-photon excitation laser scanning microscopy can penetrate the highly scattering tissue of the intact brain. We used this property to measure sensory stimulus-induced dendritic [Ca2+] dynamics of layer 2/3 pyramidal neurons of the rat primary vibrissa (Sm1) cortex in vivo. Simultaneous recordings of intracellular voltage and dendritic [Ca2+] dynamics during whisker stimulation or current injection showed increases in [Ca2+] only in coincidence with Na+ action potentials. The amplitude of these [Ca2+] transients at a given location was approximately proportional to the number of Na+ action potentials in a short burst. The amplitude for a given number of action potentials was greatest in the proximal apical dendrite and declined steeply with increasing distance from the soma, with little Ca2+ accumulation in the most distal branches, in layer 1. This suggests that widespread Ca2+ action potentials were not generated, and any significant [Ca2+] increase depends on somatically triggered Na+ action potentials.

817 citations


Journal ArticleDOI
01 Mar 1997-Neuron
TL;DR: The unique niche that light microscopy occupies in biology is based on the ability to perform observations on living tissue at relatively high spatial resolution, but this resolution is limited by the wavelength of light and does not rival that of electron microscopy.

734 citations


Journal ArticleDOI
TL;DR: In this article, the evolution of the structural properties of $A_{1-x}A'_x\mathrm{MnO}_3$ across the metal-insulator transition was determined as a function of temperature, average $A$-site radius and applied pressure for the "optimal" doping range $x = 0.25$, 0.30, by using high-resolution neutron powder diffraction.
Abstract: The evolution of the structural properties of $A_{1-x}A'_x\mathrm{MnO}_3$ was determined as a function of temperature, average $A$-site radius $\langle r_A \rangle$, and applied pressure for the "optimal" doping range $x = 0.25$, 0.30, by using high-resolution neutron powder diffraction. The metal-insulator transition, which can be induced both as a function of temperature and of $\langle r_A \rangle$, was found to be accompanied by significant structural changes. Both the paramagnetic charge-localized phase, which exists at high temperatures for all values of $\langle r_A \rangle$, and the spin-canted ferromagnetic charge-ordered phase, which is found at low temperatures for low values of $\langle r_A \rangle$, are characterized by large metric distortions of the $\mathrm{MnO}_6$ octahedra. These structural distortions are mainly incoherent with respect to the space-group symmetry, with a significant coherent component only at low $\langle r_A \rangle$. These distortions decrease abruptly at the transition into the ferromagnetic metal phase. These observations are consistent with the hypothesis that, in the insulating phases, lattice distortions of the Jahn-Teller type, in addition to spin scattering, provide a charge-localization mechanism. The evolution of the average structural parameters indicates that the variation of the electronic bandwidth is the driving force for the evolution of the insulator-to-metal transition at $T_C$ as a function of "chemical" and applied pressure.

Journal ArticleDOI
TL;DR: The issue of speech recognizer training from a broad perspective with root in the classical Bayes decision theory is discussed, and the superiority of the minimum classification error (MCE) method over the distribution estimation method is shown by providing the results of several key speech recognition experiments.
Abstract: A critical component in the pattern matching approach to speech recognition is the training algorithm, which aims at producing typical (reference) patterns or models for accurate pattern comparison. In this paper, we discuss the issue of speech recognizer training from a broad perspective with root in the classical Bayes decision theory. We differentiate the method of classifier design by way of distribution estimation and the discriminative method of minimizing classification error rate based on the fact that in many realistic applications, such as speech recognition, the real signal distribution form is rarely known precisely. We argue that traditional methods relying on distribution estimation are suboptimal when the assumed distribution form is not the true one, and that "optimality" in distribution estimation does not automatically translate into "optimality" in classifier design. We compare the two different methods in the context of hidden Markov modeling for speech recognition. We show the superiority of the minimum classification error (MCE) method over the distribution estimation method by providing the results of several key speech recognition experiments. In general, the MCE method provides a significant reduction of recognition error rate.
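A sketch of the smoothed error count that MCE training minimizes, for a generic classifier with one discriminant-function score per class: a soft misclassification measure is passed through a sigmoid, and its gradient with respect to the scores drives gradient-descent training. The eta and xi values and the toy usage are illustrative assumptions.

import numpy as np

def mce_loss_and_grad(scores, correct, eta=1.0, xi=2.0):
    # scores: discriminant-function values g_j(x) for every class; correct: true class index.
    others = np.delete(scores, correct)
    # Misclassification measure: positive when the token is (softly) misclassified.
    d = -scores[correct] + np.log(np.mean(np.exp(xi * others))) / xi
    loss = 1.0 / (1.0 + np.exp(-eta * d))        # sigmoid turns d into a smooth 0/1 error count
    # Chain rule back to the scores, for use in gradient-descent training.
    dl_dd = eta * loss * (1.0 - loss)
    grad = np.zeros_like(scores)
    grad[correct] = -dl_dd
    grad[np.arange(scores.size) != correct] = dl_dd * np.exp(xi * others) / np.sum(np.exp(xi * others))
    return loss, grad

scores = np.array([2.0, 1.4, 0.3])               # toy discriminant scores for three classes
print(mce_loss_and_grad(scores, correct=0))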

Journal ArticleDOI
TL;DR: In this article, a printed field-effect transistor (FET) is reported, which has a polyimide dielectric layer, a regioregular poly(3-alkylthiophene) semiconducting layer, and two silver electrodes, all of which are printed on an ITO-coated plastic substrate.
Abstract: A printed field-effect transistor (FET) is reported, in which all the essential components are screen-printed for the first time. This transistor has a polyimide dielectric layer, a regioregular poly(3-alkylthiophene) semiconducting layer, and two silver electrodes, all of which are printed on an ITO-coated plastic substrate.

Journal ArticleDOI
06 Nov 1997-Nature
TL;DR: In this paper, the authors describe a novel class of magnetoresistive compounds, the silver chalcogenides, and show that slightly altering the stoichiometry can lead to a marked increase in the magnetic response.
Abstract: Several materials have been identified over the past few years as promising candidates for the development of new generations of magnetoresistive devices. These range from artificially engineered magnetic multilayers and granular alloys, in which the magnetic-field response of interfacial spins modulates electron transport to give rise to 'giant' magnetoresistance, to the manganite perovskites, in which metal-insulator transitions driven by a magnetic field give rise to a 'colossal' magnetoresistive response (albeit at very high fields). Here we describe a hitherto unexplored class of magnetoresistive compounds, the silver chalcogenides. At high temperatures, the compounds Ag_2S, Ag_2Se and Ag_2Te are superionic conductors; below ~400 K, ion migration is effectively frozen and the compounds are non-magnetic semiconductors that exhibit no appreciable magnetoresistance. We show that slightly altering the stoichiometry can lead to a marked increase in the magnetic response. At room temperature and in a magnetic field of ~55 kOe, Ag_(2+δ)Se and Ag_(2+δ)Te show resistance increases of up to 200%, which are comparable with the colossal-magnetoresistance materials. Moreover, the resistance of our most responsive samples exhibits an unusual linear dependence on magnetic field, indicating both a potentially useful response down to fields of practical importance and a peculiarly long length scale associated with the underlying mechanism.

Journal ArticleDOI
31 Oct 1997-Science
TL;DR: The force-velocity relation fits well to a decaying exponential, in agreement with theoretical models, but the rate of decay is faster than predicted.
Abstract: Forces generated by protein polymerization are important for various forms of cellular motility. Assembling microtubules, for instance, are believed to exert pushing forces on chromosomes during mitosis. The force that a single microtubule can generate was measured by attaching microtubules to a substrate at one end and causing them to push against a microfabricated rigid barrier at the other end. The subsequent buckling of the microtubules was analyzed to determine both the force on each microtubule end and the growth velocity. The growth velocity decreased from 1.2 micrometers per minute at zero force to 0.2 micrometer per minute at forces of 3 to 4 piconewtons. The force-velocity relation fits well to a decaying exponential, in agreement with theoretical models, but the rate of decay is faster than predicted.
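The exponential force-velocity relation described above can be fit with a simple log-linear regression; the (force, velocity) readings below are made-up illustrative values, not the paper's data.

import numpy as np

# Hypothetical (force, velocity) measurements in pN and um/min, for illustration only.
F = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
v = np.array([1.2, 0.8, 0.5, 0.3, 0.2])

# Fit v(F) = v0 * exp(-F / F0) by a straight-line fit to log(v).
slope, intercept = np.polyfit(F, np.log(v), 1)
v0, F0 = np.exp(intercept), -1.0 / slope
print(f"v0 ~ {v0:.2f} um/min, decay force F0 ~ {F0:.2f} pN")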

Proceedings ArticleDOI
01 May 1997
TL;DR: This paper extends previous work on efficient path profiling to flow sensitive profiling, which associates hardware performance metrics with a path through a procedure, and describes a data structure, the calling context tree, that efficiently captures calling contexts for procedure-level measurements.
Abstract: A program profile attributes run-time costs to portions of a program's execution. Most profiling systems suffer from two major deficiencies: first, they only apportion simple metrics, such as execution frequency or elapsed time to static, syntactic units, such as procedures or statements; second, they aggressively reduce the volume of information collected and reported, although aggregation can hide striking differences in program behavior. This paper addresses both concerns by exploiting the hardware counters available in most modern processors and by incorporating two concepts from data flow analysis--flow and context sensitivity--to report more context for measurements. This paper extends our previous work on efficient path profiling to flow sensitive profiling, which associates hardware performance metrics with a path through a procedure. In addition, it describes a data structure, the calling context tree, that efficiently captures calling contexts for procedure-level measurements. Our measurements show that the SPEC95 benchmarks execute a small number (3--28) of hot paths that account for 9--98% of their L1 data cache misses. Moreover, these hot paths are concentrated in a few routines, which have complex dynamic behavior.
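A minimal sketch of the calling-context-tree idea described above: the same procedure reached through different chains of callers gets its own node, and metrics are charged to the node on top of the context stack. The class and procedure names and the toy usage are illustrative, not the paper's implementation.

from collections import defaultdict

class CCTNode:
    """One calling context: a procedure reached through a particular chain of callers."""
    def __init__(self, name):
        self.name = name
        self.metrics = defaultdict(int)   # e.g. cycles or L1 misses attributed to this context
        self.children = {}                # callee name -> CCTNode

class CallingContextTree:
    def __init__(self):
        self.root = CCTNode("<root>")
        self.stack = [self.root]

    def enter(self, proc):
        top = self.stack[-1]
        node = top.children.setdefault(proc, CCTNode(proc))
        self.stack.append(node)           # same procedure under different callers -> different nodes

    def record(self, metric, amount):
        self.stack[-1].metrics[metric] += amount

    def exit(self):
        self.stack.pop()

# Toy usage: main -> a -> b and main -> b are kept as distinct contexts.
cct = CallingContextTree()
cct.enter("main"); cct.enter("a"); cct.enter("b"); cct.record("l1_misses", 7); cct.exit(); cct.exit()
cct.enter("b"); cct.record("l1_misses", 2); cct.exit(); cct.exit()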

Patent
22 Jan 1997
TL;DR: In this paper, a central proxy system includes computer-executable routines that process site-specific substitute identifiers constructed from data specific to the users, transmit the substitute identifiers to the server sites, retransmit browsing commands received from the users to the server sites, and remove portions of the browsing commands that would identify the users.
Abstract: For use with a network having server sites capable of being browsed by users based on identifiers received into the server sites and personal to the users, alternative proxy systems for providing substitute identifiers to the server sites that allow the users to browse the server sites anonymously via the proxy system. A central proxy system includes computer-executable routines that process site-specific substitute identifiers constructed from data specific to the users, that transmit the substitute identifiers to the server sites, that retransmit browsing commands received from the users to the server sites, and that remove portions of the browsing commands that would identify the users to the server sites. The foregoing functionality is performed consistently by the central proxy system during subsequent visits to a given server site as the same site-specific substitute identifiers are reused. Consistent use of the site-specific substitute identifiers enables the server site to recognize a returning user and, possibly, provide personalized service.

Proceedings ArticleDOI
05 Jan 1997
TL;DR: This work presents theoretical algorithms for sorting and searching multikey data, and derive from them practical C implementations for applications in which keys are character strings, and presents extensions to more complex string problems, such as partial-match searching.
Abstract: We present theoretical algorithms for sorting and searching multikey data, and derive from them practical C implementations for applications in which keys are character strings. The sorting algorithm blends Quicksort and radix sort; it is competitive with the best known C sort codes. The searching algorithm blends tries and binary search trees; it is faster than hashing and other commonly used search methods. The basic ideas behind the algorithms date back at least to the 1960s, but their practical utility has been overlooked. We also present extensions to more complex string problems, such as partial-match searching.
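The paper gives C implementations; the searching structure it describes, a ternary search tree blending a trie with binary search trees, can be sketched in a few lines of Python for illustration.

class TSTNode:
    __slots__ = ("ch", "lo", "eq", "hi", "end")
    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.end = ch, None, None, None, False

def tst_insert(node, s, i=0):
    ch = s[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, s, i)          # branch on characters smaller than this node's
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, s, i)          # branch on characters larger than this node's
    elif i + 1 < len(s):
        node.eq = tst_insert(node.eq, s, i + 1)      # character matches: descend to the next position
    else:
        node.end = True                              # mark the end of a stored string
    return node

def tst_search(node, s, i=0):
    while node is not None:
        if s[i] < node.ch:
            node = node.lo
        elif s[i] > node.ch:
            node = node.hi
        elif i + 1 < len(s):
            node, i = node.eq, i + 1
        else:
            return node.end
    return False

root = None
for word in ["as", "at", "cup", "cute", "he", "us", "i"]:
    root = tst_insert(root, word)
print(tst_search(root, "cute"), tst_search(root, "cut"))   # True False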

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate tomographic T-ray imaging using the timing information present in terahertz (THz) pulses in a reflection geometry, where the time delays of these pulses are used to determine the positions of the discontinuities along the propagation direction.
Abstract: We demonstrate tomographic T-ray imaging, using the timing information present in terahertz (THz) pulses in a reflection geometry. THz pulses are reflected from refractive-index discontinuities inside an object, and the time delays of these pulses are used to determine the positions of the discontinuities along the propagation direction. In this fashion a tomographic image can be constructed.
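The depth reconstruction step amounts to converting each echo's time delay into a position, depth = c * Δt / (2n); the delay values and refractive index below are illustrative, not taken from the paper.

# Convert THz echo time delays into positions of refractive-index discontinuities.
c = 2.998e8                      # speed of light, m/s
n = 1.5                          # assumed refractive index of the material
delays_ps = [0.0, 2.4, 6.8]      # arrival times of reflected THz pulses, picoseconds

for dt in delays_ps:
    depth_mm = c * (dt * 1e-12) / (2.0 * n) * 1e3   # factor 2: the pulse travels down and back
    print(f"echo at {dt:4.1f} ps -> interface at {depth_mm:.3f} mm")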

Journal ArticleDOI
TL;DR: The CMM adopted the opposite of the quick-fix silver bullet philosophy: it was intended to be a coherent, ordered set of incremental improvements, all having experienced success in the field, packaged into a roadmap showing how effective practices could be built on one another in a logical progression.
Abstract: About the time Fred Brooks was warning us there was not likely to be a single, “silver bullet” solution to the essential difficulties of developing software [3], Watts Humphrey and others at the Software Engineering Institute (SEI) were busy putting together the set of ideas that was to become the Capability Maturity Model (CMM) for Software. The CMM adopted the opposite of the quick-fix silver bullet philosophy. It was intended to be a coherent, ordered set of incremental improvements, all having experienced success in the field, packaged into a roadmap that showed how effective practices could be built on one another in a logical progression (see “The Capability Maturity Model for Software” sidebar). Far from a quick fix, it was

Journal ArticleDOI
Chandra Varma1
TL;DR: In this paper, a model of copper-oxygen bonding and antibonding bands with the most general two-body interactions allowable by symmetry is considered, and the model has a continuous transition as a function of hole density x and temperature T to a phase in which a current circulates in each unit cell.
Abstract: A model of copper-oxygen bonding and antibonding bands with the most general two-body interactions allowable by symmetry is considered. The model has a continuous transition as a function of hole density x and temperature T to a phase in which a current circulates in each unit cell. This phase preserves the translational symmetry of the lattice while breaking time-reversal invariance and fourfold rotational symmetry. The product of time reversal and fourfold rotation is preserved. The circulating current phase terminates at a critical point at $x = x_c$, $T = 0$. In the quantum critical region about this point the logarithm of the frequency of the current fluctuations scales with their momentum. The microscopic basis for the marginal Fermi-liquid phenomenology and the observed long-wavelength transport anomalies near $x = x_c$ are derived from such fluctuations. The symmetry of the current fluctuations is such that the associated magnetic field fluctuations are absent at oxygen sites and have the correct form to explain the anomalous copper nuclear relaxation rate. Crossovers to the Fermi-liquid phase on either side of $x_c$ and the role of disorder are briefly considered. The current fluctuations promote superconductive instability with a propensity towards 'D-wave' symmetry or 'extended S-wave' symmetry depending on details of the band structure. Several experiments are proposed to test the theory.


Journal ArticleDOI
TL;DR: It is shown that the well-known Guard Channel policy is optimal for the MINOBJ problem, while a new Fractional Guard Channel policy is optimal for the MINBLOCK and MINC problems.
Abstract: Two important Quality-of-Service (QoS) measures for current cellular networks are the fractions of new and handoff “calls” that are blocked due to unavailability of “channels” (radio and/or computing resources). Based on these QoS measures, we derive optimal admission control policies for three problems: minimizing a linear objective function of the new and handoff call blocking probabilities (MINOBJ), minimizing the new call blocking probability with a hard constraint on the handoff call blocking probability (MINBLOCK) and minimizing the number of channels with hard constraints on both of the blocking probabilities (MINC). We show that the well-known Guard Channel policy is optimal for the MINOBJ problem, while a new Fractional Guard Channel policy is optimal for the MINBLOCK and MINC problems. The Guard Channel policy reserves a set of channels for handoff calls while the Fractional Guard Channel policy effectively reserves a non-integral number of guard channels for handoff calls by rejecting new calls with some probability that depends on the current channel occupancy. It is also shown that the Fractional policy results in significant savings (20-50%) in the new call blocking probability for the MINBLOCK problem and provides some, though small, gains over the Guard Channel policy for the MINC problem. Further, we also develop computationally inexpensive algorithms for the determination of the parameters for the optimal policies.
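A sketch of the admission decision under a fractional guard-channel policy as described above. The channel counts and guard value are illustrative, and the way the rejection probability depends on occupancy follows one simple reading of the policy, not necessarily the paper's exact parameterization.

import random

def admit(call_type, busy, total_channels, guard):
    # call_type: "new" or "handoff"; guard: non-integral number of guard channels, e.g. 2.3.
    if busy >= total_channels:
        return False                      # no channel free at all
    if call_type == "handoff":
        return True                       # handoff calls may take any free channel
    threshold = total_channels - guard    # new calls effectively see only C - guard channels
    if busy < int(threshold):
        return True
    if busy == int(threshold):
        # At the boundary occupancy, accept a new call only with the fractional probability.
        return random.random() < threshold - int(threshold)
    return False

# Example: 20 channels, 2.3 guard channels -> new calls always admitted below 17 busy,
# admitted with probability 0.7 at exactly 17 busy, and rejected at 18 or more busy.
print(admit("new", busy=17, total_channels=20, guard=2.3))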

Patent
Neil J. A. Sloane1
14 Oct 1997
TL;DR: In this article, a patient's disease is diagnosed and/or treated using electronic data communications not only between the physician and his/her patient, but also between the physician and one or more entities that can contribute to the patient's diagnosis and treatment, such communications including information that was previously received electronically from the patient or was developed as a consequence of an electronic messaging interaction between the patient and the physician.
Abstract: Patient disease is diagnosed and/or treated using electronic data communications between not only the physician and his/her patient, but via the use of electronic data communications between the physician and one or more entities which can contribute to the patient's diagnosis and/or treatment, such electronic data communications including information that was priorly received electronically from the patient and/or was developed as a consequence of an electronic messaging interaction that occurred between the patient and the physician. Such other entities illustratively include a medical diagnostic center and an epidemiological database computer facility which collects epidemiological transaction records from physicians, hospitals and other institutions which have medical facilities, such as schools and large businesses. The epidemiological transaction record illustratively includes various medical, personal and epidemiological data relevant to the patient and his/her present symptoms, including test results, as well as the diagnosis, if one has already been arrived at by the e-doc. The epidemiological database computer facility can correlate this information with the other epidemiological transaction records that it receives over time in order to help physicians make and/or confirm diagnoses as well as to identify and track epidemiological events and/or trends.

Proceedings ArticleDOI
Richard Hull1
01 May 1997
TL;DR: This tutorial describes fundamental problems raised by semantic heterogeneity and surveys theoretical frameworks that can provide solutions for them.
Abstract: Modern database management systems essentially solve the problem of accessing and managing large volumes of related data on a single platform, or on a cluster of tightly-coupled platforms. But many problems remain when two or more databases need to work together. A fundamental problem is raised by semantic heterogeneity: the fact that data duplicated across multiple databases is represented differently in the underlying database schemas. This tutorial describes fundamental problems raised by semantic heterogeneity and surveys theoretical frameworks that can provide solutions for them. The tutorial considers the following topics: (1) representative architectures for supporting database interoperation; (2) notions for comparing the “information capacity” of database schemas; (3) providing support for read-only integrated views of data, including the virtual and materialized approaches; (4) providing support for read-write integrated views of data, including the issue of workflows on heterogeneous databases; and (5) research and tools for accessing and effectively using meta-data, e.g., to identify the relationships between schemas of different databases.

Patent
TL;DR: In this paper, an acoustic signature recognition and identification system receives signals from a sensor placed on a designated piece of equipment, and the acoustic data is digitized and processed, via a Fast Fourier Transform routine, to create a spectrogram image of frequency versus time.
Abstract: An acoustic signature recognition and identification system receives signals from a sensor placed on a designated piece of equipment. The acoustic data is digitized and processed, via a Fast Fourier Transform routine, to create a spectrogram image of frequency versus time. The spectrogram image is then normalized to permit acoustic pattern recognition regardless of the surrounding environment or magnitude of the acoustic signal. A feature extractor then detects, tracks and characterizes the lines which form the spectrogram. Specifically, the lines are detected via a KY process that is applied to each pixel in the line. A blob coloring process then groups spatially connected pixels into a single signal object. The harmonic content of the lines is then determined and compared with stored templates of known acoustic signatures to ascertain the type of machinery. An alert is then generated in response to the recognized and identified machinery.

Journal ArticleDOI
TL;DR: In this article, it is shown that the bubble wall is collapsing at more than 4 times the ambient speed of sound in the gas just prior to the light-emitting moment, when the gas has been compressed to a density determined by its van der Waals hard core.

Journal ArticleDOI
TL;DR: In this article, analysis-of-variance type models are considered for a regression function or for the logarithm of a probability function, conditional probability function, density function, hazard function, or spectral density function.
Abstract: Analysis of variance type models are considered for a regression function or for the logarithm of a probability function, conditional probability function, density function, conditional density function, hazard function, conditional hazard function or spectral density function. Polynomial splines are used to model the main effects, and their tensor products are used to model any interaction components that are included. In the special context of survival analysis, the baseline hazard function is modeled and nonproportionality is allowed. In general, the theory involves the $L_2$ rate of convergence for the fitted model and its components. The methodology involves least squares and maximum likelihood estimation, stepwise addition of basis functions using Rao statistics, stepwise deletion using Wald statistics and model selection using the Bayesian information criterion, cross-validation or an independent test set. Publicly available software, written in C and interfaced to S/S-PLUS, is used to apply this methodology to real data.
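A compressed illustration of the model-selection flavor of this methodology: fit polynomial-spline bases of increasing size by least squares and keep the fit that minimizes the Bayesian information criterion. The paper's actual procedure uses stepwise addition and deletion with Rao and Wald statistics; the toy data, knot grid, and basis choice below are assumptions.

import numpy as np

def truncated_power_basis(x, knots, degree=3):
    # Columns: 1, x, ..., x^degree, then (x - k)_+^degree for each interior knot.
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=x.size)   # toy regression problem

best = None
for n_knots in range(0, 11):
    knots = np.linspace(0, 1, n_knots + 2)[1:-1]             # equally spaced interior knots
    B = truncated_power_basis(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    rss = np.sum((y - B @ coef) ** 2)
    bic = x.size * np.log(rss / x.size) + B.shape[1] * np.log(x.size)
    if best is None or bic < best[0]:
        best = (bic, n_knots)
print(f"BIC selects {best[1]} interior knots")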

Patent
04 Dec 1997
TL;DR: In this article, a distributed telecommunications switching subsystem (100) receives and distributes data packets passed between a plurality of switching subsystems or channel banks (102, 104, 106) and a data packet switch (110).
Abstract: A distributed telecommunications switching subsystem (100) receives and distributes data packets passed between a plurality of switching subsystems or channel banks (102, 104, 106) and a data packet switch (110). Each channel bank (102) has a stored list of addresses. When a channel bank (102) receives a data packet, it compares the address of the data packet to its stored list of addresses, and transmits the data packet to another channel bank (104) if the address of the data packet does not correspond to any of the addresses in its stored list of addresses. The data packet is passed on until it reaches a channel bank (106) with a matching address or else it is appropriately handled by a last channel bank (106) in the chain. If the address of data packet matches an address in its stored list of addresses, the channel bank (102) passes the data packet through a subscriber interface card (120) to a customer premises equipment unit (108) corresponding to the address of the data packet.

Journal ArticleDOI
TL;DR: A compact statistical model for the joint distribution of path gain and delay spread within a cellular environment, which lends itself readily to Monte Carlo simulation and is useful for performance studies of cellular systems with bandwidths up to tens of kilohertz.
Abstract: We derive a statistical model for the distribution of RMS delay spread ($\tau_{rms}$) within a cellular environment, including the effects of base-to-mobile distance, environment type (urban, suburban, rural, and mountainous areas), and the correlation between delay spread and shadow fading. We begin with intuitive arguments that $\tau_{rms}$ should be lognormally distributed at any given distance d; that the median of this distribution should grow as some (weak) power of d and that the variation about the median should be negatively correlated with shadow fading gain. We then present empirical evidence, drawn from a wide array of published reports, which gives strong support to these conjectures. Finally, we combine our findings with the widely used model for path gain in a cellular environment. The result is a compact statistical model for the joint distribution of path gain and delay spread. The model lends itself readily to Monte Carlo simulation and is useful for performance studies of cellular systems with bandwidths up to tens of kilohertz.
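A Monte Carlo sketch in the spirit of the joint model described above: a lognormal delay spread whose median grows as a power of distance, with its lognormal deviate negatively correlated with the shadow-fading deviate. All numerical parameter values below are illustrative assumptions, not the paper's fitted values.

import numpy as np

rng = np.random.default_rng(7)
T1_us, epsilon = 0.4, 0.5          # median delay spread at d = 1 km and distance exponent (assumed)
sigma_shadow_dB = 8.0              # shadow fading standard deviation (assumed)
sigma_tau_dB = 4.0                 # spread of the lognormal delay-spread variation (assumed)
rho = -0.75                        # assumed negative correlation between the two deviates
path_loss_exp = 3.8                # assumed distance-power law for median path gain

def draw(d_km, n=10000):
    z = rng.normal(size=(n, 2))
    shadow = z[:, 0]
    tau_dev = rho * z[:, 0] + np.sqrt(1 - rho ** 2) * z[:, 1]   # correlated Gaussian pair
    gain_dB = -10 * path_loss_exp * np.log10(d_km) + sigma_shadow_dB * shadow
    tau_rms_us = T1_us * d_km ** epsilon * 10 ** (sigma_tau_dB * tau_dev / 10)
    return gain_dB, tau_rms_us

gain, tau = draw(2.0)
print(f"median tau_rms at 2 km: {np.median(tau):.2f} us, "
      f"corr(gain, log tau): {np.corrcoef(gain, np.log(tau))[0, 1]:+.2f}")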