
Showing papers by "Bell Labs" published in 1998


Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, convolutional neural networks and a new learning paradigm called graph transformer networks (GTN) are applied to handwritten character recognition; gradient-based learning is shown to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
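As a minimal illustration of the convolutional and subsampling layers described in the abstract above, here is a toy NumPy forward pass (the shapes, kernel count, and tanh nonlinearity are illustrative choices, not the paper's LeNet-5 network or its training procedure):

```python
import numpy as np

def conv2d_valid(image, kernels, bias):
    """Valid 2-D convolution of one grayscale image with a bank of kernels."""
    H, W = image.shape
    n_k, kh, kw = kernels.shape
    out = np.zeros((n_k, H - kh + 1, W - kw + 1))
    for k in range(n_k):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k]) + bias[k]
    return out

def subsample2x2(features):
    """Average each feature map over non-overlapping 2x2 windows (subsampling layer)."""
    n_k, H, W = features.shape
    return features[:, :H // 2 * 2, :W // 2 * 2].reshape(n_k, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

# Toy example: one 28x28 "digit", six 5x5 kernels, squashing nonlinearity.
rng = np.random.default_rng(0)
image = rng.random((28, 28))
kernels = rng.normal(scale=0.1, size=(6, 5, 5))
bias = np.zeros(6)
maps = np.tanh(conv2d_valid(image, kernels, bias))   # 6 feature maps of 24x24
pooled = subsample2x2(maps)                          # 6 feature maps of 12x12
print(pooled.shape)
```

Stacking a few of these convolution/subsampling stages followed by fully connected layers, and training the whole stack by back-propagation, is the pattern the paper advocates.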


Journal ArticleDOI
Tin Kam Ho1
TL;DR: A method to construct a decision tree based classifier is proposed that maintains highest accuracy on training data and improves on generalization accuracy as it grows in complexity.
Abstract: Much of previous attention on decision trees focuses on the splitting criteria and optimization of tree sizes. The dilemma between overfitting and achieving maximum accuracy is seldom resolved. A method to construct a decision tree based classifier is proposed that maintains highest accuracy on training data and improves on generalization accuracy as it grows in complexity. The classifier consists of multiple trees constructed systematically by pseudorandomly selecting subsets of components of the feature vector, that is, trees constructed in randomly chosen subspaces. The subspace method is compared to single-tree classifiers and other forest construction methods by experiments on publicly available datasets, where the method's superiority is demonstrated. We also discuss independence between trees in a forest and relate that to the combined classification accuracy.

5,984 citations
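A minimal sketch of the random subspace idea using scikit-learn decision trees (the subspace size, tree count, and aggregation rule here are illustrative defaults, not the paper's experimental setup):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class RandomSubspaceForest:
    """Grow each tree on a pseudorandomly chosen subset of feature dimensions."""
    def __init__(self, n_trees=50, subspace_size=None, seed=0):
        self.n_trees, self.subspace_size = n_trees, subspace_size
        self.rng = np.random.default_rng(seed)
        self.trees, self.subspaces = [], []

    def fit(self, X, y):
        d = X.shape[1]
        m = self.subspace_size or max(1, d // 2)
        for _ in range(self.n_trees):
            feats = self.rng.choice(d, size=m, replace=False)   # random subspace
            tree = DecisionTreeClassifier()                     # grown fully, no pruning
            tree.fit(X[:, feats], y)
            self.trees.append(tree)
            self.subspaces.append(feats)
        return self

    def predict(self, X):
        # Combine the trees by averaging their class-probability estimates.
        probs = sum(t.predict_proba(X[:, f]) for t, f in zip(self.trees, self.subspaces))
        return self.trees[0].classes_[np.argmax(probs, axis=1)]
```

Each tree still sees every training point (only the features are subsampled), so training accuracy stays high for fully grown trees, while combining many subspace-projected trees improves generalization, which is the trade-off the abstract describes.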


Proceedings ArticleDOI
29 Sep 1998
TL;DR: This paper describes a wireless communication architecture known as vertical BLAST (Bell Laboratories Layered Space-Time) or V-BLAST, which has been implemented in real-time in the laboratory and demonstrated spectral efficiencies of 20-40 bps/Hz in an indoor propagation environment at realistic SNRs and error rates.
Abstract: Information theory research has shown that the rich-scattering wireless channel is capable of enormous theoretical capacities if the multipath is properly exploited. In this paper, we describe a wireless communication architecture known as vertical BLAST (Bell Laboratories Layered Space-Time) or V-BLAST, which has been implemented in real-time in the laboratory. Using our laboratory prototype, we have demonstrated spectral efficiencies of 20-40 bps/Hz in an indoor propagation environment at realistic SNRs and error rates. To the best of our knowledge, wireless spectral efficiencies of this magnitude are unprecedented and are furthermore unattainable using traditional techniques.

3,925 citations
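The detection loop behind V-BLAST (ordered nulling followed by successive interference cancellation) can be sketched numerically as below. The zero-forcing nulling, QPSK slicer, and i.i.d. channel model are simplifying assumptions for illustration, not the exact laboratory implementation:

```python
import numpy as np

def vblast_zf_sic(H, y, slicer):
    """Ordered zero-forcing nulling with successive cancellation.

    H: (n_rx, n_tx) channel matrix, y: received vector.
    Returns hard estimates of the n_tx transmitted symbols.
    """
    H, y = H.copy(), y.copy()
    n_tx = H.shape[1]
    remaining = list(range(n_tx))
    est = np.zeros(n_tx, dtype=complex)
    while remaining:
        G = np.linalg.pinv(H[:, remaining])
        # Detect the stream whose nulling vector has the smallest norm
        # (i.e. the highest post-detection SNR).
        k = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))
        idx = remaining[k]
        est[idx] = slicer(G[k] @ y)
        y = y - H[:, idx] * est[idx]        # cancel the detected stream
        remaining.pop(k)
    return est

qpsk = lambda z: (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

rng = np.random.default_rng(1)
n_tx, n_rx = 4, 6
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
s = qpsk(rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx))
y = H @ s + 0.05 * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
print(np.allclose(vblast_zf_sic(H, y, qpsk), s))   # usually True at this SNR
```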


Proceedings ArticleDOI
01 Jun 1998
TL;DR: This work proposes a new clustering algorithm called CURE that is more robust to outliers, and identifies clusters having non-spherical shapes and wide variances in size, and demonstrates that random sampling and partitioning enable CURE to not only outperform existing algorithms but also to scale well for large databases without sacrificing clustering quality.
Abstract: Clustering, in data mining, is useful for discovering groups and identifying interesting distributions in the underlying data. Traditional clustering algorithms either favor clusters with spherical shapes and similar sizes, or are very fragile in the presence of outliers. We propose a new clustering algorithm called CURE that is more robust to outliers, and identifies clusters having non-spherical shapes and wide variances in size. CURE achieves this by representing each cluster by a certain fixed number of points that are generated by selecting well scattered points from the cluster and then shrinking them toward the center of the cluster by a specified fraction. Having more than one representative point per cluster allows CURE to adjust well to the geometry of non-spherical shapes and the shrinking helps to dampen the effects of outliers. To handle large databases, CURE employs a combination of random sampling and partitioning. A random sample drawn from the data set is first partitioned and each partition is partially clustered. The partial clusters are then clustered in a second pass to yield the desired clusters. Our experimental results confirm that the quality of clusters produced by CURE is much better than those found by existing algorithms. Furthermore, they demonstrate that random sampling and partitioning enable CURE to not only outperform existing algorithms but also to scale well for large databases without sacrificing clustering quality.

2,652 citations
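A small sketch of CURE's central ingredients, the shrunk representative points and the representative-distance merge step; the random sampling, partitioning, and heap/k-d-tree machinery from the paper are omitted, and the naive O(n^3) merge loop below is purely for illustration:

```python
import numpy as np

def representatives(points, c=4, alpha=0.3):
    """Pick c well-scattered points of a cluster and shrink them toward its mean."""
    mean = points.mean(axis=0)
    reps = [points[np.argmax(np.linalg.norm(points - mean, axis=1))]]
    while len(reps) < min(c, len(points)):
        # Farthest-point selection: maximize the distance to representatives chosen so far.
        d = np.min([np.linalg.norm(points - r, axis=1) for r in reps], axis=0)
        reps.append(points[np.argmax(d)])
    return np.array([r + alpha * (mean - r) for r in reps])

def cluster_distance(reps_a, reps_b):
    """Distance between two clusters = closest pair of their representative points."""
    return min(np.linalg.norm(a - b) for a in reps_a for b in reps_b)

def cure(points, k, c=4, alpha=0.3):
    """Naive agglomerative clustering driven by representative-point distances."""
    clusters = [[p] for p in points]
    reps = [representatives(np.array(cl), c, alpha) for cl in clusters]
    while len(clusters) > k:
        pairs = [(cluster_distance(reps[i], reps[j]), i, j)
                 for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j], reps[j]
        reps[i] = representatives(np.array(clusters[i]), c, alpha)
    return clusters
```

Using several scattered, shrunk representatives per cluster (rather than a single centroid or all points) is what lets the method follow elongated shapes while damping the pull of outliers.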


Proceedings Article
24 Aug 1998
TL;DR: It is shown formally that partitioning and clustering techniques for similarity search in HDVSs exhibit linear complexity at high dimensionality, and that existing methods are outperformed on average by a simple sequential scan if the number of dimensions exceeds around 10.
Abstract: For similarity search in high-dimensional vector spaces (or 'HDVSs'), researchers have proposed a number of new methods (or adaptations of existing methods) based, in the main, on data-space partitioning. However, the performance of these methods generally degrades as dimensionality increases. Although this phenomenon, known as the 'dimensional curse', is well known, little or no quantitative analysis of the phenomenon is available. In this paper, we provide a detailed analysis of partitioning and clustering techniques for similarity search in HDVSs. We show formally that these methods exhibit linear complexity at high dimensionality, and that existing methods are outperformed on average by a simple sequential scan if the number of dimensions exceeds around 10. Consequently, we come up with an alternative organization based on approximations to make the unavoidable sequential scan as fast as possible. We describe a simple vector approximation scheme, called VA-file, and report on an experimental evaluation of this and of two tree-based index methods (an R*-tree and an X-tree).

1,744 citations
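A toy version of the vector-approximation idea: quantize each dimension onto a coarse grid, use the compact approximations to compute lower bounds on the distance, and touch full vectors only when the bound cannot rule them out. The grid layout and bit budget are illustrative, and the paper's two-phase sequential scan is simplified here to a sort by lower bound:

```python
import numpy as np

def build_va_file(data, bits=4):
    """Quantize each coordinate onto a uniform grid of 2**bits cells per dimension."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    edges = [np.linspace(l, h, 2 ** bits + 1) for l, h in zip(lo, hi)]
    cells = np.stack([np.clip(np.searchsorted(e, data[:, d], side='right') - 1,
                              0, 2 ** bits - 1)
                      for d, e in enumerate(edges)], axis=1)
    return cells, edges

def va_nn(query, data, cells, edges):
    """Nearest neighbour via a VA-file-style filter-and-refine scan."""
    # Lower bound per dimension: distance from the query to the nearest cell edge.
    lower = np.zeros(len(data))
    for d, e in enumerate(edges):
        cell_lo, cell_hi = e[cells[:, d]], e[cells[:, d] + 1]
        gap = np.maximum(0, np.maximum(cell_lo - query[d], query[d] - cell_hi))
        lower += gap ** 2
    best, best_dist = -1, np.inf
    for i in np.argsort(lower):              # visit approximations in bound order
        if lower[i] >= best_dist:
            break                            # bound exceeds the best exact distance: done
        dist = np.sum((data[i] - query) ** 2)   # refine with the full vector
        if dist < best_dist:
            best, best_dist = i, dist
    return best

rng = np.random.default_rng(2)
data = rng.random((10_000, 32))
cells, edges = build_va_file(data)
q = rng.random(32)
assert va_nn(q, data, cells, edges) == int(np.argmin(np.sum((data - q) ** 2, axis=1)))
```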


Journal ArticleDOI
TL;DR: It is proved that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P, and there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.
Abstract: We show that every language in NP has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided "proof" with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [1998] whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). As a consequence, we prove that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [1991] and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees and shortest superstring. We also improve upon the clique hardness results of Feige et al. [1996] and Arora and Safra [1998] and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.

1,501 citations
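In standard PCP notation, the verifier result above reads as follows (a restatement in common notation, not a quotation from the paper):

```latex
% NP equals the class of languages with probabilistically checkable proofs
% that use O(log n) random bits and query only O(1) bits of the proof.
\[
  \mathrm{NP} \;=\; \mathrm{PCP}\bigl(O(\log n),\, O(1)\bigr).
\]
% As a consequence there is a constant \varepsilon > 0 such that approximating
% the maximum clique in an N-vertex graph within a factor N^{\varepsilon} is NP-hard.
```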


Journal ArticleDOI
TL;DR: A new methodology is designed for representing the relationship between two sets of spectral envelopes, and the proposed transform greatly improves the quality and naturalness of the converted speech signals compared with previously proposed conversion methods.
Abstract: Voice conversion, as considered in this paper, is defined as modifying the speech signal of one speaker (source speaker) so that it sounds as if it had been pronounced by a different speaker (target speaker). Our contribution includes the design of a new methodology for representing the relationship between two sets of spectral envelopes. The proposed method is based on the use of a Gaussian mixture model of the source speaker spectral envelopes. The conversion itself is represented by a continuous parametric function which takes into account the probabilistic classification provided by the mixture model. The parameters of the conversion function are estimated by least squares optimization on the training data. This conversion method is implemented in the context of the HNM (harmonic+noise model) system, which allows high-quality modifications of speech signals. Compared to earlier methods based on vector quantization, the proposed conversion scheme results in a much better match between the converted envelopes and the target envelopes. Evaluation by objective tests and formal listening tests shows that the proposed transform greatly improves the quality and naturalness of the converted speech signals compared with previous proposed conversion methods.

1,109 citations
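The continuous parametric conversion function mentioned above is typically written as a posterior-weighted sum of affine corrections to the source envelope x (the notation below is a common rendering of this class of GMM-based converters and may differ from the paper's symbols):

```latex
\[
  F(x) \;=\; \sum_{i=1}^{m} P(C_i \mid x)\,
             \Bigl[\nu_i + \Gamma_i\,\Sigma_i^{-1}\bigl(x - \mu_i\bigr)\Bigr],
  \qquad
  P(C_i \mid x) \;=\;
  \frac{\alpha_i\,\mathcal{N}(x;\,\mu_i,\Sigma_i)}
       {\sum_{j=1}^{m} \alpha_j\,\mathcal{N}(x;\,\mu_j,\Sigma_j)},
\]
% where (\alpha_i, \mu_i, \Sigma_i) are the parameters of the source-speaker GMM
% and the target-side parameters (\nu_i, \Gamma_i) are estimated from aligned
% training data by least squares.
```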


Journal ArticleDOI
TL;DR: A minimum mean-square-error (MMSE) channel estimator is derived, which makes full use of the time- and frequency-domain correlations of the frequency response of time-varying dispersive fading channels and can significantly improve the performance of OFDM systems in a rapid dispersive fading channel.
Abstract: Orthogonal frequency-division multiplexing (OFDM) modulation is a promising technique for achieving the high bit rates required for a wireless multimedia service. Without channel estimation and tracking, OFDM systems have to use differential phase-shift keying (DPSK), which has a 3-dB signal-to-noise ratio (SNR) loss compared with coherent phase-shift keying (PSK). To improve the performance of OFDM systems by using coherent PSK, we investigate robust channel estimation for OFDM systems. We derive a minimum mean-square-error (MMSE) channel estimator, which makes full use of the time- and frequency-domain correlations of the frequency response of time-varying dispersive fading channels. Since the channel statistics are usually unknown, we also analyze the mismatch of the estimator-to-channel statistics and propose a robust channel estimator that is insensitive to the channel statistics. The robust channel estimator can significantly improve the performance of OFDM systems in a rapid dispersive fading channel.

1,039 citations
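For context, the per-tone MMSE estimator is often written in the following form. This is a standard simplification assuming equal-power constellation points and using only frequency-domain correlation, so it is cruder than the time-and-frequency estimator derived in the paper:

```latex
\[
  \hat{H}_{\mathrm{MMSE}}
  \;=\;
  R_{HH}\,\Bigl(R_{HH} + \tfrac{\sigma_n^{2}}{\sigma_x^{2}}\, I\Bigr)^{-1}
  \hat{H}_{\mathrm{LS}},
  \qquad
  \hat{H}_{\mathrm{LS}} \;=\; X^{-1} Y,
\]
% where R_{HH} is the correlation matrix of the channel frequency response across
% tones, \sigma_n^2/\sigma_x^2 is the inverse SNR, X holds the transmitted (pilot)
% symbols and Y the received tones. The robust design in the paper additionally
% copes with a mismatch between the assumed and the true channel statistics.
```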


Journal ArticleDOI
C.I. Podilchuk1, Wenjun Zeng2
TL;DR: This work proposes perceptually based watermarking schemes in two frameworks, the block-based discrete cosine transform and the multiresolution wavelet transform, and discusses the merits of each one; both are shown to provide very good results in terms of image transparency and robustness.
Abstract: The huge success of the Internet allows for the transmission, wide distribution, and access of electronic data in an effortless manner. Content providers are faced with the challenge of how to protect their electronic data. This problem has generated a flurry of research activity in the area of digital watermarking of electronic content for copyright protection. The challenge here is to introduce a digital watermark that does not alter the perceived quality of the electronic content, while being extremely robust to attack. For instance, in the case of image data, editing the picture or illegal tampering should not destroy or transform the watermark into another valid signature. Equally important, the watermark should not alter the perceived visual quality of the image. From a signal processing perspective, the two basic requirements for an effective watermarking scheme, robustness and transparency, conflict with each other. We propose two watermarking techniques for digital images that are based on utilizing visual models which have been developed in the context of image compression. Specifically, we propose watermarking schemes where visual models are used to determine image dependent upper bounds on watermark insertion. This allows us to provide the maximum strength transparent watermark which, in turn, is extremely robust to common image processing and editing such as JPEG compression, rescaling, and cropping. We propose perceptually based watermarking schemes in two frameworks: the block-based discrete cosine transform and multiresolution wavelet framework and discuss the merits of each one. Our schemes are shown to provide very good results both in terms of image transparency and robustness.

962 citations


Journal ArticleDOI
David L. Windt1
TL;DR: IMD includes a full graphical user interface and affords modeling with up to eight simultaneous independent variables, as well as parameter estimation using nonlinear, least-squares curve fitting to user-supplied experimental optical data.
Abstract: A computer program called IMD is described. IMD is used for modeling the optical properties (reflectance, transmittance, electric-field intensities, etc.) of multilayer films, i.e., films consisting of any number of layers of any thickness. IMD includes a full graphical user interface and affords modeling with up to eight simultaneous independent variables, as well as parameter estimation (including confidence interval generation) using nonlinear, least-squares curve fitting to user-supplied experimental optical data. The computation methods and user interface are described, and numerous examples are presented that illustrate some of IMD’s unique modeling, fitting, and visualization capabilities. © 1998 American Institute of Physics.

892 citations


Journal ArticleDOI
Jack Harriman Winters1
TL;DR: Standard cellular antennas, smart antennas using fixed beams, and adaptive antennas for base stations, as well as antenna technologies for handsets are described and the potential improvement that these antennas can provide is shown.
Abstract: In this article we discuss current and future antenna technology for wireless systems and the improvement that smart and adaptive antenna arrays can provide. We describe standard cellular antennas, smart antennas using fixed beams, and adaptive antennas for base stations, as well as antenna technologies for handsets. We show the potential improvement that these antennas can provide, including range extension, multipath diversity, interference suppression, capacity increase, and data rate increase. The issues involved in incorporating these antennas into wireless systems using CDMA, GSM, and IS-136 in different environments, such as rural, suburban, and urban areas, as well as indoors, are described. Theoretical, computer simulation, experimental, and field trial results are also discussed that demonstrate the potential of this technology.

Proceedings ArticleDOI
24 Jul 1998
TL;DR: An irregular connectivity mesh representative of a surface having an arbitrary topology is processed to generate a parameterization which maps points in a coarse base domain to points in the mesh, such that the original mesh can be reconstructed from the base domain and the parameterization.
Abstract: An irregular connectivity mesh representative of a surface having an arbitrary topology is processed to generate a parameterization which maps points in a coarse base domain to points in the mesh. An illustrative embodiment uses a multi-level mesh simplification process in conjunction with conformal mapping to efficiently construct a parameterization of a mesh comprising a large number of triangles over a base domain comprising a smaller number of triangles. The parameterization in this embodiment corresponds to the inverse of a function mapping each point in the original mesh to one of the triangles of the base domain, such that the original mesh can be reconstructed from the base domain and the parameterization. The mapping function is generated as a combination of a number of sub-functions, each of which relates data points in a mesh of one level in a simplification hierarchy to data points in a mesh of the next coarser level of the simplification hierarchy. The parameterization can also be used to construct, from the original irregular connectivity mesh, an adaptive remesh having a regular connectivity which is substantially easier to process than the original mesh.

Proceedings ArticleDOI
01 Oct 1998
TL;DR: New packet classification schemes are presented that, with a worst-case and traffic-independent performance metric, can classify packets, by checking amongst a few thousand filtering rules, at rates of a million packets per second using range matches on more than 4 packet header fields.
Abstract: The ability to provide differentiated services to users with widely varying requirements is becoming increasingly important, and Internet Service Providers would like to provide these differentiated services using the same shared network infrastructure. The key mechanism, that enables differentiation in a connectionless network, is the packet classification function that parses the headers of the packets, and after determining their context, classifies them based on administrative policies or real-time reservation decisions. Packet classification, however, is a complex operation that can become the bottleneck in routers that try to support gigabit link capacities. Hence, many proposals for differentiated services only require classification at lower speed edge routers and also avoid classification based on multiple fields in the packet header even if it might be advantageous to service providers. In this paper, we present new packet classification schemes that, with a worst-case and traffic-independent performance metric, can classify packets, by checking amongst a few thousand filtering rules, at rates of a million packets per second using range matches on more than 4 packet header fields. For a special case of classification in two dimensions, we present an algorithm that can handle more than 128K rules at these speeds in a traffic independent manner. We emphasize worst-case performance over average case performance because providing differentiated services requires intelligent queueing and scheduling of packets that precludes any significant queueing before the differentiating step (i.e., before packet classification). The presented filtering or classification schemes can be used to classify packets for security policy enforcement, applying resource management decisions, flow identification for RSVP reservations, multicast look-ups, and for source-destination and policy based routing. The scalability and performance of the algorithms have been demonstrated by implementation and testing in a prototype system.

Book ChapterDOI
Daniel Bleichenbacher1
23 Aug 1998
TL;DR: A new adaptive chosen ciphertext attack against certain protocols based on RSA is introduced, showing that an RSA private-key operation can be performed if the attacker has access to an oracle that returns only one bit telling whether a chosen ciphertext corresponds to some unknown block of data encrypted using PKCS #1.
Abstract: This paper introduces a new adaptive chosen ciphertext attack against certain protocols based on RSA. We show that an RSA private-key operation can be performed if the attacker has access to an oracle that, for any chosen ciphertext, returns only one bit telling whether the ciphertext corresponds to some unknown block of data encrypted using PKCS #1. An example of a protocol susceptible to our attack is SSL V.3.0.

Journal ArticleDOI
TL;DR: The theory of LR servers enables computation of tight upper bounds on end-to-end delay and buffer requirements in a heterogeneous network, where individual servers may support different scheduling architectures and under different traffic models.
Abstract: We develop a general model, called latency-rate servers (LR servers), for the analysis of traffic scheduling algorithms in broadband packet networks. The behavior of an LR server is determined by two parameters: the latency and the allocated rate. Several well-known scheduling algorithms, such as weighted fair queueing, virtualclock, self-clocked fair queueing, weighted round robin, and deficit round robin, belong to the class of LR servers. We derive tight upper bounds on the end-to-end delay, internal burstiness, and buffer requirements of individual sessions in an arbitrary network of LR servers in terms of the latencies of the individual schedulers in the network, when the session traffic is shaped by a token bucket. The theory of LR servers enables computation of tight upper bounds on end-to-end delay and buffer requirements in a heterogeneous network, where individual servers may support different scheduling architectures and under different traffic models.
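The kind of end-to-end bound the framework yields can be stated compactly. The following is the commonly cited form for a session shaped by a token bucket with burst size σ and rate ρ crossing K LR servers, each allocating the session at least rate ρ (stated from the general LR-server result, with per-node details suppressed):

```latex
\[
  D_{\mathrm{end\text{-}to\text{-}end}} \;\le\; \frac{\sigma}{\rho} \;+\; \sum_{j=1}^{K} \Theta_j ,
\]
% where \Theta_j is the latency of the j-th server. For WFQ the latency is
% commonly cited as \Theta_j = L_i/\rho + L_{\max,j}/C_j, with L_i the session's
% maximum packet size, L_{\max,j} the largest packet at server j and C_j the link rate.
```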

Journal ArticleDOI
TL;DR: This work considers a system with beamforming capabilities in the receiver, and power control, and proposes an iterative algorithm to jointly update the transmission powers and the beamformer weights that converges to the jointly optimal beamforming and transmission power vector.
Abstract: The interference reduction capability of antenna arrays and the power control algorithms have been considered separately as means to increase the capacity in wireless communication networks. The minimum variance distortionless response beamformer maximizes the signal-to-interference-and-noise ratio (SINR) when it is employed in the receiver of a wireless link. In a system with omnidirectional antennas, power control algorithms are used to maximize the SINR as well. We consider a system with beamforming capabilities in the receiver, and power control. An iterative algorithm is proposed to jointly update the transmission powers and the beamformer weights so that it converges to the jointly optimal beamforming and transmission power vector. The algorithm is distributed and uses only local interference measurements. In an uplink transmission scenario, it is shown how base assignment can be incorporated in addition to beamforming and power control, such that a globally optimum solution is obtained. The network capacity and the saving in mobile power are evaluated through numerical study.
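A compact numerical sketch of the iterative update described above: each receiver recomputes an MVDR beamformer against the current interference, then every link scales its transmit power to just meet a target SINR. The channel array, noise level, and SINR target are illustrative inputs; this mirrors the style of the distributed iteration rather than reproducing the paper's exact algorithm (base assignment is omitted):

```python
import numpy as np

def joint_beamforming_power_control(H, noise, sinr_target, n_iter=100):
    """H[k, :, i]: array response at receiver k from transmitter i (n_rx antennas).

    Returns transmit powers p and unit-norm beamformers W that, when the target
    is feasible, converge toward the jointly optimal solution.
    """
    K, n_rx, _ = H.shape
    p = np.ones(K)
    W = np.zeros((K, n_rx), dtype=complex)
    for _ in range(n_iter):
        for k in range(K):
            # Interference-plus-noise covariance seen by receiver k.
            R = noise * np.eye(n_rx, dtype=complex)
            for i in range(K):
                if i != k:
                    R += p[i] * np.outer(H[k, :, i], H[k, :, i].conj())
            w = np.linalg.solve(R, H[k, :, k])      # MVDR direction
            W[k] = w / np.linalg.norm(w)
        # Given the beamformers, update powers to meet the SINR target exactly.
        for k in range(K):
            signal = np.abs(W[k].conj() @ H[k, :, k]) ** 2
            interf = noise
            for i in range(K):
                if i != k:
                    interf += p[i] * np.abs(W[k].conj() @ H[k, :, i]) ** 2
            p[k] = sinr_target * interf / signal
    return p, W
```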

Journal ArticleDOI
Lov K. Grover1
TL;DR: This paper shows that this algorithm for exhaustive search can be implemented by replacing the W-H transform by almost any quantum mechanical operation, which leads to several new applications where it improves the number of steps by a square-root.
Abstract: A quantum computer has a clear advantage over a classical computer for exhaustive search. The quantum mechanical algorithm for exhaustive search was originally derived by using subtle properties of a particular quantum mechanical operation called the Walsh-Hadamard (W-H) transform. This paper shows that this algorithm can be implemented by replacing the W-H transform by almost any quantum mechanical operation. This leads to several new applications where it improves the number of steps by a square root. It also broadens the scope for implementation since it demonstrates quantum mechanical algorithms that can adapt to available technology.
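A state-vector simulation of the search iteration is small enough to show here. The sketch uses the standard uniform-superposition (W-H) version; the paper's point is that the reflection can instead be taken about U|0> for essentially any unitary U, which in this toy simulation would amount to replacing the start state `s` below:

```python
import numpy as np

def grover_search(n_items, marked, n_iters=None):
    """Simulate amplitude amplification over an unstructured list of n_items."""
    s = np.full(n_items, 1 / np.sqrt(n_items))   # uniform start state (W-H transform of |0>)
    psi = s.copy()
    if n_iters is None:
        n_iters = int(np.floor(np.pi / 4 * np.sqrt(n_items)))   # ~ (pi/4) sqrt(N) steps
    for _ in range(n_iters):
        psi[marked] *= -1                         # oracle: flip the sign of the marked item
        psi = 2 * s * (s @ psi) - psi             # inversion about the start state
    return int(np.argmax(psi ** 2)), float(np.max(psi ** 2))

item, prob = grover_search(1 << 16, marked=12345)
print(item, round(prob, 4))   # finds item 12345 with probability close to 1
```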

Proceedings ArticleDOI
P.V. Mamyshev1
20 Sep 1998
TL;DR: In this paper, a simple all-optical regeneration technique is described, which suppresses the noise in "zeros" and the amplitude fluctuations in "ones" of return-to-zero optical data streams.
Abstract: A simple all-optical regeneration technique is described. The regenerator suppresses the noise in "zeros" and the amplitude fluctuations in "ones" of return-to-zero optical data streams. Numerical simulations and experimental results are presented.

Proceedings ArticleDOI
01 Jun 1998
TL;DR: This paper introduces two new sampling-based summary statistics, concise samples and counting samples, and presents new techniques for their fast incremental maintenance regardless of the data distribution, and considers their application to providing fast approximate answers to hot list queries.
Abstract: In large data recording and warehousing environments, it is often advantageous to provide fast, approximate answers to queries, whenever possible. Before DBMSs providing highly-accurate approximate answers can become a reality, many new techniques for summarizing data and for estimating answers from summarized data must be developed. This paper introduces two new sampling-based summary statistics, concise samples and counting samples, and presents new techniques for their fast incremental maintenance regardless of the data distribution. We quantify their advantages over standard sample views in terms of the number of additional sample points for the same view size, and hence in providing more accurate query answers. Finally, we consider their application to providing fast approximate answers to hot list queries. Our algorithms maintain their accuracy in the presence of ongoing insertions to the data warehouse.
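A rough sketch of the concise-sample maintenance loop is given below. The footprint accounting (we simply count stored entries), the threshold-raising schedule, and the class interface are simplified illustrations rather than the paper's exact procedure:

```python
import random
from collections import Counter

class ConciseSample:
    """Incrementally maintain a bounded uniform sample, storing repeats as (value, count)."""
    def __init__(self, max_footprint=100, seed=0):
        self.max_footprint = max_footprint   # bound on the number of stored entries
        self.tau = 1.0                       # current sampling rate is 1/tau
        self.sample = Counter()
        self.rng = random.Random(seed)

    def insert(self, value):
        if self.rng.random() < 1.0 / self.tau:
            self.sample[value] += 1
            if len(self.sample) > self.max_footprint:
                self._raise_threshold(self.tau * 1.5)

    def _raise_threshold(self, new_tau):
        # Re-flip each sample point with probability tau/new_tau so the sample
        # stays uniform at the coarser rate; repeat if still over the footprint.
        keep_prob = self.tau / new_tau
        for value, count in list(self.sample.items()):
            kept = sum(self.rng.random() < keep_prob for _ in range(count))
            if kept:
                self.sample[value] = kept
            else:
                del self.sample[value]
        self.tau = new_tau
        if len(self.sample) > self.max_footprint:
            self._raise_threshold(self.tau * 1.5)

    def hot_list(self, k=10):
        """Approximate top-k most frequent values, counts scaled back to the input size."""
        return [(v, c * self.tau) for v, c in self.sample.most_common(k)]
```

Counting samples differ in that, once a value is present in the sample, every subsequent occurrence increments its count deterministically, which makes the hot-list estimates tighter; that variant is not shown here.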

Journal ArticleDOI
Rahul Sarpeshkar1
TL;DR: The results suggest that it is likely that the brain computes in a hybrid fashion and that an underappreciated and important reason for the efficiency of the human brain, which consumes only 12 W, is the hybrid and distributed nature of its architecture.
Abstract: We review the pros and cons of analog and digital computation. We propose that computation that is most efficient in its use of resources is neither analog computation nor digital computation but, rather, a mixture of the two forms. For maximum efficiency, the information and information-processing resources of the hybrid form must be distributed over many wires, with an optimal signal-to-noise ratio per wire. Our results suggest that it is likely that the brain computes in a hybrid fashion and that an underappreciated and important reason for the efficiency of the human brain, which consumes only 12 W, is the hybrid and distributed nature of its architecture.

Proceedings Article
24 Aug 1998
TL;DR: Algorithms for computing optimal bucket boundaries in time proportional to the square of the number of distinct data values, for a broad class of optimality metrics and an enhancement to traditional histograms that allows us to provide quality guarantees on individual selectivity estimates are presented.
Abstract: Histograms are commonly used to capture attribute value distribution statistics for query optimizers. More recently, histograms have also been considered as a way to produce quick approximate answers to decision support queries. This widespread interest in histograms motivates the problem of computing histograms that are good under a given error metric. In particular, we are interested in an efficient algorithm for choosing the bucket boundaries in a way that either minimizes the estimation error for a given amount of space (number of buckets) or, conversely, minimizes the space needed for a given upper bound on the error. Under the assumption that finding optimal bucket boundaries is computationally inefficient, previous research has focused on heuristics with no provable bounds on the quality of the solutions. In this paper, we present algorithms for computing optimal bucket boundaries in time proportional to the square of the number of distinct data values, for a broad class of optimality metrics. This class includes the V-Optimality constraint, which has been shown to result in the most accurate histograms for several selectivity estimation problems. Through experiments, we show that optimal histograms can achieve substantially lower estimation errors than histograms produced by popular heuristics. We also present new heuristics with provably good space-accuracy trade-offs that are significantly faster than the optimal algorithm. Finally, we present an enhancement to traditional histograms that allows us to provide quality guarantees on individual selectivity estimates. In our experiments, these quality guarantees were highly effective in isolating outliers in selectivity estimates. It is often the case that a data set cannot be stored or processed in its entirety; only a summarized form is stored. A typical way in which data is summarized is by means of a histogram. The summarized data can be used to answer various kinds of queries, in the same way the original data would have been used. The answer obtained is not exact but approximate, and contains an error due to the information lost when the data was summarized. This error can be measured according to some appropriate metric such as the maximum, average, or mean squared error of the estimate. This basic idea has long been used in a database context to estimate the result sizes of relational operators for the purpose of cost-based query optimization. The objective is to approximate the data distribution of the values in a column, and to use that …
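A sketch of the dynamic program behind optimal bucket boundaries for the V-Optimality metric (total within-bucket sum of squared deviations from the bucket mean). Prefix sums give each bucket's error in O(1), for an overall O(n^2 B) cost in the number of distinct values n and buckets B; function and variable names are ours, not the paper's:

```python
import numpy as np

def v_optimal_histogram(values, n_buckets):
    """Return bucket boundaries minimizing the total within-bucket squared error."""
    v = np.asarray(values, dtype=float)
    n = len(v)
    p1 = np.concatenate(([0.0], np.cumsum(v)))        # prefix sums
    p2 = np.concatenate(([0.0], np.cumsum(v ** 2)))   # prefix sums of squares

    def sse(i, j):
        """Squared error of one bucket covering v[i:j], approximated by its mean."""
        s, s2, m = p1[j] - p1[i], p2[j] - p2[i], j - i
        return s2 - s * s / m

    INF = float('inf')
    cost = np.full((n_buckets + 1, n + 1), INF)
    cut = np.zeros((n_buckets + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for b in range(1, n_buckets + 1):
        for j in range(b, n + 1):
            for i in range(b - 1, j):
                c = cost[b - 1, i] + sse(i, j)
                if c < cost[b, j]:
                    cost[b, j], cut[b, j] = c, i
    # Recover the boundaries by walking the cut table backwards.
    bounds, j = [], n
    for b in range(n_buckets, 0, -1):
        bounds.append((cut[b, j], j))
        j = cut[b, j]
    return list(reversed(bounds)), cost[n_buckets, n]

print(v_optimal_histogram([1, 1, 2, 2, 10, 11, 30, 31, 32], 3))
```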

Proceedings ArticleDOI
23 Feb 1998
TL;DR: This work devises a new technique called cycle pruning, which reduces the amount of time needed to find cyclic association rules by studying the interaction between association rules and time, and presents two new algorithms for discovering such rules.
Abstract: We study the problem of discovering association rules that display regular cyclic variation over time. For example, if we compute association rules over monthly sales data, we may observe seasonal variation where certain rules are true at approximately the same month each year. Similarly, association rules can also display regular hourly, daily, weekly, etc., variation that is cyclical in nature. We demonstrate that existing methods cannot be naively extended to solve this problem of cyclic association rules. We then present two new algorithms for discovering such rules. The first one, which we call the sequential algorithm, treats association rules and cycles more or less independently. By studying the interaction between association rules and time, we devise a new technique called cycle pruning, which reduces the amount of time needed to find cyclic association rules. The second algorithm, which we call the interleaved algorithm, uses cycle pruning and other optimization techniques for discovering cyclic association rules. We demonstrate the effectiveness of the interleaved algorithm through a series of experiments. These experiments show that the interleaved algorithm can yield significant performance benefits when compared to the sequential algorithm. Performance improvements range from 5% to several hundred percent.

Journal ArticleDOI
TL;DR: The potential uses of mobile agents in network management are discussed; software agents and a navigation model that determines agent mobility are defined, and a number of potential advantages and disadvantages are listed.
Abstract: In this article we discuss the potential uses of mobile agents in network management and define software agents and a navigation model that determines agent mobility. We list a number of potential advantages and disadvantages of mobile agents and include a short commentary on the ongoing standardization activity. The core of this article comprises descriptions of several actual and potential applications of mobile agents in the five OSI functional areas of network management. A brief review of other research activity in the area and prospects for the future conclude the presentation.

Journal ArticleDOI
TL;DR: The article describes how to perform domain engineering by identifying the commonalities and variabilities within a family of products through interesting examples dealing with reuse libraries, design patterns, and programming language design.
Abstract: The article describes how to perform domain engineering by identifying the commonalities and variabilities within a family of products. Through interesting examples dealing with reuse libraries, design patterns, and programming language design, the authors suggest a systematic scope, commonalities, and variabilities approach to formal analysis. Their SCV analysis has been an integral part of the FAST (Family-oriented Abstraction, Specification, and Translation) technology applied to over 25 domains at Lucent Technologies.

Journal ArticleDOI
16 Aug 1998
TL;DR: The capacity and mutual information of a broadband fading channel consisting of a finite number of time-varying paths are investigated, and it is shown that if white-like signals are used instead (as is common in spread-spectrum systems), the mutual information is inversely proportional to the number of resolvable paths L̃ with energy spread out.
Abstract: We investigate the capacity and mutual information of a broadband fading channel consisting of a finite number of time-varying paths. We show that the capacity of the channel in the wideband limit is the same as that of a wideband Gaussian channel with the same average received power. However, the input signals needed to achieve the capacity must be "peaky" in time or frequency. In particular, we show that if white-like signals are used instead (as is common in spread-spectrum systems), the mutual information is inversely proportional to the number of resolvable paths L̃ with energy spread out, and in fact approaches 0 as the number of paths gets large. This is true even when the paths are assumed to be tracked perfectly at the receiver. A critical parameter L̃_crit is defined in terms of system parameters to delineate the threshold on L̃ over which such an overspreading phenomenon occurs.

Journal ArticleDOI
TL;DR: This parameter-expanded EM (PX-EM) algorithm shares the simplicity and stability of ordinary EM, but has a faster rate of convergence since its M step performs a more efficient analysis.
Abstract: The EM algorithm and its extensions are popular tools for modal estimation but are often criticised for their slow convergence. We propose a new method that can often make EM much faster. The intuitive idea is to use a 'covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data. The way we accomplish this is by parameter expansion; we expand the complete-data model while preserving the observed-data model and use the expanded complete-data model to generate EM. This parameter-expanded EM (PX-EM) algorithm shares the simplicity and stability of ordinary EM, but has a faster rate of convergence since its M step performs a more efficient analysis. The PX-EM algorithm is illustrated for the multivariate t distribution, a random effects model, factor analysis, probit regression and a Poisson imaging model.

Journal ArticleDOI
TL;DR: The relations of non-subsampled filter banks to continuous-time filtering are investigated and the design flexibility is illustrated by giving a procedure for designing maximally flat two-channel filter banks that yield highly regular wavelets with a given number of vanishing moments.
Abstract: Perfect reconstruction oversampled filter banks are equivalent to a particular class of frames in l²(Z). These frames are the subject of this paper. First, the necessary and sufficient conditions of a filter bank for implementing a frame or a tight frame expansion are established, as well as a necessary and sufficient condition for perfect reconstruction using FIR filters after an FIR analysis. Complete parameterizations of oversampled filter banks satisfying these conditions are given. Further, we study the condition under which the frame dual to the frame associated with an FIR filter bank is also FIR and give a parameterization of a class of filter banks satisfying this property. Then, we focus on non-subsampled filter banks. Non-subsampled filter banks implement transforms similar to continuous-time transforms and allow for very flexible design. We investigate the relations of these filter banks to continuous-time filtering and illustrate the design flexibility by giving a procedure for designing maximally flat two-channel filter banks that yield highly regular wavelets with a given number of vanishing moments.

Proceedings ArticleDOI
18 May 1998
TL;DR: In this article, the distribution of the peak-to-average power (PAP) ratio of an OFDM signal is derived, showing that large PAP ratios only occur very infrequently.
Abstract: The distribution of the peak-to-average power (PAP) ratio of an OFDM signal is derived, showing that large PAP ratios only occur very infrequently. Because of this, PAP reducing techniques which distort the signal can be quite effective, since only a small fraction of the OFDM signal has to be distorted. One example of such a technique is peak windowing. It is shown that peak windowing can achieve PAP ratios around 4 dB for an arbitrary number of subcarriers, at the cost of a slight increase in the BER and out-of-band radiation. Simulations with realistic power amplifier models show that a backoff of about 5 dB is required to get an out-of-band radiation level of 30 dB below the in-band spectral density.
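For N subcarriers with independent data and Nyquist-rate sampling, the complementary distribution of the PAP ratio referred to above is commonly approximated as follows (a standard result under a Gaussian approximation of the OFDM envelope, consistent with the paper's observation that large ratios are rare):

```latex
\[
  \Pr\bigl(\mathrm{PAPR} > z\bigr) \;\approx\; 1 - \bigl(1 - e^{-z}\bigr)^{N},
\]
% e.g. with N = 64 subcarriers the probability of exceeding 10 dB (z = 10)
% is roughly 64\,e^{-10} \approx 3\times 10^{-3}, so clipping or windowing only
% ever has to act on a small fraction of the signal.
```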

Journal ArticleDOI
TL;DR: In this paper, a simple method of fabricating long-period fiber gratings by direct exposure of the fibre to focused 10.6 /spl mu/m wavelength CO/sub 2/ laser pulses is presented.
Abstract: A new, simple method of fabricating long-period fibre gratings by direct exposure of the fibre to focused 10.6 /spl mu/m wavelength CO/sub 2/ laser pulses is presented. No ultraviolet exposure is used. Hydrogen loading is found to enhance the writing sensitivity.

Journal ArticleDOI
01 Nov 1998
TL;DR: It is shown that in some contexts this idea of using prior knowledge by creating virtual examples and thereby expanding the effective training-set size is mathematically equivalent to incorporating the prior knowledge as a regularizer, suggesting that the strategy is well motivated.
Abstract: One of the key problems in supervised learning is the insufficient size of the training set. The natural way for an intelligent learner to counter this problem and successfully generalize is to exploit prior information that may be available about the domain or that can be learned from prototypical examples. We discuss the notion of using prior knowledge by creating virtual examples and thereby expanding the effective training-set size. We show that in some contexts this idea is mathematically equivalent to incorporating the prior knowledge as a regularizer, suggesting that the strategy is well motivated. The process of creating virtual examples in real-world pattern recognition tasks is highly nontrivial. We provide demonstrative examples from object recognition and speech recognition to illustrate the idea.
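A minimal illustration of the virtual-example idea for images: generate additional training pairs by applying label-preserving transformations to the originals. The transformation set (small circular shifts) and magnitudes below are illustrative, not the paper's object- or speech-recognition setups:

```python
import numpy as np

def virtual_examples(images, labels, shifts=((0, 1), (0, -1), (1, 0), (-1, 0))):
    """Expand a labeled image set with shifted copies that keep the label unchanged."""
    out_x, out_y = [images], [labels]
    for dy, dx in shifts:
        shifted = np.roll(np.roll(images, dy, axis=1), dx, axis=2)
        out_x.append(shifted)
        out_y.append(labels)
    return np.concatenate(out_x), np.concatenate(out_y)

# Toy usage: 100 fake 28x28 "digits" become 500 training examples.
rng = np.random.default_rng(3)
X, y = rng.random((100, 28, 28)), rng.integers(0, 10, size=100)
X_aug, y_aug = virtual_examples(X, y)
print(X_aug.shape, y_aug.shape)   # (500, 28, 28) (500,)
```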