
Showing papers on "Adaptive algorithm published in 2001"


Journal ArticleDOI
TL;DR: The main result of the paper is the construction of an adaptive scheme which produces an approximation to u with error O(N^{-s}) in the energy norm, whenever such a rate is possible by N-term approximation.
Abstract: This paper is concerned with the construction and analysis of wavelet-based adaptive algorithms for the numerical solution of elliptic equations. These algorithms approximate the solution u of the equation by a linear combination of N wavelets. Therefore, a benchmark for their performance is provided by the rate of best approximation to u by an arbitrary linear combination of N wavelets (so-called N-term approximation), which would be obtained by keeping the N largest wavelet coefficients of the real solution (which of course is unknown). The main result of the paper is the construction of an adaptive scheme which produces an approximation to u with error O(N^{-s}) in the energy norm, whenever such a rate is possible by N-term approximation. The range of s > 0 for which this holds is only limited by the approximation properties of the wavelets together with their ability to compress the elliptic operator. Moreover, it is shown that the number of arithmetic operations needed to compute the approximate solution stays proportional to N. The adaptive algorithm applies to a wide class of elliptic problems and wavelet bases. The analysis in this paper puts forward new techniques for treating elliptic problems as well as the linear systems of equations that arise from the wavelet discretization.

488 citations
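The benchmark of keeping the N largest wavelet coefficients can be sketched for a generic orthonormal basis (a hypothetical illustration of best N-term approximation, not the paper's adaptive scheme, which never sees the exact coefficients of the unknown solution):

```python
import numpy as np

def n_term_approximation(coeffs, n):
    """Zero all but the n largest-magnitude coefficients.

    For an orthonormal wavelet basis this realizes the best n-term
    approximation in the L2 sense.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    if n >= coeffs.size:
        return coeffs.copy()
    keep = np.argsort(np.abs(coeffs))[-n:]   # indices of the n largest
    out = np.zeros_like(coeffs)
    out[keep] = coeffs[keep]
    return out
```

If the sorted coefficients decay like k^{-(s+1/2)}, dropping all but N terms leaves an L2 error of order N^{-s}, which is the rate the paper's adaptive scheme matches.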


Proceedings ArticleDOI
Sem Borst1, Philip Whiting
28 Feb 2001
TL;DR: It is shown that the 'best' user may be identified as the maximum-rate user when the feasible rates are weighted with some appropriately determined coefficients, and the optimal strategy may be viewed as a revenue-based policy.
Abstract: The relative delay tolerance of data applications, together with the bursty traffic characteristics, opens up the possibility for scheduling transmissions so as to optimize throughput. A particularly attractive approach, in fading environments, is to exploit the variations in the channel conditions, and transmit to the user with the currently 'best' channel. We show that the 'best' user may be identified as the maximum-rate user when the feasible rates are weighted with some appropriately determined coefficients. Interpreting the coefficients as shadow prices, or reward values, the optimal strategy may thus be viewed as a revenue-based policy. Calculating the optimal revenue vector directly is a formidable task, requiring detailed information on the channel statistics. Instead, we present adaptive algorithms for determining the optimal revenue vector on-line in an iterative fashion, without the need for explicit knowledge of the channel behavior. Starting from an arbitrary initial vector, the algorithms iteratively adjust the reward values to compensate for observed deviations from the target throughput ratios. The algorithms are validated through extensive numerical experiments. Besides verifying long-run convergence, we also examine the transient performance, in particular the rate of convergence to the optimal revenue vector. The results show that the target throughput ratios are tightly maintained, and that the algorithms are well able to track changes in the channel conditions or throughput targets.

248 citations


Journal ArticleDOI
01 Sep 2001
TL;DR: Four different strategies, including an adaptive algorithm, to control the movement of the freely swinging shank were developed on the basis of computer simulations and experimentally evaluated on two subjects with paraplegia due to a complete thoracic spinal cord injury.
Abstract: A crucial issue of functional electrical stimulation (FES) is the control of motor function by the artificial activation of paralyzed muscles. Major problems that limit the success of current FES systems are the nonlinearity of the target system and the rapid change of muscle properties due to fatigue. In this study, four different strategies, including an adaptive algorithm, to control the movement of the freely swinging shank were developed on the basis of computer simulations and experimentally evaluated on two subjects with paraplegia due to a complete thoracic spinal cord injury. After developing a nonlinear, physiologically based model describing the dynamic behavior of the knee joint and muscles, an open-loop approach, a closed-loop approach, and a combination of both were tested. In order to automate the individual adjustments cited, we further evaluated the performance of an adaptive feedforward controller. The two parameters chosen for the adaptation were the threshold pulse width and the scaling factor for adjusting the active moment produced by the stimulated muscle to the fitness of the muscle. These parameters have been chosen because of their significant time variability. The first three controllers with fixed parameters yielded satisfactory results. An additional improvement was achieved by applying the adaptive algorithm that could cope with problems due to muscle fatigue, thus permitting on-line identification of critical parameters of the plant. Although the present study is limited to a simplified experimental setup, its applicability to more complex and functional movements can be expected.

246 citations


Journal ArticleDOI
TL;DR: An adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity is presented and an instantaneous steepest descent algorithm is derived by using as the criterion function the instantaneous log likelihood of a point process spike train model.
Abstract: Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields.

165 citations
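The instantaneous log likelihood of a Poisson-type point process in a small bin of width dt is dN·log(λ·dt) − λ·dt; with an exponential link λ = exp(θ), the instantaneous steepest-ascent update reduces to θ ← θ + ε·(dN − exp(θ)·dt). A minimal sketch of this update (a constant-rate toy with illustrative parameters, far simpler than the paper's place-field model):

```python
import math

def track_rate(spike_train, dt=0.001, eps=0.01, theta0=0.0):
    """Instantaneous steepest ascent on the point-process log likelihood.

    With intensity lambda = exp(theta), the per-bin log likelihood is
    dN*theta - exp(theta)*dt, whose gradient gives the update below.
    """
    theta = theta0
    for dN in spike_train:
        theta += eps * (dN - math.exp(theta) * dt)
    return math.exp(theta)   # estimated firing rate in Hz

# deterministic toy: one spike every 50 one-millisecond bins -> 20 Hz
spikes = [1 if k % 50 == 0 else 0 for k in range(100_000)]
rate_hat = track_rate(spikes)
```

Because the update uses one bin at a time, the estimate can move on the time scale of single bins, which is the millisecond-scale tracking the abstract describes.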


Journal ArticleDOI
TL;DR: The results show that for constant power complex Gaussian noise, if the signal is matched to the steering vector, ABORT, GLRT, and AMF give approximately equivalent probability of detection, higher than that of ACE, which trades detection probability for an extra invariance to scale mismatch between training and test data.
Abstract: Research in the area of signal detection in the presence of unknown interference has resulted in a number of adaptive detection algorithms. Examples of such algorithms include the adaptive matched filter (AMF), the generalized likelihood ratio test (GLRT), and the adaptive coherence estimator (ACE). Each of these algorithms results in a tradeoff between detection performance for matched signals and rejection performance for mismatched signals. This paper introduces a new detection algorithm we call the adaptive beamformer orthogonal rejection test (ABORT). Our test decides if an observation contains a multidimensional signal belonging to one subspace or if it contains a multidimensional signal belonging to an orthogonal subspace when unknown complex Gaussian noise is present. In our analysis, we use a statistical hypothesis testing framework to develop a generalized likelihood ratio decision rule. We evaluate the performance of this decision rule in both the matched and mismatched signal cases. Our results show that for constant power complex Gaussian noise, if the signal is matched to the steering vector, ABORT, GLRT, and AMF give approximately equivalent probability of detection, higher than that of ACE, which trades detection probability for an extra invariance to scale mismatch between training and test data. Of these four tests, ACE is most selective and, therefore, least tolerant of mismatch, whereas AMF is most tolerant of mismatch and, therefore, least selective. ABORT and GLRT offer compromises between these extremes, with ABORT more like ACE and with GLRT more like AMF.

163 citations


Journal ArticleDOI
TL;DR: Computer simulation is used to study the convergence speed and steady-state BER misadjustment of this adaptive MBER linear multiuser detector, and the results show that it outperforms an existing LMS-style adaptiveMBER algorithm.
Abstract: The problem of constructing adaptive minimum bit error rate (MBER) linear multiuser detectors is considered for direct-sequence code division multiple access (DS-CDMA) signals transmitted through multipath channels. Based on the approach of kernel density estimation for approximating the bit error rate (BER) from training data, a least mean squares (LMS) style stochastic gradient adaptive algorithm is developed for training linear multiuser detectors. Computer simulation is used to study the convergence speed and steady-state BER misadjustment of this adaptive MBER linear multiuser detector, and the results show that it outperforms an existing LMS-style adaptive MBER algorithm presented by Yeh et al. (see Proc. Globecom, Sydney, Australia, p.3590-95, 1998).

152 citations
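The detector above is trained with an LMS-style stochastic gradient update. As background, the classic LMS recursion that such algorithms build on can be sketched as follows (a generic channel-identification example with illustrative parameters, not the paper's kernel-density MBER update itself):

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Classic LMS recursion: w <- w + mu * e * u."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]    # regressor, newest sample first
        e = d[k] - w @ u                     # instantaneous error
        w = w + mu * e * u                   # stochastic gradient step
    return w

# identify a toy 2-tap channel from noiseless input/output data
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3])
x = rng.standard_normal(2000)
d = np.convolve(x, h_true)[:len(x)]
w_hat = lms_identify(x, d, n_taps=2, mu=0.05)
```

Minimizing the mean square error this way is not the same as minimizing the bit error rate, which is exactly the gap the MBER detector addresses.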


Proceedings ArticleDOI
25 Aug 2001
TL;DR: The operational features of the OPAC algorithm are presented, and the implementation and field testing of OPAC within the RT-TRACS system are described.
Abstract: The Real-time Traffic Adaptive Control System (RT-TRACS) represents a new, state-of-the-art system in advanced traffic signal control. It has been developed cooperatively by a team of U.S. academic, private and public researchers under the guidance of the Federal Highway Administration (FHWA). The system provides a framework to run multiple traffic control algorithms, existing ones as well as new adaptive algorithms. The OPAC (Optimized Policies for Adaptive Control) control strategy, which provides a dual capability of distributed individual intersection control as well as coordinated control of intersections in a network, is the first adaptive algorithm implemented within the RT-TRACS framework. OPAC was the first comprehensive strategy to be developed in the U.S. for real-time traffic-adaptive control of signal systems. This paper presents the operational features of the OPAC algorithm and describes the implementation and field testing of OPAC within the RT-TRACS system.

147 citations


Journal ArticleDOI
TL;DR: Based on the a posteriori error bound of hp-discontinuous Galerkin finite element approximations to first-order hyperbolic problems, the corresponding adaptive algorithm is designed and implemented to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance.
Abstract: We consider the a posteriori error analysis of hp-discontinuous Galerkin finite element approximations to first-order hyperbolic problems. In particular, we discuss the question of error estimation for linear functionals, such as the outflow flux and the local average of the solution. Based on our a posteriori error bound we design and implement the corresponding adaptive algorithm to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local polynomial-degree variation and local mesh subdivision. The theoretical results are illustrated by a series of numerical experiments.

145 citations


Journal ArticleDOI
TL;DR: An adaptive hybrid algorithm to invert ocean acoustic field measurements for seabed geoacoustic parameters is presented, employing an adaptive approach to control the trade-off between random variation and gradient-based information in the inversion.
Abstract: This paper presents an adaptive hybrid algorithm to invert ocean acoustic field measurements for seabed geoacoustic parameters. The inversion combines a global search (simulated annealing) and a local method (downhill simplex), employing an adaptive approach to control the trade-off between random variation and gradient-based information in the inversion. The result is an efficient and effective algorithm that successfully navigates challenging parameter spaces including large numbers of local minima, strongly correlated parameters, and a wide range of parameter sensitivities. The algorithm is applied to a set of benchmark test cases, which includes inversion of simulated measurements with and without noise, and cases where the model parameterization is known and where the parameterization must be determined as part of the inversion. For accurate data, the adaptive inversion often produces a model with a Bartlett mismatch lower than the numerical error of the propagation model used to compute the replica fields. For noisy synthetic data, the inversion produces a model with a mismatch that is lower than that for the true parameters. Comparison with previous inversions indicates that the adaptive hybrid method provides the best results to date for the benchmark cases.

140 citations


Proceedings ArticleDOI
22 Apr 2001
TL;DR: This paper examines the question of providing feedback from the network such that the congestion controllers derived from the penalty function formulation lead to the solution of the original constrained problem, and results in two separate systems which are stable individually.
Abstract: Fair resource allocation in high-speed networks such as the Internet can be viewed as a constrained optimization program. Kelly and his co-workers have shown that an unconstrained penalty function formulation of this problem can be used to design congestion controllers that are stable. In this paper, we examine the question of providing feedback from the network such that the congestion controllers derived from the penalty function formulation lead to the solution of the original constrained problem. This can be viewed as the decentralized design of early congestion notification (ECN) marking rates at each node in the Internet to ensure global loss-free operation of a fluid model of the network. We then look at the stability of such a scheme using a time-scale decomposition of the system. This results in two separate systems which are stable individually, and we show that under certain assumptions the entire system is semi-globally stable and converges to the equilibrium point exponentially fast.

139 citations
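Kelly's penalty-function formulation, which the paper builds on, adjusts each source rate toward the point where the willingness-to-pay balances the price charged by the network. A one-link Euler sketch of the primal controller (hypothetical linear marking function p(x) = x and illustrative parameters, not the paper's ECN marking design):

```python
def kelly_primal(w, kappa, price, x0, dt, steps):
    """Euler simulation of the primal controller x' = kappa * (w - x * price(x)).

    Equilibrium: x * price(x) = w, i.e. the pay rate equals the
    willingness-to-pay w.
    """
    x = x0
    for _ in range(steps):
        x += dt * kappa * (w - x * price(x))
    return x

# linear marking p(x) = x: the equilibrium rate solves x^2 = w
x_eq = kelly_primal(w=4.0, kappa=1.0, price=lambda x: x, x0=1.0, dt=0.01, steps=2000)
```

The paper's question is how the network should generate the feedback `price(x)` so that this stable dynamic also lands on the solution of the original constrained problem.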


01 Jan 2001
TL;DR: This paper focuses on the NLP-based generalisation and the strategy for pruning both the search space and the final rule set, and shows a significant gain in using NLP in terms of effectiveness and reduction of training time.
Abstract: (LP)2 is an algorithm for adaptive Information Extraction from Web-related text that induces symbolic rules by learning from a corpus tagged with SGML tags. Induction is performed by bottom-up generalisation of examples in a training corpus. Training is performed in two steps: initially a set of tagging rules is learned; then additional rules are induced to correct mistakes and imprecision in tagging. Shallow NLP is used to generalise rules beyond the flat word structure. Generalization allows a better coverage on unseen texts, as it limits data sparseness and overfitting in the training phase. In experiments on publicly available corpora the algorithm outperforms any other algorithm presented in the literature and tested on the same corpora. Experiments also show a significant gain in using NLP in terms of (1) effectiveness, (2) reduction of training time, and (3) training corpus size. In this paper we present the machine learning algorithm for rule induction. In particular we focus on the NLP-based generalisation and the strategy for pruning both the search space and the final rule set.

Journal ArticleDOI
TL;DR: This paper indicates then how the ingredients combined with concepts from nonlinear or best N -term approximation culminate in an adaptive wavelet scheme for elliptic selfadjoint problems covering boundary value problems as well as boundary integral equations.

Proceedings ArticleDOI
01 Dec 2001
TL;DR: A new adaptive algorithm for automatic detection of text from a natural scene that combines the advantages of several previous approaches for text detection, and utilizes a focus-of-attention approach for text finding is presented.
Abstract: We present a new adaptive algorithm for automatic detection of text from a natural scene. The initial cues of text regions are first detected from the captured image/video. An adaptive color modeling and searching algorithm is then utilized near the initial text cues, to discriminate text/non-text regions. An EM optimization algorithm is used for color modeling, under the constraint of text layout relations for a specific language. The proposed algorithm combines the advantages of several previous approaches for text detection, and utilizes a focus-of-attention approach for text finding. The whole algorithm is applied in a prototype system that can automatically detect and recognize sign input from a video camera, and translate the signs into English text or voice streams. We present evaluation results of our algorithm on this system.

Journal ArticleDOI
TL;DR: The performance comparisons between the NN approach and the conventional adaptation approach with an observer are carried out to show the advantages of the proposed control approaches through simulation studies.
Abstract: A neural network (NN)-based adaptive controller with an observer is proposed for the trajectory tracking of robotic manipulators with unknown dynamics nonlinearities. It is assumed that the robotic manipulator has only joint angle position measurements. A linear observer is used to estimate the robot joint angle velocity, while NNs are employed to further improve the control performance of the controlled system through approximating the modified robot dynamics function. The adaptive controller for robots with an observer can guarantee the uniform ultimate bounds of the tracking errors and the observer errors as well as the bounds of the NN weights. For performance comparisons, the conventional adaptive algorithm with an observer using linearity in parameters of the robot dynamics is also developed in the same control framework as the NN approach for online approximating unknown nonlinearities of the robot dynamics. Main theoretical results for designing such an observer-based adaptive controller with the NN approach using multilayer NNs with sigmoidal activation functions, as well as with the conventional adaptive approach using linearity in parameters of the robot dynamics, are given. The performance comparisons between the NN approach and the conventional adaptation approach with an observer are carried out to show the advantages of the proposed control approaches through simulation studies.

Proceedings ArticleDOI
25 Nov 2001
TL;DR: An adaptive algorithm for call admission control in wireless networks is developed that guarantees that the handoff blocking rate is below its given threshold and at the same time, minimizes the new call blocking rate.
Abstract: In the present paper, we develop an adaptive algorithm for call admission control in wireless networks. The algorithm is built upon the concept of guard channels, and it uses an adaptation algorithm to automatically search for the optimal number of guard channels to be reserved at each base station. The quality of service parameters used in our study are the new call blocking probability and the handoff call blocking probability. Our simulation studies are performed for comparisons of the present algorithm with the static guard channel policy. Simulation results show that our algorithm guarantees that the handoff blocking rate is below its given threshold and, at the same time, minimizes the new call blocking rate.
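The guard-channel idea and a one-step adaptation of the reservation can be sketched as follows (a hypothetical simplification: the paper searches for the optimal reservation via its own adaptation algorithm, while this sketch just nudges the guard count toward a handoff-blocking target):

```python
def admit(call_type, busy, capacity, guard):
    """Guard-channel CAC: handoffs may use all channels; new calls
    are blocked once only the guard channels remain free."""
    if call_type == "handoff":
        return busy < capacity
    return busy < capacity - guard

def adapt_guard(guard, handoff_block_rate, target, capacity):
    """Hypothetical adaptation step: grow the reservation while the
    measured handoff blocking rate exceeds its threshold, else shrink it."""
    if handoff_block_rate > target:
        return min(guard + 1, capacity)
    return max(guard - 1, 0)
```

Reserving more guard channels lowers handoff blocking at the cost of higher new-call blocking, which is the trade-off the adaptive search navigates.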

Journal ArticleDOI
09 Sep 2001
TL;DR: In this article, an adaptive slicing algorithm which can generate optimal slices to achieve deposition without support structures for five-axis hybrid layered manufacturing is presented, using the Laser Aided Manufacturing Process (LAMP) as an example.
Abstract: An adaptive slicing algorithm which can generate optimal slices to achieve deposition without support structures for five-axis hybrid layered manufacturing is presented in this paper. Different from current adaptive slicing, this technique varies not only the layer thickness but also the slicing direction. The Laser Aided Manufacturing Process (LAMP), a five-axis system combining material addition and removal that was developed at the University of Missouri-Rolla, is used as an example. The multiple-degree-of-freedom system allows LAMP to build a part with minimum support structure. However, an automated method for path planning of such a system is necessary. Two techniques have been adapted to build the overhang between two adjacent layers: transition wall and surface tension. This paper addresses the critical slicing algorithm based on the above two techniques. The slicing direction is determined by a marching algorithm which is based on the surface normals of points on the side surface of the current slice.

Journal ArticleDOI
TL;DR: An adaptive threshold modulation framework is presented to improve halftone quality by optimizing error diffusion parameters in the least squares sense and derive adaptive algorithms to optimize edge enhancement halftoning and green noise halftoned.
Abstract: Grayscale digital image halftoning quantizes each pixel to one bit. In error diffusion halftoning, the quantization error at each pixel is filtered and fed back to the input in order to diffuse the quantization error among the neighboring grayscale pixels. Error diffusion introduces nonlinear distortion (directional artifacts), linear distortion (sharpening), and additive noise. Threshold modulation, which alters the quantizer input, has been previously used to reduce either directional artifacts or linear distortion. This paper presents an adaptive threshold modulation framework to improve halftone quality by optimizing error diffusion parameters in the least squares sense. The framework models the quantizer implicitly, so a wide variety of quantizers may be used. Based on the framework, we derive adaptive algorithms to optimize 1) edge enhancement halftoning and 2) green noise halftoning. In edge enhancement halftoning, we minimize linear distortion by controlling the sharpening control parameter. We may also break up directional artifacts by replacing the thresholding quantizer with a deterministic bit flipping (DBF) quantizer. For green noise halftoning, we optimize the hysteresis coefficients.
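The error diffusion loop being optimized can be sketched with the classic Floyd-Steinberg weights (a standard fixed-coefficient baseline, not the adaptively optimized parameters or the DBF quantizer of the paper):

```python
import numpy as np

def floyd_steinberg(img):
    """Error diffusion with the classic Floyd-Steinberg kernel.

    img: 2-D array of grayscale values in [0, 1]; returns a 0/1 halftone.
    The quantization error at each pixel is pushed onto not-yet-visited
    neighbors, so local average intensity is preserved.
    """
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0     # thresholding quantizer
            out[y, x] = new
            err = old - new                       # quantization error
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

halftone = floyd_steinberg(np.full((8, 8), 0.5))
```

The paper's framework treats the diffusion weights (and a sharpening control) as adaptable parameters and tunes them by least squares instead of fixing them as above.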

Proceedings ArticleDOI
04 Dec 2001
TL;DR: An adaptive method for the direct design of FIR filters for the input spectrum design problem is developed and it is shown that this problem may be formulated as a convex optimization problem with linear matrix inequality constraints.
Abstract: An optimal experiment design for system identification is studied. The main contribution is the development of an adaptive method for the direct design of FIR filters for the input spectrum design problem. The accuracy of the identified model is measured in terms of the closed-loop performance of the system using the controller designed from the model. Under the assumption that the identified parameters are sufficiently close to their true values, we show that this problem may be formulated as a convex optimization problem with linear matrix inequality constraints. Thus, a global solution (if feasible) is guaranteed and the solution may further achieve any demanded accuracy. The problem formulation is particularly suited for a practical implementation, thus the extension of the experiment design problem into an iterative/adaptive identification-experiment design framework is straightforward. The adaptive approach is further studied in a simulation example, where the rapid convergence of the method is noted, and the superior result compared to an arbitrary experiment design is clear. The example supports the use of the approximations taken in the theoretical approach.

Journal ArticleDOI
TL;DR: A residual-based a posteriori error estimate for boundary integral equations on surfaces is derived from a localisation argument that involves a Lipschitz partition of unity such as nodal basis functions known from finite element methods.
Abstract: A residual-based a posteriori error estimate for boundary integral equations on surfaces is derived in this paper. A localisation argument involves a Lipschitz partition of unity such as nodal basis functions known from finite element methods. The abstract estimate does not use any property of the discrete solution, but simplifies for the Galerkin discretisation of Symm's integral equation if piecewise constants belong to the test space. The estimate suggests an isotropic adaptive algorithm for automatic mesh-refinement. An alternative motivation from a two-level error estimate is possible but then requires a saturation assumption. The efficiency of an anisotropic version is discussed and supported by numerical experiments.

Patent
Hiroyasu Sano1
13 Nov 2001
TL;DR: In this paper, a path detector detects multipath waves on a transmission line based on a plurality of despread signals corresponding to fixed directional beams, and an adder combines the adaptive beam signal for all paths, and a data judging section judges data included in the combined signal.
Abstract: A path detector detects multipath waves on a transmission line based on a plurality of despread signals corresponding to fixed directional beams. An adaptive beam forming section forms an adaptive beam combined signal for each path, using a weight generated by an adaptive algorithm and the despread signals. An adder combines the adaptive beam signal for all paths, and a data judging section judges data included in the combined signal.

Journal ArticleDOI
TL;DR: Computable a posteriori error bounds and related adaptive mesh-refining algorithms are provided for the numerical treatment of monotone stationary flow problems with a quite general class of conforming and non-conforming finite element methods.
Abstract: Computable a posteriori error bounds and related adaptive mesh-refining algorithms are provided for the numerical treatment of monotone stationary flow problems with a quite general class of conforming and non-conforming finite element methods. A refined residual-based error estimate generalises the works of Verfurth; Dari, Duran and Padra; Bao and Barrett. As a consequence, reliable and efficient averaging estimates can be established on unstructured grids. The symmetric formulation of the incompressible flow problem models certain non-Newtonian flow problems and the Stokes problem with mixed boundary conditions. A Helmholtz decomposition avoids any regularity or saturation assumption in the mathematical error analysis. Numerical experiments for the partly nonconforming method analysed by Kouhia and Stenberg indicate efficiency of related adaptive mesh-refining algorithms.

Book ChapterDOI
05 Jan 2001
TL;DR: This paper presents experiments for searching 114 megabytes of text from the World Wide Web using 5,000 actual user queries from a commercial search engine, and studies several improvement techniques for the standard algorithms to find an algorithm that outperforms existing algorithms in most cases.
Abstract: In [3] we introduced an adaptive algorithm for computing the intersection of k sorted sets within a factor of at most 8k comparisons of the information-theoretic lower bound under a model that deals with an encoding of the shortest proof of the answer. This adaptive algorithm performs better for "burstier" inputs than a straightforward worst-case optimal method. Indeed, we have shown that, subject to a reasonable measure of instance difficulty, the algorithm adapts optimally up to a constant factor. This paper explores how this algorithm behaves under actual data distributions, compared with standard algorithms. We present experiments for searching 114 megabytes of text from the World Wide Web using 5,000 actual user queries from a commercial search engine. From the experiments, it is observed that the theoretically optimal adaptive algorithm is not always the optimal in practice, given the distribution of WWW text data. We then proceed to study several improvement techniques for the standard algorithms. These techniques combine improvements suggested by the observed distribution of the data as well as the theoretical results from [3]. We perform controlled experiments on these techniques to determine which ones result in improved performance, resulting in an algorithm that outperforms existing algorithms in most cases.
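The adaptive flavor of sorted-set intersection can be sketched with doubling (galloping) search, which skips long runs cheaply on "bursty" inputs (a two-set illustration in the same spirit, not the exact k-set algorithm of [3]):

```python
from bisect import bisect_left

def gallop_search(arr, target, lo):
    """Insertion point of target in sorted arr[lo:], found by doubling
    the step until the target is bracketed, then binary searching."""
    step, hi = 1, lo
    while hi < len(arr) and arr[hi] < target:
        lo = hi
        hi += step
        step *= 2
    return bisect_left(arr, target, lo, min(hi, len(arr)))

def adaptive_intersect(a, b):
    """Intersect two sorted lists; the cost adapts to how interleaved
    the lists are rather than always scanning every element."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i = gallop_search(a, b[j], i)
        else:
            j = gallop_search(b, a[i], j)
    return out
```

When one list's elements cluster between few elements of the other, the doubling steps cross each cluster in logarithmic time, which is why such methods beat a worst-case optimal merge on bursty data.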

Journal ArticleDOI
TL;DR: In this paper, a discrete robust adaptive quasi-sliding-mode tracking controller is presented for input-output systems with unknown parameters, unmodeled dynamics, and bounded disturbances.
Abstract: In this paper, a discrete robust adaptive quasi-sliding-mode tracking controller is presented for input-output systems with unknown parameters, unmodeled dynamics, and bounded disturbances. The robust tracking controller is comprised of adaptive control and a sliding-mode-based control design. The bounded motion of the system around the sliding surface and the stability of the global system in the sense that all signals remain bounded are guaranteed. The adaptive algorithm, in which the deadzone method is employed even though the upper and lower bounds of the disturbances are unknown, is the extension of the authors' previous work for the state-space systems. An example and its simulation results are presented to illustrate the proposed approach.

Journal ArticleDOI
TL;DR: A novel scheme of adaptively updating the structure and parameters of the neural network for rainfall estimation is presented, which enables the network to account for any variability in the relationship between radar measurements and precipitation estimation and also to incorporate new information to the network without retraining the complete network.
Abstract: Recent research has shown that neural network techniques can be used successfully for ground rainfall estimation from radar measurements. The neural network is a nonparametric method for representing the relationship between radar measurements and rainfall rate. The relationship is derived directly from a dataset consisting of radar measurements and rain gauge measurements. The effectiveness of the rainfall estimation by using neural networks can be influenced by many factors such as the representativeness and sufficiency of the training dataset, the generalization capability of the network to new data, season change, location change, and so on. In this paper, a novel scheme of adaptively updating the structure and parameters of the neural network for rainfall estimation is presented. This adaptive neural network scheme enables the network to account for any variability in the relationship between radar measurements and precipitation estimation and also to incorporate new information to the network without retraining the complete network from the beginning. This precipitation estimation scheme is a good compromise between the competing demands of accuracy and generalization. Data collected by a Weather Surveillance Radar-1988 Doppler (WSR-88D) and a rain gauge network were used to evaluate the performance of the adaptive network for rainfall estimation. It is shown that the adaptive network can estimate rainfall fairly accurately. The implementation of the adaptive network is very efficient and convenient for real-time rainfall estimation to be used with WSR-88D.

Journal ArticleDOI
TL;DR: The hybrid LMS-MMSE inverse halftone algorithm has the advantages of both excellent reconstructed quality and fast speed; in the experiments, error diffusion yields the best reconstruction quality among all three halftone techniques.
Abstract: The objective of this work is to reconstruct high quality gray-level images from bilevel halftone images. We develop optimal inverse halftoning methods for several commonly used halftone techniques, which include dispersed-dot ordered dither, clustered-dot ordered dither, and error diffusion. First, the least-mean-square (LMS) adaptive filtering algorithm is applied in the training of inverse halftone filters. The resultant optimal mask shapes are significantly different for various halftone techniques, and these mask shapes are also quite different from the square shape that was frequently used in the literature. In the next step, we further reduce the computational complexity by using lookup tables designed by the minimum mean square error (MMSE) method. The optimal masks obtained from the LMS method are used as the default filter masks. Finally, we propose the hybrid LMS-MMSE inverse halftone algorithm. It normally uses the MMSE table lookup method for its fast speed. When an empty cell is referenced, the LMS method is used to reconstruct the gray-level value. Consequently, the hybrid method has the advantages of both excellent reconstructed quality and fast speed. In the experiments, error diffusion yields the best reconstruction quality among all three halftone techniques.
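The abstract does not spell out the LMS training details; as a minimal sketch of the idea, the filter below learns a linear mapping from a halftone neighborhood to the original gray level. The mask shape, step size, and training data are all supplied by the caller as assumptions (the paper derives optimal, technique-specific mask shapes, which are not reproduced here).

```python
import numpy as np

def train_inverse_halftone_lms(halftone, gray, mask_offsets, mu=0.001, epochs=1):
    """LMS training of a linear inverse-halftone filter (illustrative sketch).

    halftone     : 2-D array of bilevel pixels in {0, 1}
    gray         : 2-D array of target gray levels (same shape)
    mask_offsets : list of (dr, dc) offsets defining the filter mask shape
    mu           : LMS step size (assumed value)
    """
    w = np.zeros(len(mask_offsets))
    rows, cols = halftone.shape
    pad = max(max(abs(dr), abs(dc)) for dr, dc in mask_offsets)
    for _ in range(epochs):
        for r in range(pad, rows - pad):
            for c in range(pad, cols - pad):
                # Gather the halftone neighborhood covered by the mask.
                x = np.array([halftone[r + dr, c + dc] for dr, dc in mask_offsets])
                e = gray[r, c] - w @ x  # estimation error at this pixel
                w += mu * e * x         # LMS weight update
    return w
```

Reconstruction then applies the trained weights to each pixel's neighborhood; in the hybrid scheme, this slower path would only be taken when the MMSE lookup table has no entry for the observed neighborhood pattern.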

Journal ArticleDOI
TL;DR: This work considers the design of an adaptive algorithm for finite impulse response channel estimation, which incorporates partial knowledge of the channel, specifically, the additive noise variance, and develops a noise-constrained LMS algorithm where the step-size rule arises naturally from the constraints.
Abstract: We consider the design of an adaptive algorithm for finite impulse response channel estimation, which incorporates partial knowledge of the channel, specifically, the additive noise variance. Although the noise variance is not required for the offline Wiener solution, there are potential benefits (and limitations) for the learning behavior of an adaptive solution. In our approach, a Robbins-Monro algorithm is used to minimize the conventional mean square error criterion subject to a noise variance constraint and a penalty term necessary to guarantee uniqueness of the combined weight/multiplier solution. The resulting noise-constrained LMS (NCLMS) algorithm is a type of variable step-size LMS algorithm where the step-size rule arises naturally from the constraints. A convergence and performance analysis is carried out, and extensive simulations are conducted that compare NCLMS with several adaptive algorithms. This work also provides an appropriate framework for the derivation and analysis of other adaptive algorithms that incorporate partial knowledge of the channel.
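One plausible form of such a noise-constrained update is sketched below: an auxiliary multiplier-like state `lam` tracks how far the squared error is from the known noise variance, and the LMS step size is scaled accordingly. The recursion for `lam` and all parameter values are assumptions for illustration; the paper derives the exact NCLMS update from the constrained criterion.

```python
import numpy as np

def nclms_identify(x, d, num_taps, noise_var, mu0=0.01, beta=0.01, gamma=1.0):
    """Noise-constrained LMS sketch for FIR channel estimation.

    x         : input sequence
    d         : desired (channel output) sequence
    noise_var : known additive noise variance (the partial channel knowledge)
    mu0, beta, gamma : base step size, multiplier smoothing, and coupling
                       gain (all assumed values)
    """
    w = np.zeros(num_taps)
    lam = 0.0
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]  # regressor, newest sample first
        e = d[n] - w @ u
        # Multiplier state rises while e^2 exceeds the noise floor ...
        lam = (1 - beta) * lam + 0.5 * beta * (e * e - noise_var)
        # ... which enlarges the step size far from convergence and
        # shrinks it back toward mu0 as e^2 approaches noise_var.
        mu = mu0 * (1.0 + gamma * lam)
        w += mu * e * u
    return w
```

The qualitative behavior matches the abstract's description: the step-size rule arises from the noise-variance constraint rather than from an ad hoc schedule.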


Patent
09 Feb 2001
TL;DR: In this paper, a matched filter computational architecture is utilized, in which common digital arithmetic elements are used for both acquisition and tracking purposes; as each channel is sequentially acquired by the parallel matched filter, a subset of the arithmetic elements is then dedicated to the subsequent tracking of that channel.
Abstract: A spread-spectrum demodulator architecture (10) is presented which utilizes parallel (Fig. 1) processing to accomplish rapid signal acquisition with simultaneous tracking of multiple channels, while implementing an integrated multi-element adaptive beamformer, Rake combiner, and multi-user detector (MUD). A matched filter computational architecture is utilized, in which common digital arithmetic elements are used for both acquisition and tracking purposes. As each channel is sequentially acquired by the parallel matched filter, a subset of the arithmetic elements are then dedicated to the subsequent tracking of that channel. Additionally, multiple data inputs and delay lines are present, connecting the sampled baseband data streams of numerous RF bands and antenna elements with the arithmetic elements. The matched filter/despreader processing is virtually independent of channel origin or utilization; e.g., CDMA users, RF bands, beamformer elements, or Rake Fingers. Integration of the beamformer weighting computation with the demodulator results in substantial savings by sharing the existing circuitry performing carrier tracking and AGC. An optimal demodulator solution can be achieved through unified 'space/time' processing, by providing all observables (element snapshots, Rake Fingers, carrier/symbol SNR/phase, etc.), for multiple channels, to a single adaptive algorithm processor that can beamform, Rake, and perform joint detection (MUD).

Proceedings ArticleDOI
01 Aug 2001
TL;DR: An adaptive algorithm that incorporates the visual masking feature of the human visual system into watermarking, together with a novel method to classify wavelet blocks, is proposed; experiments demonstrate that the watermarks generated with the proposed algorithm are invisible and robust against noise and commonly used image processing techniques.
Abstract: In this paper, a new embedding strategy for DWT-based watermarking is proposed. Different from the existing watermarking schemes in which low frequency coefficients are explicitly excluded from watermark embedding, we claim that watermarks should be embedded in the low frequency subband first, and the remainder should be embedded in high frequency subbands according to the significance of subbands. We also claim that different embedding formulas should be applied to the low frequency subband and the high frequency subbands, respectively. Applying this strategy, an adaptive algorithm incorporating the visual masking feature of the human visual system into watermarking is proposed. In the algorithm, a novel method to classify wavelet blocks is presented. The experimental results demonstrate that the watermarks generated with the proposed algorithm are invisible and robust against noise and commonly used image processing techniques.
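As a rough illustration of subband-dependent embedding, the sketch below uses a hand-rolled one-level Haar DWT and placeholder rules: a small multiplicative change in the low-frequency (LL) subband and a larger additive change in the highest-frequency (HH) subband. The paper's actual embedding formulas, subband ordering, and visual-masking block classification are not reproduced; the strengths `alpha_low` and `alpha_high` are assumed values.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0  # column averages
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0  # column differences
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    rows = ll.shape[0] * 2
    lo = np.empty((rows, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    img = np.empty((rows, ll.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = lo + hi, lo - hi
    return img

def embed_watermark(img, wm, alpha_low=0.02, alpha_high=0.5):
    """Embed a +/-1 watermark: multiplicative rule in LL (small strength),
    additive rule in HH (larger strength). Placeholder formulas only."""
    ll, lh, hl, hh = haar_dwt2(img)
    ll = ll * (1.0 + alpha_low * wm)  # low-frequency, perceptually scaled
    hh = hh + alpha_high * wm         # high-frequency, plain additive
    return haar_idwt2(ll, lh, hl, hh)
```

The multiplicative rule in LL scales the change with coefficient magnitude, which is one common way to keep low-frequency embedding below the visibility threshold.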

Journal ArticleDOI
TL;DR: Novel trained and blind adaptive algorithms based on stochastic gradient techniques are proposed, which are shown to approximate the ST-MMSE solution without requiring knowledge of the channel.
Abstract: We consider the problem of detecting synchronous code division multiple access (CDMA) signals in multipath channels that result in multiple access interference (MAI). It is well known that such challenging conditions may create severe near-far situations in which the standard techniques of combined power control and temporal single-user RAKE receivers provide poor performance. To address the shortcomings of the RAKE receiver, multiple antenna receivers combining space-time processing with multiuser detection have been proposed in the literature. Specifically, a space-time detector based on minimizing the mean-squared error between the data stream and the linear combiner output has shown great potential in achieving good near-far performance with much less complexity than the optimum space-time multiuser detector. Moreover, this space-time minimum mean-squared error (ST-MMSE) multiuser detector has the additional advantage of being well suited for adaptive implementation. We propose novel trained and blind adaptive algorithms based on stochastic gradient techniques, which are shown to approximate the ST-MMSE solution without requiring knowledge of the channel. We show that these linear space-time detectors can potentially provide significant capacity enhancements (up to one order of magnitude) over the conventional temporal single-user RAKE receiver.
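The trained-mode adaptation can be sketched as a plain stochastic-gradient (LMS-style) recursion on a stacked space-time sample vector. The layout of `received` and the step size are assumptions, and the blind variant described in the abstract is omitted.

```python
import numpy as np

def train_st_mmse_detector(received, bits, mu=0.01):
    """Trained stochastic-gradient adaptation of a linear detector (sketch).

    received : array of shape (num_symbols, dim), where each row stacks
               the antenna/chip samples observed for one symbol
    bits     : known +/-1 training symbols of the desired user
    mu       : step size (assumed value)
    """
    w = np.zeros(received.shape[1])
    for r, b in zip(received, bits):
        e = b - w @ r   # error against the known training symbol
        w += mu * e * r # stochastic-gradient step toward the MMSE solution
    return w
```

After training, the detector decides `sign(w @ r)` for each new stacked sample vector; no explicit channel estimate is ever formed, which is the practical appeal the abstract points to.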