
Showing papers on "Adaptive algorithm published in 2008"


Journal ArticleDOI
TL;DR: It is shown that the AFEM yields a decay rate of the energy error plus oscillation in terms of the number of degrees of freedom as dictated by the best approximation for this combined nonlinear quantity.
Abstract: We analyze the simplest and most standard adaptive finite element method (AFEM), with any polynomial degree, for general second order linear, symmetric elliptic operators. As is customary in practice, the AFEM marks exclusively according to the error estimator and performs a minimal element refinement without the interior node property. We prove that the AFEM is a contraction, for the sum of the energy error and the scaled error estimator, between two consecutive adaptive loops. This geometric decay is instrumental to derive the optimal cardinality of the AFEM. We show that the AFEM yields a decay rate of the energy error plus oscillation in terms of the number of degrees of freedom as dictated by the best approximation for this combined nonlinear quantity.

507 citations


Journal ArticleDOI
TL;DR: A model-constrained adaptive sampling methodology is proposed for the reduction of large-scale systems with high-dimensional parametric input spaces using an efficient adaptive algorithm that scales well to systems with a large number of parameters.
Abstract: A model-constrained adaptive sampling methodology is proposed for the reduction of large-scale systems with high-dimensional parametric input spaces. Our model reduction method uses a reduced basis approach, which requires the computation of high-fidelity solutions at a number of sample points throughout the parametric input space. A key challenge that must be addressed in the optimization, control, and probabilistic settings is the need for the reduced models to capture variation over this parametric input space, which, for many applications, will be of high dimension. We pose the task of determining appropriate sample points as a PDE-constrained optimization problem, which is implemented using an efficient adaptive algorithm that scales well to systems with a large number of parameters. The methodology is demonstrated using examples with parametric input spaces of dimension 11 and 21, which describe thermal analysis and design of a heat conduction fin, and compared with statistically based sampling methods. For these examples, the model-constrained adaptive sampling leads to reduced models that, for a given basis size, have error several orders of magnitude smaller than that obtained using the other methods.

320 citations


Proceedings ArticleDOI
27 Jan 2008
TL;DR: A fast adaptive binarization algorithm that yields the same quality of Binarization as the Sauvola method but runs in time close to that of global thresholding methods (like Otsu's method), independent of the window size.
Abstract: Adaptive binarization is an important first step in many document analysis and OCR processes. This paper describes a fast adaptive binarization algorithm that yields the same quality of binarization as the Sauvola method, but runs in time close to that of global thresholding methods (like Otsu's method), independent of the window size. The algorithm combines the statistical constraints of Sauvola's method with integral images. Testing on the UW-1 dataset demonstrates a 20-fold speedup compared to the original Sauvola algorithm.
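The integral-image trick behind this speedup is easy to sketch: two cumulative-sum tables give the local mean and variance of any window in constant time, after which the Sauvola threshold is applied per pixel. The following NumPy version is illustrative, not the authors' implementation; the function name, window size, and the parameters k and R are assumed defaults.

```python
import numpy as np

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Sauvola thresholding with integral images: O(1) window statistics."""
    img = img.astype(np.float64)
    h, w = img.shape
    # Zero-padded integral images of the image and its square.
    ii = np.zeros((h + 1, w + 1))
    ii2 = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    ii2[1:, 1:] = np.cumsum(np.cumsum(img ** 2, axis=0), axis=1)
    r = window // 2
    ys, xs = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)
    s1 = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    s2 = ii2[y1, x1] - ii2[y0, x1] - ii2[y1, x0] + ii2[y0, x0]
    mean = s1 / area
    std = np.sqrt(np.maximum(s2 / area - mean ** 2, 0.0))
    thresh = mean * (1.0 + k * (std / R - 1.0))   # Sauvola's formula
    return (img > thresh).astype(np.uint8)        # 1 = background, 0 = ink

# Synthetic "document": a dark 5x5 mark on a bright page.
page = np.full((30, 30), 200.0)
page[12:17, 12:17] = 20.0
binary = sauvola_binarize(page, window=15, k=0.2)
```

Because the window statistics no longer depend on the window size, the cost per pixel is constant, which is the source of the reported speedup.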

317 citations


Journal ArticleDOI
TL;DR: An adaptive algorithm that iteratively updates both the weights and component parameters of a mixture importance sampling density so as to optimise the performance of importance sampling, as measured by an entropy criterion is proposed.
Abstract: In this paper, we propose an adaptive algorithm that iteratively updates both the weights and component parameters of a mixture importance sampling density so as to optimise the performance of importance sampling, as measured by an entropy criterion. The method, called M-PMC, is shown to be applicable to a wide class of importance sampling densities, which includes in particular mixtures of multivariate Student t distributions. The performance of the proposed scheme is studied on both artificial and real examples, highlighting in particular the benefit of a novel Rao-Blackwellisation device which can be easily incorporated in the updating scheme.
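One adaptation step of such a mixture scheme can be sketched as follows, assuming a Gaussian (rather than Student t) mixture, a toy N(3, 1) target, and fixed component scales; the component responsibilities play the role of the Rao-Blackwellisation mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Demo target: N(3, 1), unnormalised (an assumption for illustration).
    return -0.5 * (x - 3.0) ** 2

def m_pmc_step(alphas, mus, sigma=1.0, n=5000):
    """Sample from the mixture, compute importance weights, then update
    mixture weights and means via the component responsibilities."""
    comp = rng.choice(len(alphas), size=n, p=alphas)
    x = rng.normal(mus[comp], sigma)
    # alpha_d * q_d(x_i); normalising constants cancel in the ratios below.
    joint = np.array([a * np.exp(-0.5 * ((x - m) / sigma) ** 2)
                      for a, m in zip(alphas, mus)])
    dens = joint.sum(axis=0)
    w = np.exp(log_target(x)) / dens
    w /= w.sum()                      # self-normalised importance weights
    rho = joint / dens                # Rao-Blackwellised responsibilities
    new_alphas = rho @ w
    new_mus = np.where(new_alphas > 1e-12,
                       (rho @ (w * x)) / np.maximum(new_alphas, 1e-12), mus)
    return new_alphas, new_mus, np.sum(w * x)

alphas, mus = np.array([0.5, 0.5]), np.array([-5.0, 5.0])
for _ in range(10):
    alphas, mus, est = m_pmc_step(alphas, mus)
```

Starting from badly placed components, the useful component migrates toward the target and absorbs almost all of the mixture weight.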

302 citations


Journal ArticleDOI
TL;DR: A novel localization algorithm, named discriminant-adaptive neural network (DANN), which takes the received signal strength from the access points (APs) as inputs to infer the client position in the wireless local area network (LAN) environment, and shows that the proposed algorithm is much higher in accuracy compared with other examined techniques.
Abstract: This brief paper presents a novel localization algorithm, named discriminant-adaptive neural network (DANN), which takes the received signal strength (RSS) from the access points (APs) as inputs to infer the client position in the wireless local area network (WLAN) environment. We extract the useful information into discriminative components (DCs) for network learning. The nonlinear relationship between RSS and the position is then accurately constructed by incrementally inserting the DCs and recursively updating the weightings in the network until no further improvement is required. Our localization system is developed in a real-world WLAN environment, where the realistic RSS measurement is collected. We implement the traditional approaches on the same test bed, including weighted k-nearest neighbor (WKNN), maximum likelihood (ML), and multilayer perceptron (MLP), and compare the results. The experimental results indicate that the proposed algorithm achieves much higher accuracy than the other examined techniques. The improvement can be attributed to the fact that only the useful information is efficiently extracted for positioning, while the redundant information is regarded as noise and discarded. Finally, the analysis shows that our network intelligently accomplishes learning while the inserted DCs provide sufficient information.
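Of the baselines compared here, WKNN is simple enough to sketch in a few lines: match the observed RSS vector against a radio map of fingerprints and average the positions of the closest matches. The radio map, AP count, and the eps guard below are invented for illustration; this is the baseline, not DANN itself.

```python
import numpy as np

def wknn_locate(fingerprints, positions, rss, k=3, eps=1e-9):
    """Weighted k-nearest-neighbour positioning: average the known
    positions of the k training fingerprints closest in RSS space,
    weighted by inverse RSS distance."""
    d = np.linalg.norm(fingerprints - rss, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# Toy radio map: RSS fingerprints (dBm, two APs) at four known positions.
fps = np.array([[-40.0, -70.0], [-70.0, -40.0], [-55.0, -55.0], [-60.0, -65.0]])
pos = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
est = wknn_locate(fps, pos, np.array([-40.0, -70.0]))
```

A query that exactly matches a stored fingerprint is pulled almost entirely to that fingerprint's position by the inverse-distance weighting.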

228 citations


Journal ArticleDOI
TL;DR: The characteristic basis function method (CBFM) has been hybridized with the adaptive cross approximation (ACA) algorithm to construct a reduced matrix equation in a time-efficient manner and to solve electrically large antenna array problems in-core, with a solve time orders of magnitude less than those in the conventional methods.
Abstract: The characteristic basis function method (CBFM) has been hybridized with the adaptive cross approximation (ACA) algorithm to construct a reduced matrix equation in a time-efficient manner and to solve electrically large antenna array problems in-core, with a solve time orders of magnitude less than those in the conventional methods. Various numerical examples are presented that demonstrate that the proposed method has a very good accuracy, computational efficiency and reduced memory storage requirement. Specifically, we analyze large 1-D and 2-D arrays of electrically interconnected tapered slot antennas (TSAs). The entire computational domain is subdivided into many smaller subdomains, each of which supports a set of characteristic basis functions (CBFs). We also present a novel scheme for generating the CBFs that do not conform to the edge condition at the truncated edge of each subdomain, and provide a minor overlap between the CBFs in adjacent subdomains. As a result, the CBFs preserve the continuity of the surface current across the subdomain interfaces, thereby circumventing the need to use separate "connection" basis functions.

223 citations


Journal ArticleDOI
TL;DR: The binning algorithm resulted in an improved classification of hypertensive status compared with that of standard binning and facilitated the identification of relevant metabolites, and the involvement of alpha-1 acid glycoproteins and choline biochemistry in hypertension is suggested.
Abstract: As with every -omics technology, metabolomics requires new methodologies for data processing. Due to the large spectral size, a standard approach in NMR-based metabolomics implies the division of spectra into equally sized bins, thereby simplifying subsequent data analysis. Yet, disadvantages are the loss of information and the occurrence of artifacts caused by peak shifts. Here, a new binning algorithm, Adaptive Intelligent Binning (AI-Binning), which largely circumvents these problems, is presented. AI-Binning recursively identifies bin edges in existing bins, requires only minimal user input, and avoids the use of arbitrary parameters or reference spectra. The performance of AI-Binning is demonstrated using serum spectra from 40 hypertensive and 40 matched normotensive subjects from the Asklepios study. Hypertension is a major cardiovascular risk factor characterized by a complex biochemistry and, in most cases, an unknown origin. The binning algorithm resulted in an improved classification of hypertensive status compared with that of standard binning and facilitated the identification of relevant metabolites.
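The recursive edge-splitting idea can be illustrated schematically: within a bin, place an edge at the point that dips deepest below the maxima flanking it on both sides, then recurse into the two sub-bins. This is a toy version built for illustration; the published AI-Binning quality criterion differs, and min_depth and the split rule here are assumptions.

```python
import numpy as np

def ai_bin(spec, lo, hi, min_depth, edges):
    """Recursively split [lo, hi) at the point that dips deepest below
    the maxima flanking it on both sides (schematic stand-in for the
    published recursive bin-edge criterion)."""
    seg = spec[lo:hi]
    if seg.size < 5:
        return
    left = np.maximum.accumulate(seg)            # running max from the left
    right = np.maximum.accumulate(seg[::-1])[::-1]  # running max from the right
    depth = np.minimum(left, right) - seg        # drop below both flanking peaks
    k = int(np.argmax(depth))
    if depth[k] >= min_depth:
        edges.append(lo + k)
        ai_bin(spec, lo, lo + k, min_depth, edges)
        ai_bin(spec, lo + k, hi, min_depth, edges)

def adaptive_bins(spec, min_depth=0.2):
    edges = [0, len(spec)]
    ai_bin(spec, 0, len(spec), min_depth, edges)
    return sorted(edges)

# Two synthetic peaks: the valley between them should become a bin edge.
x = np.linspace(0.0, 100.0, 101)
spec = np.exp(-((x - 30.0) / 5.0) ** 2) + np.exp(-((x - 70.0) / 5.0) ** 2)
edges = adaptive_bins(spec, min_depth=0.2)
```

On the two-peak spectrum the recursion places exactly one interior edge, at the valley, and then stops because neither sub-bin contains a qualifying dip.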

219 citations


Journal ArticleDOI
TL;DR: Blatman et al. as mentioned in this paper proposed a method to build a sparse polynomial chaos (PC) expansion of a mechanical model whose input parameters are random, together with an adaptive algorithm for automatically detecting the significant coefficients of the PC expansion.

207 citations


Journal ArticleDOI
TL;DR: A new dichotomous coordinate descent (DCD) algorithm is proposed and applied to the auxiliary equations of the RLS problem, resulting in a transversal RLS adaptive filter whose per-sample complexity is only slightly higher than that of the least mean squares algorithm.
Abstract: In this paper, we derive low-complexity recursive least squares (RLS) adaptive filtering algorithms. We express the RLS problem in terms of auxiliary normal equations with respect to increments of the filter weights and apply this approach to the exponentially weighted and sliding window cases to derive new RLS techniques. For solving the auxiliary equations, line search methods are used. We first consider conjugate gradient iterations, whose complexity per sample grows with the number of filter weights. To reduce the complexity and make the algorithms more suitable for finite precision implementation, we propose a new dichotomous coordinate descent (DCD) algorithm and apply it to the auxiliary equations. This results in a transversal RLS adaptive filter whose number of multiplications per sample is only slightly higher than that of the least mean squares (LMS) algorithm. Simulations are used to compare the performance of the proposed algorithms against the classical RLS and known advanced adaptive algorithms. Fixed-point FPGA implementation of the proposed DCD-based RLS algorithm is also discussed and results of such implementation are presented.
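The flavour of DCD is easy to convey with a cyclic variant: every update adds or subtracts a step whose size halves like a bit shift, so a hardware implementation needs no multiplications. This is a sketch under assumed values of H (amplitude bound) and Mb (bit levels), not the paper's exact leading-element algorithm.

```python
import numpy as np

def dcd_solve(R, b, H=2.0, Mb=16):
    """Cyclic dichotomous coordinate descent for R x = b (R symmetric
    positive definite): power-of-two steps, additions-only residual update."""
    N = len(b)
    x, r = np.zeros(N), b.astype(float).copy()
    alpha = H
    for _ in range(Mb):                 # halve the step: one "bit" per level
        alpha /= 2.0
        changed = True
        while changed:                  # sweep until no coordinate updates
            changed = False
            for k in range(N):
                if abs(r[k]) > 0.5 * alpha * R[k, k]:
                    s = 1.0 if r[k] > 0 else -1.0
                    x[k] += s * alpha
                    r -= s * alpha * R[:, k]   # keep residual consistent
                    changed = True
    return x

# Demo: solve small SPD normal equations, as arise in RLS.
rng = np.random.default_rng(5)
A = rng.normal(size=(6, 3))
R = A.T @ A + np.eye(3)
x_true = np.array([0.5, -0.25, 0.75])
b = R @ x_true
x_hat = dcd_solve(R, b)
```

Each accepted step strictly decreases the RLS quadratic cost, so every bit level terminates; the final residual is bounded by the smallest step size.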

198 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered the approximate solution with adaptive finite elements of a class of linear boundary value problems, which includes problems of saddle point type, and gave a sufficient and essentially necessary condition on marking for the convergence of the finite element solutions to the exact one.
Abstract: We consider the approximate solution with adaptive finite elements of a class of linear boundary value problems, which includes problems of "saddle point" type. For the adaptive algorithm we assume the following framework: refinement relies on unique quasi-regular element subdivisions and generates locally quasi-uniform grids, the finite element spaces are conforming, nested, and satisfy the inf–sup conditions, the error estimator is reliable as well as locally and discretely efficient, and marked elements are subdivided at least once. Under these assumptions, we give a sufficient and essentially necessary condition on marking for the convergence of the finite element solutions to the exact one. This condition is not only satisfied by Dörfler's strategy, but also by the maximum strategy and the equidistribution strategy.
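The estimate-mark-refine loop with Dörfler marking can be sketched in 1-D, using the midpoint interpolation error as a stand-in local estimator; this is an illustration of the marking strategy, not the paper's finite element setting.

```python
import numpy as np

def dorfler_mark(eta, theta=0.5):
    """Minimal index set M with sum(eta[M]^2) >= theta^2 * sum(eta^2)."""
    order = np.argsort(eta)[::-1]          # elements by decreasing indicator
    cum = np.cumsum(eta[order] ** 2)
    m = np.searchsorted(cum, theta ** 2 * cum[-1]) + 1
    return order[:m]

def adapt(u, nodes, theta=0.5, steps=15):
    """ESTIMATE (midpoint interpolation error per element) ->
    MARK (Dorfler) -> REFINE (bisect marked elements)."""
    for _ in range(steps):
        mid = 0.5 * (nodes[:-1] + nodes[1:])
        eta = np.abs(u(mid) - 0.5 * (u(nodes[:-1]) + u(nodes[1:])))
        marked = dorfler_mark(eta, theta)
        nodes = np.sort(np.concatenate([nodes, mid[marked]]))
    return nodes

# Approximating sqrt(x): refinement should concentrate near the x=0 singularity.
nodes = adapt(np.sqrt, np.linspace(0.0, 1.0, 5), theta=0.5, steps=15)
h = np.diff(nodes)
```

The resulting mesh is strongly graded toward the singular point, while the smooth right end keeps coarse elements, which is the expected behaviour of a convergent marking strategy.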

177 citations


Journal ArticleDOI
TL;DR: A multistage decomposition for blind adaptive parameter estimation in the Krylov subspace with the code-constrained constant modulus (CCM) design criterion is developed and the proposed techniques to the suppression of multiaccess and intersymbol interference in DS-CDMA systems are considered.
Abstract: This paper proposes a multistage decomposition for blind adaptive parameter estimation in the Krylov subspace with the code-constrained constant modulus (CCM) design criterion. Based on constrained optimization of the constant modulus cost function and utilizing the Lanczos algorithm and Arnoldi-like iterations, a multistage decomposition is developed for blind parameter estimation. A family of computationally efficient blind adaptive reduced-rank stochastic gradient (SG) and recursive least squares (RLS) type algorithms along with an automatic rank selection procedure are also devised and evaluated against existing methods. An analysis of the convergence properties of the method is carried out and convergence conditions for the reduced-rank adaptive algorithms are established. Simulation results consider the application of the proposed techniques to the suppression of multiaccess and intersymbol interference in DS-CDMA systems.

Journal ArticleDOI
TL;DR: In this article, a multiscale morphology analysis is applied to one-dimensional signals by defining both the length and height scales of structuring elements (SEs), and a local-peak-value-based adaptive algorithm is also introduced.

Journal ArticleDOI
TL;DR: This paper studies the statistical behavior of an affine combination of the outputs of two least-mean-square adaptive filters that simultaneously adapt using the same white Gaussian inputs to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD).
Abstract: This paper studies the statistical behavior of an affine combination of the outputs of two least mean-square (LMS) adaptive filters that simultaneously adapt using the same white Gaussian inputs. The purpose of the combination is to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD). The linear combination studied is a generalization of the convex combination, in which the combination factor λ(n) is restricted to the interval (0,1). The viewpoint is taken that each of the two filters produces dependent estimates of the unknown channel. Thus, there exists a sequence of optimal affine combining coefficients which minimizes the mean-square error (MSE). First, the optimal unrealizable affine combiner is studied and provides the best possible performance for this class. Then two new schemes are proposed for practical applications. The mean-square performances are analyzed and validated by Monte Carlo simulations. With proper design, the two practical schemes yield an overall MSD that is usually less than the MSDs of either filter.
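A minimal sketch of such a combination: run a fast and a slow LMS filter in parallel and adapt the affine factor by a stochastic gradient on the combined error. The update rule for lambda and the FIR identification setup below are illustrative assumptions, not the paper's analysed schemes.

```python
import numpy as np

rng = np.random.default_rng(1)

def affine_lms(x, d, L=8, mu_fast=0.05, mu_slow=0.005, mu_lam=0.1):
    """Two LMS filters in parallel; outputs affinely combined, lambda
    adapted by stochastic gradient on the combined error."""
    w1, w2, lam = np.zeros(L), np.zeros(L), 0.5
    y = np.zeros(len(d))
    for n in range(L, len(d)):
        u = x[n - L:n][::-1]                # regressor, most recent first
        y1, y2 = w1 @ u, w2 @ u
        w1 += mu_fast * (d[n] - y1) * u     # fast, noisier filter
        w2 += mu_slow * (d[n] - y2) * u     # slow, smoother filter
        y[n] = lam * y1 + (1.0 - lam) * y2
        lam += mu_lam * (d[n] - y[n]) * (y1 - y2)  # affine: no (0,1) clipping
    return w1, w2, lam, y

# Identify an assumed unknown FIR channel from white Gaussian input.
L = 8
h = np.array([0.5, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])
x = rng.normal(size=4000)
d = np.zeros(4000)
for n in range(L, 4000):
    d[n] = h @ x[n - L:n][::-1]
w1, w2, lam, y = affine_lms(x, d)
```

In the noiseless case both filters converge to the channel and the combined output error vanishes; the interesting regime in the paper is the noisy one, where the combiner trades the fast filter's convergence against the slow filter's low steady-state MSD.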

Journal ArticleDOI
TL;DR: In this article, the authors investigated how people control the indoor environment by opening windows, the cooling potential of opening windows and the use of an adaptive algorithm for predicting window-opening behavior for thermal simulation in ESP-r.
Abstract: This investigation of the window-opening data from extensive field surveys in UK office buildings demonstrates: (1) how people control the indoor environment by opening windows; (2) the cooling potential of opening windows; and (3) the use of an ‘adaptive algorithm’ for predicting window-opening behaviour for thermal simulation in ESP-r. It was found that when the window was open the mean indoor and outdoor temperatures were higher than when closed, but it was shown that nonetheless there was a useful cooling effect from opening a window. The adaptive algorithm for window-opening behaviour was then used in thermal simulation studies for some typical office designs. The thermal simulation results were in general agreement with the findings of the field surveys. The adaptive algorithm is shown to provide insights not available using non-adaptive simulation methods and can assist in achieving more comfortable, lower energy buildings while avoiding overheating.

Journal ArticleDOI
TL;DR: This paper proposes a VSS-APA derived in the context of AEC that aims to recover the near-end signal within the error signal of the adaptive filter and is robust against near- end signal variations (including double-talk).
Abstract: The adaptive algorithms used for acoustic echo cancellation (AEC) have to provide (1) high convergence rates and good tracking capabilities, since the acoustic environments imply very long and time-variant echo paths, and (2) low misadjustment and robustness against background noise variations and double-talk. In this context, the affine projection algorithm (APA) and different versions of it are very attractive choices for AEC. However, an APA with a constant step-size parameter has to compromise between the performance criteria (1) and (2). Therefore, a variable step-size APA (VSS-APA) represents a more reliable solution. In this paper, we propose a VSS-APA derived in the context of AEC. Most of the APAs aim to cancel p (i.e., projection order) previous a posteriori errors at every step of the algorithm. The proposed VSS-APA aims to recover the near-end signal within the error signal of the adaptive filter. Consequently, it is robust against near-end signal variations (including double-talk). This algorithm does not require any a priori information about the acoustic environment, so that it is easy to control in practice. The simulation results indicate the good performance of the proposed algorithm as compared to other members of the APA family.

Journal ArticleDOI
TL;DR: In this article, the sampler of Gerlach, Carter, and Kohn is used to model shifts in variance that occur independently of shifts in other parameters, and the break dates are drawn efficiently when regime durations follow a Poisson process.
Abstract: Time series subject to parameter shifts of random magnitude and timing are commonly modeled with a change-point approach using Chib's algorithm to draw the break dates. We outline some advantages of an alternative approach in which breaks come through mixture distributions in state innovations, and for which the sampler of Gerlach, Carter, and Kohn allows reliable and efficient inference. We show how the same sampler can be used to model shifts in variance that occur independently of shifts in other parameters and how to draw the break dates efficiently when regime durations follow a Poisson process. Finally, we introduce to the time series literature the concept of adaptive Metropolis–Hastings sampling for discrete latent variable models. We develop an easily implemented adaptive algorithm that improves on the work of Gerlach et al. and promises to significantly reduce computing time in a variety of problems including mixture innovation, change-point, regime switching, and outlier detection. The efficien...
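The notion of adaptive Metropolis–Hastings can be illustrated generically with a random-walk sampler whose proposal scale adapts via a diminishing Robbins–Monro step toward a target acceptance rate. This is a continuous toy example of adaptive MCMC, not the discrete-latent-variable sampler the paper develops; the target rate and decay exponent are assumed values.

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_rwmh(log_p, x0, n=6000, target=0.44):
    """Random-walk MH with on-the-fly scale adaptation; the adaptation
    step shrinks over time, which keeps the chain's ergodicity intact."""
    x, log_s = x0, 0.0
    out = np.empty(n)
    acc = 0
    for i in range(n):
        prop = x + np.exp(log_s) * rng.normal()
        accepted = np.log(rng.uniform()) < log_p(prop) - log_p(x)
        if accepted:
            x = prop
            acc += 1
        # Robbins-Monro: nudge the log-scale toward the target rate.
        log_s += ((1.0 if accepted else 0.0) - target) / (i + 1) ** 0.6
        out[i] = x
    return out, acc / n

# Sample a standard normal target; the scale tunes itself.
samples, rate = adaptive_rwmh(lambda z: -0.5 * z * z, 0.0)
```

After a short adaptation phase the acceptance rate settles near the target and the chain mixes without any hand-tuned proposal scale.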

Journal ArticleDOI
TL;DR: A new algorithm is developed to estimate unknown parameters online by combining the Slotine-Li method with the idea of structure from motion in computer vision, and this new adaptive algorithm can guarantee the convergence of the estimated parameters to the real values up to a scale.
Abstract: This paper presents a novel approach for image-based visual servoing of a robot manipulator with an eye-in-hand camera when the camera parameters are not calibrated and the 3-D coordinates of the features are not known. Both point and line features are considered. This paper extends the concept of depth-independent interaction (or image Jacobian) matrix, developed in earlier work for visual servoing using point features and fixed cameras, to the problem using eye-in-hand cameras and point and line features. By using the depth-independent interaction matrix, it is possible to linearly parameterize, by the unknown camera parameters and the unknown coordinates of the features, the closed-loop dynamics of the system. A new algorithm is developed to estimate unknown parameters online by combining the Slotine-Li method with the idea of structure from motion in computer vision. By minimizing the errors between the real and estimated projections of the feature on multiple images captured during motion of the robot, this new adaptive algorithm can guarantee the convergence of the estimated parameters to the real values up to a scale. On the basis of the nonlinear robot dynamics, we prove asymptotic convergence of the image errors using Lyapunov theory. Experiments have been conducted to demonstrate the performance of the proposed controller.

Journal ArticleDOI
TL;DR: This work develops a fast and numerically robust implementation of DPM, which comes at a considerably lower computational cost, and constitutes the first numerically stable, low complexity, algorithm for tracking subspaces corresponding to the smallest singular values.
Abstract: We consider the problem of adaptive subspace tracking, when the rank of the subspace we seek to estimate is assumed to be known. Starting from the data projection method (DPM), which constitutes a simple and reliable means for adaptively estimating and tracking subspaces, we develop a fast and numerically robust implementation of DPM, which comes at a considerably lower computational cost. Most existing schemes track subspaces corresponding either to the largest or to the smallest singular values, while our DPM version can switch from one subspace type to the other with a simple change of sign of its single parameter. The proposed algorithm provides orthonormal vector estimates of the subspace basis that are numerically stable since they do not accumulate roundoff errors. In fact, our scheme constitutes the first numerically stable, low complexity, algorithm for tracking subspaces corresponding to the smallest singular values. Regarding convergence towards orthonormality, our scheme exhibits the fastest speed among all other subspace tracking algorithms of similar complexity.
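The DPM recursion itself is short: a normalised gradient step toward the dominant (or, with flipped sign, the minor) subspace, followed by re-orthonormalisation. The schematic version below uses a full QR for that last step; the paper's contribution is precisely a fast, numerically stable replacement for it. Names and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def dpm_track(X, r, mu=0.1, sign=+1):
    """DPM-style tracker: sign=+1 follows the dominant subspace,
    sign=-1 the minor one; columns of W stay orthonormal via QR."""
    n = X.shape[1]
    W = np.linalg.qr(rng.normal(size=(n, r)))[0]
    for x in X:
        W = W + sign * mu * np.outer(x, W.T @ x) / (x @ x)
        W, _ = np.linalg.qr(W)   # the fast variant avoids this full QR
    return W

# Track the 1-D dominant subspace of noisy rank-1 data (toy example).
v = np.array([0.6, 0.8, 0.0, 0.0, 0.0])
X = 3.0 * rng.normal(size=(2000, 1)) * v + 0.1 * rng.normal(size=(2000, 5))
W = dpm_track(X, r=1)
```

Because the update is linear in W and re-orthonormalised every step, the estimate cannot accumulate roundoff-induced loss of orthonormality, mirroring the stability property claimed above.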

Journal ArticleDOI
TL;DR: In this article, a non-certainty-equivalence-based adaptive attitude control algorithm is proposed to overcome the adverse effect of uncertain parameter effects on closed-loop performance.
Abstract: Virtually every existing adaptive attitude control solution is based on the certainty-equivalence principle, which permits the adaptive controller structure to be based upon the deterministic feedback control algorithm (controller design based on nominal system information without any inertia-parameter uncertainty) and to be used in conjunction with a suitable adaptive parameter-estimation algorithm. However, one of the main drawbacks of the certainty-equivalence-based adaptive control methodology is the arbitrary degradation in closed-loop performance due to the adaptation (parameter-estimation) process, which acts like a forcing disturbance (uncertain parameter effect) imposed onto the deterministic closed-loop control dynamics. In this paper, we significantly deviate from the classical certainty-equivalence-based adaptive control framework and develop, for the first time (to our best knowledge), a noncertainty-equivalent adaptive attitude control algorithm. This novel control design process eliminates the deleterious performance-degradation effects of the certainty-equivalence controller through the introduction of a stable attracting manifold into the adaptation process, such that the resulting closed-loop adaptive attitude control dynamics recover the deterministic (ideal) case of closed-loop attitude controller performance (i.e., no uncertain parameter effects). In addition to detailed derivations of the new controller design and a rigorous sketch of all the associated stability and attitude error convergence proofs, we present numerical simulation results that not only illustrate the various features of the new noncertainty-equivalent controller design methodology but also highlight the ensuing closed-loop-performance benefits when compared with the conventional certainty-equivalence-based adaptive control schemes.

Journal ArticleDOI
TL;DR: A new adaptive law combining time- and iteration-domain adaptation is derived to search for suitable values of control parameters and then guarantee the closed-loop stability and error convergence.
Abstract: To deal with the iterative control of uncertain nonlinear systems with varying control tasks, nonzero initial resetting state errors, and nonrepeatable mismatched input disturbance, a new adaptive fuzzy iterative learning controller is proposed in this paper. The main structure of this learning controller is constructed by a fuzzy learning component and a robust learning component. For the fuzzy learning component, a fuzzy system used as an approximator is designed to compensate for the plant nonlinearity. For the robust learning component, a sliding-mode-like strategy is applied to overcome the nonlinear input gain, input disturbance, and fuzzy approximation error. Both designs are based on a time-varying boundary layer which is introduced not only to solve the problem of initial state errors but also to eliminate the possible undesirable chattering behavior. A new adaptive law combining time- and iteration-domain adaptation is derived to search for suitable values of control parameters and then guarantee the closed-loop stability and error convergence. This adaptive algorithm is designed without using projection or deadzone mechanism. With a suitable choice of the weighting gain, the memory size for the storage of parameter profiles can be greatly reduced. It is shown that all the adjustable parameters as well as internal signals remain bounded for all iterations. Moreover, the norm of tracking state error vector will asymptotically converge to a tunable residual set even when the desired tracking trajectory is varying between successive iterations.

Journal ArticleDOI
TL;DR: The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time.

Journal ArticleDOI
TL;DR: To more accurately and precisely delineate a tumor in a 3D PET image, a novel, semi-automatic, two-stage method by utilizing an adaptive region-growing algorithm and a dual-front active contour model is proposed.
Abstract: To more accurately and precisely delineate a tumor in a 3D PET image, we proposed a novel, semi-automatic, two-stage method by utilizing an adaptive region-growing algorithm and a dual-front active contour model. First, a rough region of interest (ROI) is manually drawn by a radiation oncologist that encloses a tumor. The voxel having the highest intensity in the ROI is chosen as a seed point. An adaptive region-growing algorithm successively appends to the seed point all neighboring voxels whose intensities are at least T percent of the mean intensity of the current region. When T varies from 100% to 0%, a sharp volume increase, indicating the transition from the tumor to the background, always occurs at a certain T value. A preliminary tumor boundary is determined just before the sharp volume increase, which is found to be slightly outside of the known tumor in all tested phantoms. A novel dual-front active contour model utilizing region-based information is then applied to refine the preliminary boundary automatically. We tested the two-stage method on six spheres (0.5–20 ml) in a cylindrical container under different source-to-background ratios. Comparisons between the two-stage method and an iterative threshold method demonstrate its higher detection accuracy for small tumors (less than 6 ml). One patient study was tested and evaluated by two experienced radiation oncologists. The study illustrated that this two-stage method has several advantages. First, it does not require any threshold-volume curves, which are different and must be calibrated for each scanner and image reconstruction method. Second, it does not use any iso-threshold lines as contours. Third, the final result is reproducible and is independent of the manual rough ROIs. Fourth, this method is an adaptive algorithm that can process different images automatically.
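The first stage can be sketched directly from the description: grow from the seed, appending neighbours whose intensity is at least fraction T of the current region's running mean, then sweep T downward and detect the sharp volume jump. This is a 2-D toy instead of 3-D PET, and the dual-front refinement stage is omitted.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, T):
    """Grow from the seed, appending 4-neighbours whose intensity is
    at least fraction T of the current region's running mean."""
    h, w = img.shape
    in_region = np.zeros((h, w), bool)
    in_region[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        yy, xx = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = yy + dy, xx + dx
            if 0 <= ny < h and 0 <= nx < w and not in_region[ny, nx] \
                    and img[ny, nx] >= T * total / count:
                in_region[ny, nx] = True
                total += img[ny, nx]
                count += 1
                q.append((ny, nx))
    return count

def transition_threshold(img, seed, step=0.05):
    """Sweep T from 100% down to 0% and return the T just before the
    sharpest volume increase (the tumour-to-background transition)."""
    Ts = np.arange(1.0, -1e-9, -step)
    vols = np.array([region_grow(img, seed, T) for T in Ts])
    return Ts[int(np.argmax(np.diff(vols)))], vols

# Toy phantom: a bright 5x5 "tumor" (100) on a dim background (10).
img = np.full((20, 20), 10.0)
img[8:13, 8:13] = 100.0
T_star, vols = transition_threshold(img, (10, 10))
```

The volume stays at the tumor size over a wide range of T and then jumps to the whole image once T drops below background/tumor contrast, which is the transition the method detects.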

Journal ArticleDOI
TL;DR: In this article, an implicit/explicit integration scheme for non-linear constitutive models is presented to provide additional computability to those solid mechanics problems where robustness is an important issue, i.e., material failure models equipped with strain softening, soft materials, contact-friction models, etc.

Journal ArticleDOI
TL;DR: A novel online algorithm for classification in reproducing kernel Hilbert spaces (RKHS) by exploiting projection-based adaptive filtering tools and showing that the additional convex constraint on the norm of the classifier naturally leads to an online sparsification of the resulting kernel series expansion.
Abstract: The goal of this paper is to derive a novel online algorithm for classification in reproducing kernel Hilbert spaces (RKHS) by exploiting projection-based adaptive filtering tools. The paper brings powerful convex analytic and set theoretic estimation arguments in machine learning by revisiting the standard kernel-based classification as the problem of finding a point which belongs to a closed halfspace (a special closed convex set) in an RKHS. In this way, classification in an online setting, where data arrive sequentially, is viewed as the problem of finding a point (classifier) in the nonempty intersection of an infinite sequence of closed halfspaces in the RKHS. Convex analysis is also used to introduce sparsification arguments in the design by imposing an additional simple convex constraint on the norm of the classifier. An algorithmic solution to the resulting optimization problem, where new convex constraints are added every time instant, is given by the recently introduced adaptive projected subgradient method (APSM), which generalizes a number of well-known projection-based adaptive filtering algorithms such as the classical normalized least mean squares (NLMS) and the affine projection algorithm (APA). Under mild conditions, the generated sequence of estimates enjoys monotone approximation, strong convergence, asymptotic optimality, and a characterization of the limit point. Further, we show that the additional convex constraint on the norm of the classifier naturally leads to an online sparsification of the resulting kernel series expansion. We validate the proposed design by considering the adaptive equalization problem of a nonlinear channel, and by comparing it with classical as well as with recently developed stochastic gradient descent techniques.
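For a linear kernel, the projection-onto-halfspaces viewpoint reduces to a few lines: each labelled sample defines a closed halfspace, and the current classifier is metrically projected onto it whenever the constraint is violated. The margin value and toy data below are assumptions; the paper works in an RKHS with additional convex constraints handled by APSM.

```python
import numpy as np

rng = np.random.default_rng(4)

def project_halfspace(w, x, y, margin=1.0):
    """Metric projection of w onto the closed halfspace
    {v : y * <v, x> >= margin} defined by one labelled sample."""
    viol = margin - y * (w @ x)
    if viol > 0.0:                      # constraint violated: project onto it
        w = w + (y * viol / (x @ x)) * x
    return w

# Online passes over a linearly separable stream (linear kernel stands
# in for the RKHS; labels are given an enforced margin for this demo).
X = rng.normal(size=(800, 2))
f = X @ np.array([2.0, -1.0])
keep = np.abs(f) > 0.5
X, y = X[keep], np.sign(f[keep])
w = np.zeros(2)
for _ in range(3):                      # a few passes of sequential projections
    for xi, yi in zip(X, y):
        w = project_halfspace(w, xi, yi)
acc = float(np.mean(np.sign(X @ w) == y))
```

Since the intersection of halfspaces is nonempty for separable data, the sequence of projections is Fejér monotone toward a feasible classifier, which is the convergence mechanism the abstract describes.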

Journal ArticleDOI
TL;DR: A posteriori error estimates for finite element discretization of elliptic optimization problems with pointwise inequality constraints on the control variable are developed and guide an adaptive mesh refinement algorithm allowing for substantial saving in degrees of freedom.
Abstract: In this paper we develop a posteriori error estimates for finite element discretization of elliptic optimization problems with pointwise inequality constraints on the control variable. We derive error estimators for assessing the discretization error with respect to the cost functional as well as with respect to a given quantity of interest. These error estimators provide quantitative information about the discretization error and guide an adaptive mesh refinement algorithm allowing for substantial saving in degrees of freedom. The behavior of the method is demonstrated on numerical examples.

Journal ArticleDOI
TL;DR: An unsupervised image segmentation framework that is based on the adaptive inclusion of color and texture in the process of data partition and a new formulation for the extraction of color features that evaluates the input image in a multispace color representation is presented.
Abstract: This paper presents the development of an unsupervised image segmentation framework (referred to as CTex) that is based on the adaptive inclusion of color and texture in the process of data partition. An important contribution of this work is a new formulation for the extraction of color features that evaluates the input image in a multispace color representation. To achieve this, we have used the opponent characteristics of the RGB and YIQ color spaces, where the key component was the inclusion of the self-organizing map (SOM) network in the computation of the dominant colors and the estimation of the optimal number of clusters in the image. The texture features are computed using a multichannel texture decomposition scheme based on Gabor filtering. The major contribution of this work resides in the adaptive integration of the color and texture features in a compound mathematical descriptor with the aim of identifying the homogeneous regions in the image. This integration is performed by a novel adaptive clustering algorithm that enforces spatial continuity during the data assignment process. A comprehensive qualitative and quantitative performance evaluation has been carried out, and the experimental results indicate that the proposed technique is accurate in capturing the color and texture characteristics when applied to complex natural images.
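
The core idea of clustering pixels on a compound color-texture descriptor can be sketched compactly. The toy below uses a local-standard-deviation texture channel and plain k-means as simplified stand-ins for the paper's Gabor decomposition, SOM-based color analysis, and spatially constrained clustering; every function name and weight is illustrative.

```python
import numpy as np

def local_std(gray, half=2):
    """Per-pixel standard deviation in a (2*half+1)^2 window: a crude
    texture measure standing in for Gabor-filter responses."""
    pad = np.pad(gray, half, mode="edge")
    H, W = gray.shape
    win = np.stack([pad[i:i + H, j:j + W]
                    for i in range(2 * half + 1)
                    for j in range(2 * half + 1)])
    return win.std(axis=0)

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def segment(rgb, k=2, w_tex=1.0):
    """Cluster pixels on a compound [color, texture] descriptor."""
    gray = rgb.mean(axis=2)
    tex = local_std(gray)
    feats = np.concatenate(
        [rgb.reshape(-1, 3), w_tex * tex.reshape(-1, 1)], axis=1)
    # Standardize channels so color and texture are on comparable scales.
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)
    return kmeans(feats, k).reshape(gray.shape)
```

On a synthetic two-region image the compound descriptor cleanly separates the regions; the paper's adaptive integration additionally weights the two feature families and enforces spatial continuity during assignment.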

Journal ArticleDOI
TL;DR: An adaptive algorithm based on evolutionary computation techniques is designed, developed and applied to the timetabling problem of educational organizations in Greece and is able to construct a feasible and very efficient timetable more quickly and easily compared to other techniques.
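
The evolutionary approach to timetabling can be sketched with a minimal genetic algorithm: a chromosome assigns each event to a timeslot, fitness counts clashing pairs placed in the same slot, and selection, one-point crossover, and per-gene mutation drive the population toward a conflict-free timetable. This is a generic GA illustration under those assumptions, not the adaptive algorithm of the paper, and all names and rates are illustrative.

```python
import random

def conflicts(assign, clashes):
    # Count pairs of clashing events placed in the same timeslot.
    return sum(1 for a, b in clashes if assign[a] == assign[b])

def evolve_timetable(n_events, n_slots, clashes, pop_size=60,
                     generations=200, p_mut=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(n_slots) for _ in range(n_events)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: conflicts(a, clashes))
        if conflicts(pop[0], clashes) == 0:
            break                       # feasible timetable found
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_events)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            for i in range(n_events):            # per-gene mutation
                if rng.random() < p_mut:
                    child[i] = rng.randrange(n_slots)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: conflicts(a, clashes))
```

Real educational timetabling adds soft constraints (teacher availability, room capacity, gaps) to the fitness function, which is where an adaptive weighting of objectives becomes valuable.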

Journal ArticleDOI
Weiyi Qian, Ajun Li
TL;DR: The results illustrate that ADEA efficiently achieves the two goals of multiobjective optimization: finding solutions that converge to the true Pareto front and maintaining a uniform spread along the front.

Journal ArticleDOI
TL;DR: A modified generalized cross-validation criterion, called the adjustable prediction error sum of squares (APRESS), is introduced and incorporated into a forward orthogonal search procedure and can produce efficient model subsets for most non-linear identification problems.
Abstract: A new adaptive orthogonal search (AOS) algorithm is proposed for model subset selection and non-linear system identification. Model structure detection is a key step in any system identification problem. It consists of selecting significant model terms from a redundant dictionary of candidate model terms and determining the model complexity (model length or model size). The final objective is to produce a parsimonious model that captures the inherent dynamics of the underlying system well. In the new AOS algorithm, a modified generalized cross-validation criterion, called the adjustable prediction error sum of squares (APRESS), is introduced and incorporated into a forward orthogonal search procedure. The main advantage of the new AOS algorithm is that its mechanism is simple, its implementation is direct and easy, and, more importantly, it can produce efficient model subsets for most non-linear identification problems.
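
The flavor of forward term selection with an adjustable-penalty stopping rule can be sketched as follows. This simplified version refits by ordinary least squares at each step and uses a GCV-style criterion with a tunable penalty weight in the spirit of APRESS; the paper's exact criterion and its orthogonalization scheme are not reproduced here, and the function name and parameters are illustrative.

```python
import numpy as np

def forward_select(D, y, alpha=2.0, max_terms=None):
    """Greedy forward selection from a dictionary of candidate terms.

    D : (N, M) matrix whose columns are candidate model terms.
    Terms are added one at a time (whichever gives the largest drop in
    residual sum of squares); selection stops when the penalized error
    J(k) = RSS / (N * (1 - alpha*k/N)**2) stops decreasing, a GCV-style
    rule with an adjustable penalty weight alpha.
    """
    N, M = D.shape
    max_terms = max_terms or M
    selected, best_J = [], np.inf
    while len(selected) < max_terms:
        best = None
        for j in range(M):
            if j in selected:
                continue
            cols = selected + [j]
            coef, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
            rss = np.sum((y - D[:, cols] @ coef) ** 2)
            if best is None or rss < best[1]:
                best = (j, rss)
        k = len(selected) + 1
        J = best[1] / (N * (1 - alpha * k / N) ** 2)
        if J >= best_J:        # penalized error no longer improves
            break
        best_J = J
        selected.append(best[0])
    coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
    return selected, coef
```

Increasing alpha penalizes model size more heavily and yields sparser subsets, which is the "adjustable" knob that lets the criterion adapt to different noise levels and dictionary sizes.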

Proceedings ArticleDOI
19 May 2008
TL;DR: A novel algorithm is presented, based on the REINFORCE family of stochastic policy gradient algorithms, which automatically discovers a locally-optimal weighting of workspace features to produce a distribution which performs well for a given class of sampling-based motion planning queries.
Abstract: The widespread success of sampling-based planning algorithms stems from their ability to rapidly discover the connectivity of a configuration space. Past research has found that non-uniform sampling in the configuration space can significantly outperform uniform sampling; one important strategy is to bias the sampling distribution based on features present in the underlying workspace. In this paper, we unite several previous approaches to workspace biasing into a general framework for automatically discovering useful sampling distributions. We present a novel algorithm, based on the REINFORCE family of stochastic policy gradient algorithms, which automatically discovers a locally-optimal weighting of workspace features to produce a distribution which performs well for a given class of sampling-based motion planning queries. We also present a novel set of workspace features that our adaptive algorithm can leverage for improved configuration space sampling. Experimental results show our algorithm to be effective across a variety of robotic platforms and high-dimensional configuration spaces.
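
The REINFORCE update for learning a feature weighting can be sketched on a toy problem: a softmax over weights defines a categorical distribution over candidate sampling strategies, each draw is scored by a reward signal (e.g., whether the sample helped the planner), and the weights follow the score-function gradient with a running-mean baseline. This is a generic REINFORCE sketch, not the paper's planner-integrated algorithm; the reward function and all names are illustrative.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def reinforce_sampling_weights(reward_fn, n_features, iters=2000,
                               lr=0.1, seed=0):
    """Learn a weighting over workspace features with REINFORCE.

    A categorical distribution over `n_features` sampling strategies is
    parameterized by a softmax over theta. Each step draws one strategy,
    scores it with `reward_fn`, and moves theta along the score-function
    gradient, using a running-mean baseline for variance reduction.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_features)
    baseline = 0.0
    for t in range(iters):
        p = softmax(theta)
        a = rng.choice(n_features, p=p)
        r = reward_fn(a, rng)
        baseline += (r - baseline) / (t + 1)
        grad_logp = -p
        grad_logp[a] += 1.0                 # d log p(a) / d theta
        theta += lr * (r - baseline) * grad_logp
    return softmax(theta)
```

In the motion-planning setting, each "strategy" corresponds to a workspace feature biasing configuration-space sampling, and the reward measures how useful the drawn samples were for answering the current class of planning queries.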