Showing papers on "Adaptive algorithm published in 2005"


Journal ArticleDOI
Dar-Shyang Lee
TL;DR: An effective scheme to improve the convergence rate without compromising model stability is proposed by replacing the global, static retention factor with an adaptive learning rate calculated for each Gaussian at every frame.
Abstract: Adaptive Gaussian mixtures have been used for modeling nonstationary temporal distributions of pixels in video surveillance applications. However, a common problem for this approach is balancing between model convergence speed and stability. This paper proposes an effective scheme to improve the convergence rate without compromising model stability. This is achieved by replacing the global, static retention factor with an adaptive learning rate calculated for each Gaussian at every frame. Significant improvements are shown on both synthetic and real video data. Incorporating this algorithm into a statistical framework for background subtraction leads to an improved segmentation performance compared to a standard method.
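
The sketch below illustrates the idea in a minimal per-pixel form (an assumption-laden illustration, not the paper's exact update equations): each Gaussian carries a match counter, and its learning rate follows a 1/count schedule floored at a small global rate, so fresh modes converge quickly while established ones stay stable. The class name, thresholds, and the simplified foreground test are all placeholders.

```python
import numpy as np

class AdaptiveGMMPixel:
    """Per-pixel Gaussian mixture with a per-Gaussian adaptive learning rate.

    Sketch only: each Gaussian keeps a match counter and uses
    rate = max(1/count, alpha_min) instead of one global retention factor.
    """

    def __init__(self, n_gauss=3, alpha_min=0.005, var_init=900.0, match_thresh=2.5):
        self.mu = np.linspace(0.0, 255.0, n_gauss)     # per-Gaussian means
        self.var = np.full(n_gauss, var_init)          # per-Gaussian variances
        self.w = np.ones(n_gauss) / n_gauss            # mixture weights
        self.count = np.ones(n_gauss)                  # per-Gaussian match counters
        self.alpha_min = alpha_min
        self.match_thresh = match_thresh
        self.var_init = var_init

    def update(self, x):
        """Feed one grayscale pixel value; return True if it looks like foreground."""
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        matched = d < self.match_thresh
        if matched.any():
            k = int(np.argmin(np.where(matched, d, np.inf)))   # best-matching Gaussian
            self.count[k] += 1.0
            rho = max(1.0 / self.count[k], self.alpha_min)     # adaptive learning rate
            self.mu[k] += rho * (x - self.mu[k])
            self.var[k] += rho * ((x - self.mu[k]) ** 2 - self.var[k])
            hit = np.zeros_like(self.w); hit[k] = 1.0
            self.w += self.alpha_min * (hit - self.w)
        else:
            k = int(np.argmin(self.w))                         # recycle the weakest Gaussian
            self.mu[k], self.var[k], self.count[k] = float(x), self.var_init, 1.0
            self.w[k] = self.alpha_min
        self.w /= self.w.sum()
        return not matched.any()       # simplified foreground test
```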

867 citations


Journal ArticleDOI
TL;DR: A new control strategy called Adaptive Equivalent Consumption Minimization Strategy (A-ECMS) is presented, adding to the ECMS framework an on-the-fly algorithm for the estimation of the equivalence factor according to the driving conditions.

729 citations


Journal ArticleDOI
TL;DR: The adaptive cross approximation algorithm is extended to electromagnetic compatibility-related problems of moderate electrical size, and it is concluded that for moderate electrical size problems the memory and CPU time requirements of the ACA algorithm scale as N^{4/3} log N.
Abstract: This paper presents the adaptive cross approximation (ACA) algorithm to reduce memory and CPU time overhead in the method of moments (MoM) solution of surface integral equations. The present algorithm is purely algebraic; hence, its formulation and implementation are integral equation kernel (Green's function) independent. The algorithm starts with a multilevel partitioning of the computational domain. The interactions of well-separated partitioning clusters are accounted for through a rank-revealing LU decomposition. The acceleration and memory savings of ACA come from the partial assembly of the rank-deficient interaction submatrices. It has been demonstrated that the ACA algorithm results in O(N log N) complexity (where N is the number of unknowns) when applied to static and electrically small electromagnetic problems. In this paper the ACA algorithm is extended to electromagnetic compatibility-related problems of moderate electrical size. Specifically, the ACA algorithm is used to study compact-range ground planes and electromagnetic interference and shielding in vehicles. Through numerical experiments, it is concluded that for moderate electrical size problems the memory and CPU time requirements for the ACA algorithm scale as N^{4/3} log N.
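
As a hedged illustration of the core idea, here is a generic partially pivoted ACA routine in its standard textbook form (not code from the paper): it builds a low-rank factorization A ≈ UV of a well-separated interaction block from user-supplied row and column evaluators, so the block is never fully assembled. The callables `get_row`/`get_col`, the tolerance, and the stopping rule are assumptions.

```python
import numpy as np

def aca_partial_pivoting(get_row, get_col, n_rows, n_cols, tol=1e-4, max_rank=None):
    """Partially pivoted adaptive cross approximation (ACA).

    Builds A ~= U @ V from O(k (m + n)) entry evaluations, where get_row(i)
    returns row i and get_col(j) returns column j of the (never fully
    assembled) interaction block as NumPy arrays.
    """
    U, V = [], []
    used_rows = set()
    i = 0                                   # first pivot row
    frob2 = 0.0                             # running estimate of ||U V||_F^2
    max_rank = max_rank or min(n_rows, n_cols)

    for _ in range(max_rank):
        # residual of the pivot row: A[i,:] minus the current approximation
        r = get_row(i).astype(float) - sum(u[i] * v for u, v in zip(U, V))
        used_rows.add(i)
        j = int(np.argmax(np.abs(r)))
        if abs(r[j]) < 1e-14:               # row already well approximated
            break
        v = r / r[j]
        # residual of the pivot column
        u = get_col(j).astype(float) - sum(v_[j] * u_ for u_, v_ in zip(U, V))
        U.append(u)
        V.append(v)

        # convergence check: new rank-1 term small relative to accumulated norm
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        frob2 += nu ** 2 * nv ** 2 + 2.0 * sum(
            (u @ u_) * (v @ v_) for u_, v_ in zip(U[:-1], V[:-1]))
        if nu * nv <= tol * np.sqrt(frob2):
            break

        # next pivot row: largest residual-column entry among unused rows
        mask = np.ones(n_rows, dtype=bool)
        mask[list(used_rows)] = False
        if not mask.any():
            break
        i = int(np.arange(n_rows)[mask][np.argmax(np.abs(u[mask]))])

    return np.array(U).T, np.array(V)       # A ~= U @ V
```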

608 citations


Journal ArticleDOI
01 Sep 2005
TL;DR: AntHocNet is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement, based on the nature-inspired ant colony optimisation framework, and its performance advantage is visible over a broad range of possible network scenarios.
Abstract: In this paper, we describe AntHocNet, an algorithm for routing in mobile ad hoc networks. It is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement. The algorithm is based on the nature-inspired ant colony optimisation framework. Paths are learned by guided Monte Carlo sampling using ant-like agents communicating in a stigmergic way. In an extensive set of simulation experiments, we compare AntHocNet with AODV, a reference algorithm in the field. We show that our algorithm can outperform AODV on different evaluation criteria. AntHocNet's performance advantage is visible over a broad range of possible network scenarios, and increases for larger, sparser and more mobile networks. Copyright © 2005 AEIT.
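
A toy sketch of the stigmergic core only (not the full AntHocNet protocol with reactive setup and proactive probing): each node keeps a pheromone table, forwards ants stochastically in proportion to pheromone raised to a power, and lets backward ants reinforce the pheromone of the hop they used according to the measured trip quality. The class, parameters, and update rule are illustrative assumptions.

```python
import random
from collections import defaultdict

class AntRoutingTable:
    """Toy per-node pheromone table: pheromone[dest][next_hop] -> float."""

    def __init__(self, beta=2.0, gamma=0.7):
        self.pheromone = defaultdict(dict)
        self.beta = beta      # exponent controlling exploitation vs exploration
        self.gamma = gamma    # smoothing factor for pheromone reinforcement

    def choose_next_hop(self, dest):
        """Stochastically pick a next hop, biased toward higher pheromone."""
        table = self.pheromone[dest]
        if not table:
            return None
        hops = list(table)
        weights = [table[h] ** self.beta for h in hops]
        return random.choices(hops, weights=weights, k=1)[0]

    def reinforce(self, dest, next_hop, trip_time):
        """Backward-ant update: better (shorter) trips deposit more pheromone."""
        goodness = 1.0 / max(trip_time, 1e-9)
        old = self.pheromone[dest].get(next_hop, goodness)
        self.pheromone[dest][next_hop] = self.gamma * old + (1 - self.gamma) * goodness

# Example: forward ants found destination D via neighbors B (0.12 s) and C (0.30 s).
table = AntRoutingTable()
table.reinforce("D", "B", trip_time=0.12)
table.reinforce("D", "C", trip_time=0.30)
print(table.choose_next_hop("D"))   # "B" is chosen more often than "C"
```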

596 citations


Proceedings ArticleDOI
12 Dec 2005
TL;DR: In this paper, a new control strategy called Adaptive Equivalent Consumption Minimization Strategy (A-ECMS) is presented, which periodically refreshes the control parameter according to the current road load so that the battery State of Charge (SOC) is maintained within its boundaries and the fuel consumption is minimized.
Abstract: Hybrid Electric Vehicles (HEV) improvements in fuel economy and emissions strongly depend on the energy management strategy. In this paper a new control strategy called Adaptive Equivalent Consumption Minimization Strategy (A-ECMS) is presented. This real-time energy management for HEV is obtained adding to the ECMS framework an on-the-fly algorithm for the estimation of the equivalence factor according to the driving conditions. The main idea is to periodically refresh the control parameter according to the current road load, so that the battery State of Charge (SOC) is maintained within the boundaries and the fuel consumption is minimized. The results obtained with A-ECMS show that the fuel economy that can be achieved is only slightly sub-optimal and the operations are charge-sustaining.
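
A minimal sketch of the adaptation idea, assuming a simple proportional correction of the equivalence factor toward a target SOC rather than the paper's exact on-the-fly estimator; the gains, bounds, and helper callables (`fuel_power`, `battery_power`) are placeholders.

```python
def refresh_equivalence_factor(s_prev, soc, soc_target=0.6, k_p=2.5,
                               s_min=1.5, s_max=4.0):
    """Periodic refresh of the ECMS equivalence factor s (sketch only).

    Nudge s up when the battery is below the target SOC (making electrical
    energy 'more expensive') and down when above, then clamp to a plausible
    range so the strategy stays charge-sustaining.
    """
    s_new = s_prev + k_p * (soc_target - soc)
    return min(max(s_new, s_min), s_max)

def ecms_split(p_req, s, candidate_splits, fuel_power, battery_power):
    """Pick the battery/engine power split minimizing the equivalent fuel power."""
    def cost(split):
        p_batt = split * p_req            # fraction of demand taken from the battery
        p_eng = p_req - p_batt
        return fuel_power(p_eng) + s * battery_power(p_batt)
    return min(candidate_splits, key=cost)

# Supervisor loop (usage sketch): every T seconds, refresh s from the measured SOC,
# then keep using it in the instantaneous minimization until the next refresh.
s = 3.0
s = refresh_equivalence_factor(s, soc=0.55)
```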

404 citations


Journal ArticleDOI
TL;DR: An incremental algorithm involving adaptive periods is proposed, such that all the state evolutions needed to perform the model reduction are described by an approximate reduced-order model (ROM); the approach is compatible with classical formulations of the equations.

360 citations


Journal ArticleDOI
TL;DR: The proposed approach combines knowledge of human perception with an understanding of signal characteristics in order to segment natural scenes into perceptually/semantically uniform regions to convey semantic information that can be used for content-based retrieval.
Abstract: We propose a new approach for image segmentation that is based on low-level features for color and texture. It is aimed at segmentation of natural scenes, in which the color and texture of each segment does not typically exhibit uniform statistical characteristics. The proposed approach combines knowledge of human perception with an understanding of signal characteristics in order to segment natural scenes into perceptually/semantically uniform regions. The proposed approach is based on two types of spatially adaptive low-level features. The first describes the local color composition in terms of spatially adaptive dominant colors, and the second describes the spatial characteristics of the grayscale component of the texture. Together, they provide a simple and effective characterization of texture that the proposed algorithm uses to obtain robust and, at the same time, accurate and precise segmentations. The resulting segmentations convey semantic information that can be used for content-based retrieval. The performance of the proposed algorithms is demonstrated in the domain of photographic images, including low-resolution, degraded, and compressed images.

250 citations


Journal ArticleDOI
TL;DR: The algorithm has been successfully applied to constructing the geometric model of a biomolecule in finite element calculations and generates adaptive and quality 3D meshes without introducing any hanging nodes.

192 citations


Journal ArticleDOI
TL;DR: An easily implemented adaptive algorithm is developed that improves on the work of Gerlach et al. and promises to significantly reduce computing time in a variety of problems including mixture innovation, change-point, regime switching, and outlier detection.
Abstract: Time series subject to parameter shifts of random magnitude and timing are commonly modeled with a change-point approach using Chib's (1998) algorithm to draw the break dates. We outline some advantages of an alternative approach in which breaks come through mixture distributions in state innovations, and for which the sampler of Gerlach, Carter and Kohn (2000) allows reliable and efficient inference. We show how the same sampler can be used to (i) model shifts in variance that occur independently of shifts in other parameters and (ii) draw the break dates in O(n) rather than O(n³) operations in the change-point model of Koop and Potter (2004b), the most general to date. Finally, we introduce to the time series literature the concept of adaptive Metropolis-Hastings sampling for discrete latent variable models. We develop an easily implemented adaptive algorithm that improves on Gerlach et al. (2000) and promises to significantly reduce computing time in a variety of problems including mixture innovation, change-point, regime-switching, and outlier detection. The efficiency gains on two models for U.S. inflation and real interest rates are 257% and 341%.

189 citations


Journal ArticleDOI
TL;DR: An adaptive algorithm is used to choose the locations of the collocation points of radial basis function methods; it produces results similar to the better-known and more thoroughly analyzed spectral methods, while allowing greater flexibility in the choice of grid point locations.

176 citations


Journal ArticleDOI
TL;DR: In this paper, an hp-adaptive finite element algorithm is proposed, based on a combination of reliable and efficient residual error indicators and a new hp-extension control technique that assesses the local regularity of the underlying analytical solution on the basis of its local Legendre series expansion.

Journal ArticleDOI
TL;DR: This paper describes a multi-actuator substructured system of a coupled three mass–spring–damper system and uses this to demonstrate the nature of delay errors which can first lead to a loss of accuracy and then to instability of the substructuring algorithm.
Abstract: Real-time dynamic substructuring is a novel experimental technique used to test the dynamic behaviour of complex structures. The technique involves creating a hybrid model of the entire structure by combining an experimental test piece—the substructure—with a set of numerical models. In this paper we describe a multi-actuator substructured system of a coupled three mass–spring–damper system and use this to demonstrate the nature of delay errors which can first lead to a loss of accuracy and then to instability of the substructuring algorithm. Synchronization theory and delay compensation are used to show how the delay errors, present in the transfer systems, can be minimized by online forward prediction. This new algorithm uses a more generic approach than the single step algorithms applied to substructuring thus far, giving considerable advantages in terms of flexibility and accuracy. The basic algorithm is then extended by closing the control loop resulting in an error driven adaptive feedback controller which can operate with no prior knowledge of the plant dynamics. The adaptive algorithm is then used to perform a real substructuring test using experimentally measured forces to deliver a stable substructuring algorithm.
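
A hedged sketch of online forward prediction for delay compensation: fit a low-order polynomial to the most recent samples of the command signal and evaluate it one transfer-system delay ahead. The polynomial order, window length, and least-squares fit are illustrative choices, not the paper's adaptive compensation law.

```python
import numpy as np

def forward_predict(history_t, history_x, delay, order=2):
    """Predict the signal value `delay` seconds ahead of the newest sample.

    Fits a polynomial of the given order to the recent samples (least squares)
    and extrapolates it forward by the transfer-system delay.
    """
    t0 = history_t[-1]
    coeffs = np.polyfit(np.asarray(history_t) - t0, history_x, order)
    return np.polyval(coeffs, delay)

# Example: a 10 ms actuator delay compensated on a 1 kHz command signal.
t = np.arange(0, 0.02, 0.001)              # last 20 samples (20 ms window)
x = np.sin(2 * np.pi * 5 * t)              # slowly varying command
x_pred = forward_predict(t, x, delay=0.010)
print(x_pred, np.sin(2 * np.pi * 5 * (t[-1] + 0.010)))   # prediction vs truth
```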

Proceedings ArticleDOI
15 Jun 2005
TL;DR: This work develops a general framework for adaptive algorithm selection for use in the Standard Template Adaptive Parallel Library (STAPL), using machine learning techniques to analyze data collected by STAPL installation benchmarks and to determine tests that will select among algorithmic options at run-time.
Abstract: Writing portable programs that perform well on multiple platforms or for varying input sizes and types can be very difficult because performance is often sensitive to the system architecture, the run-time environment, and input data characteristics. This is even more challenging on parallel and distributed systems due to the wide variety of system architectures. One way to address this problem is to adaptively select the best parallel algorithm for the current input data and system from a set of functionally equivalent algorithmic options. Toward this goal, we have developed a general framework for adaptive algorithm selection for use in the Standard Template Adaptive Parallel Library (STAPL). Our framework uses machine learning techniques to analyze data collected by STAPL installation benchmarks and to determine tests that will select among algorithmic options at run-time. We apply a prototype implementation of our framework to two important parallel operations, sorting and matrix multiplication, on multiple platforms and show that the framework determines run-time tests that correctly select the best performing algorithm from among several competing algorithmic options in 86-100% of the cases studied, depending on the operation and the system.
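
A hedged, much-simplified sketch of the selection idea outside STAPL: time the candidate implementations offline on a range of input sizes, remember which one wins per size, and consult that record at run time. The simple bucketed lookup below stands in for the learned decision model, and all names are assumptions.

```python
import bisect
import random
import time

def timeit_once(fn, data):
    start = time.perf_counter()
    fn(list(data))                 # copy so each run sees an identical input
    return time.perf_counter() - start

def benchmark_selector(candidates, sizes, make_input, trials=3):
    """Offline phase: time each candidate on sample inputs, remember the winner."""
    winners = []
    for n in sizes:
        best, best_t = None, float("inf")
        for name, fn in candidates.items():
            t = min(timeit_once(fn, make_input(n)) for _ in range(trials))
            if t < best_t:
                best, best_t = name, t
        winners.append(best)

    def select(n):
        """Run-time test: use the winner of the smallest benchmarked size >= n."""
        i = min(bisect.bisect_left(sizes, n), len(sizes) - 1)
        return candidates[winners[i]]
    return select

def insertion_sort(a):
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

# Two 'algorithmic options' for sorting; the selector picks per input size.
candidates = {"builtin": sorted, "insertion": insertion_sort}
select = benchmark_selector(candidates, sizes=[16, 256, 4096],
                            make_input=lambda n: [random.random() for _ in range(n)])
algo = select(50)
print(algo([3, 1, 2]))
```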

Journal ArticleDOI
TL;DR: An adaptive sampling algorithm that adaptively chooses which action to sample as the sampling process proceeds and generates an asymptotically unbiased estimator, whose bias is bounded by a quantity that converges to zero at rate (ln N)/N.
Abstract: Based on recent results for multiarmed bandit problems, we propose an adaptive sampling algorithm that approximates the optimal value of a finite-horizon Markov decision process (MDP) with finite state and action spaces. The algorithm adaptively chooses which action to sample as the sampling process proceeds and generates an asymptotically unbiased estimator, whose bias is bounded by a quantity that converges to zero at rate (ln N)/N, where N is the total number of samples that are used per state sampled in each stage. The worst-case running-time complexity of the algorithm is O((|A|N)^H), independent of the size of the state space, where |A| is the size of the action space and H is the horizon length. The algorithm can be used to create an approximate receding horizon control to solve infinite-horizon MDPs. To illustrate the algorithm, computational results are reported on simple examples from inventory control.
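
A hedged sketch of the recursion as the abstract describes it: at each visited state, actions are chosen with a multiarmed-bandit (UCB-style) rule, and the value estimate is a count-weighted average of the per-action estimates. The generative model `simulate`, the exploration constant, and the exact index formula are assumptions.

```python
import math

def adaptive_sampling_value(simulate, state, stage, horizon, n_samples, actions,
                            discount=1.0, c=1.0):
    """Estimate V_stage(state) of a finite-horizon MDP by adaptive sampling.

    simulate(state, action) -> (reward, next_state) draws one transition.
    Each action is sampled once, then further samples go to the action with
    the largest UCB index; the estimator averages per-action Q-estimates
    weighted by how often each action was sampled.
    """
    if stage == horizon:
        return 0.0
    counts = {a: 0 for a in actions}
    q_totals = {a: 0.0 for a in actions}

    def sample(a):
        reward, nxt = simulate(state, a)
        q_totals[a] += reward + discount * adaptive_sampling_value(
            simulate, nxt, stage + 1, horizon, n_samples, actions, discount, c)
        counts[a] += 1

    for a in actions:                       # initialization: one sample per action
        sample(a)
    for t in range(len(actions), n_samples):
        ucb = lambda a: q_totals[a] / counts[a] + c * math.sqrt(math.log(t) / counts[a])
        sample(max(actions, key=ucb))       # adaptively pick which action to sample

    total = sum(counts.values())
    return sum(counts[a] / total * (q_totals[a] / counts[a]) for a in actions)
```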

Journal ArticleDOI
TL;DR: The idea of a linearly adaptive gradient matrix presented in this paper provides an interesting compromise between a standard optimization technique that recomputes the gradient at every iteration and the fixed gradient matrix approach of the basic AAM.
Abstract: The active appearance model (AAM) is a powerful tool for modeling images of deformable objects and has been successfully used in a variety of alignment, tracking, and recognition applications. AAM uses subspace-based deformable models to represent the images of a certain object class. In general, fitting such complicated models to previously unseen images using standard optimization techniques is a computationally complex task because the gradient matrix has to be numerically computed at every iteration. The critical feature of AAM is a fast convergence scheme which assumes that the gradient matrix is fixed around the optimal coefficients for all images. Our work in this paper starts with the observation that such a fixed gradient matrix inevitably specializes to a certain region in the texture space, and the fixed gradient matrix is not a good estimate of the actual gradient as the target texture moves away from this region. Hence, we propose an adaptive AAM algorithm that linearly adapts the gradient matrix according to the composition of the target image's texture to obtain a better estimate for the actual gradient. We show that the adaptive AAM significantly outperforms the basic AAM, especially in images that are particularly challenging for the basic algorithm. In terms of speed and accuracy, the idea of a linearly adaptive gradient matrix presented in this paper provides an interesting compromise between a standard optimization technique that recomputes the gradient at every iteration and the fixed gradient matrix approach of the basic AAM.

Journal ArticleDOI
TL;DR: A second-order finite-element adaptive strategy with error control for one-dimensional grating problems is developed and is expected to increase significantly the accuracy and efficiency of the discretization as well as reduce the computation cost.
Abstract: A second-order finite-element adaptive strategy with error control for one-dimensional grating problems is developed. The unbounded computational domain is truncated to a bounded one by a perfectly-matched-layer (PML) technique. The PML parameters, such as the thickness of the layer and the medium properties, are determined through sharp a posteriori error estimates. The adaptive finite-element method is expected to increase significantly the accuracy and efficiency of the discretization as well as reduce the computation cost. Numerical experiments are included to illustrate the competitiveness of the proposed adaptive method.

01 May 2005
TL;DR: A novel approach to motion cueing, the “nonlinear algorithm” is introduced that combines features from both approaches, formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli.
Abstract: While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

Proceedings ArticleDOI
03 Oct 2005
TL;DR: The simulation results show that advance resource reservation coupled with adaptively changing the size of the active tracking region and the sampling rate reduces the overall energy consumed for tracking without affecting the accuracy in tracking.
Abstract: Target tracking in wireless sensor networks requires efficient coordination among sensor nodes. Existing methods have focused on tree-based collaboration, selective activation, and group clustering. This paper presents a prediction-based adaptive algorithm for tracking mobile targets. We use adaptive Kalman filtering to predict the future location and velocity of the target. This location prediction is used to determine the active tracking region which corresponds to the set of sensors that needs to be "lighted". The velocity prediction is used to adaptively determine the size of the active tracking region, and to modulate the sampling rate as well. In this paper, we quantify the benefits of our approach in terms of energy consumed and accuracy of tracking for different mobility patterns. Our simulation results show that advance resource reservation coupled with adaptively changing the size of the active tracking region and the sampling rate reduces the overall energy consumed for tracking without affecting the accuracy in tracking.
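
A minimal sketch of the prediction step, assuming a constant-velocity Kalman filter and simple rules that size the active tracking region from the predicted motion and uncertainty and that shorten the sampling interval for faster targets; the specific mappings are illustrative, not the paper's.

```python
import numpy as np

class TargetPredictor:
    """Constant-velocity Kalman filter for 2-D target tracking (sketch)."""

    def __init__(self, dt=1.0, q=0.1, r=1.0):
        self.x = np.zeros(4)                      # state: [px, py, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the new position measurement z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

    def active_region(self, horizon=1.0, k_sigma=2.0):
        """Predicted center and radius of the region of sensors to wake up."""
        pos = self.x[:2] + self.x[2:] * horizon
        speed = np.linalg.norm(self.x[2:])
        radius = speed * horizon + k_sigma * np.sqrt(np.trace(self.P[:2, :2]))
        return pos, radius

    def sampling_interval(self, base=1.0, v_ref=5.0):
        """Sample faster when the target moves faster (illustrative mapping)."""
        speed = np.linalg.norm(self.x[2:])
        return base / (1.0 + speed / v_ref)
```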

Book ChapterDOI
03 Oct 2005
TL;DR: Tests performed on both synthetic and real-life data indicate that the new classifier outperforms existing algorithms for data streams in terms of accuracy and computational costs.
Abstract: In this paper, we propose an incremental classification algorithm which uses a multi-resolution data representation to find adaptive nearest neighbors of a test point. The algorithm achieves excellent performance by using small classifier ensembles where approximation error bounds are guaranteed for each ensemble size. The very low update cost of our incremental classifier makes it highly suitable for data stream applications. Tests performed on both synthetic and real-life data indicate that our new classifier outperforms existing algorithms for data streams in terms of accuracy and computational costs.

Journal ArticleDOI
TL;DR: A computationally efficient recursive least squares (RLS) type algorithm for jointly estimating the parameters of the channel and the receiver is developed in order to suppress multiaccess interference (MAI) and inter-symbol interference (ISI).
Abstract: A code-constrained constant modulus (CCM) design criterion for linear receivers is investigated for direct sequence code division multiple access (DS-CDMA) in multipath channels based on constrained optimization techniques. A computationally efficient recursive least squares (RLS) type algorithm for jointly estimating the parameters of the channel and the receiver is developed in order to suppress multiaccess interference (MAI) and inter-symbol interference (ISI). An analysis of the method examines its convergence properties, and simulations under nonstationary environments show that the novel algorithms outperform existing techniques.

Journal ArticleDOI
TL;DR: A novel Coordinate Rotation Digital Computer (CORDIC) rotator algorithm that converges to the final target angle by adaptively executing appropriate iteration steps while keeping the scale factor virtually constant and completely predictable is proposed.
Abstract: In this paper, we propose a novel Coordinate Rotation Digital Computer (CORDIC) rotator algorithm that converges to the final target angle by adaptively executing appropriate iteration steps while keeping the scale factor virtually constant and completely predictable. The new feature of our scheme is that, depending on the input angle, the scale factor can assume only two values, viz., 1 and 1/√2, and it is independent of the number of executed iterations, nature of iterations, and word length. In this algorithm, compared to the conventional CORDIC, a reduction of 50% in iterations is achieved on average without compromising the accuracy. The adaptive selection of the appropriate iteration step is predicted from the binary representation of the target angle, and no further arithmetic computation in the angle approximation datapath is required. The convergence range of the proposed CORDIC rotator spans the entire coordinate space. The new CORDIC rotator requires 22% fewer adders and 53% fewer registers compared to the conventional CORDIC. The synthesized cell area of the proposed CORDIC rotator core is 0.7 mm² and its power dissipation is 7 mW in IHP in-house 0.25-µm BiCMOS technology.
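
For context, the sketch below is a conventional fixed-sequence CORDIC rotator in rotation mode; the paper's contribution, choosing the iteration steps adaptively from the binary representation of the target angle so that roughly half the iterations suffice and the scale factor stays at 1 or 1/√2, is not reproduced here.

```python
import math

def cordic_rotate(x, y, angle, n_iter=16):
    """Conventional CORDIC rotation of (x, y) by `angle` radians (|angle| <= pi/2).

    Uses the standard fixed sequence of micro-rotations by atan(2^-i) and
    compensates the accumulated gain at the end.
    """
    z = angle
    gain = 1.0
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    return x / gain, y / gain

# Rotating (1, 0) by 30 degrees approaches (cos 30°, sin 30°).
print(cordic_rotate(1.0, 0.0, math.radians(30)))
```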

Journal ArticleDOI
TL;DR: Simulation-based discrete stochastic optimization algorithms are proposed to adaptively select a better antenna subset in MIMO systems using criteria such as maximum mutual information and bounds on error rate; they are ideally suited to minimizing the error rate, for which no closed-form expression is available.
Abstract: Recently it has been shown that it is possible to improve the performance of multiple-input multiple-output (MIMO) systems by employing a larger number of antennas than actually used and selecting the optimal subset based on the channel state information. Existing antenna selection algorithms assume perfect channel knowledge and optimize criteria such as Shannon capacity or various bounds on error rate. This paper examines MIMO antenna selection algorithms where the set of possible solutions is large and only a noisy estimate of the channel is available. In the same spirit as traditional adaptive filtering algorithms, we propose simulation based discrete stochastic optimization algorithms to adaptively select a better antenna subset using criteria such as maximum mutual information, bounds on error rate, etc. These discrete stochastic approximation algorithms are ideally suited to minimize the error rate since computing a closed form expression for the error rate is intractable. We also consider scenarios of time-varying channels for which the antenna selection algorithms can track the time-varying optimal antenna configuration. We present several numerical examples to show the fast convergence of these algorithms under various performance criteria, and also demonstrate their tracking capabilities.

Journal ArticleDOI
TL;DR: An adaptive frequency-domain equalization (FDE) algorithm for implementation in single-carrier (SC) multiple-input multiple-output (MIMO) systems is presented and a novel method of reducing the overhead required to train the proposed equalizer is outlined.
Abstract: Channel estimation and tracking pose real problems in broadband single-carrier wireless communication systems employing multiple transmit and receive antennas. An alternative to estimating the channel is to adaptively equalize the received symbols. Several adaptive equalization solutions have been researched for systems operating in the time domain. However, these solutions tend to be computationally intensive. A low-complexity alternative is to adaptively equalize the received message in the frequency domain. In this paper, we present an adaptive frequency-domain equalization (FDE) algorithm for implementation in single-carrier (SC) multiple-input multiple-output (MIMO) systems. Furthermore, we outline a novel method of reducing the overhead required to train the proposed equalizer. Finally, we address the issues of complexity and training sequence design. Other computationally efficient adaptive FDE algorithms for use in SC systems employing single transmit and receive antennas, receive diversity, or space-time block codes (STBC) can be found in the literature. However, the algorithm detailed in this paper can be implemented in STBC systems as well as in broadband spatial multiplexing systems, making it suitable for use in high data rate MIMO applications.

Journal ArticleDOI
TL;DR: Exact expressions that completely characterize the transient and steady-state mean-square performances of the algorithm are developed, which lead to new insights into the statistical behavior of the deficient length LMS algorithm.
Abstract: In almost all analyses of the least mean-square (LMS) finite impulse response (FIR) adaptive algorithm, it is assumed that the length of the adaptive filter is equal to that of the unknown system impulse response. However, in many practical situations, a deficient-length adaptive filter, whose length is less than that of the unknown system, is employed, and analysis results for the sufficient-length LMS algorithm are not necessarily applicable to the deficient-length case. Therefore, there is an essential need to accurately quantify the behavior of the LMS algorithm in realistic situations where the length of the adaptive filter is deficient. In this paper, we present a performance analysis of the deficient-length LMS adaptive algorithm for correlated Gaussian input data, using the common independence assumption. Exact expressions that completely characterize the transient and steady-state mean-square performance of the algorithm are developed, which lead to new insights into the statistical behavior of the deficient-length LMS algorithm. Simulation experiments illustrate the accuracy of the theoretical results in predicting the convergence behavior of the algorithm.
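
To make the deficient-length setting concrete, here is a small assumed simulation (not the paper's analysis): a length-8 LMS filter identifies a length-16 system, and the untracked tail leaves a steady-state error floor, which is exactly the behavior the paper's expressions quantify.

```python
import numpy as np

def lms_deficient_length(n_samples=20000, sys_len=16, filt_len=8, mu=0.01, seed=0):
    """Run standard LMS with an adaptive filter shorter than the unknown system."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(sys_len)              # unknown system impulse response
    w = np.zeros(filt_len)                        # deficient-length adaptive filter
    x = rng.standard_normal(n_samples)            # white Gaussian input
    mse = np.empty(n_samples - sys_len)
    for n in range(sys_len, n_samples):
        x_sys = x[n - sys_len + 1:n + 1][::-1]    # regressor seen by the true system
        x_fil = x_sys[:filt_len]                  # regressor seen by the short filter
        d = h @ x_sys                             # desired signal (noise-free here)
        e = d - w @ x_fil                         # a priori error
        w += mu * e * x_fil                       # LMS update
        mse[n - sys_len] = e ** 2
    return mse

mse = lms_deficient_length()
# The error converges not to zero but to the power of the unmodeled tail,
# illustrating why sufficient-length results do not carry over directly.
print(mse[-2000:].mean())
```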

Journal ArticleDOI
TL;DR: In this paper, an adaptive remeshing algorithm that automatically adjusts the size of the elements of meshes of unstructured triangles (2D and 3D) with time and position in the computational domain is presented.

Journal ArticleDOI
TL;DR: A new version of the HHC algorithm, relaxed HHC, is introduced and shown to have beneficial robustness properties and unify and extend previous work on the higher-harmonic-control algorithm.
Abstract: The higher-harmonic-control (HHC) algorithm is examined from a control theory perspective. A brief review of the history and variants of HHC is given, followed by a careful development of the algorithm. An analytic convergence and robustness analysis is then performed. Online identification with the adaptive variant of the algorithm is also addressed. A new version of the algorithm, relaxed HHC, is introduced and shown to have beneficial robustness properties. Some numerical results comparing these variants of the HHC algorithm applied to helicopter vibration reduction are also presented. The results presented unify and extend previous work on the higher-harmonic-control algorithm.
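
A hedged sketch of the update as HHC is commonly formulated, with a quasi-static linear response z = z0 + T u and a relaxed step u <- u - alpha * pinv(T_hat) z; the pseudo-inverse, the gains, and the toy mismatch experiment are illustrative assumptions, not the paper's exact development.

```python
import numpy as np

def relaxed_hhc(measure, T_hat, n_inputs, alpha=0.5, n_steps=20):
    """Iterate the relaxed HHC update u <- u - alpha * pinv(T_hat) @ z.

    measure(u) returns the vector of harmonic vibration amplitudes z for the
    current control harmonics u; T_hat is the identified sensitivity matrix
    (dz/du); alpha in (0, 1] is the relaxation factor (alpha = 1 is classical HHC).
    """
    u = np.zeros(n_inputs)
    T_pinv = np.linalg.pinv(T_hat)
    history = []
    for _ in range(n_steps):
        z = measure(u)
        history.append(np.linalg.norm(z))
        u = u - alpha * (T_pinv @ z)
    return u, history

# Toy plant z = z0 + T u, with a mismatched identified model T_hat, to show
# that relaxation (alpha < 1) tolerates identification error.
rng = np.random.default_rng(1)
T = rng.standard_normal((4, 3))
z0 = rng.standard_normal(4)
T_hat = T + 0.3 * rng.standard_normal((4, 3))     # imperfect identified model
u_opt, hist = relaxed_hhc(lambda u: z0 + T @ u, T_hat, n_inputs=3, alpha=0.5)
print(hist[0], hist[-1])                          # vibration norm before/after
```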

Journal ArticleDOI
TL;DR: This paper demonstrates a real-valued genetic algorithm that simultaneously adapts several such parameters during the optimization process, and this adaptive algorithm is shown to outperform its static counterparts when used to synthesize the phased array weights to satisfy specified far-field sidelobe constraints.
Abstract: Genetic algorithms are commonly used to solve many optimization and synthesis problems. An important issue facing the user is the selection of genetic algorithm parameters, such as mutation rate, mutation range, and number of crossovers. This paper demonstrates a real-valued genetic algorithm that simultaneously adapts several such parameters during the optimization process. This adaptive algorithm is shown to outperform its static counterparts when used to synthesize the phased array weights to satisfy specified far-field sidelobe constraints, and it can perform amplitude-only, phase-only, and complex weight synthesis. When compared to conventional static-parameter implementations, computation time is saved in two ways: 1) the algorithm converges faster and 2) the need to tune parameters by hand (generally done by repeatedly running the code with different parameter choices) is greatly reduced. By requiring fewer iterations to solve a given problem, this approach may benefit electromagnetic optimization problems with expensive cost functions, since genetic algorithms generally require many function evaluations to converge. The adaptive process also provides insight into the qualitative importance of parameters, and dynamically adjusting the mutation range is found to be especially beneficial.
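
A hedged toy version of parameter self-adaptation in a real-valued GA: each individual carries its own mutation rate and range, which are recombined, perturbed, and inherited along with the genes, so effective settings spread through the population. The encoding, operators, and the sphere-function example are assumptions; no array-synthesis cost function is included.

```python
import random

def adaptive_real_ga(fitness, n_vars, pop_size=40, generations=200,
                     bounds=(-1.0, 1.0), seed=0):
    """Minimize `fitness` with a real-valued GA whose mutation parameters self-adapt.

    Each individual is (genes, mut_rate, mut_range); the strategy parameters are
    recombined and perturbed along with the genes, so selection favours useful
    parameter settings as well as good solutions.
    """
    rng = random.Random(seed)
    lo, hi = bounds

    def new_individual():
        return ([rng.uniform(lo, hi) for _ in range(n_vars)],
                rng.uniform(0.05, 0.5),              # per-gene mutation probability
                rng.uniform(0.01, 0.5) * (hi - lo))  # mutation range

    pop = [new_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind[0]))
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (g1, r1, s1), (g2, r2, s2) = rng.sample(parents, 2)
            cut = rng.randrange(1, n_vars) if n_vars > 1 else 1
            genes = g1[:cut] + g2[cut:]              # one-point crossover
            rate = 0.5 * (r1 + r2) * rng.uniform(0.8, 1.25)   # adapt the parameters
            span = 0.5 * (s1 + s2) * rng.uniform(0.8, 1.25)
            genes = [min(max(g + rng.uniform(-span, span), lo), hi)
                     if rng.random() < rate else g for g in genes]
            children.append((genes, rate, span))
        pop = parents + children
    best = min(pop, key=lambda ind: fitness(ind[0]))
    return best[0], fitness(best[0])

# Example: minimize the sphere function; the mutation range shrinks on its own.
genes, value = adaptive_real_ga(lambda g: sum(x * x for x in g), n_vars=5)
print(value)
```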

Journal ArticleDOI
01 Jul 2005
TL;DR: An adaptive algorithm is presented that can automatically simplify the dynamics of a multi-body system, based on the desired number of degrees of freedom and the location of external forces and active joint forces, and achieve up to two orders of magnitude performance improvement in several complex benchmarks.
Abstract: Forward dynamics is central to physically-based simulation and control of articulated bodies. We present an adaptive algorithm for computing forward dynamics of articulated bodies: using novel motion error metrics, our algorithm can automatically simplify the dynamics of a multi-body system, based on the desired number of degrees of freedom and the location of external forces and active joint forces. We demonstrate this method in plausible animation of articulated bodies, including a large-scale simulation of 200 animated humanoids and multi-body dynamics systems with many degrees of freedom. The graceful simplification allows us to achieve up to two orders of magnitude performance improvement in several complex benchmarks.

Journal ArticleDOI
TL;DR: Simulation results show that the novel hierarchical scheme for adaptive dynamic power management under nonstationary service requests can lead to significant power savings compared to previously proposed heuristic approaches.
Abstract: Dynamic power management aims at extending battery life by switching devices to lower-power modes when there is a reduced demand for service. Static power management strategies can lead to poor performance or unnecessary power consumption when there are wide variations in the rate of requests for service. This paper presents a hierarchical scheme for adaptive dynamic power management (DPM) under nonstationary service requests. As the main theoretical contribution, we model the nonstationary request process as a Markov-modulated process with a collection of modes, each corresponding to a particular stationary request process. Optimal DPM policies are precalculated offline for selected modes using standard algorithms available for stationary Markov decision processes (MDPs). The power manager then switches online among these policies to accommodate the stochastic mode-switching request dynamics using an adaptive algorithm to determine the optimal switching rule based on the observed sample path. As a target application, we present simulations of hierarchical DPM for hard disk drives where the read/write request arrivals are modeled as a Markov-modulated Poisson process. Simulation results show that the power consumption of our approach under highly nonstationary request arrivals is less than that of a previously proposed heuristic approach and is even comparable to that of the optimal policy under stationary Poisson request process with the same arrival rate as the average arrival rate of the nonstationary request process.

Proceedings Article
01 Jan 2005
TL;DR: An efficient online spherical k-means (OSKM) algorithm is combined with an existing scalable clustering strategy to achieve fast and adaptive clustering of text streams to reveal an intuitive and an interesting fact for clustering text streams-one needs to forget to be adaptive.
Abstract: Clustering data streams has been a new research topic, recently emerged from many real data mining applications, and has attracted a lot of research attention. However, there is little work on clustering high-dimensional streaming text data. This paper combines an efficient online spherical k-means (OSKM) algorithm with an existing scalable clustering strategy to achieve fast and adaptive clustering of text streams. The OSKM algorithm modifies the spherical k-means (SPKM) algorithm, using online update (for cluster centroids) based on the well-known Winner-Take-All competitive learning. It has been shown to be as efficient as SPKM, but much superior in clustering quality. The scalable clustering strategy was previously developed to deal with very large databases that cannot fit into a limited memory and that are too expensive to read/scan multiple times. Using the strategy, one keeps only sufficient statistics for history data to retain (part of) the contribution of history data and to accommodate the limited memory. To make the proposed clustering algorithm adaptive to data streams, we introduce a forgetting factor that applies exponential decay to the importance of history data. The older a set of text documents, the less weight they carry. Our experimental results demonstrate the efficiency of the proposed algorithm and reveal an intuitive and an interesting fact for clustering text streams-one needs to forget to be adaptive.