Author

Joëlle Skaf

Other affiliations: Google
Bio: Joëlle Skaf is an academic researcher from Stanford University. The author has contributed to research in topics including linear matrix inequality and convex optimization, has an h-index of 9, and has co-authored 15 publications receiving 552 citations. Previous affiliations of Joëlle Skaf include Google.

Papers
Journal ArticleDOI
TL;DR: This work addresses the problem of designing a general affine causal controller, in which the control input is an affine function of all previous measurements, in order to minimize a convex objective, in either a stochastic or worst-case setting.
Abstract: We consider a discrete-time time-varying linear dynamical system, perturbed by process noise, with linear noise corrupted measurements, over a finite horizon. We address the problem of designing a general affine causal controller, in which the control input is an affine function of all previous measurements, in order to minimize a convex objective, in either a stochastic or worst-case setting. This controller design problem is not convex in its natural form, but can be transformed to an equivalent convex optimization problem by a nonlinear change of variables, which allows us to efficiently solve the problem. Our method is related to the classical Q-design procedure for time-invariant, infinite-horizon linear controller design, and the more recent purified output control method. We illustrate the method with applications to supply chain optimization and dynamic portfolio optimization, and show that the method can be combined with model predictive control techniques when perfect state information is available.

150 citations
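For a rough sense of how the convexified design works, here is a minimal sketch of finite-horizon affine disturbance-feedback control in cvxpy, assuming perfect state information, a quadratic objective, and a Monte Carlo (sampled-noise) approximation of the stochastic cost. The system matrices, horizon, and sample count are illustrative, and the full measurement-feedback change of variables from the paper is not reproduced.

# Hedged sketch: finite-horizon affine disturbance-feedback controller design
# via convex optimization (cvxpy), assuming perfect state information, a
# quadratic stage cost, and a sampled (Monte Carlo) approximation of the
# stochastic objective. Names and dimensions below are illustrative only.
import numpy as np
import cvxpy as cp

np.random.seed(0)
n, m, T, S = 3, 2, 10, 30           # state dim, input dim, horizon, noise samples
A = np.eye(n) + 0.1 * np.random.randn(n, n)
B = np.random.randn(n, m)
x0 = np.random.randn(n)
W = 0.1 * np.random.randn(S, T, n)  # sampled process-noise trajectories

# Decision variables: open-loop terms and causal disturbance-feedback gains.
u_bar = [cp.Variable(m) for _ in range(T)]
K = [[cp.Variable((m, n)) for _ in range(t)] for t in range(T)]

cost = 0
for s in range(S):
    x = x0
    for t in range(T):
        # u_t is affine in the past disturbances w_0, ..., w_{t-1} (causality).
        u = u_bar[t] + sum(K[t][j] @ W[s, j] for j in range(t))
        cost += cp.sum_squares(x) + cp.sum_squares(u)
        x = A @ x + B @ u + W[s, t]
    cost += cp.sum_squares(x)        # terminal cost

prob = cp.Problem(cp.Minimize(cost / S))
prob.solve()
print("optimal sampled cost:", prob.value)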

Journal ArticleDOI
TL;DR: The optimized self-tuning approach satisfies performance constraints at all times, and maximizes a lifetime computational power efficiency (LCPE) metric, which is defined as the total number of clock cycles achieved over lifetime divided by the total energy consumed over lifetime.
Abstract: This paper presents an integrated framework, together with control policies, for optimizing dynamic control of self-tuning parameters of a digital system over its lifetime in the presence of circuit aging. A variety of self-tuning parameters such as supply voltage, operating clock frequency, and dynamic cooling are considered, and jointly optimized using efficient algorithms described in this paper. Our optimized self-tuning approach satisfies performance constraints at all times, and maximizes a lifetime computational power efficiency (LCPE) metric, which is defined as the total number of clock cycles achieved over lifetime divided by the total energy consumed over lifetime. We present three control policies: 1) progressive-worst-case-aging (PWCA), which assumes worst-case aging at all times; 2) progressive-on-state-aging (POSA), which estimates aging by tracking active/sleep modes, and then assumes worst-case aging in active mode and long recovery effects in sleep mode; and 3) progressive-real-time-aging-assisted (PRTA), which acquires real-time information and initiates optimized control actions. Various flavors of these control policies for systems with dynamic voltage and frequency scaling (DVFS) are also analyzed. Simulation results on benchmark circuits, using aging models validated by 45 nm measurements, demonstrate the effectiveness and practicality of our approach in significantly improving LCPE and/or lifetime compared to traditional one-time worst-case guardbanding. We also derive system design guidelines to maximize self-tuning benefits.

89 citations
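The LCPE metric itself is simple to compute once an operating schedule is fixed. The sketch below evaluates it for a hypothetical schedule of (duration, frequency, supply voltage) intervals, using an illustrative dynamic-power model P ≈ C_eff · V² · f; the numbers and the lcpe helper are assumptions for illustration, not values or code from the paper.

# Hedged sketch: evaluating the lifetime computational power efficiency (LCPE)
# metric described above for a hypothetical schedule of operating points.
# LCPE = (total clock cycles over lifetime) / (total energy over lifetime).
# The dynamic-power model P ~ C_eff * V^2 * f and the sample numbers are
# illustrative assumptions, not values from the paper.

def lcpe(schedule, c_eff=1e-9):
    """schedule: list of (duration_s, freq_hz, vdd_volts) operating intervals."""
    total_cycles = 0.0
    total_energy = 0.0
    for duration, freq, vdd in schedule:
        total_cycles += freq * duration                  # cycles executed
        power = c_eff * vdd ** 2 * freq                  # dynamic power (W)
        total_energy += power * duration                 # energy (J)
    return total_cycles / total_energy                   # cycles per joule

# Example: a guardbanded fixed operating point vs. a schedule that lowers the
# supply voltage while the circuit is young and raises it later to meet the
# same frequency as the circuit ages.
fixed = [(3.15e7, 1.0e9, 1.10)] * 2                      # ~2 years at 1 GHz, 1.10 V
tuned = [(3.15e7, 1.0e9, 1.00), (3.15e7, 1.0e9, 1.08)]   # lower V early in life
print("LCPE fixed :", lcpe(fixed))
print("LCPE tuned :", lcpe(tuned))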

Journal ArticleDOI
TL;DR: It is shown how LMI methods can be used to synthesize a constant state-feedback controller that minimizes the performance bound, for a given level of timing jitter.
Abstract: We consider a continuous-time linear system with sampled constant linear state-feedback control and a convex quadratic performance measure. The sample times, however, are subject to variation within some known interval. We use linear matrix inequality (LMI) methods to derive a Lyapunov function that establishes an upper bound on performance degradation due to the timing jitter. The same Lyapunov function can be used in a heuristic for finding a bad timing jitter sequence, which gives a lower bound on the possible performance degradation. Numerical experiments show that these two bounds are often close, which means that our bound is tight. We show how LMI methods can be used to synthesize a constant state-feedback controller that minimizes the performance bound, for a given level of timing jitter.

78 citations
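A minimal sketch of the LMI flavor of this approach, in cvxpy: search for a single quadratic Lyapunov matrix P that certifies the discretized closed-loop maps obtained for sampling intervals gridded over the jitter range. This only illustrates the common-Lyapunov-function idea; the paper's actual LMIs additionally bound the quadratic performance degradation. The plant, gain, and jitter interval below are made up.

import numpy as np
import cvxpy as cp
from scipy.linalg import expm

# Continuous-time plant and a fixed state-feedback gain (illustrative values).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 0.8]])

def closed_loop(h):
    """Discretized closed-loop map for a sample-and-hold interval h."""
    n, m = A.shape[0], B.shape[1]
    M = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * h)
    Ad, Bd = M[:n, :n], M[:n, n:]
    return Ad - Bd @ K

# LMI: find P >> 0 with Acl(h)^T P Acl(h) - P << 0 for h gridded over the
# jitter interval; a common quadratic Lyapunov function then certifies the
# closed loop against the sampled timing variations.
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n)]
for h in np.linspace(0.08, 0.12, 9):
    Acl = closed_loop(h)
    M = Acl.T @ P @ Acl - P
    constraints.append(0.5 * (M + M.T) << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status)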

Proceedings ArticleDOI
A. Zymnis, Seung-Jean Kim, Joëlle Skaf, Mario Parente, Stephen Boyd
01 Nov 2007
TL;DR: In this paper, the authors considered the problem of factorizing a hyperspectral image into the product of two nonnegative matrices, which represent nonnegative bases for image spectra and mixing coefficients, respectively, and presented a simple heuristic for approximately solving this problem based on the idea of alternating projected subgradient descent.
Abstract: We consider the problem of factorizing a hyperspectral image into the product of two nonnegative matrices, which represent nonnegative bases for image spectra and mixing coefficients, respectively. This spectral unmixing problem is a nonconvex optimization problem, which is very difficult to solve exactly. We present a simple heuristic for approximately solving this problem based on the idea of alternating projected subgradient descent. Finally, we present the results of applying this method on the 1990 AVIRIS image of Cuprite, Nevada, and show that our results are in agreement with similar studies on the same data.

76 citations
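A minimal sketch of the alternating projected (sub)gradient idea on synthetic data: alternately take a gradient step in W and in H for the squared-error fit of Y ≈ W H, projecting each factor back onto the nonnegative orthant. The step size, dimensions, and data are illustrative, and the sketch omits the abundance handling and AVIRIS preprocessing used in the paper.

import numpy as np

# Hedged sketch: alternating projected gradient descent for nonnegative matrix
# factorization Y ~ W @ H, in the spirit of the unmixing heuristic described
# above (W: nonnegative spectral basis, H: nonnegative mixing coefficients).
# Random data and step size are illustrative, not the paper's experiment.

rng = np.random.default_rng(0)
bands, pixels, endmembers = 50, 200, 4
Y = rng.random((bands, pixels))                 # stand-in for hyperspectral data

W = rng.random((bands, endmembers))
H = rng.random((endmembers, pixels))
step = 1e-3

for it in range(500):
    R = W @ H - Y                               # residual
    W = np.maximum(W - step * (R @ H.T), 0.0)   # gradient step, project onto >= 0
    R = W @ H - Y
    H = np.maximum(H - step * (W.T @ R), 0.0)
    if it % 100 == 0:
        print(f"iter {it:4d}  fit {np.linalg.norm(W @ H - Y):.4f}")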

Proceedings ArticleDOI
08 Mar 2010
TL;DR: This work presents a framework and control policies for optimizing dynamic control of various self-tuning parameters over lifetime in the presence of circuit aging, and introduces dynamic cooling as one of the self- Tuning parameters, in addition to supply voltage and clock frequency.
Abstract: We present a framework and control policies for optimizing dynamic control of various self-tuning parameters over lifetime in the presence of circuit aging. Our framework introduces dynamic cooling as one of the self-tuning parameters, in addition to supply voltage and clock frequency. Our optimized self-tuning satisfies performance constraints at all times and maximizes a lifetime computational power efficiency (LCPE) metric, which is defined as the total number of clock cycles achieved over lifetime divided by the total energy consumed over lifetime. Our framework features three control policies: 1. Progressive-worst-case-aging (PWCA), which assumes worst-case aging at all times; 2. Progressive-on-state-aging (POSA), which estimates aging by tracking active/sleep mode, and then assumes worst-case aging in active mode and long recovery effects in sleep mode; 3. Progressive-real-time-aging-assisted (PRTA), which estimates the actual amount of aging and initiates optimized control action. Simulation results on benchmark circuits, using aging models validated by 45nm CMOS stress measurements, demonstrate the practicality and effectiveness of our approach. We also analyze design constraints and derive system design guidelines to maximize self-tuning benefits.

38 citations


Cited by
Journal ArticleDOI
TL;DR: This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial to the present, covering signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms.
Abstract: Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.

2,373 citations
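Most of the surveyed algorithms start from the linear mixing model, in which each pixel's spectrum is a nonnegative, sum-to-one combination of endmember spectra plus noise. The sketch below states the resulting fully constrained least-squares abundance estimate for a single synthetic pixel in cvxpy; the endmember matrix and noise level are illustrative.

import numpy as np
import cvxpy as cp

# Hedged sketch of the linear mixing model underlying most surveyed unmixing
# algorithms: each pixel y is modeled as y = M @ a + noise, with the abundance
# vector a nonnegative and summing to one. M and the pixel are synthetic.

rng = np.random.default_rng(1)
bands, endmembers = 100, 3
M = rng.random((bands, endmembers))             # columns: endmember spectra
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.01 * rng.standard_normal(bands)

# Fully constrained least-squares abundance estimation for a single pixel.
a = cp.Variable(endmembers)
prob = cp.Problem(cp.Minimize(cp.sum_squares(M @ a - y)),
                  [a >= 0, cp.sum(a) == 1])
prob.solve()
print("estimated abundances:", np.round(a.value, 3))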

Posted Content
TL;DR: This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present.
Abstract: Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.

1,808 citations

Book ChapterDOI
30 Apr 2009
TL;DR: This work presents a semi-definite programming based solution for solving the problem of security constrained optimal control for discrete-time, linear dynamical systems in which control and measurement packets are transmitted over a communication network.
Abstract: We consider the problem of security constrained optimal control for discrete-time, linear dynamical systems in which control and measurement packets are transmitted over a communication network. The packets may be jammed or compromised by a malicious adversary. For a class of denial-of-service (DoS) attack models, the goal is to find an (optimal) causal feedback controller that minimizes a given objective function subject to safety and power constraints. We present a semi-definite programming based solution for solving this problem. Our analysis also presents insights on the effect of attack models on solution of the optimal control problem.

676 citations
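As a toy illustration of the setup (not of the paper's semidefinite-programming synthesis), the following simulation runs a state-feedback loop in which control packets are jammed over an interval; the assumption that a jammed actuator applies zero input, as well as the plant and gain values, are illustrative choices.

import numpy as np

# Hedged toy simulation of the networked-control setup described above: control
# packets sent over a network may be jammed by a denial-of-service attacker, in
# which case (a common modeling assumption, not necessarily the paper's) the
# actuator applies zero input. The paper's contribution, synthesizing the
# controller via semidefinite programming under such attack models, is not
# reproduced here.

rng = np.random.default_rng(3)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[8.0, 4.0]])                  # a stabilizing state-feedback gain

x = np.array([1.0, 0.0])
for t in range(50):
    jammed = 10 <= t < 20                   # adversary jams control packets here
    u = np.zeros(1) if jammed else -(K @ x)
    x = A @ x + B @ u + 0.005 * rng.standard_normal(2)
print("state after DoS episode:", np.round(x, 3))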

Journal ArticleDOI
TL;DR: Total variation (TV) regularization is added to the classical sparse regression formulation, thus exploiting the spatial-contextual information present in hyperspectral images, and a new algorithm called sparse unmixing via variable splitting augmented Lagrangian and TV is developed.
Abstract: Spectral unmixing aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by a remote sensing hyperspectral imaging instrument. In recent work, the linear spectral unmixing problem has been approached in semisupervised fashion as a sparse regression one, under the assumption that the observed image signatures can be expressed as linear combinations of pure spectra, known a priori and available in a library. It happens, however, that sparse unmixing focuses on analyzing the hyperspectral data without incorporating spatial information. In this paper, we include the total variation (TV) regularization to the classical sparse regression formulation, thus exploiting the spatial-contextual information present in the hyperspectral images and developing a new algorithm called sparse unmixing via variable splitting augmented Lagrangian and TV. Our experimental results, conducted with both simulated and real hyperspectral data sets, indicate the potential of including spatial information (through the TV term) on sparse unmixing formulations for improved characterization of mixed pixels in hyperspectral imagery.

675 citations
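The objective combining the data fit, the sparsity term over the library, and the spatial total-variation term can be stated compactly. The sketch below writes it in cvxpy for a tiny synthetic image; the paper itself solves this with a variable-splitting augmented Lagrangian (ADMM) algorithm, and the library, image size, and regularization weights here are illustrative.

import numpy as np
import cvxpy as cp

# Hedged sketch: the sparse-unmixing-plus-total-variation objective described
# above, stated directly in cvxpy for a tiny synthetic image. Only the three
# terms (data fit, sparsity, spatial TV) are illustrated.

rng = np.random.default_rng(2)
bands, library, h, w = 30, 10, 6, 6
A = rng.random((bands, library))                   # spectral library (columns = spectra)
X_true = np.zeros((library, h * w))
X_true[2, :], X_true[5, :] = 0.7, 0.3              # two active library members
Y = A @ X_true + 0.01 * rng.standard_normal((bands, h * w))

def first_diff(k):
    """(k-1) x k first-difference matrix."""
    return np.eye(k - 1, k) - np.eye(k - 1, k, k=1)

# Anisotropic spatial differences on the h x w grid (pixels flattened row-major).
Dh = np.kron(np.eye(h), first_diff(w))
Dv = np.kron(first_diff(h), np.eye(w))

X = cp.Variable((library, h * w), nonneg=True)     # abundances per pixel
lam, lam_tv = 5e-3, 5e-3
objective = (cp.sum_squares(A @ X - Y)
             + lam * cp.sum(cp.abs(X))              # sparsity over the library
             + lam_tv * (cp.sum(cp.abs(X @ Dh.T)) + cp.sum(cp.abs(X @ Dv.T))))
prob = cp.Problem(cp.Minimize(objective))
prob.solve()
print("max abundance per library member:", np.round(X.value.max(axis=1), 2))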

Journal ArticleDOI
TL;DR: Model predictive control (MPC) solves an open-loop constrained optimal control problem (OCP) repeatedly in a receding-horizon manner, where the OCP is solved over a finite sequence of control actions at every sampling time instant at which the current state of the system is measured.
Abstract: Model predictive control (MPC) has demonstrated exceptional success for the high-performance control of complex systems [1], [2]. The conceptual simplicity of MPC as well as its ability to effectively cope with the complex dynamics of systems with multiple inputs and outputs, input and state/output constraints, and conflicting control objectives have made it an attractive multivariable constrained control approach [1]. MPC (a.k.a. receding-horizon control) solves an open-loop constrained optimal control problem (OCP) repeatedly in a receding-horizon manner [3]. The OCP is solved over a finite sequence of control actions {u0, u1, ..., uN-1} at every sampling time instant at which the current state of the system is measured. The first element of the sequence of optimal control actions is applied to the system, and the computations are then repeated at the next sampling time. Thus, MPC replaces a feedback control law, which can have formidable offline computation, with the repeated solution of an open-loop OCP [2]. In fact, repeated solution of the OCP confers an "implicit" feedback action to MPC to cope with system uncertainties and disturbances. Alternatively, explicit MPC approaches circumvent the need to solve an OCP online by deriving relationships for the optimal control actions in terms of an "explicit" function of the state and reference vectors. However, explicit MPC is not typically intended to replace standard MPC but, rather, to extend its area of application [4]-[6].

657 citations
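The receding-horizon loop described above is easy to sketch: at each sampling instant, measure the state, solve the finite-horizon constrained OCP over {u0, ..., uN-1}, apply only the first input, and repeat. The cvxpy sketch below does this for an illustrative double-integrator-like system with input and state bounds; it is a generic MPC skeleton, not the article's specific formulation.

import numpy as np
import cvxpy as cp

# Hedged sketch of the receding-horizon loop described above. System matrices,
# bounds, and weights are illustrative, not taken from the article.

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 20                                            # prediction horizon
u_max, x_max = 1.0, 5.0

def solve_ocp(x0):
    """Open-loop constrained OCP solved from the current state x0."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constr = 0, [x[:, 0] == x0]
    for t in range(N):
        cost += cp.sum_squares(x[:, t]) + 0.1 * cp.sum_squares(u[:, t])
        constr += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                   cp.abs(u[:, t]) <= u_max,
                   cp.abs(x[:, t + 1]) <= x_max]
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u[:, 0].value

# Receding-horizon simulation: apply the first input, then re-solve.
x = np.array([4.0, 0.0])
for k in range(30):
    u0 = solve_ocp(x)
    x = A @ x + B @ u0 + 0.01 * np.random.randn(2)   # "implicit" feedback vs. disturbance
print("final state:", np.round(x, 3))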