scispace - formally typeset

Showing papers on "Parametric statistics published in 2013"


Proceedings ArticleDOI
26 May 2013
TL;DR: This paper examines an alternative scheme based on a deep neural network (DNN): the relationship between input texts and their acoustic realizations is modeled by a DNN, and experimental results show that the DNN-based systems outperformed the HMM-based systems with similar numbers of parameters.
Abstract: Conventional approaches to statistical parametric speech synthesis typically use decision tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech parameters given texts. Speech parameters are generated from the probability densities to maximize their output probabilities, then a speech waveform is reconstructed from the generated parameters. This approach is reasonably effective but has a couple of limitations, e.g. decision trees are inefficient to model complex context dependencies. This paper examines an alternative scheme that is based on a deep neural network (DNN). The relationship between input texts and their acoustic realizations is modeled by a DNN. The use of the DNN can address some limitations of the conventional approach. Experimental results show that the DNN-based systems outperformed the HMM-based systems with similar numbers of parameters.
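The text-to-acoustics mapping the abstract describes can be illustrated with a minimal forward pass. This is a hedged sketch, not the paper's system: the layer sizes, random weights, and the name dnn_acoustic_frame are illustrative assumptions; a real system trains the weights on aligned linguistic/acoustic data and adds duration modeling and waveform reconstruction.

```python
import math
import random

def dnn_acoustic_frame(linguistic_feats, layers):
    """Feed-forward DNN mapping one frame's linguistic features to acoustic
    parameters: tanh hidden layers, linear output layer. The DNN plays the
    role of the decision-tree-clustered HMM densities as text->acoustics map."""
    h = linguistic_feats
    for i, (W, b) in enumerate(layers):
        z = [sum(wij * hj for wij, hj in zip(row, h)) + bi
             for row, bi in zip(W, b)]
        h = z if i == len(layers) - 1 else [math.tanh(v) for v in z]
    return h

# Toy sizes (assumed): 5 linguistic features -> 8 hidden units -> 3 acoustic params
rng = random.Random(0)
def rand_layer(n_out, n_in):
    return ([[rng.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

layers = [rand_layer(8, 5), rand_layer(3, 8)]
frame = dnn_acoustic_frame([rng.gauss(0, 1) for _ in range(5)], layers)
```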

880 citations


Journal ArticleDOI
TL;DR: A learning-based model predictive control scheme that provides deterministic guarantees on robustness, while statistical identification tools are used to identify richer models of the system in order to improve performance.

483 citations


Journal ArticleDOI
TL;DR: In this paper, a degenerate optical parametric oscillator network is proposed to solve the NP-hard problem of finding a ground state of the Ising model, which is based on the bistable output phase of each oscillator and the inherent preference of the network in selecting oscillation modes with the minimum photon decay rate.
Abstract: A degenerate optical parametric oscillator network is proposed to solve the NP-hard problem of finding a ground state of the Ising model. The underlying operating mechanism originates from the bistable output phase of each oscillator and the inherent preference of the network in selecting oscillation modes with the minimum photon decay rate. Computational experiments are performed on all instances reducible to the NP-hard MAX-CUT problems on cubic graphs of order up to 20. The numerical results reasonably suggest the effectiveness of the proposed network.
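The Ising-ground-state formulation can be made concrete: the bistable oscillator phases map to spins s_i in {-1, +1}, and minimizing the Ising energy over a graph's edges is equivalent to MAX-CUT. The exhaustive search below (feasible only for tiny graphs) is an illustration of the problem being solved, not of the oscillator-network dynamics.

```python
from itertools import product

def ising_energy(spins, edges):
    # H = sum over edges of J_ij * s_i * s_j with J_ij = +1 (antiferromagnetic);
    # minimizing H over s_i in {-1, +1} is equivalent to MAX-CUT on the graph.
    return sum(spins[i] * spins[j] for i, j in edges)

def ground_state(n, edges):
    # Brute-force search over all 2^n spin configurations.
    return min(product((-1, 1), repeat=n), key=lambda s: ising_energy(s, edges))

# 4-cycle: the ground state alternates spins, so every edge is "cut"
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best = ground_state(4, edges)
```

The cut size recovered from the energy is (|E| - H)/2, the usual Ising/MAX-CUT correspondence.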

295 citations


Journal ArticleDOI
06 Dec 2013-Science
TL;DR: It is shown that metamaterials can be designed with optical properties that relax the phase-matching requirements in nonlinear optics, and the experimental demonstration of phase mismatch-free nonlinear generation in a zero-index optical metamaterial is reported.
Abstract: Phase matching is a critical requirement for coherent nonlinear optical processes such as frequency conversion and parametric amplification. Phase mismatch prevents microscopic nonlinear sources from combining constructively, resulting in destructive interference and thus very low efficiency. We report the experimental demonstration of phase mismatch-free nonlinear generation in a zero-index optical metamaterial. In contrast to phase mismatch compensation techniques required in conventional nonlinear media, the zero index eliminates the need for phase matching, allowing efficient nonlinear generation in both forward and backward directions. We demonstrate phase mismatch-free nonlinear generation using intrapulse four-wave mixing, where we observed a forward-to-backward nonlinear emission ratio close to unity. The removal of phase matching in nonlinear optical metamaterials may lead to applications such as multidirectional frequency conversion and entangled photon generation.

271 citations


Journal ArticleDOI
TL;DR: The proposed method may be a valid alternative when other existing techniques, either deterministic or stochastic, are not directly usable due to excessive conservatism or to numerical intractability caused by lack of convexity of the robust or chance-constrained optimization problem.
Abstract: This paper discusses a novel probabilistic approach for the design of robust model predictive control (MPC) laws for discrete-time linear systems affected by parametric uncertainty and additive disturbances. The proposed technique is based on the iterated solution, at each step, of a finite-horizon optimal control problem (FHOCP) that takes into account a suitable number of randomly extracted scenarios of uncertainty and disturbances, followed by a specific command selection rule implemented in a receding-horizon fashion. The scenario FHOCP is always convex, even when the uncertain parameters and disturbances belong to nonconvex sets, and irrespective of how the model uncertainty influences the system's matrices. Moreover, the computational complexity of the proposed approach does not depend on the uncertainty/disturbance dimensions, and scales quadratically with the control horizon. The main result in this work is related to the analysis of the closed-loop system under receding-horizon implementation of the scenario FHOCP, and essentially states that the devised control law guarantees constraint satisfaction at each step with some a priori assigned probability p, while the system's state reaches the target set either asymptotically, or in finite time with probability at least p. The proposed method may be a valid alternative when other existing techniques, either deterministic or stochastic, are not directly usable due to excessive conservatism or to numerical intractability caused by lack of convexity of the robust or chance-constrained optimization problem.
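A one-step, one-dimensional caricature of the scenario idea: sample many realizations of an uncertain gain and disturbance, keep only the controls that satisfy the state constraint for every sampled scenario, and minimize a nominal cost over those. All numbers and the name scenario_control are illustrative assumptions; the paper's FHOCP is a multi-step convex program with a dedicated command selection rule.

```python
import random

def scenario_control(x, n_scenarios=200, u_grid=None, x_max=1.0, seed=1):
    """One-step scenario-FHOCP sketch: return the control with the smallest
    nominal cost whose successor state x+ = a*x + u + w satisfies x+ <= x_max
    under every sampled realization of the uncertain gain a and disturbance w."""
    rng = random.Random(seed)
    scenarios = [(rng.uniform(0.8, 1.2), rng.uniform(-0.1, 0.1))
                 for _ in range(n_scenarios)]
    u_grid = u_grid or [i / 50 - 1.0 for i in range(101)]  # u in [-1, 1]
    feasible = [u for u in u_grid
                if all(a * x + u + w <= x_max for a, w in scenarios)]
    # nominal cost x+^2 + u^2 evaluated at a = 1, w = 0
    return min(feasible, key=lambda u: (x + u) ** 2 + u ** 2)

u = scenario_control(x=1.5)  # constraint active: a strongly negative control
```

Note the key property the abstract emphasizes: the feasibility check over sampled scenarios keeps the optimization convex regardless of the shape of the uncertainty set.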

225 citations


Journal ArticleDOI
TL;DR: Through rigorous analysis, it is shown that under this new ILC scheme, uniform convergence of the state tracking error is guaranteed; an illustrative example is presented to demonstrate the efficacy of the proposed ILC scheme.

215 citations


Journal ArticleDOI
TL;DR: Many modern applications of signal processing and machine learning, ranging from computer vision to computational biology, require the analysis of large volumes of high-dimensional continuous-valued measurements, and a flexible and robust modeling framework that can take into account these diverse statistical features is needed.
Abstract: Many modern applications of signal processing and machine learning, ranging from computer vision to computational biology, require the analysis of large volumes of high-dimensional continuous-valued measurements. Complex statistical features are commonplace, including multimodality, skewness, and rich dependency structures. Such problems call for a flexible and robust modeling framework that can take into account these diverse statistical features. Most existing approaches, including graphical models, rely heavily on parametric assumptions. Variables in the model are typically assumed to be discrete valued or multivariate Gaussians; and linear relations between variables are often used. These assumptions can result in a model far different from the data generating process.

201 citations


Journal ArticleDOI
TL;DR: The development of parametric and nonparametric models of wind turbine power curves is presented; the nonparametric models have been evolved using algorithms such as neural networks, fuzzy c-means clustering, and data mining.
Abstract: A wind turbine power curve essentially captures the performance of the wind turbine. The power curve depicts the relationship between the wind speed and output power of the turbine. Modeling of wind turbine power curve aids in performance monitoring of the turbine and also in forecasting of power. This paper presents the development of parametric and nonparametric models of wind turbine power curves. Parametric models of the wind turbine power curve have been developed using four and five parameter logistic expressions. The parameters of these expressions have been solved using advanced algorithms like genetic algorithm (GA), evolutionary programming (EP), particle swarm optimization (PSO), and differential evolution (DE). Nonparametric models have been evolved using algorithms like neural networks, fuzzy c-means clustering, and data mining. The modeling of wind turbine power curve is done using five sets of data; one is a statistically generated set and the others are real-time data sets. The results obtained have been compared using suitable performance metrics and the best method for modeling of the power curve has been obtained.
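The four-parameter logistic (4PL) form mentioned above can be sketched directly. The coarse grid search below is only a stand-in for the GA/EP/PSO/DE solvers the paper uses; the parameterization shown is one common 4PL variant, assumed here, with power normalized to rated output.

```python
def logistic4(v, a, b, c, d):
    """Four-parameter logistic power curve: power as a function of wind speed v.
    a and d are the lower/upper asymptotes, c the inflection speed, b the slope
    (one common 4PL form used for turbine power-curve modeling)."""
    return d + (a - d) / (1.0 + (v / c) ** b)

def fit_4pl(speeds, powers, c_grid, b_grid, a=0.0, d=1.0):
    # Hypothetical coarse grid search minimizing squared error; the paper
    # solves the same fitting problem with GA, EP, PSO and DE instead.
    def sse(b, c):
        return sum((logistic4(v, a, b, c, d) - p) ** 2
                   for v, p in zip(speeds, powers))
    return min(((b, c) for b in b_grid for c in c_grid),
               key=lambda bc: sse(*bc))

# Synthetic curve generated with b=6, c=8 (rated power normalized to 1.0)
speeds = [2, 4, 6, 8, 10, 12, 14]
powers = [logistic4(v, 0.0, 6.0, 8.0, 1.0) for v in speeds]
b_hat, c_hat = fit_4pl(speeds, powers, c_grid=[6, 7, 8, 9], b_grid=[4, 5, 6, 7])
```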

195 citations


Journal ArticleDOI
TL;DR: An application of the proposed technique shows that robust stabilization can be performed for linear time-varying and linear-parameter-varying (LPV) systems without the assumption that the vector of scheduling parameters is available for measurement.
Abstract: The problem of output stabilization of a class of nonlinear systems subject to parametric and signal uncertainties is studied. First, an interval observer is designed estimating the set of admissible values for the state. Next, it is proposed to design a control algorithm for the interval observer providing convergence of interval variables to zero, that implies a similar convergence of the state for the original nonlinear system. An application of the proposed technique shows that a robust stabilization can be performed for linear time-varying and linear-parameter-varying (LPV) systems without assumption that the vector of scheduling parameters is available for measurements. Efficiency of the proposed approach is demonstrated through two examples.
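In the scalar linear case, the interval-observer idea reduces to propagating guaranteed lower and upper bounds on the state. This is a minimal sketch assuming a positive uncertain gain and a known disturbance bound; the paper's design handles a class of nonlinear systems and adds a control law driving the interval variables to zero.

```python
def interval_observer_step(x_lo, x_hi, u, a_lo, a_hi, w_bound):
    """One step of a scalar interval observer for x+ = a*x + u + w with
    a in [a_lo, a_hi] (0 < a_lo) and |w| <= w_bound: propagate the bounds so
    that x_lo <= x <= x_hi is preserved at the next step."""
    assert 0 < a_lo <= a_hi
    lo = min(a_lo * x_lo, a_hi * x_lo) + u - w_bound  # handles x_lo < 0 too
    hi = max(a_lo * x_hi, a_hi * x_hi) + u + w_bound
    return lo, hi

# The interval encloses the true trajectory at every step
x, (lo, hi) = 0.5, (0.0, 1.0)
a_true, u, w = 0.7, 0.1, 0.02
for _ in range(5):
    x = a_true * x + u + w
    lo, hi = interval_observer_step(lo, hi, u, 0.6, 0.8, 0.05)
```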

189 citations


Journal ArticleDOI
TL;DR: The updated version of the PCAtoTree software provides methods to reliably visualize and quantify separations in scores plots through dendrograms employing both nonparametric and parametric hypothesis testing to assess node significance, as well as scores plots identifying 95% confidence ellipsoids for all experimental groups.

167 citations


Journal ArticleDOI
TL;DR: A hierarchical estimation procedure for the parameters and an asymptotic analysis for the marginal distributions are introduced, and the effectiveness of the grouping procedure in the sense of structure selection is shown.

Journal ArticleDOI
TL;DR: In this paper, the effects of load resistance, wind exposure area, mass of the bluff body and length of the piezoelectric sheets on the power output are investigated.
Abstract: Harvesting flow energy by exploiting transverse galloping of a bluff body attached to a piezoelectric cantilever is a prospective method to power wireless sensing systems. In order to better understand the electroaeroelastic behavior and further improve the galloping piezoelectric energy harvester (GPEH), an effective analytical model is required, which needs to incorporate both the electromechanical coupling and the aerodynamic force. Available electromechanical models for the GPEH include the lumped-parameter single-degree-of-freedom (SDOF) model, the approximated distributed-parameter model based on Rayleigh-Ritz discretization, and the distributed-parameter model with Euler-Bernoulli beam representation. Each modeling method has its own advantages. The corresponding aerodynamic models are formulated using the quasi-steady hypothesis (QSH). In this paper, the SDOF model, the Euler-Bernoulli distributed-parameter model using a single mode and the Euler-Bernoulli distributed-parameter model using multiple modes are compared and validated with experimental results. Based on the comparison and validation, the most effective model is employed for the subsequent parametric study. The effects of load resistance, wind exposure area of the bluff body, mass of the bluff body and length of the piezoelectric sheets on the power output are investigated. These simulations can be exploited for designing and optimizing GPEHs for better performance.

Journal ArticleDOI
TL;DR: The proposed spectral modeling method can significantly alleviate the over-smoothing effect and improve the naturalness of the conventional HMM-based speech synthesis system using mel-cepstra.
Abstract: This paper presents a new spectral modeling method for statistical parametric speech synthesis. In the conventional methods, high-level spectral parameters, such as mel-cepstra or line spectral pairs, are adopted as the features for hidden Markov model (HMM)-based parametric speech synthesis. Our proposed method described in this paper improves the conventional method in two ways. First, distributions of low-level, un-transformed spectral envelopes (extracted by the STRAIGHT vocoder) are used as the parameters for synthesis. Second, instead of using single Gaussian distribution, we adopt the graphical models with multiple hidden variables, including restricted Boltzmann machines (RBM) and deep belief networks (DBN), to represent the distribution of the low-level spectral envelopes at each HMM state. At the synthesis time, the spectral envelopes are predicted from the RBM-HMMs or the DBN-HMMs of the input sentence following the maximum output probability parameter generation criterion with the constraints of the dynamic features. A Gaussian approximation is applied to the marginal distribution of the visible stochastic variables in the RBM or DBN at each HMM state in order to achieve a closed-form solution to the parameter generation problem. Our experimental results show that both RBM-HMM and DBN-HMM are able to generate spectral envelope parameter sequences better than the conventional Gaussian-HMM with superior generalization capabilities and that DBN-HMM and RBM-HMM perform similarly due possibly to the use of Gaussian approximation. As a result, our proposed method can significantly alleviate the over-smoothing effect and improve the naturalness of the conventional HMM-based speech synthesis system using mel-cepstra.

Journal ArticleDOI
TL;DR: This paper proposes the design of fuzzy control systems with a reduced parametric sensitivity making use of Gravitational Search Algorithms (GSAs), and suggests a GSA with improved search accuracy.

Journal ArticleDOI
TL;DR: In this paper, a complex dynamical analysis of the parametric fourth-order Kim's iterative family on quadratic polynomials is made, showing the MATLAB codes generated to draw the fractal images necessary to complete the study.
Abstract: The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us with excellent schemes (or dreadful ones).
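Drawing such fractal images amounts to iterating the scheme from a grid of complex seeds and colouring each seed by the root it converges to. Since Kim's family is not reproduced here, the sketch below uses Newton's method on z² − 1 as a hedged stand-in iteration; the structure (iterate, test proximity to each root, report non-convergence) is what basin-drawing codes implement for any member of a parametric family.

```python
def newton_basin(z, roots=(-1.0, 1.0), max_iter=50, tol=1e-8):
    """Iterate Newton's map for p(z) = z^2 - 1 (an illustrative stand-in for
    a member of a parametric iterative family) and report which root the
    complex seed z converges to: the root's index, or -1 for no convergence."""
    for _ in range(max_iter):
        if abs(z) < tol:          # derivative 2z vanishes; map undefined here
            return -1
        z = z - (z * z - 1.0) / (2.0 * z)
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k
    return -1

# Colouring a grid of seeds with newton_basin produces the basin fractal;
# for z^2 - 1 the basins are exactly the left and right half-planes.
left, right = newton_basin(-0.5 + 0.3j), newton_basin(2.0 + 0.1j)
```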

Journal ArticleDOI
TL;DR: A practical computational algorithm is developed whose convergence rates are provably higher than those of Monte Carlo (MC) and Markov chain Monte Carlo methods, in terms of the number of solutions of the forward problem.
Abstract: Based on the parametric deterministic formulation of Bayesian inverse problems with unknown input parameter from infinite-dimensional, separable Banach spaces proposed in Schwab and Stuart (2012 Inverse Problems 28 045003), we develop a practical computational algorithm whose convergence rates are provably higher than those of Monte Carlo (MC) and Markov chain Monte Carlo methods, in terms of the number of solutions of the forward problem. In the formulation of Schwab and Stuart, the forward problems are parametric, deterministic elliptic partial differential equations, and the inverse problem is to determine the unknown diffusion coefficients from noisy observations comprising linear functionals of the system's response. The sparsity of the generalized polynomial chaos representation of the posterior density being implied by sparsity assumptions on the class of the prior (Schwab and Stuart 2012), we design, analyze and implement a class of adaptive, deterministic sparse tensor Smolyak quadrature schemes for the efficient approximate numerical evaluation of expectations under the posterior, given data. The proposed, deterministic quadrature algorithm is based on a greedy, iterative identification of finite sets of most significant, 'active' chaos polynomials in the posterior density analogous to recently proposed algorithms for adaptive interpolation (Chkifa et al 2012 Report 2012-NN; 2013 Math. Modelling Numer. Anal. 47 253-80). Convergence rates for the quadrature approximation are shown, both theoretically and computationally, to depend only on the sparsity class of the unknown, but are bounded independently of the number of random variables activated by the adaptive algorithm. Numerical results for a model problem of coefficient identification with point measurements in a diffusion problem confirm the theoretical results.

Journal ArticleDOI
TL;DR: This paper addresses the apparent contradiction between claims that for criminal justice applications, forecasting accuracy is about the same and procedures such as machine learning that proceed adaptively from the data will improve forecasting accuracy.
Abstract: There is a substantial and powerful literature in statistics and computer science clearly demonstrating that modern machine learning procedures can forecast more accurately than conventional parametric statistical models such as logistic regression. Yet, several recent studies have claimed that for criminal justice applications, forecasting accuracy is about the same. In this paper, we address the apparent contradiction. Forecasting accuracy will depend on the complexity of the decision boundary. When that boundary is simple, most forecasting tools will have similar accuracy. When that boundary is complex, procedures such as machine learning that proceed adaptively from the data will improve forecasting accuracy, sometimes dramatically. Machine learning has other benefits as well, and effective software is readily available.
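The decision-boundary point can be made concrete: on XOR-like data no linear rule beats chance, while an adaptive nonparametric method recovers the boundary. Leave-one-out 1-nearest-neighbour is used below purely for illustration; it is not one of the procedures the paper studies.

```python
def linear_acc(data):
    # Best single axis-aligned linear rule (sign of x or of y) on labelled points.
    best = 0.0
    for axis in (0, 1):
        for sign in (1, -1):
            acc = sum((sign * p[axis] > 0) == lab for p, lab in data) / len(data)
            best = max(best, acc)
    return best

def nn_acc(data):
    # Leave-one-out 1-nearest-neighbour accuracy: adapts to the local boundary.
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    hits = 0
    for i, (p, lab) in enumerate(data):
        j = min((k for k in range(len(data)) if k != i),
                key=lambda k: dist2(p, data[k][0]))
        hits += data[j][1] == lab
    return hits / len(data)

# XOR-like labels: a complex boundary that no single linear rule can capture
pts = [((x, y), (x > 0) != (y > 0))
       for x in (-2, -1, 1, 2) for y in (-2, -1, 1, 2)]
lin, knn = linear_acc(pts), nn_acc(pts)  # linear rule is at chance; 1-NN is not
```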

Journal ArticleDOI
TL;DR: In this article, a high-resolution, 200-member ensemble of land surface hydrology simulations obtained with the mesoscale Hydrologic Model is used to investigate the effects of the parametric uncertainty on drought statistics such as duration, extension, and severity.
Abstract: Simulated soil moisture is increasingly used to characterize agricultural droughts but its parametric uncertainty, which essentially affects all hydrological fluxes and state variables, is rarely considered for identifying major drought events. In this study, a high-resolution, 200-member ensemble of land surface hydrology simulations obtained with the mesoscale Hydrologic Model is used to investigate the effects of the parametric uncertainty on drought statistics such as duration, extension, and severity. Simulated daily soil moisture fields over Germany at the spatial resolution of 4 × 4 km² from 1950 to 2010 are used to derive a hydrologically consistent soil moisture index (SMI) representing the monthly soil water quantile at every grid cell. This index allows a quantification of major drought events in Germany. Results of this study indicated that the large parametric uncertainty inherent to the model did not allow discriminating major drought events without a significant classification error...
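Per grid cell, the SMI construction reduces to an empirical quantile of a month's soil moisture within its simulated climatology: values near 0 mark exceptionally dry months, values near 1 exceptionally wet ones. The plain empirical CDF and made-up numbers below are a simple stand-in for the study's quantile-based index over the full 1950–2010 simulation.

```python
def soil_moisture_index(history, value):
    """Empirical quantile of `value` within the historical distribution of
    monthly soil moisture at one grid cell (a simple stand-in for the
    study's SMI): fraction of historical months at or below this value."""
    below = sum(h <= value for h in history)
    return below / len(history)

# Hypothetical monthly soil-moisture climatology for one cell (volumetric)
history = [0.18, 0.22, 0.25, 0.27, 0.30, 0.31, 0.33, 0.35, 0.38, 0.40]
drought_month = soil_moisture_index(history, 0.20)  # low quantile: dry
wet_month = soil_moisture_index(history, 0.39)      # high quantile: wet
```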

Journal ArticleDOI
TL;DR: Threshold-free cluster-enhancement has recently been proposed as a useful analysis tool for fMRI datasets and this approach is adapted to optimally deal with EEG datasets and use permutation-based statistics to build an efficient statistical analysis.

Book ChapterDOI
16 Mar 2013
TL;DR: It is shown that detecting attacks can be parallelized, and can be solved using state reachability queries under the SC semantics in a suitably instrumented program obtained by a linear size source-to-source translation.
Abstract: We present algorithms for checking and enforcing robustness of concurrent programs against the Total Store Ordering (TSO) memory model. A program is robust if all its TSO computations correspond to computations under the Sequential Consistency (SC) semantics. We provide a complete characterization of non-robustness in terms of so-called attacks: a restricted form of (harmful) out-of-program-order executions. Then, we show that detecting attacks can be parallelized, and can be solved using state reachability queries under the SC semantics in a suitably instrumented program obtained by a linear size source-to-source translation. Importantly, the construction is valid for an unbounded number of memory addresses and an arbitrary number of parallel threads. It is independent from the data domain and from the size of store buffers in the TSO semantics. In particular, when the data domain is finite and the number of addresses is fixed, we obtain decidability and complexity results for robustness, even for a parametric number of threads. As a second contribution, we provide an algorithm for computing an optimal set of fences that enforce robustness. We consider two criteria of optimality: minimization of program size and maximization of its performance. The algorithms we define are implemented, and we successfully applied them to analyzing and correcting several concurrent algorithms.

Journal ArticleDOI
TL;DR: The proposed p-CS regularization strategy uses smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data.
Abstract: MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which uses smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. Magn Reson Med 70:1263–1273, 2013. © 2012 Wiley Periodicals, Inc.

Posted Content
TL;DR: The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study.
Abstract: In this paper the complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us with excellent schemes (or dreadful ones).

Journal ArticleDOI
TL;DR: In this paper, the analytical prediction properties of top-down (TD) and bottom-up (BU) approaches when forecasting aggregate demand, using multivariate exponential smoothing as demand planning framework, were provided.
Abstract: Forecasting aggregate demand is a crucial matter in all industrial sectors. In this paper, we derive the analytical prediction properties of top-down (TD) and bottom-up (BU) approaches when forecasting aggregate demand, using multivariate exponential smoothing as the demand planning framework. We extend and generalize the results obtained by Widiarta, Viswanathan and Piplani (2009) by employing an unrestricted multivariate framework allowing for interdependency between the variables. Moreover, we establish the necessary and sufficient condition for the equality of mean squared errors (MSEs) of the two approaches. We show that the condition for the equality of MSEs also holds even when the moving average parameters of the individual components are not identical. In addition, we show that the relative forecasting accuracy of TD and BU depends on the parametric structure of the underlying framework. Simulation results confirm our theoretical findings. Indeed, the ranking of TD and BU forecasts is driven by the parametric structure of the underlying data generation process, regardless of possible misspecification issues.
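The TD/BU comparison can be sketched with simple exponential smoothing (SES). Because SES forecasts are linear in the data, forecasting the aggregate directly (TD) and summing item-level forecasts (BU) coincide when every series uses the same smoothing parameter, so MSE differences must come from parameter heterogeneity and interdependence between components, in the spirit of the conditions the paper characterizes. The series and the value of alpha below are illustrative.

```python
def ses(series, alpha=0.3):
    """Simple exponential smoothing: one-step-ahead forecast = final level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Two hypothetical item-level demand series; aggregate demand is their sum
item_a = [10, 12, 11, 13, 12, 14]
item_b = [20, 19, 21, 20, 22, 21]

bu = ses(item_a) + ses(item_b)                       # bottom-up forecast
td = ses([a + b for a, b in zip(item_a, item_b)])    # top-down (aggregate)
```

With identical parameters the two forecasts agree to floating-point precision; making the item-level alphas differ breaks the equality, which is where the paper's necessary-and-sufficient condition comes in.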

Journal ArticleDOI
TL;DR: In this paper, an adaptive numerical algorithm for constructing a sequence of sparse polynomials that is proved to converge toward the solution with the optimal benchmark rate is presented; the convergence rate in terms of N does not depend on the number of parameters, which may be arbitrarily large or countably infinite.
Abstract: The numerical approximation of parametric partial differential equations is a computational challenge, in particular when the number of involved parameters is large. This paper considers a model class of second order, linear, parametric, elliptic PDEs on a bounded domain D with diffusion coefficients depending on the parameters in an affine manner. For such models, it was shown in (9, 10) that under very weak assumptions on the diffusion coefficients, the entire family of solutions to such equations can be simultaneously approximated in the Hilbert space V = H^1_0(D) by multivariate sparse polynomials in the parameter vector y with a controlled number N of terms. The convergence rate in terms of N does not depend on the number of parameters in y, which may be arbitrarily large or countably infinite, thereby breaking the curse of dimensionality. However, these approximation results do not describe the concrete construction of these polynomial expansions, and should therefore rather be viewed as a benchmark for the convergence analysis of numerical methods. The present paper presents an adaptive numerical algorithm for constructing a sequence of sparse polynomials that is proved to converge toward the solution with the optimal benchmark rate. Numerical experiments are presented in large parameter dimension, which confirm the effectiveness of the adaptive approach. Mathematics Subject Classification. 65N35, 65L10, 35J25.

Journal ArticleDOI
TL;DR: It is shown that coupling domain decomposition and projection-based model order reduction permits to focus the numerical effort where it is most needed: around the zones where damage propagates.

Journal ArticleDOI
TL;DR: A general algorithm involving numerical integration and root‐finding techniques to generate survival times from a variety of complex parametric distributions, incorporating any combination of time‐dependent effects, time‐varying covariates, delayed entry, random effects and covariates measured with error is described.
Abstract: Simulation studies are conducted to assess the performance of current and novel statistical models in pre-defined scenarios. It is often desirable that chosen simulation scenarios accurately reflect a biologically plausible underlying distribution. This is particularly important in the framework of survival analysis, where simulated distributions are chosen for both the event time and the censoring time. This paper develops methods for using complex distributions when generating survival times to assess methods in practice. We describe a general algorithm involving numerical integration and root-finding techniques to generate survival times from a variety of complex parametric distributions, incorporating any combination of time-dependent effects, time-varying covariates, delayed entry, random effects and covariates measured with error. User-friendly Stata software is provided.
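The core of such an algorithm is the inversion method plus numerical root-finding: given u ~ Uniform(0,1), solve H(t) = -log(u) for t, where H is the cumulative hazard. Bisection is used below as a simple stand-in for the root-finding the general algorithm combines with numerical integration of the hazard; the Weibull example is chosen because its closed-form inverse lets the numerical answer be checked.

```python
import math

def simulate_survival_time(cum_hazard, u, t_max=200.0, tol=1e-8):
    """Inversion method with bisection: solve H(t) = -log(u) for t, where H is
    any monotone cumulative hazard and u ~ Uniform(0,1). Numerical root-finding
    makes this work for hazards with no closed-form inverse (time-dependent
    effects, time-varying covariates, etc.)."""
    target = -math.log(u)
    lo, hi = 0.0, t_max          # assumes H(t_max) >= target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cum_hazard(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Weibull cumulative hazard H(t) = (lam*t)**k has a closed-form inverse,
# so the numerical inversion can be verified against it
lam, k = 0.1, 1.5
t = simulate_survival_time(lambda s: (lam * s) ** k, u=0.5)
closed_form = (-math.log(0.5)) ** (1 / k) / lam
```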

Journal ArticleDOI
TL;DR: A novel sufficient dimension-reduction method using a squared-loss variant of mutual information as a dependency measure that is formulated as a minimum contrast estimator on parametric or nonparametric models and a natural gradient algorithm on the Grassmann manifold for sufficient subspace search.
Abstract: The goal of sufficient dimension reduction in supervised learning is to find the low-dimensional subspace of input features that contains all of the information about the output values that the input features possess. In this letter, we propose a novel sufficient dimension-reduction method using a squared-loss variant of mutual information as a dependency measure. We apply a density-ratio estimator for approximating squared-loss mutual information that is formulated as a minimum contrast estimator on parametric or nonparametric models. Since cross-validation is available for choosing an appropriate model, our method does not require any prespecified structure on the underlying distributions. We elucidate the asymptotic bias of our estimator on parametric models and the asymptotic convergence rate on nonparametric models. The convergence analysis utilizes the uniform tail-bound of a U-process, and the convergence rate is characterized by the bracketing entropy of the model. We then develop a natural gradient algorithm on the Grassmann manifold for sufficient subspace search. The analytic formula of our estimator allows us to compute the gradient efficiently. Numerical experiments show that the proposed method compares favorably with existing dimension-reduction approaches on artificial and benchmark data sets.

Journal ArticleDOI
03 Jan 2013-PLOS ONE
TL;DR: Potential consequences of the number of thresholds and non-independency of samples are demonstrated in two examples (using artificial data and EEG data) and alternative approaches are presented, which overcome these methodological issues.
Abstract: Graph theory deterministically models networks as sets of vertices, which are linked by connections. Such mathematical representations of networks, called graphs, are increasingly used in neuroscience to model functional brain networks. It was shown that many forms of structural and functional brain networks have small-world characteristics and thus constitute networks of dense local and highly effective distal information processing. Motivated by a previous small-world connectivity analysis of resting EEG data we explored implications of a commonly used analysis approach. This common course of analysis is to compare small-world characteristics between two groups using classical inferential statistics. This, however, becomes problematic when using measures of inter-subject correlations, as is the case in commonly used brain imaging methods such as structural and diffusion tensor imaging with the exception of fibre tracking. Since for each voxel or region there is only one data point, a measure of connectivity can only be computed for a group. To empirically determine an adequate small-world network threshold and to generate the necessary distribution of measures for classical inferential statistics, samples are generated by thresholding the networks on the group level over a range of thresholds. We believe that there are mainly two problems with this approach. First, the number of thresholded networks is arbitrary. Second, the obtained thresholded networks are not independent samples. Both issues become problematic when using commonly applied parametric statistical tests. Here, we demonstrate potential consequences of the number of thresholds and non-independency of samples in two examples (using artificial data and EEG data). Consequently, alternative approaches are presented, which overcome these methodological issues.

Journal ArticleDOI
TL;DR: The results demonstrate that the novel speed controller can improve dynamic response performance and robustness characteristics of PMSM drive.
Abstract: The permanent magnet synchronous motor (PMSM) is a typical nonlinear, multivariable, coupled system. It is sensitive to load disturbances and to changes in motor parameters such as the stator inductance. The magnetic-field distribution is observed by finite element analysis, and the inductance parameter variations are obtained. To improve the dynamic performance of the PMSM control system, a new exponent reaching law is proposed. The results demonstrate that the novel speed controller can improve the dynamic response performance and robustness characteristics of the PMSM drive.
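An exponent reaching law for sliding-mode control can be sketched in discrete time. The specific gains, exponent, and combined form below are illustrative assumptions, not the paper's controller: the |s|^alpha term dominates near the sliding surface s = 0 and the linear term far from it, which is what yields fast reaching with reduced chattering.

```python
def reaching_law_step(s, k1=5.0, k2=2.0, alpha=0.5, dt=0.001):
    """One Euler step of an illustrative exponent reaching law
    ds/dt = -k1*|s|^alpha*sign(s) - k2*s : the fractional-power term speeds
    up convergence near the sliding surface, the linear term far from it."""
    sign = (s > 0) - (s < 0)
    return s + dt * (-k1 * abs(s) ** alpha * sign - k2 * s)

# Simulate the reaching phase: the sliding variable is driven toward s = 0
s = 1.0
for _ in range(5000):   # 5 s of simulated time at dt = 1 ms
    s = reaching_law_step(s)
```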

Journal ArticleDOI
TL;DR: In this paper, a theory of parametric resonance in tunable superconducting cavities is developed, where the nonlinearity introduced by the super-conducting quantum interference device (SQUID) attached to the cavity and damping due to connection of the cavity to a transmission line are taken into consideration.
Abstract: We develop a theory of parametric resonance in tunable superconducting cavities. The nonlinearity introduced by the superconducting quantum interference device (SQUID) attached to the cavity and damping due to connection of the cavity to a transmission line are taken into consideration. We study in detail the nonlinear classical dynamics of the cavity field below and above the parametric threshold for the degenerate parametric resonance, featuring regimes of multistability and parametric radiation. We investigate the phase-sensitive amplification of external signals on resonance, as well as amplification of detuned signals, and relate the amplifier performance to that of linear parametric amplifiers. We also discuss applications of the device for dispersive qubit readout. Beyond the classical response of the cavity, we investigate small quantum fluctuations around the amplified classical signals. We evaluate the noise power spectrum both for the internal field in the cavity and the output field. Other quantum-statistical properties of the noise are addressed such as squeezing spectra, second-order coherence, and two-mode entanglement.