Author

Hiren G. Patel

Bio: Hiren G. Patel is an academic researcher from Sardar Vallabhbhai National Institute of Technology, Surat. The author has contributed to research in the topics of control theory and Kalman filtering. The author has an h-index of 4 and has co-authored 22 publications receiving 1518 citations.

Papers
Book Chapter
17 Aug 2010
TL;DR: The Fokker-Planck equation, as discussed in this chapter, describes the evolution of the conditional probability density, given the initial state, of a Markov process satisfying the Ito stochastic differential equation.
Abstract: In 1984, H. Risken authored a book (H. Risken, The Fokker-Planck Equation: Methods of Solution, Applications, Springer-Verlag, Berlin, New York) discussing the Fokker-Planck equation for one variable, several variables, methods of solution and its applications, especially dealing with laser statistics. Since then there has been considerable progress on the topic, and the topic has also received greater clarity. For these reasons, it seems worthwhile to summarize once more the previous as well as recent developments, spread across the literature, on the topic. The Fokker-Planck equation describes the evolution of the conditional probability density, given the initial state, of a Markov process satisfying the Ito stochastic differential equation. The structure of the Fokker-Planck equation for the vector case is as follows.
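
For reference, a standard statement of the vector-case equation, assuming the Ito SDE dx_t = f(x_t, t) dt + g(x_t, t) dB_t with drift f and dispersion matrix g (notation assumed here, not taken from the chapter):

\[
\frac{\partial p(x,t)}{\partial t}
= -\sum_{i} \frac{\partial}{\partial x_i}\bigl[f_i(x,t)\,p(x,t)\bigr]
+ \frac{1}{2}\sum_{i,j} \frac{\partial^2}{\partial x_i\,\partial x_j}\bigl[(g g^{\mathsf{T}})_{ij}(x,t)\,p(x,t)\bigr],
\]

where p(x,t) denotes the conditional probability density of the state given the initial condition.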

1,762 citations

Journal Article
TL;DR: The main objective of this proposal is to design a robust automated control system that works efficiently in most patients, with a smooth BIS and minimal variation of propofol during surgery, to avoid postoperative adverse reactions and instability of anesthetic parameters.
Abstract: The anesthetic agent propofol needs to be administered at an appropriate rate to prevent hypotension and postoperative adverse reactions. Determining a suitable anesthetic drug rate during surgery is therefore a crucial concern. The main objective of this proposal is to design a robust automated control system that works efficiently in most patients, with a smooth BIS and minimal variation of propofol during surgery, to avoid postoperative adverse reactions and instability of anesthetic parameters, and, further, to design an advanced computer control system that improves patient outcomes with short recovery times and lower clinical expenditure. Unlike existing research work, this system administers propofol as a hypnotic drug to regulate BIS, with fast bolus infusion in the induction phase and slow continuous infusion in the maintenance phase of anesthesia. The novelty of the paper lies in the possibility of simplifying drug-sensitivity-based adaptation with an infusion-delay approach to achieve closed-loop control of hypnosis during surgery. The proposed work uses brain concentration as the feedback signal in place of the BIS signal. Sensitivity parameters estimated from a regression model are used for adaptation, avoiding the frequent BIS-signal-based adaptation procedure and large offset errors. The adaptive Smith predictor with lead–lag filter approach is applied to 22 different patient models identified from actual clinical data. The actual BIS and propofol infusion signals recorded during clinical trials were used to estimate the patients' sensitivity parameters EC50 and λ. Simulation results indicate that the adaptive strategy based on patients' drug-sensitivity parameters facilitates optimal controller performance in most patients. The proposed scheme achieves shorter settling times, fewer BIS oscillations, and a small offset error, leading to an adequate depth of anesthesia. A comparison with manual control and a previously reported system shows that the proposed system reduces the total variation of the propofol dose. The proposed adaptive scheme provides better performance with less oscillation in spite of computation delay, surgical stimulation, and patient variability. It also improves robustness and may be suitable for clinical practice.
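
As a rough illustration of the drug-sensitivity idea above, the sketch below evaluates the standard sigmoid (Hill) pharmacodynamic model mapping effect-site propofol concentration to a BIS value. The function name and all parameter values are illustrative assumptions, not taken from the paper; only EC50 plays the patient-sensitivity role described in the abstract.

# Minimal sketch: sigmoid Emax (Hill) model linking effect-site
# concentration ce to the BIS index. All values are illustrative
# assumptions, not clinical recommendations.
def bis_from_ce(ce, e0=95.0, emax=87.0, ec50=4.0, gamma=2.0):
    # e0    : baseline (awake) BIS
    # emax  : maximum BIS depression
    # ec50  : concentration giving half-maximal effect (the
    #         patient-sensitivity parameter estimated in the paper)
    # gamma : steepness of the concentration-effect curve
    return e0 - emax * ce**gamma / (ce**gamma + ec50**gamma)

# A more sensitive patient (smaller ec50) reaches the same BIS target
# at a lower concentration, which is why the controller adapts to EC50.
for ec50 in (3.0, 4.0, 5.0):
    print(ec50, round(bis_from_ce(4.0, ec50=ec50), 1))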

12 citations

Journal Article
TL;DR: In this paper, a stochastic differential equation (SDE) formalism in combination with a nonlinear discrete observation equation is developed by adopting a unified systematic approach involving celebrated results of stochastic calculus, where the conditional characteristic function is exploited to develop filtering at the observation instant.
Abstract: Approximate higher-order filters are more attractive and popular in the control and signal processing literature than the exact filter, since analytical and numerical solutions of the nonlinear exact filter are not available. The filtering model of this paper combines a stochastic differential equation (SDE) formalism with a nonlinear discrete observation equation. The theory of this paper is developed by adopting a unified systematic approach involving celebrated results of stochastic calculus. The Kolmogorov–Fokker–Planck equation, in combination with the Kolmogorov backward equation, plays the pivotal role in constructing the theory of this paper "between the observations." The conditional characteristic function is exploited to develop "filtering" at the observation instant. Subsequently, the efficacy of the filtering method of this paper is examined by comparison with extended Kalman filtering and the true state trajectories. This paper will be of interest to applied mathematicians and research communities in systems and control looking for stochastic filtering methods in theoretical studies as well as their application to real physical systems.
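
To make the "between the observations" step concrete: for an Ito SDE dx_t = f(x_t) dt + g(x_t) dB_t, the conditional expectation of any smooth function φ of the state evolves through the generator L (a standard identity of stochastic calculus; the notation here is assumed, not taken from the paper):

\[
\frac{d}{dt}\,\mathbb{E}[\varphi(x_t)] = \mathbb{E}[\mathcal{L}\varphi(x_t)],
\qquad
\mathcal{L}\varphi = \sum_i f_i\,\frac{\partial \varphi}{\partial x_i}
+ \frac{1}{2}\sum_{i,j}\bigl(g g^{\mathsf{T}}\bigr)_{ij}\,\frac{\partial^2 \varphi}{\partial x_i\,\partial x_j}.
\]

Choosing φ(x) = x_i and φ(x) = x_i x_j yields evolution equations for the conditional mean and covariance that such a filter can propagate until the next discrete observation arrives.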

10 citations

Journal Article
TL;DR: In this paper, the authors develop estimation-theoretic scenarios for dynamical systems embedded in a colored noise environment, developing in particular the theory of the Ornstein-Uhlenbeck process-driven Duffing-van der Pol (DvdP) system.
Abstract: The statistical properties of the Ornstein–Uhlenbeck (OU) process, a colored noise process, match real noise statistics, since a real noise process has a finite, nonzero correlation time. For this reason, it seems worthwhile to develop estimation-theoretic scenarios for dynamical systems embedded in a colored noise environment as well. Importantly, the application of the Ito theory is not straightforward for a dynamical system in which the OU variable is a driving input. The augmented solution vector approach coupled with the Ito stochastic differential rule plays the pivotal role in developing the theory of the OU process-driven Duffing–van der Pol (DvdP) system of this paper. Notably, noise analysis of the Duffing–van der Pol system under colored noise influence, especially from the estimation-theoretic viewpoint, is not yet available in the literature. Numerical experiments with three different sets of data are presented to examine the efficacy of the analytical findings of this paper. The results will be of interest to noise scientists, especially research communities in systems and control, looking for estimation-theoretic treatments of colored noise-driven "vector" stochastic differential systems.
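
A sketch of the augmentation step, with notation assumed here rather than taken from the paper: the OU input ξ_t satisfies its own linear SDE, so appending it to the state restores a white noise-driven Ito form to which the stochastic differential rule applies,

\[
d\xi_t = -\beta\,\xi_t\,dt + \sigma\,dB_t,
\qquad
z_t = \begin{pmatrix} x_t \\ \dot{x}_t \\ \xi_t \end{pmatrix},
\qquad
dz_t = F(z_t)\,dt + G\,dB_t,
\]

where the DvdP drift enters F through the (x, ẋ) components with ξ treated as an ordinary state variable, and only the ξ component of G is nonzero.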

6 citations

Journal Article
TL;DR: In this article, a non-linear multiple-input multiple-output (MIMO) system is discussed: a bench-top helicopter, with the tail and main rotors of a helicopter system.
Abstract: This paper concerns a non-linear system having multiple inputs and multiple outputs (MIMO). The plant mainly comprises a bench-top helicopter, with the tail and main rotors of a helicopter system. The dyna…

6 citations


Cited by
Journal Article
TL;DR: Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production.
Abstract: Stochastic thermodynamics as reviewed here systematically provides a framework for extending the notions of classical thermodynamics such as work, heat and entropy production to the level of individual trajectories of well-defined non-equilibrium ensembles. It applies whenever a non-equilibrium process is still coupled to one (or several) heat bath(s) of constant temperature. Paradigmatic systems are single colloidal particles in time-dependent laser traps, polymers in external flow, enzymes and molecular motors in single molecule assays, small biochemical networks and thermoelectric devices involving single electron transport. For such systems, a first-law like energy balance can be identified along fluctuating trajectories. For a basic Markovian dynamics implemented either on the continuum level with Langevin equations or on a discrete set of states as a master equation, thermodynamic consistency imposes a local-detailed balance constraint on noise and rates, respectively. Various integral and detailed fluctuation theorems, which are derived here in a unifying approach from one master theorem, constrain the probability distributions for work, heat and entropy production depending on the nature of the system and the choice of non-equilibrium conditions. For non-equilibrium steady states, particularly strong results hold like a generalized fluctuation–dissipation theorem involving entropy production. Ramifications and applications of these concepts include optimal driving between specified states in finite time, the role of measurement-based feedback processes and the relation between dissipation and irreversibility. Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production.
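
Two of the central relations reviewed here admit compact standard statements (textbook forms, with entropy in units of k_B): along a single fluctuating trajectory the first-law-like balance reads

\[
\mathrm{d}w = \mathrm{d}u + \mathrm{d}q,
\]

identifying the work applied to the system, the change in its internal energy, and the heat dissipated into the bath, while the integral fluctuation theorem constrains the total entropy production Δs_tot over any time interval:

\[
\bigl\langle e^{-\Delta s_{\mathrm{tot}}} \bigr\rangle = 1
\;\;\Longrightarrow\;\;
\langle \Delta s_{\mathrm{tot}} \rangle \ge 0.
\]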

2,834 citations

Journal Article
TL;DR: A review of the fundamental and technological aspects of these subjects can be found in this article, where the focus is mainly on surface tension effects, which result from the cohesive properties of liquids. Paradoxically, cohesive forces promote the breakup of jets, widely encountered in nature, technology and basic science, for example in nuclear fission, DNA sampling, medical diagnostics, sprays, agricultural irrigation and jet engine technology.
Abstract: Jets, i.e. collimated streams of matter, occur from the microscale up to the large-scale structure of the universe. Our focus will be mostly on surface tension effects, which result from the cohesive properties of liquids. Paradoxically, cohesive forces promote the breakup of jets, widely encountered in nature, technology and basic science, for example in nuclear fission, DNA sampling, medical diagnostics, sprays, agricultural irrigation and jet engine technology. Liquid jets thus serve as a paradigm for free-surface motion, hydrodynamic instability and singularity formation leading to drop breakup. In addition to their practical usefulness, jets are an ideal probe for liquid properties, such as surface tension, viscosity or non-Newtonian rheology. They also arise from the last but one topology change of liquid masses bursting into sprays. Jet dynamics are sensitive to the turbulent or thermal excitation of the fluid, as well as to the surrounding gas or fluid medium. The aim of this review is to provide a unified description of the fundamental and the technological aspects of these subjects.
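
The surface-tension paradox above is quantified by the classical Plateau-Rayleigh result (standard textbook form, not specific to this review): a cylindrical jet of radius r is unstable to axisymmetric perturbations of wavelength λ whenever

\[
\lambda > 2\pi r,
\]

since such perturbations reduce surface area at fixed volume; for an inviscid jet, Rayleigh's analysis gives a fastest-growing wavelength of λ ≈ 9.01 r, which sets the typical drop size after breakup.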

1,583 citations

Journal Article
TL;DR: In this paper, the constructive role of Brownian motion is exemplified for various physical and technological setups, which are inspired by the cellular molecular machinery: the working principles and characteristics of stylized devices are discussed to show how fluctuations, either thermal or extrinsic, can be used to control diffusive particle transport.
Abstract: In systems possessing spatial or dynamical symmetry breaking, Brownian motion combined with unbiased external input signals, deterministic and random alike, can assist directed motion of particles at submicron scales. In such cases, one speaks of ``Brownian motors.'' In this review the constructive role of Brownian motion is exemplified for various physical and technological setups, which are inspired by the cellular molecular machinery: the working principles and characteristics of stylized devices are discussed to show how fluctuations, either thermal or extrinsic, can be used to control diffusive particle transport. Recent experimental demonstrations of this concept are surveyed with particular attention to transport in artificial, i.e., nonbiological, nanopores, lithographic tracks, and optical traps, where single-particle currents were first measured. Much emphasis is given to two- and three-dimensional devices containing many interacting particles of one or more species; for this class of artificial motors, noise rectification results also from the interplay of particle Brownian motion and geometric constraints. Recently, selective control and optimization of the transport of interacting colloidal particles and magnetic vortices have been successfully achieved, thus leading to the new generation of microfluidic and superconducting devices presented here. The field has recently been enriched with impressive experimental achievements in building artificial Brownian motor devices that even operate within the quantum domain by harvesting quantum Brownian motion. Sundry akin topics include activities aimed at noise-assisted shuttling other degrees of freedom such as charge, spin, or even heat and the assembly of chemical synthetic molecular motors. This review ends with a perspective for future pathways and potential new applications.
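
A minimal model behind the "Brownian motor" idea, in standard textbook notation (assumed here, not lifted from this review): an overdamped particle in a spatially asymmetric periodic potential V(x), subject to an unbiased modulation, obeys the Langevin equation

\[
\gamma\,\dot{x} = -V'(x)\,h(t) + F(t) + \sqrt{2\gamma k_B T}\,\xi(t),
\]

where ξ is Gaussian white noise and h(t) (e.g., on/off "flashing") or F(t) is periodic or random with zero average; the broken spatial symmetry of V rectifies the fluctuations into a nonzero mean velocity even though no net bias is applied.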

1,319 citations

Journal Article
TL;DR: It is argued that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.
Abstract: The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain: the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.
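
As one concrete instance of the mesoscopic level discussed above, the classic Wilson-Cowan mean-field equations (a standard example, not necessarily the specific model used in this paper) describe coupled excitatory and inhibitory populations:

\[
\tau_E\,\dot{E} = -E + S\!\bigl(w_{EE}E - w_{EI}I + P_E\bigr),
\qquad
\tau_I\,\dot{I} = -I + S\!\bigl(w_{IE}E - w_{II}I + P_I\bigr),
\]

where E and I are population activities, S is a sigmoidal activation function, the w's are coupling weights, and P_E, P_I are external inputs; equations of this kind generate the emergent oscillations and fixed points that macroscopic models then couple across brain regions.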

986 citations

Posted Content
TL;DR: This review places special emphasis on the fundamental principles of flow design, discusses foundational topics such as expressive power and computational trade-offs, and summarizes the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
Abstract: Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations. There has been much recent work on normalizing flows, ranging from improving their expressive power to expanding their application. We believe the field has now matured and is in need of a unified perspective. In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference. We place special emphasis on the fundamental principles of flow design, and discuss foundational topics such as expressive power and computational trade-offs. We also broaden the conceptual framing of flows by relating them to more general probability transformations. Lastly, we summarize the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
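
The mechanism described above rests on the change-of-variables formula: if x = T(u) with u drawn from the base distribution p_u and T a diffeomorphism, then

\[
p_x(x) = p_u\bigl(T^{-1}(x)\bigr)\,\bigl|\det J_{T^{-1}}(x)\bigr|
= p_u(u)\,\bigl|\det J_T(u)\bigr|^{-1},
\]

and for a composition T = T_K ∘ ⋯ ∘ T_1 the log-density is the base log-density minus the sum of the individual log |det Jacobian| terms, which is what makes deep flows tractable to train and evaluate.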

716 citations