scispace - formally typeset

Showing papers on "Linear approximation published in 2016"


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of deriving an explicit approximate solution of the nonlinear power equations that describe a balanced power distribution network and propose an approximation that is linear in the active and reactive power demands of the PQ buses.
Abstract: We consider the problem of deriving an explicit approximate solution of the nonlinear power equations that describe a balanced power distribution network. We give sufficient conditions for the existence of a practical solution to the power flow equations, and we propose an approximation that is linear in the active and reactive power demands of the PQ buses. For this approximation, which is valid for generic power line impedances and grid topology, we derive a bound on the approximation error as a function of the grid parameters. We illustrate the quality of the approximation via simulations, we show how it can also model the presence of voltage controlled (PV) buses, and we discuss how it generalizes the DC power flow model to lossy networks.
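The abstract notes that the proposed linearization generalizes the DC power flow model. As a point of reference, the classic DC approximation itself fits in a few lines; the sketch below uses a hypothetical 3-bus network with made-up susceptances and injections, not the paper's model.

```python
import numpy as np

# Toy 3-bus lossless network; bus 0 is the slack bus. Line susceptances
# (p.u.) are hypothetical values chosen only for illustration.
b01, b02, b12 = 10.0, 8.0, 5.0

# Bus susceptance matrix (the Laplacian built from line susceptances).
B = np.array([
    [b01 + b02, -b01,       -b02      ],
    [-b01,       b01 + b12, -b12      ],
    [-b02,      -b12,        b02 + b12],
])

# Net active power injections at the non-slack buses (p.u.).
p = np.array([0.5, -0.8])

# DC power flow: delete the slack row/column, solve B_red @ theta = p.
theta = np.linalg.solve(B[1:, 1:], p)   # angles in radians, slack = 0
print(theta)
```

Because the model is lossless and linear, the slack bus absorbs exactly the net imbalance of the other injections.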

407 citations


Journal ArticleDOI
TL;DR: This letter proposes a linear load flow for three-phase power distribution systems that is very accurate compared to the conventional backward/forward sweep algorithm.
Abstract: This letter proposes a linear load flow for three-phase power distribution systems. Balanced and unbalanced operation are considered, as well as ZIP load models. The methodology does not require any assumption on the $R/X$ ratio. Despite its simplicity, it is very accurate compared to the conventional backward/forward sweep algorithm.
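The backward/forward sweep that the letter benchmarks against is easy to sketch for a single-phase radial feeder. The toy feeder, impedances, and loads below are hypothetical, and the sweep is the generic textbook version, not the letter's linear load flow.

```python
import numpy as np

# Hypothetical 3-bus radial feeder: slack -> bus 1 -> bus 2.
z = np.array([0.01 + 0.02j, 0.015 + 0.03j])   # series impedances (p.u.)
s_load = np.array([0.3 + 0.1j, 0.2 + 0.05j])  # constant-power loads (p.u.)
v_slack = 1.0 + 0.0j

v = np.ones(2, dtype=complex)   # flat-start voltages at buses 1, 2
for _ in range(50):
    # Backward sweep: load currents, accumulated into branch currents.
    i_load = np.conj(s_load / v)
    i_branch = np.array([i_load[0] + i_load[1], i_load[1]])
    # Forward sweep: propagate voltage drops from the slack bus.
    v1 = v_slack - z[0] * i_branch[0]
    v2 = v1 - z[1] * i_branch[1]
    v = np.array([v1, v2])

print(np.abs(v))   # voltage magnitudes after convergence
```

For light loading like this, the fixed-point iteration converges in a handful of sweeps.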

231 citations


Book ChapterDOI
20 Mar 2016
TL;DR: This paper proposes an MILP-based method for automatic search for differential characteristics and linear approximations in ARX ciphers and presents a method to describe the differential characteristic and linear approximation with linear inequalities under the assumptions of independent inputs to the modular addition and independent rounds.
Abstract: In recent years, Mixed Integer Linear Programming (MILP) has been successfully applied to searching for differential characteristics and linear approximations in block ciphers, and has produced significant results for ciphers such as SIMON (a family of lightweight and hardware-optimized block ciphers designed by the NSA). However, in the literature, MILP-based automatic search for differential characteristics and linear approximations has remained infeasible for block ciphers of ARX construction. In this paper, we propose an MILP-based method for the automatic search for differential characteristics and linear approximations in ARX ciphers. By studying the properties of the differential characteristics and linear approximations of modular addition in ARX ciphers, we present a method to describe them with linear inequalities, under the assumptions of independent inputs to the modular addition and independent rounds. We use this representation as input to the publicly available MILP optimizer Gurobi to search for differential characteristics and linear approximations of ARX ciphers. As an illustration, we apply our method to Speck, a family of lightweight and software-optimized block ciphers designed by the NSA, which yields improved differential characteristics and linear approximations compared with the existing ones. Moreover, we provide improved differential attacks on Speck48, Speck64, Speck96 and Speck128, which are the best attacks on them in terms of the number of rounds.
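The differential behaviour of modular addition that such models must capture can be checked exhaustively for small word sizes. The brute-force sketch below (a sanity check, not the MILP model itself) verifies two differentials of 8-bit addition.

```python
# Brute-force the XOR differential probability of n-bit modular addition:
# DP(a, b -> c) = Pr_{x,y}[ ((x ^ a) + (y ^ b)) ^ (x + y) = c  (mod 2^n) ].
N = 8
MASK = (1 << N) - 1

def diff_prob(a, b, c):
    hits = sum(
        1
        for x in range(1 << N)
        for y in range(1 << N)
        if ((((x ^ a) + (y ^ b)) & MASK) ^ ((x + y) & MASK)) == c
    )
    return hits / (1 << (2 * N))

print(diff_prob(0x00, 0x00, 0x00))   # trivial differential: probability 1
print(diff_prob(0x80, 0x00, 0x80))   # MSB difference never carries: also 1
```

The second case works because flipping the most significant bit is the same as adding 2^(n-1) modulo 2^n, so the difference propagates deterministically.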

114 citations


Journal ArticleDOI
Zhifang Yang, Haiwang Zhong, Qing Xia, Anjan Bose, Chongqing Kang
TL;DR: In this article, a new solution to the alternating current optimal power flow problem based on successive linear approximation of power flow equations is introduced, which guarantees the accuracy of the linear approximation when the quality of initial points regarding voltage magnitude is relatively low.
Abstract: In this study, the authors introduce a new solution to the alternating current optimal power flow problem based on successive linear approximation of the power flow equations. The polar-coordinate form of the power flow equations is used to take advantage of the quasi-linear P–θ relationship. A mathematical transformation of the voltage-magnitude cross term is used, which preserves the accuracy of the linear approximation even when the initial voltage-magnitude estimates are of relatively low quality in the first few iterations. As a result, the proposed approximation becomes very accurate within very few iterations. A linearisation method for the quadratic apparent branch flow limits is provided. Methods to recover AC feasibility from the obtained optimal power flow solution and to correct possible constraint violations are introduced. The proposed method is tested on several IEEE and Polish benchmark systems. The difference in objective functions relative to MATPOWER benchmark results is generally below 0.1% when the algorithm terminates.
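Successive linear approximation can be illustrated in miniature on a scalar equation: linearize at the current iterate, solve the linear model exactly, and repeat (for one unknown this reduces to Newton's scheme). The toy "one-line power flow" below is purely illustrative, not the authors' formulation.

```python
import math

def successive_linearization(f, df, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by repeatedly replacing f with its linearization
    f(xk) + f'(xk)(x - xk) at the current iterate and solving that."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# A one-line "power flow" reduced to a single unknown angle t:
# find t with sin(t) = 0.3 (a hypothetical per-unit transfer level).
t = successive_linearization(lambda t: math.sin(t) - 0.3,
                             lambda t: math.cos(t), x0=0.0)
print(t)
```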

94 citations


Journal ArticleDOI
TL;DR: In this paper, a probabilistic optimal power flow (P-OPF) model with chance constraints that considers the uncertainties of wind power generation (WPG) and load is proposed.
Abstract: A novel probabilistic optimal power flow (P-OPF) model with chance constraints that considers the uncertainties of wind power generation (WPG) and load is proposed in this paper. An affine generation dispatch strategy is adopted to balance the system power uncertainty across several conventional generators, and a linear approximation of the cost function with respect to the power uncertainty is used to compute the quantile (also known as the value-at-risk) corresponding to a given probability. The proposed model takes this quantile as the objective function and minimizes it, meeting distinct probabilistic cost regulation purposes by properly selecting the given probability. In particular, the hedging effect of the affine generation dispatch is thoroughly investigated. In addition, an analytical method to calculate the probabilistic load flow (PLF) is developed from the probability density function of WPG, which is approximated by a customized Gaussian mixture model whose parameters are easily obtained. This makes it possible to compute analytically the chance constraints on the transmission line power and on the power outputs of conventional units. Numerical studies on two benchmark systems show the satisfactory accuracy of the PLF method and the effectiveness of the proposed P-OPF model.
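When the cost is linear in a Gaussian uncertainty, the quantile (value-at-risk) used as the objective has a closed form. The sketch below, with hypothetical cost coefficients, compares that closed form with a Monte Carlo estimate using only the Python standard library; it illustrates the quantile computation, not the paper's P-OPF model.

```python
import random
from statistics import NormalDist

# Hypothetical dispatch cost that is linear in the forecast error w:
# cost = c0 + g * w, with w ~ Normal(0, sigma) modelling wind uncertainty.
c0, g, sigma = 100.0, 4.0, 2.5
eps = 0.05   # target probability level: the 95% quantile (value-at-risk)

# Closed form: cost ~ Normal(c0, |g| * sigma), so the quantile is exact.
var_closed = NormalDist(mu=c0, sigma=abs(g) * sigma).inv_cdf(1 - eps)

# Monte Carlo estimate of the same quantile as a sanity check.
rng = random.Random(0)
samples = sorted(c0 + g * rng.gauss(0.0, sigma) for _ in range(200_000))
var_mc = samples[int((1 - eps) * len(samples))]

print(var_closed, var_mc)
```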

94 citations


Journal ArticleDOI
TL;DR: In this paper, a linear dynamic time-invariant model is identified to describe the relationship between the reference signal and the output of the system, and the power spectrum of the unmodeled disturbances is identified to generate uncertainty bounds on the estimated model.
Abstract: Linear system identification [1]–[4] is a basic step in modern control design approaches. Starting from experimental data, a linear dynamic time-invariant model is identified to describe the relationship between the reference signal and the output of the system. At the same time, the power spectrum of the unmodeled disturbances is identified to generate uncertainty bounds on the estimated model.
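A minimal flavour of the identification step: fit a finite-impulse-response model to input/output records by least squares. The system, noise level, and data below are simulated stand-ins, not the article's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated experiment: a "true" FIR system driven by a random input,
# observed with a small amount of measurement noise.
h_true = np.array([0.5, -0.3, 0.1])
u = rng.standard_normal(2000)
y = np.convolve(u, h_true)[: len(u)] + 0.01 * rng.standard_normal(len(u))

# Regression matrix of lagged inputs, then ordinary least squares.
K = len(h_true)
Phi = np.column_stack([np.concatenate((np.zeros(k), u[: len(u) - k]))
                       for k in range(K)])
h_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(h_hat)
```

With a rich input and low noise, the estimated coefficients land very close to the true ones; the residual variance would in turn estimate the disturbance spectrum.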

83 citations


Journal ArticleDOI
TL;DR: In this article, the exact computation of sensitivity factors that speed up the discrete coordinate-descent implementation, by significantly reducing the number of forward/backward substitutions in the current injection power flow method, without affecting the control setting quality of the original implementation, is presented.
Abstract: The discrete coordinate-descent algorithm is a practical approach that is currently used in centralized Volt/VAr Control (VVC) implementations, mainly due to its good performance and speed for real-time applications. Its viability is however challenged by the increasing number of distributed generation units that contribute to the VVC solution, in addition to the conventional transformer taps and switched capacitors. This paper presents the exact computation of sensitivity factors that speed up the discrete coordinate-descent implementation by significantly reducing the number of forward/backward substitutions in the current injection power flow method; the speed-up is achieved without affecting the control-setting quality of the original implementation. The optimality of the discrete coordinate-descent solutions is investigated by computing the gaps relative to mixed-integer linear programming set-points, derived from a polyhedral reformulation of the VVC problem. The sensitivity-based discrete coordinate-descent algorithm is tested starting from two initial points: the default one given by the current control set-points, and a continuous solution obtained from a linear approximation of the VVC problem. Numerical results on networks with up to 3145 nodes show that the sensitivity-based approach significantly improves the runtime of the discrete coordinate-descent algorithm, and that the linear programming initialization leads to VVC solutions with gaps relative to the mixed-integer set-points that are less than 0.5%.
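Discrete coordinate descent itself is simple to sketch: cycle through the discrete controls (tap positions, capacitor stages), moving each one step at a time whenever the move lowers the objective. The quadratic cost below is a toy stand-in for the VVC evaluation, not the paper's sensitivity-based implementation.

```python
# Toy stand-in for the VVC objective: a separable convex cost over two
# discrete "tap positions", each an integer step in [-8, 8].
def cost(taps):
    t1, t2 = taps
    return (t1 - 3) ** 2 + 2 * (t2 + 2) ** 2

def discrete_coordinate_descent(taps, lo=-8, hi=8, max_sweeps=100):
    taps = list(taps)
    for _ in range(max_sweeps):
        improved = False
        for i in range(len(taps)):
            for step in (-1, +1):
                trial = taps.copy()
                trial[i] = min(hi, max(lo, taps[i] + step))
                if cost(trial) < cost(taps):
                    taps, improved = trial, True
        if not improved:
            break   # no single-step move helps: a discrete local optimum
    return taps

print(discrete_coordinate_descent([0, 0]))
```

Each sweep costs one objective evaluation per candidate move, which is exactly why cheap sensitivity-based evaluations (as in the paper) matter at scale.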

70 citations


Proceedings ArticleDOI
01 Dec 2016
TL;DR: This paper proposes an algorithm capable of computing rigorous bounds on the approximation error in the DC power flow (and, in future extensions, more general linearized approximations) using convex relaxation techniques and shows that the bounds are reasonably tight over a range of operating conditions.
Abstract: Power flow models are fundamental to power systems analyses ranging from short-term market clearing and voltage stability studies to long-term planning. Due to the nonlinear nature of the AC power flow equations and the associated computational challenges, linearized approximations (like the DC power flow) have been widely used to solve these problems in a computationally tractable manner. The linearized approximations have been justified using traditional engineering assumptions that under “normal” operating conditions, voltage magnitudes do not significantly deviate from nominal values and phase differences are “small”. However, there is only limited work on rigorously quantifying when it is safe to use these linearized approximations. In this paper, we propose an algorithm capable of computing rigorous bounds on the approximation error in the DC power flow (and, in future extensions, more general linearized approximations) using convex relaxation techniques. Given a set of operational constraints (limits on the voltage magnitudes, phase angle differences, and power injections), the algorithm determines an upper bound on the difference in injections at each bus computed by the AC and DC power flow models within this domain. We test our approach on several IEEE benchmark networks. Our experimental results show that the bounds are reasonably tight (i.e., there are points within the domain of interest that are close to achieving the bound) over a range of operating conditions.
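On a single lossless line the AC flow is b·sin(θ) while the DC model uses b·θ, so the worst-case gap over an angle limit can be found directly. The sketch below does this by sampling and with the classic Taylor bound; it conveys the flavour of the question, not the paper's convex-relaxation algorithm for whole networks.

```python
import math

b = 5.0           # line susceptance (p.u.), hypothetical
theta_max = 0.3   # operational limit on the angle difference (rad)

# Worst-case |AC - DC| flow mismatch over the allowed angle range.
# AC flow on a lossless line: b*sin(theta); DC model: b*theta.
angles = [theta_max * k / 10_000 for k in range(10_001)]
err = max(abs(b * math.sin(t) - b * t) for t in angles)

# t - sin(t) is increasing on [0, theta_max], so the gap peaks at the
# limit, and the Taylor bound b*theta^3/6 dominates it.
err_exact = b * (theta_max - math.sin(theta_max))
bound = b * theta_max ** 3 / 6
print(err, err_exact, bound)
```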

59 citations


Journal ArticleDOI
TL;DR: A novel convex approximation technique approximates the original problem by a series of convex subproblems, each of which decomposes across all the cells; simulations show that the proposed framework is effective for solving interference management problems in large HetNets.
Abstract: We study the downlink linear precoder design problem in a multicell dense heterogeneous network (HetNet). The problem is formulated as a general sum-utility maximization (SUM) problem, which includes as special cases many practical precoder design problems such as multicell coordinated linear precoding, full and partial per-cell coordinated multipoint transmission, zero-forcing precoding, and joint base station (BS) clustering and beamforming/precoding. The SUM problem is difficult due to its nonconvexity and the tight coupling of the users’ precoders. In this paper, we propose a novel convex approximation technique to approximate the original problem by a series of convex subproblems, each of which decomposes across all the cells. The convexity of the subproblems allows for efficient computation, while their decomposability leads to distributed implementation. Our approach hinges upon the identification of certain key convexity properties of the sum-utility objective, which allows us to transform the problem into a form that can be solved using a popular algorithmic framework called block successive upper-bound minimization (BSUM). Simulation experiments show that the proposed framework is effective for solving interference management problems in large HetNets.
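The majorization idea behind BSUM can be shown in one variable: split the objective into convex plus concave parts, upper-bound the concave part by its tangent at the iterate, and minimize the convex surrogate in closed form. The scalar objective below is a made-up example of this successive upper-bound minimization, not the precoder problem.

```python
def f(x):
    return x ** 4 + x - 3 * x ** 2   # nonconvex: convex part + concave part

def bsum_step(xk):
    """Minimize the convex surrogate obtained by replacing the concave
    term -3x^2 with its tangent at xk; the minimizer solves
    4x^3 + 1 - 6*xk = 0 in closed form (real cube root)."""
    r = (6 * xk - 1) / 4
    return r ** (1 / 3) if r >= 0 else -((-r) ** (1 / 3))

xs = [1.0]
for _ in range(100):
    xs.append(bsum_step(xs[-1]))

vals = [f(x) for x in xs]
print(xs[-1], vals[-1])
```

Because each surrogate touches the objective at the iterate and upper-bounds it everywhere, the objective value can never increase, which is the monotonicity property BSUM relies on.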

49 citations


Journal ArticleDOI
TL;DR: The cryptographic strength of the new S-box is critically analyzed by studying properties such as nonlinearity, strict avalanche, bit independence, linear approximation probability and differential approximation probability.
Abstract: We study the structure of an S-box based on a fractional linear transformation applied to the Galois field $GF(2^8)$. The algorithm followed is very simple and yields an S-box with a very high ability to create confusion in the data. The cryptographic strength of the new S-box is critically analyzed by studying properties such as nonlinearity, strict avalanche, bit independence, linear approximation probability and differential approximation probability. We also apply the majority logic criterion to determine the effectiveness of our proposed S-box in image encryption applications.
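For small S-boxes the linear approximation probability can be computed by brute force over all mask pairs. The generic sketch below uses the identity map (perfectly linear, so it scores the maximum) and, as a nonlinear reference, the 4-bit S-box of the PRESENT cipher; it is not the paper's 8-bit S-box.

```python
from itertools import product

def lap(sbox):
    """Linear approximation probability, here measured as the worst-case
    correlation 2*|Pr[a.x = b.S(x)] - 1/2| over nonzero mask pairs."""
    size = len(sbox)
    best = 0.0
    for a, b in product(range(1, size), repeat=2):
        hits = sum((bin(a & x).count("1") + bin(b & sbox[x]).count("1")) % 2 == 0
                   for x in range(size))
        best = max(best, abs(2.0 * hits / size - 1.0))
    return best

identity = list(range(16))
present = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT cipher S-box
print(lap(identity), lap(present))   # a linear map scores the maximal 1.0
```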

46 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
Abstract: This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance-constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.

Journal ArticleDOI
TL;DR: In this paper, a semi-implicit co-simulation approach for solver coupling with algebraic constraints has been presented by Schweizer and Lu (Multibody Syst. Dyn., 2014) for the case that constant approximation is used for extrapolating/interpolating the coupling variables.
Abstract: Based on the stabilized index-2 formulation for multibody systems, a semi-implicit co-simulation approach for solver coupling with algebraic constraints has been presented by Schweizer and Lu (Multibody Syst. Dyn., 2014) for the case that constant approximation is used for extrapolating/interpolating the coupling variables. In the manuscript at hand, this method is generalized to the case that higher-order approximation is employed. Direct application of higher-order polynomials for extrapolating/interpolating the coupling variables fails. Using linear approximation polynomials, artificial oscillations in the Lagrange multipliers of the kinematical differential equations are observed. For quadratic and higher-order polynomials, the co-simulation becomes unstable. In this work, the key idea to obtain stable solutions without artificial oscillations is to apply a relaxation technique. A detailed stability and convergence analysis is presented in the paper for the case of higher-order approximation. In this context, the influence of the relaxation parameter on the stability and convergence behavior is investigated. Applicability and robustness of the stabilized index-2 co-simulation method incorporating higher-order approximation polynomials is demonstrated with different numerical examples. Using piecewise constant approximation polynomials for the coupling variables produces discontinuous accelerations and reaction forces in the subsystems at the macrotime points, which may entail problems for the subsystem integrator. With higher-order approximation polynomials, the coupling variables and in consequence the accelerations and reaction forces in the subsystems become continuous.

Journal ArticleDOI
TL;DR: A mathematical basis for the piecewise linear approximation method associated with the convergence rate is shown through this inequality, and this suggests that the piecewise linear approximation method may drastically outperform the conventional method in the L1 optimal controller synthesis problem of sampled-data systems.
Abstract: This paper develops a new discretization method with piecewise linear approximation for the $L_{1}$ optimal controller synthesis problem of sampled-data systems, which is the problem of minimizing the $L_{\infty}$-induced norm of sampled-data systems. We apply fast-lifting on top of the lifting technique, by which the sampling interval $[0,h)$ is divided into $M$ subintervals of equal width. The signals on each subinterval are then approximated by linear functions by introducing two types of ‘linearizing operators’ for input and output, which leads to piecewise linear approximation of sampled-data systems. By using the arguments of preadjoint operators, we provide an important inequality that forms a theoretical basis for tackling the $L_{1}$ optimal controller synthesis problem of sampled-data systems more efficiently than the conventional method. More precisely, a mathematical basis for the piecewise linear approximation method associated with the convergence rate is shown through this inequality, and this suggests that the piecewise linear approximation method may drastically outperform the conventional method in the $L_{1}$ optimal controller synthesis problem of sampled-data systems. We then provide a discretization procedure of sampled-data systems by which the $L_{1}$ optimal controller synthesis problem is converted to the discrete-time $l_{1}$ optimal controller synthesis problem. Finally, the effectiveness of the proposed method is demonstrated through a numerical example.

Journal ArticleDOI
TL;DR: It is shown that a class of nonconvex learning problems are equivalent to general quadratic programs, and this equivalence enables mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution.
Abstract: This paper is concerned with solving nonconvex learning problems with folded concave penalties. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence allows us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality.

Journal ArticleDOI
TL;DR: In this paper, a linear source (LS) approximation scheme for the two-dimensional method of characteristics (MOC) is presented. The LS approximation relies on the computation of track-based spatial moments over so-called "track moments".
Abstract: A linear source (LS) approximation scheme is presented for the two-dimensional method of characteristics (MOC). The LS approximation relies on the computation of track-based spatial moments over so...

Journal ArticleDOI
TL;DR: In this article, the authors show that if limited, potentially higher order interpolation is used for the mesh transfer, convergence is guaranteed and provide numerical tests for the mean-variance optimal investment problem and the uncertain volatility option pricing model.

Journal ArticleDOI
TL;DR: The proposed method reconstructs feature curves from the intersections of developable strip pairs which approximate the regions along both sides of the features.

Journal ArticleDOI
TL;DR: It is shown that the best linear approximation of the MIMO LNL system in the mean square sense can be obtained by the orthogonal projection (ORT) subspace identification method.

Journal ArticleDOI
TL;DR: In this article, a novel microwave imaging approach to reconstruct the dielectric properties of targets hosted in partially known, noncanonical scenarios is proposed and assessed, taking joint advantage of the recently introduced virtual experiments paradigm and exploiting a new linear approximation developed within such a framework.
Abstract: A novel microwave imaging approach to reconstruct the dielectric properties of targets hosted in partially known, noncanonical scenarios is proposed and assessed. The method takes joint advantage of the recently introduced virtual experiments paradigm and exploits a new linear approximation developed within such a framework. Such an approximation implicitly depends on the unknown targets and, therefore, has a broader applicability as compared with the traditional distorted Born approximation. Being noniterative, the resulting distorted-wave inversion method is capable of quasi-real-time imaging and successfully images nonweak perturbations. The performance of the novel imaging method has been assessed with simulated data and validated experimentally against some of the Fresnel data sets.

Journal ArticleDOI
TL;DR: A linear approximation is proposed to estimate the expected value of the next stage inflow and, using the proposed approach, next-stage inflows are estimated by a model that uses transformed time series.
Abstract: Stochastic dual dynamic programming (SDDP) is a widely used technique for operation optimization of large-scale hydropower systems in which reservoir inflow uncertainty is modeled with discrete scenarios produced by statistical time series models, such as the family of periodic auto-regressive (PAR) models. It is a common practice in statistical modeling of hydrologic time series to fit a well-known probability distribution (usually normal distribution) to the data by applying proper transformation. Box-Cox transformation is a commonly used transformation in the case of normal distribution fitting. The convexity requirement of SDDP means that nonlinearly transformed time series cannot be used for statistical inflow model calibration. In this paper, a linear approximation is proposed to estimate the expected value of the next stage inflow. In the proposed approach, next-stage inflows are estimated by a model that uses transformed time series. Furthermore, using the proposed linear approximation, it...


Journal ArticleDOI
01 Mar 2016-Robotica
TL;DR: A globally effective nilpotent approximation model is developed and a parameterized polynomial input is adopted to stabilize the system to its non-singular equilibrium configuration; it is shown that designing a stable closed-loop control system for the underactuated mechanical system can be reduced to solving a set of nonlinear algebraic equations.
Abstract: Weightless planar 2R underactuated manipulators with a passive last joint are considered in this paper to investigate a feasible method for stabilizing the system, which is a second-order nonholonomic mechanical system with drift. Its characteristics, including the controllability of the linear approximation model, the minimum-phase property, Small Time Local Controllability (STLC), differential flatness, and exact nilpotentizability, are analyzed. Unfortunately, these negative characteristics indicate that it is difficult to design a stable closed-loop control system even for this simplest underactuated mechanical system. In this paper, nilpotent approximation and iterative steering methods are utilized to solve the problem. A globally effective nilpotent approximation model is developed, and a parameterized polynomial input is adopted to stabilize the system to its non-singular equilibrium configuration. Under this scheme, it is shown that designing a stable closed-loop control system for the underactuated mechanical system reduces to solving a set of nonlinear algebraic equations. If the nonlinear algebraic equations are solvable, then the controller is asymptotically stable. Numerical simulations demonstrate the effectiveness of the presented approach.

Journal ArticleDOI
TL;DR: Tight upper and lower bounds on the cardinality of the index sets of certain hyperbolic crosses which reflect mixed Sobolev-Korobov-type smoothness and mixed Sobolev-analytic-type smoothness are given in the infinite-dimensional case where specific summability properties of the smoothness indices are fulfilled.

Journal ArticleDOI
TL;DR: Comparing properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects showed that, overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered.
Abstract: Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and...
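The local linear approximation idea behind GLLA-style derivative estimation can be sketched generically: fit a least-squares line over a sliding window of samples and take its slope. The sketch below omits GLLA's time-delay-embedding matrix and weighting, so it should be read only as the underlying idea.

```python
import numpy as np

def local_linear_derivative(t, y, half_window=3):
    """Estimate dy/dt by least-squares line fits over a sliding window
    of 2*half_window + 1 samples (NaN where the window does not fit)."""
    dydt = np.full(len(y), np.nan)
    for i in range(half_window, len(y) - half_window):
        sl = slice(i - half_window, i + half_window + 1)
        slope, _intercept = np.polyfit(t[sl], y[sl], deg=1)
        dydt[i] = slope
    return dydt

t = np.linspace(0, 2 * np.pi, 400)
d = local_linear_derivative(t, np.sin(t))
print(np.nanmax(np.abs(d - np.cos(t))))   # small error at interior points
```

In a two-stage ODE-fitting workflow, such derivative estimates become the responses in the second-stage regression.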

Journal ArticleDOI
TL;DR: A thorough error analysis is developed that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations and provides an improvement upon previous error estimates and allows the user to control the tradeoff between the approximation error and the number of evaluation subintervals.
Abstract: Many computer vision and human–computer interaction applications developed in recent years require evaluating complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of this kind of function often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It improves upon previous error estimates and allows the user to control the tradeoff between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation on modern graphics processing units, where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions; we have measured in detail its quality and efficiency on several functions, in particular the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
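The O(h²) behaviour that such error analyses quantify is easy to observe: build a piecewise linear interpolant of the Gaussian on m equal subintervals and watch the maximum error drop roughly fourfold when m doubles. This is a generic numpy sketch, not the paper's nearly optimal designs.

```python
import numpy as np

def pwl_max_error(f, a, b, m, samples=20_001):
    """Max |f - interpolant| for a piecewise linear approximation of f
    built on m equal subintervals of [a, b], by dense sampling."""
    knots = np.linspace(a, b, m + 1)
    x = np.linspace(a, b, samples)
    return np.max(np.abs(f(x) - np.interp(x, knots, f(knots))))

gauss = lambda x: np.exp(-x ** 2 / 2)
e32 = pwl_max_error(gauss, -4.0, 4.0, 32)
e64 = pwl_max_error(gauss, -4.0, 4.0, 64)
print(e32, e64, e32 / e64)   # the ratio sits near 4, the O(h^2) signature
```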

Journal ArticleDOI
TL;DR: In this article, it is shown that quantum massive conformal gravity is renormalizable but has ghost states, and a possible decoupling of these ghost states at high energies is discussed.
Abstract: We first find the linear approximation of the second plus fourth order derivative massive conformal gravity action. Then we reduce the linearized action to separated second order derivative terms, which allows us to quantize the theory by using the standard first order canonical quantization method. It is shown that quantum massive conformal gravity is renormalizable but has ghost states. A possible decoupling of these ghost states at high energies is discussed.

Journal ArticleDOI
TL;DR: It is demonstrated that the mass transfer in reservoir simulation and, as a consequence, the net-present value (NPV) function are more sensitive to the degree of the time step refinement when using bottom-hole pressure (BHP) controls than when using production rate controls.
Abstract: The adjoint gradient method is well recognized for its efficiency in large-scale production optimization. When implemented in a sequential quadratic programming (SQP) algorithm, adjoint gradients enable the construction of a quadratic approximation of the objective function and a linear approximation of the nonlinear constraints using just one forward and one backward simulation (with multiple right-hand sides). In this work, the focus is on the performance of the adjoint gradient method with respect to the adaptive time step refinement generated in the underlying forward simulations. First, we demonstrate that the mass transfer in reservoir simulation and, as a consequence, the net-present value (NPV) function are more sensitive to the degree of the time step refinement when using production bottom-hole pressure (BHP) controls than when using production rate controls. Effects of this sensitivity on the optimization process are studied using six examples of uniform time stepping with different degrees of refinement. By comparing those examples, we show that the corresponding optimal solutions for target production BHPs deviate at early stages of the optimization process. This indicates an inconsistency in the evaluation of the adjoint gradients and the NPV function for different time step refinements. Next, we investigate the effects of this inconsistency on the results of constrained production optimization. Two strategies for the nonlinear constraints are considered: (i) nonlinear constraints handled in the optimization process and (ii) constraints applied directly in the forward simulations with a common control-switch procedure. In both strategies, we observe that the progress of the optimization process is greatly influenced by the degree of the time step refinement after a control update.
In the case of constrained simulations, the presence of control switches combined with large time steps after control update forces adaptive refinement to vary the time step size significantly. As a result, the inconsistency of the adjoint gradients and NPV values provoke an early termination of the SQP algorithm. In the case of constrained optimization, the inconsistencies in gradient evaluations are less significant, and the performance of the optimization process is governed by a satisfaction of nonlinear constraints in SQP algorithm.
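The sensitivity of a discretized NPV to the time stepping can be seen in a minimal sketch (not the paper's simulator): the same smooth production profile, integrated with coarse versus refined uniform time steps, yields slightly different NPV values, so gradients evaluated on different refinements are mutually inconsistent. The rate function, price, and discount rate below are invented for illustration.

```python
import math

def npv(rate, horizon_days=3600.0, n_steps=36, oil_price=80.0, discount=0.10):
    """Midpoint-rule NPV of a production-rate function over the horizon."""
    dt = horizon_days / n_steps
    total = 0.0
    for k in range(n_steps):
        t = (k + 0.5) * dt                       # midpoint of the time step
        q = rate(t)                              # production rate (illustrative units)
        total += oil_price * q * dt / (1.0 + discount) ** (t / 365.0)
    return total

# A hypothetical declining production profile
rate = lambda t: 100.0 * math.exp(-t / 1500.0)

coarse = npv(rate, n_steps=6)     # coarse uniform time stepping
fine = npv(rate, n_steps=360)     # refined uniform time stepping
# The two NPV values differ slightly; this quadrature-level discrepancy is
# the kind of inconsistency between refinements that the abstract studies.
print(coarse, fine, abs(coarse - fine) / fine)
```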

Journal ArticleDOI
Xu Chao, Wei Gu, Fei Gao, Xiaohui Song, Xiaoli Meng, Miao Fan
TL;DR: In this paper, a new solution method is proposed, based on a linear approximation of the affine arithmetic (AA) based power flow model and an optimal solution technique incorporated into a boundary load flow framework under generation and load data uncertainties.
Abstract: The power flow (PF) problem needs further study when confronted with the uncertainties introduced by the increasing use of renewable energy. This study proposes a new solution method based on a linear approximation of the affine arithmetic (AA) based PF model and an optimal solution technique incorporated into a boundary load flow framework under generation and load data uncertainties. In each iterative solution step, the non-linear interval PF problem is modelled by the approximation technique with AA. Boundaries of the state variables are explored by solving linear programming models with constraints reformulated at the given operating points. After the optimisation process, a new operating point is obtained and used in the next iterative solution step. The proposed methodology is applied to several IEEE benchmark test systems and the results are demonstrated in detail. Comparisons with the previous interval method and with Monte Carlo simulations verify the effectiveness and better performance of the proposed method.
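The key property the AA-based linearisation exploits can be shown in a minimal affine-arithmetic sketch (illustrative only, not the paper's PF model): an affine form x0 + sum(xi * eps_i), with noise symbols eps_i in [-1, 1], propagates exactly through linear operations, so bounds on linear combinations of uncertain injections are tight. The load values and the combining coefficient below are hypothetical.

```python
class Affine:
    """Affine form: center + sum(coeff_i * eps_i), eps_i in [-1, 1]."""

    def __init__(self, center, noise=None):
        self.center = center
        self.noise = dict(noise or {})       # noise-symbol id -> coefficient

    def __add__(self, other):
        out = dict(self.noise)
        for k, v in other.noise.items():
            out[k] = out.get(k, 0.0) + v
        return Affine(self.center + other.center, out)

    def scale(self, a):
        return Affine(a * self.center, {k: a * v for k, v in self.noise.items()})

    def interval(self):
        """Enclosing interval: center +/- sum of absolute noise coefficients."""
        r = sum(abs(v) for v in self.noise.values())
        return (self.center - r, self.center + r)

# Hypothetical uncertain loads: P in [0.9, 1.1] p.u., Q in [0.45, 0.55] p.u.
P = Affine(1.0, {"e1": 0.1})
Q = Affine(0.5, {"e2": 0.05})
S = P + Q.scale(2.0)       # a linearised combination, as in the linear PF model
print(S.interval())        # approximately (1.8, 2.2)
```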

Journal ArticleDOI
TL;DR: In this paper, a group of coupling relationships among concentration, density, viscosity, and diffusion coefficient is introduced to accurately simulate the mixing process when a viscous flow is involved.
Abstract: In microfluidic mixing, great attention has been devoted to structural design as a means of enhancing mixing efficiency. However, the influence of variable viscosity on the mixing process is rarely discussed, owing to the practical challenges arising from the strong and complex couplings between species concentration and other fluid properties such as density, viscosity, and diffusion coefficient. In this work, a group of coupling relationships among concentration, density, viscosity, and diffusion coefficient is introduced to accurately simulate the mixing process when a viscous flow is involved. Compared with the traditional linear approximation, the new approach is better suited to simulating concentration-dependent viscous mixing in microfluidics. Furthermore, a planar passive micromixer is designed to validate the coupling approach from both modeling and experimental perspectives. Comparison of the experimental and numerical results shows that the coupling approach achieves higher accuracy than the traditional linear approximation. In addition, four derived models are experimentally tested and numerically simulated using the new method, and the results of each model show good agreement between modeling and experiment.
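A small sketch illustrates why a linear viscosity-concentration law can mislead: compare a linear blend of the pure-component viscosities with a log-linear (Arrhenius-type) mixing rule, a common empirical model for binary liquid mixtures. The specific coupling relationships in the paper are not reproduced here; the endpoint viscosities below are merely illustrative of a water/glycerol-like pair.

```python
import math

mu_a, mu_b = 1.0e-3, 1.4  # Pa*s; illustrative pure-component viscosities

def mu_linear(c):
    """Linear approximation: straight line between the pure-component values."""
    return (1.0 - c) * mu_a + c * mu_b

def mu_arrhenius(c):
    """Log-linear (Arrhenius-type) mixing rule: interpolate ln(mu) instead."""
    return math.exp((1.0 - c) * math.log(mu_a) + c * math.log(mu_b))

c = 0.5
# At mid concentration the two models disagree by more than an order of
# magnitude, which is why concentration-viscosity coupling matters in mixing.
print(mu_linear(c), mu_arrhenius(c))
```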

Journal ArticleDOI
TL;DR: This work gives, for the first time, a closed-form exact solution for the correlation involving a multiple polynomial of any weight, and shows that Walsh analysis is useful and effective for a broad class of cryptanalysis problems.
Abstract: The Walsh transform is used in a wide variety of scientific and engineering applications, including bent functions and cryptanalytic optimization techniques in cryptography. In linear cryptanalysis, a key question is to find a good linear approximation, one that holds with probability (1+d)/2 where the bias d is large in absolute value. Lu and Desmedt (2011) took a step toward answering this question in a more general setting and initiated work on the generalized bias problem with linearly dependent inputs. In this paper, we give fully extended results. Deep insights into the assumptions behind the problem are given. We take an information-theoretic approach to show that our bias problem assumes the setting of maximum input entropy subject to the input constraint. By means of the Walsh transform, the bias can be expressed in a simple form; it incorporates the Piling-up lemma as a special case. Secondly, as an application, we answer a long-standing open problem in correlation attacks on combiners with memory: we give, for the first time, a closed-form exact solution for the correlation involving a multiple polynomial of any weight. We also give a Walsh analysis for numerical approximation. An interesting bias phenomenon is uncovered: for even and odd weights of the polynomial, the correlation behaves differently. Thirdly, we introduce the notion of a weakly biased distribution and study bias approximation for a more general case by Walsh analysis. We show that for weakly biased distributions, the Piling-up lemma is still valid. Our work shows that Walsh analysis is useful and effective for a broad class of cryptanalysis problems.
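The Piling-up lemma, which the abstract says its Walsh-transform bias formula contains as a special case, is easy to check numerically: for independent bits with biases d_i (Pr[x_i = 0] = (1 + d_i)/2), the XOR of the bits has bias equal to the product of the d_i. The sketch below verifies this by exact enumeration; the bias values chosen are arbitrary.

```python
from itertools import product

def xor_bias(biases):
    """Exact bias Pr[xor = 0] - Pr[xor = 1] for independent biased bits."""
    total = 0.0
    for bits in product((0, 1), repeat=len(biases)):
        p = 1.0
        for b, d in zip(bits, biases):
            p *= (1.0 + d) / 2.0 if b == 0 else (1.0 - d) / 2.0
        parity = sum(bits) % 2
        total += p if parity == 0 else -p
    return total

biases = [0.5, 0.25, 0.1]
lemma = 1.0
for d in biases:
    lemma *= d                      # Piling-up prediction: prod(d_i)
# The exact enumeration and the Piling-up prediction coincide.
print(xor_bias(biases), lemma)
```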