
Showing papers on "Piecewise published in 2014"


Journal ArticleDOI
TL;DR: The toolkit as mentioned in this paper adapts a first-order perturbation approach and applies it in a piecewise fashion to solve dynamic models with occasionally binding constraints, such as a real business cycle model with a constraint on the level of investment and a New Keynesian model subject to the zero lower bound on nominal interest rates.

355 citations


Journal ArticleDOI
TL;DR: A novel a posteriori finite volume subcell limiter technique is presented for the Discontinuous Galerkin finite element method for nonlinear systems of hyperbolic conservation laws in multiple space dimensions; it works well for arbitrarily high order of accuracy in space and time and does not destroy the natural subcell resolution properties of the DG method.

292 citations


Journal ArticleDOI
TL;DR: Recently, the authors showed that trend filtering estimates adapt to the local level of smoothness much better than smoothing splines, and further, they exhibit a remarkable similarity to locally adaptive regression splines.
Abstract: We study trend filtering, a recently proposed tool of Kim et al. [SIAM Rev. 51 (2009) 339–360] for nonparametric regression. The trend filtering estimate is defined as the minimizer of a penalized least squares criterion, in which the penalty term sums the absolute $k$th order discrete derivatives over the input points. Perhaps not surprisingly, trend filtering estimates appear to have the structure of $k$th degree spline functions, with adaptively chosen knot points (we say “appear” here as trend filtering estimates are not really functions over continuous domains, and are only defined over the discrete set of inputs). This brings to mind comparisons to other nonparametric regression tools that also produce adaptive splines; in particular, we compare trend filtering to smoothing splines, which penalize the sum of squared derivatives across input points, and to locally adaptive regression splines [Ann. Statist. 25 (1997) 387–413], which penalize the total variation of the $k$th derivative. Empirically, we discover that trend filtering estimates adapt to the local level of smoothness much better than smoothing splines, and further, they exhibit a remarkable similarity to locally adaptive regression splines. We also provide theoretical support for these empirical findings; most notably, we prove that (with the right choice of tuning parameter) the trend filtering estimate converges to the true underlying function at the minimax rate for functions whose $k$th derivative is of bounded variation. This is done via an asymptotic pairing of trend filtering and locally adaptive regression splines, which have already been shown to converge at the minimax rate [Ann. Statist. 25 (1997) 387–413]. At the core of this argument is a new result tying together the fitted values of two lasso problems that share the same outcome vector, but have different predictor matrices.
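The core of the objective is easy to sketch. The snippet below (our illustration, not the authors' code) builds the kth-order discrete difference matrix; since the ℓ1-penalized trend filtering problem requires a convex-programming solver, we instead solve the ℓ2-penalized analogue, which has a closed form and mirrors the smoothing-spline comparison made above. The names `difference_matrix` and `l2_trend_fit` are ours.

```python
import numpy as np

def difference_matrix(n, k):
    """k-th order discrete difference operator, shape (n - k, n)."""
    D = np.eye(n)
    for _ in range(k):
        D = D[1:] - D[:-1]          # take first differences of the rows
    return D

def l2_trend_fit(y, k=2, lam=10.0):
    """Minimize ||y - b||^2 + lam * ||D b||^2, an l2 surrogate for the
    l1 trend filtering objective described in the abstract."""
    n = len(y)
    D = difference_matrix(n, k)
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
```

Increasing `lam` smooths the fit toward a low-degree polynomial; swapping the squared penalty for an ℓ1 penalty is what gives trend filtering its adaptively chosen knots.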

229 citations


Journal ArticleDOI
TL;DR: In this article, the duality between robust hedging of path dependent European options and a martingale optimal transport problem is proved, and a family of simple, piecewise constant super-replication portfolios that asymptotically achieve the minimal superreplication cost is constructed.
Abstract: The duality between the robust (or equivalently, model independent) hedging of path dependent European options and a martingale optimal transport problem is proved. The financial market is modeled through a risky asset whose price is only assumed to be a continuous function of time. The hedging problem is to construct a minimal super-hedging portfolio that consists of dynamically trading the underlying risky asset and a static position of vanilla options which can be exercised at the given, fixed maturity. The dual is a Monge–Kantorovich type martingale transport problem of maximizing the expected value of the option over all martingale measures that have a given marginal at maturity. In addition to duality, a family of simple, piecewise constant super-replication portfolios that asymptotically achieve the minimal super-replication cost is constructed.

200 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of stochastic stability for a class of semi-Markovian systems with mode-dependent time-variant delays is investigated by Lyapunov function approach, together with a piecewise analysis method.
Abstract: Semi-Markovian jump systems, owing to relaxed conditions on the stochastic process and time-varying transition rates, can describe a larger class of dynamical systems than conventional full Markovian jump systems. In this paper, the problem of stochastic stability for a class of semi-Markovian systems with mode-dependent time-variant delays is investigated. By a Lyapunov function approach, together with a piecewise analysis method, a sufficient condition is proposed to guarantee the stochastic stability of the underlying systems. As more time-delay information is used, the results are much less conservative than some existing ones in the literature. Finally, two examples are given to show the effectiveness and advantages of the proposed techniques. Copyright © 2013 John Wiley & Sons, Ltd.

160 citations


Journal ArticleDOI
TL;DR: In this article, a weak Galerkin (WG) finite element method is introduced and analyzed for the biharmonic equation in its primary form, and the resulting WG finite element formulation is symmetric, positive definite, and parameter-free.
Abstract: A new weak Galerkin (WG) finite element method is introduced and analyzed in this article for the biharmonic equation in its primary form. This method is highly robust and flexible in the element construction by using discontinuous piecewise polynomials on general finite element partitions consisting of polygons or polyhedra of arbitrary shape. The resulting WG finite element formulation is symmetric, positive definite, and parameter-free. Optimal order error estimates in a discrete H2 norm are established for the corresponding WG finite element solutions. Error estimates in the usual L2 norm are also derived, yielding a suboptimal order of convergence for the lowest order element and an optimal order of convergence for all higher order elements. Numerical results are presented to confirm the theory of convergence under suitable regularity assumptions. © 2014 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 30: 1003–1029, 2014

141 citations


Proceedings ArticleDOI
23 Jun 2014
TL;DR: This paper presents a system to reconstruct piecewise planar and compact floorplans from images, which are then converted to high quality texture-mapped models for free-viewpoint visualization, and shows that the texture-mapped mesh models provide compelling free-viewpoint visualization experiences, when compared against the state-of-the-art and ground truth.
Abstract: This paper presents a system to reconstruct piecewise planar and compact floorplans from images, which are then converted to high quality texture-mapped models for free-viewpoint visualization. There are two main challenges in image-based floorplan reconstruction. The first is the lack of 3D information that can be extracted from images by Structure from Motion and Multi-View Stereo, as indoor scenes abound with non-diffuse and homogeneous surfaces plus clutter. The second challenge is the need of a sophisticated regularization technique that enforces piecewise planarity, to suppress clutter and yield high quality texture mapped models. Our technical contributions are twofold. First, we propose a novel structure classification technique to classify each pixel to three regions (floor, ceiling, and wall), which provide 3D cues even from a single image. Second, we cast floorplan reconstruction as a shortest path problem on a specially crafted graph, which enables us to enforce piecewise planarity. Besides producing compact piecewise planar models, this formulation allows us to directly control the number of vertices (i.e., density) of the output mesh. We evaluate our system on real indoor scenes, and show that our texture mapped mesh models provide compelling free-viewpoint visualization experiences, when compared against the state-of-the-art and ground truth.

134 citations


Journal ArticleDOI
TL;DR: An efficient method to solve the piecewise constant Mumford-Shah (M-S) model for two-phase image segmentation within the level set framework is presented, which avoids using complicated alternating optimization to minimize the reduced M-S functional.
Abstract: In this paper, we present an efficient method to solve the piecewise constant Mumford-Shah (M-S) model for two-phase image segmentation within the level set framework. A clustering algorithm is used to find approximate intensity means of the foreground and background in the image, so the M-S functional is reduced to a functional of a single variable (the level set function), which avoids using complicated alternating optimization to minimize the reduced M-S functional. Experimental results demonstrate some advantages of the proposed method over the well-known Chan-Vese method using alternating optimization, such as robustness to the location of the initial contour and high computational efficiency.
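A minimal sketch of the clustering idea (our toy version: a plain two-means on intensities followed by a hard nearest-mean assignment, standing in for the level set evolution; all names are hypothetical):

```python
import numpy as np

def two_means(intensities, iters=20):
    """Plain 1-D 2-means on pixel intensities; returns centers (c1 <= c2)."""
    x = np.asarray(intensities, dtype=float).ravel()
    c1, c2 = x.min(), x.max()       # initialize at the intensity extremes
    for _ in range(iters):
        mask = np.abs(x - c1) <= np.abs(x - c2)
        if mask.any():
            c1 = x[mask].mean()
        if (~mask).any():
            c2 = x[~mask].mean()
    return (c1, c2) if c1 <= c2 else (c2, c1)

def segment(image, c1, c2):
    """Hard assignment to the nearer mean: the piecewise constant M-S
    data term with the two region means held fixed."""
    img = np.asarray(image, dtype=float)
    return (np.abs(img - c2) < np.abs(img - c1)).astype(int)
```

The paper evolves a level set function regularized by curve length instead of this hard threshold; fixing the means first is what removes the alternating optimization.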

131 citations


Book ChapterDOI
06 Sep 2014
TL;DR: This work proposes a new scene flow approach that exploits the local and piecewise rigidity of real world scenes and gives a general formulation to solve for local and global rigid motions by jointly using intensity and depth data.
Abstract: Scene flow is defined as the motion field in 3D space, and can be computed from a single view when using an RGBD sensor. We propose a new scene flow approach that exploits the local and piecewise rigidity of real world scenes. By modeling the motion as a field of twists, our method encourages piecewise smooth solutions of rigid body motions. We give a general formulation to solve for local and global rigid motions by jointly using intensity and depth data. In order to deal efficiently with a moving camera, we model the motion as a rigid component plus a non-rigid residual and propose an alternating solver. The evaluation demonstrates that the proposed method achieves the best results in the most commonly used scene flow benchmark. Through additional experiments we indicate the general applicability of our approach in a variety of different scenarios.

121 citations


Journal ArticleDOI
TL;DR: A real-time MPC approach for linear systems that provides guarantees on feasibility and stability for arbitrary time constraints, allowing one to trade off computation time vs. performance.

115 citations


Journal ArticleDOI
27 Jul 2014
TL;DR: This work presents a new algorithm that solves for the shape of a transparent object such that the refracted light paints a desired caustic image on a receiver screen and introduces an optimal transport formulation to establish a correspondence between the input geometry and the unknown target shape.
Abstract: We present a new algorithm for computational caustic design. Our algorithm solves for the shape of a transparent object such that the refracted light paints a desired caustic image on a receiver screen. We introduce an optimal transport formulation to establish a correspondence between the input geometry and the unknown target shape. A subsequent 3D optimization based on an adaptive discretization scheme then finds the target surface from the correspondence map. Our approach supports piecewise smooth surfaces and non-bijective mappings, which eliminates a number of shortcomings of previous methods. This leads to a significantly richer space of caustic images, including smooth transitions, singularities of infinite light density, and completely black areas. We demonstrate the effectiveness of our approach with several simulated and fabricated examples.

Journal ArticleDOI
TL;DR: Two new techniques for microwave imaging of layered structures are introduced to address the limiting issues associated with classical synthetic aperture radar (SAR) imaging techniques in generating focused and properly-positioned images of embedded objects in generally layered dielectric structures.
Abstract: In this paper, two new techniques for microwave imaging of layered structures are introduced. These techniques were developed to address the limiting issues associated with classical synthetic aperture radar (SAR) imaging techniques in generating focused and properly-positioned images of embedded objects in generally layered dielectric structures. The first method, referred to as piecewise SAR (PW-SAR), is a natural extension of the classical SAR technique, and considers physical and electrical properties of each individual layer and the discontinuity among them. Although this method works well with low loss dielectric media, its applicability to lossy media is limited. This is due to the fact that this method does not consider signal attenuation. Moreover, multiple reflections within each layer are not incorporated. To improve imaging performance in which these important phenomena are included, a second method was developed that utilizes the Green's function of the layered structure and casts the imaging approach into a deconvolution procedure. Subsequently, a Wiener filter-based deconvolution technique is used to solve the problem. The technique is referred to as Wiener filter-based layered SAR (WL-SAR). The performance and efficacy of these SAR based imaging techniques are demonstrated using simulations and corresponding measurements of several different layered media.

Journal ArticleDOI
TL;DR: Results confirm that the proposed approach yields considerably smaller errors and higher convergence rates, and avoids spurious numerical effects at the symmetry axis.

Book ChapterDOI
01 Jan 2014
TL;DR: This work addresses the design of numerical solution methods for the minimization of functionals with a TGV² penalty and presents, in particular, a class of primal-dual algorithms.
Abstract: We study and extend the recently introduced total generalized variation (TGV) functional for multichannel images. This functional has already been established to constitute a well-suited convex model for piecewise smooth scalar images. It comprises exactly the functions of bounded variation but is, unlike purely total-variation based functionals, also aware of higher-order smoothness. For the multichannel version which is developed in this paper, basic properties and existence of minimizers for associated variational problems regularized with second-order TGV is shown. Furthermore, we address the design of numerical solution methods for the minimization of functionals with TGV² penalty and present, in particular, a class of primal-dual algorithms. Finally, the concrete realization for various image processing problems, such as image denoising, deblurring, zooming, dequantization and compressive imaging, are discussed and numerical experiments are presented.
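In one dimension, the second-order TGV value of a signal u is the infimum over an auxiliary field w of α1·Σ|Δu − w| + α0·Σ|Δw|. The toy evaluator below (our discretization choices, not the chapter's algorithms; all names are hypothetical) just computes that objective for a given candidate pair (u, w):

```python
def tgv2_objective(u, w, alpha1=1.0, alpha0=1.0):
    """Discrete 1-D TGV^2 objective for a candidate pair (u, w):
    alpha1 * sum |u[i+1]-u[i] - w[i]| + alpha0 * sum |w[i+1]-w[i]|.
    u has n samples, w has n-1 (one per forward difference of u)."""
    du = [u[i + 1] - u[i] for i in range(len(u) - 1)]   # forward differences of u
    t1 = sum(abs(d - wi) for d, wi in zip(du, w))        # first-order residual term
    t2 = sum(abs(w[i + 1] - w[i]) for i in range(len(w) - 1))  # smoothness of w
    return alpha1 * t1 + alpha0 * t2
```

Note that the objective vanishes on an affine signal when w matches its slope, unlike plain total variation; this is the "higher-order smoothness awareness" the abstract refers to.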

Journal ArticleDOI
TL;DR: In this paper, a mixed integer quadratic programming (MIQP) approach is proposed to solve the dynamic economic dispatch (DED) problem with valve-point effect (VPE), where the non-linear and non-smooth cost caused by VPE is piecewise linearized.
Abstract: In this paper, a mixed integer quadratic programming (MIQP) approach is proposed to solve the dynamic economic dispatch (DED) problem with valve-point effect (VPE), where the non-linear and non-smooth cost caused by VPE is piecewise linearized. However, if the DED with VPE is directly solved by the MIQP in a single step, the optimization suffers convergence stagnancy and runs out of memory. In this paper, the multi-step method, the warm start technique, and the range restriction scheme are combined with the MIQP. The optimization process can then break the convergence stagnancy, and the computational efficiency is greatly improved. When the system loss is considered, the loss formula is piecewise linearized. A post-processing procedure is proposed to eliminate the approximation error caused by linearization of the loss formula. The effectiveness of the proposed method is demonstrated on seven cases, and the results are compared with those obtained by previously published methods.
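The piecewise linearization step itself can be sketched independently of the MIQP machinery. The helpers below (hypothetical names; the valve-point cost function and the solver are omitted) sample a cost function at equally spaced breakpoints and evaluate the resulting interpolant:

```python
import bisect

def piecewise_linearize(f, lo, hi, n_segments):
    """Sample f at n_segments + 1 equally spaced breakpoints on [lo, hi]."""
    xs = [lo + (hi - lo) * i / n_segments for i in range(n_segments + 1)]
    ys = [f(x) for x in xs]
    return xs, ys

def pw_eval(xs, ys, x):
    """Evaluate the piecewise linear interpolant defined by (xs, ys) at x."""
    # locate the segment containing x, clamped to [xs[0], xs[-1]]
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])
```

For a convex cost, the interpolant agrees with f at the breakpoints and overestimates it in between; finer segmentation trades model size against approximation error.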

Journal ArticleDOI
TL;DR: In this paper, the divergence operator maps the velocity space into the space of piecewise constants, so the method produces exactly divergence-free velocity approximations; the existence of a bounded Fortin projection is shown, and therefore the necessary LBB condition is satisfied.
Abstract: Conforming finite element pairs for the three-dimensional Stokes problem on general simplicial triangulations are constructed. The pressure space simply consists of piecewise constants, whereas the velocity space consists of cubic polynomials augmented with rational functions. We show the existence of a bounded Fortin projection and therefore the necessary LBB condition is satisfied. In addition, the divergence operator maps the velocity space into the space of piecewise constants. Consequently, the method produces exactly divergence-free velocity approximations.

Journal ArticleDOI
TL;DR: It is shown that, when tone mapping is approximated by a piecewise constant/linear function, a fast computational scheme is possible requiring computational time similar to the fast implementation of normalized cross correlation (NCC).
Abstract: A fast pattern matching scheme termed matching by tone mapping (MTM) is introduced which allows matching under nonlinear tone mappings. We show that, when tone mapping is approximated by a piecewise constant/linear function, a fast computational scheme is possible requiring computational time similar to the fast implementation of normalized cross correlation (NCC). In fact, the MTM measure can be viewed as a generalization of the NCC for nonlinear mappings and actually reduces to NCC when mappings are restricted to be linear. We empirically show that the MTM is highly discriminative and robust to noise, with performance comparable to that of the well-performing mutual information, while on par with NCC in terms of computation time.
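The NCC baseline that MTM generalizes is compact enough to state directly. The sketch below (our naming, not the paper's code) checks the key property the abstract relies on: NCC is invariant to linear tone mappings, which MTM extends to piecewise constant/linear ones.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equally sized patches."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()            # remove offset (tone-mapping intercept)
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Any mapping b = s·a + t with s > 0 leaves the score at exactly 1, and s < 0 flips it to −1; nonlinear mappings break this, which is where MTM takes over.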

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed variational Bayesian method provides competitive performance without additional information about the unknown parameters; when prior information is added, the proposed method outperforms the non-Bayesian Retinex methods the authors compared.
Abstract: In this paper, we propose a variational Bayesian method for Retinex to simulate and interpret how the human visual system perceives color. To construct a hierarchical Bayesian model, we use the Gibbs distributions as prior distributions for the reflectance and the illumination, and the gamma distributions for the model parameters. By assuming that the reflection function is piecewise continuous and illumination function is spatially smooth, we define the energy functions in the Gibbs distributions as a total variation function and a smooth function for the reflectance and the illumination, respectively. We then apply the variational Bayes approximation to obtain the approximation of the posterior distribution of unknowns so that the unknown images and hyperparameters are estimated simultaneously. Experimental results demonstrate the efficiency of the proposed method for providing competitive performance without additional information about the unknown parameters, and when prior information is added the proposed method outperforms the non-Bayesian-based Retinex methods we compared.

Proceedings ArticleDOI
14 Dec 2014
TL;DR: The theoretical upper bound for the proposed algorithm is derived and its asymptotic properties are analyzed via bias-variance decomposition, demonstrating that the proposed method features a high detection rate, fast response, and insensitivity to most of the parameter settings.
Abstract: Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features a high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
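A toy one-dimensional analogue of a single randomized space tree (a drastic simplification of RS-Forest; the names and the fixed-depth stopping rule are ours): random cuts partition the domain, and each leaf carries a piecewise constant density estimate.

```python
import random

def build_tree(lo, hi, depth, rng):
    """Randomly cut [lo, hi) to the given depth; return the leaf intervals."""
    if depth == 0:
        return [(lo, hi)]
    cut = rng.uniform(lo, hi)
    return build_tree(lo, cut, depth - 1, rng) + build_tree(cut, hi, depth - 1, rng)

def leaf_density(leaves, data):
    """Piecewise constant density: per-leaf point fraction over leaf width."""
    n = len(data)
    dens = []
    for lo, hi in leaves:
        cnt = sum(1 for x in data if lo <= x < hi)
        dens.append((lo, hi, cnt / (n * (hi - lo))))
    return dens
```

Since the leaves tile the domain, the estimate integrates to one; RS-Forest averages such estimates over many trees and updates the counts online.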

Journal ArticleDOI
TL;DR: In this article, the authors consider the mean-variance formulation in multi-period portfolio selection under no-shorting constraint and show that the optimal portfolio policy is piecewise linear with respect to the current wealth level, and derive the semi-analytical expression of the piecewise quadratic value function.

Journal ArticleDOI
TL;DR: The attainable order of convergence of the proposed algorithms is studied and a (global) superconvergence effect for a special choice of collocation points is established.

Journal ArticleDOI
TL;DR: A general a posteriori error analysis is established for the natural norms of the DPG schemes under conditions equivalent to a priori stability estimates; it is proven that the locally computable residual norm of any discrete function is a lower and an upper error bound up to explicit data approximation errors.
Abstract: A combination of ideas in least-squares finite element methods with those of hybridized methods recently led to discontinuous Petrov--Galerkin (DPG) finite element methods. They minimize a residual inherited from a piecewise ultraweak formulation in a nonstandard, locally computable, dual norm. This paper establishes a general a posteriori error analysis for the natural norms of the DPG schemes under conditions equivalent to a priori stability estimates. It is proven that the locally computable residual norm of any discrete function is a lower and an upper error bound up to explicit data approximation errors. The presented abstract framework for a posteriori error analysis applies to known DPG discretizations of Laplace and Lame equations and to a novel DPG method for the stress-velocity formulation of Stokes flow with symmetric stress approximations. Since the error control does not rely on the discrete equations, it applies to inexactly computed or otherwise perturbed solutions within the discrete space...

Journal ArticleDOI
TL;DR: In this article, the authors developed an envelope approach to time-dependent mechanism reliability defined in a period of time where a certain motion output is required, where the envelope function of the motion error is not explicitly related to time.
Abstract: This work develops an envelope approach to time-dependent mechanism reliability defined in a period of time where a certain motion output is required. Since the envelope function of the motion error is not explicitly related to time, the time-dependent problem can be converted into a time-independent problem. The envelope function is approximated by piecewise hyperplanes. To find the expansion points for the hyperplanes, the approach linearizes the motion error at the means of random dimension variables, and this approximation is accurate because the tolerances of the dimension variables are small. The expansion points are found with the maximum probability density at the failure threshold. The time-dependent mechanism reliability is then estimated by a multivariable normal distribution at the expansion points. As an example, analytical equations are derived for a four-bar function generating mechanism. The numerical example shows the significant accuracy improvement.

Journal ArticleDOI
TL;DR: In this paper, a crack growth simulation is presented in saturated porous media using the extended finite element method, where the mass balance equation of fluid phase and the momentum balance of bulk and fluid phases are employed to obtain the fully coupled set of equations in the framework of u-p formulation.
Abstract: In this paper, the crack growth simulation is presented in saturated porous media using the extended finite element method. The mass balance equation of the fluid phase and the momentum balance of the bulk and fluid phases are employed to obtain the fully coupled set of equations in the framework of the u-p formulation. The fluid flow within the fracture is modeled using the Darcy law, in which the fracture permeability is assumed according to the well-known cubic law. The spatial discretization is performed using the extended finite element method, the time domain discretization is performed based on the generalized Newmark scheme, and the non-linear system of equations is solved using the Newton–Raphson iterative procedure. In the context of the X-FEM, the discontinuity in the displacement field is modeled by enhancing the standard piecewise polynomial basis with the Heaviside and crack-tip asymptotic functions, and the discontinuity in the fluid flow normal to the fracture is modeled by enhancing the pressure approximation field with the modified level-set function, which is commonly used for weak discontinuities. Two alternative computational algorithms are employed to compute the interfacial forces due to fluid pressure exerted on the fracture faces, based on a ‘partitioned solution algorithm’ and a ‘time-dependent constant pressure algorithm’ that are mostly applicable to impermeable media, and the results are compared with the coupling X-FEM model. Finally, several benchmark problems are solved numerically to illustrate the performance of the X-FEM method for hydraulic fracture propagation in saturated porous media.

Book ChapterDOI
06 Sep 2014
TL;DR: This paper uses the bilateral domain to reformulate a piecewise smooth constraint as continuous global modeling constraint and demonstrates how the model can reliably obtain large numbers of good quality correspondences over wide baselines, while keeping outliers to a minimum.
Abstract: This paper proposes modeling motion in a bilateral domain that augments spatial information with the motion itself. We use the bilateral domain to reformulate a piecewise smooth constraint as continuous global modeling constraint. The resultant model can be robustly computed from highly noisy scattered feature points using a global minimization. We demonstrate how the model can reliably obtain large numbers of good quality correspondences over wide baselines, while keeping outliers to a minimum.

Journal ArticleDOI
TL;DR: This work proposes a fast splitting approach to the classical variational formulation of the image partitioning problem, which is frequently referred to as the Potts or piecewise constant Mumford--Shah model, and produces results of a quality comparable with that of graph cuts and the convex relaxation strategies.
Abstract: We propose a fast splitting approach to the classical variational formulation of the image partitioning problem, which is frequently referred to as the Potts or piecewise constant Mumford--Shah model. For vector-valued images, our approach is significantly faster than the methods based on graph cuts and convex relaxations of the Potts model which are presently the state-of-the-art. The computational costs of our algorithm only grow linearly with the dimension of the data space which contrasts the exponential growth of the state-of-the-art methods. This allows us to process images with high-dimensional codomains such as multispectral images. Our approach produces results of a quality comparable with that of graph cuts and the convex relaxation strategies, and we do not need an a priori discretization of the label space. Furthermore, the number of partitions has almost no influence on the computational costs, which makes our algorithm also suitable for the reconstruction of piecewise constant (color or vect...
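In one dimension the Potts problem can be solved exactly by classical dynamic programming; the sketch below is that textbook O(n²) algorithm (not the paper's splitting approach, which targets vector-valued images), minimizing the sum of squared deviations plus γ times the number of jumps.

```python
def potts_1d(y, gamma):
    """Exact 1-D piecewise constant (Potts) fit via O(n^2) dynamic programming."""
    n = len(y)
    # prefix sums of values and squares give O(1) segment costs
    s = [0.0] * (n + 1)
    q = [0.0] * (n + 1)
    for i, v in enumerate(y):
        s[i + 1] = s[i] + v
        q[i + 1] = q[i] + v * v

    def seg_cost(l, r):
        # squared deviation of y[l:r] (half-open) from its own mean
        m = r - l
        mu = (s[r] - s[l]) / m
        return (q[r] - q[l]) - m * mu * mu

    best = [0.0] * (n + 1)   # best[r]: optimal cost of fitting y[:r]
    arg = [0] * (n + 1)      # start index of the last segment
    best[0] = -gamma         # so the first segment pays no jump penalty
    for r in range(1, n + 1):
        best[r], arg[r] = min(
            (best[l] + gamma + seg_cost(l, r), l) for l in range(r)
        )
    # backtrack the segment boundaries and emit the fitted means
    fit = [0.0] * n
    r = n
    while r > 0:
        l = arg[r]
        mu = (s[r] - s[l]) / (r - l)
        for i in range(l, r):
            fit[i] = mu
        r = l
    return fit
```

The γ parameter plays the role of the jump penalty in the Potts functional: large γ merges segments, small γ tracks the data.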

Journal ArticleDOI
Jan Peszek
TL;DR: In this paper, the author proves the existence of global C¹ piecewise weak solutions for the discrete Cucker-Smale flocking model with a non-Lipschitz communication weight ψ(s) = s^(−α), 0 < α < 1.

Proceedings ArticleDOI
31 May 2014
TL;DR: A computationally efficient semi-agnostic algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions, which is applied to obtain a wide range of results for many natural density estimation problems over both continuous and discrete domains.
Abstract: We give a computationally efficient semi-agnostic algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let p be an arbitrary distribution over an interval I, and suppose that p is τ-close (in total variation distance) to an unknown probability distribution q that is defined by an unknown partition of I into t intervals and t unknown degree-d polynomials specifying q over each of the intervals. We give an algorithm that draws O(t(d + 1)/ε²) samples from p, runs in time poly(t, d + 1, 1/ε), and with high probability outputs a piecewise polynomial hypothesis distribution h that is (14τ + ε)-close to p in total variation distance. Our algorithm combines tools from real approximation theory, uniform convergence, linear programming, and dynamic programming. Its sample complexity is simultaneously near optimal in all three parameters t, d and ε; we show that even for τ = 0, any algorithm that learns an unknown t-piecewise degree-d probability distribution over I to accuracy ε must use [EQUATION] samples from the distribution, regardless of its running time. We apply this general algorithm to obtain a wide range of results for many natural density estimation problems over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions; mixtures of t-modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of k-monotone densities. Our general technique gives improved results, with provably optimal sample complexities (up to logarithmic factors) in all parameters in most cases, for all these problems via a single unified algorithm.
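In the degree-0, known-breakpoints special case, the hypothesis class degenerates to histograms; a toy piecewise constant density fit over t equal-width intervals (our simplification, far removed from the paper's agnostic algorithm) looks like this:

```python
def histogram_density(samples, lo, hi, t):
    """Piecewise constant (degree-0) density over t equal-width intervals."""
    n = len(samples)
    width = (hi - lo) / t
    counts = [0] * t
    for x in samples:
        # clamp so that x == hi falls in the last bin
        i = min(int((x - lo) / width), t - 1)
        counts[i] += 1
    # normalize counts so that the estimate integrates to one
    return [c / (n * width) for c in counts]
```

The paper's contribution is handling unknown breakpoints, higher degrees d, and adversarial τ-corruption with near optimal sample complexity; none of that appears in this sketch.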

Journal ArticleDOI
TL;DR: In this paper, a piecewise analytical function is proposed and applied to investigate the steady-state behavior of series-parallel resonant converter operated in a discontinuous current mode.
Abstract: A piecewise analytical function is proposed and applied to investigate the steady-state behavior of series-parallel resonant converter operated in a discontinuous current mode. The converter shows two sequences of the equivalent circuits alternatively operated in the discontinuous current mode with different output voltages. To get the response time of current in one resonance, a successive solving process based on the state-space method is presented analytically in each sequence. This solving process can describe the complicated behavior resulting from the load rectifier, which makes the output capacitor appear or disappear several times in one switching period. By introducing the output voltage coefficient and the principle of energy transmission balance, the steady-state model is deduced afterward. This model is accurate and simple, making it helpful to design and optimize the converter conveniently. An excellent agreement is obtained when comparing numerical values calculated by the proposed model to the simulation and to the experimental results.

Journal ArticleDOI
TL;DR: This method provides comprehensive, objective analysis of multiple traces requiring few user inputs about the underlying physical models and is faster and more precise in determining the number of states than established and cutting-edge methods for single-molecule data analysis.
Abstract: We introduce a step transition and state identification (STaSI) method for piecewise constant single-molecule data with a newly derived minimum description length equation as the objective function. We detect the step transitions using the Student’s t test and group the segments into states by hierarchical clustering. The optimum number of states is determined based on the minimum description length equation. This method provides comprehensive, objective analysis of multiple traces requiring few user inputs about the underlying physical models and is faster and more precise in determining the number of states than established and cutting-edge methods for single-molecule data analysis. Perhaps most importantly, the method does not require either time-tagged photon counting or photon counting in general and thus can be applied to a broad range of experimental setups and analytes.
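The step-detection ingredient can be sketched in a few lines (our simplified version: a pooled two-sample Student's t statistic at each candidate split; the minimum description length state grouping is omitted and all names are hypothetical):

```python
import math

def t_stat(y, i):
    """Pooled two-sample t statistic for splitting y into y[:i] and y[i:]."""
    a, b = y[:i], y[i:]
    na, nb = len(a), len(b)
    ma = sum(a) / na
    mb = sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def best_split(y):
    """Candidate step location: the split with the largest |t|."""
    return max(range(2, len(y) - 1), key=lambda i: abs(t_stat(y, i)))
```

STaSI applies such tests recursively, groups the resulting segments into states by hierarchical clustering, and picks the number of states by minimum description length.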