# Papers published in "Computers & Mathematics with Applications" in 2008

••

[...]

TL;DR: This paper initiates the study of soft semirings by using soft set theory; the notions of soft semirings, soft subsemirings, soft ideals, idealistic soft semirings and soft semiring homomorphisms are introduced, and several related properties are investigated.

Abstract: Molodtsov introduced the concept of soft sets, which can be seen as a new mathematical tool for dealing with uncertainty. In this paper, we initiate the study of soft semirings by using the soft set theory. The notions of soft semirings, soft subsemirings, soft ideals, idealistic soft semirings and soft semiring homomorphisms are introduced, and several related properties are investigated.

579 citations

••

TL;DR: This paper applies Molodtsov's notion of soft sets to the theory of BCK/BCI-algebras; the notions of soft BCK/BCI-algebras and soft subalgebras are introduced, and their basic properties are derived.

Abstract: Molodtsov [D. Molodtsov, Soft set theory - First results, Comput. Math. Appl. 37 (1999) 19-31] introduced the concept of soft set as a new mathematical tool for dealing with uncertainties that is free from the difficulties that have troubled the usual theoretical approaches. In this paper we apply the notion of soft sets by Molodtsov to the theory of BCK/BCI-algebras. The notion of soft BCK/BCI-algebras and soft subalgebras are introduced, and their basic properties are derived.

429 citations

••

TL;DR: This paper develops an implicit unconditionally stable numerical method to solve the one-dimensional linear time fractional diffusion equation, formulated with Caputo's fractional derivative, on a finite slab.

Abstract: Time fractional diffusion equations are used when attempting to describe transport processes with long memory where the rate of diffusion is inconsistent with the classical Brownian motion model. In this paper we develop an implicit unconditionally stable numerical method to solve the one-dimensional linear time fractional diffusion equation, formulated with Caputo's fractional derivative, on a finite slab. Several numerical examples of interest are also included.

393 citations
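The kind of implicit scheme the abstract describes can be sketched with the standard L1 discretization of the Caputo derivative; this is a generic textbook scheme with made-up parameters and initial data, not necessarily the authors' exact method:

```python
import numpy as np
from math import gamma

def caputo_diffusion_1d(alpha=0.5, K=1.0, L=1.0, T=0.1, nx=41, nt=50):
    """Implicit L1-type scheme for the 1-D time-fractional diffusion equation
    D_t^alpha u = K u_xx on a slab [0, L], with u = 0 at both ends.
    Generic textbook discretization with hypothetical parameters."""
    h = L / (nx - 1)
    tau = T / nt
    x = np.linspace(0.0, L, nx)
    u = np.sin(np.pi * x)                       # illustrative initial condition
    mu = K * gamma(2.0 - alpha) * tau**alpha / h**2
    j = np.arange(nt + 1, dtype=float)
    b = (j + 1.0)**(1.0 - alpha) - j**(1.0 - alpha)   # L1 weights
    # implicit operator: identity minus scaled discrete Laplacian
    # (boundary rows stay identity to enforce the Dirichlet conditions)
    A = np.eye(nx)
    for i in range(1, nx - 1):
        A[i, i - 1] = -mu
        A[i, i] = 1.0 + 2.0 * mu
        A[i, i + 1] = -mu
    hist = [u.copy()]
    for n in range(1, nt + 1):
        rhs = hist[n - 1].copy()
        for k in range(1, n):                   # memory term of the Caputo derivative
            rhs -= b[k] * (hist[n - k] - hist[n - k - 1])
        rhs[0] = rhs[-1] = 0.0                  # boundary conditions
        u = np.linalg.solve(A, rhs)
        hist.append(u.copy())
    return x, u
```

Because the scheme is implicit, the time step is not restricted by the spatial resolution; each step costs one tridiagonal solve (done densely here for brevity).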

••

TL;DR: The CAT(0) space analogs of results on weak convergence of the Picard, Mann and Ishikawa iterates proved in uniformly convex Banach spaces by Opial and Tan and Xu are given.

Abstract: By using the concept of Δ-convergence introduced by Lim [T.C. Lim, Remarks on some fixed point theorems, Proc. Amer. Math. Soc. 60 (1976) 179-182], we are able to give the CAT(0) space analogs of results on weak convergence of the Picard, Mann and Ishikawa iterates proved in uniformly convex Banach spaces by Opial [Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc. 73 (1967) 591-597], Ishikawa [S. Ishikawa, Fixed points and iteration of a nonexpansive mapping in a Banach space, Proc. Amer. Math. Soc. 59 (1976) 65-71] and Tan and Xu [K.K. Tan, H.K. Xu, Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process, J. Math. Anal. Appl. 178 (1993) 301-308].

392 citations

••

TL;DR: Two new definitions, parameter important degree and decision partition, are proposed for analyzing the algorithm of normal parameter reduction in soft sets to overcome the problems of suboptimal choice and added parameter set.

Abstract: This paper is concerned with the reduction of soft sets and fuzzy soft sets. Firstly, the problems of suboptimal choice and added parameter set of soft sets are analyzed. Then, we introduce the definition of normal parameter reduction in soft sets to overcome these problems. In addition, a heuristic algorithm of normal parameter reduction is presented. Two new definitions, parameter important degree and decision partition, are proposed for analyzing the algorithm of normal parameter reduction. Furthermore, the normal parameter reduction is also investigated in fuzzy soft sets.

301 citations
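The idea of normal parameter reduction — delete a parameter subset whose total contribution is the same for every object, so the decision partition by choice values is untouched — can be illustrated by brute force on a made-up Boolean soft set (the paper itself gives a heuristic algorithm):

```python
from itertools import combinations

# Hypothetical Boolean soft set: rows = objects, columns = parameters.
table = [
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
]

def choice_values(rows, params):
    """Choice value of each object: how many of the given parameters it satisfies."""
    return [sum(r[j] for j in params) for r in rows]

def normal_parameter_reduction(rows):
    """Brute-force illustration: find the largest parameter subset A whose
    per-object sums are all equal, and keep the complement.  Deleting A shifts
    every choice value by the same constant, so the decision partition is
    unchanged -- the idea behind normal parameter reduction."""
    m = len(rows[0])
    for k in range(m, 0, -1):
        for A in combinations(range(m), k):
            sums = [sum(r[j] for j in A) for r in rows]
            if len(set(sums)) == 1:          # equal contribution for every object
                return [j for j in range(m) if j not in A]
    return list(range(m))
```

On this table the subset {e1, e5} contributes exactly 1 to every object, so the reduction keeps parameters {e2, e3, e4} and every choice value simply drops by 1.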

••

TL;DR: A revised method is proposed that avoids the problems of Chu and Tsao's method for ranking fuzzy numbers; since it builds on the original method, fuzzy numbers are ranked in a similar way.

Abstract: In 2002, Chu and Tsao proposed a method to rank fuzzy numbers. They employed the area between the centroid and original points to rank fuzzy numbers; however, there were some problems with the ranking method. In this paper, we indicate these problems of Chu and Tsao's method, and then propose a revised method which avoids these problems for ranking fuzzy numbers. Since the revised method is based on Chu and Tsao's method, it is easy to rank fuzzy numbers in a way similar to the original method.

258 citations
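The centroid-area idea behind this family of ranking methods can be sketched for triangular fuzzy numbers; the score x̄·ȳ below (area between the centroid point and the origin) is the generic notion, not the paper's revised formula:

```python
def centroid(tfn):
    """Centroid of a triangular fuzzy number (a, b, c) with peak at b:
    x_bar = (a + b + c) / 3 and y_bar = 1/3 (standard results for triangles)."""
    a, b, c = tfn
    return (a + b + c) / 3.0, 1.0 / 3.0

def rank_by_centroid_area(tfns):
    """Score each fuzzy number by x_bar * y_bar and sort descending --
    a sketch of the generic centroid-area ranking idea."""
    scored = []
    for t in tfns:
        x, y = centroid(t)
        scored.append((x * y, t))
    scored.sort(reverse=True)
    return [t for _, t in scored]
```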

••

TL;DR: An extended stochastic gradient algorithm is developed to estimate the parameters of Hammerstein-Wiener ARMAX models to improve the identification accuracy and results indicate that the parameter estimation errors become small by introducing the forgetting factor.

Abstract: An extended stochastic gradient algorithm is developed to estimate the parameters of Hammerstein-Wiener ARMAX models. The basic idea is to replace the unmeasurable noise terms in the information vector of the pseudo-linear regression identification model with the corresponding noise estimates which are computed by the obtained parameter estimates. The obtained parameter estimates of the identification model include the product terms of the parameters of the original systems. Two methods of separating the parameter estimates of the original parameters from the product terms are discussed: the average method and the singular value decomposition method. To improve the identification accuracy, an extended stochastic gradient algorithm with a forgetting factor is presented. The simulation results indicate that the parameter estimation errors become small by introducing the forgetting factor.

181 citations
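The effect of a forgetting factor in stochastic-gradient identification can be seen on a plain linear-regression model; this is a generic sketch with made-up data, not the paper's extended SG algorithm for Hammerstein-Wiener ARMAX models (which additionally replaces unmeasurable noise terms with their estimates):

```python
import numpy as np

def sg_forgetting(phis, ys, lam=0.97):
    """Stochastic-gradient identification with a forgetting factor lam:
        r_k     = lam * r_{k-1} + ||phi_k||^2
        theta_k = theta_{k-1} + phi_k / r_k * (y_k - phi_k . theta_{k-1})
    A smaller lam discounts old data, which speeds convergence."""
    theta = np.zeros(phis.shape[1])
    r = 1.0
    for phi, y in zip(phis, ys):
        r = lam * r + phi @ phi
        theta = theta + phi / r * (y - phi @ theta)
    return theta

# Hypothetical linear system theta = (1.5, -0.7) observed through small noise.
rng = np.random.default_rng(0)
true_theta = np.array([1.5, -0.7])
X = rng.normal(size=(2000, 2))
y = X @ true_theta + 0.01 * rng.normal(size=2000)
est = sg_forgetting(X, y)
```

With lam = 1 the gain 1/r_k decays toward zero and adaptation stalls; the forgetting factor keeps r_k bounded, which is why the estimation errors shrink faster.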

••

TL;DR: This work has applied one of the variants of this algorithm to two case studies: the Hanoi water distribution network and the New York City water supply tunnel system, and presented a detailed comparison of the new results with those previously obtained by other authors.

Abstract: In the past decade, evolutionary methods have been used by various researchers to tackle optimal design problems for water supply systems (WSS). Particle Swarm Optimization (PSO) is one of these evolutionary algorithms which, in spite of the fact that it has primarily been developed for the solution of optimization problems with continuous variables, has been successfully adapted in other contexts to problems with discrete variables. In this work we have applied one of the variants of this algorithm to two case studies: the Hanoi water distribution network and the New York City water supply tunnel system. Both cases occur frequently in the related literature and provide two standard networks for benchmarking studies. This allows us to present a detailed comparison of our new results with those previously obtained by other authors.

180 citations
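Adapting PSO to discrete design variables is often done by letting particles fly in a continuous space and rounding onto the option set at evaluation time; the sketch below uses that common device with standard constants (w, c1, c2), not necessarily the specific variant used in the paper:

```python
import random

def discrete_pso(cost, options, n_vars, n_particles=24, iters=80, seed=1):
    """Toy PSO for discrete variables (e.g. pipe diameters): particles move in
    the continuous space of option indices and are rounded onto `options`
    whenever the cost is evaluated."""
    rnd = random.Random(seed)

    def decode(pos):
        return [options[min(len(options) - 1, max(0, round(p)))] for p in pos]

    swarm = [[rnd.uniform(0, len(options) - 1) for _ in range(n_vars)]
             for _ in range(n_particles)]
    vel = [[0.0] * n_vars for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    pcost = [cost(decode(p)) for p in swarm]
    gi = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[gi][:], pcost[gi]
    w, c1, c2 = 0.7, 1.5, 1.5                  # common default PSO constants
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n_vars):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - p[d])
                             + c2 * rnd.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            c = cost(decode(p))
            if c < pcost[i]:
                pbest[i], pcost[i] = p[:], c
                if c < gcost:
                    gbest, gcost = p[:], c
    return decode(gbest), gcost
```

A made-up pipe-sizing cost (sum of chosen diameters plus a penalty for violating per-pipe minimums) is enough to exercise the rounding scheme.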

••

TL;DR: The proposed model combines fast growth where the gradient is low with slow growth where the gradient is large; the existence theorem for solutions and some numerical results are given.

Abstract: We give some diffusion models which can be used for restoration in image processing. The proposed model is a combination of fast growth with respect to low gradient and slow growth when the gradient is large. The existence theorem for solutions and some numerical results are given.

179 citations

••

TL;DR: In this study, the differential transform method (DTM) is applied to both integro-differential and integral equation systems and is further expanded with a formulation to treat Fredholm integrals; systems are solved as numerical examples to show its capability and robustness.

Abstract: In this study, differential transform method (DTM) is applied to both integro-differential and integral equation systems. The method is further expanded with a formulation to treat Fredholm integrals. If the system considered has a solution in terms of the series expansion of known functions, this powerful method catches the exact solution. So as to show this capability and robustness, some systems of integral and integro-differential equations are solved as numerical examples.

160 citations

••

TL;DR: Compared with other artificial intelligence (AI) methods, the results demonstrate the efficiency of the proposed noninvasive method and show high accuracy in classifying ECG signals.

Abstract: This paper proposes a method for electrocardiogram (ECG) heartbeat discrimination using novel grey relational analysis (GRA). A typical ECG signal consists of the P-wave, QRS complexes and T-wave. We convert each QRS complex from the ECG signal to a Fourier spectrum; the spectrum varies with the rhythm origin and conduction path. The variations of the power spectrum are observed in the range of 0-20 Hz in the frequency domain. To quantify the frequency components among the various ECG beats, GRA is performed to classify the cardiac arrhythmias. According to the AAMI (Association for the Advancement of Medical Instrumentation) recommended standard, the heartbeat classes include the normal beat, supraventricular ectopic beat, bundle branch ectopic beat, ventricular ectopic beat, fusion beat and unknown beat. The method was tested on the MIT-BIH (Massachusetts Institute of Technology-Beth Israel Hospital) arrhythmia database. Compared with other artificial intelligence (AI) methods, the results demonstrate the efficiency of the proposed noninvasive method and show high accuracy in classifying ECG signals.
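The GRA step can be sketched with Deng's classic grey relational grade on toy "spectra": differences of the test vector against each class template are pooled to find the global Δmin/Δmax, and the class with the highest grade wins. The templates are made up and ζ = 0.5 is the customary distinguishing coefficient; the paper's exact formulation may differ:

```python
def gra_classify(templates, x, zeta=0.5):
    """Grey relational analysis on toy feature vectors: absolute differences
    of x against every class template are pooled for the global min/max
    (Deng's convention); return the class whose template attains the highest
    grey relational grade."""
    deltas = {lab: [abs(a - b) for a, b in zip(x, t)] for lab, t in templates.items()}
    pooled = [d for ds in deltas.values() for d in ds]
    dmin, dmax = min(pooled), max(pooled)
    if dmax == 0.0:                      # x equals every template exactly
        return next(iter(templates))

    def grade(ds):
        return sum((dmin + zeta * dmax) / (d + zeta * dmax) for d in ds) / len(ds)

    return max(deltas, key=lambda lab: grade(deltas[lab]))
```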

••

TL;DR: To simplify knowledge representation and extract much simpler dominance rules, this work proposes attribute reductions of interval ordered information systems and decision tables that eliminate only the information that is not essential from the viewpoint of the ordering of objects or dominance rules.

Abstract: Interval information systems are generalized models of single-valued information systems. By introducing a dominance relation to interval information systems, we propose a ranking approach for all objects based on dominance classes and establish a dominance-based rough set approach, which is mainly based on substitution of the indiscernibility relation by the dominance relation. Furthermore, we discuss interval ordered decision tables and dominance rules. To simplify knowledge representation and extract much simpler dominance rules, we propose attribute reductions of interval ordered information systems and decision tables that eliminate only the information that is not essential from the viewpoint of the ordering of objects or dominance rules. The approaches show how to simplify an interval ordered information system and find dominance rules directly from an interval ordered decision table. These results will be helpful for decision-making analysis in interval information systems.
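The dominance relation on interval values can be made concrete on a toy interval information system, assuming the common convention that [a, b] dominates [c, d] when a ≥ c and b ≥ d on every attribute (the data below is made up):

```python
def dominance_class(table, i):
    """Dominating set of object i in a toy interval information system:
    the objects whose interval value dominates object i's on every attribute,
    where [a, b] dominates [c, d] when a >= c and b >= d."""
    def dominates(x, y):
        return all(lo1 >= lo2 and hi1 >= hi2
                   for (lo1, hi1), (lo2, hi2) in zip(x, y))
    return {j for j, row in enumerate(table) if dominates(row, table[i])}
```

Ranking objects by the size of their dominance classes is the kind of ordering the abstract builds its rough set approach on.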

••


TL;DR: The homotopy perturbation method is applied to solve both linear and nonlinear boundary value problems for fourth-order integro-differential equations and is shown to be of high accuracy and more convenient and efficient for solving integro-differential equations.

Abstract: In this study, the homotopy perturbation method proposed by Ji-Huan He is applied to solve both linear and nonlinear boundary value problems for fourth-order integro-differential equations. The analysis is accompanied by numerical examples. The results show that the homotopy perturbation method is of high accuracy, more convenient and efficient for solving integro-differential equations.

••

TL;DR: An uncertainty principle is established for the right-sided quaternionic Fourier transform that prescribes a lower bound on the product of the effective widths of quaternion-valued signals in the spatial and frequency domains.

Abstract: We review the quaternionic Fourier transform (QFT). Using the properties of the QFT we establish an uncertainty principle for the right-sided QFT. This uncertainty principle prescribes a lower bound on the product of the effective widths of quaternion-valued signals in the spatial and frequency domains. It is shown that only a Gaussian quaternion signal minimizes the uncertainty.

••

TL;DR: A basic foundation of covering generalized rough set theory and its applications is set up by proposing three kinds of datasets which traditional rough sets cannot handle and by improving the definition of upper approximation to make it more reasonable than the existing ones.

Abstract: The covering generalized rough sets are an improvement of traditional rough set model to deal with more complex practical problems which the traditional one cannot handle. It is well known that any generalization of traditional rough set theory should first have practical applied background and two important theoretical issues must be addressed. The first one is to present reasonable definitions of set approximations, and the second one is to develop reasonable algorithms for attributes reduct. The existing covering generalized rough sets, however, mainly pay attention to constructing approximation operators. The ideas of constructing lower approximations are similar but the ideas of constructing upper approximations are different and they all seem to be unreasonable. Furthermore, less effort has been put on the discussion of the applied background and the attributes reduct of covering generalized rough sets. In this paper we concentrate our discussion on the above two issues. We first discuss the applied background of covering generalized rough sets by proposing three kinds of datasets which the traditional rough sets cannot handle and improve the definition of upper approximation for covering generalized rough sets to make it more reasonable than the existing ones. Then we study the attributes reduct with covering generalized rough sets and present an algorithm by using discernibility matrix to compute all the attributes reducts with covering generalized rough sets. With these discussions we can set up a basic foundation of the covering generalized rough set theory and broaden its applications.
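The covering approximations under discussion can be sketched directly from their definitions: the lower approximation unions the covering blocks contained in the target set, and one common upper approximation (one of the competing definitions the abstract criticizes) unions the blocks that meet it:

```python
def covering_approximations(cover, X):
    """Covering-based approximations of a set X, where `cover` is a list of
    sets whose union contains X.  Lower: union of blocks inside X.
    Upper (one of several definitions in the literature): union of blocks
    meeting X."""
    lower, upper = set(), set()
    for K in cover:
        if K <= X:
            lower |= K
        if K & X:
            upper |= K
    return lower, upper
```

Unlike a partition, covering blocks may overlap, which is exactly why several non-equivalent upper approximations coexist.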

••

TL;DR: This work investigates the global stability, periodic nature, oscillation and the boundedness of solutions of the difference equation and discusses the existence of unbounded solutions under certain conditions when l = 0.

Abstract: The aim of this work is to investigate the global stability, periodic nature, oscillation and the boundedness of solutions of the difference equation x_{n+1} = (A ∏_{i=l}^{k} x_{n−2i−1}) / (B + C ∏_{i=l}^{k−1} x_{n−2i}), n = 0, 1, …, where A, B, C are nonnegative real numbers and l, k are nonnegative integers with l < k. We discuss the existence of unbounded solutions under certain conditions when l = 0.

••

TL;DR: Several models from the most relevant class of nonlinear Black-Scholes equations for European and American options with a volatility depending on different factors, such as the stock price, the time, the option price and its derivatives due to transaction costs are focused on.

Abstract: Nonlinear Black-Scholes equations have been increasingly attracting interest over the last two decades, since they provide more accurate values by taking into account more realistic assumptions, such as transaction costs, risks from an unprotected portfolio, large investor's preferences or illiquid markets (which may have an impact on the stock price), the volatility, the drift and the option price itself. In this paper we will focus on several models from the most relevant class of nonlinear Black-Scholes equations for European and American options with a volatility depending on different factors, such as the stock price, the time, the option price and its derivatives due to transaction costs. We will analytically approach the option price by transforming the problem for a European Call option into a convection-diffusion equation with a nonlinear term and the free boundary problem for an American Call option into a fully nonlinear nonlocal parabolic equation defined on a fixed domain following Sevcovic's idea. Finally, we will present the results of different numerical discretization schemes for European options for various volatility models including the Leland model, the Barles and Soner model and the Risk adjusted pricing methodology model.

••

TL;DR: This paper develops a practical method for numerical solution of fractional reaction-diffusion equations, based on operator splitting, and presents results of numerical simulations to illustrate the method, and investigates properties of numerical solutions.

Abstract: Fractional diffusion equations are useful for applications in which a cloud of particles spreads faster than predicted by the classical equation. In a fractional diffusion equation, the second derivative in the spatial variable is replaced by a fractional derivative of order less than two. The resulting solutions spread faster than the classical solutions and may exhibit asymmetry, depending on the fractional derivative used. Fractional reaction-diffusion equations combine the fractional diffusion with a classical reaction term. In this paper, we develop a practical method for numerical solution of fractional reaction-diffusion equations, based on operator splitting. Then we present results of numerical simulations to illustrate the method, and investigate properties of numerical solutions. We also discuss applications to biology, where the reaction term models species growth and the diffusion term accounts for movements.

••

TL;DR: This paper presents a polynomial-time approximation to microaggregate multivariate numerical data for which bounds to optimal microaggregation can be derived at least for two different optimality criteria: minimum within-groups Euclidean distance and minimumWithin-groups sum of squares.

Abstract: Microaggregation is a family of methods for statistical disclosure control (SDC) of microdata (records on individuals and/or companies), that is, for masking microdata so that they can be released without disclosing private information on the underlying individuals. Microaggregation techniques are currently being used by many statistical agencies. The principle of microaggregation is to group original database records into small aggregates prior to publication. Each aggregate should contain at least k records to prevent disclosure of individual information, where k is a constant value preset by the data protector. In addition to it being a good masking method, microaggregation has recently been shown useful to achieve k-anonymity. In k-anonymity, the parameter k specifies the maximum acceptable disclosure risk, so that, once a value for k has been selected, the only job left is to maximize data utility: if microaggregation is used to implement k-anonymity, maximizing utility can be achieved by microaggregating optimally, i.e. with minimum within-groups variability loss. Unfortunately, optimal microaggregation can only be computed in polynomial time for univariate data. For multivariate data, it has been shown to be NP-hard. We present in this paper a polynomial-time approximation to microaggregate multivariate numerical data for which bounds to optimal microaggregation can be derived at least for two different optimality criteria: minimum within-groups Euclidean distance and minimum within-groups sum of squares. Beyond the theoretical interest of being the first microaggregation proposal with proven approximation bounds for any k, our method is empirically shown to be comparable to the best available heuristics for multivariate microaggregation.
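The microaggregation principle — groups of at least k records, each published as its centroid — can be sketched with a simple MDAV-style heuristic. This is a toy illustration of the masking idea, not the polynomial-time approximation algorithm with proven bounds proposed in the paper:

```python
import numpy as np

def microaggregate(X, k):
    """MDAV-style toy microaggregation (assumes len(X) >= k): repeatedly take
    the record farthest from the current centroid, group it with its k-1
    nearest neighbours, and publish the group centroid for every member.
    Each published group has at least k records, the masking guarantee
    microaggregation is built on."""
    X = np.asarray(X, dtype=float)
    out = np.empty_like(X)
    remaining = list(range(len(X)))
    while len(remaining) >= 2 * k:
        pts = X[remaining]
        far = remaining[int(np.argmax(np.linalg.norm(pts - pts.mean(0), axis=1)))]
        d = np.linalg.norm(X[remaining] - X[far], axis=1)
        group = [remaining[i] for i in np.argsort(d)[:k]]
        out[group] = X[group].mean(0)
        remaining = [i for i in remaining if i not in group]
    out[remaining] = X[remaining].mean(0)    # the last group absorbs the rest
    return out
```

On two well-separated clusters of three records each, k = 3 publishes exactly two centroids, so no released row can be traced to fewer than k individuals.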

••

TL;DR: The modified differential transform method (MDTM) technique introduces an alternative framework designed to overcome the difficulty of capturing the periodic behavior of the solution, which is characteristic of oscillator equations, and gives a good approximation to the true solution in a very large region.

Abstract: A numerical method for solving nonlinear oscillators is proposed. The proposed scheme is based on the differential transform method (DTM), Laplace transform and Pade approximants. The modified differential transform method (MDTM) technique introduces an alternative framework designed to overcome the difficulty of capturing the periodic behavior of the solution, which is characteristic of oscillator equations, and give a good approximation to the true solution in a very large region. The numerical results demonstrate the validity and applicability of the new technique and a comparison is made with existing results.

••

TL;DR: A novel forecasting model is proposed to enhance forecasting functionality and allow processing of two-factor forecasting problems, which applies fuzzy c-means (FCM) clustering to deal with interval partitioning, which takes the nature of data points into account and produces unequal-sized intervals.

Abstract: The study of fuzzy time series has increasingly attracted much attention due to its salient capabilities of tackling uncertainty and vagueness inherent in the data collected. A variety of forecasting models including high-order models have been devoted to improving forecasting accuracy. However, the high-order forecasting approach is accompanied by the crucial problem of determining an appropriate order number. Consequently, such a deficiency was recently solved by Li and Cheng [S.-T. Li, Y.-C. Cheng, Deterministic Fuzzy time series model for forecasting enrollments, Computers and Mathematics with Applications 53 (2007) 1904-1920] using a deterministic forecasting method. In this paper, we propose a novel forecasting model to enhance forecasting functionality and allow processing of two-factor forecasting problems. In addition, this model applies fuzzy c-means (FCM) clustering to deal with interval partitioning, which takes the nature of data points into account and produces unequal-sized intervals. Furthermore, in order to cope with the randomness of initially assigned membership degrees of FCM clustering, Monte Carlo simulations are used to justify the reliability of the proposed model. The superior accuracy of the proposed model is demonstrated by experiments comparing it to other existing models using real-world empirical data.

••

TL;DR: This paper applies the homotopy perturbation method for solving the sixth-order boundary value problems by reformulating them as an equivalent system of integral equations by using a suitable transformation.

Abstract: In this paper, we apply the homotopy perturbation method for solving the sixth-order boundary value problems by reformulating them as an equivalent system of integral equations. This equivalent formulation is obtained by using a suitable transformation. The analytical results of the integral equations have been obtained in terms of convergent series with easily computable components. Several examples are given to illustrate the efficiency and implementation of the homotopy perturbation method. Comparisons are made to confirm the reliability of the homotopy perturbation method. We have also considered an example where the homotopy perturbation method is not reliable.

••

TL;DR: The hyper-Wiener indices of the Cartesian product, composition, join and disjunction of graphs are computed, and some of the results are applied to compute the hyper-Wiener index of C₄ nanotubes, C₄ nanotori and q-multi-walled polyhex nanotori.

Abstract: Let G be a graph. The distance d(u,v) between the vertices u and v of the graph G is equal to the length of a shortest path that connects u and v. The Wiener index W(G) is the sum of all distances between vertices of G, whereas the hyper-Wiener index WW(G) is defined as WW(G) = ½W(G) + ½ ∑_{{u,v}⊆V(G)} d(u,v)². In this paper the hyper-Wiener indices of the Cartesian product, composition, join and disjunction of graphs are computed. We apply some of our results to compute the hyper-Wiener index of C₄ nanotubes, C₄ nanotori and q-multi-walled polyhex nanotori.
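The Wiener and hyper-Wiener indices as defined above are straightforward to compute for a small graph from BFS distances:

```python
from itertools import combinations
from collections import deque

def dist_matrix(adj):
    """All-pairs distances of an unweighted graph (adjacency lists) by BFS."""
    n = len(adj)
    D = [[0] * n for _ in range(n)]
    for s in range(n):
        seen = {s}
        queue = deque([(s, 0)])
        while queue:
            v, d = queue.popleft()
            D[s][v] = d
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append((w, d + 1))
    return D

def wiener(adj):
    """W(G): sum of d(u, v) over all vertex pairs."""
    D = dist_matrix(adj)
    return sum(D[u][v] for u, v in combinations(range(len(adj)), 2))

def hyper_wiener(adj):
    """WW(G) = 1/2 W(G) + 1/2 * sum over vertex pairs of d(u, v)^2."""
    D = dist_matrix(adj)
    pairs = list(combinations(range(len(adj)), 2))
    return 0.5 * sum(D[u][v] for u, v in pairs) + 0.5 * sum(D[u][v] ** 2 for u, v in pairs)
```

For the path on 4 vertices this gives W = 10 and WW = 15, in agreement with the known closed form WW(Pₙ) = C(n+2, 4).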

••

TL;DR: A similarity measure uses the concept of geometry to calculate the center-of-gravity points of the lower fuzzy numbers and the upper fuzzy numbers of interval-valued fuzzy numbers, respectively, to Calculate the degree of similarity between interval- valued fuzzy numbers.

Abstract: In this paper, we present a new method for handling fuzzy risk analysis problems based on measures of similarity between interval-valued fuzzy numbers. First, we propose a similarity measure to calculate the degree of similarity between interval-valued fuzzy numbers. The proposed similarity measure uses the concept of geometry to calculate the center-of-gravity (COG) points of the lower fuzzy numbers and the upper fuzzy numbers of interval-valued fuzzy numbers, respectively, to calculate the degree of similarity between interval-valued fuzzy numbers. We also prove some properties of the proposed similarity measure. Then, we use the proposed similarity measure for interval-valued fuzzy numbers for handling fuzzy risk analysis problems. The proposed method is more flexible and more intelligent than the methods presented in [S.J. Chen, S.M. Chen, Fuzzy risk analysis based on similarity measures of generalized fuzzy numbers, IEEE Transactions on Fuzzy Systems 11 (1) (2003) 45-56; S.M. Chen, Evaluating the rate of aggregative risk in software development using fuzzy set theory, Cybernetics and Systems 30 (1) (1999) 57-75; S.M. Chen, New methods for subjective mental workload assessment and fuzzy risk analysis, Cybernetics and Systems 27 (5) (1996) 449-472; H.M. Lee, Applying fuzzy set theory to evaluate the rate of aggregative risk in software development, Fuzzy Sets and Systems 79 (3) (1996) 323-336; K.J. Schmucker, Fuzzy Sets, Natural Language Computations, and Risk Analysis, Computer Science Press, MD (1984)] due to the fact that it uses interval-valued fuzzy numbers rather than fuzzy numbers or generalized fuzzy numbers for handling fuzzy risk analysis problems. It provides us with a useful way for handling fuzzy risk analysis problems.

••

TL;DR: In this work, a similarity equation of the momentum boundary layer is studied for a moving flat plate with mass transfer in a stationary fluid and the solution is applicable to the practical problem of a shrinking sheet with a constant sheet velocity.

Abstract: In this work, a similarity equation of the momentum boundary layer is studied for a moving flat plate with mass transfer in a stationary fluid. The solution is applicable to the practical problem of a shrinking sheet with a constant sheet velocity. Theoretical estimation of the solution domain is obtained. It is shown that the solution only exists with mass suction at the wall surface. The equation with the associated boundary conditions is solved using numerical techniques. Greatly different from the continuously stretching surface problem and the Blasius problem with a free stream, quite complicated behavior is observed in the results. It is seen that there are three different solution zones divided by two critical mass transfer parameters, f_{01} ≈ 1.7028 and f_{02} ≈ 1.7324. When f_0 [...] (f_0 > f_{02}). There is a terminating point for the solution domain and the terminating point corresponds to a special algebraically decaying solution for the current problem. The current results provide a new solution branch of the Blasius equation, which is greatly different from previous studies and provides more insight into the understanding of the Blasius equation.

••

TL;DR: The purpose of this paper is to obtain coincidence theorems and study the problem of stability of iterations for solving coincidence equations on a general setting.

Abstract: The purpose of this paper is to obtain coincidence theorems and study the problem of stability of iterations for solving coincidence equations on a general setting. Several special cases are discussed.

••

TL;DR: PSO is shown to be a promising method to solve optimal design problems regarding, in particular, wastewater collection networks, according to the results herein obtained.

Abstract: Optimal design of wastewater collection networks is addressed in this paper by making use of the so-called PSO (Particle Swarm Optimization) technique. This already popular evolutionary technique is adapted for dealing both with continuous and discrete variables as required by this problem. An example of a wastewater collection network is used to show the algorithm performance and the obtained results are compared with those given by using dynamic programming to solve the same problem under the same conditions. PSO is shown to be a promising method to solve optimal design problems regarding, in particular, wastewater collection networks, according to the results herein obtained.

••

TL;DR: It is shown that when the lattice Boltzmann equation is formulated with a small time step Δt and an associated space scale Δx, a Taylor expansion joined with the so-called equivalent equation methodology leads to establishing macroscopic fluid equations as a formal limit.

Abstract: We show that when we formulate the lattice Boltzmann equation with a small time step Δt and an associated space scale Δx, a Taylor expansion joined with the so-called equivalent equation methodology leads to establishing macroscopic fluid equations as a formal limit. We recover the Euler equations of gas dynamics at the first order and the compressible Navier-Stokes equations at the second order.
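The simplest analogue of this limit can be demonstrated with a two-population one-dimensional lattice Boltzmann toy model: BGK collision plus streaming, whose Taylor/equivalent-equation expansion yields the diffusion equation with D = (τ − ½)Δx²/Δt in lattice units. This is a standard toy illustration of the methodology, not the full gas-dynamics setting of the paper:

```python
import numpy as np

def lbm_diffusion_1d(nx=100, steps=200, tau=1.0):
    """Two-population (D1Q2) lattice Boltzmann sketch on a periodic ring:
    BGK relaxation toward the equilibrium rho/2, then streaming one cell
    right and left.  For tau = 1 the scheme reduces to a symmetric random
    walk, the discrete counterpart of pure diffusion."""
    rho0 = np.zeros(nx)
    rho0[nx // 2] = 1.0                       # unit mass at the centre
    f = np.stack([rho0 / 2, rho0 / 2])        # f[0] moves right, f[1] moves left
    for _ in range(steps):
        rho = f.sum(axis=0)
        feq = np.stack([rho / 2, rho / 2])
        f += (feq - f) / tau                  # BGK collision (conserves mass)
        f[0] = np.roll(f[0], 1)               # streaming, periodic boundaries
        f[1] = np.roll(f[1], -1)
    return f.sum(axis=0)
```

Mass is conserved exactly by both collision and streaming, while the initial point mass spreads out, the two properties the macroscopic limit rests on.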

••

TL;DR: It is proved that the Lipschitzian character of the function which introduces the functional dependence in a differential equation is not a necessary condition for the development of the monotone iterative technique to obtain a solution and to approximate the extremal solutions to the equation in a given functional interval.

Abstract: This paper is related to the existence and approximation of solutions for impulsive functional differential equations with periodic boundary conditions. We study the existence and approximation of extremal solutions to different types of functional differential equations with impulses at fixed times, by the use of the monotone method. Some of the options included in this formulation are differential equations with maximum and integro-differential equations. In this paper, we also prove that the Lipschitzian character of the function which introduces the functional dependence in a differential equation is not a necessary condition for the development of the monotone iterative technique to obtain a solution and to approximate the extremal solutions to the equation in a given functional interval. The corresponding results are established for the impulsive case. The general formulation includes several types of functional dependence (delay equations, equations with maxima, integro-differential equations). Finally, we consider the case of functional dependence which is given by nonincreasing and bounded functions.

••

TL;DR: This paper proposes an idea of using a larger function class containing the union of all possible hypothesis spaces (varying with the sample) to measure the approximation ability of the algorithm and shows how this idea provides error analysis for two particular classes of learning algorithms in kernel methods: learning the kernel via regularization and coefficient based regularization.

Abstract: Many learning algorithms use hypothesis spaces which are trained from samples, but little theoretical work has been devoted to the study of these algorithms. In this paper we show that mathematical analysis for these algorithms is essentially different from that for algorithms with hypothesis spaces independent of the sample or depending only on the sample size. The difficulty lies in the lack of a proper characterization of approximation error. To overcome this difficulty, we propose an idea of using a larger function class (not necessarily linear space) containing the union of all possible hypothesis spaces (varying with the sample) to measure the approximation ability of the algorithm. We show how this idea provides error analysis for two particular classes of learning algorithms in kernel methods: learning the kernel via regularization and coefficient based regularization. We demonstrate the power of this approach by its wide applicability.