
# Xuyan Liu

Bio: Xuyan Liu is an academic researcher from Xidian University. The author has contributed to research in topics including mathematics and optimization problems, has an h-index of 3, and has co-authored 4 publications receiving 38 citations.

##### Papers



TL;DR: A new filled function is proposed that is continuous and differentiable and has no parameters to tune. It has three advantages: firstly, it is less likely to produce extra local minima; secondly, more efficient local search algorithms using gradient information can be applied; and thirdly, a continuous and differentiable filled function can be optimized more easily.

Abstract: Many real-world problems can be modelled as optimization problems. However, the traditional algorithms for these problems often become trapped in local minima. The filled function method is an effective approach to tackle this kind of problem. However, the existing filled functions suffer from discontinuity, non-differentiability or sensitivity to parameters, which limits their efficiency. In this paper, we propose a new filled function which is continuous and differentiable without any parameter to tune. Compared to discontinuous or non-differentiable filled functions, a continuous and differentiable filled function mainly has three advantages: firstly, it is less likely to produce extra local minima; secondly, more efficient local search algorithms using gradient information can be applied; and thirdly, a continuous and differentiable filled function can be optimized more easily. Based on the newly proposed filled function, a new algorithm was designed for unco...

26 citations
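The two-phase scheme the abstract describes (local search down to a minimizer, then minimize a filled function to escape its basin) can be sketched as follows. The filled function used here is a classical *parameterized* one (Ge-style, with tuning parameters `r` and `rho`) rather than the parameter-free function proposed in the paper, the local solver is plain gradient descent, and the test function is made up; all are stand-ins for illustration.

```python
import math

# Two-basin test function: local minimum near x ~ 1.90 (f ~ -10.1),
# global minimum near x ~ -2.09 (f ~ -22.1).
def f(x):
    return x**4 - 8*x**2 + 3*x

def fgrad(x):
    return 4*x**3 - 16*x + 3

def local_search(x, lr=1e-3, tol=1e-8, max_steps=20000):
    # Plain gradient descent; stands in for any gradient-based local solver.
    for _ in range(max_steps):
        g = fgrad(x)
        if abs(g) < tol:
            break
        x -= lr * g
    return x

def filled(x, x_star, r=30.0, rho=2.0):
    # A classical parameterized filled function: small in basins lower than
    # f(x_star), largest near x_star itself.  The constant r must keep
    # r + f(x) > 0 on the search box.
    return math.exp(-(x - x_star)**2 / rho**2) / (r + f(x))

def filled_function_method(x0, lo=-4.0, hi=4.0, grid=801):
    x_star = local_search(x0)
    while True:
        # Escape phase: minimize the filled function (a coarse grid search
        # suffices in this 1-D sketch), then restart local search from there.
        xs = [lo + (hi - lo) * k / (grid - 1) for k in range(grid)]
        x_esc = min(xs, key=lambda x: filled(x, x_star))
        x_new = local_search(x_esc)
        if f(x_new) < f(x_star) - 1e-12:
            x_star = x_new      # reached a lower basin: continue
        else:
            return x_star       # no lower basin found: stop

x_best = filled_function_method(2.5)   # starts in the shallow right-hand basin
```

Starting from x = 2.5, gradient descent alone stops at the local minimum near x ≈ 1.90, while the filled-function loop escapes to the global minimum near x ≈ -2.09, which is the behaviour the abstract's two-phase method aims at.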


24 Jul 2016

TL;DR: This work empirically studies the impact of different grouping strategies on the cooperative co-evolution (CC) framework and the relationship between grouping strategies and search algorithms.

Abstract: The cooperative co-evolution (CC) framework is widely used in large-scale global optimization. It is believed that the CC framework is very sensitive to grouping strategies and that its performance deteriorates if interacting variables are not correctly grouped. Many efforts have therefore been devoted to finding good ways to correctly decompose a large-scale problem into smaller sub-problems, so that the original problem can be effectively solved by optimizing these sub-problems with a search algorithm. However, what is the relationship between the grouping strategy and the search algorithm adopted in CC? What is the effect of grouping strategies on the CC framework? This work tackles these issues. We try to unveil the impact of different grouping strategies on CC, and the relationship between grouping strategies and search algorithms, by empirical study. The experimental results show that a correct variable grouping is very important, since it turns the large-scale problem into smaller sub-problems and makes problem solving easier; it indeed has a big influence on the results obtained by the search algorithm. However, when the search algorithm adopted is not suitable or effective, the final results may be poor even if the grouping strategy gives the correct grouping; in this case, the grouping strategy plays only a small role in CC. Thus, only an effective grouping strategy combined with an efficient search algorithm can produce good solutions for large-scale global optimization problems.

13 citations
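The CC loop the abstract refers to can be sketched as follows: the variables are split into groups, and each group is optimized in turn with a simple (1+1)-style mutation search while the rest of the context vector stays frozen. The grouping, objective and search operator here are illustrative stand-ins, not the ones studied in the paper.

```python
import random

def cc_optimize(obj, dim, groups, cycles=50, trials=20, seed=0):
    # Round-robin cooperative co-evolution: each variable group is optimized
    # in turn against the frozen values of all other variables (the context).
    rng = random.Random(seed)
    context = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    best = obj(context)
    for cycle in range(cycles):
        sigma = 0.5 * (0.95 ** cycle)   # shrinking mutation step
        for group in groups:
            for _ in range(trials):
                cand = context[:]
                for i in group:
                    cand[i] += rng.gauss(0.0, sigma)
                val = obj(cand)
                if val < best:            # greedy (1+1)-style acceptance
                    context, best = cand, val
    return context, best

# A separable objective: the "correct" grouping keeps {x0,x1} and {x2,x3} apart.
def sphere(x):
    return sum(v * v for v in x)

solution, value = cc_optimize(sphere, dim=4, groups=[[0, 1], [2, 3]])
```

The abstract's point is exactly about this loop: a correct `groups` argument makes each inner search low-dimensional and easy, but a weak inner search can still produce poor results no matter how good the grouping is.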

01 Jun 2017

TL;DR: A function-formula-based grouping (FBG) strategy is adopted to classify separable variables into different groups and place interacting variables in the same group, and a local search scheme is proposed to speed up the search.

Abstract: In this paper, we propose a hybrid genetic algorithm based on variable grouping and uniform design for global optimization problems. A function-formula-based grouping (FBG) strategy is adopted to classify separable variables into different groups and place interacting variables in the same group. In this way, the problem considered can be transformed into several lower-dimensional sub-problems, and the solution can be obtained more easily by simultaneously solving these sub-problems. Then, an efficient crossover operator is designed using a specific uniform design method. When we have no prior knowledge of the global optimal solution, this crossover operator is more likely to find high-quality solutions. Furthermore, in order to enhance diversity and efficiently explore the search space, an adaptive mutation operator is designed to adjust the search scope, and a local search scheme is proposed to speed up the search. By integrating all these schemes, a hybrid genetic algorithm is proposed for global optimization problems. Finally, experiments are conducted on widely used benchmarks, and the results indicate that the proposed algorithm is efficient and effective.

4 citations
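The payoff of grouping that the abstract relies on, namely that an additively separable function can be minimized group by group, can be illustrated in a few lines. The function and groups below are made-up examples for illustration, not the FBG strategy itself.

```python
# If f(x) = g(x0, x1) + h(x2), the 3-dimensional problem splits into a
# 2-dimensional and a 1-dimensional sub-problem solvable independently.

def g(x0, x1):
    # x0 and x1 interact (cross term), so they must stay in one group.
    return (x0 - 1.0) ** 2 + (x0 - 1.0) * (x1 - 2.0) + (x1 - 2.0) ** 2

def h(x2):
    return (x2 + 3.0) ** 2

def f(x):
    return g(x[0], x[1]) + h(x[2])

def grid_min(fun, dims, lo=-5.0, hi=5.0, step=0.5):
    # Brute-force minimizer; its cost grows exponentially with `dims`,
    # which is exactly why decomposing into small groups pays off.
    pts = [lo + step * k for k in range(int((hi - lo) / step) + 1)]
    if dims == 1:
        return min(((fun([a]), [a]) for a in pts))[1]
    return min(((fun([a, b]), [a, b]) for a in pts for b in pts))[1]

sub1 = grid_min(lambda p: g(p[0], p[1]), dims=2)   # optimizes x0, x1 only
sub2 = grid_min(lambda p: h(p[0]), dims=1)         # optimizes x2 only
combined = sub1 + sub2                             # → [1.0, 2.0, -3.0]
```

Solving the two sub-problems costs 441 + 21 evaluations here, versus 21³ = 9261 for the joint 3-D grid, and the concatenated sub-solutions already minimize the full objective.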


25 Jan 2017

TL;DR: In this paper, a large-data clustering optimization method based on dimension-reducing grouping is proposed, which is able to solve large-scale clustering problems accurately and adaptively.

Abstract: The invention discloses a large-data clustering optimization method based on dimension-reducing grouping. The method comprises the following steps:

1. carry out initialization;
2. scan the similarity expressions corresponding to the large-data clustering optimization problem, and judge whether relative symbols exist;
3. store the relative dimensionality;
4. judge whether similarity sub-expressions exist;
5. store the ephemeral data of the similarity sub-expressions;
6. judge whether relative symbols exist in the similarity sub-expressions;
7. store the relative sub-dimensionality;
8. judge whether the first symbol after the similarity sub-expressions is the similarity symbol;
9. merge the relative dimensionality;
10. release the ephemeral data;
11. merge sub-dimensionalities with common elements.

By means of this method, large-data clustering optimization problems can be accurately subjected to dimension-reducing grouping at high speed and with wide adaptability.

2 citations
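The final merging step in the abstract, combining sub-dimension groups that share common elements, is the classic disjoint-set (union-find) operation. A minimal sketch of that step (an illustration, not the patent's implementation):

```python
def merge_groups(groups):
    # Union-find over variable indices: any two groups that share an
    # element end up merged into the same group.
    parent = {}

    def find(i):
        parent.setdefault(i, i)
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for grp in groups:
        for i in grp[1:]:
            parent[find(grp[0])] = find(i)  # union with the group's head

    merged = {}
    for grp in groups:
        for i in grp:
            merged.setdefault(find(i), set()).add(i)
    return sorted(sorted(s) for s in merged.values())

result = merge_groups([[1, 2], [2, 3], [5, 6]])   # → [[1, 2, 3], [5, 6]]
```

Here the groups {1,2} and {2,3} share element 2, so they merge into {1,2,3}, while {5,6} stays separate.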

06 Oct 2022

TL;DR: The authors determine how large a subset $E\subseteq \mathbb{F}_q^d$ must be to guarantee that the hypothesis class of sphere-indicator functions $\mathcal{H}_t^d(E)$ attains the maximum possible VC-dimension, resolving the problem in all dimensions.

Abstract: Given a domain $X$ and a collection $\mathcal{H}$ of functions $h:X\to \{0,1\}$, the Vapnik-Chervonenkis (VC) dimension of $\mathcal{H}$ measures its complexity in an appropriate sense. In particular, the fundamental theorem of statistical learning says that a hypothesis class with finite VC-dimension is PAC learnable. Recent work by Fitzpatrick, Wyman, the fourth and seventh named authors studied the VC-dimension of a natural family of functions $\mathcal{H}_t^{'2}(E): \mathbb{F}_q^2\to \{0,1\}$, corresponding to indicator functions of circles centered at points in a subset $E\subseteq \mathbb{F}_q^2$. They showed that when $|E|$ is large enough, the VC-dimension of $\mathcal{H}_t^{'2}(E)$ is the same as in the case that $E = \mathbb F_q^2$. We study a related hypothesis class, $\mathcal{H}_t^d(E)$, corresponding to intersections of spheres in $\mathbb{F}_q^d$, and ask how large $E\subseteq \mathbb{F}_q^d$ needs to be to ensure the maximum possible VC-dimension. We resolve this problem in all dimensions, proving that whenever $|E|\geq C_dq^{d-1/(d-1)}$ for $d\geq 3$, the VC-dimension of $\mathcal{H}_t^d(E)$ is as large as possible. We get a slightly stronger result if $d=3$: this result holds as long as $|E|\geq C_3 q^{7/3}$. Furthermore, when $d=2$ the result holds when $|E|\geq C_2 q^{7/4}$.
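The VC-dimension the abstract works with can be made concrete by brute force on a small finite domain. The hypothesis classes below (intervals and thresholds on {0,…,5}) are standard textbook examples chosen for illustration, not the sphere classes of the paper.

```python
from itertools import combinations

def vc_dimension(domain, hypotheses):
    # Largest d such that some d-point subset of the domain is shattered,
    # i.e. all 2^d labelings of the subset are realized by the class.
    best = 0
    for d in range(1, len(domain) + 1):
        for pts in combinations(domain, d):
            labelings = {tuple(h(p) for p in pts) for h in hypotheses}
            if len(labelings) == 2 ** d:   # this subset is shattered
                best = d
                break
        else:
            # No d-subset is shattered; no larger subset can be either,
            # since every subset of a shattered set is shattered.
            return best
    return best

domain = list(range(6))

# Indicator functions of closed intervals [a, b]: VC-dimension 2.
intervals = [(lambda p, a=a, b=b: int(a <= p <= b))
             for a in range(6) for b in range(a, 6)]

# Threshold functions 1{p >= a}: VC-dimension 1, because for x < y the
# labeling (1, 0) can never be realized.
thresholds = [(lambda p, a=a: int(p >= a)) for a in range(7)]
```

The paper's question has the same shape at a much larger scale: for the class of sphere indicators over $\mathbb{F}_q^d$, how large must $E$ be before the maximum shatterable set size stops shrinking?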

##### Cited by



TL;DR: The main purpose of this paper is to outline the state of the art and to identify open challenges concerning the most relevant areas within bio-inspired optimization, thereby highlighting the need for reaching a consensus and joining forces towards achieving valuable insights into the understanding of this family of optimization techniques.

Abstract: In recent years, the research community has witnessed an explosion of literature dealing with the mimicking of behavioral patterns and social phenomena observed in nature towards efficiently solving complex computational tasks. This trend has been especially dramatic in what relates to optimization problems, mainly due to the unprecedented complexity of problem instances, arising from a diverse spectrum of domains such as transportation, logistics, energy, climate, social networks, health and industry 4.0, among many others. Notwithstanding this upsurge of activity, research in this vibrant topic should be steered towards certain areas that, despite their eventual value and impact on the field of bio-inspired computation, still remain insufficiently explored to date. The main purpose of this paper is to outline the state of the art and to identify open challenges concerning the most relevant areas within bio-inspired optimization. An analysis and discussion are also carried out over the general trajectory followed in recent years by the community working in this field, thereby highlighting the need for reaching a consensus and joining forces towards achieving valuable insights into the understanding of this family of optimization techniques.

401 citations


01 Jan 2011

335 citations


TL;DR: This paper proposes a new decomposition method, called recursive differential grouping (RDG), which considers the interaction between decision variables based on nonlinearity detection, and shows that RDG greatly improves the efficiency of problem decomposition in terms of time complexity.

Abstract: Cooperative co-evolution (CC) is an evolutionary computation framework that can be used to solve high-dimensional optimization problems via a “divide-and-conquer” mechanism. However, the main challenge when using this framework lies in problem decomposition. That is, deciding how to allocate decision variables to a particular subproblem, especially interacting decision variables. Existing decomposition methods are typically computationally expensive. In this paper, we propose a new decomposition method, which we call recursive differential grouping (RDG), by considering the interaction between decision variables based on nonlinearity detection. RDG recursively examines the interaction between a selected decision variable and the remaining variables, placing all interacting decision variables into the same subproblem. We use analytical methods to show that RDG can be used to efficiently decompose a problem, without explicitly examining all pairwise variable interactions. We evaluated the efficacy of the RDG method using large scale benchmark optimization problems. Numerical simulation experiments showed that RDG greatly improved the efficiency of problem decomposition in terms of time complexity. Significantly, when RDG was embedded in a CC framework, the optimization results were better than results from seven other decomposition methods.

151 citations


TL;DR: The experimental results and analysis suggest that ERDG is a competitive method for decomposing large-scale continuous problems and improves the performance of CC in solving large-scale optimization problems.

Abstract: Cooperative co-evolution (CC) is an efficient and practical evolutionary framework for solving large-scale optimization problems. The performance of CC is affected by the variable decomposition: an accurate decomposition can help to improve the performance of CC on an optimization problem, but variable grouping methods usually spend considerable computational resources to obtain one. To reduce the computational cost of the decomposition, we propose an efficient recursive differential grouping (ERDG) method in this article. By exploiting historical information from previous examinations of the interrelationships between the variables of an optimization problem, ERDG is able to avoid examining some interrelationships and thus requires much less computation than other recursive differential grouping methods. Our experimental results and analysis suggest that ERDG is a competitive method for decomposing large-scale continuous problems and improves the performance of CC in solving large-scale optimization problems.

45 citations
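The saving the abstract attributes to ERDG comes from reusing objective values computed during earlier interaction tests instead of evaluating them again. The effect can be mimicked with a simple evaluation cache; this illustrates the idea of reusing historical information, not ERDG's actual bookkeeping, which additionally skips tests whose outcome is implied by earlier results.

```python
def cached(f):
    # Wrap an objective so repeated evaluations of the same point are free;
    # .calls counts the true (uncached) evaluations.
    memo = {}
    def wrapped(x):
        key = tuple(x)
        if key not in memo:
            wrapped.calls += 1
            memo[key] = f(list(key))
        return memo[key]
    wrapped.calls = 0
    return wrapped

g = cached(lambda x: x[0] ** 2 + x[1] ** 2)
g([1.0, 2.0]); g([1.0, 2.0]); g([0.0, 0.0])
# Three lookups, but only two real evaluations of the underlying objective.
```

In decomposition methods every pairwise test shares points such as the common base point, so even plain caching removes many evaluations across the O(n²) tests; ERDG's historical-information argument goes further than this sketch.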


02 Jul 2018

TL;DR: An upper bound of the round-off errors is derived, which is shown to be sufficient when identifying variable interactions across a wide range of large-scale benchmark problems.

Abstract: Problem decomposition plays an essential role in the success of cooperative co-evolution (CC) when used for solving large-scale optimization problems. The recently proposed recursive differential grouping (RDG) method has been shown to be very efficient, especially in terms of time complexity. However, it requires an appropriate parameter setting to estimate a threshold value in order to determine whether two subsets of decision variables interact. Furthermore, using one global threshold value may be insufficient to identify variable interactions in components with different contributions to the fitness value. Inspired by the differential grouping 2 (DG2) method, in this paper we adaptively estimate a threshold value for RDG based on computational round-off errors. We derive an upper bound on the round-off errors, which is shown to be sufficient when identifying variable interactions across a wide range of large-scale benchmark problems. Comprehensive numerical experiments showed that the proposed RDG2 method achieved higher decomposition accuracy than RDG and DG2. When embedded into a CC framework, it achieved statistically equal or significantly better solution quality than RDG and DG2 when used to solve the benchmark problems.

39 citations
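The adaptive threshold described above is built from a standard floating-point round-off bound: if the four objective values in the finite-difference interaction test have total magnitude M, accumulated rounding error is at most about γ·M with γ = n·μ/(1−n·μ), where μ is the unit round-off (half the machine epsilon) and n counts the operations involved. A sketch of that estimate follows; the exact operation count and combination used by RDG2/DG2 may differ from the illustrative `n_ops=4` here.

```python
import sys

def gamma(n):
    # Standard backward-error factor: bounds the relative error accumulated
    # over n floating-point operations (mu = unit round-off = epsilon / 2).
    mu = sys.float_info.epsilon / 2.0
    return (n * mu) / (1.0 - n * mu)

def roundoff_threshold(f1, f2, f3, f4, n_ops=4):
    # Declare an interaction only when |delta_1 - delta_2| exceeds the
    # error that round-off alone could produce at this magnitude.
    return gamma(n_ops) * (abs(f1) + abs(f2) + abs(f3) + abs(f4))

small = roundoff_threshold(1.0, 1.0, 1.0, 1.0)
large = roundoff_threshold(1e10, 1e10, 1e10, 1e10)
```

This is why one global threshold is insufficient: a fixed cutoff such as 1e-9 over-flags interactions for components with huge fitness contributions and under-flags them for tiny ones, whereas the bound above scales with the magnitude of the function values actually involved in each test.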