Author

Roman Neruda

Bio: Roman Neruda is an academic researcher from the Academy of Sciences of the Czech Republic. The author has contributed to research on topics including evolutionary algorithms and artificial neural networks. The author has an h-index of 13 and has co-authored 152 publications receiving 812 citations.


Papers
Journal Article (DOI)
TL;DR: Several learning methods for RBF networks and their combinations are presented: gradient-based learning, a three-step algorithm with an unsupervised part, and evolutionary algorithms. Their performance is compared on benchmark problems from the Proben1 database.

79 citations
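The three-step scheme mentioned in the TL;DR can be illustrated with a short sketch: place the RBF centers by unsupervised clustering, set the kernel widths from inter-center distances, and solve the output weights by linear least squares. This is a generic illustration under our own assumptions (class name, k-means for the unsupervised step, a single shared width), not the paper's exact algorithm.

```python
# Minimal sketch of three-step RBF training (illustrative, not the paper's exact method):
# 1) unsupervised placement of centers, 2) width setting, 3) least-squares output weights.
import numpy as np
from sklearn.cluster import KMeans

class SimpleRBFNetwork:
    def __init__(self, n_centers=10):
        self.n_centers = n_centers

    def fit(self, X, y):
        # Step 1: place centers with k-means (the unsupervised part).
        km = KMeans(n_clusters=self.n_centers, n_init=10).fit(X)
        self.centers = km.cluster_centers_
        # Step 2: set a common width from the mean distance between distinct centers.
        d = np.linalg.norm(self.centers[:, None] - self.centers[None, :], axis=-1)
        self.width = d[d > 0].mean()
        # Step 3: solve the linear output weights by least squares.
        Phi = self._design_matrix(X)
        self.weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def _design_matrix(self, X):
        d = np.linalg.norm(X[:, None] - self.centers[None, :], axis=-1)
        return np.exp(-(d / self.width) ** 2)

    def predict(self, X):
        return self._design_matrix(X) @ self.weights
```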

Proceedings Article (DOI)
23 Sep 2008
TL;DR: This paper proposes a realization of the DPS using a SOAP version of the SPARQL protocol and a dynamic configuration interface allowing easy interactions of service requesters with data providing services.
Abstract: Web services providing access to data sources with structured data have an important place in service-oriented architecture (SOA). In this paper we focus on the modeling and discovery of generic data providing services (DPS), with the goal of making data providing services available for interactions with service requesters in contexts such as service composition and mediation. In our model, RDF Views are used to represent the content provided by a DPS. A characterization of the match between the description of a DPS as an RDF View and an OWL-S service request is specified, based on which we develop a flexible matchmaking algorithm for the discovery of data providing services. Finally, we propose a realization of the DPS using a SOAP version of the SPARQL protocol and a dynamic configuration interface allowing easy interactions of service requesters with data providing services.

48 citations
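For orientation, the SPARQL protocol referenced above is, in its common HTTP binding, simply a query sent to an endpoint that returns result bindings; the paper proposes a SOAP binding, which this minimal sketch does not reproduce. The endpoint URL and query below are placeholders, not from the paper.

```python
# Hedged sketch: querying a data-providing service via the HTTP binding of the SPARQL
# protocol (the paper uses a SOAP binding; endpoint and query are placeholders).
import requests

ENDPOINT = "http://example.org/dps/sparql"  # hypothetical DPS endpoint
QUERY = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?mbox WHERE { ?person foaf:name ?name ; foaf:mbox ?mbox . }
"""

response = requests.post(
    ENDPOINT,
    data={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["name"]["value"], binding["mbox"]["value"])
```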

Journal Article (DOI)
TL;DR: This work proposes an Abstract Process Mediation Framework (APMF) identifying the key functional areas that need to be addressed by process mediation components, and presents algorithms for solving the process mediation problem in two scenarios: when the mediation process has complete visibility of the process models of both the service provider and the service requester, and when it can see only the provider's model.
Abstract: The ability to deal with the incompatibilities of service requesters and providers is a critical factor for achieving interoperability in dynamic open environments. We focus on the problem of process mediation of the semantically annotated process models of the service requester and service provider. We propose an Abstract Process Mediation Framework (APMF) identifying the key functional areas that need to be addressed by process mediation components. Next, we present algorithms for solving the process mediation problem in two scenarios: (1) when the mediation process has complete visibility of the process model of the service provider and service requester (complete visibility scenario) and (2) when the mediation process has visibility only of the process model of the service provider, but not the service requester (asymmetric scenario). The algorithms combine planning and semantic reasoning with the discovery of appropriate external services such as data mediators. Finally, the Process Mediation Agent (PMA) is introduced, which realises an execution infrastructure for runtime mediation.

34 citations
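As a rough illustration of runtime mediation (not the APMF or PMA algorithms themselves), a mediator can translate each message expected by one party into the vocabulary of the other, invoking an external data mediator when individual values need conversion. All names, schemas, and converters below are hypothetical.

```python
# Illustrative sketch of runtime message mediation between a requester and a provider.
# This is not the paper's APMF/PMA algorithm; names and mediators are hypothetical.
from typing import Any, Callable, Dict

Message = Dict[str, Any]

def mediate(requester_msg: Message,
            provider_schema: Dict[str, str],
            data_mediators: Dict[str, Callable[[Any], Any]]) -> Message:
    """Translate a requester message into the provider's schema.

    provider_schema maps provider field names to requester field names;
    data_mediators optionally convert individual values (units, formats, ...).
    """
    provider_msg: Message = {}
    for provider_field, requester_field in provider_schema.items():
        value = requester_msg[requester_field]
        converter = data_mediators.get(provider_field)
        provider_msg[provider_field] = converter(value) if converter else value
    return provider_msg

# Example: the provider expects 'birthDate' with dashes, the requester sends 'dob' with slashes.
msg = mediate(
    {"dob": "1990/12/31", "name": "Alice"},
    provider_schema={"birthDate": "dob", "fullName": "name"},
    data_mediators={"birthDate": lambda d: d.replace("/", "-")},
)
print(msg)  # {'birthDate': '1990-12-31', 'fullName': 'Alice'}
```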

Proceedings Article (DOI)
24 Sep 2017
TL;DR: This work proposes a genetic algorithm for optimizing a network architecture for the prediction of air pollution from sensor measurements; the evolved architectures are compared to several fixed architectures and to support vector regression.
Abstract: Deep neural networks enjoy high interest and have recently become the state-of-the-art methods in many fields of machine learning. Still, there is no easy way to choose a network architecture, even though the choice of architecture can significantly influence network performance. This work is a first step towards automatic architecture design. We propose a genetic algorithm for the optimization of a network architecture. The algorithm is inspired by and designed directly for the Keras library [1], one of the most common implementations of deep neural networks. The target application is the prediction of air pollution based on sensor measurements. The proposed algorithm is evaluated in experiments on sensor data and compared to several fixed architectures and to support vector regression.

31 citations
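A minimal sketch of the kind of architecture search described above, assuming a genome that is simply a list of dense-layer widths evaluated with Keras; the layer types, mutation scheme, and fitness measure are our own simplifications, not the paper's encoding.

```python
# Minimal sketch of a genetic search over Keras dense architectures (illustrative only;
# the genome here is just a list of layer widths, not the paper's encoding).
import random
from tensorflow import keras

def build_model(genome, input_dim):
    model = keras.Sequential()
    model.add(keras.Input(shape=(input_dim,)))
    for width in genome:
        model.add(keras.layers.Dense(width, activation="relu"))
    model.add(keras.layers.Dense(1))  # single regression output (e.g. a pollutant level)
    model.compile(optimizer="adam", loss="mse")
    return model

def fitness(genome, X_train, y_train, X_val, y_val):
    model = build_model(genome, X_train.shape[1])
    model.fit(X_train, y_train, epochs=5, verbose=0)
    return -model.evaluate(X_val, y_val, verbose=0)  # higher is better

def mutate(genome):
    g = list(genome)
    i = random.randrange(len(g))
    g[i] = max(1, g[i] + random.choice([-8, 8]))  # nudge one layer width
    return g

def evolve(X_train, y_train, X_val, y_val, pop_size=6, generations=5):
    population = [[random.choice([16, 32, 64]) for _ in range(random.randint(1, 3))]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda g: fitness(g, X_train, y_train, X_val, y_val),
                        reverse=True)
        parents = scored[: pop_size // 2]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return population[0]  # best architecture from the last evaluated generation
```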

Proceedings Article (DOI)
05 Jun 2011
TL;DR: This paper presents a novel distance-based aggregate surrogate model for multiobjective optimization and describes a memetic multiobjective algorithm based on this model, which greatly reduces the number of required objective function evaluations.
Abstract: Evolutionary algorithms generally require a large number of objective function evaluations which can be costly in practice. These evaluations can be replaced by evaluations of a cheaper meta-model (surrogate model) of the objective functions. In this paper we present a novel distance based aggregate surrogate model for multiobjective optimization and describe a memetic multiobjective algorithm based on this model. Various variants of the models are tested and discussed and the algorithm is compared to standard multiobjective evolutionary algorithms. We show that our algorithm greatly reduces the number of required objective function evaluations.

25 citations
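The general mechanism, surrogate pre-screening of offspring so that only promising candidates reach the expensive objective functions, can be sketched as follows. The distance-based aggregate model itself is specific to the paper, so this sketch substitutes a generic inverse-distance-weighted aggregate over the archive of already-evaluated points as a stand-in.

```python
# Sketch of surrogate pre-screening in an evolutionary loop (illustrative; the paper's
# distance-based aggregate surrogate is replaced here by an inverse-distance stand-in).
import numpy as np

def surrogate_score(candidate, archive_x, archive_f):
    """Cheap per-objective estimate: aggregate objectives of evaluated points,
    weighted by inverse distance to the candidate."""
    d = np.linalg.norm(archive_x - candidate, axis=1) + 1e-12
    w = 1.0 / d
    return (w[:, None] * archive_f).sum(axis=0) / w.sum()

def prescreen(offspring, archive_x, archive_f, keep=5):
    """Keep only the offspring whose estimated aggregate objective is best."""
    scores = [surrogate_score(x, archive_x, archive_f).sum() for x in offspring]
    order = np.argsort(scores)  # lower aggregate objective is better (minimization)
    return [offspring[i] for i in order[:keep]]

# Only the pre-screened candidates are passed to the expensive objective functions,
# and their true objective values are then added to the archive.
```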


Cited by
Journal Article (DOI)
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
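The mail-filtering example in the abstract is the classic supervised-learning setting: learn a mapping from labelled messages to accept/reject decisions. A tiny illustrative sketch with scikit-learn, using made-up data and not drawn from the cited article:

```python
# Tiny illustration of the mail-filtering example: learn accept/reject from labelled
# messages (not part of the cited article; the data below is made up).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["cheap meds buy now", "meeting moved to 3pm",
            "win a free prize now", "lunch tomorrow?"]
labels = ["reject", "accept", "reject", "accept"]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)
print(filter_model.predict(["free meds prize"]))  # likely ['reject']
```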

Journal Article (DOI)
TL;DR: Experimental results indicate that MOEA/D-AWA outperforms the benchmark algorithms in terms of the IGD metric, particularly when the PF of the MOP is complex.
Abstract: Recently, MOEA/D (multi-objective evolutionary algorithm based on decomposition) has achieved great success in the field of evolutionary multi-objective optimization and has attracted a lot of attention. It decomposes a multi-objective optimization problem (MOP) into a set of scalar subproblems using uniformly distributed aggregation weight vectors and provides an excellent general algorithmic framework for evolutionary multi-objective optimization. Generally, the uniformity of weight vectors in MOEA/D can ensure the diversity of the Pareto-optimal solutions; however, it does not work as well when the target MOP has a complex Pareto front (PF), i.e., a discontinuous PF or a PF with a sharp peak or a low tail. To remedy this, we propose an improved MOEA/D with adaptive weight vector adjustment (MOEA/D-AWA). Based on an analysis of the geometric relationship between the weight vectors and the optimal solutions under the Chebyshev decomposition scheme, a new weight vector initialization method and an adaptive weight vector adjustment strategy are introduced in MOEA/D-AWA. The weights are adjusted periodically so that the weights of subproblems can be redistributed adaptively to obtain better uniformity of solutions. Meanwhile, computing effort devoted to subproblems with duplicate optimal solutions can be saved. Moreover, an external elite population is introduced to help add new subproblems into real sparse regions rather than pseudo sparse regions of the complex PF, that is, discontinuous regions of the PF. MOEA/D-AWA has been compared with four state-of-the-art MOEAs, namely the original MOEA/D, Adaptive-MOEA/D, -MOEA/D, and NSGA-II, on 10 widely used test problems, two newly constructed complex problems, and two many-objective problems. Experimental results indicate that MOEA/D-AWA outperforms the benchmark algorithms in terms of the IGD metric, particularly when the PF of the MOP is complex.

514 citations
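The Chebyshev (Tchebycheff) decomposition mentioned above turns the MOP into scalar subproblems of the form g(x | lambda, z*) = max_i lambda_i |f_i(x) - z*_i|, where z* is the ideal point. A one-function sketch for orientation (not the paper's adaptive-weight strategy itself):

```python
# Chebyshev (Tchebycheff) scalarization used in MOEA/D-style decomposition:
# g(x | lambda, z*) = max_i lambda_i * |f_i(x) - z*_i|, with z* the ideal point.
import numpy as np

def tchebycheff(f_x, weights, z_star):
    """Scalar subproblem value for objective vector f_x under weight vector `weights`."""
    return np.max(weights * np.abs(f_x - z_star))

# Example: two objectives, weight vector (0.5, 0.5), ideal point (0, 0).
print(tchebycheff(np.array([0.2, 0.8]), np.array([0.5, 0.5]), np.zeros(2)))  # 0.4
```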

Journal Article (DOI)
TL;DR: A comprehensive survey of the decomposition-based MOEAs proposed in the last decade is presented, including development of novel weight vector generation methods, use of new decomposition approaches, efficient allocation of computational resources, modifications in the reproduction operation, mating selection and replacement mechanism, hybridizing decompositions- and dominance-based approaches, etc.
Abstract: Decomposition is a well-known strategy in traditional multiobjective optimization. However, the decomposition strategy was not widely employed in evolutionary multiobjective optimization until Zhang and Li proposed the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in 2007. MOEA/D decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them in a collaborative manner using an evolutionary algorithm (EA). Each subproblem is optimized by utilizing information mainly from its several neighboring subproblems. Since the proposal of MOEA/D in 2007, decomposition-based MOEAs have attracted significant attention from researchers. Investigations have been undertaken in several directions, including the development of novel weight vector generation methods, the use of new decomposition approaches, efficient allocation of computational resources, modifications in the reproduction operation, mating selection and replacement mechanisms, hybridizing decomposition- and dominance-based approaches, etc. Furthermore, several attempts have been made at extending the decomposition-based framework to constrained multiobjective optimization and many-objective optimization, and at incorporating the preferences of decision makers. Additionally, there have been many attempts at applying decomposition-based MOEAs to solve complex real-world optimization problems. This paper presents a comprehensive survey of the decomposition-based MOEAs proposed in the last decade.

436 citations
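The survey's starting point, MOEA/D's decomposition into subproblems with neighbouring weight vectors, can be made concrete with a small sketch: generate uniformly spread weight vectors (shown here for two objectives) and assign each subproblem its nearest neighbours. Function names and the neighbourhood size are illustrative choices, not prescribed by the survey.

```python
# Sketch of two MOEA/D building blocks discussed in the survey: uniformly spread weight
# vectors (two-objective case) and a neighbourhood of the T closest weight vectors.
import numpy as np

def weight_vectors_2d(n):
    """n evenly spaced weight vectors on the 2-objective simplex."""
    a = np.linspace(0.0, 1.0, n)
    return np.stack([a, 1.0 - a], axis=1)

def neighbourhoods(weights, T):
    """For each subproblem, indices of the T nearest weight vectors (Euclidean distance)."""
    d = np.linalg.norm(weights[:, None] - weights[None, :], axis=-1)
    return np.argsort(d, axis=1)[:, :T]

W = weight_vectors_2d(5)
print(neighbourhoods(W, 3))  # each row: the subproblem itself plus its 2 closest neighbours
```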

Patent
31 Aug 2011
TL;DR: In this article, a method for modifying an image is presented, which consists of displaying an image, the image comprising a portion of an object; determining if an edge of the object is in a location within the portion; and detecting movement, in a member direction, of an operating member with respect to the edge.
Abstract: A method is provided for modifying an image. The method comprises displaying an image, the image comprising a portion of an object; and determining if an edge of the object is in a location within the portion. The method further comprises detecting movement, in a member direction, of an operating member with respect to the edge. The method still further comprises moving, if the edge is not in the location, the object in an object direction corresponding to the detected movement; and modifying, if the edge is in the location, the image in response to the detected movement, the modified image comprising the edge in the location.

434 citations
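The decision logic described in the claim can be summarised in a few lines: while the object's edge has not yet reached the location in the viewport, a drag moves the object; once the edge is at that location, further movement modifies the displayed image instead. The object and method names below are ours, purely for illustration, not the patent's terminology.

```python
# Illustrative summary of the claimed behaviour (names are ours, not the patent's):
# a drag moves the object until its edge reaches the location, after which further
# movement in the same direction modifies the displayed image instead.
def handle_drag(view, drag_vector):
    if not view.edge_at_location():
        view.move_object(drag_vector)    # edge not yet at the location: move the object
    else:
        view.modify_image(drag_vector)   # edge at the location: modify the image
```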