Other affiliations: Royal Dutch Shell, Econometric Institute, Erasmus Research Institute of Management
Bio: Uzay Kaymak is an academic researcher from Eindhoven University of Technology. The author has contributed to research in topics: Fuzzy logic & Fuzzy set. The author has an h-index of 41 and has co-authored 349 publications receiving 6,360 citations. Previous affiliations of Uzay Kaymak include Royal Dutch Shell & Econometric Institute.
Papers published on a yearly basis
01 Jun 1998
TL;DR: A rule base simplification method that uses a measure of similarity to reduce the number of fuzzy sets in the model, merging similar fuzzy sets into a common fuzzy set that replaces them in the rule base.
Abstract: In fuzzy rule-based models acquired from numerical data, redundancy may be present in the form of similar fuzzy sets that represent compatible concepts. This results in an unnecessarily complex and less transparent linguistic description of the system. By using a measure of similarity, a rule base simplification method is proposed that reduces the number of fuzzy sets in the model. Similar fuzzy sets are merged to create a common fuzzy set to replace them in the rule base. If the redundancy in the model is high, merging similar fuzzy sets might result in equal rules that can also be merged, thereby reducing the number of rules as well. The simplified rule base is computationally more efficient and linguistically more tractable. The approach has been successfully applied to fuzzy models of real-world systems.
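The simplification idea above can be sketched in a few lines. This is a minimal illustration, not the paper's exact method: it assumes triangular membership functions, a Jaccard-style similarity on a discretized domain, parameter averaging as the merge operator, and an illustrative merge threshold.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def similarity(mu_a, mu_b):
    """Jaccard-style similarity |A intersect B| / |A union B| on a grid."""
    return np.sum(np.minimum(mu_a, mu_b)) / np.sum(np.maximum(mu_a, mu_b))

def merge_params(p, q):
    """Merge two triangular sets by averaging their parameters (illustrative)."""
    return tuple((pi + qi) / 2.0 for pi, qi in zip(p, q))

x = np.linspace(0.0, 10.0, 1001)
A = (2.0, 4.0, 6.0)
B = (2.5, 4.5, 6.5)            # similar to A: compatible concepts
S = similarity(tri(x, *A), tri(x, *B))
if S > 0.5:                    # illustrative threshold, not from the paper
    C = merge_params(A, B)     # common set replacing A and B in the rule base
```

After merging, identical rules can be detected and collapsed, which is how the rule count shrinks as well.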
TL;DR: A novel meta-heuristic approach, based on a hybrid genetic algorithm combined with constructive heuristics, for just-in-time production and timely delivery of ready-mixed concrete to distributed customers.
Abstract: The coordination of just-in-time production and transportation in a network of partially independent facilities to guarantee timely delivery to distributed customers is one of the most challenging aspects of supply chain management. From a theoretical perspective, the timely production/distribution can be viewed as a hybrid combination of planning, scheduling and routing problems, each notoriously affected by nearly prohibitive combinatorial complexity. From a practical viewpoint, the problem calls for a trade-off between risks and profits. This paper focuses on ready-mixed concrete delivery: in addition to the mentioned complexity, strict time constraints forbid both earliness and lateness of the supply. After developing a detailed model of the considered problem, we propose a novel meta-heuristic approach based on a hybrid genetic algorithm combined with constructive heuristics. A detailed case study derived from industrial data is used to illustrate the potential of the proposed approach.
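The hybrid scheme described above pairs a genetic algorithm with a constructive decoder. Here is a heavily simplified toy sketch of that pattern, not the paper's model: the chromosome is an order permutation, a greedy constructive heuristic turns it into a schedule, and the fitness penalizes both earliness and lateness; due times and durations are invented for illustration.

```python
import random

DUE = [4, 2, 8, 6]        # hypothetical due times for four orders
DUR = [2, 2, 2, 2]        # hypothetical service durations

def decode(perm):
    """Constructive heuristic: schedule greedily in chromosome order."""
    t, penalty = 0, 0
    for job in perm:
        t += DUR[job]
        penalty += abs(t - DUE[job])   # earliness and lateness both penalized
    return penalty

def crossover(p, q):
    """Order crossover (OX): keep a slice of p, fill the rest in q's order."""
    i, j = sorted(random.sample(range(len(p)), 2))
    mid = p[i:j]
    rest = [g for g in q if g not in mid]
    return rest[:i] + mid + rest[i:]

def ga(pop_size=20, gens=50):
    pop = [random.sample(range(len(DUE)), len(DUE)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=decode)                 # elitist selection
        elite = pop[: pop_size // 2]
        pop = elite + [crossover(random.choice(elite), random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=decode)

random.seed(0)
best = ga()
```

The real problem adds vehicle routing and multi-plant coordination on top of this skeleton; the point here is only the division of labor between the evolutionary search and the constructive decoder.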
18 Mar 2013
TL;DR: An analysis of how emoticons typically convey sentiment, and how this can be exploited by using a novel, manually created emoticon sentiment lexicon to improve a state-of-the-art lexicon-based sentiment classification method.
Abstract: As people increasingly use emoticons in text in order to express, stress, or disambiguate their sentiment, it is crucial for automated sentiment analysis tools to correctly account for such graphical cues for sentiment. We analyze how emoticons typically convey sentiment and demonstrate how we can exploit this by using a novel, manually created emoticon sentiment lexicon in order to improve a state-of-the-art lexicon-based sentiment classification method. We evaluate our approach on 2,080 Dutch tweets and forum messages, which all contain emoticons and have been manually annotated for sentiment. On this corpus, accounting for the sentiment implied by emoticons at the paragraph level significantly improves sentiment classification accuracy. This indicates that whenever emoticons are used, their associated sentiment dominates the sentiment conveyed by textual cues and forms a good proxy for intended sentiment.
01 Dec 2002
TL;DR: A framework for model-based predictive control with fuzzy decision functions, aimed at the labor-intensive, time-consuming, and expensive process of designing and implementing model-based control systems.
Abstract (contents): Fuzzy Decision Making; Fuzzy Decision Functions; Fuzzy Aggregated Membership Control; Modeling and Identification; Fuzzy Decision Making for Modeling; Fuzzy Model-Based Control; Performance Criteria; Model-Based Control with Fuzzy Decision Functions; Derivative-Free Optimization; Advanced Optimization Issues; Application Example; Future Developments. Appendices: Model-Based Predictive Control; Nonlinear Internal Model Control.
TL;DR: Two techniques to improve the calculation of the fuzzy covariance matrix in the Gustafson-Kessel (GK) clustering algorithm: one for small or linearly correlated data within a cluster, and one that reduces overfitting when the GK algorithm is employed to extract Takagi-Sugeno fuzzy models from data.
Abstract: This article presents two techniques to improve the calculation of the fuzzy covariance matrix in the Gustafson-Kessel (GK) clustering algorithm. The first one overcomes problems that occur in the standard GK clustering when the number of data samples is small or when the data within a cluster are linearly correlated. The improvement is achieved by fixing the ratio between the maximal and minimal eigenvalue of the covariance matrix. The second technique is useful when the GK algorithm is employed in the extraction of Takagi-Sugeno fuzzy models from data. It reduces the risk of overfitting when the number of training samples is low in comparison to the number of clusters. This is achieved by adding a scaled unity matrix to the calculated covariance matrix. Numerical examples are presented to demonstrate the benefits of the proposed techniques.
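The two fixes described in the abstract can be sketched directly on a covariance matrix. This is a schematic reading of the abstract, with illustrative parameter names (`max_ratio`, `gamma`) rather than the paper's exact formulation.

```python
import numpy as np

def condition_covariance(F, max_ratio=1e15, gamma=0.0):
    """Condition a fuzzy covariance matrix F with the two fixes:

    1) blend in a scaled unity matrix (weight gamma) against overfitting;
    2) cap the ratio between the largest and smallest eigenvalue.
    Parameter names and defaults are illustrative, not from the paper.
    """
    n = F.shape[0]
    if gamma > 0.0:
        F = (1.0 - gamma) * F + gamma * np.eye(n)       # scaled unity matrix
    w, V = np.linalg.eigh(F)                            # F is symmetric
    w = np.maximum(w, w.max() / max_ratio)              # lift tiny eigenvalues
    return V @ np.diag(w) @ V.T

# Nearly singular covariance, as arises from linearly correlated cluster data:
F = np.array([[1.0, 0.999],
              [0.999, 1.0]])
F_fixed = condition_covariance(F, max_ratio=15.0, gamma=0.1)
```

Without conditioning, the eigenvalue ratio of `F` is about 2000, which makes the GK distance metric degenerate; after the fix it is bounded by `max_ratio`.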
TL;DR: A reflection on i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality whose surreal nature only intensifies with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
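The mail-filter example above can be made concrete with a toy learner. This is a deliberately naive word-count filter, invented here for illustration; it is not a method from the abstract, only a minimal instance of "learning which messages the user rejects."

```python
from collections import Counter

# Hypothetical training data: messages the user kept vs. rejected.
kept = ["project meeting notes", "quarterly report attached"]
rejected = ["win money now", "cheap money offer now"]

spam_words = Counter(w for m in rejected for w in m.split())
ham_words = Counter(w for m in kept for w in m.split())

def learned_filter(message):
    """Reject a message whose words look more like previously rejected mail."""
    spam_score = sum(spam_words[w] for w in message.split())
    ham_score = sum(ham_words[w] for w in message.split())
    return "reject" if spam_score > ham_score else "keep"

learned_filter("money now")    # scored against the user's own history
```

As the user keeps or rejects more mail, the counters (and hence the filtering rules) update automatically, which is the customization point the passage makes.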
01 Jan 2002
TL;DR: VOSviewer’s ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.
Abstract: We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. The functionality of VOSviewer is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, an overview of VOSviewer’s functionality for displaying bibliometric maps is provided. In the second part, the technical implementation of specific parts of the program is discussed. Finally, in the third part, VOSviewer’s ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.
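Maps like the journal co-citation map mentioned above are typically built from a similarity matrix derived from co-occurrence counts. The sketch below shows one common normalization in bibliometric mapping, association strength, on invented toy counts; it is an illustration of the general approach, not necessarily VOSviewer's exact computation.

```python
import numpy as np

# Toy co-citation counts between four journals (symmetric, zero diagonal).
C = np.array([[0, 8, 1, 0],
              [8, 0, 2, 1],
              [1, 2, 0, 6],
              [0, 1, 6, 0]], dtype=float)

m = C.sum() / 2.0                 # total number of co-citation links
w = C.sum(axis=1)                 # total link strength per journal

# Association-strength similarity: s_ij = 2 m c_ij / (w_i w_j), which corrects
# for the fact that highly cited journals co-occur often by chance alone.
S = 2.0 * m * C / np.outer(w, w)
```

A layout algorithm then places strongly similar items close together, which is what produces the clusters visible in the displayed map.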