Author
Enrique Herrera-Viedma
Other affiliations: King Abdulaziz University, Sichuan University, Universidad Internacional de La Rioja
Bio: Enrique Herrera-Viedma is an academic researcher from the University of Granada. The author has contributed to research in topics: Group decision-making & Computer science. The author has an h-index of 98, co-authored 671 publications receiving 35,808 citations. Previous affiliations of Enrique Herrera-Viedma include King Abdulaziz University & Sichuan University.
Papers
TL;DR: A study on the steps to follow in linguistic decision analysis is presented in a context of multi-criteria/multi-person decision making.
Abstract: A study on the steps to follow in linguistic decision analysis is presented in a context of multi-criteria/multi-person decision making. Three steps are established for solving a multi-criteria decision making problem under linguistic information: (i) the choice of the linguistic term set with its semantics in order to express the linguistic performance values according to all the criteria, (ii) the choice of the aggregation operator of linguistic information in order to aggregate the linguistic performance values, and (iii) the choice of the best alternatives, which is made up of two phases: (a) the aggregation of linguistic information for obtaining a collective linguistic performance value on the alternatives, and (b) the exploitation of the collective linguistic performance value in order to establish a rank ordering among the alternatives for choosing the best alternatives. Finally, an example is shown.
1,522 citations
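The three steps above translate directly into a small pipeline. A minimal Python sketch, assuming a seven-label term set and a simple rounded-index averaging as the aggregation operator (the paper surveys several operators; this particular choice is an illustrative assumption, not one prescribed by the paper):

```python
# Illustrative sketch of the three-step linguistic decision scheme.
# The term set and the rounded-index aggregation are assumptions.

TERMS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

def aggregate(labels):
    """Aggregate linguistic labels by averaging their indices and
    rounding back to the nearest term (a simple symbolic scheme)."""
    idx = round(sum(TERMS.index(l) for l in labels) / len(labels))
    return TERMS[idx]

def rank_alternatives(performance):
    """performance: dict alternative -> list of linguistic values
    (one per criterion/expert). Returns alternatives best-first."""
    collective = {alt: aggregate(vals) for alt, vals in performance.items()}
    return sorted(collective,
                  key=lambda a: TERMS.index(collective[a]), reverse=True)

if __name__ == "__main__":
    perf = {"A1": ["high", "medium", "very_high"],
            "A2": ["low", "medium", "high"]}
    print(rank_alternatives(perf))  # -> ['A1', 'A2']
```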
TL;DR: The aim of this article is to review, analyze, and compare some of the software tools used to carry out science mapping analysis, taking into account aspects such as the bibliometric techniques available and the different kinds of analysis.
Abstract: Science mapping aims to build bibliometric maps that describe how specific disciplines, scientific domains, or research fields are conceptually, intellectually, and socially structured. Different techniques and software tools have been proposed to carry out science mapping analysis. The aim of this article is to review, analyze, and compare some of these software tools, taking into account aspects such as the bibliometric techniques available and the different kinds of analysis.
1,444 citations
TL;DR: This approach combines performance analysis and science mapping for detecting and visualizing conceptual subdomains (particular themes or general thematic areas) and allows us to quantify and visualize the thematic evolution of a given research field.
Abstract: This paper presents an approach to analyze the thematic evolution of a given research field. This approach combines performance analysis and science mapping for detecting and visualizing conceptual subdomains (particular themes or general thematic areas). It allows us to quantify and visualize the thematic evolution of a given research field. To do this, co-word analysis is used in a longitudinal framework in order to detect the different themes treated by the research field across the given time period. The performance analysis uses different bibliometric measures, including the h-index, with the purpose of measuring the impact of both the detected themes and thematic areas. The presented approach includes a visualization method for showing the thematic evolution of the studied field. Then, as an example, the thematic evolution of the Fuzzy Sets Theory field is analyzed using the two most important journals in the topic: Fuzzy Sets and Systems and IEEE Transactions on Fuzzy Systems.
1,094 citations
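The h-index used above has a compact operational definition: the largest h such that h of the items have at least h citations each. A minimal sketch (illustrative, not code from the paper):

```python
def h_index(citations):
    """Largest h such that h items have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i      # the i-th most-cited item still has >= i citations
        else:
            break
    return h

assert h_index([10, 8, 5, 4, 3]) == 4
assert h_index([0, 0]) == 0
```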
TL;DR: This paper presents a consensus model in group decision making under linguistic assessments, based on the use of linguistic preferences to provide individuals' opinions, and on the use of fuzzy majority of consensus, represented by means of a linguistic quantifier.
Abstract: This paper presents a consensus model in group decision making under linguistic assessments. It is based on the use of linguistic preferences to provide individuals' opinions, and on the use of fuzzy majority of consensus, represented by means of a linguistic quantifier. Several linguistic consensus degrees and linguistic distances are defined, acting on three levels. The consensus degrees indicate how far a group of individuals is from the maximum consensus, and the linguistic distances indicate how far each individual is from the current consensus labels over the preferences. This consensus model makes it possible to incorporate more human consistency in decision support systems.
1,093 citations
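To make the idea concrete, the sketch below computes a crude consensus degree on a single preference as normalized agreement between each expert's linguistic label and a collective label. It is a deliberate simplification: the paper's model defines consensus degrees and distances on three levels and weights them with a fuzzy-majority linguistic quantifier, none of which is reproduced here.

```python
# Hypothetical simplification of a linguistic consensus degree:
# agreement measured by inverse, normalized distance between label indices.

TERMS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

def collective_label(opinions):
    """Collective opinion as the rounded mean of the label indices."""
    idx = round(sum(TERMS.index(o) for o in opinions) / len(opinions))
    return TERMS[idx]

def consensus_degree(opinions):
    """1.0 means unanimous agreement; values near 0 mean wide dispersion."""
    g = len(TERMS) - 1                      # maximum possible index distance
    c = TERMS.index(collective_label(opinions))
    mean_dist = sum(abs(TERMS.index(o) - c) for o in opinions) / len(opinions)
    return 1 - mean_dist / g

print(consensus_degree(["high", "high", "very_high"]))  # ~0.94, near consensus
print(consensus_degree(["none", "perfect"]))            # 0.5, far from consensus
```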
TL;DR: A new characterization of the consistency property defined by the additive transitivity property of fuzzy preference relations is presented, and a method for constructing consistent fuzzy preference relations from a set of n − 1 preference data is proposed.
Abstract: In decision making, when decision makers express their opinions by means of preference relations, the study of consistency becomes a very important aspect in order to avoid misleading solutions. In decision making problems based on fuzzy preference relations the study of consistency is associated with the study of the transitivity property. In this paper, a new characterization of the consistency property defined by the additive transitivity property of the fuzzy preference relations is presented. Using this new characterization a method for constructing consistent fuzzy preference relations from a set of n − 1 preference data is proposed. Applying this method it is possible to assure better consistency of the fuzzy preference relations provided by the decision makers, and in such a way, to avoid inconsistent solutions in the decision making processes. Additionally, a similar study of consistency is developed for the case of multiplicative preference relations.
929 citations
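Additive transitivity requires p_ik = p_ij + p_jk − 0.5 for all i, j, k, which is what allows a complete reciprocal fuzzy preference relation over n alternatives to be reconstructed from only the n − 1 adjacent judgements p_12, p_23, ..., p_(n−1)n. A minimal sketch of that construction; the rescaling applied when reconstructed values fall outside [0, 1] is an assumed normalization (the paper handles this case with its own transformation):

```python
import numpy as np

def complete_relation(chain):
    """Build a consistent reciprocal fuzzy preference relation from the
    n - 1 adjacent judgements chain = [p12, p23, ..., p(n-1)n], using
    additive transitivity: p_ik = p_ij + p_jk - 0.5."""
    n = len(chain) + 1
    p = np.full((n, n), 0.5)                 # indifference on the diagonal
    for i in range(n):
        for j in range(i + 1, n):
            # chaining adjacent values: p_ij = sum of steps - (j - i - 1)/2
            p[i, j] = sum(chain[i:j]) - (j - i - 1) / 2
            p[j, i] = 1 - p[i, j]            # additive reciprocity
    lo, hi = p.min(), p.max()
    if lo < 0 or hi > 1:
        # Assumed rescaling into [0, 1]: f(x) = (x + c) / (1 + 2c)
        # preserves reciprocity, since f(x) + f(1 - x) = 1.
        c = max(-lo, hi - 1)
        p = (p + c) / (1 + 2 * c)
    return p

# p13 comes out as 0.7 + 0.6 - 0.5 = 0.8, consistent by construction.
print(complete_relation([0.7, 0.6]).round(2))
```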
Cited by
Book
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
38,208 citations
Journal Article
28,685 citations
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
13,246 citations
TL;DR: This paper proposes a unique open-source tool, designed by the authors, called bibliometrix, for performing comprehensive science mapping analysis; programmed in R, it can be rapidly upgraded and integrated with other statistical R-packages.
Abstract: The use of bibliometrics is gradually extending to all disciplines. It is particularly suitable for science mapping at a time when the emphasis on empirical contributions is producing voluminous, fragmented, and controversial research streams. Science mapping is complex and unwieldy because it is multi-step and frequently requires numerous and diverse software tools, which are not all necessarily freeware. Although automated workflows that integrate these software tools into an organized data flow are emerging, in this paper we propose a unique open-source tool, designed by the authors, called bibliometrix, for performing comprehensive science mapping analysis. bibliometrix supports a recommended workflow to perform bibliometric analyses. As it is programmed in R, the proposed tool is flexible and can be rapidly upgraded and integrated with other statistical R-packages. It is therefore useful in a constantly changing science such as bibliometrics.
3,502 citations
TL;DR: The rating of each alternative and the weight of each criterion are described by linguistic terms which can be expressed in triangular fuzzy numbers, and a vertex method is proposed to calculate the distance between two triangular fuzzy numbers.
Abstract: The aim of this paper is to extend the TOPSIS to the fuzzy environment. Owing to vague concepts frequently represented in decision data, crisp values are inadequate to model real-life situations. In this paper, the rating of each alternative and the weight of each criterion are described by linguistic terms which can be expressed in triangular fuzzy numbers. Then, a vertex method is proposed to calculate the distance between two triangular fuzzy numbers. According to the concept of the TOPSIS, a closeness coefficient is defined to determine the ranking order of all alternatives by calculating the distances to both the fuzzy positive-ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS) simultaneously. Finally, an example is shown to highlight the procedure of the proposed method.
3,109 citations
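The vertex distance between triangular fuzzy numbers a = (a1, a2, a3) and b = (b1, b2, b3) is d(a, b) = sqrt(((a1 − b1)^2 + (a2 − b2)^2 + (a3 − b3)^2) / 3), and each alternative is ranked by its closeness coefficient CC = d− / (d+ + d−). A minimal sketch of these two pieces; taking (1, 1, 1) and (0, 0, 0) as the per-criterion FPIS and FNIS presumes ratings already normalized to [0, 1], which is an assumption here rather than the paper's exact construction:

```python
import math

def vertex_distance(a, b):
    """Vertex distance between triangular fuzzy numbers a and b,
    each given as a triple (l, m, u)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

def closeness(ratings, fpis=(1.0, 1.0, 1.0), fnis=(0.0, 0.0, 0.0)):
    """Closeness coefficient of one alternative given its normalized,
    weighted fuzzy rating per criterion. The FPIS/FNIS defaults assume
    ratings already lie in [0, 1] (see lead-in)."""
    d_plus = sum(vertex_distance(r, fpis) for r in ratings)
    d_minus = sum(vertex_distance(r, fnis) for r in ratings)
    return d_minus / (d_plus + d_minus)

# Two alternatives rated on two criteria; higher closeness ranks first.
a1 = [(0.6, 0.8, 1.0), (0.4, 0.6, 0.8)]
a2 = [(0.2, 0.4, 0.6), (0.5, 0.7, 0.9)]
ranking = sorted({"A1": closeness(a1), "A2": closeness(a2)}.items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```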