Author

Guy Melançon

Bio: Guy Melançon is an academic researcher from the University of Bordeaux. He has contributed to research on the topics of visualization and information visualization. He has an h-index of 26 and has co-authored 127 publications receiving 4,322 citations. Previous affiliations of Guy Melançon include the Centre national de la recherche scientifique and Centrum Wiskunde & Informatica.


Papers
Journal ArticleDOI
TL;DR: This is a survey of graph visualization and navigation techniques as used in information visualization; it approaches the results of traditional graph drawing from a different perspective.
Abstract: This is a survey on graph visualization and navigation techniques, as used in information visualization. Graphs appear in numerous applications such as Web browsing, state-transition diagrams, and data structures. The ability to visualize and to navigate in these potentially large, abstract graphs is often a crucial part of an application. Information visualization has specific requirements, which means that this survey approaches the results of traditional graph drawing from a different perspective.

1,648 citations
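The node-link drawings this survey covers are often produced with a force-directed (spring-embedder) layout. The sketch below is a minimal, self-contained illustration of that general idea; it is not an algorithm taken from the survey, and the constants (ideal edge length, cooling schedule) are arbitrary choices made here for illustration.

```python
import math
import random

def force_directed_layout(nodes, edges, iterations=200, width=1.0, height=1.0):
    """Minimal spring-embedder sketch: repulsion between all node pairs,
    attraction along edges. Returns {node: (x, y)} positions."""
    k = math.sqrt(width * height / len(nodes))   # ideal edge length (assumed)
    pos = {v: (random.uniform(0, width), random.uniform(0, height)) for v in nodes}
    temp = width / 10                            # cooling temperature (assumed)

    for _ in range(iterations):
        disp = {v: [0.0, 0.0] for v in nodes}
        # repulsive forces between every pair of nodes
        for v in nodes:
            for u in nodes:
                if u == v:
                    continue
                dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
                dist = math.hypot(dx, dy) or 1e-9
                f = k * k / dist
                disp[v][0] += dx / dist * f
                disp[v][1] += dy / dist * f
        # attractive forces along edges
        for u, v in edges:
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            dist = math.hypot(dx, dy) or 1e-9
            f = dist * dist / k
            disp[v][0] -= dx / dist * f
            disp[v][1] -= dy / dist * f
            disp[u][0] += dx / dist * f
            disp[u][1] += dy / dist * f
        # move nodes, limited by the cooling temperature
        for v in nodes:
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            step = min(d, temp)
            pos[v] = (pos[v][0] + dx / d * step, pos[v][1] + dy / d * step)
        temp *= 0.95
    return pos

if __name__ == "__main__":
    nodes = ["a", "b", "c", "d"]
    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
    print(force_directed_layout(nodes, edges))
```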

Book ChapterDOI
TL;DR: The possibilities to collect and store data increase faster than our ability to use the data for making decisions; in most applications raw data has no value in itself, and the goal is instead to extract the information it contains.
Abstract: We are living in a world which faces a rapidly increasing amount of data to be dealt with on a daily basis. In the last decade, the steady improvement of data storage devices and means to create and collect data along the way influenced our way of dealing with information: Most of the time, data is stored without filtering and refinement for later use. Virtually every branch of industry or business, and any political or personal activity nowadays generate vast amounts of data. Making matters worse, the possibilities to collect and store data increase at a faster rate than our ability to use it for making decisions. However, in most applications, raw data has no value in itself; instead we want to extract the information contained in it.

1,047 citations

Proceedings ArticleDOI
19 Oct 2003
TL;DR: The authors describe a metric designed to identify the weakest edges in a small-world network, leading to an easy, low-cost filtering procedure that breaks a graph up into smaller, highly connected components.
Abstract: Many networks under study in information visualization are "small world" networks. These networks first appeared in the study of social networks and were shown to be relevant models in other application domains such as software reverse engineering and biology. Furthermore, many of these networks actually have a multiscale nature: they can be viewed as a network of groups that are themselves small world networks. We describe a metric designed to identify the weakest edges in a small-world network, leading to an easy and low-cost filtering procedure that breaks up a graph into smaller, highly connected components. We show how this metric can be exploited through an interactive navigation of the network based on semantic zooming. Once the network is decomposed into a hierarchy of sub-networks, a user can easily find groups and subgroups of actors and understand their dynamics.

225 citations
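As a rough illustration of the filtering idea, the sketch below scores each edge by the Jaccard overlap of its endpoints' neighborhoods, drops edges below a threshold, and returns the connected components of what remains. The overlap score and the threshold value are stand-ins chosen here for illustration; the strength metric defined in the paper is more refined.

```python
from collections import defaultdict

def neighborhood_overlap(adj, u, v):
    """Jaccard overlap of the neighborhoods of u and v (a simple stand-in
    for the edge-strength metric described in the paper)."""
    nu, nv = adj[u] - {v}, adj[v] - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def filter_weak_edges(edges, threshold=0.2):
    """Drop edges whose strength falls below `threshold` and return the
    connected components of what remains (the highly connected groups)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    strong = [(u, v) for u, v in edges if neighborhood_overlap(adj, u, v) >= threshold]

    # connected components of the filtered graph
    strong_adj = defaultdict(set)
    for u, v in strong:
        strong_adj[u].add(v)
        strong_adj[v].add(u)
    seen, components = set(), []
    for start in adj:   # nodes whose edges were all filtered become singletons
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(strong_adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components
```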

Journal ArticleDOI
TL;DR: This contribution addresses two main issues by bringing together decision-making approaches and opinion dynamics to develop a similarity-confidence-consistency based social network that enables agents to provide their opinions, with the possibility of allocating uncertainty by means of intuitionistic fuzzy preference relations, while interacting with like-minded agents in order to achieve an agreement.

141 citations
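The consensus-reaching part of this idea can be sketched with intuitionistic fuzzy values (a membership/non-membership pair) and a bounded-confidence style update in which each agent averages only with agents it finds sufficiently similar. The similarity measure and update rule below are simple illustrative choices, not the model developed in the paper.

```python
from dataclasses import dataclass

@dataclass
class IFV:
    """Intuitionistic fuzzy value: membership mu and non-membership nu,
    with mu + nu <= 1; the remainder 1 - mu - nu is the hesitation degree."""
    mu: float
    nu: float

def similarity(a: IFV, b: IFV) -> float:
    """One simple similarity measure between two intuitionistic fuzzy values."""
    return 1.0 - (abs(a.mu - b.mu) + abs(a.nu - b.nu)) / 2.0

def consensus_step(opinions, threshold=0.8):
    """Each agent averages with the agents it deems similar enough
    (a bounded-confidence style update; not the paper's exact model)."""
    new = []
    for o in opinions:
        close = [p for p in opinions if similarity(o, p) >= threshold]
        mu = sum(p.mu for p in close) / len(close)
        nu = sum(p.nu for p in close) / len(close)
        new.append(IFV(mu, nu))
    return new

if __name__ == "__main__":
    agents = [IFV(0.8, 0.1), IFV(0.7, 0.2), IFV(0.2, 0.6)]
    for _ in range(10):
        agents = consensus_step(agents)
    print(agents)
```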

Proceedings ArticleDOI
10 May 2003
TL;DR: A simple, fast, and easy-to-implement method for finding relatively good clusterings of software systems: a straightforward edge-strength metric, defined in terms of the neighborhoods of an edge's end vertices, is used to identify the weak edges of the graph, and clustering quality is assessed with the MQ measure.
Abstract: We describe a simple, fast, and easy-to-implement method for finding relatively good clusterings of software systems. Our method relies on the ability to compute the strength of an edge in a graph by applying a straightforward metric defined in terms of the neighborhoods of its end vertices. The metric is used to identify the weak edges of the graph, which are momentarily deleted to break it into several components. We study the quality metric MQ introduced by S. Mancoridis et al. (1998) and exhibit mathematical properties that make it a good measure for clustering quality. Letting the threshold weakness of edges vary defines a path, i.e. a sequence of clusterings in the solution space (of all possible clusterings of the graph). This path is described in terms of a curve linking MQ to the weakness of the edges in the graph.

94 citations
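For reference, the sketch below computes a basic MQ-style score for a given clustering: intra-cluster connectivity is rewarded and inter-cluster connectivity is penalised. The formulation follows the commonly quoted form of the measure; consult Mancoridis et al. (1998) for the exact definition studied in the paper.

```python
from itertools import combinations

def basic_mq(clusters, edges):
    """Sketch of the basic MQ modularization-quality measure as it is
    commonly stated; see Mancoridis et al. (1998) for the exact definition."""
    k = len(clusters)
    members = {v: i for i, cluster in enumerate(clusters) for v in cluster}

    intra = [0] * k          # mu_i: edges inside cluster i
    inter = {}               # eps_ij: edges between clusters i and j
    for u, v in edges:
        i, j = members[u], members[v]
        if i == j:
            intra[i] += 1
        else:
            key = (min(i, j), max(i, j))
            inter[key] = inter.get(key, 0) + 1

    # intra-connectivity A_i = mu_i / N_i^2
    A = [intra[i] / (len(clusters[i]) ** 2) for i in range(k)]
    if k == 1:
        return A[0]
    # inter-connectivity E_ij = eps_ij / (2 * N_i * N_j)
    E = [inter.get((i, j), 0) / (2 * len(clusters[i]) * len(clusters[j]))
         for i, j in combinations(range(k), 2)]
    return sum(A) / k - sum(E) / len(E)

if __name__ == "__main__":
    clusters = [{"a", "b", "c"}, {"d", "e"}]
    edges = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e"), ("c", "d")]
    print(basic_mq(clusters, edges))
```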


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
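The mail-filtering example from the abstract is easy to make concrete: a toy naive-Bayes-style filter learned from messages a user has already labelled. The training data and word-count model below are made up purely for illustration.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) with label 'spam' or 'ham'.
    Returns per-label word counts and label counts (toy example data only)."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in messages:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Naive-Bayes-style scoring with add-one smoothing."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

if __name__ == "__main__":
    training = [("win money now", "spam"), ("cheap money offer", "spam"),
                ("meeting agenda attached", "ham"), ("lunch tomorrow?", "ham")]
    wc, lc = train(training)
    print(classify("free money offer", wc, lc))   # expected: spam
```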


Proceedings ArticleDOI
22 Jan 2006
TL;DR: The authors review some of the major results in random graphs and some of the more challenging open problems, and touch on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations
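One classical result in this area is the connectivity threshold of the Erdős–Rényi G(n, p) model, around p = ln(n)/n. The sketch below samples G(n, p) graphs on either side of that threshold and checks connectivity; it only illustrates the model and is not taken from the talk.

```python
import math
import random

def gnp(n, p):
    """Sample an Erdős–Rényi G(n, p) graph as an adjacency list."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def is_connected(adj):
    """Depth-first search from an arbitrary node; connected iff all reached."""
    if not adj:
        return True
    seen, stack = set(), [next(iter(adj))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] - seen)
    return len(seen) == len(adj)

if __name__ == "__main__":
    n = 200
    for c in (0.5, 1.5):                 # below / above the connectivity threshold
        p = c * math.log(n) / n
        hits = sum(is_connected(gnp(n, p)) for _ in range(20))
        print(f"p = {c} * ln(n)/n: connected in {hits}/20 samples")
```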

