
Showing papers by "Laurent Viennot published in 2003"


12 May 2003
TL;DR: This paper shows that multipoint relaying can be used as well in reactive protocols in order to save overhead in route discovery and specifies a very simple reactive protocol called MPRDV (Multipoint Relay Distance Vector protocol).
Abstract: Multipoint relays were introduced in the proactive protocol OLSR in order to optimize the flooding overhead of control traffic. In this paper we show that multipoint relaying can also be used in reactive protocols to save overhead in route discovery. To this end we specify a very simple reactive protocol called MPRDV (Multipoint Relay Distance Vector protocol). In MPRDV, route requests and route replies are all flooded via Multipoint Relays (MPR), and both open routes toward their originators. Route repairs are performed by flooding new route requests. We show by simulation that MPR flooding does not lead to the control-traffic explosion experienced with a basic reactive protocol in the presence of frequent route discoveries and failures. MPR also provides a further optimization: it tends to offer optimal routes to data packets, which increases the protocol's performance.

10 citations
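The greedy heuristic behind multipoint relay selection, on which both OLSR and MPRDV rely, can be sketched as follows. This is a simplified illustration, not the full RFC 3626 procedure (it ignores willingness values and the pre-selection of neighbors that are the sole route to some two-hop node); the function and parameter names are ours.

```python
def select_mprs(one_hop, two_hop_via):
    """Greedy MPR selection: pick a subset of one-hop neighbors that
    together cover every two-hop neighbor.

    one_hop: set of one-hop neighbor ids.
    two_hop_via: dict mapping each one-hop neighbor to the set of
    two-hop neighbors reachable through it.
    """
    uncovered = set().union(*two_hop_via.values()) if two_hop_via else set()
    mprs = set()
    while uncovered:
        # Pick the neighbor covering the most still-uncovered two-hop nodes.
        best = max(one_hop,
                   key=lambda n: len(two_hop_via.get(n, set()) & uncovered))
        gained = two_hop_via.get(best, set()) & uncovered
        if not gained:
            break  # remaining two-hop nodes unreachable via this one-hop set
        mprs.add(best)
        uncovered -= gained
    return mprs
```

Flooding a route request only through the selected MPRs (rather than having every neighbor rebroadcast) is what keeps the control traffic bounded.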


Proceedings Article
20 May 2003
TL;DR: The web graph has been widely adopted as the core description of the web structure, but little attention has been paid to the relationship between the web graph and the location of the pages.
Abstract: The web graph has been widely adopted as the core description of the web structure [4]. However, little attention has been paid to the relationship between the web graph and the location of the pages. It has already been noticed that links are often local (i.e. from a page to another page of the same server), and this can be used for efficient encoding of the web graph [9,7]. Locality in the web can be further modelled by the clustered graph induced by the prefix tree of URLs. The web tree's internal nodes are the common prefixes of URLs and its leaves are the URLs themselves. A prefix ordering of URLs according to this tree makes it possible to observe local structure in the web directly on the adjacency matrix M of the web graph. M splits into two terms: M = D + S, where D is block diagonal and S is a very sparse matrix. The blocks of D observed along the diagonal are sets of pages strongly related to one another.

6 citations
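The M = D + S split can be illustrated with a small sketch. As a simplifying assumption, pages are clustered only by server (the URL's host part) rather than by the full URL prefix tree, and lexicographic sorting of URLs stands in for the prefix ordering; all names here are illustrative.

```python
from urllib.parse import urlparse

def split_web_graph(links):
    """Order pages by URL and split the link set into D (intra-server
    links, forming the diagonal blocks of the adjacency matrix) and
    S (inter-server links, the sparse remainder).

    links: list of (source_url, target_url) pairs.
    Returns (ordered page list, D edges, S edges) with edges given as
    (row, column) index pairs into the ordered page list.
    """
    # Lexicographic order groups URLs sharing a prefix, so pages of one
    # server occupy consecutive rows/columns of the matrix.
    pages = sorted({u for edge in links for u in edge})
    index = {u: i for i, u in enumerate(pages)}

    def host(url):
        return urlparse(url).netloc

    D, S = [], []
    for src, dst in links:
        bucket = D if host(src) == host(dst) else S
        bucket.append((index[src], index[dst]))
    return pages, D, S
```

With this ordering, the intra-server edges in D cluster along the diagonal, which is exactly the structure the abstract describes and the property exploited for compact web-graph encodings.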




01 Jan 2003
TL;DR: This paper estimates how accurate the view of the web obtained by crawling is, by comparing crawling with other ways of discovering the web (mainly analyzing server or proxy logs of web surfers' activity).
Abstract: The web is now de facto the first place to publish data. However, retrieving the whole database represented by the web appears almost impossible. Some parts are known to be hard to discover automatically, giving rise to the so-called hidden or invisible web. Search engines try to index most of the web, and almost all related work is based on discovering the web by crawling. This paper is devoted to estimating how accurate the view of the web obtained by crawling is. Our approach is to compare crawling with other ways of discovering the web (mainly by analyzing server or proxy logs of web surfers' activity). This work is a first step towards identifying the observable web.

4 citations
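The comparison the paper performs, between the set of URLs reached by a crawler and the set observed in server or proxy logs, amounts to simple set arithmetic. A minimal sketch with illustrative names (the paper's actual datasets are real crawls and surfer logs):

```python
def compare_views(crawled, logged):
    """Compare a crawler's view of the web with the view from logs.

    crawled: set of URLs discovered by crawling.
    logged: set of URLs observed in server or proxy logs.
    Returns counts of each region of the Venn diagram plus the fraction
    of log-observed URLs that the crawl also found.
    """
    both = crawled & logged
    return {
        "crawl_only": len(crawled - logged),  # crawled, never seen by surfers
        "logs_only": len(logged - crawled),   # seen by surfers, missed by crawl
        "both": len(both),
        "crawl_coverage_of_logs": len(both) / len(logged) if logged else 0.0,
    }
```

Pages that show up only in the logs are candidates for the part of the web that crawling alone cannot observe.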


12 May 2003
TL;DR: In this article, it is shown that PageRank can be decomposed into two distinct parts: an internal PageRank and an external PageRank.
Abstract: Launched in 1998, the Google search engine ranks pages by combining several factors, the main one of which is called PageRank. More precisely, pages are ranked using a numerical index (the "PageRank") computed for each page. We show that PageRank can be decomposed into two distinct parts, which we call internal PageRank and external PageRank. These two PageRanks play fundamentally different roles, and introducing them gives a better understanding of how PageRank behaves inside and outside a site. A first application is a local algorithm for estimating the PageRank of a site's pages. We also present quantitative results on the extent to which a site can "boost" its own PageRank.

1 citation
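The local estimation algorithm mentioned above can be sketched as a power iteration restricted to the site's pages: internal PageRank flows along intra-site links, while the rank arriving from outside the site is treated as a fixed input. This is an illustrative reconstruction under stated assumptions, not the paper's exact algorithm; the damping factor d = 0.85, the uniform teleport term, and all names are ours.

```python
def site_pagerank(internal_links, external_in, n_site, d=0.85, iters=50):
    """Estimate the PageRank of a site's pages using only local data.

    internal_links: dict {page index: list of in-site pages it links to}.
    external_in: external_in[i] is a fixed estimate of the rank flowing
    into page i from pages outside the site (the "external" part).
    n_site: number of pages in the site.
    """
    out_deg = {p: len(targets) for p, targets in internal_links.items()}
    rank = [1.0 / n_site] * n_site
    for _ in range(iters):
        # Teleport term plus the fixed external contribution.
        new = [(1 - d) / n_site + d * external_in[i] for i in range(n_site)]
        # Internal contribution: rank flowing along intra-site links.
        for p, targets in internal_links.items():
            if targets:
                share = d * rank[p] / out_deg[p]
                for q in targets:
                    new[q] += share
        rank = new
    return rank
```

With external_in set to zero, this reduces to ordinary PageRank on the site's internal graph; raising external_in for some page shows how inbound links lift the whole site, which is the effect the internal/external decomposition is meant to separate.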