
Showing papers by "Katsumi Tanaka published in 2006"


Proceedings ArticleDOI
23 May 2006
TL;DR: The directions available for tighter integration of Web search with a GIS, in terms of extraction, knowledge discovery, and presentation, are discussed, and implementations are described to support the argument that the integration must go beyond the simple map-and-hyperlink architecture.
Abstract: Integration of Web search with geographic information has recently attracted much attention. There are a number of local Web search systems enabling users to find location-specific Web content. In this paper, however, we point out that this integration is still at a superficial level. Most local Web search systems today only link local Web content to a map interface. They are extensions of a conventional stand-alone geographic information system (GIS), applied to a Web-based client-server architecture. In this paper, we discuss the directions available for tighter integration of Web search with a GIS, in terms of extraction, knowledge discovery, and presentation. We also describe implementations to support our argument that the integration must go beyond the simple map-and-hyperlink architecture.

68 citations


Journal ArticleDOI
TL;DR: An application system is described that enables a TV news program to be presented concurrently with complementary news Web pages, providing the viewer with an easy way of acquiring more details about a news topic from different perspectives.

44 citations


Book ChapterDOI
23 Oct 2006
TL;DR: This system uses a conventional Web search engine to do two searches where queries are generated by connecting the user's query term with a conjunction “OR” and obtains background context shared by the query term and each returned coordinate term.
Abstract: We propose a method for searching coordinate terms using a traditional Web search engine. “Coordinate terms” are terms which have the same hypernym. There are several research methods that acquire coordinate terms, but they need parsed corpora or a lot of computation time. Our system does not need any preprocessing and can rapidly acquire coordinate terms for any query term. It uses a conventional Web search engine to do two searches where queries are generated by connecting the user's query term with a conjunction “OR”. It also obtains background context shared by the query term and each returned coordinate term.
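The OR-query idea can be sketched in a few lines: issue the query joined by the conjunction, then mine the returned snippets for terms that co-occur with it around "or". The `snippets` list below stands in for real search-engine results, and the regular-expression heuristic is an illustrative assumption, not the paper's exact procedure.

```python
import re
from collections import Counter

def coordinate_terms(query, snippets, top_k=5):
    """Mine candidate coordinate terms from result snippets by
    matching '<term> or <query>' and '<query> or <term>' patterns."""
    q = re.escape(query)
    pattern = re.compile(
        r'\b(\w+) or ' + q + r'\b|\b' + q + r' or (\w+)\b',
        re.IGNORECASE)
    counts = Counter()
    for snippet in snippets:
        for m in pattern.finditer(snippet):
            term = (m.group(1) or m.group(2)).lower()
            if term != query.lower():
                counts[term] += 1
    return [t for t, _ in counts.most_common(top_k)]

# Stand-in for snippets returned by a conventional search engine.
snippets = [
    "Compare Java or Python for scripting tasks.",
    "Many teams choose Python or Ruby for prototyping.",
    "Whether you prefer Python or Ruby is a matter of taste.",
]
print(coordinate_terms("Python", snippets))  # → ['ruby', 'java']
```

Because only snippet text is parsed, no corpus preprocessing is needed, which matches the paper's emphasis on speed.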

41 citations


Journal Article
TL;DR: In this paper, the authors describe a way to extract visitors' experiences from Weblogs (blogs) and also a method to mine and visualize activities of visitors at sightseeing spots.
Abstract: We describe a way to extract visitors' experiences from Weblogs (blogs) and also a way to mine and visualize activities of visitors at sightseeing spots. A system using our proposed method mines association rules between locations, time periods, and types of experiences out of blog entries. Association rules between experiences are also extracted. We constructed a local information search system that enables the user to specify a location, a time period, or a type of experience in a search query and find relevant Web content. Results of experiments showed that three proposed refinement algorithms applied to a conventional text mining method raise the precision and recall of the extracted rules.

37 citations


Proceedings ArticleDOI
22 Aug 2006
TL;DR: A meta-archive approach for increasing the coverage of past Web pages and for providing a unified interface to the past Web is proposed, and query-based and localized approaches for filtered browsing that enhance and speed up browsing and information retrieval from Web archives are introduced.
Abstract: While the Internet community recognized early on the need to store and preserve past content of the Web for future use, the tools developed so far for retrieving information from Web archives are still difficult to use and far less efficient than those developed for the "live Web". We expect that future information retrieval systems will utilize both the "live" and "past Web" and have thus developed a general framework for a past Web browser. A browser built using this framework would be a client-side system that downloads, in real time, past page versions from Web archives for their customized presentation. It would use passive browsing, change detection and change animation to provide a smooth and satisfactory browsing experience. We propose a meta-archive approach for increasing the coverage of past Web pages and for providing a unified interface to the past Web. Finally, we introduce query-based and localized approaches for filtered browsing that enhance and speed up browsing and information retrieval from Web archives.

21 citations


Proceedings ArticleDOI
26 Jan 2006
TL;DR: This paper explores ways by which multiple authors can annotate 3D models from multiple viewpoints in a 3D collaborative environment, with particular reference to the environment provided by Croquet.
Abstract: This paper explores ways by which multiple authors can annotate 3D models from multiple viewpoints in a 3D collaborative environment, with particular reference to the environment provided by Croquet. We deal with two types of viewpoint: the conceptual viewpoint and the physical viewpoint. Our approach is to exploit the portal, which is a notable feature of Croquet, in order to achieve our goal. We can assume that a physical viewpoint is expressed by the position and orientation of a portal. To provide a method for annotation based on the conceptual viewpoint, we developed a new portal called an "interactor." The design and our preliminary implementation are discussed.

16 citations


Proceedings ArticleDOI
23 May 2006
TL;DR: A browser for the past web that can retrieve data from multiple past web resources and features a passive browsing style based on change detection and presentation that enables automatic skipping of changeless periods and filtered browsing based on a user-specified query.
Abstract: We describe a browser for the past web. It can retrieve data from multiple past web resources and features a passive browsing style based on change detection and presentation. The browser shows past pages one by one along a time line. The parts that were changed between consecutive page versions are animated to reflect their deletion or insertion, thereby drawing the user's attention to them. The browser enables automatic skipping of changeless periods and filtered browsing based on a user-specified query.
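The change detection this browsing style relies on can be illustrated with a standard word-level diff between consecutive page versions (a minimal sketch, not the system's actual implementation):

```python
import difflib

def changed_segments(old_text, new_text):
    """Return the deleted and inserted word runs between two page
    versions, ready to be highlighted or animated for the viewer."""
    old, new = old_text.split(), new_text.split()
    matcher = difflib.SequenceMatcher(a=old, b=new)
    changes = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("delete", "replace"):
            changes.append(("del", " ".join(old[i1:i2])))
        if tag in ("insert", "replace"):
            changes.append(("ins", " ".join(new[j1:j2])))
    return changes

print(changed_segments("breaking news about storms",
                       "breaking news about floods"))
# → [('del', 'storms'), ('ins', 'floods')]
```

A page version with no changes yields an empty list, which is exactly the signal needed to skip changeless periods automatically.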

15 citations


Book ChapterDOI
27 Nov 2006
TL;DR: This paper proposes a novel method for query refinement based on real-world contexts of a mobile user, such as his/her current geographic location and the typical activities at the location which are extracted by Blog mining.
Abstract: Mobile Web search will become increasingly important. This paper proposes a novel method for query refinement based on real-world contexts of a mobile user, such as his/her current geographic location and the typical activities at that location, which are extracted by blog mining. Our method adds location-awareness, and beyond that context-awareness, to existing location-free keyword-based Web search engines.
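A minimal sketch of such context-based refinement, assuming the location name and a list of mined activity terms are already available (both the inputs and the selection rule here are illustrative assumptions):

```python
def refine_query(query, location, activity_terms, max_extra=2):
    """Expand a short mobile query with the user's current location
    and the top activity terms mined (e.g., from blogs) for it."""
    seen = set(query.lower().split())
    # Keep the location plus up to max_extra fresh activity terms.
    extras = [location] + [t for t in activity_terms if t not in seen]
    return query + " " + " ".join(extras[: max_extra + 1])

# Hypothetical mined context for a user standing in Kyoto.
print(refine_query("lunch", "Kyoto", ["temple", "lunch", "souvenir"]))
# → 'lunch Kyoto temple souvenir'
```

The refined query is then sent unchanged to an ordinary location-free search engine, which is the point of the approach.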

13 citations


Book ChapterDOI
04 Sep 2006
TL;DR: In this paper, the authors describe a way to extract visitors' experiences from Weblogs (blogs) and also a method to mine and visualize activities of visitors at sightseeing spots.
Abstract: We describe a way to extract visitors' experiences from Weblogs (blogs) and also a way to mine and visualize activities of visitors at sightseeing spots. A system using our proposed method mines association rules between locations, time periods, and types of experiences out of blog entries. Association rules between experiences are also extracted. We constructed a local information search system that enables the user to specify a location, a time period, or a type of experience in a search query and find relevant Web content. Results of experiments showed that three proposed refinement algorithms applied to a conventional text mining method raise the precision and recall of the extracted rules.

10 citations


Book ChapterDOI
16 Jan 2006
TL;DR: Models of blogs and blog thread data, methods for extracting blog threads and discovering important bloggers, and a definition of agitators as bloggers who have a great influence on discussions are discussed.
Abstract: A blog (weblog) lets people promptly publish content (such as comments) relating to other blogs through hyperlinks. This type of web content can be considered as a conversation rather than a collection of archived documents. To capture ‘hot’ conversation topics from blogs and deliver them to users in a timely manner, we propose a method of discovering bloggers who take important roles in conversations. We characterize bloggers based on their roles in previous blog threads (a blog thread is a set of blog entries that together form a conversation). We define agitators as bloggers who have a great influence on the discussion. We consider that these bloggers are likely to be useful in identifying hot conversations. In this paper, we discuss models of blogs and blog thread data, and methods for extracting blog threads and discovering important bloggers.

9 citations


Book ChapterDOI
Chi Tian, Taro Tezuka, Satoshi Oyama, Keishi Tajima, Katsumi Tanaka
04 Sep 2006
TL;DR: This work has developed a method to improve the precision of Web retrieval based on the semantic relationships between and proximity of keywords for two-keyword queries and implemented a system that re-ranks Web search results based on three measures: first-appearance term distance, minimum term distance, and local appearance density.
Abstract: Based on recent studies, the most common queries in Web searches involve one or two keywords. While most Web search engines perform very well for a single-keyword query, their precision is not as good for queries involving two or more keywords. Search results often contain a large number of pages that are only weakly relevant to either of the keywords. One solution is to focus on the proximity of keywords in the search results. Filtering keywords by semantic relationships could also be used. We developed a method to improve the precision of Web retrieval based on the semantic relationships between and proximity of keywords for two-keyword queries. We have implemented a system that re-ranks Web search results based on three measures: first-appearance term distance, minimum term distance, and local appearance density. Furthermore, the system enables the user to assign weights to the new rank and original ranks so that the result can be presented in order of the combined rank. We built a prototype user interface in which the user can dynamically change the weights on two different ranks. The result of the experiment showed that our method improves the precision of Web search results for two-keyword queries.
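The three measures can be sketched over a tokenized page as follows; the measure names follow the paper, but the exact formulas here are illustrative assumptions:

```python
def proximity_measures(tokens, kw1, kw2, window=20):
    """Compute three re-ranking measures for a two-keyword query
    over a tokenized page: first-appearance term distance, minimum
    term distance, and local appearance density."""
    pos1 = [i for i, t in enumerate(tokens) if t == kw1]
    pos2 = [i for i, t in enumerate(tokens) if t == kw2]
    if not pos1 or not pos2:
        return None
    # First-appearance term distance: gap between first occurrences.
    fatd = abs(pos1[0] - pos2[0])
    # Minimum term distance: smallest gap over all occurrence pairs.
    pairs = [(i, j) for i in pos1 for j in pos2]
    mtd = min(abs(i - j) for i, j in pairs)
    # Local appearance density: fraction of keyword tokens inside a
    # window centered on the closest occurrence pair.
    i, j = min(pairs, key=lambda p: abs(p[0] - p[1]))
    lo = max(0, min(i, j) - window // 2)
    hi = min(len(tokens), max(i, j) + window // 2 + 1)
    local = tokens[lo:hi]
    lad = (local.count(kw1) + local.count(kw2)) / len(local)
    return fatd, mtd, lad

text = "the cat sat near the dog today"
print(proximity_measures(text.split(), "cat", "dog"))
```

A re-ranker would combine these with the search engine's original rank using user-adjustable weights, as the abstract describes.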

Proceedings ArticleDOI
01 Nov 2006
TL;DR: A prototype search system is proposed that can handle Web content as an information source with hyperlinks and TV programs as another without them, that performs integrated searches of that content, and that can subsequently search for content related to each search result.
Abstract: A search engine that can handle TV programs and Web content in an integrated way is proposed. Conventional search engines can target Web content and/or data stored in desktop PCs. However, in the future, the information to be searched is expected to be stored in various places such as in hard-disk recorders, digital cameras, mobile devices, and even in real space, and a search engine that can search across such heterogeneous resources will become essential. As a first step towards developing such a next-generation search engine, a prototype search system is proposed that can handle Web content as an information source with hyperlinks and TV programs as another without them, that performs integrated searches of that content, and that can subsequently search for content related to each search result. An integrated search is achieved by generating integrated indices based on keywords obtained from TV programs and Web content and by ranking them, and a chain search for related content is done by calculating the similarities of and ranking the content in the integrated indices. A zoom-based display of the results enables information to be acquired efficiently. Testing a prototype of the system validated the approach of the proposed method.

Book ChapterDOI
16 Jan 2006
TL;DR: This work investigates the method of extracting names and visual descriptions of objects from large size texts, such as the Web and encyclopedias, and the extracted information is integrated to meet the requirements for such conversions.
Abstract: In using web search engines, there are cases where the name of the target object is unavailable, and the user can only give visual descriptions of the object. Existing keyword-based search engines have limited capabilities in such situations. In real-space-oriented search engines as well, there are often cases where the user wants to search using the visual characteristics of an object. In car or pedestrian navigation systems, visual descriptions of buildings are often more useful than their names when traveling in an unfamiliar area. As a fundamental technology for converting between names and visual descriptions of objects, we investigate a method for extracting such pairs from large texts, such as the Web and encyclopedias. The extracted information is integrated to meet the requirements for such conversions.

Proceedings ArticleDOI
10 May 2006
TL;DR: An architecture and a model for space entry control based on its dynamically changing contents, such as users, physical resources and virtual resources outputted by some embedded devices are proposed.
Abstract: We define "Secure Space" as a physical space in which any resource is always protected from unauthorized users by assuredly enforcing its authorization policies. Aiming to build such secure spaces, this paper proposes an architecture and a model for space entry control based on a space's dynamically changing contents, such as users, physical resources, and virtual resources output by embedded devices. We first describe the architecture and then formalize the model and mechanism for secure spaces.

Proceedings ArticleDOI
10 May 2006
TL;DR: This paper proposes two novel methods for query modification based on real-world contexts of a mobile user, such as his/her geographic location and the objects surrounding him/her, aiming to enhance location- awareness, and moreover, context-awareness, to the existing location-free information retrieval systems.
Abstract: With the growing amount of information on the WWW and the improvement of mobile computing environments, mobile Web search engines will become even more significant in the future. Because mobile devices have limited output capabilities and users have little time to browse information slowly while moving or engaging in activities in the real world, it is necessary to refine retrieval results in mobile computing environments more than in fixed ones. However, since a mobile user's query is often shorter and more ambiguous than a fixed user's query, it is not enough to guess his/her information demand accurately, and too many results might be retrieved by commonly used Web search engines. This paper proposes two novel methods for query modification based on real-world contexts of a mobile user, such as his/her geographic location and the objects surrounding him/her, aiming to add location-awareness, and moreover context-awareness, to existing location-free information retrieval systems.

Book ChapterDOI
27 Nov 2006
TL;DR: This paper proposes the notion of a “page set ranking”, which is to rank each pertinent set of searched Web pages, and describes the new algorithm of the page set ranking to efficiently construct and rank page sets.
Abstract: Conventional Web search engines rank their searched results page by page. That is, conventionally, the information unit for both searching and ranking is a single Web page. There are, however, cases where a set of searched pages shows a better similarity (relevance) to a given (keyword) query than each individually searched page. This is because the information a user wishes to have is sometimes distributed on multiple Web pages. In such cases, the information unit used for ranking should be a set of pages rather than a single page. In this paper, we propose the notion of a “page set ranking”, which is to rank each pertinent set of searched Web pages. We describe our new algorithm of the page set ranking to efficiently construct and rank page sets. We present experimental results demonstrating the effectiveness of our approach.
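A naive version of page set ranking can be sketched by enumerating small page sets, merging their term vectors, and scoring each set against the query (the paper's algorithm constructs and ranks sets far more efficiently than this brute-force enumeration):

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_page_sets(query_vec, page_vecs, max_size=2):
    """Score every page set of up to max_size pages by the cosine
    similarity of its merged term vector to the query."""
    scored = []
    for size in range(1, max_size + 1):
        for combo in combinations(sorted(page_vecs), size):
            merged = {}
            for name in combo:
                for term, w in page_vecs[name].items():
                    merged[term] = merged.get(term, 0.0) + w
            scored.append((cosine(query_vec, merged), combo))
    return sorted(scored, reverse=True)

pages = {"p1": {"jazz": 1.0}, "p2": {"club": 1.0}, "p3": {"cars": 1.0}}
best = rank_page_sets({"jazz": 1.0, "club": 1.0}, pages)[0]
print(best)  # the pair ('p1', 'p2') covers the query best
```

The example shows the paper's motivating case: neither p1 nor p2 alone covers the query, but the set {p1, p2} does.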

Book ChapterDOI
04 Sep 2006
TL;DR: In this paper, the authors describe a method for improving page revisiting by detecting and highlighting the information on browsed Web pages that is fresh for a user, based on comparison with the previously viewed versions of pages.
Abstract: Page revisiting is a popular browsing activity in the Web. In this paper we describe a method for improving page revisiting by detecting and highlighting the information on browsed Web pages that is fresh for a user. Content freshness is determined based on comparison with the previously viewed versions of pages. Any new content for the user is marked, enabling the user to quickly spot it. We also describe a mechanism for visually informing users about the degree of freshness of linked pages. By indicating the freshness level of content on linked pages, the system enables users to navigate the Web more effectively. Finally, we propose and demonstrate the concept of determining user-dependent, subjective age of page contents. Using this method, elements of Web pages are annotated with dates indicating the first time the elements were accessed by the user.

Book ChapterDOI
16 Jan 2006
TL;DR: Experiments showed that the Context Matcher method often found documents more related to the source document than baseline methods that use context either in only thesource document or search results.
Abstract: When reading a Web page or editing a word processing document, we often search the Web by using a term on the page or in the document as part of a query. There is thus a correlation between the purpose for the search and the document being read or edited. Modifying the query to reflect this purpose can thus improve the relevance of the search results. There have been several attempts to extract keywords from the text surrounding the search term and add them to the initial query. However, identifying appropriate additional keywords is difficult; moreover, existing methods rely on precomputed domain knowledge. We have developed Context Matcher: a query modification method that uses the text surrounding the search term in the initial search results as well as the text surrounding the term in the document being read or edited, the “source document”. It uses the text surrounding the search term in the initial results to weight candidate keywords in the source document for use in query modification. Experiments showed that our method often found documents more related to the source document than baseline methods that use context either in only the source document or search results.
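The core idea, weighting candidate keywords from the source document by their frequency in the initial search results, can be sketched as follows (a simplification with assumed inputs, not the Context Matcher's exact scoring):

```python
from collections import Counter

def weight_candidates(source_words, result_snippets, top_k=2):
    """Weight candidate expansion keywords taken from the source
    document by their frequency in the initial search results; the
    top-weighted ones would be added to the query."""
    candidates = {w.lower() for w in source_words}
    weights = Counter()
    for snippet in result_snippets:
        for word in snippet.lower().split():
            if word in candidates:
                weights[word] += 1
    return [w for w, _ in weights.most_common(top_k)]

# Hypothetical context words around the search term in the source
# document, and snippets from the initial search results.
source = ["jaguar", "speed", "engine", "cat"]
results = ["the jaguar engine delivers speed",
           "engine tuning for speed",
           "engine maintenance tips"]
print(weight_candidates(source, results))  # → ['engine', 'speed']
```

Using both contexts avoids the precomputed domain knowledge that earlier expansion methods relied on.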

Proceedings ArticleDOI
10 May 2006
TL;DR: This work proposes a system that automatically transforms web content into TV-like video content for ubiquitous environments, called the ubiquitous/universal passive viewer (u-PaV), which consists mainly of audio and visual components.
Abstract: We propose a system that automatically transforms web content into TV-like video content for ubiquitous environments. We call this system the ubiquitous/universal passive viewer (u-PaV). The u-PaV consists mainly of audio and visual components. The audio component uses synthesized speech to read out titles and lines extracted from the target web content. Simultaneously, the visual component of the u-PaV presents the titles and lines to a user display through a ticker. Keywords and images extracted from the web content are animated on the display. A suitable background color is determined based on the overall impression of the content. The u-PaV synchronizes the ticker, animation, and speech. We introduce the u-PaV and explain how the keywords are extracted and how the impression value of the web content is determined. A test with 50 users showed that the u-PaV is easier to use and understand than browsing web content alone.

Book ChapterDOI
18 Sep 2006
TL;DR: Experiments on author identification using a bibliographic database showed that the learned metric improves identification F-measure.
Abstract: A method is described for learning a distance metric for use in object identification that does not require human supervision. It is based on two assumptions. One is that pairs of different names refer to different objects. The other is that names are arbitrary. These two assumptions justify using pairs of data items for objects with different names as “cannot-be-linked” example pairs for learning a distance metric for use in clustering ambiguous names. The metric learning is formulated using only dissimilar example pairs as a convex quadratic programming problem that can be solved much faster than a semi-definite programming problem, which generally must be solved to learn a distance metric matrix. Experiments on author identification using a bibliographic database showed that the learned metric improves identification F-measure.

DOI
01 Jan 2006
TL;DR: A model and an architecture for space entry control based on its dynamically changing contents, such as users, physical resources and virtual resources outputted by embedded devices is proposed.
Abstract: We introduce the novel concept of "Secure Spaces", physical environments in which any resource is always protected from unauthorized users' eyes or ears by assuredly enforcing access control policies for each pair of resource and user inside them. Aiming to build such secure spaces, this paper proposes a model and an architecture for space entry control based on a space's dynamically changing contents, such as users, physical resources and virtual resources output by embedded devices. We first formalize the content-based entry control model and mechanism, and then describe the architecture for building secure spaces.

Book ChapterDOI
01 Jan 2006
TL;DR: A Web browser called the AmbientBrowser system that supports people in their daily acquisition of knowledge: it continuously searches Web pages using both system-defined and user-defined keywords and displays them, while sensors detect user and environmental conditions and control the system's behavior, such as knowledge selection and presentation style.
Abstract: Recently, due to the remarkable advancement of technology, the ubiquitous computing environment is becoming a reality. People can directly obtain information anytime from ubiquitous computers. However, the conventional computing style with a keyboard and a mouse is not suitable for everyday use. We proposed and developed a Web browser called the AmbientBrowser system that supports people in their daily acquisition of knowledge. It continuously searches Web pages using both system-defined and user-defined keywords and displays them, while sensors detect user and environmental conditions and control the system's behavior, such as knowledge selection and presentation style. Thus, the user can encounter a wide variety of knowledge without active operations. The system monitors the context of the environment, such as lighting conditions and temperature, and displays Web pages incrementally in proportion to the context. This paper describes the implementation of the AmbientBrowser system and discusses its effects.

Proceedings ArticleDOI
23 May 2006
TL;DR: A prototype integrated search engine for Web content and TV programs is developed that performs integrated searches of that content and allows chain searches in which related content can be accessed from each search result.
Abstract: A search engine that can handle TV programs and Web content in an integrated way is proposed. Conventional search engines have been able to handle Web content and/or data stored in a PC desktop as target information. In the future, however, the target information is expected to be stored in various places such as in hard-disk (HD)/DVD recorders, digital cameras, mobile devices, and even in real space as ubiquitous content, and a search engine that can search across such heterogeneous resources will become essential. Therefore, as a first step towards developing such a next-generation search engine, a prototype search system for Web and TV programs is developed that performs integrated searches of that content and that allows chain searches in which related content can be accessed from each search result. The integrated search is achieved by generating integrated indices for Web and TV content based on a vector space model and by computing the similarity between the query and all the content described by the indices. The chain search of related content is done by computing the similarity between the selected result and all other content based on the integrated indices. Also, a zoom-based display of the search results enables control of media transitions and the level of detail of the content so that information can be acquired efficiently. In this paper, testing of a prototype of the integrated search engine validated the approach taken by the proposed method.

Proceedings ArticleDOI
Chi Tian, Taro Tezuka, Satoshi Oyama, Keishi Tajima, Katsumi Tanaka
03 Apr 2006
TL;DR: A method to improve the precision of Web retrieval based on proximity and density of keywords for two-keyword queries by implementing a system that re-ranks Web search results based on three measures: first-appearance term distance, minimum term distance, and local appearance density.
Abstract: This paper proposes a method to improve the precision of Web retrieval based on proximity and density of keywords for two-keyword queries. In addition, filtering keywords by semantic relationships can also be used. We have implemented a system that re-ranks Web search results based on three measures: first-appearance term distance, minimum term distance, and local appearance density. Furthermore, the system enables the user to assign weights to the new rank and original ranks so that the result can be presented in order of the combined rank. We built a prototype user interface in which the user can dynamically change the weights on two different ranks. The result of the experiment showed that our method improves the precision of Web search results for two-keyword queries.

Proceedings ArticleDOI
10 May 2006
TL;DR: A mechanism for photographing people and annotating their behavior along with nearby elements using multiple embedded cameras using radio frequency identification (RFID) tags and a method of dynamically integrating and presenting the recorded content is developed.
Abstract: We developed a mechanism for photographing people and annotating their behavior along with nearby elements using multiple embedded cameras. In addition, we developed a method of dynamically integrating and presenting the recorded content. As a conventional camera is used to photograph objects selected by a photographer, taking pictures that include the photographer while he or she is holding the camera is difficult. Security camera systems take pictures that include people and nearby elements, but such systems cannot capture the intentions and interests of the people being photographed. After detecting the intentions and behaviors of subjects using radio frequency identification (RFID) tags, our system selects the best camera from the cameras located in an area, and then that camera photographs the subject and the surrounding area. In addition, these photos can be annotated with metadata about the context and interests of the subject.

Book ChapterDOI
27 Nov 2006
TL;DR: This work proposes methods of searching Web pages that are “semantically” regarded as “siblings” with respect to given page examples, which will be useful for supporting a user's opportunistic search, meaning a search in which the user's interest and intention are not fixed.
Abstract: We propose methods of searching Web pages that are “semantically” regarded as “siblings” with respect to given page examples. That is, our approach aims to find pages that are similar in theme but have different content from the given sample pages. We called this “sibling page search”. The proposed search methods are different from conventional content-based similarity search for Web pages. Our approach recommends Web pages whose “conceptual” classification category is the same as that of the given sample pages, but whose content is different from the sample pages. In this sense, our approach will be useful for supporting a user's opportunistic search, meaning a search in which the user's interest and intention are not fixed. The proposed methods were implemented by computing the “common” and “unique” feature vectors of the given sample pages, and by comparing those feature vectors with each retrieved page. We evaluated our method for sibling page search, in which our method was applied to test sets consisting of page collections from the Open Directory Project (ODP).

Proceedings ArticleDOI
10 May 2006
TL;DR: A network management device is developed that makes it possible to acquire embedded content using coordinated ubiquitous devices to enable sharing of a range of digital content in real-world situations.
Abstract: In next-generation networking environments, ubiquitous networks will be available both indoors and outdoors. Various devices will be ubiquitously embedded in our homes and cityscape. Digital content will be stored not only by servers on the Internet, but also in embedded devices belonging to ubiquitous networks. In this paper, we propose a content-processing mechanism for use in environments enabling collaborative acquisition of embedded digital content in real-world situations. We have developed a network management device that makes it possible to acquire embedded content using coordinated ubiquitous devices. This management device actively configures networks that include content-providing devices and browsing devices to enable sharing of a range of digital content. To demonstrate our system, we built a practical prototype called the "Virtual Insect Catching System", which is simple enough for children to use. In a test in which 48 children took part, we demonstrated that the system can be used to find embedded devices, build peer-to-peer networks, acquire embedded digital content, retrieve related content from the Internet, and then create new web content.

Proceedings ArticleDOI
01 Nov 2006
TL;DR: The content coverage of the search results is also useful for estimating the comprehensiveness of the content a Web page provides on a certain topic; therefore, content coverage can also be used as a measure to re-rank the search results.
Abstract: In this paper, we propose a trust-oriented evaluation method for information retrieval methods based on computing the content coverage of the search results. First, for a user-given query, we collect the related resources available on the Internet. We then sketch out an overview (called a topic-sketch) of these resources. We compute the content coverage of the search results by comparing the resource overviews and each result, evaluating whether the retrieval methods comprehensively find and provide information. In other words, content coverage is also useful for estimating the comprehensiveness of the content a Web page provides on a certain topic. Therefore, we can also use content coverage as a measure to re-rank the search results. Some preliminary experimental results are shown in this paper to validate our method.

Proceedings ArticleDOI
Shumian He, Yukiko Kawai, Yutaka Kidawara, K. Zettsu, Katsumi Tanaka
10 May 2006
TL;DR: A mechanism for photographing people and annotating their behavior along with nearby elements using multiple embedded cameras using RFID tags and a method of dynamically integrating and presenting the recorded content is developed.
Abstract: We developed a mechanism for photographing people and annotating their behavior along with nearby elements using multiple embedded cameras. In addition, we developed a method of dynamically integrating and presenting the recorded content. As conventional cameras are used to photograph objects selected by users, taking pictures that include users while holding the camera is difficult. Security camera systems take pictures that include people and nearby elements, but such systems cannot show the intentions of the people being photographed. After detecting the intentions and behaviors of subjects with radio frequency identification (RFID) tags, our system selects the best camera from cameras located in an area, and then the camera photographs the subject and the surrounding area. In addition, these photos can be annotated with information about the context and movement history of the subject. We created a prototype of our system and determined its effectiveness experimentally.

Proceedings ArticleDOI
Y. Kabutoya, Takayuki Yumoto, Satoshi Oyama, Keishi Tajima, Katsumi Tanaka
03 Apr 2006
TL;DR: This research proposes a method to estimate the quality of local contents without link structure by using the PageRank values of Web contents similar to them, and this method enables us to search contents across different resources such as Web contents and local contents.
Abstract: Recently, searching local content rather than Web content, e.g., with Google Desktop Search, has become more frequent. Google succeeded in Web search because of its PageRank algorithm for ranking the search results. PageRank estimates the quality of Web pages based on their popularity, which in turn is estimated by the number and the quality of pages referring to them through hyperlinks. This algorithm, however, is not applicable when we search local content without link structure, such as text data. In this research, we propose a method to estimate the quality of local content without link structure by using the PageRank values of Web content similar to it. Based on this estimation, we can rank desktop search results. Furthermore, this method enables us to search content across different resources such as Web content and local content. In this paper, we applied this method to Web content, calculated the scores that estimate its quality, and compared them with the page quality scores given by PageRank.
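The quality-transfer idea can be sketched as a similarity-weighted mean of the PageRank values of the most similar Web documents (the weighting scheme below is an illustrative assumption, not the paper's exact estimator):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def estimate_quality(local_vec, web_pages, k=3):
    """Estimate the quality of a link-free local document as the
    similarity-weighted mean PageRank of its k most similar Web
    pages (each given as a (term_vector, pagerank) pair)."""
    sims = sorted(((cosine(local_vec, vec), pr) for vec, pr in web_pages),
                  reverse=True)[:k]
    total = sum(s for s, _ in sims)
    return sum(s * pr for s, pr in sims) / total if total else 0.0

# Hypothetical Web pages with known PageRank values.
web = [({"jazz": 1.0, "club": 1.0}, 0.8), ({"cars": 1.0}, 0.1)]
print(round(estimate_quality({"jazz": 1.0}, web), 3))  # → 0.8
```

A dissimilar local document inherits little from high-PageRank pages, which is the property that makes cross-resource ranking possible.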