Open Access · Proceedings Article
Crawling the Hidden Web
Sriram Raghavan, Hector Garcia-Molina, et al.
pp. 129–138
TL;DR: The authors address the problem of designing a crawler capable of extracting content from the hidden web: the content that lies beyond the publicly indexable Web (the set of web pages reachable purely by following hypertext links) because it sits behind search forms or pages that require authorization or prior registration.

Citations
Patent
Serving advertisements based on content
Darrell Anderson, Paul T. Buchheit, Alexander Paul Carobus, Yingwei Cui, Jeffrey Dean, Georges R. Harik, Deepak Jindal, Narayanan Shivakumar, et al.
TL;DR: In this article, the authors present a method for placing targeted ads on a page on the web (or some other document of any media type) by obtaining content that includes available spots for ads, determining ads relevant to that content, and/or combining the content with the ads determined to be relevant.
Crawling the Hidden Web.
TL;DR: A generic operational model of a hidden Web crawler is introduced, and how this model is realized in HiWE (Hidden Web Exposer), a prototype crawler built at Stanford, is described.
Proceedings Article
Web application security assessment by fault injection and behavior monitoring
TL;DR: The design of Web application security assessment mechanisms is analyzed in order to identify poor coding practices that render Web applications vulnerable to attacks such as SQL injection and cross-site scripting.
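The fault-injection-and-monitoring idea summarized above can be illustrated with a toy sketch. Everything here is invented for illustration (the handler, the fault inputs, the error signal) and is not the paper's actual mechanism: crafted inputs are injected into a deliberately vulnerable request handler, and its behavior is monitored for error responses that suggest unescaped input reaches a SQL query.

```python
# Toy fault-injection sketch: submit malformed inputs to a handler and
# monitor its behavior for signs of an injection vulnerability.
# The handler below is an invented, deliberately vulnerable stand-in.

def vulnerable_handler(user_input):
    # Naive string-built SQL: the poor coding practice being probed.
    query = f"SELECT * FROM users WHERE name = '{user_input}'"
    if query.count("'") != 2:
        # Unbalanced quotes: a real database would raise a syntax error.
        raise ValueError("SQL syntax error")
    return "ok"

# Fault inputs: one benign, two containing quote characters.
FAULTS = ["alice", "o'brien", "' OR '1'='1"]

def assess(handler):
    findings = []
    for fault in FAULTS:
        try:
            handler(fault)
        except ValueError:
            # Behavior monitoring: an SQL error leaking back from a
            # crafted input suggests the input reaches the query unescaped.
            findings.append(fault)
    return findings

print(assess(vulnerable_handler))
```

Note that this naive signal also flags the legitimate input `o'brien`, which is exactly why real assessment tools pair injection with more careful behavior monitoring than a single error check.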
Proceedings Article
Data extraction and label assignment for web databases
TL;DR: A system called DeLa, which reconstructs (part of) a "hidden" back-end web database by sending queries through HTML forms, automatically generating regular-expression wrappers to extract data objects from the result pages, and restoring the retrieved data into an annotated (labelled) table.
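The wrapper idea in the DeLa summary can be sketched in a few lines. The page snippet, field names, and the hand-written pattern below are hypothetical stand-ins (DeLa induces such wrappers automatically); the sketch only shows what a regular-expression wrapper does once it exists: pull repeated data objects out of a result page into a labelled table.

```python
import re

# Hypothetical result page returned by a form query (invented data).
result_page = """
<tr><td>The C Programming Language</td><td>Kernighan</td><td>$45.00</td></tr>
<tr><td>SICP</td><td>Abelson</td><td>$60.00</td></tr>
"""

# A regex "wrapper" for the repeated row structure; written by hand here,
# where a system like DeLa would generate it from the pages themselves.
wrapper = re.compile(
    r"<tr><td>(?P<title>[^<]+)</td>"
    r"<td>(?P<author>[^<]+)</td>"
    r"<td>\$(?P<price>[\d.]+)</td></tr>"
)

# Restore the extracted objects into a labelled table (list of dicts).
table = [m.groupdict() for m in wrapper.finditer(result_page)]
for row in table:
    print(row["title"], row["author"], row["price"])
```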
Journal Article
Structured databases on the web: observations and implications
TL;DR: This paper surveys the relatively unexplored frontier of the deep Web, measuring characteristics pertinent to both exploring and integrating structured Web sources, and concludes with several implications which, while necessarily subjective, might help shape research directions and solutions.
References
Book
Information Retrieval: Data Structures and Algorithms
TL;DR: For programmers and students interested in parsing text and automated indexing, this is the first collection in book form of the basic data structures and algorithms that are critical to the storage and retrieval of documents.
Journal Article
Focused crawling: a new approach to topic-specific Web resource discovery
TL;DR: A new hypertext resource discovery system called a Focused Crawler that is robust against large perturbations in the starting set of URLs and capable of exploring outward to discover valuable resources dozens of links away from the start set, while carefully pruning the millions of pages that may lie within the same radius.
Journal Article
Accessibility of information on the web
Steve Lawrence, C.L. Giles, et al.
TL;DR: As the web becomes a major communications medium, the data on it must be made more accessible, a task that falls largely to search engines.
Journal Article
Searching the World Wide Web
Steve Lawrence, C. Lee Giles, et al.
TL;DR: The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results, including a lower bound on the size of the indexable Web of 320 million pages.
Journal Article
Efficient crawling through URL ordering
TL;DR: In this paper, the authors study in what order a crawler should visit the URLs it has seen, in order to obtain more "important" pages first, and they show that a good ordering scheme can obtain important pages significantly faster than a crawler without one.
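The ordering idea can be sketched with a priority-queue frontier. The link graph below is invented, and the importance estimate used (known backlink count) is just one of the metrics a crawler could order by; this is a sketch of the general technique, not the paper's evaluated algorithms.

```python
import heapq

# Invented toy link graph: page -> pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a", "d"],
    "d": [],
}

def backlinks(url, seen_links):
    # Importance estimate: how many known pages link to this URL.
    return sum(url in outs for outs in seen_links.values())

def crawl(seed):
    frontier = [(0, seed)]          # min-heap; priorities stored negated
    visited, order = set(), []
    while frontier:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for nxt in links.get(url, []):
            if nxt not in visited:
                # Higher backlink count -> more negative key -> popped
                # earlier, so "important" pages are fetched first.
                heapq.heappush(frontier, (-backlinks(nxt, links), nxt))
    return order

print(crawl("a"))
```

In this toy run the crawler visits "c" (two known backlinks) before "b" (one), which is the behavior the ordering scheme is meant to produce.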