Author

Tim Berners-Lee

Other affiliations: CERN, University of Southampton
Bio: Tim Berners-Lee is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research in topics: Semantic Web & Web standards. The author has an h-index of 51 and has co-authored 116 publications receiving 34,516 citations. Previous affiliations of Tim Berners-Lee include CERN & University of Southampton.


Papers
Journal ArticleDOI
TL;DR: The authors describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
Abstract: The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions— the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.

5,113 citations
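The practices the abstract describes reduce to HTTP URIs that identify things and return structured data when dereferenced. A minimal sketch in Python, using only the standard library; the DBpedia URI and the Turtle media type are illustrative choices, not taken from the article:

    # Sketch: dereference a Linked Data URI with content negotiation,
    # asking for an RDF serialization instead of an HTML page.
    from urllib.request import Request, urlopen

    uri = "http://dbpedia.org/resource/Tim_Berners-Lee"  # identifies a thing, not a document
    req = Request(uri, headers={"Accept": "text/turtle"})  # negotiate an RDF representation

    with urlopen(req) as resp:
        print(resp.status, resp.headers.get("Content-Type"))
        print(resp.read(500).decode("utf-8", errors="replace"))  # first bytes of Turtle data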

Proceedings Article
01 Jan 1997
TL;DR: The Hypertext Transfer Protocol is an application-level protocol for distributed, collaborative, hypermedia information systems, which can be used for many tasks beyond its use for hypertext through extension of its request methods, error codes and headers.
Abstract: The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless protocol that can be used for many tasks beyond its use for hypertext, such as name servers and distributed object management systems, through extension of its request methods, error codes and headers [47]. A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred.

3,881 citations
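The request methods, headers, and status codes the abstract mentions are all visible in a single raw exchange. A minimal sketch over a plain socket, assuming Python's standard library and example.com as a stand-in host:

    # Sketch: one raw HTTP/1.1 exchange, showing the request method, headers,
    # and status code the abstract refers to.
    import socket

    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Accept: text/html\r\n"          # data-type negotiation
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    status_line = response.split(b"\r\n", 1)[0]
    print(status_line.decode())  # e.g. "HTTP/1.1 200 OK": the status-code machinery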

01 Aug 1998
TL;DR: This document defines the generic syntax of URI, including both absolute and relative forms, and guidelines for their use, and revises and replaces the generic definitions in RFC 1738 and RFC 1808.
Abstract: A Uniform Resource Identifier (URI) is a compact string of characters for identifying an abstract or physical resource. This document defines the generic syntax of URI, including both absolute and relative forms, and guidelines for their use; it revises and replaces the generic definitions in RFC 1738 and RFC 1808.

2,258 citations
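The generic syntax (scheme, authority, path, query, fragment) and the absolute/relative distinction can be exercised directly in Python, whose urllib.parse module implements this RFC lineage; the URIs below are illustrative:

    # Sketch: the generic URI syntax scheme://authority/path?query#fragment
    # and relative-reference resolution, via the standard library.
    from urllib.parse import urlparse, urljoin

    parts = urlparse("http://www.example.org/docs/spec?lang=en#section2")
    print(parts.scheme, parts.netloc, parts.path, parts.query, parts.fragment)

    # Resolving a relative form against an absolute base URI:
    print(urljoin("http://www.example.org/docs/spec", "../images/logo.png"))
    # -> http://www.example.org/images/logo.png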

Journal ArticleDOI
TL;DR: It is argued that agents can only flourish when standards are well established and that the Web standards for expressing shared meaning have progressed steadily over the past five years.
Abstract: The article included many scenarios in which intelligent agents and bots undertook tasks on behalf of their human or corporate owners. Of course, shopbots and auction bots abound on the Web, but these are essentially handcrafted for particular tasks: they have little ability to interact with heterogeneous data and information types. Because we haven't yet delivered large-scale, agent-based mediation, some commentators argue that the Semantic Web has failed to deliver. We argue that agents can only flourish when standards are well established, and that the Web standards for expressing shared meaning have progressed steadily over the past five years.

1,830 citations
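The "shared meaning" argument rests on agents agreeing on identifiers. A toy sketch of that idea, assuming RDF-style (subject, predicate, object) triples and the public FOAF vocabulary URI; all other names are illustrative:

    # Sketch: shared meaning as triples keyed by agreed-upon vocabulary URIs.
    FOAF = "http://xmlns.com/foaf/0.1/"

    triples = {
        ("http://example.org/alice", FOAF + "name", "Alice"),
        ("http://example.org/alice", FOAF + "knows", "http://example.org/bob"),
        ("http://example.org/bob",   FOAF + "name", "Bob"),
    }

    def match(s=None, p=None, o=None):
        """Toy pattern query: None acts as a wildcard, like a SPARQL variable."""
        return [t for t in triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    # Any agent that knows the foaf:knows URI can ask the same question:
    print(match(p=FOAF + "knows"))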


Cited by
Journal ArticleDOI
Jon Kleinberg
TL;DR: This work proposes and tests an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of "hub pages" that join them together in the link structure, and it has connections to the eigenvectors of certain matrices associated with the link graph.
Abstract: The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of "authoritative" information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of "hub pages" that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.

8,328 citations
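A minimal sketch of the hub/authority iteration the abstract describes (the HITS algorithm): authority scores are refreshed from in-linking hubs, hub scores from out-linked authorities, and repeated normalization drives both toward the eigenvectors mentioned above. The four-page graph is illustrative:

    # Sketch: Kleinberg-style hub/authority iteration on a toy link graph.
    links = {  # page -> pages it links to
        "a": ["b", "c"],
        "b": ["c"],
        "c": [],
        "d": ["c"],
    }

    hub = {p: 1.0 for p in links}
    auth = {p: 1.0 for p in links}

    for _ in range(50):
        # authority: endorsed by good hubs pointing at the page
        auth = {p: sum(hub[q] for q in links if p in links[q]) for p in links}
        # hub: points at good authorities
        hub = {p: sum(auth[q] for q in links[p]) for p in links}
        # normalize so the scores converge instead of growing without bound
        a_norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        h_norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {p: v / a_norm for p, v in auth.items()}
        hub = {p: v / h_norm for p, v in hub.items()}

    print("authorities:", auth)  # "c" dominates: every hub points to it
    print("hubs:", hub)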

Patent
30 Sep 2010
TL;DR: In this article, the authors proposed a secure content distribution method for a configurable, general-purpose electronic commercial transaction/distribution control system. The method includes a process for encapsulating digital information in one or more digital containers, a process for encrypting at least a portion of the digital information, a process for associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital containers, and a process for delivering one or more digital containers to a digital information user.
Abstract: PROBLEM TO BE SOLVED: Electronic content providers lack a commercially secure and effective distribution method for a configurable, general-purpose electronic commercial transaction/distribution control system. SOLUTION: In this system, which has at least one protected processing environment for safely controlling at least a portion of the decoding of digital information, a secure content distribution method comprises: a process for encapsulating digital information in one or more digital containers; a process for encrypting at least a portion of the digital information; a process for associating at least partially secure control information for managing interactions with the encrypted digital information and/or digital containers; a process for delivering one or more digital containers to a digital information user; and a process for using a protected processing environment to safely control at least a portion of the decoding of the digital information.

7,643 citations
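A toy sketch of the claimed flow, assuming Python: content is sealed in a container together with control information, and decoding happens only inside a stand-in "protected processing environment". The XOR keystream cipher and all names are placeholders, not the patent's actual mechanism:

    # Sketch: container -> partial encryption -> control info -> guarded decode.
    from dataclasses import dataclass
    import hashlib

    def keystream_xor(data: bytes, key: bytes) -> bytes:
        # Toy symmetric cipher (XOR with a SHA-256-derived keystream). Placeholder only.
        stream = b""
        counter = 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    @dataclass
    class DigitalContainer:
        encrypted_payload: bytes
        control_info: dict  # rules governing interactions with the payload

    def package(content: bytes, key: bytes) -> DigitalContainer:
        return DigitalContainer(
            encrypted_payload=keystream_xor(content, key),
            control_info={"max_uses": 1, "may_copy": False},
        )

    def open_in_protected_environment(box: DigitalContainer, key: bytes) -> bytes:
        # Stand-in for the protected processing environment that enforces the rules.
        if box.control_info["max_uses"] < 1:
            raise PermissionError("control information forbids further use")
        box.control_info["max_uses"] -= 1
        return keystream_xor(box.encrypted_payload, key)

    key = b"shared-secret"
    box = package(b"pay-per-view content", key)
    print(open_in_protected_environment(box, key))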

Journal ArticleDOI
Jeffrey O. Kephart, David M. Chess
TL;DR: A 2001 IBM manifesto noted that managing current and planned computing systems, which require integrating several heterogeneous environments into corporate-wide computing systems that extend into the Internet, is almost impossibly difficult.
Abstract: A 2001 IBM manifesto observed that a looming software complexity crisis, caused by applications and environments that number into the tens of millions of lines of code, threatened to halt progress in computing. The manifesto noted the almost impossible difficulty of managing current and planned computing systems, which require integrating several heterogeneous environments into corporate-wide computing systems that extend into the Internet. Autonomic computing, perhaps the most attractive approach to solving this problem, creates systems that can manage themselves when given high-level objectives from administrators. Systems manage themselves according to an administrator's goals. New components integrate as effortlessly as a new cell establishes itself in the human body. These ideas are not science fiction, but elements of the grand challenge to create self-managing computing systems.

6,527 citations
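A minimal sketch of the self-management loop described above, assuming a monitor/analyze/plan/execute cycle driven by an administrator-supplied objective; the latency probe and scaling policy are illustrative:

    # Sketch: an autonomic loop keeping a system at a high-level goal.
    import random

    TARGET_LATENCY_MS = 200  # the administrator's high-level objective

    def monitor() -> float:
        return random.uniform(50, 400)  # stand-in for a real latency probe

    def analyze(latency: float) -> str:
        return "overloaded" if latency > TARGET_LATENCY_MS else "healthy"

    def plan(state: str, servers: int) -> int:
        if state == "overloaded":
            return servers + 1          # scale out
        return max(1, servers - 1)      # scale in, but keep one server

    servers = 2
    for tick in range(5):
        latency = monitor()
        state = analyze(latency)
        servers = plan(state, servers)  # "execute" = apply the new allocation
        print(f"tick {tick}: {latency:.0f} ms -> {state}, servers={servers}")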

Book ChapterDOI
01 Jun 2002
TL;DR: Session Initiation Protocol (SIP), as discussed by the authors, is an application-layer control (signaling) protocol for creating, modifying, and terminating sessions with one or more participants, such as Internet telephone calls, multimedia distribution, and multimedia conferences.
Abstract: This document describes Session Initiation Protocol (SIP), an application-layer control (signaling) protocol for creating, modifying, and terminating sessions with one or more participants. These sessions include Internet telephone calls, multimedia distribution, and multimedia conferences.

5,482 citations
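The session-initiating request at the heart of SIP is a plain-text message. A minimal sketch that assembles an INVITE with the core header fields RFC 3261 defines (Via, From, To, Call-ID, CSeq); the addresses and tag values are illustrative:

    # Sketch: building the text of a SIP INVITE request.
    import uuid

    def make_invite(caller: str, callee: str) -> str:
        call_id = uuid.uuid4().hex
        return (
            f"INVITE sip:{callee} SIP/2.0\r\n"
            f"Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK{uuid.uuid4().hex[:8]}\r\n"
            f"From: <sip:{caller}>;tag={uuid.uuid4().hex[:8]}\r\n"
            f"To: <sip:{callee}>\r\n"
            f"Call-ID: {call_id}\r\n"
            "CSeq: 1 INVITE\r\n"
            "Max-Forwards: 70\r\n"
            "Content-Length: 0\r\n"
            "\r\n"
        )

    print(make_invite("alice@example.com", "bob@example.org"))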
