
Showing papers by "Juris Hartmanis" published in 2000





BookDOI
01 Jan 2000
TL;DR: A possible future path that encompasses the research interests of the OHSWG while still leading ultimately to interoperability is described, with the Fundamental Open Hypermedia Model (FOHM) presented as an example of a more realistic approach to standardisation.
Abstract: Over the last six years the Open Hypermedia Systems Working Group (OHSWG) has been working in a coordinated effort to produce a protocol which will allow components of an Open Hypermedia System to talk to one another in a standardised manner. In this paper we reflect on this work and the knowledge that has come out of it, evaluating the different approaches to standardisation in the light of our experiences. We discuss the problems we encountered and redefine the goals of the effort to be more realistic, presenting the Fundamental Open Hypermedia Model (FOHM) as an example of this more realistic approach. Finally we describe a possible future path that encompasses the research interests of the OHSWG while still leading ultimately to interoperability.

1 History of the OHP Effort

1.1 Original Proposal

The First Workshop on Open Hypermedia [25] was held at Edinburgh in conjunction with ECHT'94. This workshop was concerned with the growing class of hypermedia systems such as Chimera [2], DHM [9], HyperForm [24], Microcosm [5], Multicard [20] and the HB/SP series [21], which clearly separated hypertext structure (links) from the content (documents). The participants in this workshop were keen to provide hypertext link services which could provide hypertext structure for documents which were displayed using existing desktop applications such as Word for Windows and Emacs. This workshop led to the formation of the Open Hypermedia Systems Working Group (OHSWG); the full history and rationale behind the work of this group can be viewed on its web pages [1]. An interesting finding of this first workshop was that although the major area of interest for the participating research groups was the design and implementation of link servers, most of their time was being spent on the implementation of clients; the researchers were spending significant effort producing text and graphics clients for the link services, either by writing them from scratch or writing macros to adapt existing desktop applications. A proposal from Antoine Rizk was that the group could contribute by producing a lightweight message-based protocol that could be used to communicate about simple link service functions. The rationale was that all link services had an approximately similar data model and that the operations that the link services could perform were also similar; all that would be required was a simple "shim" (protocol converter) that could convert between the client protocol and the server protocol, and then it would be possible for groups to share client implementations. The idea was simple and led to the production of the first draft of the Open Hypermedia Protocol (OHP) [6], which was presented at the next workshop in 1996. What happened next to OHP may well be a familiar story to other groups who have attempted to produce an application-level protocol. The committee effect started to take hold and the protocol grew; there were discussions about whether we were going to use a message-passing interface or an API; there were arguments about the on-the-wire protocol to be used; and the group became confused about aspects of resource location and naming instead of concentrating on hypertext.
Furthermore, the increasing influence of the World Wide Web throughout this period tended to change the original assumptions, since the Web was a system that was open in very different ways from the OHSWG systems. However, there were some good outcomes from this stage of the work. The scope of the project changed from attempting to provide a lightweight communication mechanism for shared clients and heterogeneous link servers to attempting to create a reference model and implementation for open hypertext systems. A standardized data model and a basic set of operations were agreed upon, the groups concerned produced native OHP link servers, and a temporary on-the-wire protocol was agreed on that made possible a significant demonstration at Hypertext '98 and a paper on the experiences to that stage [18].

1.2 The Hypertext '98 Demonstration

Two systems were developed for a demonstration of interoperability at Hypertext '98 (held in Pittsburgh, USA): one from the University of Aarhus, Denmark, and the other developed at the Multimedia Research Group (MMRG) at the University of Southampton. During the development of these systems several problems became evident, both with the protocol itself and also, more importantly, with the scope of the original draft proposal. The original draft was meant as a standard interface between clients and servers, to allow software reuse. This had increased in scope dramatically and become an all-encompassing effort to understand the nature of hypermedia and thus produce standards to provide for it. However, it was soon understood that such a large goal was impossible to realise within a single protocol, and as a result the protocol was split into several domains, each domain dealing with a particular type of hypermedia. The original OHP protocol was therefore renamed OHP-Navigational (OHP-Nav) and reduced in scope to deal exclusively with navigational (node/link) hypermedia. Other domains were envisaged, such as Spatial Hypermedia [12] (OHP-Space) and Taxonomic Hypermedia [22] (OHP-Tax). The protocol itself had originally been based on the Microcosm message format [8], a sequence of tag/value pairs. However, this proved difficult to parse, so the OHSWG adopted XML as a suitable format [3], and the OHP-Nav message set and hypermedia objects were all defined as XML elements in a Document Type Definition (DTD).

1.3 The Hypertext '99 Demonstration

At the OHSWG's meeting at Southampton (OHS 4.5) it was decided that, as the Hypertext '98 demonstration had formed such a positive focal point for the group, a similar demonstration should be attempted for Hypertext '99 in Darmstadt, Germany. It was also decided that, since we had demonstrated interoperability at Hypertext '98, we should now concentrate on showing some of the features of the protocol, ironically removing the need to interoperate. Some of the more successful parts of the Hypertext '98 demonstration were the collaboration aspects, so the Danish contribution to the Hypertext '99 demonstration was an extension of this simple support into a more advanced system called 'Construct'. The Southampton contribution was to investigate a definition of computational links, where opaque computation objects are included in the hypertext model and can be referenced similarly to other objects. This resulted in a component-based system known as 'Solent'. Another system demonstrated at Hypertext '99 was CAOS [19], a spatial hypermedia collaborative system.
Discussions within the working group turned to the definition of OHP-Space, starting us thinking about whether the different domains were actually that different after all.

2 What Were the Problems with OHP?

2.1 What Were We Trying to Standardize?

It has already been mentioned that the purpose of OHP has changed a great deal since the protocol first appeared. Initially a basic client-server communication protocol, it has grown to reflect the concerns of a large community of researchers. Even given that we understood what functionality we were going to standardize, there is still the question of what will actually become standard in the system, i.e. how will components actually talk to one another? There are two approaches (a sketch of the second follows this abstract):

1. A programming API: By using a standardised API, client source code compatibility is preserved whatever the servers involved, and server source code remains valid whatever the clients. This requires specifying: a) the system calls, b) the callbacks to be used, and c) the data to be exchanged. Examples would be to use CORBA and its interface definition language IDL, Microsoft DCOM, or a Java component system using Java Beans. One should note that in this approach the data representation is dependent on the binding (i.e. the IDL compiler for the chosen language and the chosen implementation of the communication module).

2. An on-the-wire communication model: This involves defining: a) the syntax of the messages (e.g. an XML hierarchy), b) a set of requests, associated responses and their syntax, c) the data and its syntax, and d) how to set up the transport medium (e.g. opening a socket on a port, etc.).

At one time or another both approaches have been argued for. However, the need to produce communicating systems that actually worked resulted in the group returning to the on-the-wire approach, even though the API approach seems cleaner and allows us to concentrate on the hypertext issues rather than the networking ones. The API approach also preserves source code compatibility: the implementers just need to implement two APIs, one for the client and another for the server. If they then find that a new on-the-wire protocol should be used, they can change their implementation without altering the source code of the components involved. However, it is still not a perfect solution, as it may require recompiling applications when a different medium is required. This is indeed the case with CORBA, where binary applications are ORB-dependent. This does not give a lot of freedom to the final user, as a binary is typically compiled for a fixed communication medium. Recently, Southampton researchers have been experimenting with implementing hypertext functionality on top of agent frameworks; in particular, the Java-based SoFAR [15], the Southampton Framework for Agent Research, was the focus of this experiment. SoFAR adopts an abstract communication model, where agents communicate using "virtual channels", identified by a startpoint and an endpoint; the latter is a client-side proxy used to initiate communications and the former is a server
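To make the on-the-wire style concrete, here is a minimal sketch. The message shape below is hypothetical: the element and attribute names ("OHPMessage", "GetLinks", "NodeRef") are invented for illustration and do not follow the actual OHP-Nav DTD. It only shows the general idea of defining requests and hypermedia objects as XML elements that any component can parse, independently of its language binding.

```python
import xml.etree.ElementTree as ET

# Hypothetical OHP-Nav-style request -- the real DTD differs; names here
# ("OHPMessage", "GetLinks", "NodeRef") are invented for illustration.
msg = ET.Element("OHPMessage", {"id": "42", "domain": "navigational"})
req = ET.SubElement(msg, "Request", {"name": "GetLinks"})
ET.SubElement(req, "NodeRef", {"href": "doc://example/report.txt"})

# Because the contract is the message syntax itself, any client or server
# that can parse this XML can interoperate, whatever language it is built in.
print(ET.tostring(msg, encoding="unicode"))
```

Under the API approach, by contrast, the standard would be the set of calls and callbacks, and the wire format would be left to the binding (e.g. the ORB in CORBA).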

16 citations


BookDOI
01 Jan 2000
TL;DR: This paper exploits recent developments in the fields of variational inference and latent variable models to develop a novel and tractable probabilistic approach to modelling manifolds which can handle complex non-linearities.
Abstract: In recent years several techniques have been proposed for modelling the low-dimensional manifolds, or ‘subspaces’, of natural images. Examples include principal component analysis (as used for instance in ‘eigen-faces’), independent component analysis, and auto-encoder neural networks. Such methods suffer from a number of restrictions such as the limitation to linear manifolds or the absence of a probabilistic representation. In this paper we exploit recent developments in the fields of variational inference and latent variable models to develop a novel and tractable probabilistic approach to modelling manifolds which can handle complex non-linearities. Our framework comprises a mixture of sub-space components in which both the number of components and the effective dimensionality of the subspaces are determined automatically as part of the Bayesian inference procedure. We illustrate our approach using two classical problems: modelling the manifold of face images and modelling the manifolds of hand-written digits.
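The generative model behind such a mixture of subspaces can be made concrete with a short sketch. The formulation below assumes a probabilistic-PCA-style component (x = W_k z + mu_k + noise, with a low-dimensional latent z); all dimensions, mixing proportions and the noise level are invented for illustration, and, unlike the Bayesian procedure described in the abstract, the number of components K and the subspace dimensionalities q_k are fixed by hand here rather than inferred.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: D-dimensional data, K subspace components.
D, K = 10, 3
q = [2, 3, 1]                    # effective dimensionality of each subspace
pi = np.array([0.5, 0.3, 0.2])   # mixing proportions
sigma2 = 0.1                     # isotropic observation noise variance

# Component k generates x = W_k z + mu_k + noise, with z ~ N(0, I_{q_k}).
W = [rng.normal(size=(D, q[k])) for k in range(K)]
mu = [rng.normal(size=D) for k in range(K)]

def sample(n):
    """Draw n points from the mixture of linear subspaces."""
    ks = rng.choice(K, size=n, p=pi)
    xs = np.empty((n, D))
    for i, k in enumerate(ks):
        z = rng.normal(size=q[k])
        xs[i] = W[k] @ z + mu[k] + rng.normal(scale=np.sqrt(sigma2), size=D)
    return xs

data = sample(500)  # 500 points concentrated near three linear subspaces
```

Fitting such a model inverts this generative process; the paper's point is that variational Bayesian inference can determine K and the q_k automatically rather than requiring them to be set in advance.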

5 citations


BookDOI
01 Jan 2000
TL;DR: This workshop provided an opportunity for researchers with a broad range of interests in meta-level architectures and reflective techniques to discuss recent developments in this field; its main goal was to encourage people to present work in progress.
Abstract: Previous workshops on reflection at both ECOOP and OOPSLA have pointed out the growing interest and importance of Reflection and Metalevel Architectures in the fields of programming languages and systems (ECOOP'98, OOPSLA'98), software engineering (OOPSLA'99) and middleware (Middleware 2000). Following these workshops, but also the conference Reflection'99 held in Saint-Malo (France), this workshop has provided an opportunity for researchers with a broad range of interests in meta-level architectures and reflective techniques to discuss recent developments in this field. It has also provided a good test-bed for preparing them to submit their work to Reflection'01. The workshop's main goal is to encourage people to present work in progress. This work can cover the whole spectrum from theory to practice. To ensure creativity, originality, and audience interest, participants have been selected by the workshop organizers on the basis of a 5-page position paper. We hope that the workshop will help them mature their ideas and improve the quality of their future publications based on the presented work.

5 citations


BookDOI
01 Jan 2000
TL;DR: This paper shows that the dynamic knowledge representation paradigm introduced in [ALP00] and the associated language LUPS, defined in [APPP99], constitute natural, powerful and expressive tools for representing dynamically changing knowledge and extends the approach to the three-valued semantics to allow proper handling of conflicting updates.
Abstract: This paper has two main objectives. One is to show that the dynamic knowledge representation paradigm introduced in [ALP00] and the associated language LUPS, defined in [APPP99], constitute natural, powerful and expressive tools for representing dynamically changing knowledge. We do so by demonstrating the applicability of the dynamic knowledge representation paradigm and the language LUPS to several broad knowledge representation domains, for each of which we provide an illustrative example. Our second objective is to extend our approach to allow proper handling of conflicting updates. So far, our research on knowledge updates was restricted to a two-valued semantics, which, in the presence of conflicting updates, leads to an inconsistent update, even though the updated knowledge base does not necessarily contain any truly contradictory information. By extending our approach to the three-valued semantics we gain the added expressiveness allowing us to express undefined or non-
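The problem with conflicting updates under a two-valued semantics can be illustrated with a toy sketch. The code below is not LUPS syntax; the atoms, truth values and update rule are invented for illustration. It shows why two simultaneous updates asserting a fact and its negation leave a two-valued semantics with no consistent outcome, while a three-valued semantics can mark the affected atom as undefined.

```python
# Toy illustration of conflicting updates (not LUPS syntax).
TRUE, FALSE, UNDEF = "true", "false", "undefined"

def apply_updates(state, updates):
    """Apply one step of updates: a list of (atom, truth_value) pairs."""
    new_state = dict(state)
    for atom in {a for a, _ in updates}:
        values = {v for a, v in updates if a == atom}
        if len(values) == 1:
            new_state[atom] = values.pop()   # unambiguous update
        else:
            # Conflict: a two-valued semantics would be forced into an
            # inconsistent update; a three-valued one can say "undefined".
            new_state[atom] = UNDEF
    return new_state

print(apply_updates({"a": TRUE}, [("a", TRUE), ("a", FALSE)]))
# -> {'a': 'undefined'}
```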

2 citations


BookDOI
01 Jan 2000
TL;DR: Amortized fully-dynamic polylogarithmic algorithms for connectivity, minimum spanning trees (MST), 2-edge- and biconnectivity, and improved static algorithms for finding unique matchings in graphs are reviewed.
Abstract: First we review amortized fully-dynamic polylogarithmic algorithms for connectivity, minimum spanning trees (MST), and 2-edge- and biconnectivity. Second we discuss how they yield improved static algorithms: connectivity for constructing a tree from homeomorphic subtrees, 2-edge connectivity for finding unique matchings in graphs, and MST for packing spanning trees in graphs. The application of MST for spanning tree packing is new, and when bootstrapped it yields a fully-dynamic polylogarithmic algorithm for approximating general edge connectivity within a factor of √2 + o(1). Finally, on the more practical side, we will discuss how output-sensitive algorithms for dynamic shortest paths have been applied successfully to speed up local search algorithms for improving routing on the internet, roughly doubling the capacity.

1 Dynamic Graph Algorithms

In this talk, we will discuss some simple dynamic graph algorithms and their applications within static graph problems. As a new result, we will derive a fully dynamic polylogarithmic algorithm approximating the edge connectivity λ within a factor of √2 + o(1); that is, the algorithm will output a value between λ/(√2 + o(1)) and λ × (√2 + o(1)). The talk is not intended as a general survey of dynamic graph algorithms and their applications. Rather, its goal is just to present a few nice illustrations of the potent relationship between dynamic graph algorithms and their applications in static graph problems, showing contexts in which dynamic graph algorithms play a role similar to that played by priority queues for greedy algorithms. In a fully dynamic graph problem, we are considering a graph G over a fixed vertex set V, |V| = n. The graph G may be updated by insertions and deletions of edges. Unless otherwise stated, we assume that we start with an empty edge set. We will review the fully dynamic graph algorithms of Holm et al. [11] for connectivity, minimum spanning trees (MST), and 2-edge- and biconnectivity in undirected graphs. For the connectivity-type problems, the updates may be interspersed with queries on (2-edge-/bi-) connectivity of the graph or between specified vertices. For MST, the fully dynamic algorithm should update the MST in connection with each update to the graph: an inserted edge might have to go into the MST, and if an MST edge is deleted, we should replace it with the lightest possible edge. Both updates and queries are presented on-line, meaning that we have to respond to an update or query without knowing anything about the future. The time bounds for these operations are polylogarithmic but amortized, meaning that we only bound the average operation time over any sequence of operations, starting with no edges. In our later applications to static graph problems, we only care about the total amount of time spent over all dynamic graph operations, and hence the amortized time bounds suffice. The above-mentioned results are all for undirected graphs. For directed graphs there are very few results. In a recent breakthrough, King [16] showed how to maintain the full transitive closure of a graph in Õ(n^2) amortized time per update. Further, she showed how to maintain all-pairs shortest paths in O(n^2.5 √(log C)) time per update if C is the maximum weight in the graph.
However, if one is just interested in maintaining whether t can be reached from s for two fixed vertices s and t, nobody knows how to do this in o(m) time. On the more practical side, Ramalingam and Reps [24] have suggested a lazy implementation of Dijkstra's [4] single-source shortest-paths algorithm for a dynamic directed graph. If X is the set of vertices that change distance from the source s in connection with an arc insertion or deletion, they can update a shortest path tree from s in Õ(∑_{v∈X} degree(v)) time. Although this does not in general improve over the Õ(m) time it takes to compute a single-source shortest path tree from scratch, there has been experimental evidence suggesting that this kind of laziness is worthwhile in connection with internet-like topologies [7].
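Returning to the fully dynamic connectivity structures reviewed above, their interface is easy to state even though the polylogarithmic data structures behind them are not. The class below is a deliberately naive stand-in (its name and methods are invented for illustration): it stores the edge set and re-searches the graph on every query in O(n + m) time, whereas Holm et al. [11] support the same operations in amortized polylogarithmic time.

```python
class NaiveDynamicConnectivity:
    """Naive stand-in for a fully dynamic connectivity structure."""

    def __init__(self):
        self.adj = {}  # vertex -> set of neighbours

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def delete(self, u, v):
        self.adj.get(u, set()).discard(v)
        self.adj.get(v, set()).discard(u)

    def connected(self, u, v):
        # O(n + m) depth-first search per query; the point of the
        # polylogarithmic structures is to avoid exactly this scan.
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            if x == v:
                return True
            for y in self.adj.get(x, ()):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False
```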
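The insertion half of the lazy shortest-path idea is also easy to sketch. The function below (its name and data layout are invented for illustration) repairs a single-source distance table after an arc insertion by re-relaxing only vertices whose distance actually improves, which is the output-sensitive behaviour described above; arc deletions, which the Ramalingam-Reps algorithm also handles, are the substantially harder case and are omitted here.

```python
import heapq

def insert_arc(graph, dist, u, v, w):
    """Insert arc (u, v) of weight w into graph (adjacency dict) and repair
    dist, the distance table from a fixed source. Only vertices whose
    distance actually improves are touched."""
    graph.setdefault(u, []).append((v, w))
    inf = float("inf")
    if dist.get(u, inf) + w >= dist.get(v, inf):
        return  # the new arc does not shorten any path
    dist[v] = dist[u] + w
    heap = [(dist[v], v)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist.get(x, inf):
            continue  # stale heap entry
        for y, wxy in graph.get(x, []):
            if d + wxy < dist.get(y, inf):
                dist[y] = d + wxy
                heapq.heappush(heap, (dist[y], y))
```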

1 citation