
Showing papers on "Semantic similarity published in 1994"


Journal ArticleDOI
TL;DR: A reassessment of evidence indicates that similarity can be sufficiently constrained and sophisticated to provide at least a partial account of many categories.

446 citations


Journal ArticleDOI
TL;DR: A theory of semantic values as a unit of exchange that facilitates semantic interoperability between heterogeneous information systems is provided, and it is shown how semantic values can either be stored explicitly or be defined by environments.
Abstract: Large organizations need to exchange information among many separately developed systems. In order for this exchange to be useful, the individual systems must agree on the meaning of their exchanged data. That is, the organization must ensure semantic interoperability. This paper provides a theory of semantic values as a unit of exchange that facilitates semantic interoperability between heterogeneous information systems. We show how semantic values can either be stored explicitly or be defined by environments. A system architecture is presented that allows autonomous components to share semantic values. The key component in this architecture is called the context mediator, whose job is to identify and construct the semantic values being sent, to determine when the exchange is meaningful, and to convert the semantic values to the form required by the receiver. Our theory is then applied to the relational model. We provide an interpretation of standard SQL queries in which context conversions and manipulations are transparent to the user. We also introduce an extension of SQL, called Context-SQL (C-SQL), in which the context of a semantic value can be explicitly accessed and updated. Finally, we describe the implementation of a prototype context mediator for a relational C-SQL system.
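The core idea (a semantic value is a datum plus the context needed to interpret it, and a context mediator converts between contexts) is concrete enough to sketch. The Python below is a minimal illustration, not the paper's C-SQL prototype; the `SemanticValue` class, the currency/scale meta-attributes, and the conversion rates are all assumptions made for the example.

```python
# A minimal sketch of semantic values and a context mediator, assuming a
# context is just a dict of meta-attributes (e.g., currency, scale).
# Illustrative only; not the paper's C-SQL prototype.

RATES = {("USD", "EUR"): 0.9, ("EUR", "USD"): 1.0 / 0.9}  # hypothetical rates

class SemanticValue:
    def __init__(self, value, context):
        self.value = value        # the raw datum
        self.context = context    # meta-attributes that give it meaning

    def __repr__(self):
        return f"SemanticValue({self.value}, {self.context})"

def mediate(sv, target_context):
    """Convert a semantic value into the receiver's context, or fail
    explicitly when the exchange would not be meaningful."""
    value = sv.value
    src, dst = sv.context.get("currency"), target_context.get("currency")
    if src != dst:
        if (src, dst) not in RATES:
            raise ValueError(f"no meaningful conversion {src} -> {dst}")
        value *= RATES[(src, dst)]
    # Scale factors: one system may store thousands, another plain units.
    value *= sv.context.get("scale", 1) / target_context.get("scale", 1)
    return SemanticValue(value, dict(target_context))

# A sender reports revenue in thousands of USD; the receiver wants plain EUR.
sent = SemanticValue(120, {"currency": "USD", "scale": 1000})
print(mediate(sent, {"currency": "EUR", "scale": 1}))  # 108000.0 EUR
```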

371 citations


Journal ArticleDOI
TL;DR: This article found that children focus on groups of "like kind" in word meaning extension, but accounts differ in their assumptions as to the nature of "likeness" for young children.

259 citations


Journal ArticleDOI
TL;DR: Lexical processing is sensitive to morphological complexity independent of semantic transparency, but completely opaque compounds are not connected with their constituents at the level of semantic representations.
Abstract: The processing and representation of Dutch compound words was investigated as a function of their semantic transparency. The first experiment, with immediate partial repetition of the constituents of the compounds, provided clear evidence for the sensitivity of the lexical processing system to morphological complexity, independent of semantic transparency. This was confirmed in a second experiment, with semantic priming of the meaning of the constituents. Unlike compounds that are semantically fully or partially transparent, completely opaque compounds did not prime the associates of their constituents. The results for completely opaque compounds were the same as for monomorphemic words that accidentally contain two existing morphemes of the language. It is concluded that completely opaque compounds are not connected with their constituents at the level of semantic representations.

259 citations


Patent
13 Sep 1994
TL;DR: In this paper, the system allows a user to create a semantic object data model of the database schema, which is defined by one or more semantic objects, each of which includes attributes that describe a characteristic of the semantic objects.
Abstract: A computer-based system for allowing a user to create a relational database schema. The system allows a user to create a semantic object data model of the database schema. The semantic object data model is defined by one or more semantic objects, each of which includes one or more attributes that describe a characteristic of the semantic objects. The attributes are defined as being either simple value attributes that describe a single characteristic of the semantic object; group attributes that include one or more member attributes that collectively describe a characteristic of the semantic object; formula attributes that set forth a computation that describes a characteristic of a semantic object; or object link attributes that define a relationship between two or more semantic objects. Once the semantic object data model is created, the system validates the semantic objects to ensure no modeling errors have been made and transforms the semantic objects and their included attributes into a plurality of relational database tables that will store data as defined by the semantic object data model.
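As a rough illustration of the kind of transformation the patent describes, the sketch below maps a toy semantic-object description to relational DDL. The mapping rules (simple attributes become columns, group attributes become prefixed columns, object links become foreign keys) are simplifications assumed for the example, and the validation step is omitted.

```python
# Sketch: turning a semantic-object model into relational DDL strings.
# The mapping rules are simplified assumptions, not the patented method.

def to_ddl(objects):
    statements = []
    for obj in objects:
        cols = [f"{obj['name'].lower()}_id INTEGER PRIMARY KEY"]
        for attr in obj["attributes"]:
            if attr["kind"] == "simple":        # one column per attribute
                cols.append(f"{attr['name']} {attr['type']}")
            elif attr["kind"] == "group":       # prefixed member columns
                cols += [f"{attr['name']}_{m['name']} {m['type']}"
                         for m in attr["members"]]
            elif attr["kind"] == "link":        # foreign-key column
                cols.append(f"{attr['target'].lower()}_id INTEGER "
                            f"REFERENCES {attr['target']}")
        statements.append(
            f"CREATE TABLE {obj['name']} (\n  " + ",\n  ".join(cols) + "\n);")
    return statements

customer = {"name": "Customer", "attributes": [
    {"kind": "simple", "name": "name", "type": "TEXT"},
    {"kind": "group", "name": "address", "members": [
        {"name": "street", "type": "TEXT"},
        {"name": "city", "type": "TEXT"}]}]}
purchase = {"name": "Purchase", "attributes": [
    {"kind": "simple", "name": "total", "type": "REAL"},
    {"kind": "link", "target": "Customer"}]}

print("\n\n".join(to_ddl([customer, purchase])))
```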

250 citations


Journal ArticleDOI
01 Dec 1994
TL;DR: This paper presents a metadatabase system which realizes semantic associative search for images from keywords representing the user's impression and the image's contents, using metadata representing the features of images.
Abstract: In the design of multimedia database systems, one of the most important issues is to extract images dynamically according to the user's impression and the image's contents. In this paper, we present a metadatabase system which realizes the semantic associative search for images by giving keywords representing the user's impression and the image's contents. This metadatabase system provides several functions for performing the semantic associative search for images by using the metadata representing the features of images. These functions are realized by using our proposed mathematical model of meaning. The mathematical model of meaning is extended to compute specific meanings of keywords which are used for retrieving images unambiguously and dynamically. The main feature of this model is that the semantic associative search is performed in the orthogonal semantic space. This space is created for dynamically computing semantic equivalence or similarity between the metadata items of the images and keywords.
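A toy version of searching in an orthogonal semantic space might look as follows, assuming numpy: a term-by-feature matrix is orthonormalized with an SVD and keyword/metadata vectors are compared after projection. The vocabulary and features are invented for the example; this only gestures at the paper's mathematical model of meaning, it is not the actual construction.

```python
import numpy as np

# Toy term-by-feature matrix (rows: words, cols: hand-picked features).
# Vocabulary, features, and weights are invented for this sketch.
vocab = ["sunset", "calm", "storm", "bright"]
M = np.array([[0.9, 0.8, 0.1, 0.7],
              [0.2, 1.0, 0.0, 0.3],
              [0.1, 0.0, 1.0, 0.1],
              [0.8, 0.3, 0.1, 1.0]])

# Orthonormal axes for the "semantic space" come from an SVD.
_, _, V = np.linalg.svd(M, full_matrices=False)

def embed(word):
    return V @ M[vocab.index(word)]   # project onto the orthogonal axes

def similarity(w1, w2):
    a, b = embed(w1), embed(w2)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare an image's metadata term against a keyword for the user's impression:
print(similarity("sunset", "calm"))    # relatively high (~0.80)
print(similarity("sunset", "storm"))   # relatively low (~0.19)
```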

149 citations


Journal ArticleDOI
TL;DR: The key to the analysis is a curved arc-trajectory in the central schema, replacing the flat "across" trajectory presumed by Brugman and Lakoff.
Abstract: Brugman (1981), followed by Lakoff (1987), showed that OVER can be analyzed as a chained system of senses using image schemas and natural image-schema transformations. The Brugman/Lakoff analysis can be improved, however, by relying more exclusively on image-schema transformations and eliminating some remnants of feature analysis. The key to the analysis is a curved arc-trajectory in the central schema, replacing the flat "across" trajectory presumed by Brugman and Lakoff. This modification leads to the elimination of unnecessary features such as the shape of the landmark, "contact", and subschemas (for ABOVE and ACROSS). Its main advantage, though, is that the arc-path schema provides the basis for explaining all of the variants of OVER using natural image-schema transformations (and metaphors). The proposed image-schema transformations include: multiple trajectors; multiplex-mass; multiple path; path-segment profiling; extending-path trajector; reflexive trajector; resulting state (including extended-path trajector and subjective path to endpoint); and shifted perspective. A process is also proposed by which a new independent variant can "split off" from an established complex of images.

112 citations


Journal ArticleDOI
TL;DR: In this article, a number of cross-language priming experiments are reported that evaluate whether word meanings in the first and second language are represented in common or separate systems, and a masked priming procedure is used on the assumption that when prime awareness is limited, any priming effects directly reveal the underlying structure of the semantic system.
Abstract: A number of cross-language priming experiments are reported that evaluate whether word meanings in the first and second language are represented in common or separate systems. A masked priming procedure was used on the assumption that when prime awareness is limited, any priming effects directly reveal the underlying structure of the semantic system. Primes were presented in the subjects' first language, while targets were in their second language. Priming effects were obtained for word pairs that were semantically highly similar but not translation equivalents, for example fence-haie (= hedge in French), suggesting that words in the two languages share common elements of semantic code. Priming was also obtained between translation equivalents which, in conjunction with the results for semantically similar pairs, is most naturally interpreted in terms of partially shared semantic representations. However, no masked priming effects were obtained between associated pairs of relatively low semantic ...

84 citations


Proceedings ArticleDOI
29 Nov 1994
TL;DR: An approach to information brokering is discussed, which uses a partial context representation to capture the semantics in terms of the assumptions in the intended use of the objects and the intended meaning of the user query.
Abstract: The rapid advances in computer and communication technologies, and their merger, are leading to a global information market place. It will consist of federations of a very large number of information systems that will cooperate to varying extents to support the users' information needs. We discuss an approach to information brokering in the above environment. We discuss two of its tasks: information resource discovery, which identifies relevant information sources for a given query, and query processing, which involves the generation of appropriate mappings from relevant but structurally heterogeneous objects. Query processing consists of information focusing and information correlation. Our approach is based on: semantic proximity, which represents semantic similarities based on the context of comparison, and schema correspondences, which are used to represent structural mappings and are associated with the context. The context of comparison of the two objects is the primary vehicle to represent the semantics for determining semantic proximity. Specifically, we use a partial context representation to capture the semantics in terms of the assumptions in the intended use of the objects and the intended meaning of the user query. Information focusing is supported by subsequent context comparison. The same mechanism can be used to support information resource discovery. Context comparison leads to changes in schema correspondences that are used to support information correlation.
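A minimal sketch of context-based comparison, assuming contexts can be flattened to dicts of (assumption, value) pairs; the three proximity levels and the comparison rule below are illustrative stand-ins for the authors' formal definitions.

```python
# Sketch: semantic proximity of two objects relative to their contexts of
# comparison. Contexts are modeled as dicts of assumptions; the graded
# levels below are illustrative stand-ins for the paper's definitions.

def semantic_proximity(ctx_a, ctx_b):
    keys = set(ctx_a) | set(ctx_b)
    shared = {k for k in keys
              if k in ctx_a and k in ctx_b and ctx_a[k] == ctx_b[k]}
    conflicting = {k for k in keys
                   if k in ctx_a and k in ctx_b and ctx_a[k] != ctx_b[k]}
    if shared == keys:
        return "semantic equivalence"
    if conflicting:
        return "semantic incompatibility"
    return "semantic relevance"        # overlap without conflict

# Two "employee" objects in different databases, described by the
# assumptions under which each is intended to be used:
ctx1 = {"entity": "employee", "country": "US", "salary_unit": "USD"}
ctx2 = {"entity": "employee", "country": "US", "salary_unit": "EUR"}
ctx3 = {"entity": "employee", "country": "US"}   # partial context

print(semantic_proximity(ctx1, ctx2))        # incompatibility (unit conflict)
print(semantic_proximity(ctx1, ctx3))        # relevance (partial overlap)
print(semantic_proximity(ctx1, dict(ctx1)))  # equivalence
```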

64 citations


Book ChapterDOI
01 May 1994
TL;DR: The paper suggests that similarity is dependent upon the context: it is influenced by the given set of objects and the concept under discussion. Comparisons with other measures suggest that the proposed Context-Similarity measure is best suited for natural concepts.
Abstract: This paper concentrates upon similarity between objects described by vectors of nominal features. It proposes non-metric measures for evaluating the similarity between: (a) two identical values in a feature, (b) two different values in a feature, (c) two objects. The paper suggests that similarity is dependent upon the context: it is influenced by the given set of objects and the concept under discussion. The proposed Context-Similarity measure was tested, and the paper presents comparisons with other measures. The comparisons suggest that, compared to other measures, the Context-Similarity measure is best suited for natural concepts.
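One plausible instantiation of a context-dependent similarity over nominal features is sketched below: agreement on a value that is rare within the given set of objects counts for more than agreement on a common one. The weighting scheme is an assumption for illustration, not the paper's exact Context-Similarity measure.

```python
from collections import Counter

# Sketch of a context-dependent similarity for nominal feature vectors.
# The rarity-based weighting is an assumption for illustration, not the
# paper's exact Context-Similarity measure.

def context_similarity(x, y, objects):
    n_features = len(x)
    score = 0.0
    for i in range(n_features):
        counts = Counter(obj[i] for obj in objects)   # the "context"
        total = sum(counts.values())
        if x[i] == y[i]:
            # rarer shared values are more informative in this context
            score += 1.0 - counts[x[i]] / total
        # differing values contribute nothing in this simplified version
    return score / n_features

animals = [("fur", "land"), ("fur", "land"), ("feathers", "air"),
           ("scales", "water"), ("fur", "water")]

print(context_similarity(("fur", "land"), ("fur", "land"), animals))   # 0.5
print(context_similarity(("fur", "land"), ("fur", "water"), animals))  # 0.2
```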

40 citations


Journal ArticleDOI
TL;DR: This article found that judgments were fastest for direct antonyms, even when compared to synonyms of similar relatedness, and that semantic distance and markedness had similar effects on synonyms and indirect antonyms.
Abstract: Recent models of antonymy differ over the involvement of associative and conceptual connections in the representation of direct and indirect antonyms. To assess these models, subjects were presented with two sequential adjectives. In the first study they made relatedness judgments, and in the second study they made antonym and synonym judgments. Conceptual processes were manipulated by varying semantic distance, and associative processes were manipulated by varying lexical markedness. Judgments were fastest for direct antonyms, even when compared to synonyms of similar relatedness. Although judgments for synonyms were faster than for indirect antonyms, semantic distance and markedness had similar effects on these word classes. These results suggest that direct antonymy may utilize associative connections, but that indirect antonymy, like synonymy, relies primarily on conceptual connections.

Patent
10 Feb 1994
TL;DR: In this article, a method and system for determining the derivational relatedness of a derived word and a base word is presented. But the system is limited to a single root word.
Abstract: A method and system for determining the derivational relatedness of a derived word and a base word. In a preferred embodiment, the system includes a machine-readable dictionary containing entries for head words and morphemes. Each entry contains definitional information and semantic relations. Each semantic relation specifies a relation between the head word with a word used in its definition. Semantic relations may contain nested semantic relations to specify relations between words in the definition. The system compares the semantic relations of the derived word to the semantic relations of a morpheme, which is putatively combined with the base word when forming the derived word. The system then generates a derivational score that indicates the confidence that the derived word derives from the base word.
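A toy version of the comparison the patent describes (matching the semantic relations of the derived word against those the morpheme predicts) might look like the following; the relation tuples, the `<base>` placeholder convention, and the overlap score are assumptions made for the example.

```python
# Sketch: score how plausibly a derived word comes from base + morpheme by
# overlapping their dictionary-derived semantic relations. The relation
# inventory and the scoring rule are illustrative assumptions.

def derivational_score(derived_rels, morpheme_rels, base):
    # Relations are (relation, argument) pairs taken from definitions.
    # Instantiate the morpheme's relations with the base word
    # (e.g., "-er" contributes "one who <base>s").
    predicted = {(rel, arg.replace("<base>", base))
                 for rel, arg in morpheme_rels}
    overlap = derived_rels & predicted
    return len(overlap) / len(predicted) if predicted else 0.0

# "-er" putatively combines with "teach" to form "teacher".
er_rels = {("agent-of", "<base>"), ("hypernym", "person")}
teacher_rels = {("agent-of", "teach"), ("hypernym", "person"),
                ("location", "school")}

print(derivational_score(teacher_rels, er_rels, "teach"))  # 1.0
print(derivational_score(teacher_rels, er_rels, "bake"))   # 0.5
```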


Book ChapterDOI
16 Oct 1994
TL;DR: A weakening mechanism that generates more tolerant queries is proposed: one or several fuzzy predicates are relaxed with the help of a fuzzy linguistic modifier, which is used to build a lattice of weakened queries expressing a semantic distance between these queries.
Abstract: In this paper, we present a cooperative approach intended to avoid empty answers to conjunctive fuzzy relational queries. We propose a weakening mechanism that generates more tolerant queries. It consists in relaxing one or several fuzzy predicates with the help of a fuzzy linguistic modifier, which is used to build a lattice of weakened queries expressing a semantic distance between these queries. Selectivity is then used as a heuristic to guide the search through the lattice, in order to find the query with a non-empty answer which is as close as possible to the initial one.
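The relaxation step can be made concrete. In the sketch below a fuzzy predicate is a membership function, and a dilating modifier (raising membership to the power 1/2, a standard choice for modifiers like "more or less") weakens the query step by step down the lattice until the answer is non-empty. The predicate, data, and threshold are assumptions for the example.

```python
# Sketch: relaxing a fuzzy predicate with a dilating linguistic modifier
# until the query's answer is non-empty. The modifier mu**0.5 is a
# standard "more or less" dilation; the data and threshold are made up.

def young(age):                       # fuzzy predicate "young"
    if age <= 25:
        return 1.0
    if age >= 40:
        return 0.0
    return (40 - age) / 15            # linear decrease from 25 to 40

def modify(pred):                     # one weakening step down the lattice
    return lambda x: pred(x) ** 0.5   # dilation raises low memberships

def answer(pred, rows, threshold=0.7):
    return [r for r in rows if pred(r) >= threshold]

ages = [38, 39, 41]                   # no one is clearly "young"
pred, steps = young, 0
while not answer(pred, ages) and steps < 5:
    pred, steps = modify(pred), steps + 1

print(steps, answer(pred, ages))      # 3 [38, 39]
```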

Journal ArticleDOI
TL;DR: The numerical methods and dynamic analysis of the Similarity Unit and Similar Systems, measurement of similarity degree and the meaning of the Similarity Entropy, as well as the origin of similarity, are investigated, and the general principle of the Similarity System Theory is presented.
Abstract: A new type of system theory called Similarity System Theory is introduced. Three new concepts, i.e., Similarity Unit, Similar Systems and Similarity Entropy are proposed. Two different kinds of distinction of the Similarity Unit may be made—i.e., Classical Similarity Unit and Fuzzy Similarity Unit. The numerical methods and dynamic analysis of the Similarity Unit and the Similarity System, measurement of similarity degree and the meaning of the Similarity Entropy, as well as origin of similarity, are investigated. Thus, the general principle of the Similarity System Theory is presented. Many potential applications of the Similarity System Theory are also considered.

01 Jan 1994
TL;DR: A new context sensitive method (CSM) was developed for constructing a Chinese phoneme-to-character (PTC) automatic conversion system called "GOING" and this method is unconventional in that it relies heavily on semantic pattern matching.
Abstract: We have recently developed a Chinese phoneme-to-character conversion system with a conversion rate close to 96%. The underlying algorithm, called the context sensitive method, is based on "semantic pattern matching". The construction of these semantic patterns is largely based on linguistic common sense and corpus statistics. An interesting finding is that this method is well suited for many other types of Chinese NLP. In this paper we apply this method to the construction of a Chinese parser in the phoneme-to-character conversion system. A new context sensitive method (CSM) was developed for constructing a Chinese phoneme-to-character (PTC) automatic conversion system called "GOING". This method is unconventional in that it relies heavily on semantic pattern matching. Semantic patterns provide an efficient way to reduce the huge amount of data processing which is required for homophonic character selection in Chinese phonetic input. The current conversion rate is close to 96%, based on a random sampling from a corpus of seven million Chinese characters collected from Freedom Times. The major advantage of the CSM is that the conversion rate could be continuously improved without any conceivable limit. There have been a number of approaches for the PTC system (1~12). Besides a few attempts at using grammar or semantic analysis (4,8,9,10,11), most of these methods are based on the manipulation (for example, using dynamic programming or hidden Markov models) of word frequencies and bigrams. Our approach is not just based on linguistic knowledge, but more on computational and psychological considerations. There are three dominant factors in shaping our CSM method:
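The flavor of homophone selection by combining statistics with semantic patterns can be sketched as a tiny Viterbi-style search; the candidate sets, bigram counts, and the single "semantic pattern" boost below are invented for illustration, and the real system is far richer.

```python
# Toy sketch of phoneme-to-character conversion: pick one character per
# syllable by combining bigram counts with "semantic pattern" boosts.
# All candidates, counts, and patterns here are invented for illustration.

candidates = {"shi4": ["是", "事", "市"], "chang3": ["场", "厂"]}
bigram = {("事", "场"): 1, ("市", "场"): 50, ("是", "场"): 2,
          ("市", "厂"): 1, ("事", "厂"): 1, ("是", "厂"): 1}
patterns = {("市", "场"): 10}    # semantic pattern: "market" acts as a unit

def convert(syllables):
    best = {c: (0.0, [c]) for c in candidates[syllables[0]]}
    for syl in syllables[1:]:
        new_best = {}
        for c in candidates[syl]:
            # best previous character, scored by bigram + pattern boost
            score, path = max(
                ((s + bigram.get((p, c), 0) + patterns.get((p, c), 0), pth)
                 for p, (s, pth) in best.items()),
                key=lambda t: t[0])
            new_best[c] = (score, path + [c])
        best = new_best
    return max(best.values(), key=lambda t: t[0])[1]

print("".join(convert(["shi4", "chang3"])))  # expected: 市场
```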

01 Jan 1994
TL;DR: A method for constructing and maintaining a 'semantic index' using a system based on description logics is described; based on subsumption and disjointness reasoning with respect to indexing concepts, instances are immediately categorized as hits, misses or candidates with respect to the query.
Abstract: A method for constructing and maintaining a 'semantic index' using a system based on description logics is described. A persistent index into a large number of objects is built by classifying the objects with respect to a set of indexing concepts and storing the resulting relation between object-ids and most specific indexing concepts on a file. These files can be incrementally updated. The index can be used for efficiently accessing the set of objects matching a query concept. The query is classified, and, based on subsumption and disjointness reasoning with respect to indexing concepts, instances are immediately categorized as hits, misses or candidates with respect to the query. Based on the index only, delayless feedback concerning the cardinality of the query (upper and lower bounds) can be provided during query editing.


Proceedings ArticleDOI
19 Apr 1994
TL;DR: A new model of speech understanding, based on the cooperation of the speech recognizer and language analyzer, which interacts with the knowledge sources while keeping its modularity is presented, which realizes robust understanding.
Abstract: We present a new model of speech understanding, based on the cooperation of the speech recognizer and language analyzer, which interacts with the knowledge sources while keeping its modularity. The semantic analyzer is realized with a semantic network that represents the possible concepts in a task. The speech recognizer based on an LR parser interacts with the semantic analyzer to eliminate invalid hypotheses at an early stage. The coupling of a loose grammar and interactive semantic analysis accepts ill-formed sentences while filtering out non-sense ones, thus realizes robust understanding. Dialog-level knowledge is also incorporated to constrain both the syntactic and the semantic knowledge sources. The key to guide the search efficiently is powerful heuristics. The relationship between the heuristic power and search efficiency is examined experimentally. The stochastic word bigram is derived from the probabilistic LR grammar as A*-admissible heuristics. >

15 Dec 1994
TL;DR: A framework for examining information loss during translation is required as ad-hoc translation has too many variable factors for algorithmic prediction of semantic loss, and a "translation as migration" system is used.
Abstract: A greater demand for automated data translation and the increasing complexity of data models have altered the focus of data translation from restructuring data to preserving its semantic meaning. The goal of this thesis is to consider the issues involved in semantic loss in data translation, specifically during syntactically directed translation. A framework for examining information loss during translation is required as ad-hoc translation has too many variable factors for algorithmic prediction of semantic loss. Therefore a "translation as migration" system is used. This involves import and export mappings to a neutral model, and the use of a schema migration system in the neutral model to effect the translation. If a rich neutral model is chosen then the syntactic and semantic impact of the import and export mappings are minimal. This requires classifying the different features in data modelling languages and specifying the types and behaviors of migration functions needed for any given neutral model. Next the semantics of the stored data needs to be recorded. A synthesis of existing semantic data models is used. The semantics are then linked with the features of the data modelling language that denotes them. Using these techniques, it is possible to predict for a given data modelling language what types of semantics it will support and also predict many of the semantic effects caused by its migration functions. A prototype system is developed using the ROSE 3.0 implementation of EXPRESS as a proof of concept. The prototype is tested with several short examples and one large example. The conclusions drawn are that the basic approach is sound but can be improved. Semantic loss, such as the destruction of relationships or loss of constraints on a data type, can be predicted based on syntactically specified migration functions. However, this approach cannot predict the creation of semantic information when new data types are introduced nor can it differentiate between semantic schemas which are identical in definition but differ in meaning. To address these issues several syntactical operations need to be combined to form semantic migration functions. Work on this extension to the prototype is proposed.

Proceedings ArticleDOI
05 Aug 1994
TL;DR: This paper argues that this can be achieved through the use of semantic relations as query primitives, and describes a new technique for extracting semantic relations from an online dictionary that involves the composition of basic semantic relations.
Abstract: Information retrieval systems can be made more effective by providing more expressive query languages for users to specify their information need. This paper argues that this can be achieved through the use of semantic relations as query primitives, and describes a new technique for extracting semantic relations from an online dictionary. In contrast to existing research, this technique involves the composition of basic semantic relations, a process akin to constrained spreading activation in semantic networks. The proposed technique is evaluated in the context of extracting semantic relations that are relevant for retrieval from a corpus of pictures.
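Composition of basic semantic relations, constrained in the spirit of spreading activation, can be sketched as a search that only follows a prescribed sequence of relation types. The tiny network and the composed relations below are assumptions for the example, not relations extracted from a real dictionary.

```python
# Sketch: composing basic semantic relations extracted from a dictionary.
# Activation spreads from a source word, but only along a prescribed
# sequence of relation types (a "composed" relation). The network and
# the compositions are invented for the example.

EDGES = {
    ("car", "isa"): ["vehicle"],
    ("vehicle", "has-part"): ["wheel", "engine"],
    ("wheel", "made-of"): ["rubber"],
    ("engine", "made-of"): ["metal"],
}

def compose(start, relation_path):
    """Follow the given sequence of relations, e.g. isa . has-part."""
    frontier = [start]
    for rel in relation_path:
        frontier = [t for node in frontier
                    for t in EDGES.get((node, rel), [])]
    return frontier

# "Parts of the kind of thing a car is": isa composed with has-part.
print(compose("car", ["isa", "has-part"]))             # ['wheel', 'engine']
# One step further: what those parts are made of.
print(compose("car", ["isa", "has-part", "made-of"]))  # ['rubber', 'metal']
```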

Proceedings ArticleDOI
01 Jan 1994
TL;DR: In this paper, a text representation and searching technique labeled as "Semantic Vector Space Model" (SVSM) is described, which combines Salton's VSM (1991) with distributed representation of semantic case structures of natural language text.
Abstract: This paper describes a text representation and searching technique labeled the "Semantic Vector Space Model" (SVSM). The proposed technique combines Salton's VSM (1991) with a distributed representation of the semantic case structures of natural language text. It promises a way of abstracting and encoding richer semantic information from natural language text, and therefore better precision performance in IR, without involving sophisticated semantic processing.
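One way to picture the idea, sketched under assumptions that are not the paper's: give each semantic case role its own block of vector dimensions, so that "dog bites man" and "man bites dog" no longer map to the same representation. The role inventory, dimensionality, and hash-seeded word vectors are illustrative choices.

```python
import hashlib
import numpy as np

# Sketch: a "semantic vector space" where each case role (agent, action,
# object) owns a block of dimensions, so role assignments change the
# vector. Roles, dimensionality, and hashing are illustrative choices.

ROLES = ["agent", "action", "object"]
DIM = 8                               # dimensions per role block

def term_vec(word):
    # Deterministic pseudo-random word vector seeded from the word itself.
    seed = int(hashlib.md5(word.encode()).hexdigest(), 16) % (2 ** 32)
    return np.random.default_rng(seed).standard_normal(DIM)

def case_frame_vec(frame):
    blocks = [term_vec(frame[r]) if r in frame else np.zeros(DIM)
              for r in ROLES]
    v = np.concatenate(blocks)
    return v / np.linalg.norm(v)

def sim(f1, f2):
    return float(case_frame_vec(f1) @ case_frame_vec(f2))

a = {"agent": "dog", "action": "bite", "object": "man"}
b = {"agent": "man", "action": "bite", "object": "dog"}
print(sim(a, a))   # 1.0: identical frames
print(sim(a, b))   # lower: same words, different case roles
```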

Proceedings Article
01 Jan 1994
TL;DR: This invention relates to the production of cellular synthetic resin stock, especially polyurethane foam, having a substantially rectangular cross section and which is generally isotropic in character.
Abstract: This invention relates to the production of cellular synthetic resin stock, and more particularly relates to methods and apparatus for producing foamed synthetic resin stock, especially polyurethane foam, having a substantially rectangular cross section and which is generally isotropic in character.

Journal ArticleDOI
TL;DR: In this paper, subjects made timed lexical decisions to target words (or nonwords) preceded by primes that were semantically related or unrelated to them, and a stem or fragment completion task was administered as an implicit memory test, followed by an explicit recognition test of memory for the previously seen primes and targets.
Abstract: In 5 experiments subjects made timed lexical decisions to target words (or nonwords) preceded by primes that were semantically related or unrelated to them. Subsequently, a stem or fragment completion task was administered as an implicit memory test (e.g., complete bu ― ― ― ― for butter), followed by an explicit recognition test of memory for the previously seen primes and targets. Conditions of presentation for the lexical decision task were varied across experiments. In Experiment 1, both semantic relatedness and semantic elaboration (primes vs. targets) influenced performance on both the implicit and explicit tests. In Experiments 2-5, a dissociation was obtained between the tests, with reliable effects of relatedness and elaboration obtained for recognition but not completion.

Journal ArticleDOI
TL;DR: An efficient and compact way to represent similarity relations in the form of a sequence is introduced and an algorithm to obtain the normal form of any given membership matrix of a similarity relation is established.
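For context, a membership matrix represents a fuzzy similarity relation only if it is reflexive, symmetric, and max-min transitive; the standard way to enforce transitivity is the max-min transitive closure sketched below. This is textbook fuzzy-relation machinery used to illustrate the objects involved; it is not the paper's sequence-based normal form.

```python
import numpy as np

# Max-min transitive closure of a reflexive, symmetric fuzzy relation:
# repeatedly set R[i][j] = max_k min(R[i][k], R[k][j]) until a fixpoint.

def maxmin_closure(R):
    R = np.array(R, dtype=float)
    while True:
        # composition R o R under max-min: C[i,j] = max_k min(R[i,k], R[k,j])
        C = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        S = np.maximum(R, C)
        if np.array_equal(S, R):
            return R
        R = S

R = [[1.0, 0.8, 0.0],
     [0.8, 1.0, 0.5],
     [0.0, 0.5, 1.0]]
print(maxmin_closure(R))
# entry (0, 2) rises to 0.5 via the chain 0 -> 1 -> 2
```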

Dissertation
01 Jan 1994
TL;DR: Results indicate firstly that the proposed semantic information processing system is capable of being used as a controlled vocabulary and secondly that the approaches to estimating semantic similarity operate well at their intended concept level.
Abstract: The research reported in this thesis is centred around the development of a semantic based approach to information processing. Traditional word-based pattern matching approaches to information processing suffer from both the richness and ambiguity of natural language. Although the retrieval performance of traditional systems can be satisfactory in many situations, it is commonly held that the traditional approach has reached the peak of its potential and any substantial improvements will be very difficult to achieve [Smea91]. Word-based pattern matching retrieval systems are devoid of the semantic power necessary to either distinguish between different senses of homonyms or identify the similar meanings of related terms. Our proposed semantic information processing system was designed to tackle these problems among others (we also wanted to allow phrasal as well as single word terms to describe concepts). Our prototype system is comprised of a WordNet derived domain independent knowledge base (KB) and a concept level semantic similarity estimator. The KB, which is rich in noun phrases, is used as a controlled vocabulary which effectively addresses many of the problems posed by ambiguities in natural language. Similarly, both proposals for the semantic similarity estimator tackle issues regarding the richness of natural language and in particular the multitude of ways of expressing the same concept. A semantic based document retrieval system is developed as a means of evaluating our approach. However, many other information processing applications are discussed, with particular attention directed towards the application of our approach to locating and relating information in a large scale Federated Database System (FDBS). The document retrieval evaluation application operates by obtaining KB representations of both the documents and queries and using the semantic similarity estimators as the comparison mechanism in the procedure to determine the degree of relevance of a document for a query. The construction of KB representations for documents and queries is a completely automatic procedure, and among other steps includes a sense disambiguation phase. The sense disambiguator developed for this research also represents a departure from existing approaches to sense disambiguation. In our approach four individual disambiguation mechanisms are used to individually weight different senses of ambiguous terms. This allows the possibility of there being more than one correct sense. Our evaluation mechanism employs the Wall Street Journal text corpus and a set of TREC queries along with their relevance assessments in an overall document retrieval application. A traditional pattern matching tf-idf system is used as a baseline system in our evaluation experiments. The results indicate firstly that our WordNet derived KB is capable of being used as a controlled vocabulary and secondly that our approaches to estimating semantic similarity operate well at their intended concept level. However, it is more difficult to arrive at conclusive interpretations of the results with regard to the application of our semantic based systems to the complex task of document retrieval. A more complete evaluation is left as a topic for future research.
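A concept-level similarity estimator over an is-a hierarchy can be sketched in a few lines. The toy hierarchy and the 1/(1+dist) path-length formula are common illustrative choices from the WordNet literature, assumed here for the example rather than taken from the thesis.

```python
# Sketch: concept similarity over a tiny is-a hierarchy using path length
# through a lowest common ancestor.

PARENT = {"cat": "feline", "feline": "mammal", "dog": "canine",
          "canine": "mammal", "mammal": "animal", "car": "vehicle",
          "vehicle": "artifact"}

def path_to_root(c):
    path = [c]
    while c in PARENT:
        c = PARENT[c]
        path.append(c)
    return path

def concept_similarity(a, b):
    pa, pb = path_to_root(a), path_to_root(b)
    common = set(pa) & set(pb)
    if not common:
        return 0.0                    # different hierarchies: no path
    dist = min(pa.index(c) + pb.index(c) for c in common)
    return 1.0 / (1.0 + dist)

print(concept_similarity("cat", "dog"))   # 0.2 (distance 4 via "mammal")
print(concept_similarity("cat", "cat"))   # 1.0
print(concept_similarity("cat", "car"))   # 0.0
```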

Proceedings ArticleDOI
05 Aug 1994
TL;DR: A syntactic-head-driven algorithm provides a basis for a logically well-defined treatment of the movement of (syntactic) heads, for which only ad-hoc solutions existed so far.
Abstract: The previously proposed semantic-head-driven generation methods run into problems if none of the daughter constituents in the syntacto-semantic rule schemata of a grammar fits the definition of a semantic head given in [Shieber et al., 1990]. This is the case for the semantic analysis rules of certain constraint-based semantic representations, e.g. Underspecified Discourse Representation Structures (UDRSs) [Frank and Reyle, 1992]. Since head-driven generation in general has its merits, we simply return to a syntactic definition of 'head' and demonstrate the feasibility of syntactic-head-driven generation. In addition to its generality, a syntactic-head-driven algorithm provides a basis for a logically well-defined treatment of the movement of (syntactic) heads, for which only ad-hoc solutions existed so far.

Journal ArticleDOI
TL;DR: The model is applied in various situations of teaching, but the main application is in the construction of a semantic distance suitable for analysing students’ difficulties with quadrilaterals.
Abstract: In this paper a model of semantic difference between various mathematical terms used in the classroom is built. As a basic tool in the construction of this model the concept of codification, which is a mapping σ of a space M of meaning into a set A of terms or expressions, is used. Given a set S of codifications (which may belong to the teacher or to the pupils), a semantic distance μ_S is defined in A by the formula μ_S(s, t) = sup { d(x, y) | x, y ∈ M with σ₁(x) = s, σ₂(y) = t for some σ₁, σ₂ ∈ S }, where d is a distance function in M. In particular the 'semantic width' of a term t, W_S(t) = μ_S(t, t), is generally different from zero. The model is applied in various situations of teaching, but the main application is in the construction of a semantic distance suitable for analysing students' difficulties with quadrilaterals. In this respect a 'Boolean' representation of types of quadrilaterals is combined with a 'tree-like' representation related to Aristotle's γένη, and as a result, a larger semantic space i...
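Written out in LaTeX, as reconstructed from the definitions in the abstract, the semantic distance and the semantic width read:

```latex
% S: a set of codifications sigma : M -> A; d: a distance on the
% meaning space M; s, t: terms in A (reconstructed from the abstract).
\[
  \mu_S(s,t) = \sup \{\, d(x,y) \mid x, y \in M,\
      \exists\, \sigma_1, \sigma_2 \in S :\
      \sigma_1(x) = s,\ \sigma_2(y) = t \,\}
\]
% The "semantic width" of a term t is its self-distance; it is generally
% nonzero because different codifications may attach t to different
% meanings:
\[
  W_S(t) = \mu_S(t,t)
\]
```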

Proceedings ArticleDOI
05 Aug 1994
TL;DR: In this paper, three heuristic methods, two of which use semantic information in text such as company names and their patterns, are proposed and tested on how accurately they identify the correct referents.
Abstract: Reference resolution is one of the important tasks in natural language processing. In this paper, the author first determines the referents and their locations for "dousha", literally meaning "the same company", which appears in Japanese newspaper articles. Secondly, three heuristic methods, two of which use semantic information in text such as company names and their patterns, are proposed and tested on how accurately they identify the correct referents. The proposed methods based on semantic patterns show high accuracy for reference resolution of "dousha" (more than 90%). This suggests that semantic pattern-matching methods are effective for reference resolution in newspaper articles.
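A minimal version of a recency-style heuristic (resolve "dousha" to the most recently mentioned company name) is sketched below. The company lexicon and the pure recency rule are simplifying assumptions; the paper's semantic-pattern methods add constraints well beyond recency.

```python
# Sketch: resolve the anaphor "dousha" ("the same company") to the most
# recently mentioned company name. The lexicon and the pure recency rule
# are simplifications of the paper's pattern-based methods.

COMPANIES = {"Toyota", "Sony", "Honda"}

def resolve_dousha(tokens):
    resolved, last_company = [], None
    for tok in tokens:
        if tok in COMPANIES:
            last_company = tok
            resolved.append(tok)
        elif tok == "dousha":
            resolved.append(last_company or "dousha")  # leave unresolved
        else:
            resolved.append(tok)
    return resolved

text = "Toyota raised prices . Sony said dousha will follow".split()
print(" ".join(resolve_dousha(text)))
# -> "Toyota raised prices . Sony said Sony will follow"
```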

01 Jan 1994
TL;DR: This research extends database management systems in their ability to provide semantic information support for geographic information systems (GIS) with a "Semantic Framework", which organizes issues and representation concepts related to semantic support, and emphasizes the importance of content analysis.
Abstract: This research extends database management systems in their ability to provide semantic information support for geographic information systems (GIS). A "Semantic Framework" organizes issues and representation concepts related to semantic support, and emphasizes the importance of content analysis. With GIS, the need for semantic information support centers around associating meaning with spatial object data. Data models of existing GIS, and an examination of the Spatial Data Transfer Standard, show that a lack of support for complex relationships between spatial objects and real-world information limits the ability of a system to support semantic information. Digital cartographic generalization exemplifies a problem that requires more availability of the meaning of spatial objects than systems typically have. A high level architecture for generalization enables a system to generalize data representing roads, which are real-world features prominent in maps. The architecture includes the use of a detailed roads data model, associated rules, and specialized feature conversion methods that enable the system to change from one representation of roads to another according to scale requirements. As a result, a system can manage multiple levels of representation for roads, which means that it can produce an appropriate display for any scale from a single, detailed data set. Spatial-object level operations simplify the specification of the conversion methods. These operations, derived from an analysis of published vector and raster generalization operators, define the spatial transformations that generalization produces.