
Showing papers on "Data element" published in 1993


Patent
04 May 1993
TL;DR: In this paper, the security of data elements which represent an industrial process, which are manipulated by users on a data processing system, and in which the industrial process includes a series of industrial process steps, is controlled by permitting groups of users to access predetermined data elements based on the industrial process step at which the industrial process is currently active.
Abstract: The security of data elements which represent an industrial process, which are manipulated by users on a data processing system and in which the industrial process includes a series of industrial process steps, is controlled by permitting groups of users to access predetermined data elements based on the industrial process step at which the industrial process is currently active. A user is prevented from accessing the requested element if the industrial process is not at an industrial process step corresponding to one of the industrial process steps for which the user has authority to access the data element. Thus, access to data is prevented based on the status of the data, in addition to the type of data. When selected database elements are associated with one of many locations, access is also denied to a user based on the location. Security access based on status and location may be provided in response to a change in the current industrial process step. Access authority to the data elements is changed compared to the access authority at the immediately preceding industrial process step based on mappings in one or more tables. Improved security of data elements which represent an industrial process is thereby provided.
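A minimal sketch of the step- and location-keyed access check the abstract describes; the table layout, group names, and process steps below are illustrative assumptions, not taken from the patent.

```python
# A minimal sketch of step- and location-keyed access control; the table
# layout, group names, and process steps are illustrative assumptions.

# (process_step, data_element) -> user groups allowed at that step
ACCESS_TABLE = {
    ("design", "bill_of_materials"): {"engineering"},
    ("assembly", "bill_of_materials"): {"engineering", "manufacturing"},
    ("shipping", "bill_of_materials"): {"logistics"},
}

# Optional location restriction per data element.
LOCATION_TABLE = {
    "bill_of_materials": {"plant_a", "plant_b"},
}

def can_access(user_group, location, current_step, data_element):
    """Grant access only if the user's group is authorized at the step the
    process is currently in, and the request comes from an allowed location."""
    groups = ACCESS_TABLE.get((current_step, data_element), set())
    locations = LOCATION_TABLE.get(data_element, {location})
    return user_group in groups and location in locations

print(can_access("manufacturing", "plant_a", "assembly", "bill_of_materials"))  # True
print(can_access("manufacturing", "plant_a", "design", "bill_of_materials"))    # False
```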

56 citations


Patent
Oscar C. Strohacker
23 Dec 1993
TL;DR: An apparatus for compressing data including an apparatus for using a received data element as an address to a location in a memory, an apparatus for determining whether the addressed memory location contains a first record of a first matching data element, and an apparatus for generating a pointer to the first matching data element, as mentioned in this paper.
Abstract: An apparatus for compressing data including an apparatus for using a received data element as an address to a location in a memory, an apparatus for determining whether the addressed memory location contains a first record of a first matching data element, and an apparatus for generating a pointer to the first matching data element. In addition, a method for compressing data including the steps of using a received data element as an address to a location in a memory, determining whether the addressed memory location contains a first record of a first matching data element, and generating a pointer to the first matching data element.
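A rough sketch of the address-lookup idea above: each received element indexes a table slot, and a match is replaced by a pointer to its first occurrence. The table size, hashing, and output encoding are assumptions for illustration, not the patented apparatus.

```python
# Rough sketch of the address-lookup compression idea: each received element
# indexes a table slot; when the slot already holds a matching record, a short
# pointer to the first occurrence is emitted instead of the element itself.

TABLE_SIZE = 256

def compress(elements):
    table = [None] * TABLE_SIZE           # memory addressed by the element value
    out = []
    for pos, elem in enumerate(elements):
        addr = hash(elem) % TABLE_SIZE    # received element used as an address
        record = table[addr]
        if record is not None and record[0] == elem:
            out.append(("ptr", record[1]))    # pointer to first matching element
        else:
            table[addr] = (elem, pos)         # first record of this element
            out.append(("lit", elem))
    return out

print(compress(["A", "B", "A", "A", "C", "B"]))
```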

43 citations


Patent
Robert Yung
04 Jun 1993
TL;DR: In this paper, a data storage structure and its complementary selection data storage structure are provided with a complementary predictive annotation storage structure comprising a number of corresponding predictive annotation vectors, each having a number of predictive annotation tuples, to retrieve data from a data block tuple of a data vector.
Abstract: A data storage structure and its complementary selection data storage structure are provided with a complementary predictive annotation storage structure comprising a number of corresponding predictive annotation vectors, each having a number of predictive annotation tuples. To retrieve data from a data block tuple of a data vector, the data vector and its corresponding data selection and predictive annotation vectors are read out concurrently. Determination is made as to whether there is a selection hit and a prediction hit. Concurrently, one of the predictive annotation tuples is selected and recorded for the next access based on the predictive annotation selected and recorded in the previous access. Also concurrently, a data block tuple is selected based on the predictive annotation selected and recorded in the previous access, and a data element is selected from the selected data block tuple based on the access key, without waiting for the determination results. Remedial actions are subsequently taken if it is determined that either there is no selection hit or no prediction hit. Additionally, the data vector, the selection vector, the predictive annotation vector and the previously recorded predictive annotations are conditionally updated depending on the selection and prediction hit determinations.
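Read as a speculative, prediction-guided lookup in an associative structure, the scheme resembles way prediction. The sketch below is only one possible reading of the abstract, with invented class and field names; it returns the predicted block immediately and falls back when the selection or prediction check fails.

```python
# One possible reading of the abstract, sketched as prediction-guided lookup
# in an associative structure. Class and field names are invented; the real
# patent logic is not reproduced here.

class PredictedSet:
    def __init__(self, ways):
        self.tags = [None] * ways        # selection data ("selection vector")
        self.blocks = [None] * ways      # data blocks ("data vector")
        self.predicted_way = 0           # predictive annotation from the last access

    def read(self, tag, offset):
        way = self.predicted_way
        # Speculative result, produced without waiting for the selection check.
        speculative = self.blocks[way][offset] if self.blocks[way] else None

        # Selection check (tag comparison) runs conceptually in parallel.
        try:
            actual_way = self.tags.index(tag)
        except ValueError:
            return None                  # no selection hit: remedial fetch needed

        prediction_hit = (actual_way == way)
        self.predicted_way = actual_way  # record the annotation for the next access
        if prediction_hit:
            return speculative
        return self.blocks[actual_way][offset]   # remedial re-selection

s = PredictedSet(ways=2)
s.tags = ["a", "b"]
s.blocks = [[10, 11], [20, 21]]
print(s.read("b", 1))   # misprediction on the first access, still returns 21
print(s.read("b", 0))   # prediction now hits
```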

18 citations


Patent
Paul A. Fisher
03 Mar 1993
TL;DR: In this article, an image processing apparatus includes a device for sorting the data elements according to each of the first positional addresses and an additional device that sorts the output for the first positional addresses according to the second positional addresses.
Abstract: An image processing apparatus processes image data representative of an image for a plurality of pixels of a display. The pixels have positions designated by locational addresses. The image data includes data elements and a first and second positional address for each data element corresponding to the locational addresses indicating to which pixel each data element corresponds. The image processing apparatus includes a device for sorting the data elements according to each of the first positional addresses. The device generates an output including the data elements and the second positional addresses of each data element in a first positional address order. An additional device sorts the output for the first positional addresses according to the second positional addresses. The additional device generates a display output of the data elements in a first positional address/second positional address order. A method for processing this image data includes sorting the data elements and second positional addresses of each data element according to the first positional addresses thereof. An output is generated including the data elements and their second positional addresses for each data element in a first positional order. The output is sorted according to the second positional addresses for each data element and a display output is generated of data elements in a first positional/second positional order.
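A small sketch of the two-stage ordering described above, using tuples of (first address, second address, value); the field layout and sample values are illustrative assumptions.

```python
# Two-stage ordering of image data elements; tuples are
# (first_address, second_address, value) and the sample values are invented.

pixels = [(2, 1, "c"), (1, 3, "b"), (1, 0, "a"), (2, 0, "d")]

# Stage 1: sort the data elements by the first positional address.
by_first = sorted(pixels, key=lambda p: p[0])

# Stage 2: order that output by the second positional address while keeping
# the first-address grouping, giving first/second positional address order.
display_order = sorted(by_first, key=lambda p: (p[0], p[1]))

print(display_order)   # [(1, 0, 'a'), (1, 3, 'b'), (2, 0, 'd'), (2, 1, 'c')]
```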

18 citations


Book ChapterDOI
26 Oct 1993
TL;DR: This work defines a similar lattice of displays and studies visualization processes as functions from data lattices to display lattices; such functions can be applied to visualize data objects of all data types and are thus polymorphic.
Abstract: The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic.

7 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: The emerging Standard for The Exchange of Product Model Data (STEP), being developed in the International Organization for Standardization (ISO), addresses this need by providing information models, called application protocols, which clearly and unambiguously describe data.
Abstract: The problem of sharing data has many facets. The need to share data across multiple enterprises, different hardware platforms, different data storage paradigms and systems, and a variety of network architectures is growing. The emerging Standard for The Exchange of Product Model Data (STEP), being developed in the International Organization for Standardization (ISO), addresses this need by providing information models, called application protocols, which clearly and unambiguously describe data. The validity of these information models is essential for success in sharing data in a highly automated engineering environment.

6 citations


Journal ArticleDOI
TL;DR: A frame-based metadata design is discussed, in which the relationships between the data objects and the data components are represented using a semantic network and frames to allow for a more complete definition of data and the objects' relationships.
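Since no abstract is shown, the following is only a toy illustration of what a frame-style metadata record linking data objects and components via semantic-network relations might look like; every slot and relation name is invented.

```python
# Toy frame for a data object; every slot and relation name here is invented
# purely to illustrate the frame/semantic-network idea.

patient_weight_frame = {
    "is_a": "data_object",
    "slots": {
        "value": {"type": "float", "unit": "kg"},
        "source": {"type": "string", "default": "scale_reading"},
    },
    # Semantic-network edges relating the data object to its components.
    "relations": [
        ("value", "measured_by", "scale_reading"),
        ("value", "component_of", "patient_record"),
    ],
}

print(patient_weight_frame["slots"]["value"]["unit"])   # kg
```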

5 citations


02 Jan 1993
TL;DR: It is concluded that the layer-structure of GIS constrains metadata management on the level of individual features and a data model for incorporating metadata with a spatial database is proposed.
Abstract: This research identifies the relations between data type, data object, and associated metadata. A framework relating metadata specifics to different data objects will be developed. Three types of metadata are identified: specifications, history and findings. They have different behavior patterns as well as varying representation requirements. Therefore, querying metadata calls for a variety of techniques. This research also examines a specific domain of data--soils data, in an attempt to identify domain-specific problems in relation to metadata queries. It is found that high degrees of uncertainty are involved in delineation of soil map units. Assessing accuracy under this condition is relatively difficult. Instead of searching for a precise measure for accuracy, educating users on the artifacts of soil boundaries may be a more essential and practical approach to the problem. Finally, this research addresses the design problem of embedding metadata within a spatial data base. It is concluded that the layer-structure of GIS constrains metadata management on the level of individual features. A data model for incorporating metadata with a spatial database is proposed.
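An illustrative sketch of the three metadata categories the work identifies (specifications, history, findings) attached to a soils layer; the class layout and sample values are assumptions, not the data model actually proposed.

```python
# Sketch of specifications / history / findings metadata attached to a soils
# layer; the class layout and sample values are assumptions.

from dataclasses import dataclass, field

@dataclass
class Metadata:
    specifications: dict = field(default_factory=dict)  # e.g. scale, projection
    history: list = field(default_factory=list)         # lineage of processing steps
    findings: list = field(default_factory=list)        # accuracy notes, known artifacts

@dataclass
class SoilLayer:
    name: str
    features: list
    # Layer-level metadata only: per-feature metadata is what the layer
    # structure of GIS makes hard to manage.
    metadata: Metadata

soils = SoilLayer(
    name="county_soils",
    features=["map_unit_12", "map_unit_47"],
    metadata=Metadata(
        specifications={"scale": "1:24000", "projection": "UTM"},
        history=["digitized 1991", "edge-matched 1992"],
        findings=["soil map unit boundaries carry high positional uncertainty"],
    ),
)
print(soils.metadata.findings[0])
```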

4 citations


Book ChapterDOI
06 Sep 1993
TL;DR: A Context Model that unifies detail, meta and contextual data is proposed, and two of its main features, nested structure and role dynamism, are discussed.
Abstract: In this paper, we define Context as a data structure that gives the meaning and the environmental information of the detail data, and discuss two of its main features: nested structure and role dynamism. Then we propose a Context Model that unifies detail, meta and contextual data. We also discuss a Context-oriented query mechanism and illustrate the model by query examples.
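A small sketch of how nested contexts wrapping detail data might behave; the class and attribute names are guesses for illustration, not the paper's actual Context Model.

```python
# A guess at nested contexts wrapping detail data; the class and attribute
# names are illustrative, not the paper's actual model.

class Context:
    def __init__(self, meaning, environment, parent=None):
        self.meaning = meaning          # what the detail data denotes
        self.environment = environment  # environmental information (time, place, unit)
        self.parent = parent            # nested structure: a context inside a context

    def resolve(self, key):
        """Look up environmental information, walking outward through enclosing contexts."""
        if key in self.environment:
            return self.environment[key]
        return self.parent.resolve(key) if self.parent else None

survey = Context("field survey", {"year": 1993, "region": "midwest"})
reading = Context("soil moisture", {"unit": "percent"}, parent=survey)
print(reading.resolve("unit"), reading.resolve("year"))   # percent 1993
```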

4 citations


Journal Article
TL;DR: The overall design of the CIS at CPMC is heavily influenced by the decision support component, and the developers believe that data are most valuable when available for both human and automated decision support.
Abstract: The overall design of the CIS at CPMC is heavily influenced by the decision support component. The type of automated decision support being implemented dictates the need for highly structured or coded data. The value of decision support systems has been well documented. The current reliance on free-text documents is natural and a rewarding first step to a more valuable mix of coded and free text. While the health care provider might find the textual comments of the various reports extremely useful, the capability of an automated system to vigilantly review every data element for trends and anomalies is becoming invaluable in today's ever more complex health care delivery environment. Other approaches such as optical imaging systems would facilitate human decision support, but do not supply data in a format that can be processed by automated decision support systems. The developers of the CIS at CPMC believe that data are most valuable when available for both human and automated decision support.

3 citations


Journal ArticleDOI
David J. Hand
TL;DR: A distinction is made between context-specific metadata and context-free metadata, illustrated in depth with the example of measurement scale.
Abstract: Data consist of more than merely the numbers representing the magnitudes of attributes of the objects being investigated. The other component of data is the meaning of the numbers, the context in which they arose: the "metadata". A distinction is made between context-specific metadata and context-free metadata. The latter is illustrated in depth with the example of measurement scale. Brief discussions are given of general issues relating to metadata research.
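A toy illustration of measurement scale acting as context-free metadata: the scale constrains which summary statistics are meaningful. The lookup table follows the standard Stevens scale hierarchy and is not drawn from the article itself.

```python
# Measurement scale as context-free metadata: the scale determines which
# summary statistics are meaningful. The table follows the usual Stevens
# hierarchy and is not taken from the article.

ALLOWED_STATS = {
    "nominal": {"mode"},
    "ordinal": {"mode", "median"},
    "interval": {"mode", "median", "mean"},
    "ratio": {"mode", "median", "mean", "geometric_mean"},
}

def is_meaningful(statistic, scale):
    return statistic in ALLOWED_STATS[scale]

print(is_meaningful("mean", "ordinal"))   # False: a mean of ordinal codes is not meaningful
print(is_meaningful("mean", "interval"))  # True
```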

Proceedings Article
01 Jan 1993
TL;DR: MR-CDF is a system for managing multi-resolution scientific data sets, an extension of the popular CDF (Common Data Format) system that provides a simple functional interface to client programs for storage and retrieval of data.
Abstract: MR-CDF is a system for managing multi-resolution scientific data sets. It is an extension of the popular CDF (Common Data Format) system. MR-CDF provides a simple functional interface to client programs for storage and retrieval of data. Data is stored so that low resolution versions of the data can be provided quickly. Higher resolutions are also available, but not as quickly. By managing data with MR-CDF, an application can be relieved of the low-level details of data management, and can easily trade data resolution for improved access time.
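A hypothetical sketch of a multi-resolution accessor in the spirit described above; the class and method names are invented for illustration and are not the MR-CDF interface.

```python
# Hypothetical multi-resolution accessor in the spirit of the abstract; the
# class and method names are invented and are not the MR-CDF interface.

import numpy as np

class MultiResVariable:
    def __init__(self, data, levels=3):
        # Precompute coarser versions by striding; level 0 is full resolution.
        self.levels = [np.asarray(data)[::2 ** k] for k in range(levels)]

    def read(self, level):
        """Coarse levels return quickly because they hold far less data."""
        return self.levels[min(level, len(self.levels) - 1)]

var = MultiResVariable(np.arange(16.0))
print(var.read(2))   # coarse, fast preview
print(var.read(0))   # full resolution, larger read
```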

01 Mar 1993
TL;DR: A standard is proposed describing a portable format for electronic exchange of data in the physical sciences to allow the user to make her own choices regarding strategic tradeoffs to achieve the performance desired in her local environment.
Abstract: A standard is proposed describing a portable format for electronic exchange of data in the physical sciences. Writing scientific data in a standard format has three basic advantages: portability; the ability to use metadata to aid in interpretation of the data (understandability); and reusability. An improperly formulated standard format tends towards four disadvantages: (1) it can be inflexible and fail to allow the user to express his data as needed; (2) reading and writing such datasets can involve high overhead in computing time and storage space; (3) the format may be accessible only on certain machines using certain languages; and (4) under some circumstances it may be uncertain whether a given dataset actually conforms to the standard. A format was designed which enhances these advantages and lessens the disadvantages. The fundamental approach is to allow the user to make her own choices regarding strategic tradeoffs to achieve the performance desired in her local environment. The choices made are encoded in a specific and portable way in a set of records. A fully detailed description and specification of the format is given, and examples are used to illustrate various concepts. Implementation is discussed.
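A toy sketch of the core idea that the writer's own tradeoff choices are recorded in the dataset itself, so any reader can interpret the data; the header fields, record layout, and file name below are assumptions for illustration, not the proposed standard.

```python
# Toy version of "encode the writer's choices in records": the header record
# describes the tradeoffs chosen by the writer; field names are assumptions.

import json
import struct

def write_dataset(path, values, precision="f4", byte_order="<"):
    header = {
        "format_version": 1,
        "precision": precision,    # writer's tradeoff: storage space vs accuracy
        "byte_order": byte_order,  # writer's tradeoff: local speed vs portability
        "count": len(values),
        "units": "kelvin",
    }
    fmt = byte_order + ("f" if precision == "f4" else "d") * len(values)
    encoded = json.dumps(header).encode()
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(encoded)) + encoded)  # self-describing header record
        f.write(struct.pack(fmt, *values))                  # data record

write_dataset("sample.dat", [273.15, 293.15])
```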

Proceedings ArticleDOI
04 Aug 1993
TL;DR: An architecture for an object-oriented semantic data flow system (OSDF) is presented; it concentrates on scientific data representation and access in a data flow system, with a Scientific Data Model (SDM) put forward to model scientific data sets.
Abstract: At present, most visualization tools have applied the traditional flat sequential file for handling scientific data. This is inefficient and ineffective in terms of storage, access and ease of use for large, complex data sets, especially for applications like scientific visualization. The available data models for scientific visualization, including CDF, netCDF and HDF, only support some common data types. The relationship among data is not considered, yet it is important for revealing the insight of a scientific data set. In this paper, we present an architecture for an object-oriented semantic data flow system (OSDF). OSDF concentrates on scientific data representation and access in a data flow system. A Scientific Data Model (SDM) is put forward to model scientific data sets. In SDM, the semantics of data are described by the Association and the Data Constructor: an Association describes the relationship among data, and a Data Constructor constructs new data types. All data objects of an application are stored in an object base, where data are organized and accessed by their semantics, and an interface to access data in the object base is supplied. To attain the best visualization effect, rules and principles for selecting visualization techniques are integrated into the data objects.
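An illustrative sketch of SDM-style data objects linked by associations and built with a data constructor; the terminology follows the abstract, but the implementation is entirely an assumption.

```python
# Illustrative sketch of SDM-style data objects, associations, and a data
# constructor; terminology follows the abstract, implementation is assumed.

class DataObject:
    def __init__(self, name, values):
        self.name = name
        self.values = values
        self.associations = []              # semantic links to other data objects

    def associate(self, relation, other):
        self.associations.append((relation, other))

def data_constructor(name, *components):
    """Build a new composite data type from existing data objects."""
    composite = DataObject(name, [c.values for c in components])
    for c in components:
        composite.associate("component", c)
    return composite

grid = DataObject("grid", [[0, 0], [0, 1], [1, 0], [1, 1]])
temps = DataObject("temperature", [290.0, 291.5, 289.8, 290.2])
temps.associate("defined_on", grid)               # Association: relationship among data
field = data_constructor("temperature_field", grid, temps)
print([rel for rel, _ in field.associations])     # ['component', 'component']
```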


Proceedings ArticleDOI
O. Graf, M. Jones, F. Sisco
26 Apr 1993
TL;DR: The prototype system demonstrates an approach to issues of data ingestion, data restructuring, physical data models, the relationship of file structure to data system performance, data product generation, data transfer to remote users, data subset extraction, data browsing, and user interface.
Abstract: Approaches to the issues of data ingestion, data restructuring, physical data models, the relationship of file structure to data system performance, data product generation, data transfer to remote users, data subset extraction, data browsing, and user interface have been examined. The High Performance Data System architecture provides an environment for bringing together the technologies of mass storage, large bandwidth data networks, high-performance data processing, and intelligent data access. The prototype system demonstrates an approach to these issues. In addition, the design process has defined some important requirements for the mass storage file system, such as logical grouping of files, aggregate file writes, and multiple dynamic storage device hierarchies.

Book ChapterDOI
Karen L. Ryan
26 Oct 1993
TL;DR: It is argued that appropriate treatment of metadata such as theory dependencies and implementation dependencies is critical to the long term success and extensibility of scientific data systems.
Abstract: This paper discusses integration issues and metadata requirements exposed by integrating independently developed molecular simulation codes into a single package. We discuss the use of a structured data model as compared to programming language data structures for data integration. As an example we consider the architecture of a commercially available simulation and visualization system for quantum chemistry. The system architecture currently uses a fairly low level approach to integration using a physical integration scheme much like CDF or HDF. Extensions and trade-offs in moving towards a more structured model are presented. We also present requirements for metadata in scientific data systems. Issues of theory dependencies and implementation dependencies must be addressed when integrating scientific data systems. We argue that appropriate treatment of metadata such as theory dependencies and implementation dependencies is critical to the long term success and extensibility of scientific data systems.