Topic

Data access

About: Data access is a research topic. Over its lifetime, 13141 publications have been published within this topic, receiving 172859 citations.


Papers
Journal ArticleDOI
TL;DR: This paper proposes an auditable access control model, based on attribute-based access control, that manages the access control policy for private data through the request record, the response record, and the access record stored in the blockchain network.
Abstract: Internet of Things (IoT) devices are widely used in smart cities, intelligent medicine, and intelligent transportation, among other fields that facilitate people's lives, and they produce a large amount of private data. However, due to the mobility, limited performance, and distributed deployment of IoT devices, traditional access control methods cannot secure the access control process for private data in current IoT environments. To address these problems, this article proposes an auditable access control model, based on an attribute-based access control model, that manages the access control policy for private data through the request record, the response record, and the access record stored in the blockchain network. Additionally, a blockchain-based auditable access control system is proposed on top of this model, ensuring private data security in IoT environments and realizing effective management of and auditable access to these data. Experimental results show that the proposed system can maintain high throughput while ensuring private data security in real application scenarios in IoT environments.
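To make the record-keeping scheme concrete, here is a minimal Python sketch, assuming a hash-chained in-memory list as a stand-in for the blockchain network; all attribute names, the policy shape, and the three record types' payloads are invented for illustration, not taken from the paper.

import hashlib
import json
import time

ledger = []  # stand-in for the blockchain: an append-only, hash-chained list

def append_record(record_type, payload):
    """Append a request/response/access record, chained to the previous hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"type": record_type, "payload": payload,
            "time": time.time(), "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)
    return body

def abac_decide(subject_attrs, resource_attrs, policy):
    """Attribute-based decision: every policy clause must match."""
    return all(subject_attrs.get(k) == v for k, v in policy["subject"].items()) \
       and all(resource_attrs.get(k) == v for k, v in policy["resource"].items())

def request_access(subject, resource, policy):
    append_record("request", {"subject": subject["id"], "resource": resource["id"]})
    allowed = abac_decide(subject, resource, policy)
    append_record("response", {"subject": subject["id"], "allowed": allowed})
    if allowed:
        append_record("access", {"subject": subject["id"], "resource": resource["id"]})
    return allowed

# Example: a doctor may read sensor data from her own department.
policy = {"subject": {"role": "doctor", "dept": "cardiology"},
          "resource": {"dept": "cardiology"}}
subject = {"id": "u17", "role": "doctor", "dept": "cardiology"}
resource = {"id": "sensor-42", "dept": "cardiology"}
print(request_access(subject, resource, policy))  # True, with 3 ledger records

The point of the hash chain is auditability: any later tampering with a request, response, or access record invalidates every subsequent hash.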

43 citations

Proceedings ArticleDOI
10 Jun 2019
TL;DR: A new data-driven spatial index structure, the learned Z-order Model (ZM) index, is designed by combining the Z-order space-filling curve with a staged learning model; it significantly reduces memory cost and performs more efficiently than the R-tree in most scenarios.
Abstract: With the pervasiveness of location-based services (LBS), spatial data processing has received considerable attention in the research of database system management. Among various spatial query techniques, index structures play a key role in data access and query processing. However, existing spatial index structures (e.g., R-tree) mainly focus on partitioning data space or data objects. In this paper, we explore the potential to construct the spatial index structure by learning the distribution of the data. We design a new data-driven spatial index structure, namely learned Z-order Model (ZM) index, which combines the Z-order space filling curve and the staged learning model. Experimental results on both real and synthetic datasets show that our learned index significantly reduces the memory cost and performs more efficiently than R-tree in most scenarios.
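A toy sketch of the idea, assuming 2-D integer coordinates: points are mapped to Z-order (Morton) keys, and a single least-squares line (a stand-in for the paper's staged model) predicts each key's position in the sorted array; a max-error window bounds the final search. The helper names are invented for illustration.

def z_order(x, y, bits=16):
    """Interleave the bits of x and y into one Morton (Z-order) key."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

def build_index(points):
    """Sort keys, fit position ~ a*key + b, and record the worst-case error."""
    keys = sorted(z_order(x, y) for x, y in points)
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys))
    var = sum((k - mean_k) ** 2 for k in keys) or 1
    a = cov / var
    b = mean_p - a * mean_k
    err = max(abs(i - (a * k + b)) for i, k in enumerate(keys))
    return keys, a, b, int(err) + 2  # pad for float/int truncation

def lookup(index, x, y):
    """Predict a position, then search only the guaranteed error window."""
    keys, a, b, err = index
    k = z_order(x, y)
    guess = int(a * k + b)
    lo, hi = max(0, guess - err), min(len(keys), guess + err + 1)
    return k in keys[lo:hi]

idx = build_index([(i, i * 7 % 100) for i in range(100)])
print(lookup(idx, 3, 21))  # True: point (3, 21) is in the data

The memory saving the abstract claims comes from replacing tree nodes with a few model parameters plus an error bound; the real index stages several such models rather than fitting one line.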

43 citations

Patent
09 Aug 2001
TL;DR: A technique is provided for accessing data stored in separate databases, in which a communication system having first and second computer systems retrieves data from a primary or a secondary database.
Abstract: A technique is provided for accessing data stored in separate databases. A communication system having first and second computer systems is used to retrieve data from a primary or a secondary database. The primary and secondary databases are coupled to the first computer system. A data access controller automatically shifts the first computer system from retrieving data from the primary database to retrieving data from the secondary database when the system detects a communication error between the primary database and the first computer system.
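A minimal sketch of this failover behavior, assuming an illustrative connection object and exception type standing in for a real database driver:

class CommunicationError(Exception):
    pass

class DataAccessController:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary
        self.active = primary  # start by retrieving from the primary database

    def fetch(self, query):
        try:
            return self.active.execute(query)
        except CommunicationError:
            if self.active is self.primary:
                self.active = self.secondary  # shift on the detected error
                return self.active.execute(query)
            raise  # both databases unreachable

# Usage with illustrative stub connections:
class Stub:
    def __init__(self, fail):
        self.fail = fail
    def execute(self, query):
        if self.fail:
            raise CommunicationError(query)
        return ["row"]

ctrl = DataAccessController(Stub(fail=True), Stub(fail=False))
print(ctrl.fetch("SELECT 1"))  # shifts to the secondary, returns ['row']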

43 citations

Patent
08 Oct 2003
TL;DR: In this paper, a method is disclosed for gathering, processing, storing and reporting pharmacy data, where authorized users may issue requests to the data repository based on current and/or historical data.
Abstract: A method is disclosed for gathering, processing, storing and reporting pharmacy data. Data from individual pharmacies is transmitted regularly to a data repository via an electronic communications network. Upon receipt at the data repository, the data first must pass through an access security screen, wherein data failing to meet predetermined criteria are rejected. If the data are determined to be valid, they are added to a data warehouse database by a computer data server. Authorized users may issue requests to the data repository based on current and/or historical data. Such requests may be made at any time via the electronic communications network to create reports based on current data. The user first sends a request to the data repository for access. Once access to the data repository is granted, the user is able to obtain various types of information residing within the data warehouse. The amount and types of data available to the user may be limited by the user's predetermined security level clearance and need-to-know, in order to protect patient privacy. The data may be presented to the user in a variety of predetermined report formats, as established by the type of data and the function of the user.
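The two gates described here, an ingestion screen and a clearance-limited query path, can be sketched as follows; the field names, clearance levels, and visibility rules below are invented for illustration, not taken from the patent.

REQUIRED = {"pharmacy_id", "ndc", "quantity", "fill_date"}
FIELDS_BY_CLEARANCE = {
    "analyst": {"ndc", "quantity", "fill_date"},  # no patient-identifying fields
    "pharmacist": {"pharmacy_id", "ndc", "quantity", "fill_date", "patient_id"},
}

warehouse = []

def ingest(record):
    """Security screen: reject records missing predetermined required fields."""
    if not REQUIRED <= record.keys():
        return False  # rejected at the screen
    warehouse.append(record)
    return True

def report(clearance):
    """Return only the fields the user's clearance permits (privacy limit)."""
    visible = FIELDS_BY_CLEARANCE[clearance]
    return [{k: v for k, v in rec.items() if k in visible} for rec in warehouse]

ingest({"pharmacy_id": "P1", "ndc": "0002-1433", "quantity": 30,
        "fill_date": "2003-10-08", "patient_id": "X9"})
print(report("analyst"))  # patient_id and pharmacy_id are stripped out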

43 citations

DOI
01 Jan 2010
TL;DR: This dissertation describes the requirements for, and a domain-independent approach to, providing context-sensitive and user-adapted access to heterogeneous data sources, and describes a processing pipeline, consisting of three steps, which takes a user request as input and delivers a personal response.
Abstract: In a variety of domains, developers are struggling with the dilemma of how to provide a more personal service to their users. A more personal service can, for example, be facilitated by offering user-adapted search, generating recommendations, personalized content navigation, personalized user interfaces, etc. However, providing such functionality on top of a particular data set requires good knowledge of the relevant domain items (which can represent books, songs, TV programs, art pieces, etc.) as well as of the relevant users (in terms of their behavior, interests, preferences, etc. with respect to those domain items). In this dissertation, and more specifically in Chapter 3 and Chapter 4, we describe the requirements for, and a domain-independent approach to, providing context-sensitive and user-adapted access to heterogeneous data sources. This approach consists of three main parts: 1) Data Integration, 2) User Modeling, and 3) User-Adapted Data Access.

Chapter 5 focuses on the integration of information from various heterogeneous data sources. To provide user-adapted access, a good description of the relevant domain items is key: the more descriptive information we have about every item, the more raw material there is to, for example, compare different items, compare items with user profiles, deduce new information, etc. Unfortunately, in the real world, items often come poorly described. However, with the immense growth of available information on the Web, many different data sources (like IMDb, Wikipedia, social networks, etc.) exist and offer free access to their data. Using Semantic Web techniques, we describe how we can enrich the descriptive metadata of those domain items by, on the one hand, integrating and matching information from different external sources providing instance metadata and, on the other hand, taking relevant ontological background information into account.

Chapter 6 concentrates on the second part of our approach: the creation of an extensive model of the end user. Such a user model is the user's digital representation and encompasses all valuable user data we can obtain. Information can be provided explicitly by the user himself (e.g., the user states that he is 45 years old, male, capable of speaking three languages, fond of tennis, etc.) but also implicitly. Implicit feedback includes all the information the user gives away without realizing it, by means of his behavioral patterns (e.g., the user watches the news every day at 8, or always adds books from the same author to his favorites). However, user feedback (both explicit and implicit) can be hard to interpret, since it depends on a wide variety of parameters: numerous influences like mood, location, time, environment, and health mean that people can behave very differently at any given time. Therefore, every statement in the user model is contextualized. In other words, the constrained setting in which a specific user statement was valid, which we call the statement's context, is saved and used later to predict the user's interests accurately in any given situation. Further, since our approach depends on the quality and richness of the user model and new users usually start with an empty profile, we suffer from the so-called cold-start problem. To deal with this situation, we provide a number of strategies based on user statistics and stereotypes to alleviate the problem.

The third and last part of our approach encompasses the strategies to adapt any user request and provide a personalized set of results, based on both the integrated data structure describing the domain and the user model. Chapter 7 describes a processing pipeline, consisting of three steps, which takes a user request as input and delivers a personal response. The first step involves cleaning and conceptualizing the user's request with respect to the current domain. Secondly, the updated query is sent to the database to retrieve matching results; to find not only exact matches but also highly related results, the database automatically broadens the result space of the query in a controlled fashion, by reasoning over well-chosen semantic relations such as transitivity and synonymy. When matching results are retrieved, the last step filters them by following a set of predefined rules, which can include restrictions based on both ontological and user-model information. This pipeline is employed both to provide user-adapted search and to generate personal recommendations. However, systems dealing with potentially large amounts of data that additionally provide complex functionality like reasoning, user-adapted search, data integration, and recommendations require extra care in terms of their database setup. Moreover, efficiency in terms of querying speed is vital for any system's long-term success. Therefore, in Chapter 8, we introduce a number of optimizations to improve the efficiency of the database in terms of size and querying speed.

To illustrate our approach, we apply it in the television domain, which we introduce in Chapter 2. Together with Stoneroos, we developed a cross-platform application called iFanzy, which brings personalized access to television programs to the user via a set-top box interface, a Web site, and an iPhone application. All three platforms are synchronized and behave as one ubiquitous application, supporting the user in putting together the best possible television experience by finding exactly those TV programs that fit the user best. In Chapter 9, we give an overview of these three platforms in terms of functionality and user interface. Furthermore, we evaluate the interface of the iFanzy Web portal using a Cognitive Walkthrough, the Thinking Aloud method, and a Heuristic Evaluation. Through the commercial availability of iFanzy, we were able to further evaluate our approach, focusing on recommendation quality and user satisfaction. In Chapter 10, we elucidate our evaluation, which features a group of 60 people using the iFanzy Web interface for about two full weeks. From the data of this evaluation, we investigate the influence of both explicit and implicit user feedback on the predictive power of the system and the accuracy of the generated recommendations. To further improve the quality of the recommendation strategy, Chapter 10 concludes with an approach to improve the serendipity of the recommender system, leading to more surprising or serendipitous discoveries. Reusing the data from the previous evaluation, we describe a number of measurements to quantify the degree of serendipity in the recommendations of a recommender system.
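The three-step pipeline of Chapter 7 (conceptualize, broaden, filter) can be illustrated with a compact sketch; the tiny in-memory synonym map, catalog, and age-based rule below are invented stand-ins for the dissertation's ontology, database, and predefined rules.

SYNONYMS = {"movie": {"film"}, "film": {"movie"}}
CATALOG = [
    {"title": "News at 8", "concept": "news", "age_rating": 0},
    {"title": "Friday Film", "concept": "film", "age_rating": 16},
]

def conceptualize(request):
    """Step 1: clean the raw request and map it onto a domain concept."""
    return request.strip().lower()

def broaden(concept):
    """Step 2: expand the query over semantic relations (synonymy here)."""
    return {concept} | SYNONYMS.get(concept, set())

def filter_by_rules(items, user_model):
    """Step 3: apply predefined rules against the user model."""
    return [it for it in items if it["age_rating"] <= user_model["age"]]

def pipeline(request, user_model):
    concepts = broaden(conceptualize(request))
    matches = [it for it in CATALOG if it["concept"] in concepts]
    return filter_by_rules(matches, user_model)

print(pipeline(" Movie ", {"age": 45}))  # finds 'Friday Film' via the synonym

In the dissertation the broadening step reasons over richer relations (e.g., transitivity in an ontology) and the rules draw on contextualized user-model statements; the sketch only fixes the shape of the three stages.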

43 citations


Network Information
Related Topics (5)
Software: 130.5K papers, 2M citations (86% related)
Cloud computing: 156.4K papers, 1.9M citations (86% related)
Cluster analysis: 146.5K papers, 2.9M citations (85% related)
The Internet: 213.2K papers, 3.8M citations (85% related)
Information system: 107.5K papers, 1.8M citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    51
2022    125
2021    403
2020    721
2019    906
2018    816