
Showing papers on "Database-centric architecture" published in 2019


Journal ArticleDOI
TL;DR: This work proposes a way to synthesize multi-level, multi-perspective candidate architectures and to explore them across the different layers and perspectives, and shows that it is possible to synthesize and explore optimal candidate architectures for two highly configurable automotive sub-systems.
Abstract: In industry, evaluating candidate architectures for automotive embedded systems is routinely done during the design process. Today’s engineers, however, are limited in the number of candidates that they are able to evaluate in order to find the optimal architectures. This limitation results from the difficulty in defining the candidates as it is a mostly manual process. In this work, we propose a way to synthesize multi-level, multi-perspective candidate architectures and to explore them across the different layers and perspectives. Using a reference model similar to the EAST-ADL domain model but with a focus on early design, we explore the candidate architectures for two case studies: an automotive power window system and the central door locking system. Further, we provide a comprehensive set of question templates, based on the different layers and perspectives, that engineers can ask to synthesize only the candidates relevant to their task at hand. Finally, using the modeling language Clafer, which is supported by automated backend reasoners, we show that it is possible to synthesize and explore optimal candidate architectures for two highly configurable automotive sub-systems.

19 citations
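
The paper itself synthesizes candidates with Clafer and its automated reasoners; as a rough, language-agnostic illustration of the same design-space-exploration idea, the Python sketch below exhaustively enumerates mappings of functions onto ECUs under a capacity constraint and ranks them by a simple cost. All function names, ECUs and numbers are hypothetical and not taken from the paper's case studies.

```python
from itertools import product

# Hypothetical functional components and hardware nodes
# (illustrative only; not the paper's power-window or door-lock models).
functions = {"switch_input": 2, "motor_control": 3, "pinch_detect": 4}  # load units
ecus = {"ecu_door": 6, "ecu_body": 8}                                   # capacity units

def candidates():
    """Enumerate every mapping of functions to ECUs that respects ECU capacity."""
    names = list(functions)
    for assignment in product(ecus, repeat=len(names)):
        mapping = dict(zip(names, assignment))
        used = {e: 0 for e in ecus}
        for f, e in mapping.items():
            used[e] += functions[f]
        if all(used[e] <= ecus[e] for e in ecus):
            # cost: number of ECUs actually used (a stand-in for a real objective)
            yield mapping, sum(1 for e in ecus if used[e] > 0)

best = min(candidates(), key=lambda c: c[1])
print("optimal candidate:", best)
```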


Proceedings ArticleDOI
23 Oct 2019
TL;DR: This paper aims to further explore how the Named Data Networking (NDN), a future Internet architecture, addresses the challenges of IoUT and can be adapted to potentially provide a more secure, simplified, and efficient implementation of IoUT.
Abstract: The Internet of Underwater Things (IoUT) advances our ability in exploring oceans, lakes and rivers through multiple communication technologies that connect stationary and mobile nodes underwater, at the surface and in the sky. However, characteristics such as low data rate, long propagation delay, energy constraints, mobility and sparsity of underwater communications remain major challenges to the potential benefits that IoUT can bring to data availability and data sciences. This paper aims to further explore how the Named Data Networking (NDN), a future Internet architecture, addresses the challenges of IoUT and can be adapted to potentially provide a more secure, simplified, and efficient implementation of IoUT. The IoUT network and application semantics are aligned with the data-centric communication model of NDN, which enables operators to deploy and configure networks more easily, and developers to focus more on "things" and data. The paper starts by introducing new challenges in IoUT and then illustrates in detail, with simple examples, how to apply the NDN architecture to IoUT and how to enable additional functionalities required by IoUT.

10 citations
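
NDN retrieves data by hierarchical name rather than by host address. The minimal Python sketch below (the name hierarchy and values are assumptions, not from the paper) shows the basic pattern: an underwater sensor publishes a reading under a hierarchical name, and a consumer's Interest is satisfied by any cached Data matching the name prefix.

```python
# Minimal sketch of name-based retrieval in the spirit of NDN;
# the "/iout/..." name hierarchy is hypothetical.
content_store = {}  # name -> data, a stand-in for an NDN node's cache

def publish(name: str, payload: bytes) -> None:
    """A sensor (producer) makes a Data packet available under a name."""
    content_store[name] = payload

def express_interest(prefix: str):
    """A consumer asks for data by name; any cached Data under the prefix satisfies it."""
    for name, payload in content_store.items():
        if name.startswith(prefix):
            return name, payload
    return None  # in real NDN the Interest would be forwarded upstream

publish("/iout/pacific/buoy7/temperature/2019-10-23T12:00", b"14.2C")
print(express_interest("/iout/pacific/buoy7/temperature"))
```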


01 Jan 2019
TL;DR: In this article, the authors present an edge-based data-centric architecture for the Internet of Things (IoT), which consists of distributed computing nodes, forming an overlay that enables data exchange between IoT services running on any node.
Abstract: The oil of the Internet of Things (IoT) is data. Consequently, a data-centric or name-based design fits the challenges of the IoT very well. Especially when looking at edge-based approaches, introducing a data-centric Internet architecture becomes possible, as it does not require any changes at the core. Scalability and latency issues also play a smaller role at the edge, alleviating some of the problems of data-centric architectures. In this paper, we present an edge-based data-centric architecture for the IoT. Our system architecture consists of distributed computing nodes. We show how they can manage themselves, forming an overlay that enables data exchange between IoT services running on any node. The core of our abstraction is a hierarchical addressing scheme, and we show how it enables complex service discovery. A key feature of our solution is using data as the interface to services, and we show how we solve the challenge of unifying interfaces. We evaluate our solution from three perspectives: usability, performance in terms of latency, and scalability in terms of throughput.

7 citations
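
A minimal sketch of the hierarchical addressing idea, assuming a simple slash-separated address space and hypothetical node names: each overlay node is responsible for an address prefix, data exchange between services reduces to put/get on addresses, and service discovery becomes a prefix query.

```python
# Minimal sketch of an edge overlay with a hierarchical address space.
# Node names and addresses are hypothetical.
overlay = {                       # address prefix -> responsible edge node
    "/site1/floor1": "edge-node-A",
    "/site1/floor2": "edge-node-B",
}
node_data = {"edge-node-A": {}, "edge-node-B": {}}

def responsible_node(address: str) -> str:
    """Longest-prefix match decides which overlay node serves an address."""
    prefixes = [p for p in overlay if address.startswith(p)]
    return overlay[max(prefixes, key=len)]

def put(address: str, value) -> None:
    node_data[responsible_node(address)][address] = value

def get(address: str):
    return node_data[responsible_node(address)].get(address)

def discover(prefix: str) -> list:
    """Service discovery as a prefix query across all overlay nodes."""
    return [a for data in node_data.values() for a in data if a.startswith(prefix)]

put("/site1/floor2/room21/temperature", 21.5)   # an IoT service offers data
print(get("/site1/floor2/room21/temperature"))  # another service consumes it
print(discover("/site1/floor2"))
```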


Proceedings ArticleDOI
01 Oct 2019
TL;DR: This paper presents a novel design approach that ensures predictable timing and resource sharing while focusing on a minimalist and modular design, and shows how it fits with a state-of-the-art middleware while keeping the complexity as small as possible.
Abstract: Deterministic communication is a key challenge for modern embedded real-time systems. This is especially true for the automotive domain, where safety-critical functions are distributed over multiple electronic control units (ECUs) across the vehicle. To handle the increased complexity together with the higher data volume, Ethernet is considered as a universal medium. Consequently, a great effort has been spent on making Ethernet predictable to satisfy timing and safety requirements. However, the communication stacks (COM-stacks) that build the bridge between applications and the network did not receive the same attention. Until now they have been treated as legacy software and overloaded with additional features to support the new multi-level protocols. Moreover, with the shift to automated vehicles, communication requirements will change fundamentally, transforming the car into a data-centric system. In this paper, we highlight the communication requirements of future autonomous cars and show why today's COM-stack designs are not suited to meet them. We present a novel design approach that ensures predictable timing and resource sharing while focusing on a minimalist and modular design. We also show how it fits with a state-of-the-art middleware while keeping the complexity as small as possible.

6 citations


Journal ArticleDOI
TL;DR: This article proposes a data-centric UML profile for wireless sensor nodes and validates it through a computer-aided software engineering (CASE) tool implementation and a case study in precision agriculture and smart farming.
Abstract: Modelling WSN data behaviour is relevant because it would allow evaluating the capacity of an application to meet user needs; moreover, it could enable transparent integration with different data-centric information systems. Therefore, this article proposes a data-centric UML profile for the design of wireless sensor nodes from the user's point of view, capable of representing the data gathered and delivered by the node. The profile considers different characteristics and configurations of frequency, aggregation, persistence and quality at the level of the wireless sensor nodes. Furthermore, the article validates the UML profile through a computer-aided software engineering (CASE) tool implementation and one case study, centred on the data collected by a real WSN implementation for precision agriculture and smart farming.

5 citations
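
The profile describes a node's delivered data in terms of frequency, aggregation, persistence and quality. As a minimal sketch of how such stereotype attributes could be captured programmatically (the field names and example values are assumptions, not the profile's actual stereotypes):

```python
from dataclasses import dataclass

@dataclass
class SensorDataSpec:
    """User-view description of the data a wireless sensor node delivers.
    Field names are illustrative stand-ins for the profile's stereotype attributes."""
    name: str
    sampling_frequency_hz: float   # how often the value is gathered
    aggregation: str               # e.g. "mean over 6 samples" or "none"
    persistence_s: int             # how long the node keeps the value locally
    quality: str                   # e.g. required accuracy or confidence

soil_moisture = SensorDataSpec(
    name="soil_moisture",
    sampling_frequency_hz=1 / 600,          # one sample every 10 minutes
    aggregation="mean over 6 samples",
    persistence_s=24 * 3600,
    quality="±2% volumetric water content",
)
print(soil_moisture)
```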


Proceedings ArticleDOI
25 Mar 2019
TL;DR: This paper proposes a data-centric approach to AxC that can boost the performance of memory-subsystem-limited applications, built around a data-access approximation technique called data subsetting, in which all accesses to a data structure are redirected to a subset of its elements so that the overall footprint of memory accesses is decreased.
Abstract: Approximate Computing (AxC), which leverages the intrinsic resilience of applications to approximations in their underlying computations, has emerged as a promising approach to improving computing system efficiency. Most prior efforts in AxC take a compute-centric approach and approximate arithmetic or other compute operations through design techniques at different levels of abstraction. However, emerging workloads such as machine learning, search and data analytics process large amounts of data and are significantly limited by the memory sub-systems of modern computing platforms. In this work, we shift the focus of approximations from computations to data, and propose a data-centric approach to AxC, which can boost the performance of memory-subsystem-limited applications. The key idea is to modulate the application’s data-accesses in a manner that reduces off-chip memory traffic. Specifically, we propose a data-access approximation technique called data subsetting, in which all accesses to a data structure are redirected to a subset of its elements so that the overall footprint of memory accesses is decreased. We realize data subsetting in a manner that is transparent to hardware and requires only minimal changes to application software. Recognizing that most applications of interest represent and process data as multi-dimensional arrays or tensors, we develop a templated data structure called SubsettableTensor that embodies mechanisms to define the accessible subset and to suitably redirect accesses to elements outside the subset. As a further optimization, we observe that data subsetting may cause some computations to become redundant and propose a mechanism for application software to identify and eliminate such computations. We implement SubsettableTensor as a C++ class and evaluate it using parallel software implementations of 7 machine learning applications on a 48-core AMD Opteron server. Our experiments indicate that data subsetting enables 1.33×–4.44× performance improvement with <0.5% loss in application-level quality, underscoring its promise as a new approach to approximate computing.

5 citations
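
The paper realizes data subsetting as a C++ SubsettableTensor class; the Python sketch below conveys the core mechanism under simplified assumptions: every access to the tensor is redirected into a small retained subset of its elements (here via a modulo mapping, which may differ from the paper's redirection scheme), shrinking the footprint of memory accesses at the cost of a controlled approximation.

```python
import numpy as np

class SubsettableTensor:
    """Simplified Python stand-in for the paper's C++ SubsettableTensor:
    reads are redirected into a subset covering `fraction` of the elements."""
    def __init__(self, data: np.ndarray, fraction: float):
        self.flat = data.reshape(-1)
        self.subset_size = max(1, int(len(self.flat) * fraction))

    def __getitem__(self, index: int):
        # Redirect any index into the retained subset; only that subset
        # ever has to be fetched from memory.
        return self.flat[index % self.subset_size]

weights = np.random.rand(1024)
approx = SubsettableTensor(weights, fraction=0.25)    # only 25% of elements touched
dot = sum(approx[i] * 0.5 for i in range(1024))       # accesses stay in a small footprint
print(dot)
```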


Proceedings ArticleDOI
01 Mar 2019
TL;DR: This paper presents the Virtual State Layer (VSL), a site-local data-centric architecture for the IoT; a special feature of the solution is the full separation of logic and data in IoT services, which significantly reduces the overall system complexity.
Abstract: The heart of the Internet of Things (IoT) is data. IoT services process data from sensors that interface with their physical surroundings, and from other software such as Internet weather databases. They produce data to control physical environments via actuators, and offer data to other services. More recently, service-centric designs for managing the IoT have been proposed. Data-centric or name-based communication architectures complement these developments very well. Especially for edge-based or site-local installations, data-centric Internet architectures can be implemented already today, as they do not require any changes at the core. We present the Virtual State Layer (VSL), a site-local data-centric architecture for the IoT. Special features of our solution are the full separation of logic and data in IoT services, a data-centric VSL interface offered directly to developers (which significantly reduces the overall system complexity), explicit data modeling, a semantically rich data item lookup, stream connections between services, and security-by-design. We evaluate our solution regarding usability, performance, scalability, resilience, energy efficiency, and security.

4 citations
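
A minimal sketch of the "data as interface" idea behind the VSL, with assumed names and a much-simplified API: services never call each other directly; they only read, write and subscribe to modeled data items in a site-local data layer.

```python
# Hypothetical site-local data layer in the spirit of the VSL:
# service logic interacts only with data items, never with other services.
from collections import defaultdict

class DataLayer:
    def __init__(self):
        self.items = {}                       # address -> value (explicit data model)
        self.subscribers = defaultdict(list)  # address -> callbacks

    def set(self, address: str, value) -> None:
        self.items[address] = value
        for callback in self.subscribers[address]:
            callback(value)                   # push update to interested services

    def get(self, address: str):
        return self.items.get(address)

    def subscribe(self, address: str, callback) -> None:
        self.subscribers[address].append(callback)

vsl = DataLayer()
# A light-control service only declares which data item it reacts to:
vsl.subscribe("/room21/presence", lambda present: vsl.set("/room21/lamp", present))
# A presence-sensor service only writes its own data item:
vsl.set("/room21/presence", True)
print(vsl.get("/room21/lamp"))   # True -- the two services never referenced each other
```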


Proceedings ArticleDOI
15 Mar 2019
TL;DR: This work discusses the security issues of hierarchical architectures and proposes a solution that identifies an attacked cluster head (CH) and replaces it by selecting the fittest node through a genetic-algorithm-based search.
Abstract: A wireless sensor network (WSN) is a network of devices that communicates information gathered from a monitored field through wireless links. Small sensor nodes constitute wireless sensor networks. A sensor is a device that detects and responds to some type of input from physical or environmental conditions, such as pressure, heat, or light. Applications of wireless sensor networks include home automation, street lighting, military, healthcare and industrial process monitoring. As wireless sensor networks are distributed across large geographical areas, they are vulnerable to various security threats, which affects their performance. The impact of security issues becomes even more critical if the network is used for mission-critical applications such as a tactical battlefield. In real-life deployment scenarios, the probability of node failure is high. As a result of the resource constraints of sensor nodes, traditional methods that involve large computation and communication overhead are not feasible in WSNs. Hence, the design and deployment of secured WSNs is a challenging task. Attacks on WSNs include attacks on confidentiality, integrity and availability. There are various types of architectures used to deploy WSNs, including data-centric, hierarchical, location-based and mobility-based architectures. This work discusses the security issues of the hierarchical architecture and proposes a solution. In hierarchical architectures, sensor nodes are grouped to form clusters. Intra-cluster communication happens through cluster heads, which also facilitate inter-cluster communication with other cluster heads. Aggregation of the data generated by sensor nodes is done by the cluster heads, and the aggregated data are transferred to the base station, in most cases through a multi-hop approach. Cluster heads are vulnerable to various malicious attacks, which greatly affects the performance of the wireless sensor network. The proposed solution identifies an attacked cluster head and replaces it by selecting the fittest node through a genetic-algorithm-based search.

4 citations
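
A minimal sketch of the genetic-algorithm-based replacement step, under an assumed fitness function (residual energy minus a distance penalty; the paper's actual criteria may differ): once a cluster head is flagged as attacked, the GA searches the remaining nodes for the fittest replacement.

```python
import random

# Hypothetical cluster nodes: (residual_energy, distance_to_cluster_centre)
nodes = [(random.uniform(0.1, 1.0), random.uniform(1.0, 30.0)) for _ in range(16)]
attacked_ch = 3                                    # index of the compromised cluster head

def fitness(idx: int) -> float:
    energy, dist = nodes[idx]
    if idx == attacked_ch:
        return float("-inf")                       # never re-elect the attacked node
    return energy - 0.02 * dist                    # assumed weighting, not the paper's

BITS = 4                                           # 16 nodes -> 4-bit chromosome

def decode(chrom: list) -> int:
    return int("".join(map(str, chrom)), 2) % len(nodes)

def ga_select(pop_size=8, generations=30, p_mut=0.1) -> int:
    population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda c: fitness(decode(c)), reverse=True)
        parents = population[: pop_size // 2]      # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, BITS - 1)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < p_mut else g for g in child]  # mutation
            children.append(child)
        population = parents + children
    return decode(max(population, key=lambda c: fitness(decode(c))))

print("new cluster head:", ga_select())
```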


Proceedings ArticleDOI
01 Jul 2019
TL;DR: The paper describes the use of the Named Data Networking (NDN) network architecture within an experimental theatrical work being developed at UCLA, which uses NDN to enable a hybrid design paradigm for real-time video that combines properties of streams, buses, and stores.
Abstract: Network video streaming abstractions tend to replicate the paradigms of hardwired video dating back to analog broadcast. With IP video distribution becoming increasingly realistic for a variety of low-latency applications, this paper looks ahead to a data-centric architecture for video that can provide a superset of features from existing abstractions, to support how video is increasingly being used: for non-linear retrieval, variable speed and spatially selective playback, machine analysis, and other new approaches. As a case study, the paper describes the use of the Named Data Networking (NDN) network architecture within an experimental theatrical work being developed at UCLA. The work, a new play, Entropy Bound, uses NDN to enable a hybrid design paradigm for real-time video that combines properties of streams, buses, and stores. This approach unifies real-time live and historical playback, and is used to support edge-assisted machine learning. The paper introduces the play and its requirements (as well as the NDN components applied and developed), discusses key design patterns enabled and explored and their influence on the application architecture, and describes what was learned through practical implementation in a real-world production setting. The paper intends to inform future experimentation with real-time media over information-centric networking and elaborate on the benefits and challenges of using NDN in practice for mixed reality applications today.

3 citations
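
A minimal sketch of the hybrid stream/bus/store idea under an assumed naming convention (not the production's actual namespace): frames are published under names carrying a sequence number, so live playback, historical playback and variable-speed scrubbing all reduce to the same fetch-by-name operation.

```python
# Hypothetical frame store keyed by NDN-style names with sequence numbers.
frames = {}          # "/show/entropy-bound/cam1/frame/<seq>" -> frame bytes
latest_seq = -1

def publish_frame(seq: int, data: bytes) -> None:
    global latest_seq
    frames[f"/show/entropy-bound/cam1/frame/{seq}"] = data
    latest_seq = max(latest_seq, seq)

def fetch(seq: int):
    """Live and historical consumers use the exact same name-based fetch."""
    return frames.get(f"/show/entropy-bound/cam1/frame/{seq}")

for s in range(10):
    publish_frame(s, f"frame-{s}".encode())

live = fetch(latest_seq)                       # live playback: newest sequence number
replay = [fetch(s) for s in range(0, 10, 2)]   # 2x-speed historical playback
print(live, len(replay))
```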


Proceedings ArticleDOI
19 Aug 2019
TL;DR: A top-to-bottom lean and optimized architecture is proposed that allows applications to customize the OS kernel's IO subsystems with application-provided code, enabling sharing and high-performance IO among applications.
Abstract: Today's computer architectures are fundamentally different than a decade ago: IO devices and interfaces can sustain much higher data rates than the compute capacity of a single threaded CPU. To meet the computational requirements of modern applications, the operating system (OS) requires lean and optimized software running on CPUs for applications to fully exploit the IO resources. Despite the changes in hardware, today's traditional system software unfortunately uses the same assumptions of a decade ago: the IO is slow, and the CPU is fast. This paper makes a case for the data-centric extensible OS, which enables full exploitation of emerging high-performance IO hardware. Based on the idea of minimizing data movements in software, a top-to-bottom lean and optimized architecture is proposed, which allows applications to customize the OS kernel's IO subsystems with application-provided code. This enables sharing and high-performance IO among applications: initial microbenchmarks on a Linux prototype where we used eBPF to specialize the Linux kernel show performance improvements of up to 1.8× for database primitives and 4.8× for UNIX utility tools.

3 citations
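
The prototype uses eBPF to push application-provided code into the Linux kernel's IO path; the Python sketch below is not eBPF and its names are hypothetical, but it illustrates the underlying principle of minimizing data movement: the IO layer accepts a small application-supplied predicate and runs it next to the data, so only matching records cross the boundary to the application.

```python
# Conceptual illustration of application-specialized IO (not actual eBPF):
# the "kernel-side" store runs app-provided code so raw data never moves.
class ExtensibleStore:
    def __init__(self, records):
        self.records = records            # imagine this living close to the device

    def scan(self, app_filter):
        """Run the application's predicate inside the IO layer and
        return only the records it selects."""
        return [r for r in self.records if app_filter(r)]

store = ExtensibleStore([{"id": i, "value": i * i} for i in range(100_000)])
# The application ships a tiny predicate instead of pulling every record:
hot_rows = store.scan(lambda r: r["value"] % 97 == 0 and r["id"] > 90_000)
print(len(hot_rows))
```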


Book ChapterDOI
01 Jan 2019
TL;DR: RACE (Reliable Control and Automation Environment) is an attempt to redefine the architecture of future cars from an information processing point of view; it implements mechanisms for fault tolerance and features plug-and-play techniques for smooth retrofitting of functions at any point in a car’s lifetime.
Abstract: As cars are turning more and more into “computers on wheels,” the development foci for future generations of cars are shifting away from improved driving characteristics toward features and functions that are implemented in software. Classical decentralized electrical and electronic (E/E) architectures based on a large number of electronic control units (ECUs) are becoming more and more difficult to adapt to the extreme complexity that results from this trend. Moreover, the innovation speed, which will be dictated by the computer industry’s dramatically short product lifecycles, requires new architectural and software engineering approaches if the car industry wants to rise to the resulting multidimensional challenges. While classical evolutionary architectures mapped the set of functions that constitute the driving behavior into a coherent set of communicating control units, RACE (Reliable Control and Automation Environment) is an attempt to redefine the architecture of future cars from an information processing point of view. It implements a straightforward perception-control/cognition-action paradigm; it is data centric, striking a balance between central and decentralized control. It implements mechanisms for fault tolerance and features plug-and-play techniques for smooth retrofitting of functions at any point in a car’s lifetime.

Posted Content
TL;DR: The PiNVSM architecture suggests a fundamentally new architecture of the computing core that creates new opportunities for data self-organization and for data and code synthesis.
Abstract: The AI problem has no solution in the environment of the existing hardware stack and OS architecture. The CPU-centric model of computation has a huge number of drawbacks that originate from the memory hierarchy and the obsolete architecture of the computing core. The concept of mixing memory and logic has been around since the 1960s. However, the concept of Processor-in-Memory (PIM) is unable to resolve the critical issues of the CPU-centric computing model because it inevitably replicates the drawbacks of the von Neumann architecture. The next generation of NVM/SCM memory is able to give a second birth to the data-centric computing paradigm. This paper presents the concept of a Processor in Non-Volatile Memory (PiNVSM) architecture. The basis of the PiNVSM architecture is the concept of a DPU that contains NVM memory and a dedicated PU. All necessary PU registers can be implemented in the space of the NVM memory. The NVM memory of the DPU is the single space for storing and transforming data. At the basis of the PiNVSM architecture lies a DPU array that is able to overcome the limitations of both the Turing machine model and the von Neumann architecture. The DPU array has no centralized computing core. Every data portion has a dedicated computing core, which excludes the necessity of transferring data to the place of data processing. Every DPU contains a data portion that is associated with a set of keywords. Any complex data structure can be split into elementary items that can be stored in independent DPUs with dedicated computing core(s). One DPU is able to apply an elementary transformation to one item, but the DPU array is able to transform a complex structure by means of concurrent execution of elementary transformations in different DPUs. The PiNVSM architecture suggests a fundamentally new architecture of the computing core that creates new opportunities for data self-organization and for data and code synthesis.
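
A minimal simulation sketch of the DPU-array idea, with assumed data structures: each DPU pairs a keyword-tagged data portion with its own processing unit, and a transformation of a complex structure is carried out as concurrent elementary transformations inside the DPUs, with no central core and no movement of data to a processor.

```python
from concurrent.futures import ThreadPoolExecutor

class DPU:
    """Hypothetical data processing unit: a data portion plus a local PU."""
    def __init__(self, keywords, data):
        self.keywords = set(keywords)
        self.data = data

    def transform(self, func):
        self.data = func(self.data)        # computation happens where the data lives
        return self.data

# A complex structure split into elementary items across independent DPUs.
dpus = [DPU({"image", f"tile{i}"}, list(range(i, i + 4))) for i in range(8)]

def apply_everywhere(keyword, func):
    """Concurrent elementary transformations in all DPUs holding the keyword."""
    targets = [d for d in dpus if keyword in d.keywords]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda d: d.transform(func), targets))

print(apply_everywhere("image", lambda xs: [x * 2 for x in xs]))
```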

Proceedings ArticleDOI
01 Dec 2019
TL;DR: The theoretical analysis proves that the proposed method can achieve name privacy protection in SDC-NDN, and the experimental results observed from the system implementation show better performance in terms of time consumption, data latency and storage space compared with the state of the art.
Abstract: Named Data Networking (NDN) is a branch of future network architecture, which shifts host-based communication to name-based data retrieval. Sensor networks, with their ubiquitous interconnection and sensing, have become an important data source about the physical world for human beings. However, the hierarchical human-readable naming scheme generally used in Sensory Data Centric Named Data Networking (SDC-NDN) always contains some human-readable semantic information, which may potentially leak users' privacy. In this paper, we propose an elliptic-curve-based name privacy protection mechanism for SDC-NDN. Specifically, we first randomly map the hierarchical human-readable name to an elliptic curve, and then further obfuscate the mapping result by introducing random numbers, so that the names of Interest packets (Interests) and Data packets (Data) in SDC-NDN are obfuscated and become illegible to humans. We implement the proposed mechanism and evaluate it through both theoretical analysis and experiments. The theoretical analysis proves that our method can achieve name privacy protection in SDC-NDN, and the experimental results observed from the SDC-NDN system implementation show better performance in terms of time consumption, data latency and storage space compared with the state of the art.
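
A minimal sketch of the map-and-blind idea over a toy curve (the curve parameters, the hash-to-scalar mapping and the blinding step are assumptions for illustration; the paper's construction differs in detail): the hierarchical name is hashed to a scalar, mapped to a curve point, and obfuscated with a random multiple of the base point so that the transmitted name carries no human-readable semantics.

```python
import hashlib, random

# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over F_p (illustration only;
# a real deployment would use a standard curve).
p, a, b = 10007, 1, 6
O = None                                   # point at infinity

def inv(x): return pow(x, p - 2, p)        # modular inverse (p is prime)

def add(P, Q):
    """Affine point addition / doubling."""
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    lam = ((3 * x1 * x1 + a) * inv(2 * y1) if P == Q
           else (y2 - y1) * inv(x2 - x1)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = O
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def find_base_point():
    for x in range(2, p):
        rhs = (x * x * x + a * x + b) % p
        if pow(rhs, (p - 1) // 2, p) == 1:          # rhs is a quadratic residue
            return (x, pow(rhs, (p + 1) // 4, p))   # square root, since p % 4 == 3

G = find_base_point()

def obfuscate_name(name: str):
    """Hash the hierarchical name to a scalar, map it to a point, then blind it."""
    k = int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")
    r = random.randrange(1, p)                      # fresh randomness per packet
    return add(mul(k, G), mul(r, G))                # no human-readable semantics left

print(obfuscate_name("/home/bedroom/temperature/sensor3"))
```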

Proceedings ArticleDOI
25 Sep 2019
TL;DR: In this article, the authors propose a hardware architecture for PIM and verify the functions of the proposed architecture using an embedded system based on the PIM platform, which employs a commercial application processor (AP) and a standard memory protocol.
Abstract: The data-centric computing paradigm has recently garnered a great deal of attention from the research community as a method for overcoming the performance limits of traditional computing systems, including the memory wall crisis. One promising approach to mitigating this issue in future computer systems is processing in memory (PIM). PIM facilitates the stacking of processing logic and memory dies in a single package and minimizes data movement by placing the computation close to where the data reside. As this approach, however, requires compatibility with existing computer architectures and operating systems, it has not been widely adopted. To meet the need for compatibility, this paper proposes a hardware architecture of PIM and verifies the functions of the proposed architecture by an embedded system based on the PIM platform, which employs a commercialized application processor (AP) and a standard memory protocol. We also propose a PIM-based data-centric accelerator for image processing. Experiments involve the development of AP- and PIM-based application programs for a median filter processing a 24-bit color test image with a resolution of 512 × 512. Using the same test image, we measure the median filter processing times and compare the processing times of the AP and the proposed PIM. Results of the experiments show that the processing time of PIM is about 84% faster than that of the AP.
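
A minimal sketch of the evaluated workload under scaled-down assumptions (a small synthetic image instead of the 512 × 512 test image, pure Python instead of the AP or PIM platforms): a 3 × 3 median filter applied per colour channel, a data-movement-heavy kernel of the kind that benefits from computing close to where the data reside.

```python
import random, time

W = H = 64                                    # scaled down from the paper's 512 x 512
# Synthetic 24-bit colour image: [row][col] -> (R, G, B)
img = [[tuple(random.randrange(256) for _ in range(3)) for _ in range(W)] for _ in range(H)]

def median3x3(image):
    """3x3 median filter per colour channel (borders left unchanged)."""
    out = [row[:] for row in image]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            window = [image[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = tuple(sorted(px[c] for px in window)[4] for c in range(3))
    return out

start = time.perf_counter()
filtered = median3x3(img)
print(f"median filter on {W}x{H}: {time.perf_counter() - start:.3f}s")
```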