In this paper, the authors present a CA for a tile-based MP-SoC; the CA has smaller memory requirements and a lower latency than existing CAs, and is compared with two existing DMA controllers.
Abstract:
Modern multi-processor systems need to provide guaranteed services to their users. A communication assist (CA) helps in achieving tight timing guarantees. In this paper, we present a CA for a tile-based MP-SoC. Our CA has smaller memory requirements and a lower latency than existing CAs. The CA has been implemented in hardware. We compare it with two existing DMA controllers. When compared with these DMAs, our CA is up-to 44% smaller in terms of equivalent gate count.
TL;DR: A worst-case performance model of the authors' CA is proposed so that the performance of the CA-based platform can be analyzed before its implementation, and a fully automated design flow to generate communication assist (CA) based multi-processor systems (CA-MPSoC) is presented.
TL;DR: A predictable high-performance communication assist (CA) that helps to tackle design challenges in integrating IP cores into heterogeneous Multi-Processor System-on-Chips (MPSoCs), and a predictable heterogeneous multi-processor platform template for streaming applications is presented.
TL;DR: A novel heuristic algorithm is presented that can design MPSoC platforms and map tasks of multiple applications onto this platform while satisfying the throughput constraints of these applications, and allows sharing of resources between multiple applications.
TL;DR: This dissertation presents predictable architectural components for MPSoCs, a predictable MPSoC design strategy, an automatic platform synthesis tool, a run-time system and an MPSoC simulation technique to design and manage these multi-processor based systems efficiently.
TL;DR: This book explains the forces behind this convergence of shared-memory, message-passing, data parallel, and data-driven computing architectures and provides comprehensive discussions of parallel programming for high performance and of workload-driven evaluation, based on understanding hardware-software interactions.
TL;DR: This paper explains how starting from system level platform, application, and mapping specifications, a multiprocessor platform is synthesized and programmed in a systematic and automated way in order to reduce the design time and to satisfy the performance needs of applications executed on these platforms.
TL;DR: The proposed Distributed Memory Server is composed of high-performance and flexible memory service access points (MSAPs), which execute data transfers without intervention of the processing elements, and data network, and control network that can handle direct massive data transfer between the distributed memories of an MPSoC.
TL;DR: A multi-core architecture using a network-on-chip, which provides the required flexibility and scalability, is described, and it is shown that the latency is comparable to the current architecture.
TL;DR: A contrastive comparison of cache-based versus scratch-pad managed inter-processor communication for (distributed shared-memory) multiprocessor systems-on-chip shows that the scratchpad application mapping has the best overall performance, that it helps smoothing NoC traffic and that it is not sensitive to the quality-of-service (QoS) used.
Q1. What have the authors contributed in "A predictable communication assist" ?
In this paper, the authors present a CA for a tile-based MP-SoC.
Q2. What is the function of the MA?
The MA executes the data transfer by generating a memory address, memory control signals and NI FIFO control signals according to the received context.
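The step above can be sketched as a small software model. This is a minimal illustration, not the paper's implementation: the context fields (`base`, `offset`, `size`) and the FIFO-as-list representation are assumptions made for the example.

```python
# Hypothetical software model of the MA's transfer step.
# Context fields (base, offset, size) are illustrative assumptions.

def ma_execute_transfer(context, memory, ni_fifo):
    """Copy one token from the NI FIFO into buffer memory per the context."""
    for i in range(context["size"]):
        addr = context["base"] + context["offset"] + i  # generated memory address
        memory[addr] = ni_fifo.pop(0)                   # FIFO control: dequeue one word
    return memory
```

In hardware the address generation and FIFO handshake happen per cycle; the loop here only mimics that sequencing.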
Q3. What is the function of the address translation unit?
The AT monitors the address bus of the processor and distinguishes between local memory accesses and buffer memory accesses: it passes local memory accesses to the DM and translates the virtual address of a buffer access into a physical memory address.
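The routing decision can be sketched as follows. The split point of the address map (`BUFFER_VIRT_BASE`) and the single base-register translation are assumptions for illustration; the paper does not specify this layout.

```python
# Hypothetical model of the Address Translation (AT) unit.
# The address map (BUFFER_VIRT_BASE) is an illustrative assumption.

BUFFER_VIRT_BASE = 0x8000  # assumed start of the virtual buffer region

def at_route(vaddr, buffer_phys_base):
    """Pass local accesses to the DM; translate buffer accesses to physical."""
    if vaddr < BUFFER_VIRT_BASE:
        return ("local", vaddr)  # local access forwarded to the DM unchanged
    # buffer access: virtual offset rebased onto the physical buffer
    return ("buffer", buffer_phys_base + (vaddr - BUFFER_VIRT_BASE))
```

The translation is a fixed offset, so it adds no table lookup to the access path, which is consistent with the low-latency claim.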
Q4. What is the purpose of the paper?
In [2], a multi-processor platform is introduced that decouples the computation and communication of applications through a communication assist (CA).
Q5. What is the effect of the CA on the overall system?
This leads to lower memory requirements for the overall system and to a lower communication latency compared to CAs in the literature.
Q6. What is the definition of a CA?
The number of applications executed concurrently in an embedded system is increasing rapidly.
Q7. What is the function of the buffer context?
The PSU selects one of the buffer contexts as indicated by the MA, sends the selected context to the MA and updates the registers for management of the circular buffers.
Q8. What are the configurations of the PSU?
Possible configurations of the PSU include the size of the buffer, the base address of the buffer in physical memory, and the id of the connected NI FIFO.
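Putting the two PSU answers together, a buffer context and its circular-buffer update can be sketched like this. The context fields mirror the configuration named above (buffer size, physical base address, connected NI FIFO id); the write-pointer register and the update rule are illustrative assumptions.

```python
# Hypothetical model of a PSU buffer context and circular-buffer management.
# The write pointer (wr_ptr) is an assumed register, not from the paper.

class BufferContext:
    def __init__(self, size, base, fifo_id):
        self.size = size        # buffer size in words
        self.base = base        # base address of the buffer in physical memory
        self.fifo_id = fifo_id  # id of the connected NI FIFO
        self.wr_ptr = 0         # write pointer managed by the PSU

def psu_select_and_update(contexts, selected_id, words_written):
    """Return the context indicated by the MA and advance its circular pointer."""
    ctx = contexts[selected_id]
    ctx.wr_ptr = (ctx.wr_ptr + words_written) % ctx.size  # wrap at buffer end
    return ctx
```

The modulo wrap is what makes the buffer circular: once the pointer reaches the end of the buffer it restarts at the base.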