Zejda Jindrich
Researcher at Xilinx
Publications - 9
Citations - 26
Zejda Jindrich is an academic researcher from Xilinx. He has contributed to research in topics including Double data rate & Massively parallel. He has an h-index of 3 and has co-authored 9 publications receiving 26 citations.
Papers
Patent
Image preprocessing for generalized image processing
TL;DR: An example preprocessor circuit for formatting image data into a plurality of streams of image samples includes a first buffer configured to store a plurality of rows of the image data and to output a row of the plurality of rows.
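The buffer described above stores image rows and feeds them out as parallel sample streams. A minimal sketch of that idea, assuming a software model in which the class name `RowBuffer` and its methods are illustrative rather than taken from the patent:

```python
from collections import deque


class RowBuffer:
    """Hold the most recent `depth` image rows and expose them as
    parallel streams of vertically aligned samples.

    Hypothetical software model of a hardware line buffer; all names
    are illustrative, not from the patent.
    """

    def __init__(self, depth):
        # deque with maxlen drops the oldest row automatically,
        # mimicking a fixed-depth hardware buffer.
        self.rows = deque(maxlen=depth)

    def push(self, row):
        """Store one incoming row of image samples."""
        self.rows.append(list(row))

    def streams(self):
        """Emit one stream per column: each stream carries the
        samples of that column across the buffered rows."""
        return list(zip(*self.rows))
```

In hardware the equivalent structure lets a convolution engine read several rows of the image in the same cycle instead of re-fetching them from memory.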
Patent
Machine learning runtime library for neural network acceleration
Ng Aaron, Zejda Jindrich, Elliott Delaye, Teng Xiao, Santan Sonal, Soe Soren T, Ashish Sirasao, Ghasemi Ehsan, Settle Sean +8 more
TL;DR: In this article, the authors describe techniques for interfacing a neural network application (120) with a neural network accelerator (165) using a library (130). The library can process multiple packets in parallel, which increases the utilization of the neural network accelerator on the hardware system.
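The gain from processing multiple packets in parallel is that the accelerator is never left idle while the host prepares the next request. A minimal sketch of that dispatch pattern, assuming `run_accelerator` is a hypothetical stand-in for the actual call into the accelerator:

```python
from concurrent.futures import ThreadPoolExecutor


def run_accelerator(packet):
    # Stand-in for a blocking call into the accelerator driver;
    # here it just doubles each value so the flow is testable.
    return [x * 2 for x in packet]


def submit_packets(packets, workers=4):
    """Dispatch several packets concurrently so a new packet can be
    queued while earlier ones are still in flight.

    Illustrative sketch only; the runtime library's real API is not
    shown in this summary.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order even though packets may
        # complete out of order.
        return list(pool.map(run_accelerator, packets))
```

Keeping several packets in flight is the software analogue of pipelining: host-side preprocessing of one packet overlaps with accelerator execution of another.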
Patent
Software-defined memory bandwidth reduction by hierarchical stream buffering for general matrix multiplication in a programmable IC
TL;DR: In this article, the authors describe a method for partitioning and reordering block-based matrix multiplications for high-speed data streaming in general matrix multiplication (GEMM), which may be implemented by a programmable integrated circuit (IC).
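Partitioning a large GEMM into blocks bounds the working set so each tile can be held in an on-chip stream buffer rather than re-read from external memory. A minimal software sketch of blocked GEMM, with the loop order chosen for illustration (the patent's specific partitioning and reordering is not reproduced here):

```python
import numpy as np


def blocked_gemm(A, B, block=4):
    """Multiply A (m x k) by B (k x n) one block at a time.

    Each iteration touches only three small tiles, which is what
    makes hierarchical stream buffering effective: the tiles fit
    on chip and are streamed once instead of repeatedly.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=A.dtype)
    for i0 in range(0, m, block):            # tile rows of C
        for j0 in range(0, n, block):        # tile columns of C
            for p0 in range(0, k, block):    # reduction dimension
                C[i0:i0 + block, j0:j0 + block] += (
                    A[i0:i0 + block, p0:p0 + block]
                    @ B[p0:p0 + block, j0:j0 + block]
                )
    return C
```

Reordering these three loops changes which tile is reused across iterations, and hence how much memory bandwidth the computation needs; choosing that order statically is the essence of the software-defined bandwidth reduction.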
Patent
Static block scheduling in massively parallel software defined hardware systems
TL;DR: In this paper, the authors describe techniques for static scheduling a neural network (100) implemented in a massively parallel hardware system (205) using three different scheduling levels referred to herein as an upper level, an intermediate level, and a lower level.
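The core of static scheduling is that the execution order of the network's operations is fixed at compile time from the dataflow dependencies, rather than decided dynamically. A minimal sketch of that idea, assuming a single flat dependency graph (the patent's three scheduling levels are not modeled here):

```python
from graphlib import TopologicalSorter


def static_schedule(deps):
    """Compute a fixed, ahead-of-time execution order for a
    dataflow graph.

    `deps` maps each operation to the set of operations it depends
    on. The returned list is one valid static order; a real
    scheduler would further pack independent operations into
    parallel hardware stages.
    """
    return list(TopologicalSorter(deps).static_order())
```

Because the order is known before the system runs, the hardware needs no runtime arbitration between operations, which is what makes the approach suitable for a massively parallel software-defined system.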
Patent
Data format suitable for fast massively parallel general matrix multiplication in a programmable IC
TL;DR: In this article, the authors propose a data format for general matrix multiplication (GEMM) that supports arbitrarily-sized input matrices on a finite-size accelerator, in the form of a rectangular compute array of DSP elements or similar compute cores.
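A fixed rectangular compute array can only consume operands in tiles of its own dimensions, so an arbitrarily-sized matrix must first be brought to a shape that is a multiple of the array size. A minimal sketch of that step, assuming zero-padding (the patent's actual format may differ):

```python
import numpy as np


def pad_to_tiles(M, rows, cols):
    """Zero-pad matrix M so its shape is a multiple of the compute
    array dimensions (rows x cols).

    Zero padding is mathematically neutral for GEMM: the padded
    regions contribute nothing to the valid part of the product.
    """
    pad_r = (-M.shape[0]) % rows   # rows needed to reach a multiple
    pad_c = (-M.shape[1]) % cols   # columns needed likewise
    return np.pad(M, ((0, pad_r), (0, pad_c)))
```

After padding, the matrix decomposes exactly into tiles the size of the compute array, so the accelerator can stream full tiles without edge-case handling.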