
Dipankar Das

Researcher at Intel

Publications: 54
Citations: 2,252

Dipankar Das is an academic researcher from Intel. The author has contributed to research in topics including artificial neural networks and floating point. The author has an h-index of 19 and has co-authored 54 publications receiving 1,693 citations. Previous affiliations of Dipankar Das include General Motors and Indian Institute of Technology Kharagpur.

Papers
Journal ArticleDOI

GraphMat: high performance graph analytics made productive

TL;DR: GraphMat is a single-node multicore graph framework written in C++ that achieves better multicore scalability than other frameworks and comes within 1.2X of native, hand-optimized code on a variety of graph algorithms.
Proceedings ArticleDOI

SIGMA: A Sparse and Irregular GEMM Accelerator with Flexible Interconnects for DNN Training

TL;DR: The authors propose SIGMA, a flexible and scalable architecture that offers high utilization of all of its processing elements (PEs) regardless of kernel shape and sparsity, and introduce a novel reduction-tree microarchitecture named Forwarding Adder Network (FAN).
Proceedings ArticleDOI

ScaleDeep: A Scalable Compute Architecture for Learning and Evaluating Deep Networks

TL;DR: SCALEDEEP is a dense, scalable server architecture whose processing, memory, and interconnect subsystems are specialized to leverage the compute and communication characteristics of DNNs; it primarily targets DNN training rather than inference or evaluation alone.
Book ChapterDOI

Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-Out Classifiers

TL;DR: The authors proposed an ensemble of classifiers to detect out-of-distribution (OOD) inputs using a margin-based loss over the softmax output, which seeks to maintain at least a margin m between the average entropy of the OOD and in-distribution samples, in conjunction with the standard cross-entropy loss.
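
One plausible form of the combined objective described in this summary is sketched below (illustrative notation only, not reproduced from the paper; H, m, and beta are assumed symbols for the softmax entropy, the margin, and a weighting coefficient):

```latex
% Illustrative sketch: standard cross-entropy on in-distribution data plus a
% hinge term that pushes the average OOD entropy to exceed the average
% in-distribution entropy by at least the margin m.
\mathcal{L} =
\mathbb{E}_{(x,y)\sim D_{\mathrm{in}}}\big[-\log p_\theta(y \mid x)\big]
+ \beta \,\max\!\Big(0,\; m
  + \mathbb{E}_{x\sim D_{\mathrm{in}}}\big[H\big(p_\theta(\cdot \mid x)\big)\big]
  - \mathbb{E}_{x\sim D_{\mathrm{out}}}\big[H\big(p_\theta(\cdot \mid x)\big)\big]\Big)
```

The hinge term vanishes once OOD samples are, on average, at least m nats more uncertain than in-distribution samples, which matches the margin behavior described above.
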
Posted Content

A Study of BFLOAT16 for Deep Learning Training

TL;DR: The results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.
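
As a rough illustration of the training recipe this summary describes (BF16 compute with FP32 master weights and unchanged hyper-parameters), the sketch below uses modern PyTorch autocast; this is an assumed, minimal setup, not the framework or code used in the paper:

```python
# Minimal BF16 mixed-precision training sketch (assumption: PyTorch on a
# bfloat16-capable CUDA device; dummy data stands in for a real pipeline).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # FP32 master weights
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(64, 784, device="cuda")          # dummy inputs
    y = torch.randint(0, 10, (64,), device="cuda")   # dummy labels

    # Eligible ops inside this region run in bfloat16; parameters and optimizer
    # state remain FP32. No loss scaling is needed because BF16 keeps the FP32
    # exponent range, so hyper-parameters can stay unchanged.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = loss_fn(model(x), y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
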