Khalid Ashraf
Researcher at University of California, Berkeley
Publications - 21
Citations - 7293
Khalid Ashraf is an academic researcher from the University of California, Berkeley. He has contributed to research on topics including magnetization and graphene. He has an h-index of 13 and has co-authored 21 publications receiving 5,830 citations. His previous affiliations include the University of California, Riverside.
Papers
Posted Content
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
TL;DR: This work proposes a small DNN architecture called SqueezeNet, which achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters and can be compressed to less than 0.5MB (510x smaller than AlexNet).
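SqueezeNet's parameter savings come largely from its "Fire" modules, which squeeze the channel count with 1x1 convolutions before expanding with a mix of 1x1 and 3x3 filters. A minimal sketch of that parameter arithmetic, assuming the fire2 dimensions reported in the SqueezeNet paper (96 input channels, squeeze to 16, expand to 64 + 64) and ignoring bias terms:

```python
def fire_module_params(c_in, s1x1, e1x1, e3x3):
    """Weight count of a Fire module: a 1x1 squeeze layer followed by
    parallel 1x1 and 3x3 expand layers (biases ignored)."""
    squeeze = c_in * s1x1 * 1 * 1          # 1x1 squeeze convolutions
    expand = s1x1 * e1x1 * 1 * 1 + s1x1 * e3x3 * 3 * 3  # 1x1 + 3x3 expand
    return squeeze + expand

def standard_conv3x3_params(c_in, c_out):
    """Weight count of a plain 3x3 convolution with the same output width."""
    return c_in * c_out * 3 * 3

fire = fire_module_params(96, 16, 64, 64)      # 11,776 weights
plain = standard_conv3x3_params(96, 128)       # 110,592 weights
print(f"Fire module: {fire}, plain 3x3 conv: {plain}, "
      f"ratio ~{plain / fire:.1f}x fewer")
```

For these illustrative dimensions the Fire module uses roughly 9x fewer weights than a plain 3x3 layer of equal output width, which is the mechanism behind the headline parameter reduction.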
Journal ArticleDOI
Electric-field-induced magnetization reversal in a ferromagnet-multiferroic heterostructure.
J. T. Heron, Morgan Trassin, Khalid Ashraf, M. Gajek, Qing He, S. Y. Yang, D. E. Nikonov, Y.-H. Chu, Sayeef Salahuddin, Ramamoorthy Ramesh +11 more
TL;DR: A nonvolatile, room temperature magnetization reversal determined by an electric field in a ferromagnet-multiferroic system demonstrates an avenue for next-generation, low-energy consumption spintronics.
Proceedings ArticleDOI
FireCaffe: Near-Linear Acceleration of Deep Neural Network Training on Compute Clusters
TL;DR: FireCaffe is presented, which successfully scales deep neural network training across a cluster of GPUs, and finds that reduction trees are more efficient and scalable than the traditional parameter server approach.
Posted Content
FireCaffe: near-linear acceleration of deep neural network training on compute clusters
TL;DR: In this paper, the authors present FireCaffe, which scales deep neural network training across a cluster of GPUs by selecting network hardware that achieves high bandwidth between GPU servers and using reduction trees to reduce communication overhead.
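The reduction-tree idea from FireCaffe can be illustrated with a toy simulation: instead of every worker sending its gradient to a single parameter server (n sequential arrivals at one node), workers combine gradients pairwise up a binary tree in roughly log2(n) rounds. This is a minimal single-process sketch of the communication pattern, not the FireCaffe implementation:

```python
def tree_allreduce(grads):
    """Simulate a binary reduction tree over per-worker gradient vectors.

    Each round, worker i absorbs the partial sum from worker i + step,
    halving the number of active workers; after ~log2(n) rounds the
    total sits at worker 0 and is broadcast back to everyone.
    """
    vals = [list(g) for g in grads]
    n = len(vals)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            if i + step < n:
                vals[i] = [a + b for a, b in zip(vals[i], vals[i + step])]
        step *= 2
    total = vals[0]
    return [list(total) for _ in range(n)]  # broadcast phase

# Four workers, each holding a 2-element gradient:
result = tree_allreduce([[1, 2], [3, 4], [5, 6], [7, 8]])
print(result[0])  # every worker ends up with the summed gradient [16, 20]
```

The latency advantage is the point: a centralized parameter server serializes n gradient arrivals, while the tree finishes in a logarithmic number of rounds, which is why the paper finds reduction trees more scalable.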
Posted Content
Abnormality Detection and Localization in Chest X-Rays using Deep Convolutional Neural Networks
TL;DR: This work uses the publicly available Indiana CXR, JSRT, and Shenzhen datasets to study the performance of known deep convolutional network (DCN) architectures on different abnormalities, finding that no single DCN architecture performs well across all abnormalities.