Bita Darvish Rouhani
Researcher at University of California, San Diego
Publications - 54
Citations - 1541
Bita Darvish Rouhani is an academic researcher from the University of California, San Diego. The author has contributed to research in topics: Deep learning & Artificial neural network. The author has an h-index of 15, co-authored 53 publications receiving 1089 citations. Previous affiliations of Bita Darvish Rouhani include Microsoft & University of California.
Papers
Journal ArticleDOI
Serving DNNs in Real Time at Datacenter Scale with Project Brainwave
Eric S. Chung, Jeremy Fowers, Kalin Ovtcharov, Michael K. Papamichael, Adrian M. Caulfield, Todd Massengill, Ming Liu, Daniel Lo, Shlomi Alkalay, Michael Haselman, Maleen Abeydeera, Logan Adams, Hari Angepat, Christian Boehn, Derek Chiou, Oren Firestein, Alessandro Forin, Kang Su Gatlin, Mahdi Ghandi, Stephen F. Heil, Kyle Holohan, Ahmad M. El Husseini, Tamas Juhasz, Kara Kagi, Ratna Kumar Kovvuri, Sitaram Lanka, Friedel van Megen, Dima Mukhortov, Prerak Patel, Brandon Perez, Amanda Rapsang, Steven K. Reinhardt, Bita Darvish Rouhani, Adam Sapek, Raja Seera, Sangeetha Shekar, Balaji Sridharan, Gabriel Weisz, Lisa Woods, Phillip Yi Xiao, Dan Zhang, Ritchie Zhao, Doug Burger
TL;DR: Project Brainwave, Microsoft's principal infrastructure for AI serving in real time, accelerates deep neural network inferencing in major services such as Bing's intelligent search features and Azure by exploiting distributed model parallelism and pinning over low-latency hardware microservices.
Proceedings ArticleDOI
DeepSecure: Scalable Provably-Secure Deep Learning
TL;DR: The DeepSecure framework is the first to empower accurate and scalable DL analysis of data generated by distributed clients without sacrificing security for efficiency, and introduces a set of novel low-overhead pre-processing techniques that further reduce the overall GC runtime in the context of DL.
Posted Content
DeepSecure: Scalable Provably-Secure Deep Learning
TL;DR: DeepSecure as discussed by the authors proposes a framework that enables scalable execution of the state-of-the-art deep learning models in a privacy-preserving setting using Yao's Garbled Circuit (GC) protocol.
Proceedings ArticleDOI
DeepSigns: An End-to-End Watermarking Framework for Ownership Protection of Deep Neural Networks
TL;DR: DeepSigns is proposed, the first end-to-end IP protection framework that enables developers to systematically insert digital watermarks in the target DL model before distributing the model, and can demonstrably withstand various removal and transformation attacks.
Proceedings ArticleDOI
DeepMarks: A Secure Fingerprinting Framework for Digital Rights Management of Deep Learning Models
TL;DR: DeepMarks is introduced, the first end-to-end collusion-secure fingerprinting framework that enables the owner to retrieve model authorship information and identification of unique users in the context of deep learning (DL).