Aruna Balasubramanian

Researcher at Stony Brook University

Publications: 71
Citations: 7,605

Aruna Balasubramanian is an academic researcher at Stony Brook University. Balasubramanian has contributed to research in topics including computer science and Web pages, has an h-index of 21, and has co-authored 62 publications receiving 7,085 citations. Previous affiliations include the University of Washington and Microsoft.

Papers
Proceedings Article

Impact of Device Performance on Mobile Internet QoE

TL;DR: This work explores offloading regular-expression computation to a DSP coprocessor, showing an 18% improvement in page load time while reducing energy consumption by a factor of four. It also demonstrates that the performance of Web browsing is much more sensitive to low-end hardware than that of video applications, especially video streaming.
Proceedings Article

DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering

Abstract: Transformer-based QA models use input-wide self-attention -- i.e., across both the question and the input passage -- at all layers, causing them to be slow and memory-intensive. It turns out that we can get by without input-wide self-attention at all layers, especially in the lower layers. We introduce DeFormer, a decomposed transformer, which substitutes the full self-attention with question-wide and passage-wide self-attentions in the lower layers. This allows for question-independent processing of the input text representations, which in turn enables pre-computing passage representations, drastically reducing runtime compute. Furthermore, because DeFormer is largely similar to the original model, we can initialize DeFormer with the pre-training weights of a standard transformer and directly fine-tune on the target QA dataset. We show that DeFormer versions of BERT and XLNet can speed up QA by over 4.3x, and with simple distillation-based losses they incur only a 1% drop in accuracy. We open-source the code at https://github.com/StonyBrookNLP/deformer.
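To make the decomposition concrete, below is a minimal PyTorch sketch of the idea as described in the abstract. The class name, the split point, and the use of stock nn.TransformerEncoderLayer blocks are illustrative assumptions, not the authors' actual implementation (which is at the GitHub link above).

import torch
import torch.nn as nn

class DeFormerSketch(nn.Module):
    # Hypothetical sketch: lower layers attend within the question and the
    # passage separately; upper layers attend jointly across both.
    def __init__(self, d_model=768, n_heads=12, n_layers=12, split=9):
        super().__init__()
        def block():
            return nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.lower = nn.ModuleList([block() for _ in range(split)])
        self.upper = nn.ModuleList([block() for _ in range(n_layers - split)])

    def encode_passage(self, passage):
        # Question-independent, so it can be run offline and cached per passage.
        for layer in self.lower:
            passage = layer(passage)
        return passage

    def forward(self, question, cached_passage):
        # Lower layers process the question alone; the passage path is cached.
        for layer in self.lower:
            question = layer(question)
        # Upper layers run full self-attention over question + passage jointly.
        hidden = torch.cat([question, cached_passage], dim=1)
        for layer in self.upper:
            hidden = layer(hidden)
        return hidden

# Usage: encode each passage once offline, then reuse it across questions.
model = DeFormerSketch()
cache = model.encode_passage(torch.randn(1, 200, 768))  # offline, per passage
output = model(torch.randn(1, 16, 768), cache)          # online, per question

Because the lower layers never attend across the question/passage boundary, the expensive passage computation moves offline, which is what enables the reported speedup.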
Posted Content

DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering

TL;DR: DeFormer, a decomposed transformer, substitutes the full self-attention with question-wide and passage-wide self-attentions in the lower layers, allowing question-independent processing of the input text representations.
Proceedings Article

Deconstructing the Energy Consumption of the Mobile Page Load

TL;DR: This work presents RECON (REsource- and COmpoNent-based modeling), a modeling approach that estimates the energy consumption of any Web page load. RECON can accurately estimate the energy consumption of a page under different network conditions, even when the model is trained under a default network condition.
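As a rough, hypothetical illustration of what a resource- and component-based energy model might look like, the sketch below fits a linear model from per-load resource and component features to measured energy. The feature set, the linear form, and the synthetic data are assumptions for illustration, not the paper's actual model.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for instrumented page loads; each row holds assumed
# features such as [js_bytes, image_bytes, css_bytes, network_ms, compute_ms].
X_train = rng.uniform(0.0, 1.0, size=(200, 5))
true_w = np.array([2.0, 1.5, 0.3, 0.8, 1.2])           # made-up coefficients
y_train = X_train @ true_w + rng.normal(0, 0.05, 200)  # "measured" energy (J)

energy_model = LinearRegression().fit(X_train, y_train)

# Predict the energy of a new page load from its resource/component features.
x_new = rng.uniform(0.0, 1.0, size=(1, 5))
print(energy_model.predict(x_new))

Modeling energy over resources and components, rather than over end-to-end timings, is what would let such a model generalize across network conditions: the same features can be re-extracted for a load under a new condition.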
Proceedings Article

A highly resilient and scalable broker architecture for IoT applications

TL;DR: This work presents Nucleus, a container-based broker architecture that scales to a large number of IoT devices, provides high resiliency to failures, and incurs low data-transfer overhead between the IoT devices and the broker.