Author

Pulkit Grover

Bio: Pulkit Grover is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics including decoding methods and counterexamples. The author has an h-index of 27 and has co-authored 176 publications receiving 4,874 citations. Previous affiliations of Pulkit Grover include Stanford University and the University of California, Berkeley.


Papers
Proceedings ArticleDOI
13 Jun 2010
TL;DR: The problem considered is wireless information and power transfer across a noisy coupled-inductor circuit, a frequency-selective channel with additive white Gaussian noise; the optimal tradeoff between the achievable rate and the power transferred is characterized given the total power available.
Abstract: The problem considered here is that of wireless information and power transfer across a noisy coupled-inductor circuit, which is a frequency-selective channel with additive white Gaussian noise. The optimal tradeoff between the achievable rate and the power transferred is characterized given the total power available. The practical utility of such systems is also discussed.
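Since a frequency-selective AWGN channel decomposes into parallel subchannels, the rate-power tradeoff can be sketched numerically. Below is a minimal Python sketch, with hypothetical gains and noise values, that sweeps the power split between two subchannels and traces the Pareto frontier between achievable rate and delivered power; it illustrates the shape of the tradeoff, not the paper's exact characterization.

```python
# Toy rate-power tradeoff over two parallel AWGN subchannels.
# All numbers (gains, noise, budget) are hypothetical.
import numpy as np

P_total = 1.0                  # total transmit power budget (normalized)
gains = np.array([1.0, 0.4])   # hypothetical subchannel power gains
noise = 0.1                    # noise variance per subchannel

# Sweep how the budget is split between the two subchannels.
splits = np.linspace(0.0, 1.0, 201)
rates, delivered = [], []
for a in splits:
    p = P_total * np.array([a, 1.0 - a])
    rates.append(np.sum(0.5 * np.log2(1.0 + gains * p / noise)))  # bits/real use
    delivered.append(np.sum(gains * p))                           # received power
rates, delivered = np.array(rates), np.array(delivered)

# Past the rate-optimal split, pushing more power onto the stronger
# subchannel raises delivered power while lowering rate: the Pareto frontier.
i_star = int(np.argmax(rates))
for i in range(i_star, len(splits), 40):
    print(f"split {splits[i]:.2f}: rate {rates[i]:.3f} bits/use, "
          f"delivered power {delivered[i]:.3f}")
```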

1,137 citations

Journal ArticleDOI
TL;DR: The current state of the art for wireless networks composed of energy-harvesting nodes is surveyed, from the information-theoretic performance limits to transmission scheduling policies, resource allocation, medium access, and networking issues.
Abstract: This paper summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access, and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes.
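As a concrete example of the transmission scheduling policies the survey covers: a well-known offline result for a single energy-harvesting transmitter is that throughput is maximized by spreading harvested energy as evenly as energy causality permits. The following sketch (hypothetical arrival amounts, unit-length slots) computes that piecewise-constant schedule with a simple greedy; it is an illustration of this classic result, not an algorithm from the survey itself.

```python
# Offline energy-harvesting schedule: repeatedly find the prefix of slots
# with the smallest average available energy and transmit at that constant
# power over it. The resulting power sequence is non-decreasing and
# satisfies energy causality with equality at each segment boundary.
def eh_schedule(arrivals):
    """arrivals[i] = energy that becomes available at the start of slot i."""
    power, start = [], 0
    while start < len(arrivals):
        best_avg, best_end, cum = float("inf"), start, 0.0
        for end in range(start, len(arrivals)):
            cum += arrivals[end]
            avg = cum / (end - start + 1)
            if avg <= best_avg:            # '<=' extends ties to longer stretches
                best_avg, best_end = avg, end
        power.extend([best_avg] * (best_end - start + 1))
        start = best_end + 1
    return power

print(eh_schedule([4.0, 0.0, 6.0, 2.0]))   # -> [2.0, 2.0, 4.0, 4.0]
```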

829 citations

Proceedings Article
01 Jan 2016
TL;DR: In this article, the authors propose a technique called Short-Dot to reduce the number of redundant computations in a coding theory-inspired fashion for computing linear transforms of long vectors.
Abstract: Faced with the saturation of Moore's law and the increasing size and dimension of data, system designers have increasingly resorted to parallel and distributed computing to reduce the computation time of machine-learning algorithms. However, distributed computing is often bottlenecked by a small fraction of slow processors called "stragglers" that reduce the speed of computation because the fusion node has to wait for all processors to complete their processing. To combat the effect of stragglers, recent literature proposes introducing redundancy in computations across processors, e.g., using repetition-based strategies or erasure codes. The fusion node can exploit this redundancy by completing the computation using outputs from only a subset of the processors, ignoring the stragglers. In this paper, we propose a novel technique - that we call "Short-Dot" - to introduce redundant computations in a coding-theory-inspired fashion, for computing linear transforms of long vectors. Instead of computing long dot products as required in the original linear transform, we construct a larger number of redundant and short dot products that can be computed more efficiently at individual processors. Further, only a subset of these short dot products is required at the fusion node to finish the computation successfully. We demonstrate through probabilistic analysis as well as experiments on computing clusters that Short-Dot offers significant speed-ups compared to existing techniques. We also derive trade-offs between the length of the dot products and the resilience to stragglers (the number of processors required to finish) for any such strategy, and compare them to those achieved by our strategy.
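The redundancy-and-decode mechanism behind this line of work fits in a few lines. The sketch below encodes the rows of a linear transform with a Vandermonde matrix so that any k of n worker outputs suffice to recover A @ x; it demonstrates the general straggler-tolerance idea, not Short-Dot's specific construction, which additionally makes each coded dot product shorter.

```python
# Straggler-tolerant coded dot products: n workers each compute one coded
# dot product; the fusion node decodes A @ x from any k responses.
import numpy as np

rng = np.random.default_rng(0)
k, n, N = 4, 6, 10                   # k true rows, n workers, vector length N
A = rng.standard_normal((k, N))      # linear transform to apply
x = rng.standard_normal(N)

# Encode: worker i gets one coded row, a combination of A's rows.
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)  # n x k
coded_rows = G @ A                   # worker i computes coded_rows[i] @ x

# Simulate stragglers: only an arbitrary subset of k workers respond.
done = [0, 2, 3, 5]
responses = coded_rows[done] @ x     # what the fusion node receives

# Decode: invert the k x k Vandermonde submatrix for the responders.
Ax_hat = np.linalg.solve(G[done], responses)
assert np.allclose(Ax_hat, A @ x)
print("recovered A @ x from", len(done), "of", n, "workers")
```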

281 citations

Journal ArticleDOI
TL;DR: Compares “super-Nyquist” density EEG (“SND”) with Nyquist-density arrays for assessing the spatiotemporal aspects of early visual processing; SND captured more neural information from visual cortex, arguing for increased development of this approach in basic and translational neuroscience.
Abstract: Standard human EEG systems based on spatial Nyquist estimates suggest that 20–30 mm electrode spacing suffices to capture neural signals on the scalp, but recent studies posit that increasing sensor density can provide higher resolution neural information. Here, we compared “super-Nyquist” density EEG (“SND”) with Nyquist density (“ND”) arrays for assessing the spatiotemporal aspects of early visual processing. EEG was measured from 128 electrodes arranged over occipitotemporal brain regions (14 mm spacing) while participants viewed flickering checkerboard stimuli. Analyses compared SND with ND-equivalent subsets of the same electrodes. Frequency-tagged stimuli were classified more accurately with SND than ND arrays in both the time and the frequency domains. Representational similarity analysis revealed that a computational model of V1 correlated more highly with the SND than the ND array. Overall, SND EEG captured more neural information from visual cortex, arguing for increased development of this approach in basic and translational neuroscience.
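To illustrate the frequency-tagged classification mentioned above, here is a toy sketch on synthetic single-channel data: decide which flicker frequency drove the signal by comparing spectral power at the candidate tag frequencies. The sampling rate, trial length, and tag frequencies are hypothetical; the study itself used 128-channel recordings and cross-validated classifiers.

```python
# Toy frequency-domain classification of frequency-tagged (SSVEP-like) data.
import numpy as np

fs, dur = 500.0, 2.0                        # sampling rate (Hz), trial length (s)
t = np.arange(0, dur, 1.0 / fs)
f_tags = (7.5, 12.0)                        # hypothetical flicker frequencies

def make_trial(f_true, snr=0.5):
    """Synthetic trial: a tagged sinusoid buried in white noise."""
    return snr * np.sin(2 * np.pi * f_true * t) + np.random.randn(t.size)

def classify(trial):
    """Pick the tag frequency with the most spectral power."""
    spec = np.abs(np.fft.rfft(trial)) ** 2
    freqs = np.fft.rfftfreq(trial.size, 1.0 / fs)
    power = [spec[np.argmin(np.abs(freqs - f))] for f in f_tags]
    return f_tags[int(np.argmax(power))]

trials = [make_trial(f) for f in f_tags for _ in range(50)]
labels = [f for f in f_tags for _ in range(50)]
acc = np.mean([classify(tr) == lab for tr, lab in zip(trials, labels)])
print(f"toy classification accuracy: {acc:.2f}")
```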

278 citations

Journal ArticleDOI
TL;DR: Provides novel coded computation strategies for distributed matrix–matrix products that outperform the recent “Polynomial code” constructions in recovery threshold, i.e., the required number of successful workers.
Abstract: We provide novel coded computation strategies for distributed matrix–matrix products that outperform the recent “Polynomial code” constructions in recovery threshold, i.e., the required number of successful workers. When a fixed $1/m$ fraction of each matrix can be stored at each worker node, Polynomial codes require $m^{2}$ successful workers, while our MatDot codes only require $2m-1$ successful workers. However, MatDot codes have higher computation cost per worker and higher communication cost from each worker to the fusion node. We also provide a systematic construction of MatDot codes. Furthermore, we propose “PolyDot” coding that interpolates between Polynomial codes and MatDot codes to trade off computation/communication costs and recovery thresholds. Finally, we demonstrate a novel coding technique for multiplying $n$ matrices ( $n \geq 3$ ) using ideas from MatDot and PolyDot codes.
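The MatDot idea for m = 2 fits in a short sketch: split A column-wise and B row-wise, let each worker multiply evaluations of the matrix polynomials pA(x) = A1 + x*A2 and pB(x) = x*B1 + B2, and recover A @ B as the coefficient of x^(m-1) by interpolating from any 2m-1 = 3 worker outputs. The sketch follows the published construction; the worker count and evaluation points are arbitrary choices.

```python
# MatDot for m = 2: recovery from any 2m-1 = 3 of n workers.
import numpy as np

rng = np.random.default_rng(1)
N, m, n = 4, 2, 5                         # matrix size, splits, workers
A, B = rng.standard_normal((2, N, N))

A1, A2 = A[:, :N // 2], A[:, N // 2:]     # column blocks of A
B1, B2 = B[:N // 2, :], B[N // 2:, :]     # row blocks of B
pA = lambda x: A1 + x * A2                # degree m-1 matrix polynomials;
pB = lambda x: x * B1 + B2                # x^1 coeff of pA(x) @ pB(x) is A1B1 + A2B2 = AB

points = np.arange(1.0, n + 1)            # distinct evaluation point per worker
outputs = np.stack([pA(x) @ pB(x) for x in points])

# Fusion node: interpolate the degree-(2m-2) product polynomial from any
# 2m-1 non-straggling workers and read off the x^(m-1) coefficient.
done = [0, 2, 4]
V = np.vander(points[done], 2 * m - 1, increasing=True)
coeffs = np.linalg.solve(V, outputs[done].reshape(len(done), -1))
AB_hat = coeffs[m - 1].reshape(N, N)
assert np.allclose(AB_hat, A @ B)
print("recovered A @ B from 3 of", n, "workers")
```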

217 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently—those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90%, and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
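The first (atom-decomposition) strategy is straightforward to mock up: each of P processors owns a fixed slice of atoms and computes the forces on just its slice after an all-to-all exchange of positions. The sketch below emulates the decomposition in a single process with NumPy (reduced Lennard-Jones units, no cutoff, hypothetical system size); the paper's implementations use true message passing.

```python
# Atom-decomposition toy: partition atoms among P "processors", each of
# which computes Lennard-Jones forces on its own slice against all atoms.
import numpy as np

rng = np.random.default_rng(2)
n_atoms, P = 64, 4
pos = rng.uniform(0.0, 8.0, size=(n_atoms, 3))

def lj_forces(mine, all_pos):
    """Forces on the atoms in `mine` from all atoms (epsilon = sigma = 1)."""
    f = np.zeros((len(mine), 3))
    for k, i in enumerate(mine):
        d = all_pos[i] - all_pos                 # displacement from every atom j
        r2 = np.einsum("ij,ij->i", d, d)
        r2[i] = np.inf                           # skip self-interaction
        inv6 = 1.0 / r2 ** 3
        f[k] = np.sum((24 * (2 * inv6**2 - inv6) / r2)[:, None] * d, axis=0)
    return f

slices = np.array_split(np.arange(n_atoms), P)   # fixed ownership per processor
partial = [lj_forces(s, pos) for s in slices]    # computed in parallel in practice
forces = np.concatenate(partial)                 # matches the all-atom computation
print("net force (≈ 0 by Newton's third law):", forces.sum(axis=0))
```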

29,323 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI
TL;DR: Provides a comprehensive survey of state-of-the-art MEC research with a focus on joint radio-and-computational resource management, and discusses a set of issues, challenges, and future research directions for MEC.
Abstract: Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized mobile cloud computing toward mobile edge computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide-range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also discuss a set of issues, challenges, and future research directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
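The offloading decision at the heart of MEC can be shown with a first-order latency model: offload a task when shipping its input over the uplink plus computing at the edge beats running it on the device. All numbers in the sketch below (CPU speeds, uplink rate, task size) are hypothetical, and real MEC designs also weigh energy, queueing, and channel dynamics.

```python
# Toy binary offloading decision: local CPU time vs. uplink + edge time.
def local_cost(cycles, f_local=1e9):
    """Seconds to run the task on the device CPU (cycles / Hz)."""
    return cycles / f_local

def offload_cost(input_bits, cycles, rate=20e6, f_edge=10e9):
    """Seconds to upload the input and run the task at the edge server."""
    return input_bits / rate + cycles / f_edge

task = dict(input_bits=2e6, cycles=5e9)    # one hypothetical compute-heavy job
t_loc = local_cost(task["cycles"])
t_off = offload_cost(task["input_bits"], task["cycles"])
print(f"local: {t_loc:.2f} s, offload: {t_off:.2f} s ->",
      "offload" if t_off < t_loc else "run locally")
```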

2,992 citations