scispace - formally typeset
Author

Nitish Mital

Bio: Nitish Mital is an academic researcher from Imperial College London. The author has contributed to research in the topics of Computer science and Cache, has an h-index of 4, and has co-authored 12 publications receiving 68 citations. Previous affiliations of Nitish Mital include the Indian Institute of Technology Bombay.
Topics: Computer science, Cache, Engineering, Server, Encoder

Papers
Journal ArticleDOI
15 Apr 2018
TL;DR: In this paper, a joint storage and proactive caching scheme is proposed, which exploits coded storage across the servers, uncoded cache placement at the users, and coded delivery; the delivery latency is analyzed for both successive and parallel transmissions from the servers, and the optimality of the proposed scheme is proved for successive transmissions.
Abstract: Cache-aided content delivery is studied in a multi-server system with $P$ servers and $K$ users, each equipped with a local cache memory. In the delivery phase, each user connects randomly to any $\rho$ out of $P$ servers. Thanks to the availability of multiple servers, which model small-cell base stations (SBSs), demands can be satisfied with reduced storage capacity at each server and reduced delivery rate per server; however, this also leads to reduced multicasting opportunities compared to the single-server scenario. A joint storage and proactive caching scheme is proposed, which exploits coded storage across the servers, uncoded cache placement at the users, and coded delivery. The delivery latency is studied for both successive and parallel transmissions from the servers. It is shown that, with successive transmissions, the achievable average delivery latency is comparable to the one achieved in the single-server scenario, while the gap between the two depends on $\rho$, the available redundancy across the servers, and can be reduced by increasing the storage capacity at the SBSs. The optimality of the proposed scheme with uncoded cache placement and MDS-coded server storage is also proved for successive transmissions.
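The building block of the scheme, uncoded cache placement at the users combined with XOR-coded delivery, can be illustrated with a minimal single-server toy example; the file contents and user indices below are made up, and the paper's multi-server element (MDS-coded storage of each multicast message across the $P$ servers so that any $\rho$ of them suffice) is only noted in the comments.

```python
# Toy illustration of uncoded cache placement + XOR-coded delivery
# (single-server example with K = 2 users and N = 2 files; the file
# contents and user indices are made up for illustration).
# In the multi-server scheme of the paper, each coded multicast message
# would additionally be MDS-coded across the P servers so that any
# rho of them suffice -- that part is not shown here.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Library of N = 2 files, each split into 2 equal subfiles.
A = (b"AAAA", b"aaaa")   # subfiles A1, A2
B = (b"BBBB", b"bbbb")   # subfiles B1, B2

# Placement phase (cache size M = 1 file): user k caches the k-th
# subfile of every file -- uncoded placement.
cache = {
    1: {"A1": A[0], "B1": B[0]},
    2: {"A2": A[1], "B2": B[1]},
}

# Delivery phase: user 1 requests file A, user 2 requests file B.
# The server multicasts a single XOR that is useful to both users.
multicast = xor(A[1], B[0])          # A2 XOR B1

# Each user decodes its missing subfile from its cache + the multicast.
A2_at_user1 = xor(multicast, cache[1]["B1"])
B1_at_user2 = xor(multicast, cache[2]["A2"])

assert A2_at_user1 == A[1] and B1_at_user2 == B[0]
print("both users recover their requested file from one coded message")
```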

33 citations

Posted Content
TL;DR: Methods are proposed for performing other common matrix computations securely on distributed servers, including changing the parameters of secret sharing, matrix transpose, matrix exponentiation, solving a linear system, and matrix inversion, which are then used to show how arbitrary matrix polynomials can be computed securely on distributed servers using the proposed procedure.
Abstract: We consider the problem of secure distributed matrix computation (SDMC), where a user can query a function of data matrices generated at distributed source nodes. We assume the availability of $N$ honest but curious computation servers, which are connected to the sources, the user, and each other through orthogonal and reliable communication links. Our goal is to minimize the amount of data that must be transmitted from the sources to the servers, called the upload cost, while guaranteeing that no $T$ colluding servers can learn any information about the source matrices, and the user cannot learn any information beyond the computation result. We first focus on secure distributed matrix multiplication (SDMM), considering two matrices, and propose a novel polynomial coding scheme using the properties of the finite-field discrete Fourier transform, which achieves an upload cost significantly lower than the existing results in the literature. We then generalize the proposed scheme to include straggler mitigation, as well as to the multiplication of multiple matrices while keeping the input matrices, the intermediate computation results, as well as the final result secure against any $T$ colluding servers. We also consider a special case, called computation with own data, where the data matrices used for computation belong to the user. In this case, we drop the security requirement against the user, and show that the proposed scheme achieves the minimal upload cost. We then propose methods for performing other common matrix computations securely on distributed servers, including changing the parameters of secret sharing, matrix transpose, matrix exponentiation, solving a linear system, and matrix inversion, which are then used to show how arbitrary matrix polynomials can be computed securely on distributed servers using the proposed procedure.
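As a hedged illustration of the general idea, the sketch below implements a generic Shamir-style polynomial-sharing baseline for SDMM, not the finite-field-DFT construction proposed in the paper: each input matrix is masked with $T$ random matrices, every server multiplies its two shares locally, and the user interpolates the constant term of the degree-$2T$ product polynomial, which equals $AB$. The field size, matrix dimensions, and helper names are arbitrary choices.

```python
# Minimal sketch of secure distributed matrix multiplication via
# polynomial (Shamir-style) sharing.  This is a generic baseline,
# NOT the finite-field-DFT construction proposed in the paper.
import random

P = 2**31 - 1                     # prime field size (illustrative)

def mat_add(X, Y):
    return [[(x + y) % P for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_scale(X, c):
    return [[(x * c) % P for x in row] for row in X]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % P
             for j in range(len(Y[0]))] for i in range(len(X))]

def rand_mat(r, c):
    return [[random.randrange(P) for _ in range(c)] for _ in range(r)]

def share(M, T, xs):
    """Evaluate M + R1*x + ... + RT*x^T at each point in xs."""
    masks = [rand_mat(len(M), len(M[0])) for _ in range(T)]
    shares = []
    for x in xs:
        s = M
        for t, R in enumerate(masks, start=1):
            s = mat_add(s, mat_scale(R, pow(x, t, P)))
        shares.append(s)
    return shares

def lagrange_at_zero(xs, ys):
    """Interpolate the matrix polynomial given by (xs, ys) at x = 0."""
    acc = [[0] * len(ys[0][0]) for _ in range(len(ys[0]))]
    for i, (xi, Yi) in enumerate(zip(xs, ys)):
        li = 1
        for j, xj in enumerate(xs):
            if j != i:
                li = li * xj % P * pow(xj - xi, -1, P) % P
        acc = mat_add(acc, mat_scale(Yi, li))
    return acc

T = 1                              # number of colluding servers tolerated
N = 2 * T + 1                      # servers needed by this baseline
xs = list(range(1, N + 1))         # distinct nonzero evaluation points

A, B = rand_mat(2, 3), rand_mat(3, 2)
A_shares, B_shares = share(A, T, xs), share(B, T, xs)

# Each server multiplies its two shares locally and returns the result.
answers = [mat_mul(a, b) for a, b in zip(A_shares, B_shares)]

# The product polynomial has degree 2T, so 2T + 1 answers suffice to
# interpolate its constant term, which is exactly A @ B.
assert lagrange_at_zero(xs, answers) == mat_mul(A, B)
print("user recovers A @ B without any single server seeing A or B")
```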

32 citations

Posted Content
TL;DR: In this paper, Dopamine is proposed, a system that employs federated learning with differentially private stochastic gradient descent (DPSGD) to train deep neural networks for medical diagnostics.
Abstract: While rich medical datasets are hosted in hospitals distributed across the world, concerns about patients' privacy are a barrier against using such data to train deep neural networks (DNNs) for medical diagnostics. We propose Dopamine, a system to train DNNs on distributed datasets, which employs federated learning (FL) with differentially private stochastic gradient descent (DPSGD) and, in combination with secure aggregation, can establish a better trade-off between the differential privacy (DP) guarantee and the DNN's accuracy than other approaches. Results on a diabetic retinopathy (DR) task show that Dopamine provides a DP guarantee close to that of the centralized training counterpart, while achieving a better classification accuracy than FL with parallel DP, where DPSGD is applied without coordination. Code is available at this https URL.
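The privacy primitive that Dopamine builds on, a DP-SGD update with per-example gradient clipping and Gaussian noise, can be sketched as follows. This is a toy logistic-regression example, not the Dopamine system itself, and the clip norm, noise multiplier, and learning rate are arbitrary illustrative values.

```python
# Minimal sketch of one DP-SGD update (per-example gradient clipping +
# Gaussian noise), the primitive that Dopamine combines with federated
# learning and secure aggregation.  Toy logistic regression; the clip
# norm, noise multiplier, and learning rate below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def per_example_grads(w, X, y):
    """Logistic-loss gradients, one row per example."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (p - y)[:, None] * X            # shape (batch, dim)

def dpsgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.1):
    g = per_example_grads(w, X, y)
    # Clip each example's gradient to L2 norm <= clip.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise calibrated to the clip norm, then average.
    noise = rng.normal(0.0, noise_mult * clip, size=w.shape)
    g_priv = (g.sum(axis=0) + noise) / len(X)
    return w - lr * g_priv

# Toy data: 32 examples, 5 features, random binary labels.
X = rng.normal(size=(32, 5))
y = rng.integers(0, 2, size=32).astype(float)
w = np.zeros(5)
for _ in range(10):
    w = dpsgd_step(w, X, y)
print("weights after 10 private steps:", np.round(w, 3))
```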

10 citations

Journal ArticleDOI
TL;DR: This work considers a fast fading AWGN multiple-access channel with full receiver CSI and distributed CSI at the transmitters to evaluate the ergodic sum-capacity of this decentralized model, under identical average powers and channel statistics across users.
Abstract: We consider a fast fading AWGN multiple-access channel (MAC) with full receiver CSI and distributed CSI at the transmitters. The objective is to evaluate the ergodic sum-capacity of this decentralized model, under identical average powers and channel statistics across users. While an optimal water-filling solution can be found for centralized MACs with full CSI at all terminals, such an explicit solution is not considered feasible in distributed CSI models. Our main contribution is an upper-bound on the ergodic sum-capacity when each transmitter is aware only of its own fading coefficients. Interestingly, our techniques also suggest an appropriate lower bound. These bounds are shown to be very close to each other, suggesting the tight nature of the results.
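A rough feel for the setting can be obtained with a Monte Carlo sketch (not the paper's bounds): for a toy two-user Rayleigh-fading MAC, it compares the ergodic sum rate of a distributed threshold power-control policy, in which each transmitter uses only its own gain, against constant-power transmission. All numerical values are illustrative.

```python
# Monte Carlo sketch (not the paper's bounds): ergodic sum rate of a
# 2-user Rayleigh-fading MAC when each transmitter adapts its power
# using only its OWN fading gain (a simple threshold policy), compared
# with constant full-power transmission.  Threshold and power values
# are arbitrary; rates are E[log2(1 + g1*P1 + g2*P2)] in bits/channel use.
import numpy as np

rng = np.random.default_rng(1)
n, P_avg, thr = 200_000, 1.0, 0.5          # samples, avg power, gain threshold

g = rng.exponential(1.0, size=(n, 2))      # unit-mean Rayleigh power gains

# Policy 1: always transmit at the average power.
rate_const = np.mean(np.log2(1 + (g * P_avg).sum(axis=1)))

# Policy 2: transmit only when the own gain exceeds thr, with power
# scaled up so the long-term average power is still P_avg (distributed
# CSI: the decision uses only the transmitter's own gain).
p_on = np.exp(-thr)                        # Pr[g > thr] for Exp(1) gains
P_on = P_avg / p_on
power = np.where(g > thr, P_on, 0.0)
rate_thresh = np.mean(np.log2(1 + (g * power).sum(axis=1)))

print(f"constant power  : {rate_const:.3f} bits/use")
print(f"threshold policy: {rate_thresh:.3f} bits/use")
```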

6 citations

Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this article, the trade-off between the storage capacity and the repair bandwidth is derived in a distributed wireless content caching system, where r out of a total of n cache nodes lose part of their cached data.
Abstract: Repair of multiple partially failed cache nodes is studied in a distributed wireless content caching system, where r out of a total of n cache nodes lose part of their cached data. Broadcast repair of failed cache contents at the network edge is studied; that is, the surviving cache nodes transmit broadcast messages to the failed ones, which are then used, together with the surviving data in their local cache memories, to recover the lost content. The trade-off between the storage capacity and the repair bandwidth is derived. It is shown that utilizing the broadcast nature of the wireless medium and the surviving cache contents at partially failed nodes significantly reduces the required repair bandwidth per node.
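The intuition, that combining the wireless broadcast with the surviving contents at partially failed nodes saves repair bandwidth, can be seen in a toy example (not the paper's scheme or trade-off): one XOR broadcast simultaneously repairs two nodes that each lost a different symbol but still hold the other one.

```python
# Toy illustration (not the paper's scheme) of why broadcasting plus the
# surviving cache contents reduces repair bandwidth: two partially failed
# nodes each lost a different symbol but still hold the other one, so a
# single XOR broadcast from a surviving node repairs both at once,
# instead of two separate unicast retransmissions.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

s1, s2 = b"\x11\x22\x33", b"\xaa\xbb\xcc"   # two cached symbols

node_A = {"s1": s1, "s2": None}             # lost s2, still holds s1
node_B = {"s1": None, "s2": s2}             # lost s1, still holds s2
node_C = {"s1": s1, "s2": s2}               # fully surviving node

broadcast = xor(node_C["s1"], node_C["s2"]) # one broadcast symbol

node_A["s2"] = xor(broadcast, node_A["s1"])
node_B["s1"] = xor(broadcast, node_B["s2"])

assert node_A["s2"] == s2 and node_B["s1"] == s1
print("one broadcast symbol repaired two partially failed nodes")
```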

4 citations


Cited by
Journal ArticleDOI
TL;DR: This paper summarizes the latest research on the application of federated learning in various areas of smart cities, including the Internet of Things, transportation, communications, finance, and healthcare, together with the key technologies and the latest results.
Abstract: Federated learning (FL) plays an important role in the development of smart cities. With the evolution of big data and artificial intelligence, issues related to data privacy and protection have em...

72 citations

Proceedings ArticleDOI
01 Aug 2019
TL;DR: $GASP_{r}$ (Gap Additive Secure Polynomial codes), parametrized by an integer r, outperforms all previously known polynomial codes for SDMM and achieves the lower bounds in the case of no server collusion.
Abstract: We consider the problem of secure distributed matrix multiplication (SDMM) in which a user wishes to compute the product of two matrices with the assistance of honest but curious servers. We construct polynomial codes for SDMM by studying a recently introduced combinatorial tool called the degree table. Maximizing the download rate of a polynomial code for SDMM is equivalent to minimizing N, the number of distinct elements in the corresponding degree table. We propose new constructions of degree tables with a low number of distinct elements. These new constructions lead to a general family of polynomial codes for SDMM, which we call $GASP_{r}$ (Gap Additive Secure Polynomial codes) parametrized by an integer r. $GASP_{r}$ outperforms all previously known polynomial codes for SDMM. We also present lower bounds on N and show that $GASP_{r}$ achieves the lower bounds in the case of no server collusion.
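The degree-table bookkeeping the abstract refers to can be sketched as follows; the exponent sequences used here are arbitrary placeholders, not the actual $GASP_{r}$ assignment, and the snippet only counts the number of distinct entries, which is the quantity N being minimized.

```python
# Bookkeeping sketch for the "degree table" idea described above: each
# entry is the sum of an exponent used to encode A and one used to
# encode B, and the cost N is the number of DISTINCT entries.  The
# exponent sequences below are arbitrary placeholders for illustration,
# not the actual GASP_r assignment from the paper.
from itertools import product

def degree_table(alpha, beta):
    return [[a + b for b in beta] for a in alpha]

def num_distinct(alpha, beta):
    return len({a + b for a, b in product(alpha, beta)})

alpha = [0, 1, 2, 3]        # placeholder exponents for the blocks of A
beta = [0, 4, 8, 12]        # placeholder exponents for the blocks of B

for row in degree_table(alpha, beta):
    print(row)
print("N =", num_distinct(alpha, beta), "distinct degrees")
```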

32 citations

Journal ArticleDOI
TL;DR: The analysis allows the optimization of the PHY parameters (the PHY coding rate at the ENs and the MDS coding rate) for given MAN scheme parameters, in a network where users and ENs are spatially distributed and the propagation is affected by Rayleigh fading and distance-dependent pathloss.
Abstract: The coded caching scheme proposed by Maddah-Ali and Niesen (MAN) critically hinges on the ability of the system to deliver a common coded multicast message from a server to all users in the system at a fixed rate, independent of the number of users. In order to apply this paradigm to a spatially distributed wireless network, it is important to make sure that such a common multicast rate does not vanish, as the number of users in the network and/or the network area increase. This paper starts from a variant of the MAN scheme successively proposed for the so-called combination network, where the multicast message is further encoded by a maximum distance separable (MDS) code, and the MDS-coded blocks are sent to multiple spatially distributed single-antenna edge nodes (ENs), transmitting at a fixed rate with no channel state information. The users have multiple antennas. They obtain receiver channel state information from the standard downlink pilots and can select to decode a desired number of EN transmissions while either nulling or treating as noise the others. The system is reminiscent of the so-called evolved Multimedia Broadcast Multicast Service, since the fundamental underlying transmission mechanism is multipoint multicasting, where each user can independently (in a user-centric manner) decide which EN to decode, without any explicit association of users with ENs. We study the performance of the proposed system when users and ENs are distributed according to homogeneous Poisson point processes in the plane, and the propagation is affected by Rayleigh fading and distance-dependent pathloss. Our analysis allows the optimization of the PHY parameters (PHY coding rate at the ENs and MDS coding rate) for given MAN scheme parameters. The proposed scheme achieves full spatial scalability in the following sense: for an extended network with arbitrary constant ratio of users per EN and area $A = O(N_{\mathrm{E}})$, where $N_{\mathrm{E}}$ denotes the number of ENs, the system achieves a per-user delivery rate that does not vanish as $N_{\mathrm{E}}\rightarrow \infty$.
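The delivery idea, splitting the coded multicast message into k blocks, MDS-encoding them into n blocks sent by n ENs, and letting each user decode any k of the n EN transmissions, can be illustrated with a toy (n, k) = (3, 2) single-parity code; the actual scheme uses general MDS codes and optimizes the coding rates, which is not shown here.

```python
# Toy (n, k) = (3, 2) MDS illustration of the delivery idea described
# above: the coded multicast message is split into k blocks, encoded
# into n blocks (here with a single XOR parity), and one block is sent
# by each edge node.  A user that manages to decode ANY k of the n EN
# transmissions can reconstruct the multicast message; which k are
# decodable depends on its local channel conditions.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m1, m2 = b"1111", b"2222"            # the two message blocks (k = 2)
en_blocks = [m1, m2, xor(m1, m2)]    # blocks sent by ENs 0, 1, 2 (n = 3)

def recover(idx_a, idx_b):
    """Reconstruct (m1, m2) from any two EN blocks."""
    have = {idx_a: en_blocks[idx_a], idx_b: en_blocks[idx_b]}
    if 0 in have and 1 in have:
        return have[0], have[1]
    if 0 in have:                            # have m1 and the parity
        return have[0], xor(have[0], have[2])
    return xor(have[1], have[2]), have[1]    # have m2 and the parity

for pair in [(0, 1), (0, 2), (1, 2)]:
    assert recover(*pair) == (m1, m2)
print("any 2 of the 3 EN transmissions recover the multicast message")
```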

31 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigate the optimal information-theoretic performance under both serial and pipelined fronthaul-edge transmission modes, characterizing the minimum high-SNR latency in terms of the normalized delivery time (NDT) for worst-case users' demands.
Abstract: In fog-aided cellular systems, content delivery latency can be minimized by jointly optimizing edge caching and transmission strategies. In order to account for the cache capacity limitations at the edge nodes (ENs), transmission generally involves both fronthaul transfer from a cloud processor with access to the content library to the ENs and wireless delivery from the ENs to the users. In this paper, the resulting problem is studied from an information-theoretic viewpoint by making the following practically relevant assumptions: 1) the ENs have multiple antennas; 2) only uncoded fractional caching is allowed; 3) the fronthaul links are used to send fractions of contents; and 4) the ENs are constrained to use one-shot linear zero-forcing precoding on the wireless channel. Assuming off-line proactive caching and focusing on a high signal-to-noise ratio (SNR) latency metric, the optimal information-theoretic performance is investigated under both serial and pipelined fronthaul-edge transmission modes. The analysis characterizes the minimum high-SNR latency in terms of normalized delivery time (NDT) for worst-case users' demands. The characterization is exact for a subset of system parameters and is generally optimal within a multiplicative factor of 3/2 for the serial case and 2 for the pipelined case. The results bring insights into the optimal interplay between edge and cloud processing in fog-aided wireless networks as a function of system resources, including the number of antennas at the ENs, the ENs' cache capacity, and the fronthaul capacity.
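For reference, the normalized delivery time used as the latency metric is, in its standard form (notation is generic and may differ slightly from the paper's):

```latex
% Normalized delivery time (NDT): the delivery latency T(L, SNR) for
% files of L bits each, evaluated for the worst-case demand vector and
% normalized by the time an interference-free link of high-SNR capacity
% log(SNR) would need to deliver one file.
\delta = \lim_{\mathrm{SNR} \to \infty} \; \lim_{L \to \infty}
         \frac{\mathbb{E}\!\left[ T(L, \mathrm{SNR}) \right]}{L / \log \mathrm{SNR}}
```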

29 citations

Journal ArticleDOI
TL;DR: It is shown that the proposed scheme provides a significant reduction in the backhaul load when the cache capacity is sufficiently large, as well as in the number of sub-files required, so that it decidedly outperforms state-of-the-art alternatives.
Abstract: We consider a cache-enabled heterogeneous cellular network, where mobile users (MUs) connect to multiple cache-enabled small-cell base stations (SBSs) during a video downloading session. SBSs can deliver these requests using their local cache contents as well as by downloading them from a macro-cell base station (MBS), which has access to the file library. We introduce a novel mobility-aware content storage and delivery scheme, which jointly exploits coded storage at the SBSs and coded delivery from the MBS to reduce the backhaul load from the MBS to the SBSs. We show that the proposed scheme provides a significant reduction both in the backhaul load, when the cache capacity is sufficiently large, and in the number of sub-files required. Overall, for practical scenarios, in which the number of sub-files that can be created is limited either by the size of the files or by the protocol overhead, the proposed coded caching and delivery scheme decidedly outperforms state-of-the-art alternatives. Finally, we show that the benefits of the proposed scheme also extend to scenarios with non-uniform file popularities and arbitrary mobility patterns.
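To see why the number of sub-files matters in practice, the small computation below evaluates the subpacketization of the baseline MAN coded-caching scheme, C(K, t) sub-files per file with t = KM/N, for growing numbers of users; this is the standard baseline count, not the mobility-aware scheme of the paper.

```python
# Worked illustration of the sub-file (subpacketization) bottleneck the
# abstract refers to: in the standard MAN coded-caching scheme each file
# is split into C(K, t) sub-files, where K is the number of users and
# t = K*M/N is the normalized cache size.  The count below is for the
# baseline MAN scheme, not the mobility-aware scheme of the paper.
from math import comb

for K in (10, 20, 40, 80):
    t = K // 2                      # caches hold half the library (t = K*M/N)
    print(f"K = {K:3d} users -> {comb(K, t):,} sub-files per file")
```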

23 citations