Author

Brejesh Lall

Bio: Brejesh Lall is an academic researcher at the Indian Institute of Technology Delhi. He has contributed to research in topics including deep learning and filter banks, has an h-index of 16, and has co-authored 255 publications receiving 1,132 citations. His previous affiliations include the Indian Institutes of Technology and the Indian Institutes of Information Technology.


Papers
Journal ArticleDOI
TL;DR: Two different models are developed to capture the trend in the number of cases and to predict cases in the days to come, so that appropriate preparations can be made to fight this disease.
Abstract: COVID-19 is caused by a novel coronavirus and has wreaked havoc in many countries across the globe. A majority of the world population has been living in a restricted environment for more than a month, with minimal economic activity, to prevent exposure to this highly infectious disease. Medical professionals are going through a stressful period while trying to save the larger population. In this paper, we develop two different models to capture the trend in the number of cases and to predict cases in the days to come, so that appropriate preparations can be made to fight this disease. The first is a mathematical model accounting for various parameters relating to the spread of the virus, while the second is a non-parametric model based on the Fourier decomposition method (FDM), fitted on the available data. The study is performed for various countries, but detailed results are provided for India, Italy, and the United States of America (USA). The turnaround dates for the trend of infected cases are estimated. The end-dates are also predicted and are found to agree well with a very popular study based on the classic susceptible-infected-recovered (SIR) model. Worldwide, the total numbers of expected cases and deaths are 12.7 × 10⁶ and 5.27 × 10⁵, respectively, predicted using data available as of 06-06-2020, with 95% confidence intervals. The proposed study produces promising results with the potential to serve as a good complement to existing methods for continuous predictive monitoring of the COVID-19 pandemic.
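The paper's non-parametric model fits a Fourier-based decomposition to the case-count series and reads the turnaround date off the fitted trend. A minimal sketch of that idea, using a plain least-squares truncated Fourier series on synthetic data (the function, parameters, and data here are illustrative, not the paper's actual FDM implementation):

```python
import numpy as np

def fourier_fit(t, y, n_harmonics=3, period=None):
    """Least-squares fit of a truncated Fourier series to a time series."""
    if period is None:
        period = t[-1] - t[0]  # one fundamental period spanning the data
    # Design matrix: constant term plus cos/sin pair for each harmonic
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols.append(np.cos(w * t))
        cols.append(np.sin(w * t))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coeffs  # fitted trend evaluated at t

# Synthetic "daily cases": a rise-and-fall bump plus noise
t = np.arange(100, dtype=float)
rng = np.random.default_rng(0)
y = 1000 * np.exp(-((t - 60) ** 2) / 500) + rng.normal(0, 20, t.size)

trend = fourier_fit(t, y, n_harmonics=4)
peak_day = int(t[np.argmax(trend)])  # estimated turnaround date of the trend
```

The smooth fitted trend, rather than the noisy daily counts, is what yields a stable turnaround-date estimate.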

87 citations

Journal ArticleDOI
TL;DR: This paper investigates modulation techniques for end-to-end communication between two nanomachines placed in a fluid medium and proposes an M-ary modulation scheme and an extended scheme, which is a slight variation of a binary modulation scheme.
Abstract: In this paper, we investigate modulation techniques for end-to-end communication between two nanomachines placed in a fluid medium. The information is encoded as the number of molecules transmitted, leading to such schemes being aptly named amplitude modulation schemes. The propagation of molecules obeys the laws of Brownian motion with a positive drift from the transmitter to the receiver nanomachine. The channel is characterized by two parameters of the fluid medium: the drift velocity and the diffusion coefficient. Assuming the molecules degrade over time, the life expectancy of the molecules also plays a significant role in such communication scenarios. We consider an M-ary modulation scheme and also propose an extended scheme, which is a slight variation of a binary modulation scheme. The received symbol is corrupted by interference from the previous symbols as well as other noise sources present in the medium. Considering maximum likelihood detection at the receiver, we derive analytical expressions for the end-to-end symbol error probability and the capacity for these modulation schemes. Numerical results bring out the impact of various parameters on the performance of the system. Our results show that these schemes offer a promising approach to set up molecular communication over diffusion-based channels.
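For Brownian motion with positive drift, the first-arrival time of a molecule follows an inverse Gaussian distribution, which gives the probability that a molecule released at the start of a slot arrives within that slot. A minimal simulation sketch of binary amplitude modulation with threshold detection under this model (all parameter values are illustrative, the detector is a simple midpoint threshold rather than the paper's full ML derivation, and inter-symbol interference is ignored):

```python
import math
import numpy as np

def hit_prob(t, d, v, D):
    """P(first arrival <= t) for drift-diffusion over distance d:
    inverse Gaussian CDF with mu = d/v, lam = d^2 / (2*D)."""
    if t <= 0:
        return 0.0
    mu, lam = d / v, d * d / (2 * D)
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    a = math.sqrt(lam / t)
    return phi(a * (t / mu - 1)) + math.exp(2 * lam / mu) * phi(-a * (t / mu + 1))

# Illustrative channel parameters (not from the paper)
d, v, D, T, N = 1.0, 1.0, 0.5, 2.0, 200  # distance, drift, diffusion, slot, molecules
p = hit_prob(T, d, v, D)                  # per-molecule arrival probability in a slot

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 10_000)
counts = rng.binomial(N, p, bits.size) * bits  # emit N molecules only for bit 1
threshold = N * p / 2                          # simple midpoint detector
decoded = (counts > threshold).astype(int)
ber = np.mean(decoded != bits)                 # empirical bit error rate
```

The M-ary case follows the same pattern with multiple emission levels and multiple thresholds; adding leakage from previous slots would model the inter-symbol interference the paper analyzes.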

79 citations

Posted Content
TL;DR: A large-scale 3D shape understanding benchmark using data and annotation from ShapeNet 3D object database and the best performing teams have outperformed state-of-the-art approaches on both tasks.
Abstract: We introduce a large-scale 3D shape understanding benchmark using data and annotation from ShapeNet 3D object database. The benchmark consists of two tasks: part-level segmentation of 3D shapes and 3D reconstruction from single view images. Ten teams have participated in the challenge and the best performing teams have outperformed state-of-the-art approaches on both tasks. A few novel deep learning architectures have been proposed on various 3D representations on both tasks. We report the techniques used by each team and the corresponding performances. In addition, we summarize the major discoveries from the reported results and possible trends for the future work in the field.

58 citations

Posted Content
TL;DR: This white paper first provides a generic discussion, shows some facts and discusses targets set in international bodies related to rural and remote connectivity and digital divide, and digs into technical details, i.e., into a solutions space.
Abstract: In many places all over the world rural and remote areas lack proper connectivity that has led to increasing digital divide. These areas might have low population density, low incomes, etc., making them less attractive places to invest and operate connectivity networks. 6G could be the first mobile radio generation truly aiming to close the digital divide. However, in order to do so, special requirements and challenges have to be considered since the beginning of the design process. The aim of this white paper is to discuss requirements and challenges and point out related, identified research topics that have to be solved in 6G. This white paper first provides a generic discussion, shows some facts and discusses targets set in international bodies related to rural and remote connectivity and digital divide. Then the paper digs into technical details, i.e., into a solutions space. Each technical section ends with a discussion and then highlights identified 6G challenges and research ideas as a list.

41 citations

Posted Content
TL;DR: It is shown that using SLAF along with standard activations can provide performance improvements with only a small increase in the number of parameters, and it is proved that SLNNs can approximate any neural network with Lipschitz-continuous activations to any arbitrary error, highlighting their capacity and possible equivalence with standard NNs.
Abstract: The scope of research in the domain of activation functions remains limited and centered around improving the ease of optimization or generalization quality of neural networks (NNs). However, to develop a deeper understanding of deep learning, it becomes important to look at the nonlinear component of NNs more carefully. In this paper, we aim to provide a generic form of activation function along with appropriate mathematical grounding so as to allow for insights into the working of NNs in the future. We propose "Self-Learnable Activation Functions" (SLAF), which are learned during training and are capable of approximating most of the existing activation functions. SLAF is given as a weighted sum of pre-defined basis elements which can serve as a good approximation of the optimal activation function. The coefficients for these basis elements allow a search over the entire space of continuous functions (consisting of all the conventional activations). We propose various training routines which can be used to train SLAF-equipped neural networks (SLNNs) effectively. We prove that SLNNs can approximate any neural network with Lipschitz-continuous activations to any arbitrary error, highlighting their capacity and possible equivalence with standard NNs. Also, SLNNs can be completely represented as a collection of finite-degree polynomials up to the very last layer, obviating several hyperparameters like width and depth. Since the optimization of SLNNs is still a challenge, we show that using SLAF along with standard activations (like ReLU) can provide performance improvements with only a small increase in the number of parameters.
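A weighted sum of basis elements is easy to make concrete. A minimal sketch of a learnable activation over a polynomial basis 1, x, x², ..., xᵏ (the basis choice, initialization, and gradient helper here are illustrative, not the paper's exact construction):

```python
import numpy as np

class SLAF:
    """Self-learnable activation as a weighted sum of polynomial basis
    elements; the coefficients are the learnable parameters."""

    def __init__(self, degree=3, rng=None):
        rng = rng or np.random.default_rng(0)
        self.coeffs = rng.normal(0, 0.1, degree + 1)
        self.coeffs[1] = 1.0  # start near the identity so training is stable

    def __call__(self, x):
        # Horner evaluation of sum_i coeffs[i] * x**i
        y = np.zeros_like(x)
        for c in self.coeffs[::-1]:
            y = y * x + c
        return y

    def grad_coeffs(self, x, upstream):
        # Gradient of a scalar loss w.r.t. each coefficient, for learning
        return np.array([(upstream * x**i).sum() for i in range(len(self.coeffs))])

act = SLAF(degree=3)
x = np.linspace(-2, 2, 5)
y = act(x)
```

Because the activation is a polynomial in x, a whole network of such units composes into finite-degree polynomials layer by layer, which is the representational property the abstract highlights.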

36 citations


Cited by
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance and describes numerous important application areas such as image based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience and a wide range of mathematical methods. A CD-ROM with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. The book is appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

01 Jan 2006

3,012 citations

Journal ArticleDOI

2,415 citations

01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts in cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.

2,188 citations

Proceedings Article
03 Dec 2018
TL;DR: This work proposes to learn an X-transformation from the input points to simultaneously promote two causes: the first is the weighting of the input features associated with the points, and the second is the permutation of the points into a latent and potentially canonical order.
Abstract: We present a simple and general framework for feature learning from point clouds. The key to the success of CNNs is the convolution operator, which is capable of leveraging spatially-local correlation in data represented densely in grids (e.g. images). However, point clouds are irregular and unordered, thus directly convolving kernels against features associated with the points will result in desertion of shape information and variance to point ordering. To address these problems, we propose to learn an X-transformation from the input points to simultaneously promote two causes: the first is the weighting of the input features associated with the points, and the second is the permutation of the points into a latent and potentially canonical order. Element-wise product and sum operations of the typical convolution operator are subsequently applied on the X-transformed features. The proposed method is a generalization of typical CNNs to feature learning from point clouds, thus we call it PointCNN. Experiments show that PointCNN achieves on-par or better performance than state-of-the-art methods on multiple challenging benchmark datasets and tasks.
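The shape of the X-conv step can be sketched in a few lines: a transform predicted from the neighborhood's coordinates reweights and soft-permutes the K input features before an ordinary weighted sum. This toy version stands in a linear map for the paper's MLP and uses random weights throughout; the shapes, not the learned behavior, are the point:

```python
import numpy as np

def x_conv(points, feats, w_x, w_conv):
    """One X-conv step on a K-point neighborhood (illustrative only):
    a K x K transform predicted from coordinates is applied to the K
    input features, then a weighted sum plays the role of convolution."""
    K, C = feats.shape
    # "MLP" on flattened coordinates -> K x K transform (linear stand-in here)
    X = (points.reshape(-1) @ w_x).reshape(K, K)
    transformed = X @ feats                     # reweight + soft-permute features
    return (w_conv * transformed).sum(axis=0)   # aggregate over the K points

rng = np.random.default_rng(0)
K, C = 8, 4
points = rng.normal(size=(K, 3))            # neighborhood coordinates
feats = rng.normal(size=(K, C))             # per-point input features
w_x = rng.normal(size=(3 * K, K * K)) * 0.1  # params of the X-transform predictor
w_conv = rng.normal(size=(K, C))             # "convolution" weights

out = x_conv(points, feats, w_x, w_conv)     # one aggregated feature vector
```

Because X is predicted from the coordinates themselves, a suitably trained predictor can compensate for point reordering, which is how the method targets permutation variance.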

1,535 citations