Author

Abhishek Sharma

Bio: Abhishek Sharma is an academic researcher from Manipal University Jaipur. The author has contributed to research in topics: Medicine & Large Hadron Collider. The author has an h-index of 52, has co-authored 426 publications, and has received 9715 citations. Previous affiliations of Abhishek Sharma include Victoria University, Australia & University of Texas Health Science Center at San Antonio.


Papers
Proceedings ArticleDOI
16 Jun 2012
TL;DR: GMA solves a joint, relaxed QCQP over different feature spaces to obtain a single (non)linear subspace and is a supervised extension of Canonical Correlation Analysis (CCA), which is useful for cross-view classification and retrieval.
Abstract: This paper presents a general multi-view feature extraction approach that we call Generalized Multiview Analysis or GMA. GMA has all the desirable properties required for cross-view classification and retrieval: it is supervised, it allows generalization to unseen classes, it is multi-view and kernelizable, it affords an efficient eigenvalue-based solution and is applicable to any domain. GMA exploits the fact that most popular supervised and unsupervised feature extraction techniques are the solution of a special form of a quadratically constrained quadratic program (QCQP), which can be solved efficiently as a generalized eigenvalue problem. GMA solves a joint, relaxed QCQP over different feature spaces to obtain a single (non)linear subspace. Intuitively, GMA is a supervised extension of Canonical Correlation Analysis (CCA), which is useful for cross-view classification and retrieval. The proposed approach is general and has the potential to replace CCA whenever classification or retrieval is the purpose and label information is available. We outperform previous approaches for text-image retrieval on Pascal and Wiki text-image data. We report state-of-the-art results for pose and lighting invariant face recognition on the MultiPIE face dataset, significantly outperforming other approaches.
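The QCQP-to-generalized-eigenvalue reduction the abstract relies on can be illustrated with plain two-view CCA, the unsupervised special case that GMA extends. The sketch below is not the GMA algorithm itself; the function name and the ridge term `reg` are illustrative choices for numerical stability, not details from the paper:

```python
import numpy as np
from scipy.linalg import eigh

def cca_geig(X, Y, reg=1e-6):
    """Linear CCA solved as a generalized eigenvalue problem.

    Maximizing correlation between Xw_x and Yw_y under unit-variance
    constraints is a QCQP whose relaxation reduces to A v = lambda B v,
    with A holding the cross-covariances and B the within-view covariances.
    """
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n, dx = X.shape
    dy = Y.shape[1]
    Cxx = X.T @ X / n + reg * np.eye(dx)   # within-view covariance (view 1)
    Cyy = Y.T @ Y / n + reg * np.eye(dy)   # within-view covariance (view 2)
    Cxy = X.T @ Y / n                      # cross-view covariance
    # A = [[0, Cxy], [Cyx, 0]] is symmetric; B = blkdiag(Cxx, Cyy) is PD.
    A = np.block([[np.zeros((dx, dx)), Cxy],
                  [Cxy.T, np.zeros((dy, dy))]])
    B = np.block([[Cxx, np.zeros((dx, dy))],
                  [np.zeros((dy, dx)), Cyy]])
    vals, vecs = eigh(A, B)                # eigenvalues come out as +/- rho_i
    order = np.argsort(vals)[::-1]         # sort descending: top canonical corrs
    vals, vecs = vals[order], vecs[:, order]
    return vals, vecs[:dx], vecs[dx:]      # correlations, W_x, W_y
```

When the two views are nearly linear functions of each other, the top eigenvalue (the leading canonical correlation) approaches 1, which is the property cross-view retrieval exploits.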

733 citations

Journal ArticleDOI
TL;DR: A review of the current status of mathematical modelling studies of biomass pyrolysis with the aim to identify knowledge gaps for further research and opportunities for integration of biomass pyrolysis models of disparate scales is provided in this paper.
Abstract: Biomass as a form of energy source may be utilized in two different ways: directly by burning the biomass and indirectly by converting it into solid, liquid or gaseous fuels. Pyrolysis is an indirect conversion method, and can be described in simpler terms as a thermal decomposition of biomass under oxygen-depleted conditions to an array of solid, liquid and gaseous products, namely biochar, bio-oil and fuel gas. However, pyrolysis of biomass is a complex chemical process with several operational and environmental challenges. Consequently, this process has been widely investigated in order to understand the mechanisms and kinetics of pyrolysis at different scales, viz. particle level, multi-phase reacting flow, product distribution and reactor performance, process integration and control. However, there are a number of uncertainties in current biomass pyrolysis models, especially in their ability to optimize process conditions to achieve desired product yields and distribution. The present contribution provides a critical review of the current status of mathematical modelling studies of biomass pyrolysis with the aim to identify knowledge gaps for further research and opportunities for integration of biomass pyrolysis models of disparate scales. Models for the hydrodynamic behaviour of particles in pyrolysis, and their interaction with the reactive flow and the effect on the performance of the reactors have also been critically analyzed. From this analysis it becomes apparent that feedstock characteristics, evolving physical and chemical properties of biomass particles and residence times of both solid and gas phases in reactors hold the key to the desired performance of the pyrolysis process. Finally, the importance of catalytic effects in pyrolysis has also been critically analyzed, resulting in recommendations for further research in this area especially on selection of catalysts for optimal product yields under varying operating conditions.

425 citations

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper uses Partial Least Squares to linearly map images in different modalities to a common linear subspace in which they are highly correlated, and formulates a generic intermediate subspace comparison framework for multi-modal recognition.
Abstract: This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recognition. We show both theoretically and experimentally that PLS can be used effectively across modalities. We also formulate a generic intermediate subspace comparison framework for multi-modal recognition. Surprisingly, we achieve high performance using only pixel intensities as features. We experimentally demonstrate the highest published recognition rates on the pose variations in the PIE data set, and also show that PLS can be used to compare sketches to photos, and to compare images taken at different resolutions.
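The core idea of mapping two modalities into a common correlated subspace can be sketched with a simplified SVD-based PLS: the leading singular vectors of the cross-covariance matrix give, per component, the direction pair with maximal covariance between the views. This is an illustrative variant, not necessarily the exact PLS formulation used in the paper:

```python
import numpy as np

def pls_common_subspace(X, Y, k):
    """Project two views (e.g. photos and sketches) into a k-dim common subspace.

    The top-k singular vector pairs of the cross-covariance X^T Y maximize
    covariance between the projected views, component by component.
    """
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    Wx, Wy = U[:, :k], Vt[:k].T            # per-view projection bases
    return Xc @ Wx, Yc @ Wy, (Wx, Wy)      # projected views + the maps
```

In the common subspace the two modalities can be compared directly (e.g. by nearest neighbour), even though raw pixel intensities in the original spaces are not directly comparable.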

382 citations

Journal ArticleDOI
Georges Aad, Brad Abbott, Dale Charles Abbott, Ovsat Abdinov, +2934 more (199 institutions)
TL;DR: In this article, a search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented, based on 139 fb$^{-1}$ of proton-proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ $\text {TeV}$.
Abstract: A search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented. The analysis is based on 139 fb$^{-1}$ of proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ $\text {TeV}$. Three R-parity-conserving scenarios where the lightest neutralino is the lightest supersymmetric particle are considered: the production of chargino pairs with decays via either W bosons or sleptons, and the direct production of slepton pairs. The analysis is optimised for the first of these scenarios, but the results are also interpreted in the others. No significant deviations from the Standard Model expectations are observed and limits at 95% confidence level are set on the masses of relevant supersymmetric particles in each of the scenarios. For a massless lightest neutralino, masses up to 420 $\text {Ge}\text {V}$ are excluded for the production of the lightest-chargino pairs assuming W-boson-mediated decays and up to 1 $\text {TeV}$ for slepton-mediated decays, whereas for slepton-pair production masses up to 700 $\text {Ge}\text {V}$ are excluded assuming three generations of mass-degenerate sleptons.

272 citations

Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Dale Charles Abbott, +2936 more (198 institutions)
TL;DR: An exclusion limit on the H→invisible branching ratio of 0.26(0.17_{-0.05}^{+0.07}) at 95% confidence level is observed (expected) in combination with the results at sqrt[s]=7 and 8 TeV.
Abstract: Dark matter particles, if sufficiently light, may be produced in decays of the Higgs boson. This Letter presents a statistical combination of searches for H→invisible decays where H is produced according to the standard model via vector boson fusion, Z(ll)H, and W/Z(had)H, all performed with the ATLAS detector using 36.1 fb^{-1} of pp collisions at a center-of-mass energy of sqrt[s]=13 TeV at the LHC. In combination with the results at sqrt[s]=7 and 8 TeV, an exclusion limit on the H→invisible branching ratio of 0.26(0.17_{-0.05}^{+0.07}) at 95% confidence level is observed (expected).

234 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, sparse kernel machines, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This work introduces Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling, and exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity.
Abstract: Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes comprises a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high-quality pixel-level annotations, and 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.

7,547 citations

Journal ArticleDOI
TL;DR: The 11th edition of Harrison's Principles of Internal Medicine welcomes Anthony Fauci to its editorial staff, in addition to more than 85 new contributors.
Abstract: The 11th edition of Harrison's Principles of Internal Medicine welcomes Anthony Fauci to its editorial staff, in addition to more than 85 new contributors. While the organization of the book is similar to previous editions, major emphasis has been placed on disorders that affect multiple organ systems. Important advances in genetics, immunology, and oncology are emphasized. Many chapters of the book have been rewritten and describe major advances in internal medicine. Subjects that received only a paragraph or two of attention in previous editions are now covered in entire chapters. Among the chapters that have been extensively revised are the chapters on infections in the compromised host, on skin rashes in infections, on many of the viral infections, including cytomegalovirus and Epstein-Barr virus, on sexually transmitted diseases, on diabetes mellitus, on disorders of bone and mineral metabolism, and on lymphadenopathy and splenomegaly. The major revisions in these chapters and many

6,968 citations

Proceedings Article
07 Dec 2015
TL;DR: This work introduces a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network, and can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps.
Abstract: Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.
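The two halves of the module the abstract describes, a grid generator driven by predicted transformation parameters and a bilinear sampler, can be sketched in NumPy. This is a forward-pass illustration only (the real module is differentiated through during training), and the function names are illustrative:

```python
import numpy as np

def affine_grid(theta, H, W):
    """Grid generator: map a 2x3 affine matrix to sampling coordinates
    in normalized [-1, 1] space, one (x, y) pair per output pixel."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                         np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W)
    return (theta @ coords).reshape(2, H, W)                     # (2, H, W)

def bilinear_sample(img, grid):
    """Bilinear sampler: read the input image at the (generally fractional)
    grid coordinates, interpolating between the four neighbouring pixels."""
    H, W = img.shape
    x = (grid[0] + 1) * (W - 1) / 2        # back to pixel coordinates
    y = (grid[1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0                # interpolation weights
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x0 + 1]
            + wy * (1 - wx) * img[y0 + 1, x0]
            + wy * wx * img[y0 + 1, x0 + 1])
```

Because both steps are smooth in `theta`, gradients can flow from the sampled output back to the network that predicts the transformation, which is what lets the module be trained without extra supervision.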

6,150 citations