Institution

International Institute of Information Technology, Hyderabad

Education · Hyderabad, India
About: International Institute of Information Technology, Hyderabad is an education organization based in Hyderabad, India. It is known for its research contributions in the topics of Computer science and Authentication. The organization has 2048 authors who have published 3677 publications receiving 45319 citations. The organization is also known as: IIIT Hyderabad & International Institute of Information Technology (IIIT).


Papers
Posted Content
TL;DR: An auto-encoder based architecture is proposed for phase retrieval that can be adaptively trained both under low overlap, where traditional techniques completely fail, and at higher levels of overlap; for the high-overlap case, optimizing the generator to reduce the forward-model error is an appropriate choice.
Abstract: Fourier Ptychography is a recently proposed imaging technique that yields high-resolution images by computationally transcending the diffraction blur of an optical system. At the crux of this method is the phase retrieval algorithm, which is used for computationally stitching together low-resolution images taken under varying illumination angles of a coherent light source. However, the traditional iterative phase retrieval technique relies heavily on the initialization and also needs a good amount of overlap in the Fourier domain for the successively captured low-resolution images, thus increasing the acquisition time and the amount of data. We show that an auto-encoder based architecture can be adaptively trained for phase retrieval under both low overlap, where traditional techniques completely fail, and at higher levels of overlap. For the low overlap case, we show that a supervised deep learning technique using an autoencoder generator is a good choice for solving the Fourier ptychography problem. For the high overlap case, we show that optimizing the generator for reducing the forward model error is an appropriate choice. Using simulations for the challenging case of uncorrelated phase and amplitude, we show that our method outperforms many of the previously proposed Fourier ptychography phase retrieval techniques.
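
The high-overlap strategy described above, minimizing the forward-model error of a generator network, can be sketched roughly as follows. This is a minimal illustration under assumed sizes and an assumed illumination pattern, not the authors' architecture; the network, the pupil radius, the shift grid, and all variable names are assumptions for illustration.

```python
# Minimal sketch (not the authors' code): fit an autoencoder-style generator so that
# a Fourier-ptychography-like forward model reproduces the captured low-resolution
# intensities. Sizes, pupil radius, and illumination shift grid are assumed.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Encoder-decoder mapping a fixed latent image to a 2-channel (real, imag) object."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),            # encode / downsample
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # decode / upsample
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(z)

def forward_model(obj_complex, pupil_mask, shifts):
    """Simulate the captures: shifted pupil crops of the object spectrum, intensity only."""
    spectrum = torch.fft.fftshift(torch.fft.fft2(obj_complex))
    captures = []
    for dy, dx in shifts:
        sub = torch.roll(spectrum, shifts=(dy, dx), dims=(-2, -1)) * pupil_mask
        low_res = torch.fft.ifft2(torch.fft.ifftshift(sub))
        captures.append(low_res.abs() ** 2)          # the camera records intensity, not phase
    return torch.stack(captures)

size = 64
z = torch.randn(1, 2, size, size)                    # fixed latent input to the generator
yy = (torch.arange(size)[:, None] - size // 2) ** 2
xx = (torch.arange(size)[None, :] - size // 2) ** 2
pupil = ((yy + xx) <= (size // 6) ** 2).float()      # circular pupil with an assumed radius
shifts = [(dy, dx) for dy in (-8, 0, 8) for dx in (-8, 0, 8)]   # assumed illumination angles

# "Measured" data from a hypothetical ground-truth phase object.
gt = torch.exp(1j * 2 * torch.pi * torch.rand(size, size))
measured = forward_model(gt, pupil, shifts)

net = TinyAutoencoder()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    out = net(z)[0]
    obj = out[0] + 1j * out[1]                       # two real channels -> complex field
    loss = ((forward_model(obj, pupil, shifts) - measured) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```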

17 citations

Proceedings ArticleDOI
24 Oct 2011
TL;DR: An approximate algorithm for distance-based outlier detection using the Locality Sensitive Hashing technique is shown to extend effectively to a constant-round protocol with low communication costs in a distributed setting with horizontal partitioning.
Abstract: In this paper, we give an approximate algorithm for distance-based outlier detection using the Locality Sensitive Hashing (LSH) technique. We propose an algorithm for the centralized case, wherein the entire dataset is locally available for processing. However, in the case of very large datasets collected from various input sources, the data is often distributed across the network. Accordingly, we show that our algorithm can be effectively extended to a constant-round protocol with low communication costs in a distributed setting with horizontal partitioning.
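
A rough sketch of the centralized idea, approximate distance-based outliers with Euclidean (p-stable) LSH, is given below. The bucket width, number of tables, and thresholds are assumed parameters, and lsh_outliers is a hypothetical helper written for illustration, not the paper's algorithm.

```python
# Sketch under assumptions: a point is flagged when fewer than `min_neighbors`
# other points fall within radius `r`; candidate neighbors are restricted to
# points sharing at least one LSH bucket, which is where the approximation enters.
import numpy as np
from collections import defaultdict

def lsh_outliers(X, r=1.0, min_neighbors=5, n_tables=8, n_bits=4, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = 4.0 * r                                     # assumed bucket width for p-stable LSH
    tables = []
    for _ in range(n_tables):
        A = rng.normal(size=(d, n_bits))            # random Gaussian projections
        b = rng.uniform(0, w, size=n_bits)
        keys = np.floor((X @ A + b) / w).astype(int)
        buckets = defaultdict(list)
        for i, key in enumerate(map(tuple, keys)):
            buckets[key].append(i)
        tables.append((A, b, buckets))

    outliers = []
    for i in range(n):
        candidates = set()
        for A, b, buckets in tables:
            key = tuple(np.floor((X[i] @ A + b) / w).astype(int))
            candidates.update(buckets[key])
        candidates.discard(i)
        if candidates:
            cand = np.fromiter(candidates, dtype=int)
            dists = np.linalg.norm(X[cand] - X[i], axis=1)
            count = int((dists <= r).sum())
        else:
            count = 0
        if count < min_neighbors:
            outliers.append(i)
    return outliers

# Example: a dense Gaussian cloud plus one far-away point (index 200),
# which should typically be flagged as an outlier.
X = np.vstack([np.random.default_rng(1).normal(size=(200, 3)), [[10.0, 10.0, 10.0]]])
print(lsh_outliers(X, r=1.5, min_neighbors=5))
```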

17 citations

Journal ArticleDOI
01 Oct 2012
TL;DR: Research addressing optimal, secure, and protected multicasting in wired and wireless Hierarchical Sensor Networks (HSN) is presented, and some conditions for satisfying the Kraft inequality are discussed.
Abstract: This paper presents research work addressing optimal secure and protected multicasting in wired and wireless Hierarchical Sensor Networks (HSN). The multicast nodes in a hierarchical set-up are associated with “importance values” that are normalized into probabilities. The security constraint imposed is associated with the concept of “prefix-free paths” in the associated graph. The optimality constraint is to minimize the average path length, based on hop count, from the root node to the multicast nodes. The paper also discusses doubly optimal secure multicasting. A practical Hierarchical Sensor Network is described. Some conditions for satisfying the Kraft inequality are also discussed.
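
The Kraft-inequality check and the importance-weighted average hop count mentioned in the abstract can be illustrated with a tiny worked example. The hop counts and importance values below are hypothetical; this is a sketch of the quantities involved, not the paper's construction.

```python
# Illustrative sketch only: hop counts and importance values are hypothetical.

def kraft_sum(lengths, arity=2):
    """Sum of arity**(-l); a prefix-free assignment of paths requires this to be <= 1."""
    return sum(arity ** (-l) for l in lengths)

def average_path_length(lengths, importance):
    """Importance-weighted average hop count, with importance normalized to probabilities."""
    total = float(sum(importance))
    probs = [w / total for w in importance]
    return sum(p * l for p, l in zip(probs, lengths))

# Hypothetical multicast nodes: hop counts from the root and their importance values.
lengths = [2, 2, 3, 3, 3]
importance = [5, 4, 3, 2, 1]

print("Kraft sum:", kraft_sum(lengths, arity=2))              # 0.875 <= 1, so prefix-free paths exist
print("Average hop count:", average_path_length(lengths, importance))   # 2.4
```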

17 citations

Book ChapterDOI
06 Jun 2011
TL;DR: This paper leverages the Wikipedia knowledge structure (such as cross-lingual links, categories, outlinks, and Infobox information) to enrich the document representation for clustering multilingual documents, and provides a general framework that can easily be extended to other languages.
Abstract: This paper presents Multilingual Document Clustering (MDC) on comparable corpora. Wikipedia has evolved into a major structured multilingual knowledge base. It has been highly exploited in many monolingual clustering approaches and also in comparing multilingual corpora, but there is no prior work that has studied the impact of Wikipedia on MDC. Here, we study how Wikipedia can be used to enhance MDC performance. We have leveraged the Wikipedia knowledge structure (such as cross-lingual links, categories, outlinks, Infobox information, etc.) to enrich the document representation for clustering multilingual documents. We have implemented the Bisecting k-means clustering algorithm, and experiments are conducted on a standard dataset provided by FIRE for their 2010 Ad-hoc Cross-Lingual document retrieval task on Indian languages. We have considered English and Hindi datasets for our experiments. By avoiding language-specific tools, our approach provides a general framework that can easily be extended to other languages. The system was evaluated using F-score and Purity measures, and the results obtained were encouraging.
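
One plausible way to realize the described pipeline, enriching documents with Wikipedia-derived labels and clustering with bisecting k-means, is sketched below. The feature scheme, the toy corpus, and the label names are assumptions for illustration, not the paper's implementation.

```python
# Sketch under assumptions: documents are represented by TF-IDF over their text plus
# Wikipedia-derived labels appended as pseudo-terms (standing in for categories and
# cross-lingual link titles), then clustered by bisecting k-means, i.e. repeatedly
# 2-splitting the largest cluster.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def enrich(doc_text, wiki_labels):
    # Append Wikipedia labels so they share the same TF-IDF space as the text.
    return doc_text + " " + " ".join(wiki_labels)

def bisecting_kmeans(X, k, seed=0):
    clusters = [np.arange(X.shape[0])]
    while len(clusters) < k:
        largest = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(largest)
        labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X[idx])
        clusters.extend([idx[labels == 0], idx[labels == 1]])
    return clusters

# Toy corpus with hypothetical Wikipedia labels.
docs = [
    enrich("cricket world cup final", ["Category:Cricket", "Sport"]),
    enrich("batsman scored a century", ["Category:Cricket", "Sport"]),
    enrich("stock market index fell", ["Category:Economy", "Finance"]),
    enrich("shares and bond prices", ["Category:Economy", "Finance"]),
]
X = TfidfVectorizer().fit_transform(docs).toarray()
for cluster in bisecting_kmeans(X, k=2):
    print([docs[i][:30] for i in cluster])
```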

17 citations

Proceedings ArticleDOI
27 May 2018
TL;DR: This poster attempts to combine Latent Dirichlet Allocation (LDA) and word embeddings to leverage the strengths of both approaches for duplicate bug report detection, validates the hypothesis on a real-world dataset from the Firefox project, and shows that there is potential in combining LDA and word embeddings.
Abstract: Bug reporting is a major part of software maintenance, and due to its inherently asynchronous nature, duplicate bug reporting has become fairly common. Detecting duplicate bug reports is an important task in order to avoid assigning the same bug to different developers. Earlier approaches have improved duplicate bug report detection by using the notions of word embeddings, topic models, and other machine learning approaches. In this poster, we attempt to combine Latent Dirichlet Allocation (LDA) and word embeddings to leverage the strengths of both approaches for this task. As a first step towards this idea, we present an initial analysis and an approach which is able to outperform both word embeddings and LDA for this task. We validate our hypothesis on a real-world dataset from the Firefox project and show that there is potential in combining both LDA and word embeddings for duplicate bug report detection.
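
A minimal sketch of one way to combine the two signals, a weighted mix of LDA topic similarity and averaged word-embedding similarity, is shown below. The random embedding table stands in for pretrained vectors and the mixing weight alpha is an assumption; this is not the poster's exact model.

```python
# Sketch under assumptions: score candidate duplicate pairs by mixing LDA topic
# similarity with averaged word-embedding similarity.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = [
    "browser crashes when opening a new tab",
    "crash on opening new tab in browser",
    "font rendering is blurry on external monitor",
]

# LDA topic distributions over the bug reports.
vec = CountVectorizer()
counts = vec.fit_transform(reports)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)                    # each row sums to 1

# Stand-in word embeddings: random vectors keyed by vocabulary term
# (a real system would load pretrained vectors here).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in vec.get_feature_names_out()}

def doc_vector(text):
    words = [w for w in text.lower().split() if w in emb]
    return np.mean([emb[w] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def duplicate_score(i, j, alpha=0.5):
    topic_sim = cosine(topics[i], topics[j])
    embed_sim = cosine(doc_vector(reports[i]), doc_vector(reports[j]))
    return alpha * topic_sim + (1 - alpha) * embed_sim   # assumed mixing weight

print(duplicate_score(0, 1))   # likely duplicates, expected to score higher
print(duplicate_score(0, 2))   # unrelated reports, expected to score lower
```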

17 citations


Authors

Showing all 2066 results

Name | H-index | Papers | Citations
Ravi Shankar | 66 | 672 | 19326
Joakim Nivre | 61 | 295 | 17203
Aravind K. Joshi | 59 | 249 | 16417
Ashok Kumar Das | 56 | 278 | 9166
Malcolm F. White | 55 | 172 | 10762
B. Yegnanarayana | 54 | 340 | 12861
Ram Bilas Pachori | 48 | 182 | 8140
C. V. Jawahar | 45 | 479 | 9582
Saurabh Garg | 40 | 206 | 6738
Himanshu Thapliyal | 36 | 201 | 3992
Monika Sharma | 36 | 238 | 4412
Ponnurangam Kumaraguru | 33 | 269 | 6849
Abhijit Mitra | 33 | 240 | 7795
Ramanathan Sowdhamini | 33 | 256 | 4458
Helmut Schiessel | 32 | 117 | 3527
Network Information
Related Institutions (5)
Microsoft: 86.9K papers, 4.1M citations (90% related)
Facebook: 10.9K papers, 570.1K citations (89% related)
Google: 39.8K papers, 2.1M citations (89% related)
Carnegie Mellon University: 104.3K papers, 5.9M citations (87% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 10
2022 | 29
2021 | 373
2020 | 440
2019 | 367
2018 | 364