Institution

International Institute of Information Technology, Hyderabad

Education · Hyderabad, India
About: International Institute of Information Technology, Hyderabad is an education organization based in Hyderabad, India. It is known for research contributions in the topics of Authentication & Internet security. The organization has 2048 authors who have published 3677 publications receiving 45319 citations. The organization is also known as IIIT Hyderabad and the International Institute of Information Technology (IIIT).


Papers
Proceedings Article
01 Jan 2018
TL;DR: A deep learning based solution for textured 3D reconstruction of human body shapes from a single view RGB image: the volumetric grid of the non-rigid human body is first recovered from the single view RGB image, followed by orthographic texture view synthesis using the respective depth projection of the reconstructed shape and the input RGB image.
Abstract: Recovering textured 3D models of non-rigid human body shapes is challenging due to self-occlusions caused by complex body poses and shapes, clothing obstructions, lack of surface texture, background clutter, sparse sets of cameras with non-overlapping fields of view, etc. Further, a calibration-free environment adds additional complexity to both reconstruction and texture recovery. In this paper, we propose a deep learning based solution for textured 3D reconstruction of human body shapes from a single view RGB image. This is achieved by first recovering the volumetric grid of the non-rigid human body given a single view RGB image, followed by orthographic texture view synthesis using the respective depth projection of the reconstructed (volumetric) shape and the input RGB image. We propose to co-learn the depth information readily available with affordable RGBD sensors (e.g., Kinect) while showing multiple views of the same object during the training phase. We show superior reconstruction performance in terms of quantitative and qualitative results on both publicly available datasets (by simulating the depth channel with a virtual Kinect) and real RGBD data collected with our calibrated multi-Kinect setup.
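The depth-projection step the abstract mentions can be illustrated with a toy orthographic projection of a voxel occupancy grid. This is a minimal sketch, not the authors' code; the function name and grid layout are illustrative assumptions.

```python
import numpy as np

def orthographic_depth_projection(voxels: np.ndarray) -> np.ndarray:
    """Project an occupancy grid of shape (X, Y, Z) to a front-view depth map.

    For each (x, y) ray we record the index of the first occupied voxel
    along z; rays that hit nothing get -1. This mirrors the kind of depth
    projection of a reconstructed volumetric shape described above.
    """
    occupied = voxels > 0
    has_hit = occupied.any(axis=2)
    # argmax returns the index of the first True along z for occupied rays
    first_hit = occupied.argmax(axis=2)
    return np.where(has_hit, first_hit, -1)

# Toy 4x4x4 grid with a small occupied slab at depth z = 2
grid = np.zeros((4, 4, 4))
grid[1:3, 1:3, 2] = 1.0
depth = orthographic_depth_projection(grid)
print(depth)
```

In the real pipeline the voxel grid would come from the reconstruction network and the resulting depth map would guide texture view synthesis.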

14 citations

Proceedings Article
01 Dec 2016
TL;DR: A novel approach for AQP, the "Deep Feature Fusion Network (DFFN)", which combines the advantages of both hand-crafted features and deep learning based systems and outperforms baseline approaches that employ either HCF or DL techniques alone.
Abstract: Community Question Answering (cQA) forums have become a popular medium for soliciting direct answers to specific questions of users from experts or other experienced users on a given topic. However, for a given question, users sometimes have to sift through a large number of low-quality or irrelevant answers to find the answer which satisfies their information need. To alleviate this, the problem of Answer Quality Prediction (AQP) aims to predict the quality of an answer posted in response to a forum question. Current AQP systems learn models using either a) various hand-crafted features (HCF) or b) Deep Learning (DL) techniques which automatically learn the required feature representations. In this paper, we propose a novel approach for AQP, the "Deep Feature Fusion Network (DFFN)", which combines the advantages of both hand-crafted features and deep learning based systems. Given a question-answer pair along with its metadata, the DFFN architecture independently a) learns features from the Deep Neural Network (DNN) and b) computes hand-crafted features using various external resources, and then combines them using a fully connected neural network trained to predict the final answer quality. DFFN is end-to-end differentiable and trained as a single system. We propose two different DFFN architectures which vary mainly in the way they model the input question/answer pair: DFFN-CNN uses a Convolutional Neural Network (CNN) and DFFN-BLNA uses a Bi-directional LSTM with Neural Attention (BLNA). Both these proposed variants of DFFN (DFFN-CNN and DFFN-BLNA) achieve state-of-the-art performance on the standard SemEval-2015 and SemEval-2016 benchmark datasets and outperform baseline approaches which individually employ either HCF or DL based techniques alone.
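The fusion idea, concatenating learned features with hand-crafted ones before a fully connected scoring layer, can be sketched as follows. This is a minimal NumPy sketch under assumed dimensions; the paper's actual DFFN uses a trained CNN or BLNA encoder and a trained fully connected network, none of which appear here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_score(deep_feat, handcrafted_feat, w, b=0.0):
    """Concatenate DNN-derived features with hand-crafted features and
    score answer quality with a single fully connected layer followed
    by a sigmoid. Illustrative stand-in for the DFFN fusion step."""
    fused = np.concatenate([deep_feat, handcrafted_feat])
    logit = fused @ w + b
    return 1.0 / (1.0 + np.exp(-logit))  # quality score in (0, 1)

deep = rng.normal(size=8)   # stand-in for the CNN/BLNA encoder output
hcf = rng.normal(size=4)    # stand-in for hand-crafted feature values
w = rng.normal(size=12)     # weights of the fusion layer (untrained)
score = fuse_and_score(deep, hcf, w)
print(score)
```

In the real system both the encoder and the fusion layer are trained jointly, which is what makes the architecture end-to-end differentiable.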

14 citations

Proceedings ArticleDOI
04 Jul 2015
TL;DR: Establishes the matroid structures corresponding to data-local and local maximally recoverable codes (MRC) and, using Greene's result that the weight enumerator of a code can be determined from its associated Tutte polynomial, derives explicit expressions for their weight enumerators.
Abstract: A code is said to be data-local maximally recoverable if (i) all the information symbols have locality and (ii) any erasure pattern which can be potentially recovered (i.e., the number of equations is equal to the number of unknowns) is recovered by the code. A code is said to be local maximally recoverable if (i) all the symbols of the code have locality and (ii) above holds. In this paper, we establish the matroid structures corresponding to data-local and local maximally recoverable codes (MRC). The matroid structures of these codes can be used to determine the associated Tutte polynomial. Greene proved that the weight enumerator of any code can be determined from its associated Tutte polynomial. We use this result to derive explicit expressions for the weight enumerators of data-local MRC. Also, Britz proved that the higher support weights of any code can be determined from its associated Tutte polynomial. We use this result to derive expressions for the higher support weights of data-local and local MRC with two local codes.
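The weight enumerator that Greene's theorem recovers from the Tutte polynomial counts codewords by Hamming weight. For a tiny binary code it can be computed directly by brute force, which is a useful sanity check on any closed-form expression; this toy generator matrix is an illustrative assumption, not a code from the paper.

```python
from itertools import product

def weight_enumerator(generator_rows, n):
    """Coefficients A_w (number of codewords of weight w) of the binary
    linear code spanned by generator_rows, by brute force over all 2^k
    messages. Illustrates the object derived via the Tutte polynomial."""
    k = len(generator_rows)
    counts = [0] * (n + 1)
    for msg in product([0, 1], repeat=k):
        codeword = [0] * n
        for bit, row in zip(msg, generator_rows):
            if bit:
                codeword = [c ^ r for c, r in zip(codeword, row)]
        counts[sum(codeword)] += 1
    return counts

# Toy [4, 2] binary code; codewords: 0000, 1011, 0101, 1110
G = [[1, 0, 1, 1], [0, 1, 0, 1]]
print(weight_enumerator(G, 4))
```

Brute force is exponential in the dimension k, which is exactly why closed-form expressions via the Tutte polynomial, as derived in the paper, are valuable.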

14 citations

Book ChapterDOI
15 Feb 2012
TL;DR: The first sub-logarithmic result for the reporting version and the first work for the counting version of the problem of reporting and counting maximal points in a query rectangle for a set of n integer points that lie on an n×n grid is presented.
Abstract: In this work, we study the problem of reporting and counting maximal points in a query rectangle for a set of n integer points that lie on an n×n grid. A point is said to be maximal inside a query rectangle if it is not dominated by any other point inside the query rectangle. Our model of computation is the unit-cost RAM model with word size of O(log n) bits. For the reporting version of the problem, we present a data structure of size $O(n\frac{\log n}{\log\log n})$ words that supports querying in $O(\frac{\log n}{\log\log n}+k)$ time, where k is the size of the output. For the counting version, we present a data structure of size $O(n\frac{\log^{2} n}{\log\log n})$ words that supports querying in $O(\frac{\log^{3/2} n}{\log\log n})$ time. Both data structures are static. The reporting version of the problem has been studied in [1] and [5]. To the best of our knowledge, this is the first sub-logarithmic result for the reporting version and the first work on the counting version of the problem.
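The query being answered can be made concrete with a quadratic brute-force reference implementation. This sketch assumes the common convention that q dominates p when q is strictly greater in both coordinates; the paper's contribution is a data structure answering the same query in sub-logarithmic time per reported point, which this naive version does not attempt.

```python
def maximal_points_in_rect(points, x1, y1, x2, y2):
    """Report the maximal points among those inside the axis-aligned
    query rectangle [x1, x2] x [y1, y2]. A point is maximal if no other
    point in the rectangle is strictly greater in both coordinates.
    O(m^2) brute force over the m points inside the rectangle."""
    inside = [(x, y) for (x, y) in points
              if x1 <= x <= x2 and y1 <= y <= y2]
    return [p for p in inside
            if not any(q[0] > p[0] and q[1] > p[1]
                       for q in inside if q != p)]

pts = [(1, 1), (2, 3), (3, 2), (4, 4), (0, 5)]
# Query the rectangle [0, 3] x [0, 3]: (1, 1) is dominated by (2, 3)
print(maximal_points_in_rect(pts, 0, 0, 3, 3))
```

Such a brute-force routine is handy as an oracle when testing an efficient range-maxima data structure on small random inputs.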

14 citations

Posted Content
TL;DR: A simple, yet effective, Neural Machine Translation system for Indian languages is presented, demonstrating the feasibility for multiple language pairs, and establishing a strong baseline for further research.
Abstract: We present a simple, yet effective, Neural Machine Translation system for Indian languages. We demonstrate the feasibility for multiple language pairs, and establish a strong baseline for further research.

14 citations


Authors


Name | H-index | Papers | Citations
Ravi Shankar | 66 | 672 | 19326
Joakim Nivre | 61 | 295 | 17203
Aravind K. Joshi | 59 | 249 | 16417
Ashok Kumar Das | 56 | 278 | 9166
Malcolm F. White | 55 | 172 | 10762
B. Yegnanarayana | 54 | 340 | 12861
Ram Bilas Pachori | 48 | 182 | 8140
C. V. Jawahar | 45 | 479 | 9582
Saurabh Garg | 40 | 206 | 6738
Himanshu Thapliyal | 36 | 201 | 3992
Monika Sharma | 36 | 238 | 4412
Ponnurangam Kumaraguru | 33 | 269 | 6849
Abhijit Mitra | 33 | 240 | 7795
Ramanathan Sowdhamini | 33 | 256 | 4458
Helmut Schiessel | 32 | 117 | 3527

Network Information
Network Information
Related Institutions (5)
Microsoft: 86.9K papers, 4.1M citations (90% related)
Facebook: 10.9K papers, 570.1K citations (89% related)
Google: 39.8K papers, 2.1M citations (89% related)
Carnegie Mellon University: 104.3K papers, 5.9M citations (87% related)

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 10
2022 | 29
2021 | 373
2020 | 440
2019 | 367
2018 | 364