Institution

International Institute of Information Technology, Hyderabad

Education · Hyderabad, India
About: International Institute of Information Technology, Hyderabad is an education organization based in Hyderabad, India. It is known for its research contributions in the topics: Computer science & Authentication. The organization has 2048 authors who have published 3677 publications receiving 45319 citations. The organization is also known as: IIIT Hyderabad & International Institute of Information Technology (IIIT).


Papers
Proceedings Article
01 Sep 2015
TL;DR: This paper explores the task of classifying the attributes present in a natural language query into different SQL clauses in a SQL query and investigates the effectiveness of various features and Conditional Random Fields for this task.
Abstract: Attribute information in a natural language query is one of the key features for converting a natural language query into a Structured Query Language (SQL) query in Natural Language Interface to Database systems. In this paper, we explore the task of classifying the attributes present in a natural language query into the different clauses of an SQL query. In particular, we investigate the effectiveness of various features and Conditional Random Fields for this task. Our system uses a statistical classifier trained on manually prepared data. We report our results on three different domains and also show how our system can be used for generating a complete SQL query.

15 citations
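The feature-based clause classification described in this abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' CRF system: the cue lexicons and feature names are assumptions, and a real system would feed such features into a trained Conditional Random Field.

```python
# Token-level features of the kind a CRF tagger might use to assign
# query attributes to SQL clauses (SELECT vs WHERE).
# The tiny keyword lexicons below are illustrative assumptions.
SELECT_CUES = {"show", "list", "display", "name", "names"}
WHERE_CUES = {"with", "whose", "where", "having", "than"}

def token_features(tokens, i):
    """Build a feature dict for token i of a natural-language query."""
    tok = tokens[i].lower()
    return {
        "word": tok,
        "is_select_cue": tok in SELECT_CUES,
        "is_where_cue": tok in WHERE_CUES,
        "prev_word": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

query = "show employees with salary greater than 50000".split()
features = [token_features(query, i) for i in range(len(query))]
```

A statistical sequence labeler would consume one such feature dict per token and emit a clause label per attribute.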

Journal ArticleDOI
TL;DR: In this article, the authors performed Weighted Gene Co-expression Network Analysis (WGCNA) to understand how the VAT metabolism is altered at the genome scale and co-regulated with other cellular processes during the progression from obesity to NASH with fibrosis.
Abstract: Non-Alcoholic Fatty Liver Disease (NAFLD) is a complex spectrum of diseases ranging from simple steatosis to Non-Alcoholic Steatohepatitis (NASH) with fibrosis, which can progress to cirrhosis and hepatocellular carcinoma. The pathogenesis of NAFLD is complex, involving crosstalk between multiple organs, cell-types, and environmental and genetic factors. Dysfunction of the adipose tissue plays a central role in NAFLD progression. Here, we analysed transcriptomics data obtained from the Visceral Adipose Tissue (VAT) of NAFLD patients to understand how the VAT metabolism is altered at the genome scale and co-regulated with other cellular processes during the progression from obesity to NASH with fibrosis. For this purpose, we performed Weighted Gene Co-expression Network Analysis (WGCNA), a method that organizes the disease transcriptome into functional modules of cellular processes and pathways. Our analysis revealed the coordination of metabolic and inflammatory modules (termed "immunometabolism") in the VAT of NAFLD patients. We found that genes of arachidonic acid, sphingolipid and glycosphingolipid metabolism were upregulated and co-expressed with genes of proinflammatory signalling pathways and hypoxia in NASH/NASH with fibrosis. We hypothesize that these metabolic alterations might play a role in sustaining VAT inflammation. Furthermore, immunometabolism related genes were also co-expressed with genes involved in Extracellular Matrix (ECM) degradation. Our analysis indicates that upregulation of both ECM degrading enzymes and their inhibitors (incoherent feedforward loop) potentially leads to the ECM deposition in the VAT of NASH with fibrosis patients.

15 citations
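The co-expression idea behind WGCNA can be illustrated with a minimal soft-thresholding sketch in pure Python. The gene names and expression values below are toy assumptions; real WGCNA additionally computes topological overlap and detects modules by hierarchical clustering.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def soft_threshold_adjacency(expr, beta=6):
    """WGCNA-style adjacency: a_ij = |cor(g_i, g_j)| ** beta.
    Raising to the power beta suppresses weak correlations."""
    genes = list(expr)
    return {
        (gi, gj): abs(pearson(expr[gi], expr[gj])) ** beta
        for gi in genes for gj in genes if gi != gj
    }

# Toy expression profiles across four samples (values are illustrative).
expr = {
    "MMP9":  [1.0, 2.1, 3.0, 4.2],   # ECM-degrading enzyme
    "TIMP1": [1.1, 2.0, 3.1, 4.0],   # its inhibitor, co-expressed with MMP9
    "ALB":   [4.0, 1.0, 3.5, 0.5],   # unrelated gene
}
adj = soft_threshold_adjacency(expr)
```

In this toy network the co-expressed enzyme/inhibitor pair gets a much stronger edge than the unrelated gene, which is the pattern the abstract describes for ECM-degrading enzymes and their inhibitors.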

Journal ArticleDOI
TL;DR: An efficient elliptic curve cryptography (ECC)-based provably secure three-factor authentication and session key agreement scheme for SIP, which uses the identity, password, and personal biometrics of a user as three factors to resolve the security weaknesses and drawbacks in existing SIP authentication protocols.
Abstract: Session initiation protocol (SIP) is a widely used authentication protocol for the Voice over IP communications. Over the years, several protocols have been proposed in the literature to strengthen the security of SIP. In this paper, we present an efficient elliptic curve cryptography (ECC)-based provably secure three-factor authentication and session key agreement scheme for SIP, which uses the identity, password, and personal biometrics of a user as three factors. Our scheme aims to resolve the security weaknesses and drawbacks in existing SIP authentication protocols. In addition, our scheme supports password and biometric update phase without involving the server and the user mobile device revocation phase in case the mobile device is lost/stolen. Formal security analysis under the standard model and the broadly accepted Burrows–Abadi–Needham logic ensures that the proposed scheme can withstand several known security attacks. The proposed scheme has also been analyzed informally. Simulation for formal security verification using the widely known automated validation of internet security protocols and applications tool shows the replay, and the man-in-the-middle attacks are protected by the scheme. High security and low communication and computation costs make the proposed scheme more suitable for practical application as compared with other existing related ECC-based schemes. Copyright © 2016 John Wiley & Sons, Ltd.

15 citations
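A highly simplified sketch of combining three authentication factors into a stored verifier is shown below. It uses plain salted hashing rather than the paper's ECC and biometric-processing machinery, assumes a stable biometric template, and all names are illustrative; it only shows why the server never needs to store the raw factors.

```python
import hashlib
import hmac
import os

def _h(*parts: bytes) -> bytes:
    """Domain-separated SHA-256 over the concatenated parts."""
    return hashlib.sha256(b"|".join(parts)).digest()

def register(identity: str, password: str, biometric: bytes) -> dict:
    """Server stores a salted digest of all three factors, never the factors."""
    salt = os.urandom(16)
    verifier = _h(salt, identity.encode(), password.encode(), biometric)
    return {"salt": salt, "verifier": verifier}

def login(record: dict, identity: str, password: str, biometric: bytes) -> bool:
    """All three factors must match; comparison is constant-time."""
    candidate = _h(record["salt"], identity.encode(), password.encode(), biometric)
    return hmac.compare_digest(candidate, record["verifier"])

bio = b"\x01\x02template"            # assumed stable biometric template
rec = register("alice", "s3cret", bio)
```

A real scheme like the one in the paper additionally derives a fresh session key per login over elliptic-curve operations and tolerates noisy biometrics; none of that is modeled here.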

Posted ContentDOI
27 Jul 2021 · ChemRxiv
TL;DR: This study proposes a computational strategy for de novo generation of molecules with high binding affinities to the specified target and devised a novel strategy in which the property being used to calculate the reward is changed periodically.
Abstract: Design of new inhibitors for novel targets is a very important problem, especially in the current scenario with the world being plagued by COVID-19. Conventional approaches to this end, such as high-throughput virtual screening, require extensive combing through existing datasets in the hope of finding possible matches. In this study we propose a computational strategy for de novo generation of molecules with high binding affinities to a specified target. A deep generative model is built using a stack-augmented recurrent neural network to initially generate drug-like molecules, and it is then optimized using reinforcement learning to generate molecules with desirable properties, primarily the binding affinity. The reinforcement learning section of the pipeline is further extended to multi-objective optimization, showcasing the model's ability to generate molecules with a wide variety of properties desirable in drug-like molecules, such as LogP and the Quantitative Estimate of Drug-likeness. For multi-objective optimization, we have devised a novel strategy in which the property used to calculate the reward is changed periodically. In comparison to the conventional approach of taking a weighted sum of all rewards, this strategy shows an enhanced ability to generate a significantly higher number of molecules with desirable properties.

15 citations
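The periodic reward-switching strategy can be illustrated with a toy hill-climbing loop over a scalar stand-in for a molecule. The two objective functions and their optima are assumptions for illustration only, not the paper's reward models or its RNN generator.

```python
import random

random.seed(0)

# Toy stand-ins for molecular properties (illustrative assumptions).
def binding_affinity(x):
    return -(x - 7.0) ** 2        # best at x = 7

def logp_penalty(x):
    return -(x - 3.0) ** 2        # best at x = 3

OBJECTIVES = [binding_affinity, logp_penalty]

def periodic_reward_search(steps=2000, switch_every=50):
    """Hill-climb a scalar 'molecule', switching which objective
    computes the reward every `switch_every` steps, instead of
    optimizing a fixed weighted sum of all objectives."""
    x = 0.0
    for t in range(steps):
        reward = OBJECTIVES[(t // switch_every) % len(OBJECTIVES)]
        candidate = x + random.uniform(-0.5, 0.5)
        if reward(candidate) > reward(x):
            x = candidate
    return x

x_final = periodic_reward_search()
```

Because the active objective alternates, the search oscillates between the two optima rather than settling on a compromise dictated by fixed weights, which is the intuition behind switching rewards periodically.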

Proceedings ArticleDOI
11 Jun 2008
TL;DR: This paper uses well studied data compression algorithms which optimize on bringing down the data redundancy which is related to correlated sensor readings and using a probability model to efficiently compress data at the cluster heads and compares the current global reliability index based on all the PMax of cluster heads.
Abstract: With the availability of low-cost sensor nodes, many standards have been developed to integrate and network these nodes into a reliable network in which hardware from many different vendors can coexist. Most of these solutions, however, have aimed at industry-specific interoperability rather than at the size of the sensor network and the large amount of data collected over its lifetime. In this paper we use well-studied data compression algorithms, which reduce the redundancy arising from correlated sensor readings, together with a probability model to efficiently compress data at the cluster heads. In sensor networks, data reliability degrades as network resources deplete, and the lack of central synchronization makes comparing readings at a central coordinator a global problem. Calibrating each sensor and using an adaptive measured threshold to correct its readings is a severe drain on network resources and energy. We therefore delegate comparative global analysis to a central coordinator and use a reference PMax, a normalized per-source probability computed at the cluster heads that reflects the current lifetime reliability of the sensors, which is then compared with a current global reliability index derived from the PMax values of all cluster heads. Because this scheme needs no synchronization at the local nodes, data is compressed once and stamped locally, without application-specific calibration thresholds (e.g. 0-42°F), and the summarization is application-independent, making it a sensor-network reliability index that can be used independently of the actual measured values.

15 citations
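The cluster-head compression and PMax comparison can be sketched as follows. zlib stands in for the paper's probability-model compressor, and the PMax definition here (delivery success rate) is an illustrative assumption rather than the paper's formula.

```python
import struct
import zlib

def compress_readings(readings):
    """Cluster head compresses a batch of correlated sensor readings.
    zlib is a stand-in for the paper's probability-model compressor."""
    raw = struct.pack(f"{len(readings)}f", *readings)
    return zlib.compress(raw)

def pmax(success, total):
    """Normalized per-source reliability: fraction of successfully
    delivered readings (an assumed definition, for illustration)."""
    return success / total if total else 0.0

# Each cluster head reports its own PMax to the central coordinator...
cluster_pmax = {
    "head_A": pmax(95, 100),
    "head_B": pmax(60, 100),
    "head_C": pmax(88, 100),
}
# ...which compares them against a global reliability index.
global_index = sum(cluster_pmax.values()) / len(cluster_pmax)
degraded = [h for h, p in cluster_pmax.items() if p < global_index]
```

Highly correlated readings (e.g. a near-constant temperature) compress far below their raw size, and clusters whose PMax falls below the global index can be flagged without any synchronization among the local nodes.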


Authors

Showing all 2066 results

Name | H-index | Papers | Citations
Ravi Shankar | 66 | 672 | 19326
Joakim Nivre | 61 | 295 | 17203
Aravind K. Joshi | 59 | 249 | 16417
Ashok Kumar Das | 56 | 278 | 9166
Malcolm F. White | 55 | 172 | 10762
B. Yegnanarayana | 54 | 340 | 12861
Ram Bilas Pachori | 48 | 182 | 8140
C. V. Jawahar | 45 | 479 | 9582
Saurabh Garg | 40 | 206 | 6738
Himanshu Thapliyal | 36 | 201 | 3992
Monika Sharma | 36 | 238 | 4412
Ponnurangam Kumaraguru | 33 | 269 | 6849
Abhijit Mitra | 33 | 240 | 7795
Ramanathan Sowdhamini | 33 | 256 | 4458
Helmut Schiessel | 32 | 117 | 3527
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

90% related

Facebook
10.9K papers, 570.1K citations

89% related

Google
39.8K papers, 2.1M citations

89% related

Carnegie Mellon University
104.3K papers, 5.9M citations

87% related

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 10
2022 | 29
2021 | 373
2020 | 440
2019 | 367
2018 | 364