
Showing papers by "Mansi Sharma published in 2020"


Journal ArticleDOI
04 Aug 2020
TL;DR: This study provides a first-hand structural perspective on the PeCHS and PeCHI proteins towards understanding the mechanism of the flavonoid biosynthetic pathway in P. emblica.
Abstract: Chalcone synthase (CHS) and chalcone isomerase (CHI) play major roles in flavonoid biosynthesis in plants. In this study, we performed an extensive bioinformatics analysis to gain functional and structural insight into the PeCHS and PeCHI proteins. The phylogenetic distribution of the proteins encoded by the PeCHS and PeCHI genes demonstrated a close evolutionary relationship with CHS and CHI proteins of other dicot plants. MicroRNA target analysis showed miR169n-3p and miR5053 targeting the PeCHS gene, while miR169c-3p and miR4248 target the PeCHI gene. Three-dimensional structural models of the PeCHS and PeCHI proteins were built by homology modeling, with Ramachandran plots confirming the excellent geometry of the protein structures. Molecular docking revealed that the cinnamoyl-CoA and naringenin chalcone substrates bind strongly to the PeCHS and PeCHI proteins, respectively. Finally, a 30 ns molecular dynamics (MD) simulation further assessed the stability of the ligands in the binding pocket and the behavior of the protein complexes. The MD simulation and interaction-fraction analysis showed stable conformations of the PeCHS and PeCHI proteins with their respective substrates throughout the simulation. Our study provides a first-hand structural perspective on the PeCHS and PeCHI proteins towards understanding the mechanism of the flavonoid biosynthetic pathway in P. emblica.
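The stability check the abstract describes (ligands staying put in the binding pocket over the MD run) is typically assessed by per-frame RMSD of the ligand against a reference pose. A minimal NumPy sketch, using a synthetic trajectory and an illustrative threshold rather than the authors' actual data:

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two (N, 3) coordinate arrays."""
    diff = coords_a - coords_b
    return np.sqrt((diff ** 2).sum() / len(coords_a))

# Hypothetical ligand coordinates over MD frames: shape (frames, atoms, 3).
rng = np.random.default_rng(0)
reference = rng.normal(size=(20, 3))
trajectory = reference + rng.normal(scale=0.1, size=(300, 20, 3))

rmsd_per_frame = np.array([rmsd(frame, reference) for frame in trajectory])

# A ligand that stays bound shows a low, plateauing RMSD across frames.
print(rmsd_per_frame.mean() < 0.5)  # True for this synthetic trajectory
```

In practice the coordinates would come from a trajectory parser after superposing each frame onto the reference; the plateau (rather than the absolute value) is what indicates a stable complex.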

9 citations


Proceedings ArticleDOI
15 Dec 2020
TL;DR: This paper proposes a novel bilateral-grid-based 3D convolutional neural network, dubbed 3DBG-UNet, that parameterizes a high-dimensional feature space by encoding compact 3D bilateral grids with UNets and infers the sharp geometric layout of the scene.
Abstract: The task of predicting smooth and edge-consistent depth maps is notoriously difficult for single image depth estimation. This paper proposes a novel bilateral-grid-based 3D convolutional neural network, dubbed 3DBG-UNet, that parameterizes a high-dimensional feature space by encoding compact 3D bilateral grids with UNets and infers the sharp geometric layout of the scene. Further, another novel model, 3DBGES-UNet, is introduced that integrates 3DBG-UNet to infer an accurate depth map from a single color view. 3DBGES-UNet concatenates the 3DBG-UNet geometry map with an Inception-network edge-accentuation map and a spatial object-boundary map obtained by leveraging semantic segmentation, and trains the UNet model with a ResNet backbone. Both models are designed with particular attention to explicitly accounting for edges and minute details. Preserving sharp discontinuities at depth edges is critical for many applications, such as realistic integration of virtual objects in AR video or occlusion-aware view synthesis for 3D display applications. The proposed depth prediction network achieves state-of-the-art performance in both qualitative and quantitative evaluations on the challenging NYUv2-Depth dataset. The code and corresponding pre-trained weights will be made publicly available.
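The bilateral grid that 3DBG-UNet builds on lifts an image into a 3D (x, y, intensity) space, so operations in that space respect edges. A minimal NumPy sketch of the classic splat/slice idea (not the authors' learned network; grid resolutions are illustrative assumptions):

```python
import numpy as np

def bilateral_grid_blur(img, spatial_cell=4, range_bins=8):
    """Minimal bilateral-grid smoothing: splat pixels of a [0, 1] grayscale
    image into a coarse (x, y, intensity) grid, average per cell, then
    slice each pixel's cell average back out. Averaging in the lifted 3D
    space smooths flat regions while pixels on opposite sides of an edge
    land in separate intensity bins and stay separate."""
    h, w = img.shape
    gy, gx = h // spatial_cell + 1, w // spatial_cell + 1
    grid_sum = np.zeros((gy, gx, range_bins))
    grid_cnt = np.zeros((gy, gx, range_bins))

    ys, xs = np.mgrid[0:h, 0:w]
    iy, ix = ys // spatial_cell, xs // spatial_cell
    iz = np.clip((img * (range_bins - 1)).astype(int), 0, range_bins - 1)

    np.add.at(grid_sum, (iy, ix, iz), img)   # splat values
    np.add.at(grid_cnt, (iy, ix, iz), 1)     # splat counts

    # Slice: nearest-neighbour read-back of each pixel's cell average.
    out = grid_sum[iy, ix, iz] / np.maximum(grid_cnt[iy, ix, iz], 1)
    return out

# A step edge survives: the two sides occupy different intensity bins.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
smoothed = bilateral_grid_blur(img)
print(np.allclose(smoothed, img))  # True: edge preserved exactly here
```

The paper's contribution is learning the grid contents with UNets rather than hand-splatting; this sketch only shows why the grid representation is naturally edge-aware.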

6 citations


Proceedings ArticleDOI
15 Dec 2020
TL;DR: In this paper, an end-to-end convolutional neural network was designed to perform both foveated reconstruction and view synthesis using only 1.2% of the total light field data.
Abstract: Near-eye light field displays provide a solution to visual discomfort when using head mounted displays by presenting accurate depth and focal cues. However, light field HMDs require rendering the scene from a large number of viewpoints. This paper tackles the computational challenge of rendering sharp imagery of the foveal region while reproducing the retinal defocus blur that correctly drives accommodation. We designed a novel end-to-end convolutional neural network that leverages human vision to perform both foveated reconstruction and view synthesis using only 1.2% of the total light field data. The proposed architecture comprises a log-polar sampling scheme followed by an interpolation stage and a convolutional neural network. To the best of our knowledge, this is the first attempt to synthesize the entire light field from sparse RGB-D inputs while simultaneously addressing foveated rendering for computational displays. Our algorithm achieves high fidelity in the fovea without any perceptible artifacts in the peripheral regions. The performance in the fovea is comparable to state-of-the-art view synthesis methods, despite using around 10x less light field data.
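The log-polar sampling stage mentioned above can be sketched as follows: sample locations lie on rings whose radii grow geometrically around the fixation point, so density falls off with eccentricity like retinal acuity does. The ring/angle counts and fixation point here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def log_polar_samples(center, num_rings=8, num_angles=16,
                      r_min=1.0, r_max=64.0):
    """Generate (x, y) sample locations on a log-polar lattice around a
    fixation point: ring radii are geometrically spaced, packing samples
    densely near the fovea and sparsely in the periphery."""
    cx, cy = center
    radii = np.geomspace(r_min, r_max, num_rings)   # geometric ring spacing
    thetas = np.linspace(0, 2 * np.pi, num_angles, endpoint=False)
    r, t = np.meshgrid(radii, thetas, indexing="ij")
    xs = cx + r * np.cos(t)
    ys = cy + r * np.sin(t)
    return np.stack([xs, ys], axis=-1)              # (rings, angles, 2)

samples = log_polar_samples(center=(128, 128))
print(samples.shape)  # (8, 16, 2)

# rings * angles = 128 samples versus 256 * 256 = 65536 pixels: this kind
# of eccentricity-dependent subsampling is where the large data reduction
# reported in the abstract comes from.
```

In a full pipeline these sparse samples would feed the interpolation stage and CNN to reconstruct the dense view.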

2 citations