Author

Manoov Rajapandy

Bio: Manoov Rajapandy is an academic researcher from VIT University. The author has contributed to research in topics: Visibility (geometry) & Luminance. The author has an h-index of 1 and has co-authored 2 publications receiving 4 citations.

Papers
Book ChapterDOI
27 Jun 2019
TL;DR: The paper presents the implementation of a convolutional neural network for detecting and classifying images as fog or non-fog; the brightness, luminance, variance, and intensity of the images are then calculated to find the visibility percentage of the foggy images.
Abstract: The paper aims to find an approach for predicting the visibility percentage of foggy images based on factors such as image brightness, luminance, intensity, and variance. The use of images for predicting weather conditions, and in particular for detecting whether the weather is foggy, has grown. The idea is to use such images first to classify them as foggy or not, and second to estimate visibility from the image of a particular location. The paper therefore presents the implementation of a convolutional neural network for detecting and classifying images into fog and non-fog. After the classification output is obtained, the brightness, luminance, variance, and intensity of the images are calculated to find the visibility percentage of the fog images.
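A minimal sketch of the per-image statistics step described above, assuming standard definitions the chapter summary does not spell out: brightness as the mean grayscale level, luminance as the Rec. 601 weighted RGB mean, intensity as the mean of the three color channels, and variance of the grayscale values. The mapping from these statistics to a visibility percentage is an illustrative assumption, not the paper's formula.

```python
import cv2
import numpy as np

def fog_image_statistics(path):
    bgr = cv2.imread(path)                       # OpenCV loads images as BGR
    if bgr is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    b, g, r = [c.astype(np.float64) for c in cv2.split(bgr)]

    brightness = gray.mean()                                  # mean grayscale level
    luminance = (0.299 * r + 0.587 * g + 0.114 * b).mean()    # Rec. 601 luma
    intensity = (r + g + b).mean() / 3.0                      # mean of the three channels
    variance = gray.var()                                     # spread of grayscale values

    # Illustrative visibility score scaled to [0, 100]; the actual mapping
    # used in the chapter is not specified here.
    visibility_percent = 100.0 * min(luminance, 255.0) / 255.0
    return dict(brightness=brightness, luminance=luminance,
                intensity=intensity, variance=variance,
                visibility_percent=visibility_percent)
```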

6 citations

Journal ArticleDOI
TL;DR: In this paper, the authors propose a computational method for predicting miRNA–disease associations to understand pathogenicity and uncover prognosis markers; it comprises two phases, the first of which uses miRNA cluster information in addition to miRNA functional similarity and disease semantic similarity.
Abstract: Recently, microRNAs (miRNAs) have been shown to play significant roles in the progression of major human diseases. Identifying associations between human diseases and miRNAs using computational tools has attracted considerable attention, since experimental identification and validation of disease–miRNA associations are rare and time consuming. We propose a computational method for predicting associations to understand pathogenicity and uncover prognosis markers. Existing methods employ network-based and machine-learning approaches to predict miRNA–disease associations, but they do not consider the cluster information between miRNAs, and their predictions are not satisfactory due to large numbers of false positives and false negatives. The proposed methodology comprises two phases. First, we use miRNA cluster information in addition to miRNA functional similarity and disease semantic similarity; miRNAs and diseases are placed in the same cluster by transforming the miRNA–disease association matrix into a low-rank model using Principal Component Analysis (PCA). Second, the Adamic/Adar index, which computes the closeness of miRNAs and diseases based on shared neighbors, is applied to improve the prediction results. The problem of overestimation is resolved by incorporating similarity information about miRNAs and diseases. Case studies on Leukemia, Carcinoma, Glioma, Pancreatic Neoplasms, and Melanoma exhibit satisfactory performance, with Area Under Curve (AUC) values ranging from 0.736 to 0.834. miRNAs predicted using the proposed method are ascertained to match known associations found in the HMDD V3.2, dbDEMC V2.0, and PhenomiR V2.0 databases.
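A hedged sketch of the second phase (Adamic/Adar scoring on shared neighbors). The exact graph construction in the paper is not reproduced here; this sketch assumes a heterogeneous graph whose edges are the known miRNA–disease associations plus similarity edges binarized by an arbitrary threshold, and scores unobserved miRNA–disease pairs with networkx's Adamic/Adar index.

```python
import itertools
import networkx as nx
import numpy as np

def score_candidate_pairs(assoc, mirna_sim, disease_sim, sim_threshold=0.5):
    # assoc: binary miRNA x disease association matrix (assumed layout)
    n_mirna, n_disease = assoc.shape
    mirnas = [f"m{i}" for i in range(n_mirna)]      # hypothetical node labels
    diseases = [f"d{j}" for j in range(n_disease)]

    G = nx.Graph()
    G.add_nodes_from(mirnas)
    G.add_nodes_from(diseases)
    # Known association edges
    for i, j in zip(*np.nonzero(assoc)):
        G.add_edge(mirnas[i], diseases[j])
    # Similarity edges (assumption: thresholded functional/semantic similarity)
    for i, k in itertools.combinations(range(n_mirna), 2):
        if mirna_sim[i, k] >= sim_threshold:
            G.add_edge(mirnas[i], mirnas[k])
    for j, l in itertools.combinations(range(n_disease), 2):
        if disease_sim[j, l] >= sim_threshold:
            G.add_edge(diseases[j], diseases[l])

    # Score every unobserved miRNA-disease pair by shared-neighbor closeness
    candidates = [(mirnas[i], diseases[j])
                  for i in range(n_mirna) for j in range(n_disease)
                  if not assoc[i, j]]
    return sorted(nx.adamic_adar_index(G, candidates),
                  key=lambda t: t[2], reverse=True)
```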

1 citation


Cited by
Journal ArticleDOI
16 Oct 2021
TL;DR: A new framework called FusionNet is constructed to combine two well-established single shared Convolutional Neural Networks (CNNs) to accelerate the search and enhance the forensic information included in digital pictures shared on social media.
Abstract: Recently, the use of different social media platforms such as Twitter, Facebook, and WhatsApp has increased significantly. A vast number of static images and motion-frame pictures posted on such platforms get stored in the device folder, making it critical to identify the source social network (SN) of the downloaded images in the Android domain. This is a multimedia forensic job with major cyber security consequences and is said to be accomplished using unique traces contained in the picture material. Therefore, this work constructs a new framework called FusionNet that combines two well-established single shared Convolutional Neural Networks (CNNs) to accelerate the search; FusionNet has also been found to improve classification accuracy. Image searching is a challenging and time-consuming task in the Android domain. The goal of the proposed network's architecture and training is to enhance the forensic information included in the digital pictures shared on social media. Furthermore, several network designs for categorizing WhatsApp pictures have been compared, and the proposed method shows better performance in the comparison. The framework's overall performance was measured using standard performance metrics.
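An illustrative sketch of feature-level fusion of two CNN branches, in the spirit of the FusionNet idea described above. The actual backbones, layer sizes, fusion strategy, and class count in the paper are not reproduced; the two small convolutional branches and the four-class output below are assumptions.

```python
import torch
import torch.nn as nn

class FusionNetSketch(nn.Module):
    def __init__(self, num_classes=4):        # e.g. one class per source social network (assumed)
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.branch_a = branch()               # first CNN feature extractor
        self.branch_b = branch()               # second CNN feature extractor
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, x):
        # Both branches see the same image; their features are concatenated
        fused = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.classifier(fused)

# Usage: logits = FusionNetSketch()(torch.randn(1, 3, 224, 224))
```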

10 citations

Journal ArticleDOI
28 Aug 2021
TL;DR: This research article focuses on the prediction of transmission maps in the image defogging process through the combination of dark channel prior (DCP), transmission-map refinement, and atmospheric light estimation, and shows that the proposed deep learning model achieves superior performance compared to traditional algorithms.
Abstract: Due to unfavorable weather circumstances, images captured from multiple sensors have limited contrast and visibility. Many applications, such as web camera surveillance in public locations, rely on such images to categorize objects and capture a vehicle's licence plate in order to detect reckless driving. Traditional methods can improve image quality by incorporating luminance, minimizing distortion, and removing unwanted visual effects from the given images. Dehazing is a vital step in the image defogging process of many real-time applications. This research article focuses on the prediction of transmission maps in the image defogging process through the combination of dark channel prior (DCP), a transmission map with refinement, and atmospheric light estimation. The framework succeeds in the prior segmentation process, yielding a better visualization. The prediction of transmission maps is further improved statistically to obtain higher accuracy for the proposed model, which is achieved by incorporating the framework with an atmospheric light estimation algorithm. Finally, the experimental results show that the proposed deep learning model achieves superior performance compared to other traditional algorithms.
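A compact sketch of the classical dark channel prior pipeline the article builds on: dark channel, atmospheric light estimation, and a coarse transmission map. The learning-based refinement described in the article is not reproduced here; the patch size and omega are conventional defaults, not values taken from the paper.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over the color channels, then a minimum filter over a local patch
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def atmospheric_light(img, dark, top_fraction=0.001):
    # Average the brightest pixels of the dark channel to estimate A
    flat_dark = dark.ravel()
    n_top = max(1, int(flat_dark.size * top_fraction))
    idx = np.argpartition(flat_dark, -n_top)[-n_top:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def transmission_map(img, A, omega=0.95, patch=15):
    # t(x) = 1 - omega * dark_channel(I / A): heavier haze gives lower transmission
    normalised = img / np.maximum(A, 1e-6)
    return 1.0 - omega * dark_channel(normalised, patch)

# Usage (assuming a float image scaled to [0, 1]):
# img = cv2.imread("hazy.jpg").astype(np.float64) / 255.0
# t = transmission_map(img, atmospheric_light(img, dark_channel(img)))
```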
Journal ArticleDOI
TL;DR: Zhang et al., as discussed by the authors, proposed an image classification framework based on KNN using nine unique features, which accurately classifies hazy and clear images with an accuracy of 92% and a precision of 0.90 on a benchmark dataset; the result has both theoretical and practical implications.
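A minimal sketch of a KNN hazy-versus-clear classifier on hand-crafted image features, as the TL;DR above describes. The nine specific features used in the paper are not listed here, so the sketch stands in a generic nine-dimensional feature vector and default scikit-learn settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

def train_haze_classifier(features, labels, k=5):
    # features: (n_images, 9) array of per-image descriptors (assumed)
    # labels:   1 for hazy, 0 for clear
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2,
                                              random_state=0, stratify=labels)
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    pred = knn.predict(X_te)
    return knn, accuracy_score(y_te, pred), precision_score(y_te, pred)
```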
Book ChapterDOI
29 Jan 2020
TL;DR: This novel approach is a unique comparative algorithm integrated with Google Maps and has several embedded functionalities to reduce noise caused by fog; it shows a drastic increase in variance, in line with the theory that the higher the variance, the lower the fogginess.
Abstract: This paper aims to tackle the problem of impaired visibility for drivers on the road due to fog, which is a safety concern. This novel approach is a unique comparative algorithm integrated with Google Maps and has several embedded functionalities to reduce noise caused by fog. Real-time input is collected in the form of continuous video frames, on which image processing is carried out. This is a two-step process: first using dark channel prior, and second using histogram matching with ideal-weather Google Street View images. In order to measure the fogginess of the image at each step, horizontal variance is used. The results obtained show a drastic increase in variance during the two-step process, in line with the theory that the higher the variance, the lower the fogginess. The fog-free images are retrieved and put together to form continuous frames of a video, which is displayed on the driver's screen in real time.
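A hedged sketch of the fogginess check used between the two steps. "Horizontal variance" is interpreted here as the mean of the per-row grayscale variances, and the histogram matching uses scikit-image against a clear-weather reference frame; both are assumptions about details the chapter summary leaves open.

```python
import cv2
import numpy as np
from skimage.exposure import match_histograms

def horizontal_variance(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    return gray.var(axis=1).mean()     # variance along each row, then averaged

def match_to_reference(foggy_bgr, clear_reference_bgr):
    # Pull the foggy frame's intensity distribution toward the clear
    # Street View reference; higher variance afterwards indicates less fog.
    return match_histograms(foggy_bgr, clear_reference_bgr, channel_axis=-1)

# Usage:
# before = horizontal_variance(frame)
# restored = match_to_reference(frame, street_view_frame)
# after = horizontal_variance(restored.astype(np.uint8))
```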