scispace - formally typeset
Author

Anindita Das

Bio: Anindita Das is an academic researcher from Indian Institutes of Information Technology. The author has contributed to research in topics: Visibility & Depth map. The author has co-authored 2 publications.

Papers
Book Chapter DOI
16 Jun 2021
TL;DR: In this article, a novel approach to rendering foggy and rainy datasets is proposed, in which rain is generated by estimating the area of the scene image, computing the streak volume, and finally overlapping the streaks with the scene image.
Abstract: Most object detection schemes do not perform well when the input image is captured in adverse weather, because the datasets available for training/testing those schemes contain few images taken in such weather conditions. Thus, in this work, a novel approach to rendering foggy and rainy datasets is proposed. Rain is generated by estimating the area of the scene image, computing the streak volume, and finally overlapping the streaks with the scene image. Since visibility reduces with depth under fog, fog rendering must take the depth map into consideration. In the proposed scheme, the depth map is generated from a single image. The fog coefficient is then generated by modifying 3D Perlin noise with respect to the depth map. Finally, the corresponding fog density is blended with the scene image at each region based on precomputed intensities at that region. A demo dataset is available at https://github.com/senprithwish1994/DatasetAdverse.
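The depth-aware fog blending described in the abstract can be sketched as follows. The paper's exact formulas are not given here, so this sketch assumes the standard atmospheric-scattering (Koschmieder) model as a stand-in; the `render_fog` helper and its parameters are hypothetical, and the `noise` argument stands in for the depth-modified Perlin noise field.

```python
import numpy as np

def render_fog(scene, depth, noise, beta=1.5, airlight=0.9):
    """Blend synthetic fog into a scene image using a depth map.

    scene : HxWx3 float array in [0, 1]
    depth : HxW float array in [0, 1] (1 = far)
    noise : HxW float array in [0, 1], e.g. Perlin noise, modulating density
    """
    # Local fog density: base coefficient perturbed by the noise field.
    density = beta * (0.5 + noise)
    # Transmission falls off exponentially with depth (Koschmieder model).
    t = np.exp(-density * depth)[..., None]
    # Blend the scene with the airlight according to transmission:
    # near pixels (t ~ 1) keep the scene, far pixels fade toward airlight.
    return scene * t + airlight * (1.0 - t)
```

Note that with a zero depth map the output equals the input scene, and as depth grows every pixel converges to the airlight colour, which mirrors the visibility-vs-depth behaviour the abstract describes.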

2 citations

Book Chapter DOI
28 Sep 2021
TL;DR: In this article, a new style transfer scheme is proposed that uses a single-image super-resolution (SISR) network to increase the resolution of both the target image and the style image, and performs the transformation using the pre-trained VGG19 model.
Abstract: Style transfer aims to recreate a given image (the target image) in the style of another image (the style image). In this work, a new style transfer scheme is proposed that uses a single-image super-resolution (SISR) network to increase the resolution of both the target image and the style image, and performs the transformation using the pre-trained VGG19 model. A combination of perceptual loss and total variation loss is used, which yields more photo-realistic output. As the content weight changes, the output image contains different semantic information and the precise structure of the target image, producing visually distinguishable results. The user can accordingly shift the generated outputs from artistic style to photo-realistic style by changing the weights. Detailed experimentation is carried out with different target/style image pairs, and the subjective quality of the stylised images is measured. Experimental results show that the quality of the generated image is better than that of existing state-of-the-art schemes. The proposed scheme preserves more information from the target image and creates less distortion for all combinations of different types of images. For a more effective comparison, the contours of the stylised images are extracted and their similarity is measured. This experiment shows that the output images have contours closer to the target images, and the measured similarity is highest, indicating more preservation of semantic information than in other existing schemes.
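The loss combination mentioned in the abstract can be illustrated with a minimal sketch. The function names and weight values here are illustrative assumptions, not the paper's implementation; in practice the feature maps would come from intermediate layers of the pre-trained VGG19 network.

```python
import numpy as np

def total_variation_loss(img):
    # Sum of absolute differences between neighbouring pixels:
    # a smoothness prior that suppresses high-frequency artefacts.
    dh = np.abs(img[1:, :] - img[:-1, :]).sum()
    dw = np.abs(img[:, 1:] - img[:, :-1]).sum()
    return dh + dw

def perceptual_loss(feat_out, feat_target):
    # Mean squared error between feature maps of a pretrained
    # network (e.g. VGG19), rather than between raw pixels.
    return np.mean((feat_out - feat_target) ** 2)

def style_transfer_loss(feat_out, feat_target, img,
                        content_weight=1.0, tv_weight=1e-4):
    # Weighted sum of the two terms; varying content_weight trades
    # artistic stylisation against photo-realism, as described above.
    return (content_weight * perceptual_loss(feat_out, feat_target)
            + tv_weight * total_variation_loss(img))
```

Raising `content_weight` pushes the optimiser to preserve the target image's structure, while the total variation term keeps the output smooth and photo-realistic.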

Cited by
Journal Article DOI
TL;DR: Zhang et al. investigated the adverse effects of challenging weather on visual perception systems through imaging, and proposed a method to evaluate the robustness of object detection models under different weather variations.

2 citations