Author

Anil Singh Parihar

Bio: Anil Singh Parihar is an academic researcher from Delhi Technological University. The author has contributed to research in the topics of Fuzzy logic & Deep learning. The author has an h-index of 9 and has co-authored 65 publications receiving 339 citations.


Papers
Journal ArticleDOI
TL;DR: The exhaustive experimentation and analysis show that the proposed algorithm efficiently enhances contrast and yields natural visual quality images.
Abstract: This paper presents contrast enhancement algorithms based on fuzzy contextual information of the images. We introduce a fuzzy similarity index and a fuzzy contrast factor to capture the neighborhood characteristics of a pixel. A new histogram, built from the fuzzy contrast factor of each pixel, is developed and termed the fuzzy dissimilarity histogram (FDH). A cumulative distribution function is formed from the normalized values of the FDH and used as a transfer function to obtain the contrast-enhanced image. The algorithm gives good contrast enhancement and preserves the natural characteristics of the image. To develop a contextual intensity transfer function, we introduce a fuzzy membership function based on the fuzzy similarity index and the coefficient of variation of the image. The contextual intensity transfer function is designed using this fuzzy membership function to achieve the final contrast-enhanced image. The overall algorithm is referred to as the fuzzy contextual contrast-enhancement algorithm. The proposed algorithms are compared with conventional and state-of-the-art contrast enhancement algorithms. The quantitative and visual assessment of the results is performed. The results of quantitative measures are statistically analyzed using a t-test. The exhaustive experimentation and analysis show that the proposed algorithm efficiently enhances contrast and yields natural visual quality images.
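To make the histogram-to-transfer-function step concrete, here is a minimal Python sketch. The per-pixel weights are a hypothetical stand-in for the paper's fuzzy contrast factor; the fuzzy similarity index and the contextual intensity transfer function are not reproduced here.

```python
# Minimal sketch: build a weighted histogram, normalize its CDF, and use it
# as an intensity transfer function. The `weights` argument is a hypothetical
# stand-in for the paper's fuzzy contrast factor.
import numpy as np

def weighted_histogram_equalize(image, weights):
    """image: 2-D uint8 array; weights: per-pixel weights of the same shape."""
    # Accumulate each pixel's weight into its gray-level bin
    hist = np.bincount(image.ravel(), weights=weights.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf = cdf / cdf[-1]                          # normalize to [0, 1]
    transfer = np.round(255 * cdf).astype(np.uint8)
    return transfer[image]                       # apply as intensity mapping

# With uniform weights this reduces to ordinary histogram equalization:
# img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
# out = weighted_histogram_equalize(img, np.ones_like(img, dtype=float))
```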

93 citations

Journal ArticleDOI
TL;DR: This paper presents a fuzzy system for edge detection based on the smallest univalue segment assimilating nucleus (USAN) principle and the bacterial foraging algorithm (BFA), with a parametric fuzzy intensification operator (FINT) proposed to enhance weak edge information, resulting in another fuzzy set.
Abstract: This paper presents a fuzzy system for edge detection, using the smallest univalue segment assimilating nucleus (USAN) principle and the bacterial foraging algorithm (BFA). The proposed algorithm fuzzifies the USAN area obtained from the original image, using a USAN-area-histogram-based Gaussian membership function. A parametric fuzzy intensification operator (FINT) is proposed to enhance the weak edge information, which results in another fuzzy set. Fuzzy measures, i.e., the fuzzy edge quality factor and the sharpness factor, are defined on the fuzzy sets. The BFA is used to optimize the parameters involved in the fuzzy membership function and the FINT. The fuzzy edge map is obtained using the optimized parameters. Adaptive thresholding is used to defuzzify the fuzzy edge map to obtain a binary edge map. The experimental results are analyzed qualitatively and quantitatively. The quantitative measures used are Pratt's figure of merit, Cohen's Kappa, Shannon's entropy, and the edge strength similarity-based edge quality metric. The quantitative results are statistically analyzed using a t-test. The proposed algorithm outperforms many traditional and state-of-the-art edge detectors.
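The intensification step can be illustrated with a generic parametric operator: a sketch in the spirit of the FINT, here the classic Zadeh INT operator generalized with an exponent p and a crossover point t. The paper's exact operator and its BFA-optimized parameter values are not reproduced; both parameters below are illustrative assumptions.

```python
# Parametric fuzzy intensification sketch: memberships below the crossover t
# are pushed down, those above are pushed up. With p=2 and t=0.5 this is the
# classic INT operator (2*mu**2 below 0.5, 1 - 2*(1-mu)**2 above).
import numpy as np

def fuzzy_intensify(mu, p=2.0, t=0.5):
    """mu: array of memberships in [0, 1]; returns the intensified fuzzy set."""
    mu = np.asarray(mu, dtype=float)
    low = (mu ** p) * (t ** (1.0 - p))                     # branch for mu <= t
    high = 1.0 - ((1.0 - mu) ** p) * ((1.0 - t) ** (1.0 - p))
    return np.where(mu <= t, low, high)                    # continuous at mu = t
```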

73 citations

Journal ArticleDOI
TL;DR: The quantitative and visual assessment shows that the proposed algorithm outperforms most of the existing contrast-enhancement algorithms and results in natural-looking, good contrast images with almost no artefacts.
Abstract: This study presents a new contrast-enhancement approach called entropy-based dynamic sub-histogram equalisation. The proposed algorithm performs a recursive division of the histogram based on the entropy of the sub-histograms. Each sub-histogram is divided recursively into two sub-histograms with equal entropy. A stopping criterion is proposed to achieve an optimum number of sub-histograms. A new dynamic range is allocated to each sub-histogram based on the entropy and number of used and missing intensity levels in the sub-histogram. The final contrast-enhanced image is obtained by equalising each sub-histogram independently. The proposed algorithm is compared with conventional as well as state-of-the-art contrast-enhancement algorithms. The quantitative results for a large image data set are statistically analysed using a paired t-test. The quantitative and visual assessment shows that the proposed algorithm outperforms most of the existing contrast-enhancement algorithms. The proposed algorithm results in natural-looking, good contrast images with almost no artefacts.
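The recursive equal-entropy division can be sketched as follows. The fixed recursion depth is a placeholder for the paper's stopping criterion, and the dynamic-range allocation step is omitted.

```python
# Sketch: recursively split a 256-bin histogram at the gray level that divides
# the segment's entropy roughly in half, returning sub-histogram boundaries.
import numpy as np

def entropy_splits(hist, lo=0, hi=255, depth=3):
    """hist: array of 256 bin counts; returns a list of (lo, hi) segments."""
    p = hist[lo:hi + 1].astype(float)
    if depth == 0 or p.sum() == 0:
        return [(lo, hi)]
    p = p / p.sum()
    h = np.zeros_like(p)
    nz = p > 0
    h[nz] = -p[nz] * np.log2(p[nz])          # per-bin entropy contributions
    cum = np.cumsum(h)
    split = lo + int(np.searchsorted(cum, cum[-1] / 2.0))
    if split <= lo or split >= hi:           # segment too small to split
        return [(lo, hi)]
    return (entropy_splits(hist, lo, split, depth - 1)
            + entropy_splits(hist, split + 1, hi, depth - 1))
```

Each returned segment would then be equalized independently over its allocated dynamic range.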

60 citations

Proceedings ArticleDOI
01 Jan 2018
TL;DR: This paper discusses Single Scale Retinex (SSR), Multi-Scale Retinex (MSR), Improved Retinex Image Enhancement (IRIE), MSR Improvement for Night-Time Enhancement (MSRINTE), and Retinex-Based Perceptual Contrast Enhancement using Luminance Adaptation (RBPCELA).
Abstract: This paper focuses on a few of the many Retinex-based methods for image enhancement. Retinex is the concept of capturing an image the way a human being perceives a scene, through the combined action of the retina (eye) and cortex (mind). According to Retinex theory, an image can be modelled as the product of the illumination and the reflectance of the object. Retinex focuses on the dynamic range and color constancy of an image. Numerous methods that use Retinex for image contrast enhancement have been proposed to date. In this paper, we discuss Single Scale Retinex (SSR), Multi-Scale Retinex (MSR), Improved Retinex Image Enhancement (IRIE), MSR Improvement for Night-Time Enhancement (MSRINTE) and Retinex-Based Perceptual Contrast Enhancement using Luminance Adaptation (RBPCELA).
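The SSR and MSR formulations are standard and can be sketched briefly: SSR estimates log-reflectance as the log of the image minus the log of a Gaussian-smoothed illumination estimate, and MSR averages SSR outputs over several scales (the scale values below are common choices, not taken from this paper).

```python
# Single Scale Retinex: R = log(I) - log(I * G_sigma), where G_sigma is a
# Gaussian surround; Multi-Scale Retinex averages SSR over several sigmas.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0):
    """image: 2-D array with values in [0, 255]; returns log-reflectance."""
    img = image.astype(float) + 1.0                  # offset to avoid log(0)
    illumination = gaussian_filter(img, sigma)       # smoothed illumination
    return np.log(img) - np.log(illumination)

def multi_scale_retinex(image, sigmas=(15.0, 80.0, 250.0)):
    return np.mean([single_scale_retinex(image, s) for s in sigmas], axis=0)
```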

49 citations

Journal ArticleDOI
TL;DR: A new edge detection technique based on a fuzzy derivative and the bacterial foraging algorithm is proposed to deal with noisy images; it can detect the edges in an image in the presence of up to 30% impulse noise.
Abstract: Bio-inspired edge detection using fuzzy logic has attracted great attention in recent years. The bacterial foraging (BF) algorithm, introduced in Passino (IEEE Control Syst Mag 22(3):52–67, 2002), is one of the powerful bio-inspired optimization algorithms. It attempts to imitate a single bacterium or groups of E. coli bacteria. In the BF algorithm, a set of bacteria forages towards a nutrient-rich medium to get more nutrients. A new edge detection technique is proposed to deal with noisy images using a fuzzy derivative and the bacterial foraging algorithm. The bacteria detect edge pixels as well as noisy pixels in their path during foraging. New fuzzy inference rules are devised, and the direction of movement of each bacterium is determined using these rules. During foraging, if a bacterium encounters a noisy pixel, it first removes the noisy pixel using the adaptive fuzzy switching median filter of Toh and Isa (IEEE Signal Process Lett 17(3):281–284, 2010). If the bacterium does not encounter a noisy pixel, it searches only for edge pixels in the image and draws the edge map. This approach can detect the edges in an image in the presence of up to 30% impulse noise.
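As a rough illustration of the remove-noise-before-edge-detection idea, below is a plain switching median filter. It is not the adaptive fuzzy switching median filter of Toh and Isa (2010), and the bacterial foraging traversal and fuzzy inference rules are omitted; the detection thresholds are illustrative assumptions.

```python
# Simplified stand-in: replace only suspected impulse (salt-and-pepper) pixels
# with the local median, leaving all other pixels untouched.
import numpy as np
from scipy.ndimage import median_filter

def switching_median(image, low=10, high=245):
    """image: 2-D uint8 array; low/high: crude impulse-detection thresholds."""
    med = median_filter(image, size=3)
    noisy = (image <= low) | (image >= high)     # extreme values ~ impulse noise
    return np.where(noisy, med, image)
```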

32 citations


Cited by
01 Jan 1979
TL;DR: This special issue aims at gathering the recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis, and especially encourages papers addressing interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain lots of training data while many classes contain only a small amount. Therefore, how to use frequent classes to help learn rare classes, for which it is harder to collect training data, is an open question. Learning with Shared Information is an emerging topic in machine learning, computer vision and multimedia analysis. There are different levels of components that can be shared during the concept modeling and machine learning stages, such as sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters and sharing training examples. Regarding specific methods, multi-task learning, transfer learning and deep learning can be seen as using different strategies to share information. These learning with shared information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering the recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters and sharing training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Posted Content
TL;DR: The Exclusively Dark dataset, as discussed by the authors, consists of ten different types of low-light images (i.e. low, ambient, object, single, weak, strong, screen, window, shadow and twilight) captured in visible light only, with image- and object-level annotations.
Abstract: Low-light is an inescapable element of our daily surroundings that greatly affects the efficiency of our vision. Research on low-light has seen steady growth, particularly in the field of image enhancement, but there is still a lack of a go-to database as a benchmark. Besides, research fields that may assist us in low-light environments, such as object detection, have glossed over this aspect even though breakthrough after breakthrough has been achieved in recent years, most noticeably because of the lack of low-light data (less than 2% of the total images) in successful public benchmark datasets such as PASCAL VOC, ImageNet, and Microsoft COCO. Thus, we propose the Exclusively Dark dataset to alleviate this data drought, consisting exclusively of ten different types of low-light images (i.e. low, ambient, object, single, weak, strong, screen, window, shadow and twilight) captured in visible light only, with image- and object-level annotations. Moreover, we share insightful findings regarding the effects of low-light on the object detection task by analyzing visualizations of both hand-crafted and learned features. Most importantly, we found that the effects of low-light reach far deeper into the features than can be solved by simple "illumination invariance". It is our hope that this analysis and the Exclusively Dark dataset can encourage the growth of low-light research in different fields. The Exclusively Dark dataset with its annotations is available at this https URL

180 citations

Journal ArticleDOI
TL;DR: A unified framework is proposed to classify palmprint images into four categories: 1) contact-based; 2) contactless; 3) high-resolution; and 4) 3-D palmprint images.
Abstract: The palmprint possesses a number of unique features for reliable personal recognition. However, different types of palmprint images contain different dominant features: only some features of the palmprint are visible in a given palmprint image, whereas other features may not be notable. For example, a low-resolution palmprint image has visible principal lines and wrinkles. By contrast, a high-resolution palmprint image contains clear ridge patterns and minutiae points. In addition, a three-dimensional (3-D) palmprint image captures the curvature of the palmprint surface. So far, no work has summarized feature extraction for the different types of palmprint images. In this paper, we aim to comprehensively study palmprint feature extraction and recognition. We propose a unified framework to classify palmprint images into four categories: 1) contact-based; 2) contactless; 3) high-resolution; and 4) 3-D palmprint images. Then, we analyze the motivations and theories of the representative extraction and matching methods for the different types of palmprint images. Finally, we compare and test the state-of-the-art methods on the widely used palmprint databases, and point out some potential directions for future research.

159 citations

Journal ArticleDOI
TL;DR: A new classification of the main techniques of low-light image enhancement developed over the past decades is presented, dividing them into seven categories: gray transformation methods, histogram equalization methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods.
Abstract: Images captured under poor illumination conditions often exhibit characteristics such as low brightness, low contrast, a narrow gray range, and color distortion, as well as considerable noise, which seriously affect the subjective visual effect on human eyes and greatly limit the performance of various machine vision systems. The role of low-light image enhancement is to improve the visual effect of such images for the benefit of subsequent processing. This paper reviews the main techniques of low-light image enhancement developed over the past decades. First, we present a new classification of these algorithms, dividing them into seven categories: gray transformation methods, histogram equalization methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods. Then, all the categories of methods, including subcategories, are introduced in accordance with their principles and characteristics. In addition, various quality evaluation methods for enhanced images are detailed, and comparisons of different algorithms are discussed. Finally, the current research progress is summarized, and future research directions are suggested.
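As a concrete instance of the first category (gray transformation methods), here is a minimal gamma-correction sketch; choosing gamma < 1 brightens dark regions, the typical setting for low-light images. This is a generic example, not a method from the survey.

```python
# Gamma correction, a basic gray transformation: out = 255 * (in/255)**gamma.
import numpy as np

def gamma_correct(image, gamma=0.5):
    """image: 2-D uint8 array; gamma < 1 brightens, gamma > 1 darkens."""
    normalized = image.astype(float) / 255.0
    return np.round(255.0 * normalized ** gamma).astype(np.uint8)
```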

138 citations

Journal ArticleDOI
TL;DR: Simulation results reveal that CSDE is better able to find promising results than 12 other algorithms (including traditional and state-of-the-art algorithms) on 30 unconstrained benchmark functions, 10 constrained benchmark functions and 6 constrained engineering problems.

111 citations