Journal ArticleDOI

Denoising method of low illumination underwater motion image based on improved Canny

01 Apr 2021, Microprocessors and Microsystems (Elsevier), Vol. 82, 103862
TL;DR: An image denoising method based on an improved Canny operator achieves a high SNR and the highest structural-similarity score among the compared methods, showing that the denoised image closely preserves the structure of the original with good visual quality, which verifies the method's practical applicability to image noise reduction.
About: This article was published in Microprocessors and Microsystems on 2021-04-01 and has received 3 citations to date. It focuses on the topics: Anisotropic diffusion & Image segmentation.
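This page does not reproduce the paper's algorithm, but its listed topics (improved Canny, anisotropic diffusion) suggest the general shape: edge-preserving smoothing steered by a Canny edge map. The Python/OpenCV sketch below illustrates that idea only; the canny_guided_diffusion helper, the Canny thresholds, and the diffusion constants are all illustrative assumptions, not the paper's method.

```python
# A minimal sketch of edge-guided denoising in the spirit of the paper:
# Perona-Malik anisotropic diffusion that does not diffuse across edges
# found by the (stock) Canny operator. All parameters are illustrative.
import cv2
import numpy as np

def canny_guided_diffusion(img, iterations=15, kappa=30.0, lam=0.2):
    """Anisotropic diffusion that freezes pixels lying on Canny edges."""
    edges = cv2.Canny(img, 50, 150)          # hypothetical thresholds
    keep = (edges == 0).astype(np.float32)   # 0 on edges, 1 elsewhere
    u = img.astype(np.float32)
    g = lambda d: np.exp(-(d / kappa) ** 2)  # Perona-Malik conductance
    for _ in range(iterations):
        # finite-difference gradients toward the four neighbours
        n = np.roll(u, -1, axis=0) - u
        s = np.roll(u,  1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u,  1, axis=1) - u
        update = lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
        u += keep * update                   # no smoothing across edges
    return np.clip(u, 0, 255).astype(np.uint8)

denoised = canny_guided_diffusion(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
```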
Citations
Journal ArticleDOI
TL;DR: Wang et al. propose a method for image change detection in surveillance video based on optimized k-medoids clustering and adaptive fusion of difference images (DIs), which accurately and effectively detects weak changes in the eagle-eye surveillance picture in low-illumination environments.
Abstract: In low-illumination environments such as at night, due to factors such as the large monitoring field of an eagle eye, short sensor exposure time, and high-density random noise, the video images collected by image sensors generally have poor visual quality and low signal-to-noise ratio, which makes it difficult for surveillance systems to detect weak changes. To solve this problem, we propose a method for image change detection (CD) in surveillance video based on optimized k-medoids clustering and adaptive fusion of difference images (DIs). First, for the input multitemporal video surveillance images, two DIs are obtained by log-ratio and extremum pixel ratio operators. Then, the two DIs are adaptively fused by combining the local energy of DIs and the Laplacian pyramid. Simultaneously, the fused DI is compressed by the normalization function, and the final DI is obtained via the improved adaptive median filter. Finally, the changed image is obtained by using the optimized k-medoids clustering algorithm. The experimental results show that the proposed method can accurately and effectively detect weak changes in the eagle eye surveillance picture in a low-illumination environment. Compared with those of other methods, the accuracy and robustness of the proposed method are higher, and the running time of the algorithm is shorter. Moreover, it will not generate a false alarm due to the influence of noise in unchanged scenes.
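As a rough illustration of the pipeline described above, here is a hedged Python/OpenCV sketch. The function names are mine; a plain average stands in for the paper's local-energy Laplacian-pyramid fusion, cv2.medianBlur for its improved adaptive median filter, and Otsu thresholding for the optimized k-medoids clustering, so this reproduces the structure of the method rather than the method itself.

```python
import cv2
import numpy as np

def log_ratio_di(img1, img2):
    """Log-ratio difference image, as named in the abstract."""
    a = img1.astype(np.float32) + 1.0
    b = img2.astype(np.float32) + 1.0
    d = np.abs(np.log(b / a))
    return cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX)

def extremum_ratio_di(img1, img2):
    """Extremum pixel-ratio operator: 1 - min/max per pixel (my reading)."""
    a = img1.astype(np.float32) + 1.0
    b = img2.astype(np.float32) + 1.0
    d = 1.0 - np.minimum(a, b) / np.maximum(a, b)
    return cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX)

def detect_changes(img1, img2):
    # The paper fuses the two DIs with local energy + a Laplacian pyramid;
    # a plain average stands in for that step here.
    di = 0.5 * log_ratio_di(img1, img2) + 0.5 * extremum_ratio_di(img1, img2)
    di = cv2.medianBlur(di.astype(np.uint8), 5)  # stand-in for the improved adaptive median filter
    # Two-cluster split of DI values; Otsu stands in for optimized k-medoids.
    _, changed = cv2.threshold(di, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return changed
```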
Book ChapterDOI
01 Jan 2022
TL;DR: A vehicle queue length detection method based on an improved Canny edge detection operator is proposed: vehicle motion detection is achieved by the three-frame differential method, vehicle presence detection combines the improved Canny operator with the background differential method, and the maximum and minimum vertical coordinates of the queue within the region of interest (ROI) of the video image are converted from pixel distance to actual queue length using camera calibration.
Abstract: To collect real-time vehicle queue lengths at intersections, judge congestion, and thereby optimize traffic-light control duration and alleviate traffic congestion, this chapter integrates the advantages of the Canny edge detection operator and proposes a vehicle queue length detection method based on an improved Canny operator. Vehicle motion detection is achieved by the three-frame differential method, and vehicle presence detection uses a combination of the improved Canny operator and the background differential method. The maximum and minimum vertical coordinates of the queue within the region of interest (ROI) of the video image are then collected, and camera calibration converts the pixel distance to the actual queue length. Experimental results show that the method achieves queue length detection, that the improved detector is more accurate than before the improvement, and that the error stays within the allowable range for the measurement requirements.
Keywords: Three-frame differential method; Background differential method; Edge detection; Vehicle queue length; Camera calibration
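A minimal sketch of the detection stages named in the abstract, in Python/OpenCV. The stock cv2.Canny stands in for the improved operator, and all function names, thresholds, and the rectangular-ROI assumption are illustrative, not the chapter's implementation.

```python
import cv2
import numpy as np

def three_frame_diff(f0, f1, f2, thresh=25):
    """Three-frame differential method: motion mask from consecutive frames."""
    d01 = cv2.absdiff(f0, f1)
    d12 = cv2.absdiff(f1, f2)
    _, m01 = cv2.threshold(d01, thresh, 255, cv2.THRESH_BINARY)
    _, m12 = cv2.threshold(d12, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(m01, m12)   # moving pixels appear in both diffs

def queue_pixel_length(frame, background, roi, canny_lo=50, canny_hi=150):
    """Queue extent (in pixels) inside a rectangular ROI (x, y, w, h)."""
    x, y, w, h = roi
    patch = frame[y:y+h, x:x+w]
    bg = background[y:y+h, x:x+w]
    edges = cv2.Canny(patch, canny_lo, canny_hi)   # stock Canny stands in here
    _, presence = cv2.threshold(cv2.absdiff(patch, bg), 25, 255,
                                cv2.THRESH_BINARY)  # background differential
    vehicle = cv2.bitwise_and(edges, presence)      # edges of present vehicles
    rows = np.where(vehicle.any(axis=1))[0]
    if rows.size == 0:
        return 0
    return int(rows.max() - rows.min())  # max - min vertical coordinate
# Camera calibration (not shown) maps this pixel distance to metres.
```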
References
Journal ArticleDOI
TL;DR: A new appearance matching framework is introduced to determine the parameters of micro-appearance models, and using a fiber-specific scattering model proves crucial to good results, achieving considerably higher accuracy than prior work.
Abstract: Micro-appearance models explicitly model the interaction of light with microgeometry at the fiber scale to produce realistic appearance. To effectively match them to real fabrics, we introduce a new appearance matching framework to determine their parameters. Given a micro-appearance model and photographs of the fabric under many different lighting conditions, we optimize for parameters that best match the photographs using a method based on calculating derivatives during rendering. This highly applicable framework, we believe, is a useful research tool because it simplifies development and testing of new models. Using the framework, we systematically compare several types of micro-appearance models. We acquired computed microtomography (micro CT) scans of several fabrics, photographed the fabrics under many viewing/illumination conditions, and matched several appearance models to this data. We compare a new fiber-based light scattering model to the previously used microflake model. We also compare representing cloth microgeometry using volumes derived directly from the micro CT data to using explicit fibers reconstructed from the volumes. From our comparisons, we make the following conclusions: (1) given a fiber-based scattering model, volume- and fiber-based microgeometry representations are capable of very similar quality, and (2) using a fiber-specific scattering model is crucial to good results as it achieves considerably higher accuracy than prior work.
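The matching loop the abstract describes is, in outline, ordinary gradient-based fitting of renderer parameters to photographs. The sketch below uses finite differences where the paper computes derivatives during rendering; render, theta0, and the L2 loss are placeholders of mine, not the authors' implementation.

```python
import numpy as np

def fit_appearance(render, photos, theta0, lr=1e-2, eps=1e-3, steps=200):
    """Fit scattering parameters theta so render(theta) matches the photos.

    `render` stands in for the micro-appearance renderer: any function
    returning a stack of images for a parameter vector theta. Derivatives
    are taken numerically here; the paper computes them during rendering.
    """
    theta = np.asarray(theta0, dtype=np.float64)
    for _ in range(steps):
        base = np.mean((render(theta) - photos) ** 2)   # L2 appearance loss
        grad = np.zeros_like(theta)
        for i in range(theta.size):                     # finite differences
            t = theta.copy()
            t[i] += eps
            grad[i] = (np.mean((render(t) - photos) ** 2) - base) / eps
        theta -= lr * grad                              # gradient step
    return theta
```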

530 citations

Journal ArticleDOI
11 Jul 2016
TL;DR: An algorithm for automatically generating high-performance schedules for Halide programs that does not require costly (and often impractical) auto-tuning, and generates schedules for a broad set of image processing benchmarks that are performance-competitive with, and often better than, schedules manually authored by expert Halide developers on server and mobile CPUs, as well as GPUs.
Abstract: The Halide image processing language has proven to be an effective system for authoring high-performance image processing code. Halide programmers need only provide a high-level strategy for mapping an image processing pipeline to a parallel machine (a schedule), and the Halide compiler carries out the mechanical task of generating platform-specific code that implements the schedule. Unfortunately, designing high-performance schedules for complex image processing pipelines requires substantial knowledge of modern hardware architecture and code-optimization techniques. In this paper we provide an algorithm for automatically generating high-performance schedules for Halide programs. Our solution extends the function bounds analysis already present in the Halide compiler to automatically perform locality and parallelism-enhancing global program transformations typical of those employed by expert Halide developers. The algorithm does not require costly (and often impractical) auto-tuning, and, in seconds, generates schedules for a broad set of image processing benchmarks that are performance-competitive with, and often better than, schedules manually authored by expert Halide developers on server and mobile CPUs, as well as GPUs.

157 citations

Journal ArticleDOI
TL;DR: A new programming language for image processing pipelines, called Halide, that separates the algorithm from its schedule, and is expressive enough to describe organizations that match or outperform state-of-the-art hand-written implementations of many computational photography and computer vision algorithms.
Abstract: Writing high-performance code on modern machines requires not just locally optimizing inner loops, but globally reorganizing computations to exploit parallelism and locality---doing things such as tiling and blocking whole pipelines to fit in cache. This is especially true for image processing pipelines, where individual stages do much too little work to amortize the cost of loading and storing results to and from off-chip memory. As a result, the performance difference between a naive implementation of a pipeline and one globally optimized for parallelism and locality is often an order of magnitude. However, using existing programming tools, writing high-performance image processing code requires sacrificing simplicity, portability, and modularity. We argue that this is because traditional programming models conflate the computations defining the algorithm with decisions about intermediate storage and the order of computation, which we call the schedule. We propose a new programming language for image processing pipelines, called Halide, that separates the algorithm from its schedule. Programmers can change the schedule to express many possible organizations of a single algorithm. The Halide compiler then synthesizes a globally combined loop nest for an entire algorithm, given a schedule. Halide models a space of schedules which is expressive enough to describe organizations that match or outperform state-of-the-art hand-written implementations of many computational photography and computer vision algorithms. Its model is simple enough to do so often in only a few lines of code, and small changes generate efficient implementations for x86, ARM, Graphics Processors (GPUs), and specialized image processors, all from a single algorithm. Halide has been public and open source for over four years, during which it has been used by hundreds of programmers to deploy code to tens of thousands of servers and hundreds of millions of phones, processing billions of images every day.
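The algorithm/schedule split is easiest to see in Halide's canonical 3x3 blur. This sketch uses Halide's official Python bindings; the binding names mirror the C++ API, and the specific schedule (tile sizes, vector width) is an illustrative choice of mine, not one from the paper.

```python
import halide as hl

x, y, xi, yi = hl.Var("x"), hl.Var("y"), hl.Var("xi"), hl.Var("yi")

# A synthetic input so the sketch is self-contained; a real pipeline
# would use hl.ImageParam and feed in an actual image.
inp = hl.Func("inp")
inp[x, y] = hl.cast(hl.Float(32), x + y)

# The algorithm: *what* is computed.
blur_x, blur_y = hl.Func("blur_x"), hl.Func("blur_y")
blur_x[x, y] = (inp[x - 1, y] + inp[x, y] + inp[x + 1, y]) / 3.0
blur_y[x, y] = (blur_x[x, y - 1] + blur_x[x, y] + blur_x[x, y + 1]) / 3.0

# The schedule: *how* it is computed. Changing only these lines
# reorganizes the loop nest without touching the algorithm above.
blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y)
blur_x.compute_at(blur_y, x).vectorize(x, 8)

out = blur_y.realize([512, 512])  # JIT-compile and execute
```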

123 citations

Journal ArticleDOI
TL;DR: This work summarizes the current state of research and applications in the three main wide-field diffuse optical techniques leveraging structured light: spatial frequency-domain imaging, optical tomography, and single-pixel imaging, all of which benefit greatly from structured light methodologies.
Abstract: Diffuse optical imaging probes deep living tissue enabling structural, functional, metabolic, and molecular imaging. Recently, due to the availability of spatial light modulators, wide-field quantitative diffuse optical techniques have been implemented, which benefit greatly from structured light methodologies. Such implementations facilitate the quantification and characterization of depth-resolved optical and physiological properties of thick and deep tissue at fast acquisition speeds. We summarize the current state of work and applications in the three main techniques leveraging structured light: spatial frequency-domain imaging, optical tomography, and single-pixel imaging. The theory, measurement, and analysis of spatial frequency-domain imaging are described. Then, advanced theories, processing, and imaging systems are summarized. Preclinical and clinical applications on physiological measurements for guidance and diagnosis are summarized. General theory and method development of tomographic approaches as well as applications including fluorescence molecular tomography are introduced. Lastly, recent developments of single-pixel imaging methodologies and applications are reviewed.
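Of the three techniques, spatial frequency-domain imaging has the most compact core computation: three phase-shifted sinusoidal illumination images are demodulated per pixel into AC and DC amplitudes. The sketch below implements the standard three-phase demodulation formula from the SFDI literature (it is not spelled out in this abstract); calibration against a reference phantom is left out.

```python
import numpy as np

def sfdi_demodulate(i1, i2, i3):
    """Standard three-phase SFDI demodulation (phases 0, 2*pi/3, 4*pi/3).

    Returns the AC modulation amplitude at the projected spatial frequency
    and the DC (planar) amplitude; diffuse reflectance is then recovered by
    calibration against a reference phantom (not shown).
    """
    i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i1, i2, i3))
    m_ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    m_dc = (i1 + i2 + i3) / 3.0
    return m_ac, m_dc
```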

87 citations

Journal ArticleDOI
11 Jul 2016
TL;DR: This paper presents Rigel, which takes pipelines specified in the new multi-rate architecture and lowers them to FPGA implementations, and demonstrates depth from stereo, Lucas-Kanade, the SIFT descriptor, and a Gaussian pyramid running on two FPGAs.
Abstract: Image processing algorithms implemented using custom hardware or FPGAs can be orders-of-magnitude more energy efficient and performant than software. Unfortunately, converting an algorithm by hand to a hardware description language suitable for compilation on these platforms is frequently too time consuming to be practical. Recent work on hardware synthesis of high-level image processing languages demonstrated that a single-rate pipeline of stencil kernels can be synthesized into hardware with provably minimal buffering. Unfortunately, few advanced image processing or vision algorithms fit into this highly-restricted programming model. In this paper, we present Rigel, which takes pipelines specified in our new multi-rate architecture and lowers them to FPGA implementations. Our flexible multi-rate architecture supports pyramid image processing, sparse computations, and space-time implementation tradeoffs. We demonstrate depth from stereo, Lucas-Kanade, the SIFT descriptor, and a Gaussian pyramid running on two FPGA boards. Our system can synthesize hardware for FPGAs with up to 436 Megapixels/second throughput, and up to 297x faster runtime than a tablet-class ARM CPU.

67 citations