
Showing papers on "Night vision published in 2022"


Journal ArticleDOI
TL;DR: Li et al. reported a broadband NIR phosphor, LiInP2O7:Cr3+, in a phosphate system, achieving an energy transfer efficiency of 81% in this material.

25 citations



Journal ArticleDOI
TL;DR: In this article, an ultrasensitive, low-power-consumption organic phototransistor consisting of a unique Schottky-barrier structure and separated light-absorption and carrier-transport layers is reported.
Abstract: Emulating human vision using solid-state devices is critical in the fields of robotics, artificial intelligence, and visual prostheses, driving intense research interest. However, bionic vision devices made from routine structures suffer from low light-perception sensitivity to nighttime low illuminations and high power consumption, impeding their applications in many advanced scenarios from nighttime autopilot to night vision neuroprosthesis. Here, an ultrasensitive and low-power-consumption organic phototransistor that consists of a unique Schottky-barrier structure and separated light absorption and carrier transport layers is reported. This device design shuns the introduction of trap states into the carrier transport route, which guarantees an ultra-steep subthreshold swing and thus significantly amplifies the photocurrent while lowering the operation voltage. In consequence, the weak-light detection capacity of this device is enhanced dramatically: it can perceive nighttime low-light illuminations with an ultrahigh light-perception sensitivity of 10²–10⁴ and a low power consumption of <10 nW. Leveraging these findings, it is demonstrated that the phototransistor has neuromorphic vision perception behaviors and energy efficiency like the human brain under faint light, opening a new opportunity for artificial vision.

10 citations
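The two headline figures in this entry are linked by simple definitions: light-perception sensitivity is the photo-to-dark current ratio, and operating power is current times drain bias. A back-of-envelope sketch, in which the operating point (currents and voltage) is hypothetical and only the final bounds come from the abstract:

```python
# Hypothetical operating point; only the final figures (sensitivity 10^2-10^4,
# power < 10 nW) come from the abstract.
i_dark = 1e-12           # dark current, A (assumed)
i_light = 5e-9           # current under faint light, A (assumed)
v_ds = 1.0               # drain-source bias, V (assumed)

sensitivity = i_light / i_dark   # photo/dark current ratio -> 5e3, within 10^2-10^4
power_w = i_light * v_ds         # ~5 nW, inside the reported <10 nW budget
print(f"sensitivity = {sensitivity:.0f}, power = {power_w * 1e9:.1f} nW")
```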


Journal ArticleDOI
TL;DR: In this paper, Cr3+-doped Zn3Ga2Ge2O10 long-persistent phosphor materials were synthesized by the solid-state reaction method for use in NIR security ink.

8 citations


Journal ArticleDOI
TL;DR: In this article, Cr3+-doped Zn3Ga2Ge2O10 long-persistent phosphor materials were synthesized by the solid-state reaction method; the crystal structure of the synthesized materials is cubic with space group Fd3m.

8 citations


Journal ArticleDOI
TL;DR: In this paper, an inorganic quantum dot material is combined with organic materials to promote carrier separation and introduce interface defects that adjust the carrier concentration in transistors, which induces synaptic behavior under SIR (1100 nm) light in darkness and the ability to adapt to white ambient light.
Abstract: Flexible organic monitoring systems that can work in the short-wave infrared (SIR) region have great potential in autonomous driving, night driving safety, military encryption, biomedical imaging, and robot engineering. In particular, the development of infrared artificial vision devices that can autonomously improve computing speed and adapt to ambient light of different brightness is very important. However, mimicking infrared visual adaptation is a challenge because of the need for infrared-absorbing materials and for control over the carrier concentration. In this study, an inorganic quantum dot material is combined with organic materials to promote carrier separation and introduce interface defects that adjust the carrier concentration in transistors, which induces synaptic behavior under SIR (1100 nm) light in darkness and the ability to adapt to white ambient light. Furthermore, the device array realizes image recognition of SIR light at night under white ambient light of different brightness, exhibiting good self-adaptability and strong anti-interference ability. These results demonstrate promising applications of the infrared synaptic phototransistors in adaptive bionic optoelectronic devices.

7 citations


Journal ArticleDOI
TL;DR: The visual light environment at night is described, along with the visual challenges this environment imposes and the adaptations that have evolved to overcome them; the advantages of colour vision for nocturnal insects and its usefulness in discriminating night-opening flowers are also explained.
Abstract: The ability to see colour at night is known only from a handful of animals. First discovered in the elephant hawk moth Deilephila elpenor, nocturnal colour vision is now known from two other species of hawk moths, a single species of carpenter bee, a nocturnal gecko and two species of anurans. The reason for this rarity—particularly in vertebrates—is the immense challenge of achieving a sufficient visual signal-to-noise ratio to support colour discrimination in dim light. Although no less challenging for nocturnal insects, unique optical and neural adaptations permit reliable colour vision and colour constancy even in starlight. Using the well-studied Deilephila elpenor, we describe the visual light environment at night, the visual challenges that this environment imposes and the adaptations that have evolved to overcome them. We also explain the advantages of colour vision for nocturnal insects and its usefulness in discriminating night-opening flowers. Colour vision is probably widespread in nocturnal insects, particularly pollinators, where it is likely crucial for nocturnal pollination. This relatively poorly understood but vital ecosystem service is threatened from increasingly abundant and spectrally abnormal sources of anthropogenic light pollution, which can disrupt colour vision and thus the discrimination and pollination of flowers. This article is part of the theme issue ‘Understanding colour vision: molecular, physiological, neuronal and behavioural studies in arthropods’.

4 citations


Journal ArticleDOI
27 Apr 2022-Energies
TL;DR: In this paper, an image-processing-based approach to quantify vision quality through smart windows is proposed; it determines the available contrast band of the scenes seen through the window and adjusts the excitation of the PDLC film to maintain a desired vision level within the determined band.
Abstract: The visual linking of a building’s occupants with the outside views is a basic property of windows. However, vision through windows is not yet a metricized factor. Previous research employed human survey methods to assess vision through conventional windows. Recently fabricated smart films add a changeable visual transparency feature to windows, and this varying operating transparency complicates the evaluation of vision; surveying human preferences is therefore no longer a feasible approach for smart windows. This paper proposes an image-processing-based approach to quantify the vision quality through smart windows. The proposed method was experimentally applied to a polymer dispersed liquid crystal (PDLC) double-glazed window. The system instantaneously determines the available contrast band of the scenes seen through the window and adjusts the excitation of the PDLC film to maintain a desired vision level within the determined vision band. A preferred vision ratio (PVR) is proposed to meet the requirements of occupant comfort. The impact of the PVR on vision quality, solar heat gain, and daylight performance was investigated experimentally. The results show that the system can determine the available vision comfort band during daytime, considering different occupant requirements.

2 citations
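The paper's PVR metric is not spelled out in the abstract, but the control loop it describes (measure the contrast band of the through-window scene, then adjust the PDLC excitation to hold a desired vision level) can be sketched. A minimal illustration in which the contrast-band limits, step size, and drive-level mapping are all assumptions:

```python
import numpy as np

def rms_contrast(gray):
    """RMS contrast of a grayscale scene, pixel values in [0, 1]."""
    return float(gray.std())

def pdlc_drive_level(contrast, band_lo=0.15, band_hi=0.35, step=0.05, level=0.5):
    """Nudge the normalized PDLC excitation to keep contrast inside the band.
    Band limits, step, and the haze/drive mapping are assumed, not the paper's."""
    if contrast < band_lo:
        level = min(1.0, level + step)   # clearer state -> more scene contrast
    elif contrast > band_hi:
        level = max(0.0, level - step)   # hazier state -> less scene contrast
    return level

scene = np.random.rand(480, 640)  # stand-in for a camera frame through the window
print(pdlc_drive_level(rms_contrast(scene)))
```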


Journal ArticleDOI
TL;DR: Contrast sensitivity and low-contrast VA predict night-time hazard detection ability in a manner that conventional high-contrast VA does not, and either may provide a useful metric for assessing fitness to drive at night, particularly in older individuals.
Abstract: Purpose: (i) To assess how well contrast sensitivity (CS) predicts night-time hazard detection distance (a key component of night driving ability) in normally sighted older drivers, relative to a conventional measure of high-contrast visual acuity (VA); (ii) To evaluate whether CS can be accurately quantified within a night driving simulator. Materials and Methods: Participants were 15 (five female) ophthalmologically healthy adults, aged 55–81 years. CS was measured in a driving simulator using Landolt Cs, presented under static or dynamic driving conditions, and with or without glare. In the dynamic driving conditions, the participant was asked to simultaneously maintain a (virtual) speed of 60 km/h on a country road. In the with-glare conditions, two calibrated LED arrays, moved by cable robots, simulated the trajectories and luminance characteristics of the (low-beam) headlights of an approaching car. For comparison, CS was also measured clinically (with and without glare) using an Optovist I instrument (Vistec Inc., Olching, Germany). Visual acuity (VA) thresholds were also assessed at high and low contrast using the Freiburg Visual Acuity Test (FrACT) under photopic conditions. As a measure of driving performance, the median hazard detection distance (MHDD) was computed, in meters, across three kinds of simulated obstacles of varying contrast. Results: Contrast sensitivity and low-contrast VA were both significantly associated with driving performance (both P < 0.01), whereas conventional high-contrast acuity was not (P = 0.10). There was good correlation (P < 0.01) between CS measured in the driving simulator and with a conventional clinical instrument (Optovist I). As expected, CS decreased in the presence of glare, in dynamic driving conditions, and as a function of age (all P < 0.01). Conclusion: Contrast sensitivity and low-contrast VA predict night-time hazard detection ability in a manner that conventional high-contrast VA does not. Either may therefore provide a useful metric for assessing fitness to drive at night, particularly in older individuals. CS measurements can be made within a driving simulator, and the data are in good agreement with conventional clinical methods (Optovist I).

2 citations
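The driving-performance outcome here is the median hazard detection distance (MHDD) across simulated obstacles of varying contrast. A minimal sketch of its computation; the trial distances below are made up for illustration:

```python
import statistics

# Hypothetical detection distances (m) for three obstacle types of varying
# contrast; the abstract reports the median across obstacles as the MHDD.
detections_m = {"high_contrast": [74, 81, 69],
                "mid_contrast":  [52, 47, 55],
                "low_contrast":  [31, 28, 35]}

all_trials = [d for trials in detections_m.values() for d in trials]
mhdd = statistics.median(all_trials)
print(f"MHDD = {mhdd:.1f} m")
```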


Journal ArticleDOI
06 Apr 2022-PLOS ONE
TL;DR: In this article, a convolutional neural network with a U-Net-like architecture was used to predict human-visible spectrum scenes from imperceptible near-infrared illumination.
Abstract: Humans perceive light in the visible spectrum (400-700 nm). Some night vision systems use infrared light that is not perceptible to humans and the images rendered are transposed to a digital display presenting a monochromatic image in the visible spectrum. We sought to develop an imaging algorithm powered by optimized deep learning architectures whereby infrared spectral illumination of a scene could be used to predict a visible spectrum rendering of the scene as if it were perceived by a human with visible spectrum light. This would make it possible to digitally render a visible spectrum scene to humans when they are otherwise in complete “darkness” and only illuminated with infrared light. To achieve this goal, we used a monochromatic camera sensitive to visible and near infrared light to acquire an image dataset of printed images of faces under multispectral illumination spanning standard visible red (604 nm), green (529 nm) and blue (447 nm) as well as infrared wavelengths (718, 777, and 807 nm). We then optimized a convolutional neural network with a U-Net-like architecture to predict visible spectrum images from only near-infrared images. This study serves as a first step towards predicting human visible spectrum scenes from imperceptible near-infrared illumination. Further work can profoundly contribute to a variety of applications including night vision and studies of biological samples sensitive to visible light.

2 citations
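The paper's exact U-Net-like architecture is not given in the abstract; the toy PyTorch encoder-decoder below, with a single skip connection, only illustrates the idea of mapping a three-channel NIR stack (e.g., the 718/777/807 nm captures) to a predicted RGB image:

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Minimal U-Net-like model: 3 NIR channels in, 3 visible (RGB) channels out.
    A sketch, not the authors' architecture."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(3, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)          # 64 = upsampled 32 + skip 32
        self.out = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.out(d1))

nir = torch.rand(1, 3, 128, 128)   # stand-in stack of three NIR captures
print(TinyUNet()(nir).shape)       # -> (1, 3, 128, 128) predicted RGB
```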


Journal ArticleDOI
TL;DR: In this paper, an ultra-broadband photodetector based on a PdSe2/BP van der Waals heterodiode with a fast response speed was proposed.
Abstract: Uncooled long-wavelength infrared photodetectors based on two-dimensional materials have wide applications, such as remote sensing, missile guidance, imaging, and night vision. However, realizing high-performance photodetectors based on 2D materials with high photoresponsivity and fast response speed is still a challenge. Here, we report an ultra-broadband photodetector based on the PdSe2/BP van der Waals heterodiode with a fast response speed. The detection range of the PdSe2/BP heterodiode covers the visible to the long-wave infrared (0.4–10.6 μm). A high photoresponsivity of 116.0 A/W, a low noise-equivalent power of 8.4 × 10⁻¹⁶ W/Hz^(1/2), and a D* of 2.05 × 10⁹ cm·Hz^(1/2)/W were demonstrated. Notably, the heterodiode exhibits a very fast response speed, with τr = 2.9 μs and τd = 4.0 μs. Our results indicate promising applications in broadband, fast photoresponse at weak light intensities.
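The reported figures of merit follow standard textbook relations: responsivity R = I_ph/P_in, noise-equivalent power NEP = i_n/R (with NEP per √Hz), and specific detectivity D* = √A/NEP. A quick sketch of these relations; the noise current density below is back-computed from the reported numbers and is therefore an assumption, not a value from the paper:

```python
# Textbook photodetector figure-of-merit relations; the noise current density
# is back-computed from the reported figures, not taken from the paper.
R = 116.0                 # responsivity, A/W (reported)
i_noise = 9.74e-14        # noise current density, A/Hz^(1/2) (assumed/back-computed)

nep = i_noise / R         # -> ~8.4e-16 W/Hz^(1/2), the reported NEP
print(f"NEP = {nep:.2e} W/Hz^(1/2)")
```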

Journal ArticleDOI
TL;DR: A new optical instrument based on psychophysical methodology is developed, and the recovery time from total disability glare (photobleaching) is investigated as a function of visual-target contrast and the glare angle of the source in healthy volunteers.
Abstract: Disability glare is defined as the loss of contrast sensitivity of the retinal image due to intraocular straylight originating from the presence of an intense and broad bright light in the field of vision. This loss of vision can range from vision loss at high spatial frequencies to total temporary blindness. In the extreme case, the recovery time is crucial in night driving conditions or in professional activities in which maximum visual acuity is required at every moment. The recovery time depends mainly on the intensity and glare angle of the light source, ocular straylight, and the photoreceptor response at the retina. The recovery time can also be affected by ocular pathologies, aging, or physiological factors that increase ocular straylight. The aim of this work is to develop a new optical instrument based on psychophysical methodology and to investigate the recovery time from total disability glare (photobleaching) as a function of the contrast of the visual target and the glare angle of the source in healthy volunteers. Results showed a significant exponential correlation between recovery time and the contrast of the visual target, and a linear correlation between contrast sensitivity and the glare angle. These findings allowed us to obtain an empirical expression to compute the recovery time required to restore contrast-sensitivity baseline vision after photobleaching. Finally, a statistical dependence of recovery time on age was found for short glare angles that disappears as the glare angle increases.
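The abstract reports an exponential correlation between recovery time and target contrast but does not give the empirical expression itself; the curve-fitting sketch below, using made-up data and an assumed three-parameter exponential model, only shows how such an expression could be extracted:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (made-up) data: recovery time after photobleaching vs. contrast.
contrast = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
t_rec_s  = np.array([42.0, 27.0, 14.0, 8.5, 5.5, 4.0])

# Assumed model t(C) = a*exp(-b*C) + c0, mirroring the reported
# "exponential correlation" between recovery time and target contrast.
model = lambda c, a, b, c0: a * np.exp(-b * c) + c0
(a, b, c0), _ = curve_fit(model, contrast, t_rec_s, p0=(40.0, 3.0, 3.0))
print(f"t(C) ~ {a:.1f}*exp(-{b:.2f}*C) + {c0:.1f} s")
```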

Journal ArticleDOI
TL;DR: A fusion model of infrared and visible images with Generative Adversarial Networks (GAN) for vehicle detection, named GF-detection, is proposed; detection features and a self-attention mechanism are added to build a detection-task-driven fusion model that improves vehicle detection performance at nighttime.
Abstract: Vehicles are important targets in remote sensing applications, and nighttime vehicle detection has been a hot study topic in recent years. Vehicles in visible images at nighttime have inadequate features for object detection, while infrared images retain the contours of vehicles but lose their color information. It is therefore valuable to fuse infrared and visible images to improve vehicle detection performance at nighttime. However, designing effective fusion models remains a challenge due to the complexity of visible and infrared images. To improve vehicle detection performance at nighttime, this paper proposes a fusion model of infrared and visible images with Generative Adversarial Networks (GAN) for vehicle detection, named GF-detection. GAN has been utilized in image reconstruction and has recently been introduced into image fusion. Specifically, to exploit more features for the fusion, GAN is utilized to fuse the infrared and visible images via image reconstruction: the generator fuses the image features and detection features and then generates reconstructed images for the discriminator to classify. Two branches, visible and infrared, are designed in the GF-detection model, with different feature extraction strategies according to the characteristics of the visible and infrared images. Detection features and a self-attention mechanism are added to the fusion model, aiming to build a detection-task-driven fusion model of infrared and visible images. Extensive experiments on nighttime images demonstrate the effectiveness of the proposed fusion model in nighttime vehicle detection.
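As a rough structural sketch of the two-branch generator described above (separate visible and infrared feature stems followed by fused reconstruction), the minimal PyTorch module below omits the paper's detection-feature injection, self-attention, and discriminator; layer sizes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class FusionGenerator(nn.Module):
    """Two-branch generator sketch: separate conv stems for visible and
    infrared frames, concatenated features, then a reconstruction head."""
    def __init__(self):
        super().__init__()
        stem = lambda: nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.vis_branch, self.ir_branch = stem(), stem()
        self.head = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 1, 1), nn.Tanh())

    def forward(self, vis, ir):
        f = torch.cat([self.vis_branch(vis), self.ir_branch(ir)], dim=1)
        return self.head(f)   # reconstructed/fused image in [-1, 1]

g = FusionGenerator()
fused = g(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(fused.shape)            # (1, 1, 64, 64)
```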

Proceedings ArticleDOI
27 Mar 2022
TL;DR: The development of night vision technology and several technical methods for obtaining color night vision images, including indirect methods based on color conversion or fusion and direct methods based on hardware, are discussed, providing a reference for researchers seeking to understand color night vision technology.
Abstract: Low-light-level night vision technology, as a marker of the development of national military science and technology, has received extensive attention in recent years. However, the gray-scale images provided by traditional low-light-level night vision devices are unable to meet the requirements of modern warfare, so color night vision technology has developed accordingly. This article mainly discusses the development of night vision technology and several technical methods for obtaining color night vision images, including indirect methods based on color conversion or fusion and direct methods based on hardware, providing a reference for researchers seeking to understand color night vision technology.

Proceedings ArticleDOI
25 Jan 2022
TL;DR: In this paper, an accessory prototype is presented that turns a motorcycle helmet into a smart helmet: it increases the front field of vision and provides rear vision through cameras that project onto an LCD screen in real time, establishes a Bluetooth connection with a mobile device for audio, adds a proximity warning system that raises an alert when an object comes within a meter and a half, and includes a night warning system that makes the rider visible over a greater perimeter.
Abstract: This document presents an accessory prototype capable of modifying a motorcycle helmet, turning it into a smart helmet, with the aim of increasing the front field of vision and providing access to rear vision through cameras that project onto an LCD screen in real time. It also establishes a Bluetooth connection with a mobile device to access its audio, provides a proximity warning system that produces an alert when an entity comes within a meter and a half, and adds a night warning system that allows the driver to be seen within a greater perimeter of visibility. The helmet was evaluated in different scenarios, and it can be expected to provide safety advantages for the motorized vehicle and reduce the risks involved in vehicular mobility.

Journal ArticleDOI
TL;DR: In this article, the role of photonic crystals behind the Luneburg lens was analyzed and it was shown that they can be regarded as a retroreflector and greatly improve the light focusing intensity of the lens in a broad band of frequencies.
Abstract: It is well known that cats have fascinating eyes of various colors, such as green, blue, and brown. In addition, they possess strong night vision, distinguishing things clearly even in a poor light environment. These observations drive us to reveal the secrets behind them. In fact, cats’ eyes can be considered special lenses, which we mimic here using a Luneburg lens. We analyze the role of photonic crystals behind the lens and demonstrate that integrating photonic crystals into a Luneburg lens acts as a retroreflector and greatly improves the light-focusing intensity of the lens over a broad band of frequencies. This wonderful bioinspired phenomenon is expected to inspire the design of more interesting and serviceable devices combining photonic crystals with transformation optics.
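For reference, the classic Luneburg lens used here as the cat's-eye analogue has the gradient-index profile n(r) = √(2 − (r/R)²), which focuses an incoming plane wave onto the opposite rim of the sphere; a quick tabulation:

```python
import numpy as np

# Classic Luneburg lens gradient-index profile: n(r) = sqrt(2 - (r/R)^2).
# n = sqrt(2) at the center, falling to n = 1 at the rim.
R = 1.0
r = np.linspace(0.0, R, 6)
n = np.sqrt(2.0 - (r / R) ** 2)
for ri, ni in zip(r, n):
    print(f"r/R = {ri:.1f}  ->  n = {ni:.3f}")
```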

Book ChapterDOI
21 Aug 2022
TL;DR: In this paper, the influence of night vision goggles on depth perception and distance assessment is reviewed, focusing on the internal factors that lead to depth perception problems, such as narrow field of vision, resolution, image noise, instrument myopia, spectral sensitivity effects, and binocular vision, as well as external factors such as contrast, illumination observation conditions, and the interaction between these factors.
Abstract: Objective: To review the research progress on depth perception and distance assessment under night vision goggles, both domestically and abroad, and to understand the influence of night vision goggles on depth perception and distance assessment. Method: 35 references and reports in related fields at home and abroad were cited. Result: This paper studies the influence of night vision goggles on depth perception and distance assessment, focusing on the internal factors of night vision goggles that lead to depth perception problems, such as narrow field of vision, resolution, image noise, instrument myopia, spectral sensitivity effects, and binocular vision, as well as external factors such as contrast, illumination observation conditions, and the interaction between these factors. Finally, depth perception training is briefly summarized. Conclusions: Due to the influence of the working environment of night vision goggles and the limitations of their special structure, night vision goggles reduce the ability of airmen to judge depth and distance. The accuracy of judgment can be improved through depth perception training under night vision goggles. Keywords: Depth perception; Stereo vision; Distance assessment; Night vision goggles

Proceedings ArticleDOI
19 Aug 2022
TL;DR: In this paper, the authors used a combination method: image registration by SURF features, pixel-level fusion through YUV channels after wavelet transform, and transfer training with the YOLOv4 detection model.
Abstract: Front-view target detection on vehicles mostly adopts visible-light imaging, which suffers from low detection accuracy for cars and pedestrians under poor illumination at night. In this work, visible-light images and near-infrared images, which capture complementary light, are studied using a combination method: image registration by SURF features, pixel-level fusion through YUV channels after wavelet transform, and transfer training with the YOLOv4 detection model. Experimental results confirm that the fused NIR images at dark night, captured by a cheap silicon RGB CMOS sensor, increase information entropy by 20.4% over NIR alone and 38.9% over visible light alone, and raise the average detection accuracy from 20.0% to 82.4%.
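The registration-plus-fusion front end of this pipeline can be sketched with OpenCV and PyWavelets. ORB is used below as a freely available stand-in for the SURF features named in the paper (SURF requires the opencv-contrib build), and the fusion rules (averaged approximation band, max-abs detail bands) are common defaults rather than the paper's exact choices:

```python
import cv2
import numpy as np
import pywt

def register(nir_gray, vis_gray):
    """Warp the NIR frame onto the visible frame via feature matching and a
    RANSAC homography. ORB stands in for SURF (which needs opencv-contrib)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(nir_gray, None)
    k2, d2 = orb.detectAndCompute(vis_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(nir_gray, H, vis_gray.shape[1::-1])

def fuse_y_channel(vis_bgr, nir_gray):
    """Pixel-level fusion in YUV space: single-level Haar DWT on the Y channel,
    averaged approximation band, max-abs rule on the detail bands."""
    yuv = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2YUV)
    y = yuv[:, :, 0].astype(np.float32)
    cA1, det1 = pywt.dwt2(y, "haar")
    cA2, det2 = pywt.dwt2(nir_gray.astype(np.float32), "haar")
    cA = 0.5 * (cA1 + cA2)
    det = tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(det1, det2))
    fused_y = pywt.idwt2((cA, det), "haar")[: y.shape[0], : y.shape[1]]
    yuv[:, :, 0] = np.clip(fused_y, 0, 255).astype(np.uint8)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)
```

The fused frames would then feed the transfer training of the YOLOv4 detector, which this sketch leaves out.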


Journal ArticleDOI
TL;DR: A brief overview is given of the various night vision devices (NVDs) that enable images to be created at light levels approaching total darkness, and the different applications in which night vision is used to discern objects are specified.
Abstract: This research paper presents different ways of seeing at night. "Night vision" refers to a phenomenon that gives us a seemingly supernatural ability to see in the dark and in low-light conditions. The field combines several distinct strategies, each with its own strengths and challenges; the well-known techniques covered here are low-light imaging, thermal imaging, and active illumination. The paper also provides a brief overview of the various night vision devices (NVDs) that enable images to be created at light levels approaching total darkness and specifies different applications in which night vision is used to overcome problems caused by low-light conditions.

Proceedings ArticleDOI
27 May 2022
TL;DR: A novel modality known as monocular 3D Thermal Ranging™, based on custom HD thermal imaging and innovative AI-based computer vision algorithms, is discussed; it dramatically improves pedestrian safety to reduce accidents and save lives.
Abstract: The current de facto Automotive Driver Assist System (ADAS) sensor suite typically comprises mutually dependent visible-light cameras and radar, but when one of these sensors becomes ineffective, so too does the entire sensor suite. This scenario happens often, especially when it comes to pedestrians, cyclists, and animals at night or in inclement weather. We discuss a novel modality known as monocular 3D Thermal Ranging™ that dramatically improves pedestrian safety to reduce accidents and save lives. The solution is based on custom HD thermal imaging and innovative AI-based computer vision algorithms. Operating in the thermal spectrum, these algorithms exploit angular, temporal, and intensity data to produce ultra-dense point clouds (up to 150x that of LIDAR) along with highly refined classification for object detection and identification. We discuss how to derive ultra-high-density range maps from a monocular thermal camera running a purpose-built AI CNN and bespoke embedded optics. The resulting new sensor modality provides all the benefits of a thermal camera, including all-weather and day/night operation and instant detection of animals and vehicles, while simultaneously delivering a geospatially registered 3D range map of such density that perception stacks may enjoy unprecedented awareness. This new sensor may be an ideal complement to the ADAS & AV sensor suite, where thermal perception is sorely needed and the redundancy of real-time imaging and ranging channels will be most welcome to improve the utility, comfort, and safety of autonomous and semi-autonomous vehicles.

Journal ArticleDOI
TL;DR: A study of low-illumination, low-light night image enhancement techniques is presented, addressing reflectance, degradation, unsatisfactory lighting, noise, limited-range visibility, low contrast, color variation, illumination, color distortion, and reduced quality.
Abstract: Images are an important medium for representing meaningful information. It can be problematic for artificial intelligence, computer vision techniques, and detection algorithms to extract valuable information from images with poor lighting. In this paper, a study of low-illumination, low-light night image enhancement techniques is presented, addressing reflectance, degradation, unsatisfactory lighting, noise, limited-range visibility, low contrast, color variation, illumination, color distortion, and reduced quality. Improving images captured in low-light conditions is a prerequisite in many fields, such as surveillance systems, road safety and inland waterway transport, object tracking, scientific research, detection systems, counting systems, and navigation systems. Low-illumination and night image enhancement algorithms can advance the visual quality of low-light images so that they can be used in many practical artificial intelligence and computer vision applications. The methods used for enhancing low-illumination images must preserve details and perform contrast improvement, color correction, noise reduction, restoration, etc. Keywords: Image Enhancement, Low Illumination, Reflectance, Low Contrast, Low Light Images, Night Time Images, Low Visibility Images.
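Among the reflectance-based enhancement families this survey covers, single-scale Retinex is one of the simplest baselines: estimate the illumination with a wide Gaussian blur and subtract it in the log domain, leaving a reflectance estimate. A minimal OpenCV sketch; the sigma and rescaling choices are illustrative, not from any particular paper:

```python
import cv2
import numpy as np

def single_scale_retinex(bgr, sigma=80):
    """Single-scale Retinex: log(image) - log(blurred illumination estimate),
    rescaled to 8-bit. A minimal baseline, not a full enhancement pipeline."""
    img = bgr.astype(np.float32) + 1.0            # avoid log(0)
    illum = cv2.GaussianBlur(img, (0, 0), sigma)  # smooth illumination estimate
    r = np.log(img) - np.log(illum)               # reflectance estimate
    r = (r - r.min()) / (r.max() - r.min() + 1e-8)
    return (255 * r).astype(np.uint8)
```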

Proceedings ArticleDOI
02 Nov 2022
TL;DR: In this article , a portable short-wave infrared (SWIR) sensor system was developed aiming at vision enhancement through fog and smoke for support of emergency forces such as fire fighters or the police.
Abstract: A portable short-wave infrared (SWIR) sensor system was developed aiming at vision enhancement through fog and smoke in support of emergency forces such as fire fighters or the police. In these environments, wavelengths in the SWIR regime have superior transmission and less backscatter in comparison to the visible spectral range received by the human eye or RGB cameras. On the emitter side, the active SWIR sensor system features a light-emitting diode (LED) array consisting of 55 SWIR LEDs with a total optical power output of 280 mW, emitting at wavelengths around λ = 1568 nm with a Full Width at Half Maximum (FWHM) of 137 nm, which is more eye-safe compared to the visible range. The receiver consists of an InGaAs camera equipped with a lens whose field of view slightly exceeds the angle of radiation of the LED array. For convenient use as a portable device, a display for live video from the SWIR camera is embedded within the system. The dimensions of the system are 270 x 190 x 110 mm and the overall weight is 3470 g. The superior potential of SWIR over visible wavelengths in scattering environments is first estimated theoretically using Mie scattering theory, followed by an introduction to the SWIR sensor system, including a detailed description of its assembly and a characterisation of the illuminator regarding optical power, spatial emission profile, heat dissipation, and spectral emission. The performance of the system is then estimated by design calculations based on the lidar equation. First field experiments using a fog machine show improved performance compared to a camera in the visible range (VIS), as a result of less backscattering from the illumination and lower extinction, thus producing a clearer image.
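The abstract mentions performance estimates based on the lidar equation. A single-scatter, diffuse-target form of that estimate is sketched below; every numeric value except the 280 mW emitter power is an illustrative assumption:

```python
import math

def received_power(p_tx, rho, a_rx_m2, rng_m, alpha_per_m):
    """Single-scatter lidar-style estimate for a diffuse (Lambertian) target:
    P_r = P_t * rho * A_rx * exp(-2*alpha*R) / (pi * R^2),
    with two-way atmospheric extinction alpha."""
    return (p_tx * rho * a_rx_m2
            * math.exp(-2 * alpha_per_m * rng_m) / (math.pi * rng_m ** 2))

# Illustrative numbers only (the 0.28 W emitter power is the reported value;
# reflectance, aperture, range, and extinction are assumed):
print(received_power(p_tx=0.28, rho=0.3, a_rx_m2=2e-4, rng_m=20.0,
                     alpha_per_m=0.05))
```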

Journal ArticleDOI
TL;DR: In this paper, recent advances in facial recognition utilizing infrared as a source are described; the main issue with visible-light face identification is that the lighting on the face varies in outdoor circumstances.
Abstract: Recent advances in facial recognition utilizing infrared as a source are described. Recent research has concentrated on face identification using visible light, with the main issue being that the lighting on the face varies in outdoor circumstances. To overcome this and increase performance, recent studies employ infrared light as a source to produce infrared face pictures. The result is known as a thermal face image, and it is extremely valuable in a variety of application systems. Night surveillance systems and military applications are two settings where night vision comes into the picture. The choice of infrared band, intensity fluctuation, and angle of incidence all play crucial roles in these applications. Keywords: Face recognition, multi-spectral images, LBP, SIFT
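The LBP descriptor listed in the keywords is straightforward to compute with scikit-image; a minimal sketch on a stand-in face crop (the random array merely substitutes for a real thermal image):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram, a common texture descriptor for (thermal) faces."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=int(codes.max()) + 1, density=True)
    return hist

face = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in face crop
print(lbp_histogram(face).shape)
```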

Proceedings ArticleDOI
15 Dec 2022
TL;DR: In this article, a modified convolutional neural network based on DeepLabV3+ is proposed; to solve the problem of insufficient training samples, transfer learning and a new image enhancement strategy are used to complete the training.
Abstract: In low-illumination environments, insufficient visible light and the presence of near-infrared light cause photon noise and color distortion in the imaging of night-vision CMOS sensors. The light source strongly affects the imaging of surveillance cameras and degrades the accuracy of semantic segmentation. In this work, we report a modified convolutional neural network based on DeepLabV3+. We change the backbone of the network from Xception to MobileNetV2 to handle the real-time vision task of night-vision surveillance cameras. Linear bottlenecks and inverted residuals are adopted in MobileNetV2, greatly reducing the parameters of the network. A real-world low-light dataset with fine annotations for night-vision surveillance cameras is proposed to train and evaluate the new framework. To address the problem of insufficient training samples, transfer learning and a new image enhancement strategy are employed to complete the training. We also change the loss function to a joint loss function to further improve the segmentation results. Compared with other existing state-of-the-art algorithms, the modified neural network shows competitive performance on both subjective and objective assessments. An ablation study against the baseline model proves the effectiveness of the modifications.
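The abstract mentions switching to a joint loss function without specifying it; a common pairing for semantic segmentation is cross-entropy plus soft Dice, sketched below in PyTorch as an assumption rather than the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def joint_loss(logits, target, w_dice=0.5, eps=1e-6):
    """Cross-entropy plus soft Dice loss; the paper reports a joint loss but
    does not spell it out, so this particular pairing is an assumption."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(2, 3))
    dice = 1 - ((2 * inter + eps)
                / (probs.sum((2, 3)) + onehot.sum((2, 3)) + eps)).mean()
    return (1 - w_dice) * ce + w_dice * dice

logits = torch.randn(2, 5, 64, 64)        # batch of 2, 5 classes
target = torch.randint(0, 5, (2, 64, 64))
print(joint_loss(logits, target).item())
```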

Proceedings ArticleDOI
22 Aug 2022
TL;DR: Wang et al. proposed a license plate recognition method with night vision enhancement, which can detect and recognize license plates under extremely poor lighting conditions, and tested it on 75k images of the simulated CCPD night dataset.
Abstract: License plate recognition technology plays an important role in traffic management and is widely used in parking, highway, and road traffic management. Existing license plate recognition systems are easily disturbed by the external environment, and their detection performance is poor in night scenes. This paper investigates license plate recognition in night vision scenarios, including tilted license plates, and proposes a license plate recognition method with night vision enhancement that can detect and recognize license plates under extremely poor lighting conditions. The method first applies a night vision enhancement module, the Recursive Encoder-Decoder Network (RED-Net), with a set of non-reference loss functions designed around properties of the image, and then uses the License Plate Location and Recognition (LPLR) system to obtain the license plate number. The algorithm is tested on 75k images of the simulated CCPD night dataset. The results show that the accuracy of the license plate recognition algorithm reaches 72.29% with night vision enhancement, a 65.5% improvement over the pipeline without it.
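The abstract names a set of non-reference loss functions designed around image properties but does not define them. One widely used example of such a loss in the low-light enhancement literature is an exposure-control term that penalizes local mean brightness for deviating from a target level; the sketch below is illustrative only, not RED-Net's actual loss:

```python
import torch
import torch.nn.functional as F

def exposure_control_loss(img, target_level=0.6, patch=16):
    """Non-reference exposure loss: average-pool local luminance and penalize
    its distance from a target well-exposedness level (no ground truth needed).
    Illustrative only; not the paper's actual loss."""
    luma = img.mean(dim=1, keepdim=True)      # (B,1,H,W), img values in [0,1]
    local_mean = F.avg_pool2d(luma, patch)    # per-patch mean brightness
    return ((local_mean - target_level) ** 2).mean()

enhanced = torch.rand(2, 3, 64, 64)           # stand-in enhanced output
print(exposure_control_loss(enhanced).item())
```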

Journal ArticleDOI
TL;DR: A novel backlight unit with a night-vision-compatible function is designed, composed of an evenly distributed ORGB 4-chips-in-1 LED array and meeting the requirements of GJB 1394.
Abstract: Night vision technology plays an important role in modern war. In order to reduce the night vision radiance of LCDs, a dual-mode backlight unit (BLU) is used, which includes white LEDs and OGB color LEDs. Whether direct or side backlight, this dual-mode backlight mechanism leads to a large pitch between the physically spaced LED arrays, which increases the LCD module thickness and produces hotspots. In this paper, a novel backlight unit with a night-vision-compatible function is designed. The backlight is composed of an evenly distributed ORGB 4-chips-in-1 LED array. A 21.5-inch direct-lit LCD module was designed and assembled, with a thickness of 4.1 cm, only 61% of the previous generation. The NR value is 1.934×10⁻⁹ W/(cm²·sr·nm), meeting the requirements of GJB 1394. The color coordinates of the white field are (x 0.3087, y 0.3403), with an NTSC gamut coverage of 87.7%.
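The NR figure quoted is a night-vision (NVIS) radiance: the display's spectral radiance weighted by the image intensifier's relative spectral response and integrated over wavelength, in the style of GJB 1394 / MIL-STD-3009. A numeric sketch; the flat radiance spectrum and toy response curve are both assumptions, not the standard's actual data:

```python
import numpy as np

def nvis_radiance(wl_nm, radiance, response):
    """Trapezoidal integral of response-weighted spectral radiance over wavelength."""
    f = response * radiance
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(wl_nm)))

wl = np.arange(450.0, 931.0, 1.0)                    # nm
radiance = np.full_like(wl, 1e-12)                   # W/(cm^2*sr*nm), hypothetical
response = np.clip((wl - 600.0) / 330.0, 0.0, 1.0)   # toy NVG relative response
print(f"NR = {nvis_radiance(wl, radiance, response):.3e} W/(cm^2*sr)")
```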

Journal ArticleDOI
TL;DR: In this article, a camera system with a thermal image sensor was developed that identifies pedestrians and labels each identified pedestrian with location and distance data, using a combination of convolutional neural networks and fusion processing to provide the data that an automobile host computer needs to implement fast, safe, and accurate automatic braking.
Abstract: To react to the presence of pedestrians, an automated braking system must first find the pedestrian(s), including their location relative to the automobile in both angular position and distance. In the daytime, cameras and radar can provide the necessary information, but this combination, which requires ambient or active illumination, fails at night. Passive thermal sensors are now being enlisted to dramatically improve imaging at night, while substantial effort is underway to assure proper fusing of object information from the thermal sensor with the other sensors on the automobile. To simplify the acquisition of the information needed to make valid automated braking decisions at night, a camera system with a thermal image sensor was developed that identifies pedestrians and labels each identified pedestrian with location and distance data. The camera utilizes a single uncooled custom microbolometer sensor and a software suite implementing artificial intelligence and machine learning capabilities, running on a combination of convolutional neural networks and fusion processing, to provide the data that an automobile host computer needs to implement fast, safe, and accurate automatic braking. We present details of the system construction and operation, as well as initial test results showing the potential this technology has to dramatically reduce pedestrian fatalities at night and to augment safety across all conditions, whether day, night, fog, rain, snow, dust, sun glare, or headlight glare.

Journal ArticleDOI
TL;DR: In this article, a mathematical program in Mathcad was used to simulate the Bar Spread Function (BSF) as a target function with a range factor (the distance between the target and the thermal camera) to calculate the intensity distribution in thermal images.
Abstract: Thermal imaging cameras are widely used in military applications for their night vision capabilities at different observation ranges. Detection of a target is influenced by weather conditions. In this work, a mathematical program in Mathcad was used to simulate the Bar Spread Function (BSF) as a target function with a range factor (the distance between the target and the thermal camera) to calculate the intensity distribution in thermal images. Images from the thermal camera (PT-602CZ HD) were evaluated at ranges of (0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4) km, and the effective range was investigated. Atmospheric transmission in the infrared band at (5, 10) µm was evaluated at visibilities of (200, 300, 400, 500, 600, 700, 800) m in the presence of fog at ranges of (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8) km in the Ain Al-Tamur area of the holy city of Karbala, Iraq. Thermal images were captured by the thermal camera (PT-602CZ HD) at ranges of (0.2, 0.3, 0.4, 0.5) km in the same study area.
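Atmospheric transmission versus visibility of the kind evaluated here can be estimated with the Beer-Lambert law and the empirical Kruse extinction model, τ = exp(−σR) with σ = (3.912/V)(0.55/λ)^q; whether the paper used exactly this model is not stated, so treat the sketch as generic:

```python
import math

def ir_transmission(visibility_m, range_m, wavelength_um):
    """Kruse-model transmission estimate: Beer-Lambert tau = exp(-sigma*R)
    with sigma = (3.912/V) * (0.55/lambda)^q (V in km, lambda in um)."""
    v_km = visibility_m / 1000.0
    q = 0.585 * v_km ** (1 / 3) if v_km < 6 else 1.3  # common empirical exponent
    sigma_per_km = (3.912 / v_km) * (0.55 / wavelength_um) ** q
    return math.exp(-sigma_per_km * range_m / 1000.0)

# e.g., 500 m visibility fog, 10 um band, 400 m range:
print(f"tau = {ir_transmission(500, 400, 10.0):.3f}")
```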

Journal ArticleDOI
TL;DR: Intraocular straylight was a poor predictor of visual function and driving performance within this experiment, while contrast sensitivity (CS) was a strong predictor of both hazard recognition and halo extent.
Abstract: Purpose: To evaluate the relationship between intraocular straylight perception and (i) contrast sensitivity (CS), (ii) halo size, and (iii) hazard recognition distance, in the presence and absence of glare. Subjects and Methods: Participants were 15 (5 female) ophthalmologically healthy adults, aged 54.6–80.6 (median: 67.2) years. Intraocular straylight (log s) was measured using a straylight meter (C-Quant; Oculus GmbH, Wetzlar, Germany). CS with glare was measured clinically using the Optovist I device (Vistec Inc., Olching, Germany) and also within a driving simulator using Landolt Cs. These were presented under both static and dynamic viewing conditions, either with or without glare. Hazard detection distance was measured for simulated obstacles of varying contrast. For this, the participant was required to maintain a speed of 60 km/h within a custom-built nighttime driving simulator. Glare was simulated by LED arrays, moved by cable robots to mimic an oncoming car’s headlights. Halo size (“halometry”) was measured by moving Landolt Cs outward from the center of a static glare source. The outcome measure from “halometry” was the radius of the halo (angular extent, in degrees of visual angle). Results: The correlation between intraocular straylight perception, log s, and hazard recognition distance under glare was poor for the low-contrast obstacles (leading/subdominant eye: r = 0.27/r = 0.34). Conversely, log CS measured with glare strongly predicted hazard recognition distances under glare. This was true both when log CS was measured using a clinical device (Optovist I: r = 0.93) and within the driving simulator, under static (r = 0.69) and dynamic (r = 0.61) conditions, and also with “halometry” (r = 0.70). Glare reduced log CS and hazard recognition distance for almost all visual function parameters. Conclusion: Intraocular straylight was a poor predictor of visual function and driving performance within this experiment. Conversely, CS was a strong predictor of both hazard recognition and halo extent. The presence of glare and motion leads to a degradation of CS in a driving simulator. Future studies are necessary to evaluate the effectiveness of all the above-mentioned vision-related parameters for predicting fitness to drive under real-life conditions.