
Showing papers on "Night vision published in 2017"


Journal ArticleDOI
TL;DR: It is concluded that although various image fusion methods have been proposed, several future directions remain open in different image fusion applications, and research in the image fusion field is still expected to grow significantly in the coming years.

871 citations


Journal ArticleDOI
16 Feb 2017-Nature
TL;DR: A photovoltage field-effect transistor is demonstrated that uses silicon for charge transport but is also sensitive to infrared light owing to the use of a quantum dot light absorber, showing that colloidal quantum dots can serve as an efficient platform for silicon-based infrared detection, competitive with state-of-the-art epitaxial semiconductors.
Abstract: The detection of infrared radiation enables night vision, health monitoring, optical communications and three-dimensional object recognition. Silicon is widely used in modern electronics, but its electronic bandgap prevents the detection of light at wavelengths longer than about 1,100 nanometres. It is therefore of interest to extend the performance of silicon photodetectors into the infrared spectrum, beyond the bandgap of silicon. Here we demonstrate a photovoltage field-effect transistor that uses silicon for charge transport, but is also sensitive to infrared light owing to the use of a quantum dot light absorber. The photovoltage generated at the interface between the silicon and the quantum dot, combined with the high transconductance provided by the silicon device, leads to high gain (more than 10^4 electrons per photon at 1,500 nanometres), fast time response (less than 10 microseconds) and a widely tunable spectral response. Our photovoltage field-effect transistor has a responsivity that is five orders of magnitude higher at a wavelength of 1,500 nanometres than that of previous infrared-sensitized silicon detectors. The sensitization is achieved using a room-temperature solution process and does not rely on traditional high-temperature epitaxial growth of semiconductors (such as is used for germanium and III-V semiconductors). Our results show that colloidal quantum dots can be used as an efficient platform for silicon-based infrared detection, competitive with state-of-the-art epitaxial semiconductors.
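As a hedged back-of-the-envelope check (not a calculation from the paper), a photoconductive gain G at wavelength λ maps to a responsivity R = G·q·λ/(hc), under the simplifying assumption of unity external quantum efficiency:

```python
# Back-of-the-envelope: convert photoconductive gain to responsivity (A/W).
# Assumes unity external quantum efficiency -- an illustrative simplification,
# not a figure taken from the paper.
Q = 1.602e-19      # elementary charge, C
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s

def responsivity(gain, wavelength_m):
    """Responsivity in A/W for a detector with the given gain at wavelength_m."""
    photon_energy = H * C / wavelength_m   # energy per photon, J
    return gain * Q / photon_energy

# Gain of 1e4 electrons/photon at 1,500 nm, the regime the abstract reports:
print(round(responsivity(1e4, 1500e-9)))  # prints 12097 (~1.2e4 A/W)
```

This only illustrates the scale such a gain implies; the paper itself reports a relative improvement over earlier detectors, not this exact figure.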

187 citations


Journal ArticleDOI
TL;DR: In this article, the authors theoretically analyzed and experimentally realized a Huygens metasurface platform capable of fulfilling a diverse cross-section of optical functions in the mid-IR.
Abstract: The mid-infrared (mid-IR) is a strategically important band for numerous applications ranging from night vision to biochemical sensing. Unlike visible or near-infrared optical parts, which are commonplace and economically available off-the-shelf, mid-IR optics often require exotic materials or complicated processing, which accounts for their high cost and inferior quality compared to their visible or near-infrared counterparts. Here we theoretically analyzed and experimentally realized a Huygens metasurface platform capable of fulfilling a diverse cross-section of optical functions in the mid-IR. The meta-optical elements were constructed using high-index chalcogenide films deposited on fluoride substrates: the choice of wide-band transparent materials allows the design to be scaled across a broad infrared spectrum. Capitalizing on a novel two-component Huygens' meta-atom design, the meta-optical devices feature an ultra-thin profile ($\lambda_0/8$ in thickness, where $\lambda_0$ is the free-space wavelength) and measured optical efficiencies up to 75% in transmissive mode, both of which represent major improvements over the state of the art. We have also demonstrated, for the first time, mid-IR transmissive meta-lenses with diffraction-limited focusing and imaging performance. The projected size, weight and power advantages, coupled with the manufacturing scalability leveraging standard microfabrication technologies, make the Huygens meta-optical devices promising for next-generation mid-IR system applications.

85 citations


Journal ArticleDOI
TL;DR: This paper proposes the use of non-parametric statistical analysis for comparisons of fusion algorithms along with the Image fusion Toolbox Employing Significance Testing (ImTEST).

76 citations


Journal ArticleDOI
TL;DR: The study indicates that in the model species, in low-light conditions, fluorescence accounts for an important fraction of the total emerging light, largely enhancing brightness of the individuals and matching the sensitivity of night vision in amphibians.
Abstract: Fluorescence, the absorption of short-wavelength electromagnetic radiation reemitted at longer wavelengths, has been suggested to play several biological roles in metazoans. This phenomenon is uncommon in tetrapods, being restricted mostly to parrots and marine turtles. We report fluorescence in amphibians, in the tree frog Hypsiboas punctatus, showing that fluorescence in living frogs is produced by a combination of lymph and glandular emission, with pigmentary cell filtering in the skin. The chemical origin of fluorescence was traced to a class of fluorescent compounds derived from dihydroisoquinolinone, here named hyloins. We show that fluorescence contributes 18-29% of the total emerging light under twilight and nocturnal scenarios, largely enhancing brightness of the individuals and matching the sensitivity of night vision in amphibians. These results introduce an unprecedented source of pigmentation in amphibians and highlight the potential relevance of fluorescence in visual perception in terrestrial environments.

70 citations


Journal ArticleDOI
TL;DR: The TNO Multiband Image Collection provides intensified visual, near-infrared, and longwave infrared nighttime imagery of different military and surveillance scenarios, showing different objects and targets against a range of different backgrounds, useful for the development of static and dynamic image fusion algorithms, color fusion algorithms, multispectral target detection and recognition algorithms, and dim target detection algorithms.

63 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented (Li,Na)8Al6Si6O24(Cl,S)2:Ti, a heavy-metal- and rare-earth-free low-cost material that can give white persistent luminescence (PeL) that stays 7 h above the 0.3 mcd m−2 limit and is observable for more than 100 h with a spectrometer.
Abstract: Persistent luminescence (PeL) materials are used in everyday glow-in-the-dark applications and they show high potential for, e.g., medical imaging, night-vision surveillance, and enhancement of solar cells. However, the best performing materials contain rare earths and/or other heavy metal and expensive elements such as Ga and Ge, increasing the production costs. Here, (Li,Na)8Al6Si6O24(Cl,S)2:Ti, a heavy-metal- and rare-earth-free low-cost material is presented. It can give white PeL that stays 7 h above the 0.3 mcd m−2 limit and is observable for more than 100 h with a spectrometer. This is a record-long duration for white PeL and visible PeL without rare earths. The material has great potential to be applied in white light emitting devices (LEDs) combined with self-sustained night vision using only a single phosphor. The material also exhibits PeL in aqueous suspensions and is capable of showing easily detectable photoluminescence even in nanomolar concentrations, indicating potential for use as a diagnostic marker. Because it is excitable with sunlight, this material is expected to additionally be well-suited for outdoor applications.

57 citations


Journal ArticleDOI
TL;DR: This article, associated with the Lamb Prize, discusses the development of infrared materials for night vision and thermal imagers, useful for defense but also for civilian applications. The contribution has been particularly innovative in several areas: broadening the transparency window of chalcogenide glasses, IR glass-ceramics with high thermomechanical properties, and the design of a new mechanical-process route for synthesizing these materials.

51 citations


Journal ArticleDOI
TL;DR: This special issue brings together a unique combination of recent research on deep-sea and nocturnal animals and moreover from a wide spectrum of scientific disciplines, from ecology, evolution and quantitative visual behaviour to cellular electrophysiology, mathematical modelling and molecular biology.
Abstract: On a moonless night or in the depths of the sea, where light levels are many orders of magnitude dimmer than sunlight, animals rely on their visual systems to orient and navigate, to find food and mates and to avoid predators. To see well at such low light levels is far from trivial. The paucity of light means that visual signals generated in the light-sensitive photoreceptors of the retina can easily be drowned in neural noise. Despite this, research over the past 15 years has revealed that nocturnal and deep-sea animals—even very small animals like insects with tiny eyes and brains—can have formidable visual abilities in dim light. The latest research in the field is now beginning to reveal how this visual performance is possible, and in particular which optical and neural strategies have evolved that permit reliable vision in dim light. This flurry of research is rapidly changing our understanding of both the limitations and the capabilities of animals active in very dim light. For instance, while the long-held view was that night vision allows only an impoverished, noisy and monochrome view of the world, we now know that many nocturnal animals see the world more or less in the same manner as their day-active relatives. Many are able to see colour, to use optic flow cues to control flight, and to navigate using learned visual landmarks and celestial cues such as polarized light. Much of our appreciation of the richness of the visual world seen by nocturnal animals has derived primarily from behavioural, anatomical and optical studies. More recently, enormous advances have also been made in understanding the neural basis of this performance in both single cells and circuits of cells from both nocturnal vertebrates (notably mice) and nocturnal invertebrates (notably insects). 
These studies indicate that the remarkable behavioural performance of these animals in dim light can only partially be explained by what we currently know of the performance of the underlying visual cells. We are thus now at an important point in the field where this gap is closing. It is thus particularly timely that this special issue brings together a unique combination of recent research on deep-sea and nocturnal animals and moreover from a wide spectrum of scientific disciplines, from ecology, evolution and quantitative visual behaviour to cellular electrophysiology, mathematical modelling and molecular biology. This landmark collection of papers is the first to exclusively address the topic of comparative vision in dim light.

32 citations


Journal ArticleDOI
TL;DR: A large pupil center shift and misalignment between the visual and pupillary axis (angle kappa) may play a role in the occurrence of photic phenomena after implantation of rotationally asymmetric MIOLs.
Abstract: Aim To investigate the independent factors associated with photic phenomena in patients implanted with refractive, rotationally asymmetric, multifocal intraocular lenses (MIOLs). Methods Thirty-four eyes of 34 patients who underwent unilateral cataract surgery, followed by implantation of rotationally asymmetric MIOLs, were included. Distance and near visual acuity outcomes, intraocular aberrations, preferred reading distances, preoperative and postoperative refractive errors, mesopic and photopic pupil diameters, and the mesopic and photopic kappa angles were assessed. Patients were also administered a satisfaction survey. Photic phenomena were graded by questionnaire. Independent related factors were identified by correlation and bivariate logistic regression analyses. Results The distance from the photopic to the mesopic pupil center (pupil center shift) was significantly associated with glare/halo symptoms [odds ratio (OR)=2.065, 95% confidence interval (CI)=0.916-4.679, P=0.006] and night vision problems (OR=1.832, 95% CI=0.721-2.158, P=0.007). The preoperative photopic angle kappa was significantly associated with glare/halo symptoms (OR=2.155, 95% CI=1.065-4.362, P=0.041) and with night vision problems (OR=1.832, 95% CI=0.721-2.158, P=0.007). Conclusion A large pupil center shift and misalignment between the visual and pupillary axes (angle kappa) may play a role in the occurrence of photic phenomena after implantation of rotationally asymmetric MIOLs.

31 citations


Journal ArticleDOI
TL;DR: Major enigmas of some of the functional properties of nocturnal photoreceptors are discussed, and recent advances in methodologies that may help to solve them and broaden the field of insect vision research to new model animals are described.
Abstract: Night vision is ultimately about extracting information from a noisy visual input. Several species of nocturnal insects exhibit complex visually guided behaviour in conditions where most animals are practically blind. The compound eyes of nocturnal insects produce strong responses to single photons and process them into meaningful neural signals, which are amplified by specialized neuroanatomical structures. While a lot is known about the light responses and the anatomical structures that promote pooling of responses to increase sensitivity, there is still a dearth of knowledge on the physiology of night vision. Retinal photoreceptors form the first bottleneck for the transfer of visual information. In this review, we cover the basics of what is known about physiological adaptations of insect photoreceptors for low-light vision. We will also discuss major enigmas of some of the functional properties of nocturnal photoreceptors, and describe recent advances in methodologies that may help to solve them and broaden the field of insect vision research to new model animals. This article is part of the themed issue 'Vision in dim light'.

Proceedings ArticleDOI
19 May 2017
TL;DR: A Raspbian operating system based spy robot platform with remote monitoring and control through the Internet of Things (IoT) has been developed, which will save human lives, reduce manual error, and protect the country from enemies.
Abstract: At present, the surveillance of international border areas is a difficult task. The border guarding forces patrol the border seriously, but it is not possible to watch the border at every moment. An essential requirement in this situation is a robot that automatically detects trespassers at the border and reports them to the nearby border security control unit. Many military departments now utilize robots to carry out risky jobs that cannot be done by soldiers. In the present work, a Raspbian operating system based spy robot platform with remote monitoring and control through the Internet of Things (IoT) has been developed, which will save human lives, reduce manual error, and protect the country from enemies. The spy robot system comprises a Raspberry Pi (small single-board computer), a night vision Pi camera, and sensors. Information regarding the detection of living objects by the PIR sensor is sent to the users through the web server, and the Pi camera captures the moving object, which is posted on the webpage simultaneously. The user in the control room is able to access the robot with wheel-drive control buttons on the webpage. The movement of the robot is also controlled automatically through obstacle-detecting sensors to avoid collisions. This surveillance system using a spy robot can be customized for various fields such as industries, banks, and shopping malls.

Posted Content
TL;DR: In this article, the authors show how attackers can use surveillance cameras and infrared light to establish bi-directional covert communication between the internal networks of organizations and remote attackers, and demonstrate that data can be covertly exfiltrated from an organization at a rate of 20 bit/sec per surveillance camera to a distance of tens of meters away.
Abstract: Infrared (IR) light is invisible to humans, but cameras are optically sensitive to this type of light. In this paper, we show how attackers can use surveillance cameras and infrared light to establish bi-directional covert communication between the internal networks of organizations and remote attackers. We present two scenarios: exfiltration (leaking data out of the network) and infiltration (sending data into the network). Exfiltration. Surveillance and security cameras are equipped with IR LEDs, which are used for night vision. In the exfiltration scenario, malware within the organization accesses the surveillance cameras across the local network and controls the IR illumination. Sensitive data such as PIN codes, passwords, and encryption keys are then modulated, encoded, and transmitted over the IR signals. Infiltration. In an infiltration scenario, an attacker standing in a public area (e.g., in the street) uses IR LEDs to transmit hidden signals to the surveillance camera(s). Binary data such as command and control (C&C) and beacon messages are encoded on top of the IR signals. The exfiltration and infiltration can be combined to establish bidirectional, 'air-gap' communication between the compromised network and the attacker. We discuss related work and provide scientific background about this optical channel. We implement a malware prototype and present data modulation schemas and a basic transmission protocol. Our evaluation of the covert channel shows that data can be covertly exfiltrated from an organization at a rate of 20 bit/sec per surveillance camera to a distance of tens of meters away. Data can be covertly infiltrated into an organization at a rate of over 100 bit/sec per surveillance camera from a distance of hundreds of meters to kilometers away.
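The kind of framing such a channel requires can be sketched with simple on-off keying, where each bit sets the LED state for one symbol period. This is a hypothetical illustration, not the authors' actual modulation scheme or protocol; the names `frame`, `deframe`, and the preamble pattern are invented here:

```python
# Hypothetical on-off keying (OOK) framing for an IR-LED covert channel:
# each bit becomes an LED state held for one symbol period. Illustrative
# sketch only -- not the modulation scheme or protocol from the paper.
PREAMBLE = [1, 0, 1, 0]   # fixed pattern so a receiver can lock on

def to_bits(data: bytes) -> list[int]:
    """MSB-first bit expansion of a byte string."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def frame(data: bytes) -> list[int]:
    """Preamble + payload bits; each entry drives the IR LED for one symbol."""
    return PREAMBLE + to_bits(data)

def deframe(symbols: list[int]) -> bytes:
    """Strip the preamble and repack the payload bits into bytes."""
    payload = symbols[len(PREAMBLE):]
    out = bytearray()
    for i in range(0, len(payload), 8):
        byte = 0
        for bit in payload[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

sent = frame(b"PIN1234")
print(deframe(sent))  # b'PIN1234'
```

At the reported 20 bit/sec exfiltration rate, this 60-symbol frame (4 preamble + 56 payload bits) would take about 3 seconds to transmit.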

Journal ArticleDOI
TL;DR: This cross-sectional analysis demonstrates associations of patient-reported functional deficits, as assessed on the Low Luminance Questionnaire, with both reduced DA and reduced choroidal thickness, in a population of older adults with varying degrees of AMD severity and good visual acuity in at least 1 eye.

Journal ArticleDOI
TL;DR: A novel all-organic upconversion device architecture has been first proposed and developed by incorporating a NIR absorption layer between the carrier transport layer and the emission layer in heterostructured organic light-emitting field effect transistors (OLEFETs).
Abstract: Near-infrared (NIR) to visible upconversion devices have attracted great attention because of their potential applications in the fields of night vision, medical imaging, and military security. Herein, a novel all-organic upconversion device architecture is first proposed and developed by incorporating a NIR absorption layer between the carrier transport layer and the emission layer in heterostructured organic light-emitting field-effect transistors (OLEFETs). The as-prepared devices show a typical photon-to-photon upconversion efficiency as high as 7% (maximum of 28.7% under low incident NIR power intensity) and millisecond-scale response time, which represent the highest upconversion efficiency and one of the fastest response times reported to date for organic upconversion devices. The high upconversion performance mainly originates from the gain mechanism of field-effect transistor structures and the unique advantage of OLEFETs to balance between the photodetection and...

Journal ArticleDOI
TL;DR: The broad loss of Tmem30a in adult mice led to a reduced scotopic photoresponse, mislocalization of ATP8A2 to the inner segment and cell body, and increased apoptosis in the retina, which demonstrated novel essential roles of Tmem30a in the retina.
Abstract: Phosphatidylserine (PS) is asymmetrically distributed between the outer and inner leaflets of the plasma membrane in eukaryotic cells. PS asymmetry on the plasma membrane depends on the activities of P4-ATPases, and disruption of PS distribution can lead to various disease conditions. Folding and transporting of P4-ATPases to their cellular destination requires the β subunit TMEM30A proteins. However, the in vivo functions of Tmem30a remain unknown. To this end, we generated retinal-specific Tmem30a-knockout mice to investigate its roles in vivo for the first time. Our data demonstrated that loss of Tmem30a in mouse cone cells leads to mislocalization of cone opsin, loss of photopic electroretinogram (ERG) responses and loss of cone cells. Mechanistically, Tmem30a-mutant mouse embryonic fibroblasts (MEFs) exhibited diminished PS flippase activity and increased exposure of PS on the cell surface. The broad loss of Tmem30a in adult mice led to a reduced scotopic photoresponse, mislocalization of ATP8A2 to the inner segment and cell body, and increased apoptosis in the retina. Our data demonstrated novel essential roles of Tmem30a in the retina.

Journal ArticleDOI
TL;DR: An easy and cost-effective process is presented to fabricate flexible and ultrathin electrolyte-gated organic phototransistors with highly transparent nanocomposite membranes of high-conductivity silver nanowire (AgNW) networks and large-capacitance iontronic films, opening the possibility of organic photosensors for constructing cost-effective and smart optoelectronic systems in the future.
Abstract: Flexible and low-voltage photosensors with high near-infrared (NIR) sensitivity are critical for realizing interaction between humans, robots, and environments through thermal imaging or night vision techniques. In this work, we for the first time develop an easy and cost-effective process to fabricate flexible and ultrathin electrolyte-gated organic phototransistors (EGOPTs) with highly transparent nanocomposite membranes of high-conductivity silver nanowire (AgNW) networks and large-capacitance iontronic films. A high responsivity of 1.5 × 10^3 A·W^-1, a high sensitivity of 7.5 × 10^5, and a 3 dB bandwidth of ∼100 Hz can be achieved at very low operational voltages. Experimental studies of the temporal photoresponse characteristics reveal that the device has a shorter photoresponse time at lower light intensity, since strong interactions between photoexcited hole carriers and anions induce extra long-lived trap states. The devices, benefiting from fast and air-stable operation, open the possibility of organic photosensors for constructing cost-effective and smart optoelectronic systems in the future.

Journal ArticleDOI
TL;DR: It is believed that high contrast and visual resolution in daylight are provided by the quantum mechanism of energy transfer in the form of excitons, whereas the ultimate retinal sensitivity of night vision is provided by the classical mechanism of photons transmitted by the Müller cell light-guides.
Abstract: Presently we continue our studies of the quantum mechanism of light energy transmission in the form of excitons by axisymmetric nanostructures with electrically conductive walls. Using our theoretical model, we analyzed the light energy transmission by biopolymers forming optical channels within retinal Müller cells. There are specialized intermediate filaments (IF) 10-18 nm in diameter, built of electrically conductive polypeptides. Presently, we analyzed the spectral selectivity of these nanostructures. We found that their transmission spectrum depends on their diameter and wall thickness. We also considered the classical approach, comparing the results with those predicted by the quantum mechanism. We performed experimental measurements on model quantum waveguides, made of rectangular nanometer-thick chromium (Cr) tracks. The optical spectrum of such waveguides varied with their thickness. We compared the experimental absorption/transmission spectra with those predicted by our model, with good agreement between the two. We report that the observed spectra may be explained by the same mechanisms as those operating in metal nanolayers. Both the models and the experiment show that Cr nanotracks have high light transmission efficiency in a narrow spectral range, with the spectral maximum dependent on the layer thickness. Therefore, a set of intermediate filaments with different geometries may provide light transmission over the entire visible spectrum with a very high (~90%) efficiency. Thus, we believe that high contrast and visual resolution in daylight are provided by the quantum mechanism of energy transfer in the form of excitons, whereas the ultimate retinal sensitivity of night vision is provided by the classical mechanism of photons transmitted by the Müller cell light-guides.

Proceedings ArticleDOI
13 Mar 2017
TL;DR: Results indicate that the non-intrusive approach based on infrared night vision cameras and video magnification method can accurately extract heart and respiration rates, and can be used for continuous healthcare monitoring in the night.
Abstract: With the aging of the population, comes increased incidence of chronic diseases affecting the cardiac and respiratory systems. Monitoring of these chronic conditions at home via family members or in institutions via healthcare providers is usually adequate during the day. Non-intrusive video-based monitoring approaches have been proposed using optical cameras whose performance significantly deteriorates in low light conditions. This paper proposed the use of infrared night vision cameras to monitor the heart and respiration rates in low light conditions and in complete darkness. An infrared camera in conjunction with video magnification method is used to capture and analyze the video of subjects in dark conditions. To validate the extracted heart rate, a finger photoplethysmograph (PPG) device that can display the real-time heart rate was used. To validate the respiration rate a BioHarness chest strap was used. The proposed framework was tested on different sizes of regions of interest (ROIs) and different distances between the subject and the camera. A post-processing procedure was applied on the video magnification signal to reduce noise. To characterize and rule out artifacts, an experiment on inanimate objects was also conducted. Results indicate that the non-intrusive approach based on infrared night vision cameras and video magnification method can accurately extract heart and respiration rates, and can be used for continuous healthcare monitoring in the night.
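The final rate-extraction step can be illustrated by scanning the cardiac frequency band for the dominant component of a brightness signal. This is a minimal sketch that assumes the video-magnified region-of-interest signal has already been extracted; the paper's magnification and post-processing stages are not reproduced:

```python
# Sketch: estimate heart rate from the mean brightness of an ROI over time
# by picking the dominant frequency in the cardiac band (0.7-3 Hz).
# Illustrative only -- the paper additionally applies video magnification
# and noise post-processing before this step.
import math

def dominant_rate_bpm(signal, fps, lo_hz=0.7, hi_hz=3.0):
    n = len(signal)
    mean = sum(signal) / n
    xs = [s - mean for s in signal]            # remove the DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):                 # naive DFT power scan
        f = k * fps / n
        if not (lo_hz <= f <= hi_hz):
            continue
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(xs))
        im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(xs))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return best_f * 60                         # beats per minute

# Synthetic 10 s clip at 30 fps with a 1.2 Hz (72 bpm) pulse component:
fps, n = 30, 300
sig = [0.05 * math.sin(2 * math.pi * 1.2 * t / fps) + 100 for t in range(n)]
print(dominant_rate_bpm(sig, fps))  # 72.0
```

In practice an FFT (e.g. a radix-2 implementation) would replace the O(n²) scan, and respiration rate could be obtained the same way over a lower band (roughly 0.1-0.5 Hz).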

Proceedings ArticleDOI
09 Oct 2017
TL;DR: In this article, both young and old drivers were asked to drive at night on a test track while distance and accuracy of target detection, subjective workload, and longitudinal and lateral control of the vehicle were measured.
Abstract: Infrared night vision systems have the potential to improve visibility of critical objects at night well beyond the levels that can be achieved with low-beam headlamps. This could be especially valuable for older drivers, who have difficulty seeing at night and who are especially sensitive to glare. It is unclear whether this benefit comes without ancillary costs, such as additional workload to monitor and interpret the forward view depicted by the night vision system. In this study, young and old subjects were asked to drive at night on a test track while distance and accuracy of target detection, subjective workload, and longitudinal and lateral control of the vehicle were measured. In some conditions, their direct view of the road was supplemented by a far infrared (FIR) night vision system. Two display configurations were used with the night vision system: a head-up display mounted above the dashboard and centered on the driver, and a head-down display mounted lower and near the vehicle midline. Night vision systems increased target detection distance for both young and old drivers, with noticeably more benefit for younger drivers. Workload measures did not differ between the unassisted visual detection task and the detection tasks assisted by night vision systems, suggesting that the added workload imposed by the night vision system in this study is small.

Journal ArticleDOI
TL;DR: Nightly activity patterns of night-active prey seem to be less strongly linked to avoidance of predation than previously thought, suggesting that foraging and predator detection benefits may play a more important role than usually acknowledged.

Patent
10 May 2017
TL;DR: In this paper, a deep convolution-deconvolution neural network (DCNN) was used for scene identification in night vision images using a multi-classification algorithm.
Abstract: The invention relates to a night vision image scene identification method based on a deep convolution-deconvolution neural network. The method comprises the steps of S1, establishing a night vision image data set; S2, carrying out mirror symmetry processing on the original sample images; S3, establishing the deep convolution-deconvolution neural network; S4, obtaining to-be-processed images of size h*w in real time and inputting them into the deep convolution-deconvolution neural network, thereby obtaining feature graphs of size h*w; and S5, dividing the objects in the night vision images into k different classes, determining the class to which each pixel in the feature graphs obtained in S4 belongs through a multi-classification algorithm, and outputting probability graphs of size h*w*k. According to the method, the scene perception of night vision images is clearly improved, the target identification efficiency is improved, and the manual operation complexity is reduced.
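The per-pixel classification step (S5 above, mapping a k-channel feature graph to class probabilities) can be sketched as a softmax followed by an argmax at each pixel. This is a generic illustration of that step, not code from the patent, and the network producing the scores is not shown:

```python
# Sketch of step S5: turn a k-channel score map (h x w x k) into per-pixel
# class probabilities via softmax, then pick the most likely class per pixel.
# Generic illustration only -- not the patent's implementation.
import math

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def classify(score_map):
    """score_map: h x w x k nested lists -> (h x w class ids, h x w x k probs)."""
    probs = [[softmax(px) for px in row] for row in score_map]
    labels = [[max(range(len(px)), key=px.__getitem__) for px in row]
              for row in probs]
    return labels, probs

# Tiny 1x2 "image" with k=3 classes:
scores = [[[2.0, 0.5, 0.1], [0.0, 3.0, 1.0]]]
labels, probs = classify(scores)
print(labels)  # [[0, 1]]
```

Each inner probability vector sums to 1, so the h*w*k output can be read directly as the per-pixel class probability graph the abstract describes.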

Journal ArticleDOI
TL;DR: Visual performance, CS function, marksmanship, and threshold target identification demonstrated no statistically significant differences over time between the two treatments, translating to excellent and comparable visual and military performance.
Abstract: Purpose: To compare visual performance, marksmanship performance, and threshold target identification following wavefront-guided (WFG) versus wavefront-optimized (WFO) photorefractive keratectomy (PRK). Methods: In this prospective, randomized clinical trial, active duty U.S. military Soldiers, age 21 or over, electing to undergo PRK were randomized to undergo WFG (n = 27) or WFO (n = 27) PRK for myopia or myopic astigmatism. Binocular visual performance was assessed preoperatively and 1, 3, and 6 months postoperatively: Super Vision Test high contrast, Super Vision Test contrast sensitivity (CS), and 25% contrast acuity with night vision goggle filter. CS function was generated testing at five spatial frequencies. Marksmanship performance in low light conditions was evaluated in a firing tunnel. Target detection and identification performance was tested for probability of identification of varying target sets and probability of detection of humans in cluttered environments. Results: Visual perfo...

Journal ArticleDOI
TL;DR: Patients with PACG associated with RP had the same biometric parameter characteristics as patients with CPACG and APACG, which may suggest that the relationship between RP and angle-closure glaucoma is coincidental.
Abstract: Background. Retinitis pigmentosa (RP) comprises a group of inherited disorders in which patients typically lose night vision in adolescence and then lose peripheral vision in young adulthood before eventually losing central vision later in life. A retrospective case-control study was performed to evaluate differences in ocular biometric parameters in primary angle-closure glaucoma (PACG) patients with and without concomitant RP to determine whether a relationship exists between PACG and RP. Methods. We used ultrasound biomicroscopy (UBM) to measure anterior chamber depth (ACD). A-scan biometry was carried out to measure lens thickness (LT) and axial length (AL). Propensity score matching and mixed linear regression model analysis were conducted. 23 patients with chronic primary angle-closure glaucoma (CPACG) associated with RP, 21 patients with acute primary angle-closure glaucoma (APACG) associated with RP, 270 patients with CPACG, and 269 patients with APACG were recruited for this study. Results. There were no significant differences in ACDs, ALs, and relative lens position (RLP) ( ) between patients with PACG associated with RP and patients with PACG; however, patients with APACG associated with RP had a significantly greater LT than patients with APACG ( ). Conclusion. Patients with PACG associated with RP had the same biometric parameter characteristics as patients with CPACG and APACG. This may suggest that the relationship between RP and angle-closure glaucoma is coincidental.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: The design methodology of the NVIL system is presented and some experimental results obtained when the system is used for imaging the targets during complete dark conditions in the field are presented.
Abstract: Night Vision Imaging Lidar system technology finds many applications, including public safety, surveillance, and defense. This paper describes the constructional features of the Night Vision Imaging Lidar (NVIL) system developed in the laboratory for imaging targets with at least 90% recognition capability up to a range of about 2 km for field use in night time and up to a range of about 4 km in day time. The system is capable of continuous monitoring and video storage. This lidar system is based on a compact Nd:YAG pulsed laser and uses a novel transmitter-receiver configuration. The system can also be used for the recognition of people at strategic places for security applications. We present the design methodology of the NVIL system and some experimental results obtained when the system is used for imaging targets during complete dark conditions in the field. The gating technology is also investigated, which improves the system's capability for accurate detection of targets even under bad weather conditions.
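The gating technology mentioned above works by opening the receiver only during the round-trip time window of light returning from the range of interest, which suppresses near-field backscatter (e.g., from fog). A minimal sketch of the timing arithmetic, with illustrative values not taken from the NVIL system itself:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_timing(range_m: float, slice_depth_m: float) -> tuple[float, float]:
    """Return (gate_open_delay_s, gate_width_s) for a range-gated receiver.

    The gate opens after the round-trip delay 2R/c to the near edge of the
    target slice and stays open long enough to cover the slice depth.
    """
    delay = 2.0 * range_m / C        # round-trip time to the slice
    width = 2.0 * slice_depth_m / C  # time span covering the slice depth
    return delay, width

# A 2 km target with a 30 m depth slice gives a delay of roughly 13.3 us
# and a gate width of roughly 0.2 us.
delay, width = gate_timing(2_000.0, 30.0)
```

Returns arriving outside the gate window (such as scatter from intervening fog or rain) are simply never integrated, which is why gating helps in bad weather.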

Journal ArticleDOI
TL;DR: The proposed algorithm, based on two-scale decomposition and modified Frei-Chen operators, fuses images acquired from infrared and visible image sensors, and its corresponding hardware implementation is suitable for low-power, real-time image fusion applications.
Abstract: Real-time fusion of images acquired from multiple sensors is significant in various fields, including military and aviation, to reduce the uncertainty in the acquired images and for wider temporal and spatial coverage. Current approaches to multi-sensor image fusion have high computational complexity and are difficult to implement in hardware. This paper presents a method based on two-scale decomposition and modified Frei-Chen operators to fuse images acquired from infrared and visible image sensors, along with its corresponding hardware implementation. The proposed method achieves 48%, 15%, and 100% improvements in total edge transfers, structural similarity, and night vision contrast, respectively, with respect to those of the latest publications known to the authors. The corresponding hardware architecture, synthesized using the Xilinx tool, is shown to consume 4% of the resources in the Virtex 4 field programmable gate array (FPGA-xc4vlx200). The proposed architecture has a throughput of one unit per clock cycle and is able to process 30 high definition images/sec. The proposed architecture is also analyzed using the Synopsys Design Vision tool with the 90-nm UMC standard complementary metal-oxide-semiconductor cell library. It is found that the architecture consumes 251.6 mW of power and has an area equivalent to 580K NAND2 gates. The lower hardware resource requirement and support of parallelism and pipelining make the proposed algorithm suitable for low-power, real-time image fusion applications.
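Two-scale decomposition splits each source image into a low-frequency base layer and a high-frequency detail layer, fuses the layers with different rules, and recombines them. The sketch below shows the general scheme with deliberately simple rules (average of bases, max-absolute selection of details); it does not reproduce the paper's modified Frei-Chen edge weighting, and all function names are illustrative:

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Base layer via a simple k x k mean filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_two_scale(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Fuse an IR and a visible image via two-scale decomposition."""
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    base_ir, base_vis = box_blur(ir), box_blur(vis)
    det_ir, det_vis = ir - base_ir, vis - base_vis   # detail layers
    fused_base = 0.5 * (base_ir + base_vis)           # average base layers
    # Keep, per pixel, whichever detail coefficient is stronger.
    fused_det = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return fused_base + fused_det
```

A hardware-friendly variant would replace the mean filter with shift-and-add arithmetic and the selection rule with edge-strength weights, which is closer to the paper's low-complexity design goal.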

Proceedings ArticleDOI
22 Mar 2017
TL;DR: Two algorithms for detecting humans in night vision videos are reviewed: the proposed hot-spot algorithm uses black-body radiation theory, and the background subtraction algorithm uses the difference image obtained from the input image and a generated background image.
Abstract: Human detection has long been a challenge in many areas of computer vision automation. The problem gains further importance when the scene under consideration is captured at night. Surveillance is one such application with a serious requirement for a night vision mechanism. An automated system for human detection in night vision could assist surveillance at any sensitive location. This paper reviews two algorithms for detecting human beings in night vision videos. The proposed hot-spot algorithm uses black-body radiation theory, and the background subtraction algorithm uses the difference image obtained from the input image and a generated background image. The results of experiments performed on both approaches are analyzed.
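The background subtraction step described above (difference image between the input frame and a generated background) can be sketched as follows; the thresholding constant, the running-average background model, and the function names are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def detect_foreground(frame: np.ndarray, background: np.ndarray,
                      thresh: float = 25.0) -> np.ndarray:
    """Binary mask of pixels whose absolute difference from the
    background model exceeds a threshold (the 'difference image')."""
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    return diff > thresh

def update_background(background: np.ndarray, frame: np.ndarray,
                      alpha: float = 0.05) -> np.ndarray:
    """Generate/refresh the background as a running average of frames,
    so slow illumination changes are absorbed into the model."""
    return (1.0 - alpha) * background.astype(np.float64) \
         + alpha * frame.astype(np.float64)
```

In a full detector, the foreground mask would then be cleaned with morphology and filtered by blob size/aspect ratio before declaring a human detection.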

Patent
11 May 2017
TL;DR: In this paper, a color night vision system including a single-4-color image sensor configured to acquire a red, green, blue (RGB) image and an infrared (IR) image by processing RGB light signals and an IR light signal for each wavelength.
Abstract: Disclosed is a color night vision system including a single-4-color image sensor configured to acquire a red, green, blue (RGB) image and an infrared (IR) image by processing RGB light signals and an IR light signal for each wavelength; and a processor configured to determine an exposure state of the RGB image by analyzing a brightness distribution of the RGB image, to decide at least one of an exposure compensation level of the RGB image, a denoising level of the RGB image, or a synthesis ratio between the RGB image and the IR image based on the determination result, and to create an output image based on the decision result that is made using the RGB image and the IR image.
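The patent's key decision, choosing a synthesis ratio between the RGB and IR images from the RGB exposure state, can be illustrated with a toy heuristic: the darker the RGB frame, the more IR is mixed into the output. The linear mapping and all names below are illustrative assumptions, not the claimed method:

```python
import numpy as np

def ir_blend_ratio(rgb_luma: np.ndarray, target: float = 128.0) -> float:
    """Heuristic synthesis ratio from the RGB brightness distribution:
    0.0 means output is all IR, 1.0 means output is all RGB."""
    mean = float(rgb_luma.mean())
    return min(max(mean / target, 0.0), 1.0)

def blend(rgb_luma: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Create the output image as a weighted sum of RGB luma and IR."""
    w = ir_blend_ratio(rgb_luma)
    return w * rgb_luma.astype(np.float64) + (1.0 - w) * ir.astype(np.float64)
```

The patent additionally conditions denoising level and exposure compensation on the same brightness analysis; those stages are omitted here for brevity.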

Journal ArticleDOI
TL;DR: This study explored heritability of retinal electrophysiologic parameters and included measurements reflecting ganglion cell function, indicating that genetic factors are important, determining up to 85% of the variance in some cone system response parameters.

Patent
25 Jul 2017
TL;DR: In this article, an infrared image and radar data-based night unmanned parking lot scene depth estimation method is presented. But, the method is not suitable for real-time applications.
Abstract: The present invention provides an infrared image and radar data-based night unmanned parking lot scene depth estimation method. According to the method, firstly, a night vision image data set is established, comprising original sample images and radar data obtained by pre-classifying the original sample images; the original sample images and the radar data are written into corresponding text files. Secondly, a deep convolution/deconvolution neural network is constructed and trained using the night vision image data set. Thirdly, a to-be-processed image is acquired in real time and input into the network. The convolution network produces a feature map; the feature map is input into the deconvolution network, which yields the category of each pixel point, after which a probability map is output. Finally, the probability map is subjected to an anti-log transformation to obtain the estimated depth of each pixel point. Tests show that the method provided by the invention can effectively estimate the depth of a night scene while ensuring both estimation accuracy and real-time performance.
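The per-pixel classification plus anti-log transformation described above amounts to binning depth on a logarithmic scale and mapping each predicted bin index back to metric depth. A minimal sketch of that final step, with bin count and depth range chosen purely for illustration (the patent does not specify them here):

```python
import math

def depth_from_class(class_idx: int, n_classes: int = 32,
                     d_min: float = 1.0, d_max: float = 50.0) -> float:
    """Map a discrete depth-bin index back to metric depth, assuming the
    bins are uniform in log-depth space (hence the anti-log transform)."""
    t = class_idx / (n_classes - 1)  # position 0..1 across the bin range
    log_d = math.log(d_min) + t * (math.log(d_max) - math.log(d_min))
    return math.exp(log_d)           # anti-log: back to meters
```

Log-spaced bins give finer resolution at near range, where parking maneuvers need it most, which is a common motivation for this discretization.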