
Showing papers on "Digital camera published in 2020"


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A highly accurate noise formation model based on the characteristics of CMOS photosensors is presented, thereby enabling us to synthesize realistic samples that better match the physics of the image formation process.
Abstract: Lacking rich and realistic data, learned single image denoising algorithms generalize poorly to real raw images that do not resemble the data used for training. Although the problem can be alleviated by the heteroscedastic Gaussian noise model, the noise sources caused by digital camera electronics are still largely overlooked, despite their significant effect on raw measurements, especially under extremely low-light conditions. To address this issue, we present a highly accurate noise formation model based on the characteristics of CMOS photosensors, thereby enabling us to synthesize realistic samples that better match the physics of the image formation process. Given the proposed noise model, we additionally propose a method to calibrate the noise parameters for available modern digital cameras, which is simple and reproducible for any new device. We systematically study the generalizability of a neural network trained with existing schemes by introducing a new low-light denoising dataset that covers many modern digital cameras from diverse brands. Extensive empirical results collectively show that, by utilizing our proposed noise formation model, a network can reach the same capability as if it had been trained with rich real data, which demonstrates the effectiveness of our noise formation model.
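As a rough illustration of the noise-synthesis idea this entry builds on, here is a minimal sketch of a heteroscedastic (Poisson-Gaussian) raw noise simulator with row banding and quantization added as simplified stand-ins for the sensor-electronics effects the authors model; all parameter values are illustrative assumptions, not the paper's calibrated ones.

```python
import numpy as np

def synthesize_raw_noise(clean_raw, K=0.01, read_sigma=0.002,
                         row_sigma=0.0005, quant_step=1.0 / 1023,
                         rng=np.random.default_rng(0)):
    """Add simplified sensor noise to a clean raw image in [0, 1].

    K          -- system gain linking signal level to shot-noise variance
    read_sigma -- Gaussian read-noise standard deviation
    row_sigma  -- per-row (banding) noise standard deviation
    quant_step -- ADC quantization step
    All values are illustrative, not calibrated for any real camera.
    """
    # Shot noise: photon counts are Poisson distributed
    photons = rng.poisson(np.clip(clean_raw, 0, None) / K)
    noisy = photons * K
    # Read noise: signal-independent Gaussian noise
    noisy = noisy + rng.normal(0.0, read_sigma, clean_raw.shape)
    # Row noise: one Gaussian offset shared by each image row
    noisy = noisy + rng.normal(0.0, row_sigma, (clean_raw.shape[0], 1))
    # Quantization by the ADC
    noisy = np.round(noisy / quant_step) * quant_step
    return np.clip(noisy, 0.0, 1.0)

clean = np.full((4, 6), 0.05)          # a dim, flat patch
print(synthesize_raw_noise(clean))
```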

129 citations


Journal ArticleDOI
TL;DR: In this article, a multi-view photometric stereo (MVPS) technique was proposed to capture both 3D shape and spatially varying reflectance with a single camera and an optional automatic turntable.
Abstract: We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo (MVPS) technique that works for general isotropic materials. Our algorithm is suitable for perspective cameras and nearby point light sources. Our data capture setup is simple, consisting of only a digital camera, some LED lights, and an optional automatic turntable. From a single viewpoint, we use a set of photometric stereo images to identify surface points with the same distance to the camera. We collect this information from multiple viewpoints and combine it with structure-from-motion to obtain a precise reconstruction of the complete 3D shape. The spatially varying isotropic bidirectional reflectance distribution function (BRDF) is captured by simultaneously inferring a set of basis BRDFs and their mixing weights at each surface point. In experiments, we demonstrate our algorithm with two different setups: a studio setup for the highest precision and a desktop setup for the best usability. According to our experiments, under the studio setting, the captured shapes are accurate to 0.5 millimeters and the captured reflectance has a relative root-mean-square error (RMSE) of 9%. We also quantitatively evaluate state-of-the-art MVPS methods on a newly collected benchmark dataset, which is publicly available to inspire future research.
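For background, here is a minimal sketch of classical single-view Lambertian photometric stereo, the textbook building block that per-pixel normal and albedo recovery from multiple lights reduces to; it assumes distant lights and a Lambertian surface, and is not the iso-depth, multi-view method of the paper.

```python
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Estimate per-pixel normals and albedo from images under known
    distant lights, assuming a Lambertian surface (a classical baseline,
    not the iso-depth MVPS method of the paper).

    images     -- array of shape (n_lights, H, W), linear intensities
    light_dirs -- array of shape (n_lights, 3), unit light directions
    """
    n, h, w = images.shape
    I = images.reshape(n, -1)                            # (n, H*W)
    # Least-squares solve L @ g = I for g = albedo * normal per pixel
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Tiny synthetic check: a flat surface facing the camera
lights = np.array([[0, 0, 1.0], [0.5, 0, 0.866], [0, 0.5, 0.866]])
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.stack([np.full((2, 2), l @ true_n) for l in lights])
normals, albedo = lambertian_photometric_stereo(imgs, lights)
print(normals[:, 0, 0], albedo[0, 0])   # approx [0, 0, 1], 1.0
```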

29 citations


Posted Content
TL;DR: A method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique that works for general isotropic materials and quantitatively evaluate state-of-the-art MVPS on a newly collected benchmark dataset is presented.
Abstract: We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo (MVPS) technique that works for general isotropic materials. Our algorithm is suitable for perspective cameras and nearby point light sources. Our data capture setup is simple, consisting of only a digital camera, some LED lights, and an optional automatic turntable. From a single viewpoint, we use a set of photometric stereo images to identify surface points with the same distance to the camera. We collect this information from multiple viewpoints and combine it with structure-from-motion to obtain a precise reconstruction of the complete 3D shape. The spatially varying isotropic bidirectional reflectance distribution function (BRDF) is captured by simultaneously inferring a set of basis BRDFs and their mixing weights at each surface point. In experiments, we demonstrate our algorithm with two different setups: a studio setup for the highest precision and a desktop setup for the best usability. According to our experiments, under the studio setting, the captured shapes are accurate to 0.5 millimeters and the captured reflectance has a relative root-mean-square error (RMSE) of 9%. We also quantitatively evaluate state-of-the-art MVPS methods on a newly collected benchmark dataset, which is publicly available to inspire future research.

28 citations


Journal ArticleDOI
TL;DR: An efficient color sensor design is proposed that uses a vertically stacked arrangement of perovskite diodes, and it is theoretically shown that both the vertically stacked sensor and the conventional color filter-based sensor provide almost comparable color errors.
Abstract: Color image sensing by a smartphone or digital camera employs sensor elements with an array of color filters for capturing basic blue, green, and red color information. However, the normalized optical efficiency of such color filter-based sensor elements is limited to only one-third. Optical detectors based on perovskites are described, which can overcome this limitation. An efficient color sensor design is proposed in this study that uses a vertically stacked arrangement of perovskite diodes. Compared to conventional color filter-based sensors, the proposed sensor structure can potentially reach a normalized optical efficiency approaching 100%. In addition, the proposed sensor design does not exhibit color aliasing or color Moiré effects, which is one of the main limitations of filter-based sensors. Furthermore, to our knowledge, it is theoretically shown for the first time that both the vertically stacked sensor and the conventional color filter-based sensor provide almost comparable color errors. The optical properties of the perovskite materials are determined by optical measurements in combination with an energy shift model. The optics of the stacked perovskite sensors is investigated by three-dimensional finite-difference time-domain simulations. Finally, a colorimetric characterization was carried out to determine the color error of the sensors.

27 citations


Journal ArticleDOI
TL;DR: This paper shows how to make better use of the multi-spectral capabilities of commercial digital cameras, shows their application for airglow analysis, and recommends a novel sky quality metric, the “Dark Sky Unit”, based on an easily usable and SI-traceable unit.
Abstract: Multi-spectral imaging radiometry of the night sky provides essential information on light pollution (skyglow) and sky quality. However, due to the different spectral sensitivities of the devices used for light pollution measurement, the comparison of different surveys is not always trivial. In addition to the differences between measurement approaches, there is a strong variation in natural sky radiance due to changes in airglow. Thus, especially at dark locations, the classical measurement methods (such as Sky Quality Meters) fail to provide consistent results. In this paper, we show how to make better use of the multi-spectral capabilities of commercial digital cameras and show their application for airglow analysis. We further recommend a novel sky quality metric, the “Dark Sky Unit”, based on an easily usable and SI-traceable unit. This unit is a natural choice for consistent, digital camera-based measurements. We also present our camera system calibration methodology for use with the introduced metrics.

27 citations


Journal ArticleDOI
TL;DR: Images processed by the proposed color correction method and a Retinex-based enhancement method with dense pixels and adaptive linear histogram transformation for degraded, color-biased underwater images have clearer details and a uniform visual effect across all channels in RGB color space, and the method also achieves good performance metrics.
Abstract: Color correction and enhancement for underwater images is challenging due to attenuation and scattering. Underwater images often have low visibility and suffer from color bias. This paper presents a novel color correction method based on the color filter array (CFA) and an enhancement method based on Retinex with dense pixels and adaptive linear histogram transformation for degraded, color-biased underwater images. For any digital image in RGB space captured by a digital camera with a CFA, the RGB values are dependent and coupled because of the interpolation process. We therefore compensate for the red channel attenuation of underwater degraded images using the green and blue channels. The Retinex model has been widely used to efficiently handle low-brightness and blurred images. The McCann Retinex (MR) method selects a spiral path for pixel comparison to estimate illumination. However, this simple path selection does not capture the global light-dark relationship of the whole image. We therefore design a scheme to obtain better-distributed and denser pixels for a more precise estimate of the illumination intensity. In addition, we design a piecewise linear function for histogram transformation that is adaptive to the full RGB range. Experiments on a large number of degraded underwater images show that the images processed by our method have clearer details and a uniform visual effect across all channels in RGB color space, and that our method also achieves good performance metrics.
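As a hedged sketch of the red-channel compensation idea mentioned in the abstract (boosting the attenuated red channel from the green channel, in the spirit of published underwater color-balance work, not the paper's exact CFA-based formula):

```python
import numpy as np

def compensate_red_channel(img, alpha=1.0):
    """Compensate the attenuated red channel of an underwater RGB image
    in [0, 1] using the green channel (a generic sketch, not the paper's
    exact CFA-based compensation).
    """
    r, g = img[..., 0], img[..., 1]
    r_mean, g_mean = r.mean(), g.mean()
    # Boost red where it is weak, proportionally to the local green signal
    r_comp = r + alpha * (g_mean - r_mean) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out

# Usage (hypothetical file): img = plt.imread("underwater.png")[..., :3]
#                            corrected = compensate_red_channel(img)
```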

27 citations


Journal ArticleDOI
01 Sep 2020
TL;DR: Methods dealing with camera photo-response non-uniformity (PRNU) identification, statistical methods, analysis of cameras' optical defects, machine learning, and deep models including convolutional neural networks are investigated.
Abstract: Digital forensics is a topic that has attracted much attention. One of the most common tasks in digital forensics is imaging sensor identification, which may be understood as recognizing a device's origin based on the content that the device produced. Related digital forensics tasks therefore include, among others, digital camera, flatbed scanner, and printer identification. In this paper we survey methods and algorithms for digital camera identification. The goal of a digital camera identification algorithm is to identify and distinguish a camera's sensor based on the images it produces. This topic has been especially popular in the forensics community in recent years. The paper discusses two concepts for camera identification: individual source camera identification (ISCI) and source camera model identification (SCMI). ISCI aims to distinguish a certain camera among cameras of both the same and different camera models, while SCMI distinguishes a certain camera model among others but cannot distinguish a certain camera among cameras of the same model. We investigate methods dealing with these concepts, including photo-response non-uniformity (PRNU) identification, statistical methods, analysis of a camera's optical defects, machine learning, and deep models including convolutional neural networks. We also provide a description of popular image datasets that can be used to evaluate camera identification algorithms.
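As a compact illustration of the PRNU concept surveyed here, a minimal sketch of fingerprint estimation and correlation-based matching on grayscale images in [0, 1]; the Gaussian-blur denoiser is a simplification of the wavelet denoisers used in practice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.5):
    """Image minus a denoised version of itself (simplified denoiser)."""
    return img - gaussian_filter(img, sigma)

def estimate_prnu(images):
    """Maximum-likelihood-style PRNU estimate from same-camera images:
    K ~ sum(W_i * I_i) / sum(I_i ** 2), with W_i the noise residuals."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img in images:
        w = noise_residual(img)
        num += w * img
        den += img ** 2
    return num / np.maximum(den, 1e-8)

def identification_score(query, fingerprint):
    """Normalized correlation between the query residual and I * K."""
    w = noise_residual(query).ravel()
    s = (query * fingerprint).ravel()
    w -= w.mean(); s -= s.mean()
    return float(w @ s / (np.linalg.norm(w) * np.linalg.norm(s) + 1e-12))

# Usage: a score above a threshold (set on known cameras) suggests the
# query image was taken by the camera that produced the fingerprint.
```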

20 citations


Journal ArticleDOI
TL;DR: The nature of a typical camera raw space is investigated, including its gamut and reference white, and the strategy used by internal image-processing engines of traditional digital cameras is shown to be based upon color rotation matrices accompanied by raw channel multipliers.
Abstract: Color conversion matrices and chromatic adaptation transforms (CATs) are of central importance when converting a scene captured by a digital camera in the camera raw space into a color image suitable for display using an output-referred color space. In this article, the nature of a typical camera raw space is investigated, including its gamut and reference white. Various color conversion strategies that are used in practice are subsequently derived and examined. The strategy used by internal image-processing engines of traditional digital cameras is shown to be based upon color rotation matrices accompanied by raw channel multipliers, in contrast to the approach used by smartphones and commercial raw converters, which is typically based upon characterization matrices accompanied by conventional CATs. Several advantages of the approach used by traditional digital cameras are discussed. The connections with the color conversion methods of the DCRaw open-source raw converter and the Adobe digital negative converter are also examined, along with the nature of the Adobe color and forward matrices.
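A minimal sketch of the "raw channel multipliers + color matrix" structure discussed in the article; the matrix and multipliers below are illustrative placeholders, not any specific camera's characterization.

```python
import numpy as np

# Illustrative camera-raw -> linear sRGB matrix; a real pipeline would use
# the camera's characterization (or Adobe color/forward) matrices.
CAM_TO_SRGB = np.array([[ 1.80, -0.60, -0.20],
                        [-0.25,  1.60, -0.35],
                        [ 0.05, -0.55,  1.50]])

def raw_to_srgb(raw_rgb, wb_multipliers=(2.0, 1.0, 1.6)):
    """Convert linear camera raw RGB in [0, 1] to sRGB.

    Follows the 'multipliers + matrix' structure discussed in the article;
    the numbers here are placeholders for demonstration only.
    """
    balanced = raw_rgb * np.asarray(wb_multipliers)        # raw channel multipliers
    linear = np.clip(balanced @ CAM_TO_SRGB.T, 0.0, 1.0)   # color matrix
    # sRGB encoding gamma
    srgb = np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)
    return srgb

print(raw_to_srgb(np.array([[0.20, 0.35, 0.18]])))
```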

14 citations


Journal ArticleDOI
TL;DR: A sequential weighted nonlinear regression technique from digital camera responses for spectral reflectance estimation is proposed; it consists of two stages that successively take into account colorimetric and spectral errors between the training set and the target set.
Abstract: A sequential weighted nonlinear regression technique from digital camera responses is proposed for spectral reflectance estimation. The method consists of two stages that successively take into account colorimetric and spectral errors between the training set and the target set. Based on a polynomial expansion model, locally optimal training samples are adaptively employed to recover spectral reflectance as accurately as possible. The performance of the method is compared with several existing methods for simulated camera responses under three noise levels and for practical camera responses under both self-test and cross-test conditions. Results show that the proposed method is able to recover spectral reflectance with higher accuracy than the other methods considered.
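As a hedged sketch of the polynomial-expansion regression this method builds on (the paper's sequential colorimetric/spectral sample weighting is not reproduced; uniform weights are assumed here):

```python
import numpy as np

def poly_expand(rgb):
    """Second-order polynomial expansion of camera responses (R, G, B)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, r * b, g * b, r * r, g * g, b * b], axis=-1)

def fit_reflectance_model(train_rgb, train_reflectance, weights=None):
    """Weighted least-squares mapping from expanded RGB to spectra.

    train_rgb          -- (N, 3) camera responses
    train_reflectance  -- (N, L) reflectance spectra at L wavelengths
    weights            -- optional (N,) per-sample weights (the paper
                          selects and weights local samples; here uniform)
    """
    X = poly_expand(train_rgb)                       # (N, 10)
    W = np.ones(len(X)) if weights is None else np.asarray(weights)
    Xw = X * W[:, None]
    # Solve the weighted normal equations (X^T W X) M = X^T W Y
    M = np.linalg.lstsq(Xw.T @ X, Xw.T @ train_reflectance, rcond=None)[0]
    return M                                          # (10, L)

def estimate_reflectance(rgb, M):
    return poly_expand(rgb) @ M

# Usage: M = fit_reflectance_model(rgb_train, refl_train)
#        refl_hat = estimate_reflectance(rgb_test, M)
```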

13 citations


Proceedings ArticleDOI
04 May 2020
TL;DR: Non-line-of-sight imaging of multi-depth scenes using only a single photograph from an ordinary digital camera is demonstrated.
Abstract: We demonstrate non-line-of-sight imaging of multi-depth scenes using only a single photograph from an ordinary digital camera. The hidden scene, comprising two images at different depths, is partially occluded from a visible wall by an opaque occluding object. The distance from the visible wall to the hidden surfaces, and the images they contain, are recovered.

12 citations


Journal ArticleDOI
TL;DR: A system composed of a digital camera and optical emitters affixed to selected nodal points is introduced as a complement to conventional strain gauge sensors for state estimation of adaptive buildings with active load-bearing elements.

Posted Content
TL;DR: In this article, the authors propose a highly accurate noise formation model based on the characteristics of CMOS photosensors, thereby enabling them to synthesize realistic samples that better match the physics of the image formation process.
Abstract: Lacking rich and realistic data, learned single image denoising algorithms generalize poorly to real raw images that do not resemble the data used for training. Although the problem can be alleviated by the heteroscedastic Gaussian model for noise synthesis, the noise sources caused by digital camera electronics are still largely overlooked, despite their significant effect on raw measurements, especially under extremely low-light conditions. To address this issue, we present a highly accurate noise formation model based on the characteristics of CMOS photosensors, thereby enabling us to synthesize realistic samples that better match the physics of the image formation process. Given the proposed noise model, we additionally propose a method to calibrate the noise parameters for available modern digital cameras, which is simple and reproducible for any new device. We systematically study the generalizability of a neural network trained with existing schemes by introducing a new low-light denoising dataset that covers many modern digital cameras from diverse brands. Extensive empirical results collectively show that, by utilizing our proposed noise formation model, a network can reach the same capability as if it had been trained with rich real data, which demonstrates the effectiveness of our noise formation model.

Journal ArticleDOI
TL;DR: In this article, a photogrammetry-based experimental modal analysis (EMA) method is developed, in which a single-point laser shines a beam onto the surface of a test object, and an exterior feature is added to the test object in the form of a high-contrast laser spot.

Journal ArticleDOI
28 Aug 2020-Symmetry
TL;DR: A low-cost system for identifying shapes in order to program industrial robots (based on the six-axis “ABB IRB 140” robot) for a 2D welding process, using a binarization and contour recognition method.
Abstract: The purpose of the article was to build a low-cost system for identifying shapes in order to program industrial robots (based on the six-axis “ABB IRB 140” robot) for a 2D welding process. The whole system consisted of several elements developed in individual stages. The first step was to identify the existing robot control systems, which analysed images from an attached low-cost digital camera. Then, a computer program was written to handle communication with the digital camera and to perform image capture and processing. The program's task was also to detect geometric shapes (contours) drawn by humans and to approximate them. This study also presents research on a binarization and contour recognition method for this application. Based on this, the robot is able to weld the same contours on a 2D plane.
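A minimal OpenCV sketch of the binarization and contour-approximation stage described in the abstract; the threshold choice, approximation tolerance, and file name are assumptions.

```python
import cv2

def extract_drawn_contours(gray, approx_eps_frac=0.01, min_area=500.0):
    """Binarize a grayscale image and return approximated contours,
    roughly following the pipeline described in the abstract
    (threshold method and tolerances are illustrative).
    """
    # Otsu binarization; drawn lines are assumed darker than the background
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    shapes = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                     # discard small specks
        eps = approx_eps_frac * cv2.arcLength(c, True)
        shapes.append(cv2.approxPolyDP(c, eps, True))
    return shapes

# Usage (hypothetical file name):
# gray = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)
# polygons = extract_drawn_contours(gray)   # vertex lists to send to the robot
```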

Journal ArticleDOI
18 Nov 2020-Sensors
TL;DR: An improved method for observing and calculating water-leaving reflectance from digital images based on multiple reflectance reference cards is developed, providing a more accurate theoretical foundation for quantitative water quality monitoring using digital and smartphone cameras.
Abstract: With the development of citizen science, digital cameras and smartphones are increasingly utilized in water quality monitoring. The smartphone application HydroColor quantitatively retrieves water quality parameters from digital images. HydroColor assumes a linear relationship between the digital pixel number (DN) and incident radiance and applies a grey reference card to derive water-leaving reflectance. However, image DNs change non-linearly with incident light brightness, according to a power function. We developed an improved method for observing and calculating water-leaving reflectance from digital images based on multiple reflectance reference cards. The method was applied to acquire water, sky, and reflectance reference card images using a Canon 50D digital camera at 31 sampling stations; the results were validated against water-leaving reflectance measured synchronously with a field spectrometer. The R2 values for the red, green, and blue color bands were 0.94, 0.95, and 0.94, and the mean relative errors were 27.6%, 29.8%, and 31.8%, respectively. The validation results confirm that this method can derive accurate water-leaving reflectance, especially when compared with the results derived by HydroColor, which systematically overestimates water-leaving reflectance. Our results provide a more accurate theoretical foundation for quantitative water quality monitoring using digital and smartphone cameras.
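As a hedged sketch of the multiple-reference-card idea: fit the power-law DN-radiance relation per band from cards of known reflectance and invert it before forming a HydroColor-style reflectance ratio. The card values, sky-glint factor, and ratio formula below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_power_law(card_dns, card_reflectances):
    """Fit DN = a * R ** gamma in log space from several grey reference
    cards of known reflectance (values used below are illustrative)."""
    x = np.log(np.asarray(card_reflectances, dtype=float))
    y = np.log(np.asarray(card_dns, dtype=float))
    gamma, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), gamma

def dn_to_relative_radiance(dn, a, gamma):
    """Invert the fitted power law to linearize camera digital numbers."""
    return (np.asarray(dn, dtype=float) / a) ** (1.0 / gamma)

# Hypothetical per-band calibration from three reference cards
a, gamma = fit_power_law(card_dns=[35, 90, 160],
                         card_reflectances=[0.05, 0.20, 0.50])
water_dn, sky_dn, card_dn = 42.0, 180.0, 90.0
Lw = dn_to_relative_radiance(water_dn, a, gamma)
Ls = dn_to_relative_radiance(sky_dn, a, gamma)
Lc = dn_to_relative_radiance(card_dn, a, gamma)
# HydroColor-style reflectance ratio with an assumed sky-glint factor
# rho = 0.028 and an assumed 20% card reflectance:
rho, R_card = 0.028, 0.20
R_rs_like = (Lw - rho * Ls) / (np.pi * Lc / R_card)
print(a, gamma, R_rs_like)
```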

Journal ArticleDOI
TL;DR: The proposed method estimates the spectral sensitivity of a trichromatic digital camera very well, which is of great significance for the colorimetric characterization and evaluation of imaging systems.
Abstract: The three-channel spectral sensitivity of a trichromatic camera represents the characteristics of the system's color space. It is the mapping bridge from the spectral information of a scene to the response values of the camera. In this paper, we propose an estimation method for the three-channel spectral sensitivity of a trichromatic camera. It includes a calibration experiment based on an orthogonal test design and data processing by window filtering. The calibration experiment was first designed using a 9-level, 3-factor orthogonal table. A rough estimation model of the spectral sensitivity is established from the input-output data pairs of the calibration experiments. The rough estimate is then modulated by two window filters in the frequency and spatial domains. The Luther-Ives condition and a smoothness condition are introduced to design the windows and help achieve an optimal estimate of the system spectral sensitivity. Finally, the proposed method is verified by comparison experiments. The results show that the estimated spectral sensitivity is basically consistent with the measured results of monochromator experiments, and the relative full-scale errors of the three RGB channels are clearly lower than those of the Wiener filtering method and the Fourier band-limitedness method. The proposed method can estimate the spectral sensitivity of a trichromatic digital camera very well, which is of great significance for the colorimetric characterization and evaluation of imaging systems.
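For context, a generic regularized least-squares formulation that spectral-sensitivity estimation reduces to; this is a sketch with a second-difference smoothness prior, not the paper's orthogonal-design and window-filtering procedure.

```python
import numpy as np

def estimate_sensitivity(spectra, responses, smooth_lambda=1e-2):
    """Estimate one channel's spectral sensitivity s from
    responses ~ spectra @ s, with a smoothness prior.

    spectra   -- (N, L) training stimuli sampled at L wavelengths
    responses -- (N,)   camera responses for one channel
    """
    n, l = spectra.shape
    # Second-difference operator encouraging a smooth curve
    D = np.diff(np.eye(l), n=2, axis=0)
    A = spectra.T @ spectra + smooth_lambda * (D.T @ D)
    b = spectra.T @ responses
    s = np.linalg.solve(A, b)
    return np.clip(s, 0.0, None)       # sensitivities are non-negative

# Synthetic check: recover a Gaussian-shaped sensitivity curve
wl = np.linspace(400, 700, 61)
true_s = np.exp(-0.5 * ((wl - 550) / 30) ** 2)
rng = np.random.default_rng(1)
E = rng.uniform(0, 1, (200, wl.size))          # random training spectra
r = E @ true_s + rng.normal(0, 0.01, 200)      # noisy responses
est = estimate_sensitivity(E, r)
print(float(np.corrcoef(est, true_s)[0, 1]))   # close to 1
```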

Journal ArticleDOI
TL;DR: To better clarify the performance of calibrated digital compact cameras, a comparison with a calibrated DSLR camera is presented in outdoor situations, showing good agreement for both luminance and color temperature measurements.
Abstract: This work presents the possibility of using the extremely popular compact digital cameras of smartphones or action cameras to perform sky photometry. The newest generation of these devices allows saving raw images. They are not as good as a digital single-lens reflex (DSLR) camera, in particular in terms of sensitivity, noise, and pixel depth (10 bit versus 12 bit or more), but they have the advantage of being extremely widespread among the population and relatively cheap. These economical digital compact cameras work with an electronic shutter, which avoids mechanical wear and allows gathering images for long periods. The work uses a simple calibration method to transfer raw data from the proprietary RGB color space to the standard CIE 1931 color space. This allows the measurement of sky luminance in cd m−2 with an expected uncertainty of about 20%. Furthermore, the colorimetric calibration allows determining the correlated color temperature of a portion of the sky, which can help identify the kind of polluting sources. To better clarify the performance of calibrated digital compact cameras, a comparison with a calibrated DSLR camera is presented in outdoor situations, showing good agreement for both luminance and color temperature measurements.

Journal ArticleDOI
30 Apr 2020-Sensors
TL;DR: The theoretical analysis and experimental results demonstrate that the presented model can accurately describe the temperature characteristics and calculate the thermal equilibrium state of a working digital camera, all of which contributes to guiding mechanics measurement and thermal design based on such camera sensors.
Abstract: Digital cameras, represented by industrial cameras, are widely used as image acquisition sensors in the field of image-based mechanics measurement, and their thermal effect inevitably induces thermal errors in the measurement. To deeply understand these errors, research on the digital camera's thermal effect is necessary. This study systematically investigated the heat transfer processes and temperature characteristics of a working digital camera. Concretely, based on the temperature distribution of a typical working digital camera, the heat transfer of the working camera was investigated, and a model describing the temperature variation and distribution was presented and verified experimentally. With this model, the thermal equilibrium time and thermal equilibrium temperature of the camera system were calculated. Then, the influences of the camera's thermal parameters and of the environmental temperature on the temperature characteristics of the working camera were simulated and investigated experimentally. The theoretical analysis and experimental results demonstrate that the presented model can accurately describe the temperature characteristics and calculate the thermal equilibrium state of a working digital camera, all of which contributes to guiding mechanics measurement and thermal design based on such camera sensors.
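As a hedged sketch of the kind of lumped-capacitance temperature model the abstract describes (constant internal dissipation plus Newton-type heat exchange with the environment, giving an exponential approach to thermal equilibrium); all parameter values are illustrative assumptions.

```python
import numpy as np

def camera_temperature(t, T_env=25.0, P=2.0, R_th=8.0, C_th=300.0, T0=None):
    """Lumped-capacitance temperature of a working camera over time t [s].

    Governing equation:  C_th * dT/dt = P - (T - T_env) / R_th
    Solution:            T(t) = T_eq + (T0 - T_eq) * exp(-t / (R_th * C_th))

    P     -- internal power dissipation [W]        (illustrative)
    R_th  -- thermal resistance to ambient [K/W]   (illustrative)
    C_th  -- heat capacity [J/K]                   (illustrative)
    """
    T0 = T_env if T0 is None else T0
    T_eq = T_env + P * R_th                 # thermal equilibrium temperature
    tau = R_th * C_th                       # thermal time constant [s]
    return T_eq + (T0 - T_eq) * np.exp(-np.asarray(t, float) / tau)

t = np.arange(0, 4 * 3600, 600)
print(camera_temperature(t)[-1])            # approaches T_env + P * R_th = 41 C
```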

Journal ArticleDOI
30 Jun 2020
TL;DR: A 2D air track platform based on a digital camera and the Tracker software can be used as a physics learning medium for motion kinematics, displaying various kinematics graphs so that information about the motion is complete.
Abstract: This research describes the results of the development of a 2D air track tool designed for one- and two-dimensional motion experiments with small frictional forces. Friction is minimized by using wind gusts through small holes made in all parts of the runway. The motion detection devices used are a digital camera and the Tracker software. The digital camera is used to record the motion of objects on the platform as video at a specific frame rate. Tracker is used to analyze the videos, which contain information about the objects' motion. The tool has been tested on one-dimensional motion, that is, an object sliding down an inclined plane, and on two-dimensional motion in the case of a collision of two objects. In the one-dimensional case, position-versus-time graphs can be displayed, and the instantaneous velocity, average velocity, and acceleration can be accurately determined. In the case of the collision of two objects, the position-versus-time graph can also be displayed for each object before and after the collision. The velocity vectors can be determined accurately, so that the laws of conservation of momentum and kinetic energy can be verified. One- and two-dimensional motion are the concepts that underlie almost all other concepts in physics; therefore, one- and two-dimensional motion experiments are important for building students’ experience of the concepts. Thus, the 2D air track platform based on a digital camera and the Tracker software can be used as a physics learning medium for motion kinematics, displaying various kinematics graphs so that information about the motion is complete.
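As a simple illustration of what the Tracker-based analysis boils down to, here is a sketch computing finite-difference velocity and acceleration from per-frame positions; the frame rate and data are made up.

```python
import numpy as np

def kinematics(positions, fps):
    """Central-difference velocity and acceleration from tracked 2D
    positions of one object; positions has shape (n_frames, 2), in metres."""
    dt = 1.0 / fps
    v = np.gradient(positions, dt, axis=0)     # velocity [m/s]
    a = np.gradient(v, dt, axis=0)             # acceleration [m/s^2]
    return v, a

# Hypothetical data: uniform acceleration along x, captured at 30 fps
fps = 30
t = np.arange(0, 2, 1 / fps)
pos = np.stack([0.5 * 0.2 * t ** 2, np.zeros_like(t)], axis=1)
v, a = kinematics(pos, fps)
print(v[15], a[15])        # v_x = 0.2 * t, a_x = 0.2 m/s^2

# Momentum check for a collision of two tracked objects (masses in kg):
# p_before = m1 * v1_before + m2 * v2_before; compare with p_after.
```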

Book ChapterDOI
01 Jan 2020
TL;DR: This study focuses on computer-vision 3D reconstruction with commodity hardware for surveying purposes using a camera sensor (Sony IMX298); for validation, ground truth was collected with an advanced surveying instrument.
Abstract: Three-dimensional (3D) reconstruction has evolved into a modern surveying technique because it provides a visual interpretation of a real-world scene. A 3D model can be generated from a cluster of points known as a point cloud. A point cloud can be compiled from various sources such as a laser scanner, Microsoft Kinect, or digital images. Digital images are the most easily accessible technology for point cloud creation, and advancements in digital cameras over the last few years have made camera sensors capable of capturing high-resolution digital images with in-depth detail. This study is mainly focused on computer-vision 3D reconstruction with commodity hardware for surveying purposes, using a camera sensor (Sony IMX298). For validation, ground truth was collected with an advanced surveying instrument by distributing several points around the region of interest (ROI) and evaluating the dimensions.

Journal ArticleDOI
TL;DR: The results demonstrate that even if only the visible light spectrum emitted from a plasma is captured, the color method can provide sufficient discharge information for economic and convenient use in discharge state detection because the species producing visible radiation are affected by radiation in all bands.
Abstract: Can we detect electric discharge states in gases based on the information in visual images? This article proposes a new kind of method in which we build several detection models for different states of corona discharge by applying four kinds of machine learning algorithms to extract color, brightness, and shape characteristics of visible images taken by a digital camera. Every model is then tested on a new set of images to measure its performance. The four machine learning algorithms are support vector machine (SVM), K-nearest neighbor regression (KNN), single-layer perceptron (SLP), and decision tree (DT) algorithms. The prediction results show that the color features perform best among all three types of features and the KNN algorithm performs best among all four algorithms. This article also presents a discussion on how to choose the optimal detection areas of images for better detection performance. Our approach shows consistent results across different cameras and camera settings. The results demonstrate that even if only the visible light spectrum emitted from a plasma is captured, the color method can provide sufficient discharge information for economic and convenient use in discharge state detection, because the species producing visible radiation are affected by radiation in all bands.
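A minimal sketch of the best-performing configuration reported here (color features with a KNN model), using simple per-channel statistics as stand-ins for the paper's color features and scikit-learn's KNN classifier:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def color_features(img_rgb):
    """Simple per-channel color statistics of a discharge image in [0, 1]
    (stand-ins for the paper's color features)."""
    feats = []
    for c in range(3):
        ch = img_rgb[..., c].ravel()
        feats += [ch.mean(), ch.std(), np.percentile(ch, 95)]
    return np.array(feats)

def train_discharge_classifier(images, labels, k=5):
    """Fit a KNN model mapping color features to discharge-state labels."""
    X = np.stack([color_features(im) for im in images])
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X, labels)
    return clf

# Usage (hypothetical data):
# clf = train_discharge_classifier(train_images, train_states)
# predicted_state = clf.predict([color_features(new_image)])[0]
```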

Journal ArticleDOI
TL;DR: A dataset of images to test the performance of image processing algorithms, in particular demosaicing and denoising methods, is presented; it is composed of twenty 16-bit-depth images that can be used to test full-reference image quality metrics.
Abstract: In this paper we present a dataset of images for testing the performance of image processing algorithms, in particular demosaicing and denoising methods. Despite the plethora of demosaicing and denoising algorithms in the literature, only a few benchmarks are available to test their performance, and most of them are quite old and thus inadequate to represent the images captured by modern devices. The proposed dataset is composed of twenty 16-bit-depth images that can be used to test full-reference image quality metrics. More specifically, twelve pictures have been synthetically created by means of 2D or 3D software, while eight images have been captured by a high-end digital camera.
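Since the dataset targets full-reference metrics, a minimal PSNR implementation for 16-bit images is shown as one common example of such a metric (the paper does not prescribe a specific metric):

```python
import numpy as np

def psnr(reference, test, max_value=65535.0):
    """Peak signal-to-noise ratio between a 16-bit reference image and a
    processed (e.g. demosaiced or denoised) result."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

# Usage: score = psnr(ground_truth_16bit, output_16bit)
```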

03 Jul 2020
TL;DR: Skin visualization for the beauty industry using deep learning, based on a U-net and a convolutional autoencoder trained on UV skin images taken by a medical dermoscopy digital camera, is discussed.
Abstract: Skin visualization for the beauty industry using deep learning is discussed. UV skin images were taken with a medical dermoscopy digital camera, and we created datasets for training. Neural networks, namely a U-net and a convolutional autoencoder, were constructed and trained with our datasets. Once the neural network was trained, skin images that approximate the UV image could be generated without the medical camera. The performance of our U-net and convolutional autoencoder is discussed.

Journal ArticleDOI
TL;DR: In this paper, a methodology based on the use of a single digital camera has been developed to determine the horizontal attenuation of solar radiation between the field of heliostats and the receiver in a solar tower plant.

Journal ArticleDOI
29 Aug 2020
TL;DR: A low-cost and portable set-up controlled by an Arduino board performs the Reflectance Transformation Imaging technique from the information derived from 45 digital photographs of an object acquired using a stationary camera; it revealed corrosion, loss of material, scratches, and other details that were not perceived in standard images.
Abstract: This article examines the development of a low-cost and portable set-up, controlled by an Arduino board, to perform the Reflectance Transformation Imaging technique from the information derived from 45 digital photographs of an object acquired using a stationary camera. The set-up consists of 45 high-intensity light-emitting diodes (LEDs) distributed over a hemispherical dome 70 cm in diameter and a digital camera on top of the dome. The LEDs are controlled by an Arduino board, and the user can individually control each LED's state (ON or OFF) and duration of illumination. An old manuscript written with iron-gall ink and a set of 1 Euro coins minted in 2002 were photographed with the set-up. The interactive re-lighting and the mathematical enhancement of the object’s surface revealed corrosion, loss of material, scratches, and other details that were not perceived in standard images. These unique features, which can be extracted using edge detection processing, have immediate application in fields such as cultural heritage or forensic studies, where they can be used as fingerprints to identify unique objects, also allowing recognition of the use of tools that alter the surface of coins to increase their market price.

Journal ArticleDOI
TL;DR: This work investigates the performance of a customised Raspberry Pi camera module V2 system and three additional low-cost camera systems, including an ELP-USB8MP02G camera module, a compact digital camera (Nikon S3100), and a DSLR (Nikon D3); all systems except the Nikon D3 are available at comparable prices.
Abstract: Photogrammetry is becoming a widely used technique for slope monitoring and rock fall data collection. Its scalability, simplicity of components, and low hardware and operating costs make its use increasingly common for both civil and mining applications. Recent on-site permanent installations of cameras have proved particularly viable for the monitoring of extended surfaces at very reasonable costs. The current work investigates the performance of a customised Raspberry Pi camera module V2 system and three additional low-cost camera systems, including an ELP-USB8MP02G camera module, a compact digital camera (Nikon S3100), and a DSLR (Nikon D3). All systems, except the Nikon D3, are available at comparable prices. The comparison was conducted by collecting images of rock surfaces, one located in Australia and three located in Italy, from distances between 55 and 110 m. Results are presented in terms of image quality and three-dimensional reconstruction error. The multi-view reconstructions are compared to a reference model acquired with a terrestrial laser scanner.

Journal ArticleDOI
TL;DR: This work calibrated the imaging parameters of full-color photographs of aurora using a city light image taken from the Defense Meteorological Satellite Program satellite, following the method of Hozumi et al.
Abstract: Full-color photographs of aurora have been taken with digital single-lens reflex cameras mounted on the International Space Station (ISS). Since these photographs do not have accurate time and geographical information, in order to use them as scientific data it is necessary to calibrate the imaging parameters of the photographs (such as the looking direction and angle of view of the camera). For this purpose, we calibrated the imaging parameters using a city light image taken from the Defense Meteorological Satellite Program satellite, following the method of Hozumi et al. (2016, https://doi.org/10.1186/s40623-016-0532-z). We mapped the photographs onto the geographic coordinate system using the calibrated imaging parameters. To evaluate the accuracy of the mapping, we compared aurora observed simultaneously from the ISS and from the ground. Comparing the spatial structure of discrete aurora and the temporal variation of pulsating aurora, the accuracy of the data set is less than 0.3 s in time and less than 5 km in space in the direction perpendicular to the looking direction of the camera. The generated data set has a wide field of view (∼1,100 × 900 km), and its temporal resolution is less than 1 s. Moreover, the field of view can sweep a wide area (∼3,000 km in longitude) in a short time (∼10 min). Thus, this new imaging capability will enable us to capture the evolution of the fine-scale spatial structure of aurora over a wide area.

Proceedings ArticleDOI
10 Oct 2020
TL;DR: Because of its fast training speed, small data requirement, and small color difference, an RBF neural network can be used to solve the problem of colorimetric characterization of a digital camera; compared with the traditional polynomial fitting method and a BP network, it gives a smaller color difference.
Abstract: Modern color equipment is widely used. With the popularity of color digital equipment, accurate acquisition of color has a wide range of uses. The colorimetric characterization of equipment is a basic link in a color management system, and how to accurately convert color between various color devices has become a basic problem. The color space of a camera is device dependent. Therefore, the colorimetric characterization of a digital camera is an important way to improve the color reproduction of images and is the basis of color conversion between devices. In camera colorimetric characterization, traditional neural network methods, such as the back-propagation (BP) neural network, need a large number of samples, and the processing is complex. Because of its fast training speed, small data requirement, and small color difference, a radial basis function (RBF) neural network can be used to solve the problem of colorimetric characterization of a digital camera. Of the 140 color blocks, half are used as the training data set and half as the test data set. In the first part, the RBF neural network is trained on the data set; in the second part, the traditional BP neural network is trained on the same data set for comparison. The experimental results show that the average color difference of the training samples is 1.79 ΔE CMC(1:1) and the average color difference of the test samples is 4.89 ΔE. Compared with the traditional polynomial fitting method and the BP network, the RBF neural network gives a smaller color difference in colorimetric characterization.
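As a hedged sketch of RBF-based device characterization: fit an RGB-to-CIELAB mapping on the training half of the chart and evaluate color difference on the held-out half. SciPy's RBFInterpolator stands in for the paper's RBF neural network, and the plain CIELAB Euclidean ΔE is used rather than ΔE CMC(1:1).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_camera_to_lab(train_rgb, train_lab, smoothing=1e-3):
    """RGB -> CIELAB mapping via Gaussian RBF interpolation
    (a stand-in for the paper's RBF neural network)."""
    return RBFInterpolator(train_rgb, train_lab,
                           kernel="gaussian", epsilon=1.0,
                           smoothing=smoothing)

def delta_e76(lab1, lab2):
    """Simple CIELAB Euclidean color difference (not dE CMC(1:1))."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

# Usage with a 140-patch chart split in half, as in the paper:
# model = fit_camera_to_lab(rgb_train, lab_train)
# lab_pred = model(rgb_test)
# print(delta_e76(lab_pred, lab_test).mean())
```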

Proceedings ArticleDOI
01 Jul 2020
TL;DR: The device's design, basic characteristics, and current results, as well as steps to further improve the system, are described in the paper.
Abstract: Computer vision systems and algorithms are designed to process digital images and extract the necessary information from them. In this research we propose a computer vision system consisting of several spherical mobile devices, each with a digital camera and a microcomputer inside. The device's design, basic characteristics, and current results, as well as steps to further improve the system, are described in the paper. At the core of the image processing, the OpenCV library and modern convolutional neural networks such as YOLOv3 and MobileNETSSDv2 are used.
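A minimal sketch of running a pre-trained detector through OpenCV's DNN module, as the abstract describes; the YOLOv3 configuration and weights file names are assumptions (the standard Darknet release files), and the thresholds are illustrative.

```python
import cv2
import numpy as np

# Paths are assumptions: the standard Darknet YOLOv3 release files.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect(frame, conf_threshold=0.5):
    """Run YOLOv3 on a BGR frame and return (class_id, confidence, box)."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    detections = []
    for out in outputs:
        for row in out:                      # [cx, cy, bw, bh, obj, scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf < conf_threshold:
                continue
            cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
            box = [int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)]
            detections.append((class_id, conf, box))
    return detections

# Usage: boxes = detect(cv2.imread("frame.jpg"))
```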

Journal ArticleDOI
C. C. Huang, You Li, S. H. Tang, Yuli Zhang, Y. Xiao
TL;DR: The experimental results showed that the error in the plane position of the solved target point was less than 10 cm at a shooting distance of about 10 m, meeting the accuracy requirements of detail surveys.
Abstract: To address the high demand of close-range photogrammetry for object control points and the inconvenience of carrying a digital camera for photogrammetry in the field, a detail survey method based on the PhotoModeler Scanner software is proposed. A USB camera is combined with a centering rod: the USB camera captures image data, a total station obtains the coordinates of the centering rod, the coordinates of the projective center are calculated from the coordinates of the centering rod, and the PhotoModeler Scanner software processes the image and coordinate data. Ultimately, stereo image measurement without object control points was realized. The experimental results showed that the error in the plane position of the solved target point was less than 10 cm at a shooting distance of about 10 m, meeting the accuracy requirements of detail surveys. Therefore, the method can reduce the workload of field detail surveys, reduce the cost and volume of the photographic equipment, and has certain application value.