
Showing papers on "Digital camera published in 2018"


Journal ArticleDOI
16 Jan 2018-Sensors
TL;DR: It is shown that HydroColor can measure the remote sensing reflectance to within 26% of a precision radiometer and turbidity to within 24% of a portable turbidimeter.
Abstract: HydroColor is a mobile application that utilizes a smartphone's camera and auxiliary sensors to measure the remote sensing reflectance of natural water bodies. HydroColor uses the smartphone's digital camera as a three-band radiometer. Users are directed by the application to collect a series of three images. These images are used to calculate the remote sensing reflectance in the red, green, and blue broad wavelength bands. As with satellite measurements, the reflectance can be inverted to estimate the concentration of absorbing and scattering substances in the water, which are predominately composed of suspended sediment, chlorophyll, and dissolved organic matter. This publication describes the measurement method and investigates the precision of HydroColor's reflectance and turbidity estimates compared to commercial instruments. It is shown that HydroColor can measure the remote sensing reflectance to within 26% of a precision radiometer and turbidity to within 24% of a portable turbidimeter. HydroColor distinguishes itself from other water quality camera methods in that its operation is based on radiometric measurements instead of image color. HydroColor is one of the few mobile applications to use a smartphone as a completely objective sensor, as opposed to subjective user observations or color matching using the human eye. This makes HydroColor a powerful tool for crowdsourcing of aquatic optical data.
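The three-image computation described above can be sketched as follows; the surface reflectance factor rho and the gray-card reflectance here are assumed typical values for illustration, not taken from the paper:

```python
import math

def remote_sensing_reflectance(L_water, L_sky, L_card, rho=0.028, R_ref=0.18):
    """Estimate Rrs (sr^-1) for one band from three relative radiance readings.

    L_water: radiance looking at the water surface (includes reflected skylight)
    L_sky:   radiance of the sky region that reflects off the surface
    L_card:  radiance of a diffuse gray reference card of reflectance R_ref
    rho:     water-surface reflectance factor (assumed value)
    """
    L_w = L_water - rho * L_sky       # remove surface-reflected skylight
    E_d = math.pi * L_card / R_ref    # downwelling irradiance from the card
    return L_w / E_d

# One such value per broad band (red, green, blue) in practice.
rrs = remote_sensing_reflectance(10.0, 50.0, 100.0)
```

As with satellite ocean-color processing, the resulting per-band reflectances can then be inverted against a bio-optical model to estimate turbidity.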

73 citations


Proceedings ArticleDOI
01 Jan 2018
TL;DR: An unsupervised learning-based method is proposed that learns its parameter values after approximating the unknown ground-truth illumination of the training images, thus avoiding calibration; it outperforms all statistics-based and many learning-based methods in terms of accuracy.
Abstract: Most digital camera pipelines use color constancy methods to reduce the influence of illumination and the camera sensor on the colors of scene objects. The highest accuracy of color correction is obtained with learning-based color constancy methods, but they require a significant amount of calibrated training images with known ground-truth illumination. Such calibration is time consuming, preferably done for each sensor individually, and therefore a major bottleneck in acquiring high color constancy accuracy. Statistics-based methods do not require calibrated training images, but they are less accurate. In this paper, an unsupervised learning-based method is proposed that learns its parameter values after approximating the unknown ground-truth illumination of the training images, thus avoiding calibration. In terms of accuracy, the proposed method outperforms all statistics-based and many state-of-the-art learning-based methods. The results are presented and discussed.
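For contrast, the kind of statistics-based baseline the abstract refers to can be as simple as the gray-world assumption; a minimal sketch (this is a generic baseline, not the proposed method):

```python
def gray_world_illumination(pixels):
    """Estimate the scene illumination color as the per-channel mean of all
    pixels, normalized to unit length (the gray-world assumption)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    norm = sum(m * m for m in means) ** 0.5
    return [m / norm for m in means]

def correct(pixel, illum):
    """Von Kries-style diagonal correction toward a neutral illuminant."""
    scale = max(illum)
    return [p * scale / i for p, i in zip(pixel, illum)]
```

A pixel that matches the estimated illumination color is mapped to a neutral gray, which is exactly the behavior such statistics-based estimators rely on.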

73 citations


Book ChapterDOI
01 Jan 2018
TL;DR: In this chapter, only raster GIS-based digital elevation models are discussed; the fundamental principle involved is the concept of elevation being related to parallax, as developed originally for stereo aerial photography.
Abstract: In principle, a digital elevation model (DEM) describes the elevations of various points in a given area in digital format. In this chapter, only raster GIS-based DEMs are discussed. Data for generating a DEM can be acquired from different sources, such as ground surveys, digitization of topographic contour maps, conventional aerial photographic photogrammetry, digital photogrammetry utilizing remote sensing image data, UAV-borne digital cameras, satellite SAR data, etc. The fundamental principle involved is the concept of elevation being related to parallax, as developed originally for stereo aerial photography. DEMs have found numerous geologic applications.
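The parallax principle mentioned above can be sketched with the standard stereo-pair height formula, where H is the flying height above the datum, p_a the absolute parallax at a reference point, and dp the parallax difference measured between the two photographs:

```python
def height_from_parallax(H, p_a, dp):
    """Height difference for a vertical stereo pair: dh = H * dp / (p_a + dp)."""
    return H * dp / (p_a + dp)

# e.g. flying height 1000 m, base parallax 90 mm, parallax difference 10 mm
dh = height_from_parallax(1000.0, 90.0, 10.0)
```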

53 citations


Journal ArticleDOI
TL;DR: An ultrathin digital camera inspired by the vision principle of Xenos peckii, an endoparasite of paper wasps, which has an unusual visual system that exhibits distinct benefits for high resolution and high sensitivity, unlike the compound eyes found in most insects and some crustaceans.
Abstract: Increased demand for compact devices leads to rapid development of miniaturized digital cameras. However, conventional camera modules contain multiple lenses along the optical axis to compensate for optical aberrations that introduce technical challenges in reducing the total thickness of the camera module. Here, we report an ultrathin digital camera inspired by the vision principle of Xenos peckii, an endoparasite of paper wasps. The male Xenos peckii has an unusual visual system that exhibits distinct benefits for high resolution and high sensitivity, unlike the compound eyes found in most insects and some crustaceans. The biologically inspired camera features a sandwiched configuration of concave microprisms, microlenses, and pinhole arrays on a flat image sensor. The camera shows a field-of-view (FOV) of 68 degrees with a diameter of 3.4 mm and a total track length of 1.4 mm. The biologically inspired camera offers a new opportunity for developing ultrathin cameras in medical, industrial, and military fields.

50 citations


Journal ArticleDOI
TL;DR: An image mosaic technique based on Speeded-Up Robust Features (SURF) is introduced to achieve rapid image splicing; it can effectively reduce the influence of cumulative errors and achieve an automatic panoramic mosaic of the survey area.
Abstract: The remote sensing technology of the unmanned aerial vehicle (UAV) is a low-altitude remote sensing technology. The technology has been widely used in military, agricultural, medical, geographical mapping, and other fields by virtue of the advantages of fast acquisition, high resolution, low cost, and good security. But, limited by the flying height of the UAV and the focal length of the digital camera, a single image obtained by the UAV cannot provide an overall view of the ground farmland area. In order to further expand the field of view, it is necessary to mosaic multiple single images acquired by the UAV into a complete panoramic image of the farmland. In this paper, aiming at the problem of UAV low-altitude remote sensing image splicing, an image mosaic technique based on Speeded-Up Robust Features (SURF) is introduced to achieve rapid image splicing. One hundred fifty ground farmland remote sensing images collected by the UAV are used as experimental splicing objects, and the image splicing is completed by a global stitching strategy optimized by Levenberg-Marquardt (L-M). Experiments show that the strategy can effectively reduce the influence of cumulative errors and achieve an automatic panoramic mosaic of the survey area.
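The core geometric step of such a mosaic, mapping image points through estimated homographies into a common reference frame, can be sketched in plain Python; a real pipeline would layer SURF feature matching, RANSAC, and L-M refinement on top of this:

```python
def apply_homography(H, x, y):
    """Map image point (x, y) through a 3x3 homography H (row-major nested lists)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def panorama_extent(corners, homographies):
    """Bounding box of every image's corners mapped into the reference frame,
    i.e. the canvas size needed for the stitched panorama."""
    pts = [apply_homography(H, x, y) for H in homographies for (x, y) in corners]
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)
```

Chaining pairwise homographies accumulates error across 150 images, which is why a global L-M optimization over all transforms, as the paper uses, is needed.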

37 citations


Proceedings ArticleDOI
01 Jun 2018
TL;DR: The limitations of the current colorimetric mapping approach are discussed and two methods that are able to improve color accuracy are proposed that show improvements of up to 30% and 59% in terms of color reproduction error.
Abstract: One of the key operations performed on a digital camera is to map the sensor-specific color space to a standard perceptual color space. This procedure involves the application of a white-balance correction followed by a color space transform. The current approach for this colorimetric mapping is based on an interpolation of pre-calibrated color space transforms computed for two fixed illuminations (i.e., two white-balance settings). Images captured under different illuminations are subject to less color accuracy due to the use of this interpolation process. In this paper, we discuss the limitations of the current colorimetric mapping approach and propose two methods that are able to improve color accuracy. We evaluate our approach on seven different cameras and show improvements of up to 30% (DSLR cameras) and 59% (mobile phone cameras) in terms of color reproduction error.
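The interpolation this paper critiques can be sketched as a blend of two pre-calibrated color space transforms weighted by inverse correlated color temperature; the weighting scheme below follows common camera-pipeline practice and is an illustrative assumption, not the paper's exact formula:

```python
def interpolate_cst(cst_a, cct_a, cst_b, cct_b, cct):
    """Blend two pre-calibrated 3x3 color space transforms (nested lists),
    calibrated at correlated color temperatures cct_a and cct_b, for a scene
    illuminant at cct, using inverse-CCT linear weighting."""
    cct = min(max(cct, min(cct_a, cct_b)), max(cct_a, cct_b))  # clamp to range
    g = (1.0 / cct - 1.0 / cct_a) / (1.0 / cct_b - 1.0 / cct_a)
    return [[(1 - g) * a + g * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(cst_a, cst_b)]
```

At the two calibration points the blend returns the calibrated transforms exactly; the paper's point is that accuracy degrades for illuminations between and beyond them.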

36 citations


Journal ArticleDOI
TL;DR: An objective test grading system using an Android mobile phone was developed to save cost and time in grading; it works effectively with an accuracy of more than 95%.
Abstract: Grading devices are expensive, causing budget waste, and some are difficult to use. Therefore, an objective test grading system using an Android mobile phone was developed to save cost and time in grading. The system uses an image processing technique developed in Java. The camera on a mobile phone captures the edges of the answers, and an equation of the geometric simulation of the digital camera sensor is applied to identify the selected answers from the calculation of pixel intensity in real time. The objective test grading system via Android mobile phone works effectively, with an accuracy of more than 95%.

27 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a triangulation-based image representation method to accelerate the registration process by triangulating the images effectively, and applied a surface registration algorithm to obtain a registration map which is used to compute the registration of the high resolution image.

24 citations


Journal ArticleDOI
TL;DR: A 3D calibration field, the largest reported to date, is established for digital cameras mounted on UASs and evaluated in terms of accuracy and robustness.
Abstract: Due to the large number of technological developments in recent years, UAS systems are now used for monitoring purposes and in projects with high precision demands, such as 3D model-based creation of dams, reservoirs, historical monuments, etc. These unmanned systems are usually equipped with an automatic pilot device and a digital camera (photo/video, multispectral, near infrared, etc.) whose lens has distortions that can be determined in a calibration process. Currently, a method of “self-calibration” is used for the calibration of the digital cameras mounted on UASs, but, by using the method of calibration based on a 3D calibration object, the accuracy is improved in comparison with other methods. Thus, this paper has the objective of establishing a 3D calibration field, the largest reported to date, for the digital cameras mounted on UASs and evaluating it in terms of accuracy and robustness. In order to test the proposed calibration field, a digital camera mounted on a low-cost UAS was calibrated at three different heights: 23 m, 28 m, and 35 m, using two configurations for image acquisition. Then, a comparison was made between the residuals obtained for 100 Check Points (CPs) using self-calibration and test-field calibration, while the number of Ground Control Points (GCPs) varied and the heights were interchanged. Additionally, the parameters were tested on an oblique flight done 2 years before calibration, in manual mode at a mean altitude of 28 m. For all tests done in the case of the double-grid nadiral flight, the parameters calculated with the proposed 3D field improved the results by more than 50% when using the optimum and a large number of GCPs, and in all analyzed cases by 75% to 95% when using a minimum of 3 GCPs. In this context, it is necessary to conduct accurate calibration in order to increase the accuracy of UAS projects, and also to reduce field measurements.
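The lens distortions determined in such a calibration are commonly modeled with a radial polynomial (the Brown-Conrady model); a minimal sketch with hypothetical coefficients, not values from the paper:

```python
def apply_radial_distortion(x, y, k1, k2, k3):
    """Brown-Conrady radial model: the distorted point is the undistorted one
    scaled by (1 + k1*r^2 + k2*r^4 + k3*r^6), with (x, y) in normalized image
    coordinates relative to the principal point."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    return x * f, y * f
```

Calibration (self-calibration or a 3D test field) recovers k1..k3 together with the interior orientation; the inverse of this mapping is then applied to undistort measured image points.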

23 citations


Journal ArticleDOI
TL;DR: This study proposes a single exposure, spatially incoherent and interferenceless method capable of imaging multi-plane objects through scattering media using only a single lens and a digital camera.
Abstract: Scattering media have always posed obstacles for imaging through them. In this study, we propose a single exposure, spatially incoherent and interferenceless method capable of imaging multi-plane objects through scattering media using only a single lens and a digital camera. A point object and a resolution chart are precisely placed at the same axial location, and light scattered from them is focused onto an image sensor using a spherical lens. For both cases, intensity patterns are recorded under identical conditions using only a single camera shot. The final image is obtained by an adaptive non-linear cross-correlation between the response functions of the point object and of the resolution chart. The clear and sharp reconstructed image demonstrates the validity of the method.

22 citations


Journal ArticleDOI
TL;DR: In this article, pyColorimetry software was developed and tested taking into account the recommendations of the Commission Internationale de l'Eclairage (CIE), which allows users to control the entire digital image processing and the colorimetric data workflow.
Abstract: Determining the correct color is essential for proper cultural heritage documentation and cataloging. However, the methodology used in most cases limits the results since it is based either on perceptual procedures or on the application of color profiles in digital processing software. The objective of this study is to establish a rigorous procedure, from the colorimetric point of view, for the characterization of cameras, following different polynomial models. Once the camera is characterized, users obtain output images in the sRGB space that is independent of the sensor of the camera. In this article we report on pyColorimetry software that was developed and tested taking into account the recommendations of the Commission Internationale de l'Eclairage (CIE). This software allows users to control the entire digital image processing and the colorimetric data workflow, including the rigorous processing of raw data. We applied the methodology on a picture targeting Levantine rock art motifs in Remigia Cave (Spain) that is considered part of a UNESCO World Heritage Site. Three polynomial models were tested for the transformation between color spaces. The outcomes obtained were satisfactory and promising, especially with RAW files. The best results were obtained with a second-order polynomial model, achieving residuals below three CIELAB units. We highlight several factors that must be taken into account, such as the geometry of the shot and the light conditions, which are determining factors for the correct characterization of a digital camera.
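The second-order polynomial model that gave the best results can be sketched as follows; the basis ordering and the 3x10 coefficient layout are illustrative assumptions, and the coefficients themselves would be fitted by least squares against a measured color chart:

```python
def poly2_terms(r, g, b):
    """Second-order polynomial basis for camera characterization (RGB -> XYZ)."""
    return [1.0, r, g, b, r * r, g * g, b * b, r * g, r * b, g * b]

def characterize(coeffs, rgb):
    """Apply a fitted 3x10 coefficient matrix (nested lists) to one RGB triplet,
    producing a device-independent triplet such as CIE XYZ."""
    t = poly2_terms(*rgb)
    return [sum(c * v for c, v in zip(row, t)) for row in coeffs]
```

Residuals are then reported in CIELAB units after converting the predicted and reference values to CIELAB, which is how the sub-3-unit figure above is obtained.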

Proceedings ArticleDOI
01 May 2018
TL;DR: A novel screen watermarking technique is proposed that embeds hidden information on computer screens displaying text documents that is imperceptible during regular use, but can be extracted from pictures of documents shown on the screen, which allows an organization to reconstruct the place and time of the data leak from recovered leaked pictures.
Abstract: Organizations not only need to defend their IT systems against external cyber attackers, but also from malicious insiders, that is, agents who have infiltrated an organization or malicious members stealing information for their own profit. In particular, malicious insiders can leak a document by simply opening it and taking pictures of the document displayed on the computer screen with a digital camera. Using a digital camera allows a perpetrator to easily avoid a log trail that results from using traditional communication channels, such as sending the document via email. This makes it difficult to identify and prove the identity of the perpetrator. Even a policy prohibiting the use of any device containing a camera cannot eliminate this threat since tiny cameras can be hidden almost everywhere. To address this leakage vector, we propose a novel screen watermarking technique that embeds hidden information on computer screens displaying text documents. The watermark is imperceptible during regular use, but can be extracted from pictures of documents shown on the screen, which allows an organization to reconstruct the place and time of the data leak from recovered leaked pictures. Our approach takes advantage of the fact that the human eye is less sensitive to small luminance changes than digital cameras. We devise a symbol shape that is invisible to the human eye, but still robust to the image artifacts introduced when taking pictures. We complement this symbol shape with an error correction coding scheme that can handle very high bit error rates and retrieve watermarks from cropped and compressed pictures. We show in an experimental user study that our screen watermarks are not perceivable by humans and analyze the robustness of our watermarks against image modifications.
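The error-correction requirement above (handling very high bit error rates from cropped and compressed pictures) can be illustrated with the simplest such scheme, a repetition code with majority-vote decoding; the paper's actual coding scheme is not specified here:

```python
def rep_encode(bits, n=5):
    """Repeat each watermark bit n times before embedding."""
    return [b for b in bits for _ in range(n)]

def rep_decode(chips, n=5):
    """Recover each bit by majority vote over its n received chips."""
    return [1 if sum(chips[i:i + n]) * 2 > n else 0
            for i in range(0, len(chips), n)]
```

A rate-1/5 repetition code survives up to two flipped chips per bit; practical watermarks use stronger codes at a better rate, but the majority-vote principle is the same.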

Journal ArticleDOI
TL;DR: An effective defect detection method is presented for reflective surfaces such as glass, tile, and steel; its success rate is higher than that of commonly used methods.
Abstract: Automatic defect detection on reflective surfaces is a compelling problem. In particular, detection of tiny defects is almost impossible for the human eye or simple machine vision methods. Therefore, the need for fast and sensitive machine vision methods has gained importance. In this study, an effective defect detection method is presented for reflective surfaces such as glass, tile, and steel. Defects on the surface of the product are determined automatically without the need for human intervention. The proposed system involves an illumination unit, a digital camera, and a defect detection algorithm. First, a color image is taken by the digital camera. Then, properties of the taken image are selected; at this stage, the ambient condition of the lighting devices is very important, and reflections are minimized thanks to correct lighting. The selected properties are the red, green, and blue values and the luminance value. These properties are applied as fuzzy inputs, and the information from the inputs is evaluated according to determined rules. Finally, each pixel is classified as black or white. Thirty-two glass pieces were tested using the proposed system, and the proposed method was compared with commonly used methods. The success rate of the proposed algorithm is 83.5%, which is higher than that of the other algorithms.

Book ChapterDOI
21 Mar 2018
TL;DR: Prospective measurement parameters are determined by selecting scene modes for photographing corn plants with a digital camera; the most prospective scene mode is “daylight” for the “white balance” setting.
Abstract: Prospective measurement parameters are determined by selecting the scene modes for photographing corn plants with a digital camera. The digital camera can be used in the field to indicate the level of nitrogen nutrition of corn without additional artificial illumination. The most prospective camera scene mode is “daylight” for the “white balance” setting. Future research is needed to investigate the relationship between the plant optical parameters and plant nitrogen nutrition provision at different growth stages.

Posted Content
TL;DR: In the proposed method, the luminance of the input multi-exposure images is adjusted on the basis of the relationship between exposure values and pixel values, where the relationship is obtained by assuming that a digital camera has a linear response function.
Abstract: This paper proposes a novel multi-exposure image fusion method based on exposure compensation. Multi-exposure image fusion is a method to produce images without color saturation regions by using photos with different exposures. However, in conventional works, it is unclear how to determine appropriate exposure values, and moreover, it is difficult to set appropriate exposure values at the time of photographing due to time constraints. In the proposed method, the luminance of the input multi-exposure images is adjusted on the basis of the relationship between exposure values and pixel values, where the relationship is obtained by assuming that a digital camera has a linear response function. The use of a local contrast enhancement method is also considered to improve the input multi-exposure images. The compensated images are finally combined by one of the existing multi-exposure image fusion methods. In some experiments, the effectiveness of the proposed method is evaluated in terms of the tone-mapped image quality index, statistical naturalness, and discrete entropy, by comparing the proposed method with conventional ones.
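The linear-response assumption implies that a linear pixel value scales with 2^ΔEV, since each stop of exposure doubles the captured light; a minimal sketch of this exposure-compensation idea (not the paper's full method):

```python
def compensate_exposure(value, delta_ev):
    """Scale a linear pixel value in [0, 1] by 2**delta_ev (one stop doubles
    the exposure), clipping to the valid range; assumes the camera response
    function is linear."""
    return min(1.0, max(0.0, value * 2.0 ** delta_ev))
```

Applying this with several delta_ev values to the input images yields the adjusted luminances that are then handed to a standard fusion method.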

Journal ArticleDOI
TL;DR: An all-optical difference engine (AODE) sensor for detecting the defects in printed electronics produced with roll-to-roll processes that is based on the principle of coherent optical subtraction and is able to achieve high-speed inspection by minimising data post-processing.
Abstract: This paper presents an all-optical difference engine (AODE) sensor for detecting the defects in printed electronics produced with roll-to-roll processes. The sensor is based on the principle of coherent optical subtraction and is able to achieve high-speed inspection by minimising data post-processing. A self-comparison inspection strategy is introduced to allow defect detection by comparing the printed features and patterns that have the same nominal dimensions. In addition, potential applications of the AODE sensor in an on-the-fly pass-or-reject production control scenario are presented. A prototype AODE sensor using a digital camera is developed and demonstrated by detecting defects on several industrial printed electrical circuitry samples. The camera can be easily replaced by a low-cost photodiode to realise high-speed all-optical information processing and inspection. The developed sensor is capable of inspecting areas of 4 mm width with a resolution of the order of several micrometres, and can be duplicated in parallel to inspect larger areas without significant cost.

Journal ArticleDOI
TL;DR: Characterisation of a modified camera is presented to investigate the impact of the modification on the spectroradiometric and geometric image quality and measuring the spectral response quantifies the modified camera as a scientific device for more accurate measurements and provides indications of wavelengths that could improve documentation based on sensitivity.
Abstract: Spectral and 3D imaging techniques are used for museum imaging and cultural heritage documentation providing complementary information to aid in documenting the condition, informing the care, and increasing our understanding of objects. Specialised devices for spectral and 3D imaging may not be accessible for many heritage institutions, due to cost and complexity, and the modification of a consumer digital camera presents the potential of an accessible scientific tool for 2D and 3D spectral imaging. Consumer digital cameras are optimised for visible light, colour photography, but the underlying sensor is inherently sensitive to near ultraviolet, visible, and near infrared radiation. This research presents the characterisation of a modified camera to investigate the impact of the modification on the spectroradiometric and geometric image quality with the intention of the device being used as a scientific tool for cultural heritage documentation. The characterisation includes the assessment of 2D image quality looking at visual noise, sharpness, and sampling efficiency using the target and software associated with the Federal Agencies Digitization Guidelines Initiative. Results suggest that these modifications give rise to discrepancies in computed surface geometries of the order of ± 0.1 mm for small to medium sized objects used in the study and recorded in the round (maximum dimension 20 cm). Measuring the spectral response quantifies the modified camera as a scientific device for more accurate measurements and provides indications of wavelengths that could improve documentation based on sensitivity. The modification of a consumer digital camera provides a less expensive, high-resolution option for 2D and 3D spectral imaging.

Journal ArticleDOI
TL;DR: The results show that the watermark can be invisibly embedded, and reliably extracted, and its robustness against various types of distortions from the printing and camera-capturing processes is demonstrated.

Journal ArticleDOI
12 Apr 2018-PLOS ONE
TL;DR: The combination of X-ray CT and a digital camera makes it possible to successfully digitize specimens with complicated 3D structures accurately and allows us to browse both surface colors and internal structures.
Abstract: In this paper, we present a three-dimensional (3D) digitization technique for natural objects, such as insects and plants. The key idea is to combine X-ray computed tomography (CT) and photographs to obtain both complicated 3D shapes and surface textures of target specimens. We measure a specimen by using an X-ray CT device and a digital camera to obtain a CT volumetric image (volume) and multiple photographs. We then reconstruct a 3D model by segmenting the CT volume and generate a texture by projecting the photographs onto the model. To achieve this reconstruction, we introduce a technique for estimating a camera position for each photograph. We also present techniques for merging multiple textures generated from multiple photographs and recovering missing texture areas caused by occlusion. We illustrate the feasibility of our 3D digitization technique by digitizing 3D textured models of insects and flowers. The combination of X-ray CT and a digital camera makes it possible to successfully digitize specimens with complicated 3D structures accurately and allows us to browse both surface colors and internal structures.
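Estimating a camera position per photograph and projecting the photographs onto the CT-derived model both rely on the pinhole projection model; a minimal sketch with hypothetical intrinsics:

```python
def project_point(f, cx, cy, R, t, X):
    """Project 3D point X through a pinhole camera with focal length f (in
    pixels), principal point (cx, cy), rotation R (3x3 nested lists), and
    translation t (length-3); returns pixel coordinates (u, v)."""
    # Transform the point into the camera frame: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective divide and shift by the principal point
    return (f * Xc[0] / Xc[2] + cx, f * Xc[1] / Xc[2] + cy)
```

Camera-position estimation inverts this relation: given 2D-3D correspondences between a photograph and the segmented CT model, R and t are solved for, after which each photograph's colors can be projected onto the mesh as texture.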

Proceedings ArticleDOI
15 Apr 2018
TL;DR: In this article, the luminance of the input multi-exposure images is adjusted on the basis of the relationship between exposure values and pixel values, where the relationship is obtained by assuming that a digital camera has a linear response function.
Abstract: This paper proposes a novel multi-exposure image fusion method based on exposure compensation. Multi-exposure image fusion is a method to produce images without color saturation regions by using photos with different exposures. However, in conventional works, it is unclear how to determine appropriate exposure values, and moreover, it is difficult to set appropriate exposure values at the time of photographing due to time constraints. In the proposed method, the luminance of the input multi-exposure images is adjusted on the basis of the relationship between exposure values and pixel values, where the relationship is obtained by assuming that a digital camera has a linear response function. The use of a local contrast enhancement method is also considered to improve the input multi-exposure images. The compensated images are finally combined by one of the existing multi-exposure image fusion methods. In some experiments, the effectiveness of the proposed method is evaluated in terms of the tone-mapped image quality index, statistical naturalness, and discrete entropy, by comparing the proposed method with conventional ones.

Patent
11 May 2018
TL;DR: In this paper, a user interface for operating a dual-aperture digital camera included in a host device is presented, consisting of a screen configured to display at least one icon and an image of a scene acquired with at least one of two cameras, a frame defining a field of view of a Tele image, the frame superposed on a Wide image having a Wide field of view, and means to switch the screen from displaying the Wide image to displaying the Tele image and vice versa.
Abstract: A user interface for operating a dual-aperture digital camera included in a host device, the dual-aperture digital camera including a Wide camera and a Tele camera, the user interface comprising a screen configured to display at least one icon and an image of a scene acquired with at least one of the Tele and Wide cameras, a frame defining a field of view of a Tele image, the frame superposed on a Wide image having a Wide field of view, and means to switch the screen from displaying the Wide image to displaying the Tele image and vice versa.

Journal ArticleDOI
TL;DR: This paper proposes a novel pseudo multi-exposure image fusion method based on a single image that utilizes the relationship between the exposure values and pixel values, which is obtained by assuming that a digital camera has a linear response function.
Abstract: This paper proposes a novel pseudo multi-exposure image fusion method based on a single image. Multi-exposure image fusion is used to produce images without saturation regions by using photos with different exposures. However, it is difficult to take photos suited for multi-exposure image fusion when photographing dynamic scenes or recording a video. In addition, multi-exposure image fusion cannot be applied to existing images with a single exposure or to videos. The proposed method enables us to produce pseudo multi-exposure images from a single image. To produce multi-exposure images, the proposed method utilizes the relationship between the exposure values and pixel values, which is obtained by assuming that a digital camera has a linear response function. Moreover, it is shown that the use of a local contrast enhancement method allows us to produce pseudo multi-exposure images with higher quality. Most conventional multi-exposure image fusion methods are also applicable to the proposed multi-exposure images. Experimental results show the effectiveness of the proposed method by comparing it with conventional ones.

Proceedings ArticleDOI
18 Oct 2018
TL;DR: The Zwicky Transient Facility as mentioned in this paper is a robotic-observing program, in which a newly engineered 600-MP digital camera with a pioneeringly large field of view, 47 square degrees, will be installed into the 48-inch Samuel Oschin Telescope at the Palomar Observatory.
Abstract: The Zwicky Transient Facility is a new robotic-observing program, in which a newly engineered 600-MP digital camera with a pioneeringly large field of view, 47 square degrees, will be installed into the 48-inch Samuel Oschin Telescope at the Palomar Observatory. The camera will generate ~1 petabyte of raw image data over three years of operations. In parallel related work, new hardware and software systems are being developed to process these data in real time and build a long-term archive for the processed products. The first public release of archived products is planned for early 2019, which will include processed images and astronomical-source catalogs of the northern sky in the g and r bands. Source catalogs based on two different methods will be generated for the archive: aperture photometry and point-spread-function fitting.

Journal ArticleDOI
TL;DR: The green stability assumption is used to fine-tune the values of some common illumination estimation methods by using only non-calibrated images; the whole process is much faster since calibration is not required and thus time is saved.
Abstract: In the image processing pipeline of almost every digital camera, there is a part for removing the influence of illumination on the colors of the image scene. Tuning the parameter values of an illumination estimation method for maximal accuracy requires calibrated images with known ground-truth illumination, but creating them for a given sensor is time-consuming. In this paper, the green stability assumption is proposed that can be used to fine-tune the values of some common illumination estimation methods by using only non-calibrated images. The obtained accuracy is practically the same as when training on calibrated images, but the whole process is much faster since calibration is not required and thus time is saved. The results are presented and discussed. The source code website is provided in Section Experimental Results.

Patent
04 Sep 2018
TL;DR: In this article, a GAN-based method for joint demosaicing and denoising of CFA (Color Filter Array) images was proposed, which consists of the following steps: (1) a training sample set is acquired; (2) the GAN is built; (3) parameters of a 9-layer convolutional neural network are updated; (4) parameters of a 39-layer convolutional neural network are updated; and (5) it is judged whether the number of updates of the two networks has reached 200.
Abstract: The invention discloses a GAN (Generative Adversarial Nets)-based method for joint demosaicing and denoising of CFA (Color Filter Array) images. The method comprises the following steps: (1) a training sample set is acquired; (2) the GAN is built; (3) parameters of a 9-layer convolutional neural network are updated; (4) parameters of a 39-layer convolutional neural network are updated; (5) it is judged whether the number of updates of the 39-layer and 9-layer convolutional neural networks has reached 200; if yes, step (6) is executed, otherwise step (3) is executed; (6) a nonlinear mapping relationship is built; and (7) an image after demosaicing and denoising is acquired. The color information of a noisy CFA image obtained by a digital camera can be well recovered, the noise introduced during the image acquisition process can be effectively suppressed, the appearance of unnatural colors is reduced, and the visual quality of the color image is improved.
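The classical baseline that learned demosaicing methods such as the one above aim to improve on is bilinear interpolation of the Bayer mosaic. A minimal sketch for the green channel only, assuming an RGGB pattern and a list-of-lists image (this is the textbook baseline, not the patent's network):

```python
def interpolate_green(cfa):
    """Bilinear interpolation of the green channel from an RGGB Bayer
    mosaic: at R/B sites, G is the average of the available 4-neighbours."""
    h, w = len(cfa), len(cfa[0])
    green = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if (y + x) % 2 == 1:           # green site in an RGGB mosaic
                green[y][x] = cfa[y][x]
            else:                          # red or blue site: average neighbours
                nbrs = [cfa[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w]
                green[y][x] = sum(nbrs) / len(nbrs)
    return green
```

Averaging across edges is exactly what produces the "unnatural colors" (zipper and false-color artifacts) that the patent's learned nonlinear mapping is designed to avoid, and interpolating noisy samples also spreads the noise, which motivates performing denoising jointly.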

Book ChapterDOI
01 Jan 2018
TL;DR: This chapter provides an introduction to computer vision and covers in detail digital image processing, device characterization and calibration, color measurement using digital cameras and scanners, and sRGB methods for measuring and demonstrating the color of textiles.
Abstract: Color in the textile industry makes textiles more appealing and attractive. Color is one of the most important characteristics of a textile, and computer vision techniques have been widely used to measure and evaluate it. Computer vision comprises techniques for processing, analyzing, and understanding digital images, and involves the development of a theoretical and algorithmic basis for automatic visual understanding. Color-imaging technology in the form of color photography, color monitors, scanners, and digital cameras has become almost ubiquitous in modern life. This chapter provides an introduction to computer vision and covers in detail digital image processing, device characterization and calibration, and color measurement using digital cameras and scanners. It also presents applications of computer vision techniques, namely color measurement by means of digital cameras and scanners using various procedures, i.e., the polynomial regression, neuro-fuzzy, artificial neural network, target-based, targetless, look-up table (LUT), and sRGB methods, for measuring and demonstrating the color of textiles.
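The polynomial regression procedure mentioned above characterizes a camera or scanner by expanding device RGB values into polynomial terms and fitting coefficients against measured target patches by least squares. A minimal sketch under those assumptions (the second-order feature set and the normal-equations solver are illustrative, not the chapter's exact formulation):

```python
def poly_features(r, g, b):
    """Second-order polynomial expansion of device RGB, a common
    choice for camera/scanner colorimetric characterization."""
    return [1.0, r, g, b, r * g, g * b, r * b, r * r, g * g, b * b]

def fit_least_squares(X, y):
    """Solve the normal equations (X^T X) w = X^T y by Gaussian
    elimination with partial pivoting; one call per output channel
    (e.g. CIE X, Y, Z measured on a color target)."""
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))  # pivot row
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][c] * w[c] for c in range(i + 1, n))) / A[i][i]
    return w
```

In practice one fits three such coefficient vectors, one per tristimulus channel, from a chart of patches measured with both the device and a spectrophotometer; higher-order expansions reduce fitting error but generalize less well.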

Patent
02 Jul 2018
TL;DR: A digital camera processing system with software to manage taking photos with a digital camera is described in this article, where a downloaded software component controls the digital camera software and causes a handheld mobile device to perform operations.
Abstract: A digital camera processing system with software to manage taking photos with a digital camera. Camera software controls the digital camera. A downloaded software component controls the digital camera software and causes a handheld mobile device to perform operations. The operations may include instructing a user to have the digital camera take photos of a check; displaying an instruction on a display of the handheld mobile device to assist the user in having the digital camera take the photos; or assisting the user as to an orientation for taking the photos with the digital camera. The digital camera processing system may generate a log file including a bi-tonal image formatted as a TIFF image.
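The bi-tonal TIFF mentioned above is the 1-bit black-and-white format typically required for exchanged check images. A minimal sketch of the thresholding step that produces such an image, with a crude midpoint threshold standing in for adaptive methods such as Otsu's (the function and image representation are assumptions, not the patent's implementation):

```python
def to_bitonal(gray, threshold=None):
    """Convert a grayscale image (values 0-255) to a bi-tonal (1-bit)
    image. Without an explicit threshold, use the midpoint between the
    darkest and brightest pixel, a crude stand-in for adaptive methods."""
    flat = [v for row in gray for v in row]
    if threshold is None:
        threshold = (min(flat) + max(flat)) / 2
    return [[1 if v >= threshold else 0 for v in row] for row in gray]
```

The resulting 1-bit raster is what would then be packed into a TIFF container; adaptive thresholding matters here because uneven lighting in handheld photos easily turns printed check text into solid black or white under a fixed threshold.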

Patent
11 Jan 2018
TL;DR: In this article, the authors proposed a solution to prevent user confusion when an external image processing device such as a digital camera performs conversion processing of image data.
Abstract: PROBLEM TO BE SOLVED: To prevent user confusion when an external image processing device such as a digital camera performs conversion processing of image data. SOLUTION: A large-capacity storage (109A) stores RAW data and the like. A CPU (101A) determines whether the RAW data stored in the large-capacity storage (109A) can be converted into JPEG data by a camera (100B). If the RAW data is determined to be convertible, the CPU (101A) causes the camera (100B) to display, on a display device (106A), a GUI for requesting the start of the conversion processing. SELECTED DRAWING: Figure 1

DOI
01 Jan 2018
TL;DR: The two‐part entry of this series will combine simple geometrical relationships and fundamental laws of electromagnetic radiation to shed some light on the term spatial resolution and explain its difference from the related concept of spatial resolving power.
Abstract: One of the most common words in the remote sensing (or even general imaging) literature is ‘resolution’. Despite its abundant use, and because the concept is often misjudged as uncomplicated, most modern literature relies on rather sloppy ‘resolution’ definitions that sometimes even contradict each other within the same text. In part, this confusion and misconception arises from the fact that technical as well as broader, application‐specific explanations for resolution exist, each relying on different ways to describe resolution characteristics. As a result, the term ‘resolution’ has been used for many years as a handy go‐to term to cover many concepts: “this satellite produces images with a resolution of 30 m”; “there is an increasing number of high‐resolution camera sensors on the market” or “the resolution of the human eye is coarser than an eagle’s eye”. Nowadays, one might wonder if resolution is a particular image characteristic, a property of the imaged scene, or instead related to the imaging sensor or perhaps the camera’s lens. It is thus fair to say that the technical concept of resolution – or more specifically spatial resolution – and all its implications are commonly poorly understood, which leads to many popular, accepted but completely wrong statements. In the photographic literature, a widespread example is to refer to the total number of camera image pixels (i.e. the pixel count) as the image resolution of that specific digital camera. This is erroneous, since the same 24‐megapixel camera can capture a photograph of an Attic black‐figure amphora as well as a complete submerged Greek temple. The resulting two photographs, although both contain 24 megapixels, might reveal scene details of 0.01 cm and 2 cm respectively.
In the remote sensing community, a prevalent misconception is that a satellite image with a 1 m resolution automatically means that we can recognise all objects in that image which have a width equal to or larger than 1 m. In this two‐part entry of our series, we will combine simple geometrical relationships (part 1) and fundamental laws of electromagnetic radiation (part 2) to shed some light on the term spatial resolution and explain its difference from the related concept of spatial resolving power. Like the previous two entries, this two‐part text can only scratch the surface of this very complex topic. Nevertheless, the aim is still to provide solid definitions and enough background knowledge to easily correct many of the “common knowledge” but ill‐founded statements such as the ones mentioned above.
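The geometrical relationship behind the amphora-versus-temple example can be made concrete. For a nadir-looking frame camera, the ground sample distance (GSD), the scene footprint of a single pixel, follows from similar triangles and depends on pixel pitch, focal length, and object distance, not on the pixel count. A small illustrative computation (the numbers below are assumed, not from the text):

```python
def ground_sample_distance(pixel_pitch, focal_length, distance):
    """Scene footprint of one pixel for a nadir-looking frame camera,
    from similar triangles: GSD = pixel pitch * distance / focal length.
    All lengths in metres; returns metres on the object/ground."""
    return pixel_pitch * distance / focal_length
```

With an assumed 5 µm pixel pitch and a 50 mm lens, the same sensor yields a GSD of 1 m from 10 km altitude but only 0.1 mm from 1 m away, which is why a camera's megapixel count alone says nothing about the detail it resolves in a given scene.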

Proceedings ArticleDOI
12 Jan 2018
TL;DR: This paper develops an optoelectronic measurement system using a monocular digital camera and presents research on the measurement theory, visual target design, calibration algorithm design, software implementation, and so on.
Abstract: Dynamic envelope measurement plays a very important role in the external dimension design of high-speed trains, yet to date no digital measurement system has solved this problem. This paper develops an optoelectronic measurement system using a monocular digital camera and presents research on the measurement theory, visual target design, calibration algorithm design, software implementation, and so on. The system consists of several CMOS digital cameras, several luminous measurement targets, a scale bar, data processing software, and a terminal computer. It offers a large measurement volume, a high degree of automation, strong anti-interference ability, noise rejection, and real-time measurement. In this paper, we address key technologies such as the transfer, storage, and processing of high-resolution digital images from multiple cameras. The experimental data show that the repeatability of the system is within 0.02 mm and the distance error is within 0.12 mm over the whole workspace. These experiments verify the rationality of the system design and the correctness, precision, and effectiveness of the relevant methods.
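Monocular target-measurement systems like the one above rest on the pinhole projection model, and camera calibration minimises the reprojection error between observed target centroids and their model projections. A minimal sketch of that residual, assuming a distortion-free pinhole model (function names are illustrative, not the authors' software):

```python
def project(point, fx, fy, cx, cy):
    """Pinhole projection of a 3-D point (X, Y, Z) in camera
    coordinates onto the image plane, ignoring lens distortion."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_rmse(points3d, points2d, fx, fy, cx, cy):
    """RMS pixel distance between observed image points and the
    reprojected 3-D targets, the quantity calibration minimises."""
    se = 0.0
    for p3, (u, v) in zip(points3d, points2d):
        pu, pv = project(p3, fx, fy, cx, cy)
        se += (pu - u) ** 2 + (pv - v) ** 2
    return (se / len(points3d)) ** 0.5
```

Real systems additionally model radial and tangential lens distortion and estimate the camera pose per frame; sub-0.1 mm accuracies like those reported above depend on accurate centroiding of the luminous targets and on the scale bar fixing the metric scale.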