
Showing papers on "Digital camera published in 2022"


Journal ArticleDOI
TL;DR: In this article, the authors evaluated the usefulness of a digital camera and a flatbed scanner for acquiring the cross-section images intended to calculate the texture parameters for cultivar discrimination of quince.

8 citations




Journal ArticleDOI
TL;DR: The proposed method can automatically obtain stable, high-quality speckle pattern images and thus deliver better DIC measurements than regular DIC techniques that use a fixed camera exposure time, showing great potential in high-temperature DIC applications.
Abstract: We present a method that can automatically determine the optimal camera exposure time for high-quality deformation measurement with digital image correlation (DIC) techniques. The proposed method needs to capture a series of surface images of a test sample at its reference state with different camera exposure times. The relationship between the mean intensity gradients (MIGs) and average grayscales of these images reveals that the best quality (i.e. maximum MIG) of a speckled sample surface always corresponds to a certain average grayscale. Thus, the proposed method can serve two purposes in DIC practice. First, at the initial state, the camera exposure time can be adjusted automatically to obtain a reference image with the best speckle pattern quality. Second, by adjusting the camera exposure time to keep the average grayscale of an image close to the predetermined optimal value, the proposed method can adaptively output high-quality deformed images with an almost constant speckle pattern quality, regardless of serious ambient light variations. Experimental results demonstrated that the proposed method can automatically obtain stable, high-quality speckle pattern images, thus delivering better DIC measurements than regular DIC techniques using a fixed camera exposure time. Because the present automatic exposure control method allows a nonprofessional operator to consistently obtain high-quality speckle pattern images that warrant high-accuracy DIC measurements, it is suggested that the method be used as routine practice in practical DIC applications.
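The exposure-selection step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `mean_intensity_gradient` and `pick_exposure` are hypothetical names, and a plain finite-difference gradient stands in for the exact MIG formulation used in the paper.

```python
import numpy as np

def mean_intensity_gradient(img):
    """Mean intensity gradient (MIG): the average magnitude of the local
    grayscale gradient, a standard speckle-pattern quality metric."""
    gy, gx = np.gradient(np.asarray(img, float))
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

def pick_exposure(images_by_exposure):
    """Given {exposure_time: reference image}, return the exposure whose
    image maximizes the MIG together with that image's average grayscale,
    which then serves as the target for adaptive exposure control."""
    best_t = max(images_by_exposure,
                 key=lambda t: mean_intensity_gradient(images_by_exposure[t]))
    return best_t, float(images_by_exposure[best_t].mean())
```

In use, the chosen average grayscale would be tracked frame to frame, nudging the exposure time whenever ambient light drifts.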

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report an experimental campaign carried out in the Institute of Engineering Geodesy and Measurement Systems Laboratory at Graz University of Technology (Austria), where pictures of a predetermined moving target, acquired with the IATS onboard optical camera (focal length of 231 mm), were processed through Digital Image Correlation (DIC) to obtain displacement measurements.
Abstract: The paper addresses a relevant subject in structural health monitoring for civil engineering applications such as bridges and other large structures. An experimental laboratory program on the accuracy and precision of remote displacement measurements is reported. Could the onboard optical camera of a total station (Image-Assisted Total Station (IATS), Leica Nova) be exploited to integrate other measurement techniques, or even temporarily replace ordinary instrumental observations in case of instrument failure? The experimental campaign was carried out in the Institute of Engineering Geodesy and Measurement Systems Laboratory at Graz University of Technology (Austria). In particular, pictures of a predetermined moving target, acquired with the IATS’ onboard optical camera (focal length of 231 mm), were processed through Digital Image Correlation (DIC) to obtain displacement measurements. The obtained results were validated against the “ground truth” provided by the laboratory equipment. The same was done capturing pictures with a Digital Single-Lens Reflex (DSLR) consumer camera (focal lengths of 55 and 85 mm) using the same experimental setup. Once validated, a comparison between the IATS’ optical camera and the DSLR observations returned very similar accuracy and precision. The experimental outcomes suggest that comparable results can be achieved by processing pictures from either the IATS’ onboard camera or a consumer DSLR.

6 citations


Journal ArticleDOI
TL;DR: In this paper , an analysis of digital images by smartphone was used for copper quantification in sugarcane spirit (cachaça) samples through the formation of blue complex between copper and cuprizone.

6 citations




Journal ArticleDOI
TL;DR: In this paper, a 3D model was reconstructed using Structure-from-Motion Multi-View Stereo (SfM-MVS) photogrammetry from images captured at 27 locations and compared with a reference laser scan and a more conventional digital single-lens reflex (DSLR) camera-based model.
Abstract: Structure-from-Motion Multi-View Stereo (SfM-MVS) photogrammetry is a viable method to digitize underground spaces for inspection, documentation, or remote mapping. However, the conventional image acquisition process can be laborious and time-consuming. Previous studies confirmed that the acquisition time can be reduced when using a 360-degree camera to capture the images. This paper demonstrates a method for rapid photogrammetric reconstruction of tunnels using a 360-degree camera. The method is demonstrated in a field test executed in a tunnel section of the Underground Research Laboratory of Aalto University in Espoo, Finland. A 10 m-long tunnel section with exposed rock was photographed using the 360-degree camera from 27 locations and a 3D model was reconstructed using SfM-MVS photogrammetry. The resulting model was then compared with a reference laser scan and a more conventional digital single-lens reflex (DSLR) camera-based model. Image acquisition with a 360-degree camera was 3x faster than with a conventional DSLR camera and the workflow was easier and less prone to errors. The 360-degree camera-based model achieved a 0.0046 m distance accuracy error compared to the reference laser scan. In addition, the orientation of discontinuities was measured remotely from the 3D model and the digitally obtained values matched the manual compass measurements of the sub-vertical fracture sets, with an average error of 2–5°.

5 citations


Journal ArticleDOI
01 May 2022 - Sensors
TL;DR: A novel point-to-point camera distortion calibration method that requires only dozens of images to obtain a dense distortion rectification map and contributes a 28.5% improvement in reprojection error over the polynomial distortion model.
Abstract: The camera is the main sensor in vision-based human activity recognition, and high-precision calibration of its distortion is an important prerequisite for the task. Current studies have shown that multi-parameter model methods achieve higher accuracy than traditional methods in the process of camera calibration. However, these methods need hundreds or even thousands of images to optimize the camera model, which limits their practical use. Here, we propose a novel point-to-point camera distortion calibration method that requires only dozens of images to obtain a dense distortion rectification map. We have designed an objective function based on the deformation between the original images and the projection of reference images, which can eliminate the effect of distortion when optimizing camera parameters. Dense features between the original images and the projection of the reference images are calculated by digital image correlation (DIC). Experiments indicate that our method obtains results comparable to the multi-parameter model method that uses a large number of pictures and contributes a 28.5% improvement in reprojection error over the polynomial distortion model.
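For context, the polynomial (radial) distortion model the proposed method is benchmarked against, and the reprojection error used to compare methods, can be sketched as follows. The names `radial_distort` and `rms_reprojection_error` are illustrative, and the two-coefficient model is an assumption for the sketch.

```python
import numpy as np

def radial_distort(xy, k1, k2, center=(0.0, 0.0)):
    """Two-coefficient polynomial radial distortion model:
    p_d = c + (p - c) * (1 + k1*r^2 + k2*r^4), with r the distance of
    point p from the distortion center c (normalized coordinates)."""
    c = np.asarray(center, float)
    p = np.asarray(xy, float) - c
    r2 = np.sum(p ** 2, axis=1, keepdims=True)
    return c + p * (1.0 + k1 * r2 + k2 * r2 ** 2)

def rms_reprojection_error(observed, reprojected):
    """Root-mean-square point-to-point distance, the metric against which
    the paper's 28.5% improvement is reported."""
    d = np.asarray(observed, float) - np.asarray(reprojected, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```

A dense rectification map generalizes this idea by storing a per-pixel correction instead of a low-order polynomial.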

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluate the efficacy of a commercially available, close-focus automated camera trap for monitoring insect-plant interactions and insect behavior, and conclude that scheduled camera traps are an effective and relatively inexpensive tool for monitoring interactions between plants and insects of all size classes.
Abstract: Insect and pollinator populations are vitally important to the health of ecosystems, food production, and economic stability, but are declining worldwide. New, cheap, and simple monitoring methods are necessary to inform management actions and should be available to researchers around the world. Here, we evaluate the efficacy of a commercially available, close‐focus automated camera trap to monitor insect–plant interactions and insect behavior. We compared two video settings—scheduled and motion‐activated—to a traditional human observation method. Our results show that camera traps with scheduled video settings detected more insects overall than humans, but relative performance varied by insect order. Scheduled cameras significantly outperformed motion‐activated cameras, detecting more insects of all orders and size classes. We conclude that scheduled camera traps are an effective and relatively inexpensive tool for monitoring interactions between plants and insects of all size classes, and their ease of accessibility and set‐up allows for the potential of widespread use. The digital format of video also offers the benefits of recording, sharing, and verifying observations.

4 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore the theory behind the photogrammetry process and describe the impact of the number of pictures taken per rotation when photographing archaeological objects, and how this factor affects the digital reconstruction of the objects and the quality of the final model.

3 citations


Journal ArticleDOI
TL;DR: In this paper, a micro image strain sensing (MISS) method was improved based on a previous preliminary study, in which a digital microscope camera is adopted instead of the combination of a smartphone and a microscope for long-term monitoring.
Abstract: Strain response is one of the most widely used methods for structural health monitoring because of its high accuracy in static and dynamic measurement. Although several sensing technologies have been developed for strain measurement, they need specialized equipment such as a fiber Bragg grating interrogator, which is costly and cumbersome for general structures, especially for small infrastructures. Therefore, a micro image strain sensing (MISS) method was improved based on our previous preliminary study. The novelty of the MISS sensor is that a digital microscope camera is adopted instead of the combination of a smartphone and a microscope for long-term monitoring. Moreover, the micro image strain sensing system was developed on the basis of an improved speeded-up robust features (SURF) algorithm. Experimental results showed that the strain measured by the micro image strain sensing system is consistent with that measured by a linear variable differential transformer, indicating that the system is stable and robust. The system has the advantages of low cost, high accuracy, and ease of operation. It does not need interrogators and can be used to measure strain quickly even by non-professionals. The proposed micro image strain sensing method will serve as a promising strain measurement alternative in the field of structural health monitoring.

Journal ArticleDOI
TL;DR: In this paper, the authors used the Yuneec E10T thermal imaging camera, with a 320 × 240 pixel matrix and a 4.3 mm focal length, dedicated to working with the H520 UAV, to obtain data on the natural environment.
Abstract: Thermal imaging is an important source of information for geographic information systems (GIS) in various aspects of environmental research. This work collects a variety of experiences related to the use of the Yuneec E10T thermal imaging camera, with a 320 × 240 pixel matrix and a 4.3 mm focal length, dedicated to working with the Yuneec H520 UAV in obtaining data on the natural environment. Unfortunately, as a commercial product, the camera ships without radiometric characteristics. Using the heated bed of an Omni3d Factory 1.0 printer, radiometric calibration was performed in the range of 18–100 °C (the high-sensitivity range, i.e. the high-gain setting of the camera). The stability of the thermal camera's operation was assessed using several sets of a large number of photos, acquired over three areas in the form of aerial blocks composed of parallel rows with a specific sidelap and longitudinal coverage. For these image sets, statistical parameters of the thermal images, such as the mean, minimum, and maximum, were calculated and then analyzed according to the order of registration. Analyses of photos taken every 10 m in vertical profiles up to 120 m above ground level (AGL) were also performed to show the changes in image temperature established within the reference surface. Using the established radiometric calibration, it was found that the camera maintains linearity between the observed temperature and the measured brightness temperature in the form of a digital number (DN). It was also found that the camera is sometimes unstable after being turned on, which indicates the necessity of letting the device adjust to external conditions for several minutes or taking photos over an area larger than the region of interest.
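The linear DN-to-temperature relationship that such a radiometric calibration establishes can be sketched as a simple least-squares fit. The function names are illustrative, and the coefficients in the test below are synthetic, not the paper's calibration values.

```python
import numpy as np

def fit_radiometric_calibration(dn, temp_c):
    """Least-squares fit of the linear DN-to-temperature relationship
    observed for the camera: T = a * DN + b."""
    a, b = np.polyfit(dn, temp_c, 1)
    return a, b

def dn_to_temperature(dn, a, b):
    """Convert raw digital numbers to temperature (degC) using the fit."""
    return a * np.asarray(dn, float) + b
```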


Journal ArticleDOI
TL;DR: In this article, the authors describe an image-assisted total station (IATS), which combines a geodetic total station with a digital camera and enables digital image analysis of the captured images together with angle measurement.
Abstract: The combination of a geodetic total station with a digital camera opens up the possibilities of digital image analysis of the captured images together with angle measurement. In general, such a combination is called an image-assisted total station (IATS). The prototype of an IATS called MoDiTa (Modular Digital Imaging Total Station) developed at i3mainz is designed in such a way that an existing total station or tachymeter can be extended with an industrial camera in a few simple steps. The ad hoc conversion of the measuring system opens up further areas of application for existing commercial measuring systems, such as high-frequency aiming, autocollimation tasks, or tracking of moving targets. MoDiTa is calibrated directly on site using image-processing and adjustment methods. The crosshair plane is captured for each image and provides identical points in the camera image as well as in the reference image. However, since the camera is not precisely coaxially mounted and movement of the camera cannot be ruled out, the camera is continuously observed during the entire measurement. Various image-processing algorithms determine the crosshairs in the image and compare the results to detect movement. In the following, we explain the self-calibration and the methods of crosshair detection as well as the necessary matching. We use exemplary results to show to what extent the parameters of self-calibration remain valid even if the distance, and thus the focus, between instrument and target object changes. Through this, one calibration applies to different distances and eliminates the need for repeated, time-consuming calibrations during typical applications.

Journal ArticleDOI
TL;DR: In this paper, a coaxial dual-camera digital image correlation system using a hypercentric lens was proposed to determine the position of defects in the inner wall of a pipeline under load.
Abstract: A coaxial dual-camera digital image correlation system using a hypercentric lens was proposed to determine the defect position in the inner wall of a pipeline under loads. Compared with the traditional dual-camera system, this system ensures that both cameras can capture a 360-degree panoramic image in the same position. Herein, the imaging principle of the system was introduced in detail. In addition, the effectiveness and accuracy of the proposed method were verified through verification and application experiments.

Proceedings ArticleDOI
01 Aug 2022
TL;DR: In this article, the authors propose the use of high-performance computing and deep learning to create prediction models that can be deployed as part of smart agriculture solutions in the poultry sector.
Abstract: This paper proposes the use of high-performance computing and deep learning to create prediction models that can be deployed as part of smart agriculture solutions in the poultry sector. The idea is to create object detection models that can be ported onto edge devices equipped with camera sensors for use in Internet of Things systems for poultry farms. The object detection models could be used to create smart camera sensors capable of counting chickens or detecting dead ones. Such camera sensor kits could become part of digital poultry farm management systems in the near future. The paper discusses the approach to the development and selection of the machine learning and computational tools needed for this process. Initial results, based on the use of the Faster R-CNN network and high-performance computing, are presented together with the metrics used in the evaluation process. The achieved accuracy is satisfactory and allows for easy counting of chickens. More experimentation is needed with network model selection and training configurations to increase the accuracy and make the predictions useful for developing a dead-chicken detector.

Book ChapterDOI
01 Jan 2022
TL;DR: With a fixed array, or "bramble," of Raspberry Pi computer boards and camera modules placed strategically in a greenhouse, this fixed camera platform implements rapid and automated plant phenotyping.

Journal ArticleDOI
TL;DR: In this paper, the authors describe the optical principles and utility of an inexpensive, portable, non-contact smartphone-based digital camera for the acquisition of fundus photographs for the evaluation of retinal disorders.
Abstract: To describe the optical principles and utility of an inexpensive, portable, non-contact smartphone-based digital camera for the acquisition of fundus photographs for the evaluation of retinal disorders. The digital camera has a high-quality glass 25 D condensing lens attached to a 21.4-megapixel smartphone camera. The white-emitting LED light of the smartphone at low illumination levels is used to visualize the fundus and limit source reflection. The camera captures a high-definition (5344 × 4016) fundus image on a complementary metal-oxide-semiconductor (CMOS) sensor with an area of 6.3 mm × 4.5 mm. The auto-acquisition mode of the device facilitates the quick capture of the image from continuous video streaming in a fraction of a second. This new smartphone-based camera provides high-resolution digital images of the retina (50° telescopic view) at a fraction of the cost (USD 1000) of established, non-transportable, office-based fundus photography systems. The portable, user-friendly smartphone-based digital camera is a useful alternative for the acquisition of fundus photographs and provides a tool for screening retinal diseases in various clinical settings such as primary care clinics or emergency rooms. The ease of acquiring photographs from a continuously streaming video of the fundus obviates the need for a skilled photographer.

Journal ArticleDOI
TL;DR: In this paper, an image-based machine learning model is proposed to predict the correlated color temperature of a scene with the help of the Macbeth ColorChecker color rendition chart and a DSLR camera.
Abstract: Information about the correlated color temperature influencing a scene due to the surrounding lighting is vital, especially for circadian lighting and photography. This paper proposes a novel image-based machine learning model to predict the correlated color temperature in a scene with the help of the Macbeth ColorChecker color rendition chart and a DSLR camera. In the proposed technique, the researcher fixes the white balance setting in the camera, thereby forcing a color difference in the captured image of the Macbeth ColorChecker chart placed in the scene. The Bayesian neural network model takes the color difference values of the six spectrally neutral patches of the Macbeth ColorChecker chart as inputs for CCT prediction. The color differences are calculated using the CIEDE2000 color difference formula. Four models with camera white balance settings of 5000 K, 6500 K, 8000 K, and 10000 K were developed and analyzed. It was experimentally found that the correlated color temperature prediction error is less than five percent for the proposed model with the white balance setting in the DSLR camera at 10000 K. The proposed model performed consistently under varied lighting levels and mixed-CCT lighting conditions set up with LED, incandescent, and fluorescent lamps.
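The colour-difference features feeding the prediction model can be illustrated with a simplified sketch. Note that the plain Euclidean CIELAB distance (CIE76) below is a stand-in for the considerably longer CIEDE2000 formula the paper actually uses, and the function names are hypothetical.

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Euclidean CIELAB distance (CIE76) - a simplified stand-in for the
    CIEDE2000 colour-difference formula used in the paper."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def cct_features(reference_labs, captured_labs):
    """Colour-difference feature vector over the six spectrally neutral
    patches, forming the input to the CCT prediction model."""
    return [delta_e_cie76(r, c) for r, c in zip(reference_labs, captured_labs)]
```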


Proceedings ArticleDOI
27 Mar 2022
TL;DR: A light, high-resolution video camera system based on an FPGA is designed; the timing drive of the CMOS sensor, the output data remapping, and the Camlink interface are implemented in the Verilog hardware description language, and an imaging experiment is carried out.
Abstract: To obtain high-resolution, real-time digital images of a monitoring target and meet the requirements of miniaturization, a light and high-resolution video camera system based on an FPGA is designed. The camera uses the large-array CMOS sensor CMV12000 produced by CMOSIS and transfers the output data to the computer through a Camlink interface. By using the FPGA as the core of timing control and completing the design of the CMOS sensor's timing drive, the output data remapping, and the Camlink interface in the Verilog hardware description language, the design of the camera was realized and an imaging experiment was carried out. The results show that the driving sequence of the camera is reasonable and the communication with the computer is correct. The camera operates stably and takes high-quality images at a resolution of 4096 × 3072.

Journal ArticleDOI
TL;DR: In this article , the authors compared the results obtained by two different camera lenses in the process of measuring and processing data, as well as in the accuracy of the 3D model obtained.
Abstract: Technological advances in digital photogrammetry enable sufficiently accurate results to be obtained independently of the type of camera used. This paper focuses on the application of a specific methodology to obtain high-precision three-dimensional models using photogrammetric techniques, comparing the process and the results obtained when using two different camera lenses. In order to carry out this comparative analysis, two different frieze plasterworks of high heritage value have been selected as a case study. These are two friezes located in the Courtyard of the Maidens of the Royal Alcazar of Seville. As a result of this study it has been possible to draw conclusions about the impact of the characteristics of the camera lens both in the process of measuring and processing data, as well as in the accuracy of the three-dimensional model obtained.

Journal ArticleDOI
TL;DR: In this paper, a 2D P2-invariant of five coplanar points derived from cross ratios is adopted for template point registration and identification, and an affine transformation is used for decoding.
Abstract: In close-range or unmanned aerial vehicle (UAV) photogrammetry, Schneider concentric circular coded targets (SCTs), which are public, are widely used for image matching and as ground control points. GSI point-distributed coded targets (GCTs), which are mainly applied in the video-simultaneous triangulation and resection system (V-STARS), are non-public and rarely applied in UAV photogrammetry. In this paper, we present our detailed, innovative solution for identifying GCTs. First, we analyze the structure of a GCT. Then, a special 2D P2-invariant of five coplanar points derived from cross ratios is adopted for template point registration and identification. Finally, an affine transformation is used for decoding. Experiments were carried out indoors—covering viewing angles ranging from 0° to 80° with 6 mm-diameter GCTs, smaller 3 mm-diameter GCTs, and mixed sizes—and outdoors with challenging scenes. Compared with V-STARS, the results show that the proposed method preserves robustness and achieves a high identification accuracy rate when the viewing angle is no larger than 65° in the indoor experiments, and achieves comparable or slightly weaker effectiveness than V-STARS overall. Finally, we attempted to extend and apply the designed GCTs in UAV photogrammetry in a preliminary experiment. This paper demonstrates that GCTs can be designed, printed, and identified easily through our method. It is expected that the proposed method may be helpful when applied to image matching, camera calibration, camera orientation, or 3D measurement, or when serving as control points in UAV photogrammetry for scenarios with complex structures in the future.
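The cross ratio underlying the P2-invariant is the basic projective invariant of four collinear points. The sketch below (with illustrative names) checks its invariance under a 1D projective map, which is the property that makes such invariants usable for identifying targets across viewing angles.

```python
def cross_ratio(a, b, c, d):
    """Cross ratio (a, b; c, d) of four collinear points given by their
    scalar positions along a line - the basic projective invariant from
    which five-point planar (P2) invariants are built."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def homography_1d(x, h):
    """1D projective map x -> (h00*x + h01) / (h10*x + h11), modelling
    what a perspective camera does to positions along a line."""
    return (h[0][0] * x + h[0][1]) / (h[1][0] * x + h[1][1])
```

Because the cross ratio survives any such map, a five-point invariant computed in the image can be matched directly against the invariant of the template, regardless of the viewing angle.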

Journal ArticleDOI
TL;DR: The appropriate methodology for rat heart and brain tissue slicing and staining is shown and guidelines for establishing lighting and camera setups and photography techniques for high-resolution image acquisition are provided.
Abstract: Macro photography is applicable for imaging various tissue samples at high magnification to perform qualitative and quantitative analyses. Tissue preparation and subsequent image capture are steps performed immediately after the ischemia-reperfusion (IR) experiment and must be performed in a timely manner and with appropriate care. For the evaluation of IR-induced damage in the heart and brain, this paper describes 2,3,5-triphenyl-2H-tetrazolium chloride (TTC)-based staining followed by macro photography. Scientific macro photography requires controlled lighting and an appropriate imaging setup. The standardized methodology ensures high-quality, detailed digital images even if a combination of an inexpensive up-to-date digital camera and macro lens is used. Proper techniques and potential mistakes in sample preparation and image acquisition are discussed, and examples of the influence of correct and incorrect setups on image quality are provided. Specific tips are provided on how to avoid common mistakes, such as overstaining, improper sample storage, and suboptimal lighting conditions. This paper shows the appropriate methodology for rat heart and brain tissue slicing and staining and provides guidelines for establishing lighting and camera setups and photography techniques for high-resolution image acquisition.

Proceedings ArticleDOI
07 Jun 2022
TL;DR: Green Channel, Chrominance-based signal processing method (CHROM), and Plane Orthogonal to Skin (POS) have been identified as the best algorithms to estimate HR since they showed a better agreement with the reference system than the other algorithms.
Abstract: Non-contact technologies are gaining much interest as promising systems for remote monitoring of physiological parameters (e.g., heart rate (HR), respiratory rate, blood pressure, blood oxygen saturation) without interfering with the subject's comfort. Among the existing technologies, digital cameras integrated into smartphones or laptops are widely used due to their ease of use, availability, and portability. In this study, we investigated the influence of distance on HR estimation from a video recorded with a smartphone's frontal camera. HR values were provided by a spectral analysis every 1 s. We evaluated the performance of six different algorithms to identify the best one for HR estimation, simulating an occupational scenario at two distances (i.e., 0.5 m and 1 m). Data were recorded from 8 healthy volunteers of both sexes; a wearable device was used to record a medical-grade ECG to estimate reference HR values. Results show that the greater the camera distance, the higher the mean absolute error (MAE): the average MAE was 1.49 bpm at 0.5 m and 2.59 bpm at 1.0 m. Moreover, the Green Channel, the Chrominance-based signal processing method (CHROM), and Plane Orthogonal to Skin (POS) have been identified as the best algorithms for estimating HR since they showed better agreement with the reference system than the other algorithms.
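A minimal version of the spectral HR estimation described above, assuming a detrended per-frame skin-colour trace is already available. `estimate_hr_bpm` is an illustrative name, and the 0.7–4 Hz physiological band is a common choice, not necessarily the study's exact setting.

```python
import numpy as np

def estimate_hr_bpm(signal, fs):
    """Estimate heart rate from a per-frame skin-colour trace (e.g. the
    mean green-channel value of the face region in each video frame) by
    locating the spectral peak within the physiological 0.7-4 Hz band."""
    sig = np.asarray(signal, float)
    sig = sig - sig.mean()                       # remove DC component
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= 0.7) & (freqs <= 4.0)       # 42-240 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```

CHROM and POS differ from the plain green channel only in how the per-frame trace is formed from the RGB channels; the spectral peak-picking step is shared.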

Book ChapterDOI
Ayhan Hira
01 Jan 2022
TL;DR: In this article, the authors review the rich body of literature on camera identification from sensor noise fingerprints, with an emphasis on still images from digital cameras and the evolving challenges in this domain.
Abstract: Every imaging sensor introduces a certain amount of noise to the images it captures—slight fluctuations in the intensity of individual pixels even when the sensor plane was lit absolutely homogeneously. One of the breakthrough discoveries in multimedia forensics is that photo-response non-uniformity (PRNU), a multiplicative noise component caused by inevitable variations in the manufacturing process of sensor elements, is essentially a sensor fingerprint that can be estimated from and detected in arbitrary images. This chapter reviews the rich body of literature on camera identification from sensor noise fingerprints with an emphasis on still images from digital cameras and the evolving challenges in this domain.
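The fingerprint estimation and detection pipeline can be sketched as follows. This is a toy version in which a separable box-filter denoiser stands in for the wavelet denoiser used in practice, and the function names are illustrative.

```python
import numpy as np

def estimate_prnu(images):
    """Estimate a camera fingerprint by averaging noise residuals
    (image minus a denoised version) over many images from the same
    camera; scene content averages out while PRNU accumulates."""
    k = np.ones(3) / 3.0  # 3-tap box filter as a crude denoiser
    residuals = []
    for img in images:
        img = np.asarray(img, float)
        smooth = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
        smooth = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, smooth)
        residuals.append(img - smooth)
    return np.mean(residuals, axis=0)

def fingerprint_correlation(residual, fingerprint):
    """Normalized correlation between a query image's noise residual and
    a candidate fingerprint; a high value supports a same-camera decision."""
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```

A practical detector compares the correlation against a threshold calibrated on known same-camera and different-camera pairs.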

Journal ArticleDOI
TL;DR: In this paper, a colour alignment model is proposed that treats camera image formation as a black box and formulates colour alignment as a three-step process: camera response calibration, response linearisation, and colour matching.
Abstract: Relative colour constancy is an essential requirement for many scientific imaging applications. However, most digital cameras differ in their image formations and native sensor output is usually inaccessible, e.g., in smartphone camera applications. This makes it hard to achieve consistent colour assessment across a range of devices, and that undermines the performance of computer vision algorithms. To resolve this issue, we propose a colour alignment model that considers the camera image formation as a black-box and formulates colour alignment as a three-step process: camera response calibration, response linearisation, and colour matching. The proposed model works with non-standard colour references, i.e., colour patches without knowing the true colour values, by utilising a novel balance-of-linear-distances feature. It is equivalent to determining the camera parameters through an unsupervised process. It also works with a minimum number of corresponding colour patches across the images to be colour aligned to deliver the applicable processing. Three challenging image datasets collected by multiple cameras under various illumination and exposure conditions, including one that imitates uncommon scenes such as scientific imaging, were used to evaluate the model. Performance benchmarks demonstrated that our model achieved superior performance compared to other popular and state-of-the-art methods.
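Steps two and three of the model can be illustrated under simplifying assumptions: a fixed gamma response stands in for the calibrated response of step one, and a plain least-squares 3 × 3 transform stands in for the paper's colour matching. The function names are hypothetical.

```python
import numpy as np

def linearise(rgb, gamma=2.2):
    """Response linearisation (step 2): invert an assumed gamma-type
    response. The paper instead estimates the true camera response in
    step 1 rather than assuming a fixed gamma."""
    return np.clip(np.asarray(rgb, float), 0.0, 1.0) ** gamma

def fit_colour_matching(src_lin, dst_lin):
    """Colour matching (step 3): least-squares 3x3 transform mapping the
    source camera's linearised patch colours (N x 3) onto the target
    camera's. Apply it as src_lin @ M."""
    M, *_ = np.linalg.lstsq(src_lin, dst_lin, rcond=None)
    return M
```

With corresponding colour patches from two cameras, this recovers a transform that makes their linearised outputs directly comparable.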

Journal ArticleDOI
TL;DR: This paper proposes an image tampering detection algorithm based on sample guidance and individual camera devices' convolutional neural network (CNN) features to address challenges in verifying the authenticity and integrity of images.
Abstract: This paper proposes an image tampering detection algorithm based on sample guidance and individual camera devices' convolutional neural network (CNN) features (SGICD-CF) to address the challenges in verifying the authenticity and integrity of images. The development of digital image processing technology has made image editing, tampering, and forgery easy, so solving the problem of image tamper detection is essential to maintaining information security. The principle of SGICD-CF assumes that the pixels of a pristine image come from a single camera device; conversely, if the image to be tested is spliced from multiple images taken by different cameras, pixels from multiple camera devices will be detected. SGICD-CF divides the image to be tested into 64 × 64 pixel patches, extracts the camera-related features and camera model-related information of the image patches with our proposed source camera identification network (SCI-Net), and obtains the classification confidence of each image patch. It then determines whether a patch contains foreign pixels according to the obtained confidence and finally determines whether the image was tampered with according to the classification results of all the patches, thus locating the tampered area. The experimental results show that SGICD-CF can accurately detect and locate the tampered area of an image and performs better than existing methods, achieving an average correct rate of 0.855 on the synthetic dataset based on Dresden.
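The patch-based decision logic can be sketched as follows, with the CNN classifier abstracted away as a list of per-patch confidences. `tile_patches` and `locate_tampering` are illustrative names, and the 0.5 threshold is an assumption of the sketch.

```python
import numpy as np

def tile_patches(img, size=64):
    """Split an image into non-overlapping size x size patches, as
    SGICD-CF does before classifying each patch independently
    (edge remainders are dropped in this sketch)."""
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

def locate_tampering(confidences, threshold=0.5):
    """Aggregate per-patch confidences that a patch contains foreign
    pixels: flag the image as tampered if any patch exceeds the
    threshold, and return those patch indices to localise the area."""
    flagged = [i for i, c in enumerate(confidences) if c > threshold]
    return bool(flagged), flagged
```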

Journal ArticleDOI
TL;DR: In this paper, a framework for calculating and comparing the performance of three types of score-based likelihood ratios (SLRs) — source-anchored, trace-anchored, and general match — for camera device identification is introduced.
Abstract: Forensic camera device identification addresses the scenario, where an investigator has two pieces of evidence: a digital image from an unknown camera involved in a crime, such as child pornography, and a person of interest’s (POI’s) camera. The investigator wants to determine whether the image was taken by the POI’s camera. Small manufacturing imperfections in the photodiode cause slight variations among pixels in the camera sensor array. These spatial variations, called photo-response non-uniformity (PRNU), provide an identifying characteristic, or fingerprint, of the camera. Most work in camera device identification leverages the PRNU of the questioned image and the POI’s camera to make a yes-or-no decision. As in other areas of forensics, there is a need to introduce statistical and probabilistic methods that quantify the strength of evidence in favor of the decision. Score-based likelihood ratios (SLRs) have been proposed in the forensics community to do just that. Several types of SLRs have been studied individually for camera device identification. We introduce a framework for calculating and comparing the performance of three types of SLRs — source-anchored, trace-anchored, and general match. We employ PRNU estimates as camera fingerprints and use correlation distance as a similarity score. Three types of SLRs are calculated for 48 camera devices from four image databases: ALASKA; BOSSbase; Dresden; and StegoAppDB. Experiments show that the trace-anchored SLRs perform the best of these three SLR types on the dataset and the general match SLRs perform the worst.
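The core of a score-based likelihood ratio can be sketched as below. This is a minimal simplification, not the paper's method: Gaussian densities stand in for whatever score distributions are actually fitted, and the similarity score is assumed to already be computed (the paper uses correlation distance between PRNU estimates).

```python
# Sketch of an SLR: evaluate the questioned-vs-POI similarity score under two
# fitted score distributions, one for same-camera pairs and one for
# different-camera pairs, and take the ratio of densities.

import math

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def fit(scores):
    """Fit a Gaussian to a set of reference similarity scores."""
    n = len(scores)
    mean = sum(scores) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in scores) / (n - 1))
    return mean, std

def score_lr(score, same_camera_scores, different_camera_scores):
    """SLR = density under same-source scores / density under different-source."""
    num = gaussian_pdf(score, *fit(same_camera_scores))
    den = gaussian_pdf(score, *fit(different_camera_scores))
    return num / den
```

An SLR well above 1 supports the proposition that the questioned image came from the POI's camera; a value well below 1 supports the alternative. The three SLR types in the paper differ in which reference score populations are used, not in this ratio form.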

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a new direction that leverages the unintentional electromagnetic (EM) emanations of a camera to detect it, filtering potential camera EM emanations out of numerous EM signals quickly to achieve accurate hidden camera detection.
Abstract: Hidden cameras in sensitive locations have become an increasing threat to personal privacy all over the world. Because such cameras are small and camouflaged, their presence is difficult to detect with the naked eye. Existing works on this subject either rely on detecting the camera's wireless transmission or use other methods that are cumbersome in practical use. In this paper, we introduce a new direction that leverages the unintentional electromagnetic (EM) emanations of the camera to detect it. We first find that the digital output of the camera's image sensor is amplitude-modulated onto the EM emanations of the camera's clock. Thus, changes in the scene viewed by the camera directly cause changes in the camera's EM emanations, which constitutes a unique characteristic of a hidden camera. Based on this, we propose a novel camera detection system named CamRadar, which can quickly filter potential camera EM emanations out of numerous EM signals and achieve accurate hidden camera detection. Benefiting from the camera's EM emanations, CamRadar is not limited by the camera's transmission type or the detection angle. Our extensive real-world experiments using CamRadar and 19 hidden cameras show that CamRadar achieves fast detection (in 16.75 s) with a detection rate of 93.23% as well as a low false positive rate of 3.95%.
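The amplitude-modulation property the paper exploits can be illustrated with a toy demodulation. This is not the CamRadar implementation: the carrier frequency, sample rate, and crude rectify-and-average envelope detector are all assumptions for demonstration only.

```python
# Toy illustration of the AM characteristic: the sensor data acts as an
# envelope on the clock's EM carrier, so rectifying and low-pass filtering
# the captured signal recovers an envelope that varies with the scene.

import math

def am_signal(envelope, carrier_freq, sample_rate):
    """Synthesise an AM signal: envelope[i] * cos(2*pi*f*i/fs)."""
    return [e * math.cos(2 * math.pi * carrier_freq * i / sample_rate)
            for i, e in enumerate(envelope)]

def demodulate(signal, window):
    """Crude envelope detector: full-wave rectify, then a moving average."""
    rect = [abs(s) for s in signal]
    out = []
    for i in range(len(rect)):
        lo = max(0, i - window)
        out.append(sum(rect[lo:i + 1]) / (i + 1 - lo))
    return out
```

A scene change (here, the envelope dropping) shows up directly in the demodulated output, which is the cue a detector like CamRadar looks for among candidate EM signals.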