
Showing papers on "Digital camera published in 2014"


Journal ArticleDOI
TL;DR: Results of PPG measurements from a novel five-band camera are presented, showing that alternate frequency bands, in particular an orange band, allowed physiological measurements much more highly correlated with an FDA-approved contact PPG sensor.
Abstract: Remote measurement of the blood volume pulse via photoplethysmography (PPG) using digital cameras and ambient light has great potential for healthcare and affective computing. However, traditional RGB cameras have limited frequency resolution. We present results of PPG measurements from a novel five-band camera and show that alternate frequency bands, in particular an orange band, allowed physiological measurements much more highly correlated with an FDA-approved contact PPG sensor. In a study with participants (n = 10) at rest and under stress, correlations of over 0.92 (p < 0.01) were obtained for heart rate, breathing rate, and heart rate variability measurements. In addition, the remotely measured heart rate variability spectrograms closely matched those from the contact approach. The best results were obtained using a combination of cyan, green, and orange (CGO) bands; incorporating red and blue channel observations did not improve performance. In short, RGB is not optimal for this problem: CGO is better. Incorporating alternative color channel sensors should not increase the cost of such cameras dramatically.
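Setting the band-selection result aside, the core signal-processing step (recovering heart rate as the dominant spectral peak of a camera-derived PPG trace) can be sketched as follows. The frame rate, window length, and simulated pulse signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

np.random.seed(0)

# Simulated camera PPG trace: a 72 bpm pulse plus sensor noise (assumed values).
fps = 30.0                       # camera frame rate
t = np.arange(0, 30, 1 / fps)    # 30 s of frames
hr_hz = 1.2                      # simulated pulse frequency: 72 bpm
signal = np.sin(2 * np.pi * hr_hz * t) + 0.1 * np.random.randn(t.size)

# Heart rate = dominant spectral peak inside a plausible 0.7-4 Hz band.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)
peak_hz = freqs[band][np.argmax(spectrum[band])]
bpm = 60 * peak_hz
```

The same peak-picking step applies whichever colour bands supply the trace; only the input time series changes.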

266 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluate the use of consumer-grade digital cameras modified to capture infrared wavelengths for monitoring vegetation and show that infrared-converted cameras perform worse than standard color cameras in a monitoring setting.

118 citations


Patent
12 Jun 2014
TL;DR: In this paper, a dual-aperture zoom digital camera is presented, which includes Wide and Tele imaging sections with respective lens/sensor combinations and image signal processors and a camera controller operatively coupled to the wide and tele imaging sections.
Abstract: A dual-aperture zoom digital camera operable in both still and video modes. The camera includes Wide and Tele imaging sections with respective lens/sensor combinations and image signal processors and a camera controller operatively coupled to the Wide and Tele imaging sections. The Wide and Tele imaging sections provide respective image data. The controller is configured to combine in still mode at least some of the Wide and Tele image data to provide a fused output image from a particular point of view, and to provide without fusion continuous zoom video mode output images, each output image having a given output resolution, wherein the video mode output images are provided with a smooth transition when switching between a lower zoom factor (ZF) value and a higher ZF value or vice versa, and wherein at the lower ZF the output resolution is determined by the Wide sensor while at the higher ZF value the output resolution is determined by the Tele sensor.

115 citations


Patent
03 Oct 2014
TL;DR: In this article, the authors present methods, devices and systems for providing augmented and virtual reality experiences, including a set of display instructions for displaying a display image which is at least partially based on information within a digital image frame of the acquired image and one or more processing circuit rendered virtual objects.
Abstract: Disclosed are methods, devices and systems for providing augmented and virtual reality experiences. According to some embodiments, there may be provided a device comprising a digital camera assembly including an imaging sensor, one or more optical elements, and image data generation circuits adapted to convert image information acquired from a surrounding of said device into one or more digital image frames indicative of the acquired image information. A graphical display assembly including at least one display and driving circuits may be adapted to receive display instructions and to convert received display instructions into electrical signals which regulate illumination or appearance of one or more display elements. Processing circuitry, including image processing circuitry, may generate a set of display instructions for displaying a display image which is at least partially based on information within a digital image frame of the acquired image and one or more processing-circuit-rendered virtual objects, wherein selection of which virtual objects to render, and how to position the virtual objects within the display image, is at least partially based on a context state of said device.

74 citations


Journal ArticleDOI
TL;DR: The purpose of this study was to acquire images with conventional RGB cameras using UAVs and process them to obtain geo-referenced ortho-images with the aim of characterizing the main plant growth parameters required in the management of irrigated crops under semi-arid conditions.
Abstract: There are many aspects of crop management that might benefit from aerial observation. Unmanned aerial vehicle (UAV) platforms are evolving rapidly, both technically and with regard to regulations. The purpose of this study was to acquire images with conventional RGB cameras using UAVs and process them to obtain geo-referenced ortho-images, with the aim of characterizing the main plant growth parameters required in the management of irrigated crops under semi-arid conditions. The paper is in two parts: the first describes the image acquisition and processing procedures, and the second applies the proposed methodology to a case study. In the first part, the type of UAV utilized is described: a vertical take-off and landing quadcopter aircraft carrying a conventional RGB compact digital camera. Other types of on-board sensors, such as near-infrared and thermal sensors, are also described, and the problems of using these expensive sensors are discussed. In addition, software developed by the authors for photogrammetric processing and for information extraction from the geomatic products is described and analysed for agronomic applications; this software can also be used in other applications. To obtain agronomic parameters, different strategies were analysed, such as the use of computer vision for canopy-cover extraction, as well as the use of vegetation indices derived from the visible spectrum, a proper solution when very-high-resolution imagery is available. The use of high-resolution images obtained with UAVs, together with proper treatment, might be considered a useful tool for precision monitoring of crop growth and development, advising farmers on water requirements, yield production, and weed and insect infestations, among others. More studies focusing on the calibration and validation of these relationships in other crops are required.
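As one concrete example of the visible-spectrum vegetation indices such studies rely on, below is a minimal sketch of the Excess Green (ExG) index with a threshold-based canopy-cover estimate. The demo image and the 0.1 threshold are illustrative assumptions, not calibrated values from the paper.

```python
import numpy as np

def excess_green(rgb):
    """rgb: float array (H, W, 3) scaled to [0, 1].
    Returns ExG = 2g - r - b on sum-normalised (chromatic) coordinates."""
    s = rgb.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0                       # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / s, 2, 0)
    return 2 * g - r - b

def canopy_cover(rgb, threshold=0.1):
    """Fraction of pixels classified as vegetation by ExG > threshold."""
    return float((excess_green(rgb) > threshold).mean())

# Tiny demo frame: top row vegetation-like, bottom row soil-like (made-up values).
demo = np.array([[[0.2, 0.8, 0.1], [0.2, 0.8, 0.1]],
                 [[0.5, 0.4, 0.3], [0.5, 0.4, 0.3]]])
cover = canopy_cover(demo)
```

Because ExG uses only RGB channels, it works with exactly the kind of conventional compact cameras the study flies.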

73 citations


Journal ArticleDOI
TL;DR: New algorithms for open set modes of image source attribution and device linking are introduced and rely on a new multi-region feature generation strategy that models the decision space of a trained SVM classifier by taking advantage of a few known cameras to adjust the decision boundaries to decrease false matches from unknown classes.

60 citations


Proceedings ArticleDOI
TL;DR: Wide Angle High-Resolution Sky Imaging System (WAHRSIS), as mentioned in this paper, is a ground-based whole sky imager that captures the entire hemisphere in a single picture using a digital camera with a fisheye lens.
Abstract: Cloud imaging using ground-based whole sky imagers is essential for a fine-grained understanding of cloud formations, which can be useful in many applications. Some such imagers are available commercially, but their cost is relatively high and their flexibility is limited. Therefore, we built a new daytime Whole Sky Imager (WSI) called the Wide Angle High-Resolution Sky Imaging System (WAHRSIS). The strengths of our new design are its simplicity, low manufacturing cost, and high image resolution. Our imager captures the entire hemisphere in a single picture using a digital camera with a fisheye lens. The camera was modified to capture light across the visible and near-infrared spectral ranges. This paper describes the design of the device as well as the geometric and radiometric calibration of the imaging system.

56 citations


Proceedings ArticleDOI
TL;DR: An extensive empirical evaluation of focus measures for digital photography, using precision, recall, and mean absolute error as evaluation criteria, indicates that some popular focus measures perform poorly when applied to autofocusing in digital photography.
Abstract: Automatic focusing of a digital camera in live preview mode, where the camera's display screen is used as a viewfinder, is done through contrast detection. In contrast-detection focusing, a focus measure is used to map an image to a value that represents the degree of focus of the image. Many focus measures have been proposed and evaluated in the literature. However, previous studies on focus measures have either used a small number of benchmark images in their evaluation, been directed at microscopy rather than digital cameras, or been based on ad hoc evaluation criteria. In this paper, we perform an extensive empirical evaluation of focus measures for digital photography and advocate using three standard statistical measures of performance (precision, recall, and mean absolute error) as evaluation criteria. Our experimental results indicate that (i) some popular focus measures perform poorly when applied to autofocusing in digital photography, and (ii) simple focus measures based on taking the first derivative of an image perform exceedingly well in digital photography. Keywords: passive autofocus, contrast detection, focus measures, live preview, digital camera
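A minimal sketch of a first-derivative focus measure of the family the paper finds effective (sum of squared horizontal first differences). The test images below are illustrative: a sharp checkerboard versus a fully blurred constant frame.

```python
import numpy as np

def squared_gradient(img):
    """img: 2-D float array. Focus score = sum of squared horizontal
    first differences; sharper images score higher."""
    dx = np.diff(img.astype(float), axis=1)
    return float((dx ** 2).sum())

# Illustrative comparison: high-contrast pattern vs. a flat (blurred-out) frame.
sharp = np.tile([[0.0, 1.0], [1.0, 0.0]], (4, 4))   # 8x8 checkerboard
blurred = np.full_like(sharp, 0.5)                  # all detail averaged away
fm_sharp = squared_gradient(sharp)
fm_blur = squared_gradient(blurred)
```

In contrast-detection autofocus, the lens position maximising such a score is taken as the in-focus position.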

55 citations


Journal ArticleDOI
25 Feb 2014
TL;DR: In this article, a rotary-wing UAV was used for large-scale stream mapping at different flying heights, and photogrammetric outputs such as a three-dimensional (3D) stereomodel, contour lines, a digital elevation model (DEM) and an orthophoto were produced for a small stream 200 m long and 10 m wide.
Abstract: Photogrammetry is the earliest technique used to collect data for topographic mapping. A recent development in aerial photogrammetry is the use of large-format digital aerial cameras for producing topographic maps. The aerial photograph can be in the form of metric or non-metric imagery. Mapping using aerial photogrammetry is very expensive, yet in certain applications there is a need to map a small area with a limited budget. Thanks to developments in technology, small-format aerial photogrammetry has been introduced and offers many advantages. Currently, digital maps can be extracted from digital aerial imagery of a small-format camera mounted on a lightweight platform such as an unmanned aerial vehicle (UAV). This study utilizes a UAV system for large-scale stream mapping. The first objective is to investigate the use of a lightweight rotary-wing UAV for stream mapping at different flying heights. Aerial photographs were acquired at 60% forward lap and 30% sidelap. Ground control points and check points were established using a total station. The digital camera attached to the UAV was calibrated, and the recovered camera calibration parameters were then used in processing the digital images. The second objective is to determine the accuracy of the photogrammetric output. Photogrammetric products, such as the three-dimensional (3D) stereomodel, contour lines, digital elevation model (DEM) and orthophoto, were produced for a small stream 200 m long and 10 m wide. The output was evaluated for planimetric and vertical accuracy using the root mean square error (RMSE). Sub-metre accuracy was achieved, and the RMSE value decreased as the flying height increased, although the difference was relatively small. This study shows that a UAV is a very useful platform for obtaining aerial photographs for photogrammetric mapping and other applications.
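The accuracy assessment used in the study (RMSE of mapped coordinates against surveyed check points) can be sketched as below; the check-point coordinates are made-up illustrative values, not the study's data.

```python
import numpy as np

def rmse(measured, reference):
    """Root mean square error between measured and reference coordinates."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt((d ** 2).mean()))

# Check points: columns are Easting, Northing, Height in metres (illustrative).
ref = np.array([[10.0, 20.0, 5.0], [15.0, 25.0, 5.5]])
obs = np.array([[10.1, 19.9, 5.2], [15.2, 24.8, 5.4]])

rmse_e = rmse(obs[:, 0], ref[:, 0])   # planimetric component (Easting)
rmse_h = rmse(obs[:, 2], ref[:, 2])   # vertical component
```

Planimetric accuracy is usually reported per axis (or combined as sqrt(RMSE_E^2 + RMSE_N^2)) separately from the vertical RMSE.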

53 citations


Book
12 Mar 2014
TL;DR: This book introduces the basic ideas and main tasks of photogrammetry, from image sources and geometric principles through image orientation and aerial triangulation to DTM creation and ortho images, with included software and worked examples.
Abstract (table of contents):
- Introduction: basic ideas and main task of photogrammetry
- Image sources: analogue and digital cameras; short history of photogrammetric evaluation methods
- Geometric principles 1: flying height, focal length
- Geometric principles 2: image orientation; some definitions; length and angle units
- Included software and data: hardware requirements, operating system; image material; overview of the software; installation; additional programmes, copyright, data; general remarks
- Scanning of photos: scanner types; geometric resolution; radiometric resolution; some practical advice; import of the scanned images
- Example 1, a single model: project definition; model definition; stereoscopic viewing; measurement of object coordinates; creation of DTMs via image matching; ortho images
- Example 2, aerial triangulation: aerial triangulation measurement; block adjustment with BLUH; mosaics of DTMs and ortho images
- Example 3, some special cases: scanning aerial photos with an A4 scanner; interior orientation without camera parameters; images from a digital camera; an example of close-range photogrammetry
- A view into the future: photogrammetry in 2020
- Programme description: some definitions; basic functions; aims and limits of the programme; operating the programme; buttons in the graphics windows; file handling; pre-programmes; aerial triangulation measurement; aerial triangulation with BLUH; processing; display
- Appendix: codes; GCP positions for tutorial 2

49 citations


Patent
09 Apr 2014
TL;DR: In this article, a CCD digital camera, a camera lens, a light source, a six-axle joint robot main body, an electrical control cabinet and a vacuum sucker are connected to an industrial computer by a switch.
Abstract: The invention discloses a system and a method for machine vision-based robot sorting. The system comprises a CCD digital camera, a camera lens, a light source, a six-axle joint robot main body, an electrical control cabinet and a vacuum sucker. The CCD digital camera is connected to an industrial computer by a switch. The six-axle joint robot main body is connected to the electrical control cabinet. The electrical control cabinet accesses the switch. The vacuum sucker is rigidly fixed to the tail end of the six-axle joint robot main body. The camera unit is used for making a photo of an object to be sorted, collecting data and transmitting the data to the industrial computer by the switch. The industrial computer is used for treating the acquired photo of the object to be sorted, carrying out accurate positioning and then transmitting a control signal to the electrical control cabinet by the switch. The electrical control cabinet is used for controlling the six-axle joint robot main body to carry out corresponding sorting processes according to the received control signal. The system and the method improve work efficiency, reduce operation workers and reduce a production cost in sorting.

Patent
07 Jan 2014
TL;DR: In this paper, an LCD panel is arranged such that the longitudinal directions of the display screen and the camera body correspond to each other, and in playback the display orientation of the image is rotated by 90 degrees.
Abstract: A digital camera is provided with a vertically elongated camera body having an approximately rectangular-solid shape. An LCD panel in the rear surface of the camera body is arranged such that the longitudinal directions of the display screen and the camera body correspond to each other. The digital camera is operated through a touch panel provided in a lower portion of the display screen. In shooting mode, an image is displayed at a small size in an upper portion of the display screen. For playback, the camera body is rotated sideways by 90 degrees. In playback mode, the display orientation of the image is also rotated by 90 degrees, and the image is displayed at a large size on the entire display screen.

Patent
16 Jan 2014
TL;DR: In this paper, methods for determining the 3D trajectory of an axisymmetric object in 3D physical space using a digital camera that records 2D image data are described, based on a characteristic length of the axisymmetric object and the physical position of the camera.
Abstract: Methods and apparatus for determining the trajectory of an axisymmetric object in 3D physical space using a digital camera which records 2D image data are described. In particular, based upon (i) a characteristic length of the axisymmetric object, (ii) a physical position of the camera determined from sensors associated with the camera (e.g., accelerometers), and (iii) captured 2D digital images from the camera, including the time at which each image is generated relative to one another, a position, a velocity vector and an acceleration vector can be determined in three-dimensional physical space for axisymmetric objects as a function of time. In one embodiment, the method and apparatus can be applied to determine the trajectories of objects in games which utilize axisymmetric objects, such as basketball, baseball, bowling, golf, soccer, rugby or football.

Patent
15 Dec 2014
TL;DR: In this paper, a digital camera system having a digital camera and a remote control module is described; camera status information can be displayed on the remote control module's status display.
Abstract: A digital camera system having a digital camera and a remote control module is disclosed. The digital camera includes an image capture system, memory and wireless modem. The remote control module includes another wireless modem (to communicate with the digital camera's wireless modem), first and second user controls and a status display. One of the user controls can (via the wireless modems) cause the camera to capture an image. The other user control can cause the digital camera (via the wireless modems) to deliver camera status information which can then be displayed on the remote control module's status display.

Journal ArticleDOI
TL;DR: In this paper, a methodology is proposed to obtain the near-infrared and red bands simultaneously from a standard single RGB camera after removing its internal near-infrared blocking filter.

Patent
12 Mar 2014
TL;DR: In this paper, a system and method for estimating an ambient light condition using an image sensor of a digital camera is presented, where an array of pixels is obtained using the image sensor and a matrix of grid elements is defined.
Abstract: A system and method for estimating an ambient light condition using an image sensor of a digital camera. An array of pixels is obtained using the image sensor. A matrix of grid elements is defined, each grid element comprising multiple adjacent pixels of the array. A first measurement value is generated for each grid element of the matrix based on the pixels associated with that grid element. A set of grid elements whose first measurement values satisfy a brightness criterion is identified. A second measurement value is generated using the identified set of grid elements. A simulated-light-sensor array is generated using the second measurement value, and an estimate of the ambient light condition is calculated from the simulated-light-sensor array.
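The grid-averaging idea in the patent (partition the pixel array into grid elements, take a per-element mean, and keep only the elements passing a brightness criterion) can be sketched as below. The grid size, threshold, and demo frame are assumptions for illustration, not values from the patent.

```python
import numpy as np

def grid_brightness(pixels, grid=(4, 4), threshold=0.5):
    """pixels: 2-D float luminance array whose shape divides evenly by grid.
    Returns the mean brightness over grid elements brighter than threshold,
    or 0.0 if no element qualifies."""
    h, w = pixels.shape
    gh, gw = h // grid[0], w // grid[1]
    # Block the array into grid elements and average each block.
    cells = pixels.reshape(grid[0], gh, grid[1], gw).mean(axis=(1, 3))
    bright = cells[cells > threshold]
    return float(bright.mean()) if bright.size else 0.0

# Demo frame: bright upper half (e.g. sky), dark lower half (illustrative).
frame = np.vstack([np.full((4, 8), 0.9), np.full((4, 8), 0.1)])
est = grid_brightness(frame)
```

Keeping only the bright elements makes the estimate robust to dark foreground regions that would otherwise drag down a whole-frame average.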

Journal ArticleDOI
TL;DR: In this paper, a digital CMOS camera was calibrated for use as a non-contact colorimeter for measuring the color of granite artworks; the calibration proved to be the most challenging aspect of the task.

Journal ArticleDOI
TL;DR: A method for color stabilization of shots of the same scene, taken under the same illumination, where one image is chosen as reference and one or several other images are modified so that their colors match those of the reference.
Abstract: We propose a method for color stabilization of shots of the same scene, taken under the same illumination, where one image is chosen as reference and one or several other images are modified so that their colors match those of the reference. We make use of two crucial but often overlooked observations: first, that the core of the color correction chain in a digital camera is simply a multiplication by a 3 × 3 matrix; second, that to color-match a source image to a reference image we do not need to compute their two color correction matrices, it is enough to compute the operation that transforms one matrix into the other. This operation is a 3 × 3 matrix as well, which we call H. Once we have H, we just multiply by it each pixel value of the source and obtain an image which matches in color the reference. To compute H we only require a set of pixel correspondences, we do not need any information about the cameras used, neither models nor specifications or parameter values. We propose an implementation of our framework which is very simple and fast, and show how it can be successfully employed in a number of situations, comparing favorably with the state of the art. There is a wide range of applications of our technique, both for amateur and professional photography and video: color matching for multicamera TV broadcasts, color matching for 3D cinema, color stabilization for amateur video, etc.
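The paper's key observation (that matching a source image to a reference reduces to a single 3x3 matrix H estimated from pixel correspondences alone) can be sketched with a least-squares fit. The synthetic correspondences below are illustrative; a real application would sample matched pixels from the two shots.

```python
import numpy as np

def color_match_matrix(src, ref):
    """src, ref: (N, 3) arrays of corresponding RGB values.
    Returns the 3x3 matrix H such that src @ H.T approximates ref
    in the least-squares sense."""
    H_T, *_ = np.linalg.lstsq(src, ref, rcond=None)
    return H_T.T

# Illustrative check: recover a known matrix from noiseless correspondences.
rng = np.random.default_rng(0)
true_H = np.array([[1.10, 0.05, 0.00],
                   [0.00, 0.95, 0.02],
                   [0.01, 0.00, 1.20]])
src = rng.random((50, 3))       # source pixel colours
ref = src @ true_H.T            # reference pixel colours
H = color_match_matrix(src, ref)
```

Once H is known, every pixel of the source image is simply multiplied by H; no camera models or parameters are needed, matching the paper's claim.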

Journal ArticleDOI
TL;DR: A 3D terrestrial calibration field designed for calibrating digital cameras and omnidirectional sensors, including a catadioptric system, is presented.
Abstract: The aim of this paper is to present results achieved with a 3D terrestrial calibration field designed for calibrating digital cameras and omnidirectional sensors. The field is composed of 139 ArUco coded targets. Experiments were performed using a Nikon D3100 digital camera with an 8 mm Samyang Bower fisheye lens. The camera was calibrated in this terrestrial test field using a conventional bundle adjustment with the collinearity equations and mathematical models specially designed for fisheye lenses. The CMC software (Calibration with Multiple Cameras), developed in-house, was used for the calibration trials. This software was modified to use fisheye models to which the Conrady-Brown distortion equations were added. Target identification and image measurement of each target's four corners were performed automatically with public software. Several experiments were performed with 16 images, and the results are presented and compared. Besides the calibration of fisheye cameras, the field was designed for calibration of a catadioptric system, and brief information on the calibration of this unit is provided in the paper.

Patent
04 Jul 2014
TL;DR: In this paper, a dual-aperture zoom camera comprising a Wide camera with a respective Wide lens and a Tele camera with a respective Tele lens, the Wide and Tele cameras mounted directly on a single printed circuit board, is presented.
Abstract: A dual-aperture zoom camera comprising a Wide camera with a respective Wide lens and a Tele camera with a respective Tele lens, the Wide and Tele cameras mounted directly on a single printed circuit board, wherein the Wide and Tele lenses have respective effective focal lengths EFL_W and EFL_T and respective total track lengths TTL_W and TTL_T, and wherein TTL_W/EFL_W > 1.1 and TTL_T/EFL_T < 1.0. Optionally, the dual-aperture zoom camera may further comprise an optical OIS controller configured to provide a compensation lens movement according to a user-defined zoom factor (ZF) and a camera tilt (CT) through LMV = CT*EFL_ZF, where EFL_ZF is a zoom-factor-dependent effective focal length.
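The patent's numeric conditions can be checked mechanically, as sketched below. The lens values are illustrative assumptions, not figures from the patent, and the small-angle interpretation of LMV = CT*EFL_ZF (tilt in radians, movement in the lens's length units) is likewise assumed.

```python
def check_ratios(ttl_w, efl_w, ttl_t, efl_t):
    """Claimed constraints: TTL_W/EFL_W > 1.1 and TTL_T/EFL_T < 1.0."""
    return ttl_w / efl_w > 1.1 and ttl_t / efl_t < 1.0

def lens_movement(ct_rad, efl_zf_mm):
    """OIS compensation lens movement LMV = CT * EFL_ZF
    (small-angle assumption; CT in radians, EFL in mm)."""
    return ct_rad * efl_zf_mm

# Illustrative lens values (mm): Wide is "long-track", Tele is "short-track".
ok = check_ratios(ttl_w=4.5, efl_w=3.8, ttl_t=6.0, efl_t=7.0)
lmv = lens_movement(0.002, 10.0)
```

The TTL/EFL split is what lets the Tele module stay thin enough to share a phone-sized housing with the Wide module.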

Proceedings ArticleDOI
21 Jun 2014
TL;DR: This Pictorial takes a different look at digital cameras and photos and frames this look within a counterfunctional design perspective as a means to present new concepts composed of both the textual-theoretical and visual-designerly varieties.
Abstract: This Pictorial takes a different look at digital cameras and photos. It frames this look within a counterfunctional design perspective. This work is presented not as design process documentation, but rather as a type of visual-textual design artifact. We see it as a means to present new concepts composed of both the textual-theoretical and visual-designerly varieties. While cameras and photos are the ostensible thematic focus, these technologies are in turn used as a focusing device for a broader conceptual theme: designing digital limitations.

Patent
07 Nov 2014
TL;DR: In this article, a digital signal processing unit 17 of a digital camera, which includes a solid-state imaging element 5 having imaging pixel cells 30 and pairs of focus detecting pixel cells 31R and 31L, determines whether a captured image signal obtained by the imaging element 5 has a region affected by at least one of flare and ghost.
Abstract: A digital signal processing unit 17 of a digital camera, which includes a solid-state imaging element 5 having imaging pixel cells 30 and pairs of focus detecting pixel cells 31R and 31L, determines whether a captured image signal obtained by the imaging element 5 has a region affected by at least one of flare and ghost. When it is determined that such a region exists, the digital signal processing unit 17 performs correction processing by signal interpolation on the output signals of all the focus detecting pixel cells included in the captured image signal, using the output signals of the imaging pixel cells around each focus detecting pixel cell.

Book ChapterDOI
01 Sep 2014
TL;DR: It is concluded that the proposed 3D reconstruction is a promising generalized technique for the non-destructive phenotyping of various plants during their whole growth cycles.
Abstract: Plant phenotyping involves the measurement, ideally objectively, of characteristics or traits. Traditionally, this is either limited to tedious and sparse manual measurements, often acquired destructively, or to coarse image-based 2D measurements. 3D sensing technologies (3D laser scanning, structured light and digital photography) are increasingly incorporated into mass-produced consumer goods and have the potential to automate the process, providing a cost-effective alternative to current commercial phenotyping platforms. We evaluate the performance, cost and practicability for plant phenotyping and present a 3D reconstruction method from multi-view images acquired with a domestic-quality camera. The method consists of the following steps: (i) image acquisition using a digital camera and turntable; (ii) extraction of local invariant features and matching across overlapping image pairs; (iii) estimation of camera parameters and pose based on Structure from Motion (SfM); and (iv) use of a patch-based multi-view stereo technique to produce a dense 3D point cloud. We conclude that the proposed 3D reconstruction is a promising generalized technique for the non-destructive phenotyping of various plants during their whole growth cycles.

Journal ArticleDOI
TL;DR: A novel, fast, and accurate in-plane displacement distribution measurement method is proposed that uses a digital camera and arbitrary repeated patterns based on the moiré methodology, which is useful for various applications ranging from the study of displacement and strain distributions in materials science, the biomimetics field, and mechanical material testing.
Abstract: In this study, a novel, fast, and accurate in-plane displacement distribution measurement method is proposed that uses a digital camera and arbitrary repeated patterns, based on the moiré methodology. The key aspect of this method is the use of the phase information of both the fundamental-frequency and the high-order-frequency components of the moiré fringe before and after deformation. Compared with conventional displacement methods and sensors, the main advantages of the method developed herein are its high resolution, accuracy and speed, its low cost, and its easy implementation. Its effectiveness is confirmed by a simple in-plane displacement measurement experiment, and the experimental results indicate that an accuracy of 1/1000 of the pitch can be achieved for various repeated patterns. The method is useful for applications ranging from the study of displacement and strain distributions in materials science, the biomimetics field, and mechanical material testing to securing the integrity of infrastructure.

Patent
30 Apr 2014
TL;DR: In this paper, an object 3D information acquisition method based on digital close-range photography is presented; unlike traditional 3D measurement methods, it does not require additional equipment such as a laser or a projector to assist in completing the 3D reconstruction.
Abstract: The invention discloses an object three-dimensional information acquisition method based on digital close-range photography. In the method, coded mark points are placed in an arbitrary scene, and the internal parameters of a camera are calibrated through a self-calibration algorithm. When three-dimensional reconstruction is performed on an object, the digital camera is held in the hand and the object is photographed from different angles to obtain two pictures; a SIFT key-point detection method is then used to detect the coordinate information of key points on the object, completing the calibration of the external parameters of the camera. An optical-flow detection method is used to find, on the second picture, the matching points corresponding to pixel points on the first picture, and a two-view reconstruction method obtains the three-dimensional information of the object from the matching points and the internal and external parameters of the camera. Traditional three-dimensional measurement methods need additional equipment such as a laser and a projector to assist in completing the three-dimensional reconstruction, whereas in this method the optical-flow detection of the pictures replaces such equipment for obtaining matching-point information on the object; hardware cost is thereby reduced, and the method is convenient to implement.

Proceedings ArticleDOI
TL;DR: Several modifications of the standard slanted-edge method, made to suit specific system characteristics, unique measurement needs, or computational shortcomings in the original method, are described, along with how they have improved the reliability of the resulting system evaluations.
Abstract: The well-established Modulation Transfer Function (MTF) is an imaging performance parameter that is well suited to describing certain sources of detail loss, such as optical focus and motion blur. As performance standards have developed for digital imaging systems, the MTF concept has been adapted and applied as the spatial frequency response (SFR). The international standard for measuring digital camera resolution, ISO 12233, was adopted over a decade ago. Since then the slanted edge-gradient analysis method on which it was based has been improved and applied beyond digital camera evaluation. Practitioners have modified minor elements of the standard method to suit specific system characteristics, unique measurement needs, or computational shortcomings in the original method. Some of these adaptations have been documented and benchmarked, but a number have not. In this paper we describe several of these modifications, and how they have improved the reliability of the resulting system evaluations. We also review several ways the method has been adapted and applied beyond camera resolution.
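For orientation, the core of the slanted-edge chain (edge-spread function, differentiated to a line-spread function, whose normalised DFT magnitude gives the SFR) can be sketched as below. Real ISO 12233 implementations add edge-angle estimation, 4x-oversampled binning, and windowing, all omitted from this sketch.

```python
import numpy as np

def sfr_from_esf(esf):
    """esf: 1-D samples across the edge (edge-spread function).
    Differentiates to the line-spread function, then returns the
    DFT magnitude normalised so that SFR(0) = 1."""
    lsf = np.diff(np.asarray(esf, float))    # LSF = d(ESF)/dx
    spec = np.abs(np.fft.rfft(lsf))
    return spec / spec[0]

# Ideal step edge: the LSF is a single impulse, so the SFR is flat at 1.0
# (no detail loss); any blur would roll the curve off at high frequencies.
sfr = sfr_from_esf([0.0] * 8 + [1.0] * 8)
```

Blur in the capture chain widens the LSF, which shows up as the high-frequency roll-off that the paper's modified methods aim to measure reliably.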

Patent
31 Jul 2014
TL;DR: In this article, the authors present a cup assembly including a cup holder whose base incorporates a microcontroller, weight sensor and accelerometer, a handle extending upwardly from the base, and a camera support supporting a digital camera.
Abstract: In described embodiments, the present invention is a cup assembly including a cup holder having a base having a microcontroller, weight sensor and accelerometer incorporated therein, a handle extending upwardly from the base, and a camera support extending upwardly from the base. The camera support supports a digital camera. The digital camera is electronically coupled to the microcontroller. A cup is removably insertable into the cup holder. A method of using the cup assembly is also disclosed.

Patent
06 Nov 2014
TL;DR: In this article, a method and device for obtaining size and shape data of a subject comprising positioning a substantially two-dimensional reference object on a plane near to the subject is presented.
Abstract: A method and device for obtaining size and shape data of a subject, comprising: positioning a substantially two-dimensional reference object on a plane near the subject; providing a digital camera comprising a display screen, a digital imaging chip, a processor, a memory and a transmitter; imaging the reference object and the subject on the display screen together with a framework corresponding to a projection of the reference object from a desired angle; and tilting the camera so that the framework aligns with the perimeter of the reference object on the screen. Also disclosed is a device for mapping an interior space, comprising a stereo vision camera with a pair of cameras and a laser pattern projector generating a laser beam that is observable in the images as a spot of light reflected from an inner surface of the container.

Journal ArticleDOI
TL;DR: In this paper, the authors compare the performance of the internal IMUs of three modern smartphones (Samsung Galaxy S4, Samsung Galaxy S5 and iPhone 4) with that of an external mass-market IMU platform in order to verify their accuracy in terms of positioning.
Abstract: Modern smartphones include several sensors commonly used in geomatics applications, such as a digital camera, GNSS (Global Navigation Satellite System) receiver, inertial platform, RFID and Wi-Fi systems. In this paper the authors test the performance of the internal sensors (Inertial Measurement Unit, IMU) of three modern smartphones (Samsung Galaxy S4, Samsung Galaxy S5 and iPhone 4) against an external mass-market IMU platform in order to verify their accuracy in terms of positioning. The Image Based Navigation (IBN) approach is also investigated: it can be very useful in dense urban environments or for indoor positioning, as an alternative to GNSS positioning. IBN can achieve sub-metre accuracy, but it requires a database of georeferenced images (Image DataBase, IDB), together with a dedicated algorithm for resizing the images collected by the smartphone so that they can be shared with the server hosting the IDB, and a characterization of the smartphone camera lens in terms of focal length and lens distortion. The authors have developed a method that is innovative with respect to those available today and have tested it in a covered area, using a special support on which all sensors under test were installed. Geomatic instruments were used to define the reference trajectory, against which the trajectory obtained with the IBN solution was compared. First results show horizontal and vertical accuracies better than 60 cm with respect to the reference trajectories. The IBN method, sensors, tests and results are described in the paper.
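The lens characterization the authors mention usually means estimating a radial distortion model on top of the focal length. As an illustration only (the paper does not state which model it adopts), a Brown-Conrady-style radial term with hypothetical coefficients `k1` and `k2` looks like:

```python
def distort_radial(x, y, k1, k2):
    """Apply radial lens distortion to a point in normalized image
    coordinates (i.e. after subtracting the principal point and dividing
    by the focal length). k1, k2 are the first two radial coefficients.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Barrel distortion (k1 < 0) pulls points toward the image centre;
# pincushion (k1 > 0) pushes them outward.
xd, yd = distort_radial(0.5, 0.0, -0.2, 0.0)
```

Calibration inverts this relationship: the coefficients are fitted so that, after correction, straight scene lines map to straight image lines.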

Journal ArticleDOI
TL;DR: In this survey paper, energy-efficient hardware-based image compression is advocated as a way to counter the severe hardware constraints in WSNs.
Abstract: Multidimensional sensors, such as the digital camera sensors in visual sensor networks (VSNs), generate a huge amount of information compared with the scalar sensors in wireless sensor networks (WSNs). Processing and transmitting such data from low-power sensor nodes is challenging because of their limited computational resources and restricted bandwidth in a hardware-constrained environment. Source coding can be used to reduce the size of the vision data collected by the sensor nodes before sending it to its destination. With image compression, more efficient processing and transmission can be obtained by removing redundant information from the captured raw image data. This paper surveys the main conventional state-of-the-art image compression standards, such as JPEG and JPEG2000, and reviews the advantages and shortcomings of applying these algorithms in the VSN hardware environment. The main factors influencing the design of compression algorithms in the context of VSNs are also presented. The selected compression algorithm should have hardware-oriented properties such as simplicity of coding, low memory need, low computational load, and a high compression rate. Overall, energy-efficient hardware-based image compression is advocated to counter the severe hardware constraints in WSNs.
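The trade-off the survey describes — spending CPU cycles on compression to save radio bandwidth — can be illustrated with the standard library's `zlib` as a simple stand-in for the JPEG-family codecs it reviews (zlib is lossless and far simpler, so the numbers are only indicative):

```python
import os
import zlib

def compression_ratio(raw: bytes, level: int = 6) -> float:
    """Original size divided by compressed size; higher means more
    redundancy was removed before transmission."""
    return len(raw) / len(zlib.compress(raw, level))

# Structured sensor data compresses well before transmission...
redundant = bytes(range(256)) * 64
# ...while high-entropy data barely compresses at all.
random_like = os.urandom(16384)
```

On a real node the `level` parameter is exactly the knob the survey discusses: higher levels cost more computation (and battery) per saved byte of transmission.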