
Showing papers on "Digital camera published in 2006"


Journal ArticleDOI
TL;DR: A new method is proposed for identifying a digital camera from its images based on the sensor's pattern noise; a reference pattern, obtained by averaging the noise extracted from multiple images with a denoising filter, serves as a unique identification fingerprint for each camera under investigation.
Abstract: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction.
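
A minimal sketch of the fingerprint-and-detector pipeline described above, assuming scikit-image's denoise_wavelet as a stand-in for the paper's wavelet-based denoising filter; the helper names are illustrative, not the authors' implementation:

```python
import numpy as np
from skimage.restoration import denoise_wavelet

def noise_residual(img):
    """Pattern-noise estimate: the image minus its denoised version."""
    return img - denoise_wavelet(img)

def camera_fingerprint(images):
    """Reference pattern: average the residuals of many images from one camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation_detector(img, fingerprint):
    """Normalized correlation between a test image's residual and the reference
    pattern; values well above the null distribution suggest a match."""
    r = noise_residual(img).ravel()
    f = fingerprint.ravel()
    r = r - r.mean()
    f = f - f.mean()
    return float(np.dot(r, f) / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))
```

Thresholding this statistic against a decision level calibrated on images from other cameras yields the false-alarm/false-rejection trade-off the paper measures.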

1,195 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a method for estimating the parameters of the models by minimizing the mean absolute error between the color measurements obtained by the models and those obtained by a commercial colorimeter for uniform, homogeneous surfaces.

710 citations


Journal ArticleDOI
TL;DR: The current approaches adopted for camera calibration in close-range photogrammetry and computer vision are overviewed, operational aspects of self-calibration are discussed, and the impact of chromatic aberration on modelled radial distortion is examined.
Abstract: Camera calibration has always been an essential component of photogrammetric measurement, with self-calibration nowadays being an integral and routinely applied operation within photogrammetric triangulation, especially in high-accuracy close-range measurement. With the very rapid growth in adoption of off-the-shelf digital cameras for a host of new 3D measurement applications, however, there are many situations where the geometry of the image network will not support robust recovery of camera parameters via on-the-job calibration. For this reason, stand-alone camera calibration has again emerged as an important issue in close-range photogrammetry, and it also remains a topic of research interest in computer vision. This paper overviews the current approaches adopted for camera calibration in close-range photogrammetry and computer vision, and discusses operational aspects for self-calibration. Also, the results of camera calibrations using different algorithms are summarized. Finally, the impact of chromatic aberration on modelled radial distortion is touched upon to highlight the fact that there are still issues of research interest in the photogrammetric calibration of consumer-grade digital cameras.

543 citations


Book
01 Jan 2006
TL;DR: In this paper, the authors present a unified solution to focus problems by instead recording the light field inside the camera: not just the position but also the direction of light rays striking the image plane.
Abstract: Focusing images well has been difficult since the beginnings of photography in 1839. Three manifestations of the problem are: the chore of having to choose what to focus on before clicking the shutter, the awkward coupling between aperture size and depth of field, and the high optical complexity of lenses required to produce aberration-free images. These problems arise because conventional cameras record only the sum of all light rays striking each pixel on the image plane. This dissertation presents a unified solution to these focus problems by instead recording the light field inside the camera: not just the position but also the direction of light rays striking the image plane. I describe the design, prototyping and performance of a digital camera that records this light field in a single photographic exposure. The basic idea is to use an array of microlenses in front of the photosensor in a regular digital camera. The main price behind this new kind of photography is the sacrifice of some image resolution to collect directional ray information. However, it is possible to smoothly vary the optical configuration from the light field camera back to a conventional camera by reducing the separation between the microlenses and photosensor. This allows a selectable trade-off between image resolution and refocusing power. More importantly, current semiconductor technology is already capable of producing sensors with an order of magnitude more resolution than we need in final images. The extra ray directional information enables unprecedented capabilities after exposure. For example, it is possible to compute final photographs that are refocused at different depths, or that have extended depth of field, by re-sorting the recorded light rays appropriately. Theory predicts, and experiments corroborate, that blur due to incorrect focus can be reduced by a factor approximately equal to the directional resolution of the recorded light rays. Similarly, digital correction of lens aberrations re-sorts aberrant light rays to where they should ideally have converged, improving image contrast and resolution. Future cameras based on these principles will be physically simpler, capture light more quickly, and provide greater flexibility in finishing photographs.
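
The refocusing idea can be sketched compactly. Below is a minimal shift-and-add refocuser over a 4D light field array indexed as [u, v, y, x]; the array layout, the alpha parameterization, and the use of an interpolating shift are simplifying assumptions, not the dissertation's implementation:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lightfield, alpha):
    """Synthetic refocusing by shifting each sub-aperture view and averaging.

    lightfield: 4D array [u, v, y, x] of sub-aperture images.
    alpha: relative depth of the synthetic focal plane; alpha = 1 reproduces
    the original focus, other values refocus nearer or farther.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = (u - cu) * (1.0 - 1.0 / alpha)
            dx = (v - cv) * (1.0 - 1.0 / alpha)
            out += nd_shift(lightfield[u, v], (dy, dx), order=1)
    return out / (U * V)
```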

542 citations


Journal ArticleDOI
TL;DR: A neural network particle finding algorithm and a new four-frame predictive tracking algorithm are proposed for three-dimensional Lagrangian particle tracking (LPT) and the best algorithms are verified to work in a real experimental environment.
Abstract: A neural network particle finding algorithm and a new four-frame predictive tracking algorithm are proposed for three-dimensional Lagrangian particle tracking (LPT). A quantitative comparison of these and other algorithms commonly used in three-dimensional LPT is presented. Weighted averaging, one-dimensional and two-dimensional Gaussian fitting, and the neural network scheme are considered for determining particle centers in digital camera images. When the signal to noise ratio is high, the one-dimensional Gaussian estimation scheme is shown to achieve a good combination of accuracy and efficiency, while the neural network approach provides greater accuracy when the images are noisy. The effect of camera placement on both the yield and accuracy of three-dimensional particle positions is investigated, and it is shown that at least one camera must be positioned at a large angle with respect to the other cameras to minimize errors. Finally, the problem of tracking particles in time is studied. The nearest neighbor algorithm is compared with a three-frame predictive algorithm and two four-frame algorithms. These four algorithms are applied to particle tracks generated by direct numerical simulation both with and without a method to resolve tracking conflicts. The new four-frame predictive algorithm with no conflict resolution is shown to give the best performance. Finally, the best algorithms are verified to work in a real experimental environment.
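
The one-dimensional Gaussian estimation scheme is commonly implemented as a three-point fit: the logarithm of a Gaussian is a parabola, so the sub-pixel peak position follows from the three samples around the brightest pixel. A minimal sketch of that standard estimator (not necessarily the authors' exact code):

```python
import numpy as np

def gaussian_peak_1d(intensity, i):
    """Sub-pixel peak location from three samples around index i, assuming a
    locally Gaussian profile (log of a Gaussian is a parabola)."""
    lm = np.log(intensity[i - 1])
    l0 = np.log(intensity[i])
    lp = np.log(intensity[i + 1])
    return i + 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)

# Example: particle center along one image row.
row = np.array([1.0, 2.0, 9.0, 10.0, 4.0, 1.5])
print(gaussian_peak_1d(row, int(np.argmax(row))))  # ~2.6, between pixels 2 and 3
```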

439 citations


Journal ArticleDOI
TL;DR: In this paper, the potential, limitations and applicability of the high dynamic range (HDR) photography technique are evaluated as a luminance mapping tool, and the camera response function was computationally derived by using Photosphere software, and was used to fuse the multiple photographs into an HDR image.
Abstract: In this paper, the potential, limitations and applicability of the High Dynamic Range (HDR) photography technique are evaluated as a luminance mapping tool. Multiple exposure photographs of static scenes were taken with a commercially available digital camera to capture the wide luminance variation within the scenes. The camera response function was computationally derived by using Photosphere software, and was used to fuse the multiple photographs into an HDR image. The vignetting effects and point spread function of the camera and lens system were determined. Laboratory and field studies showed that the pixel values in the HDR photographs correspond to the physical quantity of luminance with reasonable precision and repeatability.
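
Photosphere's internal fusion algorithm is not reproduced here; as a sketch of how such tools work, below is a Debevec-and-Malik-style weighted fusion in which each pixel's log radiance is averaged over exposures through the recovered response function g. The hat weighting and all names are assumptions:

```python
import numpy as np

def fuse_hdr(images, exposure_times, g):
    """Fuse multiple exposures of a static scene into per-pixel log radiance.

    images: list of uint8 arrays of the same scene at different exposures.
    exposure_times: exposure time of each image, in seconds.
    g: length-256 array, the log inverse camera response g(Z) = ln(E * t).
    """
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        z = img.astype(int)
        w = np.minimum(z, 255 - z).astype(float)  # trust mid-tones most
        num += w * (g[z] - np.log(t))
        den += w
    return num / np.maximum(den, 1e-6)
```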

259 citations


Patent
31 Aug 2006
TL;DR: In this article, a dual-use camera is provided for a portable or laptop computer, or for a cellular phone, handset, personal digital assistant or other handheld device, in which either the camera or a display is movable with respect to the other, so that in a first mode the camera captures images of the display to enable its calibration, and in a second mode it captures images other than of the display.
Abstract: Color calibration of color image rendering devices, such as large color displays that operate by either projection or emission of images, utilizes an internal color measurement instrument or external color measurement modules locatable on a wall or speaker. A dual-use camera is provided for a portable or laptop computer, or a cellular phone, handset, personal digital assistant or other handheld device with a digital camera, in which either the camera or a display is movable with respect to the other, enabling the camera in a first mode to capture images of the display for calibration of the display, and in a second mode to capture images other than of the display. The displays may represent rendering devices for enabling virtual proofing in a network, or may be part of stand-alone systems and apparatuses for color calibration. Improved calibration is also provided for sensing and correcting non-uniformities of rendering devices, such as color displays, printers, presses, or other color image rendering devices.

251 citations


Patent
27 Nov 2006
TL;DR: In this paper, light from a fixed-focal-length lens is split into two beams by a beam splitter to form respective images on a first image sensor and a second image sensor.
Abstract: A digital camera enables high-speed zooming operation without use of a zoom lens. Light originating from a fixed-focal-length lens is split into two beams by a beam splitter, to form respective images on a first image sensor and a second image sensor. The first image sensor and the second image sensor are equal to each other in terms of the number of pixels, but differ from each other in terms of pixel size. The first image sensor acquires a wide image, and the second image sensor acquires a telephoto image. An output is produced by switching between the first image sensor and the second image sensor in response to zooming operation. When the image from the first image sensor is recorded, focus detection is performed by use of an image signal from the second image sensor, to effect automatic focusing.
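
The switching behavior the abstract describes reduces to simple range logic. A hypothetical sketch (the zoom ranges and sensor names are illustrative, not from the patent):

```python
def select_sensor(zoom, wide_range=(1.0, 3.0), tele_range=(3.0, 10.0)):
    """Choose the recording sensor from the zoom position: the wide sensor in
    the first zoom range, the tele sensor in the second. While the wide sensor
    records, the tele sensor's signal can drive focus detection."""
    if wide_range[0] <= zoom < wide_range[1]:
        return "wide"  # record sensor 1; sensor 2 available for autofocus
    if tele_range[0] <= zoom <= tele_range[1]:
        return "tele"  # record sensor 2
    raise ValueError("zoom position out of range")
```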

179 citations


BookDOI
01 Oct 2006
TL;DR: This book spans the machine vision chain, from processing of information in the human visual system and lighting, through optics, camera calibration, camera systems and computer interfaces, to machine vision algorithms and applications in manufacturing.
Abstract: Contents:
1 Processing of Information in the Human Visual System (Prof. Dr. F. Schaeffel, University of Tubingen): 1.1 Preface. 1.2 Design and Structure of the Eye. 1.3 Optical Aberrations and Consequences for Visual Performance. 1.4 Chromatic Aberration. 1.5 Neural Adaptation to Monochromatic Aberrations. 1.6 Optimizing Retinal Processing with Limited Cell Numbers, Space and Energy. 1.7 Adaptation to Different Light Levels. 1.8 Rod and Cone Responses. 1.9 Spiking and Coding. 1.10 Temporal and Spatial Performance. 1.11 ON/OFF Structure, Division of the Whole Illuminance Amplitude in Two Segments. 1.12 Consequences of the Rod and Cone Diversity on Retinal Wiring. 1.13 Motion Sensitivity in the Retina. 1.14 Visual Information Processing in Higher Centers. 1.15 Effects of Attention. 1.16 Color Vision, Color Constancy, and Color Contrast. 1.17 Depth Perception. 1.18 Adaptation in the Visual System to Color, Spatial, and Temporal Contrast. 1.19 Conclusions. References.
2 Introduction to Building a Machine Vision Inspection (Axel Telljohann, Consulting Team Machine Vision (CTMV)): 2.1 Preface. 2.2 Specifying a Machine Vision System. 2.3 Designing a Machine Vision System. 2.4 Costs. 2.5 Words on Project Realization. 2.6 Examples.
3 Lighting in Machine Vision (I. Jahr, Vision & Control GmbH): 3.1 Introduction. 3.2 Demands on Machine Vision Lighting. 3.3 Light Used in Machine Vision. 3.4 Interaction of Test Object and Light. 3.5 Basic Rules and Laws of Light Distribution. 3.6 Light Filters. 3.7 Lighting Techniques and Their Use. 3.8 Lighting Control. 3.9 Lighting Perspectives for the Future. References.
4 Optical Systems in Machine Vision (Dr. Karl Lenhardt, Jos. Schneider Optische Werke GmbH): 4.1 A Look on the Foundations of Geometrical Optics. 4.2 Gaussian Optics. 4.3 The Wave Nature of Light. 4.4 Information Theoretical Treatment of Image Transfer and Storage. 4.5 Criteria for Image Quality. 4.6 Practical Aspects. References.
5 Camera Calibration (R. Godding, AICON 3D Systems GmbH): 5.1 Introduction. 5.2 Terminology. 5.3 Physical Effects. 5.4 Mathematical Calibration Model. 5.5 Calibration and Orientation Techniques. 5.6 Verification of Calibration Results. 5.7 Applications. References.
6 Camera Systems in Machine Vision (Horst Mattfeldt, Allied Vision Technologies GmbH): 6.1 Camera Technology. 6.2 Sensor Technologies. 6.3 CCD Image Artifacts. 6.4 CMOS Image Sensor. 6.5 Block Diagrams and Their Description. 6.6 Digital Cameras. 6.7 Controlling Image Capture. 6.8 Configuration of the Camera. 6.9 Camera Noise. 6.10 Digital Interfaces. References.
7 Camera Computer Interfaces (Tony Iglesias, Anita Salmon, Johann Scholtz, Robert Hedegore, Julianna Borgendale, Brent Runnels, Nathan McKimpson, National Instruments): 7.1 Overview. 7.2 Analog Camera Buses. 7.3 Parallel Digital Camera Buses. 7.4 Standard PC Buses. 7.5 Choosing a Camera Bus. 7.6 Computer Buses. 7.7 Choosing a Computer Bus. 7.8 Driver Software. 7.9 Features of a Machine Vision System.
8 Machine Vision Algorithms (Dr. Carsten Steger, MVTec Software GmbH): 8.1 Fundamental Data Structures. 8.2 Image Enhancement. 8.3 Geometric Transformations. 8.4 Image Segmentation. 8.5 Feature Extraction. 8.6 Morphology. 8.7 Edge Extraction. 8.8 Segmentation and Fitting of Geometric Primitives. 8.9 Template Matching. 8.10 Stereo Reconstruction. 8.11 Optical Character Recognition. References.
9 Machine Vision in Manufacturing (Dr.-Ing. Peter Waszkewitz, Robert Bosch GmbH): 9.1 Introduction. 9.2 Application Categories. 9.3 System Categories. 9.4 Integration and Interfaces. 9.5 Mechanical Interfaces. 9.6 Electrical Interfaces. 9.7 Information Interfaces. 9.8 Temporal Interfaces. 9.9 Human-Machine Interfaces. 9.10 Industrial Case Studies. 9.11 Constraints and Conditions. References.
Index.

154 citations


Journal ArticleDOI
TL;DR: The experimental results confirm that the proposed method suppresses noise (CMOS/CCD image sensor noise model) while effectively interpolating the missing pixel components, demonstrating a significant improvement in image quality when compared to treating demosaicing and denoising problems independently.
Abstract: The output image of a digital camera is subject to a severe degradation due to noise in the image sensor. This paper proposes a novel technique to combine demosaicing and denoising procedures systematically into a single operation by exploiting their obvious similarities. We first design a filter as if we are optimally estimating a pixel value from a noisy single-color (sensor) image. With additional constraints, we show that the same filter coefficients are appropriate for color filter array interpolation (demosaicing) given noisy sensor data. The proposed technique can combine many existing denoising algorithms with the demosaicing operation. In this paper, a total least squares denoising method is used to demonstrate the concept. The algorithm is tested on color images with pseudorandom noise and on raw sensor data from a real CMOS digital camera that we calibrated. The experimental results confirm that the proposed method suppresses noise (CMOS/CCD image sensor noise model) while effectively interpolating the missing pixel components, demonstrating a significant improvement in image quality when compared to treating demosaicing and denoising problems independently.
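
The paper's joint filter is not reproduced here, but a minimal bilinear demosaic of an RGGB Bayer mosaic illustrates the interpolation step that, when done independently of denoising, the proposed method improves upon; the kernel formulation is a common textbook choice, not the authors' code:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(cfa):
    """Bilinear interpolation of an RGGB Bayer mosaic, (H, W) -> (H, W, 3)."""
    r = np.zeros_like(cfa, dtype=float)
    g = np.zeros_like(r)
    b = np.zeros_like(r)
    r[0::2, 0::2] = cfa[0::2, 0::2]   # red samples
    g[0::2, 1::2] = cfa[0::2, 1::2]   # green samples (two per 2x2 cell)
    g[1::2, 0::2] = cfa[1::2, 0::2]
    b[1::2, 1::2] = cfa[1::2, 1::2]   # blue samples
    k_rb = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    k_g = np.array([[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]])
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])
```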

151 citations


Journal ArticleDOI
TL;DR: A new algorithm was developed to classify each pixel according to a decision-criteria process; the accuracy of the method exceeded 94%.
Abstract: This work describes the development of a simple field method of estimating the sky cloud coverage percentage for several applications at the Brazilian Antarctic Station, Ferraz (62°05′S, 58°23.5′W). The database for this method was acquired by a digital color camera in the visible range of the spectrum. A new algorithm was developed to classify each pixel according to a decision-criteria process. The information on pixel contamination by clouds was obtained from the saturation component of the intensity, hue, and saturation (IHS) space. For simplicity, the images were acquired with a limited field of view of 36° pointing to the camera’s zenith to prevent direct sunlight from reaching the internal charge-coupled device (CCD) of the camera. For a priori–classified clear-sky images, the accuracy of the method exceeded 94%. For overcast-sky conditions, the corresponding accuracy was larger than 99%. A comparison test was performed with two human observers and our method. The results for the...
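
The saturation criterion can be sketched compactly: clear blue sky is strongly saturated, while cloud pixels are nearly gray, so low saturation flags clouds. A minimal version with an illustrative threshold, not the paper's full decision process:

```python
import numpy as np

def cloud_fraction(rgb, sat_threshold=0.25):
    """Fraction of sky pixels classified as cloud in an RGB image with values
    in [0, 1], using the saturation component of the IHS/HSV space."""
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return float((saturation < sat_threshold).mean())
```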

Patent
08 Jun 2006
TL;DR: In this article, the microlenses are made of a gel-like transparent material that deforms in response to changes in the internal air pressure, thereby changing their surface curvature.
Abstract: An imaging device includes an image sensor chip and a package for containing the image sensor chip. Formed in the package is a vent hole that is connected to an air pump. In a light receiving area of the image sensor chip, there are photodiodes with microlenses above them. The microlenses are made of a gel-like transparent material. When the internal air pressure of the package is changed by the air pump, each microlens deforms in response to the change of the internal air pressure, thereby changing its surface curvature.

Journal ArticleDOI
TL;DR: In this article, the Image-Based Registration (IBR) method is proposed for terrestrial laser scanner (TLS) point cloud registration, offering a one-step registration of the point clouds from each scanner position.
Abstract: Building 3D models using terrestrial laser scanner (TLS) data is currently an active area of research, especially in the fields of heritage recording and site documentation. Multiple TLS scans are often required to generate an occlusion-free 3D model in situations where the object to be recorded has a complex geometry. The first task associated with building 3D models from laser scanner data in such cases is to transform the data from the scanner’s local coordinate system into a uniform Cartesian reference datum, which requires sufficient overlap between the scans. Many TLS systems are now supplied with an SLR-type digital camera, such that the scene to be scanned can also be photographed. The provision of overlapping imagery offers an alternative, photogrammetric means to achieve point cloud registration between adjacent scans. The images from the digital camera mounted on top of the laser scanner are used to first relatively orient the network of images, and then to transfer this orientation to the TLS stations to provide exterior orientation. The proposed approach, called the IBR method for Image-Based Registration, offers a one-step registration of the point clouds from each scanner position. In the case of multiple scans, exterior orientation is simultaneously determined for all TLS stations by bundle adjustment. This paper outlines the IBR method and discusses test results obtained with the approach. It will be shown that the photogrammetric orientation process for TLS point cloud registration is efficient and accurate, and offers a viable alternative to other approaches, such as the well-known iterative closest point algorithm.
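
Once the bundle adjustment has recovered each TLS station's exterior orientation, registration reduces to a rigid transform of each local point cloud into the common datum. A minimal sketch under assumed array conventions:

```python
import numpy as np

def register_scan(points_local, R, t):
    """Transform an (N, 3) scan from the scanner's local frame into the common
    datum, given that station's exterior orientation: rotation R (3x3, local to
    world) and position t (3,)."""
    return points_local @ R.T + t

# The registered model is the union of all transformed scans, e.g.:
# model = np.vstack([register_scan(p, R_i, t_i)
#                    for p, (R_i, t_i) in zip(scans, orientations)])
```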

Patent
Tetsuya Hashimoto, Hiroki Fukuoka
21 Mar 2006
TL;DR: In this article, the data terminal ready (DTR) signal of an RS-232 connection is monitored to determine whether an external device is properly connected and in a state that permits communication; once connected, the camera can transmit or receive images and/or audio from the external device.
Abstract: An electronic camera and method of operating an electronic camera which detects whether an external device such as a personal computer is properly connected to the camera and in a state which permits communication. The camera monitors a data terminal ready (DTR) signal of an RS-232 connection in order to determine that the external device is properly connected and in a state which permits communication. Once the proper connection is detected, the camera can either transmit or receive images and/or audio from the external device. Accordingly, a specific switch which places the camera in a communication mode can be eliminated. Further, a single switch may be utilized for both controlling whether the camera records or plays images when there is no device connected, and which controls whether the camera transmits or receives images and/or audio when an external device is determined to be connected.

Patent
28 Jun 2006
TL;DR: A skin testing and imaging station, and a corresponding method, for capturing, displaying and analyzing images of a person and for testing the skin with a variety of probes; the station includes a digital camera, a light source capable of providing at least two different wavelengths of light, a plurality of probes for conducting skin tests, a touch-screen display and a computer for controlling the components of the station.
Abstract: A skin testing and imaging station and corresponding method for capturing, displaying and analyzing images of a person and for testing the skin using a variety of probes includes a digital camera, a light source capable of providing at least two different wavelengths of light, a plurality of probes for conducting skin tests, a touch-screen display and a computer for controlling the components of the station. The apparatus selectively captures and displays a plurality of digital images using different wavelengths of illuminating light, e.g., using a plurality of flashes and filters, some of which may be adjustable to adjust the angle of incidence of the illuminating light on the subject. In video mode, the camera displays a real-time image on the display, enabling a user to position a probe for testing any specific area of the skin. Preferably, the apparatus is self-serve, allowing any person to capture, review and analyze the images and skin data. Verbal and/or graphic instructions to a user aid in use of the station. An intuitive graphic user interface with thumbnail images is employed. Focus control, zoom and synchronized side-by-side comparison of images are available.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A digital image forensics technique is described that distinguishes images captured by a digital camera from computer-generated images; the results are based on images generated by the Maya and 3D Studio Max software and on various digital camera images.
Abstract: We describe a digital image forensics technique to distinguish images captured by a digital camera from computer generated images. Our approach is based on the fact that image acquisition in a digital camera is fundamentally different from the generative algorithms deployed by computer generated imagery. This difference is captured in terms of the properties of the residual image (pattern noise in the case of digital camera images) extracted by a wavelet-based denoising filter. In (Jan Lukas, et al., 2005), it is established that each digital camera has a unique pattern noise associated with itself. In addition, our results indicate that the residuals obtained from digital camera images and those obtained from computer generated images each exhibit common characteristics that are not present in the other type of images. This can be attributed to fundamental differences in the image generation processes that yield the two types of images. Our results are based on images generated by the Maya and 3D Studio Max software, and various digital camera images.
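
A sketch of the residual-feature idea: extract the denoising residual and summarize its distribution for a downstream classifier. scikit-image's denoise_wavelet stands in for the paper's wavelet filter, and the moment-based feature set is illustrative:

```python
import numpy as np
from skimage.restoration import denoise_wavelet

def residual_features(img):
    """Summary statistics of the denoising residual. Camera images carry
    sensor pattern noise, computer generated images do not, so these
    distributions differ between the two classes."""
    r = (img - denoise_wavelet(img)).ravel()
    m, s = r.mean(), r.std()
    skew = ((r - m) ** 3).mean() / (s ** 3 + 1e-12)
    kurt = ((r - m) ** 4).mean() / (s ** 4 + 1e-12)
    return np.array([m, s, skew, kurt])
```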

Patent
01 Mar 2006
TL;DR: In this article, a digital camera has a first image capturing optical system having a lens and a first sensor, and a second image capturing system with a second sensor and a clock driver.
Abstract: In a digital camera having multiple optical systems, multiple image capturing elements are effectively driven to reduce power consumption. A digital camera has a first image capturing optical system having a lens and a first image sensor and a second image capturing optical system having a lens and a second image sensor. A controller and timing generator selects the image signal from the first image capturing optical system while controlling an operation or power of the second image sensor and a clock driver to be OFF when the zoom position falls within a first zoom range. When the zoom position falls within a second zoom range, the image signal from the second image capturing optical system is selected while an operation or power of the first image sensor and a clock driver is controlled to be OFF. An operation or power of the image capturing optical system which is not selected is stopped so that power consumption is reduced.

Patent
06 Apr 2006
TL;DR: In this paper, a plurality of characteristics of the initial set of evaluation images are assessed to provide a first assessment and a final capture state of the camera is set responsive to the first assessment.
Abstract: In a method and digital camera, an initial set of evaluation images are captured. A plurality of characteristics of the initial set of evaluation images are assessed to provide a first assessment. The characteristics include subject motion between the initial set of evaluation images. When the subject motion is in excess of a predetermined threshold, a final capture state of the camera is set responsive to the first assessment. When the subject motion is less than the predetermined threshold, the evaluation images are analyzed to provide analysis results and the final capture state of the camera is set responsive to the first assessment and the analysis results.
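
The gating logic can be sketched as follows; the motion metric (mean absolute frame difference), the threshold, and the state names are illustrative assumptions, not the patent's specification:

```python
import numpy as np

def choose_capture_state(eval_frames, motion_threshold=4.0):
    """Set the final capture state from subject motion between evaluation
    images: high motion -> act on the first assessment immediately; low
    motion -> there is time to analyze the evaluation images further."""
    diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
             for a, b in zip(eval_frames, eval_frames[1:])]
    if max(diffs, default=0.0) > motion_threshold:
        return "state_from_first_assessment"
    return "state_from_assessment_and_analysis"
```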

Journal ArticleDOI
TL;DR: This paper presents a study, based on simulations and real measurements, of the influence of shot noise on the quality of reconstructed phase images; the signal-to-noise analysis is derived from a model for image quality estimation proposed by Wagner and Brown.
Abstract: In digital holographic microscopy, shot noise is an intrinsic part of the recording process with the digital camera. We present a study based on simulations and real measurements describing the shot-noise influence on the quality of the reconstructed phase images. Different configurations of the reference wave and the object wave intensities will be discussed, illustrating the detection limit and the coherent amplification of the object wave. The signal-to-noise ratio (SNR) calculation of the reconstructed phase images, based on statistical decision theory, is derived from a model for image quality estimation proposed by Wagner and Brown [Phys. Med. Biol. 30, 489 (1985)]. It will be shown that a phase image with a SNR above 10 can be obtained with a mean intensity lower than 10 photons per pixel and per hologram coming from the observed object. Experimental measurements on a glass-chrome probe will be presented to illustrate the main results of the simulations.
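
A toy simulation of the underlying shot-noise statistics (not the paper's holographic reconstruction): photon counts per pixel are Poisson distributed, so intensity SNR grows like the square root of the mean photon count:

```python
import numpy as np

rng = np.random.default_rng(0)
for mean_photons in [1, 10, 100, 1000]:
    counts = rng.poisson(mean_photons, size=100_000)  # shot noise per pixel
    print(f"{mean_photons:5d} photons/pixel -> intensity SNR ~ "
          f"{counts.mean() / counts.std():.1f}")
# SNR ~ sqrt(N); coherent amplification by a strong reference wave is what
# makes phase imaging workable near 10 object photons per pixel.
```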

Patent
16 Aug 2006
TL;DR: In this article, a digital camera has a plurality of image-capturing systems capable of essentially simultaneously capturing images of a single subject at mutually different angles of view; information identifying the other simultaneously captured images is attached as relevant information to at least one of the captured image data items.
Abstract: A digital camera capable of compensating a portion of a captured image with another image without straining storage capacity. The digital camera has a plurality of image-capturing systems capable of essentially simultaneously capturing images of a single subject at mutually different angles of view. Information identifying the other simultaneously captured image data is attached as relevant information to at least one item of image data among those captured by the plurality of image-capturing systems. The image data carrying the relevant information and the related image data captured simultaneously with it are stored as separate items of data in user memory, which serves as the storage means.

Patent
21 Sep 2006
TL;DR: A pointing and identification device (PID) as discussed by the authors allows the user to point at objects in the real world, on television or movie screens, or otherwise not on the computer screen.
Abstract: A pointing and identification device (PID) allows the user to point at objects in the real world, on television or movie screens, or otherwise not on the computer screen. The PID includes a digital camera and one or both of a laser and a reticle for aiming the digital camera. An image taken with the digital camera is transmitted to a computer or the like.

Patent
15 Jun 2006
TL;DR: In this article, a computer performs image processing of the photographed image to detect an area in which unevenness exists, and then a V-I curve of each pixel in the area is measured to calculate the necessary correction values.
Abstract: Nonuniformity in an organic EL display device is effectively detected. All display pixels of an organic EL panel are turned on and the display is photographed with a digital camera. A computer performs image processing of the photographed image to detect an area in which unevenness exists. Then, a V-I curve of each pixel in the area is measured to calculate necessary correction values. The calculated correction values are stored in a memory for use in correcting a signal input to the organic EL panel.
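
The unevenness-detection step can be sketched as a deviation-from-local-mean test on the photographed panel; the smoothing window and tolerance below are illustrative, not from the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def find_uneven_areas(panel_img, window=51, tolerance=0.05):
    """Return a boolean mask of pixels whose luminance deviates from the
    local mean by more than `tolerance` (relative), i.e. candidate
    unevenness areas in the all-on panel photograph."""
    local_mean = uniform_filter(panel_img.astype(float), size=window)
    deviation = (panel_img - local_mean) / np.maximum(local_mean, 1e-6)
    return np.abs(deviation) > tolerance
```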

Patent
29 Jun 2006
TL;DR: In this article, a region corresponding to a retrieval-source printout is extracted from an image of a subject that includes the printout, and a feature value of the extracted region is used to retrieve the original image data from a database searchable by feature value.
Abstract: When an image of a subject including a retrieval-source printout (1) is acquired by a digital camera (10), the digital camera (10) extracts a region corresponding to the retrieval-source printout (1) from the acquired image data, extracts a feature value of the extracted region, accesses a storage (20) in which a database capable of retrieval of image data on the basis of the feature value is constructed, and retrieves original image data of the retrieval-source printout (1) from the database on the basis of the extracted feature value.

Proceedings ArticleDOI
11 Dec 2006
TL;DR: A method of digital zooming is proposed that automatically recognizes the game situation or events, such as penalty kicks and free kicks, based on player and ball tracking; it is compared with a conventional technique using the AHP method, which can reflect individual subjectivity.
Abstract: We are studying the automatic production of soccer sports videos that are easy to understand, using digital camera work on fixed-camera videos. Digital camera work is a movie technique that performs virtual panning and zooming by clipping frames from high-resolution images and controlling the frame size and position. So far we have studied digital panning. In this paper, we propose a method of digital zooming that automatically recognizes the game situation or events, such as penalty kicks and free kicks, based on player and ball tracking. These recognition results are used as key indices to retrieve the event scenes from soccer videos. We compared the proposed technique with a conventional technique using the AHP method, which can reflect individual subjectivity.
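
Digital camera work reduces to clipping a window from the fixed high-resolution frame and rescaling it to the output size. A minimal sketch using OpenCV (parameter names are illustrative; assumes zoom >= 1):

```python
import cv2

def virtual_camera_work(frame, center, zoom, out_size=(1280, 720)):
    """Virtual pan/zoom: crop a window around `center` (x, y) whose size is
    the output size divided by `zoom`, then scale it up. Panning moves the
    center over time; zooming changes `zoom`."""
    out_w, out_h = out_size
    w, h = int(out_w / zoom), int(out_h / zoom)
    x = min(max(center[0] - w // 2, 0), frame.shape[1] - w)
    y = min(max(center[1] - h // 2, 0), frame.shape[0] - h)
    return cv2.resize(frame[y:y + h, x:x + w], out_size,
                      interpolation=cv2.INTER_LINEAR)
```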

Patent
08 Nov 2006
TL;DR: In this paper, a camera location landmark search system is proposed in which the position data and image data of a captured image are memorized in association with each other; a landmark corresponding to the camera position is determined, and the landmark name is memorized with the image data.
Abstract: In a camera location landmark search system, when an image is captured by a digital camera, a GPS calculator calculates position data indicating the camera position. The position data and image data of the captured image are memorized in association with each other. Map data is divided at regular intervals of latitude and longitude into many areas. Based on the position data, a divisional area including the camera position is selected with reference to a divisional area index table of the map data, and landmark data prepared for the determined divisional area are retrieved from a landmark data table of the map data. Based on the landmark data, a landmark corresponding to the camera position is determined, and the landmark name is memorized in association with the image data. The image data, as sorted according to the landmark names, may be displayed with the landmark names.
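
The divisional-area lookup amounts to quantizing latitude and longitude into a regular grid cell index; the 0.1-degree cell size below is illustrative:

```python
def divisional_area_index(lat, lon, cell_deg=0.1):
    """Map a GPS fix to the index of its divisional map area; landmark data
    is then retrieved from the table prepared for that area."""
    row = int((lat + 90.0) // cell_deg)
    col = int((lon + 180.0) // cell_deg)
    cols_per_row = int(360.0 // cell_deg)
    return row * cols_per_row + col
```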

Proceedings ArticleDOI
04 Jun 2006
TL;DR: In this paper, the light field is sampled with integral photography techniques, using a microlens array in front of the sensor inside a conventional digital camera, and computation of photographs with reduced lens aberrations is explored by digitally re-sorting aberrated rays to where they should have terminated.
Abstract: Digital light field photography consists of recording the radiance along all rays (the 4D light field) flowing into the image plane inside the camera, and using the computer to control the final convergence of rays in final images. The light field is sampled with integral photography techniques, using a microlens array in front of the sensor inside a conventional digital camera. Previous work has shown that this approach enables refocusing of photographs after the fact. This paper explores computation of photographs with reduced lens aberrations by digitally re-sorting aberrated rays to where they should have terminated. The paper presents a test with a prototype light field camera, and simulated results across a set of 35mm format lenses.


Patent
19 Jul 2006
TL;DR: In this article, a hand-held digital camera for imaging a portion of a patient's body is described, with separate illumination and imaging paths sharing a common external optical aperture in which the illumination and imaging sub-apertures are longitudinally coincident but laterally separated and non-overlapping.
Abstract: A hand-held digital camera for obtaining images of a portion of a patient's body and having a hand-held housing, a visible light source located within the housing for providing light along an illumination path from the housing aperture to the patient's body, an image sensor located within the housing that detects light returning from the patient's body along an imaging path that passes into the housing aperture, an optical system located within the housing with separate illumination and imaging paths, an external optical aperture common to the illumination and imaging systems, wherein the illumination and imaging sub-apertures are wholly contained within the common external aperture, are longitudinally coincident, and are laterally separated and non-overlapping, a digital memory device for storing captured images, an output display carried by the housing, and the ability to electronically transmit stored images. The camera can be used for retinal imaging and for otoscopy.

Patent
17 Jul 2006
TL;DR: In this paper, optomechanical and digital ocular sensor reader systems are described for eye self-exam, where the reader is a digital camera system for capturing an image of an eye and the sensor is implanted in the eye.
Abstract: System, methods, and devices are described for eye self-exam. In particular, optomechanical and digital ocular sensor reader systems are provided. The optomechanical system provides a device for viewing an ocular sensor implanted in one eye with the other eye. The digital ocular sensor system is a digital camera system for capturing an image of an eye, including an image of a sensor implanted in the eye.

Patent
26 May 2006
TL;DR: In this article, a digital camera includes a face detecting section, a color temperature detecting section and a flash device having an LED array in which RGB LEDs are regularly arranged as a light source.
Abstract: A digital camera (10) includes a face detecting section (74), a color temperature detecting section (76) and a flash device (86) having an LED array in which RGB LEDs are regularly arranged as a light source. When a shutter button (18) is pressed halfway, the face detecting section (74) reads out image data of a through image from a memory (60) and detects a person's face in the image. A CPU (64) identifies a scene based on brightness values of the face and surrounding areas as, for example, a backlit scene, and specifies a face peripheral area according to an exposure pattern corresponding to the backlit scene. When the shutter button (18) is fully pressed, the CPU (64) sends a flash projection command to an LED control circuit (87), thereby illuminating the LEDs corresponding to the face peripheral area. By controlling illumination of the RGB LEDs, the LED control circuit (87) projects flash light with a color temperature that corrects the person's face color to an appropriate skin color.