
Showing papers on "Three-CCD camera published in 2007"


Proceedings ArticleDOI
26 Dec 2007
TL;DR: It is found that simple summary statistics are sufficient to geolocate cameras without determining correspondences between cameras or explicitly reasoning about weather in the scene, and most cameras can be localized to within 50 miles of the known location.
Abstract: A key problem in widely distributed camera networks is locating the cameras. This paper considers three scenarios for camera localization: localizing a camera in an unknown environment, adding a new camera in a region with many other cameras, and localizing a camera by finding correlations with satellite imagery. We find that simple summary statistics (the time course of principal component coefficients) are sufficient to geolocate cameras without determining correspondences between cameras or explicitly reasoning about weather in the scene. We present results from a database of images from 538 cameras collected over the course of a year. We find that for cameras that remain stationary and for which we have accurate image timestamps, we can localize most cameras to within 50 miles of the known location. In addition, we demonstrate the use of a distributed camera network in the construction of a map of weather conditions.

131 citations
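
A minimal sketch of the summary-statistics idea above, assuming each camera yields an image sequence sampled at common timestamps; the function names and the correlation-based matching are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def pc_time_course(frames, k=3):
    """Time course of the top-k principal component coefficients for
    one camera's image sequence (frames: T x H x W array)."""
    X = frames.reshape(len(frames), -1).astype(float)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = PCs
    return X @ Vt[:k].T  # T x k coefficient time series

def geolocate(new_course, known_courses, known_latlons):
    """Guess a new camera's location as that of the known-location
    camera whose coefficient time course correlates best with it."""
    a = (new_course - new_course.mean(0)).ravel()
    best_loc, best_r = None, -np.inf
    for course, loc in zip(known_courses, known_latlons):
        b = (course - course.mean(0)).ravel()
        r = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if r > best_r:
            best_r, best_loc = r, loc
    return best_loc, best_r
```

The dominant coefficient largely tracks day/night illumination, which is what makes geolocation from accurately timestamped images possible at all.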


Journal ArticleDOI
TL;DR: A reflective multiple-fold approach to visible imaging for high-resolution, large aperture cameras of significantly reduced thickness allows for reduced bulk and weight compared with large high-quality camera systems and improved resolution and light collection compared with miniature conventional cameras.
Abstract: We present a reflective multiple-fold approach to visible imaging for high-resolution, large aperture cameras of significantly reduced thickness. This approach allows for reduced bulk and weight compared with large high-quality camera systems and improved resolution and light collection compared with miniature conventional cameras. An analysis of the properties of multiple-fold imagers is presented along with the design, fabrication, and testing of an eightfold prototype camera. This demonstration camera has a 35 mm effective focal length, 0.7 NA, and 27 mm effective aperture folded into a 5 mm total thickness.

105 citations


Patent
Thomas Abrams1
20 Mar 2007
TL;DR: In this article, an exemplary method of controlling a display device includes receiving an executable file and/or code via a network interface, receiving video data via a serial digital interface, executing the executable file or code on a runtime engine, processing the video data based at least in part on the executing to produce processed video data and displaying the processed data.
Abstract: Methods, devices, systems and/or storage media for video and/or audio processing. An exemplary method of controlling a display device includes receiving an executable file and/or code via a network interface, receiving video data via a serial digital interface, executing the executable file and/or code on a runtime engine, processing the video data based at least in part on the executing to produce processed video data and displaying the processed video data. Other exemplary technologies are also disclosed.

86 citations


Patent
Takeshi Shima1, Shoji Muramatsu1, Yuji Otsuka1, Tatsuhiko Monji1, Kota Irie1 
28 Jun 2007
TL;DR: In this article, an on-vehicle camera calibration system was proposed to calculate camera parameters from a characteristic amount of a road surface sign photographed by the on-vehicle camera and recognized by image processing, and to output the camera parameters.
Abstract: An on-vehicle camera calibration apparatus includes: an on-vehicle camera; a camera parameter calculation unit configured to calculate camera parameters from a characteristic amount of a road surface sign photographed by the on-vehicle camera and recognized by image processing, and to output the camera parameters, wherein the camera parameters include an installation height and installation angle of the on-vehicle camera in photographing; and a camera parameter calibration unit configured to perform optical axis calibration control of the on-vehicle camera by the camera parameters output from the camera parameter calculation unit.

55 citations


Patent
31 Jan 2007
TL;DR: In this paper, a camera calibration apparatus derives a transformation parameter (homography matrix) for projecting and combining images shot by the respective cameras on the ground; the images are aligned utilizing the calibration patterns, thereby provisionally calculating the transformation parameter.
Abstract: PROBLEM TO BE SOLVED: To attain simplification of calibration environment maintenance and improvement of calibration accuracy. SOLUTION: A front camera, a right camera, a left camera and a back camera are installed on a vehicle, and calibration patterns (A1-A4) of known shape are disposed within the respective common shooting areas (3FR, 3FL, 3BR and 3BL) between the front and right cameras, the front and left cameras, the back and right cameras, and the back and left cameras. A camera calibration apparatus derives a transformation parameter (homography matrix) for projecting and combining the images shot by the respective cameras on the ground. The images shot by the respective cameras are projected on the ground by planar projective transformation or perspective projective transformation, and the images are then aligned utilizing the calibration patterns, thereby provisionally calculating the transformation parameter. The provisional transformation parameter is then adjusted so as to minimize the projection error of the calibration patterns, based on known information on the shape of the calibration patterns.

54 citations
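
A minimal sketch of the two-stage scheme above (a provisional homography from pattern correspondences, then the projection error that the adjustment step minimizes), using OpenCV; point layouts and names are assumptions, not the patent's notation:

```python
import numpy as np
import cv2

def provisional_homography(img_pts, ground_pts):
    """Provisional transformation parameter: homography mapping image
    points of a calibration pattern (N x 2) to their known ground-plane
    coordinates (N x 2)."""
    H, _ = cv2.findHomography(np.float32(img_pts), np.float32(ground_pts))
    return H

def projection_error(H, img_pts, ground_pts):
    """Mean distance between pattern points projected onto the ground
    and their known positions -- the quantity the adjustment minimizes."""
    proj = cv2.perspectiveTransform(np.float32(img_pts).reshape(-1, 1, 2), H)
    return float(np.linalg.norm(proj.reshape(-1, 2)
                                - np.float32(ground_pts), axis=1).mean())
```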


01 Jan 2007
TL;DR: It is demonstrated that the viewer's image without the occluding object can be synthesized for every camera on-line, even though all the cameras are freely moving.
Abstract: In this paper, we present a system for Diminished Reality with multiple handheld cameras. We assume a situation in which the same scene is captured with multiple handheld cameras, but some objects occlude the scene. For such cases, we propose a method for synthesizing, for each camera, an image in which the occluding objects are diminished, with the support of the other cameras that capture the same scene from different viewpoints. In the proposed method, we use AR-Tag markers to calibrate the multiple cameras, so that online processing is possible. Using the AR tags, we compute homographies between each camera's image plane and the objective scene, which is approximated as planar in this research. The homography is used to warp the planar area and synthesize the viewer's image that includes only the objective scene, without the occluding object, which cannot be approximated as planar. We demonstrate that the viewer's image without the occluding object can be synthesized for every camera on-line, even though all the cameras are freely moving.

50 citations
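
A minimal sketch of the compositing step, assuming the marker-derived homography and an occluder mask are already available (obtaining those is the bulk of the method):

```python
import cv2

def diminish(viewer_img, helper_img, H_helper_to_viewer, occluder_mask):
    """Replace occluded pixels in the viewer's image with the helper
    camera's view of the (approximately planar) scene, warped into the
    viewer's image plane. occluder_mask: uint8, nonzero where the
    occluding object hides the scene."""
    h, w = viewer_img.shape[:2]
    warped = cv2.warpPerspective(helper_img, H_helper_to_viewer, (w, h))
    out = viewer_img.copy()
    out[occluder_mask > 0] = warped[occluder_mask > 0]
    return out
```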


Patent
Gary J. Oswald1, Rafael Camargo1
22 Mar 2007
TL;DR: In this paper, the authors present mobile communication devices with two video cameras that can operate simultaneously and in real time, including a device with a fixed first camera pointing in a first direction with respect to the housing and a movable second camera configured to point in a plurality of second directions, each camera generating its own video signal.
Abstract: Disclosed are mobile communication devices, and methods for mobile communication devices including two video cameras that can operate simultaneously and in real-time. The device includes a first video camera pointing in a first direction and configured to generate a first video signal and a second video camera pointing in a second direction and configured to generate a second video signal. The device includes a processor configured to receive the first video signal and the second video signal and to encode the first video signal and the second video signal for simultaneous transmission. Disclosed is another device, including a housing having a fixed first video camera configured to point in a first direction with respect to the housing and generate a first video signal and a movable second video camera configured to point in a plurality of second directions with respect to the housing and generate a second video signal.

45 citations


Patent
31 Oct 2007
TL;DR: In this article, a method of remotely viewing a video from a viewpoint selected by the viewer from a continuous segment is proposed, including recording a video of a subject using at least one depth video camera that records a sequence of picture frames and additionally records a depth value for each pixel of the picture frames, and rendering a depth hull that defines a 3D outline of the subject being recorded using the depth values recorded by the depth video cameras.
Abstract: A method of remotely viewing a video from a selected viewpoint selected by the viewer from a continuous segment, including, recording a video of a subject using at least one depth video camera that records a video comprising a sequence of picture frames and additionally records a depth value for each pixel of the picture frames, recording a video of the subject using at least one standard video camera positioned to record a video at a viewpoint that differs from the viewpoint of the depth video camera, rendering a depth hull that defines a three dimensional outline of the subject being recorded using the depth values recorded by the depth video cameras, providing the recorded video from one or more cameras positioned on either side of the selected viewpoint, incorporating the recorded video from the one or more cameras onto the rendered depth hull to render a viewable video from the selected viewpoint; and displaying the rendered viewable video to the viewer.

42 citations


Proceedings ArticleDOI
22 Oct 2007
TL;DR: This work investigates the possibility of letting the cameras calibrate and localize themselves relative to each other by tracking one arbitrary and fixed calibration object (e.g.: a traffic sign) in a two-camera system.
Abstract: Most recent developments in car technology promise that future cars will be equipped with many cameras facing different directions (e.g. headlights, wing mirrors, brake lights, etc.). This work investigates the possibility of letting the cameras calibrate and localize themselves relative to each other by tracking one arbitrary and fixed calibration object (e.g. a traffic sign). Since the cameras' fields of view may not overlap, the calibration object serves as a logical connection between different views. Under the assumption that the intrinsic camera parameters and the vehicle's speed are known, we suggest a method for computing the extrinsic camera parameters (rotation, translation) for a two-camera system, where one camera is defined as the origin.

39 citations
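
One way to realize the final step is to reconstruct the tracked object's 3D trajectory in each camera's frame (the known vehicle speed fixes the scale) and solve for the rigid motion aligning the two trajectories; the Kabsch solution below is a standard choice and an assumption about the authors' estimator:

```python
import numpy as np

def relative_pose(traj_cam1, traj_cam2):
    """Rigid transform (R, t) with traj_cam1[i] ~= R @ traj_cam2[i] + t,
    given the calibration object's 3D track expressed in each camera's
    coordinate frame (N x 3 arrays). Kabsch/Procrustes solution via SVD."""
    c1, c2 = traj_cam1.mean(0), traj_cam2.mean(0)
    H = (traj_cam2 - c2).T @ (traj_cam1 - c1)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = c1 - R @ c2
    return R, t
```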


Patent
24 May 2007
TL;DR: In this paper, a mobile camera system is described which connects at least first and second camera apparatuses mounted on a mobile body to one another and combines images photographed by the first and the second camera apparatuses.
Abstract: A mobile camera system is disclosed. The camera system connects plural camera apparatuses, including at least first and second camera apparatuses mounted on a mobile body, to one another and combines images photographed by the first and the second camera apparatuses, wherein reference data obtained by the first camera apparatus is transferred to the second camera apparatus via a camera control unit, signal processing is performed in the second camera apparatus on the basis of the transferred reference data to generate a corrected image, and an image from the first camera apparatus and the corrected image outputted from the second camera apparatus are combined to output a combined image.

39 citations


Proceedings ArticleDOI
22 Aug 2007
TL;DR: This paper divides cameras into groups according to their positions and orientations first, and then calibrates each camera in the world coordinate system of its own group via a factorization-based method.
Abstract: In this paper, we propose a practical factorization-and-position-based method for multiple-camera calibration. The method yields a simple calibration means for an arbitrary number of linear projective cameras while maintaining the handiness and flexibility of the original method. Only a freely moving planar pattern, used as a calibration object at a few different orientations, is required. Not all the cameras have to see this pattern at all orientations; only reasonable overlap between camera subgroups is necessary. We divide the cameras into groups according to their positions and orientations first, and then calibrate each camera in the world coordinate system of its own group via a factorization-based method. Common view fields of the planar pattern are used to estimate the Euclidean transformations between these world coordinate systems and to represent all cameras in the same world coordinate system. Both the intrinsic and extrinsic parameters of the cameras can be obtained in a uniform world coordinate system, with accuracy to within a pixel.
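
The group-chaining step reduces to composing Euclidean transforms. A minimal sketch, under an assumed world-to-camera extrinsics convention (the abstract does not spell out its convention):

```python
import numpy as np

def rebase_extrinsics(R_c, t_c, R_g, t_g):
    """Re-express a camera calibrated in its group's world frame in one
    global frame. Conventions assumed: x_cam = R_c @ x_group + t_c
    (extrinsics), and x_group = R_g @ x_global + t_g (group-to-global
    Euclidean transform estimated from shared views of the pattern)."""
    return R_c @ R_g, R_c @ t_g + t_c
```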

Patent
02 Mar 2007
TL;DR: In this paper, a camera system may be used to capture iris images of targeted people who may be unaware of being targeted and hence their movement may not be constrained in any way.
Abstract: A camera system may be used to capture iris images of targeted people who may be unaware of being targeted and hence their movement may not be constrained in any way. Iris images may be used for identification and/or tracking of people. In one illustrative embodiment, a camera system may include a focus camera and an iris camera, where the focus camera is sensitive to ambient light or some spectrum thereof, and the iris camera is sensitive to infrared or some other wavelength light. The focus camera and the iris camera may share an optical lens, and the focus camera may be used to auto-focus the lens on a focus target. A beam splitter or other optical element may be used to direct light of some wavelengths to the focus camera for auto-focusing the lens, and other wavelengths to the iris camera for image capture of the iris images.

Proceedings ArticleDOI
26 Dec 2007
TL;DR: A method is shown for localizing the cameras in a camera network to recover the orientation and position (up to scale) of each camera, even when cameras are wide-baseline or have different photometric properties.
Abstract: Camera networks are being used in more applications as different types of sensor networks are used to instrument large spaces. Here we show a method for localizing the cameras in a camera network to recover the orientation and position up to scale of each camera, even when cameras are wide-baseline or have different photometric properties. Using moving objects in the scene, we use an intra-camera step and an inter-camera step in order to localize. The intra-camera step compares frames from a single camera to build the tracks of the objects in the image plane of the camera. The inter-camera step uses these object image tracks from each camera as features for correspondence between cameras. We demonstrate this idea on both simulated and real data.
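
Once the inter-camera step has matched object tracks across a pair of cameras, a plausible way to recover orientation and position up to scale is the standard essential-matrix pipeline sketched below (an assumption; the paper's own estimator may differ). Known intrinsics are assumed:

```python
import numpy as np
import cv2

def localize_pair(track1, track2, K1, K2):
    """Relative rotation and unit-norm translation of camera 2 w.r.t.
    camera 1, from time-synchronized image tracks of the same moving
    object (track1, track2: N x 2 pixel coordinates; K1, K2: 3x3
    intrinsic matrices)."""
    p1 = cv2.undistortPoints(np.float32(track1).reshape(-1, 1, 2), K1, None)
    p2 = cv2.undistortPoints(np.float32(track2).reshape(-1, 1, 2), K2, None)
    E, inliers = cv2.findEssentialMat(p1, p2, np.eye(3),
                                      method=cv2.RANSAC, threshold=1e-3)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, np.eye(3), mask=inliers)
    return R, t  # position recovered up to scale, as in the paper
```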

Proceedings ArticleDOI
05 Sep 2007
TL;DR: A calibration algorithm of two cameras using observations of a moving person is presented, along with a method to find the relative position and orientation of the two cameras: the rotation matrix and the translation vector which describe the rigid motion between the coordinate frames fixed in the two cameras.
Abstract: A calibration algorithm of two cameras using observations of a moving person is presented. Similar methods have been proposed for self-calibration with a single camera, but internal parameter estimation is limited to the focal length. Recently it has been demonstrated that assuming the principal point at the center of the image causes inaccuracy in all estimated parameters. Our method exploits two cameras, using image points of the head and foot locations of a moving person, to determine for both cameras the focal length and the principal point. Moreover, with the increasing number of cameras, there is a demand for procedures to determine their relative placements. In this paper we also describe a method to find the relative position and orientation of two cameras: the rotation matrix and the translation vector which describe the rigid motion between the coordinate frames fixed in the two cameras. Results in synthetic and real scenes are presented to evaluate the performance of the proposed method.

Proceedings ArticleDOI
13 Jun 2007
TL;DR: A multiband camera has been developed that can provide both color images and near-infrared images and a method for estimating the driver's visibility when using the camera is described in this paper.
Abstract: Various driver-assistance systems are currently being developed that make use of on-vehicle cameras. However, the imaging conditions and the methods used to detect objects are different for each system. Therefore, a special camera is often needed in order to satisfy the requirements of each system. A camera that can be shared by multiple systems will become essential when more systems are put to practical use in the future. Therefore, a multiband camera has been developed that can provide both color images and near-infrared images. The camera includes a special filter that improves on the Bayer filter arrays that are used in single-chip digital color cameras, and it can simultaneously obtain images covering four wavelength bands that have the same optical axes and fields of view. Moreover, a method for estimating the driver's visibility when using the camera is described in this paper.

Patent
30 Aug 2007
TL;DR: In this paper, a surveillance camera system includes a first camera 5 having an angle of view θ1, a second camera 6 which is a combination of two camera modules 6a, 6b each having an angle of view θ2, and a third camera 7 which is a combination of three camera modules 7a, 7b, 7c each having an angle of view θ3.
Abstract: A surveillance camera system includes a first camera 5 having an angle of view θ1, a second camera 6 which is a combination of two camera modules 6a, 6b each having an angle of view θ2, a third camera 7 which is a combination of three camera modules 7a, 7b, 7c each having an angle of view θ3, and a local camera 8 having an angle of view θs. The first to third cameras 5 to 7 act as area surveillance cameras to which the optimum shooting distance is set, respectively. The local camera 8 takes a shot of a local area, which is set in a shooting area of the third camera 7, at the narrowest angle of view θs. The respective cameras take a shot individually under automatic exposure control.

Patent
Haruo Kogane1
31 Jul 2007
TL;DR: In this paper, the camera determined so as to act as a sub camera converts the main camera direction information into sub camera direction information, based on a relative positional relation with the main camera, and controls its direction in accordance with the sub camera direction information.
Abstract: A camera control apparatus simultaneously transmits main camera direction information to all cameras. Then, the camera determined so as to act as a main camera in accordance with an address contained in the main camera direction information controls the direction thereof in accordance with the main camera direction information. The camera determined so as to act as a sub camera converts the main camera direction information into sub camera direction information based on a relative positional relation with the main camera and controls the direction thereof in accordance with the sub camera direction information. Thus, a subject to be imaged can be always caught and displayed on the monitor screen without causing a control delay. That is, a subject to be imaged can be automatically tracked without being lost from the monitor screen.

Patent
Kwang-Jun Kim1
12 Apr 2007
TL;DR: In this article, an apparatus and a method for aligning images obtained by a stereo camera apparatus are presented; the apparatus changes the range of the region displayed on a screen, among the image obtained by the non-reference camera, according to the position of the searched pixel block, so as to generate and output a stereoscopic image in which the screen display regions of the two cameras are aligned along the horizontal line.
Abstract: An apparatus and a method for aligning images obtained by a stereo camera apparatus are provided. The apparatus receives images from a first camera and a second camera and searches for the pixel block in the image obtained by one of the cameras (a non-reference camera) having the highest consistency ratio with a pixel block in a specific position of the image obtained by the other camera (a reference camera). Then, the apparatus changes the range of the region displayed on a screen, among the image obtained by the non-reference camera, according to the position of the searched pixel block, so as to generate and output a stereoscopic image in which the screen display regions of the two cameras are aligned along the horizontal line.
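
A minimal sketch of the block search, assuming a horizontal-only offset and sum-of-squared-differences as the (inverse) consistency measure; the patent does not specify the metric:

```python
import numpy as np

def best_offset(ref_img, other_img, y, x, h, w, search=32):
    """Horizontal offset of the block in the non-reference image that
    best matches the h x w block at (y, x) in the reference image
    (highest consistency = lowest SSD). Grayscale float arrays."""
    ref = ref_img[y:y + h, x:x + w]
    best_dx, best_ssd = 0, np.inf
    for dx in range(-search, search + 1):
        if x + dx < 0 or x + dx + w > other_img.shape[1]:
            continue
        cand = other_img[y:y + h, x + dx:x + dx + w]
        ssd = float(((ref - cand) ** 2).sum())
        if ssd < best_ssd:
            best_ssd, best_dx = ssd, dx
    return best_dx  # shift the non-reference display window accordingly
```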

Patent
29 May 2007
TL;DR: A camera and a method for controlling the camera are disclosed, along with a face recognition secure access camera and a method of capturing an image that is neither saturated nor too dark.
Abstract: A camera and a method for controlling the camera are disclosed. Also disclosed are a face recognition secure access camera and a method of capturing an image that is not saturated or too dark.

Patent
14 Aug 2007
TL;DR: In this paper, a system for controlling a cursor on a screen automatically and dynamically when using a video camera as a pointing device is presented, where a computer displays static or dynamic content to a screen.
Abstract: A system provides for controlling a cursor on a screen automatically and dynamically when using a video camera as a pointing device. A computer displays static or dynamic content to a screen. A video camera connected to the computer points at the screen. As the video camera films the screen, frames captured by the video camera are sent to the computer. A target image is displayed by the computer onto the screen and marks the position of the screen cursor of the video camera. Frames captured by the video camera include the target image, and the computer dynamically moves the target image on the screen to ensure that the target image stays in the center of the view of the video camera.
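
A minimal sketch of one iteration of the feedback loop, using template matching to find the on-screen target in the camera frame; the roughly unit camera-to-screen pixel mapping and the gain are assumptions (a real system would calibrate that mapping):

```python
import cv2

def cursor_step(frame, target_template, cursor_xy, gain=0.5):
    """Locate the target image in the camera frame, measure its offset
    from the frame center, and move the on-screen target (= cursor) so
    the target drifts back toward the center of the camera's view."""
    res = cv2.matchTemplate(frame, target_template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (tx, ty) = cv2.minMaxLoc(res)  # best-match top-left corner
    th, tw = target_template.shape[:2]
    dx = frame.shape[1] / 2 - (tx + tw / 2)  # target offset from center
    dy = frame.shape[0] / 2 - (ty + th / 2)
    return cursor_xy[0] + gain * dx, cursor_xy[1] + gain * dy
```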

Patent
04 Sep 2007
TL;DR: A video camera calibration system includes a video camera, having a fixed location and a variable viewing orientation with respect to a fixed object, and a video calibration target, integral with the fixed object and having a known position.
Abstract: A video camera calibration system includes a video camera, having a fixed location and a variable viewing orientation with respect to a fixed object, and a video calibration target, integral with the fixed object and having a known position. The viewing orientation of the video camera can be adjusted by aligning the position of the video calibration target in a video image produced by the video camera.

Patent
Kenneth McCormack1
21 Dec 2007
TL;DR: In this article, a video camera assembly includes a pan mechanism rotatable about a pan axis, a video camera mounted on the pan mechanism such that the camera is rotatable about the pan axis, and a controller communicatively coupled to the pan mechanism and the camera.
Abstract: Method and apparatus for a video camera assembly are provided. The video camera assembly includes a pan mechanism rotatable about a pan axis, a video camera mounted on the pan mechanism such that the video camera is rotatable about the pan axis, and a controller communicatively coupled to the pan mechanism and the video camera. The controller is configured to control the rotation of the video camera about the pan axis at a predetermined speed, acquire a plurality of images from the video camera at a predetermined rate, and display the acquired images panoramically.

Proceedings ArticleDOI
12 Nov 2007
TL;DR: This paper proposes a novel approach for source camera identification based on the camera gain histogram, using the photon transfer curve (PTC) as the camera noise model, and demonstrates that the distinction rate in identifying different cameras achieves promising performance.
Abstract: In this paper, we propose a novel approach for source camera identification based on camera gain histogram. By using the photon transfer curve (PTC) as camera noise model, we construct camera gain histogram from the occurrences of different camera gain constants. With the distribution of camera gain histogram for each camera, we extract four features to characterize the camera. In our experiments, 400 photos acquired from two high-end digital cameras at two different exposure levels are used to evaluate the effectiveness of the proposed approach. A two-class support vector machine (SVM) is employed as a classifier. Our experimental results demonstrate that the distinction rate in identifying different cameras achieves promising performance.
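
A minimal sketch of the classification stage, assuming per-photo camera-gain estimates are already available from the PTC analysis; the specific four features below are a plausible choice, not necessarily the paper's:

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC

def gain_features(gains):
    """Four summary features of one photo's estimated camera-gain
    distribution (mean, spread, asymmetry, peakedness)."""
    g = np.asarray(gains, dtype=float)
    return np.array([g.mean(), g.std(), skew(g), kurtosis(g)])

# Two-class SVM over photos from two cameras, mirroring the paper's setup:
# X = np.stack([gain_features(g) for g in per_photo_gains])
# clf = SVC(kernel="rbf").fit(X, labels)  # labels: 0 = camera A, 1 = camera B
```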

Proceedings ArticleDOI
27 Apr 2007
TL;DR: This paper describes True-Color Night Vision cameras that are sensitive to the visible to near-infrared (V-NIR) portion of the spectrum allowing for the "true-color" of scenes and objects to be displayed and recorded under low-light-level conditions.
Abstract: This paper describes True-Color Night Vision cameras that are sensitive to the visible to near-infrared (V-NIR) portion of the spectrum allowing for the “true-color” of scenes and objects to be displayed and recorded under low-light-level conditions. As compared to traditional monochrome (gray or green) night vision imagery, color imagery has increased information content and has proven to enable better situational awareness, faster response time, and more accurate target identification. Urban combat environments, where rapid situational awareness is vital, and marine operations, where there is inherent information in the color of markings and lights, are example applications that can benefit from True-Color Night Vision technology. Two different prototype cameras, employing two different true-color night vision technological approaches, are described and compared in this paper. One camera uses a fast-switching liquid crystal filter in front of a custom Gen-III image intensified camera, and the second camera is based around an EMCCD sensor with a mosaic filter applied directly to the sensor. In addition to visible light, both cameras utilize NIR to (1) increase the signal and (2) enable the viewing of laser aiming devices. The performance of the true-color cameras, along with the performance of standard (monochrome) night vision cameras, are reported and compared under various operating conditions in the lab and the field. In addition to subjective criteria, figures of merit designed specifically for the objective assessment of such cameras are used in this analysis.

Proceedings ArticleDOI
01 Sep 2007
TL;DR: A joint McGill - University of Victoria team is deploying a high definition video camera on the VENUS project node in the Saanich Inlet and will be using the experience gained to deploy similar cameras on the NEPTUNE project network.
Abstract: A joint McGill - University of Victoria team is deploying a high definition video camera on the VENUS project node in the Saanich Inlet and will be using the experience gained to deploy similar cameras on the NEPTUNE project network. Underwater HD camera selection is discussed for both scientific research and public outreach. Low light level capability and an extended lens zoom range enable the study of small invertebrate behaviour. A unique camera control user interface was designed that will allow scientists to see a large scale map of their sites of interest while the camera is zoomed in on one particular site. Camera control software challenges included designing a system that would simultaneously control the camera, two sets of lighting and the pan/tilt, which are separate devices, from a single user interface over a shared network. A camera support and control system was designed and deployed. The high definition camera and zoom lens are mounted in a pressure canister on a large support frame that has an integrated camera pan/tilt mechanism. There are two types of illumination and a set of lasers that project reference lines for gauging object size. The entire camera system is connected to the VENUS node using a wet-mate hybrid connector (copper and fiber optic). The paper also discusses the deployment of the camera and support structure in Saanich Inlet and the subsequent connection to the VENUS node.

Journal ArticleDOI
TL;DR: A method that combines a small semiconductor gamma camera with an optical camera to synthesize the two respective kinds of images and help surgeons easily identify sentinel lymph nodes in various cancer surgeries is developed.
Abstract: We have developed a method that combines a small semiconductor gamma camera with an optical camera to synthesize the two respective kinds of images and help surgeons easily identify sentinel lymph nodes in various cancer surgeries. The proposed method includes some key techniques such as distortion correction of the optical camera image, distance estimation between the camera head and the object surface using a laser and the optical camera, and perspective transformation of the gamma camera image to fuse with the optical camera image. The method, along with preliminary experimental results from a prototype setup, is presented here.

Patent
Dong-Hoon Lee1
13 Jul 2007
TL;DR: In this article, a mobile terminal including a plurality of cameras and a method of processing images acquired by the plurality of cameras are provided, wherein the active time period during which one camera provides the data image signal occurs during the inactive time period of the other camera or cameras of the plurality of cameras.
Abstract: A mobile terminal including a plurality of cameras and a method of processing images acquired in a plurality of cameras is provided. The image processing method includes simultaneously operating a plurality of cameras, outputting a synchronous signal during an inactive time period and a data image signal during an active time period, wherein the active time period during which one camera of the plurality of cameras provides the data image signal occurs during the inactive time period of the other camera or cameras of the plurality of cameras.

Patent
Koichi Washisu1
22 Oct 2007
TL;DR: In this paper, a camera which can obtain an image without image blur is disclosed; it obtains a synthesized image whose exposure has been corrected by synthesizing a plurality of images obtained through successive image-taking, and comprises: a detection unit which detects, with respect to a reference image, amounts of displacement of the other images; a coordinate conversion unit which applies coordinate conversion to the other images based on the detection results; and a synthesis unit which synthesizes the coordinate-converted images with the reference image.
Abstract: A camera which can obtain an image without image blur is disclosed. The camera obtains a synthesized image whose exposure has been corrected by synthesizing a plurality of images obtained through successive image-taking, and comprises: a detection unit which detects, with respect to a reference image, amounts of displacement of the other images; a coordinate conversion unit which applies coordinate conversion to the other images based on the results of detection of the detection unit; and a synthesis unit which synthesizes the other images subjected to coordinate conversion and the reference image.
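
A minimal sketch of the detect-convert-synthesize pipeline, simplified to pure translation estimated by phase correlation (the patent's coordinate conversion is more general):

```python
import numpy as np
import cv2

def synthesize(frames):
    """Exposure-corrected synthesis: estimate each frame's displacement
    relative to the first (reference) frame, undo it, and accumulate.
    frames: list of equally sized float32 grayscale images."""
    ref = np.float32(frames[0])
    acc = ref.copy()
    for f in frames[1:]:
        f = np.float32(f)
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)    # detected displacement
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])  # coordinate conversion
        acc += cv2.warpAffine(f, M, (ref.shape[1], ref.shape[0]))
    return acc  # summed frames give the corrected (longer) effective exposure
```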

Patent
22 May 2007
TL;DR: In this paper, a prior distribution of camera parameters for a family of cameras is estimated and used to obtain accurate calibration results for individual cameras of the camera family even where the calibration is carried out online, in an environment which is structure-poor.
Abstract: Online camera calibration methods have been proposed whereby calibration information is extracted from the images that the system captures during normal operation and is used to continually update system parameters. However, such existing methods do not cope well with structure-poor scenes having little texture and/or 3D structure such as in a home or office environment. By considering camera families (a set of cameras that are manufactured at least partially in a common manner) it is possible to provide calibration methods which are suitable for use with structure-poor scenes. A prior distribution of camera parameters for a family of cameras is estimated and used to obtain accurate calibration results for individual cameras of the camera family even where the calibration is carried out online, in an environment which is structure-poor.
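
A minimal sketch of how a camera-family prior can stabilize online calibration in structure-poor scenes, reduced to a scalar Gaussian MAP update (an illustrative simplification; the method itself estimates a full prior distribution over camera parameters):

```python
import numpy as np

def map_estimate(prior_mean, prior_var, estimates, est_var):
    """Fuse a family prior on one parameter (e.g. focal length) with n
    noisy online estimates. With little image evidence the posterior
    stays near the family prior, as desired in structure-poor scenes."""
    x = np.asarray(estimates, dtype=float)
    post_var = 1.0 / (1.0 / prior_var + x.size / est_var)
    post_mean = post_var * (prior_mean / prior_var + x.sum() / est_var)
    return post_mean, post_var
```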

Proceedings ArticleDOI
TL;DR: In this paper, a generalized rule-of-thumb for hand-held exposure is proposed, with the original 35mm film rule as a special case in which camera-motion can be approximated by linear motion at 1.667°/sec.
Abstract: Due to the demanding size and cost constraints of camera phones, the mobile imaging industry needs to address several key challenges in order to achieve the quality of a digital still camera. Minimizing camera-motion introduced image blur is one of them. Film photographers have long used a rule-of-thumb that a hand-held 35mm format film camera should have an exposure in seconds that is not longer than the inverse of the focal length in millimeters. Due to the lack of scientific studies on camera-motion, it is still an open question how to generalize this rule-of-thumb to digital still cameras as well as camera phones. In this paper, we first propose a generalized rule-of-thumb with the original rule-of-thumb as a special case when camera-motion can be approximated by a linear motion at 1.667 °/sec. We then use a gyroscope-based system to measure camera-motion patterns for two camera phones (one held with one hand and the other held in two hands) and one digital still camera. The results show that the effective camera-motion function can be approximated very well by a linear function for exposure durations less than 100 ms. While the effective camera-motion speed for camera phones (5.95 °/sec and 4.39 °/sec respectively) is significantly higher than that of digital still cameras (2.18 °/sec), it was found that holding a camera phone with two hands while taking pictures does reduce the amount of camera motion. It was also found that camera-motion not only varies significantly across subjects but also across captures for the same subject. Since camera phones have significantly higher motion and longer exposure durations than 35mm format film cameras and most digital still cameras, it is expected that many of the pictures taken by camera phones today will not meet the sharpness criteria used in 35mm film print. The mobile imaging industry is aggressively pursuing a smaller and smaller pixel size in order to meet the digital still camera's performance in terms of total pixels while retaining the small size needed for the mobile industry. This makes it increasingly important to address the camera-motion challenge associated with smaller pixel size.
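
A worked sketch of the generalized rule-of-thumb: with camera motion approximated as linear rotation, blur in pixels is roughly angular speed x focal length x exposure / pixel pitch, so the longest "sharp" exposure scales inversely with both motion speed and focal length. Parameter names are hypothetical:

```python
import numpy as np

def max_sharp_exposure(motion_deg_per_s, focal_mm, pixel_um, blur_px=1.0):
    """Longest exposure (s) keeping camera-shake blur under blur_px
    pixels, for rotational motion approximated as linear (the paper
    finds this approximation holds well for exposures under 100 ms)."""
    omega = np.deg2rad(motion_deg_per_s)              # rad/s
    blur_rate = omega * (focal_mm * 1e3) / pixel_um   # pixels/s on sensor
    return blur_px / blur_rate

# Sanity check against the 35mm film rule: a 50 mm lens, ~30 um of
# acceptable blur on film, and 1.667 deg/s motion give about 1/48 s,
# close to the classic 1/50 s recommendation.
print(max_sharp_exposure(1.667, focal_mm=50, pixel_um=30))  # ~0.021
```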