
Showing papers on "Smart camera published in 2006"


Journal ArticleDOI
TL;DR: This work designed the smart camera as a fully embedded system, focusing on power consumption, QoS management, and limited resources, and combined several smart cameras to form a distributed embedded surveillance system that supports cooperation and communication among cameras.
Abstract: Recent advances in computing, communication, and sensor technology are pushing the development of many new applications. This trend is especially evident in pervasive computing, sensor networks, and embedded systems. Smart cameras, one example of this innovation, are equipped with a high-performance onboard computing and communication infrastructure, combining video sensing, processing, and communications in a single embedded device. By providing access to many views through cooperation among individual cameras, networks of embedded cameras can potentially support more complex and challenging applications - including smart rooms, surveillance, tracking, and motion analysis - than a single camera. We designed our smart camera as a fully embedded system, focusing on power consumption, QoS management, and limited resources. The camera is a scalable, embedded, high-performance, multiprocessor platform consisting of a network processor and a variable number of digital signal processors (DSPs). Using the implemented software framework, our embedded cameras offer system-level services such as dynamic load distribution and task reconfiguration. In addition, we combined several smart cameras to form a distributed embedded surveillance system that supports cooperation and communication among cameras.

302 citations
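
The system-level services mentioned above include dynamic load distribution across the DSPs. As a minimal sketch of the idea, assuming each task carries a load estimate (the task names and the greedy policy are illustrative, not the paper's actual framework):

```python
# Hedged sketch: greedy load balancing of vision tasks across DSPs.
# Task names and load estimates are illustrative assumptions.
import heapq

def distribute_tasks(tasks, num_dsps):
    """Assign (name, load) tasks, heaviest first, to the least-loaded DSP."""
    dsps = [(0.0, i) for i in range(num_dsps)]   # (current load, DSP index)
    heapq.heapify(dsps)
    assignment = {}
    for name, load in sorted(tasks, key=lambda t: -t[1]):
        current, idx = heapq.heappop(dsps)       # least-loaded DSP
        assignment[name] = idx
        heapq.heappush(dsps, (current + load, idx))
    return assignment

print(distribute_tasks([("motion", 0.6), ("encode", 0.8), ("track", 0.4)], 2))
# -> {'encode': 0, 'motion': 1, 'track': 1}
```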


Proceedings ArticleDOI
27 Mar 2006
TL;DR: The system developed and described in this paper achieves these goals using a single high-resolution camera with a fixed field of view; because the system has no moving parts, the eye is rapidly reacquired after loss of tracking.
Abstract: Eye-gaze as a form of human machine interface holds great promise for improving the way we interact with machines. Eye-gaze tracking devices that are non-contact, non-restrictive, accurate and easy to use will increase the appeal for including eye-gaze information in future applications. The system we have developed and which we describe in this paper achieves these goals using a single high resolution camera with a fixed field of view. The single camera system has no moving parts which results in rapid reacquisition of the eye after loss of tracking. Free head motion is achieved using multiple glints and 3D modeling techniques. Accuracies of under 1° of visual angle are achieved over a field of view of 14x12x20 cm and over various hardware configurations, camera resolutions and frame rates.

235 citations
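
The paper's method combines multiple glints with 3D modeling; as a much simpler illustration of glint-based gaze estimation, the classic pupil-center/corneal-reflection approach maps the pupil-glint vector to screen coordinates through a calibrated polynomial. A sketch, with all names and data synthetic:

```python
# Hedged sketch of polynomial pupil-glint gaze mapping (a simpler technique
# than the paper's multi-glint 3D model); all data here is synthetic.
import numpy as np

def design(vx, vy):
    # Second-order polynomial terms of the pupil-glint vector.
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def calibrate(pg_vectors, screen_points):
    """Least-squares fit from pupil-glint vectors to fixated screen points."""
    A = design(pg_vectors[:, 0], pg_vectors[:, 1])
    cx = np.linalg.lstsq(A, screen_points[:, 0], rcond=None)[0]
    cy = np.linalg.lstsq(A, screen_points[:, 1], rcond=None)[0]
    return cx, cy

def gaze(cx, cy, v):
    A = design(np.array([v[0]]), np.array([v[1]]))
    return (A @ cx).item(), (A @ cy).item()

# Synthetic 9-point calibration with a linear ground-truth mapping:
pg = np.array([[x, y] for x in (-1, 0, 1) for y in (-1, 0, 1)], float)
scr = 100 * pg + 500
cx, cy = calibrate(pg, scr)
print(gaze(cx, cy, (0.5, -0.5)))   # ~ (550.0, 450.0)
```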


Journal ArticleDOI
TL;DR: The general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real world cameras, and a solution to this problem is obtained via binary optimization over a discrete problem space.

229 citations
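
The binary optimization over a discrete problem space can be approximated greedily: repeatedly choose the candidate camera pose that covers the most still-uncovered targets. A hedged sketch, with the data structures assumed rather than taken from the paper:

```python
# Hedged sketch: greedy max-coverage stand-in for the paper's binary
# optimization; candidate poses and target sets are illustrative.
def place_cameras(candidates, targets, budget):
    """candidates maps a camera pose to the set of target ids it covers."""
    remaining = set(targets)
    chosen = []
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(candidates[c] & remaining),
                   default=None)
        if best is None or not (candidates[best] & remaining):
            break                      # nothing left to gain
        chosen.append(best)
        remaining -= candidates[best]
    return chosen, remaining           # poses picked, targets left uncovered

cams = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
print(place_cameras(cams, targets={1, 2, 3, 4, 5, 6}, budget=2))
# -> (['A', 'C'], set())
```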


Patent
24 Mar 2006
TL;DR: In this article, a system for creating video from multiple sources is described that utilizes intelligence to designate the most relevant sources, facilitating their adjacent display and/or the catenation of their video streams.
Abstract: Methods and systems for creating video from multiple sources utilize intelligence to designate the most relevant sources, facilitating their adjacent display and/or catenation of their video streams.

168 citations


Patent
06 Oct 2006
TL;DR: In this paper, a smart surveillance camera stores a map corresponding to an area subject to monitoring and provides at least one image detected by the camera along with at least a portion of the stored map in conjunction with that image.
Abstract: Methods and apparatus related to smart surveillance camera operation and implementation are described. A smart surveillance camera stores a map corresponding to an area subject to monitoring. The camera provides at least one image detected by the camera, and at least a portion of the stored map in conjunction with that image, e.g., to a wireless communications device of an emergency responder in its local vicinity. The current camera position and/or viewing controls, such as camera angle setting and/or zoom, are sometimes used to determine the portion of the overlay map to be communicated in conjunction with a video stream. Externally detectable trigger events, e.g., from a 911 call or from a gunshot audio detector, and/or internally detectable trigger events, e.g., a detected mob in the camera viewing area, are sometimes used to initiate transmission of a video stream and corresponding map overlay.

164 citations
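
One way the pan/zoom-to-map computation described above could work, sketched under assumptions that are not the patent's: derive the ground-plane viewing wedge from the camera position, pan angle, and a zoom-narrowed field of view, then send the map region around that wedge.

```python
# Hedged sketch: viewing wedge on a 2D map from pan angle and zoom.
# The linear zoom-to-FOV model and all parameters are assumptions.
import math

def fov_polygon(cam_xy, pan_deg, zoom, base_fov_deg=60.0, view_range=100.0,
                steps=8):
    """Polygon approximating the camera's ground-plane viewing wedge; the
    map portion to transmit could be this polygon's bounding box."""
    half = math.radians(base_fov_deg / zoom / 2)   # zooming in narrows the FOV
    pan = math.radians(pan_deg)
    pts = [cam_xy]
    for i in range(steps + 1):
        a = pan - half + (2 * half) * i / steps
        pts.append((cam_xy[0] + view_range * math.cos(a),
                    cam_xy[1] + view_range * math.sin(a)))
    return pts

print(fov_polygon((10.0, 20.0), pan_deg=45.0, zoom=2.0))
```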


Proceedings ArticleDOI
15 Oct 2006
TL;DR: This paper presents TinyMotion, a pure software approach for detecting a mobile phone user's hand movement in real time by analyzing image sequences captured by the built-in camera, and the design and implementation of TinyMotion and several interactive applications based on TinyMotion.
Abstract: This paper presents TinyMotion, a pure software approach for detecting a mobile phone user's hand movement in real time by analyzing image sequences captured by the built-in camera. We present the design and implementation of TinyMotion and several interactive applications based on TinyMotion. Through both an informal evaluation and a formal 17-participant user study, we found that 1. TinyMotion can detect camera movement reliably under most background and illumination conditions. 2. Target acquisition tasks based on TinyMotion follow Fitts' law, and Fitts' law parameters can be used for TinyMotion-based pointing performance measurement. 3. Users can use Vision TiltText, a TinyMotion-enabled input method, to enter sentences faster than MultiTap after a few minutes of practice. 4. Using a camera phone as a handwriting capture device and performing large-vocabulary, multilingual, real-time handwriting recognition on the cell phone are feasible. 5. TinyMotion-based gaming is enjoyable and immediately available for the current generation of camera phones. We also report user experiences and problems with TinyMotion-based interaction as resources for future design and development of mobile interfaces.

161 citations
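
TinyMotion works by analyzing successive frames from the built-in camera. A standard way to recover the dominant translation between two frames, shown here as an illustration rather than the authors' exact algorithm, is sum-of-absolute-differences block matching:

```python
# Hedged sketch: full-search SAD block matching; block and search sizes
# are illustrative, not TinyMotion's actual parameters.
import numpy as np

def estimate_motion(prev, curr, block=16, search=4):
    """Return the (dx, dy) shift most blocks agree on between two frames."""
    h, w = prev.shape
    votes = {}
    for y in range(search, h - block - search, block):
        for x in range(search, w - block - search, block):
            ref = prev[y:y + block, x:x + block].astype(np.int32)
            best, best_d = None, None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr[y + dy:y + dy + block,
                                x + dx:x + dx + block].astype(np.int32)
                    d = np.abs(ref - cand).sum()  # sum of absolute differences
                    if best_d is None or d < best_d:
                        best_d, best = d, (dx, dy)
            votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)

rng = np.random.default_rng(1)
f0 = rng.integers(0, 256, (48, 64), dtype=np.uint8)
f1 = np.roll(f0, (2, 3), axis=(0, 1))   # simulate camera motion
print(estimate_motion(f0, f1))          # -> (3, 2)
```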


BookDOI
01 Oct 2006
TL;DR: This handbook covers the machine vision chain from information processing in the human visual system through lighting, optics, camera calibration, camera systems, and computer interfaces, to machine vision algorithms and machine vision in manufacturing.
Abstract: The handbook comprises the following chapters:
1. Processing of Information in the Human Visual System (Prof. Dr. F. Schaeffel, University of Tubingen): 1.1 Preface. 1.2 Design and Structure of the Eye. 1.3 Optical Aberrations and Consequences for Visual Performance. 1.4 Chromatic Aberration. 1.5 Neural Adaptation to Monochromatic Aberrations. 1.6 Optimizing Retinal Processing with Limited Cell Numbers, Space and Energy. 1.7 Adaptation to Different Light Levels. 1.8 Rod and Cone Responses. 1.9 Spiking and Coding. 1.10 Temporal and Spatial Performance. 1.11 ON/OFF Structure, Division of the Whole Illuminance Amplitude in Two Segments. 1.12 Consequences of the Rod and Cone Diversity on Retinal Wiring. 1.13 Motion Sensitivity in the Retina. 1.14 Visual Information Processing in Higher Centers. 1.15 Effects of Attention. 1.16 Color Vision, Color Constancy, and Color Contrast. 1.17 Depth Perception. 1.18 Adaptation in the Visual System to Color, Spatial, and Temporal Contrast. 1.19 Conclusions. References.
2. Introduction to Building a Machine Vision Inspection (Axel Telljohann, Consulting Team Machine Vision (CTMV)): 2.1 Preface. 2.2 Specifying a Machine Vision System. 2.3 Designing a Machine Vision System. 2.4 Costs. 2.5 Words on Project Realization. 2.6 Examples.
3. Lighting in Machine Vision (I. Jahr, Vision & Control GmbH): 3.1 Introduction. 3.2 Demands on Machine Vision Lighting. 3.3 Light Used in Machine Vision. 3.4 Interaction of Test Object and Light. 3.5 Basic Rules and Laws of Light Distribution. 3.6 Light Filters. 3.7 Lighting Techniques and Their Use. 3.8 Lighting Control. 3.9 Lighting Perspectives for the Future. References.
4. Optical Systems in Machine Vision (Dr. Karl Lenhardt, Jos. Schneider Optische Werke GmbH): 4.1 A Look on the Foundations of Geometrical Optics. 4.2 Gaussian Optics. 4.3 The Wave Nature of Light. 4.4 Information Theoretical Treatment of Image Transfer and Storage. 4.5 Criteria for Image Quality. 4.6 Practical Aspects. References.
5. Camera Calibration (R. Godding, AICON 3D Systems GmbH): 5.1 Introduction. 5.2 Terminology. 5.3 Physical Effects. 5.4 Mathematical Calibration Model. 5.5 Calibration and Orientation Techniques. 5.6 Verification of Calibration Results. 5.7 Applications. References.
6. Camera Systems in Machine Vision (Horst Mattfeldt, Allied Vision Technologies GmbH): 6.1 Camera Technology. 6.2 Sensor Technologies. 6.3 CCD Image Artifacts. 6.4 CMOS Image Sensor. 6.5 Block Diagrams and Their Description. 6.6 Digital Cameras. 6.7 Controlling Image Capture. 6.8 Configuration of the Camera. 6.9 Camera Noise. 6.10 Digital Interfaces. References.
7. Camera Computer Interfaces (Tony Iglesias, Anita Salmon, Johann Scholtz, Robert Hedegore, Julianna Borgendale, Brent Runnels, Nathan McKimpson, National Instruments): 7.1 Overview. 7.2 Analog Camera Buses. 7.3 Parallel Digital Camera Buses. 7.4 Standard PC Buses. 7.5 Choosing a Camera Bus. 7.6 Computer Buses. 7.7 Choosing a Computer Bus. 7.8 Driver Software. 7.9 Features of a Machine Vision System.
8. Machine Vision Algorithms (Dr. Carsten Steger, MVTec Software GmbH): 8.1 Fundamental Data Structures. 8.2 Image Enhancement. 8.3 Geometric Transformations. 8.4 Image Segmentation. 8.5 Feature Extraction. 8.6 Morphology. 8.7 Edge Extraction. 8.8 Segmentation and Fitting of Geometric Primitives. 8.9 Template Matching. 8.10 Stereo Reconstruction. 8.11 Optical Character Recognition. References.
9. Machine Vision in Manufacturing (Dr.-Ing. Peter Waszkewitz, Robert Bosch GmbH): 9.1 Introduction. 9.2 Application Categories. 9.3 System Categories. 9.4 Integration and Interfaces. 9.5 Mechanical Interfaces. 9.6 Electrical Interfaces. 9.7 Information Interfaces. 9.8 Temporal Interfaces. 9.9 Human-Machine Interfaces. 9.10 Industrial Case Studies. 9.11 Constraints and Conditions. References.
Index.

154 citations


Journal ArticleDOI
TL;DR: It is demonstrated that a high rate of accuracy can be achieved in source camera identification by noting the intrinsic lens radial distortion of each camera.
Abstract: Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured at various optical zoom levels, as zooming is commonly available in digital cameras.

134 citations
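
The classification pipeline the paper describes, per-image lens distortion features feeding a support vector machine, can be sketched with scikit-learn; the synthetic feature vectors below merely stand in for real per-image radial distortion estimates (for example k1, k2 coefficients), whose extraction is omitted:

```python
# Hedged sketch: SVM over (synthetic) radial distortion features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Three "cameras", each with its own characteristic distortion signature.
X = np.vstack([rng.normal(loc=m, scale=0.02, size=(40, 2))
               for m in [(-0.12, 0.01), (-0.05, 0.03), (0.02, -0.01)]])
y = np.repeat([0, 1, 2], 40)

clf = SVC(kernel="rbf", gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())   # identification accuracy
```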


Proceedings ArticleDOI
19 Apr 2006
TL;DR: A fully distributed approach for camera network calibration that scales easily to very large camera networks and requires minimal overlap of the cameras' fields of view and makes very few assumptions about the motion of the object.
Abstract: Camera networks are perhaps the most common type of sensor network and are deployed in a variety of real-world applications including surveillance, intelligent environments and scientific remote monitoring. A key problem in deploying a network of cameras is calibration, i.e., determining the location and orientation of each sensor so that observations in an image can be mapped to locations in the real world. This paper proposes a fully distributed approach for camera network calibration. The cameras collaborate to track an object that moves through the environment and reason probabilistically about which camera poses are consistent with the observed images. This reasoning employs sophisticated techniques for handling the difficult nonlinearities imposed by projective transformations, as well as the dense correlations that arise between distant cameras. Our method requires minimal overlap of the cameras' fields of view and makes very few assumptions about the motion of the object. In contrast to existing approaches, which are centralized, our distributed algorithm scales easily to very large camera networks. We evaluate the system on a real camera network with 25 nodes as well as simulated camera networks of up to 50 cameras and demonstrate that our approach performs well even when communication is lossy.

121 citations


Patent
Tetsuya Hashimoto1, Hiroki Fukuoka1
21 Mar 2006
TL;DR: In this article, the data terminal ready (DTR) signal of an RS-232 connection to an external device is monitored to determine whether the external device is properly connected and in a state which permits communication; once a proper connection is detected, the camera can either transmit or receive images and/or audio from the external device.
Abstract: An electronic camera and method of operating an electronic camera which detects whether an external device such as a personal computer is properly connected to the camera and in a state which permits communication. The camera monitors a data terminal ready (DTR) signal of an RS-232 connection in order to determine that the external device is properly connected and in a state which permits communication. Once the proper connection is detected, the camera can either transmit or receive images and/or audio from the external device. Accordingly, a specific switch which places the camera in a communication mode can be eliminated. Further, a single switch may be utilized for both controlling whether the camera records or plays images when there is no device connected, and which controls whether the camera transmits or receives images and/or audio when an external device is determined to be connected.

94 citations


Proceedings ArticleDOI
17 Jun 2006
TL;DR: This work presents a system consisting of a distributed network of cameras that allows for tracking and handover of multiple persons in real time and discusses the benefits of such a distributed surveillance network compared to a host centralized approach.
Abstract: The demand for surveillance systems has increased dramatically in recent times. We present a system consisting of a distributed network of cameras that allows for tracking and handover of multiple persons in real time. The intercamera tracking results are embedded as live textures in an integrated 3D world model which is available ubiquitously and can be viewed from arbitrary perspectives independent of the persons' movements. We mainly concentrate on our implementation of embedded camera nodes in the form of smart cameras and discuss the benefits of such a distributed surveillance network compared to a host-centralized approach. We also briefly describe our way of hassle-free 3D model acquisition to cover the complete system from setup to operation, and finally show some results of both an indoor and an outdoor system in operation.

Patent
30 Jun 2006
TL;DR: In this article, a mobile imaging device having memory, a video camera system and a still camera system is shown to be able to process motion detection output of the video camera to form motion correction input for the still camera.
Abstract: Disclosed are devices including a mobile imaging device having memory, a video camera system and a still camera system. The video camera system can be configured for video imaging, and configured to generate motion detection output. An application can be stored in memory of the device and configured to process the motion detection output of the video camera system to form motion correction input for the still camera system. The still camera system is configured for still photography imaging and configured to process the motion correction input. Also disclosed are methods of a mobile imaging device including a still camera system and a video camera system. A method includes processing the sequential image data of the video camera system to generate motion detection output, processing the motion detection output to form motion correction input and processing still image correction by the still camera system based on the motion correction input.

Journal ArticleDOI
TL;DR: This work forms the multi-camera control strategy as an online scheduling problem and proposes a solution that combines the information gathered by the wide-FOV cameras with weighted round-robin scheduling to guide the available PTZ cameras, such that each pedestrian is observed by at least one PTZ camera while in the designated area.
Abstract: We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures high-resolution videos of pedestrians as they move through a designated area. A wide-FOV static camera can track multiple pedestrians, while any PTZ active camera can capture high-quality videos of one pedestrian at a time. We formulate the multi-camera control strategy as an online scheduling problem and propose a solution that combines the information gathered by the wide-FOV cameras with weighted round-robin scheduling to guide the available PTZ cameras, such that each pedestrian is observed by at least one PTZ camera while in the designated area. A centerpiece of our work is the development and testing of experimental surveillance systems within a visually and behaviorally realistic virtual environment simulator. The simulator is valuable as our research would be more or less infeasible in the real world given the impediments to deploying and experimenting with appropriately complex camera sensor networks in large public spaces. In particular, we demonstrate our surveillance system in a virtual train station environment populated by autonomous, lifelike virtual pedestrians, wherein easily reconfigurable virtual cameras generate synthetic video feeds. The video streams emulate those generated by real surveillance cameras monitoring richly populated public spaces.
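
The weighted round-robin idea can be sketched as below; the weights, the time quantum, and the camera-assignment rule are illustrative assumptions, not the paper's exact scheduler:

```python
# Hedged sketch: one cycle of weighted round-robin PTZ scheduling.
def weighted_round_robin(pedestrians, num_ptz, quantum=1.0):
    """A pedestrian with weight w gets w * quantum seconds of observation;
    pedestrians are spread across the available PTZ cameras."""
    schedule = {cam: [] for cam in range(num_ptz)}
    next_free = [0.0] * num_ptz
    for i, p in enumerate(sorted(pedestrians, key=lambda p: -p["weight"])):
        cam = i % num_ptz
        burst = quantum * p["weight"]
        schedule[cam].append((next_free[cam], next_free[cam] + burst, p["id"]))
        next_free[cam] += burst
    return schedule

peds = [{"id": "p1", "weight": 3}, {"id": "p2", "weight": 1},
        {"id": "p3", "weight": 2}]
print(weighted_round_robin(peds, num_ptz=2))
```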

Journal ArticleDOI
TL;DR: In this article, the authors examine issues of localization, exploration, and planning in the context of a hybrid robot/camera-network system using fiducial markers embedded in the robot and selecting robot trajectories in front of each camera that provide good field-of-view accuracy.

Proceedings ArticleDOI
05 Oct 2006
TL;DR: This paper examines node localization and camera calibration using the shared field of view of camera pairs using a new distributed camera sensor network and proposes an algorithm that combines a sparse set of distance measurements with image information to accurately localize nodes in 3D.
Abstract: Camera sensors constitute an information-rich sensing modality with many potential applications in sensor networks. Their effectiveness in a sensor network setting, however, greatly relies on their ability to calibrate with respect to each other and other sensors in the field. This paper examines node localization and camera calibration using the shared field of view of camera pairs. Using a new distributed camera sensor network we compare two approaches from computer vision and propose an algorithm that combines a sparse set of distance measurements with image information to accurately localize nodes in 3D. Our algorithms are evaluated using a network of iMote2 nodes equipped with COTS camera modules. The sensor nodes identify themselves to cameras using modulated LED emissions. Our indoor experiments yielded a 2-7 cm error in a 6×6 m room. Our outdoor experiments in a 30×30 m field resulted in errors of 20-80 cm, depending on the method used.
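
The geometric core, recovering a 3D node position from a sparse set of distance measurements, can be written as a closed-form least-squares multilateration; this is the generic formulation rather than necessarily the paper's algorithm:

```python
# Hedged sketch: linearized least-squares multilateration in 3D.
import numpy as np

def multilaterate(anchors, dists):
    """Position from >= 4 anchor distances, linearizing the range equations
    |x - a_i|^2 = d_i^2 against the first anchor."""
    a = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2 * (a[1:] - a[0])
    b = (np.sum(a[1:]**2, axis=1) - np.sum(a[0]**2)) - (d[1:]**2 - d[0]**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = [(0, 0, 0), (6, 0, 0), (0, 6, 0), (0, 0, 3)]
true_p = np.array([2.0, 3.0, 1.0])
dists = [np.linalg.norm(true_p - np.array(a)) for a in anchors]
print(multilaterate(anchors, dists))   # ~ [2. 3. 1.]
```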

Patent
09 Feb 2006
TL;DR: In this article, a communication system includes a portable phone and a digital still camera, and the user selects an image for transmission, and data indicating a frame number of the selected image is sent from the portable phone to the digital camera.
Abstract: A communication system includes a portable phone and a digital still camera. The user selects an image for transmission, and data indicating a frame number of the selected image is sent from the portable phone to the digital still camera. The portable phone sends a re-size instruction to the digital still camera. The digital still camera re-sizes the image data to reduce a data quantity thereof. The re-sized image data is sent from the digital still camera to the portable phone. The image data is sent from the portable phone via a network to a partner communication system. The image data has a reduced data quantity through the re-sizing and hence can be completely transmitted thereto in a relatively short period of time.

Patent
21 Sep 2006
TL;DR: A pointing and identification device (PID) as discussed by the authors allows the user to point at objects in the real world, on television or movie screens, or otherwise not on the computer screen.
Abstract: A pointing and identification device (PID) allows the user to point at objects in the real world, on television or movie screens, or otherwise not on the computer screen. The PID includes a digital camera and one or both of a laser and a reticle for aiming the digital camera. An image taken with the digital camera is transmitted to a computer or the like.

Patent
02 Nov 2006
TL;DR: In this article, a system, method, and device for remotely controlling the camera functions of a mobile camera phone are disclosed, where a camera application is coupled with Hie Bluetooth module (210) for establishing a master/slave relationship.
Abstract: A system, method, and device for remotely controlling the camera functions of a mobile camera phone are disclosed. The mobile camera phone and another device are communicable with one another using the Bluetooth™ protocol. Each includes a Bluetooth module (210) that establishes a wireless connection with one another (310) such that data and instructions can be exchanged. The other device receives and displays data representative of the mobile camera phone's viewfmder (350). A camera application is coupled with Hie Bluetooth module (210) for establishing a master/slave relationship (320) with a corresponding camera application (220) in the mobile camera phone. The other device sends camera control commands to the mobile camera phone wherein the camera control commands can manipulate camera settings (360), take a picture (380), and dispose of the resultant picture (390). The other device further includes a user interface coupled with the display (140) that provides a means of inputting data to manipulate the camera application (220).

Patent
21 Dec 2006
TL;DR: In this paper, a wireless camera (102) is configured to removably connect to a complementary camera (101) and includes machine-readable software instructions configured to communicate with the complementary camera to determine a designated role (e.g., 'master' or 'slave').
Abstract: A wireless camera (102) is configured to removably connect to a complementary camera (101) and includes machine-readable software instructions configured to communicate with the complementary camera to determine a designated role (e.g., 'master' or 'slave'). The software instructions are capable of performing, when the designated role is 'master,' the steps of: capturing a first video frame; instructing the complementary camera (101) to capture a second video frame; receiving the second video frame from the complementary camera (101); combining the first frame and the second frame to create a combined frame associated with a stereoscopic image; and wirelessly transmitting the combined frame to a mobile device (140). When the designated role is 'slave,' the wireless camera (102) is capable of performing the steps of: capturing a first video frame and sending the first video frame to the complementary camera (101).

Patent
13 Apr 2006
TL;DR: In this article, a miniature camera robot which can be placed entirely within an open space such as an abdominal cavity is presented, with pan and tilt capabilities, an adjustable focus camera, and a support means for supporting the robot body.
Abstract: The present invention is a miniature camera robot which can be placed entirely within an open space such as an abdominal cavity. The instant camera robot has pan and tilt capabilities, an adjustable focus camera, and a support means for supporting the robot body. In particular embodiments, the camera robot further contains a light source for illumination and a handle to position the camera robot. A system and method for using the instant camera robot are also provided.

Book ChapterDOI
01 Jan 2006
TL;DR: Electronic Perception Technology, an advanced range camera module based on measuring the time delay of modulated infrared light from an active emitter, using a single detector chip fabricated on standard CMOS process is presented.
Abstract: A variety of safety-enhancing automobile features can be enabled by microsystems that can sense and analyze the dynamic 3D environment inside and outside the vehicle. It is desirable to directly sense the 3D shape of the scene, since the appearance of objects in a 2D image is confounded by illumination conditions, surface materials, and object orientation. To overcome the disadvantages of 3D sensing methods such as stereovision, radar, ultrasound, or scanning LADAR, we present Electronic Perception Technology, an advanced range camera module based on measuring the time delay of modulated infrared light from an active emitter, using a single detector chip fabricated on standard CMOS process. This paper overviews several safety applications and their sensor performance requirements, describes the principles of operation of the range camera, and characterizes its performance as configured for airbag deployment occupant sensing and backup obstacle warning applications.
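
The measurement principle reduces to converting the phase shift of the reflected modulated infrared signal into range via d = c·Δφ/(4πf), with an unambiguous range of c/(2f). A minimal sketch:

```python
# Hedged sketch of the standard phase-shift time-of-flight relation;
# the modulation frequency is an illustrative value.
import math

C = 299_792_458.0   # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Range from the phase delay of the modulation envelope."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# At 20 MHz modulation the unambiguous range is c / (2f) ~ 7.5 m:
print(tof_distance(math.pi / 2, 20e6))   # ~ 1.87 m
```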

Patent
16 Aug 2006
TL;DR: In this paper, a teleconferencing system includes a camera system for imaging a plurality of persons, a voice collector for capturing voices generated by a plurality, and a transmitter for multiplexing an image signal acquired from the camera system and a voice signal obtained from the voice collector and transmitting a multiplexed signal via a communication line.
Abstract: A teleconferencing system includes: a camera system for imaging a plurality of persons; a voice collector for capturing voices generated by a plurality of persons; and a transmitter for multiplexing an image signal acquired from the camera system and a voice signal acquired from the voice collector and transmitting a multiplexed signal via a communication line. The camera system includes: a camera; a driver for changing the viewing direction of the camera; and a camera controller for controlling the driver. The camera controller includes: a face position detection unit; a registration unit; a timing unit; a drive control unit; and a hold time control unit.

Proceedings ArticleDOI
10 May 2006
TL;DR: A system named Photo-to-Search is designed and implemented to carry out queries from camera phones simply by taking photos of objects of interest; captured photos are matched against Web images that contain the same prominent object.
Abstract: With the pervasive use of camera phones, the embedded camera has been considered a promising human-computer interaction modality for mobile devices. With the necessary technologies, it can become a powerful tool for acquiring information in daily life. We have designed and implemented a system named Photo-to-Search to carry out queries from camera phones simply by taking photos of objects of interest. The captured pictures are compared with a large number of Web images to select the ones which contain the same prominent object. The related information is then extracted from the Web pages where the matched images are located. In our demo, data on large buildings, storefronts, and products are collected, and these kinds of queries are demonstrated to show the efficiency and effectiveness of our system.
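
Comparing a captured photo against Web images is typically done with local feature matching; the sketch below uses OpenCV's ORB descriptors and a ratio test as a modern stand-in for whatever matching the original system used, and the file paths are placeholders:

```python
# Hedged sketch: rank Web images by ratio-test ORB matches to the query.
import cv2

def match_score(query_path, candidate_path, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=1000)
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
    _, d1 = orb.detectAndCompute(img1, None)
    _, d2 = orb.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return 0
    pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(d1, d2, k=2)
    good = [p for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

# The candidate image with the highest score is taken to contain the same
# prominent object as the query photo:
# best = max(candidates, key=lambda path: match_score("query.jpg", path))
```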

Patent
20 Dec 2006
TL;DR: In this article, a camera with built-in motion and smoke detectors is described that can be set by the user to capture any emergency condition detected by those detectors and to automatically send pictures of the emergency, together with a pre-recorded audio message or pre-entered text stating the nature and address of the emergency, to one or more preset cellular telephone numbers.
Abstract: This invention relates to a camera fully equipped with a built-in motion and smoke detector, a digital audio recorder, a voice synthesizer, and a text generator. The camera is specially designed to be set by the user to capture any emergency condition detected by any of the said built-in detectors, and to automatically send a picture or pictures of the emergency along with an audio speech pre-recorded by the user or a written text pre-entered by the user, placed at the top, bottom, side, or middle of the captured image. The voice synthesizer or the text generator states the exact emergency, which can be fire or intrusion, while the pre-recorded audio or text entered by the user states the address of the emergency. Both pieces of information combined (the emergency and the address) are sent out by the camera to one or more preset numbers of cellular telephones of the user's choice that can receive the picture with the text or the audio by means of any available telecommunication network. The picture sent by the camera can be a still picture or a motion picture, and the camera can be a visible-light camera or an infrared camera.

Journal ArticleDOI
14 Feb 2006
TL;DR: The problem of establishing a computational model for visual attention using cooperation between two cameras is addressed through the understanding and modeling of the geometric and kinematic coupling between a static camera and an active camera.
Abstract: In this paper we address the problem of establishing a computational model for visual attention using cooperation between two cameras. More specifically we wish to maintain a visual event within the field of view of a rotating and zooming camera through the understanding and modeling of the geometric and kinematic coupling between a static camera and an active camera. The static camera has a wide field of view thus allowing panoramic surveillance at low resolution. High-resolution details may be captured by a second camera, provided that it looks in the right direction. We derive an algebraic formulation for the coupling between the two cameras and we specify the practical conditions yielding a unique solution. We describe a method for separating a foreground event (such as a moving object) from its background while the camera rotates. A set of outdoor experiments shows the two-camera system in operation.
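
Under the strong simplifying assumption that the static and active cameras are nearly co-located (the paper instead derives the exact algebraic coupling between the two), steering the active camera toward a pixel seen by the static camera reduces to converting image offsets into pan and tilt angles:

```python
# Hedged sketch: pixel-to-pan/tilt under a co-located-cameras assumption.
import math

def pixel_to_pan_tilt(u, v, width, height, hfov_deg, vfov_deg):
    """Pan/tilt that would center pixel (u, v) of the static camera."""
    nx = (2 * u - width) / width      # normalized offset in [-1, 1]
    ny = (2 * v - height) / height
    pan = math.degrees(math.atan(nx * math.tan(math.radians(hfov_deg) / 2)))
    tilt = -math.degrees(math.atan(ny * math.tan(math.radians(vfov_deg) / 2)))
    return pan, tilt                  # image y grows downward, hence the sign

print(pixel_to_pan_tilt(1000, 300, 1280, 720, 90.0, 60.0))
```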

Book ChapterDOI
13 Jan 2006
TL;DR: In this article, the authors consider the problem of computing the pose of an object relative to a camera, for the case where the camera has no direct view of the object and place planar mirrors such that the camera sees the calibration grid's reflection.
Abstract: We consider the task of computing the pose of an object relative to a camera, for the case where the camera has no direct view of the object. This problem was encountered in work on vision-based inspection of specular or shiny surfaces, that is often based on analyzing images of calibration grids or other objects, reflected in such a surface. A natural setup consists thus of a camera and a calibration grid, put side-by-side, i.e. without the camera having a direct view of the grid. A straightforward idea for computing the pose is to place planar mirrors such that the camera sees the calibration grid’s reflection. In this paper, we consider this idea, describe geometrical properties of the setup and propose a practical algorithm for the pose computation.
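
The geometric core of the setup is that the camera observes the grid's mirror image: every grid point is reflected across the mirror plane {x : n·x = d}. A minimal sketch of that reflection, on which the pose computation builds:

```python
# Hedged sketch: reflecting 3D points across a mirror plane (n unit normal,
# d offset); the paper's pose algorithm builds on this relation.
import numpy as np

def reflect_points(X, n, d):
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    X = np.asarray(X, float)
    return X - 2 * ((X @ n) - d)[:, None] * n

# A grid point the camera cannot see directly appears at its reflection:
grid = np.array([[0.2, 0.1, 1.5]])
print(reflect_points(grid, n=[0, 0, 1], d=1.0))   # -> [[0.2 0.1 0.5]]
```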

Proceedings ArticleDOI
22 Nov 2006
TL;DR: A novel technique for camera tampering detection implemented in real-time and developed for use in surveillance and security applications that identifies camera tampering by detecting large differences between older frames of video and more recent frames.
Abstract: This paper presents a novel technique for camera tampering detection. It is implemented in real-time and was developed for use in surveillance and security applications. This method identifies camera tampering by detecting large differences between older frames of video and more recent frames. A buffer of incoming video frames is kept and three different measures of image dissimilarity are used to compare the frames. After normalization, a set of conditions is tested to decide if camera tampering has occurred. The effects of adjusting the internal parameters of the algorithm are examined. The performance of this method is shown to be extremely favorable in real-world settings.
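
A hedged sketch of the scheme described above: a buffer of incoming frames, three normalized dissimilarity measures between the oldest buffered frame and the newest frame, and a set of threshold conditions. The specific measures and thresholds here are illustrative choices, not the paper's:

```python
# Hedged sketch of buffer-based camera tampering detection.
from collections import deque
import numpy as np

def dissimilarity(a, b):
    """Three illustrative measures between two grayscale frames."""
    a, b = a.astype(int), b.astype(int)
    h1 = np.histogram(a, bins=32, range=(0, 256))[0]
    h2 = np.histogram(b, bins=32, range=(0, 256))[0]
    hist_d = 0.5 * np.abs(h1 / h1.sum() - h2 / h2.sum()).sum()  # in [0, 1]
    mad = np.abs(a - b).mean() / 255                 # overall pixel change
    grad = abs(np.abs(np.diff(a, axis=1)).mean()
               - np.abs(np.diff(b, axis=1)).mean()) / 255  # sharpness change
    return hist_d, mad, grad

class TamperDetector:
    def __init__(self, buffer_len=30, thresholds=(0.5, 0.25, 0.05)):
        self.buf = deque(maxlen=buffer_len)
        self.th = thresholds

    def update(self, frame):
        """Flag tampering when all measures exceed their thresholds."""
        tampered = False
        if len(self.buf) == self.buf.maxlen:
            scores = dissimilarity(self.buf[0], frame)  # oldest vs. newest
            tampered = all(s > t for s, t in zip(scores, self.th))
        self.buf.append(frame)
        return tampered
```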

Patent
26 Oct 2006
TL;DR: A capsule camera as discussed by the authors consists of a swallowable housing, a light source within the housing, and a camera within the housing for capturing a first digital image and a second digital image of a view of the camera illuminated by the light source.
Abstract: A capsule camera apparatus includes a swallowable housing, a light source within the housing, a camera within the housing for capturing a first digital image and a second digital image of a view of the camera illuminated by the light source, a motion detector that detects a motion of the housing using the first digital image and the second digital image, and a motion evaluator that selects a disposition of the second digital image based on a metric on the motion. The disposition may include writing the second image into archival storage or providing the second digital image to the outside over a wireless communication link.
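
The motion evaluator's disposition decision can be as simple as thresholding a frame-difference metric; the metric and threshold below are assumptions for illustration, not the patent's specification:

```python
# Hedged sketch: choose the disposition of the newest capsule image.
import numpy as np

def disposition(prev_frame, curr_frame, threshold=8.0):
    """Archive the second image only if the capsule appears to have moved."""
    motion = np.abs(curr_frame.astype(int) - prev_frame.astype(int)).mean()
    return "archive" if motion > threshold else "discard"
```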

Patent
28 Sep 2006
TL;DR: A traffic information detector (100) comprises a drive unit (101) having a camera installed therein and defining the direction of the camera, a control unit (102) for controlling the drive of the drive unit, a sensor section (103) having various sensor functions, a storage section (104) for storing various information, an information input section (105) for inputting information, and an information output section (106) for outputting image information or the like, a vehicle information interface (I/F) (107) for connecting an external image processor, a computer, or the
Abstract: A traffic information detector (100) comprises a drive unit (101) having a camera installed therein and defining the direction of the camera, a control unit (102) for controlling the drive of the drive unit (101), a sensor section (103) having various sensor functions, a storage section (104) for storing various information, an information input section (105) for inputting information, an information output section (106) for outputting image information or the like, a vehicle information interface (I/F) (107) for connecting an external image processor, a computer, or the like, an external device interface (I/F) (108) for connecting devices such as a car navigation device, a computer, and an image processing means, and an image processing section (109) for performing predetermined image processing on an acquired image.

Patent
24 Jan 2006
TL;DR: In this paper, a portable camera and lighting unit for standalone use in videography is presented, which can create a high-resolution well-illuminated video feed from a vast array of camera angles and positions, the illumination source always inherently tracking with the camera.
Abstract: The present invention is a portable camera and lighting unit for standalone use in videography to create a high-resolution well-illuminated video feed from a vast array of camera angles and positions, the illumination source always inherently tracking with the camera. The unit may also be used as a satellite in combination with a primary video conferencing and production station (VVPR) for multi-camera production and teleconferencing capabilities. The portable camera and lighting unit includes a portable base, a mast extending upward from the base, and an articulating boom that is fully-pivotable and extendable. A remote control Pan-Tilt-Zoom camera is mounted at the end of the boom for overhead images of healthcare procedures, and an adjustable beam light source is mounted directly on the camera for lighting. The mast is equipped with a color monitor coupled to the camera for operator previewing at the portable unit, and the remote control camera provides a single video feed that can be teleconferenced, recorded, and even mixed with other cameras when used as a satellite adjunct to the primary VVPR, thereby allow full production capabilities for live interactive broadcasts, all in real time by a single operator from a single point of control. The portable unit is mobile and offers more diverse lighting and camera angles than previously possible.