
Showing papers on "Video camera published in 2007"


Patent
06 Sep 2007
TL;DR: In this paper, a self-contained portable and compact omni-directional monitoring and automatic alarm video device is presented, which enables automatic omnidirectional detection, directional imaging inspection, tracking, real-time alarm transmission, and remote monitoring.
Abstract: The present invention is a self-contained portable and compact omni-directional monitoring and automatic alarm video device. The device enables automatic omni-directional detection, directional imaging inspection, tracking, real-time alarm transmission, and remote monitoring. The device comprises an omni-directional detection sensor, processor, and directional video imaging camera located inside a rotatable housing. The omni-directional detection sensor enables detection of moving objects in the device's surroundings; the processing assembly enables extraction of information such as the relative direction to the moving object and automatic pointing of the directional video imaging camera in that direction by rotation of the housing. The information acquired by the video camera is then transmitted to a remote control and observation unit.

197 citations
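
A minimal sketch (not the patent's mechanism) of the pointing step described above: given the bearing reported by the omni-directional detector, the housing is rotated along the shortest angular path, one bounded actuator step per update, until the directional camera faces the target. The function names and step limit are illustrative assumptions.

```python
# Sketch of bearing-to-rotation logic; the sensor/actuator interfaces are hypothetical.

def shortest_rotation(current_deg: float, target_deg: float) -> float:
    """Signed rotation in degrees, normalized to (-180, 180]."""
    delta = (target_deg - current_deg) % 360.0
    return delta - 360.0 if delta > 180.0 else delta

def point_camera(housing_azimuth_deg: float, detected_bearing_deg: float,
                 max_step_deg: float = 30.0) -> float:
    """Return the next housing azimuth, limited to one actuator step per update."""
    delta = shortest_rotation(housing_azimuth_deg, detected_bearing_deg)
    step = max(-max_step_deg, min(max_step_deg, delta))
    return (housing_azimuth_deg + step) % 360.0

if __name__ == "__main__":
    azimuth = 350.0
    for _ in range(3):                      # iterate until the camera faces ~45 degrees
        azimuth = point_camera(azimuth, 45.0)
        print(f"housing azimuth: {azimuth:.1f} deg")
```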


Patent
13 Jul 2007
TL;DR: In this paper, a system for video monitoring a retail business process includes a video analytics engine to process video obtained by a video camera and generate video primitives regarding the video; a user interface is used to define at least one activity of interest regarding an area being viewed, each activity being associated with a rule or a query.
Abstract: A system for video monitoring a retail business process includes a video analytics engine to process video obtained by a video camera and generate video primitives regarding the video. A user interface is used to define at least one activity of interest regarding an area being viewed, each activity of interest identifying at least one of a rule or a query regarding the area being viewed. An activity inference engine processes the generated video primitives based on each defined activity of interest to determine if an activity of interest occurred in the video.

129 citations
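
The abstract above describes a primitives-plus-rules pipeline. The sketch below is an illustrative toy version, not the patented engine: primitives are plain event records, an activity of interest is a predicate over them, and inference is a filter over the primitive stream. All field names and the example rule are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Primitive:
    timestamp: float      # seconds
    object_type: str      # e.g. "person"
    x: float              # normalized image coordinates
    y: float

def in_region(p: Primitive, region) -> bool:
    (x0, y0), (x1, y1) = region
    return x0 <= p.x <= x1 and y0 <= p.y <= y1

# Example rule: "a person appears inside the checkout area"
checkout_area = ((0.6, 0.4), (0.9, 0.9))
rule = lambda p: p.object_type == "person" and in_region(p, checkout_area)

def infer(primitives, rule):
    """Return the primitives that satisfy the defined activity of interest."""
    return [p for p in primitives if rule(p)]

stream = [Primitive(1.0, "person", 0.7, 0.5), Primitive(2.0, "cart", 0.7, 0.5)]
print(infer(stream, rule))   # only the person inside the checkout area matches
```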


Journal ArticleDOI
TL;DR: Two new techniques are proposed (edge-based and clustering-based) to classify video frames into two classes, informative and non-informative frames, and it is suggested that precision, sensitivity, specificity, and accuracy for the specular reflection detection technique and the two informative frame classification techniques are greater than 90% and 95%, respectively.

122 citations
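
One of the two proposed techniques is edge-based; as a loose illustration (not the authors' exact method), a frame can be scored by its edge density and flagged as non-informative when too few strong gradients are present, e.g. for blurred or washed-out frames. The thresholds below are arbitrary assumptions.

```python
import numpy as np

def edge_density(gray: np.ndarray, grad_thresh: float = 30.0) -> float:
    """Fraction of pixels whose gradient magnitude exceeds grad_thresh."""
    gy, gx = np.gradient(gray.astype(np.float32))
    mag = np.hypot(gx, gy)
    return float((mag > grad_thresh).mean())

def is_informative(gray: np.ndarray, density_thresh: float = 0.02) -> bool:
    return edge_density(gray) > density_thresh

rng = np.random.default_rng(0)
textured = rng.integers(0, 256, size=(240, 320)).astype(np.uint8)  # stands in for a sharp frame
flat = np.full((240, 320), 128, dtype=np.uint8)                    # stands in for a blurred frame
print(is_informative(textured), is_informative(flat))              # True False
```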


Proceedings ArticleDOI
01 Oct 2007
TL;DR: This paper proposes a new coded structured light projection method that can select not only a temporal encoding but also a spatial encoding adaptively for obtaining three-dimensional images at a high-speed frame rate and demonstrates the effectiveness of the proposed method by showing the obtained three-dimensional shapes for moving objects.
Abstract: In various application fields, high-speed cameras are used to analyze high-speed phenomena. Coded structured light projection methods have been proposed for acquiring three-dimensional images. Most of them are not suitable for measuring high-speed phenomena because errors arise when the measured objects move during the projection of multiple light patterns. In this paper, we propose a new coded structured light projection method that can select not only a temporal encoding but also a spatial encoding adaptively for obtaining three-dimensional images at a high-speed frame rate. We also develop an evaluation system that uses a DLP projector and an off-line high-speed video camera, and verify the effectiveness of the proposed method by showing the obtained three-dimensional shapes for moving objects.

102 citations


Patent
19 Mar 2007
TL;DR: In this article, a system to detect the transit of vehicles having license plates includes at least one video camera to detect license plates capable of framing the plates of said vehicles and, preferably, at least two video cameras to detect vehicles capable of detecting vehicles in transit.
Abstract: A system to detect the transit of vehicles having license plates includes at least one video camera to detect license plates capable of framing the plates of said vehicles and, preferably, at least one video camera to detect vehicles capable of framing a zone of transit of said vehicles having license plates. A series of processing operations is capable, starting from the video signals generated by the video camera to detect license plates, of detecting the presence of a vehicle in transit and, starting from the video signals generated by the video camera to detect vehicles, of detecting the position and three-dimensional shape of vehicles in transit in said zone. A supervisor module aggregates the results of these processing operations to generate records of information each identifying the modality of transit in said zone of a vehicle identified by a given license plate that has been recognized.

94 citations


Proceedings ArticleDOI
Andrew D. Wilson1
19 Nov 2007
TL;DR: An interactive tabletop system which uses a depth-sensing camera to build a height map of the objects on the table surface that is used in a driving simulation game that allows players to drive a virtual car over real objects placed on a table.
Abstract: Recently developed depth-sensing video camera technologies provide precise per-pixel range data in addition to color video. Such cameras will find application in robotics and vision-based human computer interaction scenarios such as games and gesture input systems. We present an interactive tabletop system which uses a depth-sensing camera to build a height map of the objects on the table surface. This height map is used in a driving simulation game that allows players to drive a virtual car over real objects placed on the table. Players can use folded bits of paper, for example, to lay out a course of ramps and other obstacles. A projector displays the position of the car on the surface, such that when the car is driven over a ramp, for example, it jumps appropriately. A second display shows a synthetic graphical view of the entire surface, or a traditional arcade view from behind the car. Micromotorcross is a fun initial investigation into the applicability of depth-sensing cameras to tabletop interfaces. We present details on its implementation, and speculate on how this technology will enable new tabletop interactions.

86 citations
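
A minimal sketch of the height-map step described above, under the assumption that the depth camera looks straight down at the table and reports per-pixel range in millimetres; calibration, projection, and the game itself are omitted.

```python
import numpy as np

def height_map(depth_mm: np.ndarray, table_depth_mm: float,
               max_height_mm: float = 300.0) -> np.ndarray:
    """Height of each pixel above the table surface, in millimetres."""
    h = table_depth_mm - depth_mm.astype(np.float32)   # nearer to the camera = taller
    return np.clip(h, 0.0, max_height_mm)

depth = np.full((4, 4), 1000.0)       # empty table ~1 m from the camera
depth[1:3, 1:3] = 950.0               # a 50 mm-high object (e.g. a folded paper ramp)
print(height_map(depth, table_depth_mm=1000.0))
```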


Patent
22 Jun 2007
TL;DR: In this article, a controller can be in a dormant mode or an active mode, and the controller transitions from dormant mode to active mode when the image processor detects a progression of two states within the captured photographs, the two states being (i) a closed fist and (ii) an open hand.
Abstract: A video processor for recognizing gestures, including a video camera for capturing photographs of a region within the camera's field of view, in real-time, an image processor coupled with the video camera for detecting a plurality of hand gestures from the photographs captured by the video camera, and a controller coupled with the image processor, wherein the controller can be in a dormant mode or an active mode, and wherein the controller transitions from dormant mode to active mode when the image processor detects a progression of two states within the captured photographs, the two states being (i) a closed fist and (ii) an open hand, and wherein the controller performs a programmed responsive action to an electronic device based on the hand gestures detected by the image processor when the controller is in active mode. A method and a computer-readable storage medium are also described and claimed.

80 citations
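
A minimal sketch of the dormant-to-active transition described in the abstract: the controller wakes only when a closed fist is followed by an open hand. The gesture labels are assumed to come from the upstream image processor, which is not modelled here.

```python
class GestureController:
    def __init__(self):
        self.mode = "dormant"
        self._saw_fist = False

    def on_gesture(self, gesture: str) -> None:
        if self.mode == "dormant":
            if gesture == "closed_fist":
                self._saw_fist = True
            elif gesture == "open_hand" and self._saw_fist:
                self.mode = "active"          # fist -> open hand completes the wake-up
            else:
                self._saw_fist = False        # any other gesture resets the progression
        else:
            self.perform_action(gesture)

    def perform_action(self, gesture: str) -> None:
        print(f"active mode: responding to '{gesture}'")

ctrl = GestureController()
for g in ["wave", "closed_fist", "open_hand", "swipe_left"]:
    ctrl.on_gesture(g)
print(ctrl.mode)    # active
```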


Patent
15 Jun 2007
TL;DR: In this paper, a feature extracting unit obtains sensor data from a plurality of sensors to calculate each feature, a display data constructor generates remote-controller display data for displaying a detected event, and a control unit controls the sensors to be turned ON or OFF.
Abstract: A feature extracting unit obtains sensor data from a plurality of sensors to calculate each feature. When an event determining unit determines the occurrence of an event based on each feature, a display data constructor generates remote-controller display data for displaying the event, and controls a remote-controller display device to display the remote-controller display data. When a user decision is input from a user input IF based on this display, a control unit controls the sensors to be turned ON or OFF. When an infrared sensor detects an abnormality, a microwave sensor with small power consumption is turned ON. When the microwave sensor detects an abnormality, a video camera and a microphone are turned ON, and the microwave sensor is turned OFF. A communication unit wirelessly transmits an image signal captured by the video camera and an audio signal processed by the microphone. Then, if the infrared sensor does not detect an abnormality, the video camera and the microphone are turned OFF. With this arrangement, power consumption can be suppressed. The present invention is applied, for example, to a security system for monitoring the outside of a vehicle by a video camera disposed in the vehicle when the vehicle is parked.

80 citations
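
A rough sketch of the power-saving cascade described above (infrared, then microwave, then camera and microphone). Sensor readings are passed in as booleans; the display, user-confirmation, and wireless-transmission parts of the patent are omitted.

```python
class PowerCascade:
    def __init__(self):
        self.microwave_on = False
        self.camera_mic_on = False

    def update(self, ir_alarm: bool, mw_alarm: bool) -> None:
        if ir_alarm and not self.camera_mic_on:
            self.microwave_on = True                  # low-power confirmation stage
        if self.microwave_on and mw_alarm:
            self.camera_mic_on = True                 # escalate to video/audio capture
            self.microwave_on = False
        if not ir_alarm and self.camera_mic_on:
            self.camera_mic_on = False                # event over: power everything down

cascade = PowerCascade()
for ir, mw in [(True, False), (True, True), (True, False), (False, False)]:
    cascade.update(ir, mw)
    print(cascade.microwave_on, cascade.camera_mic_on)
```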


Proceedings ArticleDOI
04 Jun 2007
TL;DR: The behavior analysis module of the OBSERVER, a video surveillance system that detects and predicts abnormal behaviors aiming at the intelligent surveillance concept, is presented, where the DOG method outperforms the previously used N-ary trees classifier.
Abstract: The OBSERVER is a video surveillance system that detects and predicts abnormal behaviors aiming at the intelligent surveillance concept. The system acquires color images from a stationary video camera and applies state-of-the-art algorithms to segment, track and classify moving objects. In this paper we present the behavior analysis module of the system. A novel method, called the dynamic oriented graph (DOG), is used to detect and predict abnormal behaviors, using real-time unsupervised learning. The DOG method characterizes observed actions by means of a structure of unidirectional connected nodes, each one defining a region in the hyperspace of attributes measured from the observed moving objects and having an assigned probability of generating an abnormal behavior. An experimental evaluation with synthetic data was conducted, in which the DOG method outperformed the previously used N-ary trees classifier.

77 citations
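
A toy model in the spirit of the DOG idea summarized above, not the authors' algorithm: each node covers a region of the attribute space, and behaviours that fall into new or rarely visited regions receive a high abnormality score. The radius, scoring rule, and attribute choice are assumptions.

```python
import numpy as np

class DogLikeModel:
    def __init__(self, radius: float = 1.0):
        self.radius = radius
        self.centers: list[np.ndarray] = []
        self.counts: list[int] = []

    def observe(self, x: np.ndarray) -> float:
        """Update the model and return an abnormality score in [0, 1]."""
        if self.centers:
            d = [np.linalg.norm(x - c) for c in self.centers]
            i = int(np.argmin(d))
            if d[i] <= self.radius:
                self.counts[i] += 1
                return 1.0 / self.counts[i]   # frequently visited regions look normal
        self.centers.append(x.astype(float))
        self.counts.append(1)
        return 1.0                            # a brand-new region of behaviour space

model = DogLikeModel(radius=2.0)
for speed, direction in [(1.0, 0.1), (1.1, 0.0), (0.9, 0.2), (8.0, 3.0)]:
    score = model.observe(np.array([speed, direction]))
    print(f"abnormality score: {score:.2f}")
```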


Journal ArticleDOI
TL;DR: A novel approach is proposed in which a large number of simple motion sensors and a small set of video cameras are used to monitor a large office space to help designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.
Abstract: The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.

72 citations


Patent
27 Jun 2007
TL;DR: In this article, a vehicle vision system is described in which at least one video camera is mounted with respect to the vehicle structure so as to view the vehicle interior and/or exterior.
Abstract: A vehicle vision system (10) is provided. At least one video camera (14) (or other video source) is mounted with respect to the vehicle structure so as to view the vehicle interior and/or exterior. A display (48) is in communication with the video camera so as to receive video signals from the video camera. The video camera is movable with respect to the vehicle structure. In one example, the camera is included in a cellular telephone which may be mounted with respect to the vehicle. Also provided is a method of monitoring the interior and/or exterior of a vehicle.

Journal ArticleDOI
TL;DR: Docking success under certain conditions is proved mathematically and simulation studies show the control law to be robust to camera intrinsic parameter errors.

Patent
15 Mar 2007
TL;DR: In this paper, an Electronic Personal Computing and VideoPhone System Composed of: a Remote Online Server System that provides Virtual Subscription-based Computing Resources, Computer Programs, Internet-based Television Programming, Web-based Video, Television Programming Video Card Technology, Multi-location/Multi-Point Video Conferencing Programs, Data Storage, Usage Tracking, and Video, Text & Sound Feed.
Abstract: This invention is an Electronic Personal Computing and VideoPhone System Composed of: 1. a Remote Online Server System That Provides Virtual Subscription-based Computing Resources, Computer Programs, Internet-based Television Programming, Web-Based Video, Television Programming Video Card Technology, Multi-location/Multi-Point Video Conferencing Programs, Data Storage, Usage Tracking, and Video, Text & Sound Feed. 2. a Hardware Device Utilizing Software Programs To: Output Video and Sound To A TV Set; Receive User Input From a Wireless Keyboard and Mouse; Output Light, Motion and Sound Signals to Light Emitting, Motion and Sound Devices; Connect and Communicate With A Remote Online Server System; Receive Computing Services From a Virtual Computing Instance On A Remote Server; Output to a Printing Device; Input from a Scanning Device; Input from a Microphone; Input from a Video Camera; Output to Speakers; and Input/Output to a DVD/CD Drive 3. a Wireless Mobile Electronic Device Composed of a Housing, an Electronic Circuit Board, a Flash Memory, an LCD screen, a Keyboard, a Mouse, a Microphone, Speakers, a Camera, and a Software Program that Connects and Communicates With The Remote Online Server System To Send Computing Commands and Audio/Video Data and to Receive Computing Services and Audio/Video Data From a Virtual Computing Instance On A Remote Server.

Patent
Edward Walter1
05 Apr 2007
TL;DR: In this paper, a computerized system and method for distributing video conference data over an internet protocol television (IPTV) system (1000) are disclosed including structures and methods for allocating an IPTV video conference channel to groups of video conference participants' set top boxes (STBs).
Abstract: A computerized system and method for distributing video conference data over an internet protocol television (IPTV) system (1000) are disclosed including structures and methods for allocating an IPTV video conference channel to groups of video conference participants' set top boxes (STBs). A video data source (902) originates video conference data from an IP video camera (904). The video conference data can be sent through a switch (908) and a router (909) to a public or private IPTV network (1100) and delivered to the video conference participants (906).

Patent
17 Jan 2007
TL;DR: In this paper, a method for detecting and tracking vehicle based on video camera and computer system includes obtaining preliminary edge object region by carrying out global domain value division on obtained difference image of background edge and front ground edge then carrying out interclass variance domain value Division on intensified difference image to obtain preliminary object region, forming object region based on integration, carrying out expansion calculation and corrosion calculation on object region to pick up object character.
Abstract: A method for detecting and tracking vehicle based on video camera and computer system includes obtaining preliminary edge object region by carrying out global domain value division on obtained difference image of background edge and front ground edge then carrying out interclass variance domain value division on intensified difference image to obtain preliminary object region, forming object region based on integration, carrying out expansion calculation and corrosion calculation on object region to pick up object character, carrying out object identification then locking and tracking object in real time to form object trace.
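
A rough OpenCV sketch that follows the spirit of the pipeline above (edge difference, thresholding, dilation/erosion, candidate regions); it is not the patented method, and all parameter values are arbitrary assumptions.

```python
import cv2
import numpy as np

def vehicle_candidates(frame_gray: np.ndarray, background_gray: np.ndarray,
                       min_area: int = 500):
    fg_edges = cv2.Canny(frame_gray, 50, 150)
    bg_edges = cv2.Canny(background_gray, 50, 150)
    diff = cv2.absdiff(fg_edges, bg_edges)                         # edge difference image
    _, mask = cv2.threshold(diff, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # between-class variance
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.dilate(mask, kernel, iterations=2)                  # close gaps in edge blobs
    mask = cv2.erode(mask, kernel, iterations=1)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Usage: boxes = vehicle_candidates(gray_frame, gray_background); feeding the boxes to a
# simple nearest-box tracker frame to frame would then yield object trajectories.
```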

Patent
13 Apr 2007
TL;DR: In this paper, a hardware independent virtual camera that can be seamlessly integrated with existing video camera and computer system equipment is presented, which supports the ability to track a defined set of three-dimensional coordinates within a video stream and dynamically insert rendered 3-D objects within the video stream on a real-time basis.
Abstract: A method and apparatus are described that provide a hardware independent virtual camera that may be seamlessly integrated with existing video camera and computer system equipment. The virtual camera supports the ability to track a defined set of three-dimensional coordinates within a video stream and to dynamically insert rendered 3-D objects within the video stream on a real-time basis. The described methods and apparatus may be used to manipulate any sort of incoming video signal regardless of the source of the video. Exemplary application may include real-time manipulation of a video stream associated, for example, with a real-time video conference generated by a video camera, or a video stream generated by a video player (e.g., a video tape player, DVD, or other device) reading a stored video recording.

Patent
07 Feb 2007
TL;DR: In this paper, an integrated video surveillance system and associated method of use is disclosed that includes at least one alarm monitoring center, each including a main control panel with a first video recorder interface; at least one subscriber, each electrically connected to a control module; and at least one mobile unit, each including a first electronic display and connected to the wireless access network.
Abstract: An integrated video surveillance system and associated method of use is disclosed that includes at least one alarm monitoring center, each alarm monitoring center includes a main control panel with a first video recorder interface, at least one subscriber, wherein each subscriber is electrically connected to a control module, at least one mobile unit, wherein each mobile unit includes at least one first electronic display and is electrically connected to a wireless access network, at least one video camera for providing video data of a predetermined area, and at least one computer network, which includes a global computer network, that is operatively connected between the at least one alarm monitoring center, the at least one subscriber, the at least one video camera and the at least one mobile unit through the wireless access network.

Journal ArticleDOI
TL;DR: This paper presents an automatic and robust approach to synthesize stereoscopic videos from ordinary monocular videos acquired by commodity video cameras; the approach synthesizes the binocular parallax in stereoscopic video directly from the motion parallax in monocular video.
Abstract: This paper presents an automatic and robust approach to synthesize stereoscopic videos from ordinary monocular videos acquired by commodity video cameras. Instead of recovering the depth map, the proposed method synthesizes the binocular parallax in stereoscopic video directly from the motion parallax in monocular video. The synthesis is formulated as an optimization problem by introducing a cost function of the stereoscopic effects, the similarity, and the smoothness constraints. The optimization selects the most suitable frames in the input video for generating the stereoscopic video frames. With the optimized selection, convincing and smooth stereoscopic video can be synthesized even by simple constant-depth warping. No user interaction is required. We demonstrate the visually plausible results obtained given input clips acquired by an ordinary handheld video camera.
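
A heavily simplified sketch of the frame-selection idea: for each "left-eye" frame, another frame of the same monocular clip is chosen as the "right-eye" view so that the estimated parallax is close to a target and the choice varies smoothly over time. The paper solves this as a global optimization; the greedy loop and cost weights below are illustrative assumptions only.

```python
def select_right_frames(parallax, target=10.0, smooth_weight=2.0, window=15):
    """parallax[i][j]: estimated horizontal parallax (pixels) between frames i and j."""
    n = len(parallax)
    selection = []
    prev = None
    for i in range(n):
        best_j, best_cost = i, float("inf")
        for j in range(i + 1, min(n, i + 1 + window)):
            cost = abs(parallax[i][j] - target)                          # stereoscopic-effect term
            if prev is not None:
                cost += smooth_weight * abs((j - i) - (prev - (i - 1)))  # smoothness term
            if cost < best_cost:
                best_j, best_cost = j, cost
        selection.append(best_j)
        prev = best_j
    return selection   # pair frame i with frame selection[i] as the right-eye view
```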

Patent
22 Feb 2007
TL;DR: In this paper, a method and apparatus for controlling power up of an electronic device with a video camera is provided, where the video camera includes a processor and memory that compare consecutive frames captured by the camera.
Abstract: A method and apparatus for controlling power up of an electronic device with a video camera is provided. The present invention provides for using a video camera attached to an electronic device, such as a computer system, to cause the electronic device to be powered up from sleep mode when motion is detected. The electronic device may also be powered up from being shut down. In one embodiment, the video camera includes a processor and memory that compare consecutive frames captured by the video camera. When the electronic device is in sleep mode, if consecutive frames are the same, the video camera continues to monitor the scene without generating an output signal. If the frames are different, motion is detected and the video camera generates a signal that is used to determine whether the electronic device should power up. In this manner, the electronic device may begin the powering up process before the user of the device interacts with the device.
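
A minimal sketch of the wake-on-motion check described above: two consecutive frames that differ in enough pixels count as motion, which would trigger the power-up signal. The thresholds and the commented camera/wake calls are hypothetical.

```python
import numpy as np

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray,
                    pixel_thresh: int = 25, fraction_thresh: float = 0.01) -> bool:
    """True if enough pixels changed between two consecutive grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > pixel_thresh).mean() > fraction_thresh

# Sketch of the monitoring loop while the host device sleeps:
# prev = grab_frame()                # hypothetical camera read
# while device_is_asleep():
#     cur = grab_frame()
#     if motion_detected(prev, cur):
#         assert_wake_signal()       # hypothetical wake line to the host
#     prev = cur
```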

Patent
27 Sep 2007
TL;DR: In this paper, the authors present a method and apparatus for controlling video streams, which includes monitoring for an event associated with one of a plurality of video camera controllers providing video streams where each of the video streams has a first quality level, and propagating a control message toward the controller for which the event is detected.
Abstract: The invention includes a method and apparatus for controlling video streams. A method includes monitoring for an event associated with one of a plurality of video camera controllers providing a plurality of video streams where each of the video streams has a first quality level, and, in response to detecting an event associated with one of the plurality of video camera controllers, propagating a control message toward the one of the video camera controllers for which the event is detected, the control message being adapted to request that video camera controller to switch from providing the video stream using the first quality level to providing the video stream using a second quality level. The first quality level may be a low level of quality and the second quality level may be a high level of quality.

Journal ArticleDOI
TL;DR: This paper proposes to align dynamic scenes using a new notion of "dynamics constancy," which is more appropriate for this task than the traditional assumption of "brightness constancy", and formulate the problem of finding optimal time front geometry as one of finding a minimal cut in a 4D graph, and solve it using max-flow methods.
Abstract: This paper explores the manipulation of time in video editing, which allows us to control the chronological time of events. These time manipulations include slowing down (or postponing) some dynamic events while speeding up (or advancing) others. When a video camera scans a scene, aligning all the events to a single time interval will result in a panoramic movie. Time manipulations are obtained by first constructing an aligned space-time volume from the input video, and then sweeping a continuous 2D slice (time front) through that volume, generating a new sequence of images. For dynamic scenes, aligning the input video frames poses an important challenge. We propose to align dynamic scenes using a new notion of "dynamics constancy," which is more appropriate for this task than the traditional assumption of "brightness constancy." Another challenge is to avoid visual seams inside moving objects and other visual artifacts resulting from sweeping the space-time volumes with time fronts of arbitrary geometry. To avoid such artifacts, we formulate the problem of finding optimal time front geometry as one of finding a minimal cut in a 4D graph, and solve it using max-flow methods.

Patent
01 Oct 2007
TL;DR: In this article, a system is described for automatically setting video signal processing parameters for an endoscopic video camera system based upon the characteristics of an attached endoscope, with reduced EMI, improved inventory tracking, maintenance, and quality assurance, and a reduced need for adjustment and alignment of the endoscope and camera to achieve the data transfer.
Abstract: A system for automatically setting video signal processing parameters for an endoscopic video camera system based upon characteristics of an attached endoscope, with reduced EMI and improved inventory tracking, maintenance and quality assurance, and reducing the necessity for adjustment and alignment of the endoscope and camera to achieve the data transfer.

Patent
21 Sep 2007
TL;DR: In this article, a plurality of video cameras are configured to at least one of zoom for changing the camera field of view, tilt for rotating the camera about a horizontal tilt axis, and pan for rotating a camera on a vertical pan axis.
Abstract: Methods and systems for a video surveillance system include a plurality of video cameras, each including a field of view, the cameras are configured to at least one of zoom for changing the camera field of view, tilt for rotating the camera about a horizontal tilt axis, and pan for rotating the camera about a vertical pan axis. The system also includes a processor configured to receive a signal indicative of an image in the field of view of at least one video camera, recognize a target using the received signal, determine a direction to the target from the cameras that recognize the target, and transmit the determined direction to other ones of the plurality of video cameras.

Patent
02 Oct 2007
TL;DR: In this paper, data from a 3D imaging system (providing object range, volume, and geometry), a video camera, and one or more environmental sensors are combined to detect and identify threats during a structure clearing or inspection operation.
Abstract: A sensor suite for a vehicle, the sensor suite comprising a 3D imaging system, a video camera, and one or more environmental sensors. Data from the sensor suite is combined to detect and identify threats during a structure clearing or inspection operation. Additionally, a method for detecting and identifying threats during a structure clearing or inspection operation. The method comprises: gathering 3D image data including object range, volume, and geometry; gathering video data in the same physical geometry of the 3D image; gathering non-visual environmental characteristic data; and combining and analyzing the gathered data to detect and identify threats.

Patent
25 Jun 2007
TL;DR: In this article, the authors proposed a stabilization method for images from a scene, acquired by means of an observation device for an imaging system, and comprises a stage for digitally processing a flow of successive images.
Abstract: The invention concerns a stabilization method for images of a scene acquired by means of an observation device of an imaging system, and comprises a stage for digitally processing a flow of successive images. It also comprises a stage for acquiring gyrometric measurements by means of at least one gyrometric sensor fixedly connected to the observation device; these gyrometric measurements are used to determine so-called approximate errors incurred between successive images, and the image processing stage comprises a sub-stage that uses the approximate errors and the flow of acquired images to determine so-called fine errors incurred between successive images.
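
An illustrative two-stage sketch in the spirit of the abstract: the gyrometric rate is integrated over a frame interval to get an approximate (coarse) pixel shift, and a small image-based search around that estimate yields the fine correction. The pixel-per-degree factor, search radius, and horizontal-only model are assumptions, not values from the patent.

```python
import numpy as np

def approx_shift_from_gyro(rate_dps: float, dt_s: float, px_per_deg: float) -> int:
    """Integrate the gyro rate over one frame interval and convert to pixels."""
    return int(round(rate_dps * dt_s * px_per_deg))

def refine_shift(prev: np.ndarray, cur: np.ndarray, approx_dx: int, radius: int = 3) -> int:
    """Search +/-radius pixels around the gyro estimate for the best horizontal shift."""
    h, w = prev.shape
    best_dx, best_err = approx_dx, np.inf
    for dx in range(approx_dx - radius, approx_dx + radius + 1):
        if abs(dx) >= w:
            continue
        a = prev[:, max(0, dx):w + min(0, dx)]
        b = cur[:, max(0, -dx):w - max(0, dx)]
        err = np.mean((a.astype(np.float32) - b.astype(np.float32)) ** 2)
        if err < best_err:
            best_dx, best_err = dx, err
    return best_dx   # apply the opposite shift to stabilize the current frame
```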

Journal ArticleDOI
TL;DR: In this article, a video camera operating at 4,500 frames per second (fps) was developed in 1991, and the basic configuration of the camera later became a de facto standard of high-speed video cameras.
Abstract: This paper reviews the high-speed video cameras developed by the authors. A video camera operating at 4,500 frames per second (fps) was developed in 1991. The partial and parallel readout scheme combined with fully digital memory with overwriting function enabled the world fastest imaging at the time. The basic configuration of the camera later became a de facto standard of high-speed video cameras. A video camera mounting an innovative image sensor achieved 1,000,000 fps in 2001. In-situ storage with more than 100 CCD memory elements is installed in each pixel of the sensor, which is capable of recording image signals in all pixels in parallel. Therefore, the sensor was named ISIS, the in-situ storage image sensor. The ultimate parallel recording operation promises the theoretical maximum frame rate. A sequence of more than one hundred consecutive images reproduces a smoothly moving image at 10 fps for more than 10 seconds. Currently, an image sensor with ultrahigh sensitivity is being developed in addition to the ultra-high frame rate, named PC-ISIS, the photon-counting ISIS, for microscopic biological observation. Some other technologies supporting the ultra-high-speed imaging developed are also presented.

Patent
24 Sep 2007
TL;DR: In this article, a system for video monitoring at least one banking business process may comprise a video analytics engine to process video of a bank area obtained by a video camera and to generate video primitives regarding the video; a user interface to define a set of activities of interest regarding the bank area being viewed, wherein each activity of interest identifies a rule and/or a query about the bank areas being viewed.
Abstract: A system for video monitoring at least one banking business process may comprise a video analytics engine to process video of a bank area obtained by a video camera and to generate video primitives regarding the video; a user interface to define at least one activity of interest regarding the bank area being viewed, wherein each activity of interest identifies a rule and/or a query regarding the bank area being viewed; and an activity inference engine to process the video primitives according to a banking business process, based on each activity of interest from the user interface to determine if any activity of interest occurred in the video.

Patent
01 Mar 2007
TL;DR: Based on area and color analyses, a cost-effective bi-directional people counter dedicated to the pedestrian flow passing through a gate or a door is proposed in this paper, where the passing people are roughly counted with the area of people projected on an image captured by a zenithal video camera.
Abstract: Based on area and color analyses, a cost-effective bi-directional people counter dedicated to the pedestrian flow passing through a gate or a door is proposed. Firstly, the passing people are roughly counted using the area of the people projected onto an image captured by a zenithal video camera. The moving direction of the pedestrian can be recognized by tracking each people-pattern with an analysis of its HSI histogram. To improve the accuracy of counting, the color vector extracted from the quantized histograms of intensity or hue is introduced to refine the early counting. In addition, the inherent problems of people touching one another and of merge/split phenomena can be overcome.
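
A simplified sketch of the area-based counting step, assuming an overhead (zenithal) view and a calibrated per-person projected area; the HSI-histogram tracking that the paper uses to recover walking direction and refine the count is reduced here to a trivial centroid comparison.

```python
import numpy as np

def count_people(foreground_mask: np.ndarray, person_area_px: float = 1500.0) -> int:
    """Rough head-count from the total foreground area of an overhead view."""
    blob_area = int(np.count_nonzero(foreground_mask))
    return int(round(blob_area / person_area_px))

def crossing_direction(prev_centroid_y: float, centroid_y: float) -> str:
    """Direction of a tracked person-pattern between two frames (image rows)."""
    return "entering" if centroid_y > prev_centroid_y else "leaving"

mask = np.zeros((240, 320), dtype=np.uint8)
mask[60:110, 100:130] = 1            # one person-sized blob (~1500 pixels)
print(count_people(mask))            # 1
print(crossing_direction(80.0, 95.0))
```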

Patent
21 Nov 2007
TL;DR: In this article, a correction unit composed of a video camera module for capturing the color information of the screen, an image data buffer module, and a main control module used for receiving and executing the computer command is presented.
Abstract: The apparatus comprises a correction unit composed of a video camera module for capturing the color information of the screen, an image data buffer module, and a main control module used for receiving and executing the computer command. Said video camera module and image data buffer module are respectively connected to the multi-screen display wall and the computer via the main control module. The method comprises: appointing the image displayed by each display unit on the multi-screen display wall; said video camera module shoots at least one picture and saves it in the image data buffer module; the computer extracts the color information from each display unit; and, according to the color of each display unit, the computer corrects the difference between said display units.

Patent
Gary J. Oswald1, Rafael Camargo1
22 Mar 2007
TL;DR: In this paper, the authors present mobile communication devices with two video cameras that can operate simultaneously and in real time: a first video camera pointing in a first direction with respect to the housing and generating a first video signal, and a second video camera pointing in a plurality of second directions and generating a second video signal.
Abstract: Disclosed are mobile communication devices, and methods for mobile communication devices including two video cameras that can operate simultaneously and in real-time. The device includes a first video camera pointing in a first direction and configured to generate a first video signal and a second video camera pointing in a second direction and configured to generate a second video signal. The device includes a processor configured to receive the first video signal and the second video signal and to encode the first video signal and the second video signal for simultaneous transmission. Disclosed is another device, including a housing having a fixed first video camera configured to point in a first direction with respect to the housing and generate a first video signal and a movable second video camera configured to point in a plurality of second directions with respect to the housing and generate a second video signal.