
Showing papers on "Smart camera published in 1996"


Patent
Jonathan J. Hull1, John F Cullen1
10 May 1996
TL;DR: In this article, a portable image transfer system includes a digital still camera which captures images in digital form and stores the images in a camera memory, a cellular telephone transmitter, and a central processing unit (CPU).
Abstract: A portable image transfer system includes a digital still camera which captures images in digital form and stores the images in a camera memory, a cellular telephone transmitter, and a central processing unit (CPU). The CPU controls the camera memory to cause it to output data representing an image and the CPU controls the cellular telephone transmitter to cause a cellular telephone to transmit the data received from the camera memory. A receiving station is coupled to the cellular telephone transmitter by a cellular network to receive image data and store the images.
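The control flow this abstract describes is simple enough to sketch. A minimal Python illustration of the CPU's role, with CameraMemory and CellularTransmitter as hypothetical stand-ins for the patent's hardware interfaces (the chunk size and framing are assumptions, not details from the patent):

```python
# Sketch of the patented control flow: the CPU reads stored images out of
# camera memory and pushes them over a cellular link.  CameraMemory and
# CellularTransmitter are hypothetical stand-ins for hardware interfaces.

class CameraMemory:
    def __init__(self, images):
        self._images = images            # captured images as byte strings

    def read_image(self, index):
        return self._images[index]

class CellularTransmitter:
    def __init__(self, send, chunk_size=512):
        self.send = send                 # callable that transmits one chunk
        self.chunk_size = chunk_size

    def transmit(self, data):
        # A real modem imposes its own framing; fixed-size chunks are an
        # illustrative assumption.
        for i in range(0, len(data), self.chunk_size):
            self.send(data[i:i + self.chunk_size])

def send_stored_images(memory, transmitter, count):
    """CPU loop: fetch each stored image and hand it to the transmitter."""
    for index in range(count):
        transmitter.transmit(memory.read_image(index))
```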

357 citations



Patent
28 Mar 1996
TL;DR: In this paper, an intelligent flash system for a digital camera having components including an image optical pickup, an interface circuit, a flash unit and a processor is presented. The processor samples the image intensity data, weighting the center image area more heavily, creates a histogram of pixel count vs. intensity, and separates the plot into a bar graph from which a determination of exposure is obtained.
Abstract: An intelligent flash system for a digital camera having components including an image optical pickup, an interface circuit, a flash unit and a processor. Upon activation of the camera, ambient lighting conditions are evaluated and, if flash energy is required, a first low-energy pre-flash is radiated; the reflected light is received by the optical pickup, which has a multiplicity of pixels, and the output of the pixels is converted to image intensity data by the interface circuit. The processor samples the image intensity data, weighting the center image area more heavily, creates a histogram of pixel count vs. intensity, and separates the plot into a bar graph from which a determination of exposure is obtained. The histogram is then used to calculate a multiplicative scaling factor that scales the first flash energy to an estimate of the flash energy for correct exposure. Conditions of extreme over- or underexposure result in the activation of a second flash at an adjusted energy level. The image data of the second flash are then analyzed and the exposure compared with the result of the first flash. A final determination of flash energy is then made based upon these results.
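The center-weighted histogram step lends itself to a short sketch. A minimal Python version, assuming a Gaussian centre weighting and a linear relation between flash energy and reflected brightness (neither detail is specified in the abstract):

```python
import numpy as np

def flash_scale_factor(preflash_image, target_mean=128.0):
    """Estimate a multiplicative flash-energy scale factor from a
    low-energy pre-flash frame, weighting the centre more heavily.
    Gaussian weighting and linear reflectance are assumptions."""
    h, w = preflash_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Centre-weighted mask: pixels near the middle count more.
    weights = np.exp(-(((yy - h / 2) / (h / 4)) ** 2 +
                       ((xx - w / 2) / (w / 4)) ** 2))
    hist, _ = np.histogram(preflash_image, bins=256, range=(0, 256),
                           weights=weights)
    # Weighted mean intensity recovered from the histogram.
    mean = (hist * np.arange(256)).sum() / hist.sum()
    # Reflected light is taken as proportional to flash energy, so scale
    # the pre-flash energy by the desired-to-observed brightness ratio.
    return target_mean / max(mean, 1.0)
```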

141 citations


Patent
13 May 1996
TL;DR: In this article, an improved airborne, direct digital panoramic camera system and method in which an in-line electro-optical sensor operating in conjunction with a data handling unit (82), a controller unit (84), and real time archive unit (86), eliminates the need for photographic film and film transport apparatus normally associated with prior art airborne reconnaissance cameras and yet still retains the very high image resolution quality which is so important in intelligence operations and commercial geographic information systems (GIS), mapping and other remote sensing applications.
Abstract: The present invention relates to an improved airborne, direct digital panoramic camera system and method in which an in-line electro-optical sensor (80) operating in conjunction with a data handling unit (82), a controller unit (84), and real time archive unit (86), eliminates the need for photographic film and film transport apparatus normally associated with prior art airborne reconnaissance cameras and yet still retains the very high image resolution quality which is so important in intelligence operations and commercial geographic information systems (GIS), mapping and other remote sensing applications. Precise geographic data for the system is provided by the use of navigation aids which include the Global Positioning Satellite (GPS) System (14) and an airborne platform carried GPS receiver (85). The present invention provides a simpler, more efficient and less costly panoramic camera by utilizing a simpler and less expensive line-focus type of lens in conjunction with an electro-optical line array sensor.

139 citations


Patent
11 Dec 1996
TL;DR: In this article, the position of the detection measurement frame whose feature pattern is most similar to the standard feature pattern obtained from the standard measurement frame is determined, and an imaging condition of the television camera is controlled on the basis of that position information, yielding a video camera system that can suitably track object motion.
Abstract: A video camera system can suitably track a moving object without influence from other objects outside the desired image. Detection feature patterns are formed after brightness and hue frequency feature data are obtained on the basis of the image information of each detection measurement frame. The position of the detection measurement frame whose feature pattern has the largest similarity to the standard feature pattern obtained from the standard measurement frame is determined. An imaging condition of the television camera is controlled on the basis of this position information, so that the system suitably tracks the object's motion. Further, the video camera system can obtain a face image of constant size with a simple construction. The area of the face image on the display plane is detected as the detected face area, and by comparing it with a standard face area, zoom processing is performed so that the difference becomes zero. This makes a distance sensor or similar device unnecessary, and a video camera system with a simple construction is obtained.
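A rough Python sketch of the two mechanisms: brightness and hue histograms form the feature pattern, histogram intersection (an assumed similarity measure; the abstract does not name one) picks the detection frame closest to the standard frame, and a simple proportional rule drives the detected face area toward the standard area:

```python
import numpy as np

def feature_pattern(patch_y, patch_hue, bins=16):
    """Brightness and hue frequency features for one measurement frame.
    The 0-255 value range for both channels is an assumption."""
    hy, _ = np.histogram(patch_y, bins=bins, range=(0, 256), density=True)
    hh, _ = np.histogram(patch_hue, bins=bins, range=(0, 256), density=True)
    return np.concatenate([hy, hh])

def best_detection_frame(standard, candidates):
    """Index of the candidate frame most similar to the standard pattern,
    using histogram intersection as the (assumed) similarity measure."""
    sims = [np.minimum(standard, c).sum() for c in candidates]
    return int(np.argmax(sims))

def zoom_correction(detected_face_area, standard_face_area, gain=0.001):
    """Zoom command that drives the detected face area toward the
    standard area; it vanishes when the difference becomes zero."""
    return gain * (standard_face_area - detected_face_area)
```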

110 citations


Patent
17 Dec 1996
TL;DR: In this article, a camera tracking system that continuously tracks sound-emitting objects is provided, where a video camera is coupled to the microphones via an interface that processes information transmitted from the microphones for directing the camera.
Abstract: A camera tracking system that continuously tracks sound-emitting objects is provided. A sound activation feature of the system enables a video camera to track speakers in a manner similar to the natural transition that occurs when people turn their eyes toward different sounds. The system is well suited for video-phone applications. It comprises a video camera for transmitting an image from its remote location, a screen for receiving images, and microphones for directing the camera. The camera may be coupled to the microphones via an interface that processes information transmitted from the microphones for directing the camera. The system may exploit the translucent properties of LCD screens by disposing a video camera behind such a screen, enabling persons at each remote location to look directly into the screen and at the camera. The interface enables intelligent framing of a speaker without mechanically repositioning the camera. The microphones are positioned using triangulation techniques. Characteristics of the audio signals are processed by the interface to determine the movement of the speaker and direct the camera. As the characteristics sensed by the microphones change, the interface directs the camera toward the speaker, continuing until the change in the characteristics stabilizes, thus precisely directing the camera toward the speaker.
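The direction-finding idea can be illustrated with a two-microphone simplification (the patent describes triangulation with multiple microphones; the far-field model and the smoothing gain below are assumptions):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def bearing_from_delay(delay_s, mic_spacing_m):
    """Far-field bearing of a sound source from the arrival-time
    difference between two microphones.  0 rad = straight ahead."""
    d = SPEED_OF_SOUND * delay_s                 # path-length difference
    # Clamp for numerical safety, then invert d = spacing * sin(theta).
    return np.arcsin(np.clip(d / mic_spacing_m, -1.0, 1.0))

def steer_camera(current_pan, delay_s, mic_spacing_m, gain=0.5):
    """Move the pan angle a fraction of the way toward the speaker, so
    the camera settles once the measured delay stabilises."""
    target = bearing_from_delay(delay_s, mic_spacing_m)
    return current_pan + gain * (target - current_pan)
```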

83 citations


Patent
05 Sep 1996
TL;DR: In this paper, a single use camera that incorporates a motion sensor enables automatic activation of various camera features and utilities, such as shutter release and the taking of a photograph, activation of a flash unit either to aid in the taking of a photograph or as a warning or signaling device, and activation of an incorporated audible alarm.
Abstract: The invention relates to single use cameras. In particular it relates to a single use camera that incorporates a motion sensor. The motion sensor enables automatic activation of various camera features and utilities. These include the automatic activation of such normal camera actions as shutter release and the taking of a photograph, activation of a flash unit either to aid in the taking of a photograph or as a warning or signaling device and activation of an incorporated audible alarm.

83 citations


Patent
10 Jun 1996
TL;DR: In this article, a camera tracking system determines the 3D location and orientation of a camera providing live recording of a subject, thereby defining a 3D coordinate system of the live action scene into which animated objects or characters may be automatically mapped with proper scale and 3D perspective by a computer animation system.
Abstract: A camera tracking system determines the three-dimensional (3D) location and orientation of the film plane of a camera providing live recording of a subject, thereby defining a 3D coordinate system of the live action scene into which animated objects or characters may be automatically mapped with proper scale and 3D perspective by a computer animation system.

60 citations


Patent
26 Nov 1996
TL;DR: In this paper, a controller is provided for a terminal input apparatus which is connected by a communication network to a camera on the partner side and which has an image display apparatus with a multi-window display function for selecting and displaying camera settings.
Abstract: In a video system of the invention, a controller is provided for a terminal input apparatus which is connected by a communication network to a camera on the partner side and which has an image display apparatus with a multi-window display function for selecting and displaying camera settings. The image pickup operations required to operate the camera on the partner side, for example the image pickup direction, focal distance, panning, exposure amount, white balance, and automatic focusing of the designated camera, are input by using the image display and a window display of the image display apparatus. The operation of the camera on the partner side, and the operations of the tripod, movable arm, and the like that hold the camera, are controlled through a communicating device. The photographed image is displayed by the display apparatus.

55 citations


Proceedings ArticleDOI
04 Nov 1996
TL;DR: Experimental results showed intelligent monitoring to be effective for super-long distance telerobotic systems facing time delay and limited communication capacity.
Abstract: Time delay and limited communication capacity are the primary constraints in super-long distance telerobotic systems such as space telerobotic systems. Intelligent monitoring addresses this problem by providing a function that selects important scenes from a monitoring camera to help the operator. We constructed a telerobotic testbed which includes a connection through the international ISDN and typical space structures (a space robot, a truss structure, and an ORU). We conducted trans-Pacific teleoperation experiments using the testbed, with ETL as the remote site and a telerobotic console at JPL (the Jet Propulsion Laboratory in Pasadena, California) as the local site. Experimental results showed intelligent monitoring to be effective against the above problems.

38 citations


Patent
Louis Paul Herzberg1
03 May 1996
TL;DR: In this paper, a multiple position camera apparatus and method for a three-dimensional computer controlled telepresence camera system useful in surrogate travel type applications is presented, which enables a telepresence application provider to form multi-view, application-specific stored image sequences from up to six orthogonally positioned cameras.
Abstract: A multiple position camera apparatus and method for a three-dimensional computer controlled telepresence camera system useful in surrogate travel type applications. In one embodiment, it enables a telepresence application provider to form multi-view, application-specific stored image sequences from up to six orthogonally positioned cameras. It furthermore enables a telepresence user to subsequently retrieve for remote viewing particular subsets of the available image sequences and/or groups of sequences via a monitoring and display system. A viewing procedure provides the user with the ability to proceed spatially and/or contiguously along any camera-to-camera direction. A wire cage is further used to enable the definition and setting of the field of view of each camera.

Patent
26 Sep 1996
TL;DR: In this article, the authors propose a video camera system, such as a supervisory system, in which the amount of calculation for motion detection is reduced so as to lower the hardware requirements, and a microcomputer controls the position of the video camera so that the area in which motion has been detected is positioned at the central portion of the camera's field of view.
Abstract: The invention provides a video camera system, such as a supervisory system, in which the amount of calculation for motion detection is reduced so as to lower the hardware requirements. The output of a video camera is sent to a data processing apparatus. In the data processing apparatus, an evaluation value calculation block extracts the video signal of each of a plurality of areas and calculates an evaluation value of the image for each area. A microcomputer calculates a reference value based on the evaluation values obtained in the ordinary state and stores it, then compares the reference value with the current evaluation value to detect motion in the image. The microcomputer also controls the position of the video camera so that the area in which motion has been detected is positioned at the central portion of the camera's field of view.
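A compact sketch of the evaluation-value scheme, using per-area mean luminance as the evaluation value (the abstract leaves the exact measure open) and a fixed threshold against the stored reference. The returned grid cell would then drive the pan/tilt control that centres that area in the field of view:

```python
import numpy as np

def area_evaluation_values(frame, grid=(4, 4)):
    """One cheap evaluation value per area: the mean luminance of each
    cell in a grid over the frame (the measure itself is an assumption)."""
    h, w = frame.shape
    gh, gw = grid
    cropped = frame[:h - h % gh, :w - w % gw]
    return cropped.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

def detect_motion(reference, current, threshold=10.0):
    """Compare stored reference values with the current ones; return the
    (row, col) of the area with the largest change, or None."""
    diff = np.abs(current - reference)
    idx = np.unravel_index(np.argmax(diff), diff.shape)
    return idx if diff[idx] > threshold else None
```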

Patent
Masashi Hori1
14 May 1996
TL;DR: The image pickup apparatus has a camera which encodes object image data and further comprises a recording unit 101 for recording image data, a connector 200 for connecting the camera with an expansion card 111 that includes a signal processor 701 for processing the image data stored in the recording unit, and a memory and bus controller 102 for controlling image data.
Abstract: The image pickup apparatus has a camera which encodes object image data. It further comprises a recording unit 101 for recording image data, a connector 200 for connecting the camera with an expansion card 111 that includes a signal processor 701 for processing the image data stored in the recording unit 101, and a memory and bus controller 102 for controlling image data. The signal processor 701 performs control so that program data transmitted from the camera 100 are written into a flash ROM 703 via an expansion bus interface 201.

Proceedings ArticleDOI
02 Dec 1996
TL;DR: A framework utilizing intelligent visual modeling, recognition, and servoing capabilities for assisting the surgeon in manoeuvring the scope (camera) in laparoscopy is proposed, which integrates top-down model guidance, bottom-up image analysis, and surgeon-in-the-loop monitoring for added patient safety.
Abstract: This paper presents our research on bringing state-of-the-art vision and robotics technologies to enhance the emerging laparoscopic surgical procedure. In particular, a framework utilizing intelligent visual modeling, recognition, and servoing capabilities for assisting the surgeon in manoeuvring the scope (camera) in laparoscopy is proposed. The proposed framework integrates top-down model guidance, bottom-up image analysis, and surgeon-in-the-loop monitoring for added patient safety. For the top-down directives, high-level models are used to represent the abdominal anatomy and to encode choreographed scope movement sequences based on the surgeon's knowledge. For the bottom-up analysis, vision algorithms are designed for image analysis, modeling, and matching in a flexible, deformable environment (the abdominal cavity). For reconciling the top-down and bottom-up activities, robot servoing mechanisms are realized for executing choreographed scope movements with active vision guidance.

Proceedings ArticleDOI
08 Dec 1996
TL;DR: A multisensor-based control system for an active pan/tilt/zoom camera is presented, and pixel-level fusion of skin color with an image produced from interaural sound delay provides a simple means of detecting the face of the current speaker.
Abstract: A multisensor-based control system for an active pan/tilt/zoom camera is presented. Acoustic and visual information from multimedia sensors is used to locate the person currently speaking and to track people moving about in a room. Pixel-level fusion of skin color with an image produced from interaural sound delay provides a simple means of detecting the face of the current speaker. For wider-scale surveillance tasks, moving targets are detected using color image differencing. Target data is fed to a behavior-based fuzzy control system which uses expert rules to aim the camera. Applications include video-conferencing, security, surveillance, and advances in human-computer interaction. The system has been implemented on a multimedia PC equipped with a wide-angle camera, a Canon VC-C1 pan/tilt/zoom camera, and two microphones.
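A minimal sketch of the pixel-level fusion step, assuming a multiplicative combination of the two cues followed by a weighted centroid (the paper says only that skin colour is fused per pixel with an image built from interaural delay):

```python
import numpy as np

def fuse_face_evidence(skin_mask, sound_map):
    """Fuse a binary skin-colour mask with a per-pixel map of acoustic
    evidence (larger where the interaural delay points).  Multiplicative
    fusion is an assumption.  Returns the (row, col) estimate of the
    current speaker's face, or None if there is no joint evidence."""
    evidence = skin_mask.astype(float) * sound_map
    if evidence.max() <= 0:
        return None
    ys, xs = np.nonzero(evidence)
    w = evidence[ys, xs]
    # Centroid of the fused evidence = estimated face position.
    return (np.average(ys, weights=w), np.average(xs, weights=w))
```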

Proceedings ArticleDOI
19 Sep 1996
TL;DR: The techniques presented in this paper are based on a data-driven, active philosophy of vision-based intersection navigation; traversal is accomplished by imaging relevant parts of the intersection using a combination of active camera control techniques and a virtual active vision tool called a virtual camera.
Abstract: Much progress has been made toward understanding the autonomous on-road navigation problem using vision based methods. A next step in this evolution is the intelligent detection and traversal of road junctions and intersections. The techniques presented in this paper are based on a data driven, active philosophy of vision based intersection navigation. Traversal is accomplished by imaging relevant parts of the intersection using a combination of active camera control techniques and a virtual active vision tool called a virtual camera. By monitoring the response of the underlying lane keeping system to the created images, intersections and road junctions can be detected and traversed.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: This paper presents a new camera arrangement for an autonomous vehicle that consists of three cameras: two cameras with divergent optical axes and 4-mm wide angle lenses provide the system with an overview of the traffic scene, and a third camera with a 16-mm telelens is used to obtain more detailed information in the central field of view.
Abstract: This paper presents a new camera arrangement for an autonomous vehicle. The system consists of three cameras: two cameras with divergent optical axes and 4-mm wide-angle lenses provide the system with an overview of the traffic scene, and a third camera with a 16-mm telephoto lens is used to obtain more detailed information in the central field of view. In order to achieve active vision, all cameras are mounted on a pan-and-tilt platform, so that they can be pointed in azimuth and elevation. Two different camera models are developed and their parameters determined in an off-line automatic calibration process. Since the fields of view of the three cameras overlap, stereo triangulation is possible. A feature-based approach using position trees helps to solve the correspondence problem in real time.
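Since the fields of view overlap, a feature matched in two calibrated cameras can be triangulated. A standard midpoint-of-rays construction in Python, consistent with (but not taken from) the paper:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation of two camera rays (centre c, unit
    direction d): find the closest points on each ray and return the
    midpoint between them as the 3D feature position."""
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    b = c2 - c1
    d11, d12, d22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = d11 * d22 - d12 * d12
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    # Ray parameters of the mutually closest points.
    t1 = (d22 * (b @ d1) - d12 * (b @ d2)) / denom
    t2 = (d12 * (b @ d1) - d11 * (b @ d2)) / denom
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2.0
```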

Patent
Takashi Oya1, Tomoaki Kawai1
14 Mar 1996
TL;DR: In this paper, a camera control system with a plurality of cameras connected to a network is presented, enabling users to be informed of the status of use of each camera and of each user's access rights in real time.
Abstract: A camera control system having a plurality of cameras connected to a network, enabling users to be informed of the status of use of each camera and of each user's access rights in real time. In the system, one of the cameras controllable via the network is selected, and both the video display of output from the selected camera and the operation of the camera are controlled; a camera-status display device displays the statuses of at least two cameras in real time.

Proceedings ArticleDOI
13 Oct 1996
TL;DR: The ease of calculating time-to-impact with this camera suggests using the sensor system for real-time applications, and some guidelines for including the camera in an autonomous navigation system are given.
Abstract: A foveated camera has been designed and fabricated. The camera is implemented using a new foveated CMOS sensor which incorporates a log-polar transformation. This transformation has particularly interesting properties for image processing, including selective information reduction and some invariances. The structure of the sensor permits individual access to each pixel, exactly as one accesses a RAM. This is a very interesting property that makes a big difference between a CCD-based camera and a CMOS-based camera. On the other hand, the CMOS nature of the sensor implies significant fixed-pattern noise due to the mismatch of the cell transistors. Fixed-pattern-noise correction circuitry has therefore been included in the camera in order to achieve good image quality. The solutions adopted for achieving a low noise-to-signal ratio are also presented. Finally, some guidelines for including this camera in an autonomous navigation system are shown. The ease of calculating time-to-impact with this camera suggests the use of this sensor system for real-time applications.
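A software approximation of the sensor's log-polar sampling, with illustrative ring and sector counts (in the real chip the layout is fixed in silicon). Because radius grows exponentially with ring index, scalings of the scene become simple shifts along the rho axis, one of the invariances mentioned above:

```python
import numpy as np

def log_polar_sample(image, rings=64, sectors=128, r_min=2.0):
    """Resample a Cartesian image onto a log-polar grid, mimicking a
    foveated sensor layout.  Parameter values are illustrative."""
    h, w = image.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    out = np.zeros((rings, sectors), dtype=image.dtype)
    for i in range(rings):
        # Exponential ring spacing: densest sampling at the fovea.
        rho = r_min * (r_max / r_min) ** (i / (rings - 1))
        for j in range(sectors):
            theta = 2.0 * np.pi * j / sectors
            y = int(round(cy + rho * np.sin(theta)))
            x = int(round(cx + rho * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = image[y, x]
    return out
```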

Patent
28 Nov 1996
TL;DR: In this article, a pseudo-panoramic image window 3-1 displays pseudo-panoramic images, reducing the image size from the actual images or displaying them at low resolution to match the display capability, at a level that still allows the user to recognize the conditions.
Abstract: PROBLEM TO BE SOLVED: To make it easy to recognize what kind of video images are present in the photographable range, by photographing video images of plural screens in response to an instruction requesting wide-visual-field video images, and then generating, transmitting and displaying the wide-visual-field video images. SOLUTION: A pseudo-panoramic image window 3-1 displays pseudo-panoramic images; it reduces the image size from the actual images or displays them at low resolution, so as to match the display capability, at a level that still allows the user to recognize the conditions. A video window 3-2 displays, in real time, the video images photographed by a camera server under camera control from a camera client device. A camera control window 3-3 specifies the camera control parameters required for operations such as panning, tilting and zooming. A photographable-range fetching button 304 requests the pseudo-panoramic images; when it is clicked with a mouse or the like, the processing for fetching the pseudo-panoramic images is started.

Proceedings ArticleDOI
04 Nov 1996
TL;DR: A new system that can acquire panoramic images quickly using the camera panning technique is introduced, and a coarse-to-fine panoramic imaging technique based on smart sensing principles is developed, which results in high acquisition speeds and proportionally low storage requirements.
Abstract: Mobile robots often require a full 360° view of their environment in order to perform navigational tasks such as identifying landmarks, localizing within the environment, and determining free paths in which to move. In the past few years, several research efforts have focused on obtaining panoramic views (i.e. 360° images) around the robot using wide-angle lenses, spherical or conic mirrors, or rotating a camera while imaging. Although the wide-angle lens and mirror techniques can provide rapid image acquisition, they lack the high azimuth angle resolution required by many mobile robot navigational tasks. Panning (i.e. rotating) a camera, on the other hand, can provide high azimuth angle resolution, but the process takes a long time to obtain a panoramic view. We introduce a new system that can acquire panoramic images quickly using the camera panning technique. The system makes use of a fast line scan camera instead of a slower, conventional area scan camera. In addition, we have developed a coarse-to-fine panoramic imaging technique that is based on smart sensing principles. Using the active vision paradigm, we control the motion of the rotating camera using feedback from the images. This results in high acquisition speeds and proportionally low storage requirements. Preliminary experimentation has been carried out, and results are given.
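A sketch of the coarse-to-fine idea under stated assumptions: grab_column(angle) returns one fixed-height line-scan column and detail(col) scores its structure (both are hypothetical callables; the threshold and step sizes are illustrative):

```python
import numpy as np

def coarse_to_fine_panorama(grab_column, detail, coarse_step=2.0,
                            fine_step=0.25):
    """Pan a line-scan camera through 360 degrees, stepping finely
    through detailed regions and coarsely through bland ones, so that
    acquisition time and storage track the scene's information content."""
    columns = []
    angle = 0.0
    while angle < 360.0:
        col = grab_column(angle)
        columns.append(col)
        # Image feedback controls the panning motion, per the abstract.
        step = fine_step if detail(col) > 0.5 else coarse_step
        angle += step
    return np.stack(columns, axis=1)  # panorama, one column per shot
```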

Patent
24 Dec 1996
TL;DR: In this paper, the problem of providing an electronic camera with a global positioning system, in which an image, its position and its direction are output, displayed or stored with high precision, is solved by providing the electronic camera with a detection means that detects the pickup position and direction.
Abstract: PROBLEM TO BE SOLVED: To provide an electronic camera with a global positioning system in which an image, its position and its direction are output, displayed or stored with high precision, by providing the electronic camera with a detection means that detects the pickup position and direction with high precision. SOLUTION: A positioning system consisting of an antenna 11, a reception means such as a GPS (Global Positioning System) receiver, and a data processing means 13 that calculates a position from the received information is provided in a control section 9 of the electronic camera. When image pickup with the electronic camera is started, the positioning device measures the position when a positioning switch SW1 is closed, and the position data are displayed on a display means 14. The electronic camera picks up an image when an image pickup switch SW2 is closed, and the picked-up image information and the position data are stored in an external storage means 8. An electronic camera with a positioning device is thus realized, in which the image and the position are output, displayed or stored with high precision.

Patent
23 Jul 1996
TL;DR: In this article, the authors propose providing an end user with an environment for easily operating a video camera remotely through a general-purpose network such as the Internet, in which the character string in the file name of a file transfer request is interpreted as camera control commands.
Abstract: PROBLEM TO BE SOLVED: To provide an end user with an environment for easily operating a video camera remotely through a general-purpose network such as the Internet. SOLUTION: When a file transfer request in the Internet file-transfer-request format arrives at the camera controller 1001 from an external device 1002 connected to the network, the character string portion of the requested file name is treated as a description of camera control characters. When an entry matching the camera control format is present in the character string, the camera is controlled according to that entry, photographing is performed, and the photographed video images are transferred to the requesting origin as if they were the contents of the requested image file.
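A toy Python handler showing how control characters embedded in a requested file name might be parsed. The camera_p&lt;pan&gt;_t&lt;tilt&gt;_z&lt;zoom&gt;.jpg grammar and the camera object's methods are invented for illustration, since the abstract does not specify the format:

```python
import re

# Hypothetical filename grammar: requesting "camera_p30_t-10_z2.jpg"
# pans to 30 deg, tilts to -10 deg, zooms 2x, then returns the captured
# frame as if it were a stored image file.
PATTERN = re.compile(r"camera_p(-?\d+)_t(-?\d+)_z(\d+)\.jpg")

def handle_file_request(filename, camera):
    m = PATTERN.fullmatch(filename)
    if m is None:
        return None                       # an ordinary file request
    pan, tilt, zoom = map(int, m.groups())
    camera.move(pan=pan, tilt=tilt, zoom=zoom)   # hypothetical interface
    return camera.capture_jpeg()          # served as the "file" contents
```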

Patent
Eiji Kato1, Koichiro Tanaka1
26 Nov 1996
TL;DR: In this paper, a camera control system for remotely controlling a camera has one or more camera management units (12,14) for managing one or more cameras (16,18) whose image-capture orientation and magnification are freely externally controllable, and one or more camera controllers (20,22) for controlling the cameras via the management units.
Abstract: A camera control system for remotely controlling a camera has one or more camera management units (12,14) for managing one or more cameras (16,18) whose image-capture orientation and magnification are freely externally controllable, and one or more camera controllers (20,22) for controlling the cameras via the camera management units. When control of a camera is designated in a camera controller of this system, a frame-rate alteration request is issued by that camera controller. Upon receiving the frame-rate alteration request, the camera management unit increases the frame rate of the video obtained from the camera and transfers the image data to the camera controller.
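The frame-rate handshake can be sketched as a small state machine; the class, method names, and rate values below are invented for illustration:

```python
# Sketch of the frame-rate alteration handshake described above: the
# controller that currently holds camera control receives video at a
# boosted rate, everyone else at the base rate.  All names are invented.

class CameraManagementUnit:
    def __init__(self, camera, base_fps=5, boosted_fps=25):
        self.camera = camera
        self.rates = {}                  # controller id -> frame rate
        self.base_fps, self.boosted_fps = base_fps, boosted_fps

    def on_frame_rate_request(self, controller_id):
        # Issued by a controller when control of the camera is designated.
        self.rates[controller_id] = self.boosted_fps

    def on_control_released(self, controller_id):
        self.rates[controller_id] = self.base_fps

    def frame_rate_for(self, controller_id):
        return self.rates.get(controller_id, self.base_fps)
```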

Patent
10 Apr 1996
TL;DR: In this paper, a 3-D rendering system coupled with a magnetic tracker system (19) is used to produce a 3-D composite video image corresponding to the hand-held camera's position and orientation with respect to the rendered scene.
Abstract: A virtual set video production system (5), as shown in the figure, having a hand-held camera tracking system (14) is provided. The virtual set video production system (5) includes a 3-D rendering system (14A) which generates a 3-D rendered scene and a hand-held camera (12A) which captures an image of the talent. The 3-D rendering system (14A) is coupled to a magnetic tracker system (19) which provides position and orientation information for the hand-held camera (12A). The 3-D rendering system (14A) then provides a 3-D rendered scene based upon the position and orientation information from the magnetic tracker system (19). A compositer (14B) then combines the video image of the talent provided by the hand-held camera (12A) with the 3-D rendered scene to produce a 3-D composite video image corresponding to the hand-held camera's position and orientation with respect to the 3-D rendered scene. The composite 3-D image is suitable for broadcast.

Patent
22 Apr 1996
TL;DR: In this article, the authors present equipment that can be easily operated by a user and that supplies the required information to the user through a graphical interface, by providing a video signal supply means, a symbol display means and a control means.
Abstract: PROBLEM TO BE SOLVED: To provide equipment that can be easily operated by a user and that supplies the required information to the user through a graphical interface, by providing a video signal supply means, a symbol display means and a control means. SOLUTION: The equipment is constituted so that video images are transmitted through a network 100 to a monitoring terminal 60 at a remote location, and camera control signals from the monitoring terminal 60 are received to perform camera control. The monitoring terminal 60 sends control signals for a video camera 10 to a video transmission terminal 20; the video transmission terminal 20 controls the video camera 10 according to the control signals and reports back the resulting state of the video camera 10. The monitoring terminal 60 displays the state of the video camera 10 on a display device, for instance a bit-map display 135. Video data sent from the video transmission terminal 20 are also received, decompressed in software, and displayed on the display device in real time.

Journal ArticleDOI
TL;DR: A model of an acquisition line made up of a CCD camera, a lens and a frame grabber card simulates the acquisition process in order to obtain images of virtual objects, and can characterise the performance of subpixel-accuracy methods for object positioning.
Abstract: In this paper we propose a model of an acquisition line made up of a CCD camera, a lens and a frame grabber card. The purpose of this model is to simulate the acquisition process in order to obtain images of virtual objects. The response time has to be short enough to permit interactive simulation. All the stages are modeled. In the first phase, we present a geometric model which supplies a point-to-point transformation that provides, for a space point in the camera field, the corresponding point on the plane of the CCD sensor. The second phase consists of modeling the discrete space, which implies passing from the continuous known object view to a discrete image, in accordance with the different origins of the contrast loss. In the third phase, the video signal is reconstituted in order to be sampled by the frame grabber card. The practical results are close to reality when compared through image processing. This tool makes it possible to obtain a short-computation-time simulation of a vision sensor. This enables interactivity either with the user or with software for the design and simulation of an industrial workshop equipped with a vision system. It makes testing possible and validates the choice of sensor placement and of image processing and analysis. Thanks to this simulation tool, we can control perfectly the position of the object image placed under the camera, and in this way we can characterise the performance of subpixel-accuracy methods for object positioning.
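The first (geometric) stage can be illustrated with a plain pinhole projection from a 3D camera-frame point to CCD pixel coordinates; the paper's actual geometric model may include lens distortion terms that are omitted here:

```python
def project_to_sensor(point_cam, focal_mm, pixel_pitch_mm, center_px):
    """Pinhole projection of a 3D point (camera coordinates, mm) onto
    the CCD plane, then conversion to pixel coordinates.  A distortion-
    free pinhole model is an assumption."""
    x, y, z = point_cam
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    u_mm = focal_mm * x / z              # perspective projection onto
    v_mm = focal_mm * y / z              # the sensor plane, in mm
    u = center_px[0] + u_mm / pixel_pitch_mm
    v = center_px[1] + v_mm / pixel_pitch_mm
    return u, v
```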

Journal ArticleDOI
TL;DR: This paper presents a camera operator oriented interface which incorporates camera motion rules derived from accepted conventions in filming practice and an interactive tool for planning camera shots within any 3D geometric world.
Abstract: Though a camera may theoretically move anywhere in space, when creating a computer film a director may want to perform a limited set of precise manoeuvres. The traditional camera model based on status variables lets users control all the camera degrees of freedom but does not help them define context-driven camera motions. As sophisticated and photorealistic 3D worlds are being created, higher level camera control methods are required to make shots which describe the world content. This paper presents a camera operator oriented interface which incorporates camera motion rules derived from accepted conventions in filming practice. The camera operator can use a set of basic techniques and concepts such as framing, tracking, panning, zooming and shooting to build complex camera actions. This paper describes a procedural interface for writing camera motion procedures and an interactive tool for planning camera shots within any 3D geometric world.
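A toy version of such a camera-operator interface in Python, with basic verbs composed into shots; the state variables and verb semantics are illustrative assumptions rather than the paper's actual API:

```python
import numpy as np

class CameraOperator:
    """Operator-level verbs (pan, zoom, track) that hide the raw camera
    degrees of freedom, in the spirit of the paper's interface."""

    def __init__(self, position, look_at, fov_deg=45.0):
        self.position = np.asarray(position, dtype=float)
        self.look_at = np.asarray(look_at, dtype=float)
        self.fov_deg = fov_deg

    def pan(self, angle_deg):
        # Rotate the view direction about the vertical (y) axis.
        a = np.radians(angle_deg)
        d = self.look_at - self.position
        rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(a), 0.0, np.cos(a)]])
        self.look_at = self.position + rot @ d

    def zoom(self, factor):
        self.fov_deg /= factor           # narrower field of view = zoom in

    def track(self, target, step=0.1):
        # Move toward a subject while keeping it framed.
        target = np.asarray(target, dtype=float)
        self.position += step * (target - self.position)
        self.look_at = target
```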

Proceedings ArticleDOI
22 Apr 1996
TL;DR: A novel self-organizing neural network (SOIM) that learns a calibration-free spatial representation of 3D point targets, in a manner that is invariant to changing camera configurations, is proposed and used to develop a new framework for robot control with active vision.
Abstract: Robots that use an active camera system for visual feedback can achieve greater flexibility, including the ability to operate in a dynamically changing environment. Incorporating active vision into a robot control loop involves some inherent difficulties, including calibration, and the need for redefining the goal as the camera configuration changes. In this paper, we propose a novel self-organizing neural network (SOIM) that learns a calibration-free spatial representation of 3D point targets in a manner that is invariant to changing camera configurations. This representation is used to develop a new framework for robot control with active vision. The salient feature of this framework is that it decouples active camera control from robot control. The feasibility of this approach is explored with the help of computer simulations and experiments with the University of Illinois Active Vision System (UIAVS).

Proceedings ArticleDOI
TL;DR: The design of the resulting platform was driven by payload requirements for binocular motorized C-mount lenses on a platform whose performance and articulation emulate those of the human eye-head system; the result is a 4-DOF mechanism driven by servo-controlled DC brush motors.
Abstract: The term 'active vision' was first used by Bajcsy at a NATO workshop in 1982 to describe an emerging field of robot vision which departed sharply from traditional paradigms of image understanding and machine vision. The new approach embeds a moving camera platform as an in-the-loop component of robotic navigation or hand-eye coordination. Visually servoed steering of the focus of attention supersedes the traditional functions of recognition and gauging. Custom active vision platforms soon proliferated in research laboratories in Europe and North America. In 1990 the National Science Foundation funded the design of a common platform to promote cooperation and reduce cost in active vision research. This paper describes the resulting platform. The design was driven by payload requirements for binocular motorized C-mount lenses on a platform whose performance and articulation emulate those of the human eye-head system. The result is a 4-DOF mechanism driven by servo-controlled DC brush motors. A crossbeam supports two independent worm-gear-driven camera vergence mounts at speeds up to 1,000 degrees per second over a range of +/- 90 degrees from dead ahead. This crossbeam is supported by a pan-tilt mount whose horizontal axis intersects the vergence axes for translation-free camera rotation about these axes at speeds up to 500 degrees per second.