
Showing papers on "Smart camera published in 2000"


ReportDOI
01 Aug 2000
TL;DR: The Light Field Video Camera as mentioned in this paper is a modular embedded design based on the IEEE 1394 High Speed Serial Bus, with an image sensor and MPEG2 compression at each node.
Abstract: We present the Light Field Video Camera, an array of CMOS image sensors for video image-based rendering applications. The device is designed to record a synchronized video dataset from over one hundred cameras to a hard disk array using as few as one PC per fifty image sensors. It is intended to be flexible, modular and scalable, with much visibility and control over the cameras. The Light Field Video Camera is a modular embedded design based on the IEEE 1394 High Speed Serial Bus, with an image sensor and MPEG2 compression at each node. We show both the flexibility and scalability of the design with a six-camera prototype.
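A quick back-of-the-envelope check makes the "one PC per fifty image sensors" figure plausible; the per-camera MPEG2 bitrate below is an assumed value for illustration, not a number reported in the abstract.

```python
# Back-of-the-envelope sketch with assumed numbers (not from the paper):
# aggregate data rate of 50 MPEG2-compressed camera nodes sharing one PC.
MPEG2_MBIT_PER_CAMERA = 4.0     # assumed compressed bitrate per camera
CAMERAS_PER_PC = 50
IEEE1394_MBIT = 400.0           # nominal 1394a bus bandwidth

total_mbit = MPEG2_MBIT_PER_CAMERA * CAMERAS_PER_PC
print(f"aggregate: {total_mbit:.0f} Mbit/s ({total_mbit / 8:.0f} MB/s to disk), "
      f"bus headroom: {IEEE1394_MBIT - total_mbit:.0f} Mbit/s")
```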

313 citations


Patent
11 Jan 2000
TL;DR: This paper describes a system in which one or more controllable cameras and one or more monitors are connected to each other through a communication device, so that the states of all the cameras can always be seen from any monitor.
Abstract: In a system wherein one or more controllable cameras and one or more monitors for displaying video information received from the cameras are connected to each other through a communication device, the states of all the cameras can always be seen at any monitor. Map management server software is provided to receive notifications of camera state information from all the cameras over the communication device and to transmit the camera state information to all the monitors.

247 citations


Proceedings ArticleDOI
12 Jun 2000
TL;DR: It is illustrated that by judiciously choosing the system modules and performing a careful analysis of the influence of various tuning parameters on the system it is possible to: perform proper statistical inference, automatically set control parameters and quantify limits of a dual-camera real-time video surveillance system.
Abstract: The engineering of computer vision systems that meet application specific computational and accuracy requirements is crucial to the deployment of real-life computer vision systems. This paper illustrates how past work on a systematic engineering methodology for vision systems performance characterization can be used to develop a real-time people detection and zooming system to meet given application requirements. We illustrate that by judiciously choosing the system modules and performing a careful analysis of the influence of various tuning parameters on the system it is possible to: perform proper statistical inference, automatically set control parameters and quantify limits of a dual-camera real-time video surveillance system. The goal of the system is to continuously provide a high resolution zoomed-in image of a person's head at any location of the monitored area. An omni-directional camera video is processed to detect people and to precisely control a high resolution foveal camera, which has pan, tilt and zoom capabilities. The pan and tilt parameters of the foveal camera and its uncertainties are shown to be functions of the underlying geometry, lighting conditions, background color/contrast, relative position of the person with respect to both cameras as well as sensor noise and calibration errors. The uncertainty in the estimates is used to adaptively estimate the zoom parameter that guarantees with a user specified probability, α, that the detected person's face is contained and zoomed within the image.
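The key statistical idea, choosing the zoom so that the detected face stays in frame with user-specified probability α despite uncertainty in the pan/tilt estimate, can be sketched roughly as follows. This is a minimal illustration under an assumed Gaussian uncertainty model; the function and parameter names are not the paper's.

```python
# Minimal sketch (assumptions, not the paper's code): choose the foveal
# camera's field of view so that, under Gaussian uncertainty in the estimated
# head direction, the head stays in frame with probability at least alpha.
from statistics import NormalDist

def zoom_for_coverage(pan_deg, tilt_deg, sigma_pan_deg, sigma_tilt_deg,
                      head_angular_size_deg, alpha=0.95):
    """Return (pan, tilt, fov) commands for the foveal camera."""
    # Two-sided Gaussian quantile: the head centre lies within +/- k*sigma
    # of the estimate with probability alpha.
    k = NormalDist().inv_cdf(0.5 + alpha / 2.0)
    # The field of view must cover the head plus the positional uncertainty.
    fov_h = head_angular_size_deg + 2.0 * k * sigma_pan_deg
    fov_v = head_angular_size_deg + 2.0 * k * sigma_tilt_deg
    return pan_deg, tilt_deg, max(fov_h, fov_v)

if __name__ == "__main__":
    print(zoom_for_coverage(32.0, -5.0, 1.2, 0.8, 4.0, alpha=0.95))
```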

190 citations


Proceedings ArticleDOI
24 Apr 2000
TL;DR: The motivation of this work is to take advantage of both free-standing and robot-mounted sensors in a cooperation scheme; the system presented performs two separate tasks: a positioning task carried out in the global image and a tracking task carried out in the local image.
Abstract: A camera can be used in a robot control loop with two types of architecture: an eye-in-hand camera, rigidly mounted on the robot end-effector, or an eye-to-hand camera, which observes the robot within its workspace. These two schemes have technical differences and can play very complementary parts. The eye-in-hand camera has a partial but precise view of the scene, whereas the eye-to-hand camera has a less precise but global view of it. The motivation of our work is to take advantage of both free-standing and robot-mounted sensors in a cooperation scheme. The system presented performs two separate tasks: a positioning task carried out in the global image and a tracking task carried out in the local image. For robustness, the stability of the control law is proved, and several cooperative schemes are studied and compared in experimental results.
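Such eye-in-hand/eye-to-hand cooperation schemes build on an image-based visual servoing law of the classical form v = -λ L⁺(s - s*). The sketch below shows that generic step only, not the authors' specific controller; the interaction-matrix values are placeholders.

```python
# Generic image-based visual servoing step (textbook form, not the paper's
# exact controller): drive the camera so current features s approach s_star.
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Camera velocity screw v = -gain * pinv(L) @ (s - s_star).

    s, s_star -- current and desired image features (e.g. point coordinates)
    L         -- interaction (image Jacobian) matrix, shape (len(s), 6)
    """
    error = np.asarray(s, float) - np.asarray(s_star, float)
    return -gain * np.linalg.pinv(np.asarray(L, float)) @ error

if __name__ == "__main__":
    L = np.random.default_rng(0).normal(size=(4, 6))   # placeholder Jacobian
    s = [0.10, 0.05, -0.08, 0.02]                      # current features
    s_star = [0.0, 0.0, 0.0, 0.0]                      # desired features
    print(ibvs_velocity(s, s_star, L))
```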

177 citations


PatentDOI
TL;DR: In this paper, an intelligent camera system and method for recognizing license plates, in accordance with the invention, includes a camera adapted to independently capture a license plate image and recognize the image.
Abstract: An intelligent camera system and method for recognizing license plates, in accordance with the invention, includes a camera adapted to independently capture a license plate image and recognize the license plate image. The camera includes a processor for managing image data and executing a license plate recognition program device. The license plate recognition program device includes a program for detecting the orientation, position, illumination conditions and blurring of the image and accounting for them to obtain a baseline image of the license plate. A segmenting program segments characters depicted in the baseline image by employing a projection along a horizontal axis of the baseline image to identify the positions of the characters. A statistical classifier is adapted for classifying the characters. The classifier recognizes the characters and returns a confidence score based on the probability of properly identifying each character. A memory is included for storing the license plate recognition program and the license plate images taken by an image capture device of the camera.
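The horizontal-projection segmentation step can be pictured with a short sketch. It assumes a binarized plate image and a simple gap rule, which are illustrative simplifications rather than the patented method.

```python
# Simplified sketch of projection-based character segmentation (assumption:
# a binarized plate image where character pixels are 1 and background is 0).
import numpy as np

def segment_characters(binary_plate):
    """Return (start, end) column ranges of candidate characters."""
    # Project character pixels onto the horizontal axis (sum each column).
    profile = binary_plate.sum(axis=0)
    segments, start = [], None
    for col, count in enumerate(profile):
        if count > 0 and start is None:
            start = col                      # a character starts
        elif count == 0 and start is not None:
            segments.append((start, col))    # the character ends at a gap
            start = None
    if start is not None:
        segments.append((start, binary_plate.shape[1]))
    return segments

if __name__ == "__main__":
    plate = np.zeros((8, 20), int)
    plate[2:6, 2:5] = 1    # fake character 1
    plate[2:6, 8:11] = 1   # fake character 2
    print(segment_characters(plate))   # [(2, 5), (8, 11)]
```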

115 citations


Proceedings ArticleDOI
30 Oct 2000
TL;DR: A graphical interface that enables 3D visual artists or developers of interactive 3D virtual environments to efficiently define sophisticated camera compositions by creating storyboard frames, indicating how a desired shot should appear.
Abstract: We have designed a graphical interface that enables 3D visual artists or developers of interactive 3D virtual environments to efficiently define sophisticated camera compositions by creating storyboard frames, indicating how a desired shot should appear. These storyboard frames are then automatically encoded into an extensive set of virtual camera constraints that capture the key visual composition elements of the storyboard frame. Visual composition elements include the size and position of a subject in a camera shot. A recursive heuristic constraint solver then searches the space of a given 3D virtual environment to determine camera parameter values which produce a shot closely matching the one in the given storyboard frame. The search method uses given ranges of allowable parameter values expressed by each constraint to reduce the size of the seven-degree-of-freedom search space of possible camera positions, aim direction vectors, and field of view angles. In contrast, some existing methods of automatically positioning cameras in 3D virtual environments rely on pre-defined camera placements that cannot account for unanticipated configurations and movement of objects or use program-like scripts to define constraint-based camera shots. For example, it is more intuitive to directly manipulate an object's size in the frame rather than editing a constraint script to specify that the object should cover 10% of the frame's area.
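A rough way to picture the constraint evaluation inside such a solver is a cost over the subject's on-screen size and position, minimized over sampled camera parameters. The sketch below substitutes plain random sampling and a one-dimensional pinhole size model for the paper's recursive heuristic search, so treat it as an illustration only.

```python
# Illustrative sketch only (not the paper's recursive heuristic solver):
# score candidate cameras against storyboard-style constraints on a subject's
# on-screen size and horizontal position, and keep the best sample.
import math
import random

def framed_size(subject_height, distance, fov_deg):
    """Fraction of the frame height the subject occupies (pinhole model)."""
    visible_height = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return subject_height / visible_height

def solve(target_size=0.4, target_x=0.33, samples=5000, seed=1):
    rng, best = random.Random(seed), None
    for _ in range(samples):
        cam = {"distance": rng.uniform(1.0, 20.0),   # metres from subject
               "fov": rng.uniform(20.0, 60.0),       # degrees
               "x_frac": rng.uniform(0.0, 1.0)}      # subject's x position in frame
        size = framed_size(1.8, cam["distance"], cam["fov"])
        cost = (size - target_size) ** 2 + (cam["x_frac"] - target_x) ** 2
        if best is None or cost < best[0]:
            best = (cost, cam)
    return best

if __name__ == "__main__":
    cost, cam = solve()
    print(f"cost={cost:.5f}", cam)
```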

102 citations


Patent
16 Oct 2000
TL;DR: A portable, hand-held endoscopic camera having all of the necessary components for performing endoscopic procedures comprises power source means, lens means, light source means and video camera means.
Abstract: A portable, hand-held endoscopic camera having all of the necessary components for performing endoscopic procedures comprises power source means, lens means, light source means, and video camera means. The portable endoscopic camera is adaptable to a wide variety of systems and includes a low wattage light source means. The camera is self-contained. A kit is also provided, the kit having all of the components necessary for performing endoscopic procedures.

92 citations


Patent
22 Dec 2000
TL;DR: In this article, a remote camera relay method and apparatus for remotely operating a self-contained, unattended digital camera over a communications link is described, and a portable enclosure is provided for accommodating the remote relay communications and control electronics.
Abstract: A remote camera relay method and apparatus for remotely operating a self-contained, unattended digital camera over a communications link. Format conversion means are included for transparently relaying control signals and remote image data between a local host processor and the remotely located digital camera, independently of specific camera command and image protocols. It thereby functions as a universal remote image transmission adapter, operable as an attachment for use with self-contained digital cameras. A portable enclosure is provided for accommodating the remote relay communications and control electronics and for attaching a hand-held digital camera thereto. Further means are included for remotely actuating the pan and tilt orientation of the camera in accordance with field-of-view selection commands. Data rate conversion and error correction coding are included for providing reliable, low power image forwarding. The communications channel could be a dial-up telephone system, a network connection, modem, an infra-red link or a wireless RF link, for example. Additional control means are provided for remotely selecting the camera field of view for image capture, and includes the ability to access only those subsets of image scenes for which viewing permissions are authorized. A further mode of operation includes protocol training whereby host photographing commands are captured by the remote relay invention during on-line operation, then replayed at programmed times resulting in automatic scheduled remote image capture. Power management is provided for maximizing operating time when used in low power, portable, battery operation.

92 citations


Patent
27 Oct 2000
TL;DR: A remote control for an interactive television system includes an integrated camera, a specifically-designated button for activating the camera, and a wireless transmitter for transmitting video information captured by the camera to the interactive TV system as mentioned in this paper.
Abstract: A remote control for an interactive television system includes an integrated camera, a specifically-designated button for activating the camera, and a wireless transmitter for transmitting video information captured by the camera to the interactive television system. The remote control also includes an activity indicator for visually indicating when the camera is active. A set top box for the interactive television system includes a wireless receiver for receiving the video information and a converter for transforming the video information into a network-compatible video stream for transmission to a network.

76 citations


Proceedings ArticleDOI
08 Oct 2000
TL;DR: This work presents a multimodal sensory intelligent system testbed based on some general requirements for developing intelligent environments and develops an integrated intelligent system utilizing four basic modules for visual and audio processing.
Abstract: Intelligent environments provide challenging research problems for natural and efficient interfaces between humans and computers as well as between humans. We present a multimodal sensory intelligent system testbed based on some general requirements for developing intelligent environments. We also present rigorous experimental investigations on the processing and control modules for the active camera networks and the microphone array which are embedded in the intelligent room. An integrated intelligent system is developed utilizing four basic modules for visual and audio processing. The integrated system has the functionality of human tracking, active camera control, face recognition and speaker recognition. This system is demonstrated to be suitable for teleconferencing type of applications.

75 citations


Proceedings ArticleDOI
24 Apr 2000
TL;DR: The approach is to learn, from single camera images, visual features in the appearance domain that can be used to characterize an object or a location; these features are defined statistically and then recognized using principal components in the frequency domain.
Abstract: We present an approach to the automatic recognition of locations or landmarks using single camera images. Our approach is to learn visual features in the appearance domain that can be used to characterize an object or a location. These features are defined statistically and then are recognized using principal components in the frequency domain. We show that this technique can be used to recognize specific objects on varying backgrounds, as well as environmental features.
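The pipeline described, appearance features taken in the frequency domain and matched with principal components, might look roughly like the following sketch; the image sizes, the log-magnitude choice, and the nearest-neighbour matching rule are illustrative assumptions.

```python
# Rough sketch (assumptions throughout): describe images by the magnitude of
# their 2D FFT, reduce with PCA, and recognise a query location by nearest
# neighbour among the training locations.
import numpy as np

def freq_feature(img):
    """Flattened log-magnitude spectrum of a grayscale image."""
    return np.log1p(np.abs(np.fft.fft2(img))).ravel()

def fit_pca(features, n_components=4):
    X = np.asarray(features, float)
    mean = X.mean(axis=0)
    # Principal components = leading right singular vectors of centred data.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def encode(feature, mean, components):
    return components @ (feature - mean)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = [rng.random((32, 32)) for _ in range(5)]          # stand-in "locations"
    mean, comps = fit_pca([freq_feature(im) for im in train])
    codes = np.stack([encode(freq_feature(im), mean, comps) for im in train])
    query = encode(freq_feature(train[2] + 0.01 * rng.random((32, 32))), mean, comps)
    print("recognised location:",
          int(np.argmin(np.linalg.norm(codes - query, axis=1))))
```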

Proceedings ArticleDOI
24 Apr 2000
TL;DR: A new approach is proposed that resolves these difficulties by planning trajectories in the image; the method applies whether the object dimensions are known or not and whether the calibration parameters of the camera are well or badly estimated.
Abstract: Vision feedback control loop techniques are efficient for a number of applications, but they come up against difficulties when the initial and desired positions of the camera are distant. We propose a new approach to resolve these difficulties by planning trajectories in the image. Constraints, such as keeping the object in the camera field of view, can be taken into account. Furthermore, using this process, the current measurements always remain close to their desired values, and image-based servoing ensures robustness with respect to modeling errors. We apply our method whether the object dimensions are known or not and whether the calibration parameters of the camera are well or badly estimated. Finally, real-time experimental results using a camera mounted on the end effector of a 6-DOF robot are presented.
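The benefit of planning in the image, namely that the measured features never drift far from their current reference, can be illustrated with the simplest possible planner: linear interpolation of feature coordinates between the initial and desired images. The actual method also enforces visibility and modeling constraints, so this is only a toy.

```python
# Toy illustration: interpolate desired feature positions between the initial
# and goal images so the servo error stays small at every step.
import numpy as np

def plan_image_trajectory(s_init, s_goal, n_steps=20):
    """Return a sequence of intermediate desired feature vectors."""
    s_init = np.asarray(s_init, float)
    s_goal = np.asarray(s_goal, float)
    return [s_init + t * (s_goal - s_init) for t in np.linspace(0.0, 1.0, n_steps)]

if __name__ == "__main__":
    s_init = [50.0, 60.0, 200.0, 60.0]     # pixel coordinates of two points
    s_goal = [120.0, 180.0, 260.0, 175.0]
    for step, s_ref in enumerate(plan_image_trajectory(s_init, s_goal, 5)):
        print(step, s_ref)                 # each s_ref becomes the servo target
```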

Patent
Jyoji Wada, Katsumi Yano, Koji Wakiyama, Haruo Kogane, Kazushige Tamura
12 Sep 2000
TL;DR: In this article, a security camera system is presented for displaying pictures so that the place where an abnormal situation occurs can be easily identified; a moving picture detector detects motion in pictures taken by the security camera during an automatic monitoring operation.
Abstract: To provide a security camera system for displaying a picture so that the place where an abnormal situation occurs can be easily identified. The security camera system includes a security camera 61 having one or more rotation axes and a controller 70 for controlling the security camera 61. The controller 70 is provided with a moving picture detector 80 for detecting motion in pictures taken by the security camera during an automatic monitoring operation. The controller switches the operation of the security camera from automatic monitoring to still monitoring when the moving picture detector detects motion in the monitored pictures. Therefore, when a suspicious person, fire smoke, or the like is detected in the subject building at night, for example, during an automatic monitoring operation that shoots the target monitoring places sequentially, the shooting point of the security camera is fixed there so as to shoot and display the abnormal situation on a monitor screen.
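The patent does not spell out the detection algorithm, so as a purely illustrative stand-in, simple frame differencing is enough to show the switch from automatic patrol to fixed ("still") monitoring:

```python
# Toy stand-in detector (the patent does not specify one): frame differencing
# that switches the camera from automatic patrol to still monitoring when
# enough pixels change between consecutive frames.
import numpy as np

MOTION_PIXEL_FRACTION = 0.02   # assumed threshold

def detect_motion(prev_frame, frame, diff_threshold=25):
    changed = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_threshold
    return changed.mean() > MOTION_PIXEL_FRACTION

def monitor(frames):
    mode, prev = "auto_patrol", None
    for frame in frames:
        if prev is not None and mode == "auto_patrol" and detect_motion(prev, frame):
            mode = "still_monitoring"      # fix the shooting point here
        prev = frame
        yield mode

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.integers(0, 5, (4, 8, 8), dtype=np.uint8)
    frames[2] += 200                       # simulate a sudden scene change
    print(list(monitor(frames)))
```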

Patent
24 Nov 2000
TL;DR: In this paper, a process and apparatus is described to improve a digital camera user interface and increase ease of use and functionality by quickly, accurately and robustly permitting cursor control and designation.
Abstract: A process and apparatus is described to improve a digital camera user interface and increase ease of use and functionality of a digital camera by quickly, accurately and robustly permitting cursor control and designation in a digital camera display. A digital camera is used as a pointing device such as a mouse or trackball. The motion of the camera is detected, and the motion of the camera is used to position graphic elements on the camera's own display. The camera's motion can be detected with sensors, such as gyroscopes, or the camera itself can be used as a motion sensor. One application of this involves using the camera as a computer mouse, or like a gun-sight, to select images from a sheet of low-resolution (“thumbnail”) images. The motion of the camera is tracked, and the user aims at the desired image from a sheet of thumbnail images. The thumbnails appear to be fixed relative to the world because the camera can continuously reposition them in the display based upon the motion of the camera. The user can then select a thumbnail in an intuitive manner by simply pointing the camera at the desired thumbnail. For alternative embodiments, the interface can be used to select regions of greater extent than can be viewed in the viewer or to virtually review images.
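Keeping the thumbnail sheet "fixed relative to the world" amounts to scrolling it opposite to the measured camera motion and selecting whatever lies under the display centre. The toy sketch below assumes the motion estimates already exist as pixel offsets, standing in for the patent's gyro- or image-based sensing.

```python
# Toy sketch: keep a thumbnail sheet apparently fixed in the world by shifting
# it opposite to the camera's measured motion, then report the thumbnail under
# the display centre. Motion estimates here are placeholder (dx, dy) values.
def select_thumbnail(motion_steps, grid=(4, 4), cell=100):
    """motion_steps -- iterable of (dx, dy) camera motions in pixels."""
    offset_x = offset_y = 0.0
    for dx, dy in motion_steps:
        offset_x -= dx          # the sheet moves opposite to the camera
        offset_y -= dy
    # The display centre, expressed in sheet coordinates, is the aim point.
    aim_x = grid[0] * cell / 2 - offset_x
    aim_y = grid[1] * cell / 2 - offset_y
    return int(aim_y // cell), int(aim_x // cell)   # (row, column) aimed at

if __name__ == "__main__":
    # Pan right and slightly down, ending over the next thumbnail across.
    print(select_thumbnail([(30, 5), (40, 10), (35, 5)]))
```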

Patent
26 Oct 2000
TL;DR: In this article, a system of linked digital cameras for an image capture system is disclosed, where a first and second digital camera can be linked to capture a first image and a second image that are used to form a stereo image.
Abstract: A system of linked digital cameras for an image capture system is disclosed. A first and a second digital camera can be linked to capture a first image and a second image that are used to form a stereo image. A first data port on the first digital camera and a second data port on the second digital camera intercommunicate data between each other when the cameras are linked. The data can include the first and second image data, camera control data, and camera synchronization data. After capturing the first and second images, the image from one of the cameras can be transferred to the other camera so that both the first and second images reside in the other camera. The system allows a user who wishes to capture stereo images the ability to do so without having to purchase two digital cameras. A compatible digital camera can be borrowed from another user for the purpose of stereo image capture. After the stereo image is captured, the user transfers both images to his camera and returns the borrowed camera. The cameras can be equipped with viewfinders that allow a user of the cameras to view the image being captured in stereo. The viewfinders can be adjustable to accommodate variations in user interpupillary distance. A digital camera operating system (OS) can be customized to enable stereo image capture, image data handling, image processing, and camera control for the linked digital cameras.

Proceedings ArticleDOI
03 Sep 2000
TL;DR: This work uses a deformable shape model for humans coupled with a novel variant of the condensation algorithm that uses quasi-random sampling for efficiency to handle unknown camera/human motion with unrestricted camera viewing angles.
Abstract: Research at the Computer Vision Laboratory at the University of Maryland has focussed on developing algorithms and systems that can look at humans and recognize their activities in near real-time. Our earlier implementation, while quite successful, was restricted to applications with a fixed camera. In this paper we present some recent work that removes this restriction. Such systems are required for machine vision from moving platforms such as robots, intelligent vehicles, and unattended large field of regard cameras with a small field of view. Our approach is based on the use of a deformable shape model for humans coupled with a novel variant of the condensation algorithm that uses quasi-random sampling for efficiency. This allows the use of simple motion models, which results in algorithm robustness, enabling us to handle unknown camera/human motion with unrestricted camera viewing angles. We present the details of our human tracking algorithms and some examples from pedestrian tracking and automated surveillance.
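The "condensation with quasi-random sampling" ingredient can be sketched as a particle-filter step whose diffusion offsets come from a low-discrepancy Halton sequence rather than pseudo-random noise. The one-dimensional state and Gaussian likelihood below are illustrative assumptions, not the paper's deformable-shape human tracker.

```python
# Illustrative 1-D condensation (particle filter) step that diffuses particles
# with a low-discrepancy Halton sequence instead of pseudo-random noise.
import numpy as np

def halton(n, base=2):
    """First n points of the Halton sequence in [0, 1)."""
    points = []
    for i in range(1, n + 1):
        f, r, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        points.append(r)
    return np.array(points)

def condensation_step(particles, weights, measurement, noise_scale=0.5):
    # 1. Resample particles in proportion to their weights.
    idx = np.random.default_rng(0).choice(len(particles), len(particles), p=weights)
    resampled = particles[idx]
    # 2. Diffuse with quasi-random offsets mapped from [0, 1) to [-1, 1).
    predicted = resampled + noise_scale * (2.0 * halton(len(particles)) - 1.0)
    # 3. Re-weight against the new measurement (Gaussian likelihood).
    w = np.exp(-0.5 * (predicted - measurement) ** 2)
    return predicted, w / w.sum()

if __name__ == "__main__":
    particles = np.linspace(-3.0, 3.0, 50)
    weights = np.full(50, 1.0 / 50)
    particles, weights = condensation_step(particles, weights, measurement=1.2)
    print("state estimate:", float(np.sum(particles * weights)))
```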

01 Jan 2000
TL;DR: This paper presents a work in progress snapshot of the virtual camera constraint model that the team is currently developing, and describes how a human user or intelligent software system issues a request to visualize subjects of interest and specifies how each should be viewed, then a constraint solver attempts to find a solution camera shot.
Abstract: Automatically planning camera shots in virtual 3D environments requires solving problems similar to those faced by human cinematographers. In the most essential terms, each shot must communicate some specified visual message or goal. Consequently, the camera must be carefully staged to clearly view the relevant subject(s), properly emphasize the important elements in the shot, and compose an engaging image that holds the viewer's attention. The constraint-based approach to camera planning in virtual 3D environments is built upon the assumption that camera shots are composed to communicate a specified visual message expressed in the form of constraints on how subjects appear in the frame. A human user or intelligent software system issues a request to visualize subjects of interest and specifies how each should be viewed; then a constraint solver attempts to find a solution camera shot. A camera shot can be determined by a set of constraints on objects in the scene or on the camera itself. The constraint solver then attempts to find values for each camera parameter so that the given constraints are satisfied. This paper presents a work-in-progress snapshot of the virtual camera constraint model that we are currently developing.

Patent
18 Dec 2000
TL;DR: In this article, a security camera system is provided to provide an overlay image portion at transition regions of a displayed image, where each two cameras providing adjacent areas of coverage are operable to provide a target of interest.
Abstract: A security camera system is provided. Several video cameras operate in conjunction with an image display monitor, a user control and camera selector means whereby a target of interest may be followed. Each two cameras providing adjacent areas of coverage are operable to provide an overlay image portion at transition regions of a displayed image. Upon movement of the target of interest to a specified transition region of a currently displayed image provided by one video camera, the camera selection means is operable to select a second camera whereby to maintain target display continuity by the monitor.

Patent
15 Dec 2000
TL;DR: In this paper, a system and method for using a machine vision system to locate and register patterns in an object using range data is provided, which includes an acquisition system for acquiring a range image of an object.
Abstract: A system and method for using a machine vision system to locate and register patterns in an object using range data is provided. The machine vision system includes an acquisition system for acquiring a range image of an object. The system also includes a machine vision search tool coupled to the acquisition system for locating an instance of a trained pattern in the image. The tool registers the trained pattern transformed by at least two translational degrees of freedom and at least one non-translational degree of freedom with respect to an image plane. The acquisition system preferably includes a three-dimensional camera.

Journal ArticleDOI
TL;DR: This paper shows the theoretical and experimental results of the application of nonmetric vision to augmented reality and proposes an algorithm for augmenting a real video sequence with views of graphics objects without metric calibration of the video camera by representing the motion of the video camera in projective space.
Abstract: This paper deals with video-based augmented reality and proposes an algorithm for augmenting a real video sequence with views of graphics objects without metric calibration of the video camera by representing the motion of the video camera in projective space. A virtual camera, by which views of graphics objects are generated, is attached to a real camera by specifying image locations of the world coordinate system of the virtual world. The virtual camera is decomposed into calibration and motion components in order to make full use of graphics tools. The projective motion of the real camera recovered from image matches has the function of transferring the virtual camera and makes the virtual camera move according to the motion of the real camera. The virtual camera also follows the change of the internal parameters of the real camera. This paper shows the theoretical and experimental results of our application of nonmetric vision to augmented reality.
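In a heavily simplified planar setting, the transfer idea reduces to composing the virtual object's anchoring transform with the projective motion recovered for the real camera. The homography-based sketch below is an assumption made for brevity; the paper works with the full projective motion of the camera, not just a plane-to-plane mapping.

```python
# Heavily simplified planar sketch (assumption, not the paper's formulation):
# a virtual object anchored by homography H0 in the first frame is transferred
# to the current frame by the recovered frame-to-frame homography H_motion,
# so the graphics follow the real camera's motion.
import numpy as np

def transfer(points_xy, H):
    """Apply a 3x3 homography to an Nx2 array of points."""
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

if __name__ == "__main__":
    H0 = np.eye(3)                              # anchoring in frame 0
    # Projective motion of the real camera between frame 0 and frame t,
    # e.g. recovered from image matches (placeholder values).
    H_motion = np.array([[1.02, 0.01, 5.0],
                         [-0.01, 1.03, -2.0],
                         [1e-4, 2e-4, 1.0]])
    square = np.array([[100, 100], [150, 100], [150, 150], [100, 150]], float)
    print(transfer(square, H_motion @ H0))      # where to draw the object now
```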

Patent
Kenji Ishibashi
21 Jul 2000
TL;DR: In this article, the camera has two or more operation modes and is provided with a detector for detecting either the motion state or the physiological state of the user, or both, and a controller for selecting one mode from among the operation modes on the basis of the detection results.
Abstract: The camera, which is worn by a user, shoots images, processes them into image data, and records them. The camera has two or more operation modes and is provided with a detector for detecting either the motion state or the physiological state of the user, or both, and a controller for selecting one mode from among the operation modes on the basis of the detection results from the detector.

Patent
13 Dec 2000
TL;DR: In this article, a method for customizing a digital camera for at least one particular user is disclosed, which includes providing customization software executed external to the digital camera which can access a plurality of firmware components having different camera features.
Abstract: A method for customizing a digital camera for at least one particular user is disclosed. The digital camera includes a reprogrammable memory for storing firmware which controls the operation of the digital camera and a camera graphical user interface responsive to the firmware stored in the reprogrammable memory. The method includes providing customization software executed external to the digital camera which can access a plurality of firmware components having different camera features. A user selects desired camera features to cause the customization software to access the corresponding firmware component(s). The selected corresponding firmware component(s) are provided to the digital camera and the reprogrammable memory is reprogrammed to store the corresponding firmware component(s) to thereby customize the digital camera.

Patent
31 May 2000
TL;DR: In this paper, a method and systems for automatically configuring a hand-held camera to improve quality of an image taken with the camera of a particular subject at a photo opportunity site is disclosed.
Abstract: A method and systems for automatically configuring a hand-held camera to improve quality of an image taken with the camera of a particular subject at a photo opportunity site is disclosed. First, values for a set of camera setting parameters are optimized to enhance image quality of a picture taken at that location are determined. Next, wireless communication with the camera is established, and the set of setting parameter values are pushed to camera via the wireless communication to automatically configure the camera to take a picture of the subject.

Patent
31 Oct 2000
TL;DR: In this paper, a method of tracking an object using a computer, a display device, a camera, and a camera tracking device is disclosed, the computer being coupled to the display device, the camera, and the camera tracking device.
Abstract: A method of tracking objects that allows objects to be tracked across multiple scene changes, with different camera positions, without losing track of the selected object. In one embodiment, a method of tracking an object using a computer, a display device, a camera, and a camera tracking device, the computer being coupled to the display device, the camera and the camera tracking device is disclosed. The method includes: A first image from within a field-of-view of the camera is captured. The first image, which includes an actual object with a tracking device, is displayed on the display device. Information about the tracking device's location is received. The information is used to create a virtual world reflecting the actual object's position within the field-of-view of the camera as a shape in the virtual world. Information about the camera tracking device is received. A virtual-camera position in the virtual world is created. A 3D graphics pipeline is used to create a second image, the second image presenting the shape in the virtual world. The second image is used to obtain the actual object's position. In another embodiment, the method includes using the virtual-camera's position to compute a new position for the camera to track the actual object.

Patent
28 Nov 2000
TL;DR: In this article, a digital camera accessory is provided for use with a game system having a processing system to execute a video game program and player controls operable by a user to generate video game control signals.
Abstract: A digital camera accessory is provided for use with a game system having a processing system to execute a video game program and player controls operable by a user to generate video game control signals. The digital camera accessory includes an image sensor for capturing video images, communication circuitry configured to transmit the captured video images, and a connector that, in use, electrically connects the digital camera accessory to the game system.

Proceedings ArticleDOI
24 Apr 2000
TL;DR: A complete system for automatic alignment and calibration of a stereo pan-tilt camera platform on a mobile robot using visual data from one or two controlled rotations of the head, and a single forward motion of the robot is presented.
Abstract: We present a complete system for automatic alignment and calibration of a stereo pan-tilt camera platform on a mobile robot. The system uses visual data from one or two controlled rotations of the head, and a single forward motion of the robot. We show how the images alone provide head alignment information, camera calibration, and head geometry. We also discuss automatic zeroing of steering angle for a single steering wheel AGV. Results are provided from tests on the working system.

Patent
18 Dec 2000
TL;DR: A host computer includes a device driver which is adaptive to a digital camera and has a storage driver function for writing/reading image data representative of a still picture in/out of the camera by bulk transfer as discussed by the authors.
Abstract: A host computer includes a device driver which is adaptive to a digital camera and has a storage driver function for writing/reading image data representative of a still picture in/out of the camera by bulk transfer, an image driver function for receiving image data representative of a moving picture from the camera by isochronous transfer, an audio driver function for receiving speech data from the camera by isochronous transfer, and an operation driver function having operation commands on the shooting operation of the camera. During an electronic conference the host can receive image data representative of a moving picture and speech data as well as, if necessary, image data representative of a still picture prepared beforehand from the camera. The host also can receive image data representative of a new still picture taken during the conference from the camera.

Patent
Vaughn Iverson
06 Dec 2000
TL;DR: In this paper, the authors proposed a sealed digital camera system that eliminates the use of wear items in the case, essentially providing a fully-sealed digital camera, where the camera is protected from exposure to the outside environment during use and the integrity of the case is not degraded during normal operation.
Abstract: Embodiments of the invention provide a sealed digital camera system that eliminates the use of wear items in the case, essentially providing a fully-sealed digital camera system. As a result, the camera is protected from exposure to the outside environment during use and the integrity of the case is not degraded during normal operation. Unlike conventional systems, the sealed digital camera system of the present invention is not subject to catastrophic failure, even when used under high pressure for extended periods. Use of a high capacity integrated or replaceable storage system and/or battery system with the present invention eliminates the need for a user to access the camera itself. Images taken by the camera can be sent to an external display unit via a suitable transmission link, again without the need to open the case. The present invention provides vast improvements in reliability, ruggedness, maintainability as well as price competitiveness as compared with conventional underwater cameras.

01 Jan 2000
TL;DR: A self-calibrating camera-assisted presentation interface that enables the user to control presentations using a laser pointer, and works with standard hardware, but could easily be incorporated into the next generation of LCD projector systems.
Abstract: This paper presents a self-calibrating camera-assisted presentation interface that enables the user to control presentations using a laser pointer. The system consists of a computer connected to an LCD projector and a consumer-level digital camera aimed at the presentation screen. Although the locations, orientations and optical parameters of the camera and projector are unknown, the projector-camera system calibrates itself by inferring the mapping from pixels in the camera image to pixels in the presentation slide. The camera is subsequently used to detect the position of the pointing device (such as a laser pointer dot) on the screen, allowing the laser pointer to emulate the pointing actions of a mouse. The user may then select active regions in the presentation, or even draw on the projected image. Additionally, arbitrary distortions due to projector placement are negated, allowing the projector (and camera) to be placed anywhere in the presentation room, for instance, at the side rather than the center of the room. This solution works with standard hardware, but could easily be incorporated into the next generation of LCD projector systems.
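Two pieces of that pipeline are straightforward to sketch: estimating the camera-to-slide mapping as a homography from the four projected screen corners, and locating the laser dot as the most red-dominant pixel. Both the detection rule and the corner coordinates below are assumptions for illustration.

```python
# Simplified sketch of two steps (assumed detection rule and corner values):
# (1) calibrate a camera-to-slide homography from four screen corners,
# (2) find the laser dot as the most red-dominant pixel and map it to the slide.
import numpy as np

def homography_from_corners(cam_pts, slide_pts):
    """Solve for the 3x3 homography H with the standard DLT linear system."""
    A = []
    for (x, y), (u, v) in zip(cam_pts, slide_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def laser_dot(rgb):
    """Pixel (x, y) where red most exceeds the other channels."""
    score = rgb[:, :, 0].astype(int) - rgb[:, :, 1:].astype(int).max(axis=2)
    y, x = np.unravel_index(np.argmax(score), score.shape)
    return float(x), float(y)

def to_slide(point, H):
    v = H @ np.array([point[0], point[1], 1.0])
    return v[0] / v[2], v[1] / v[2]

if __name__ == "__main__":
    cam_corners = [(102, 80), (520, 95), (510, 400), (95, 385)]   # placeholders
    slide_corners = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
    H = homography_from_corners(cam_corners, slide_corners)
    frame = np.zeros((480, 640, 3), np.uint8)
    frame[200, 300] = (255, 40, 40)            # fake laser dot
    print(to_slide(laser_dot(frame), H))       # dot position in slide pixels
```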

Proceedings Article
01 Jan 2000
TL;DR: A general framework that allows the automatic control of a camera in a dynamic environment based on the image-based controlor visual servoing approach is presented and adapted to highly reactive contexts (virtual reality, video games).
Abstract: This paper presents an original solution to the camera control problem in a virtual environment. Our objective is to present a general framework that allows the automatic control of a camera in a dynamic environment. The proposed method is based on the image-based controlor visual servoing approach. It consists in positioning a camera according to the information perceived in the image. This is thus a very intuitive approach of animation. To be able to react automatically to modifications of the environment, we also considered the introduction of constraints into the control. This approach is thus adapted to highly reactive contexts (virtual reality, video games). Numerous examples dealing with classic problems in animation are considered within this framework and presented in this paper.