
Showing papers on "Video camera published in 1998"


Patent
Damian M. Lyons1
07 Dec 1998
TL;DR: In this paper, a system and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs of a system user is presented, which consists of a computer-readable memory, a video camera for generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user, and a video image display.
Abstract: A system and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs of a system user. The system comprises a computer-readable memory, a video camera for generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user, and a video image display. The video image display is positioned in front of the system user. The system further comprises a microprocessor for processing the video signals, in accordance with a program stored in the computer-readable memory, to determine the three-dimensional positions of the body and principal body parts of the system user. The microprocessor constructs three-dimensional images of the system user and interaction area on the video image display based upon the three-dimensional positions of the body and principal body parts of the system user. The video image display shows three-dimensional graphical objects within the virtual reality environment, and movement by the system user permits apparent movement of the three-dimensional objects displayed on the video image display so that the system user appears to move throughout the virtual reality environment.

441 citations


Patent
Christos I. Vaios1
02 Oct 1998
TL;DR: In this article, a multi-access remote system having a security surveillance area, a plurality of end-user locations, and a communications network such that one or more of the end-user locations can establish a connection with the security surveillance area, and vice versa, using a communications protocol via the communications network is described.
Abstract: A multi-access remote system having a security surveillance area, a plurality of end user locations, and a communications network such that one or more of the end user locations can establish a connection with the security surveillance area, and vice versa, using a communications protocol via the communications network. The security surveillance area comprises a local computer system, a camera with a motion sensor, and a network interface. When the motion sensor detects an obstruction, the camera starts recording and the local computer system notifies a remote individual of the alarm via a communications device, such as a beeper, telephone, or e-mail. Using an end user location, having a remote computer system, a network interface, and one or more communications devices, the remote individual can log on to the local computer system via the communications network and obtain additional information, control the video camera remotely, or view video images. Access to the security surveillance area, control of the video camera, and viewing of the video data is accomplished advantageously over the Internet with application specific browser software, plugins, APIs, and other protocols.

354 citations


Patent
03 Sep 1998
TL;DR: In this paper, a blind-spot viewing system for viewing the blind spot located to the passenger side of a vehicle towards the rear is presented. The system mounts a video camera adjacent the rear of the vehicle, facing outwards, to collect images of objects in the driver's passenger-side blind spot and display them on a monitor in the passenger compartment.
Abstract: A blind spot viewing system for viewing the blind spot of a vehicle located to the passenger side of the vehicle towards the rear of the vehicle. The system includes a video camera adapted for mounting to the passenger side of the vehicle adjacent the rear of the vehicle. The video camera has a lens facing in an outwards direction from the passenger side of the vehicle to collect images of objects in the driver's passenger side blind spot. A video monitor is electrically connected to the video camera. The video monitor is designed for positioning in the passenger compartment of the vehicle to permit a driver of the vehicle to view images from the video monitor.

350 citations


Patent
03 Sep 1998
TL;DR: In this paper, a method and apparatus for communicating multiple live video feeds over the internet is described, where text, graphics, and other video information supplement one or more video pictures to provide an educational and entertaining system.
Abstract: The present invention relates to a method and apparatus for communicating multiple live video feeds over the internet. Users may be able to view a plurality of remote locations (102) in real time. In another embodiment of the invention, users are able to remotely control a video picture of a distant location. The remote control may be either actual control of a remote video camera or perceived remote control by the manipulation of audiovisual data streams. In one embodiment, text, graphics, and other video information supplement one or more video pictures to provide an educational and entertaining system. In accordance with the present invention, information is accessible to users who are viewing multiple video pictures. The information relates to and describes what is being viewed. Users who have different types of equipment, with different data rates, are able to access and use the system of the present invention. In another embodiment, users may interactively communicate with a video lecturer by asking questions and receiving answers. The invention may be connected to, and in communication with, broadcast and/or cable television systems.

309 citations


Patent
01 Apr 1998
TL;DR: In this paper, a simple and robust method and system is presented for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images.
Abstract: Described is a simple and robust method and system for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. This technique interprets input images as two-dimensional slices of a four dimensional function--the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. A sampled representation for light fields allows for both efficient creation and display of inward and outward looking views. Light fields may be created from large arrays of both rendered and digitized images. The latter are acquired with a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Also described is a compression system that is able to compress generated light fields by more than a factor of 100:1 with very little loss of fidelity. Issues of antialiasing during creation and resampling during slice extraction are also addressed.
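The slice-extraction idea can be sketched in a few lines of Python. This is a toy illustration, not the patent's implementation: the array shapes and the gradient-pattern "scene" are invented, and integer camera positions avoid the interpolation a real renderer needs.

```python
import numpy as np

def make_light_field(nu=4, nv=4, ns=8, nt=8):
    """Toy 4D light field L(u, v, s, t): (u, v) indexes the camera
    position on one plane, (s, t) the pixel on the image plane."""
    L = np.zeros((nu, nv, ns, nt))
    s, t = np.meshgrid(np.arange(ns), np.arange(nt), indexing="ij")
    for u in range(nu):
        for v in range(nv):
            # each "camera" sees the same gradient, shifted by its position
            L[u, v] = (s + u) + (t + v)
    return L

def render_view(L, u, v):
    """A new view from camera position (u, v) is simply the 2D slice
    L[u, v, :, :] -- no depth information or feature matching needed."""
    return L[u, v]

L = make_light_field()
view = render_view(L, 2, 1)   # novel view from camera position (2, 1)
```

For fractional camera positions, the sampled representation would be interpolated quadrilinearly across (u, v, s, t); the paper's 100:1 compression stage is not shown.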

294 citations


Proceedings ArticleDOI
19 Oct 1998
TL;DR: The implementation described here aims to capture events that are likely to get the user's attention and to be remembered, creating a "flashbulb" memory archive for the wearable that aims to mimic the wearer's own selective memory response.
Abstract: StartleCam is a wearable video camera, computer, and sensing system, which enables the camera to be controlled via both conscious and preconscious events involving the wearer. Traditionally, a wearer consciously hits record on the video camera, or runs a computer script to trigger the camera according to some pre-specified frequency. The system described here offers an additional option: images are saved by the system when it detects certain events of supposed interest to the wearer. The implementation described here aims to capture events that are likely to get the user's attention and to be remembered. Attention and memory are highly correlated with what psychologists call arousal level, and the latter is often signaled by skin conductivity changes; consequently, StartleCam monitors the wearer's skin conductivity. StartleCam looks for patterns indicative of a "startle response" in the skin conductivity signal. When this response is detected, a buffer of digital images, recently captured by the wearer's digital camera, is downloaded and optionally transmitted wirelessly to a webserver. This selective storage of digital images creates a "flashbulb" memory archive for the wearable which aims to mimic the wearer's own selective memory response. Using a startle detection filter, the StartleCam system has been demonstrated to work on several wearers in both indoor and outdoor ambulatory environments.
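The save-a-buffer-on-startle loop can be sketched as follows. This is a minimal Python illustration: a threshold on the first difference of the conductivity signal stands in for the paper's actual startle-detection filter, and all names and values are invented.

```python
from collections import deque

def detect_startle(conductivity, threshold=0.5):
    """Flag a startle when skin conductivity rises faster than
    `threshold` between consecutive samples (a crude stand-in for
    the paper's startle-detection filter)."""
    return [i for i in range(1, len(conductivity))
            if conductivity[i] - conductivity[i - 1] > threshold]

def run_startlecam(frames, conductivity, buffer_len=3, threshold=0.5):
    """Keep a rolling buffer of recent frames; on each detected
    startle, archive a copy of the buffer (the 'flashbulb' memory)."""
    buffer, archive = deque(maxlen=buffer_len), []
    startles = set(detect_startle(conductivity, threshold))
    for i, frame in enumerate(frames):
        buffer.append(frame)
        if i in startles:
            archive.append(list(buffer))
    return archive

frames = [f"frame{i}" for i in range(8)]
gsr = [0.1, 0.1, 0.2, 1.0, 1.1, 1.1, 1.2, 1.2]  # conductivity spike at index 3
archive = run_startlecam(frames, gsr)
```

Because the buffer holds frames captured *before* the trigger, the archive records the moments leading up to the startling event rather than only its aftermath.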

258 citations


Patent
Damian M. Lyons1
21 Dec 1998
TL;DR: In this article, a system and method for constructing three-dimensional images using camera-based gesture inputs of a system user is presented, which consists of a computer-readable memory, a video camera for generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user, and a video image display.
Abstract: A system and method for constructing three-dimensional images using camera-based gesture inputs of a system user. The system comprises a computer-readable memory, a video camera for generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user, and a video image display. The video image display is positioned in front of the system user. The system further comprises a microprocessor for processing the video signals, in accordance with a program stored in the computer-readable memory, to determine the three-dimensional positions of the body and principal body parts of the system user. The microprocessor constructs three-dimensional images of the system user and interaction area on the video image display based upon the three-dimensional positions of the body and principal body parts of the system user. The video image display shows three-dimensional graphical objects superimposed to appear as if they occupy the interaction area, and movement by the system user causes apparent movement of the superimposed, three-dimensional objects displayed on the video image display.

245 citations


Book ChapterDOI
02 Jun 1998
TL;DR: An automatic 3D surface modeling system is described that extracts dense metric 3D surfaces from an uncalibrated video sequence; no restrictions on camera movement or internal camera parameters such as zoom are imposed.
Abstract: This contribution describes an automatic 3D surface modeling system that extracts dense metric 3D surfaces from an uncalibrated video sequence. A static 3D scene is observed from multiple viewpoints by freely moving a video camera around the object. No restrictions on camera movement and internal camera parameters like zoom are imposed, as the camera pose and intrinsic parameters are calibrated from the sequence.

221 citations


Journal ArticleDOI
TL;DR: It is shown that, for practical purposes, the chlorophyll content of leaves can be estimated with sufficient accuracy using a portable video camera and a personal computer.

216 citations


Patent
Rajarshi Ray1
31 Mar 1998
TL;DR: In this article, a wireless communication terminal is configured for enabling a user to receive and transmit video images and audio or speech signals associated with the user of the terminal and another user at, for example, a remote location.
Abstract: A wireless communication terminal is configured for enabling a user to receive and transmit video images as well as receive and transmit audio or speech signals associated with the user of the terminal and another user at, for example, a remote location. The received video image is obtained from a video image signal received over a radio frequency communications link established between the wireless communication terminal and a cellular base station. This received video image is displayed in a video image display conveniently associated with the wireless communication terminal. The transmitted video image signal may be that of the user of the terminal, of a scene within the field of view of the video camera or of text either coupled to the terminal through one of many well known data interfaces, or an image of text as captured by the camera. This transmitted video image signal is obtained from a video camera associated with the wireless communication terminal and then transmitted over the radio frequency communications link established between the wireless communication terminal and the cellular base station for displaying in a remotely located video image display.

197 citations


Journal ArticleDOI
TL;DR: A pupil detection technique using two light sources and the image difference method is proposed and a method for eliminating the images of the light sources reflected in the glass lens is proposed for users wearing eye glasses.
Abstract: Recently, some video-based eye-gaze detection methods used in eye-slaved support systems for the severely disabled have been studied. In these methods, infrared light was irradiated to an eye, two feature areas (the corneal reflection light and pupil) were detected in the image obtained from a video camera and then the eye-gaze direction was determined by the relative positions between the two. However, there were problems concerning stable pupil detection under various room light conditions. In this paper, methods for precisely detecting the two feature areas are consistently mentioned. First, a pupil detection technique using two light sources and the image difference method is proposed. Second, for users wearing eye glasses, a method for eliminating the images of the light sources reflected in the glass lens is proposed. The effectiveness of these proposed methods is demonstrated by using an imaging board. Finally, the feasibility of implementing hardware for the proposed methods in real time is discussed.
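The image-difference step can be sketched in Python. This is only an illustration of the two-light-source idea (the pupil appears bright under on-axis infrared illumination and dark under off-axis illumination, so subtracting the two frames isolates it); the arrays, threshold, and centroid step are invented for the example.

```python
import numpy as np

def find_pupil(bright, dark, thresh=50):
    """Image-difference pupil detection: subtract the dark-pupil frame
    from the bright-pupil frame; pixels where the difference exceeds
    `thresh` are treated as pupil, and their centroid is returned."""
    diff = bright.astype(int) - dark.astype(int)
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) == 0:
        return None                      # no pupil found in this frame
    return (int(xs.mean()), int(ys.mean()))  # pupil centroid (x, y)

bright = np.full((10, 10), 20)           # ambient room-light level
dark = np.full((10, 10), 20)
bright[4:6, 6:8] = 200                   # pupil glows only under on-axis IR
center = find_pupil(bright, dark)
```

Because the ambient illumination appears in both frames, it cancels in the difference, which is what makes the method stable under varying room light.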

Proceedings ArticleDOI
29 Oct 1998
TL;DR: The overall purpose of this research is to develop a model-based vision system for orthodontics that will replace traditional approaches and can be used in diagnosis, treatment planning, surgical simulation and implant purposes.
Abstract: A novel integrated system is developed to obtain a record of the patient's occlusion using computer vision. Data acquisition is obtained using an intra-oral video camera. A modified Shape from Shading (SFS) technique using perspective projection and camera calibration is then used to extract accurate 3D information from a sequence of 2D images of the jaw. A novel technique for 3D data registration using the Grid Closest Point (GCP) transform and genetic algorithms (GA) is used to register the output of the SFS stage. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine. The overall purpose of this research is to develop a model-based vision system for orthodontics that will replace traditional approaches and can be used in diagnosis, treatment planning, surgical simulation and implant purposes.

Patent
18 Mar 1998
TL;DR: In this article, a pair of near-infrared light illuminators are arranged in such a manner that the illumination ranges thereof are adjusted so as to illuminate the information inputting person from different directions.
Abstract: Over an information input space to which an information inputting person comes, a pair of near-infrared light illuminators are arranged in such a manner that the illumination ranges thereof are adjusted so as to illuminate the information inputting person from different directions. A pair of near-infrared-light-sensitive video cameras are also arranged in different positions so as to correspond to the illuminators. The image pickup range of the video cameras is adjusted so that it is out of the range on the floor surface illuminated by the corresponding illuminator, while the information inputting person is within the image pickup range. A controller allows one illuminator at a time to be switched on/off. An image of the information inputting person is picked up by the video camera corresponding to the switched-on illuminator. The information inputting person is extracted based on the images picked up by the video cameras, whereby the position or direction pointed to by the information inputting person is determined.

02 Mar 1998
TL;DR: A combined 2D, 3D approach is presented that allows for robust tracking of moving bodies in a given environment as observed via a single, uncalibrated video camera and enables robust tracking without constraining the system to know the shape of the objects being tracked beforehand.
Abstract: A combined 2D, 3D approach is presented that allows for robust tracking of moving bodies in a given environment as observed via a single, uncalibrated video camera. Tracking is robust even in the presence of occlusions. Low-level features are often insufficient for detection, segmentation, and tracking of non-rigid moving objects. Therefore, an improved mechanism is proposed that combines low-level (image processing) and mid-level (recursive trajectory estimation) information obtained during the tracking process. The resulting system can segment and maintain the tracking of moving objects before, during, and after occlusion. At each frame, the system also extracts a stabilized coordinate frame of the moving objects. This stabilized frame is used to resize and resample the moving blob so that it can be used as input to motion recognition modules. The approach enables robust tracking without constraining the system to know the shape of the objects being tracked beforehand, although some assumptions are made about the characteristics of the shape of the objects and how they evolve with time. Experiments in tracking moving people are described.
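The role of the mid-level recursive trajectory estimator can be illustrated with a toy 1-D tracker. This is not the paper's estimator: a constant-velocity predictor with a fixed blending weight stands in for it, and the measurement sequence is invented. The key behaviour it shows is coasting on the prediction while the blob is occluded.

```python
def track(measurements, alpha=0.8):
    """Toy recursive trajectory estimator: predict with constant
    velocity, blend in each new measurement, and coast on the
    prediction alone when the target is occluded (measurement None)."""
    pos, vel, path = None, 0.0, []
    for z in measurements:
        if pos is None:
            pos = z                      # initialise on first measurement
        else:
            pred = pos + vel
            if z is None:                # occluded: trust the prediction
                new_pos = pred
            else:                        # blend measurement and prediction
                new_pos = alpha * z + (1 - alpha) * pred
            vel = new_pos - pos          # update velocity estimate
            pos = new_pos
        path.append(pos)
    return path

# object moves +1 unit/frame, hidden for two frames mid-sequence
path = track([0.0, 1.0, 2.0, None, None, 5.0])
```

During the two `None` frames the estimate keeps advancing at the learned velocity, so the track is re-acquired near the true position when the object reappears.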

Patent
14 May 1998
TL;DR: In this paper, a work cell containing robot, video camera, and structured lighting source is calibrated by observing targets with the camera(s) as a robot is displaced through a set of offsets.
Abstract: A work cell containing robot(s), video camera(s), and structured lighting source(s) is calibrated by observing targets with the camera(s) as a robot is displaced through a set of offsets. Complete information is recovered about the camera(s) calibration data and the structure of illumination from the light source(s). The robot is utilized to create known relative movements between the targets and the camera(s) and light source(s). Therefore, this technique is applicable to both the fixed and moving camera cases. Except for the target surface profile, there is no requirement to externally determine any absolute or relative positions, or any relationships either within or between the camera(s), targets, light source(s), and robot(s). Either single or multiple cameras (acting independently or in stereo) are calibrated to the robot's coordinate frame, and then optionally used as measuring devices to determine the position and form of the structured light.

Patent
09 Apr 1998
TL;DR: A color translating UV microscope for research and clinical applications involving imaging of living or dynamic samples in real time and providing several novel techniques for image creation, optical sectioning, dynamic motion tracking and contrast enhancement comprises a light source emitting UV light, and visible and IR light if desired.
Abstract: A color translating UV microscope for research and clinical applications involving imaging of living or dynamic samples in real time and providing several novel techniques for image creation, optical sectioning, dynamic motion tracking and contrast enhancement comprises a light source emitting UV light, and visible and IR light if desired. This light is directed to the condenser via a means of selecting monochromatic, bandpass, shortpass, longpass or notch limited light. The condenser can be a brightfield, darkfield, phase contrast or DIC. The slide is mounted in a stage capable of high speed movements in the X, Y and Z dimensions. The microscope uses broadband, narrowband or monochromat optimized objectives to direct the image of the sample to an image intensifier or UV sensitive video system. When an image intensifier is used it is either followed by a video camera, or in the simple version, by a synchronized set of filters which translate the image to a color image and deliver it to an eyepiece for viewing by the microscopist. Between the objective and the image intensifier there can be a selection of static or dynamic switchable filters. The video camera, if used, produces an image which is digitized by an image capture board in a computer. 
The image is then reassembled by an overlay process called color translation, and the computer uses a combination of feedback from the information in the image and operator control to perform various tasks such as optical sectioning and three-dimensional reconstruction, coordination of the monochromator while collecting multiple image sets called image planes, tracking dynamic sample elements in three-space, control of the environment of the slide including electric, magnetic, acoustic, temperature, pressure and light levels, color filters and optics, and control for microscope mode switching between transmitted, reflected, fluorescent, Raman, scanning, confocal, area limited, autofluorescent, acousto-optical and other modes.

Proceedings ArticleDOI
12 Oct 1998
TL;DR: A system for direct interaction with a video projection screen using a laser pointer is presented and more complex interaction paradigms are composed from the elementary operations "switch on/off" and pointing of the laser pen.
Abstract: A system for direct interaction with a video projection screen using a laser pointer is presented. The laser point on the screen is captured by a video camera, and its location recognized by image processing techniques. The behavior of the point is translated into signals sent to the mouse input of the computer causing the same reactions as if they came from the mouse. More complex interaction paradigms are composed from the elementary operations "switch on/off" and pointing of the laser pen.
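The core of such a system is locating the laser dot in each camera frame and mapping it to screen coordinates. The Python sketch below is a simplification of that step: it assumes the camera sees exactly the projection area (a real system needs a homography calibration), and an absent dot corresponds to the pen's "switch off" state.

```python
import numpy as np

def laser_to_screen(frame, screen_w, screen_h, thresh=240):
    """Find the laser dot as the brightest region of a grayscale camera
    frame and scale its centroid to screen (mouse) coordinates."""
    ys, xs = np.nonzero(frame >= thresh)
    if len(xs) == 0:
        return None                      # laser switched off: no event
    cam_h, cam_w = frame.shape
    x = int(xs.mean() * screen_w / cam_w)
    y = int(ys.mean() * screen_h / cam_h)
    return (x, y)

frame = np.zeros((120, 160), dtype=np.uint8)
frame[60, 80] = 255                      # laser dot at the frame centre
pos = laser_to_screen(frame, 1024, 768)
```

The returned coordinates would be fed to the operating system's mouse input; the on/off transitions of the dot supply the "switch on/off" primitive from which the paper composes richer interactions.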

Patent
07 Jul 1998
TL;DR: In this paper, a personal computer detects the current state of a video camera with reference to setting information transmitted from a work station, and then displays the predicted image on a CRT monitor.
Abstract: A personal computer detects the current state of a video camera with reference to setting information transmitted thereto from a work station. When a command for controlling the video camera is inputted from a mouse, the personal computer predicts an image, which is assumed to be shot by the video camera upon execution of the command, with reference to both the information transmitted from the work station and the command inputted from the mouse, and then displays the predicted image on a CRT monitor. Referring to the image being displayed on the CRT monitor, a user performs a manipulation to instruct execution of the command in the case of executing the previous input command. As a result, the command inputted previously is transmitted to the work station via the Internet, thereby controlling the video camera and a pan tilter. Thus, the video camera connected via a network or the like can be controlled smoothly.

Patent
06 Feb 1998
TL;DR: In this article, a system and method for detecting hand and item movement patterns is presented, comprising a video camera positioned to view a scene which includes a scanner for scanning items, wherein the video camera generates a sequence of video frames representing activity in the scene, and processing means coupled to the video camera, the processing means performing the steps of identifying regions of a video frame representing a hand and tracking hand movement with respect to the scanner over a plurality of video frames.
Abstract: A system and method for detecting hand and item movement patterns comprising a video camera positioned to view a scene which includes therein a scanner for scanning items, wherein the video camera generates a sequence of video frames representing activity in the scene, processing means coupled to the video camera, the processing means performing steps of identifying regions of a video frame representing a hand; and tracking hand movement with respect to the scanner over a plurality of video frames. Event information descriptive of user activity at the self-service checkout terminal is generated based on the tracking information.

Patent
12 Nov 1998
TL;DR: In this article, the user specifies an area of image to be cut out from the displayed still picture, the image data in the specified area is cut out and recorded as a cutout image.
Abstract: A frame of still picture data is captured at an instant specified by a user from video signals supplied from a given video source, such as a television receiver, a video camera, etc., and the image data is displayed. When the user specifies an area of image to be cut out from the displayed still picture, the image data in the specified area is cut out and recorded as a cutout image. Each cutout image recorded is displayed in the form of an icon. When any of the icons is selected by the user, the corresponding cutout image data is read and pasted in a part to be changed in the original image data. Thus an image can be easily created by user's choice.

Patent
02 Nov 1998
TL;DR: In this paper, an amusement park entertainment system that integrates an image of a patron into an audiovisual presentation is presented, where the patron enters a room where the lighting, sound, and scenery can be controlled.
Abstract: An amusement park entertainment system that integrates an image of a patron into an audiovisual presentation. The patron enters a room where the lighting, sound, and scenery can be controlled. A standardized input sequence is obtained by a computer-automated/assisted process that prompts the user to provide certain views and voiced statements, capturing user images (video camera input, voice parameters, etc.). Alternatively, photo scan or diskette/CD readers can accept user input image data. A one-, two-, or three-dimensional image data representation of the user images is then generated, including the speech parameters of the patron. The image can later be manipulated to change the appearance of the image. This appearance change includes attire and adding tools relevant to the entertainment area used by the patron. The image can then be smoothly integrated into a preexisting audiovisual presentation, thus making the patron a synthetic actor in the production.

Patent
06 Feb 1998
TL;DR: In this article, a tracking system and method for evaluating whether image information for a region cluster of a video frame of a scene represents a hypothesis of an object to be tracked, such as a person, is presented.
Abstract: A tracking system and method for evaluating whether image information for a region cluster of a video frame of a scene represents a hypothesis of an object to be tracked, such as a person. At least one real-world feature of a region cluster corresponding to an object to be tracked is generated. For example, the at least one feature may be at least one possible location of a predetermined portion of an object represented by the region cluster, determined based on a viewing angle of the scene of the video camera. A distance from the video camera to the object corresponding to the region is determined in real-world coordinates for each possible location of the predetermined portion of the region cluster. Real-world size and location information for the region cluster is determined based on the distance. The real-world size and location information for the region cluster is compared with statistical information for the particular type of object to determine a confidence value representing a measure of confidence that the region cluster represents the particular type of object.
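The size-from-distance step can be sketched with a pinhole-camera model. This is an invented simplification, not the patent's geometry (which also uses the camera's viewing angle), and the focal length, mean height, and confidence formula are all example values.

```python
def real_world_size(pixel_height, distance_m, focal_px):
    """Pinhole-camera estimate of an object's real-world height (m)
    from its image height (px) and its distance to the camera (m)."""
    return pixel_height * distance_m / focal_px

def person_confidence(height_m, mean=1.7, std=0.15):
    """Crude confidence that a region is a person: penalise deviation
    of its estimated height from a mean human height (a stand-in for
    the patent's comparison against statistical object information)."""
    return max(0.0, 1.0 - abs(height_m - mean) / (3 * std))

# a 340-px-tall region seen 4 m away by an 800-px-focal-length camera
h = real_world_size(pixel_height=340, distance_m=4.0, focal_px=800)
conf = person_confidence(h)
```

A region whose estimated height falls far outside the human range (say 3 m) would score a confidence of zero and be rejected as a person hypothesis.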

Proceedings ArticleDOI
23 Jun 1998
TL;DR: A system for real-time detection of 2-D features on a reconfigurable computer based on Field Programmable Gate Arrays (FPGA's) and the algorithm employed to select good features is inspired by Tomasi and Kanade's method.
Abstract: We have designed and implemented a system for real-time detection of 2-D features on a reconfigurable computer based on Field Programmable Gate Arrays (FPGA's). We envision this device as the front-end of a system able to track image features in real-time control applications like autonomous vehicle navigation. The algorithm employed to select good features is inspired by Tomasi and Kanade's method. Compared to the original method, the algorithm that we have devised does not require any floating point or transcendental operations, and can be implemented either in hardware or in software. Moreover, it maps efficiently into a highly pipelined architecture, well suited to implementation in FPGA technology. We have implemented the algorithm on a low-cost reconfigurable computer and have observed reliable operation on an image stream generated by a standard NTSC video camera at 30 Hz.
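The float-free feature test can be illustrated in Python. For the 2x2 windowed gradient matrix [[a, b], [b, c]], the integer expression min(a, c) - |b| is a lower bound on the smaller eigenvalue (since sqrt(x^2 + y^2) <= |x| + |y|), so thresholding it needs no square root or floating point. This surrogate is our illustration of the idea, not the paper's exact circuit, and the test image is invented.

```python
def good_features(img, thresh=2):
    """Integer-only corner detector in the spirit of Tomasi-Kanade:
    accept a pixel when min(gxx, gyy) - |gxy|, summed over a 3x3
    window, exceeds a threshold (a lower bound on the min eigenvalue
    of the gradient matrix, computable without sqrt or floats)."""
    h, w = len(img), len(img[0])
    gxx = [[0] * w for _ in range(h)]
    gyy = [[0] * w for _ in range(h)]
    gxy = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            gxx[y][x], gyy[y][x], gxy[y][x] = gx * gx, gy * gy, gx * gy
    feats = []
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            # sum the gradient matrix entries over a 3x3 window
            a = sum(gxx[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            c = sum(gyy[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            b = sum(gxy[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            if min(a, c) - abs(b) > thresh:
                feats.append((x, y))
    return feats

# bright square in one quadrant: its corner at (4, 4) should be detected
img = [[10 if x >= 4 and y >= 4 else 0 for x in range(8)] for y in range(8)]
feats = good_features(img)
```

Straight edges are rejected (one of gxx, gyy stays near zero there), which matches the eigenvalue criterion; the fixed per-pixel arithmetic is what makes the computation map onto a pipelined FPGA datapath.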

Patent
14 Aug 1998
TL;DR: In this paper, a micro video camera is used for rental in a theme park configuration, where visitors are given a personal storage medium like a cassette, and at the end of the ride or attraction, the visitor takes the storage medium along with him or her to the next ride and attraction, and continually adds to the video record of the rides and attractions.
Abstract: Micro video cameras are sufficiently portable, miniature and weather-resistant for hands-free use by an athlete or vacationer who wishes to wear one (or attach it to a base support structure about him or herself) and self-record his or her own amusement, whether indoors or outdoors, underwater or otherwise. Disclosed usages include an instance of a skier recording hands-free his or her own skiing activity from his or her own perspective. An alternative disclosed usage includes one or more ski instructors video recording a student while actually giving the lesson. Another disclosed usage includes a theme park configuration. In this last configuration, visitors are given a personal storage medium like a cassette. At most rides or attractions, a micro video camera is available there for loan to the visitor to get a video record of just that ride or attraction. At the end of the ride or attraction, the visitor takes the storage medium along with him or her to the next ride or attraction, and continually adds to the video record of the rides and attractions. Usage monitoring includes a rental network structure as well as supervision of the rental network for efficient sharing of the rental resource, i.e., the micro video cameras, as well as a rental-inventory control, allocation and accounting data handling system. Making these micro video cameras viable for rental provides usage opportunities for customers who would like to use them but haven't spent the money to actually own one.

Patent
20 Aug 1998
TL;DR: In this paper, a self-operated karaoke recording booth is described, where a video camera positioned at the user's eye level is located on the nonreflective side of a one-way mirror and is directed at a specified performer location through the oneway mirror, which is inclined at a forty-five degree angle relative thereto.
Abstract: In a self-operated karaoke recording booth a user is provided with a selection of background scenes from which to choose and also with the option of having the lyrics of the karaoke selection displayed or suppressed. A video camera positioned at the user's eye level is located on the nonreflective side of a one-way mirror and is directed at a specified performer location through the one-way mirror, which is inclined at a forty-five degree angle relative thereto. Messages and video displays are provided to the user by a video display monitor connected to a computer that faces the reflective side of the one-way mirror and is also located at a forty-five degree angle relative thereto. The system is designed to maintain the visual focus of the performer directly into the lens of the video camera throughout the performance and to combine the video camera images with the background scene in such a way as to avoid a double exposure or phantom image of the performer against the background. The performer can choose to have the lyrics of the selected karaoke composition displayed or suppressed. If the election is for a display, the lyrics are displayed at the center of the viewing screen, directly in line with the video camera.

Patent
20 Oct 1998
TL;DR: The optical touch probe has a first target at the distal end thereof on the contact element in a standard probe and a second target is mounted to the proximal end of the probe and indicates movement and position in the Z coordinate.
Abstract: The optical touch probe has a first target at the distal end thereof, on the contact element of a standard probe. The probe is mounted to a video camera of an optical coordinate measuring machine to image the target on the camera. Movement and position of the target in the X and Y coordinates are indicated by the machine's computer image-processing system. A second target is mounted to the proximal end of the probe and indicates movement and position in the Z coordinate. The second target may obscure a photodetector, but preferably is parfocused on the camera by a light beam parallel to the X,Y plane. Preferably there are two second targets illuminated by orthogonal beams parallel to the X,Y plane. Rotation around the Z axis may then be calculated by the computer when star probes are used. Auto-changing racks are also disclosed for holding multiple probes, a probe holder, and lenses for selective mounting on the camera.

Patent
24 Mar 1998
TL;DR: In this paper, a controlling method for inputting messages to a computer is described, in which, from the image of an object such as a hand against the background of the capture area of a video camera, the following parameters are set up within the computer: (1) the maximum-Y point of the object's image is taken as the cursor; (2) the maximum-X point is taken as the click point; (3) small monitoring sections are set up around the cursor and click coordinates, respectively; and (4) if the distance variation between the cursor and click points exceeds an allowable value, a click is judged to have been operated.
Abstract: A controlling method for inputting messages to a computer, in which, from the image of an object such as a hand against the background of the capture area of a video camera, the following parameters are set up within the computer: (1) the maximum-Y point of the object's image is taken as the cursor; (2) the maximum-X point is taken as the click point; (3) small monitoring sections are set up around the cursor and click coordinates, respectively; and (4) if the distance variation between the cursor and click points exceeds an allowable value, a click is judged to have been operated. The digital image captured by the video camera is transferred directly to a driving means, or through an analog/digital signal converter, to a computer for calculation and control as a mouse.
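The abstract's cursor/click scheme can be sketched directly: take the maximum-Y point of the binary object mask as the cursor, the maximum-X point as the click point, and judge a click when the cursor-to-click distance changes by more than an allowable value between frames. A hedged Python/NumPy sketch (the coordinate convention and the allowance value are assumptions, not taken from the patent):

```python
import numpy as np

def extract_points(mask):
    """Return (cursor, click) as (x, y) tuples from a binary object mask,
    or (None, None) if the mask is empty. The maximum-Y point of the
    object is taken as the cursor, the maximum-X point as the click."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None, None
    cursor = (int(xs[ys.argmax()]), int(ys.max()))  # maximum-Y point
    click = (int(xs.max()), int(ys[xs.argmax()]))   # maximum-X point
    return cursor, click

def click_operated(prev_dist, cursor, click, allowance=12.0):
    """Judge a click: the cursor-to-click distance has varied by more
    than the allowance since the previous frame. Returns the new
    distance (to feed into the next call) and the judgement."""
    dist = float(np.hypot(click[0] - cursor[0], click[1] - cursor[1]))
    operated = prev_dist is not None and abs(dist - prev_dist) > allowance
    return dist, operated
```

In use, a driver loop would segment the hand from each frame (e.g. by background subtraction), call `extract_points`, and carry the returned distance forward between frames.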

Patent
18 Mar 1998
TL;DR: In this paper, a video camera system is provided in which the camera head is powered at least in part by electrical energy converted from optical energy provided by a light source, and communication between camera head and control circuitry thereof occurs by means of a wireless communications interface.
Abstract: A video camera system is provided in which the camera head is powered at least in part by electrical energy converted from optical energy provided by a light source. In one implementation, communication between the camera head and control circuitry thereof occurs by means of a wireless communications interface.

Patent
Harumi Aoki1
25 Jun 1998
TL;DR: In this article, a depression of a single switch in accordance with a mode-selection switch is executed to erase a visible image from an electro-developing recording medium, where the developed image can be thermally erased from the medium by using an electric heater.
Abstract: An electronic still video camera has an electro-developing recording medium. As soon as an optical image is formed on the medium, the image is recorded and developed as a visible image therein. The developed image is electronically read as image data by a line sensor, and the read image data may be stored in an IC memory card, a floppy disk, a hard disk or the like. Optionally, the read image data may be transferred from the camera to an external device such as a computer, a TV monitor or the like. The developed image can be thermally erased from the medium by using an electric heater. One of these operations is executed by a depression of a single switch in accordance with a mode-selection switch.

Proceedings ArticleDOI
16 Aug 1998
TL;DR: A visual surveillance and monitoring system based on omnidirectional imaging and view-dependent image generation from omnidirectional video streams; using a single camera with a hyperboloidal mirror gives the advantage of lower latency in looking around a large field of view.
Abstract: This paper describes a visual surveillance and monitoring system based on omnidirectional imaging and view-dependent image generation from omnidirectional video streams. While conventional visual surveillance and monitoring systems usually consist of either a number of fixed regular cameras or a mechanically controlled camera, the proposed system has a single omnidirectional video camera using a hyperboloidal mirror. This approach has the advantage of lower latency in looking around a large field of view. In the prototype system developed, the viewing direction is determined by tracking the viewer's head, by using a mouse, or by tracking a moving object in the omnidirectional image.
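The view-dependent image generation step can be approximated by unwrapping the circular omnidirectional frame into a panoramic strip. The sketch below uses a plain polar-to-Cartesian mapping with nearest-neighbour sampling and deliberately ignores the exact hyperboloidal-mirror geometry described in the paper; the annulus radii and output size are illustrative assumptions:

```python
import numpy as np

def unwrap_panorama(omni, center, r_min, r_max, out_w=720, out_h=120):
    """Simplified polar unwrapping of a circular omnidirectional frame
    into a panoramic strip. omni is a 2-D image array; center is the
    (cx, cy) of the mirror circle; r_min..r_max is the usable annulus."""
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_min, r_max, out_h)
    # For each output pixel, find the source pixel on the annulus.
    xs = (cx + radii[:, None] * np.cos(thetas)[None, :]).round().astype(int)
    ys = (cy + radii[:, None] * np.sin(thetas)[None, :]).round().astype(int)
    xs = np.clip(xs, 0, omni.shape[1] - 1)
    ys = np.clip(ys, 0, omni.shape[0] - 1)
    return omni[ys, xs]
```

The low-latency property follows from this being a pure resampling step: changing the viewing direction only shifts which columns of the strip are displayed, with no mechanical camera motion.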