
Showing papers on "Interactive video" published in 2000


Patent
16 May 2000
TL;DR: In this paper, an interactive video distribution system comprising a plurality of interactive video subscriber units (22), a head end facility (54), and a video distribution medium (56) is described; the head end facility is configured to transmit advertisements (40, 42) in connection with an interactive video program (36) and to receive requests from subscriber units to register the advertisements in a menu (116).
Abstract: An interactive video distribution system includes a plurality of interactive video subscriber units (22), a head end facility (54), and a video distribution medium (56). The head end facility (54) is configured to transmit advertisements (40, 42) in connection with an interactive video program (36) and receive requests from one of the subscriber units (22') to register the advertisements (40, 42) in a menu (116). In response to each of the requests, the head end facility (54) generates entries (118, 144) associated with the advertisements (40, 42) in the menu (116). The menu (116) is communicated in a first video still image (134) to the subscriber unit (22') through the medium (56). The head end facility (54) is further configured to obtain a selection request for one of the entries (118, 144) and provide supplementary advertising information (148) associated with the selected one of the advertisements (40, 42) to the subscriber unit (22').
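As a rough illustration of the workflow the abstract describes, the sketch below models a head end that registers advertisements into a per-subscriber menu and later returns supplementary information for a selected entry. The class and method names (`HeadEnd`, `register_advertisement`, `select_entry`) are invented for this sketch; the patent does not define an API.

```python
# Minimal sketch of the head-end menu workflow described in the patent.
# All names are illustrative; the patent does not specify an API.

class HeadEnd:
    def __init__(self):
        # advertisement id -> supplementary information
        self.supplementary_info = {}
        # subscriber id -> ordered list of registered advertisement ids (the "menu")
        self.menus = {}

    def transmit_advertisement(self, ad_id, supplementary):
        """Advertise alongside a program and remember its supplementary info."""
        self.supplementary_info[ad_id] = supplementary

    def register_advertisement(self, subscriber_id, ad_id):
        """Subscriber asks to register an advertisement; create a menu entry."""
        self.menus.setdefault(subscriber_id, []).append(ad_id)

    def render_menu(self, subscriber_id):
        """Return the menu as it would be composed into a still image."""
        return [f"{i + 1}. {ad_id}" for i, ad_id in
                enumerate(self.menus.get(subscriber_id, []))]

    def select_entry(self, subscriber_id, index):
        """Subscriber selects a menu entry; return supplementary advertising info."""
        ad_id = self.menus[subscriber_id][index]
        return self.supplementary_info[ad_id]


if __name__ == "__main__":
    head_end = HeadEnd()
    head_end.transmit_advertisement("ad-40", "Extended product details for ad 40")
    head_end.register_advertisement("subscriber-22", "ad-40")
    print(head_end.render_menu("subscriber-22"))
    print(head_end.select_entry("subscriber-22", 0))
```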

256 citations


Patent
26 May 2000
TL;DR: In this paper, a system is described for providing an integrated, efficient and consistent production environment for the shared development of multimedia productions, including feature animation movies, computerized animation films, interactive video games, interactive movies, etc.
Abstract: A system is described for providing an integrated, efficient and consistent production environment for the shared development of multimedia productions. Examples of multimedia productions include feature animation films, computerized animation films, interactive video games, interactive movies, and other types of entertainment and/or educational multimedia works. The development of such multimedia products typically involves heterogeneous and diverse forms of multimedia data. Further, the production tools and equipment that are used to create and edit such diverse multimedia data are in and of themselves diverse and often incompatible with each other. The incompatibility between such development tools can be seen in terms of their methods of operation, operating environments, and the types and/or formats of data on which they operate. Disclosed herein is a complete solution that provides a consistent and integrated multimedia production environment in the form of common utilities, methods and services. The common utilities, methods and services disclosed herein are used to integrate the diverse world of multimedia productions. By using the common utilities, methods and services provided, diverse multimedia production tools can access, store, and share data in a multiple-user production environment in a consistent, safe, efficient and predictable fashion.
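The "common utilities, methods and services" are only described abstractly, so the sketch below illustrates just one plausible such service: a shared asset store with check-out/check-in locking so diverse tools can share data safely in a multi-user environment. The `AssetStore` interface is an assumption for illustration, not the patent's design.

```python
# Hypothetical sketch of a shared asset store with simple check-out locking,
# illustrating the kind of common service the patent describes in the abstract.

class AssetStore:
    def __init__(self):
        self.assets = {}       # asset name -> opaque data (any tool-specific format)
        self.checked_out = {}  # asset name -> user currently holding the lock

    def store(self, name, data):
        self.assets[name] = data

    def check_out(self, name, user):
        """Grant exclusive write access to one user at a time."""
        if self.checked_out.get(name) not in (None, user):
            raise RuntimeError(f"{name} is checked out by {self.checked_out[name]}")
        self.checked_out[name] = user
        return self.assets[name]

    def check_in(self, name, user, data):
        """Accept the edited asset and release the lock."""
        if self.checked_out.get(name) != user:
            raise RuntimeError(f"{user} does not hold the lock on {name}")
        self.assets[name] = data
        del self.checked_out[name]


if __name__ == "__main__":
    store = AssetStore()
    store.store("character_rig", {"format": "proprietary-rig", "version": 1})
    rig = store.check_out("character_rig", "animator_a")
    rig["version"] = 2
    store.check_in("character_rig", "animator_a", rig)
    print(store.assets["character_rig"])
```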

209 citations


Patent
10 Aug 2000
TL;DR: In this article, a forward error correction (FEC) technique is proposed for interactive video transmission, which is based on the recovery from error spread using continuous updates (RESCU).
Abstract: Real-time interactive video transmission in the current Internet has mediocre quality because of high packet loss rates. Loss of packets belonging to a video frame is evident not only in the reduced quality of that frame but also in the propagation of that distortion to successive frames. This error propagation problem is inherent in any motion-based video codec because of the interdependence of encoded video frames. Since packet losses in the best-effort Internet environment cannot be prevented, minimizing the impact of these packet losses to the final video quality is important. A new forward error correction (FEC) technique effectively alleviates error propagation in the transmission of interactive video. The technique is based on a recently developed error recovery scheme called Recovery from Error Spread using Continuous Updates (RESCU). RESCU allows transport level recovery techniques previously known to be infeasible for interactive video transmission applications to be successfully used in such applications. The FEC technique can be very useful when the feedback channel from the receiver is highly limited, or transmission delay is high. Both simulation and Internet experiments indicate that the FEC technique effectively alleviates the error spread problem and is able to sustain much better video quality than H.261 or other conventional FEC schemes under various packet loss rates.
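As a generic illustration of how packet-level FEC can mask a loss without waiting for feedback, the sketch below adds one XOR parity packet per group of equal-length video packets and rebuilds a single lost packet at the receiver. This shows the basic parity idea only; the paper's RESCU-based FEC is more elaborate.

```python
# Generic single-parity FEC sketch: one XOR parity packet per group of
# equal-length video packets lets the receiver rebuild any one lost packet.
# This illustrates the general idea only; it is not the paper's RESCU scheme.

def make_parity(packets):
    """XOR all packets together to form one parity packet."""
    parity = bytearray(len(packets[0]))
    for packet in packets:
        for i, byte in enumerate(packet):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet (marked None) from the parity packet."""
    missing = received.index(None)
    rebuilt = bytearray(parity)
    for j, packet in enumerate(received):
        if j != missing:
            for i, byte in enumerate(packet):
                rebuilt[i] ^= byte
    return missing, bytes(rebuilt)


if __name__ == "__main__":
    group = [b"frame-part-1", b"frame-part-2", b"frame-part-3"]
    parity = make_parity(group)
    lossy = [group[0], None, group[2]]          # packet 1 lost in the network
    index, rebuilt = recover(lossy, parity)
    assert rebuilt == group[index]
    print(f"recovered packet {index}: {rebuilt}")
```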

201 citations


Journal ArticleDOI
TL;DR: For patients with herniated disks, an interactive, diagnosis-specific videodisk program appears to facilitate decision making and may help to ensure informed consent; for patients with spinal stenosis, it reduced the surgery rate without diminishing patient outcomes.
Abstract: Background. Back surgery rates are rapidly rising in the United States. This surgery is usually elective, so patient preferences are important in the treatment decision. Objectives. The objective of this study was to determine the impact on outcomes and surgical choices of an interactive, diagnosis-specific...

198 citations


Patent
10 Oct 2000
TL;DR: A video game system includes an output screen, a video game controller, video game software, and an interactive video game controller adapter, as mentioned in this paper; the adapter is attached to the controller and is shaped to represent the unique characteristics of a particular video game.
Abstract: A video game system includes an output screen, a video game controller, video game software, and an interactive video game controller adapter. The video game controller has control buttons for inputting commands to manipulate images output to the screen. The video game software interfaces between the video game controller and the screen. The interactive video game controller adapter is attached to the video game controller and is shaped to represent the unique characteristics of a particular video game. The adapter has input controls shaped to simulate the real-life activity emulated by the video game. The appropriate control buttons of the video game controller are activated when the corresponding input controls of the adapter are activated.

160 citations


Patent
05 Jan 2000
TL;DR: In this paper, a profiling tool analyzes typical video applications off-line in order to generate profiles of different types of video applications that are then accessed in real-time by a call admission manager responsible for controlling the admission of new video application sessions as well as the assignment of admitted applications to specific available video encoders.
Abstract: When two or more different video streams are compressed for concurrent transmission of multiple compressed video bitstreams over a single shared communication channel, control over both (1) the transmission of data over the shared channel and (2) the compression processing that generates the bitstreams is exercised taking into account the differing levels of latency required for the corresponding video applications. For example, interactive video games typically require lower latency than other video applications such as video streaming, web browsing, and electronic mail. A multiplexer and traffic controller takes these differing latency requirements, along with bandwidth and image fidelity requirements, into account when controlling both traffic flow and compression processing. In addition, an off-line profiling tool analyzes typical video applications off-line in order to generate profiles of different types of video applications that are then accessed in real-time by a call admission manager responsible for controlling the admission of new video application sessions as well as the assignment of admitted applications to specific available video encoders, which themselves may differ in video compression processing power as well as in the degree to which they allow external processors (like the multiplexer and traffic controller) to control their internal compression processing.
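A minimal sketch of latency-aware multiplexing under assumed latency budgets: each application class carries a deadline, and the multiplexer transmits the queued packet with the earliest deadline first, so game packets jump ahead of streaming and e-mail traffic. The budgets and class names below are made up for illustration and are not taken from the patent.

```python
import heapq

# Earliest-deadline-first multiplexing of packets from applications with
# different latency budgets, loosely illustrating the patent's point that the
# shared channel should favour low-latency traffic such as interactive games.
# The latency budgets below are made-up values for illustration.

LATENCY_BUDGET_MS = {"game": 50, "streaming": 500, "email": 5000}

def multiplex(arrivals):
    """arrivals: list of (arrival_time_ms, app_class, packet_id).
    Returns (app_class, packet_id) pairs in transmission order, assuming all
    packets are already queued when transmission starts."""
    queue = []
    for arrival, app, packet_id in arrivals:
        deadline = arrival + LATENCY_BUDGET_MS[app]
        heapq.heappush(queue, (deadline, packet_id, app))
    order = []
    while queue:
        _, packet_id, app = heapq.heappop(queue)
        order.append((app, packet_id))
    return order


if __name__ == "__main__":
    arrivals = [(0, "streaming", "s1"), (5, "game", "g1"),
                (6, "email", "e1"), (10, "game", "g2")]
    print(multiplex(arrivals))  # game packets jump ahead of streaming and email
```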

139 citations


Patent
30 May 2000
TL;DR: In this article, a method and system for providing a user interface to real-time interactive video services is presented, where the bettor is presented information concerning the betting opportunities and the betting window.
Abstract: A method and system for providing a user interface to real-time interactive video services. The method and system allow interactive input from a viewer of the video services simultaneously with viewing the video services. The method and system also allow an interactive response to the viewer from the interactive application. To present betting information in an attractive format and maximize the information available to the bettor, a user interface to the real-time service is required. With regard to real-time betting, the bettor is presented with information concerning the betting opportunities and the betting window. Since most bettors prefer to have as much information as possible prior to betting, they prefer to wait until the last possible moment to bet. The disclosed embodiments provide the bettor with betting window information and the latest information concerning the prospective wagers. Moreover, the user interface is designed to provide such information in a manner that both attracts the attention of the bettor and presents the information in a useful format that is easy to follow and navigate. The betting server checks the data transmission speed so that all users can have an adequate betting window. Users will receive confirmation of attempted bets. In WAP-equipped mobile stations, betting can be accomplished across a wireless Internet connection. For example, bettors using GSM mobile stations can receive information by short message services through GSM SC.

85 citations


Patent
24 Oct 2000
TL;DR: In this paper, an object of interest is extracted from a video stream and the object from the video stream is analyzed and manipulated to obtain a synthetic character, and a virtual video is assembled using the synthetic character.
Abstract: In a system for video processing, an object of interest is extracted from a video stream. The object from said video stream is analyzed and manipulated to obtain a synthetic character. A virtual video is assembled using the synthetic character.

80 citations


Patent
17 May 2000
TL;DR: An interactive video display and computer system which provides changing video images in response to a combination of signals received from repetitive body movements and voice commands is described in this paper.
Abstract: An interactive video display and computer system which provides changing video images in response to a combination of signals received from repetitive body movements and voice commands. The system comprises a pace sensing apparatus (10), worn on the user's body, which senses the repetitive body motion, translates that motion into a signal and transmits the pace signal to a signal receiving means (20), which translates the signal into a signal readily recognized by a computer and then delivers the signal to the computer system (40). The system further comprises a voice receiving mechanism (30) for receiving voice commands and transmitting a voice signal to the computer system (40). The user controls the perceived rate of motion and perceived direction of travel, as well as other aspects of the video image, by pace and voice.

77 citations


Patent
13 Dec 2000
TL;DR: In this paper, a viewer of an interactive video casting system can be presented with promotions having purchase offers or offers of credits toward future purchases by correlating a program being viewed with user profile information and product information.
Abstract: A viewer of an interactive video casting system can be presented with promotions having purchase offers or offers of credits toward future purchases. These promotions can be provided by correlating a program being viewed with user profile information and product information. If presented with the promotion while viewing a program, the viewer can buy products/services offered in the promotion, or defer the promotion for future viewing or as a credit. Credits toward a future purchase can be maintained in a storage area for the user and applied to a later purchase. Promotions can also be correlated to other interactive video casting tools or interfaces, such as the user's calendar, so that promotions relevant to calendar entries can be presented to the user.
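The correlation step, matching the program being viewed against the viewer's profile and a product catalogue, might look like the sketch below. The field names (`genre`, `interests`, `tags`) and the scoring rule are assumptions made for illustration.

```python
# Hypothetical correlation of program metadata, user profile and product
# information to pick promotions, illustrating the abstract's description.
# Field names and scoring are invented for the sketch.

def select_promotions(program, profile, catalogue, limit=2):
    """Score each product by overlap with the program genre and the viewer's
    interests, and return the best-scoring promotions."""
    scored = []
    for product in catalogue:
        score = 0
        if program["genre"] in product["genres"]:
            score += 2                       # relevant to what is on screen now
        score += len(set(product["tags"]) & set(profile["interests"]))
        if score > 0:
            scored.append((score, product["name"]))
    scored.sort(reverse=True)
    return [name for _, name in scored[:limit]]


if __name__ == "__main__":
    program = {"title": "Grand Prix Review", "genre": "motorsport"}
    profile = {"interests": ["cars", "travel"]}
    catalogue = [
        {"name": "racing game", "genres": ["motorsport"], "tags": ["cars"]},
        {"name": "cookbook", "genres": ["cooking"], "tags": ["food"]},
        {"name": "road atlas", "genres": ["travel"], "tags": ["travel", "cars"]},
    ]
    print(select_promotions(program, profile, catalogue))
```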

66 citations


Journal ArticleDOI
TL;DR: The study indicates that RESCU is effective in alleviating the error spread problem and can sustain much better video quality with less bit overhead than existing video error recovery techniques under various network environments.
Abstract: Real-time interactive video transmission in the current Internet has mediocre quality because of high packet loss rates. Loss of packets in a video frame manifests itself not only in the reduced quality of that frame but also in the propagation of that distortion to successive frames. This error propagation problem is inherent in any motion compensation-based video codec. In this paper, we present a new error recovery scheme, called recovery from error spread using continuous updates (RESCU), that effectively alleviates error propagation in the transmission of interactive video. The main benefit of the RESCU scheme is that it allows more time for transport-level recovery such as retransmission and forward error correction to succeed while effectively masking out delays in recovering lost packets without introducing any playout delays, thus making it suitable for interactive video communication. Through simulation and real Internet experiments, we study the effectiveness and limitations of our proposed techniques and compare their performance to that of existing video error recovery techniques including H.263+ (NEWPRED). The study indicates that RESCU is effective in alleviating the error spread problem and can sustain much better video quality with less bit overhead than existing video error recovery techniques under various network environments.

Patent
20 Jul 2000
TL;DR: In this article, a device for use as an aid to computer users is described, coupled to an associated personal computer and performs various functions which provide the user with concurrent explanations of running software, searches and multimedia integration without interfering with the functioning of the associated computer or its running software.
Abstract: A device for use as an aid to computer users. The device is coupled to an associated personal computer and performs various functions which provide the user with concurrent explanations of running software, searches and multimedia integration without interfering with the functioning of the associated computer or its running software.

Journal ArticleDOI
TL;DR: The results indicate that a well-designed interactive video application can motivate, save time, and help address learner weaknesses, especially for students most in need of assistance.
Abstract: Research on computer-assisted and video-based educational techniques has almost invariably found that these media have positive effects on learner motivation. This article presents a study of integrated computer technology which incorporates pace-controlled syntactic chunking in a captioned video presentation. The results indicate that a well-designed interactive video application can motivate, save time, and help address learner weaknesses, especially for students most in need of assistance. In addition to increasing both student motivation and learning efficiency over time, the program supplied the least able students with the means to better understand and respond to foreign language discourse. The results achieved in this study were quite positive. Weaker students in the experimental group performed beyond their apparent ability levels. Additionally, both the teachers and the students reacted favorably to working with the technology. Finally, the experimental group was able to complete tasks more quickly...

Proceedings ArticleDOI
10 Sep 2000
TL;DR: A robust H.263+ video codec suitable for real-time interactive and multicast Internet applications is proposed, and two techniques are proposed to minimise temporal propagation: selective FEC of the motion information and the use of periodic reference frames.
Abstract: Any real-time interactive video coding algorithm used over the Internet needs to be able to cope with packet loss, since the existing error recovery mechanisms are not suitable for real-time data. In this paper, a robust H.263+ video codec suitable for real-time interactive and multicast Internet applications is proposed. Initially, the robustness to packet loss of H.263 video packetised according to the RTP-H.263+ payload format specifications is assessed. Two techniques are proposed to minimise temporal propagation: selective FEC of the motion information and the use of periodic reference frames. It is shown that when these two techniques are combined, the robustness to loss of H.263+ video is greatly improved.
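The value of periodic reference frames can be seen with a toy propagation model: in a plain frame-to-frame prediction chain a single loss corrupts every later frame, while FEC-protected reference frames every R frames confine the damage to one interval. The sketch below counts corrupted frames under both assumptions; the parameters are illustrative, not results from the paper.

```python
# Toy model of temporal error propagation: a lost frame corrupts every
# following frame in a prediction chain until the decoder reaches a clean
# reference frame. With FEC-protected reference frames every R frames,
# the damage is confined to one interval. Parameters are illustrative.

def corrupted_frames(num_frames, lost_frame, reference_period=None):
    """Count corrupted frames after `lost_frame` is dropped.
    reference_period=None models plain frame-to-frame prediction with no
    protected references; otherwise frames 0, R, 2R, ... are assumed to be
    received intact (e.g. via selective FEC) and stop the propagation."""
    corrupted = 0
    for f in range(lost_frame, num_frames):
        if (reference_period is not None and f > lost_frame
                and f % reference_period == 0):
            break  # decoder resynchronises on the next protected reference
        corrupted += 1
    return corrupted


if __name__ == "__main__":
    frames, loss = 300, 37
    print("no protected references :", corrupted_frames(frames, loss))
    print("reference every 10 frames:", corrupted_frames(frames, loss, 10))
```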

Journal ArticleDOI
TL;DR: This study investigated graduate students' satisfaction and perception of opportunities for critical thinking in distance education courses that utilized a two‐way audio/video system, an aspect of learning that has received little attention in the distance education literature.
Abstract: Critical thinking is an important component of learning, yet it has received little attention in distance education literature. The purpose of this study was to investigate graduate students' satisfaction and perception of opportunities for critical thinking in distance education courses that utilized a two‐way audio/video system.

Patent
08 Aug 2000
TL;DR: In this paper, an interactive video display system includes a content provider streaming a primary video stream and an annotation data stream, and a viewing station having a video display apparatus for displaying both the primary video stream and the annotation data stream.
Abstract: An interactive video display system includes a content provider streaming a primary video stream and an annotation data stream, and a viewing station having a video display apparatus for displaying both the primary video stream and the annotation data stream. The annotation stream comprises an animated graphic (51) in a suitable graphic format such as GIF. The animated graphic may move in the display in any direction, directed by data in the annotation stream, and may be associated with an entity (49) displayed from the primary video stream, moving with that image entity. In response to user interaction with an animated graphic hyperlink, alternative display entities may be sent in the annotation data stream. These alternative display entities may instead be sent along with the primary display entities, with a specification as to how they are to be displayed on the basis of user interaction.
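A minimal sketch of the annotation-stream idea: per-frame records position an animated graphic over an entity in the primary video and name the alternative content shown if its hyperlink is activated. The record layout is invented for this sketch; the patent does not specify a concrete format.

```python
# Illustrative annotation-stream records: each record positions an animated
# graphic over the primary video for one frame and names the alternative
# content shown if the viewer activates its hyperlink. The layout is a
# hypothetical one for the sketch, not the patent's format.

annotation_stream = [
    {"frame": 0, "graphic": "logo.gif", "x": 120, "y": 80,
     "on_click": "product-page-entity-49"},
    {"frame": 1, "graphic": "logo.gif", "x": 124, "y": 82,
     "on_click": "product-page-entity-49"},
    {"frame": 2, "graphic": "logo.gif", "x": 129, "y": 85,
     "on_click": "product-page-entity-49"},
]

def composite(frame_number, clicked=False):
    """Describe what the viewing station overlays on the given primary frame."""
    for record in annotation_stream:
        if record["frame"] == frame_number:
            if clicked:
                return f"show alternative display entity: {record['on_click']}"
            return (f"draw {record['graphic']} at "
                    f"({record['x']}, {record['y']}) over primary frame "
                    f"{frame_number}")
    return f"primary frame {frame_number} with no annotation"


if __name__ == "__main__":
    print(composite(1))
    print(composite(2, clicked=True))
```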

Journal ArticleDOI
TL;DR: This article addresses the two essential elements of distance learning: the technology and the pedagogy through the four components--information, support, resources, and relationships--of a work effectiveness model.
Abstract: This article addresses the two essential elements of distance learning: the technology and the pedagogy. Both areas are discussed through the four components--information, support, resources, and relationships--of a work effectiveness model. Drawing from recent literature and their experience, the authors offer strategies for making interactive video technology "invisible" while engaging students at a distance. Students experience connections when faculty know how to manage the equipment, plan ahead, and consciously construct strategies for creating relationships across the miles.

Journal ArticleDOI
TL;DR: Based on two years of experience with a distance MSW program, the authors present an evolving model for the development, management, and evaluation of distance education graduate programs that use interactive video technology.
Abstract: Based on two years of experience with a distance MSW program, the authors present an evolving model for the development, management, and evaluation of distance education graduate programs that use interactive video technology. The model includes five core components: (a) accreditation standards compliance, (b) resource requirements, (c) curriculum adaptation, (d) faculty development, and (e) program evaluation. While not an exhaustive listing, these components are advanced as central to the effective planning and administration of distance education programs in social work.

Proceedings ArticleDOI
29 Aug 2000
TL;DR: An interactive version of stream tapping is presented and it is shown that stream tapping can use as little as 10% of the bandwidth required by dedicating a unique stream of data to each client request.
Abstract: The key performance bottleneck for a video-on-demand (VOD) server is bandwidth, which controls the number of clients the server can simultaneously support. Previous work (Carter & Long, 1997, 1999) has shown that a strategy called "stream tapping" can make efficient use of bandwidth when clients are not allowed to interact (through VCR-like controls) with the video they are viewing. In this paper, we present an interactive version of stream tapping and analyze its performance through the use of discrete-event simulation. In particular, we show that stream tapping can use as little as 10% of the bandwidth required by dedicating a unique stream of data to each client request.
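The bandwidth advantage of stream tapping can be sketched with a back-of-the-envelope model: a client arriving t minutes after an earlier full stream started needs its own stream only for those first t minutes and taps the existing stream for the rest. The model below is a simplification of the scheme analysed in the paper, with made-up arrival times.

```python
# Back-of-the-envelope comparison of dedicated streams versus simple stream
# tapping. Each figure is "stream-minutes" the server must transmit. This is
# a simplification of the scheme analysed in the paper; arrival times are
# made up for the example.

MOVIE_MINUTES = 120

def dedicated_cost(arrivals):
    """Every client gets its own full-length stream."""
    return len(arrivals) * MOVIE_MINUTES

def tapping_cost(arrivals):
    """The first client in a batch gets a full stream; later clients only
    need a catch-up stream covering the part they missed and tap the rest."""
    cost = 0
    batch_start = None
    for t in sorted(arrivals):
        if batch_start is None or t - batch_start >= MOVIE_MINUTES:
            batch_start = t            # new full stream
            cost += MOVIE_MINUTES
        else:
            cost += t - batch_start    # catch-up ("tap") stream only
    return cost


if __name__ == "__main__":
    arrivals = [0, 4, 9, 15, 31, 50]   # client request times in minutes
    print("dedicated:", dedicated_cost(arrivals), "stream-minutes")
    print("tapping  :", tapping_cost(arrivals), "stream-minutes")
```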

Journal ArticleDOI
TL;DR: Seven barriers to interaction, which focused on ICV technology limitations and student situational and dispositional characteristics, were identified and implications for practice and future research are discussed.
Abstract: Fully interactive learning environments have been demonstrated to increase student satisfaction, learning, and retention in the educational environment. Using Moore's (1989) framework for interaction in distance education settings, this study investigated participant interactions in a course delivered to five sites by interactive compressed video (ICV) technology. The purpose of this study was two‐fold—to determine the extent to which participants took advantage of opportunities for interaction and to note their perceived barriers to interaction. The participants failed to take full advantage of the opportunities for interaction provided in the course context. Seven barriers to interaction, which focused on ICV technology limitations and student situational and dispositional characteristics, were identified. Implications for practice and future research are discussed at the conclusion of the study.

Proceedings ArticleDOI
22 Dec 2000
TL;DR: Fugue as discussed by the authors is a system that copes with the challenges of interactive video on hand-held, mobile devices through a division along time scales of adaptation, which is structured as three separate controllers: transmission, video and preference.
Abstract: Providing interactive video on hand-held, mobile devices is extremely difficult. These devices are subject to processor, memory, and power constraints, and communicate over wireless links of rapidly varying quality. Furthermore, the size of encoded video is difficult to predict, complicating the encoding task. We present Fugue, a system that copes with these challenges through a division along time scales of adaptation. Fugue is structured as three separate controllers: transmission, video and preference. This decomposition provides adaptation along different time scales: per-packet, per-frame, and per-video. The controllers are provided at modest time and space costs compared to the cost of video encoding. We present simulations confirming the efficacy of our transmission controller, and compare our video controller to several alternatives. We find that, in situations amenable to adaptive compression, our scheme provides video quality equal to or better than the alternatives at a comparable or substantially lower computational cost. We also find that distortion, the metric commonly used to compare mobile video, under-values the contribution smooth motion makes to perceived video quality.
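Fugue's division along time scales can be sketched as three cooperating controllers consulted per packet, per frame, and per video. The decision rules below are placeholders invented for illustration; they do not reproduce Fugue's actual algorithms.

```python
# Sketch of a three-level adaptation split in the spirit of Fugue's
# transmission / video / preference controllers. The decision rules here are
# placeholders; the paper's controllers are far more sophisticated.

class PreferenceController:
    """Per-video: turn a user preference into target frame rate and quality."""
    def targets(self, preference):
        return {"smooth motion": (25, 0.4), "sharp frames": (10, 0.9)}[preference]

class VideoController:
    """Per-frame: scale quality so the frame fits the current byte budget."""
    def frame_quality(self, target_quality, bytes_budget, typical_frame_bytes):
        scale = min(1.0, bytes_budget / typical_frame_bytes)
        return round(target_quality * scale, 2)

class TransmissionController:
    """Per-packet: estimate the byte budget from recent wireless throughput."""
    def __init__(self):
        self.recent_throughput = []
    def observe(self, bytes_acked):
        self.recent_throughput = (self.recent_throughput + [bytes_acked])[-8:]
    def budget(self):
        return sum(self.recent_throughput) / max(len(self.recent_throughput), 1)


if __name__ == "__main__":
    prefs, coder, net = PreferenceController(), VideoController(), TransmissionController()
    for acked in [1400, 900, 1100, 600]:   # per-packet feedback from the link
        net.observe(acked)
    frame_rate, quality = prefs.targets("smooth motion")
    print("frame rate:", frame_rate,
          "quality:", coder.frame_quality(quality, net.budget(), 1500))
```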

Proceedings ArticleDOI
30 Jul 2000
TL;DR: The cooperative playback systems are introduced, which enable explicitly grouped users to jointly work and cooperatively control playbacks of on-demand multimedia sessions and a multicast control streaming protocol based on an extension of RTSP adapted on top of LRMP is presented.
Abstract: IP multicast has fueled an assortment of large-scale applications over the Internet ranging from interactive video conferencing to whiteboards to video recording on-demand systems. Such applications are mainly based on the lightweight session model and on Internet standard protocols. In particular, video recording on-demand systems allow a remote client to request recording of an advertised multimedia session and playback of sessions previously archived. They are primarily designed for serving the needs of a single user who wishes, for instance, to watch a movie or to attend a recorded seminar. No groupware support is normally offered. We introduce the cooperative playback systems, which enable explicitly grouped users to jointly work and cooperatively control playbacks of on-demand multimedia sessions. We also present a multicast control streaming protocol based on an extension of RTSP adapted on top of LRMP, and illustrate our Java-enabled cooperative playback system, ViCRO^c.
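A minimal sketch of cooperative playback control: members of an explicitly formed group share one session state, and any member's play, pause, or seek request updates the state every member sees, standing in for what a multicast control channel would do. The class below is an in-memory illustration, not the ViCRO^c protocol.

```python
# In-memory stand-in for cooperative playback control: one shared session
# state per group, and any member's command updates the state seen by all.
# A real system (like the paper's RTSP-over-LRMP design) would multicast the
# command; here the shared object plays that role.

class CooperativePlayback:
    def __init__(self, members):
        self.members = list(members)
        self.state = {"status": "paused", "position_s": 0.0}
        self.log = []

    def control(self, member, command, position_s=None):
        """Apply one member's command to the shared session."""
        if member not in self.members:
            raise ValueError(f"{member} is not in the group")
        if command == "seek":
            self.state["position_s"] = position_s
        elif command in ("play", "pause"):
            self.state["status"] = "playing" if command == "play" else "paused"
        else:
            raise ValueError(command)
        self.log.append((member, command, dict(self.state)))
        return {m: dict(self.state) for m in self.members}  # every member's view


if __name__ == "__main__":
    session = CooperativePlayback(["alice", "bob"])
    session.control("alice", "seek", position_s=90.0)
    views = session.control("bob", "play")
    print(views["alice"])   # both members see the same playback state
```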

Proceedings ArticleDOI
30 Oct 2000
TL;DR: This paper presents an interactive video visualization technique called video cubism, where video data is considered to be a block of three dimensional data where frames of video data comprise the third dimension.
Abstract: This paper presents an interactive video visualization technique called video cubism. With this technique, video data is considered to be a block of three dimensional data where frames of video data comprise the third dimension. The user can observe and manipulate a cut plane or cut sphere through the video data. An external real-time video source may also be attached to the video cube. The visualization leads to images that are aesthetically interesting as well as being useful for image analysis.
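The core idea, treating video as a (time, height, width) block and cutting through it, is easy to reproduce with NumPy: a fixed-column slice gives a time-versus-row image, and a cut that advances one column per frame can follow motion through the block. The synthetic moving-bar video below stands in for real frames.

```python
import numpy as np

# Video-as-a-3D-block sketch in the spirit of "video cubism": stack frames
# into a (time, height, width) volume and cut planes through it. The synthetic
# moving-bar video below stands in for real frames.

frames, height, width = 60, 48, 64
video = np.zeros((frames, height, width), dtype=np.uint8)
for t in range(frames):
    video[t, :, t % width] = 255          # a bright vertical bar drifting right

# Axis-aligned cut: fix one image column and look at it over time.
# Rows of this slice are frames, columns are image rows.
column_over_time = video[:, :, 20]        # shape (frames, height)

# Cut that advances one column per frame: it tracks the moving bar, so the
# whole slice is bright even though each frame has only one bright column.
tracking_cut = np.stack([video[t, :, t % width] for t in range(frames)])

print("column-over-time slice:", column_over_time.shape,
      "bright pixels:", int((column_over_time == 255).sum()))
print("tracking cut:", tracking_cut.shape,
      "bright pixels:", int((tracking_cut == 255).sum()))
```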

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the work-site dynamic and the context of learning and offer prescriptions for managing these critical dynamics more effectively, based on one of the author's MBA organizational behavior classes offered via interactive video technology to seven off-campus sites.
Abstract: As interactive distance education gains prominence in management education, fanfare about the technology has overshadowed how the distance mode affects the learning context. This article addresses the gap between the technology focus and the learning environment. The authors categorize three specific challenges posed by interactive distance education: (a) the challenge of the technology delivery, (b) the challenge of the work-site context, and (c) the challenge to the student/professor relationship. Because others have written about the technology delivery, the authors focus on the work-site dynamic and the context of learning and offer prescriptions for managing these critical dynamics more effectively. The work is based on one of the author’s MBA organizational behavior classes offered via interactive video technology to seven off-campus sites simultaneously and is supplemented with field observation, student surveys, and interviews.

Patent
Xiaoyuan Tu, Boon-Lock Yeo
12 Jun 2000
TL;DR: In this paper, the authors present a method of storing and providing frames of video streams, where the video streams are of a subject from different viewpoints and different ones of the video streams are in different states.
Abstract: In some embodiments, the invention involves a method of storing and providing frames of video streams. The method includes storing video streams of a subject from different viewpoints, wherein different ones of the video streams are in different states including different ones of the viewpoints. The method also includes responding to requests for frames of the video streams in forward, backward, and state changing directions, by providing the frames, if available. The frames may be provided to a remote computer through the Internet. In other embodiments, the invention involves a method of controlling video streams. The method includes displaying frames of the video streams, wherein the video streams are of a subject from different viewpoints and different ones of the video streams are in different states including different ones of the viewpoints. In response to activation of a user input device, displaying at least one additional frame, if available, in a forward, backward, or state changing direction with respect to a currently displayed one of the frames, depending on the activation. Other embodiments are described and claimed.
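The request model, frames indexed by viewpoint (state) and frame number and served in forward, backward, or state-changing directions, can be sketched with a small in-memory store. The names below are invented for illustration; the patent does not specify an interface.

```python
# In-memory sketch of serving frames of a subject filmed from several
# viewpoints, with forward / backward / state-change navigation. The data and
# method names are invented for illustration.

class ViewpointFrameStore:
    def __init__(self, viewpoints, frames_per_stream):
        # (viewpoint, frame index) -> frame payload (here just a label)
        self.frames = {(v, f): f"{v}:frame{f}"
                       for v in viewpoints for f in range(frames_per_stream)}
        self.viewpoints = viewpoints

    def request(self, viewpoint, frame, direction):
        """Return the next frame in the requested direction, if available."""
        if direction == "forward":
            key = (viewpoint, frame + 1)
        elif direction == "backward":
            key = (viewpoint, frame - 1)
        elif direction == "state_change":
            next_view = self.viewpoints[
                (self.viewpoints.index(viewpoint) + 1) % len(self.viewpoints)]
            key = (next_view, frame)      # same moment, neighbouring viewpoint
        else:
            raise ValueError(direction)
        return self.frames.get(key)       # None if the frame is unavailable


if __name__ == "__main__":
    store = ViewpointFrameStore(["front", "left", "back", "right"], 10)
    print(store.request("front", 3, "forward"))       # front:frame4
    print(store.request("front", 3, "state_change"))  # left:frame3
    print(store.request("front", 0, "backward"))      # None (not available)
```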

Journal ArticleDOI
TL;DR: A range of proprietary video codecs is portrayed and a number of multimode video transceivers are characterized; systems employing the standard H.263 video codec in the context of wideband BbB adaptive video transceivers are examined, and the concept of BbB-adaptive video transceivers is extended to CDMA-based systems.
Abstract: The fundamental advantage of burst-by-burst (BbB) adaptive intelligent multimode multimedia transceivers (IMMTs) is that-irrespective of the propagation environment encountered-when the mobile roams across different environments subject to path loss; shadow- and fast-fading; co-channel-, intersymbol-, and multiuser interference, while experiencing power control errors, the system will always be able to configure itself in the highest possible throughput mode, while maintaining the required transmission integrity. Finding a specific solution to a distributive or interactive video communications problem has to be based on a compromise in terms of the inherently contradictory constraints of video quality, bit rate, delay, robustness against channel errors, and the associated implementational complexity. Considering some of these tradeoffs and proposing a range of attractive solutions to various video communications problems is the basic aim of this overview. The article portrays a range of proprietary video codecs and compares them to some of the existing standard video codecs. A number of multimode video transceivers are also characterized. Systems employing the standard H.263 video codec in the context of wideband BbB adaptive video transceivers are examined, and the concept of BbB-adaptive video transceivers is then extended to CDMA-based systems.
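The burst-by-burst adaptation principle, choosing the highest-throughput modem mode the current channel can sustain at the required integrity, reduces to a per-burst threshold lookup. The SNR thresholds and modes below are invented for illustration and are not values from the article.

```python
# Burst-by-burst adaptive mode selection sketch: before each burst, choose the
# highest-throughput modulation mode whose SNR threshold the estimated channel
# quality still meets. The thresholds and modes are illustrative values only.

MODES = [  # (minimum SNR in dB, mode name, bits per symbol)
    (18.0, "16QAM", 4),
    (12.0, "QPSK", 2),
    (6.0,  "BPSK", 1),
    (0.0,  "no transmission", 0),
]

def select_mode(estimated_snr_db):
    """Return the most aggressive mode the channel estimate supports."""
    for threshold, name, bits in MODES:
        if estimated_snr_db >= threshold:
            return name, bits
    return "no transmission", 0


if __name__ == "__main__":
    for snr in [22.4, 14.1, 7.9, 3.0]:     # per-burst channel estimates in dB
        print(f"SNR {snr:5.1f} dB -> {select_mode(snr)}")
```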

Suguru Goto
01 Jan 2000
TL;DR: The relationship between gesture and sound production is highlighted in the context of Sound Synthesis, a domain of programming used to generate sound with a computer.
Abstract: I have been creating various Gestural Interfaces 1 for use in my compositions for Virtual Musical Instruments 2 . These Virtual Musical Instruments do not merely refer to the physical instruments, but also involve Sound Synthesis 3 , programming and Interactive Video 4 . Using the Virtual Musical Instruments, I experimented with numerous compositions and performances. This paper is intended to report my experiences, as well as their development; and concludes with a discussion of some issues as well as the problem of the very notion of interactivity. 1. An interface which translates body movement to analog signals. This contains a controller which is created with sensors and a video scanning system. This is usually created by an artist himself or with a collaborator. This does not include a commercially produced MIDI controller. 2. This refers to a whole system which contains Gesture, Gestural Interface, Mapping Interface, algorithm, Sound Synthesis, and Interactive Video. According to programming and artistic concept, it may extensively vary. 3. This is a domain of programming to generate sound with a computer. In this article, the programming emphasizes the relationship between gesture and sound production. 4. A video image which is altered in real time. In Virtual Musical Instruments, the image is changed by gesture. This image is usually projected on a screen in a live performance.
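As a one-off illustration of gesture-to-sound mapping, the sketch below maps a normalized gesture value onto pitch and synthesises a short sine tone per gesture reading with NumPy. The mapping is an assumption made for illustration, not the author's actual instruments or patches.

```python
import numpy as np

# Hypothetical gesture-to-sound mapping: a normalized gesture value in [0, 1]
# from a sensor is mapped to pitch, and a sine tone is synthesised per block.
# This illustrates the general mapping idea only, not the author's instruments.

SAMPLE_RATE = 16000
BLOCK = 800                      # 50 ms of audio per gesture reading

def gesture_to_frequency(g):
    """Map gesture position 0..1 to a pitch between 220 Hz and 880 Hz."""
    return 220.0 * (2.0 ** (2.0 * g))     # two octaves of range

def synthesise(gesture_trace):
    phase = 0.0
    blocks = []
    for g in gesture_trace:
        freq = gesture_to_frequency(g)
        t = np.arange(BLOCK) / SAMPLE_RATE
        blocks.append(np.sin(2 * np.pi * freq * t + phase))
        phase += 2 * np.pi * freq * BLOCK / SAMPLE_RATE  # keep phase continuous
    return np.concatenate(blocks)


if __name__ == "__main__":
    trace = [0.0, 0.25, 0.5, 0.75, 1.0]   # a made-up rising arm gesture
    audio = synthesise(trace)
    print("samples:", audio.shape[0], "peak:", round(float(np.abs(audio).max()), 3))
```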

Journal ArticleDOI
TL;DR: The University of Iowa College of Dentistry has expanded its continuing dental education offerings to include distance learning on the Iowa Communications Network, a statewide fiber optic network linking 550 sites that provides two‐way interactive audio and video communication.
Abstract: The University of Iowa College of Dentistry has expanded its continuing dental education (CDE) offerings to include distance learning on the Iowa Communications Network (ICN). The ICN is a statewide fiber optic network linking 550 sites that provides two‐way interactive audio and video communication. The first course was broadcast on January 30, 1998 to 10 receiving sites across Iowa and was attended by 68 people. The instructor controls what is seen and heard at the remote sites, but participants can enter the discussion by activating their microphones. Recognising that the first distance learning course needed to be successful, the College of Dentistry collaborated with the College of Education to create a highly interactive instructional program. In an evaluation, the participants were almost unanimous in their approval. Ninety‐eight percent said they would attend another course if offered on the ICN. A strong majority of the participants felt the quality of the program was very good and atten...

Journal ArticleDOI
TL;DR: By analyzing the edit history, Zodiac is able to reliably detect a composed video stream's shot and scene boundaries, which facilitates interactive video browsing; Zodiac also features a video object annotation capability that allows users to associate annotations with moving objects in a video sequence.
Abstract: Easy-to-use audio/video authoring tools play a crucial role in moving multimedia software from research curiosity to mainstream applications. However, research in multimedia authoring systems has rarely been documented in the literature. This paper describes the design and implementation of an interactive video authoring system called Zodiac, which employs an innovative edit history abstraction to support several unique editing features not found in existing commercial and research video editing systems. Zodiac provides users a conceptually clean and semantically powerful branching history model of edit operations to organize the authoring process, and to navigate among versions of authored documents. In addition, by analyzing the edit history, Zodiac is able to reliably detect a composed video stream's shot and scene boundaries, which facilitates interactive video browsing. Zodiac also features a video object annotation capability that allows users to associate annotations to moving objects in a video sequence. The annotations themselves could be text, image, audio, or video. Zodiac is built on top of MMFS, a file system specifically designed for interactive multimedia development environments, and implements an internal buffer manager that supports transparent lossless compression/decompression. Shot/scene detection, video object annotation, and buffer management all exploit the edit history information for performance optimization.
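Zodiac's branching history of edit operations can be sketched as a tree whose nodes record operations and whose paths are versions; materialising any node replays the operations along its path. The structure below is a generic branching-history sketch, not Zodiac's implementation.

```python
# Generic branching edit-history sketch in the spirit of Zodiac: each node
# records one edit operation and its parent, so every node identifies a
# version that can be rebuilt by replaying the path from the root. The edit
# operations here are simple list edits, not Zodiac's video edits.

class EditHistory:
    def __init__(self):
        self.nodes = {0: (None, None)}   # node id -> (parent id, operation)
        self.next_id = 1

    def commit(self, parent, operation):
        """Record an edit as a child of `parent`; two children form a branch."""
        node = self.next_id
        self.nodes[node] = (parent, operation)
        self.next_id += 1
        return node

    def materialise(self, node):
        """Rebuild the document version at `node` by replaying its path."""
        path = []
        while node is not None:
            parent, operation = self.nodes[node]
            if operation is not None:
                path.append(operation)
            node = parent
        shots = []
        for operation in reversed(path):
            operation(shots)
        return shots


if __name__ == "__main__":
    history = EditHistory()
    a = history.commit(0, lambda shots: shots.append("intro"))
    b = history.commit(a, lambda shots: shots.append("interview"))
    c = history.commit(a, lambda shots: shots.append("montage"))   # a branch
    print(history.materialise(b))   # ['intro', 'interview']
    print(history.materialise(c))   # ['intro', 'montage']
```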