Institution

Interval Research Corporation

About: Interval Research Corporation was a research organization based in Palo Alto, California. It is known for research contributions in the topics: Signal and Haptic technology. The organization has 180 authors who have published 245 publications receiving 19,184 citations.


Papers
Patent
20 Oct 1998
TL;DR: In this patent, the display of an image can be paused and then resumed at an accelerated rate until the content of the display catches up with what would have been displayed at the normal rate without the pause, at which point display at the normal rate resumes.
Abstract: The invention enables the display of an image to be paused, then, at the end of the pause, resumed at an accelerated rate until a time at which the content of the display corresponds to the content that would have been displayed had the image been displayed at the normal display rate without the pause, at which time display of the image at the normal display rate resumes. The invention can be used with display systems that display pre-recorded images (such as are found on video or audio cassettes, or video or audio compact discs, for example) or with display systems that display images based upon display data that is only momentarily available to the display system (such as occurs in the display of television or radio broadcasts). The invention can be used with either analog or digital display systems. Further, the invention can be used with any type of image display, such as, for example, audio displays, video displays or audiovisual displays. The invention enables a great deal of flexibility in observing the display of an image, allowing a user to pause and resume the display as desired, without having to spend more time to view the displayed image than would otherwise be the case, and without missing any part of the displayed image.
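A minimal sketch of the catch-up arithmetic this abstract implies, assuming a constant accelerated playback factor; the function name and variables are illustrative, not taken from the patent:

```python
def catchup_seconds(pause_seconds: float, speedup: float) -> float:
    """Time spent in accelerated playback before rejoining the live position.

    While paused, the underlying content advances by `pause_seconds`.
    During catch-up, the display covers `speedup` seconds of content per
    real second while the live position advances one second per second,
    so the backlog shrinks at (speedup - 1) seconds per second.
    """
    if speedup <= 1.0:
        raise ValueError("speedup must exceed 1.0 to close the backlog")
    return pause_seconds / (speedup - 1.0)

# Example: a 60-second pause replayed at 1.5x is erased after 120 seconds,
# at which point normal-rate display resumes, as the abstract describes.
print(catchup_seconds(60.0, 1.5))  # 120.0
```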

3 citations

Proceedings ArticleDOI
23 Jun 1998
TL;DR: The demo presents a virtual mirror interface which reacts to the viewer using robust, real-time face tracking, achieved through multi-modal integration that combines stereo, color, and grey-scale pattern matching modules into a single real-time system.

Abstract: The demo presents a virtual mirror interface which reacts to the viewer using robust, real-time face tracking. The display directly combines a user's face with various graphical distortions, performed only on the face region in the image. The face detection and tracking is done in real time, so the graphical effect stays with the user and continues to adapt as they move within the viewing space, increasing in intensity as the user approaches the display. The tracking system performs well in crowded environments with open and moving backgrounds. This robust performance is achieved using multi-modal integration, combining stereo, color, and grey-scale pattern matching modules into a single real-time system. Stereo processing is used to isolate the figure of a user from other objects and people in the background. Skin-hue classification identifies and tracks likely body parts within the foreground region. Face pattern detection discriminates and localizes the face within the tracked body parts. A second set of displays, located to the side of the main viewing area, shows intermediate results of each processing module of the system.
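A schematic sketch of the cascaded multi-modal fusion described above, with NumPy boolean masks standing in for the stereo, skin-hue, and pattern-matching modules; the function, arguments, and threshold are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np

def fuse_modalities(depth_fg: np.ndarray,
                    skin_mask: np.ndarray,
                    face_score: np.ndarray,
                    threshold: float = 0.5) -> np.ndarray:
    """Combine per-pixel evidence from three modules into one face mask.

    depth_fg:   boolean mask from stereo, True where a figure stands in
                front of the background.
    skin_mask:  boolean mask from skin-hue classification.
    face_score: per-pixel confidence from grey-scale face pattern matching.
    """
    # Stereo isolates the user; skin hue narrows the search to body parts;
    # the pattern detector localizes the face within what survives.
    candidates = depth_fg & skin_mask
    return candidates & (face_score > threshold)

# Toy 4x4 demo: the face mask survives only where all three modules agree.
rng = np.random.default_rng(0)
mask = fuse_modalities(rng.random((4, 4)) > 0.3,
                       rng.random((4, 4)) > 0.3,
                       rng.random((4, 4)))
print(mask)
```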

3 citations

Proceedings ArticleDOI
15 Oct 1994
TL;DR: This panel aims to incite debate about the role of representation of multimedia content by bringing these two communities together, and argues that work in knowledge representation is essential to the construction of robust multimedia systems to manage media content.

Abstract: Without representation, multimedia will not happen. Computers are basically deaf, blind, and ignorant, and will remain so for quite some time. Only by creating representations for media content will we be able to construct large, robust multimedia systems that support content-based manipulation of video, audio, and images. Yet the research community that deals with representation (AI researchers) and the community that deals with media manipulation (multimedia researchers) have, for the most part, not come into contact. The purpose of this panel is to incite debate about the role of representation of multimedia content by bringing these two communities together. Our intention is to confront researchers at ACM Multimedia '94 with a variety of positions about the need for content representation in large-scale multimedia systems. We argue that work in knowledge representation is essential to the construction of robust multimedia systems to manage media content.

3 citations

Proceedings ArticleDOI
15 Oct 1994
TL;DR: The PLACEHOLDER project was a two-person VR system, with helmets manufactured by Virtual Research, to each of which a small microphone was added to pick up the voices of the users.

Abstract: Technology Overview. PLACEHOLDER was a two-person VR system, with helmets manufactured by Virtual Research that provided both visual and auditory stereo to the participants. We added a small microphone to each of the helmets to pick up the voices of the users. There were two physical spaces where the participants stood wearing display helmets and body sensors, and three virtual worlds through which they could independently move. Position sensors (Polhemus "FastTraks") tracked the 3-space position and orientation of the users' heads, both hands, and torsos within a circular stage of about ten feet. An additional sensor system was employed: the "Grippees," designed by Steve Saunders of Interval. These were placed in each hand and measured the distance between the thumb and forefinger (or middle finger) of the hand, allowing the development of a simple "grasping" interface for virtual objects. A variety of computers were used in concert in the PLACEHOLDER project. The primary computer used in the project was an SGI Onyx Reality Engine, equipped with 64 MB of main RAM and 4 MB of texture memory. It was programmed in C on Unix, using the Minimal Reality Toolkit (authored by Chris Shaw of the University of Alberta) as the primary VR framework. John Harrison, the Banff Centre's chief programmer, modified the Minimal Reality Toolkit to provide support for two users and two hands per user, extending its original one-person, one-dataglove instantiation. Chris Shaw and Lloyd White, also of the University of Alberta, visited to help with code coordinating support for two users. Glenn Fraser and Graham Lundgren of the Banff Centre and Rob Tow provided additional programming support within the framework of the MR Toolkit. Rob Tow wrote the C code on the SGI which managed the audio generation and spatialization. This was coordinated with the visual VR code running in the MR Toolkit, and controlled sound generation by a NeXT workstation and a Macintosh II equipped with a SampleCell audio processing card, as well as spatialization by two PC clones, both equipped with two four-source Crystal River Engineering Convolvotrons. The NeXT, the Macintosh II, and two Yamaha sound processors were programmed by Dorota Blaszczak of the Banff Centre, who was also responsible for the general audio design and integration. Dorota also designed the real-time voice filters which altered participants' voices to match the Critters' "smart costumes". Two SGI VGX computers were used with Alias architectural design tools to lay out the environments' geometries and to apply textures to the resulting wireframes.
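A minimal sketch of how a pinch sensor like the Grippee could drive the grasping interface described above, assuming a normalized thumb-to-forefinger distance reading; the thresholds, names, and hysteresis are assumptions for illustration, not details from the paper:

```python
# Hypothetical grasp logic for a pinch sensor; thresholds are assumed.
GRAB_BELOW = 0.25     # normalized finger gap that starts a grasp
RELEASE_ABOVE = 0.40  # wider gap that ends it (hysteresis avoids flicker)

class GraspSensor:
    def __init__(self) -> None:
        self.grasping = False

    def update(self, finger_gap: float) -> bool:
        """Update grasp state from one normalized distance sample in [0, 1]."""
        if not self.grasping and finger_gap < GRAB_BELOW:
            self.grasping = True   # fingers pinched: pick up nearest object
        elif self.grasping and finger_gap > RELEASE_ABOVE:
            self.grasping = False  # fingers opened: drop the object
        return self.grasping
```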

3 citations

Proceedings ArticleDOI
18 Oct 1999
TL;DR: A motion Wavelet transform Zero Tree (WZT) codec which achieves good compression ratios and can be implemented in a single ASIC of modest size (and very low cost) and includes a number of trade-offs which reduce the compression rate but which simplify the implementation and reduce the cost.
Abstract: This paper describes a motion Wavelet transform Zero Tree (WZT) codec which achieves good compression ratios and can be implemented in a single ASIC of modest size (and very low cost). WZT includes a number of trade-offs which reduce the compression rate but which simplify the implementation and reduce the cost. The figure of merit in our codec is the ASIC silicon area required, and we are willing to sacrifice some rate/distortion performance with respect to the best available algorithms toward that goal. The codec employs a group of pictures (GOP) of two interlaced video frames (i.e., four video fields). Each such field is coded by the well-known transform method, whereby the image is subjected to a 2-D linear transform, the transform values are quantized, and the resulting values coded (e.g., by a zero-tree method). To that end we use a 3-D wavelet transform, dyadic quantization and various entropy codecs. In the temporal direction a temporal transform is used instead of motion estimation. Some of the technical innovations that enable the above feature set are:
• Edge filters which enable blockwise processing while preserving quadratic continuity across block boundaries, greatly reducing blocking artifacts.
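A toy sketch of the transform-and-quantize stage the abstract outlines, using a one-level 2-D Haar wavelet and a power-of-two (dyadic) quantizer in NumPy; the Haar filter stands in for the paper's actual wavelet filters, and the zero-tree entropy coding stage is omitted:

```python
import numpy as np

def haar2d_level(x: np.ndarray) -> np.ndarray:
    """One level of a 2-D Haar transform on an even-sized image.

    Returns the four subbands packed into quadrants, with the low-pass
    (LL) band in the top-left quadrant.
    """
    # Rows: average/difference of adjacent pixel pairs.
    lo_r = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi_r = (x[:, 0::2] - x[:, 1::2]) / 2.0
    rows = np.hstack([lo_r, hi_r])
    # Columns: the same decomposition applied to the row result.
    lo_c = (rows[0::2, :] + rows[1::2, :]) / 2.0
    hi_c = (rows[0::2, :] - rows[1::2, :]) / 2.0
    return np.vstack([lo_c, hi_c])

def dyadic_quantize(coeffs: np.ndarray, shift: int) -> np.ndarray:
    """Quantize with a power-of-two step, i.e. a cheap bit shift in an ASIC."""
    step = 1 << shift
    return np.round(coeffs / step).astype(np.int32)

# One interlaced field, transformed and quantized before entropy coding.
field = np.random.default_rng(0).integers(0, 256, (480, 720)).astype(float)
quantized = dyadic_quantize(haar2d_level(field), shift=3)
```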

3 citations


Authors


Name                 H-index  Papers  Citations
Trevor Darrell       148      678     181113
David Goldstein      141      1301    101955
Marc Davis           99       412     50243
Marcus W. Feldman    97       638     52656
Yoav Shoham          67       243     25265
Chris Pal            57       235     16589
Malcolm Slaney       56       195     18673
Bruce R. Donald      54       229     9365
David D. Pollock     47       123     11280
Perry R. Cook        44       219     9651
Karon E. MacLean     41       134     5168
Wim Sweldens         41       81      21665
Michele Covell       41       178     7321
Lev A. Zhivotovsky   39       108     9224
Aviv Bergman         39       106     7154
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

87% related

Google
39.8K papers, 2.1M citations

87% related

Carnegie Mellon University
104.3K papers, 5.9M citations

87% related

Facebook
10.9K papers, 570.1K citations

86% related

Performance Metrics
No. of papers from the Institution in previous years
Year  Papers
2009  1
2007  1
2005  1
2004  3
2003  3
2002  1