scispace - formally typeset
Author

K. Manabe

Bio: K. Manabe is an academic researcher. The author has contributed to research in topics: Transmission delay & Packet switching. The author has an h-index of 1, and has co-authored 1 publication receiving 174 citations.

Papers
Journal ArticleDOI
TL;DR: A variable-bit-rate coding method for asynchronous transfer mode (ATM) networks is described that is capable of compensating for packet loss; the influence of packet loss on picture quality is discussed, and decoded pictures with packet loss are shown.
Abstract: Statistical characteristics of video signals for video packet coding are clarified, and a variable-bit-rate coding method for asynchronous transfer mode (ATM) networks is described that is capable of compensating for packet loss. ATM capabilities are shown to be greatly affected by delay, delay jitter, and packet loss probability. Packet loss has the greatest influence on picture quality. Packets may be lost either due to random bit error in a cell header or to network control when traffic is congested. A layered coding technique using discrete cosine transform (DCT) coding is presented which is suitable for packet loss compensation. The influence of packet loss on picture quality is discussed, and decoded pictures with packet loss are shown. The proposed algorithm was verified by computer simulations.
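The layered DCT scheme described above can be sketched as follows. This is a minimal illustration, not the paper's actual codec: the 8x8 block size, the number of base-layer coefficients (`n_base`), and the function names `encode_layered`/`decode_layered` are all assumptions. The base layer (DC plus low-frequency coefficients, taken in zigzag order) would be sent with high priority; a lost enhancement layer is simply zero-filled at the decoder.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def zigzag(n=8):
    """Coefficient positions ordered diagonal by diagonal, low to high frequency."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1], p[1] if (p[0] + p[1]) % 2 else p[0]))

def encode_layered(block, n_base=10):
    """Split a block's DCT coefficients into a high-priority base layer
    (DC + low frequencies) and a droppable enhancement layer."""
    n = block.shape[0]
    C = dct_matrix(n)
    coef = C @ block @ C.T
    order = zigzag(n)
    base = [coef[i, j] for i, j in order[:n_base]]
    enh = [coef[i, j] for i, j in order[n_base:]]
    return base, enh

def decode_layered(base, enh, n=8):
    """Reconstruct a block; a lost enhancement layer is replaced by zeros."""
    order = zigzag(n)
    coef = np.zeros((n, n))
    for (i, j), v in zip(order, base):
        coef[i, j] = v
    if enh is not None:
        for (i, j), v in zip(order[len(base):], enh):
            coef[i, j] = v
    C = dct_matrix(n)
    return C.T @ coef @ C
```

Because the transform is orthonormal and the DC coefficient travels in the base layer, a base-only reconstruction degrades gracefully: it preserves the block mean and loses only high-frequency detail, which is the property that makes the layering suitable for packet-loss compensation.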

174 citations


Cited by
Journal ArticleDOI
TL;DR: The proposed algorithm uses the smoothness property of common image signals and produces a maximally smooth image among all those with the same coefficients and boundary conditions, recovering each damaged block by minimizing the intersample variation within the block and across the block boundary.
Abstract: The authors consider the reconstruction of images from partial coefficients in block transform coders and its application to packet loss recovery in image transmission over asynchronous transfer mode (ATM) networks. The proposed algorithm uses the smoothness property of common image signals and produces a maximally smooth image among all those with the same coefficients and boundary conditions. It recovers each damaged block by minimizing the intersample variation within the block and across the block boundary. The optimal solution is achievable through two linear transformations, where the transform matrices depend on the loss pattern and can be calculated in advance. The reconstruction of contiguously damaged blocks is accomplished iteratively using the previous solution as the boundary conditions in each new step. This technique is applicable to any unitary block-transform and is effective for recovering the DC and low-frequency coefficients. When applied to still image coders using the discrete cosine transform (DCT), high quality images are reconstructed in the absence of many DC and low-frequency coefficients over spatially adjacent blocks. When the damaged blocks are isolated by block interleaving, satisfactory results have been obtained even when all the coefficients are missing.
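A minimal 1D analogue of the maximally smooth recovery can illustrate the idea. This sketch assumes only the two boundary samples and the gap's mean (its DC coefficient) survive; the function name `recover_smooth` and the explicit KKT formulation are my own framing, not the paper's precomputed transform-matrix implementation.

```python
import numpy as np

def recover_smooth(a, b, mean, n):
    """Recover n missing samples between known neighbors a and b by
    minimizing the sum of squared sample-to-sample differences (within
    the gap and across its boundary), subject to the surviving DC
    constraint sum(x) = n * mean.  Solved as a KKT linear system."""
    # Rows of D x - t are the n + 1 first differences, boundaries included.
    D = np.zeros((n + 1, n))
    t = np.zeros(n + 1)
    D[0, 0], t[0] = 1.0, a          # (x_0 - a)
    for i in range(1, n):
        D[i, i - 1], D[i, i] = -1.0, 1.0   # (x_i - x_{i-1})
    D[n, n - 1], t[n] = 1.0, b      # (x_{n-1} - b)
    # KKT system for  min ||D x - t||^2  s.t.  1^T x = n * mean.
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2.0 * D.T @ D
    K[:n, n] = 1.0
    K[n, :n] = 1.0
    rhs = np.concatenate([2.0 * D.T @ t, [n * mean]])
    return np.linalg.solve(K, rhs)[:n]
```

A sanity check on the design: when the prescribed mean agrees with the boundary data (mean = (a + b) / 2), minimizing the discrete smoothness energy yields exactly the linear interpolation between the two neighbors, which is the behavior one would expect of a "maximally smooth" recovery.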

343 citations

Journal ArticleDOI
01 Feb 1991
TL;DR: The authors survey a number of important research topics in ATM (asynchronous transfer mode) networks, including mathematical modeling of various types of traffic sources, congestion-control and error-control schemes for ATM networks, and priority schemes to support multiple classes of traffic.
Abstract: The authors survey a number of important research topics in ATM (asynchronous transfer mode) networks. The topics covered include mathematical modeling of various types of traffic sources, congestion-control and error-control schemes for ATM networks, and priority schemes to support multiple classes of traffic. Standards activity for ATM networks and future research problems in ATM are also presented. It is shown that the cell-arrival process for data sources can be modeled by a simple Poisson process. However, voice sources or video sources require more complex processes because of the correlation among cell arrivals. Due to the effects of high-speed channels, preventive control is more effective in ATM networks than reactive control. Due to the use of optical fibers in ATM networks, the channel error rate is very small. The effects of propagation delay and processing time become significant in such high-speed networks. These fundamental changes trigger the necessity to reexamine the error-control schemes used in existing networks. Due to the diversity of service and performance requirements, the notion of multiple traffic classes is required, and separate control mechanisms should be used according to the traffic classes. The priority scheme is shown to be an effective method to support multiple classes of traffic.
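The distinction drawn above, Poisson-like data traffic versus correlated voice/video traffic, can be illustrated with the index of dispersion (variance-to-mean ratio of per-window cell counts): it is about 1 for a Poisson stream and well above 1 for a bursty on/off source. The source models, parameters, and function names below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_counts(rate, windows, t=1.0):
    """Cells per window for a memoryless (Poisson) data source."""
    return rng.poisson(rate * t, size=windows)

def onoff_counts(rate, windows, t=1.0, p_stay=0.95):
    """A two-state on/off source with the same average rate: it emits
    at 2*rate while 'on' and nothing while 'off', with sticky states,
    so consecutive windows are correlated and counts are bursty."""
    counts = np.empty(windows)
    on = True
    for w in range(windows):
        counts[w] = rng.poisson(2.0 * rate * t) if on else 0
        if rng.random() > p_stay:   # occasionally flip state
            on = not on
    return counts

def dispersion(c):
    """Index of dispersion: ~1 for Poisson, > 1 for bursty traffic."""
    return c.var() / c.mean()
```

The point of the comparison is the survey's modeling claim: a simple Poisson model fits sources whose counts have dispersion near 1, while correlated sources need richer models (e.g. Markov-modulated processes) precisely because their dispersion is much larger.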

336 citations

Journal ArticleDOI
TL;DR: The Joint Photographic Experts Group (JPEG) and Motion Picture Experts Group (MPEG) algorithms for image and video compression are modified to incorporate block interleaving in the spatial domain and DCT coefficient segmentation in the frequency domain to conceal the errors due to packet loss.
Abstract: The applications of discrete cosine transform (DCT)-based image- and video-coding methods in the asynchronous transfer mode (ATM) environment are considered. Coding and reconstruction mechanisms are jointly designed to achieve a good compromise among compression gain, system complexity, processing delay, error-concealment capability, and reconstruction quality. The Joint Photographic Experts Group (JPEG) and Motion Picture Experts Group (MPEG) algorithms for image and video compression are modified to incorporate block interleaving in the spatial domain and DCT coefficient segmentation in the frequency domain to conceal the errors due to packet loss. A new algorithm is developed that recovers the damaged regions by adaptive interpolation in the spatial, temporal, and frequency domains. The weights used for spatial and temporal interpolations are varied according to the motion content and loss patterns of the damaged regions. When combined with proper layered transmission, the proposed coding and reconstruction methods can handle very high packet-loss rates at only a slight cost in compression gain, system complexity, and processing delay.
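Block interleaving as a loss-isolation device can be sketched as follows, assuming a simple four-packet 2x2 phase assignment and neighbor-averaging concealment. The function names and the use of a single scalar per block are illustrative simplifications, not the modified JPEG/MPEG machinery of the paper.

```python
import numpy as np

def interleave_map(rows, cols):
    """Four-packet 2x2 phase interleaving: the four spatial neighbors
    of any block always land in different packets, so losing one packet
    damages only isolated (never adjacent) blocks."""
    return {(r, c): 2 * (r % 2) + (c % 2)
            for r in range(rows) for c in range(cols)}

def conceal_block(blocks, r, c):
    """Spatial concealment of an isolated damaged block: replace it by
    the mean of whichever of its four neighbors are available."""
    nbrs = [blocks[i, j]
            for i, j in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
            if 0 <= i < blocks.shape[0] and 0 <= j < blocks.shape[1]]
    return np.mean(nbrs, axis=0)
```

On a smoothly varying region, averaging the four neighbors reproduces the missing value exactly, which is why interleaving plus interpolation degrades gracefully: the interleaver guarantees the neighbors survive, and the interpolator exploits spatial smoothness.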

298 citations

Journal ArticleDOI
Amy R. Reibman, B. G. Haskell
TL;DR: The performance of video that has been encoded using the derived constraints for the leaky bucket channel is presented and it is shown how these ideas might be implemented in a system that controls both the encoded and transmitted bit rates.
Abstract: Constraints on the encoded bit rate of a video signal that are imposed by a channel and by encoder and decoder buffers are considered. Conditions that ensure that the video encoder and decoder buffers do not overflow or underflow when the channel can transmit a variable bit rate are presented. Using these conditions and a commonly proposed network-user contract, the effect of a B-ISDN network policing function on the allowable variability in the encoded video bit rate is examined. It is shown how these ideas might be implemented in a system that controls both the encoded and transmitted bit rates. The performance of video that has been encoded using the derived constraints for the leaky bucket channel is presented.
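The leaky bucket contract referenced above can be sketched as a conformance check: the bucket fills by each frame's bits and drains at a fixed rate, and the encoder must keep the level under the bucket size or the network's policer would act on the excess. The discrete per-frame formulation and the function name `leaky_bucket_ok` are assumptions for illustration, not the paper's derived constraints.

```python
def leaky_bucket_ok(frame_bits, drain_rate, bucket_size):
    """Check whether an encoded bit-rate sequence conforms to a leaky
    bucket contract.  Each step: drain the bucket at the contracted
    rate (never below empty), then add the new frame's bits; any
    overflow means the sequence violates the contract."""
    level = 0.0
    for bits in frame_bits:
        level = max(0.0, level - drain_rate) + bits
        if level > bucket_size:
            return False
    return True
```

The bucket size is what buys the encoder its allowable variability: short bursts above the drain rate are tolerated as long as the accumulated excess stays within the bucket, which is exactly the trade-off the paper's constraints formalize for the encoder and decoder buffers.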

259 citations

01 Jan 1991
TL;DR: A technique is presented for providing error protection without the additional overhead required for channel coding; the residual redundancy left at the output of the source coder is used to provide error protection in much the same way as the insertion of redundancy in convolutional coding does.
Abstract: The need to transmit large amounts of data over a band-limited channel has led to the development of various data compression schemes. Many of these schemes function by attempting to remove redundancy from the data stream. An unwanted side-effect of this approach is to make the information transfer process more vulnerable to channel noise. Efforts at protecting against errors involve the reinsertion of redundancy and an increase in bandwidth requirements. We present a technique for providing error protection without the additional overhead required for channel coding. We start from the premise that, during source coder design, for the sake of simplicity or due to imperfect knowledge, assumptions have to be made about the source which are often incorrect. This results in residual redundancy at the output of the source coder. The residual redundancy can then be used to provide error protection in much the same way as the insertion of redundancy in convolutional coding provides error protection. In this paper we develop an approach for utilizing this redundancy. To show the validity of this approach, we apply it to image coding using DPCM, and obtain substantial performance gains, both in terms of objective as well as subjective measures.
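The residual-redundancy idea can be illustrated with first-order DPCM: for a smooth source the true prediction residuals are small, so an implausibly large received residual flags a likely channel error without any added parity bits. This toy threshold detector is my own sketch of the principle, not the paper's actual DPCM error-protection scheme.

```python
def dpcm_encode(samples):
    """First-order DPCM: transmit each sample's difference from the
    previous reconstructed sample."""
    pred, out = 0, []
    for s in samples:
        out.append(s - pred)
        pred = s
    return out

def dpcm_decode(residuals):
    """Invert DPCM by accumulating the received differences."""
    pred, out = 0, []
    for d in residuals:
        pred += d
        out.append(pred)
    return out

def detect_errors(residuals, threshold=10):
    """Exploit residual redundancy: for a smooth source, genuine
    residuals are small, so a large received residual marks a likely
    channel error -- protection with no extra transmitted bits."""
    return [i for i, d in enumerate(residuals) if abs(d) > threshold]
```

The detector works only because the source coder's prediction model is imperfect and the residual stream is not truly memoryless, which is precisely the redundancy the paper proposes to exploit instead of reinserting redundancy via channel coding.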

242 citations