Showing papers by "Wai-Choong Wong published in 2006"


Proceedings ArticleDOI
14 May 2006
TL;DR: An optimized content-aware authentication scheme for JPEG-2000 streams over lossy networks, where a received packet is consumed only when it is both decodable and authenticated, achieves its design goal in that the rate-distortion curve of the authenticated image is very close to the R-D curve when no authentication is required.
Abstract: In this paper, we propose an optimized content-aware authentication scheme for JPEG-2000 streams over lossy networks, where a received packet is consumed only when it is both decodable and authentic. In a JPEG-2000 codestream, some packets are more important than others in terms of coding dependency and visual quality. This inspires us to allocate more redundant authentication information to the more important packets to minimize the distortion of the authenticated image at the receiver. In other words, with awareness of the image content, we formulate an optimization framework that builds an authentication graph yielding the best visual quality at the receiver, given a specific authentication overhead and network condition. Experimental results demonstrate that the proposed scheme achieves our design goal in that the R-D curve of an authenticated image is very close to the original curve obtained when no authentication is applied.
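
To make the allocation idea concrete, here is a minimal sketch (illustrative names and a simple greedy rule, not the paper's exact optimization) of spending a fixed redundancy budget on the most important packets first, together with the resulting per-packet verification probability under independent losses:

```python
# Hypothetical sketch: each packet carries at least one hash path in the
# authentication graph; the remaining overhead budget buys extra redundant
# paths for the packets with the largest distortion reduction.

def allocate_auth_edges(importance, budget, max_edges=4):
    edges = [1] * len(importance)            # every packet gets >= 1 hash path
    for i in sorted(range(len(importance)), key=lambda i: -importance[i]):
        extra = min(budget, max_edges - edges[i])
        edges[i] += extra
        budget -= extra
        if budget == 0:
            break
    return edges

def verify_prob(num_edges, loss_rate):
    # Verifiable if at least one redundant hash path survives,
    # assuming independent losses (a simplification).
    return 1.0 - loss_rate ** num_edges

print(allocate_auth_edges([9.0, 3.0, 1.0, 0.5], budget=3))  # [4, 1, 1, 1]
print(round(verify_prob(4, 0.2), 4))                        # 0.9984
```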

27 citations


Proceedings ArticleDOI
01 Aug 2006
TL;DR: A wakeup schedule suited to underwater monitoring applications is presented and supported with simulation results.
Abstract: Recently, sensor networks have been proposed for underwater industrial applications, such as the lucrative business of seismic imaging of underwater oil wells. Underwater sensing systems present a far more challenging problem, given the additional communication bandwidth constraints and the sparse deployment of underwater sensor nodes. We present a wakeup schedule suited to underwater monitoring applications and support our scheme with simulation results.
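
As a rough illustration of what such a schedule can look like (the slotting and timing constants below are assumptions, not the paper's scheme), nodes along a sparse relay chain can stagger short listen windows so that a packet advances one hop per window rather than waiting out a full sleep period:

```python
# Illustrative staggered duty-cycle schedule for a sparse relay chain.
PERIOD = 60.0   # seconds per wakeup cycle (assumed)
AWAKE = 5.0     # listen window per cycle (assumed)

def is_awake(node_slot, t, num_slots=4):
    """Node with slot k listens during [k*AWAKE, (k+1)*AWAKE) of each
    period, so consecutive hops wake back-to-back and a packet's per-hop
    wait is bounded by AWAKE instead of PERIOD."""
    phase = t % PERIOD
    start = (node_slot % num_slots) * AWAKE
    return start <= phase < start + AWAKE

# A packet generated at t = 0 at the slot-0 node can be handed to the
# slot-1 node at t = 5.0, the slot-2 node at t = 10.0, and so on.
```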

10 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: Simulation results demonstrate that the proposed authentication-aware R-D optimized streaming technique substantially outperforms authentication-unaware, state-of-the-art rate-distortion optimized streaming techniques.
Abstract: Stream authentication methods usually impose overhead and dependency among packets. The straightforward application of state-of-the-art rate-distortion (R-D) optimized streaming techniques produces highly sub-optimal R-D performance for authenticated video, since such techniques do not account for the additional dependencies. This paper proposes an R-D optimized streaming technique for authenticated video that accounts for authentication dependencies and overhead. It schedules packet transmission based on each packet's importance in terms of both video quality and authentication dependencies. The proposed technique works with any stream authentication method, as long as the verification probability can be quantitatively computed from the packet loss probability. Simulation results based on H.264 JM 10.1 and NS-2 demonstrate that the proposed authentication-aware R-D optimized streaming technique substantially outperforms authentication-unaware R-D optimized streaming techniques. In particular, when the channel capacity falls below the source rate, the PSNR of authenticated video quickly drops to unacceptable levels under conventional R-D optimized streaming, while the proposed technique still maintains R-D optimized video quality.
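
A minimal sketch of the scheduling criterion (field names and the verification model are illustrative assumptions, not the paper's exact formulation): rank packets by expected distortion reduction per byte, where the expectation folds in both arrival and verifiability:

```python
# Hypothetical greedy R-D scheduler that is "authentication-aware":
# a packet's utility counts only if it arrives AND can be verified.
from dataclasses import dataclass

@dataclass
class Packet:
    size: int             # bytes, including authentication overhead
    delta_d: float        # distortion reduction if decoded and verified
    loss_rate: float      # channel loss probability
    auth_redundancy: int  # redundant hash paths protecting this packet

    def verify_prob(self):
        # at least one hash path survives (assumed independent losses)
        return 1.0 - self.loss_rate ** self.auth_redundancy

    def utility_per_byte(self):
        return self.delta_d * (1 - self.loss_rate) * self.verify_prob() / self.size

def schedule(packets, byte_budget):
    chosen = []
    for p in sorted(packets, key=Packet.utility_per_byte, reverse=True):
        if p.size <= byte_budget:
            chosen.append(p)
            byte_budget -= p.size
    return chosen
```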

10 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: Experimental results with JPEG-2000 coded images demonstrate that the proposed method achieves the design goal in that the R-D curve of the authenticated image is very close to the R-D curve when no authentication is required.
Abstract: This paper proposes a content-aware authentication scheme optimized to account for distortion and overhead for media streaming. When authenticated media is streamed over a lossy network, a received packet is consumed only when it is both decodable and authenticated. In most media formats, some packets are more important than others. This naturally motivates allocating more redundant authentication information for the more important packets in order to maximize their probability of authentication and thereby minimize distortion at the receiver. Toward this goal, with awareness of the media content, we formulate an optimization framework to compute an authentication graph to maximize the expected media quality at the receiver, given specific authentication overhead and knowledge of network loss rates. Experimental results with JPEG-2000 coded images demonstrate that the proposed method achieves our design goal in that the R-D curve of the authenticated image is very close to the R-D curve when no authentication is required.
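
The objective being maximized can be sketched as follows (illustrative names; independence between packets is assumed for simplicity): the expected quality is each packet's distortion reduction weighted by the probability that it arrives, verifies, and has all of its coding dependencies usable:

```python
# Hedged sketch of the expected-quality objective over an authentication graph.
def expected_quality(order, delta_d, arrive_p, verify_p, parents):
    """order: packet ids in decoding order; parents[i]: packets that i
    depends on, which must themselves be usable for i to decode."""
    usable, total = {}, 0.0
    for i in order:
        p = arrive_p[i] * verify_p[i]
        for dep in parents.get(i, ()):
            p *= usable.get(dep, 0.0)
        usable[i] = p
        total += p * delta_d[i]
    return total

# The optimizer's task is then to choose the authentication graph, which
# determines verify_p and per-packet overhead, so as to maximize this value.
```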

6 citations


Journal ArticleDOI
TL;DR: A mathematical model of the VIN system was derived and used to explore the performance of VIN in terms of the maximum number of concurrent video streams which can be supported by the system, and it was found that VIN has a number of advantages over Staggered Multicast.
Abstract: This paper proposes a new video distribution service: Video-In-Network (VIN). In VIN, videos are continuously circulating in an optical network where they can be easily retrieved by VIN Nodes. A mathematical model of the VIN system was derived and used to explore the performance of VIN in terms of the maximum number of concurrent video streams that can be supported by the system. We then compared VIN with Staggered Multicast and found that VIN has a number of advantages. First, VIN is more scalable in terms of the number of streams/channels. Second, the startup latency of VIN is shorter than that of Staggered Multicast. Third, Staggered Multicast is a near-VOD system in which video clients usually have no dedicated channel to the server on which to request videos; in the VIN system, video clients can request particular videos from the VIN Node for multicast or unicast delivery on demand.
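
A back-of-envelope version of the capacity comparison (the numbers and the near-VOD wait formula are illustrative, not taken from the paper's model):

```python
# Toy arithmetic: concurrent circulating streams are bounded by ring
# bandwidth over per-video rate, while a staggered (near-VOD) multicast
# of a video of duration D over C offset channels makes a client wait
# D / (2 * C) on average before its chosen video restarts.

ring_bandwidth = 10e9           # 10 Gb/s optical ring (assumed)
video_rate = 4e6                # 4 Mb/s per video (assumed)
print(int(ring_bandwidth // video_rate))   # 2500 concurrent streams

duration = 2 * 3600             # 2-hour video
channels = 10                   # staggered channels devoted to it
print(duration / (2 * channels))           # 360 s average startup wait
```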

4 citations


Proceedings Article
16 Jul 2006
TL;DR: It is shown that the HCsMDP problem is NP-hard and that there exists an equivalent discrete-time MDP for every HCsMDP, so classical methods such as reinforcement learning can solve HCsMDPs.
Abstract: In multiple-criteria Markov Decision Processes (MDPs), where multiple costs are incurred at every decision point, current methods minimize the expected primary cost criterion while constraining the expectations of the other cost criteria to some critical values. However, systems are often faced with hard constraints, where the cost criteria should never exceed some critical values at any time, rather than constraints on the expected cost criteria. For example, a resource-limited sensor network no longer functions once its energy is depleted. Based on the semi-MDP (sMDP) model, we study the hard constrained (HC) problem in continuous time, state, and action spaces with respect to both finite and infinite horizons, and various cost criteria. We show that the HCsMDP problem is NP-hard and that there exists an equivalent discrete-time MDP for every HCsMDP. Hence, classical methods such as reinforcement learning can solve HCsMDPs.
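
The flavor of the equivalence result can be sketched with the standard state-augmentation idea (a generic construction with hypothetical names; the paper's formal reduction is more general): folding the remaining hard-constraint budget into the state turns "never exceed the critical cost" into ordinary admissibility in a larger discrete-time MDP:

```python
# Hedged sketch: augment the state with the remaining budget so that a
# hard cost constraint becomes a restriction on admissible actions.

def augment(state, remaining_budget):
    return (state, remaining_budget)

def admissible_actions(aug_state, actions, cost):
    """Keep only actions whose constrained cost still fits the budget."""
    s, budget = aug_state
    return [a for a in actions(s) if cost(s, a) <= budget]

def step(aug_state, action, transition, cost):
    s, budget = aug_state
    s_next, reward = transition(s, action)
    return (s_next, budget - cost(s, action)), reward

# Any discrete-time RL method (e.g., Q-learning over the augmented
# states) can then be run on this larger, unconstrained MDP.
```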

2 citations


Proceedings ArticleDOI
26 Jun 2006
TL;DR: DINPeer exploits a spiral-ring method to discover an inner ring of the most powerful nodes (DIN Nodes), which form a logical DINloop that helps reduce multicast delay for fast service discovery.
Abstract: In this paper, we propose DINPeer, an optimized peer-to-peer (P2P) overlay network for service discovery that overcomes limitations in current multicast discovery approaches and P2P overlay systems. DINPeer exploits a spiral-ring method to discover an inner ring of the most powerful nodes (DIN Nodes), which form a logical DINloop. With the facilitation of the DINloop, multiple DIN Nodes can easily form Steiner trees using a Steiner-tree-based heuristic routing algorithm. DINPeer further integrates the DINloop and the Steiner trees with the P2P overlay network. The key features of DINPeer are that multiple DIN Nodes function as Rendezvous Points (RPs) for their respective associated logical spaces, and that the Steiner trees facilitate communication among the DIN Nodes. Multiple powerful DIN Nodes relieve the burden on a centralized server, and the self-recovering DINloop avoids a single point of failure. Simulations show that DINPeer is able to reduce multicast delay for fast service discovery.
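
As a small illustration of the Steiner-tree step (using networkx's generic 2-approximation heuristic as a stand-in for the paper's routing algorithm, with toy topology data):

```python
# Connect the DIN Nodes with an approximate Steiner tree over the overlay.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()  # overlay links weighted by delay (toy values)
G.add_weighted_edges_from([
    ("a", "b", 1), ("b", "c", 1), ("c", "d", 1),
    ("a", "e", 5), ("e", "d", 1), ("b", "e", 2),
])

din_nodes = ["a", "c", "e"]                # rendezvous points to connect
tree = steiner_tree(G, din_nodes, weight="weight")
print(sorted(tree.edges()))                # low-delay tree spanning the DIN Nodes
```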

2 citations


Proceedings ArticleDOI
01 May 2006
TL;DR: The basics of a multiple-sink architecture are introduced, and it is shown that such an architecture exhibits potential for reducing average end-to-end delay.
Abstract: Underwater sensor networks can be applied to oceanographic data collection, offshore oil exploration, and disaster warning. The energy resource of such networks is severely limited, with no easy way to replenish the supply. One way to conserve energy is to manage the topology of the network so that redundant nodes can go to sleep. Considering an event-driven application with delay-sensitive information, this paper investigates the utility of a connected dominating set (CDS)-based topology management scheme. The CDS-based scheme maintains a connected backbone of awake nodes to enable speedy data delivery; nodes belonging to the CDS remain awake until the next computation of a new CDS. The relationship between the re-computation period of the CDS, the ratio of CDS change, and network lifetime is analyzed to determine an optimized re-computation period. The large propagation delay in underwater acoustic communications cripples the delivery of delay-sensitive data in an underwater network. The basics of a multiple-sink architecture are therefore introduced, and it is shown that such an architecture exhibits potential for reducing average end-to-end delay.
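
One simple way to compute such a backbone (a generic stand-in, not necessarily the paper's construction) uses the fact that the internal nodes of any spanning tree of a connected graph form a connected dominating set:

```python
# Sketch: internal (non-leaf) spanning-tree nodes stay awake as the CDS
# backbone; leaf nodes may sleep until the next re-computation.
import networkx as nx

def cds_backbone(G):
    tree = nx.minimum_spanning_tree(G)
    internal = {v for v in tree if tree.degree(v) > 1}
    return internal or set(list(G)[:1])   # trivial graphs: keep one node

G = nx.random_geometric_graph(30, 0.3, seed=1)  # toy 2-D deployment
if nx.is_connected(G):
    backbone = cds_backbone(G)
    print(f"{len(backbone)} of {G.number_of_nodes()} nodes stay awake")
```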

1 citation


Journal ArticleDOI
TL;DR: Experimental results show that, compared with JPEG2000, the proposed CB-BPGC achieves better lossless and lossy coding performance with lower complexity and greater resilience to transmission errors when simulated on a wireless Rayleigh fading channel.
Abstract: In this brief, we present an image entropy coder, the context-based bit-plane Golomb coder (CB-BPGC), for wavelet-based scalable image coding. CB-BPGC follows the state-of-the-art image coding standard JPEG2000 in applying a rate-distortion optimization algorithm after block coding, but it explores a more efficient block coding that takes the statistical properties of the block coefficients into account. The compression ratio and error resilience of the proposed coder are evaluated, and the experimental results show that, compared with JPEG2000, CB-BPGC achieves better lossless and lossy coding performance with lower complexity and greater resilience to transmission errors when simulated on a wireless Rayleigh fading channel.
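
For readers unfamiliar with the underlying primitive, here is a minimal Golomb-Rice encoder (the building block of bit-plane Golomb coding; CB-BPGC's context modeling and parameter selection are not reproduced here):

```python
# Golomb-Rice code: unary quotient plus k-bit binary remainder. It is
# efficient for geometrically distributed values, such as run lengths of
# insignificant wavelet coefficients within a bit plane.

def golomb_rice_encode(n: int, k: int) -> str:
    q = n >> k                     # quotient, sent in unary: q ones, then 0
    r = n & ((1 << k) - 1)         # remainder, sent in k binary bits
    code = "1" * q + "0"
    if k:
        code += format(r, f"0{k}b")
    return code

for n in range(5):                 # small, likely values get short codes
    print(n, golomb_rice_encode(n, k=1))
# 0 -> 00, 1 -> 01, 2 -> 100, 3 -> 101, 4 -> 1100
```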