
Showing papers in "IEEE Signal Processing Magazine in 2002"


Journal Article
TL;DR: The outputs of spectral unmixing, namely endmember and abundance estimates, are important for identifying the material composition of mixtures, and the applicability of models and techniques is highly dependent on the variety of circumstances and factors that give rise to mixed pixels.
Abstract: Spectral unmixing using hyperspectral data represents a significant step in the evolution of remote decompositional analysis that began with multispectral sensing. It is a consequence of collecting data in greater and greater quantities and the desire to extract more detailed information about the material composition of surfaces. Linear mixing is the key assumption that has permitted well-known algorithms to be adapted to the unmixing problem. In fact, the resemblance of the linear mixing model to system models in other areas has permitted a significant legacy of algorithms from a wide range of applications to be adapted to unmixing. However, it is still unclear whether the assumption of linearity is sufficient to model the mixing process in every application of interest. It is clear, however, that the applicability of models and techniques is highly dependent on the variety of circumstances and factors that give rise to mixed pixels. The outputs of spectral unmixing, namely endmember and abundance estimates, are important for identifying the material composition of mixtures.

1,917 citations
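
A minimal sketch of the linear mixing model the article builds on: each pixel spectrum is a nonnegative, sum-to-one combination of endmember spectra, solvable with nonnegative least squares plus the common weighted-row trick for the sum-to-one constraint. The endmember matrix and pixel below are synthetic placeholders, not data from the article.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(E, y, delta=1e3):
    """Fully constrained linear unmixing: y ~ E @ a, a >= 0, sum(a) = 1.
    The sum-to-one constraint is enforced approximately by appending a
    heavily weighted row of ones, then solving NNLS."""
    bands, n_end = E.shape
    E_aug = np.vstack([E, delta * np.ones(n_end)])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

# Synthetic example: 3 endmembers, 50 bands, one mixed pixel.
rng = np.random.default_rng(0)
E = rng.uniform(0, 1, size=(50, 3))          # endmember spectra (columns)
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true + 0.001 * rng.standard_normal(50)
print(unmix_fcls(E, y))                       # close to [0.6, 0.3, 0.1]
```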


Journal ArticleDOI
TL;DR: This article presents a suite of techniques that perform aggressive energy optimization while targeting all stages of sensor network design, from individual nodes to the entire network.
Abstract: This article describes architectural and algorithmic approaches that designers can use to enhance the energy awareness of wireless sensor networks. The article starts off with an analysis of the power consumption characteristics of typical sensor node architectures and identifies the various factors that affect system lifetime. We then present a suite of techniques that perform aggressive energy optimization while targeting all stages of sensor network design, from individual nodes to the entire network. Maximizing network lifetime requires the use of a well-structured design methodology, which enables energy-aware design and operation of all aspects of the sensor network, from the underlying hardware platform to the application software and network protocols. Adopting such a holistic approach ensures that energy awareness is incorporated not only into individual sensor nodes but also into groups of communicating nodes and the entire sensor network. By following an energy-aware design methodology based on techniques such as in this article, designers can enhance network lifetime by orders of magnitude.

1,820 citations
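
As a back-of-the-envelope illustration of the power analysis such articles describe, the sketch below uses a first-order radio energy model to show why relaying can save energy when path loss is quadratic or worse. The constants are illustrative assumptions, not values from the article.

```python
# First-order radio energy model (illustrative constants):
# transmitting k bits over distance d costs E_elec*k + eps_amp*k*d**n,
# receiving k bits costs E_elec*k.
E_ELEC = 50e-9      # J/bit, electronics energy
EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy
N_LOSS = 2          # path-loss exponent

def tx_energy(k, d):
    return E_ELEC * k + EPS_AMP * k * d**N_LOSS

def rx_energy(k):
    return E_ELEC * k

k, d = 1000, 100.0
direct = tx_energy(k, d)
# Two hops of d/2 each: the relay both receives and retransmits.
two_hop = 2 * tx_energy(k, d / 2) + rx_energy(k)
print(f"direct: {direct:.2e} J, two-hop: {two_hop:.2e} J")  # two-hop wins here
```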


Journal ArticleDOI
TL;DR: This work focuses on detection algorithms that assume multivariate normal distribution models for HSI data and presents results which illustrate the performance of some detection algorithms using real hyperspectral imaging (HSI) data.
Abstract: We introduce key concepts and issues including the effects of atmospheric propagation upon the data, spectral variability, mixed pixels, and the distinction between classification and detection algorithms. We focus on detection algorithms that assume multivariate normal distribution models for HSI data. Detection algorithms for full-pixel targets are developed using the likelihood ratio approach. Subpixel target detection, which is more challenging due to background interference, is pursued using both statistical and subspace models for the description of spectral variability. Finally, we provide some results which illustrate the performance of some detection algorithms using real hyperspectral imaging (HSI) data. Furthermore, we illustrate the potential deviation of HSI data from normality and point to some distributions that may serve in the development of algorithms with better or more robust performance.

1,170 citations
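
A minimal sketch of the full-pixel likelihood-ratio detector described above: under a multivariate normal background, the test reduces to a matched filter in whitened space. The target signature and background statistics here are synthetic stand-ins.

```python
import numpy as np

def matched_filter_scores(X, s, mu, Sigma):
    """Full-pixel matched filter under a normal background model.
    X: (n_pixels, bands) data; s: target signature; mu/Sigma: background stats.
    Returns the normalized matched-filter statistic per pixel."""
    Sinv = np.linalg.inv(Sigma)
    d = s - mu
    return (X - mu) @ Sinv @ d / np.sqrt(d @ Sinv @ d)

rng = np.random.default_rng(1)
bands = 20
mu, Sigma = np.zeros(bands), np.eye(bands)
background = rng.multivariate_normal(mu, Sigma, size=500)
s = np.full(bands, 2.0)                     # synthetic target signature
target_pixel = s + 0.1 * rng.standard_normal(bands)
X = np.vstack([background, target_pixel])
scores = matched_filter_scores(X, s, mu, Sigma)
print(scores[-1], scores[:-1].max())        # target scores well above background
```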


Journal ArticleDOI
TL;DR: The article includes an example of an image space representation, using three bands to simulate a color IR photograph of an airborne hyperspectral data set over the Washington, DC, mall.
Abstract: The fundamental basis for space-based remote sensing is that information is potentially available from the electromagnetic energy field arising from the Earth's surface and, in particular, from the spatial, spectral, and temporal variations in that field. Rather than focusing on the spatial variations, which imagery perhaps best conveys, why not move on to look at how the spectral variations might be used? The idea was to enlarge the size of a pixel until it includes an area that is characteristic, from a spectral response standpoint, of the surface cover to be discriminated. The article includes an example of an image space representation, using three bands to simulate a color IR photograph of an airborne hyperspectral data set over the Washington, DC, mall.

1,007 citations
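
The image-space representation mentioned above takes only a few lines to reproduce: pick three bands of a hyperspectral cube (e.g., near-IR, red, green for a color-IR look), stretch each to 0-255, and stack. The cube and band indices below are placeholders.

```python
import numpy as np

def three_band_composite(cube, band_idx):
    """Build a displayable 3-band composite from a hyperspectral cube.
    cube: (rows, cols, bands); band_idx: three band indices, e.g. (NIR, R, G)
    for a color-IR-style image. Each band is linearly stretched to 0..255."""
    out = np.empty(cube.shape[:2] + (3,), dtype=np.uint8)
    for i, b in enumerate(band_idx):
        band = cube[:, :, b].astype(float)
        lo, hi = band.min(), band.max()
        out[:, :, i] = np.uint8(255 * (band - lo) / (hi - lo + 1e-12))
    return out

# Synthetic cube; indices (60, 30, 20) stand in for NIR/red/green bands.
cube = np.random.rand(64, 64, 100)
rgb = three_band_composite(cube, (60, 30, 20))
print(rgb.shape, rgb.dtype)   # (64, 64, 3) uint8, ready for display or saving
```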


Journal ArticleDOI
TL;DR: The key ideas behind the CSP algorithms for distributed sensor networks being developed at the University of Wisconsin (UW) are described, along with an approach to tracking multiple targets that necessarily requires classification techniques.
Abstract: Networks of small, densely distributed wireless sensor nodes are being envisioned and developed for a variety of applications involving monitoring and manipulating the physical world in a tetherless fashion. Typically, each individual node can sense in multiple modalities but has limited communication and computation capabilities. Many challenges must be overcome before the concept of sensor networks becomes a reality. In particular, there are two critical problems underlying successful operation of sensor networks: (1) efficient methods for exchanging information between the nodes and (2) collaborative signal processing (CSP) between the nodes to gather useful information about the physical world. This article describes the key ideas behind the CSP algorithms for distributed sensor networks being developed at the University of Wisconsin (UW). We also describe the basic ideas on how the CSP algorithms interface with the networking/routing algorithms being developed at Wisconsin (UW-API). We motivate the framework via the problem of detecting and tracking a single maneuvering target. This example illustrates the essential ideas behind the integration between UW-API and UW-CSP algorithms and also highlights the key aspects of detection and localization algorithms. We then build on these ideas to present our approach to tracking multiple targets, which necessarily requires classification techniques.

997 citations


Journal ArticleDOI
Feng Zhao1, Jaewon Shin1, James E. Reich1
TL;DR: The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation.
Abstract: This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications.

821 citations
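
A toy instance of the information-utility idea, under the simplifying assumption of linear-Gaussian sensing (this is not the article's exact formulation): score each candidate sensor by the reduction in posterior covariance its measurement would buy, minus a communication cost, and query the best one.

```python
import numpy as np

def select_sensor(P_prior, sensors):
    """Pick the sensor whose measurement best trades information for cost.
    P_prior: prior covariance of a 2-D target state.
    sensors: list of (H, R, comm_cost); H maps state to measurement, R is
    measurement noise covariance. Utility = covariance reduction - cost."""
    best, best_score = None, -np.inf
    for i, (H, R, cost) in enumerate(sensors):
        # Posterior covariance after a linear-Gaussian update (information form).
        P_post = np.linalg.inv(np.linalg.inv(P_prior) + H.T @ np.linalg.inv(R) @ H)
        score = (np.trace(P_prior) - np.trace(P_post)) - cost
        if score > best_score:
            best, best_score = i, score
    return best, best_score

P = np.diag([4.0, 4.0])                      # uncertain 2-D position
sensors = [
    (np.array([[1.0, 0.0]]), np.array([[0.5]]), 0.1),   # decent x sensor, cheap
    (np.array([[0.0, 1.0]]), np.array([[0.1]]), 1.5),   # great y sensor, costly
]
print(select_sensor(P, sensors))             # cheap sensor wins on net utility
```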


Journal ArticleDOI
TL;DR: To overcome the limitations of the individual models, a joint decision logic is developed, based on a maximum entropy probability model and the GLRT, that utilizes multiple decision statistics, and this approach is applied using the detection statistics derived from the three clutter models.
Abstract: We develop anomaly detectors, i.e., detectors that do not presuppose a signature model of one or more dimensions, for three clutter models: the local normal model, the global normal mixture model, and the global linear mixture model. The local normal model treats the neighborhood of a pixel as having a normal probability distribution. The normal mixture model considers the observation from each pixel as arising from one of several possible classes such that each class has a normal probability distribution. The linear mixture model considers each observation to be a linear combination of fixed spectra, known as endmembers, that are, or may be, associated with materials in the scene, and the coefficients, interpreted as fractional abundances, are constrained to be nonnegative and sum to one. We show how the generalized likelihood ratio test (GLRT) may be used to derive anomaly detectors for the local normal and global normal mixture models. The anomaly detector applied with the linear mixture approach proceeds by identifying target-like endmembers based on properties of the histogram of the abundance estimates and employing a matched filter in the space of abundance estimates. To overcome the limitations of the individual models, we develop a joint decision logic, based on a maximum entropy probability model and the GLRT, that utilizes multiple decision statistics, and we apply this approach using the detection statistics derived from the three clutter models. Examples demonstrate that the joint decision logic can improve detection performance in comparison with the individual anomaly detectors. We also describe the application of linear prediction filters to repeated images of the same area to detect changes that occur within the scene over time.

733 citations
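
Under a normal background model, the GLRT leads to a Mahalanobis-distance statistic closely related to the classic RX anomaly detector. A minimal global-statistics sketch on synthetic data (not the article's multi-model joint decision logic):

```python
import numpy as np

def rx_scores(X):
    """Global RX-style anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean under a normal background model.
    X: (n_pixels, bands). Returns one score per pixel."""
    mu = X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    D = X - mu
    return np.einsum('ij,jk,ik->i', D, Sinv, D)

rng = np.random.default_rng(2)
background = rng.multivariate_normal(np.zeros(10), np.eye(10), size=1000)
anomaly = np.full((1, 10), 4.0)             # spectrally out-of-family pixel
X = np.vstack([background, anomaly])
scores = rx_scores(X)
print(scores[-1] > np.percentile(scores[:-1], 99.9))   # True: flagged
```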


Journal ArticleDOI
TL;DR: An overview of challenging issues for the collaborative processing of wideband acoustic and seismic signals for source localization and beamforming in an energy-constrained distributed sensor network.
Abstract: Distributed sensor networks have been proposed for a wide range of applications. The main purpose of a sensor network is to monitor an area, including detecting, identifying, localizing, and tracking one or more objects of interest. These networks may be used by the military in surveillance, reconnaissance, and combat scenarios or around the perimeter of a manufacturing plant for intrusion detection. In other applications such as hearing aids and multimedia, microphone networks are capable of enhancing audio signals under noisy conditions for improved intelligibility, recognition, and cuing for camera aiming. Previous developments in integrated circuit technology have allowed the construction of low-cost miniature sensor nodes with signal processing and wireless communication capabilities. These technological advances not only open up many possibilities but also introduce challenging issues for the collaborative processing of wideband acoustic and seismic signals for source localization and beamforming in an energy-constrained distributed sensor network. The purpose of this article is to provide an overview of these issues.

563 citations
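
One of the localization problems surveyed, time-difference-of-arrival (TDOA) source localization, can be sketched as a nonlinear least-squares fit. The geometry and noise level below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0                                    # speed of sound, m/s
sensors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
source = np.array([3.0, 7.0])

# TDOAs relative to sensor 0, with a little timing noise.
ranges = np.linalg.norm(sensors - source, axis=1)
tdoa = (ranges[1:] - ranges[0]) / C + 1e-5 * np.random.default_rng(3).standard_normal(3)

def residuals(x):
    r = np.linalg.norm(sensors - x, axis=1)
    return (r[1:] - r[0]) / C - tdoa

est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print(est)                                   # close to (3, 7)
```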


Journal ArticleDOI
TL;DR: A new domain of collaborative information communication and processing through the framework of distributed source coding using syndromes, which enables highly effective and efficient compression across a sensor network without the need to establish inter-node communication, using well-studied and fast error-correcting coding algorithms.
Abstract: The distributed nature of the sensor network architecture introduces unique challenges and opportunities for collaborative networked signal processing techniques that can potentially lead to significant performance gains. Many evolving low-power sensor network scenarios need to have high spatial density to enable reliable operation in the face of component node failures as well as to facilitate high spatial localization of events of interest. This induces a high level of network data redundancy, where spatially proximal sensor readings are highly correlated. We propose a new way of removing this redundancy in a completely distributed manner, i.e., without the sensors needing to talk to one another. Our constructive framework for this problem is dubbed DISCUS (distributed source coding using syndromes) and is inspired by fundamental concepts from information theory. We review the main ideas, provide illustrations, and give the intuition behind the theory that enables this framework. We present a new domain of collaborative information communication and processing through the framework of distributed source coding. This framework enables highly effective and efficient compression across a sensor network without the need to establish inter-node communication, using well-studied and fast error-correcting coding algorithms.

563 citations
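
The canonical toy example behind syndrome-based distributed coding: a 3-bit reading X whose neighbor's reading Y differs in at most one bit can be conveyed with only 2 syndrome bits, because the decoder resolves the coset member using Y. This sketch mirrors that textbook construction, not the article's full framework.

```python
from itertools import product

# Code C = {000, 111}, the (3,1) repetition code; cosets are indexed by the
# 2-bit syndrome s = (x0^x1, x1^x2). Each coset contains two words at Hamming
# distance 3, so side information Y within distance 1 of X picks the right one.
def syndrome(x):
    return (x[0] ^ x[1], x[1] ^ x[2])

def decode(s, y):
    """Among the two 3-bit words with syndrome s, pick the one closest to y."""
    coset = [x for x in product((0, 1), repeat=3) if syndrome(x) == s]
    return min(coset, key=lambda x: sum(a != b for a, b in zip(x, y)))

x = (1, 0, 0)          # sensor reading (3 bits)
y = (0, 0, 0)          # correlated side information: differs in <= 1 bit
s = syndrome(x)        # only these 2 bits are transmitted
assert decode(s, y) == x
print("sent 2 bits instead of 3; decoded", decode(s, y))
```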


Journal Article
TL;DR: This article introduces the new field of network tomography, a field which it is believed will benefit greatly from the wealth of signal processing theory and algorithms.
Abstract: Today's Internet is a massive, distributed network which continues to explode in size as e-commerce and related activities grow. The heterogeneous and largely unregulated structure of the Internet renders tasks such as dynamic routing, optimized service provision, service-level verification, and detection of anomalous/malicious behavior increasingly challenging tasks. The problem is compounded by the fact that one cannot rely on the cooperation of individual servers and routers to aid in the collection of network traffic measurements vital for these tasks. In many ways, network monitoring and inference problems bear a strong resemblance to other "inverse problems" in which key aspects of a system are not directly observable. Familiar signal processing problems such as tomographic image reconstruction, system identification, and array processing all have interesting interpretations in the networking context. This article introduces the new field of network tomography, a field which we believe will benefit greatly from the wealth of signal processing theory and algorithms.

556 citations
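
A minimal instance of the inverse problem described above: path-level delay measurements y relate to link delays x through a binary routing matrix A, y = A x, and link delays are recovered by nonnegative least squares. The four-link topology is a made-up example.

```python
import numpy as np
from scipy.optimize import nnls

# Rows = measured end-to-end paths, columns = links (made-up 4-link network).
A = np.array([[1, 1, 0, 0],    # path 1 uses links 0,1
              [1, 0, 1, 0],    # path 2 uses links 0,2
              [0, 1, 1, 1],    # path 3 uses links 1,2,3
              [1, 0, 0, 1]],   # path 4 uses links 0,3
             dtype=float)
x_true = np.array([5.0, 2.0, 7.0, 1.0])         # per-link delays (ms)
rng = np.random.default_rng(4)
y = A @ x_true + 0.05 * rng.standard_normal(4)  # noisy path delays

x_hat, _ = nnls(A, y)                        # link delays must be nonnegative
print(x_hat)                                 # close to [5, 2, 7, 1]
```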


Journal ArticleDOI
TL;DR: An important function of hyperspectral signal processing is to eliminate the redundancy in the spectral and spatial sample data while preserving the high-quality features needed for detection, discrimination, and classification.
Abstract: Electro-optical remote sensing involves the acquisition of information about an object or scene without coming into physical contact with it. This is achieved by exploiting the fact that the materials comprising the various objects in a scene reflect, absorb, and emit electromagnetic radiation in ways characteristic of their molecular composition and shape. If the radiation arriving at the sensor is measured at each wavelength over a sufficiently broad spectral band, the resulting spectral signature, or simply spectrum, can be used (in principle) to uniquely characterize and identify any given material. An important function of hyperspectral signal processing is to eliminate the redundancy in the spectral and spatial sample data while preserving the high-quality features needed for detection, discrimination, and classification. This dimensionality reduction is implemented in a scene-dependent (adaptive) manner and may be implemented as a distinct step in the processing or as an integral part of the overall algorithm. The most widely used algorithm for dimensionality reduction is principal component analysis (PCA) or, equivalently, Karhunen-Loeve transformation.
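
PCA-based dimensionality reduction, as named in the abstract, in its simplest form (synthetic data; a real pipeline would fold this into the detection chain):

```python
import numpy as np

def pca_reduce(X, k):
    """Project (n_samples, bands) data onto its top-k principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Eigendecomposition of the band covariance (Karhunen-Loeve transform).
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1][:k]
    return Xc @ vecs[:, order], vals[order]

rng = np.random.default_rng(5)
# 200-band spectra that actually live near a 3-D subspace plus noise.
basis = rng.standard_normal((200, 3))
X = rng.standard_normal((500, 3)) @ basis.T + 0.01 * rng.standard_normal((500, 200))
Z, var = pca_reduce(X, 5)
print(var / var.sum())   # the first 3 components carry nearly all the variance
```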

Journal ArticleDOI
TL;DR: This article presents applications of entropic spanning graphs to imaging and feature clustering applications, naturally suited to applications where entropy and information divergence are used as discriminants.
Abstract: This article presents applications of entropic spanning graphs to imaging and feature clustering applications. Entropic spanning graphs span a set of feature vectors in such a way that the normalized spanning length of the graph converges to the entropy of the feature distribution as the number of random feature vectors increases. This property makes these graphs naturally suited to applications where entropy and information divergence are used as discriminants: texture classification, feature clustering, image indexing, and image registration. Among other areas, these problems arise in geographical information systems, digital libraries, medical information processing, video indexing, multisensor fusion, and content-based retrieval.
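
A sketch of the core construction: the length of a minimal spanning tree over feature vectors, with edge weights raised to a power gamma, tracks the Renyi entropy of the underlying distribution. The distribution-independent additive constant is omitted here, so the values are useful for comparing feature sets rather than as absolute entropies.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_entropy_statistic(X, gamma=1.0):
    """Renyi-entropy discriminant from a minimal spanning tree.
    X: (n, d) feature vectors. Sums gamma-weighted MST edge lengths and
    normalizes; the additive constant is omitted, so values are meaningful
    for comparisons between feature sets."""
    n, d = X.shape
    W = squareform(pdist(X)) ** gamma
    L = minimum_spanning_tree(W).sum()
    alpha = (d - gamma) / d
    return (d / gamma) * np.log(L / n**alpha)

rng = np.random.default_rng(6)
tight = rng.standard_normal((300, 2)) * 0.3      # low-entropy cluster
spread = rng.standard_normal((300, 2)) * 3.0     # high-entropy cloud
print(mst_entropy_statistic(tight), mst_entropy_statistic(spread))
```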

Journal ArticleDOI
TL;DR: This work explores system partitioning between the sensor cluster and the base station, employing computation-communication tradeoffs to reduce energy dissipation, and shows that system partitioning of computation within the cluster can also improve energy efficiency by using dynamic voltage scaling (DVS).
Abstract: There are many new challenges to be faced in implementing signal processing algorithms and designing energy-efficient DSPs for microsensor networks. We study system partitioning of computation to improve the energy efficiency of a wireless sensor networking application. We explore system partitioning between the sensor cluster and the base station, employing computation-communication tradeoffs to reduce energy dissipation. Also we show that system partitioning of computation within the cluster can also improve the energy efficiency by using dynamic voltage scaling (DVS).
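
The energy win from DVS can be shown with two lines of arithmetic: switching energy per cycle scales as V squared, so running a task slower at reduced voltage, just fast enough to meet its deadline, cuts energy roughly quadratically. The constants below are made up for illustration.

```python
# Illustrative DVS arithmetic (constants are assumptions, not from the article).
# Energy per cycle ~ C * V^2; max clock frequency scales roughly with voltage.
C = 1e-9            # effective switched capacitance, F
cycles = 1e6        # cycles needed by the task
deadline = 0.02     # seconds

def energy(v):                 # J for the whole task at supply voltage v
    return C * v**2 * cycles

def max_freq(v, f_ref=100e6, v_ref=3.3):   # crude linear frequency model
    return f_ref * v / v_ref

v_full = 3.3                                   # full voltage finishes early
v_scaled = 3.3 * (cycles / deadline) / max_freq(v_full)  # just meets deadline
print(energy(v_full), energy(v_scaled))        # scaled run uses ~75% less energy
```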

Journal ArticleDOI
TL;DR: The state of the art in scaling behavior in teletraffic is overview, focusing on the capabilities of the wavelet transform as a key tool for unraveling the mysteries of traffic statistics and dynamics.
Abstract: The complexity and richness of telecommunications traffic is such that one may despair to find any regularity or explanatory principles. Nonetheless, the discovery of scaling behavior in teletraffic has provided hope that parsimonious models can be found. The statistics of scaling behavior present many challenges, especially in nonstationary environments. In this article, we overview the state of the art in this area, focusing on the capabilities of the wavelet transform as a key tool for unraveling the mysteries of traffic statistics and dynamics.
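
The workhorse tool surveyed here, the wavelet logscale diagram, fits a line to the log of wavelet-detail energy across scales; for fractional Gaussian noise the slope is 2H - 1. A crude sketch with PyWavelets on white noise (H = 0.5, so the slope should be near zero):

```python
import numpy as np
import pywt

def logscale_slope(x, wavelet='db3', levels=8):
    """Slope of log2(mean detail energy) vs scale j (a crude logscale diagram).
    For fractional Gaussian noise the slope is 2H - 1."""
    coeffs = pywt.wavedec(x, wavelet, level=levels)
    details = coeffs[1:]            # [cD_levels, ..., cD_1], coarse to fine
    j = np.arange(levels, 0, -1)    # scale index of each detail vector
    log_energy = np.array([np.log2(np.mean(d**2)) for d in details])
    return np.polyfit(j, log_energy, 1)[0]

x = np.random.default_rng(7).standard_normal(2**14)   # white noise: H = 0.5
slope = logscale_slope(x)
print(slope, "-> H ~", (slope + 1) / 2)               # slope near 0, H near 0.5
```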


Journal ArticleDOI
TL;DR: This framework provides a unifying conceptual structure for a variety of traditional processing techniques and a precise mathematical setting for developing generalizations and extensions of algorithms, leading to a potentially useful paradigm for signal processing with applications in areas including frame theory, quantization and sampling methods, detection, parameter estimation, covariance shaping, and multiuser wireless communication systems.
Abstract: In this article we present a signal processing framework that we refer to as quantum signal processing (QSP) (Eldar 2001) that is aimed at developing new or modifying existing signal processing algorithms by borrowing from the principles of quantum mechanics and some of its interesting axioms and constraints. However, in contrast to such fields as quantum computing and quantum information theory, it does not inherently depend on the physics associated with quantum mechanics. Consequently, in developing the QSP framework we are free to impose quantum mechanical constraints that we find useful and to avoid those that are not. This framework provides a unifying conceptual structure for a variety of traditional processing techniques and a precise mathematical setting for developing generalizations and extensions of algorithms, leading to a potentially useful paradigm for signal processing with applications in areas including frame theory, quantization and sampling methods, detection, parameter estimation, covariance shaping, and multiuser wireless communication systems. We present a general overview of the key elements in quantum physics that provide the basis for the QSP framework and an indication of the key results that have so far been developed within this framework. In the remainder of the article, we elaborate on the various elements of this framework.

Journal ArticleDOI
TL;DR: This study of fractal landscapes departs from the simplest but yet effective model of fractional Brownian motion and explores its two-dimensional (2-D) extensions, focusing on the ability to introduce anisotropy in this model.
Abstract: Our study of fractal landscapes departs from the simplest but yet effective model of fractional Brownian motion and explores its two-dimensional (2-D) extensions. We focus on the ability to introduce anisotropy in this model, and we are also interested in considering its discrete-space counterparts. We then move towards other multifractional and multifractal models providing more degrees of freedom for fitting complex 2-D fields. We note that many of the models and processing are implemented in FracLab, a software MATLAB/Scilab toolbox for fractal processing of signals and images.
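
A standard way to synthesize an isotropic fractional-Brownian-like surface in the spirit of the models discussed: shape complex white noise in the Fourier domain with a power-law amplitude spectrum (power ~ |f|^-(2H+2) for a 2-D fBm-type field). Anisotropy could be introduced by making the exponent direction-dependent; this sketch keeps it isotropic.

```python
import numpy as np

def fbm_surface(n=256, H=0.7, seed=0):
    """Spectral synthesis of an isotropic fBm-like surface.
    Amplitude spectrum ~ |f|**-(H+1), i.e., power ~ |f|**-(2H+2)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                               # avoid division by zero at DC
    amp = f ** -(H + 1)
    amp[0, 0] = 0.0                             # zero-mean field
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    field = np.fft.ifft2(amp * noise).real
    return field / field.std()

z = fbm_surface(H=0.7)    # larger H gives a smoother "landscape"
print(z.shape, z.std())
```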

Journal ArticleDOI
TL;DR: This tutorial article focuses on two such invariants related to the time dimension of the problem, namely, long-range dependence, or self-similarity, and heavy-tail marginal distributions.
Abstract: The analysis and modeling of computer network traffic is a daunting task considering the amount of available data. This is quite obvious when considering the spatial dimension of the problem, since the number of interacting computers, gateways and switches can easily reach several thousands, even in a local area network (LAN) setting. This is also true for the time dimension: Willinger and Paxson (see Ann. Statist., vol.25, no.5, p.1856-66, 1997) cite the figures of 439 million packets and 89 gigabytes of data for a single week record of the activity of a university gateway in 1995. The complexity of the problem further increases when considering wide area network (WAN) data. In light of the above, it is clear that a notion of importance for modern network engineering is that of invariants, i.e., characteristics that are observed with some reproducibility and independently of the precise settings of the network under consideration. In this tutorial article, we focus on two such invariants related to the time dimension of the problem, namely, long-range dependence, or self-similarity, and heavy-tail marginal distributions.
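
The classical aggregated-variance diagnostic for long-range dependence fits in a few lines: block-average the series at growing scales m and regress log variance on log m; the variance of the aggregated series scales as m^(2H-2). White noise below should give H near 0.5.

```python
import numpy as np

def aggregated_variance_H(x, scales=(1, 2, 4, 8, 16, 32, 64)):
    """Aggregated-variance estimate of the Hurst parameter H.
    Var of the m-aggregated series scales as m**(2H-2) for LRD processes."""
    log_m, log_v = [], []
    for m in scales:
        n = len(x) // m
        blocks = x[:n * m].reshape(n, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(blocks.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1 + slope / 2

x = np.random.default_rng(8).standard_normal(2**15)
print(aggregated_variance_H(x))    # near 0.5 for white noise
```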

Journal ArticleDOI
TL;DR: This equation, whose key feature is the use of a local vector geometry, combines the advantages of diffusion PDEs for noise removal with vector shock filters to enhance blurred edges.
Abstract: In this article, we propose a local and geometric point of view of vector image filtering using diffusion PDEs. It allows us to analyze proposed methods of vector data regularization, as well as propose a new vector PDE, well adapted for image restoration. This equation, whose key feature is the use of a local vector geometry, combines the advantages of diffusion PDEs for noise removal with vector shock filters to enhance blurred edges. The extension to norm-constrained vector fields can be the starting point for other well-known constrained problems, such as optical flow computation, orientation analysis, and tensor image restoration.
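
The scalar ancestor of the vector PDEs discussed here is Perona-Malik diffusion, where the diffusivity g falls off with gradient magnitude so edges are smoothed less than flat regions. A few explicit iterations (the classical scalar scheme, not the article's vector formulation):

```python
import numpy as np

def perona_malik(img, n_iter=50, K=0.1, dt=0.2):
    """Explicit Perona-Malik diffusion: u_t = div(g(|grad u|) grad u),
    with g(s) = 1 / (1 + (s/K)^2). Edges (large gradients) diffuse slowly."""
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / K) ** 2)
    for _ in range(n_iter):
        # Differences to the four neighbors (periodic boundaries via np.roll,
        # fine for this demo).
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: diffusion flattens the noise but preserves the step.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.1 * np.random.default_rng(9).standard_normal(img.shape)
print(np.abs(perona_malik(img) - img).mean())
```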

Journal ArticleDOI
TL;DR: The sensor net architecture presented in this article starts from a high-level description of the mission or task to be accomplished and then commands individual nodes to sense and communicate in a manner that accomplishes the desired result with attention to minimizing the computational, communication, and sensing resources required.
Abstract: Suppose we have a set of sensor nodes spread over a geographical area. Assume that these nodes are able to perform processing as well as sensing and are additionally capable of communicating with each other by means of a wireless network. Though each node is an independent hardware device, they need to coordinate their sensing, computation and communication to acquire relevant information about their environment so as to accomplish some high-level task. The integration of processing makes such nodes more autonomous and the entire system, which we call a sensor net, becomes a novel type of sensing, processing, and communication engine. The sensor net architecture presented in this article starts from a high-level description of the mission or task to be accomplished and then commands individual nodes to sense and communicate in a manner that accomplishes the desired result with attention to minimizing the computational, communication, and sensing resources required. Much work remains to be done to refine and implement the relational sensing ideas presented here and validate their performance. We believe, however, that the potential pay-off for the relation-based sensing and tracking we have proposed can be large, both in terms of developing rich theories on the design and complexity of sensing algorithms, as well as in terms of the eventual impact of the deployed sensor systems.

Journal ArticleDOI
TL;DR: It is shown that marked point processes are better adapted than Markov random fields (MRFs) to including geometrical constraints in the solution and to dealing with strongly correlated noise, and the approach can be applied to a broad range of image processing problems.
Abstract: In this article, we consider the marked point process framework for image analysis. We first show that marked point processes are better adapted than Markov random fields (MRFs) to including geometrical constraints in the solution and to dealing with strongly correlated noise. Then, we consider three applications in remote sensing: road network extraction, building extraction, and image segmentation. For each of them, we define a prior model incorporating geometrical constraints on the solution. We also derive a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm to obtain the optimal solution with respect to the defined models. Results show that this approach is promising and can be applied to a broad range of image processing problems.

Journal ArticleDOI
TL;DR: This work highlights the performance enhancements that can be obtained on both the reverse and forward links through the use of an antenna array architecture that supports a combination of beamforming and transmit diversity, and focuses on the performance enhancements for the forward link.
Abstract: Third-generation (3G) cellular code division multiple access (CDMA) systems can provide an increase in capacity for system operators over existing second-generation CDMA systems. The gain in capacity for the base station to mobile (forward) link can be attributed to improvements in coding techniques, fast power control, and transmit diversity techniques. Additional gains in the mobile to base station (reverse) link can be attributed to the use of coherent quadrature phase shift keyed (QPSK) modulation and better coding techniques. While these enhancements can improve the performance of the system, system operators expect that with increased demand for data services, even greater capacity enhancements may be desired. There are essentially three methods, which we describe, based on diversity, spatial beamforming, and a combination of diversity and beamforming, to improve the performance of the system through the use of additional antennas at the base station transmitter and receiver. The performance improvements are a function of the antenna spacings and the algorithms used to weight the antenna signals. We focus on the possibilities for the cdma2000 3G system that do not require standards changes. We highlight the performance enhancements that can be obtained on both the reverse and forward links through use of an antenna array architecture that supports a combination of beamforming and transmit diversity. We focus on the performance enhancements for the forward link.
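
The best-known open-loop transmit diversity scheme in 3G CDMA is Alamouti's two-antenna space-time block code; a flat-fading sketch of its encode/combine steps is below. This illustrates transmit diversity generally, not the specific cdma2000 configuration studied in the article.

```python
import numpy as np

rng = np.random.default_rng(10)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)       # two QPSK symbols
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # fading gains
n1, n2 = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

# Alamouti encoding over two symbol periods:
#   slot 1: antenna 1 sends s1,        antenna 2 sends s2
#   slot 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1)
r1 = h1 * s1 + h2 * s2 + n1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2

# Linear combining recovers each symbol with two-branch diversity gain.
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
print(np.round(s1_hat, 3), np.round(s2_hat, 3))   # close to s1, s2
```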

Journal ArticleDOI
TL;DR: This article presents a variational approach to MAP estimation with a more qualitative and tutorial emphasis and proposes an information-theoretic gradient descent flow which is a result of minimizing a functional that is a hybrid between a negentropy variational integral and a total variation.
Abstract: A maximum a posteriori (MAP) estimator using a Markov or a maximum entropy random field model for a prior distribution may be viewed as a minimizer of a variational problem. Using notions from robust statistics, a variational filter referred to as a Huber gradient descent flow is proposed. It is the result of optimizing a Huber functional subject to some noise constraints and takes a hybrid form of a total variation diffusion for large gradient magnitudes and of a linear diffusion for small gradient magnitudes. Using the gained insight, and as a further extension, we propose an information-theoretic gradient descent flow which results from minimizing a functional that is a hybrid between a negentropy variational integral and a total variation. Illustrative examples demonstrate a much improved performance of the approach in the presence of Gaussian and heavy-tailed noise. In this article, we present a variational approach to MAP estimation with a more qualitative and tutorial emphasis. The key idea behind this approach is to use geometric insight in helping construct regularizing functionals and avoiding a subjective choice of a prior in MAP estimation. Using tools from robust statistics and information theory, we show that we can extend this strategy and develop two gradient descent flows for image denoising with a demonstrated performance.
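
A 1-D sketch of the Huber gradient descent flow idea: gradient descent on a Huber penalty of the signal's differences plus a data-fidelity term, so small gradients see linear (Gaussian-like) diffusion while large ones see total-variation-like behavior. Parameters are illustrative, not the article's.

```python
import numpy as np

def huber_deriv(t, delta):
    """Derivative of the Huber function: linear for |t| <= delta, constant beyond."""
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

def huber_denoise(f, lam=1.0, delta=0.05, step=0.1, n_iter=500):
    """Gradient descent on  sum huber(u[i+1]-u[i]) + (lam/2)*||u - f||^2."""
    u = f.copy()
    for _ in range(n_iter):
        d = np.diff(u)                      # forward differences
        flow = np.zeros_like(u)
        flow[:-1] += huber_deriv(d, delta)
        flow[1:] -= huber_deriv(d, delta)   # flow = -grad of the Huber term
        u -= step * (lam * (u - f) - flow)
    return u

# Noisy piecewise-constant signal: the step survives, the noise does not.
rng = np.random.default_rng(11)
f = np.concatenate([np.zeros(100), np.ones(100)]) + 0.1 * rng.standard_normal(200)
print(np.round(huber_denoise(f)[95:105], 2))
```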

Journal ArticleDOI
TL;DR: A new digital signal processor core for handheld terminals, the SPXK5, combines performance and flexibility, is compatible with high-level languages, and has an architecture that features low power consumption.
Abstract: We have developed a new digital signal processor (DSP) core for handheld terminals. The SPXK5 combines performance and flexibility, is compatible with high-level languages, and its architecture features low power consumption. We describe the SPXK5 architecture and its performance in DSP applications. We also consider the question of application-specific enhancements. Such architecture enhancements as add-compare-select instructions or coprocessors for the Viterbi decoding algorithm are employed in some programmable DSPs, and for video codecs, other architectures include either single-instruction multiple-data (SIMD) instructions or media coprocessors. While such application-specific enhancements are valuable when their applications are actually in use, they do nothing to enhance the performance of other applications, and the more they are added, the greater the increase in chip size and energy requirements. In other words, for handheld terminals, such enhancements need to be chosen in a careful and balanced way. We have done this in developing the SPXK5, in which a wide range of signal processing algorithms are efficiently implemented.
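
The add-compare-select (ACS) operation mentioned above is the inner kernel of Viterbi decoding; in software it is just the following (a hardware ACS unit does this in one cycle per butterfly):

```python
def acs(pm_a, pm_b, branch_a, branch_b):
    """Add-compare-select: extend two path metrics by their branch metrics,
    keep the smaller, and record which predecessor won (for traceback)."""
    cand_a = pm_a + branch_a
    cand_b = pm_b + branch_b
    if cand_a <= cand_b:
        return cand_a, 0     # survivor came from path a
    return cand_b, 1         # survivor came from path b

# One trellis step for a 2-state toy decoder.
new_metric, decision = acs(pm_a=3.0, pm_b=5.0, branch_a=1.5, branch_b=0.2)
print(new_metric, decision)   # (4.5, 0): path a wins, 3.0+1.5 < 5.0+0.2
```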

Journal Article
TL;DR: This work studies direction of arrival estimation using expectation-maximization (EM) and space alternating generalized EM (SAGE) algorithms and suggests to use smaller search spaces after a few iterations so the overall computational costs can be reduced drastically.
Abstract: In this work we study direction of arrival estimation using expectation-maximization (EM) and space alternating generalized EM (SAGE) algorithms. The EM algorithm is a general and popular numerical method for finding maximum-likelihood estimates which is characterized by simple implementation and stable convergence. The SAGE algorithm, a generalized form of the EM algorithm, allows a more flexible optimization scheme and sometimes converges faster than the EM algorithm. Motivated by the componentwise convergence of the EM and SAGE algorithms, we suggest to use smaller search spaces after a few iterations. In this way, the overall computational costs can be reduced drastically. A procedure derived from the convergence properties of the EM and SAGE algorithms is proposed to determine the search spaces adaptively. By numerical experiments we demonstrate that the fast EM and the fast SAGE algorithms are computationally more efficient and have the same statistical performance as the original algorithms.
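
The search-space shrinking idea can be seen even without the EM/SAGE machinery: scan a coarse grid of angles with a uniform-linear-array beamformer, then refine only around the current peak. This is a plain grid-search illustration of the shrinking-search-space idea, not the EM/SAGE update itself.

```python
import numpy as np

def steering(theta_deg, m=8, d=0.5):
    """Steering vector of an m-element uniform linear array (spacing d in wavelengths)."""
    k = np.arange(m)
    return np.exp(2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

def coarse_to_fine_doa(R, spans=(90, 10, 1), points=19):
    """Maximize the beamformer spectrum a(theta)^H R a(theta) over successively
    narrower angle windows centered on the previous peak."""
    center = 0.0
    for span in spans:
        grid = np.clip(center + np.linspace(-span, span, points), -90, 90)
        powers = []
        for t in grid:
            a = steering(t)
            powers.append(np.real(np.conj(a) @ R @ a))
        center = grid[int(np.argmax(powers))]
    return center

# Sample covariance for one source at 23 degrees in noise (8-element ULA).
rng = np.random.default_rng(12)
a = steering(23.0)
sym = rng.standard_normal(200) + 1j * rng.standard_normal(200)
X = np.outer(a, sym) + 0.1 * (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
R = X @ X.conj().T / 200
print(coarse_to_fine_doa(R))   # close to 23
```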

Journal ArticleDOI
TL;DR: A result in modeling lower order (univariate and bivariate) probability densities of pixel values resulting from bandpass filtering of images is reviewed.
Abstract: We review a result in modeling lower order (univariate and bivariate) probability densities of pixel values resulting from bandpass filtering of images. Assuming an object-based model for images, a parametric family of probabilities, called Bessel K forms, has been derived (Grenander and Srivastava 2001). This parametric family matches well with the observed histograms for a large variety of images (video, range, infrared, etc.) and filters (Gabor, Laplacian Gaussian, derivatives, etc). The Bessel parameters relate to certain characteristics of objects present in an image and provide fast tools either for object recognition directly or for an intermediate (pruning) step of a larger recognition system. Examples are presented to illustrate the estimation of Bessel forms and their applications in clutter classification and object recognition.
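
A sketch of fitting a Bessel K form to bandpass-filtered pixel data, using moment estimators (shape p from excess kurtosis, scale c from variance) and normalizing the density numerically. Treat the exact constants as assumptions of this sketch rather than the paper's formulas.

```python
import numpy as np
from scipy.special import kv
from scipy.stats import kurtosis

def fit_bessel_k(x):
    """Moment-based Bessel K fit: p = 3/excess_kurtosis, c = var/p."""
    p = 3.0 / kurtosis(x)            # scipy's kurtosis is excess by default
    c = np.var(x) / p
    return p, c

def bessel_k_density(grid, p, c):
    """Unnormalized Bessel K form |x|^(p-1/2) * K_{p-1/2}(sqrt(2/c)|x|),
    normalized numerically on the given grid."""
    ax = np.maximum(np.abs(grid), 1e-8)      # K_nu diverges at 0; clip
    f = ax ** (p - 0.5) * kv(p - 0.5, np.sqrt(2.0 / c) * ax)
    return f / np.trapz(f, grid)

# Filtered-image surrogate: a heavy-tailed Laplacian sample (p = 1 case).
x = np.random.default_rng(13).laplace(size=20000)
p, c = fit_bessel_k(x)
grid = np.linspace(-8, 8, 1001)
pdf = bessel_k_density(grid, p, c)
print(p, c, np.trapz(pdf, grid))   # p near 1 recovers the Laplacian shape
```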

Journal ArticleDOI
TL;DR: The MSA core is a dual-MAC modified Harvard architecture that has been designed to have good performance on both voice and video algorithms, and some of the best features and simplicity of microcontrollers have been incorporated into the core.
Abstract: The convergence of voice and video in next-generation wireless applications requires a processor that can efficiently implement a broad range of advanced third generation (3G) wireless algorithms. The micro signal architecture (MSA) core is a dual-MAC modified Harvard architecture that has been designed to have good performance on both voice and video algorithms. In addition, some of the best features and simplicity of microcontrollers have been incorporated into the MSA core. This article presents an overview of the MSA architecture, key engineering issues and their solutions, and details associated with the first implementation of the core. The utility of the MSA architecture for practical 3G wireless applications is illustrated with several application examples and performance benchmarks for typical DSP and image/video kernels. The DSP features of the MSA core include: two 16-bit single-cycle throughput multipliers, two 40-bit split data ALUs, and hardware support for on-the-fly saturation and clipping; two 32-bit pointer ALUs with support for circular and bit-reversed addressing; two separate data ports to a unified 4 GB memory space, a parallel port for instructions, and two loop counters that allow nested zero overhead looping.
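
The "on-the-fly saturation" mentioned above means the accumulator clips at its 40-bit limits instead of wrapping around. A tiny emulation of a saturating MAC loop (the bit width follows the article; the code itself is only an illustration):

```python
ACC_MAX = 2**39 - 1          # 40-bit signed accumulator limits
ACC_MIN = -2**39

def mac_saturating(acc, a, b):
    """One multiply-accumulate with 40-bit saturation instead of wraparound."""
    acc += a * b
    return max(ACC_MIN, min(ACC_MAX, acc))

# Accumulate products of 16-bit operands; the accumulator clips, never wraps.
acc = 0
for _ in range(100000):
    acc = mac_saturating(acc, 32767, 32767)
print(acc == ACC_MAX)    # True: saturated at the 40-bit ceiling
```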

Journal ArticleDOI
Cormac Herley1
TL;DR: The ease with which early watermarking algorithms were broken has given rise to a new set of schemes that are usually robust to a wide variety of attacks, but this has created an illusion of progress, when in reality there is none.
Abstract: The ease with which early watermarking algorithms were broken has given rise to a new set of schemes that are usually robust to a wide variety of attacks. We argue that this has created an illusion of progress, when in reality there is none. Most published watermarking algorithms, like their predecessors, protect all objects in a neighborhood surrounding the marked object. We point out that while this is necessary, it is very far from being sufficient. To withstand adversarial attack, a watermarking scheme would have to protect all valuable variations of an object, not merely ones that are close to it.

Journal ArticleDOI
TL;DR: An overview of scale-spaces and their application to noise suppression and segmentation of 1-D signals and 2-D images and argues that a very simple nonlinear scale-space leads to a fast estimation algorithm which produces accurate segmentations and estimates of signals and images.
Abstract: In this article, we give an overview of scale-spaces and their application to noise suppression and segmentation of 1-D signals and 2-D images. Several prototypical problems serve as our motivation. We review several scale-spaces (linear Gaussian, Perona-Malik, and SIDE-stabilized inverse diffusion equation) and discuss their advantages and shortcomings. We describe our previous work and argue that a very simple nonlinear scale-space leads to a fast estimation algorithm which produces accurate segmentations and estimates of signals and images.
