
Showing papers in "IEEE Journal of Selected Topics in Signal Processing in 2013"


Journal ArticleDOI
TL;DR: The design for these extensions represents the latest state of the art for video coding and its applications, including work on range extensions for color format and bit depth enhancement, embedded-bitstream scalability, and 3D video.
Abstract: This paper describes extensions to the High Efficiency Video Coding (HEVC) standard that are active areas of current development in the relevant international standardization committees. While the first version of HEVC is sufficient to cover a wide range of applications, needs for enhancing the standard in several ways have been identified, including work on range extensions for color format and bit depth enhancement, embedded-bitstream scalability, and 3D video. The standardization of extensions in each of these areas will be completed in 2014, and further work is also planned. The design for these extensions represents the latest state of the art for video coding and its applications.

420 citations


Journal ArticleDOI
TL;DR: A systematic methodology for designing local agent objective functions that guarantees an equivalence between the resulting Nash equilibria and the optimizers of the system level objective and that the resulting game possesses an inherent structure that can be exploited in distributed learning, e.g., potential games.
Abstract: The central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control law on the least amount of information possible. This paper focuses on achieving this goal using the field of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting Nash equilibria and the optimizers of the system level objective and (ii) that the resulting game possesses an inherent structure that can be exploited in distributed learning, e.g., potential games. The control design can then be completed utilizing any distributed learning algorithm which guarantees convergence to a Nash equilibrium for the attained game structure. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures.
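One standard construction that satisfies both conditions is the marginal-contribution ("wonderful life") utility, in which each agent is paid the change it induces in the system-level objective relative to a fixed baseline action. The sketch below uses a toy objective and baseline of my own choosing, not the paper's, to check the defining property: every unilateral deviation changes the agent's utility by exactly the change in the global objective, so the global objective is an exact potential function for the resulting game.

```python
# Toy system-level objective over joint actions: reward adjacent agreement.
def global_objective(actions):
    return sum(1 for a, b in zip(actions, actions[1:]) if a == b)

BASELINE = 0  # fixed "null" action used to measure each agent's contribution

def marginal_utility(i, actions):
    # Agent i's payoff: global value with its action minus with the baseline.
    without = list(actions)
    without[i] = BASELINE
    return global_objective(actions) - global_objective(without)

def is_exact_potential(actions, action_set=(0, 1)):
    # Verify: each unilateral deviation changes the deviator's utility by
    # exactly the change in the global objective (exact potential property).
    for i in range(len(actions)):
        for new in action_set:
            dev = list(actions)
            dev[i] = new
            du = marginal_utility(i, dev) - marginal_utility(i, actions)
            dg = global_objective(dev) - global_objective(actions)
            if du != dg:
                return False
    return True
```

Because the "without" term does not depend on agent i's own action, the equality holds for any global objective, which is why distributed learning algorithms for potential games apply directly.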

292 citations


Journal ArticleDOI
TL;DR: A novel consensus Gaussian Mixture-Cardinalized Probability Hypothesis Density filter is developed that provides a fully distributed, scalable and computationally efficient solution to the distributed multitarget tracking problem.
Abstract: The paper addresses distributed multitarget tracking over a network of heterogeneous and geographically dispersed nodes with sensing, communication and processing capabilities. The contribution has been to develop a novel consensus Gaussian Mixture-Cardinalized Probability Hypothesis Density (GM-CPHD) filter that provides a fully distributed, scalable and computationally efficient solution to the problem. The effectiveness of the proposed approach is demonstrated via simulation experiments on realistic scenarios.

286 citations


Journal ArticleDOI
TL;DR: A distributed random projection algorithm for constrained convex optimization problems that can be used by multiple agents connected over a time-varying network, where each agent has its own objective function and its own constraint set.
Abstract: The random projection algorithm is of interest for constrained optimization when the constraint set is not known in advance or when projection onto the whole constraint set is computationally prohibitive. This paper presents a distributed random projection algorithm for constrained convex optimization problems that can be used by multiple agents connected over a time-varying network, where each agent has its own objective function and its own constraint set. We prove that the iterates of all agents converge to the same point in the optimal set almost surely. Experiments on distributed support vector machines demonstrate the good performance of the algorithm.
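A minimal sketch of the scheme under assumptions of my own: two agents with quadratic costs and interval constraint sets. Each iteration averages the agents' iterates (consensus), takes a local gradient step, and projects onto one randomly sampled local constraint set rather than the full intersection.

```python
import random

# Toy instance: agents minimize f1(x)=(x-4)^2 and f2(x)=x^2 over the
# intersection of their constraint sets [0, 3] and [1, 5] (all invented).
GRADS = [lambda x: 2.0 * (x - 4.0), lambda x: 2.0 * x]
SETS = [[(0.0, 3.0)], [(1.0, 5.0)]]   # each agent holds a list of simple sets

def project(x, interval):
    lo, hi = interval
    return min(max(x, lo), hi)

def distributed_random_projection(steps=4000, seed=0):
    rng = random.Random(seed)
    x = [0.0, 5.0]                      # initial iterates
    for k in range(1, steps + 1):
        step = 1.0 / k                  # diminishing step-size
        mixed = sum(x) / len(x)         # consensus: average the iterates
        x = [project(mixed - step * GRADS[i](mixed),
                     rng.choice(SETS[i]))  # project on ONE sampled local set
             for i in range(2)]
    return x
```

On this instance the common minimizer of the summed cost over the intersection [1, 3] is x = 2, and both agents' iterates approach it.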

208 citations


Journal ArticleDOI
TL;DR: The tiles feature is introduced and the performance of the tool is surveyed, and a tile-based region of interest coding method is developed.
Abstract: Tiles is a new feature in the High Efficiency Video Coding (HEVC) standard that divides a picture into independent, rectangular regions. This division provides a number of advantages. Specifically, it increases the “parallel friendliness” of the new standard by enabling improved coding efficiency for parallel architectures, as compared to previous slice-based methods. Additionally, tiles facilitate improved maximum transmission unit (MTU) size matching, reduced line buffer memory, and additional region-of-interest functionality. In this paper, we introduce the tiles feature and survey the performance of the tool. Coding efficiency is reported for different parallelization factors and MTU size requirements. Additionally, a tile-based region of interest coding method is developed.

207 citations


Journal ArticleDOI
TL;DR: A consistent approach for DMMT is developed by combining a generalized version of Covariance Intersection, based on Exponential Mixture Densities (EMDs), with Random Finite Sets (RFS), with explicit formulae for the use of EMDs with RFSs.
Abstract: In this paper, we consider the problem of Distributed Multi-sensor Multi-target Tracking (DMMT) for networked fusion systems. Many existing approaches for DMMT use multiple hypothesis tracking and track-to-track fusion. However, there are two difficulties with these approaches. First, the computational costs of these algorithms can scale factorially with the number of hypotheses. Second, consistent optimal fusion, which does not double count information, can only be guaranteed for highly constrained network architectures which largely undermine the benefits of distributed fusion. In this paper, we develop a consistent approach for DMMT by combining a generalized version of Covariance Intersection, based on Exponential Mixture Densities (EMDs), with Random Finite Sets (RFS). We first derive explicit formulae for the use of EMDs with RFSs. From this, we develop expressions for the probability hypothesis density filters. This approach supports DMMT in arbitrary network topologies through local communications and computations. We implement this approach using Sequential Monte Carlo techniques and demonstrate its performance in simulations.
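For Gaussian densities, the exponential mixture fusion rule reduces to classical Covariance Intersection. The scalar sketch below (the particular numbers and the weight w are illustrative choices of mine) shows the key consistency property: with equal weights and equal covariances, the fused covariance is not halved, which is what keeps the fusion safe under unknown correlation between the two estimates.

```python
def covariance_intersection(x1, P1, x2, P2, w):
    """Fuse two estimates (x1, P1) and (x2, P2) with weight w in [0, 1].
    Scalar case for clarity; in the matrix case reciprocals become
    matrix inverses."""
    info = w / P1 + (1.0 - w) / P2          # fused information (inverse cov.)
    P = 1.0 / info
    x = P * (w * x1 / P1 + (1.0 - w) * x2 / P2)
    return x, P

x, P = covariance_intersection(1.0, 2.0, 3.0, 2.0, 0.5)
# Equal weights, equal covariances: fused mean is the average of the means,
# and the fused covariance stays at 2.0 rather than dropping to 1.0, so no
# information is double counted.
```

The weight w is typically chosen by a small one-dimensional optimization (e.g., minimizing the trace or determinant of the fused covariance); it is fixed here for simplicity.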

193 citations


Journal ArticleDOI
TL;DR: A detailed mean-square error analysis is performed and it is established that all agents are able to converge to the same Pareto optimal solution within a sufficiently smallmean-square-error (MSE) bound even for constant step-sizes.
Abstract: We consider solving multi-objective optimization problems in a distributed manner by a network of cooperating and learning agents. The problem is equivalent to optimizing a global cost that is the sum of individual components. The optimizers of the individual components do not necessarily coincide and the network therefore needs to seek Pareto optimal solutions. We develop a distributed solution that relies on a general class of adaptive diffusion strategies. We show how the diffusion process can be represented as the cascade composition of three operators: two combination operators and a gradient descent operator. Using the Banach fixed-point theorem, we establish the existence of a unique fixed point for the composite cascade. We then study how close each agent converges towards this fixed point, and also examine how close the Pareto solution is to the fixed point. We perform a detailed mean-square error analysis and establish that all agents are able to converge to the same Pareto optimal solution within a sufficiently small mean-square-error (MSE) bound even for constant step-sizes. We illustrate one application of the theory to collaborative decision making in finance by a network of agents.
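The adapt-then-combine (ATC) form of diffusion can be sketched in a few lines. The toy network, data streams, and combination weights below are my own: three agents observe noisy versions of different means, and the network converges near the Pareto point of the summed quadratic costs (here, the average of the means), up to a small steady-state error governed by the constant step-size.

```python
import random

def atc_diffusion(data_streams, neighbors, weights, mu=0.05, iters=2000):
    """Adapt-then-combine diffusion: each agent takes a local stochastic
    gradient step on its own cost E(d - w)^2, then forms a convex
    combination of its neighbors' intermediate estimates."""
    n = len(data_streams)
    w = [0.0] * n
    for _ in range(iters):
        # Adapt: local stochastic-gradient (LMS) step.
        psi = [w[k] + mu * (next(data_streams[k]) - w[k]) for k in range(n)]
        # Combine: convex combination over each agent's neighborhood.
        w = [sum(weights[k][l] * psi[l] for l in neighbors[k])
             for k in range(n)]
    return w

def stream(mean, rng):
    while True:
        yield mean + rng.gauss(0.0, 0.1)

rng = random.Random(1)
streams = [stream(m, rng) for m in (0.0, 1.0, 2.0)]  # conflicting objectives
nbrs = [[0, 1, 2]] * 3                 # fully connected, for simplicity
W = [[1 / 3, 1 / 3, 1 / 3]] * 3        # doubly stochastic weights
est = atc_diffusion(streams, nbrs, W)
```

With uniform weights the Pareto point of the three quadratic costs is the mean 1.0, and all agents agree on it to within O(mu) fluctuations, mirroring the constant step-size MSE bound the abstract describes.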

188 citations


Journal ArticleDOI
TL;DR: The core transforms specified for the High Efficiency Video Coding (HEVC) standard were designed as finite-precision approximations to the discrete cosine transform (DCT), with special care taken to ensure implementation friendliness and suitability for parallel processing.
Abstract: This paper describes the core transforms specified for the high efficiency video coding (HEVC) standard. Core transform matrices of various sizes from 4 × 4 to 32 × 32 were designed as finite precision approximations to the discrete cosine transform (DCT). Also, special care was taken to allow implementation friendliness, including limited bit depth, preservation of symmetry properties, embedded structure and basis vectors having almost equal norm. The transform design has the following properties: 16 bit data representation before and after each transform stage (independent of the internal bit depth), 16 bit multipliers for all internal multiplications, no need for correction of different norms of basis vectors during quantization/de-quantization, all transform sizes above 4 × 4 can reuse arithmetic operations for smaller transform sizes, and implementations using either pure matrix multiplication or a combination of matrix multiplication and butterfly structures are possible. The transform design is friendly to parallel processing and can be efficiently implemented in software on SIMD processors and in hardware for high throughput processing.
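To make "finite precision approximation" concrete, here is a small stdlib-only sketch using the 4-point HEVC core transform matrix; the `dct_row` helper and the tolerance checks are mine, added to illustrate the near-DCT and almost-equal-norm properties the abstract describes.

```python
import math

# 4-point HEVC core transform matrix (integer approximation of a scaled DCT-II).
H4 = [
    [64,  64,  64,  64],
    [83,  36, -36, -83],
    [64, -64, -64,  64],
    [36, -83,  83, -36],
]

def dct_row(k, n_points=4):
    # Scaled DCT-II basis vector for row k; the DC row uses scale 64 and the
    # AC rows 64*sqrt(2), matching the integer matrix above.
    s = 64.0 if k == 0 else 64.0 * math.sqrt(2.0)
    return [s * math.cos((2 * n + 1) * k * math.pi / (2 * n_points))
            for n in range(n_points)]

# Squared row norms: almost equal, which is what lets HEVC avoid
# per-coefficient norm correction during quantization/de-quantization.
norms = [sum(v * v for v in row) for row in H4]
```

Each integer entry sits within about 1.4 of the corresponding scaled DCT value, and the squared row norms (16384 vs. 16370) differ by less than 0.1%.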

177 citations


Journal ArticleDOI
TL;DR: This paper introduces radar data and the problem of target detection, shows how to transform the original radar data into Toeplitz covariance matrices, and proposes deterministic and stochastic algorithms to compute p-means.
Abstract: We develop a new geometric approach for high-resolution Doppler processing based on the Riemannian geometry of Toeplitz covariance matrices and the notion of Riemannian p-means. This paper summarizes briefly our recent work in this direction. First, we introduce radar data and the problem of target detection. Then we show how to transform the original radar data into Toeplitz covariance matrices. After that, we give our results on the Riemannian geometry of Toeplitz covariance matrices. In order to compute p-means in practical cases, we propose deterministic and stochastic algorithms, for which convergence results, convergence rates, and error estimates are given. Finally, we propose a new detector based on the Riemannian median and show its advantage over existing processing methods.
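In the 1x1 case, the Riemannian mean under the affine-invariant metric reduces to the geometric mean, and the fixed-point iteration below is the scalar shadow of the deterministic algorithm class the abstract refers to (the initialization and unit step-size are my choices; for matrices, log and exp become matrix functions).

```python
import math

def riemannian_mean(values, iters=50):
    """Karcher mean under the affine-invariant metric, in the scalar
    (positive, 1x1 SPD) case where it equals the geometric mean.  The
    update x <- x * exp(mean_i log(m_i / x)) is the Riemannian gradient
    step with unit step-size."""
    x = sum(values) / len(values)          # initialize at arithmetic mean
    for _ in range(iters):
        x *= math.exp(sum(math.log(m / x) for m in values) / len(values))
    return x
```

For scalars the iteration lands on the geometric mean after a single step; in the matrix setting the analogous update must be iterated to convergence.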

169 citations


Journal ArticleDOI
TL;DR: Comprehensive numerical tests with both synthetic and real network data corroborate the effectiveness of the proposed online algorithms and their tracking capabilities, and demonstrate that they outperform state-of-the-art approaches developed to diagnose traffic anomalies.
Abstract: In the backbone of large-scale networks, origin-to-destination (OD) traffic flows experience abrupt unusual changes known as traffic volume anomalies, which can result in congestion and limit the extent to which end-user quality of service requirements are met. As a means of maintaining seamless end-user experience in dynamic environments, as well as for ensuring network security, this paper deals with a crucial network monitoring task termed dynamic anomalography. Given link traffic measurements (noisy superpositions of unobserved OD flows) periodically acquired by backbone routers, the goal is to construct an estimated map of anomalies in real time, and thus summarize the network "health state" along both the flow and time dimensions. Leveraging the low intrinsic-dimensionality of OD flows and the sparse nature of anomalies, a novel online estimator is proposed based on an exponentially-weighted least-squares criterion regularized with the sparsity-promoting l1-norm of the anomalies, and the nuclear norm of the nominal traffic matrix. After recasting the non-separable nuclear norm into a form amenable to online optimization, a real-time algorithm for dynamic anomalography is developed and its convergence established under simplifying technical assumptions. For operational conditions where computational complexity reductions are at a premium, a lightweight stochastic gradient algorithm based on Nesterov's acceleration technique is developed as well. Comprehensive numerical tests with both synthetic and real network data corroborate the effectiveness of the proposed online algorithms and their tracking capabilities, and demonstrate that they outperform state-of-the-art approaches developed to diagnose traffic anomalies.
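The sparsity-promoting l1 penalty typically enters such online estimators through its proximal operator, elementwise soft-thresholding. The two-line sketch below shows only that building block, not the paper's full algorithm (which also handles the nuclear-norm term via an online-friendly factorization).

```python
def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1: shrink each entry toward zero by
    lam and clip at zero.  This is the step that zeroes out entries of the
    anomaly estimate, yielding a sparse anomaly map."""
    return [max(abs(x) - lam, 0.0) * (1.0 if x > 0 else -1.0) for x in v]
```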

147 citations


Journal ArticleDOI
TL;DR: A cardinalized probability hypothesis density filter for extended targets that can result in multiple measurements at each scan is presented, and it is compared to its PHD counterpart in a simulation study, showing that the CPHD filter has a more robust cardinality estimate leading to smaller OSPA errors.
Abstract: This paper presents a cardinalized probability hypothesis density (CPHD) filter for extended targets that can result in multiple measurements at each scan. The probability hypothesis density (PHD) filter for such targets has been derived by Mahler, and different implementations have been proposed recently. To achieve better estimation performance this work relaxes the Poisson assumptions of the extended target PHD filter in target and measurement numbers. A gamma Gaussian inverse Wishart mixture implementation, which is capable of estimating the target extents and measurement rates as well as the kinematic state of the target, is proposed, and it is compared to its PHD counterpart in a simulation study. The results clearly show that the CPHD filter has a more robust cardinality estimate leading to smaller OSPA errors, which confirms that the extended target CPHD filter inherits the properties of its point target counterpart.

Journal ArticleDOI
TL;DR: Experimental results show that multiple-detection pattern based probabilistic data association improves the state estimation accuracy and the tracking performance of the proposed filter is compared against the Posterior Cramér-Rao Lower Bound.
Abstract: Most conventional target tracking algorithms assume that a target can generate at most one measurement per scan. However, there are tracking problems where this assumption is not valid. For example, multiple detections from a target in a scan can arise due to multipath propagation effects as in the over-the-horizon radar (OTHR). A conventional multitarget tracking algorithm will fail in these scenarios, since it cannot handle multiple target-originated measurements per scan. The Joint Probabilistic Data Association Filter (JPDAF) uses multiple measurements from a single target per scan through a weighted measurement-to-track association. However, its fundamental assumption is still one-to-one. In order to rectify this shortcoming, this paper proposes a new algorithm, called the Multiple-Detection Joint Probabilistic Data Association Filter (MD-JPDAF) for multitarget tracking, which is capable of handling multiple detections from targets per scan in the presence of clutter and missed detections. The multiple-detection pattern, which can account for many-to-one measurement set-to-track association rather than one-to-one measurement-to-track association, is used to generate multiple detection association events. The proposed algorithm exploits all the available information from measurements by combinatorial association of events that are formed to handle the possibility of multiple measurements per scan originating from a target. The MD-JPDAF is applied to a multitarget tracking scenario with an OTHR, where multiple detections occur due to different propagation paths as a result of scattering from different ionospheric layers. Experimental results show that multiple-detection pattern based probabilistic data association improves the state estimation accuracy. Furthermore, the tracking performance of the proposed filter is compared against the Posterior Cramér-Rao Lower Bound (PCRLB), which is explicitly derived for the multiple-detection scenario with a single target.

Journal ArticleDOI
TL;DR: A novel score-based multi-cyclic detection algorithm based on the Shiryaev-Roberts procedure, which is as easy to employ in practice and as computationally inexpensive as the popular Cumulative Sum chart and the Exponentially Weighted Moving Average scheme is proposed.
Abstract: We consider the problem of efficient on-line anomaly detection in computer network traffic. The problem is approached statistically, as that of sequential (quickest) changepoint detection. A multi-cyclic setting of quickest change detection is a natural fit for this problem. We propose a novel score-based multi-cyclic detection algorithm. The algorithm is based on the so-called Shiryaev-Roberts procedure. This procedure is as easy to employ in practice and as computationally inexpensive as the popular Cumulative Sum chart and the Exponentially Weighted Moving Average scheme. The likelihood ratio based Shiryaev-Roberts procedure has appealing optimality properties, particularly it is exactly optimal in a multi-cyclic setting geared to detect a change occurring at a far time horizon. It is therefore expected that an intrusion detection algorithm based on the Shiryaev-Roberts procedure will perform better than other detection schemes. This is confirmed experimentally for real traces. We also discuss the possibility of complementing our anomaly detection algorithm with a spectral-signature intrusion detection system with false alarm filtering and true attack confirmation capability, so as to obtain a synergistic system.
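The Shiryaev-Roberts procedure itself is a one-line recursion, R_n = (1 + R_{n-1}) * L_n, with an alarm when R_n crosses a threshold. The sketch below instantiates it for a Gaussian mean shift; the pre/post-change means, noise level, and threshold are illustrative choices of mine, not values from the paper.

```python
import math

def shiryaev_roberts(observations, pre_mean=0.0, post_mean=1.0, sigma=1.0,
                     threshold=1e4):
    """Run the Shiryaev-Roberts recursion R_n = (1 + R_{n-1}) * L_n, where
    L_n is the likelihood ratio of the n-th observation under a Gaussian
    mean-shift model.  Returns the first index n at which R_n crosses the
    threshold (the alarm time), or None if no alarm is raised."""
    r = 0.0
    for n, x in enumerate(observations, start=1):
        # Likelihood ratio N(post_mean, sigma^2) / N(pre_mean, sigma^2):
        lr = math.exp((x * (post_mean - pre_mean)
                       - (post_mean ** 2 - pre_mean ** 2) / 2.0) / sigma ** 2)
        r = (1.0 + r) * lr
        if r >= threshold:
            return n
    return None
```

Before the change the statistic hovers near a finite fixed point; after the change it grows geometrically, so the alarm fires within a few dozen post-change samples at this threshold. The recursion is exactly as cheap per step as CUSUM or EWMA, which is the practicality point the abstract makes.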

Journal ArticleDOI
TL;DR: The proposed ALF is located at the last processing stage for each picture and can be regarded as a tool to catch and fix artifacts from previous stages, and achieves a low encoding latency.
Abstract: Adaptive loop filtering (ALF) for video coding minimizes the mean square error between original samples and decoded samples by using a Wiener-based adaptive filter. The proposed ALF is located at the last processing stage for each picture and can be regarded as a tool to catch and fix artifacts from previous stages. The suitable filter coefficients are determined by the encoder and explicitly signaled to the decoder. In order to achieve better coding efficiency, especially for high-resolution videos, local adaptation is used for luma signals by applying different filters to different regions or blocks in a picture. In addition to filter adaptation, filter on/off control at the coding tree unit (CTU) level is also helpful for improving coding efficiency. Syntax-wise, filter coefficients are sent in a picture-level header called the adaptation parameter set, and filter on/off flags of CTUs are interleaved at the CTU level in the slice data. This syntax design not only supports picture-level optimization but also achieves a low encoding latency. Simulation results show that the ALF can achieve on average a 7% bit rate reduction for 25 HD sequences. The run time increases are 1% and 10% for encoders and decoders, respectively, without special attention to optimization of the C++ code.
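The "Wiener-based" design can be illustrated in one dimension: the encoder picks filter taps c minimizing the squared error between original samples and filtered decoded samples by solving the normal equations R c = p (R: autocorrelation of decoded windows, p: cross-correlation with the original). This is a sketch of the general Wiener principle only, not the ALF's actual 2-D filter shapes or integer coefficient signaling; the small Gaussian-elimination solver keeps it stdlib-only.

```python
def wiener_taps(decoded, original, taps=3):
    """Least-squares (Wiener) filter design in 1-D: build the normal
    equations from sliding windows of the decoded signal and solve for the
    taps that best reconstruct the original signal."""
    half = taps // 2
    rows = range(half, len(decoded) - half)
    windows = [[decoded[i + j - half] for j in range(taps)] for i in rows]
    R = [[sum(w[a] * w[b] for w in windows) for b in range(taps)]
         for a in range(taps)]
    p = [sum(w[a] * original[i] for w, i in zip(windows, rows))
         for a in range(taps)]
    return solve(R, p)

def solve(A, b):
    # Plain Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```

If the original is exactly a filtered version of the decoded signal, the solver recovers the generating taps, which is the sense in which the designed coefficients "catch and fix" the degradation of earlier stages.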

Journal ArticleDOI
TL;DR: By factorizing the joint posterior density using the structure of MTT, an efficient DP-TBD algorithm is developed to approximately solve the joint maximization in a fast but accurate manner and can accurately estimate the number of targets and reliably track multiple targets even when targets are in proximity.
Abstract: This paper considers the multi-target tracking (MTT) problem through the use of dynamic programming based track-before-detect (DP-TBD) methods. The usual solution of this problem is to adopt a multi-target state, which is the concatenation of individual target states, then search for the estimate in the expanded multi-target state space. However, this solution involves a high-dimensional joint maximization which is computationally intractable for most realistic problems. Additionally, the dimension of the multi-target state has to be determined before implementing the DP search. This is problematic when the number of targets is unknown. We make two contributions towards addressing these problems. Firstly, by factorizing the joint posterior density using the structure of MTT, an efficient DP-TBD algorithm is developed to approximately solve the joint maximization in a fast but accurate manner. Secondly, we propose a novel detection procedure such that the dimension of the multi-target state no longer needs to be pre-determined before the DP search. Our analysis indicates that the proposed algorithm can achieve a computational complexity which is almost linear in the number of processed frames and independent of the number of targets. Simulation results show that this algorithm can accurately estimate the number of targets and reliably track multiple targets even when targets are in proximity.

Journal ArticleDOI
TL;DR: The approach described in this paper leverages several recent results in the field of high-dimensional data analysis, including subspace tracking with missing data, multiscale analysis techniques for point clouds, online optimization, and change-point detection performance analysis.
Abstract: This paper describes a novel approach to change-point detection when the observed high-dimensional data may have missing elements. The performance of classical methods for change-point detection typically scales poorly with the dimensionality of the data, so that a large number of observations are collected after the true change-point before it can be reliably detected. Furthermore, missing components in the observed data handicap conventional approaches. The proposed method addresses these challenges by modeling the dynamic distribution underlying the data as lying close to a time-varying low-dimensional submanifold embedded within the ambient observation space. Specifically, streaming data is used to track a submanifold approximation, measure deviations from this approximation, and calculate a series of statistics of the deviations for detecting when the underlying manifold has changed in a sharp or unexpected manner. The approach described in this paper leverages several recent results in the field of high-dimensional data analysis, including subspace tracking with missing data, multiscale analysis techniques for point clouds, online optimization, and change-point detection performance analysis. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.

Journal ArticleDOI
TL;DR: This paper proposes a multi-target filtering solution that can accommodate non-linear target models and an unknown non-homogeneous clutter and detection profile based on the multi- target multi-Bernoulli filter that adaptively learns non-Homogeneous clutter intensity and detection probability while filtering.
Abstract: In Bayesian multi-target filtering, knowledge of parameters such as the clutter intensity and detection probability profile is of critical importance. Significant mismatches in clutter and detection model parameters result in biased estimates. In this paper we propose a multi-target filtering solution that can accommodate non-linear target models and an unknown non-homogeneous clutter and detection profile. Our solution is based on the multi-target multi-Bernoulli filter, and it adaptively learns the non-homogeneous clutter intensity and detection probability while filtering.

Journal ArticleDOI
TL;DR: This tutorial paper summarizes the motivations, concepts and techniques of finite-set statistics (FISST), a system-level, “top-down,” direct generalization of ordinary single-sensor, single-target engineering statistics to the realm of multisensor, multitarget detection and tracking.
Abstract: This tutorial paper summarizes the motivations, concepts and techniques of finite-set statistics (FISST), a system-level, “top-down,” direct generalization of ordinary single-sensor, single-target engineering statistics to the realm of multisensor, multitarget detection and tracking. Finite-set statistics provides powerful new conceptual and computational methods for dealing with multisensor-multitarget detection and tracking problems. The paper describes how “multitarget integro-differential calculus” is used to extend conventional single-sensor, single-target formal Bayesian motion and measurement modeling to general tracking problems. Given such models, the paper describes the Bayes-optimal approach to multisensor-multitarget detection and tracking: the multisensor-multitarget recursive Bayes filter. Finally, it describes how multitarget calculus is used to derive principled statistical approximations of this optimal filter, such as PHD filters, CPHD filters, and multi-Bernoulli filters.

Journal ArticleDOI
TL;DR: A mathematical framework to jointly model related activities with both motion and context information for activity recognition and anomaly detection and demonstrates the benefit of joint modeling and recognition of activities in a wide-area scene and the effectiveness of the proposed method in anomaly detection.
Abstract: In this paper, we propose a mathematical framework to jointly model related activities with both motion and context information for activity recognition and anomaly detection. This is motivated from observations that activities related in space and time rarely occur independently and can serve as context for each other. The spatial and temporal distribution of different activities provides useful cues for the understanding of these activities. We denote the activities occurring with high frequencies in the database as normal activities. Given training data which contains labeled normal activities, our model aims to automatically capture frequent motion and context patterns for each activity class, as well as each pair of classes, from sets of predefined patterns during the learning process. Then, the learned model is used to generate globally optimum labels for activities in the testing videos. We show how to learn the model parameters via an unconstrained convex optimization problem and how to predict the correct labels for a testing instance consisting of multiple activities. The learned model and generated labels are used to detect anomalies whose motion and context patterns deviate from the learned patterns. We show promising results on the VIRAT Ground Dataset that demonstrates the benefit of joint modeling and recognition of activities in a wide-area scene and the effectiveness of the proposed method in anomaly detection.

Journal ArticleDOI
TL;DR: This paper addresses the problem of finding outage-optimal power control policies for wireless energy harvesting sensor (EHS) nodes with automatic repeat request (ARQ)-based packet transmissions by casting it as a partially observable Markov decision process (POMDP) and shows that the POMDP solutions can significantly outperform conventional ad hoc approaches.
Abstract: This paper addresses the problem of finding outage-optimal power control policies for wireless energy harvesting sensor (EHS) nodes with automatic repeat request (ARQ)-based packet transmissions. The power control policy of the EHS specifies the transmission power for each packet transmission attempt, based on all the information available at the EHS. In particular, the acknowledgement (ACK) or negative acknowledgement (NACK) messages received provide the EHS with partial information about the channel state. We solve the problem of finding an optimal power control policy by casting it as a partially observable Markov decision process (POMDP). We study the structure of the optimal power policy in two ways. First, for the special case of binary power levels at the EHS, we show that the optimal policy for the underlying Markov decision process (MDP) when the channel state is observable is a threshold policy in the battery state. Second, we benchmark the performance of the EHS by rigorously analyzing the outage probability of a general fixed-power transmission scheme, where the EHS uses a predetermined power level at each slot within the frame. Monte Carlo simulation results illustrate the performance of the POMDP approach and verify the accuracy of the analysis. They also show that the POMDP solutions can significantly outperform conventional ad hoc approaches.

Journal ArticleDOI
TL;DR: This paper presents the first algorithm for simultaneous localization and mapping (SLAM) that can estimate the locations of both dynamic and static features in addition to the vehicle trajectory, and presents a particle/Gaussian mixture implementation of the filter.
Abstract: This paper presents the first algorithm for simultaneous localization and mapping (SLAM) that can estimate the locations of both dynamic and static features in addition to the vehicle trajectory. We model the feature-based SLAM problem as a single-cluster process, where the vehicle motion defines the parent, and the map features define the daughter. Based on this assumption, we obtain tractable formulae that define a Bayesian filter recursion. The novelty in this filter is that it provides a robust multi-object likelihood which is easily understood in the context of our starting assumptions. We present a particle/Gaussian mixture implementation of the filter, taking into consideration the challenges that SLAM presents over target tracking with stationary sensors, such as changing fields of view and a mixture of static and dynamic map features. Monte Carlo simulation results are given which demonstrate the filter's effectiveness with high measurement clutter and non-linear vehicle motion.

Journal ArticleDOI
TL;DR: A pixel-wise unified rate quantization (R-Q) model for a low-complexity rate control on configurable coding units of high efficiency video coding (HEVC) shows low bit fluctuation and good RD performance, compared to R-lambda rate control for long sequences.
Abstract: In this paper, we present a pixel-wise unified rate quantization (R-Q) model for a low-complexity rate control on configurable coding units of high efficiency video coding (HEVC). In the case of HEVC, which employs a hierarchical coding block structure, multiple R-Q models can be employed for the various block sizes. However, we found that the ratios of distortion over bits for all the blocks are nearly constant because of the use of rate-distortion optimization. Hence, one relationship model between rate and quantization can be derived from this characteristic of similar distortion-over-bit ratios regardless of block size. Thus, we propose the pixel-wise unified R-Q model for HEVC rate control working on the multi-level for all block sizes. We employ a simple leaky bucket model for bit control. The rate control based on the proposed pixel-wise unified R-Q model is implemented on HEVC test model 6.1 (HM6.1). According to the evaluation of the proposed rate control, the average matching percentage to target bitrates is 99.47% and the average PSNR degradation is 0.76 dB. Based on the comparative study, we found that the proposed rate control shows low bit fluctuation and good RD performance, compared to R-lambda rate control for long sequences.
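The leaky bucket mentioned for bit control can be sketched in a few lines. This is only the generic idea, not the paper's pixel-wise R-Q model: the buffer fills with the bits each frame produces and drains at the target rate, and the quantization parameter (QP) is nudged up when the buffer runs high and down when it runs low. All thresholds and step sizes below are invented for illustration.

```python
def leaky_bucket_qp(frame_bits, target_rate, buffer_size, qp0=32):
    """Toy leaky-bucket bit control: track buffer fullness and adjust QP.
    Thresholds (0.8/0.2) and the +/-2 QP step are illustrative choices."""
    qp, buf, qps = qp0, buffer_size / 2.0, []
    for bits in frame_bits:
        buf = max(0.0, buf + bits - target_rate)   # fill, then drain
        if buf > 0.8 * buffer_size:
            qp = min(qp + 2, 51)                   # coarser quantization
        elif buf < 0.2 * buffer_size:
            qp = max(qp - 2, 0)                    # finer quantization
        qps.append(qp)
    return qps
```

Feeding a stream of oversized frames drives the QP monotonically up to HEVC's maximum of 51, which is the stabilizing behavior the bucket is there to provide.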

Journal ArticleDOI
TL;DR: Experimental results demonstrate the proposed Rate-GOP based rate control has much better R-D performance than the two state-of-the-art rate control schemes for HEVC.
Abstract: In this paper, a Rate-GOP based frame level rate control scheme is proposed for High Efficiency Video Coding (HEVC). The proposed scheme is developed with the consideration of the new coding tools adopted into HEVC, including the quad-tree coding structure and the new reference frame selection mechanism, called reference picture set (RPS). The contributions of this paper mainly include the following three aspects. Firstly, a RPS based hierarchical rate control structure is designed to maintain the high video quality of the key frames. Secondly, the inter-frame dependency based distortion model and bit rate model are proposed, considering the dependency between a coding frame and its reference frame. Thus the distortion and bit rate of the coding frame can be represented by the distortion and bit rate of its reference frame. Accordingly, the Rate-GOP based distortion model and rate model can be achieved via the inter-frame dependency based distortion model and bit rate model. Thirdly, based on these models and a mixed Laplacian distribution of residual information, a new ρ-domain Rate-GOP based rate control is proposed. Experimental results demonstrate the proposed Rate-GOP based rate control has much better R-D performance. Compared with the two state-of-the-art rate control schemes for HEVC, the coding gain with BD-PSNR can be up to 0.87 dB and 0.13 dB on average respectively for all testing configurations. Especially for random access low complexity testing configuration, the BD-PSNR gain can be up to 1.30 dB and 0.23 dB respectively.

Journal ArticleDOI
TL;DR: In this article, a deterministic sequential exploration and exploitation (DSEE) approach is proposed for multi-armed bandits with unknown reward models, where regret is defined as the total expected reward loss against the ideal case with known reward models.
Abstract: In the Multi-Armed Bandit (MAB) problem, there is a given set of arms with unknown reward models. At each time, a player selects one arm to play, aiming to maximize the total expected reward over a horizon of length T. An approach based on a Deterministic Sequencing of Exploration and Exploitation (DSEE) is developed for constructing sequential arm selection policies. It is shown that for all light-tailed reward distributions, DSEE achieves the optimal logarithmic order of the regret, where regret is defined as the total expected reward loss against the ideal case with known reward models. For heavy-tailed reward distributions, DSEE achieves O(T^{1/p}) regret when the moments of the reward distributions exist up to the pth order for 1 < p ≤ 2, and O(T^{1/(1+p/2)}) regret for p > 2. With the knowledge of an upper bound on a finite moment of the heavy-tailed reward distributions, DSEE offers the optimal logarithmic regret order. The proposed DSEE approach complements existing work on MAB by providing corresponding results for general reward distributions. Furthermore, with a clearly defined tunable parameter (the cardinality of the exploration sequence), the DSEE approach is easily extendable to variations of MAB, including MAB with various objectives, decentralized MAB with multiple players and incomplete reward observations under collisions, restless MAB with unknown dynamics, and combinatorial MAB with dependent arms that often arise in network optimization problems such as the shortest path, the minimum spanning tree, and the dominating set problems under unknown random weights.

Journal ArticleDOI
TL;DR: Stochastic grammar models and reciprocal Markov models (one dimensional Markov random fields) for modeling spatial trajectories with a known end point are developed and the versatility of such models is illustrated with tracking applications in surveillance.
Abstract: On meta-level time scales, anomalous trajectories can signify target intent through their shape and eventual destination. Such trajectories exhibit complex spatial patterns and have well defined destinations with long-range dependencies implying that Markov (random-walk) models are unsuitable. How can estimated target tracks be used to detect anomalous trajectories such as circling a building or going past a sequence of checkpoints? This paper develops context-free grammar models and reciprocal Markov models (one dimensional Markov random fields) for modeling spatial trajectories with a known end point. The intent of a target is assumed to be a function of the shape of the trajectory it follows and its intended destination. The stochastic grammar models developed are concerned with trajectory shape classification while the reciprocal Markov models are used for destination prediction. Towards this goal, Bayesian signal processing algorithms with polynomial complexity are presented. The versatility of such models is illustrated with tracking applications in surveillance.

Journal ArticleDOI
TL;DR: This paper provides a novel measurement-based agent classification, Type-α, β, and γ, which leads to the construction of specific graph topologies, formulates an estimator where measurement- and predictor-fusion are implemented over G_α and G_β respectively, and shows that the proposed scheme leads to distributed observability, i.e., observability of the distributed estimator.
Abstract: In this paper, we consider distributed estimation of linear, discrete-time dynamical systems monitored by a network of agents. We require the agents to exchange information with their neighbors only once per dynamical system time-scale and study the network topology sufficient for distributed observability. To this aim, we provide a novel measurement-based agent classification: Type-$\alpha$, $\beta$, and $\gamma$, which leads to the construction of specific graph topologies: ${\cal G}_\alpha$ and ${\cal G}_\beta$. In particular, in ${\cal G}_\alpha$, every Type-$\alpha$ agent has a direct connection to every other agent, whereas, in ${\cal G}_\beta$, every agent has a directed path to every Type-$\beta$ agent. With the help of these constructs, we formulate an estimator where measurement- and predictor-fusion are implemented over ${\cal G}_\alpha$ and ${\cal G}_\beta$, respectively, and show that the proposed scheme leads to distributed observability, i.e., observability of the distributed estimator. In order to characterize the estimator further, we show that Type-$\alpha$ agents only exist in systems with $S$-rank (maximal rank of zero/non-zero pattern) deficient system matrices. In other words, systems with full $S$-rank matrices only have Type-$\beta$ agents, and thus, a strongly-connected (agent) network is sufficient for full $S$-rank systems, by the definition of ${\cal G}_\beta$ above; however, strong-connectivity is not necessary, i.e., there exist weakly-connected networks that result in distributed observability. Furthermore, we show that for $S$-rank deficient systems, measurement-fusion over ${\cal G}_\alpha$ is required, and predictor-fusion alone is insufficient. The approach taken in this paper is structural, i.e., we use the concepts of structured systems theory and generic observability to derive the results. Finally, we provide an iterative method to compute the local estimator gain at each agent once observability is ensured using the aforementioned construction.

Journal ArticleDOI
TL;DR: A new algorithm called multiple detection multiple hypothesis tracker (MD-MHT) is proposed to effectively track multiple targets in such multiple-detection systems by solving the data association problem via an extension to the multiframe assignment algorithm.
Abstract: Typical multitarget tracking systems assume that in every scan there is at most one measurement for each target. In certain other systems such as over-the-horizon radar tracking, the sensor can generate resolvable multiple detections, corresponding to different measurement modes, from the same target. In this paper, we propose a new algorithm called multiple detection multiple hypothesis tracker (MD-MHT) to effectively track multiple targets in such multiple-detection systems. The challenge for this tracker, which follows the multiple hypothesis framework, is to jointly resolve the measurement origin and measurement mode uncertainties. The proposed tracker solves this data association problem via an extension to the multiframe assignment algorithm. Its performance is demonstrated on a simulated over-the-horizon-radar multitarget tracking scenario, which confirms the effectiveness of this algorithm.

Journal ArticleDOI
TL;DR: The details of the interpolation filter design of the H.265/HEVC standard are presented, including the improvements over H.264/AVC; the coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
Abstract: Coding efficiency gains in the new High Efficiency Video Coding (H.265/HEVC) video coding standard are achieved by improving many aspects of the traditional hybrid coding framework. Motion compensated prediction, and in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design of the H.265/HEVC standard. First, the improvements of H.265/HEVC interpolation filtering over H.264/AVC are presented. These improvements include novel filter coefficient design with an increased number of taps and utilizing higher precision operations in interpolation filter computations. Then, the computational complexity is analyzed, both from theoretical and practical perspectives. Theoretical complexity analysis is done by studying the worst-case complexity analytically, whereas practical analysis is done by profiling an optimized decoder implementation. Coding efficiency improvements over the H.264/AVC interpolation filter are studied and experimental results are presented. They show a 4.0% average bitrate reduction for the luma component and 11.3% average bitrate reduction for the chroma components. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
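The longer-tap, higher-precision filtering described above can be illustrated with the HEVC 8-tap luma half-pel filter, whose coefficients sum to 64 (6 bits of fractional precision). The wrapper function and its edge-handling convention are illustrative assumptions; the standard defines separate filters for the quarter-pel positions and a two-stage process for 2-D fractional positions.

```python
# HEVC 8-tap luma half-pel filter coefficients (they sum to 64, i.e.,
# 6-bit fractional precision; quarter-pel positions use different taps).
HALF_PEL = [-1, 4, -11, 40, 40, -11, 4, -1]

def interp_half_pel(samples, i):
    """Interpolate the half-pel value between samples[i] and samples[i+1].

    The caller must provide 3 samples of context to the left of i and 4
    to the right (real codecs pad block edges to guarantee this). The
    accumulator is rounded and shifted back to sample precision.
    """
    acc = sum(c * samples[i - 3 + k] for k, c in enumerate(HALF_PEL))
    return (acc + 32) >> 6  # round-to-nearest, then divide by 64
```

On a linear ramp the filter lands exactly on the midpoint, which is one way to sanity-check the coefficients: the half-pel sample between values 30 and 40 comes out as 35.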

Journal ArticleDOI
TL;DR: The convergence analysis shows that the parameter estimates weakly converge to the true parameter across the network, yet the global activation behavior along the way tracks the set of correlated equilibria of the underlying activation control game.
Abstract: This paper presents a game-theoretic approach to node activation control in parameter estimation via diffusion least mean squares (LMS). Nodes cooperate by exchanging estimates over links characterized by the connectivity graph of the network. The energy-aware activation control is formulated as a noncooperative repeated game where nodes autonomously decide when to activate based on a utility function that captures the trade-off between individual node's contribution and energy expenditure. The diffusion LMS stochastic approximation is combined with a game-theoretic learning algorithm such that the overall energy-aware diffusion LMS has two timescales: the fast timescale corresponds to the game-theoretic activation mechanism, whereby nodes distributively learn their optimal activation strategies, whereas the slow timescale corresponds to the diffusion LMS. The convergence analysis shows that the parameter estimates weakly converge to the true parameter across the network, yet the global activation behavior along the way tracks the set of correlated equilibria of the underlying activation control game.
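The slow-timescale diffusion LMS recursion underlying this scheme can be sketched in its adapt-then-combine (ATC) form. The scalar unknown parameter, the fully-active network, the step size, and the uniform combination weights below are illustrative assumptions; the game-theoretic activation layer on the fast timescale is omitted.

```python
import random

def diffusion_lms(neighbors, weights, true_w=1.5, mu=0.05,
                  iters=400, seed=7):
    """Adapt-then-combine diffusion LMS over a network (sketch).

    Each node k observes a noisy scalar measurement d_k = true_w*u_k + v_k,
    takes a local LMS step toward its own data (adapt), then averages the
    intermediate estimates of its neighbors with the given combination
    weights (combine). Returns the final estimate at every node.
    """
    rng = random.Random(seed)
    n = len(neighbors)
    w = [0.0] * n
    for _ in range(iters):
        psi = []
        for k in range(n):                       # adaptation step
            u = rng.gauss(0.0, 1.0)              # regressor
            d = true_w * u + rng.gauss(0.0, 0.1) # noisy measurement
            psi.append(w[k] + mu * u * (d - u * w[k]))
        for k in range(n):                       # combination step
            w[k] = sum(weights[k][l] * psi[l] for l in neighbors[k])
    return w
```

In the paper's setting, an inactive node would skip the adaptation step and only relay or combine, which is exactly the degree of freedom the activation game controls.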

Journal ArticleDOI
TL;DR: An overview of the motion vector coding techniques in HEVC is provided and experimental results show that the combination of the proposed techniques achieves on average 3.1% bit-rate saving under the common test conditions used for HEVC development.
Abstract: High Efficiency Video Coding (HEVC) is an emerging international video coding standard developed by the Joint Collaborative Team on Video Coding (JCT-VC). Compared to H.264/AVC, HEVC has achieved substantial compression performance improvement. During the HEVC standardization, we proposed several motion vector coding techniques, which were crosschecked by other experts and then adopted into the standard. In this paper, an overview of the motion vector coding techniques in HEVC is first provided. Next, the proposed motion vector coding techniques, including a priority-based derivation algorithm for spatial motion candidates, a priority-based derivation algorithm for temporal motion candidates, a surrounding-based candidate list, and a parallel derivation of the candidate list, are presented. Based on HEVC test model 9 (HM9), experimental results show that the combination of the proposed techniques achieves on average 3.1% bit-rate saving under the common test conditions used for HEVC development.
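A priority-based candidate list derivation of the kind discussed above can be sketched as follows. The particular neighbor ordering, the pruning rule, and the five-candidate limit are illustrative assumptions, not the exact derivation the paper or the standard specifies.

```python
def build_candidate_list(spatial, temporal, max_cands=5):
    """Priority-based motion-candidate list construction (sketch).

    `spatial` and `temporal` are motion vectors (tuples) from neighboring
    blocks, already ordered by priority; unavailable neighbors are passed
    as None. Candidates are scanned in priority order, duplicates are
    pruned, and the list is padded with the zero vector if it is still
    short after all neighbors are considered.
    """
    cands = []
    for mv in spatial + temporal:   # scan in priority order
        if mv is not None and mv not in cands:
            cands.append(mv)
        if len(cands) == max_cands:
            return cands
    while len(cands) < max_cands:   # pad with zero motion
        cands.append((0, 0))
    return cands
```

Because the list contents depend only on neighbor availability and ordering, encoder and decoder derive identical lists and only the chosen index needs to be signaled, which is what makes this style of derivation cheap in bits.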