
Showing papers on "Time complexity published in 2011"


Journal ArticleDOI
TL;DR: This paper presents a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity, and illustrates the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.
Abstract: The widely known binary relevance method for multi-label classification, which considers each label as an independent binary problem, has often been overlooked in the literature due to the perceived inadequacy of not directly modelling label correlations. Most current methods invest considerable complexity to model interdependencies between labels. This paper shows that binary relevance-based methods have much to offer, and that high predictive performance can be obtained without impeding scalability to large datasets. We exemplify this with a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity. We extend this approach further in an ensemble framework. An extensive empirical evaluation covers a broad range of multi-label datasets with a variety of evaluation metrics. The results illustrate the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.
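The chaining idea is simple enough to sketch: train one binary model per label, feeding each model the original features plus the labels of all earlier links in the chain (true labels at training time, predictions at test time). A minimal illustrative sketch, with scikit-learn's LogisticRegression standing in as an arbitrary base learner; the paper's method is independent of the base classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ClassifierChain:
    """Minimal classifier-chain sketch: the j-th binary model sees the
    original features plus the labels for positions 0..j-1."""
    def __init__(self, n_labels):
        self.models = [LogisticRegression() for _ in range(n_labels)]

    def fit(self, X, Y):                      # Y: (n_samples, n_labels) in {0,1}
        Xa = X.copy()
        for j, m in enumerate(self.models):
            m.fit(Xa, Y[:, j])
            Xa = np.hstack([Xa, Y[:, [j]]])   # chain on the true labels
        return self

    def predict(self, X):
        Xa = X.copy()
        preds = []
        for m in self.models:
            p = m.predict(Xa).reshape(-1, 1)
            preds.append(p)
            Xa = np.hstack([Xa, p])           # chain on the predicted labels
        return np.hstack(preds)
```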

2,046 citations


Proceedings ArticleDOI
03 Oct 2011
TL;DR: The proposed list decoder appears to bridge the gap between successive-cancellation and maximum-likelihood decoding of polar codes; an efficient, numerically stable implementation takes only O(L · n log n) time and O(L · n) space.
Abstract: We describe a successive-cancellation list decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Arikan. In the proposed list decoder, up to L decoding paths are considered concurrently at each decoding stage. Simulation results show that the resulting performance is very close to that of a maximum-likelihood decoder, even for moderate values of L. Thus it appears that the proposed list decoder bridges the gap between successive-cancellation and maximum-likelihood decoding of polar codes. The specific list-decoding algorithm that achieves this performance doubles the number of decoding paths at each decoding step, and then uses a pruning procedure to discard all but the L “best” paths. In order to implement this algorithm, we introduce a natural pruning criterion that can be easily evaluated. Nevertheless, straightforward implementation still requires O(L · n²) time, which is in stark contrast with the O(n log n) complexity of the original successive-cancellation decoder. We utilize the structure of polar codes to overcome this problem. Specifically, we devise an efficient, numerically stable, implementation taking only O(L · n log n) time and O(L · n) space.
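The core doubling-and-pruning loop is easy to sketch in isolation. The skeleton below is illustrative only: the polar-code path metric is abstracted behind a caller-supplied `extend_score`, and none of the data-structure sharing that yields the paper's O(L · n log n) bound is shown.

```python
import heapq

def list_decode(n_bits, extend_score, L):
    """Skeleton of list decoding: at every bit position each surviving
    path splits into a 0-branch and a 1-branch (doubling the list),
    then the list is pruned back to the L lowest-score paths.
    extend_score(prefix, bit) stands in for the polar-code path metric."""
    paths = [((), 0.0)]                       # (decoded prefix, score)
    for _ in range(n_bits):
        candidates = [(prefix + (b,), score + extend_score(prefix, b))
                      for prefix, score in paths for b in (0, 1)]
        paths = heapq.nsmallest(L, candidates, key=lambda p: p[1])
    return min(paths, key=lambda p: p[1])[0]  # best surviving path
```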

1,338 citations


Journal ArticleDOI
TL;DR: This paper proposes a complete practical methodology for minimizing additive distortion in steganography with general (nonbinary) embedding operation and reports extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
Abstract: This paper proposes a complete practical methodology for minimizing additive distortion in steganography with general (nonbinary) embedding operation. Let every possible value of every stego element be assigned a scalar expressing the distortion of an embedding change done by replacing the cover element by this value. The total distortion is assumed to be a sum of per-element distortions. Both the payload-limited sender (minimizing the total distortion while embedding a fixed payload) and the distortion-limited sender (maximizing the payload while introducing a fixed total distortion) are considered. Without any loss of performance, the nonbinary case is decomposed into several binary cases by replacing individual bits in cover elements. The binary case is approached using a novel syndrome-coding scheme based on dual convolutional codes equipped with the Viterbi algorithm. This fast and very versatile solution achieves state-of-the-art results in steganographic applications while having linear time and space complexity w.r.t. the number of cover elements. We report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel. Practical merit of this approach is validated by constructing and testing adaptive embedding schemes for digital images in raster and transform domains. Most current coding schemes used in steganography (matrix embedding, wet paper codes, etc.) and many new ones can be implemented using this framework.

726 citations


Book
22 Aug 2011
TL;DR: The algorithms apply a novel “random-like” deterministic technique that provides for a fast and efficient breaking of an apparently symmetric situation in parallel and distributed computation.
Abstract: The following problem is considered: given a linked list of length n, compute the distance from each element of the linked list to the end of the list. The problem has two standard deterministic algorithms: a linear time serial algorithm, and an O(log n) time parallel algorithm using n processors. We present new deterministic parallel algorithms for the problem. Our strongest results are (1) O(log n log* n) time using n/(log n log* n) processors (this algorithm achieves optimal speed-up); (2) O(log n) time using n log^(k) n / log n processors, for any fixed positive integer k. The algorithms apply a novel “random-like” deterministic technique. This technique provides for a fast and efficient breaking of an apparently symmetric situation in parallel and distributed computation.
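For context, the baseline O(log n)-round parallel algorithm mentioned in the abstract is classic pointer jumping, sketched here as a sequential simulation (this is the standard baseline, not the paper's "random-like" deterministic technique):

```python
def list_ranking(succ):
    """Pointer-jumping sketch: each round, every element doubles its
    jump and accumulates the distance covered. succ[i] is the next
    element, or i itself at the end of the list."""
    n = len(succ)
    nxt = list(succ)
    dist = [0 if nxt[i] == i else 1 for i in range(n)]
    for _ in range(max(1, n.bit_length())):   # O(log n) synchronous rounds
        dist = [dist[i] + dist[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return dist                               # distance to the end of the list
```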

474 citations


Journal ArticleDOI
TL;DR: This work is able to both analyze the statistical error associated with any global optimum, and prove that a simple algorithm based on projected gradient descent will converge in polynomial time to a small neighborhood of the set of all global minimizers.
Abstract: Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and/or missing data, possibly involving dependence, as well. We study these issues in the context of high-dimensional sparse linear regression, and propose novel estimators for the cases of noisy, missing and/or dependent data. Many standard approaches to noisy or missing data, such as those using the EM algorithm, lead to optimization problems that are inherently nonconvex, and it is difficult to establish theoretical guarantees on practical algorithms. While our approach also involves optimizing nonconvex programs, we are able to both analyze the statistical error associated with any global optimum, and more surprisingly, to prove that a simple algorithm based on projected gradient descent will converge in polynomial time to a small neighborhood of the set of all global minimizers. On the statistical side, we provide nonasymptotic bounds that hold with high probability for the cases of noisy, missing and/or dependent data. On the computational side, we prove that under the same types of conditions required for statistical consistency, the projected gradient descent algorithm is guaranteed to converge at a geometric rate to a near-global minimizer. We illustrate these theoretical predictions with simulations, showing close agreement with the predicted scalings.
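A minimal sketch of the computational side: projected gradient descent on a quadratic objective built from surrogate statistics (Gamma, gamma), with projection onto an l1 ball. How the surrogates are constructed from noisy or missing data, and the paper's specific step-size and radius choices, are omitted here.

```python
import numpy as np

def project_l1(v, radius):
    """Euclidean projection onto the l1 ball of the given radius
    (standard sorting-based procedure)."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(Gamma, gamma, radius, steps=500, lr=1e-2):
    """Sketch: minimize the (possibly nonconvex) quadratic
    beta' Gamma beta / 2 - gamma' beta subject to ||beta||_1 <= radius."""
    beta = np.zeros(len(gamma))
    for _ in range(steps):
        beta = project_l1(beta - lr * (Gamma @ beta - gamma), radius)
    return beta
```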

465 citations


Proceedings ArticleDOI
01 Nov 2011
TL;DR: A novel strategy to discover the community structure of (possibly, large) networks by exploiting a novel measure of edge centrality, based on the κ-paths, which allows an edge ranking in large networks to be computed efficiently in near linear time.
Abstract: In this paper we present a novel strategy to discover the community structure of (possibly, large) networks. This approach is based on the well-known concept of network modularity optimization. To do so, our algorithm exploits a novel measure of edge centrality, based on the κ-paths. This technique allows an edge ranking in large networks to be computed efficiently, in near linear time. Once the centrality ranking is calculated, the algorithm computes the pairwise proximity between nodes of the network. Finally, it discovers the community structure adopting a strategy inspired by the well-known state-of-the-art Louvain method (henceforth, LM), efficiently maximizing the network modularity. The experiments we carried out show that our algorithm outperforms other techniques and slightly improves on the results of the original LM, providing reliable results. Another advantage is that its adoption extends naturally even to unweighted networks, unlike the LM.
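The modularity being maximized is the standard Newman-Girvan quantity; a small reference implementation of that objective (not of the paper's κ-path ranking) might look like:

```python
import numpy as np

def modularity(A, communities):
    """Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * delta(c_i, c_j).
    A is a symmetric adjacency matrix; communities[i] is node i's community."""
    k = A.sum(axis=1)                 # degrees
    two_m = k.sum()                   # 2m = total degree
    c = np.asarray(communities)
    same = (c[:, None] == c[None, :])
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```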

274 citations


Journal ArticleDOI
TL;DR: It is proved that finding the global minimal value of the problem is strongly NP-hard, but computing a local minimizer of the problem can be done in polynomial time.
Abstract: We discuss the L_p (0 ≤ p < 1) minimization problem arising from sparse solution construction and compressed sensing. For any fixed 0 < p < 1, we prove that finding the global minimal value of the problem is strongly NP-hard, but computing a local minimizer of the problem can be done in polynomial time. We also develop an interior-point potential reduction algorithm with a provable complexity bound and present preliminary computational results demonstrating the effectiveness of the algorithm.

274 citations


Journal ArticleDOI
TL;DR: This work reformulates this intensity-only imaging problem as a non-convex optimization problem and shows that exact recovery can be achieved by minimizing the rank of a positive semidefinite matrix associated with the unknown reflectivities.
Abstract: We introduce a new approach for narrow band array imaging of localized scatterers from intensity-only measurements by considering the possibility of reconstructing the positions and reflectivities of the scatterers exactly from only partial knowledge of the array data, since we assume that phase information is not available. We reformulate this intensity-only imaging problem as a non-convex optimization problem and show that we can have exact recovery by minimizing the rank of a positive semidefinite matrix associated with the unknown reflectivities. Since this optimization problem is NP-hard and is computationally intractable, we replace the rank of the matrix by its nuclear norm, the sum of its singular values, which is a convex programming problem that can be solved in polynomial time. Numerical experiments explore the robustness of this approach, which recovers sparse reflectivity vectors exactly as solutions of a convex optimization problem.
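The convex surrogate step rests on the nuclear norm, whose proximal operator is singular value soft-thresholding. A sketch of that primitive follows; the full imaging formulation adds data-fit constraints not shown here.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding, the proximal operator of the
    nuclear norm: shrink each singular value of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```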

236 citations


Journal ArticleDOI
TL;DR: Distributed reduced-order observer-based consensus protocols are proposed, based on the relative outputs of neighboring agents, under which a continuous-time multi-agent system whose communication topology contains a directed spanning tree can reach consensus.

227 citations


Journal ArticleDOI
TL;DR: It is demonstrated that a host of reconfiguration problems derived from NP-complete problems are PSPACE-complete, while some are also NP-hard to approximate.

213 citations


Journal ArticleDOI
TL;DR: It is proved that these problems are NP-hard and, hence, design techniques are introduced, relying on semidefinite programming (SDP) relaxation and randomization as well as on the theory of trigonometric polynomials, providing high-quality suboptimal solutions with a polynomial time computational complexity.
Abstract: This paper considers the problem of radar waveform design in the presence of colored Gaussian disturbance under a peak-to-average-power ratio (PAR) and an energy constraint. First of all, we focus on the selection of the radar signal optimizing the signal-to-noise ratio (SNR) for a given expected target Doppler frequency (Algorithm 1). Then, through a max-min approach, we robustify the technique with respect to the received Doppler (Algorithm 2), namely we optimize the worst case SNR under the same constraints as in the previous problem. Since Algorithms 1 and 2 do not impose any condition on the waveform phase, we also devise their phase quantized versions (Algorithms 3 and 4, respectively), which force the waveform phase to lie within a finite alphabet. All the problems are formulated in terms of nonconvex quadratic optimization programs with either a finite or an infinite number of quadratic constraints. We prove that these problems are NP-hard and, hence, introduce design techniques, relying on semidefinite programming (SDP) relaxation and randomization as well as on the theory of trigonometric polynomials, providing high-quality suboptimal solutions with a polynomial time computational complexity. Finally, we analyze the performance of the new waveform design algorithms in terms of detection performance and robustness with respect to Doppler shifts.

Journal ArticleDOI
TL;DR: An algorithm for designing the sequence of one or more interacting nucleic acid strands intended to adopt a target secondary structure at equilibrium is described; it exhibits asymptotic optimality, and the exponent in the time complexity bound is sharp.
Abstract: We describe an algorithm for designing the sequence of one or more interacting nucleic acid strands intended to adopt a target secondary structure at equilibrium. Sequence design is formulated as an optimization problem with the goal of reducing the ensemble defect below a user-specified stop condition. For a candidate sequence and a given target secondary structure, the ensemble defect is the average number of incorrectly paired nucleotides at equilibrium evaluated over the ensemble of unpseudoknotted secondary structures. To reduce the computational cost of accepting or rejecting mutations to a random initial sequence, candidate mutations are evaluated on the leaf nodes of a tree-decomposition of the target structure. During leaf optimization, defect-weighted mutation sampling is used to select each candidate mutation position with probability proportional to its contribution to the ensemble defect of the leaf. As subsequences are merged moving up the tree, emergent structural defects resulting from crosstalk between sibling sequences are eliminated via reoptimization within the defective subtree starting from new random subsequences. Using a Θ(N^3) dynamic program to evaluate the ensemble defect of a target structure with N nucleotides, this hierarchical approach implies an asymptotic optimality bound on design time: for sufficiently large N, the cost of sequence design is bounded below by 4/3 the cost of a single evaluation of the ensemble defect for the full sequence. Hence, the design algorithm has time complexity Ω(N^3). For target structures containing N ∈{100,200,400,800,1600,3200} nucleotides and duplex stems ranging from 1 to 30 base pairs, RNA sequence designs at 37°C typically succeed in satisfying a stop condition with ensemble defect less than N/100. Empirically, the sequence design algorithm exhibits asymptotic optimality and the exponent in the time complexity bound is sharp.

Journal ArticleDOI
TL;DR: An algorithm for decentralized multi-agent estimation of parameters in linear discrete-time regression models is proposed in the form of a combination of local stochastic approximation algorithms and a global consensus strategy, and an analysis of the asymptotic properties of the proposed algorithm is presented.
Abstract: In this paper, an algorithm for decentralized multi-agent estimation of parameters in linear discrete-time regression models is proposed in the form of a combination of local stochastic approximation algorithms and a global consensus strategy. An analysis of the asymptotic properties of the proposed algorithm is presented, taking into account both the multi-agent network structure and the probabilities of getting local measurements and implementing exchange of inter-agent messages. In the case of non-vanishing gains in the stochastic approximation algorithms, an asymptotic estimation error covariance matrix bound is defined as the solution of a Lyapunov-like matrix equation. In the case of asymptotically vanishing gains, the mean-square convergence is proved and the rate of convergence estimated. In the discussion, the problem of additive communication noise is treated in a methodologically consistent way. It is also demonstrated how the consensus scheme in the algorithm can contribute to the overall reduction of measurement noise influence. Some simulation results illustrate the obtained theoretical results.

Proceedings ArticleDOI
27 Feb 2011
TL;DR: This paper analyses different hardware sorting architectures in order to implement a highly scalable sorter for solving huge problems at high performance, up to the GB range, in linear time complexity, and demonstrates how partial run-time reconfiguration can be used for saving almost half the FPGA resources or alternatively for improving the speed.
Abstract: This paper analyses different hardware sorting architectures in order to implement a highly scalable sorter for solving huge problems, up to the GB range, at high performance in linear time complexity. It will be proven that a combination of a FIFO-based merge sorter and a tree-based merge sorter results in the best performance at low cost. Moreover, we will demonstrate how partial run-time reconfiguration can be used for saving almost half the FPGA resources or alternatively for improving the speed. Experiments show a sustained sorting throughput of 2 GB/s for problems fitting into the on-chip FPGA memory and 1 GB/s when using external memory. These values surpass the best published results on large problem sorting implementations on FPGAs, GPUs, and the Cell processor.

Proceedings ArticleDOI
01 Dec 2011
TL;DR: This paper investigates the use of randomized algorithms that operate directly on the raw acoustic features to produce sparse approximate similarity matrices in O(n) space and O(n log n) time and demonstrates these techniques facilitate spoken term discovery performance capable of outperforming a model-based strategy in the zero resource setting.
Abstract: Spoken term discovery is the task of automatically identifying words and phrases in speech data by searching for long repeated acoustic patterns. Initial solutions relied on exhaustive dynamic time warping-based searches across the entire similarity matrix, a method whose scalability is ultimately limited by the O(n²) nature of the search space. Recent strategies have attempted to improve search efficiency by using either unsupervised or mismatched-language acoustic models to reduce the complexity of the feature representation. Taking a completely different approach, this paper investigates the use of randomized algorithms that operate directly on the raw acoustic features to produce sparse approximate similarity matrices in O(n) space and O(n log n) time. We demonstrate these techniques facilitate spoken term discovery performance capable of outperforming a model-based strategy in the zero resource setting.
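One standard randomized primitive of this kind is sign random projection: each feature vector is reduced to a short bit signature whose Hamming distance approximates cosine distance, so near-duplicate frames can be found by sorting or bucketing signatures instead of filling an n x n similarity matrix. This sketch illustrates the idea but is not the paper's exact construction.

```python
import numpy as np

def bit_signatures(X, n_bits=64, seed=0):
    """Sign random projections: X is (n_frames, dim); returns an
    (n_frames, n_bits) array of 0/1 signature bits."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], n_bits))
    return (X @ R > 0).astype(np.uint8)
```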

Journal ArticleDOI
TL;DR: The geodesic distance measure between two phylogenetic trees with edge lengths is the length of the shortest path between them in the continuous tree space introduced by Billera, Holmes, and Vogtmann as discussed by the authors.
Abstract: Comparing and computing distances between phylogenetic trees are important biological problems, especially for models where edge lengths play an important role. The geodesic distance measure between two phylogenetic trees with edge lengths is the length of the shortest path between them in the continuous tree space introduced by Billera, Holmes, and Vogtmann. This tree space provides a powerful tool for studying and comparing phylogenetic trees, both in exhibiting a natural distance measure and in providing a Euclidean-like structure for solving optimization problems on trees. An important open problem is to find a polynomial time algorithm for finding geodesics in tree space. This paper gives such an algorithm, which starts with a simple initial path and moves through a series of successively shorter paths until the geodesic is attained.

Posted Content
TL;DR: The first provably accurate feature selection method for k-means clustering is presented and, in addition, two feature extraction methods are presented that improve upon the existing results in terms of time complexity and number of features needed to be extracted.
Abstract: We study the topic of dimensionality reduction for $k$-means clustering. Dimensionality reduction encompasses the union of two approaches: \emph{feature selection} and \emph{feature extraction}. A feature selection based algorithm for $k$-means clustering selects a small subset of the input features and then applies $k$-means clustering on the selected features. A feature extraction based algorithm for $k$-means clustering constructs a small set of new artificial features and then applies $k$-means clustering on the constructed features. Despite the significance of $k$-means clustering as well as the wealth of heuristic methods addressing it, provably accurate feature selection methods for $k$-means clustering are not known. On the other hand, two provably accurate feature extraction methods for $k$-means clustering are known in the literature; one is based on random projections and the other is based on the singular value decomposition (SVD). This paper makes further progress towards a better understanding of dimensionality reduction for $k$-means clustering. Namely, we present the first provably accurate feature selection method for $k$-means clustering and, in addition, we present two feature extraction methods. The first feature extraction method is based on random projections and it improves upon the existing results in terms of time complexity and number of features needed to be extracted. The second feature extraction method is based on fast approximate SVD factorizations and it also improves upon the existing results in terms of time complexity. The proposed algorithms are randomized and provide constant-factor approximation guarantees with respect to the optimal $k$-means objective value.
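The random-projection flavor of feature extraction is easy to sketch: compress the points to r random features, then cluster. The choice of r and any guarantee are illustrative here, not the paper's theorem; KMeans from scikit-learn is an arbitrary solver choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_random_projection(X, k, r, seed=0):
    """Project d-dimensional points to r random features, then run
    k-means on the compressed data."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], r)) / np.sqrt(r)
    return KMeans(n_clusters=k, n_init=10).fit_predict(X @ R)
```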

Proceedings Article
12 Dec 2011
TL;DR: This paper shows that if, instead of a flat partitioning, the image is represented by a hierarchical segmentation tree, then the resulting energy combining unary and boundary terms can still be optimized using graph cut (with all the corresponding benefits of global optimality and efficiency).
Abstract: Graph cut optimization is one of the standard workhorses of image segmentation since for binary random field representations of the image, it gives globally optimal results and there are efficient polynomial time implementations. Often, the random field is applied over a flat partitioning of the image into non-intersecting elements, such as pixels or super-pixels. In the paper we show that if, instead of a flat partitioning, the image is represented by a hierarchical segmentation tree, then the resulting energy combining unary and boundary terms can still be optimized using graph cut (with all the corresponding benefits of global optimality and efficiency). As a result of such inference, the image gets partitioned into a set of segments that may come from different layers of the tree. We apply this formulation, which we call the pylon model, to the task of semantic segmentation where the goal is to separate an image into areas belonging to different semantic classes. The experiments highlight the advantage of inference on a segmentation tree (over a flat partitioning) and demonstrate that the optimization in the pylon model is able to flexibly choose the level of segmentation across the image. Overall, the proposed system has superior segmentation accuracy on several datasets (Graz-02, Stanford background) compared to previously suggested approaches.

Journal ArticleDOI
TL;DR: This work presents an advanced label propagation algorithm that combines two unique strategies of community formation, namely, defensive preservation and offensive expansion of communities, combined in a hierarchical manner to recursively extract the core of the network and to identify whisker communities.
Abstract: Label propagation has proven to be a fast method for detecting communities in large complex networks. Recent developments have also improved the accuracy of the approach; however, a general algorithm is still an open issue. We present an advanced label propagation algorithm that combines two unique strategies of community formation, namely, defensive preservation and offensive expansion of communities. The two strategies are combined in a hierarchical manner to recursively extract the core of the network and to identify whisker communities. The algorithm was evaluated on two classes of benchmark networks with planted partition and on 23 real-world networks ranging from networks with tens of nodes to networks with several tens of millions of edges. It is shown to be comparable to the current state-of-the-art community detection algorithms and superior to all previous label propagation algorithms, with comparable time complexity. In particular, analysis on real-world networks has proven that the algorithm has almost linear complexity, O(m¹·¹⁹), and scales even better than the basic label propagation algorithm (m is the number of edges in the network).
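For reference, the basic label propagation algorithm that this work refines (without the defensive/offensive strategies) fits in a few lines:

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100):
    """Basic label propagation baseline: every node repeatedly adopts
    the most frequent label among its neighbors until labels stabilize.
    adj maps each node to a list of its neighbors."""
    labels = {v: v for v in adj}              # unique initial labels
    nodes = list(adj)
    for _ in range(max_iter):
        random.shuffle(nodes)                 # random update order per sweep
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            best = Counter(labels[u] for u in adj[v]).most_common(1)[0][0]
            if best != labels[v]:
                labels[v], changed = best, True
        if not changed:
            break
    return labels
```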

Journal ArticleDOI
TL;DR: Two efficient algorithms for linear time suffix array construction, using the techniques of divide-and-conquer and recursion, that yield the best time and space efficiencies among all the existing linear time SACAs.
Abstract: We present, in this paper, two efficient algorithms for linear time suffix array construction. These two algorithms achieve their linear time complexities using the techniques of divide-and-conquer and recursion. What distinguishes the proposed algorithms from other linear time suffix array construction algorithms (SACAs) are the variable-length leftmost S-type (LMS) substrings and the fixed-length d-critical substrings sampled for problem reduction, and the simple algorithms for sorting these sampled substrings: the induced sorting algorithm for the variable-length LMS substrings and the radix sorting algorithm for the fixed-length d-critical substrings. The very simple sorting mechanisms give our algorithms an elegant design framework and, in turn, surprisingly succinct implementations. The fully functional sample implementations of our proposed algorithms require only around 100 lines of C code each, which is only 1/10 of the implementation of the KA algorithm and comparable to that of the KS algorithm. The experimental results demonstrate that these two newly proposed algorithms yield the best time and space efficiencies among all the existing linear time SACAs.
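For contrast with these linear-time constructions, the naive suffix array takes one line but costs O(n² log n) in the worst case, since it compares whole suffixes while sorting:

```python
def suffix_array_naive(s):
    """Sort suffix start positions by the suffixes themselves."""
    return sorted(range(len(s)), key=lambda i: s[i:])
```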

Posted Content
TL;DR: In this paper, the problem of minimizing a convex function over the space of large matrices with low rank was studied and an efficient greedy algorithm was proposed, and its formal approximation guarantees were derived.
Abstract: We address the problem of minimizing a convex function over the space of large matrices with low rank. While this optimization problem is hard in general, we propose an efficient greedy algorithm and derive its formal approximation guarantees. Each iteration of the algorithm involves (approximately) finding the left and right singular vectors corresponding to the largest singular value of a certain matrix, which can be calculated in linear time. This leads to an algorithm which can scale to large matrices arising in several applications such as matrix completion for collaborative filtering and robust low rank matrix approximation.
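A rough sketch of the greedy template: each iteration extracts the leading singular pair of the negative gradient by power iteration and adds the corresponding rank-1 direction. The authors' step-size rule and coefficient refitting are omitted, so treat this as structure only.

```python
import numpy as np

def top_singular_pair(G, iters=50):
    """Power iteration for the leading singular pair of G, the
    linear-time primitive each greedy iteration needs."""
    v = np.random.default_rng(0).standard_normal(G.shape[1])
    for _ in range(iters):
        u = G @ v
        u /= np.linalg.norm(u)
        v = G.T @ u
        v /= np.linalg.norm(v)
    return u, v

def greedy_low_rank(grad_f, shape, rank, step=1.0):
    """Greedy sketch: repeatedly add the rank-1 direction best aligned
    with the negative gradient. grad_f maps the iterate X to the
    gradient matrix of the convex objective."""
    X = np.zeros(shape)
    for _ in range(rank):
        u, v = top_singular_pair(-grad_f(X))
        X += step * np.outer(u, v)
    return X
```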

Journal ArticleDOI
TL;DR: The turning motion of UAVs is shown to be less efficient from the viewpoints of route length, duration, and energy; the problem of coverage path planning in a convex polygon area is transformed into width calculation of the convex polygon, and a novel algorithm to calculate the widths of convex polygons with time complexity O(n) is developed.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: A class of reachability queries and a class of graph patterns, in which an edge is specified with a regular expression of a certain form, expressing the connectivity in a data graph via edges of various types are proposed.
Abstract: It is increasingly common to find graphs in which edges bear different types, indicating a variety of relationships. For such graphs we propose a class of reachability queries and a class of graph patterns, in which an edge is specified with a regular expression of a certain form, expressing the connectivity in a data graph via edges of various types. In addition, we define graph pattern matching based on a revised notion of graph simulation. On graphs in emerging applications such as social networks, we show that these queries are capable of finding more sensible information than their traditional counterparts. Better still, their increased expressive power does not come with extra complexity. Indeed, (1) we investigate their containment and minimization problems, and show that these fundamental problems are in quadratic time for reachability queries and are in cubic time for pattern queries. (2) We develop an algorithm for answering reachability queries, in quadratic time as for their traditional counterpart. (3) We provide two cubic-time algorithms for evaluating graph pattern queries based on extended graph simulation, as opposed to the NP-completeness of graph pattern matching via subgraph isomorphism. (4) The effectiveness, efficiency and scalability of these algorithms are experimentally verified using real-life data and synthetic data.
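A toy version of such a reachability query: the restricted regular expression is approximated here as a list of (edge type, hop bound) segments, evaluated by staged BFS. This is a hypothetical simplification for illustration, not the paper's algorithm.

```python
from collections import deque

def reach(graph, src, dst, pattern):
    """graph[u] yields (edge_type, v) pairs; pattern is a list of
    (edge_type, max_hops) segments. Each stage expands the frontier
    along up to max_hops edges of the given type."""
    frontier = {src}
    for etype, max_hops in pattern:
        seen, queue = set(frontier), deque((v, 0) for v in frontier)
        while queue:
            v, d = queue.popleft()
            if d == max_hops:
                continue
            for t, w in graph.get(v, ()):
                if t == etype and w not in seen:
                    seen.add(w)
                    queue.append((w, d + 1))
        frontier = seen
    return dst in frontier
```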

Journal ArticleDOI
01 Apr 2011
TL;DR: A new pseudopolynomial algorithm is presented for solving two-player games played on a weighted graph with mean-payoff objective and energy constraints, improving the best known worst-case complexity for pseudopolynomial mean-payoff algorithms.
Abstract: In this paper, we study algorithmic problems for quantitative models that are motivated by the applications in modeling embedded systems. We consider two-player games played on a weighted graph with mean-payoff objective and with energy constraints. We present a new pseudopolynomial algorithm for solving such games, improving the best known worst-case complexity for pseudopolynomial mean-payoff algorithms. Our algorithm can also be combined with the procedure by Andersson and Vorobyov to obtain a randomized algorithm with currently the best expected time complexity. The proposed solution relies on a simple fixpoint iteration to solve the log-space equivalent problem of deciding the winner of energy games. Our results imply also that energy games and mean-payoff games can be reduced to safety games in pseudopolynomial time.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: A new graph structure is presented that encodes multiple-match events as standard one-to-one matches, allowing computation of the solution in polynomial time, and an efficient method to identify groups is also presented, as a flow circulation problem.
Abstract: Multiple object tracking has been formulated recently as a global optimization problem, and solved efficiently with optimal methods such as the Hungarian Algorithm. A severe limitation is the inability to model multiple objects that are merged into a single measurement, and track them as a group, while retaining optimality. This work presents a new graph structure that encodes these multiple-match events as standard one-to-one matches, allowing computation of the solution in polynomial time. Since identities are lost when objects merge, an efficient method to identify groups is also presented, as a flow circulation problem. The problem of tracking individual objects across groups is then posed as a standard optimal assignment. Experiments show increased performance on the PETS 2006 and 2009 datasets compared to state-of-the-art algorithms.
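The optimal-assignment primitive the paper builds on is available off the shelf; a minimal usage sketch with SciPy follows (the paper's merged-measurement graph structure is not reproduced here).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] could be, e.g., a distance between track i's prediction
# and detection j; the values below are placeholders.
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
rows, cols = linear_sum_assignment(cost)       # optimal one-to-one matching
print(list(zip(rows, cols)), cost[rows, cols].sum())
```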

Proceedings Article
Jingrui He, Rick Lawrence
28 Jun 2011
TL;DR: This paper introduces Multi-Task Multi-View (M2TV) learning for such complicated learning problems with both feature heterogeneity and task heterogeneity, and proposes a graph-based framework (GraM2) to take full advantage of the dual-heterogeneous nature.
Abstract: Many real-world problems exhibit dual-heterogeneity. A single learning task might have features in multiple views (i.e., feature heterogeneity); multiple learning tasks might be related with each other through one or more shared views (i.e., task heterogeneity). Existing multi-task learning or multi-view learning algorithms only capture one type of heterogeneity. In this paper, we introduce Multi-Task Multi-View (M2TV) learning for such complicated learning problems with both feature heterogeneity and task heterogeneity. We propose a graph-based framework (GraM2) to take full advantage of the dual-heterogeneous nature. Our framework has a natural connection to Reproducing Kernel Hilbert Space (RKHS). Furthermore, we propose an iterative algorithm (IteM2) for GraM2 framework, and analyze its optimality, convergence and time complexity. Experimental results on various real data sets demonstrate its effectiveness.

Proceedings ArticleDOI
01 Sep 2011
TL;DR: This work proposes an abstract theory of denoising with atomic norms which is specialized to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials with guaranteed bounds on the mean-squared-error.
Abstract: The sub-Nyquist estimation of line spectra is a classical problem in signal processing, but currently popular subspace-based techniques have few guarantees in the presence of noise and rely on a priori knowledge about system model order. Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectrum estimation that provides theoretical guarantees for the mean-square-error performance in the presence of noise and without advance knowledge of the model order. We propose an abstract theory of denoising with atomic norms which is specialized to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials with guaranteed bounds on the mean-squared-error. In general, our proposed optimization problem has no known polynomial time solution, but we provide an efficient algorithm, called DAST, based on the Fast Fourier Transform that achieves nearly the same error rate. We compare DAST with Cadzow's canonical alternating projection algorithm, which performs marginally better under high signal-to-noise ratios when the model order is known exactly, and demonstrate experimentally that DAST outperforms other denoising techniques, including Cadzow's, over a wide range of signal-to-noise ratios.

Journal ArticleDOI
TL;DR: This paper proposes decentralized sub-optimal (polynomial time) and decentralized optimal coalition formation algorithms that generate coalitions for a single target with low computational complexity and compares the performance of the proposed algorithms to that of a global optimal solution.
Abstract: Unmanned aerial vehicles (UAVs) have the potential to carry resources in support of search and prosecute operations. Often to completely prosecute a target, UAVs may have to simultaneously attack the target with various resources with different capacities. However, the UAVs are capable of carrying only limited resources in small quantities, hence, a group of UAVs (coalition) needs to be assigned that satisfies the target resource requirement. The assigned coalition must be such that it minimizes the target prosecution delay and the size of the coalition. The problem of forming coalitions is computationally intensive due to the combinatorial nature of the problem, but for real-time applications computationally cheap solutions are required. In this paper, we propose decentralized sub-optimal (polynomial time) and decentralized optimal coalition formation algorithms that generate coalitions for a single target with low computational complexity. We compare the performance of the proposed algorithms to that of a global optimal solution for which we need to solve a centralized combinatorial optimization problem. This problem is computationally intensive because the solution has to (a) provide a coalition for each target, (b) design a sequence in which targets need to be prosecuted, and (c) take into account reduction of UAV resources with usage. To solve this problem we use the Particle Swarm Optimization (PSO) technique. Through simulations, we study the performance of the proposed algorithms in terms of mission performance, complexity of the algorithms and the time taken to form the coalition. The simulation results show that the solution provided by the proposed algorithms is close to the global optimal solution and requires far less computational resources.

Journal ArticleDOI
TL;DR: In this paper, the authors model and study the case in which the attacker launches a multipronged (i.e., multimode) attack and prove that for various election systems even such concerted, flexible attacks can be perfectly planned in deterministic polynomial time.
Abstract: In 1992, Bartholdi, Tovey, and Trick opened the study of control attacks on elections: attempts to improve the election outcome by such actions as adding/deleting candidates or voters. That work has led to many results on how algorithms can be used to find attacks on elections and how complexity-theoretic hardness results can be used as shields against attacks. However, all the work in this line has assumed that the attacker employs just a single type of attack. In this paper, we model and study the case in which the attacker launches a multipronged (i.e., multimode) attack. We do so to more realistically capture the richness of real-life settings. For example, an attacker might simultaneously try to suppress some voters, attract new voters into the election, and introduce a spoiler candidate. Our model provides a unified framework for such varied attacks. By constructing polynomial-time multiprong attack algorithms we prove that for various election systems even such concerted, flexible attacks can be perfectly planned in deterministic polynomial time.

Book
30 Aug 2011
TL;DR: In this article, the authors present linear time algorithms for solving the following problems involving a simple planar polygon P: (i) computing the collection of all shortest paths inside P from a given source vertex s to all the other vertices of P; (ii) computing a subpolygon of P consisting of points that are visible from a segment within P; and (iii) preprocessing P so that for any query ray r emerging from some fixed edge e of P, we can find in logarithmic time the first intersection of r with the boundary of P
Abstract: We present linear time algorithms for solving the following problems involving a simple planar polygon P: (i) Computing the collection of all shortest paths inside P from a given source vertex s to all the other vertices of P; (ii) Computing the subpolygon of P consisting of points that are visible from a segment within P; (iii) Preprocessing P so that for any query ray r emerging from some fixed edge e of P, we can find in logarithmic time the first intersection of r with the boundary of P; (iv) Preprocessing P so that for any query point x in P, we can find in logarithmic time the portion of the edge e that is visible from x; (v) Preprocessing P so that for any query point x inside P and direction u, we can find in logarithmic time the first point on the boundary of P hit by the ray at direction u from x; (vi) Calculating a hierarchical decomposition of P into smaller polygons by recursive polygon cutting, as in [Ch]. (vii) Calculating the (clockwise and counterclockwise) “convex ropes” (in the terminology of [PS]) from a fixed vertex s of P lying on its convex hull, to all other vertices of P. All these algorithms are based on a recent linear time algorithm of Tarjan and Van Wyk for triangulating a simple polygon, but use additional techniques to make all subsequent phases of these algorithms also linear.