
Showing papers on "Concatenation published in 2012"



Journal ArticleDOI
TL;DR: This study attempts to resolve cheirogaleid phylogeny, focusing especially on the mouse lemurs, by employing a large multilocus data set and shows that phylogenetic results are substantially influenced by the selection of alleles in the concatenation process.
Abstract: The systematics and speciation literature is rich with discussion relating to the potential for gene tree/species tree discordance. Numerous mechanisms have been proposed to generate discordance, including differential selection, long-branch attraction, gene duplication, genetic introgression, and/or incomplete lineage sorting. For speciose clades in which divergence has occurred recently and rapidly, recovering the true species tree can be particularly problematic due to incomplete lineage sorting. Unfortunately, the availability of multilocus or "phylogenomic" data sets does not simply solve the problem, particularly when the data are analyzed with standard concatenation techniques. In our study, we conduct a phylogenetic analysis of a nearly complete species sample of the dwarf and mouse lemur clade, Cheirogaleidae. Mouse lemurs (genus Microcebus) have been intensively studied over the past decade for reasons relating to their high level of cryptic species diversity, and although there has been emerging consensus regarding the evolutionary diversity contained within the genus, there is no agreement as to the inter-specific relationships within the group. We attempt to resolve cheirogaleid phylogeny, focusing especially on the mouse lemurs, by employing a large multilocus data set. We compare the results of Bayesian concordance methods with those of standard gene concatenation, finding that though concatenation yields the strongest results as measured by statistical support, these results are found to be highly misleading. By employing an approach where individual alleles are treated as operational taxonomic units, we show that phylogenetic results are substantially influenced by the selection of alleles in the concatenation process.
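The supermatrix concatenation step that the study critiques can be sketched minimally; the taxon and gene data below are invented for illustration, and taxa missing from a gene are padded with gap characters, as is common practice:

```python
def concatenate_alignments(gene_alignments, taxa, gap="-"):
    """Join per-gene aligned sequences end to end for each taxon.

    gene_alignments: list of dicts mapping taxon -> aligned sequence.
    Taxa absent from a gene are padded with gap characters so every
    row of the supermatrix has the same length.
    """
    supermatrix = {}
    for aln in gene_alignments:
        length = len(next(iter(aln.values())))  # alignment width of this gene
        for taxon in taxa:
            supermatrix.setdefault(taxon, "")
            supermatrix[taxon] += aln.get(taxon, gap * length)
    return supermatrix

# Two toy gene alignments; names are purely illustrative.
genes = [
    {"M_murinus": "ACGT", "M_rufus": "ACGA"},
    {"M_murinus": "TTG", "C_medius": "TAG"},
]
matrix = concatenate_alignments(genes, ["M_murinus", "M_rufus", "C_medius"])
```

The study's point is that when each taxon has two alleles per gene, which allele lands in each row of such a matrix can change the inferred tree.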

84 citations


Proceedings ArticleDOI
03 Sep 2012
TL;DR: New binary pattern features for use in the problem of 3D facial action unit (AU) detection are proposed, using the traditional Local Binary Pattern, along with Local Phase Quantisation, Gabor filters and Monogenic filters, followed by the binary pattern feature extraction method.
Abstract: In this paper we propose new binary pattern features for use in the problem of 3D facial action unit (AU) detection. Two representations of 3D facial geometries are employed, the depth map and the Azimuthal Projection Distance Image (APDI). To these the traditional Local Binary Pattern is applied, along with Local Phase Quantisation, Gabor filters and Monogenic filters, followed by the binary pattern feature extraction method. Feature vectors are formed for each feature type through concatenation of histograms formed from the resulting binary numbers. Feature selection is then performed using a two-stage GentleBoost approach. Finally, we apply Support Vector Machines as classifiers for detection of each AU. This system is tested in two ways. First we perform 10-fold cross-validation on the Bosphorus database, and then we perform cross-database testing by training on this database and then testing on apex frames from the D3DFACS database, achieving promising results in both.

41 citations


Proceedings ArticleDOI
12 Aug 2012
TL;DR: A novel generalized Hidden Markov Model with discriminative training that can not only handle all the major types of spelling errors in a single unified framework, but also efficiently evaluate all the candidate corrections to ensure the finding of a globally optimal correction.
Abstract: Query spelling correction is a crucial component of modern search engines. Existing methods in the literature for search query spelling correction have two major drawbacks. First, they are unable to handle certain important types of spelling errors, such as concatenation and splitting. Second, they cannot efficiently evaluate all the candidate corrections due to the complex form of their scoring functions, and a heuristic filtering step must be applied to select a working set of top-K most promising candidates for final scoring, leading to non-optimal predictions. In this paper we address both limitations and propose a novel generalized Hidden Markov Model with discriminative training that can not only handle all the major types of spelling errors, including splitting and concatenation errors, in a single unified framework, but also efficiently evaluate all the candidate corrections to ensure the finding of a globally optimal correction. Experiments on two query spelling correction datasets demonstrate that the proposed generalized HMM is effective for correcting multiple types of spelling errors. The results also show that it significantly outperforms the current approach for generating top-K candidate corrections, making it a better first-stage filter to enable any other complex spelling correction algorithm to have access to a better working set of candidate corrections as well as to cover splitting and concatenation errors, which no existing method in academic literature can correct.
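The two error types the paper emphasizes can be made concrete with a toy candidate generator; this greedy enumeration is not the paper's HMM, only a sketch of what repairing concatenation (merge) and splitting errors looks like, with an invented vocabulary:

```python
def merge_split_candidates(tokens, vocab):
    """Yield token lists obtained by one merge or one split.

    A concatenation error is repaired by merging two adjacent tokens;
    a splitting error is repaired by splitting one token in two.
    """
    out = []
    for i in range(len(tokens) - 1):          # repair a splitting error: merge
        merged = tokens[i] + tokens[i + 1]
        if merged in vocab:
            out.append(tokens[:i] + [merged] + tokens[i + 2:])
    for i, tok in enumerate(tokens):          # repair a concatenation error: split
        for k in range(1, len(tok)):
            left, right = tok[:k], tok[k:]
            if left in vocab and right in vocab:
                out.append(tokens[:i] + [left, right] + tokens[i + 1:])
    return out

vocab = {"power", "point", "powerpoint", "slides"}
cands = merge_split_candidates(["power", "point", "slides"], vocab)
```

A full corrector would score such candidates with a language model; the paper's contribution is scoring all of them exactly rather than a filtered top-K set.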

38 citations


Journal ArticleDOI
TL;DR: This work developed a novel technique called Random Addition Concatenation Analysis (RADICAL), which operates by sequentially concatenating randomly chosen gene partitions, starting with a single-gene partition and ending with the entire genomic data set, and reveals that the two partitions contain congruent phylogenetic signal.
Abstract: Recent whole-genome approaches to microbial phylogeny have emphasized partitioning genes into functional classes, often focusing on differences between a stable core of genes and a variable shell. To rigorously address the effects of partitioning and combining genes in genome-level analyses, we developed a novel technique called Random Addition Concatenation Analysis (RADICAL). RADICAL operates by sequentially concatenating randomly chosen gene partitions starting with a single-gene partition and ending with the entire genomic data set. A phylogenetic tree is built for every successive addition, and the entire process is repeated, creating multiple random concatenation paths. The result is a library of trees representing a large variety of differently sized random gene partitions. This library can then be mined to identify unique topologies, assess overall agreement, and measure support for different trees. To evaluate RADICAL, we used 682 orthologous genes across 13 cyanobacterial genomes. Despite previous assertions of substantial differences between a core and a shell set of genes for this data set, RADICAL reveals the two partitions contain congruent phylogenetic signal. Substantial disagreement within the data set is limited to a few nodes and genes involved in metabolism, a functional group that is distributed evenly between the core and the shell partitions. We highlight numerous examples where RADICAL reveals aspects of phylogenetic behavior not evident by examining individual gene trees or a "total evidence" tree. Our method also demonstrates that most emergent phylogenetic signal appears early in the concatenation process. The software is freely available at http://desalle.amnh.org.
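One RADICAL concatenation path can be sketched as follows; the tree-inference step is replaced by a placeholder callable, since the point here is only the sequential random addition of partitions:

```python
import random

def radical_path(gene_partitions, infer_tree, seed=0):
    """One random concatenation path: shuffle the gene partitions, then
    run the inference step after each successive addition."""
    rng = random.Random(seed)           # seeded so paths are reproducible
    order = list(gene_partitions)
    rng.shuffle(order)
    concatenated, trees = [], []
    for gene in order:
        concatenated.append(gene)
        trees.append(infer_tree(list(concatenated)))
    return trees

# Placeholder "inference": record only how many partitions were concatenated.
trees = radical_path(["geneA", "geneB", "geneC"], infer_tree=len)
```

Running many such paths with different seeds yields the tree library the method mines for agreement and support.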

37 citations


Proceedings ArticleDOI
03 Jul 2012
TL;DR: This paper illuminates behaviors of advanced high school students with language concatenation in the Computational Models course, and offers suggestions for educators to cope with this phenomenon.
Abstract: Composition is a fundamental problem solving heuristic. In computer science, it primarily appears in program design with concrete objects such as language constructs. It also appears in more abstract forms in higher-level courses. One such form is that of language concatenation in the Computational Models course. This concatenation involves the composition of two specifications of infinite sets (source languages) into a third one, and requires both abstraction and non-deterministic conception. In this paper, we illuminate behaviors of advanced high school students with such composition. Students who encountered difficulties offered pseudo solutions, which captured only "surface" features and observations. We systematically present their solutions, discuss them, and offer suggestions for educators to cope with this phenomenon.

30 citations


Journal ArticleDOI
TL;DR: A novel group testing method, termed semi-quantitative group testing (SQGT), is proposed, motivated by a class of problems arising in genome screening experiments, and several combinatorial and probabilistic constructions are described for codes generalizing classical disjunct and separable codes.
Abstract: We propose a novel group testing method, termed semi-quantitative group testing, motivated by a class of problems arising in genome screening experiments. Semi-quantitative group testing (SQGT) is a (possibly) non-binary pooling scheme that may be viewed as a concatenation of an adder channel and an integer-valued quantizer. In its full generality, SQGT may be viewed as a unifying framework for group testing, in the sense that most group testing models are special instances of SQGT. For the new testing scheme, we define the notion of SQ-disjunct and SQ-separable codes, representing generalizations of classical disjunct and separable codes. We describe several combinatorial and probabilistic constructions for such codes. While for most of these constructions we assume that the number of defectives is much smaller than the total number of test subjects, we also consider the case in which there is no restriction on the number of defectives and they may be as large as the total number of subjects. For the codes constructed in this paper, we describe a number of efficient decoding algorithms. In addition, we describe a belief propagation decoder for sparse SQGT codes for which no other efficient decoder is currently known. Finally, we define the notion of capacity of SQGT and evaluate it for some special choices of parameters using information theoretic methods.
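The adder-channel-plus-quantizer view of a single SQGT test can be sketched directly; the thresholds below are invented for illustration, and classical binary group testing is the special case of a single threshold at 1:

```python
def sqgt_outcome(pool, defectives, thresholds=(1, 3)):
    """One semi-quantitative group test.

    The adder channel outputs the number of defectives present in the
    pool; the quantizer maps that count to the number of (increasing)
    thresholds it meets. thresholds=(1,) recovers binary group testing.
    """
    count = sum(1 for subject in pool if subject in defectives)  # adder channel
    return sum(count >= t for t in thresholds)                    # quantizer
```

So a pool containing one or two defectives yields outcome 1, while three or more yield outcome 2 under these toy thresholds.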

25 citations


Journal ArticleDOI
TL;DR: This paper proposes a soft-input VLC decoding method using a priori knowledge of the lengths of the source-symbol sequence and the compressed bit-stream with Maximum A Posteriori (MAP) sequence estimation, and shows that the proposed decoding algorithm leads to significant performance gain in comparison with prefix VLC decoding while exhibiting very low complexity.
Abstract: Most source coding standards (voice, audio, image and video) use Variable-Length Codes (VLCs) for compression. However, the VLC decoder is very sensitive to transmission errors in the compressed bit-stream. Previous contributions have proposed using a trellis description of the VLC codewords to perform soft decoding. Significant improvements are achieved by this approach when compared with prefix decoding. Nevertheless, for realistic VLCs, the complexity of the trellis technique becomes intractable. In this paper, we propose a soft-input VLC decoding method using a priori knowledge of the lengths of the source-symbol sequence and the compressed bit-stream with Maximum A Posteriori (MAP) sequence estimation. Performance in the case of transmission over an Additive White Gaussian Noise (AWGN) channel is evaluated. Simulation results show that the proposed decoding algorithm leads to significant performance gain in comparison with prefix VLC decoding, while exhibiting very low complexity. A new VLC decoding method generating additional information regarding the reliability of the bits of the compressed bit-stream is also proposed. We consider the serial concatenation of a VLC with two types of channel code and perform iterative decoding. Results show that, when concatenated with a recursive systematic convolutional code (RSCC), iterative decoding provides remarkable error correction performance. In fact, a gain of about 2.3 dB is achieved, in the case of transmission over an AWGN channel, with respect to tandem decoding. Second, we consider a concatenation with a low-density parity-check (LDPC) code and it is shown that iterative joint source/channel decoding outperforms tandem decoding and an additional coding gain of 0.25 dB is achieved.

24 citations


01 Jan 2012
TL;DR: This paper presents a proposal for exactly such a standardization effort, i.e., an SMT-LIBization of strings and regular expressions, introducing a theory of sequences generalizing strings and building a theory of regular expressions on top of sequences.
Abstract: Strings are ubiquitous in software. Tools for verification and testing of software rely to varying degrees on reasoning about strings. Web applications are particularly important in this context since they tend to be string-heavy and have a large number of security errors attributable to improper string sanitization and manipulation. In recent years, many string solvers have been implemented to address the analysis needs of verification, testing and security tools aimed at string-heavy applications. These solvers support a basic representation of strings, functions such as concatenation and extraction, and predicates such as equality and membership in regular expressions. However, the syntax and semantics supported by the current crop of string solvers are mutually incompatible. Hence, there is an acute need for a standardized theory of strings (i.e., SMT-LIBization of a theory of strings) that supports a core set of functions, predicates and string representations. This paper presents a proposal for exactly such a standardization effort, i.e., an SMT-LIBization of strings and regular expressions. It introduces a theory of sequences generalizing strings, and builds a theory of regular expressions on top of sequences. The proposed logic QF_BVRE is designed to capture a common substrate among existing tools for string constraint solving.
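The core reasoning task such solvers standardize, satisfiability of constraints built from concatenation and equality, can be illustrated by brute-forcing a tiny word equation; the equation x + "b" == "a" + y is invented for illustration, and this enumeration shows the problem, not any solver's algorithm:

```python
from itertools import product

def all_words(alphabet, max_len):
    """Every word over the alphabet up to the given length."""
    return ["".join(p) for n in range(max_len + 1)
            for p in product(alphabet, repeat=n)]

def solve_equation(max_len=2, alphabet="ab"):
    """Brute-force the word equation  x + "b" == "a" + y  by enumeration."""
    return [(x, y) for x in all_words(alphabet, max_len)
            for y in all_words(alphabet, max_len)
            if x + "b" == "a" + y]

solutions = solve_equation()
```

Real string solvers replace this exponential search with dedicated decision procedures, which is exactly why a shared input theory matters.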

24 citations


Book ChapterDOI
14 May 2012
TL;DR: This work describes a new data structure that supports fast pattern searching and describes a basic compression scheme called relative Lempel-Ziv compression, which gives a good compression ratio when every string in S is similar to R, but does not provide any pattern searching functionality.
Abstract: Recent advances in biotechnology and web technology are generating huge collections of similar strings. People now face the problem of storing them compactly while supporting fast pattern searching. One compression scheme called relative Lempel-Ziv compression uses textual substitutions from a reference text as follows: Given a (large) set S of strings, represent each string in S as a concatenation of substrings from a reference string R . This basic scheme gives a good compression ratio when every string in S is similar to R , but does not provide any pattern searching functionality. Here, we describe a new data structure that supports fast pattern searching.
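The relative Lempel-Ziv scheme described above can be sketched with a naive greedy parser; real implementations use suffix structures over the reference rather than this quadratic scan, so treat it as a specification of the output format, not of the algorithm:

```python
def rlz_parse(s, reference):
    """Greedily parse s into longest substrings of the reference,
    emitting (start, length) phrases, or ("literal", ch) for characters
    absent from the reference."""
    phrases, i = [], 0
    while i < len(s):
        best_len, best_pos = 0, -1
        for j in range(len(reference)):                 # try every start in R
            k = 0
            while (i + k < len(s) and j + k < len(reference)
                   and s[i + k] == reference[j + k]):
                k += 1
            if k > best_len:
                best_len, best_pos = k, j
        if best_len == 0:                               # char not in R
            phrases.append(("literal", s[i]))
            i += 1
        else:
            phrases.append((best_pos, best_len))
            i += best_len
    return phrases

def rlz_decode(phrases, reference):
    """Reconstruct the original string from the phrases."""
    return "".join(p[1] if p[0] == "literal" else reference[p[0]:p[0] + p[1]]
                   for p in phrases)
```

Each string in S similar to R parses into few phrases, which is where the compression comes from; the paper's contribution is searching such phrase representations without decompressing.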

24 citations


Book ChapterDOI
Dag Hovland1
05 Mar 2012
TL;DR: It is shown that the membership problem for regular expressions with unordered concatenation (without numerical constraints) is already NP-hard, and a polynomial-time algorithm is given for the membership problem when restricted to a subclass called strongly 1-unambiguous.
Abstract: We study the membership problem for regular expressions extended with operators for unordered concatenation and numerical constraints. The unordered concatenation of a set of regular expressions denotes all sequences consisting of exactly one word denoted by each of the expressions. Numerical constraints are an extension of regular expressions used in many applications, e.g. text search (e.g., UNIX grep), document formats (e.g. XML Schema). Regular expressions with unordered concatenation and numerical constraints denote the same languages as the classical regular expressions, but, in certain important cases, exponentially more succinct. We show that the membership problem for regular expressions with unordered concatenation (without numerical constraints) is already NP-hard. We show a polynomial-time algorithm for the membership problem for regular expressions with numerical constraints and unordered concatenation, when restricted to a subclass called strongly 1-unambiguous.
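A brute-force membership check makes the combinatorial difficulty concrete; here the component languages are given as finite sets of words rather than regular expressions, and the factorial enumeration over orderings mirrors the NP-hardness result above:

```python
from itertools import permutations

def in_unordered_concat(word, languages):
    """Is word in the unordered concatenation of the given languages?

    languages: finite sets of words; word must factor into exactly one
    word per language, in some order. Exponential time by design.
    """
    def match(rest, langs):
        # Try to consume `rest` using one word from each language in order.
        if not langs:
            return rest == ""
        return any(rest.startswith(x) and match(rest[len(x):], langs[1:])
                   for x in langs[0])
    return any(match(word, order) for order in permutations(languages))
```

The paper's strongly 1-unambiguous restriction is precisely what allows replacing this kind of search with a deterministic polynomial-time procedure.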

Journal ArticleDOI
TL;DR: The question of whether the number of components induces an infinite hierarchy of the recognized languages, formulated as an open problem in the literature, is affirmatively solved by connecting P automata with right-linear simple matrix grammars.

Book ChapterDOI
01 Jan 2012
TL;DR: The paper investigates the expressive power of language equations with the operations of concatenation and symmetric difference, demonstrating that the sets representable by unique solutions of such equations are exactly the recursive sets, while those representable by their least (greatest) solutions are exactly the recursively enumerable sets (their complements, respectively).
Abstract: The paper investigates the expressive power of language equations with the operations of concatenation and symmetric difference. For equations over every finite alphabet Σ with |Σ| ≥ 1, it is demonstrated that the sets representable by unique solutions of such equations are exactly the recursive sets over Σ, and the sets representable by their least (greatest) solutions are exactly the recursively enumerable sets (their complements, respectively). If |Σ| ≥ 2, the same characterization holds already for equations using symmetric difference and linear concatenation with regular constants. In both cases, the solution existence problem is Π^0_1-complete, the existence of a unique, a least or a greatest solution is Π^0_2-complete, while the existence of finitely many solutions is Π^0_3-complete.

Proceedings ArticleDOI
01 Dec 2012
TL;DR: Simulation results in the presence of ASE noise show that the TIER-LDPC concatenation scheme performs significantly better than the LDPC code alone in the case of severe impairments.
Abstract: In long-haul fiber-optic communication systems, the system performance is affected adversely by both severe physical impairments and amplified spontaneous emission (ASE) noise. Constrained coding, which avoids waveforms in the transmitted signal that are more likely to be detected incorrectly, has been proved to be an effective approach to suppress some physical impairments. However, the constrained coding schemes proposed in the literature are limited to the suppression of only some “resonant” sequences and their performance is evaluated in the absence of ASE noise. Various error correction codes also have been developed to reduce errors due to ASE noise but their performance is very vulnerable to the strong nonlinear impairments. This paper develops a novel concatenation scheme with the inner code being a constrained code based on Total Impairment Extent Rank (TIER) and the outer code being a low-density parity-check (LDPC) code. The TIER code ranks the bit patterns by order of all physical impairments imposed on them and constrains the bit patterns with large physical impairments. It is based on a discrete-time analytical model of physical impairments in long-haul fiber-optic communication systems. Compared with the current constrained codes, the TIER code offers more flexibility and better effectiveness. The TIER-LDPC concatenation scheme combines the strength of the TIER code in correcting errors due to physical impairments and that of the LDPC code in correcting memoryless errors due to ASE noise. Simulation results in the presence of ASE noise show that the TIER-LDPC concatenation scheme performs significantly better than the LDPC code alone in the case of severe impairments.

Patent
29 Feb 2012
TL;DR: In this article, an exchange frame formed by connecting one or more concatenation units and one or more exchange units is disclosed through a plurality of embodiments, together with a colony router built from such exchange frames.
Abstract: The invention discloses an exchange frame through a plurality of embodiments, which is formed by connecting more than one concatenation unit and more than one exchange unit, wherein the concatenation unit is provided with a concatenation interface for connecting a retransmission frame; the exchange unit is provided with an exchange port for connecting the concatenation interface; and any concatenation interface of any concatenation unit is connected with one exchange port of any exchange unit. The invention also discloses a colony router with the exchange frame through a plurality of embodiments, which comprises exchange frames and retransmission frames connected by optical cables; any optical cable interface of any retransmission frame is connected with one concatenation interface of any concatenation unit; and any concatenation interface of any concatenation unit is connected with one exchange port of any exchange unit. Each embodiment can realize the capacity upgrading of the colony router, does not need to replace parts and reduces the cost for capacity expansion.

Journal ArticleDOI
TL;DR: In this paper, the authors combine the advantages of the (e, λ)-topology and the locally L0-convex topology to prove that every complete random normed module is random subreflexive under the (e, λ)-topology, and that every complete random normed module with the countable concatenation property is also random subreflexive under the locally L0-convex topology.
Abstract: Combining the respective advantages of the (e, λ)-topology and the locally L0-convex topology, we first prove that every complete random normed module is random subreflexive under the (e, λ)-topology. Further, we prove that every complete random normed module with the countable concatenation property is also random subreflexive under the locally L0-convex topology; at the same time, we provide a counterexample which shows that it is necessary to require the random normed module to have the countable concatenation property.

Posted Content
TL;DR: In this article, the authors investigated the design and application of write-once memory (WOM) codes for flash memory storage and presented a construction of WOM codes based on finite Euclidean geometries over $\mathbb{F}_2$.
Abstract: This paper investigates the design and application of write-once memory (WOM) codes for flash memory storage. Using ideas from Merkx ('84), we present a construction of WOM codes based on finite Euclidean geometries over $\mathbb{F}_2$. This construction yields WOM codes with new parameters and provides insight into the criterion that incidence structures should satisfy to give rise to good codes. We also analyze methods of adapting binary WOM codes for use on multilevel flash cells. In particular, we give two strategies based on different rewrite objectives. A brief discussion of the average-write performance of these strategies, as well as concatenation methods for WOM codes is also provided.

Patent
14 Nov 2012
TL;DR: In this article, a system for combining a plurality of video streams includes a time stamp adjustment module that generates an adjusted second video stream by adjusting the plurality of time stamps of the first video stream.
Abstract: A system for combining a plurality of video streams includes a time stamp adjustment module that generates an adjusted second video stream by adjusting a plurality of time stamps of a second video stream. A video stream concatenation module generates a combined video stream by concatenating the adjusted second video stream to an end of a first video stream.
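The patent's two modules can be sketched minimally; frames are modeled as (timestamp, payload) pairs, and the offset rule used here (resume one tick after the first stream's last frame) is an assumption for illustration:

```python
def adjust_timestamps(stream, offset):
    """Time stamp adjustment module: shift every frame by a fixed offset."""
    return [(ts + offset, frame) for ts, frame in stream]

def concatenate_streams(first, second):
    """Video stream concatenation module: append the adjusted second
    stream to the end of the first."""
    end = first[-1][0] + 1 if first else 0   # one tick after the last frame
    return first + adjust_timestamps(second, end)

combined = concatenate_streams([(0, "a"), (1, "b")], [(0, "c"), (1, "d")])
```

In a real container format the offset would be computed from durations and presentation/decode time stamps rather than a unit tick.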

Book ChapterDOI
03 Jul 2012
TL;DR: The paper shows that the tight bound for the conversion of alternating finite automata into nondeterministic finite automata with a single initial state is 2^n + 1.
Abstract: The paper shows that the tight bound for the conversion of alternating finite automata into nondeterministic finite automata with a single initial state is 2^n + 1. This solves an open problem stated by Fellah et al. (Intern. J. Computer Math. 35, 1990, 117–132). Then we examine the complexity of basic operations on languages represented by boolean and alternating finite automata. We get tight bounds for intersection and union, and for concatenation and reversal of languages represented by boolean automata. In the case of star, and of concatenation and reversal of AFA languages, our upper and lower bounds differ by one.

Journal ArticleDOI
TL;DR: A multi-channel Wiener filter (MWF)-based approach is presented for speech and noise scenarios, in which an MWF-based NR algorithm is combined with DRC; it shows less degradation of the SNR improvement, with a low increase in distortion, compared to a serial concatenation.

Proceedings ArticleDOI
07 May 2012
TL;DR: A multi-target DP-TBD algorithm is developed to approximately solve the joint maximization in an efficient and accurate manner, and a generalized detection procedure is adopted so that the dimension of the multi-target state no longer needs to be predetermined.
Abstract: This paper addresses the multi-target tracking problem through the use of dynamic programming based track-before-detect (DP-TBD). The usual way of implementing DP-TBD in a multi-target scenario is to adopt a multi-target state, which is the concatenation of individual target states. But this method involves the solution of a high-dimensional joint maximization which is usually computationally prohibitive. Besides, it suffers from the unknown number of targets, since the dimension of the multi-target state has to be determined before DP integration. In this work, by utilizing the structure of target dynamics, a multi-target DP-TBD algorithm is developed to approximately solve the joint maximization in an efficient and accurate manner. Also, a generalized detection procedure is adopted so that the dimension of the multi-target state no longer needs to be predetermined; therefore single-target and multi-target scenarios are handled in the same framework. A simulation example shows that the proposed algorithm can efficiently and reliably track multiple targets even when targets are in close proximity.

Journal ArticleDOI
TL;DR: This paper investigates the family of languages representable by unique solutions of such systems of language equations, and a method for proving nonrepresentability of particular languages is developed.

Journal ArticleDOI
TL;DR: It is shown that the sequential (respectively, parallel) concatenation of tree languages recognized by deterministic bottom-up automata with m and n states can be recognized by an automaton with (n+1)·(m·2^n + 2^(n-1)) - 1 states.

Journal ArticleDOI
TL;DR: Appropriate unit selection cost functions, namely concatenation cost and target cost, are proposed for syllable based synthesis for improving the quality of text-to-speech synthesis.
Abstract: This paper presents the design and development of syllable-specific unit selection cost functions for improving the quality of text-to-speech synthesis. Appropriate unit selection cost functions, namely concatenation cost and target cost, are proposed for syllable based synthesis. Concatenation costs are defined based on the type of segments present at the syllable joins. The proposed concatenation costs have shown significant reduction in perceptual discontinuity at syllable joins. A three-stage target cost formulation is proposed for selecting appropriate units from the database. Subjective evaluation has shown improvement in the quality of speech at each stage.
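A generic unit-selection step combining the two cost types might look like the greedy sketch below; the paper's actual cost definitions are syllable-specific, and real systems search the full candidate lattice with dynamic programming rather than greedily:

```python
def select_units(targets, candidates, target_cost, concat_cost, w=1.0):
    """Greedy left-to-right unit selection.

    For each target syllable, pick the database unit minimizing a
    weighted sum of target cost and concatenation cost with the
    previously chosen unit. A simplification of Viterbi search.
    """
    chosen, prev = [], None
    for t in targets:
        best = min(candidates[t],
                   key=lambda u: target_cost(t, u)
                   + (w * concat_cost(prev, u) if prev is not None else 0.0))
        chosen.append(best)
        prev = best
    return chosen

# Toy example: units are numbers; target cost prefers even units and
# concatenation cost penalizes jumps between consecutive units.
chosen = select_units(
    ["ka", "ma"],
    {"ka": [1, 2], "ma": [3, 4]},
    target_cost=lambda t, u: u % 2,
    concat_cost=lambda p, u: 0.5 * abs(u - p),
)
```

The paper's contribution sits in how target_cost and concat_cost are defined for syllables, not in the search itself.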

Book ChapterDOI
18 Sep 2012
TL;DR: It is shown that, if the phases of activity of timed automata in a network are disjoint, then location reachability for the network can be decided using a concatenation of timed automata, which reduces the complexity of verification in Uppaal-like tools from quadratic to linear time (in the number of components) while traversing the same reachable state space.
Abstract: The behavior of timed automata consists of idleness and activity, i.e. delay and action transitions. We study a class of timed automata with periodic phases of activity. We show that, if the phases of activity of timed automata in a network are disjoint, then location reachability for the network can be decided using a concatenation of timed automata. This reduces the complexity of verification in Uppaal-like tools from quadratic to linear time (in the number of components) while traversing the same reachable state space. We provide templates which imply, by construction, the applicability of sequential composition, a variant of concatenation, which reflects relevant reachability properties while removing an exponential number of states. Our approach covers the class of TDMA-based (Time Division Multiple Access) protocols, e.g. FlexRay and TTP. We have successfully applied our approach to an industrial TDMA-based protocol of a wireless fire alarm system with more than 100 sensors.

Proceedings Article
01 May 2012
TL;DR: In this paper a method based on a co-evolutionary asymptotic probabilistic genetic algorithm is suggested and tested for determining the most likely sentence corresponding to the recognized chain of syllables within an acceptable time frame.
Abstract: A syllable-based language model reduces the lexicon size by hundreds of times. It is especially beneficial in case of highly inflective languages like Russian due to the abundance of word forms according to various grammatical categories. However, the main arising challenge is the concatenation of recognised syllables into the originally spoken sentence or phrase, particularly in the presence of syllable recognition mistakes. Natural fluent speech does not usually incorporate clear information about the outside borders of the spoken words. In this paper a method for the syllable concatenation and error correction is suggested and tested. It is based on the designed co-evolutionary asymptotic probabilistic genetic algorithm for the determination of the most likely sentence corresponding to the recognized chain of syllables within an acceptable time frame. The advantage of this genetic algorithm modification is the minimum number of settings to be manually adjusted compared to the standard algorithm. Data used for acoustic and language modelling are also described here. A special issue is the preprocessing of the textual data, particularly, handling of abbreviations, Arabic and Roman numerals, since their inflection mostly depends on the context and grammar.

Book ChapterDOI
23 Jul 2012
TL;DR: The iterated variants chop-star and chop-plus are defined similarly to the classical operations Kleene star and plus, and the state complexity of chop operations on unary and/or finite languages is investigated, obtaining bounds similar to those for the classical operations.
Abstract: We continue our research on the descriptional complexity of chop operations. Informally, the chop of two words is like their concatenation with the touching letters merged if they are equal; otherwise their chop is undefined. The iterated variants chop-star and chop-plus are defined similarly to the classical operations Kleene star and plus. We investigate the state complexity of chop operations on unary and/or finite languages, and obtain similar bounds as for the classical operations.
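The informal definition of chop translates directly into code; returning None stands in for "undefined", and treating chops involving the empty word as undefined is an assumption of this sketch:

```python
def chop(u, v):
    """Chop of u and v: their concatenation with the touching letters
    merged when they agree; None models 'undefined' (including for
    empty words, which have no touching letter)."""
    if u and v and u[-1] == v[0]:
        return u + v[1:]
    return None

chop("abc", "cd")   # the touching "c" is merged
chop("ab", "cd")    # touching letters differ, so the chop is undefined
```

Note that, unlike concatenation, chop applied across a language can shrink or empty the result, which is what drives the different state complexity bounds.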

Patent
10 May 2012
TL;DR: In this article, acknowledgements, negative acknowledgements and discontinuous transmission indications for a concatenation window that spans a plurality of subframes are concatenated into a compiled uplink message, which is then mapped to a fixed subframe regardless of the uplink and downlink subframe configuration.
Abstract: Acknowledgements, negative acknowledgements and discontinuous transmission indications for a concatenation window that spans a plurality of subframes are concatenated into a compiled uplink message, which is then mapped to a fixed subframe regardless of the uplink and downlink subframe configuration of the concatenation window. In one example each radio frame has ten subframes and the concatenation window spans subframe 9 of radio frame N-1 through subframe 8 of radio frame N, and the fixed subframe is subframe 2 of radio frame N+1. In other examples the plurality of subframes span a length less than one radio frame, or a length equal to N radio frames (N>1). In further examples the downlink assignment index is re-defined to indicate the PUCCH resource in the fixed subframe, and/or to indicate whether or not multiple PDSCHs are allocated in the concatenation window.

Book ChapterDOI
21 Feb 2012
TL;DR: It is established that the worst-case state complexity of bottom-up star is $(n + \frac{3}{2}) \cdot 2^{n-1}$ , which differs by an order of magnitude from the corresponding result for string languages.
Abstract: The concatenation of trees can be defined either as a sequential or a parallel operation, and the corresponding iterated operation gives an extension of Kleene-star to tree languages. Since the sequential tree concatenation is not associative, we get two essentially different iterated sequential concatenation operations that we call the bottom-up star and top-down star operation, respectively. We establish that the worst-case state complexity of bottom-up star is $(n + \frac{3}{2}) \cdot 2^{n-1}$ . The bound differs by an order of magnitude from the corresponding result for string languages. The state complexity of top-down star is similar as in the string case. The iteration of the parallel concatenation has to be defined slightly differently in order to yield a regularity preserving operation.

Journal ArticleDOI
TL;DR: This paper proposes a different image super-resolution reconstruction scheme based on newly advanced results in sparse representation and the recently presented SR methods built on this model: a subsidiary dictionary is learned online using a degradation estimate of the given low-resolution image and concatenated with a main dictionary learned offline from many high-quality natural images.