
Showing papers in "International Journal of Foundations of Computer Science in 2008"


Journal ArticleDOI
TL;DR: The algebraic approach is used to show decidability of expressibility within fragments of first-order logic over finite words and to give a proof of Simon's theorem on factorization forests restricted to aperiodic monoids.
Abstract: We consider fragments of first-order logic over finite words. In particular, we deal with first-order logic with a restricted number of variables and with the lower levels of the alternation hierarchy. We use the algebraic approach to show decidability of expressibility within these fragments. As a byproduct, we survey several characterizations of the respective fragments. We give complete proofs for all characterizations and we provide all necessary background. Some of the proofs seem to be new and simpler than those which can be found elsewhere. We also give a proof of Simon's theorem on factorization forests restricted to aperiodic monoids because this is simpler and sufficient for our purpose.

98 citations


Journal ArticleDOI
TL;DR: This paper presents a new technique that is based on the notion of simulation, which is still optimal from the computational complexity point of view, and opens up the possibility of devising composition in a "just-in-time" fashion.
Abstract: In this paper we study the issue of service composition, for services that export a representation of their behavior in the form of a finite deterministic transition system. In particular, given a specification of the target service requested by the client as a finite deterministic transition system, the problem we face is how we can exploit the computations of the available services for realizing the computations of the target service. While ways to tackle such a problem are known, in this paper we present a new technique that is based on the notion of simulation, which is still optimal from the computational complexity point of view. Notably, such a technique opens up the possibility of devising composition in a "just-in-time" fashion. Indeed, we show that, by exploiting simulation, it is actually possible to implicitly compute all possible compositions at once, and delay the choice of the actual composition to run-time.
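As a toy illustration of the simulation-based idea, the greatest simulation relation between two finite deterministic transition systems can be computed by a straightforward fixpoint iteration. The encoding, state names, and example systems below are hypothetical, and the sketch omits the orchestrator-synthesis step described in the paper:

```python
def greatest_simulation(target, avail):
    # greatest simulation of `target` by `avail`; both are deterministic
    # transition systems encoded as state -> {action: successor}
    rel = {(t, s) for t in target for s in avail}
    changed = True
    while changed:
        changed = False
        for t, s in list(rel):
            # (t, s) survives iff every action of t is matched by s
            # with successors that are again related
            ok = all(a in avail[s] and (target[t][a], avail[s][a]) in rel
                     for a in target[t])
            if not ok:
                rel.discard((t, s))
                changed = True
    return rel

# hypothetical target service and available service
target = {"t0": {"search": "t1"}, "t1": {"buy": "t0"}}
avail = {"s0": {"search": "s1", "browse": "s0"}, "s1": {"buy": "s0"}}
sim = greatest_simulation(target, avail)
print(("t0", "s0") in sim)  # True
```

Because the relation only shrinks, the iteration terminates; the surviving pairs tell us, for every target state, which available-service states can take over from there.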

90 citations


Journal ArticleDOI
TL;DR: This work considers two approaches to building a concise representation, respecting the underlying structural relationships while hiding superfluous information: a pruning strategy based on the notion of concept stability and a representational improvement based on nested line diagrams and "zooming".
Abstract: We present an application of formal concept analysis aimed at representing a meaningful structure of knowledge communities in the form of a lattice-based taxonomy. The taxonomy groups together agents (community members) who develop a set of notions. If no constraints are imposed on how it is built, a knowledge community taxonomy may become extremely complex and difficult to analyze. We consider two approaches to building a concise representation respecting the underlying structural relationships, while hiding uninteresting and/or superfluous information: a pruning strategy based on the notion of concept stability and a representational improvement based on nested line diagrams and “zooming”. We illustrate the methods on two examples: a community of embryologists and a community of researchers in complex systems.

72 citations


Journal ArticleDOI
TL;DR: It is shown that equivalence can be decided in polynomial time, using a reduction to the equivalence problem for probabilistic automata, which is known to be solvable in polynomial time.
Abstract: We consider the equivalence problem for labeled Markov chains (LMCs), where each state is labeled with an observation. Two LMCs are equivalent if every finite sequence of observations has the same probability of occurrence in the two LMCs. We show that equivalence can be decided in polynomial time, using a reduction to the equivalence problem for probabilistic automata, which is known to be solvable in polynomial time. We provide an alternative algorithm to solve the equivalence problem, which is based on a new definition of bisimulation for probabilistic automata. We also extend the technique to decide the equivalence of weighted probabilistic automata.
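The equivalence notion can be illustrated with a brute-force check that compares the probability of every observation sequence up to a bounded length. This exponential probe is only for illustration, not the paper's polynomial-time reduction, and the encoding and example chains are hypothetical:

```python
from itertools import product

def seq_prob(chain, labels, init, obs):
    # probability that the labeled Markov chain emits the sequence `obs`;
    # chain: state -> {successor: prob}, labels: state -> observation,
    # init: state -> initial prob
    dist = {s: p for s, p in init.items() if labels[s] == obs[0]}
    for o in obs[1:]:
        nxt = {}
        for s, p in dist.items():
            for s2, q in chain[s].items():
                if labels[s2] == o:
                    nxt[s2] = nxt.get(s2, 0.0) + p * q
        dist = nxt
    return sum(dist.values())

def equivalent_up_to(k, alphabet, lmc1, lmc2):
    # brute-force comparison on all sequences of length <= k; the paper's
    # algorithm decides full equivalence without this enumeration
    return all(abs(seq_prob(*lmc1, obs) - seq_prob(*lmc2, obs)) < 1e-12
               for n in range(1, k + 1)
               for obs in product(alphabet, repeat=n))

# hypothetical LMCs: branching to two 'b'-states vs. a single 'b'-state
m1 = ({0: {1: 0.5, 2: 0.5}, 1: {1: 1.0}, 2: {2: 1.0}},
      {0: 'a', 1: 'b', 2: 'b'}, {0: 1.0})
m2 = ({0: {1: 1.0}, 1: {1: 1.0}}, {0: 'a', 1: 'b'}, {0: 1.0})
print(equivalent_up_to(5, "ab", m1, m2))  # True
```

The two chains have different state spaces yet assign the same probability to every observation sequence, which is exactly the equivalence the paper decides.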

57 citations


Journal ArticleDOI
TL;DR: It is shown that conjunctive grammars over a unary alphabet can generate non-regular languages, giving a negative answer to Okhotin's question of whether such grammars generate only regular languages.
Abstract: Conjunctive grammars, introduced by Okhotin, extend context-free grammars by an additional operation of intersection in the body of any production of the grammar. Several theorems and algorithms for context-free grammars generalize to the conjunctive case. Okhotin posed nine open problems concerning those grammars. One of them was the question whether conjunctive grammars over a unary alphabet generate only regular languages. We give a negative answer, contrary to the conjectured positive one, by constructing a conjunctive grammar for the language {a^(4^n) : n ∈ ℕ}. We also generalize this result: for every set of natural numbers L we show that {a^n : n ∈ L} is a conjunctive unary language whenever the set of base-k representations of the elements of L is regular, for arbitrary k.
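The base-k criterion can be probed directly: {a^(4^n)} is exactly the set of unary strings whose length has a base-4 representation matching the regular expression 10*. A minimal sketch (the helper name and interface are ours):

```python
import re

def in_language(word, base=4, pattern=r"10*"):
    # membership in {a^m : base-`base` representation of m matches `pattern`};
    # for base 4 and pattern "10*" this is exactly {a^(4^n) : n >= 0}
    if set(word) - {"a"}:
        return False
    m = len(word)
    digits = ""
    while m:
        digits = str(m % base) + digits
        m //= base
    return re.fullmatch(pattern, digits or "0") is not None

print([n for n in range(1, 70) if in_language("a" * n)])  # [1, 4, 16, 64]
```

This only checks membership; the paper's contribution is that such languages, regular in base-k "disguise" but non-regular as unary languages, are generated by conjunctive grammars.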

56 citations



Journal ArticleDOI
TL;DR: This paper provides a complete classification of the classes of bipartite graphs defined by a single forbidden induced bipartite subgraph with respect to bounded/unbounded clique-width.
Abstract: In this paper, we provide a complete classification of the classes of bipartite graphs defined by a single forbidden induced bipartite subgraph with respect to bounded/unbounded clique-width.

43 citations


Journal ArticleDOI
TL;DR: The problem of computing the relative entropy of unambiguous probabilistic automata can be formulated as a shortest-distance problem over an appropriate semiring; efficient exact and approximate algorithms are given for its computation, and the results of experiments demonstrating the practicality of these algorithms are reported.
Abstract: We present an exhaustive analysis of the problem of computing the relative entropy of two probabilistic automata. We show that the problem of computing the relative entropy of unambiguous probabilistic automata can be formulated as a shortest-distance problem over an appropriate semiring, give efficient exact and approximate algorithms for its computation in that case, and report the results of experiments demonstrating the practicality of our algorithms for very large weighted automata. We also prove that the computation of the relative entropy of arbitrary probabilistic automata is PSPACE-complete. The relative entropy is used in a variety of machine learning algorithms and applications to measure the discrepancy of two distributions. We examine the use of the symmetrized relative entropy in machine learning algorithms and show that, contrary to what is suggested by a number of publications in that domain, the symmetrized relative entropy is neither positive definite symmetric nor negative definite symmetric, which limits its use and application in kernel methods. In particular, the convergence of training for learning algorithms is not guaranteed when the symmetrized relative entropy is used directly as a kernel, or as the operand of an exponential as in the case of Gaussian kernels. Finally, we show that our algorithm for the computation of the entropy of an unambiguous probabilistic automaton can be generalized to the computation of the norm of an unambiguous probabilistic automaton by using a monoid morphism. In particular, this yields efficient algorithms for the computation of the Lp-norm of a probabilistic automaton.
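For finite distributions, the (symmetrized) relative entropy is easy to compute directly; a minimal sketch showing that plain KL divergence is asymmetric while the symmetrized version is symmetric. As the abstract notes, symmetry alone is not enough for it to be a valid kernel:

```python
from math import log

def kl(p, q):
    # relative entropy D(p || q); assumes q[i] > 0 wherever p[i] > 0
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sym_kl(p, q):
    # symmetrized relative entropy D(p || q) + D(q || p)
    return kl(p, q) + kl(q, p)

p, q = [0.9, 0.1], [0.5, 0.5]
print(kl(p, q) == kl(q, p))          # False: KL itself is asymmetric
print(sym_kl(p, q) == sym_kl(q, p))  # True: the symmetrized version is not
```

The paper's negative result is about definiteness, not symmetry: even exp(-sym_kl(p, q)) can produce Gram matrices with negative eigenvalues, so convergence of kernel methods built on it is not guaranteed.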

40 citations


Journal ArticleDOI
TL;DR: An asymptotic lower bound for the maxrun function ρ(n) = max{number of runs in x : x a string of length n} is presented: for any ε > 0, (α − ε)n is an asymptotic lower bound.
Abstract: An asymptotic lower bound for the maxrun function ρ(n) = max{number of runs in x : x a string of length n} is presented. More precisely, it is shown that for any ε > 0, (α − ε)n is an asymptotic lower bound, where α is a constant determined by the construction. A recent construction of an increasing sequence of binary strings “rich in runs” is modified and extended to prove the result.
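The maxrun function can be explored naively: a run is a maximal substring whose smallest period is at most half its length. The cubic-time counter below is only an illustrative baseline of ours, not the paper's construction:

```python
def count_runs(s):
    # a run: a substring s[i..j] whose smallest period p satisfies
    # 2*p <= j - i + 1 and which cannot be extended in either direction
    # while keeping period p (naive O(n^3) counter)
    n, total = len(s), 0
    for i in range(n):
        for j in range(i + 1, n):
            t = s[i:j + 1]
            L = j - i + 1
            # smallest period of t
            p = next(p for p in range(1, L + 1)
                     if all(t[k] == t[k + p] for k in range(L - p)))
            if 2 * p > L:
                continue
            if i > 0 and s[i - 1] == s[i - 1 + p]:
                continue  # extendable to the left with the same period
            if j + 1 < n and s[j + 1] == s[j + 1 - p]:
                continue  # extendable to the right with the same period
            total += 1
    return total

print(count_runs("aabaab"))  # 3: "aa", "aa", and "aabaab" itself
```

Running this over all binary strings of a fixed length gives small values of ρ(n); the paper's point is that ρ(n)/n is bounded below by α − ε for large n.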

37 citations


Journal ArticleDOI
TL;DR: It is shown that reachability analysis performed by supercompilation can be seen as the proof of a correctness condition by induction.
Abstract: We present an approach to verification of parameterized systems, which is based on a program transformation technique known as supercompilation. In this approach, statements about safety properties of a system to be verified are translated into statements about properties of the program that simulates and tests the system. Supercompilation is then used to establish the required properties of the program. In this paper we show that reachability analysis performed by supercompilation can be seen as the proof of a correctness condition by induction. We formulate suitable induction principles and proof strategies and illustrate their use by examples of verification of parameterized protocols.

35 citations


Journal ArticleDOI
TL;DR: The decidability of the emptiness and reachability problems for these stateless automata is investigated, and it is shown that the results are applicable to similar questions concerning certain variants of P systems, namely, token systems and sequential tissue-like P systems.
Abstract: We introduce the notion of stateless multihead two-way (respectively, one-way) NFAs and stateless multicounter systems and relate them to P systems and vector addition systems. In particular, we investigate the decidability of the emptiness and reachability problems for these stateless automata and show that the results are applicable to similar questions concerning certain variants of P systems, namely, token systems and sequential tissue-like P systems.

Journal ArticleDOI
TL;DR: It is shown not only that a black hole can be located in a ring using tokens with scattered agents, but also that the problem is solvable even if the ring is un-oriented; with a constant number of agents, the move cost is O(n log n), which is optimal.
Abstract: A black hole in a network is a highly harmful host that disposes of any incoming agents upon their arrival. Determining the location of a black hole in a ring network has been studied when each node is equipped with a whiteboard. Recently, the Black Hole Search problem was solved in a less demanding and less expensive token model with co-located agents. Whether the problem can be solved with scattered agents in a token model remained an open problem. In this paper, we show not only that a black hole can be located in a ring using tokens with scattered agents, but also that the problem is solvable even if the ring is un-oriented. More precisely, first we prove that the black hole search problem can be solved using only three scattered agents. We then show that, with k (k ⩾ 4) scattered agents, the black hole can be located in O(kn + n log n) moves. Moreover, when k is a constant number, the move cost can be reduced to O(n log n), which is optimal. These results hold even if both agents and nodes are anonymous.

Journal ArticleDOI
TL;DR: It is shown that the upper bounds are tight for a variable-sized alphabet that may depend on the size of the input deterministic finite-state automata, and it is proved that the upper bounds are unreachable for any fixed-sized alphabet.
Abstract: We investigate the state complexity of union and intersection for finite languages. Note that the problem of obtaining tight bounds for both operations was open. First we compute upper bounds using structural properties of minimal deterministic finite-state automata for finite languages. Then, we show that the upper bounds are tight if we have a variable-sized alphabet that can depend on the size of the input deterministic finite-state automata. In addition, we prove that the upper bounds are unreachable for any fixed-sized alphabet.
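The upper bounds concern the classical product construction, which can be sketched directly; the example DFAs below (accepting the finite languages {a} and {aa}, with 'D' as the dead state) are hypothetical:

```python
from collections import deque

def product_dfa(d1, s1, f1, d2, s2, f2, alphabet, union=True):
    # classical product construction on complete DFAs; the count of
    # reachable product states is what state-complexity bounds measure
    start = (s1, s2)
    seen, queue, trans = {start}, deque([start]), {}
    while queue:
        p, q = queue.popleft()
        for a in alphabet:
            nxt = (d1[p][a], d2[q][a])
            trans[(p, q), a] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    accept = set()
    for p, q in seen:
        ok = (p in f1 or q in f2) if union else (p in f1 and q in f2)
        if ok:
            accept.add((p, q))
    return seen, trans, accept

# hypothetical complete DFAs over {a} for the finite languages {a} and {aa}
d1 = {0: {"a": 1}, 1: {"a": "D"}, "D": {"a": "D"}}
d2 = {0: {"a": 1}, 1: {"a": 2}, 2: {"a": "D"}, "D": {"a": "D"}}
states, trans, acc = product_dfa(d1, 0, {1}, d2, 0, {2}, "a")
print(len(states), len(acc))  # 4 2
```

The paper's question is how large `states` must be in the worst case over all finite languages, and whether that bound is reachable for a fixed alphabet.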

Journal ArticleDOI
Sebastian Link1
TL;DR: It is shown that Lien's axiomatisation of multivalued dependencies with null values (NMVDs) does not adequately reflect the role of the complementation rule, and a correspondence between (minimal) axiomatisations in fixed universes that do reflect the property of complementation and (minimal) axiomatisations in undetermined universes is shown.
Abstract: The implication of multivalued dependencies (MVDs) in relational databases has originally and independently been defined in the context of some fixed finite universe by Delobel, Fagin, and Zaniolo. Biskup observed that the original axiomatisation for MVD implication does not reflect the fact that the complementation rule is merely a means to achieve database normalisation. He proposed two alternative ways to overcome this deficiency: i) an axiomatisation that does represent the role of the complementation rule adequately, and ii) a notion of MVD implication in which the underlying universe of attributes is left undetermined together with an axiomatisation of this notion. In this paper we investigate multivalued dependencies with null values (NMVDs) as defined and axiomatised by Lien. We show that Lien's axiomatisation does not adequately reflect the role of the complementation rule, and extend Biskup's findings for MVDs in total database relations to NMVDs in partial database relations. Moreover, a correspondence between (minimal) axiomatisations in fixed universes that do reflect the property of complementation and (minimal) axiomatisations in undetermined universes is shown.

Journal ArticleDOI
TL;DR: It is shown that, while for determinism and nondeterminism such lower bounds are optimal even with respect to unary languages, for alternation optimal lower bounds for unary language turn out to be strictly higher than those for languages over alphabets with two or more symbols.
Abstract: We study lower bounds on space and input head reversals for deterministic, nondeterministic, and alternating Turing machines accepting nonregular languages. Three notions of space, namely strong, middle, weak are considered, and another notion, called accept, is introduced. In all cases, we obtain tight lower bounds. Moreover, we show that, while for determinism and nondeterminism such lower bounds are optimal even with respect to unary languages, for alternation optimal lower bounds for unary languages turn out to be strictly higher than those for languages over alphabets with two or more symbols.

Journal ArticleDOI
TL;DR: Representations of recursively enumerable, regular, and context-free languages based on insertion systems of small weight are presented: each recursively enumerable language L can be written in the form L = h(L(γ) ∩ D), where γ is an insertion system of weight (3, 0) (at most three symbols are inserted in a context of length zero), h is a projection, and D is a Dyck language; regular and context-free languages admit similar representations with star languages, using insertion systems of weight (2, 0) and (3, 0), respectively.
Abstract: Insertion-deletion operations are much investigated in linguistics and in DNA computing, and several characterizations of Turing computability and characterizations or representations of languages in the Chomsky hierarchy were obtained in this framework. In this note we contribute to this research direction with a new characterization of this type, as well as with representations of regular and context-free languages, mainly starting from context-free insertion systems of as small as possible complexity. For instance, each recursively enumerable language L can be represented in a way similar to the celebrated Chomsky–Schützenberger representation of context-free languages, i.e., in the form L = h(L(γ) ∩ D), where γ is an insertion system of weight (3, 0) (at most three symbols are inserted in a context of length zero), h is a projection, and D is a Dyck language. A similar representation can be obtained for regular languages, involving insertion systems of weight (2, 0) and star languages, as well as for context-free languages – this time using insertion systems of weight (3, 0) and star languages.
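A context-free insertion step of weight (k, 0) simply inserts a short string at an arbitrary position, with no context checked on either side. The toy system below, which inserts the pair "ab", generates the balanced (Dyck-like) words over {a, b}, echoing the role of the Dyck language D in the representation; the names and the example system are ours:

```python
def insert_step(words, inserts):
    # one derivation step of a context-free insertion system of
    # weight (k, 0): any string from `inserts` (length <= k) may be
    # inserted at any position of any current word
    out = set()
    for w in words:
        for u in inserts:
            for i in range(len(w) + 1):
                out.add(w[:i] + u + w[i:])
    return out

# hypothetical weight-(2, 0) system inserting the matched pair "ab"
lang = {""}
for _ in range(3):
    lang |= insert_step(lang, {"ab"})
print(sorted(w for w in lang if len(w) == 4))  # ['aabb', 'abab']
```

Every word produced this way is balanced, and conversely every balanced word arises by such insertions, which is why intersecting with a Dyck language and projecting is such a natural representation device.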

Journal ArticleDOI
TL;DR: Although the extended model of spiking neural P systems is used, the restrictions of decaying spikes and/or total spiking do not allow for the generation or acceptance of more than regular sets of natural numbers.
Abstract: We consider extended variants of spiking neural P systems with decaying spikes (i.e., the spikes have a limited lifetime) and/or total spiking (i.e., the whole contents of a neuron are erased when it spikes). Although we use the extended model of spiking neural P systems, these restrictions of decaying spikes and/or total spiking do not allow for the generation or the acceptance of more than regular sets of natural numbers.

Journal ArticleDOI
TL;DR: Evaluation of a design and architecture for browsing and searching MPEG-7 images indicates that image navigation via a concept lattice is a highly successful interface paradigm and provides general insights for interface design using concept lattices.
Abstract: This paper presents the evaluation of a design and architecture for browsing and searching MPEG-7 images. Our approach is novel in that it exploits concept lattices for the representation and navigation of image content. Several concept lattices provide the foundation for the system (called IMAGE-SLEUTH), each representing a different search context: one for image shape, another for color and luminance, and a third for semantic content, namely image browsing based on a metadata ontology. The test collection used for our study is a sub-set of MPEG-7 images created from the popular The Sims 2™ game. The evaluation of the IMAGE-SLEUTH program is based on usability testing among 29 subjects. The results of the study are used to build an improved second-generation program – IMAGE-SLEUTH2; however, these results also indicate that image navigation via a concept lattice is a highly successful interface paradigm. Our results provide general insights for interface design using concept lattices that will be of interest to any applied research and development using concept lattices.

Journal ArticleDOI
TL;DR: A thorough study of the succinct system of minimal generators (SSMG) as formerly defined is given, and a new lossless reduction of the MG set is introduced that overcomes its limitations and allows all redundant association rules to be derived from the maintained ones.
Abstract: In data mining applications, large contexts are handled, which usually results in a considerably large set of frequent itemsets, even for high values of the minimum support threshold. An interesting solution then consists in applying an appropriate closure operator that structures frequent itemsets into equivalence classes, such that two itemsets belong to the same class if they appear in the same sets of objects. Among equivalent itemsets, minimal elements (w.r.t. the number of items) are called minimal generators (MGs), while their associated closure is called a closed itemset (CI) and is the largest element within the corresponding equivalence class. Thus, the pairs composed of MGs and their associated CIs make it easier to localize each itemset, since it is necessarily encompassed by an MG and a CI. In addition, they offer informative implication/association rules, with minimal premises and maximal conclusions, which losslessly represent the entire rule set. These important concepts - MG and CI - were hence at the origin of various works. Nevertheless, the inherent absence of a unique MG associated to a given CI leads to an intra-class combinatorial redundancy that entails exhaustive storage and impractical use. This motivated an in-depth study towards a lossless reduction of this redundancy. This study was started by Dong et al., who introduced the succinct system of minimal generators (SSMG) as an attempt to eliminate the redundancy within this set. In this paper, we give a thorough study of the SSMG as formerly defined by Dong et al. This system will be shown to suffer from some flaws. As a remedy, we introduce a new lossless reduction of the MG set that overcomes its limitations. The new SSMG will then be incorporated into the framework of generic bases of association rules. This makes it possible to maintain only succinct and informative rules. After that, we give a thorough formal study of the related inference mechanisms that allow all redundant association rules to be derived from the maintained ones. Finally, an experimental evaluation shows the utility of our approach in eliminating a significant rate of redundant information.
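The closure operator, closed itemsets, and minimal generators from the abstract can be computed by brute force on a toy context; the transaction database below is hypothetical, and real miners avoid this exponential enumeration:

```python
from itertools import combinations

# toy transaction database (hypothetical): three objects over three items
db = [{"a", "b", "c"}, {"a", "b", "c"}, {"b"}]
items = {"a", "b", "c"}

def closure(itemset):
    # largest itemset appearing in exactly the same objects:
    # intersect all transactions that contain `itemset`
    support = [t for t in db if itemset <= t]
    out = set(items)
    for t in support:
        out &= t
    return frozenset(out) if support else frozenset(items)

def minimal_generators():
    # map each closed itemset (CI) to its minimal generators (MGs):
    # itemsets none of whose proper subsets has the same closure
    gens = {}
    for r in range(len(items) + 1):
        for combo in combinations(sorted(items), r):
            s = frozenset(combo)
            c = closure(s)
            if all(closure(s - {x}) != c for x in s):
                gens.setdefault(c, []).append(s)
    return gens

gens = minimal_generators()
# the CI {a, b, c} has two MGs, {a} and {c}: MGs need not be unique,
# which is exactly the intra-class redundancy the SSMG tries to reduce
print(sorted("".join(sorted(g)) for g in gens[frozenset("abc")]))  # ['a', 'c']
```

Even this tiny context exhibits the phenomenon the paper studies: one equivalence class with several minimal generators, only some of which need to be stored.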

Journal ArticleDOI
TL;DR: Finite probabilistic automata with an isolated cut-point can be exponentially smaller than any equivalent deterministic finite automaton; since the automata constructed are reversible, these results imply a similar result for quantum finite automata.
Abstract: The size (the number of states) of finite probabilistic automata with an isolated cut-point can be exponentially smaller than the size of any equivalent finite deterministic automaton. However, the proof is non-constructive. The result is presented in two versions. The first version depends on Artin's Conjecture (1927) in number theory. The second version does not depend on unproved conjectures, but the numerical estimates are worse. In both versions the method of proof does not allow an explicit description of the languages used. Since our finite probabilistic automata are reversible, these results imply a similar result for quantum finite automata.

Journal ArticleDOI
TL;DR: A parameterized algorithm is proposed to learn rules in the presence of a taxonomy; it works on a non-completed context and can compute various kinds of concept-based rules.
Abstract: Formal Concept Analysis (FCA) is a natural framework to learn from examples. Indeed, learning from examples results in sets of frequent concepts whose extents contain mostly these examples. In terms of association rules, the above learning strategy can be seen as searching for the premises of rules where the consequence is fixed. In its most classical setting, FCA considers attributes as a non-ordered set. When the attributes of the context are partially ordered to form a taxonomy, Conceptual Scaling allows the taxonomy to be taken into account by producing a context completed with all attributes deduced from the taxonomy. The drawback, however, is that concept intents then contain redundant information. In this article, we propose a parameterized algorithm to learn rules in the presence of a taxonomy. It works on a non-completed context. The taxonomy is taken into account during the computation so as to remove all redundancies from intents. By simply changing one of its operations, this parameterized algorithm can compute various kinds of concept-based rules. We present instantiations of the parameterized algorithm to learn rules as well as to compute the set of frequent concepts.

Journal ArticleDOI
TL;DR: Two algorithms are provided to solve the problem of synchronizing the activity of all the membranes of a P system; one of them works in time 3h, where h is the height of the tree defining the membrane structure of the considered P system.
Abstract: We consider the problem of synchronizing the activity of all the membranes of a P system. After pointing out the connection with a similar problem dealt with in the field of cellular automata, where the problem is called the firing squad synchronization problem (FSSP for short), we provide two algorithms to solve this problem. One algorithm is non-deterministic and works in time 3h, where h is the height of the tree defining the membrane structure of the considered P system. The other algorithm is deterministic and works in time 4n + 2h, where n is the number of membranes of the considered P system. Finally, we suggest various directions in which to continue this work.

Journal ArticleDOI
Steven Lindell1
TL;DR: The paper uses singulary vocabularies to analyze first-order definability over doubly-linked data structures, provides a syntactically based proof using counting quantifiers, and makes precise the notion of implicit calculability for arbitrary-arity first-order formulas.
Abstract: We use singulary vocabularies to analyze first-order definability over doubly-linked data structures. Singulary vocabularies contain only monadic predicate and monadic function symbols. A class of mathematical structures in any vocabulary can be elementarily interpreted in a singulary vocabulary, while preserving notions of total size and degree. Doubly-linked data structures are a special case of bounded-degree finite structures in which there are reciprocal connections between elements, corresponding closely with physically feasible models of information storage. They can be associated with logical models involving unary relations and bijective functions in what we call an invertible singulary vocabulary. Over classes of these models, there is a normal form for first-order logic which eliminates all quantification of dependent variables. The paper provides a syntactically based proof using counting quantifiers. It also makes precise the notion of implicit calculability for arbitrary arity first-order formulas. Linear-time evaluation of first-order logic over doubly-linked data structures becomes a direct corollary. Included is a discussion of why these special data structures are appropriate for physically realizable models of information.

Journal ArticleDOI
TL;DR: This paper designs a minimum-process checkpointing algorithm for mobile distributed systems, where no useless checkpoint is taken, and reduces the blocking of processes by allowing the processes to do their normal computations, send messages and receive selective messages during their blocking period.
Abstract: A checkpoint is a designated place in a program at which normal processing is interrupted specifically to preserve the status information necessary to allow resumption of processing at a later time. A checkpointing algorithm for mobile distributed systems needs to handle many new issues, such as mobility, low bandwidth of wireless channels, lack of stable storage on mobile nodes, disconnections, limited battery power, and high failure rate of mobile nodes. These issues make traditional checkpointing techniques unsuitable for such environments. Minimum-process coordinated checkpointing is an attractive approach to introducing fault tolerance in mobile distributed systems transparently. This approach is domino-free, requires at most two checkpoints of a process on stable storage, and forces only a minimum number of processes to checkpoint. But it requires extra synchronization messages, blocking of the underlying computation, or taking some useless checkpoints. In this paper, we design a minimum-process checkpointing algorithm for mobile distributed systems in which no useless checkpoints are taken. We reduce the blocking of processes by allowing them to perform their normal computations, send messages, and receive selective messages during their blocking period.

Journal ArticleDOI
TL;DR: In this article, the problem considered is placing the base stations such that each point in the entire area can communicate with at least one base station and the total power required for all the base stations in the network is minimized.
Abstract: Due to the recent growth in the demand for mobile communication services in several typical environments, the development of efficient systems for providing specialized services has become an important issue in mobile communication research. An important sub-problem in this area is the base-station placement problem, where the objective is to identify the locations for placing the base stations. Mobile terminals communicate with their respective nearest base station, and the base stations communicate with each other over scarce wireless channels in a multi-hop fashion by receiving and transmitting radio signals. Each base station emits a signal periodically, and all the mobile terminals within its range can identify it as their nearest base station after receiving such a radio signal. Here the problem is to position the base stations such that each point in the entire area can communicate with at least one base station, and the total power required for all the base stations in the network is minimized. A different variation of this problem arises when some portions of the target region are not suitable for placing base stations, but communication inside those regions still needs to be provided; consider, for example, large water bodies or steep mountains. In such cases, we need specialized algorithms for efficiently placing the base stations on the boundary of the forbidden zone to provide services inside that region.

Journal ArticleDOI
TL;DR: It is shown that one-dimensional piecewise affine maps are equivalent to pseudo-billiard or so-called “strange billiard” systems, and that the use of more general classes of functions leads to undecidability of the reachability problem for one-dimensional piecewise maps.
Abstract: In this paper we analyze the dynamics of one-dimensional piecewise maps. We show that one-dimensional piecewise affine maps are equivalent to pseudo-billiard or so-called “strange billiard” systems. We also show that the use of more general classes of functions leads to undecidability of the reachability problem for one-dimensional piecewise maps.
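A one-dimensional piecewise affine map and a naive bounded-horizon reachability probe can be sketched in a few lines. The tent map below is a standard two-piece example of ours, and the probe says nothing about the paper's undecidability result, which concerns unbounded reachability for richer function classes:

```python
def tent(x):
    # a one-dimensional piecewise affine map on [0, 1] (hypothetical
    # two-piece example): affine on [0, 1/2) and affine on [1/2, 1]
    return 2 * x if x < 0.5 else 2 - 2 * x

def orbit(x0, steps):
    # iterate the map `steps` times from x0
    xs = [x0]
    for _ in range(steps):
        xs.append(tent(xs[-1]))
    return xs

def reaches(x0, target, steps=100, eps=1e-9):
    # bounded-horizon reachability probe; the decision problem the
    # paper studies asks about reachability with no step bound
    return any(abs(x - target) < eps for x in orbit(x0, steps))

print(orbit(0.25, 3))  # [0.25, 0.5, 1.0, 0.0]
```

Such finite simulation can only confirm reachability, never refute it, which is one intuition behind why the general question becomes undecidable for more expressive piecewise maps.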

Journal ArticleDOI
TL;DR: A way to compute the factor lattice directly from input data, i.e. without the need to compute the possibly large original concept lattice, is presented, together with an illustrative example demonstrating the method.
Abstract: The paper presents results on factorization by similarity of fuzzy concept lattices with hedges. A fuzzy concept lattice is a hierarchically ordered collection of clusters extracted from tabular data. The basic idea of factorization by similarity is to have, instead of a possibly large original fuzzy concept lattice, its factor lattice. The factor lattice contains less clusters than the original concept lattice but, at the same time, represents a reasonable approximation of the original concept lattice and provides us with a granular view on the original concept lattice. The factor lattice results by factorization of the original fuzzy concept lattice by a similarity relation. The similarity relation is specified by a user by means of a single parameter, called a similarity threshold. Smaller similarity thresholds lead to smaller factor lattices, i.e. to more comprehensible but less accurate approximations of the original concept lattice. Therefore, factorization by similarity provides a trade-off between comprehensibility and precision. We first describe the notion of factorization. Second, we present a way to compute the factor lattice directly from input data, i.e. without the need to compute the possibly large original concept lattice. Third, we provide an illustrative example to demonstrate our method.

Journal ArticleDOI
TL;DR: It is shown that for all integers n and α such that n ⩽ α ⩽ 2n, there exists a minimal nondeterministic finite automaton of n states with a four-letter input alphabet whose equivalent minimal deterministic finite automaton has exactly α states.
Abstract: We show that for all integers n and α such that n ⩽ α ⩽ 2n, there exists a minimal nondeterministic finite automaton of n states with a four-letter input alphabet whose equivalent minimal deterministic finite automaton has exactly α states. It follows that in the case of a four-letter alphabet, there are no "magic numbers", i.e., no holes in the hierarchy. This improves a similar result obtained by Geffert for a growing alphabet of size n + 2.
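The deterministic blow-up in question comes from the subset construction, which is easy to sketch. The two-state example NFA below is ours and happens to need all 2^2 = 4 subsets (α = 2^n), while the paper shows every α between n and 2^n is attainable with a four-letter alphabet:

```python
from collections import deque

def dfa_size(nfa, start, alphabet):
    # number of reachable states of the subset-construction DFA
    # for an NFA encoded as state -> {symbol: set of successors}
    init = frozenset([start])
    seen, queue = {init}, deque([init])
    while queue:
        cur = queue.popleft()
        for a in alphabet:
            nxt = frozenset(t for s in cur for t in nfa[s].get(a, ()))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

# hypothetical 2-state NFA over {a, b} whose DFA needs all 4 subsets
nfa = {0: {"a": {0, 1}, "b": {1}}, 1: {"b": {0}}}
print(dfa_size(nfa, 0, "ab"))  # 4
```

Probing small NFAs with such a counter is a concrete way to see which values of α actually occur for a given n and alphabet size.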

Journal ArticleDOI
TL;DR: It is proved that any circ-UMFF is a totally ordered set and that a factorization over it must be monotonic; atom words are defined and a study of u, v-atoms is initiated.
Abstract: We say a family of strings over an alphabet is an UMFF if every string has a unique maximal factorization over that family. Foundational work by Chen, Fox and Lyndon established properties of the Lyndon circ-UMFF, which is based on lexicographic ordering. Commencing with the circ-UMFF related to V-order, we then proved analogous factorization families for a further 32 Block-like binary orders. Here we distinguish between UMFFs and circ-UMFFs, and then study the structural properties of circ-UMFFs. These properties give rise to the complete construction of any circ-UMFF. We prove that any circ-UMFF is a totally ordered set and that a factorization over it must be monotonic. We define atom words and initiate a study of u, v-atoms. Applications of circ-UMFFs arise in string algorithmics.
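The archetypal circ-UMFF is the family of Lyndon words, whose unique maximal (Chen–Fox–Lyndon) factorization into a non-increasing sequence of Lyndon words is computable in linear time by Duval's algorithm; a standard sketch:

```python
def lyndon_factorization(s):
    # Duval's algorithm: unique factorization of s into a
    # non-increasing sequence of Lyndon words (Chen-Fox-Lyndon)
    out, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            # extend the current Lyndon prefix (or its repetition)
            k = i if s[k] < s[j] else k + 1
            j += 1
        while i <= k:
            out.append(s[i:i + j - k])
            i += j - k
    return out

print(lyndon_factorization("banana"))  # ['b', 'an', 'an', 'a']
```

The monotonicity the paper proves for general circ-UMFFs is visible here: the factors 'b' ⩾ 'an' ⩾ 'an' ⩾ 'a' form a non-increasing sequence under lexicographic order.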

Journal ArticleDOI
Maurice Margenstern1
TL;DR: In this article, the authors extend Hedlund's characterization of cellular automata to the case of cellular automata in the hyperbolic plane and prove that it holds on the grids {p, q}.
Abstract: In this paper, we look at the extension of Hedlund's characterization of cellular automata to the case of cellular automata in the hyperbolic plane. This requires an additional condition. The new theorem is proved with full details in the case of the pentagrid and in the case of the ternary heptagrid, with enough indications to show that it also holds on the grids {p, q} of the hyperbolic plane.