
Showing papers on "Computability published in 2012"


Book
01 Jan 2012
TL;DR: The changes for the second edition of this introduction to formal languages and automata were evolutionary rather than revolutionary; initially, the author felt that giving solutions to exercises was undesirable. Chapter 1 is an introduction to the theory of computation.
Abstract: Linz, Peter. An Introduction to Formal Languages and Automata / Peter Linz, 3rd ed. The changes for the second edition were evolutionary rather than revolutionary. Initially, I felt that giving solutions to exercises was undesirable because it limited [...]. Chapter 1: Introduction to the Theory of Computation.

1,383 citations


Book
28 Aug 2012
TL;DR: This book focuses on the recent algorithmic results in the field of distributed computing by oblivious mobile robots (unable to remember the past), and introduces the computational model with its nuances, focusing on basic coordination problems: pattern formation, gathering, scattering, leader election, as well as on dynamic tasks such as flocking.
Abstract: The study of what can be computed by a team of autonomous mobile robots, originally started in robotics and AI, has become increasingly popular in theoretical computer science (especially in distributed computing), where it is now an integral part of the investigations on computability by mobile entities. The robots are identical computational entities located and able to move in a spatial universe; they operate without explicit communication and are usually unable to remember the past; they are extremely simple, with limited resources, and individually quite weak. However, collectively the robots are capable of performing complex tasks, and form a system with desirable fault-tolerant and self-stabilizing properties. The research has been concerned with the computational aspects of such systems. In particular, the focus has been on the minimal capabilities that the robots should have in order to solve a problem. This book focuses on the recent algorithmic results in the field of distributed computing by oblivious mobile robots (unable to remember the past). After introducing the computational model with its nuances, we focus on basic coordination problems: pattern formation, gathering, scattering, leader election, as well as on dynamic tasks such as flocking. For each of these problems, we provide a snapshot of the state of the art, reviewing the existing algorithmic results. In doing so, we outline solution techniques, and we analyze the impact of the different assumptions on the robots' computability power. Table of Contents: Introduction / Computational Models / Gathering and Convergence / Pattern Formation / Scatterings and Coverings / Flocking / Other Directions
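
As a concrete illustration of the style of algorithm surveyed here, the sketch below implements one classic convergence rule for oblivious robots: each activated robot observes the current configuration and moves toward its center of gravity. The rule, the scheduler, and all parameters are illustrative assumptions, not an algorithm taken from the book.

```python
# Minimal sketch of a classic convergence rule for oblivious robots:
# each activated robot observes all current positions and moves toward
# their center of gravity. This illustrates the style of algorithm the
# book surveys; it is not a specific algorithm taken from the book.
import random

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def step(positions, active, fraction=0.5):
    """Move each active robot part of the way toward the observed centroid."""
    cx, cy = centroid(positions)
    new_positions = list(positions)
    for i in active:
        x, y = positions[i]
        new_positions[i] = (x + fraction * (cx - x), y + fraction * (cy - y))
    return new_positions

if __name__ == "__main__":
    random.seed(0)
    robots = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(5)]
    for round_no in range(20):
        # An (assumed) fair scheduler activates a random nonempty subset each round.
        active = [i for i in range(len(robots)) if random.random() < 0.7] or [0]
        robots = step(robots, active)
    print(robots)  # positions cluster around a common point
```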

309 citations


01 Jan 2012
TL;DR: This book is highly recommended because it offers not only experience but also lessons; it is a book that will benefit readers from many backgrounds.
Abstract: Where can you find Algorithmic Randomness and Complexity: Theory and Applications of Computability easily? In the bookstore? In an online bookstore? Are you sure? Keep in mind that you will find the book on this site. This book is highly recommended because it offers not only experience but also lessons. The lessons are valuable to whoever reads this book on algorithmic randomness and complexity; it is a book that will benefit readers from many backgrounds.

221 citations


01 Jan 2012
TL;DR: Computability, despite having exact access to its own discrete data type, provides a unique tool for the investigation of ‘unpredictability’ in both Physics and Biology through its fine-grained analysis of undecidability.
Abstract: Computability has its origins in Logic within the framework formed along the original path laid down by the founding fathers of the modern foundational analysis for Mathematics (Frege and Hilbert). This theoretical itinerary, which was largely focused on Logic and Arithmetic, departed in principle from the renewed relations between Geometry and Physics occurring at the time. In particular, the key issue of physical measurement, as our only access to ‘reality’, played no part in its theoretical framework. This is in stark contrast to the position in Physics, where the role of measurement has been a core theoretical and epistemological issue since Poincare, Planck and Einstein. Furthermore, measurement is intimately related to unpredictability, (in-)determinism and the relationship with physical space–time. Computability, despite having exact access to its own discrete data type, provides a unique tool for the investigation of ‘unpredictability’ in both Physics and Biology through its fine-grained analysis of undecidability – note that unpredictability coincides with physical randomness in both classical and quantum frames. Moreover, it now turns out that an understanding of randomness in Physics and Biology is a key component of the intelligibility of Nature. In this paper, we will discuss a few results following

69 citations


Posted Content
TL;DR: The theory of represented spaces is well known to exhibit a strong topological flavour, as discussed by the authors. Represented spaces form the general setting for the study of computability derived from Turing machines and, as such, are the basic entities for endeavors such as computable analysis or computable measure theory.
Abstract: Represented spaces form the general setting for the study of computability derived from Turing machines. As such, they are the basic entities for endeavors such as computable analysis or computable measure theory. The theory of represented spaces is well-known to exhibit a strong topological flavour. We present an abstract and very succinct introduction to the field; drawing heavily on prior work by Escardó, Schröder, and others. Central aspects of the theory are function spaces and various spaces of subsets derived from other represented spaces, and -- closely linked to these -- properties of represented spaces such as compactness, overtness and separation principles. Both the derived spaces and the properties are introduced by demanding the computability of certain mappings, and it is demonstrated that typically various interesting mappings induce the same property.
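
For readers new to the area, here is a minimal, informal sketch of the idea of a representation: a real number is "named" by a function producing rational approximations with known precision, and a map such as addition counts as computable because some program turns names of the inputs into a name of the output. The encoding and helper names below are assumptions for illustration and do not follow the paper's formalism.

```python
# Informal sketch of a representation in the sense of computable analysis:
# a name for a real x is a function n -> rational q_n with |x - q_n| <= 2**-n.
# A map (here: addition) is computable if some program turns names of the
# inputs into a name of the output. Illustrative only; not the paper's setup.
from fractions import Fraction

def name_of(x_as_fraction):
    """A trivial name for a rational real (used here just for testing)."""
    return lambda n: x_as_fraction

def add(name_x, name_y):
    """Realizer for addition: produces a name of x + y from names of x and y."""
    # To reach precision 2**-n for the sum, query each input at precision 2**-(n+1).
    return lambda n: name_x(n + 1) + name_y(n + 1)

if __name__ == "__main__":
    nx = name_of(Fraction(1, 3))
    ny = name_of(Fraction(2, 7))
    nz = add(nx, ny)
    print(nz(10))                      # a rational within 2**-10 of 1/3 + 2/7
    print(Fraction(1, 3) + Fraction(2, 7))
```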

69 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that every nonzero Turing degree contains a set that is neither generically computable nor coarsely computable, and that there is a c.e. set of density 1 that has no computable subset of density 1.
Abstract: Generic decidability has been extensively studied in group theory, and we now study it in the context of classical computability theory. A set A of natural numbers is called generically computable if there is a partial computable function that agrees with the characteristic function of A on its domain D, and furthermore D has density 1, that is, lim n→∞ |{k < n : k ∈ D}| / n = 1.
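
A small worked illustration (not from the paper) of the density condition may help: the density of a set D is lim n→∞ |{k < n : k ∈ D}|/n, and generic computability only asks for a partial algorithm whose domain has density 1. The example set below (the non-squares) is an assumption chosen purely to show partial densities approaching 1.

```python
# Worked illustration (not from the paper) of the density condition:
# the density of a set D of naturals is lim_{n->oo} |{k < n : k in D}| / n.
# A generically computable set only needs a partial algorithm whose domain
# has density 1, i.e. the algorithm may stay silent on a vanishing fraction
# of inputs.
import math

def partial_density(indicator, n):
    """|{k < n : k in D}| / n for a set D given by its indicator function."""
    return sum(1 for k in range(n) if indicator(k)) / n

# Example domain: all numbers that are not perfect squares.
# Its density tends to 1, so an algorithm defined only there
# still "answers almost everywhere" in the density sense.
not_a_square = lambda k: math.isqrt(k) ** 2 != k

for n in (10, 100, 10_000, 1_000_000):
    print(n, partial_density(not_a_square, n))
# The printed values approach 1, matching density 1 for the domain.
```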

58 citations


Posted Content
TL;DR: In this article, a deep investigation is performed on the interrelations between mechanical computations and their mathematical descriptions emerging when a human (the researcher) starts to describe a Turing machine (the object of the study) by different mathematical languages (the instruments of investigation).
Abstract: The Turing machine is one of the simplest abstract computational devices that can be used to investigate the limits of computability. In this paper, Turing machines are considered from several points of view that emphasize the importance and the relativity of the mathematical languages used to describe them. A deep investigation is performed on the interrelations between mechanical computations and their mathematical descriptions emerging when a human (the researcher) starts to describe a Turing machine (the object of the study) by different mathematical languages (the instruments of investigation). Together with traditional mathematical languages using such concepts as 'enumerable sets' and 'continuum', a new computational methodology allowing one to measure the number of elements of different infinite sets is used in this paper. It is shown how the mathematical languages used to describe the machines limit our possibilities to observe them. In particular, notions of observable deterministic and non-deterministic Turing machines are introduced, and conditions ensuring that the latter can be simulated by the former are established.
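
To make the object of study concrete, here is a minimal deterministic Turing machine simulator; the machine and its transition table are illustrative assumptions, and the sketch does not reflect the paper's alternative counting methodology.

```python
# A minimal deterministic Turing machine simulator, included only to make
# the object under discussion concrete; it does not reflect the paper's
# alternative ("grossone"-style) counting methodology.
def run_tm(delta, tape, state="q0", accept="qa", reject="qr", max_steps=10_000):
    """delta: (state, symbol) -> (new_state, new_symbol, move in {-1, +1})."""
    tape = dict(enumerate(tape))   # sparse tape, blank symbol is "_"
    head = 0
    for _ in range(max_steps):
        if state in (accept, reject):
            return state
        symbol = tape.get(head, "_")
        state, tape[head], move = delta[(state, symbol)]
        head += move
    return None  # undecided within the step budget

# Example machine: accept binary strings that contain at least one '1'.
delta = {
    ("q0", "0"): ("q0", "0", +1),
    ("q0", "1"): ("qa", "1", +1),
    ("q0", "_"): ("qr", "_", +1),
}
print(run_tm(delta, "0001"))  # qa
print(run_tm(delta, "0000"))  # qr
```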

56 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyze the autosolvability requirement that a program of a certain kind must solve the halting problem for all programs of that kind, and present novel results concerning this requirement.
Abstract: Instruction sequence is a key concept in practice, but it has as yet not come prominently into the picture in theoretical circles. This paper concerns instruction sequences, the behaviours produced by them under execution, the interaction between these behaviours and components of the execution environment, and two issues relating to computability theory. Positioning Turing’s result regarding the undecidability of the halting problem as a result about programs rather than machines, and taking instruction sequences as programs, we analyse the autosolvability requirement that a program of a certain kind must solve the halting problem for all programs of that kind. We present novel results concerning this autosolvability requirement. The analysis is streamlined by using the notion of a functional unit, which is an abstract state-based model of a machine. In the case where the behaviours exhibited by a component of an execution environment can be viewed as the behaviours of a machine in its different states, the behaviours concerned are completely determined by a functional unit. The above-mentioned analysis involves functional units whose possible states represent the possible contents of the tapes of Turing machines with a particular tape alphabet. We also investigate functional units whose possible states are the natural numbers. This investigation yields a novel computability result, viz. the existence of a universal computable functional unit for natural numbers.
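
The classic diagonal argument that underlies any autosolvability discussion can be sketched in a few lines; the following is the standard textbook reasoning phrased over programs, not the paper's instruction-sequence or functional-unit formalism, and all names are illustrative.

```python
# The classic diagonal argument behind the autosolvability discussion,
# phrased over programs rather than machines. This is the standard textbook
# reasoning, not the paper's instruction-sequence formalism; all names are
# illustrative.

def make_troublemaker(halts):
    """Given any claimed total halting decider, build a program it gets wrong."""
    def troublemaker(source):
        if halts(source, source):
            while True:        # the decider says "halts", so loop forever
                pass
        return "halted"        # the decider says "loops", so halt at once
    return troublemaker

# Feeding the troublemaker its own (notional) source text refutes the decider:
# whichever answer halts(source, source) gives, the troublemaker's actual
# behaviour is the opposite. Hence no program of the given kind can solve the
# halting problem for all programs of that kind.
```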

51 citations


Book
01 Jan 2012
TL;DR: Going from the origins of counting to the most blue-skies proposals for novel methods of computation, the authors investigate the extent to which the laws of nature and of logic constrain what we can compute.
Abstract: Computation and its Limits is an innovative cross-disciplinary investigation of the relationship between computing and physical reality. It begins by exploring the mystery of why mathematics is so effective in science and seeks to explain this in terms of the modelling of one part of physical reality by another. Going from the origins of counting to the most blue-skies proposals for novel methods of computation, the authors investigate the extent to which the laws of nature and of logic constrain what we can compute. In the process they examine formal computability, the thermodynamics of computation and the promise of quantum computing.

47 citations


Journal ArticleDOI
TL;DR: In this paper, a measure of shape which is appropriate for the study of a complicated geometric structure, defined using the topology of neighborhoods of the structure, is proposed and applied to branched polymers, Brownian trees, and self-avoiding random walks.
Abstract: We propose a measure of shape which is appropriate for the study of a complicated geometric structure, defined using the topology of neighborhoods of the structure. One aspect of this measure gives a new notion of fractal dimension. We demonstrate the utility and computability of this measure by applying it to branched polymers, Brownian trees, and self-avoiding random walks.
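
As a familiar numerical baseline (explicitly not the paper's neighborhood-topology measure), the sketch below estimates a box-counting dimension for a planar random walk; the walk length and box sizes are arbitrary assumptions.

```python
# A standard box-counting estimate of fractal dimension for a 2D random
# walk, included only as a familiar baseline; the paper's measure is based
# on the topology of neighborhoods and is not the box-counting dimension.
import random
from math import log

def random_walk(steps, seed=0):
    random.seed(seed)
    x = y = 0
    points = [(0, 0)]
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

def box_count(points, box_size):
    return len({(x // box_size, y // box_size) for x, y in points})

walk = random_walk(100_000)
for s in (1, 2, 4, 8, 16):
    print(f"box size {s:2d}: {box_count(walk, s):6d} boxes")
# The slope of log(count) against log(1/box_size) estimates the dimension
# (about 2 for a long simple random walk in the plane).
```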

46 citations


Proceedings ArticleDOI
14 May 2012
TL;DR: This paper addresses the minimal revision problem for specification automata by constructing automata specifications that are as “close” as possible to the initial user intent, by removing the minimum number of constraints from the specification that cannot be satisfied.
Abstract: One of the important challenges in robotics is the automatic synthesis of provably correct controllers from high level specifications. One class of such algorithms operates in two steps: (i) high level discrete controller synthesis and (ii) low level continuous controller synthesis. In this class of algorithms, when phase (i) fails, then it is desirable to provide feedback to the designer in the form of revised specifications that can be achieved by the system. In this paper, we address the minimal revision problem for specification automata. That is, we construct automata specifications that are as “close” as possible to the initial user intent, by removing the minimum number of constraints from the specification that cannot be satisfied. We prove that the problem is computationally hard and we encode it as a satisfiability problem. Then, the minimal revision problem can be solved by utilizing efficient SAT solvers.
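
A toy, brute-force version of the underlying optimization, remove as few constraints as possible so that the rest are jointly satisfiable, is sketched below; the paper instead encodes the problem for an efficient SAT solver, and the clause set here is a made-up example.

```python
# Toy version of the underlying optimization: remove as few constraints as
# possible so that the remaining ones are jointly satisfiable. The paper
# encodes this for an off-the-shelf SAT solver; here a brute-force search
# over assignments and constraint subsets stands in for it, and the example
# constraints are invented purely for illustration.
from itertools import combinations, product

def satisfiable(clauses, num_vars):
    """Naive SAT check: clauses are lists of ints, negative = negated variable."""
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def minimal_revision(clauses, num_vars):
    """Smallest set of clause indices whose removal makes the rest satisfiable."""
    for k in range(len(clauses) + 1):
        for removed in combinations(range(len(clauses)), k):
            kept = [c for i, c in enumerate(clauses) if i not in removed]
            if satisfiable(kept, num_vars):
                return removed
    return None

# Constraints x1, (not x1), (x1 or x2): dropping a single clause suffices.
print(minimal_revision([[1], [-1], [1, 2]], num_vars=2))  # e.g. (0,)
```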

Journal ArticleDOI
01 Jan 2012
TL;DR: It is proved that for every computable measurable space, RN is W-reducible to EC, and a computable measurable space is constructed for which EC is W-reducible to RN.
Abstract: We show that a single application of the noncomputable operator EC, which transforms enumerations of sets (in N) to their characteristic functions, suffices to compute the Radon-Nikodym derivative dµ/dλ of a finite measure µ, which is absolutely continuous w.r.t. the σ-finite measure λ. We also give a condition on the two measures (in terms of computability of the norm of a certain linear operator involving the two measures) which is sufficient to compute the derivative.
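
The following sketch (not from the paper) illustrates why EC is a natural unit of non-computability: from an enumeration of a set one can only approximate its characteristic function from below, with no computable bound on how long to wait. The example enumeration is an assumption for illustration.

```python
# Illustration (not from the paper) of the operator EC: an enumeration of a
# set A of naturals only lets us approximate the characteristic function of
# A from below. The stage-s guesses below converge pointwise to chi_A, but
# there is in general no computable bound on how long to wait, which is what
# makes a single application of EC a meaningful non-computable step.

def ec_stage(enumeration, s):
    """Best guess at the characteristic function after s enumeration steps."""
    seen = {enumeration(t) for t in range(s)}
    return lambda n: 1 if n in seen else 0   # a 0 may later flip to 1

# Example enumeration of the even numbers (listed in a scrambled order).
evens = lambda t: 2 * ((t * 7) % 100) if t < 100 else 2 * t
for s in (5, 50, 500):
    chi_s = ec_stage(evens, s)
    print(s, [chi_s(n) for n in (0, 2, 4, 6, 198)])
# Each row is correct on more inputs; the limit is the characteristic
# function of the even numbers.
```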

Journal ArticleDOI
TL;DR: This work turns folklore into both a topological and a combinatorial complexity theory of information, investigating for several practical problems how much advice is necessary and sufficient to render them computable.

Posted Content
TL;DR: In this paper, the authors present a new model of computation, described in terms of monoidal categories, which conforms to the Church-Turing Thesis and captures the same computable functions as the standard models.
Abstract: We present a new model of computation, described in terms of monoidal categories. It conforms to the Church-Turing Thesis, and captures the same computable functions as the standard models. It provides a succinct categorical interface to most of them, free of their diverse implementation details, using the ideas and structures that in the meantime emerged from research in semantics of computation and programming. The salient feature of the language of monoidal categories is that it is supported by a sound and complete graphical formalism, string diagrams, which provide a concrete and intuitive interface for abstract reasoning about computation. The original motivation and the ultimate goal of this effort is to provide a convenient high level programming language for a theory of computational resources, such as one-way functions and trapdoor functions, by adopting the methods for hiding the low level implementation details that emerged from practice. In the present paper, we make a first step towards this ambitious goal, and sketch a path to reach it. This path is pursued in three sequel papers that are in preparation.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the soundness and completeness result for CL5 to the more expressive cirquent calculus system CL6, which is a conservative extension of both CL5 and classical propositional logic.
Abstract: Computability logic is a formal theory of computability. The earlier article "Introduction to cirquent calculus and abstract resource semantics" by Japaridze proved soundness and completeness for the basic fragment CL5 of computability logic. The present article extends that result to the more expressive cirquent calculus system CL6, which is a conservative extension of both CL5 and classical propositional logic.

Journal ArticleDOI
TL;DR: In this article, countable free groups of different ranks are considered, and the computability-theoretic complexity of index sets within the class of free groups and within the class of all groups is investigated. For a computable free group of infinite rank, the difficulty of finding a basis is also considered.
Abstract: We consider countable free groups of different ranks. For these groups, we investigate computability theoretic complexity of index sets within the class of free groups and within the class of all groups. For a computable free group of infinite rank, we consider the difficulty of finding a basis.

Journal Article
TL;DR: The great challenge for the TPLT is to precisify (formalize) the definitions of linguistic primitives in order that ‘linking hypotheses’ (not mere correlations) to as yet undiscovered neurobiological primitives can be formed.
Abstract: In its abstractness it subsumes and thereby relates all computational primitives; in principle, therefore, it renders commensurable the computational ontologies of linguistics and neuroscience — or so I would endeavor to prove in the TPLT. A Turing machine is a mathematical abstraction, not a physical device, but my theory is that the information it specifies in the form of I-language must be encoded in the human genetic program — and/or derived from the mathematical laws of nature ('third factors' in the sense of Chomsky 2005) — and expressed in the brain. Central to the machine is a generative procedure for discrete infinity; however, "[a]lthough the characterizations of what might be the most basic linguistic operations must be considered one of the deepest and most pressing in experimental language research, we know virtually nothing about the neuronal implementation of the putative primitives of linguistic computation" (Poeppel & Omaki 2008: 246). So is presented the great challenge for the TPLT: to precisify (formalize) the definitions of linguistic primitives in order that 'linking hypotheses' (not mere correlations) to as yet undiscovered neurobiological primitives can be formed. 4. Generative Systems. It was in the theory of computability and its equivalent formalisms that the infinite generative capacity of a finite system was formalized and abstracted and thereby made available to theories of natural language (see Chomsky 1955 for a discussion of the intellectual zeitgeist and the influence of mathematical logic, computability theory, etc. at the time generative linguistics emerged in the 1950s). In particular, a generative grammar (a term linguists use with systematic ambiguity to refer both to the explananda of linguistic theory, i.e. I-languages, and to the explanantia, i.e. theories of I-languages) was defined as a set of rules that recursively generate (enumerate/specify) the sentences of a language in the form of a production system as defined by Post (1944) and exapted by Chomsky (1951): (1) φ1, ..., φn → φn+1. "[E]ach of the φi is a structure of some sort and [...] the relation → is to be interpreted as expressing the fact that if our process of recursive specification generates the structures φ1, ..., φn then it also generates the structure φn+1" (Chomsky & Miller 1963: 284); the inductive (recursive) definition derives infinite sets of structures. The objective of this formalization was analogous to "[t]he objective of formalizing a mathematical theory a la Hilbert, [i.e.] to remove all uncertainty about what constitutes a proof in the theory, [...] to establish an algorithm for the notion of proof" (Kleene 1981: 47) (see Davis 2012 on Hilbert's program). Chomsky (1956: 117) observed that a derivation as in (1) is analogous to a proof, with φ1, ..., φn as the set of axioms, the rewrite rule (production) → as the rule of inference, and the derived structure φn+1 as the lemma/theorem. For a toy model, let (2) be a simplified phrase structure grammar with S = start symbol (Sentence), ⌒ = concatenation, # = boundary symbol, N[P] = Noun [Phrase], V[P] = Verb [Phrase].
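
Since the toy grammar (2) is cut off in the excerpt above, the sketch below gives a stand-in: a small phrase-structure grammar run as a production system that recursively enumerates sentences. The particular rules are invented for illustration and are not the paper's.

```python
# A toy phrase-structure grammar run as a production system, in the spirit
# of the truncated example (2) above; the particular rules are illustrative
# and are not taken from the paper.
import itertools

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],
    "VP": [["V", "NP"]],
    "PP": [["near", "NP"]],
    "N":  [["robot"], ["proof"]],
    "V":  [["checks"]],
}

def expand(symbols, depth):
    """Recursively rewrite nonterminals; depth bounds the number of rewrites."""
    if not symbols:
        yield []
        return
    head, rest = symbols[0], symbols[1:]
    if head not in RULES:                        # terminal symbol
        for tail in expand(rest, depth):
            yield [head] + tail
    elif depth > 0:
        for production in RULES[head]:
            for sentence in expand(production + rest, depth - 1):
                yield sentence

for sent in itertools.islice(expand(["S"], depth=10), 5):
    print(" ".join(sent))
# Without the depth bound the rules enumerate an infinite set of sentences:
# a finite rule system generating a discrete infinity of structures.
```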

Journal ArticleDOI
TL;DR: In this article, a quantum version of Gandy's theorem is presented, which exhibits a formal non-trivial interplay between theoretical physics symmetries and computability assumptions.
Abstract: As was emphasized by Deutsch, quantum computation shatters complexity theory, but is innocuous to computability theory. Yet Nielsen and others have shown how quantum theory as it stands could breach the physical Church-Turing thesis. We draw a clear line as to when this is the case, in a way that is inspired by Gandy. Gandy formulates postulates about physics, such as homogeneity of space and time, bounded density and velocity of information -- and proves that the physical Church-Turing thesis is a consequence of these postulates. We provide a quantum version of the theorem. Thus this approach exhibits a formal non-trivial interplay between theoretical physics symmetries and computability assumptions.

Posted Content
TL;DR: In this article, game-theoretic arguments can be used in computability theory and algorithmic information theory: unique numbering theorem (Friedberg), gap between conditional complexity and total conditional complexity, Epstein--Levin theorem and some (yet unpublished) result of Muchnik and Vyugin.
Abstract: We provide some examples showing how game-theoretic arguments can be used in computability theory and algorithmic information theory: unique numbering theorem (Friedberg), the gap between conditional complexity and total conditional complexity, Epstein--Levin theorem and some (yet unpublished) result of Muchnik and Vyugin

Journal ArticleDOI
Troy Day1
TL;DR: A theorem is proven demonstrating that, because of the digital nature of inheritance, there are inherent limits on the kinds of questions that can be answered using a complete theory accounting for the potential open-endedness of evolution.
Abstract: The process of evolutionary diversification unfolds in a vast genotypic space of potential outcomes. During the past century, there have been remarkable advances in the development of theory for this diversification, and the theory's success rests, in part, on the scope of its applicability. A great deal of this theory focuses on a relatively small subset of the space of potential genotypes, chosen largely based on historical or contemporary patterns, and then predicts the evolutionary dynamics within this pre-defined set. To what extent can such an approach be pushed to a broader perspective that accounts for the potential open-endedness of evolutionary diversification? There have been a number of significant theoretical developments along these lines but the question of how far such theory can be pushed has not been addressed. Here a theorem is proven demonstrating that, because of the digital nature of inheritance, there are inherent limits on the kinds of questions that can be answered using such an approach. In particular, even in extremely simple evolutionary systems, a complete theory accounting for the potential open-endedness of evolution is unattainable unless evolution is progressive. The theorem is closely related to Godel's incompleteness theorem, and to the halting problem from computability theory.

Journal ArticleDOI
TL;DR: It is shown that testing computability by width-2 OBDDs when the order of variables is fixed and known requires a number of queries that grows logarithmically with n (for a constant ε), and an algorithm is provided that performs Õ(log n/ε) queries.
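
For orientation, here is a minimal illustration of what "computable by a width-2 OBDD with a fixed, known variable order" means: the input bits are read in order while only two states per level are maintained. The parity example is a standard width-2 function and is an assumption, not taken from the paper.

```python
# Minimal illustration of computability by a width-2 OBDD with a fixed,
# known variable order: the input bits are read in order x_1, ..., x_n while
# only two states are maintained at each level. The example function
# (parity) is a classic width-2 case; it is not taken from the paper.

def eval_width2_obdd(transitions, accepting, x):
    """
    transitions[i] maps (state, bit) -> state, with state in {0, 1};
    'accepting' says which final states output 1.
    """
    state = 0
    for i, bit in enumerate(x):
        state = transitions[i][(state, bit)]
    return 1 if state in accepting else 0

n = 4
parity_layers = [{(s, b): s ^ b for s in (0, 1) for b in (0, 1)} for _ in range(n)]
for x in [(0, 0, 0, 0), (1, 0, 1, 1), (1, 1, 0, 0)]:
    print(x, eval_width2_obdd(parity_layers, accepting={1}, x=x))
# Property testing asks: how few queries to f suffice to tell whether f has
# such a width-2 representation or is far from every such function?
```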

Journal ArticleDOI
TL;DR: A third, indispensable complexity measure for interactive computations termed amplitude complexity is introduced, and the adequacy of CL12 with respect to A-amplitude, S-space and T-time computability under certain minimal conditions on the triples (A,S,T) of function classes is established.
Abstract: Computability logic (see this http URL) is a long-term project for redeveloping logic on the basis of a constructive game semantics, with games seen as abstract models of interactive computational problems. Among the fragments of this logic successfully axiomatized so far is CL12 --- a conservative extension of classical first-order logic, whose language augments that of classical logic with the so called choice sorts of quantifiers and connectives. This system has already found fruitful applications as a logical basis for constructive and complexity-oriented versions of Peano arithmetic, such as arithmetics for polynomial time computability, polynomial space computability, and beyond. The present paper introduces a third, indispensable complexity measure for interactive computations termed amplitude complexity, and establishes the adequacy of CL12 with respect to A-amplitude, S-space and T-time computability under certain minimal conditions on the triples (A,S,T) of function classes. This result very substantially broadens the potential application areas of CL12. The paper is self-contained, and targets readers with no prior familiarity with the subject.

Journal ArticleDOI
TL;DR: A new, substantially simplified version of the branching recurrence operation of computability logic is introduced, and its equivalence to the old, “canonical” version is proved.

Book ChapterDOI
30 Jun 2012
TL;DR: This paper shows how to capture failure detectors in each model so that both models become computationally equivalent, and introduces the notion of a "strongly correct" process which appears particularly well-suited to the iterated model, and presents simulations that prove the computational equivalence.
Abstract: The base distributed asynchronous read/write computation model is made up of n asynchronous processes which communicate by reading and writing atomic registers only. The distributed asynchronous iterated model is a more constrained model in which the processes execute an infinite number of rounds and communicate at each round with a new object called immediate snapshot object. Moreover, in both models up to n−1 processes may crash in an unexpected way. When considering computability issues, two main results are associated with the previous models. The first states that they are computationally equivalent for decision tasks. The second states that they are no longer equivalent when both are enriched with the same failure detector. This paper shows how to capture failure detectors in each model so that both models become computationally equivalent. To that end it introduces the notion of a "strongly correct" process which appears particularly well-suited to the iterated model, and presents simulations that prove the computational equivalence when both models are enriched with the same failure detector. The paper extends also these simulations to the case where the wait-freedom requirement is replaced by the notion of t-resilience.
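
A toy sequential simulation of one round of the iterated immediate-snapshot model is sketched below: a scheduler picks an ordered partition of the processes into blocks, each block writes and then snapshots. This is a standard way of generating such executions, not the paper's simulation, and the scheduling policy is an arbitrary assumption.

```python
# Toy sequential simulation (not from the paper) of one round of the iterated
# immediate-snapshot model: the scheduler picks an ordered partition of the
# processes into blocks; each process writes its value and receives a snapshot
# containing the values of its own block and of all earlier blocks.
import random

def immediate_snapshot_round(values, rng):
    """values: dict process_id -> value. Returns dict process_id -> view."""
    order = list(values)
    rng.shuffle(order)
    # Cut the shuffled sequence into an ordered partition (the "blocks").
    blocks, block = [], []
    for p in order:
        block.append(p)
        if rng.random() < 0.5:
            blocks.append(block)
            block = []
    if block:
        blocks.append(block)
    views, seen = {}, {}
    for blk in blocks:
        seen.update({p: values[p] for p in blk})   # the whole block writes first,
        snapshot = dict(seen)
        for p in blk:
            views[p] = snapshot                    # ... then the block snapshots
    return views

rng = random.Random(1)
views = immediate_snapshot_round({f"p{i}": i for i in range(4)}, rng)
for p, view in sorted(views.items()):
    print(p, sorted(view.items()))
# Views are ordered by containment and every process sees its own write,
# the two defining properties of an immediate snapshot.
```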

Journal ArticleDOI
TL;DR: A case is made that CL12 is an adequate logical basis for constructive applied theories, including complexity-oriented ones, and soundness and completeness for the deductive system CL12 with respect to the semantics of CoL are proved.
Abstract: The work is devoted to Computability logic (CoL)—the philosophical/mathematical platform and long-term project for redeveloping classical logic after replacing truth by computability in its underlying semantics. This article elaborates some basic complexity theory for the CoL framework. Then it proves soundness and completeness for the deductive system CL12 with respect to the semantics of CoL, including the version of the latter based on polynomial time computability instead of computability-in-principle. CL12 is a sequent calculus system, where the meaning of a sequent intuitively can be characterized as ‘the succedent is algorithmically reducible to the antecedent’, and where formulas are built from predicate letters, function letters, variables, constants, identity, negation, parallel and choice connectives and blind and choice quantifiers. A case is made that CL12 is an adequate logical basis for constructive applied theories, including complexity-oriented ones. MSC: primary: 03F50; secondary: 03D75; 03D15; 03D20; 68Q10; 68T27; 68T30.

01 Jan 2012
TL;DR: A new model of property testing that closely mirrors the active learning model is introduced, and testing results in this new model may be used to improve the efficiency of model selection algorithms in learning theory.
Abstract: Given oracle access to some boolean function f, how many queries do we need to test whether f is linear? Or monotone? Or whether its output is completely determined by a small number of the input variables? This thesis studies these and related questions in the framework of property testing introduced by Rubinfeld and Sudan ('96). The results of this thesis are grouped into three main lines of research. I. We determine nearly optimal bounds on the number of queries required to test k-juntas (functions that depend on at most k variables) and k-linearity (functions that return the parity of exactly k of the input bits). These two problems are fundamental in the study of boolean functions and the bounds obtained for these two properties lead to tight or improved bounds on the query complexity for testing many other properties including, for example, testing sparse polynomials, testing low Fourier degree, and testing computability by small-size decision trees. We give a partial characterization of the set of functions for which we can test isomorphism—that is, identity up to permutation of the labels of the variables—with a constant number of queries. This result provides some progress on the question of characterizing the set of properties of boolean functions that can be tested with a constant number of queries. II. We establish new connections between property testing and other areas of computer science. First, we present a new reduction between testing problems and communication problems. We use this reduction to obtain many new lower bounds in property testing from known results in communication complexity. Second, we introduce a new model of property testing that closely mirrors the active learning model. We show how testing results in this new model may be used to improve the efficiency of model selection algorithms in learning theory. The results presented in this thesis are obtained by applying tools from various mathematical areas, including probability theory, the analysis of boolean functions, orthogonal polynomials, and extremal combinatorics.
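
As a concrete instance of the kind of question asked above, the sketch below implements the classic Blum-Luby-Rubinfeld three-query linearity test; it is the textbook prototype, not one of the thesis's algorithms, and the example functions are assumptions.

```python
# Sketch of the classic Blum-Luby-Rubinfeld three-query linearity test, the
# prototype for the query-complexity questions above; the thesis's own
# algorithms for k-linearity and junta testing are more involved.
import random

def blr_test(f, n, trials, rng):
    """Accept iff f(x) xor f(y) == f(x xor y) on all sampled pairs."""
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

n, rng = 16, random.Random(0)
parity_mask = 0b1010101010101010
linear_f = lambda x: bin(x & parity_mask).count("1") % 2   # a parity, hence linear
far_f = lambda x: 1 if bin(x).count("1") > n // 2 else 0   # majority, far from linear

print(blr_test(linear_f, n, trials=200, rng=rng))  # True
print(blr_test(far_f, n, trials=200, rng=rng))     # almost surely False
```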

Journal ArticleDOI
TL;DR: The Hartman-Grobman linearization theorem for ordinary differential equations (ODEs) is studied: it is proved that near a hyperbolic equilibrium point x0 of a nonlinear ODE ẋ = f(x), there is a computable homeomorphism H that maps trajectories of the nonlinear system onto trajectories of its linearization.
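
A small numerical sketch of the setting (not of the computability proof): approximate the Jacobian of the vector field at an equilibrium and check hyperbolicity, i.e. that no eigenvalue has zero real part, which is the hypothesis under which the Hartman-Grobman conjugacy applies. The example vector field is an assumption.

```python
# Small numerical illustration of the setting (not of the computability
# proof): approximate the Jacobian Df(x0) of a vector field at an
# equilibrium and check hyperbolicity, i.e. that no eigenvalue has zero
# real part, the hypothesis of the Hartman-Grobman theorem.
import numpy as np

def f(x):
    # A planar vector field with an equilibrium at the origin (example only).
    return np.array([x[0] - x[0] * x[1], -2.0 * x[1] + x[0] ** 2])

def jacobian(f, x0, h=1e-6):
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x0 + e) - f(x0 - e)) / (2 * h)   # central differences
    return J

x0 = np.zeros(2)
J = jacobian(f, x0)
eigenvalues = np.linalg.eigvals(J)
print(J)
print(eigenvalues)                                     # approximately [1, -2]
print(all(abs(ev.real) > 1e-8 for ev in eigenvalues))  # hyperbolic: True
```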


Journal ArticleDOI
TL;DR: Two types of filters are discussed, namely, 1) singular and 2) normal, and sufficient conditions for the solvability of the problem in terms of Hamilton-Jacobi-Isaacs equations (HJIEs) are presented.
Abstract: In this paper, we consider the H∞-filtering problem for affine nonlinear singular (or descriptor) systems. Two types of filters are discussed, namely, 1) singular and 2) normal, and sufficient conditions for the solvability of the problem in terms of Hamilton-Jacobi-Isaacs equations (HJIEs) are presented. The results are also specialized to linear systems, in which case the HJIEs reduce to a system of bilinear matrix inequalities (BLMIs) which can still be solved efficiently. Some simple examples are also given to illustrate the approach.