
Showing papers in "Computing and Informatics / Computers and Artificial Intelligence in 2002"


Journal Article
TL;DR: The service abstraction in the Open Grid Services Architecture is extended with Quality of Service (QoS) properties; the GARA library provides a restricted representation scheme for encoding resource properties and monitoring the associated Service Level Agreements (SLAs), and the focus is on the application layer, where a given service may indicate the QoS properties it can offer, or search for other services based on particular QoS properties.
Abstract: We extend the service abstraction in the Open Grid Services Architecture (OGSA) for Quality of Service (QoS) properties. The realization of QoS often requires mechanisms such as advance or on-demand reservation of resources, varying in type and implementation, and independently controlled and monitored. Foster et al. propose the GARA architecture. The GARA library provides a restricted representation scheme for encoding resource properties and the associated monitoring of Service Level Agreements (SLAs). Our focus is on the application layer, whereby a given service may indicate the QoS properties it can offer, or where a service may search for other services based on particular QoS properties.

182 citations


Journal Article
TL;DR: The technical challenge in achieving interoperability between Globus and UNICORE consists of mapping resource descriptions from both grid environments to an abstract format appropriate to work-flow preparation, and then instantiating work-flow tasks on the target systems.
Abstract: For several years, UNICORE and Globus have co-existed as approaches to exploiting what has become known as the "Grid". Both offer many services beneficial for creating and using production Grids. A cooperative approach, providing interoperability between Globus and UNICORE, would result in an advanced set of Grid services that gain strength from each other. This paper outlines some of the parallels and differences as they relate to the development of an interoperability layer between UNICORE and Globus. Given the increasing ubiquity of Globus, what emerges is the desire for a hybridised facility that utilises the UNICORE work-flow management of complex, multi-site tasks, but that can run on either UNICORE- or Globus-enabled resources. The technical challenge in achieving this, addressed in this paper, consists of mapping resource descriptions from both grid environments to an abstract format appropriate to work-flow preparation, and then instantiating work-flow tasks on the target systems. Other issues such as reconciling disparate security models and file transfer support are also addressed.

27 citations


Journal Article
TL;DR: In this article, the authors propose to use the depth first search (DFS) method for routing decisions: each node A, upon receiving the message for the first time, sorts all its neighbors according to a criterion, such as their distance to the destination, and uses that order in the DFS algorithm.
Abstract: In a localized routing algorithm, node A, currently holding the message, forwards it based on the locations of itself, its neighboring nodes and the destination. We propose to use the depth first search (DFS) method for routing decisions. Each node A, upon receiving the message for the first time, sorts all its neighbors according to a criterion, such as their distance to the destination, and uses that order in the DFS algorithm. It is the first localized algorithm that guarantees delivery for (connected) wireless networks modeled by arbitrary graphs, even with inaccurate location information. We then propose the first localized QoS routing algorithm for wireless networks. It performs the DFS routing algorithm after edges with insufficient bandwidth or insufficient connection time are deleted from the graph, and attempts to minimize hop count. This is also the first paper to apply GPS in QoS routing decisions, and to consider the connection time (estimated lifetime of a link) as a QoS criterion. The average length of measured QoS paths in our experiments, obtained by the DFS method, was between 1 and 1.34 times the length of the QoS path obtained by the shortest path algorithm. The overhead is considerably reduced by applying the concept of internal nodes.
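The neighbour-ordering idea in the abstract can be sketched as follows. This is an illustrative reading under assumed names and data layout (adjacency sets plus 2D coordinates), not the authors' implementation:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dfs_route(graph, pos, src, dst):
    """Localized DFS routing sketch: each node tries its neighbours in
    order of increasing distance to the destination, backtracking when
    stuck. graph maps node -> set of neighbours; pos maps node -> (x, y)."""
    visited = {src}
    path = [src]

    def visit(node):
        if node == dst:
            return True
        # The criterion suggested in the abstract: each neighbour's
        # distance to the destination (other criteria are possible).
        for nxt in sorted(graph[node], key=lambda n: dist(pos[n], pos[dst])):
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                if visit(nxt):
                    return True
                path.append(node)  # backtracking re-traverses the link
        return False

    return path if visit(src) else None
```

On a connected graph the traversal visits every node in the worst case, which is what underlies the delivery guarantee claimed for connected networks.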

27 citations


Journal Article
TL;DR: This work proposes a generalization of the original model of fuzzy Turing machines and fuzzy languages, based on rigorous mathematical fundamentals of fuzzy logic; its acceptance criterion is modified so that the resulting model obeys the Church-Turing Thesis.
Abstract: Fuzzy Turing machines and fuzzy languages were introduced by Zadeh, Lee and Santos in the 1970s. Unfortunately, from the computability point of view their model appears too powerful: its nondeterministic version accepts non-recursively-enumerable fuzzy languages. Moreover, from the viewpoint of modern fuzzy logic theory the model is too restrictive, since it is defined only for a specific $t$-norm (the Gödel norm). We therefore propose a generalization of the original model that is based on rigorous mathematical fundamentals of fuzzy logic. Its acceptance criterion is modified so that the resulting model obeys the Church-Turing Thesis.
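For readers unfamiliar with $t$-norms: the original model is fixed to the Gödel norm, the first of the three basic continuous $t$-norms sketched below. These are the standard textbook definitions, shown for orientation rather than taken from the paper:

```python
def godel(a, b):
    """Gödel (minimum) t-norm - the one the original model is fixed to."""
    return min(a, b)

def product(a, b):
    """Product t-norm."""
    return a * b

def lukasiewicz(a, b):
    """Lukasiewicz t-norm."""
    return max(0.0, a + b - 1.0)
```

The generalized model admits any such $t$-norm as the truth-degree combination operation, not only the first one.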

27 citations


Journal Article
TL;DR: The main issues, requirements, and design approaches for the implementation of grid-based knowledge discovery systems are discussed and some prospects and promising research directions in datacentric and knowledge-discovery oriented grids are outlined.
Abstract: When data mining and knowledge discovery techniques must be used to analyze large amounts of data, high-performance parallel and distributed computers can help to provide better computational performance and, as a consequence, deeper and more meaningful results. Recently grids, composed of large-scale, geographically distributed platforms working together, have emerged as effective architectures for high-performance decentralized computation. It is natural to consider grids as tools for distributed data-intensive applications such as data mining, but the underlying patterns of computation and data movement in such applications are different from those of more conventional high-performance computation. These differences require a different kind of grid, or at least a grid with significantly different emphases. This paper discusses the main issues, requirements, and design approaches for the implementation of grid-based knowledge discovery systems. Furthermore, some prospects and promising research directions in datacentric and knowledge-discovery oriented grids are outlined.

22 citations


Journal Article
TL;DR: It is shown that the improvement in quality can be arbitrarily high (and certainly superlinear in the number of processors used by the parallel computer), akin to superlinear speedup, a phenomenon itself originally thought to be impossible.
Abstract: The primary purpose of parallel computation is the fast execution of computational tasks that require an inordinate amount of time to perform sequentially. As a consequence, interest in parallel computation to date has naturally focused on the speedup provided by parallel algorithms over their sequential counterparts. The thesis of this paper is that a second, equally important motivation for using parallel computers exists. Specifically, the following question is posed: can parallel computers, thanks to their multiple processors, do more than simply speed up the solution to a problem? We show that within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class, when computed in parallel, is far superior in quality to the best one obtained on a sequential computer. What constitutes a better solution depends on the problem under consideration. Thus, 'better' means 'closer to optimal' for optimization problems, 'more secure' for cryptographic problems, and 'more accurate' for numerical problems. Examples from these classes are presented. In each case, the solution obtained in parallel is significantly, provably, and consistently better than a sequential one. It is important to note that the purpose of this paper is not to demonstrate merely that a parallel computer can obtain a better solution to a computational problem than one derived sequentially. The latter is an interesting (and often surprising) observation in its own right, but we wish to go further. It is shown here that the improvement in quality can be arbitrarily high (and certainly superlinear in the number of processors used by the parallel computer). This result is akin to superlinear speedup, a phenomenon itself originally thought to be impossible.

22 citations


Journal Article
TL;DR: The evolution of the specification from its original form to the current draft of 10/4/02 authored by S. Tuecke, K. Czajkowski, J. Frey, S. Graham, C. Kesselman, and P. Vanderbilt is described.
Abstract: This paper began its life as an unpublished technical review of the proposed Open Grid Services Architecture (OGSA) as described in the papers "The Physiology of the Grid" by Ian Foster, Carl Kesselman, Jeffrey Nick and Steven Tuecke, and "The Grid Service Specification" (Draft 2/15/02) by Foster, Kesselman, Tuecke, Karl Czajkowski, Jeffrey Frey and Steve Graham. However, much has changed since the publication of the original documents. The architecture has evolved substantially and the vast majority of our initial concerns have been addressed. In this paper we describe the evolution of the specification from its original form to the current draft of 10/4/02, authored by S. Tuecke, K. Czajkowski, J. Frey, S. Graham, C. Kesselman, and P. Vanderbilt, which is now the central component of the Global Grid Forum Open Grid Service Infrastructure (OGSI) working group, co-chaired by Steven Tuecke and David Snelling.

17 citations


Journal Article
TL;DR: Efficient and practical extensions of the randomized Parallel Priority Queue algorithms of Ranade et al., and efficient randomized and deterministic algorithms for the problem of list contraction on the Bulk-Synchronous Parallel (BSP) model are presented.
Abstract: In this work we present efficient and practical randomized data structures on the Bulk-Synchronous Parallel (BSP) model of computation along with an experimental study of their performance. In particular, we study data structures for the realization of Parallel Priority Queues (PPQs). We show that our algorithms are communication efficient and achieve optimality to within small multiplicative constant factors for a wide range of parallel machines. We also present an experimental study of our PPQ algorithms on a Cray T3D. Finally, we present new randomized and deterministic BSP algorithms for list and tree contraction.

16 citations


Journal Article
TL;DR: A survey of hypermedia systems modelling is presented along five proposed perspectives: hypermedia application perspective, development process perspective, aspect (static or dynamic) perspective, degree of formality, and notation perspective.
Abstract: Modelling is an important activity in the development of complex systems. Hypermedia modelling is a relatively new direction of research. Building on results achieved in software systems development, the importance of hypermedia modelling was recognised relatively soon after hypermedia systems came into widespread use. However, hypermedia modelling is not as mature as its software counterpart. The aim of this paper is to present a survey of hypermedia systems modelling along five proposed perspectives: hypermedia application perspective, development process perspective, aspect (static or dynamic) perspective, degree of formality, and notation perspective. First, a categorisation of hypermedia related models is proposed. According to this categorisation, a multidimensional modelling space is established. The space leads to a schema capable of supporting decisions about the reuse of methods and techniques for hypermedia systems modelling. Additionally, selected hypermedia modelling methods and techniques, with their mutual dependencies, are studied according to the proposed categorisation.

11 citations


Journal Article
TL;DR: The logical characterization of this class of languages given by $(1)$ (and $(2)$) is new and is an extension of analogous results over finite structures such as words, trees and grids relating full existential MSO and definability by finite automata.
Abstract: In this paper, we prove that for any language $L$ of finitely branching finite and infinite trees, the following properties are equivalent: (1) $L$ is definable by an existential MSO sentence which is bisimulation invariant over graphs, (2) $L$ is definable by a FO-closed existential MSO sentence which is bisimulation invariant over graphs, (3) $L$ is definable in the nu-level of the modal mu-calculus, (4) $L$ is the projection of a locally testable tree language and is bisimulation closed, (5) $L$ is closed in the prefix topology and recognizable by a modal finite tree automaton, (6) $L$ is recognizable by a modal finite tree automaton of index zero. The equivalence between $(3)$, $(4)$, $(5)$ and $(6)$ has been known for quite a long time, although maybe not in such a form, and can be considered a classical result. The logical characterization of this class of languages given by $(1)$ (and $(2)$) is new. It is an extension of analogous results over finite structures such as words, trees and grids relating full existential MSO and definability by finite automata.

10 citations


Journal Article
TL;DR: This paper presents a methodology for provisioning QoS in the UMTS backbone network based on the Differentiated Services (DiffServ) model, a relatively simple but scalable IP-based QoS technology.
Abstract: A distinguishing feature of the Universal Mobile Telecommunications System (UMTS) is the support of different levels of quality of service (QoS) as required by subscribers and their applications. To provide QoS, the UMTS backbone network needs an efficient QoS mechanism to provide the demanded level of services on a UMTS core network. This paper presents a methodology for provisioning QoS in this backbone network based on the Differentiated Services (DiffServ) model. DiffServ is a relatively simple but scalable IP-based technology, which can efficiently provide QoS in networks of DiffServ-supporting routers. This is accomplished by defining a framework for setting up a DiffServ-based UMTS backbone router, as well as the requisite mapping function for interworking between a DiffServ domain and UMTS. Efficient schemes are presented for the scheduling and buffer management components of the backbone router supporting DiffServ. The performance of this system for provisioning UMTS primary QoS classes is evaluated by computer simulations. The results show that DiffServ can be an effective candidate for the UMTS backbone bearer service.
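One plausible shape for the interworking function mentioned in the abstract is a table mapping the four UMTS traffic classes to DiffServ code points. The mapping below is an illustrative assumption (using standard EF/AF/best-effort DSCPs), not the paper's actual framework:

```python
# Illustrative UMTS traffic class -> DiffServ DSCP mapping (assumed,
# not taken from the paper); DSCP values are the standard code points.
UMTS_TO_DSCP = {
    'conversational': 0b101110,  # EF (Expedited Forwarding), DSCP 46
    'streaming':      0b100010,  # AF41, DSCP 34
    'interactive':    0b010010,  # AF21, DSCP 18
    'background':     0b000000,  # best effort, DSCP 0
}

def mark_packet(traffic_class):
    """Return the DSCP to write into the IP header for this UMTS class."""
    return UMTS_TO_DSCP[traffic_class]
```

A backbone edge router would apply such a marking function on ingress, after which ordinary DiffServ scheduling and buffer management take over.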

Journal Article
TL;DR: An evolutionary approach based on Nearest Neighbour Classifiers (ENNC) is introduced in which no parameters are involved, thus overcoming all the problems derived from the use of the above-mentioned parameters.
Abstract: The design of nearest neighbour classifiers can be seen as the partitioning of the whole domain into different regions that can be directly mapped to a class. The definition of the limits of these regions is the goal of any nearest neighbour based algorithm. These limits can be described by the location and class of a reduced set of prototypes and the nearest neighbour rule. The nearest neighbour rule can be defined by any distance metric, while the set of prototypes is the matter of design. To compute this set of prototypes, most of the algorithms in the literature require some crucial parameters, such as the number of prototypes to use and a smoothing parameter. In this work, an evolutionary approach based on Nearest Neighbour Classifiers (ENNC) is introduced in which no parameters are involved, thus overcoming all the problems derived from the use of the above-mentioned parameters. The algorithm follows a biological metaphor in which each prototype is identified with an animal, and the region of a prototype with the territory of an animal. These animals evolve in a competitive environment with a limited set of resources, so that a population of animals able to survive in the environment emerges, i.e. a suitable set of prototypes for the above classification objectives. The approach has been tested on different domains, showing successful results, both in classification accuracy and in the distribution and number of prototypes achieved.
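The structure that ENNC evolves is the nearest-neighbour rule over a reduced prototype set; the evolutionary part of the paper decides where the prototypes go. A minimal sketch of the rule itself, with Euclidean distance and illustrative names:

```python
import math

def nn_classify(prototypes, point):
    """Nearest-neighbour rule over a reduced prototype set.
    prototypes: list of ((x, y), label); returns the label of the
    prototype closest to point (Euclidean distance)."""
    _, label = min(prototypes, key=lambda p: math.dist(p[0], point))
    return label
```

Each prototype's "territory" in the paper's metaphor is exactly the Voronoi cell induced by this rule: the set of points for which that prototype is the nearest one.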

Journal Article
TL;DR: The architecture of the web tool system --- Trillian, which discovers the interests of users without their interaction and uses them for autonomous searching of related web content, is proposed.
Abstract: The main goal of this research was to investigate the means of intelligent support for retrieval of web documents. We have proposed the architecture of the web tool system Trillian, which discovers the interests of users without their interaction and uses them for autonomous searching of related web content. Discovered pages are suggested to the user. The discovery of user interests is based on analysis of documents visited by the users previously. We have created a module for completely transparent tracking of the user's movement on the web, which logs both visited URLs and contents of web pages. The post-analysis step is based on a variant of the suffix tree clustering algorithm. We primarily focus on the overall Trillian architecture design and the process of discovering topics of interest. We have implemented an experimental prototype of Trillian and evaluated the quality, speed and usefulness of the proposed system. We have shown that clustering is a feasible technique for extraction of interests from web documents. We consider the proposed architecture to be quite promising and suitable for future extensions.

Journal Article
TL;DR: This work presents a language Scc, based on concurrent constraint programming paradigm which is modified in such a way that agents can maintain its local private store, share (read/write) the information in the global store and communicate with other agents (via multi-party or handshake).
Abstract: We present a language Scc for the specification of direct exchange and/or global sharing of information in multi-agent systems. Scc is based on concurrent constraint programming paradigm which we modify in such a way that agents can (i) maintain its local private store, (ii) share (read/write) the information in the global store and (iii) communicate with other agents (via multi-party or handshake). To justify our proposal we compare Scc to a recently proposed language for the exchange of information in multi-agent systems. We also provide an operational semantics of Scc and prove its compositionality.

Journal Article
TL;DR: This work presents competitive policies for a wide range of cost functions, describing the QoS of a video stream, and considers online policies for selective frame discard and analyzes their performance by means of competitive analysis.
Abstract: Many multimedia applications require transmission of streaming video from a server to a client across an internetwork. In many cases loss may be unavoidable due to congestion or the heterogeneous nature of the network. We explore how discard policies can be used to maximize the quality of service (QoS) perceived by the client. In our model the QoS of a video stream is measured in terms of a cost function, which takes into account the discarded frames. In this paper we consider online policies for selective frame discard and analyze their performance by means of competitive analysis, in which the performance of a given online policy is compared with that of an optimal offline policy. We present competitive policies for a wide range of cost functions describing the QoS of a video stream.
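As an illustration of the setting (not one of the paper's actual policies), here is a simple online discard rule for an additive per-frame cost function: when the buffer overflows, evict the frame whose loss costs least. All names and the data layout are assumptions:

```python
import heapq

def online_discard(frames, capacity):
    """Online selective-discard sketch for a bounded buffer.
    frames: iterable of (frame_id, cost_of_losing_it);
    returns (sorted ids of kept frames, total cost of discarded frames)."""
    buffer = []     # min-heap keyed by cost, so the cheapest frame is on top
    lost_cost = 0
    for fid, cost in frames:
        heapq.heappush(buffer, (cost, fid))
        if len(buffer) > capacity:
            dropped_cost, _ = heapq.heappop(buffer)  # evict cheapest frame
            lost_cost += dropped_cost
    kept = sorted(fid for _, fid in buffer)
    return kept, lost_cost
```

Competitive analysis would then compare the accumulated `lost_cost` of such an online policy against that of an offline policy that sees the whole frame sequence in advance.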

Journal Article
TL;DR: This paper proposes a clustering-based method to characterize the communications between processes generated by message-passing applications and proposes a criterion to measure the quality of the obtained partitions.
Abstract: Clusters have become a very cost-effective platform for high-performance computing. Usually these systems become heterogeneous as they grow, due to their incremental capabilities. Many research activities have focused on the problem of task scheduling in heterogeneous systems from the computational point of view. However, an ideal scheduling strategy would also take into account the communication requirements of the applications and the communication bandwidth available in the network. One of the key issues in this strategy is the measurement of the communication requirements for each application. In this paper, we propose a clustering-based method to characterize the communications between processes generated by message-passing applications. This technique provides a model consisting of several partitions of the processes generated by the application. Also, we propose a criterion to measure the quality of the obtained partitions. This approach can be used when a given application is repeatedly executed with different input data. Results show that the proposed method can provide a partition with the highest ratio between the intracluster and the intercluster required communication bandwidth. This partition can be used to map groups of processes to processors in the heterogeneous system.
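The quality criterion suggested by the abstract, the ratio between intracluster and intercluster required communication bandwidth, can be computed as follows for a given partition. The data layout and names are assumptions for illustration:

```python
def partition_ratio(bandwidth, partition):
    """Quality of a partition of communicating processes.
    bandwidth: dict {(p, q): required bandwidth} over process pairs;
    partition: dict process -> cluster id.
    Returns intracluster/intercluster bandwidth ratio (higher is better)."""
    intra = inter = 0.0
    for (p, q), bw in bandwidth.items():
        if partition[p] == partition[q]:
            intra += bw
        else:
            inter += bw
    return intra / inter if inter else float('inf')
```

The partition maximizing this ratio keeps heavily communicating processes in the same cluster, which is what makes it useful for mapping process groups to processors.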

Journal Article
TL;DR: Experiments show that this method is more robust and precise than other typical methods because it can efficiently detect and delete the bad corresponding points, which include both bad locations and false matches.
Abstract: Given three partially overlapping views of a scene from which a set of point or line correspondences has been extracted, the 3D structure and camera motion parameters can be represented by the trifocal tensor, which is the key to many problems of computer vision on three views. Unlike conventional methods, in which the residual value is the only criterion for eliminating outliers with large residuals, we build a Gaussian mixture model, assuming that the residuals corresponding to the inliers come from Gaussian distributions different from that of the residuals of the outliers. The Bayes rule of minimal risk is then employed to classify all the correspondences using the parameters computed from the GMM. Experiments with both synthetic data and real images show that our method is more robust and precise than other typical methods because it can efficiently detect and delete bad corresponding points, which include both bad locations and false matches.
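The Bayes minimal-risk step can be sketched in one dimension: each residual is assigned to the mixture component with the larger posterior. The parameters below are illustrative constants; in the paper they are estimated from the fitted GMM:

```python
import math

def gauss(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify_residuals(residuals, inlier=(0.0, 1.0), outlier=(0.0, 10.0),
                       p_inlier=0.8):
    """Label each residual 'inlier' or 'outlier' under a two-component
    Gaussian mixture; with 0/1 loss the minimal-risk rule is MAP:
    pick the component with the larger weighted likelihood."""
    labels = []
    for r in residuals:
        p_in = p_inlier * gauss(r, *inlier)
        p_out = (1 - p_inlier) * gauss(r, *outlier)
        labels.append('inlier' if p_in >= p_out else 'outlier')
    return labels
```

Because the outlier component is broader, large residuals get a higher posterior under it even with a smaller prior weight, which is why the rule catches both bad locations and false matches.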

Journal Article
TL;DR: This paper describes the Desktop Grid Environment Enabler (DEGREE), a set of Web Services that provides advanced capabilities for grid computing based on the Globus Toolkit and the Grid Resource Broker, a grid portal developed at the University of Lecce.
Abstract: This paper describes our Desktop Grid Environment Enabler (DEGREE), a set of Web Services that provides advanced capabilities for grid computing. DEGREE services are based both on the Globus Toolkit and the Grid Resource Broker, a grid portal developed at the University of Lecce. Trusted users can develop innovative, grid-aware applications that seamlessly access computational resources and services exploiting our Web Services independently of platform and programming language.

Journal Article
TL;DR: GRAB (GRid And Biodiversity), which is a prototype demonstrator illustrating how one particular biodiversity-related task, namely bioclimatic modelling, can be supported in a Globus-based environment, is presented.
Abstract: In the field of biodiversity informatics a wide range of diverse databases and tools already exists. The challenge is to integrate such resources in order to support scientists wishing to explore complex problems of relevance to biodiversity, and to create new resources where necessary. In this paper we outline the relevance of biodiversity informatics requirements to the future development of the GRID, identifying the main issues that need to be addressed in this area. We present GRAB (GRid And Biodiversity), which is a prototype demonstrator illustrating how one particular biodiversity-related task, namely bioclimatic modelling, can be supported in a Globus-based environment. We also describe a much larger-scale GRID application project that is just commencing (BiodiversityWorld) in which a flexible problem-solving environment is to be built for full-scale investigations by scientists working in a number of biodiversity research areas.

Journal Article
TL;DR: This approach capitalizes on the unique reconfiguration capabilities of FPGAs and replaces the affected tile with a functionally equivalent one that does not rely on the faulty component, unlike fixed structure fault-tolerance techniques for ASICs and microprocessors.
Abstract: This paper presents a new approach to on-line fault tolerance via reconfiguration for systems mapped onto field programmable gate arrays (FPGAs). The fault detection, based on a self-checking technique, is introduced at the application level; therefore our approach can detect faults of configurable logic blocks (CLBs) and routing interconnections in the FPGAs concurrently with normal system operation. A grid of tiles is projected onto the FPGA structure and a certain number of spare CLBs is reserved inside every tile. The number of spare CLBs per tile, which will be used as a backup upon detecting any faulty CLB, is estimated in accordance with the probability of failure. After locating the faulty CLBs, the faulty tile is reconfigured to avoid them. Our proposed approach uses a coordination of hardware and software redundancy. We assume that a module external to the FPGA automatically controls the reconfiguration process in addition to the diagnosis process (DIRC); typically this is an embedded microprocessor with some storage for the various tile configurations. We have implemented our approach using a Xilinx Virtex FPGA. The DIRC code is written in the JBits software tools. In response to a component failure, this approach capitalizes on the unique reconfiguration capabilities of FPGAs and replaces the affected tile with a functionally equivalent one that does not rely on the faulty component. Unlike fixed-structure fault-tolerance techniques for ASICs and microprocessors, this approach allows a single physical component to provide redundant backup for several types of components.
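The spare-count estimate "in accordance with the probability of failure" might look like the following under an independent-failure assumption; the binomial model and the names here are assumptions, since the paper's exact model is not given in the abstract:

```python
from math import comb

def spares_needed(n, p_fail, target=1e-6):
    """Smallest spare count s such that the probability of more than s
    of the tile's n CLBs failing stays below target, assuming
    independent per-CLB failure probability p_fail."""
    def p_more_than(s):
        # Binomial tail: P(number of failed CLBs > s)
        return sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
                   for k in range(s + 1, n + 1))
    s = 0
    while p_more_than(s) > target:
        s += 1
    return s
```

Reserving this many spares per tile keeps the chance of a tile becoming unrepairable below the chosen target.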

Journal Article
TL;DR: This paper reports on using an Inductive Logic Programming (ILP) system for learning function-free Horn-clause descriptions of spatial knowledge, showing that an existing relation between two reference systems can be automatically learned by an ILP system, given the proper background knowledge and positive examples.
Abstract: The ability to learn spatial relations is a prerequisite for performing many relevant tasks such as those associated with motion, orientation, navigation, etc. This paper reports on using an Inductive Logic Programming (ILP) system for learning function-free Horn-clause descriptions of spatial knowledge. Its main contribution, however, is to show that an existing relation between two reference systems (the speaker-relative and the absolute) can be automatically learned by an ILP system, given the proper background knowledge and positive examples.

Journal Article
TL;DR: The paper describes a concept of induced rational parametrisation for curves that has applications in computer graphics and geometric modeling and a range of examples is given.
Abstract: The paper describes a concept of induced rational parametrisation for curves. Parametrisations of curves are defined in terms of rational parametrisations of simpler or 'primitive' curves. The technique has applications in computer graphics and geometric modeling. A range of examples is given.
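A classical instance of the kind of rational parametrisation such constructions build on is the unit circle parametrised from the line (stereographic projection); this standard example is shown for orientation, not taken from the paper:

```python
def circle_point(t):
    """Rational parametrisation of the unit circle: as t ranges over the
    reals, (x, y) covers every point of x^2 + y^2 = 1 except (-1, 0)."""
    d = 1.0 + t * t
    return ((1.0 - t * t) / d, 2.0 * t / d)
```

An "induced" parametrisation, in the paper's sense, defines a new curve's parametrisation in terms of a simpler primitive curve like this one.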

Journal Article
TL;DR: A survey of a large number of channel assignment strategies that have been proposed by many researchers over the years are provided and they are categorised into four groups.
Abstract: For many years, telecommunication specialists are working on ways to improve service capacity of cellular networks. Since the radio frequency spectrum allocated for cellular network is limited, it is important to manage this scarce resource carefully in order to meet the ever-growing demand. This paper reviews channel assignment schemes for mobile cellular network. The paper provides a survey of a large number of channel assignment strategies that have been proposed by many researchers over the years. These channel assignment strategies are categorised into four groups. The characteristics and methods for evaluating these channel assignment schemes are also presented.
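A minimal example in the spirit of the fixed-assignment family of surveyed schemes: treat interference between cells as a graph and greedily colour cells with channels. This is a generic illustration, not any specific strategy from the survey:

```python
def assign_channels(cells, interferes):
    """Greedy fixed channel assignment as graph colouring.
    cells: iterable of cell ids; interferes: set of frozensets {a, b}
    of cell pairs that may not share a channel.
    Returns dict cell -> channel number (channels numbered from 0)."""
    channel = {}
    for cell in cells:
        # Channels already taken by interfering, already-assigned cells
        used = {channel[c] for c in channel
                if frozenset((cell, c)) in interferes}
        ch = 0
        while ch in used:
            ch += 1
        channel[cell] = ch
    return channel
```

The number of distinct channels used measures spectrum consumption, which is the quantity such schemes try to minimize under the interference constraints.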

Journal Article
TL;DR: The alternating space hierarchy is refined, and unary (tally) sets are presented separating Σ2-SPACE(s(n)) and Π2-SPACE(s(n)) from Δ2-SPACE(s(n)) as well as from Δ3-SPACE(s(n)).
Abstract: We refine the alternating space hierarchy by separating the classes Σκ-SPACE(s(n)) and Πκ-SPACE(s(n)) from Δκ-SPACE(s(n)) as well as from Δκ+1-SPACE(s(n)), for each s(n) ∈ Ω(log log n) ∩ o(log n) and κ ≥ 2. We also present unary (tally) sets separating Σ2-SPACE(s(n)) and Π2-SPACE(s(n)) from Δ2-SPACE(s(n)) as well as from Δ3-SPACE(s(n)).

Journal Article
TL;DR: Evidence is given that for some well-known interconnection networks there are efficient deadlock-free multidimensional interval routing schemes (DFMIRS) despite the provable non-existence of efficient deterministic shortest path interval routing schemes (IRS).
Abstract: We present deadlock-free packet/wormhole routing algorithms based on multidimensional interval schemes for certain hypercube-related multiprocessor interconnection networks and give their analysis in terms of the compactness (i.e. the maximum number of intervals per link) and the buffer-size (i.e. the maximum number of buffers per node/link). The issue of simultaneously reducing the compactness and the buffer-size is fundamental, worth investigating, and of practical importance, since interval routing and wormhole routing have been realized industrially in INMOS Transputer C104 router chips. In this paper we give evidence that for some well-known interconnection networks there are efficient deadlock-free multidimensional interval routing schemes (DFMIRS) despite the provable non-existence of efficient deterministic shortest path interval routing schemes (IRS). For $d$-dimensional hypercubes (tori) we present a $d$-dimensional DFMIRS of compactness $1$ and size $2$ (of compactness $1$ and size $4$), while for shortest path IRS we can achieve the reduction to $2$ (to at most $5$) buffers per node with compactness $2^{d-1}$ (with compactness $O(n^{d-1})$). For $d$-dimensional generalized butterflies we give a $d$-dimensional DFMIRS with compactness $2$ and size $3$, while each shortest path IRS has compactness at least superpolynomial in $d$. For $d$-dimensional cube-connected cycles we show a $d$-dimensional DFMIRS with compactness and size polynomial in $d$, while each shortest path IRS needs compactness at least $2^{d/2}$. We also present a nonconstant lower bound (of the form $\sqrt{d}$) on the size of deadlock-free packet routing (based on acyclic orientation covering) for a set of monotone routing paths on $d$-dimensional hypercubes.

Journal Article
TL;DR: Very sharp separation results are presented for Turing machine sublogarithmic space complexity classes, of the form: for any, arbitrarily slow growing, recursive nondecreasing and unbounded function s there is a k ∈ N and a unary language L such that L ∈ SPACE(s(n) + k) \ SPACE(s(n - 1)).
Abstract: We present very sharp separation results for Turing machine sublogarithmic space complexity classes, which are of the form: for any, arbitrarily slow growing, recursive nondecreasing and unbounded function s there is a k ∈ N and a unary language L such that L ∈ SPACE(s(n) + k) \ SPACE(s(n - 1)). For a binary L the supposition lim s = ∞ is sufficient. The witness languages differ from each language in the lower classes on infinitely many words. We use so-called demon (Turing) machines, where the tape limit is given automatically without any construction. The results hold for deterministic and nondeterministic demon machines, and also for alternating demon machines with a constant number of alternations as well as with an unlimited number of alternations. The sharpness of the results is ensured by using a very sensitive measure of the space complexity of Turing computations, defined as the amount of tape required by the simulation (of the computation in question) on a fixed universal machine. As a proof tool we use a succinct diagonalization method.