
Showing papers in "Journal of Universal Computer Science in 2008"


Journal Article
TL;DR: This paper sheds light on the role of QoE by means of a common framework that covers the whole communications ecosystem, and outlines a research agenda for holistic ecosystem analysis.
Abstract: The communications ecosystem covers a huge area, from technical issues to business models and human behaviour. Due to this extreme diversity, various communities need to talk to each other, each of them using its own language. Engineers talk about network performance and quality of service, business people talk about average revenue per user and customer churn, while behavioural scientists talk about happiness and experiences. Thus, everyone who wants to understand, or even analyze, the whole ecosystem has to deal with all of these diverse issues. In addition to the apparent communication problems, the main challenges of ecosystem analysis are to realistically model human behaviour and to efficiently combine the models developed for different domains. A central concept in solving these problems is quality of experience (QoE). This paper sheds light on the role of QoE by means of a common framework that covers the whole communications ecosystem. Additionally, a research agenda for a holistic ecosystem analysis is outlined.

197 citations


Journal Article
TL;DR: A model that includes culture as one of the factors that influence mobile phone adoption and usage is proposed, which represents the influence of mediating factors and determining factors on actual mobile phone use.
Abstract: In human-computer interaction and computing, mobile phone usage is mostly addressed from a feature-driven perspective, i.e. which features a certain user group uses, and/or a usability perspective, i.e. how they interact with these features. Although the feature-driven and usability focus carry value, they do not give the full picture. There is also an alternative, wider perspective: mobile phone use is influenced by demographic, social, cultural, and contextual factors that complicate the understanding of mobile phone usage. Drawing on concepts and models from sociology, computer-supported cooperative work, human-computer interaction and marketing, we researched the influence of culture on mobile phone adoption using interviews and two surveys. The contribution of this research is a model that includes culture as one of the factors that influence mobile phone adoption and usage. The proposed model represents the influence of mediating factors and determining factors on actual mobile phone use. The proposed model has been evaluated from both a qualitative and a quantitative perspective.

106 citations


Journal Article
TL;DR: This work introduces the idea of community-based groupware (CBG), in which groupware is organized around groups of people working independently, rather than shared applications, documents, or virtual places, and argues that this way of organizing groupware supports informal collaboration better than other existing approaches.
Abstract: Shared-workspace groupware has not become common in the workplace, despite many positive results from research labs. One reason for this lack of success is that most shared workspace systems are designed around the idea of planned, formal collaboration sessions - yet much of the collaboration that occurs in a co-located work group is informal and opportunistic. To support informal collaboration, groupware must be designed and built differently. We introduce the idea of community-based groupware (CBG), in which groupware is organized around groups of people working independently, rather than shared applications, documents, or virtual places. Community-based groupware provides support for three things that are fundamental to informal collaboration: awareness of others and their individual work, lightweight means for initiating interactions, and the ability to move into closely-coupled collaboration when necessary. We demonstrate three prototypes that illustrate the ideas behind CBG, and argue that this way of organizing groupware supports informal collaboration better than other existing approaches.

105 citations


Journal Article
TL;DR: This paper proposes a new technique for the validation of document binarization algorithms that is simple to implement and can be applied to any binarization algorithm, since it requires nothing more than the binarization stage.
Abstract: Document binarization has been an active research area for many years. The choice of the most appropriate binarization algorithm for each case has proved to be a very difficult procedure in itself. In this paper, we propose a new technique for the validation of document binarization algorithms. Our method is simple in its implementation and can be performed on any binarization algorithm since it doesn't require anything more than the binarization stage. As a demonstration of the proposed technique, we use the case of degraded historical documents. We then apply the proposed technique to 30 binarization algorithms. Experimental results and conclusions are presented.

99 citations


Journal Article
TL;DR: A methodological approach is proposed, based on a set of notations of both a graphical and a textual nature, to support the joint modeling of collaborative and interactive issues of groupware systems.
Abstract: The design of groupware systems is an increasingly demanding task that is difficult to tackle. There are no proposals that support the joint modeling of the collaborative and interactive aspects of this kind of system, that is, proposals that allow designing the presentation layer of these applications. To address this gap, we propose a methodological approach based on a set of notations of both a graphical and a textual nature.

75 citations


Journal ArticleDOI
TL;DR: The language AsmetaL, a concrete textual notation for writing Abstract State Machine (ASM) specifications, is introduced together with AsmetaS, a general-purpose simulation engine for ASM specifications, and AsmetaL encodings of specifications of increasing complexity are provided.
Abstract: In this paper, we present a concrete textual notation, called AsmetaL, and a general-purpose simulation engine, called AsmetaS, for Abstract State Machine (ASM) specifications. They have been developed as part of the ASMETA (ASMs mETAmodelling) toolset, which is a set of tools for ASMs based on the metamodelling approach of Model-driven Engineering. We briefly present the ASMETA framework, and we discuss how the language and the simulator have been developed exploiting the advantages offered by the metamodelling approach. We introduce the language AsmetaL used to write ASM specifications, and we provide the AsmetaL encoding of ASM specifications of increasing complexity. We explain the AsmetaS architecture, its kernel engine, and how the simulator works within the ASMETA toolset. We discuss the features currently supported by the simulator and how it has been validated.

73 citations


Journal Article
TL;DR: This paper addresses a WSN layout problem instance in which full coverage is treated as a constraint while the other two objectives are optimized using a multi- objective approach, and employs a set of multi-objective optimization algorithms for this problem.
Abstract: Wireless Sensor Networks (WSN) allow, thanks to the use of small wireless devices known as sensor nodes, the monitorization of wide and remote areas with precision and liveness unseen to the date without the intervention of a human operator. For many WSN applications it is fundamental to achieve full coverage of the terrain monitored, known as sensor field. The next major concerns are the energetic efficiency of the network, in order to increase its lifetime, and having the minimum possible number of sensor nodes, in order to reduce the network cost. The task of placing the sensor nodes while addressing these objectives is known as WSN layout problem. In this paper we address a WSN layout problem instance in which full coverage is treated as a constraint while the other two objectives are optimized using a multi- objective approach. We employ a set of multi-objective optimization algorithms for this problem where we define the energy efficiency and the number of nodes as the independent optimization objectives. Our results prove the efficiency of multi-objective metaheuristics to solve this kind of problem and encourage further research on more realistic instances and more constrained scenarios.
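The multi-objective approach described above compares candidate layouts by Pareto dominance over the two objectives. A minimal sketch of that comparison (the layouts and objective values below are hypothetical, not from the paper):

```python
def dominates(a, b):
    """True if layout a Pareto-dominates b, with both objectives minimized:
    energy consumed and number of sensor nodes."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (energy, node count) pairs."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical candidate layouts as (energy, node count) pairs:
layouts = [(5.0, 12), (4.0, 15), (6.0, 10), (7.0, 14)]
print(pareto_front(layouts))  # → [(5.0, 12), (4.0, 15), (6.0, 10)]
```

The coverage constraint would be enforced separately, by discarding any layout that leaves part of the sensor field unmonitored before it enters the comparison.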

72 citations


Journal ArticleDOI
TL;DR: This paper addresses learning design in adaptive educational hypermedia systems.
Abstract: Berlanga, A. J., & Garcia, F. J. (2008). Learning Design in Adaptive Educational Hypermedia Systems. Journal of Universal Computer Science 14(22), 3627-3647.

60 citations


Journal Article
TL;DR: This paper is focused only on describing the foundations and structure of IQM3, which is based on staged CMMI, and a Methodology for the Assessment and Improvement of Information Quality Management (MAIMIQ), which uses IQM3 as a reference model for the assessment and for the improvement goal of an IMP.
Abstract: In order to enhance their global business performance, organizations must be careful with the quality of their information, since it is one of their main assets. Analogies with quality management of classical products demonstrate that Information Quality is also preferably attainable through management, by integrating corresponding Information Quality management activities into the organizational processes. To achieve this goal we have developed an Information Quality Management Framework (IQMF). It is articulated around the concept of Information Management Process (IMP), based on the idea of Software Process. An IMP is a combination of two sub-processes: the first, a production process, aimed at manufacturing information from raw data, and the second at adequately managing the required Information Quality level of the first. IQMF consists of two main components: an Information Quality Management Maturity Model (IQM3), and a Methodology for the Assessment and Improvement of Information Quality Management (MAIMIQ), which uses IQM3 as a reference model for the assessment and for the improvement goal of an IMP. Therefore, as a result of an assessment with MAIMIQ, an IMP can be said to have reached one of the maturity levels described in IQM3, and, as an improvement goal, it would be desirable to achieve a higher maturity level. Since an Information System can be seen as a set of several IMPs sharing several resources, it is possible to improve the Information Quality level of the entire Information System by improving the most critical IMPs. This paper is focused only on describing the foundations and structure of IQM3, which is based on staged CMMI.

58 citations


Journal Article
TL;DR: AmOS implements a computation model that supports highly dynamic behaviour adaptation to changing contexts; its first-class closures, multimethods and contexts make it a very simple and elegant paradigm for context-oriented programming.
Abstract: In this paper we present AmOS, the Ambient Object System that underlies the Ambience programming language. AmOS implements a computation model that supports highly dynamic behaviour adaptation to changing contexts. Apart from being purely object-based, AmOS features first-class closures, multimethods and contexts. Dynamic method scoping through a subjective dispatch mechanism is at the heart of our approach. These features make AmOS a very simple and elegant paradigm for context-oriented programming.

57 citations


Journal Article
TL;DR: A new measure of semantic centrality, i.e., the power of semantic bridging in a semantic peer-to-peer environment, is proposed to build semantically cohesive user subgroups so that semantic affinities between peers can be computed.
Abstract: Query transformation is a serious hurdle in semantic peer-to-peer environments. For interoperability between peers, queries sent from a source peer have to be efficiently transformed to be understandable to the potential peers processing them. However, the problem is that the transformed queries might lose some information from the original one as they continuously travel along peer-to-peer networks. We mainly consider two factors: i) the number of transformations, and ii) the quality of ontology alignment. In this paper, we propose a new measure of semantic centrality, i.e., the power of semantic bridging in a semantic peer-to-peer environment. Thereby, we want to build semantically cohesive user subgroups, so that semantic affinities between peers can be computed. Then, given a query, we find a path of peers for optimal interoperability between a source peer and a target one, i.e., minimizing the information loss caused by transformation. We show an example of retrieving image resources annotated in a peer-to-peer environment by using query transformation based on semantic centrality.

Journal Article
TL;DR: In this paper, the security of optimistic fair exchange in a multi-user setting was studied and a generic construction was proposed based on one-way functions in the random oracle model and trapdoor one-way permutations in the standard model.
Abstract: This paper addresses the security of optimistic fair exchange in a multi-user setting. While the security of public key encryption and public key signature schemes in a single-user setting guarantees the security in a multi-user setting, we show that the situation is different in the optimistic fair exchange. First, we show how to break, in the multi-user setting, an optimistic fair exchange scheme provably secure in the single-user setting. This example separates the security of optimistic fair exchange between the single-user setting and the multi-user setting. We then define the formal security model of optimistic fair exchange in the multi-user setting, which is the first complete security model of optimistic fair exchange in the multi-user setting. We prove the existence of a generic construction meeting our multi-user security based on one-way functions in the random oracle model and trapdoor one-way permutations in the standard model. Finally, we revisit two well-known methodologies of optimistic fair exchange, which are based on the verifiably encrypted signature and the sequential two-party multisignature, respectively. Our result shows that these paradigms remain valid in the multi-user setting.

Journal Article
TL;DR: Au et al. as mentioned in this paper proposed the first certificateless public key encryption (CL-PKE) scheme secure against malicious key generation center (KGC) attack, with proof in the standard model.
Abstract: Recently, Au et al. (Au et al. 2007) pointed out a seemingly neglected security concern for certificateless public key encryption (CL-PKE) schemes, where a malicious key generation center (KGC) can compromise the confidentiality of the messages by embedding extra trapdoors in the system parameter. Although some schemes are secure against such an attack, they require random oracles to prove the security. In this paper, we first show that two existing CL-PKE schemes without random oracles are not secure against a malicious KGC; we then propose the first CL-PKE scheme secure against malicious KGC attacks, with a proof in the standard model.

Journal Article
TL;DR: It is proved that the computable multi-functions on multi-represented sets are closed under flowchart programming, allowing programmers to avoid the "use of 0s and 1s" in programming to a large extent and to think in terms of abstract data like real numbers or continuous real functions.
Abstract: In the representation approach to computable analysis (TTE) (Grz55, KW85, Wei00), abstract data like rational numbers, real numbers, compact sets or continuous real functions are represented by finite or infinite sequences (Σ*, Σω) of symbols, which serve as concrete names. A function on abstract data is called computable if it can be realized by a computable function on names. It is the purpose of this article to justify and generalize methods which are already used informally in computable analysis for proving computability. As a simple formalization of informal programming we consider flowcharts with indirect addressing. Using the fact that every computable function on Σω can be generated by a monotone and computable function on Σ*, we prove that the computable functions on Σω are closed under flowchart programming. We introduce generalized multi-representations, where names can be from general sets, and define realization of multi-functions by multi-functions. We prove that the function computed by a flowchart over realized functions is realized by the function computed by the corresponding flowchart over realizing functions. As a consequence, data from abstract sets on which computability is well understood can be used for writing realizing flowcharts of computable functions. In particular, the computable multi-functions on multi-represented sets are closed under flowchart programming. These results allow us to avoid the "use of 0s and 1s" in programming to a large extent and to think in terms of abstract data like real numbers or continuous real functions. Finally we generalize effective exponentiation to multi-functions on multi-represented sets and study two different kinds of λ-abstraction. The results allow simpler and more formalized proofs in computable analysis.

Journal Article
TL;DR: The concept of Service-Oriented Mobile Unit (SOMU) is introduced: an autonomous software infrastructure running on a computing device that can be integrated into ad-hoc networks and can interoperate with other mobile units in ad-hoc collaboration scenarios.
Abstract: Advances in wireless communication and mobile computing extend collaboration scenarios. Mobile workers using computing devices are currently able to collaborate in order to carry out productive, educational or social activities. Typically, collaborative applications intended to support mobile workers involve some type of centralized data or services, because they are designed to work on infrastructure-supported wireless networks. This centralization constrains the collaboration capabilities in ad-hoc communication cases. This paper introduces the concept of Service-Oriented Mobile Unit (SOMU) in order to reduce such limitations. SOMU is an autonomous software infrastructure running on a computing device; it can be integrated into ad-hoc networks and it can interoperate with other mobile units in ad-hoc collaboration scenarios. In addition, the paper presents the challenges faced when designing and implementing the SOMU platform. It also describes an application developed on SOMU.

Journal Article
TL;DR: The use and validation of the model-based tool is illustrated in the preparation of the automatic derivation of the JUnit framework and a J2ME games product line.
Abstract: In this paper, we present a model-based tool for product derivation. Our tool is centered on the definition of three models (feature, architecture and configuration models) which enable the automatic instantiation of software product lines (SPLs) or frameworks. The Eclipse platform and EMF technology are used as the base for the implementation of our tool. A set of specific Java annotations is also defined to allow many of our models to be generated automatically from existing implementations of SPL architectures. We illustrate the use and validation of our tool in the preparation of the automatic derivation of the JUnit framework and a J2ME games product line.

Journal Article
TL;DR: The investigation in this paper reveals a striking similarity of the refinement concepts used in Abstract State Machines (ASM) based system development and Feature-Oriented Programming (FOP) of software product lines.
Abstract: A goal of software product lines is the economical assembly of programs in a family of programs. In this paper, we explore how theorems about program properties may be integrated into feature-based development of software product lines. As a case study, we analyze an existing Java/JVM compilation correctness proof for defining, interpreting, compiling, and executing bytecode for the Java language. We show how features modularize program source, theorem statements and their proofs. By composing features, the source code, theorem statements and proofs for a program are assembled. The investigation in this paper reveals a striking similarity of the refinement concepts used in Abstract State Machines (ASM) based system development and Feature-Oriented Programming (FOP) of software product lines. We suggest to exploit this observation for a fruitful interaction of researchers in the two communities.

Journal Article
TL;DR: This paper surveys a broad range of relevant research, describing and contrasting the approaches of each using a uniform terminological and conceptual vocabulary, and identifies and discusses three commonly advocated principles within this work.
Abstract: Software adaptation techniques appear in many disparate areas of research literature, and under many guises. This paper enables a clear and uniform understanding of the related research, in three ways. Firstly, it surveys a broad range of relevant research, describing and contrasting the approaches of each using a uniform terminological and conceptual vocabulary. Secondly, it identifies and discusses three commonly advocated principles within this work: component models, first-class connection and loose coupling. Thirdly, it identifies and compares the various modularisation strategies employed by the surveyed work.

Journal Article
TL;DR: This paper presents a fast entropy-based segmentation method for generating high-quality binarized images of documents with back-to-front interference.
Abstract: "Back-to-front interference", "bleeding" and "show-through" are the names given to the phenomenon found whenever documents are written on both sides of translucent paper and the print on one side is visible on the other. The binarization of documents with back-to-front interference using standard algorithms yields unreadable documents. This paper presents a fast entropy-based segmentation method for generating high-quality binarized images of documents with back-to-front interference.
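The abstract does not detail the authors' segmentation method, but entropy-based thresholding in general picks the gray level that maximizes the combined entropy of the two resulting classes. A generic Kapur-style sketch (not the paper's algorithm; the histogram is illustrative):

```python
import math

def entropy_threshold(hist):
    """Maximum-entropy (Kapur-style) threshold for a grayscale histogram.

    hist[i] = number of pixels with intensity i. Returns the threshold t
    that maximizes the summed entropies of the background (<= t) and
    foreground (> t) classes.
    """
    total = sum(hist)
    probs = [h / total for h in hist]

    def class_entropy(ps):
        mass = sum(ps)
        if mass == 0:
            return 0.0
        return -sum((p / mass) * math.log(p / mass) for p in ps if p > 0)

    best_t, best_h = 0, float("-inf")
    for t in range(len(hist) - 1):
        h = class_entropy(probs[: t + 1]) + class_entropy(probs[t + 1 :])
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

For a bimodal histogram (ink vs. paper), the maximizing t falls in the valley between the two peaks, which is what makes the approach attractive for documents where interference shifts the background distribution.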

Journal Article
TL;DR: This paper describes the design and development of an application for analyzing expert comments on magnetic resonance images (MRI) diagnoses by applying a text mining method, and proposes a calculation of significant co-occurrences of diseases and defined regions of the human body in order to identify possible health risks.
Abstract: Most information in hospitals is still only available in text format, and the amount of this data is increasing immensely. Consequently, text mining is an essential area of medical informatics. With the aid of statistical and linguistic procedures, text mining software attempts to dig out (mine) information from plain text. The aim is to transform data into information. However, for the efficient support of end users, facets of computer science alone are insufficient; the next step consists of making the information both usable and useful. Consequently, aspects of cognitive psychology must be taken into account in order to enable the transformation of information into knowledge for the end users. In this paper we describe the design and development of an application for analyzing expert comments on magnetic resonance images (MRI) diagnoses by applying a text mining method in order to scan them for regional correlations. Consequently, we propose a calculation of significant co-occurrences of diseases and defined regions of the human body, in order to identify possible health risks.

Journal Article
TL;DR: This paper presents a specification framework for collaborative environments, highlighting the interplay of task specifications and domain models; the framework includes CTML, a formal specification language with a precisely defined syntax and semantics.
Abstract: A precise model of the behavioral dynamics is a necessary precondition for the development of collaborative environments. In this paper we present a specification framework for collaborative environments. In particular we highlight the interplay of task specifications and domain models. The framework consists of two components: A formal specification language (called CTML) and the tool CTML Editor and Simulator. CTML has a precisely defined syntax and semantics and is designed to model actors, roles, collaborative tasks and their dependency and impact on the domain. The CTML Editor and Simulator is an Eclipse IDE for the interactive creation and simulation of CTML specifications.

Journal Article
TL;DR: The following paper introduces the work conducted to create a relative virtual mouse based on the interpretation of head movements and face gestures through a low-cost camera and the optical flow of the images.
Abstract: The following paper introduces the work conducted to create a relative virtual mouse based on the interpretation of head movements and face gestures through a low-cost camera and the optical flow of the images. This virtual device is designed specifically as an alternative non-contact pointer for people with mobility impairments in the upper extremities and reduced head control. The proposed virtual device was compared with a conventional mouse, a touchpad and a digital joystick. Validation results show performance close to a digital joystick but far from a conventional mouse.

Journal Article
TL;DR: In a component-based development process, the selection of components is an activity that takes place over multiple lifecycle phases, spanning from requirement specification through design to implementation.
Abstract: In a component-based development process the selection of components is an activity that takes place over multiple lifecycle phases that span from requirement specifications through design to implementation.

Journal Article
TL;DR: A new method to recover both the inner and the outer key used in HMAC when instantiated with a concrete hash function by observing text/MAC pairs is presented, and the first theoretical full key recovery attack on NMAC-MD5 is presented.
Abstract: Message Authentication Code (MAC) algorithms can provide cryptographically secure authentication services. One of the most popular algorithms in commercial applications is HMAC based on the hash functions MD5 or SHA-1. In the light of new collision search methods for members of the MD4 family including SHA-1, the security of HMAC based on these hash functions is reconsidered. We present a new method to recover both the inner and the outer key used in HMAC when instantiated with a concrete hash function by observing text/MAC pairs. In addition to collisions, other non-random properties of the hash function are also used in this new attack. Among the examples of the proposed method, the first theoretical full key recovery attack on NMAC-MD5 is presented. Other examples are distinguishing, forgery and partial or full key recovery attacks on NMAC/HMAC-SHA-1 with a reduced number of steps (up to 62 out of 80). This information about the new, reduced security margin serves as an input to the selection of algorithms for authentication purposes.
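The inner and outer keys the attack targets come directly from HMAC's definition, H((K ⊕ opad) ‖ H((K ⊕ ipad) ‖ m)): recovering both derived keys lets an attacker forge MACs without the original key. A minimal sketch of the standard construction, instantiated with MD5:

```python
import hashlib

def hmac_md5(key: bytes, msg: bytes) -> bytes:
    """HMAC built from its definition: H((K ^ opad) || H((K ^ ipad) || msg))."""
    block_size = 64  # MD5 processes 64-byte blocks
    if len(key) > block_size:
        key = hashlib.md5(key).digest()       # long keys are hashed first
    key = key.ljust(block_size, b"\x00")      # then zero-padded to a full block
    inner_key = bytes(b ^ 0x36 for b in key)  # K ^ ipad
    outer_key = bytes(b ^ 0x5C for b in key)  # K ^ opad
    inner = hashlib.md5(inner_key + msg).digest()
    return hashlib.md5(outer_key + inner).digest()
```

This matches Python's standard `hmac.new(key, msg, hashlib.md5).digest()`; the nesting is why the paper can speak of the inner and the outer key as separate recovery targets.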

Journal Article
TL;DR: This work presents an alternative interface that allows users to perceive new sensations in virtual environments and describes various simple yet effective techniques that allow eyetracking devices to enhance the three-dimensional visualization capabilities of current displays.
Abstract: We present an alternative interface that allows users to perceive new sensations in virtual environments. Gaze-based interaction in virtual environments creates the feeling of controlling objects with the mind, arguably translating into a more intense immersion sensation. Additionally, it is also free of some of the most cumbersome aspects of interacting in virtual worlds. By incorporating a real-time physics engine, the sensation of moving something real is further accentuated. We also describe various simple yet effective techniques that allow eyetracking devices to enhance the three-dimensional visualization capabilities of current displays. Some of these techniques have the additional advantage of freeing the mouse from most navigation tasks. This work focuses on the study of existing techniques, a detailed description of the implemented interface and the evaluation (both objective and subjective) of the interface. Given that appropriate filtering of the data from the eye tracker used is a key aspect for the correct functioning of the interface, we will also discuss that aspect in depth.

Journal Article
TL;DR: A new empirical method to approximate the Quality of Experience automatically from passive network measurements is proposed, its pros and cons are compared with those of the usual techniques, and a notion of sensitiveness is proposed to compare correlations across different applications.
Abstract: Quality of Experience (QoE) is a promising method to take into account the users' needs in designing, monitoring and managing networks. However, there is a challenge in finding a quick and simple way to estimate the QoE due to the diversity of needs, habits and customs. We propose a new empirical method to approximate it automatically from passive network measurements, and we compare its pros and cons with the usual techniques. We apply it, as an example, to ADSL traffic traces to estimate the dependence of QoE on the loss rate for the most-used applications. We analyze more precisely the correlations between packet losses and some traffic characteristics of TCP connections: the durations, the sizes and the inter-arrival times. We define different thresholds on the loss rate for network management, and we propose a notion of sensitiveness to compare these correlations across different applications.

Journal Article
TL;DR: Work on (1) service behavior mediation, (2) service discovery, and (3) service composition are summarized, showing that the corresponding solutions can be described as variations of a fundamental abstract processing model—the Virtual Provider.
Abstract: We give a survey on work we did in the past where we have successfully applied the ASM methodology to provide abstract models for a number of problem areas that are commonly found in Service Oriented Architectures (SOA). In particular, we summarize our work on (1) service behavior mediation, (2) service discovery, and (3) service composition, showing that the corresponding solutions can be described as variations of a fundamental abstract processing model—the Virtual Provider.

Journal Article
TL;DR: ZRTP is a protocol designed to set up a shared secret between two communication parties, which is subsequently used to secure the media stream of a VoIP connection; its Diffie-Hellman key exchange is inherently vulnerable to active Man-in-the-Middle (MitM) attacks.
Abstract: ZRTP is a protocol designed to set up a shared secret between two communication parties which is subsequently used to secure the media stream (i.e. the audio data) of a VoIP connection. It uses Diffie-Hellman (DH) key exchange to agree upon a session key, which is inherently vulnerable to active Man-in-the-Middle (MitM) attacks. Therefore ZRTP introduces some proven methods to detect such attacks. The most important measure is a so-called Short Authentication String (SAS). This is a set of characters that is derived essentially from the public values of the Diffie-Hellman key exchange and displayed to the end users for reading out and comparing over the phone. If the SAS on the caller's and the callee's side match, there is a high probability that no MitM attack is going on. Furthermore, ZRTP offers a form of key continuity by caching key material from previous sessions for use in the next call. In order to prevent a MitM from manipulating the Diffie-Hellman key exchange in such a way that both partners get the same SAS although different shared keys were negotiated, ZRTP uses a hash commitment for the public DH value. Despite these measures, a Relay Attack (also known as a Mafia Fraud Attack or Chess Grandmaster Attack) is still possible. We present a practical implementation of such an attack, discuss its characteristics and limitations, and show that the attack works only in certain scenarios.
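The SAS check works because both endpoints hash the same DH public values, so a MitM who substitutes his own key shares ends up producing two different strings at the two ends. A toy illustration of the idea (a deliberate simplification, not the actual ZRTP SAS derivation or rendering):

```python
import hashlib

def short_auth_string(pub_a: bytes, pub_b: bytes, n_chars: int = 4) -> str:
    """Derive a short human-comparable string from two DH public values.

    Both parties feed in the public values they saw; identical views of the
    exchange yield identical strings, and a substituted key changes the output.
    """
    digest = hashlib.sha256(pub_a + pub_b).digest()
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"  # base32-style, easy to read aloud
    return "".join(alphabet[b % 32] for b in digest[:n_chars])
```

Reading four such characters over the phone is enough to make an undetected substitution of both DH halves very unlikely, which is why the paper's relay attack targets the comparison procedure itself rather than the derivation.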

Journal ArticleDOI
TL;DR: This work has been partially funded by the Spanish Ministry of Science and Education, and by U.A.M-Grupo Santander (project Itech Calli), and is part of the UAM-SOLUZIONA AmI Laboratory research program.
Abstract: This work has been partially funded by the Spanish Ministry of Science and Education, (project TIN2004-03140) and by U.A.M-Grupo Santander (project Itech Calli), and is part of the UAM-SOLUZIONA AmI Laboratory research program. Special thanks to Eran Eden and Manuel Freire for their recommendations.

Journal Article
TL;DR: Comparisons with the multi-phase Chan-Vese method show that the proposed multi-layer level set method requires less computation and converges much faster.
Abstract: In this paper, a new multi-layer level set method is proposed for multi-phase image segmentation. The proposed method is based on the conception of image layers and an improved numerical solution of the bimodal Chan-Vese model. One level set function is employed for curve evolution in a hierarchical form over sequential image layers. In addition, a new initialization method and a more efficient computational method for the signed distance function are introduced. Moreover, the evolving curve can automatically stop on true boundaries in a single image layer according to a termination criterion based on the length change of the evolving curve. In particular, an adaptive improvement scheme is designed to speed up the curve evolution process in a queue of sequential image layers, and the detection of the background image layer is used to confirm the termination of the whole multi-layer level set evolution procedure. Finally, numerical experiments on synthetic and real images demonstrate the efficiency and robustness of our method. Comparisons with the multi-phase Chan-Vese method also show that our method requires less computation and converges much faster.