
Showing papers in "Acta Cybernetica in 2017"


Journal ArticleDOI
TL;DR: This paper surveys work on the degree of ambiguity and on various nondeterminism measures for finite automata, focusing on state complexity comparisons between NFAs with quantified ambiguity or nondeterminism.
Abstract: The degree of ambiguity counts the number of accepting computations of a nondeterministic finite automaton (NFA) on a given input. Alternatively, the nondeterminism of an NFA can be measured by counting the amount of guessing in a single computation or the number of leaves of the computation tree on a given input. This paper surveys work on the degree of ambiguity and on various nondeterminism measures for finite automata. In particular, we focus on state complexity comparisons between NFAs with quantified ambiguity or nondeterminism.
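
A minimal sketch of the degree-of-ambiguity computation described above, counting accepting computations by dynamic programming over path counts. The example NFA (which nondeterministically marks an occurrence of 'a') is a hypothetical illustration, not taken from the survey:

```python
from collections import Counter

def degree_of_ambiguity(delta, initial, finals, word):
    """Number of accepting computations of an NFA on `word`.

    delta maps (state, symbol) to a set of successor states.
    """
    counts = Counter({initial: 1})          # computations reaching each state
    for symbol in word:
        nxt = Counter()
        for state, n in counts.items():
            for succ in delta.get((state, symbol), ()):
                nxt[succ] += n              # each successor inherits the paths
        counts = nxt
    return sum(n for state, n in counts.items() if state in finals)

# Hypothetical NFA over {a, b} accepting words containing an 'a':
# it may "guess" any occurrence of 'a' as the witness.
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0},
         (1, 'a'): {1}, (1, 'b'): {1}}
print(degree_of_ambiguity(delta, 0, {1}, "aba"))  # 2: either 'a' can be guessed
```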

18 citations


Journal ArticleDOI
TL;DR: This healthcare interoperability solution, service architecture and corresponding software engineering technique bridges technology barriers among the above-mentioned healthcare segments and seeks to bridge all major healthcare architecture frameworks including classical, telemedicine and eHealth IoT applications and appliances.
Abstract: Here, we outline the design, implementation, testing and evaluation phases of our bi-directional semantic and syntactic interoperability framework interconnecting traditional healthcare, industrial telemedicine and IoT wearable eHealth domains. Specifically, our study demonstrates system interoperability among a hospital information system, an industrial telemedicine instrument and an eHealth smart wearable consumer electronic product through the Open Telemedicine Interoperability Hub (OTI-Hub) embedded in a hybrid Cloud architecture. The novelty of this study is the handling of Internet-of-Things smart healthcare devices and traditional healthcare devices through the same Cloud-based solution. This healthcare interoperability solution, service architecture and corresponding software engineering technique bridges technology barriers among the above-mentioned healthcare segments. Standard interoperability solutions exist and have already been described in the related literature, but those built for traditional healthcare are not applicable to IoT healthcare devices, and vice versa. Our study goes beyond isolated, individual interoperability solutions and seeks to bridge all major healthcare architecture frameworks, including classical, telemedicine and eHealth IoT applications and appliances. This study presents the results of a two-year OTI-Hub Research Program. These experiments are manifestations of a trilateral cooperation among the University of Debrecen, Faculty of Informatics, the Semmelweis University 2nd Department of Paediatrics Pulmonology Division and an international hospital information system service provider.

14 citations


Journal ArticleDOI
TL;DR: It is shown that energy functions form a *-continuous Kleene ω-algebra, as an application of a general result that finitely additive, locally *-closed and T-continuous functions on complete lattices form *-continuous Kleene ω-algebras.
Abstract: Energy and resource management problems are important in areas such as embedded systems or autonomous systems. They are concerned with the question whether a given system admits infinite schedules during which certain tasks can be repeatedly accomplished and the system never runs out of energy (or other resources). In order to develop a general theory of energy problems, we introduce energy automata: finite automata whose transitions are labeled with energy functions which specify how energy values change from one system state to another. We show that energy functions form a *-continuous Kleene ω-algebra, as an application of a general result that finitely additive, locally *-closed and T-continuous functions on complete lattices form *-continuous Kleene ω-algebras. This makes it possible to solve energy problems in energy automata in a generic, algebraic way. In order to put our work in context, we also review extensions of energy problems to higher dimensions and to games.
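
The algebraic machinery is beyond a short snippet, but the underlying objects are concrete. A toy sketch, with hypothetical charge/spend energy functions, of testing whether one cycle of an energy automaton can be iterated without the energy ever becoming insufficient (a bounded heuristic check, not the paper's algebraic decision procedure):

```python
def charge(amount):
    # Gain `amount` of energy unconditionally.
    return lambda e: e + amount

def spend(amount):
    # Spend `amount`; only possible if enough energy is available.
    return lambda e: e - amount if e >= amount else None

def cycle_is_feasible(energy_functions, initial_energy, iterations=1000):
    """Iterate the cycle's transitions; None means the run got stuck."""
    e = initial_energy
    for _ in range(iterations):
        for f in energy_functions:
            e = f(e)
            if e is None:
                return False
    return True  # survived many iterations (a heuristic bound, not a proof)

# Spending 3 then charging 2 loses energy each round and eventually dies;
# spending 2 then charging 3 survives indefinitely.
print(cycle_is_feasible([spend(3), charge(2)], initial_energy=10))  # False
print(cycle_is_feasible([spend(2), charge(3)], initial_energy=2))   # True
```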

10 citations



Journal ArticleDOI
TL;DR: This work examines complexity properties of special prefix-convex languages, and exhibits right-ideal, prefix-closed, and prefix-free languages that meet the complexity bounds for all the measures listed above.
Abstract: A language L over an alphabet Σ is prefix-convex if, for any words x, y, z ∈ Σ∗, whenever x and xyz are in L, then so is xy. Prefix-convex languages include right-ideal, prefix-closed, and prefix-free languages as special cases. We examine complexity properties of these special prefix-convex languages. In particular, we study the quotient/state complexity of boolean operations, product (concatenation), star, and reversal, the size of the syntactic semigroup, and the quotient complexity of atoms. For binary operations we use arguments with different alphabets when appropriate; this leads to higher tight upper bounds than those obtained with equal alphabets. We exhibit right-ideal, prefix-closed, and prefix-free languages that meet the complexity bounds for all the measures listed above.
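
The defining condition translates directly into a small check. A sketch over a finite language sample (for an infinite language this only tests the given fragment): for each word w in L, every prefix of w at least as long as the shortest prefix of w lying in L must also lie in L.

```python
def is_prefix_convex(language):
    """Check: x in L and xyz in L imply xy in L, over a finite sample."""
    words = set(language)
    for w in words:
        lengths = [i for i in range(len(w) + 1) if w[:i] in words]
        shortest = min(lengths)  # len(w) itself is always among them
        if any(w[:i] not in words for i in range(shortest, len(w) + 1)):
            return False
    return True

# Prefix-closed languages are prefix-convex:
print(is_prefix_convex({"", "a", "ab", "aba"}))   # True
# Here "a" and "aba" are in L but the intermediate prefix "ab" is not:
print(is_prefix_convex({"a", "aba"}))             # False
```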

9 citations


Journal ArticleDOI
TL;DR: In this paper, a new family of complexity functions of infinite words indexed by k ≥ 0 was introduced, which can be used to characterize Sturmian words in the context of aperiodic one-sided infinite words.
Abstract: In this paper we investigate local-to-global phenomena for a new family of complexity functions of infinite words indexed by k ≥ 0. Two finite words u and v are said to be k-abelian equivalent if for all words x of length less than or equal to k, the number of occurrences of x in u is equal to the number of occurrences of x in v. This defines a family of equivalence relations, bridging the gap between the usual notion of abelian equivalence (when k = 1) and equality (when k = ∞). Given an infinite word w, we consider the associated complexity function which counts the number of k-abelian equivalence classes of factors of w of length n. As a whole, these complexity functions have a number of common features: each gives a characterization of periodicity in the context of bi-infinite words, and each can be used to characterize Sturmian words in the framework of aperiodic one-sided infinite words. Nevertheless, they also exhibit a number of striking differences, the study of which is one of the main topics of our paper.
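
The definition of k-abelian equivalence above translates directly into code. A naive sketch comparing occurrence counts of all factors up to length k (the equal-length check covers the empty factor; known optimizations via factors of length exactly k are skipped):

```python
from collections import Counter

def factor_counts(word, max_len):
    counts = Counter()
    for length in range(1, max_len + 1):
        for i in range(len(word) - length + 1):
            counts[word[i:i + length]] += 1
    return counts

def k_abelian_equivalent(u, v, k):
    return len(u) == len(v) and factor_counts(u, k) == factor_counts(v, k)

# "abba" and "baab" are abelian (1-abelian) equivalent: two a's, two b's.
print(k_abelian_equivalent("abba", "baab", 1))  # True
# They are not 2-abelian equivalent: "bb" occurs in the first word only.
print(k_abelian_equivalent("abba", "baab", 2))  # False
```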

6 citations


Journal ArticleDOI
TL;DR: The main step in the proof of the Kleene theorem is the construction of a weighted ω-pushdown automaton for the ω-algebraic closure of subsets of a continuous star-omega semiring.
Abstract: Weighted ω-pushdown automata were introduced as a generalization of the classical pushdown automata, accepting infinite words by Büchi acceptance. The main step in the proof of the Kleene theorem is the construction of a weighted ω-pushdown automaton for the ω-algebraic closure of subsets of a continuous star-omega semiring.

5 citations


Journal ArticleDOI
TL;DR: The results showed that the proposed balanced active learning (BAL) approach, which is based on a novel metric presented in this paper, outperforms basic uncertainty sampling.
Abstract: The manual labeling of natural images is, and has always been, a painstaking and slow process, especially when large data sets are involved. Nowadays, many studies focus on solving this problem, and most of them use active learning, which offers a solution for reducing the number of images that need to be labeled. Active learning procedures usually select a subset of the whole data by iteratively querying the unlabeled instances based on their predicted informativeness. One way of estimating the information content of an image is by using uncertainty sampling as a query strategy. This basic technique can significantly reduce the number of labels needed, e.g. to set up a good model for classification. Our goal was to improve this method by balancing the distribution of the already labeled images. This modification is based on a novel metric that we present in this paper. We conducted experiments on two popular data sets to demonstrate the efficiency of our proposed balanced active learning (BAL) approach, and the results showed that it outperforms basic uncertainty sampling.
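
A sketch of the query step: plain uncertainty sampling picks the unlabeled image whose predicted class distribution has maximal entropy, while a balance-aware variant in the spirit of BAL also favors classes under-represented in the labeled pool. The rarity weighting below is an illustrative choice of ours, not the paper's metric:

```python
import numpy as np

def entropy(probs):
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def query_uncertainty(proba_unlabeled):
    """proba_unlabeled: (n_unlabeled, n_classes) predicted probabilities."""
    return int(np.argmax(entropy(proba_unlabeled)))

def query_balanced(proba_unlabeled, labeled_class_counts):
    # Boost instances whose likely class is rare among labeled images.
    counts = np.asarray(labeled_class_counts, dtype=float)
    rarity = 1.0 / (1.0 + counts)          # hypothetical weighting
    boost = proba_unlabeled @ rarity        # expected rarity of the label
    return int(np.argmax(entropy(proba_unlabeled) * boost))

proba = np.array([[0.5, 0.5], [0.9, 0.1], [0.2, 0.8]])
print(query_uncertainty(proba))                          # 0: most uncertain
print(query_balanced(proba, labeled_class_counts=[50, 2]))  # 2: rare class wins
```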

4 citations


Journal ArticleDOI
TL;DR: This study presents a method of computing software process metrics in a graph database, using Neo4j as the graph database engine and its query language, Cypher, to retrieve the metrics.
Abstract: Identifying fault-prone code parts is useful for the developers to help reduce the time required for locating bugs. It is usually done by characterizing the already known bugs with certain kinds of metrics and building a predictive model from the data. For the characterization of bugs, software product and process metrics are the most popular ones. The calculation of product metrics is supported by many free and commercial software products. However, tools that are capable of computing process metrics are quite rare. In this study, we present a method of computing software process metrics in a graph database. We describe the schema of the database created and we present a way to readily get the process metrics from it. With this technique, process metrics can be calculated at the file, class and method levels. We used GitHub as the source of the change history and we selected 5 open-source Java projects for processing. To retrieve positional information about the classes and methods, we used SourceMeter, a static source code analyzer tool. We used Neo4j as the graph database engine, and its query language, Cypher, to retrieve the process metrics. We published the tools we created as open-source projects on GitHub. To demonstrate the utility of our tools, we selected 25 release versions of the 5 Java projects and calculated the process metrics for all of the source code elements (files, classes and methods) in these versions. Using our previously published bug database, we built bug databases for the selected projects that contain the computed process metrics and the corresponding bug numbers for files and classes. (We published these databases as an online appendix.) Then we applied 13 machine learning algorithms on the database we created to find out whether it is suitable for bug prediction purposes. We achieved F-measure values on average of around 0.7 at the class level, and slightly better values of between 0.7 and 0.75 at the file level. The best performing algorithm was the RandomForest method for both cases.
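
To illustrate the approach, a sketch of querying process metrics with the official neo4j Python driver. The node labels, relationship types and property names below are hypothetical stand-ins for the schema described in the paper's open-source tools; the query counts modifications and distinct authors per file, two typical process metrics:

```python
from neo4j import GraphDatabase

CYPHER = """
MATCH (d:Developer)-[:COMMITTED]->(c:Commit)-[:MODIFIED]->(f:File)
RETURN f.path AS file,
       count(c) AS modifications,
       count(DISTINCT d) AS distinct_authors
ORDER BY modifications DESC
"""

# Connection details are placeholders for a locally running Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
with driver.session() as session:
    for record in session.run(CYPHER):
        print(record["file"], record["modifications"], record["distinct_authors"])
driver.close()
```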

4 citations


Journal ArticleDOI
TL;DR: This work considers offline sensing unranked top-down tree automata in which the state transitions are computed by bimachines and gives a polynomial time algorithm for minimizing such tree automata when they are state-separated.
Abstract: We consider offline sensing unranked top-down tree automata in which the state transitions are computed by bimachines. We give a polynomial time algorithm for minimizing such tree automata when they are state-separated.

4 citations


Journal ArticleDOI
TL;DR: It is proved that the class of their behaviors is closed under sum, and under scalar, Hadamard, Cauchy, and shuffle products, as well as the star operation, and a Kleene-Schützenberger theorem is stated.
Abstract: We introduce weighted variable automata over infinite alphabets and commutative semirings. We prove that the class of their behaviors is closed under sum, and under scalar, Hadamard, Cauchy, and shuffle products, as well as the star operation. Furthermore, we consider rational series over infinite alphabets and we state a Kleene-Schützenberger theorem. We introduce a weighted monadic second order logic and a weighted linear dynamic logic over infinite alphabets and investigate their relation to weighted variable automata. An application of our theory to series over the Boolean semiring yields new results for the class of languages accepted by variable automata.

Journal ArticleDOI
TL;DR: An algorithm is presented that automatically delineates the two extremal retinal layers, successfully localizes subretinal fluid regions, and computes their extent using a set of SD-OCT images.
Abstract: A modern tool for age-related macular degeneration (AMD) investigation is Optical Coherence Tomography (OCT), which can produce high resolution cross-sectional images of retinal layers. AMD is one of the most frequent reasons for blindness in economically developed countries. AMD means degeneration of the macula, which is responsible for central vision. Since AMD affects only this specific part of the retina, untreated patients lose their fine shape- and face recognition, reading ability, and central vision. Here, we deal with the automatic localization of subretinal fluid areas and also analyze retinal layers, since layer information can help to localize fluid regions. We present an algorithm that automatically delineates the two extremal retinal layers, successfully localizes subretinal fluid regions, and computes their extent. We present our results using a set of SD-OCT images. The quantitative information can also be visualized in an anatomical context for visual assessment.
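
A simplified numpy sketch of the delineation step: in each A-scan (image column), the first and last rows with a strong vertical intensity gradient are taken as the two extremal layer boundaries. Real SD-OCT pipelines denoise and regularize across columns; the threshold here is illustrative:

```python
import numpy as np

def extremal_layers(bscan, grad_threshold=20.0):
    """bscan: 2-D grayscale image (rows = depth). Returns top/bottom rows."""
    grad = np.abs(np.diff(bscan.astype(float), axis=0))  # vertical gradient
    strong = grad > grad_threshold                       # candidate edges
    top = np.argmax(strong, axis=0)                      # first strong edge
    bottom = strong.shape[0] - 1 - np.argmax(strong[::-1], axis=0)
    return top, bottom

# Fluid regions can then be sought as dark pixels between the two layers,
# e.g. bscan[r, c] below an intensity threshold for top[c] < r < bottom[c].
```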

Journal ArticleDOI
TL;DR: This work studies the commutative positive varieties of languages closed under various operations: shuffle, renaming and product over one-letter alphabets.
Abstract: We study the commutative positive varieties of languages closed under various operations: shuffle, renaming and product over one-letter alphabets.

Journal ArticleDOI
TL;DR: This work proposes a novel validation technique for evaluating well-formedness constraints on incomplete, partial models with may and must semantics, and maps the problem of constraint evaluation over partial models into regular graph pattern matching over complete models by semantically equivalent rewrites of graph queries.
Abstract: In modern modeling tools used for model-driven development, the validation of several well-formedness constraints is continuously been carried out by exploiting advanced graph query engines to highlight conceptual design flaws. However, while models are still under development, they are frequently partial and incomplete. Validating constraints on incomplete, partial models may identify a large number of irrelevant problems. By switching off the validation of these constraints, one may fail to reveal problematic cases which are difficult to correct when the model becomes sufficiently detailed. Here, we propose a novel validation technique for evaluating well-formedness constraints on incomplete, partial models with may and must semantics, e.g. a constraint without a valid match is satisfiable if there is a completion of the partial model that may satisfy it. To this end, we map the problem of constraint evaluation over partial models into regular graph pattern matching over complete models by semantically equivalent rewrites of graph queries.


Journal ArticleDOI
TL;DR: A reliable method based on the Branch-and-Bound algorithm is implemented, which gives the opportunity to use node-level (also called low-level or type 1) parallelization, since this algorithm does not modify the searching trajectories; nor do the authors modify the dimensions of the Branch-and-Bound tree.
Abstract: Video cards have now outgrown their purpose of being only a simple tool for graphic display. With their high-speed video memories, many arithmetic units and parallelism, they can be very powerful accessories for general-purpose computing tasks. Our selected platform for testing is CUDA (Compute Unified Device Architecture), which offers us direct access to the virtual instruction set of the video card, and we are able to run our computations on dedicated computing kernels. The CUDA development kit comes with a useful toolbox and a wide range of GPU-based function libraries. In this parallel environment, we implemented a reliable method based on the Branch-and-Bound algorithm. This algorithm gives us the opportunity to use node-level (also called low-level or type 1) parallelization, since we do not modify the searching trajectories; nor do we modify the dimensions of the Branch-and-Bound tree [5]. For testing, we chose the circle covering problem. We then scaled the problem up to three dimensions, and ran tests with sphere covering problems as well.
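
A sketch of node-level (type 1) parallelization on a one-dimensional stand-in problem: the search order and the shape of the Branch-and-Bound tree are exactly those of the sequential algorithm, and only the bounding of sibling nodes runs in parallel, with a CPU process pool standing in for the paper's CUDA kernels. Minimizing f(x) = (x - 3)^2 replaces the covering-radius bound:

```python
from concurrent.futures import ProcessPoolExecutor

def f(x):
    return (x - 3.0) ** 2

def lower_bound(interval):
    # Exact lower bound of f on an interval -- hypothetical stand-in for
    # the covering bound the paper evaluates on the GPU.
    lo, hi = interval
    return 0.0 if lo <= 3.0 <= hi else min((lo - 3.0) ** 2, (hi - 3.0) ** 2)

def branch(interval):
    lo, hi = interval
    mid = (lo + hi) / 2.0
    return [(lo, mid), (mid, hi)]

def branch_and_bound(root, tol=1e-6):
    best = f(sum(root) / 2.0)                # incumbent from root midpoint
    stack = [root]
    with ProcessPoolExecutor() as pool:
        while stack:
            node = stack.pop()
            children = branch(node)
            # Node-level parallelism: bound all siblings simultaneously
            # (a real GPU version batches many more nodes per launch).
            bounds = list(pool.map(lower_bound, children))
            for child, bound in zip(children, bounds):
                best = min(best, f(sum(child) / 2.0))
                if bound < best - tol and child[1] - child[0] > tol:
                    stack.append(child)      # still promising: branch further
    return best

if __name__ == "__main__":
    print(branch_and_bound((-10.0, 10.0)))   # ~0, attained near x = 3
```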

Journal ArticleDOI
TL;DR: This paper overviews existing dependency replacement techniques of C++, introduces a non-intrusive, compiler-instrumentation-based testing approach that avoids these disadvantages, and presents different approaches to conveniently access private members in C++.
Abstract: In C++, test code is often interwoven with the unit we want to test. During the test development process we often have to modify the public interface of a class to replace existing dependencies; e.g. a supplementary setter or constructor function is added for dependency injection. In many cases, extra template parameters are used for the same purpose. All existing solutions have serious detrimental effects on the code structure and sometimes on the run-time performance as well. In this paper, we overview existing dependency replacement techniques of C++ and we evaluate their advantages and disadvantages. We introduce our non-intrusive, compiler-instrumentation-based testing approach that does not have such disadvantages. All non-intrusive testing methods (including our new method) require access to an object's internal state in order to set up a test. Thus, to complement our new solution, we also present different approaches to conveniently access private members in C++. To evaluate these techniques, we created a proof-of-concept implementation which is publicly available for further testing.

Journal ArticleDOI
TL;DR: It is shown how the families of DR-recognizable tree languages path-definable by a variety of finite monoids (or semigroups) can be derived from varieties of string languages.
Abstract: We consider deterministic root-to-frontier (DR) tree recognizers and the tree languages recognized by them from an algebraic point of view. We make use of a correspondence between DR algebras and unary algebras shown by Z. Ésik (1986). We also study a question raised by F. Gécseg (2007) that concerns the definability of families of DR-recognizable tree languages by syntactic path monoids. We show how the families of DR-recognizable tree languages path-definable by a variety of finite monoids (or semigroups) can be derived from varieties of string languages. In particular, the three path-definable families of Gécseg and B. Imreh (2002, 2004) are obtained this way.

Journal ArticleDOI
TL;DR: A novel, robust method is developed that measures the size of a pupil even under poor circumstances (noise, blur, reflections and occlusions) and is compared with measurements obtained using manual annotation.
Abstract: Pupillometry is a non-invasive technique that can be used to objectively characterize pathophysiological changes involving the pupillary reflex. It is essentially the measurement of the pupil diameter over time. Here, specially designed computer algorithms provide fast, reliable and reproducible solutions for the analysis. These methods use a priori information about the shape and color of the pupil. Our study focuses on measuring the diameter and dynamics of the pupils of rats with schizophrenia using videos recorded with a modified digital camera under infrared (IR) illumination. We developed a novel, robust method that measures the size of a pupil even under poor circumstances (noise, blur, reflections and occlusions). We compare our results with measurements obtained using manual annotation.
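
A minimal OpenCV sketch of the measurement idea: under IR illumination the pupil is the darkest blob, so threshold the frame, take the largest contour, and fit an ellipse whose mean axis length approximates the diameter. The robust handling of blur, reflections and occlusions described in the paper needs considerably more care; the thresholds here are illustrative:

```python
import cv2

def pupil_diameter(frame_gray):
    blurred = cv2.GaussianBlur(frame_gray, (7, 7), 0)
    # Dark pupil -> invert the threshold so the pupil becomes foreground.
    _, mask = cv2.threshold(blurred, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:                 # fitEllipse needs at least 5 points
        return None
    (_, _), (minor, major), _ = cv2.fitEllipse(pupil)
    return 0.5 * (minor + major)       # diameter in pixels

# Applying this per frame of the IR video yields the pupillogram
# (diameter over time) that is then analyzed.
```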

Journal ArticleDOI
TL;DR: No collection of sound inequations of bounded depth is ground-complete with respect to the trace simulation preorder over BCCSP even over a singleton set of actions.
Abstract: This note shows that the trace simulation preorder does not have a finite inequational basis over the language BCCSP. Indeed, no collection of sound inequations of bounded depth is ground-complete with respect to the trace simulation preorder over BCCSP even over a singleton set of actions.

Journal ArticleDOI
TL;DR: This paper associates with every commutative continuous semiring S and alphabet Σ a category whose objects are all sets and where a morphism X → Y is determined by a function from X into the semiring of formal series S⟪(Y⊎Σ)*⟫ of finite words over Y⊎Σ.
Abstract: We associate with every commutative continuous semiring S and alphabet Σ a category whose objects are all sets and a morphism X → Y is determined by a function from X into the semiring of formal series S⟪(Y⊎Σ)*⟫ of finite words over Y⊎Σ, an X × Y -matrix over S⟪(Y⊎Σ)*⟫, and a function from X into the continuous S⟪(Y⊎Σ)*⟫-semimodule S⟪(Y⊎Σ)ω⟫ of series of ω-words over Y⊎Σ. When S is also an ω-semiring (equipped with an infinite product operation), then we define a fixed point operation over our category and show that it satisfies all identities of iteration categories. We then use this fixed point operation to give semantics to recursion schemes defining series of finite and infinite words. In the particular case when the semiring is the Boolean semiring, we obtain the context-free languages of finite and ω-words.

Journal ArticleDOI
TL;DR: It is shown that the traced monoidal category of finite sets and relations with coproduct as tensor is complete for the extension of the traced symmetric monoidal axioms by two simple axioms, which capture the additive nature of trace in this category.
Abstract: It is shown that the traced monoidal category of finite sets and relations with coproduct as tensor is complete for the extension of the traced symmetric monoidal axioms by two simple axioms, which capture the additive nature of trace in this category. The result is derived from a theorem saying that already the structure of finite partial injections as a traced monoidal category is complete for the given axioms. In practical terms this means that if two biaccessible flowchart schemes are not isomorphic, then there exists an interpretation of the schemes by partial injections which distinguishes them.

Journal ArticleDOI
TL;DR: All possible pairing strategies for the 9-in-a-row game are described and a graph of the pairings is defined, containing 194,543 vertices and 532,107 edges, in order to give them a structure.
Abstract: In Maker-Breaker positional games two players, Maker and Breaker, play on a finite or infinite board with the goal of claiming or preventing the opponent from getting a finite winning set, respectively. For different games there are several winning strategies for Maker or Breaker. One class of winning strategies is the so-called pairing (paving) strategies. Here, we describe all possible pairing strategies for the 9-in-a-row game. Furthermore, we define a graph of the pairings, containing 194,543 vertices and 532,107 edges, in order to give them a structure. A complete characterization of the graph is also given.
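
A toy sketch of how a pairing strategy is played by Breaker: a pairing assigns board cells to disjoint pairs so that every potential winning line contains both cells of some pair; whenever Maker claims a cell, Breaker answers with its partner, so Maker can never complete a line. The pairing dict below is a tiny hypothetical fragment, not one of the 194,543 actual 9-in-a-row pairings from the paper:

```python
def make_breaker(pairing):
    """pairing: dict mapping each cell to its partner (symmetric)."""
    taken = set()

    def respond(maker_move):
        taken.add(maker_move)
        partner = pairing.get(maker_move)
        if partner is not None and partner not in taken:
            taken.add(partner)
            return partner          # Breaker claims the paired cell
        return None                 # unpaired cell: Breaker may move freely

    return respond

pairing = {(0, 0): (0, 1), (0, 1): (0, 0), (2, 3): (2, 4), (2, 4): (2, 3)}
breaker = make_breaker(pairing)
print(breaker((0, 0)))   # (0, 1)
print(breaker((2, 4)))   # (2, 3)
```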

Journal ArticleDOI
TL;DR: This study proposes a real-time road traffic planning system based on mobile context and crowdsourcing efforts that will eventually give users an automatically optimized route to their destination and predict their traveling route based on live traffic conditions and historical data.
Abstract: Traffic optimization is a subject that has become vital for the world we live in. People these days need to get from a starting point to a destination point as fast and as safe as possible. Traffic congestion plays a key role in the frustration of people and it results in lost time, reduced productivity and wasted resources. In our study we seek to address these issues by proposing a real-time road traffic planning system based on mobile context and crowdsourcing efforts. The first step toward this goal is real-time traffic characterization using data collected from mobile sensors of drivers, pedestrians, cyclists, passengers, etc. We started developing a data collection and analysis system composed of a mobile application that collects user context data and a Web application for viewing and analyzing it. This new system will eventually give the users an automatically optimized route to the destination and predict the users' traveling route based on live traffic conditions and historical data.

Journal ArticleDOI
TL;DR: This empirical study of Firefox, an open source browser application developed in C++, finds that the more antipatterns the source code contains, the harder it is to maintain.
Abstract: The notion that antipatterns have a detrimental effect on source code maintainability is widely accepted, but there is relatively little objective evidence to support it. We seek to investigate this issue by analyzing the connection between antipatterns and maintainability in an empirical study of Firefox, an open source browser application developed in C++. After extracting antipattern instances and maintainability information from 45 revisions, we looked for correlations to uncover a connection between the two concepts. We found statistically significant negative values for both Pearson and Spearman correlations, most of which were under -0.65. These values suggest there are strong, inverse relationships, thereby supporting our initial assumption that the more antipatterns the source code contains, the harder it is to maintain. Lastly, we combined these data into a table applicable for machine learning experiments, which we conducted using Weka [10] and several of its classifier algorithms. All five regression types we tried had correlation coefficients over 0.77 and used mostly negative weights for the antipattern predictors in the models we constructed. In conclusion, we can say that this empirical study is another step towards objectively demonstrating that antipatterns have an adverse effect on software maintainability.
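
The statistical core of the study is easy to reproduce on one's own data. A sketch with made-up numbers (not the Firefox measurements), computing the two correlations the paper reports:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

antipatterns = np.array([120, 135, 150, 170, 180, 200])   # instances per revision
maintainability = np.array([0.82, 0.79, 0.74, 0.70, 0.66, 0.61])

r, p_r = pearsonr(antipatterns, maintainability)
rho, p_rho = spearmanr(antipatterns, maintainability)
print(f"Pearson r = {r:.2f} (p = {p_r:.3g})")       # strongly negative
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
```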

Journal ArticleDOI
TL;DR: A tool is introduced that takes the output of a third-party static analyzer as input and automatically fixes the detected issues for developers, supporting the automatic elimination of coding issues in Java.
Abstract: To decrease software maintenance cost, software development companies use static source code analysis techniques. Static analysis tools are capable of finding potential bugs, anti-patterns, coding rule violations, and they can also enforce coding style standards. Although there are several available static analyzers to choose from, they only support issue detection. The elimination of the issues is still performed manually by developers. Here, we propose a process that supports the automatic elimination of coding issues in Java. We introduce a tool that takes the output of a third-party static analyzer as input and automatically fixes the detected issues for developers. Our tool uses a special technique, called reverse AST-search, to locate source code elements in a syntax tree based solely on location information. Our tool was evaluated and tested in a two-year project with six software development companies, where thousands of code smells were identified and fixed in five systems that together comprise over five million lines of code.
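
The reverse AST-search idea can be sketched in a few lines: given only a (line, column) position reported by an analyzer, find the innermost node of a freshly parsed syntax tree that covers it. Python's ast module stands in for the paper's Java tooling; the innermost-match heuristic below keeps the latest-starting covering node:

```python
import ast

def node_at(tree, line, col):
    best = None
    for node in ast.walk(tree):
        if not hasattr(node, "lineno"):
            continue
        start = (node.lineno, node.col_offset)
        end = (node.end_lineno, node.end_col_offset)
        if start <= (line, col) <= end:
            # Prefer the innermost (latest-starting) covering node.
            if best is None or start >= (best.lineno, best.col_offset):
                best = node
    return best

source = "def f(x):\n    return x + 1\n"
tree = ast.parse(source)
target = node_at(tree, line=2, col=11)
print(type(target).__name__)   # Name: the identifier 'x' at that position
```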

Journal ArticleDOI
TL;DR: The algebraic structure of graphoids is used to obtain the well-known Birkhoff theorem in the framework of graphs by establishing a natural bijection between the class of Σ-graphoids and the class of strong congruences over GR(Σ, X), which is the free graphoid over the doubly ranked alphabet Σ and the set of variables X.
Abstract: The algebraic structure of graphoids is used in order to obtain the well-known Birkhoff theorem in the framework of graphs. Namely, we establish a natural bijection between the class of Σ-graphoids and the class of strong congruences over GR(Σ, X), which is the free graphoid over the doubly ranked alphabet Σ and the set of variables X.

Journal ArticleDOI
TL;DR: The goal in this study is to find the optimal parametrization of RTEHunter in terms of the maximum number of states, maximum depth of the symbolic execution tree and search strategy in order to find more runtime issues in a shorter time.
Abstract: In a software system, most of the runtime failures may come to light only during test execution, and this may have a very high cost. To help address this problem, a symbolic execution engine called RTEHunter, which has been developed at the Department of Software Engineering at the University of Szeged, is able to detect runtime errors (such as null pointer dereference, bad array indexing, division by zero) in Java programs without actually running the program in a real-life environment. Applying the theory of symbolic execution, RTEHunter builds a tree, called a symbolic execution tree, composed of all the possible execution paths of the program. RTEHunter detects runtime issues by traversing the symbolic execution tree and if a certain condition is fulfilled the engine reports an issue. However, as the number of execution paths increases exponentially with the number of branching points, the exploration of the whole symbolic execution tree becomes impossible in practice. To overcome this problem, different kinds of constraints can be set up over the tree. E.g. the number of symbolic states, the depth of the execution tree, or the time consumption could be restricted. Our goal in this study is to find the optimal parametrization of RTEHunter in terms of the maximum number of states, maximum depth of the symbolic execution tree and search strategy in order to find more runtime issues in a shorter time. Results on three open-source Java systems demonstrate that more runtime issues can be detected in the 0 to 60 basic block-depth levels than in deeper ones within the same time frame. We also developed two novel search strategies for traversing the tree based on the number of null pointer references in the program and on linear regression that perform better than the default depth-first search strategy.
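
A schematic sketch of the kind of constrained traversal being tuned here: a generic best-first search with a state budget, a depth cap and a pluggable scoring strategy (a constant score degenerates to plain stack order). Node semantics, path conditions and error checks are abstracted away; names and defaults are illustrative:

```python
import heapq
from itertools import count

def bounded_search(root, children, score, max_depth=60, max_states=10000):
    """Explore at most max_states nodes, none deeper than max_depth.

    children(node) -> iterable of successor nodes
    score(node, depth) -> priority; smaller values are explored first
    """
    tie = count()                      # tiebreaker keeps heap comparisons safe
    frontier = [(score(root, 0), next(tie), root, 0)]
    visited = 0
    while frontier and visited < max_states:
        _, _, node, depth = heapq.heappop(frontier)
        visited += 1
        yield node, depth              # caller checks error conditions here
        if depth < max_depth:
            for child in children(node):
                heapq.heappush(frontier, (score(child, depth + 1),
                                          next(tie), child, depth + 1))

# Tiny demo on a binary tree of tuples, preferring shallow nodes first:
for node, depth in bounded_search((), lambda n: [n + (0,), n + (1,)],
                                  lambda n, d: d, max_depth=2, max_states=5):
    print(depth, node)
```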

Journal ArticleDOI
TL;DR: It is proved that SFSG have exactly the same expressive power as compositions of an inverse MBOT with an MBOT, and that compositions of an MBOT with an inverse MBOT can compute tree translations that cannot be computed by any SFSG, even though the class of tree translations computable by MBOT is closed under composition.
Abstract: The expressive power of synchronous forest (tree-sequence) substitution grammars (SFSG) is studied in relation to multi bottom-up tree transducers (MBOT). It is proved that SFSG have exactly the same expressive power as compositions of an inverse MBOT with an MBOT. This result is used to derive complexity results for SFSG and the fact that compositions of an MBOT with an inverse MBOT can compute tree translations that cannot be computed by any SFSG, although the class of tree translations computable by MBOT is closed under composition.

Journal ArticleDOI
TL;DR: A computational method is proposed to measure the phase and approximate height of cells after microscope calibration, assuming a linear formation model.
Abstract: The development of fluorescent probes and proteins has helped make light microscopy more popular by allowing the visualization of specific subcellular components, location and dynamics of biomolecules. However, it is not always feasible to label the cells as it may be phototoxic or perturb their functionalities. Label-free microscopy techniques allow us to work with live cells without perturbation and to evaluate morphological differences, which in turn can provide useful information for high-throughput assays. In this study, we use one of the most popular label-free techniques called differential interference contrast (DIC) microscopy to estimate the phase of cells and other nearly transparent objects and instantly estimate their height. DIC images provide detailed information about the optical path length (OPL) differences in the sample and they are visually similar to a gradient image. Our previous DIC reconstruction algorithm outputs an image where the values are proportional to the OPL (or implicitly the phase) of the sample. Although the reconstructed images are capable of describing cellular morphology and to a certain extent turn DIC into a quantitative technique, the actual OPL has to be computed from the input DIC image and the microscope calibration settings. Here we propose a computational method to measure the phase and approximate height of cells after microscope calibration, assuming a linear formation model. After a calibration step the phase of further samples can be determined when the refractive indices of the sample and the surrounding medium are known. The precision of the method is demonstrated by reconstructing the thickness of known objects and real cellular samples.
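
Under the linear formation model the final conversion is a one-liner: OPL = (n_sample - n_medium) * height, so height = OPL / (n_sample - n_medium) once calibration maps reconstructed values to OPL. A sketch with illustrative constants (the calibration factor and refractive indices below are assumptions, not the paper's values):

```python
import numpy as np

def height_map(reconstructed, calib_to_opl, n_sample=1.38, n_medium=1.33):
    """reconstructed: 2-D array from the DIC reconstruction (arbitrary units).
    calib_to_opl: multiplicative factor from microscope calibration that
    converts reconstructed values to OPL in micrometres (hypothetical)."""
    opl = calib_to_opl * reconstructed
    return opl / (n_sample - n_medium)   # height in micrometres

img = np.array([[0.0, 0.2], [0.4, 0.1]])
print(height_map(img, calib_to_opl=0.05))  # e.g. 0.2 -> 0.01 um OPL -> 0.2 um
```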