
Showing papers in "Acta Cybernetica in 2014"


Journal ArticleDOI
TL;DR: Tight upper bounds are found on the quotient complexity of intersection, union, difference, symmetric difference, concatenation, star, and reversal in these three classes of languages.
Abstract: A language L is prefix-free if whenever words u and v are in L and u is a prefix of v, then u = v. Suffix-, factor-, and subword-free languages are defined similarly, where by “subword” we mean “subsequence”, and a language is bifix-free if it is both prefix- and suffix-free. These languages have important applications in coding theory. The quotient complexity of an operation on regular languages is defined as the number of left quotients of the result of the operation as a function of the numbers of left quotients of the operands. The quotient complexity of a regular language is the same as its state complexity, which is the number of states in the complete minimal deterministic finite automaton accepting the language. The state/quotient complexity of operations in the classes of prefix- and suffix-free languages has been studied before. Here, we study the complexity of operations in the classes of bifix-, factor-, and subword-free languages. We find tight upper bounds on the quotient complexity of intersection, union, difference, symmetric difference, concatenation, star, and reversal in these three classes of languages.
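
As a quick illustration of these definitions (not of the paper's complexity bounds), here is a minimal Python sketch that tests a finite set of words for prefix-freeness and subword-freeness; the example words are made up.

```python
# Minimal sketch of the language classes defined above; the example
# words are hypothetical and only illustrate the definitions.

def is_prefix_free(language):
    """True if no word is a proper prefix of another word in the set."""
    words = sorted(language)
    # In sorted order, if u is a prefix of any word, it is a prefix of
    # its immediate successor, so checking adjacent pairs suffices.
    return not any(v.startswith(u) for u, v in zip(words, words[1:]))

def is_subsequence(u, v):
    """True if u is a subsequence ("subword" in the paper's sense) of v."""
    it = iter(v)
    return all(c in it for c in u)  # 'in' consumes the iterator

def is_subword_free(language):
    words = list(language)
    return not any(is_subsequence(u, v)
                   for u in words for v in words if u != v)

print(is_prefix_free({"01", "001", "110"}))  # True: no word prefixes another
print(is_prefix_free({"0", "01"}))           # False: "0" is a prefix of "01"
print(is_subword_free({"ab", "ba"}))         # True
print(is_subword_free({"ab", "aab"}))        # False: "ab" occurs inside "aab"
```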

11 citations


Journal ArticleDOI
TL;DR: This research work investigated this phenomenon by studying the impact of version control commit operations (add, update, delete) on the quality of the code, calculating the ISO/IEC 9126 quality attributes for thousands of revisions of an industrial and three open-source software systems.

Abstract: Software erosion is a well-known phenomenon, meaning that software quality is continuously decreasing due to the ever-ongoing modifications in the source code. In this research work we investigated this phenomenon by studying the impact of version control commit operations (add, update, delete) on the quality of the code. We calculated the ISO/IEC 9126 quality attributes for thousands of revisions of an industrial and three open-source software systems with the help of the Columbus Quality Model. We also collected the cardinality of each version control operation type for every investigated revision. We performed Chi-squared tests on contingency tables whose rows are quality change categories and whose columns are version control commit operation types. We compared the results with random data as well. We identified that the relationship between the version control operations and quality change is quite strong. Great maintainability improvements are mostly caused by commits containing Add operations. Commits containing file updates only tend to have a negative impact on the quality. Deletions have a weak connection with quality, and we could not formulate a general statement.
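
For readers unfamiliar with the test, the following sketch shows a Chi-squared independence test on a contingency table of the kind described, using scipy.stats.chi2_contingency; all counts and category labels are invented, not the paper's data.

```python
# Hypothetical contingency table in the spirit of the study: rows are
# quality change categories, columns are commit types. Counts are made up.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    #  add-only  update-only  mixed
    [      120,          40,    55],   # quality improved
    [      300,         520,   410],   # quality unchanged
    [       60,         180,    95],   # quality declined
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# A small p-value indicates that quality change and commit type are not
# independent, matching the kind of relationship the paper reports.
```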

8 citations


Journal ArticleDOI
TL;DR: The results show that monitoring cyclomatic complexity evolution of functions and number of revisions of files focuses the attention of designers to potentially problematic files and functions for manual assessment and improvement.
Abstract: Background: Complexity management has become a crucial activity in continuous software development. While the overall perceived complexity of a product grows rather insignificantly, the small units, such as functions and files, can have noticeable complexity growth with every increment of product features. This kind of evolution triggers risks of escalating fault-proneness and deteriorating maintainability. Goal: The goal of this research was to develop a measurement system which enables effective monitoring of complexity evolution. Method: Action research was conducted in two large software development organizations. We measured three complexity and two change properties of code for two large industrial products. The complexity growth was measured for five consecutive releases of the products. Different patterns of growth were identified and evaluated with software engineers in industry. Results: The results show that monitoring cyclomatic complexity evolution of functions and number of revisions of files focuses the attention of designers to potentially problematic files and functions for manual assessment and improvement. A measurement system was developed at Ericsson to support the monitoring process.
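
A minimal sketch of the monitoring idea, assuming invented function names, complexity values, and a made-up growth threshold; the measurement system developed at Ericsson is of course far more elaborate.

```python
# Toy sketch: track cyclomatic complexity per function across releases
# and flag steady growers for manual review. All data is invented.

releases = {  # release -> {function name: cyclomatic complexity}
    "R1": {"parse_msg": 8,  "route": 12, "init": 3},
    "R2": {"parse_msg": 11, "route": 12, "init": 3},
    "R3": {"parse_msg": 15, "route": 13, "init": 3},
}

def flag_growing(releases, min_growth=4):
    order = sorted(releases)                     # chronological order
    first, last = releases[order[0]], releases[order[-1]]
    return [fn for fn in first
            if fn in last and last[fn] - first[fn] >= min_growth]

print(flag_growing(releases))  # ['parse_msg'] -> candidate for refactoring
```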

8 citations


Journal ArticleDOI
TL;DR: This paper examines a recently published approach for the reachability checking of Petri net markings, gives proofs concerning the completeness and the correctness properties of the algorithm, and extends the algorithm to handle new classes of problems: submarking coverability and reachability of Petri nets with inhibitor arcs.
Abstract: Formal verification is becoming more prevalent and often compulsory in the safety-critical system and software development processes. Reachability analysis can provide information about safety and invariant properties of the developed system. However, checking the reachability is a computationally hard problem, especially in the case of asynchronous or infinite state systems. Petri nets are widely used for the modeling and verification of such systems. In this paper we examine a recently published approach for the reachability checking of Petri net markings. We give proofs concerning the completeness and the correctness properties of the algorithm, and we introduce algorithmic improvements. We also extend the algorithm to handle new classes of problems: submarking coverability and reachability of Petri nets with inhibitor arcs.
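
To make the problem concrete, here is a naive explicit-state reachability check in Python. This is not the algorithm the paper analyzes (which targets much larger, even infinite, state spaces); it only illustrates what "reachability of a marking" means, on an invented two-place net.

```python
from collections import deque

# A marking is a tuple of per-place token counts; a transition is a
# (consume, produce) pair of per-place vectors.

def fire(marking, consume, produce):
    if all(m >= c for m, c in zip(marking, consume)):
        return tuple(m - c + p for m, c, p in zip(marking, consume, produce))
    return None  # transition not enabled in this marking

def reachable(initial, target, transitions, limit=100_000):
    seen, queue = {initial}, deque([initial])
    while queue and len(seen) < limit:       # BFS over the marking graph
        m = queue.popleft()
        if m == target:
            return True
        for consume, produce in transitions:
            nxt = fire(m, consume, produce)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

t1 = ((1, 0), (0, 1))                    # move one token from p0 to p1
print(reachable((2, 0), (0, 2), [t1]))   # True: fire t1 twice
```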

7 citations


Journal ArticleDOI
TL;DR: This work implements and revises Kornai's grammar of Hungarian NPs to create a parser that identifies noun phrases in Hungarian text, and formulates rules to account for some specific phenomena of the Hungarian language not covered by the original rule system.
Abstract: We implement and revise Kornai's grammar of Hungarian NPs [11] to create a parser that identifies noun phrases in Hungarian text. After making several practical amendments to our morphological annotation system of choice, we proceed to formulate rules to account for some specific phenomena of the Hungarian language not covered by the original rule system. Although the performance of the final parser is still inferior to state-of-the-art machine learning methods, we use its output successfully to improve the performance of one such system.
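
A toy illustration of rule-based NP identification, not Kornai's rule system: one invented pattern over a hypothetical POS tagset, applied to a tagged Hungarian fragment.

```python
import re

# Toy rule-based NP chunking over a POS-tagged sentence; the tagset
# (DET, ADJ, NOUN, VERB) and the single rule are invented.
tagged = [("a", "DET"), ("piros", "ADJ"), ("alma", "NOUN"), ("esik", "VERB")]

tags = " ".join(tag for _, tag in tagged)
# One toy rule: an NP is an optional determiner, any adjectives, a noun.
for match in re.finditer(r"(DET )?(ADJ )*NOUN", tags):
    start = tags[:match.start()].count(" ")   # token index of match start
    end = start + match.group().count(" ") + 1
    print([w for w, _ in tagged[start:end]])  # ['a', 'piros', 'alma']
```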

7 citations


Journal ArticleDOI
TL;DR: A novel ranking method which may be useful in sports like tennis, table tennis or American football is introduced and analyzed, and shown to have good predictive power.
Abstract: In this paper a novel ranking method which may be useful in sports like tennis, table tennis or American football is introduced and analyzed. In order to rank the players or teams, a time-dependent PageRank-based method is applied to the directed and weighted graph representing game results in a sport competition. The method was examined on the results of the table tennis competition of enthusiastic sport-loving researchers of the Institute of Informatics at the University of Szeged. The results of our method were compared with those of several popular ranking techniques. We observed that our approach works well in general and that it has good predictive power.
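
A sketch of the core idea with plain (non-time-dependent) PageRank via power iteration; the players, results, and the loser-to-winner edge convention are assumptions for illustration only.

```python
# Each game adds a weighted edge from the loser to the winner, so the
# "vote" flows toward stronger players. All names and results invented.

def pagerank(edges, nodes, d=0.85, iters=100):
    rank = {v: 1 / len(nodes) for v in nodes}
    out_weight = {v: sum(w for (u, _, w) in edges if u == v) for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - d) / len(nodes) for v in nodes}
        for u, v, w in edges:
            nxt[v] += d * rank[u] * w / out_weight[u]
        rank = nxt          # (players with no losses simply leak rank
    return rank             #  in this toy version)

players = {"Anna", "Bela", "Csaba"}
games = [("Bela", "Anna", 2),    # Anna beat Bela twice
         ("Csaba", "Anna", 1),
         ("Csaba", "Bela", 1)]
for p, r in sorted(pagerank(games, players).items(), key=lambda x: -x[1]):
    print(f"{p}: {r:.3f}")       # Anna ranks first
```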

7 citations


Journal ArticleDOI
TL;DR: The pilot implementation of a general testing methodology for embedded systems that could be specialized to different environments is introduced and, as a proof-of-concept, how the coverage results were used for different purposes is presented.
Abstract: Software testing is a very important activity in the software development life cycle. Numerous general black- and white-box techniques exist to achieve different goals and there are a lot of practices for different kinds of software. The testing of embedded systems, however, raises some very special constraints and requirements in software testing. Special solutions exist in this field, but there is no general testing methodology for embedded systems. One of the goals of the CIRENE project was to fill this gap and define a general testing methodology for embedded systems that could be specialized to different environments. The project included a pilot implementation of this methodology in a specific environment: an Android-based Digital TV receiver (Set-Top-Box). In this pilot, we implemented method level code coverage measurement of Android applications. This was done by instrumenting the applications and creating a framework for the Android device that collected basic information from the instrumented applications and communicated it through the network towards a server where the data was finally processed. The resulting code coverage information was used for many purposes according to the methodology: test case selection and prioritization, traceability computation, dead code detection, etc. The resulting methodology and toolset were reused in another project where we investigated whether the coverage information can be used to determine locations to be instrumented in order to collect relevant information about software usability. In this paper, we introduce the pilot implementation and, as a proof-of-concept, present how the coverage results were used for different purposes.
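
The pilot instrumented Android applications; the Python sketch below only illustrates the underlying idea of method-level coverage, with a hypothetical class standing in for an instrumented app.

```python
import functools

# Wrap each method so its first execution is recorded, then compare the
# covered set against all instrumented methods. Names are hypothetical.
covered = set()

def instrument(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        covered.add(fn.__qualname__)   # record that the method ran
        return fn(*args, **kwargs)
    return wrapper

class Receiver:                        # stand-in for an instrumented app
    @instrument
    def tune(self, channel): return f"tuned to {channel}"
    @instrument
    def record(self): return "recording"

Receiver().tune(7)                     # a "test suite" exercising one method
print("covered:", covered)             # {'Receiver.tune'}
print("untested:", {"Receiver.tune", "Receiver.record"} - covered)
```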

7 citations


Journal ArticleDOI
TL;DR: The paper discusses the model-driven results in the field of multi-mobile platform development and provides a method which increases the development productivity and the quality of the applications and also reduces the time to market.
Abstract: Mobile devices and mobile applications have a significant effect on the present and on the future of the software industry. The diversity of mobile platforms necessitates the development of the same mobile application for all major mobile platforms, which requires considerable development effort. Mobile application developers are multiplatform developers, but they prioritize the platforms, therefore, not all platforms are equally important for them. Appropriate methods, processes and tools are required to support the development in order to achieve better productivity. The main motivation of our research activity is to provide a method which increases the development productivity and the quality of the applications and also reduces the time to market. The paper discusses our model-driven results in the field of multi-mobile platform development.

5 citations


Journal ArticleDOI
TL;DR: Most of the runtime failures of a software system can be revealed only during test execution, which has a very high cost; in Java programs, runtime failures are manifested as unhandled runtime exceptions.
Abstract: Most of the runtime failures of a software system can be revealed during test execution only, which has a very high cost. In Java programs, runtime failures are manifested as unhandled runtime exceptions…

5 citations


Journal ArticleDOI
TL;DR: A prototype system is presented which satisfies the requirements of a virtual observatory over semantic databases, such as user roles, data import, query execution, visualization, exporting results, etc., and has special features which facilitate working with semantic data.
Abstract: E-Science relies heavily on manipulating massive amounts of data for research purposes. Researchers should be able to contribute their own data and methods, thus making their results accessible and reproducible by others worldwide. They need an environment which they can use anytime and anywhere to perform data-intensive computations. Virtual observatories serve this purpose. With the advance of the Semantic Web, more and more data is available in Resource Description Framework based databases. It is often desirable to have the ability to link data from local sources to these public data sets. We present a prototype system which satisfies the requirements of a virtual observatory over semantic databases, such as user roles, data import, query execution, visualization, exporting results, etc. The system has special features which facilitate working with semantic data: a visual query editor, use of ontologies, knowledge inference, querying remote endpoints, linking remote data with local data, and extracting data from web pages.
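
One feature from the list, querying remote endpoints, can be illustrated with the SPARQLWrapper library against the public DBpedia endpoint; this is a generic example, not the prototype's own interface or data.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Query a remote SPARQL endpoint for the English label of a resource.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Szeged> rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"])   # "Szeged"
```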

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors present their experiences in implementing control flow graph (CFG) construction for a special 4th generation language called Magic, and they identify differences compared to 3rd generation languages, mostly because of the unique programming technique of Magic (e.g. data access, parallel task execution, events).
Abstract: A good compiler which implements many optimizations during its compilation phases must be able to perform several static analysis techniques such as control flow or data flow analysis. Besides compilers, these techniques are common for static analyzers as well to retrieve information from source code, for example for code auditing, quality assurance or testing purposes. Implementing control flow analysis requires handling many special structures of the target language. In our paper we present our experiences in implementing control flow graph (CFG) construction for a special 4th generation language called Magic. While we were designing and implementing the CFG for this language, we identified differences compared to 3rd generation languages, mostly because of the unique programming technique of Magic (e.g. data access, parallel task execution, events). Our work was motivated by our industrial partner who needed precise static analysis tools (e.g. for quality assurance or testing purposes) for this language. We believe that our experiences for Magic, as a representative of 4GLs, might be generalized for other languages too.
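
For readers who have not met the term, a control flow graph is a directed graph of statements and their possible successors. A minimal sketch on an invented toy program follows; handling Magic's special constructs (data access, parallel tasks, events) is precisely what this sketch does not show.

```python
from collections import defaultdict

# Nodes are statements, edges are possible successors; the branch on
# "if x > 0" has two outgoing edges. The toy program is invented.
class CFG:
    def __init__(self):
        self.succ = defaultdict(list)
    def add_edge(self, frm, to):
        self.succ[frm].append(to)

cfg = CFG()
cfg.add_edge("entry", "x := read()")
cfg.add_edge("x := read()", "if x > 0")
cfg.add_edge("if x > 0", "y := x")      # true branch
cfg.add_edge("if x > 0", "y := -x")     # false branch
cfg.add_edge("y := x", "print(y)")
cfg.add_edge("y := -x", "print(y)")
cfg.add_edge("print(y)", "exit")

print(cfg.succ["if x > 0"])  # both branch successors
```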

Journal ArticleDOI
TL;DR: A user-driven approach for reusable RESTful service compositions is presented; such compositions can be executed once or configured to be executed repeatedly, for example, to get the newest updates from a service once a week.
Abstract: RESTful services are becoming a popular technology for providing and consuming cloud services. The idea of cloud computing is based on on-demand services and their agile usage. This implies that also personal service compositions and workflows should be supported. Some approaches for RESTful service compositions have been proposed. In practice, such compositions typically present mashup applications, which are composed in an ad-hoc manner. In addition, such approaches and tools are mainly targeted for programmers rather than end-users. In this paper, a user-driven approach for reusable RESTful service compositions is presented. Such compositions can be executed once or they can be configured to be executed repeatedly, for example, to get the newest updates from a service once a week.
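
To make the concept concrete, here is what such a composition looks like when written by hand in Python with the requests library; both URLs and the response shapes are hypothetical. The point of the paper's approach is that end-users can define and re-run such compositions without writing code like this.

```python
import requests

# A hand-coded two-step REST composition over hypothetical endpoints.
def weekly_updates(user):
    # Step 1: look up the feeds the user follows.
    feeds = requests.get(f"https://api.example.com/users/{user}/feeds",
                         timeout=10).json()
    # Step 2: for each feed, fetch only items newer than one week.
    return [item
            for feed in feeds
            for item in requests.get(
                f"https://api.example.com/feeds/{feed}/items",
                params={"since": "7d"}, timeout=10).json()]

# Re-running weekly_updates("alice") once a week realizes the repeated,
# configured execution mentioned in the abstract.
```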

Journal ArticleDOI
TL;DR: A general algorithm schema is provided to construct special algorithms which are able to compute the equivalence classes for a given set of affinity functions, and a special scenario is analyzed in which the affinity operator combines two affinities using an aggregation operator and the parameter defines the weights of the affinities.
Abstract: The equivalence of affinities in fuzzy connectedness (FC) is a novel concept which gives us the ability to study affinity functions and their precise connection with FC algorithms. Two seminal papers by Ciesielski and Udupa create a strong theoretical background and provide some useful practical examples. Our intention here is to investigate this concept further because, from a practical viewpoint, if we are able to determine the equivalence classes for a given set of affinity functions and narrow it down to a much smaller set of non-equivalent affinities, then the set can be used more effectively in an optimization framework which searches for the best affinity function or parameters for a special task. In other words, we can find the best configuration for a set of given hardware or an image set with special characteristics. From a theoretical perspective, we are interested in the complexity of this problem, i.e. determining equivalence classes. Here, an affinity operator is used which is a function of a given parameter and maps different parameter values to different affinity functions. Our first questions, namely how many different meaningful, non-equivalent affinities there are and how we can enumerate them, led us to a general problem of how the equivalent affinities partition the parameter's domain and how the corresponding equivalence classes can be determined. We provide a general algorithm schema to construct special algorithms which are able to compute the equivalence classes. We also analyze a special but very common scenario in which the affinity operator combines two affinities (e.g. a homogeneity and an object feature-based affinity) using an aggregation operator (e.g. weighted average) and the particular parameter defines the weights of the affinities. Based on the general algorithm schema, we propose algorithms for this special case and determine their complexity as well. These algorithms are tested on two sets of medical images, namely 25 digital dermoscopy images 1280 x 1024 pixels in size and 3 x 25 simulated brain MRI slices 181 x 217 pixels in size.
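
The weighted-average scenario can be sketched as follows; the two affinity functions here are simplistic placeholders (not the paper's definitions), and w is the parameter whose domain the equivalence classes partition.

```python
# Placeholder affinities on 8-bit intensities, combined by the weighted
# average described in the abstract. All formulas here are invented.

def homogeneity_affinity(a, b):
    # High when the two intensities are similar.
    return 1.0 - abs(float(a) - float(b)) / 255.0

def object_affinity(a, b, mean=128.0):
    # High when both intensities are close to an expected object mean.
    return 1.0 - max(abs(float(a) - mean), abs(float(b) - mean)) / 255.0

def combined_affinity(a, b, w):
    # w is the parameter whose values the equivalence classes partition.
    return w * homogeneity_affinity(a, b) + (1 - w) * object_affinity(a, b)

for w in (0.25, 0.5, 0.75):   # sampling the parameter's domain
    print(w, round(combined_affinity(100, 120, w), 3))
```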

Journal ArticleDOI
TL;DR: This paper demonstrates the new facilities of the browser as a visualization tool, going beyond what is expected of traditional web applications, and demonstrates that with mashup technologies, which enable combining already existing content from various sites into an integrated experience, the new graphics facilities unleash unforeseen potential.
Abstract: The Web has rapidly evolved from a simple document browsing and distribution environment into a rich software platform, where desktop-style applications are treated as first class citizens. Despite the associated technical complexities and limitations, it is not unusual to find complex applications that build on the web as their only platform, with no traditional installable application for the desktop environment – such systems are simply accessed via a web page that is downloaded inside the browser and once loading is completed, the application will begin its execution immediately. With the recent standardization efforts, including HTML5 and WebGL in particular, compelling, visually rich applications are increasingly supported by the browsers. In this paper, we demonstrate the new facilities of the browser as a visualization tool, going beyond what is expected of traditional web applications. In particular, we demonstrate that with mashup technologies, which enable combining already existing content from various sites into an integrated experience, the new graphics facilities unleash unforeseen potential.

Journal ArticleDOI
TL;DR: The concept of database slicing is introduced and the algorithms and data structures necessary for slicing a given database are described and the Table-based and Record-based slicing algorithms are defined.
Abstract: Many software systems today use databases to permanently store their data. Testing, bug finding and migration are complex problems in the case of databases that contain many records. Here, our method can speed up these processes if we can select a smaller piece of the database (called a slice) that contains all of the records belonging to the slicing criterion. The slicing criterion might be, for example, a record which gives rise to a bug in the program. Database slicing seeks to select all the records belonging to a specific slicing criterion. Here, we introduce the concept of database slicing and describe the algorithms and data structures necessary for slicing a given database. We define the Table-based and the Record-based slicing algorithms and we empirically evaluate these methods in two scenarios by applying the slicing to the database of a real-life application and to randomly generated database content.
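
A toy sketch of what a record-based slice might look like: starting from the criterion record, collect everything reachable through foreign keys. The schema and the traversal below are invented for illustration; the paper defines its own Table-based and Record-based algorithms.

```python
from collections import deque

records = {                       # (table, id) -> row; invented schema
    ("orders", 1): {"customer_id": 10},
    ("orders", 2): {"customer_id": 11},
    ("customers", 10): {"name": "Ada"},
    ("customers", 11): {"name": "Bob"},
}
foreign_keys = {("orders", "customer_id"): "customers"}

def record_slice(criterion):
    # BFS from the slicing criterion along foreign-key references.
    slice_, queue = {criterion}, deque([criterion])
    while queue:
        table, rid = queue.popleft()
        for column, value in records[(table, rid)].items():
            target = foreign_keys.get((table, column))
            if target and (target, value) not in slice_:
                slice_.add((target, value))
                queue.append((target, value))
    return slice_

print(record_slice(("orders", 1)))   # the buggy order and its customer only
```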

Journal ArticleDOI
TL;DR: A semidefinite upper bound on the square of the stability number of a graph, the inverse theta number, is introduced, which is proved to be multiplicative with respect to the strong graph product, hence to be an upper bound for the square of the Shannon capacity of the graph.
Abstract: In the paper we introduce a semidefinite upper bound on the square of the stability number of a graph, the inverse theta number, which is proved to be multiplicative with respect to the strong graph product, hence to be an upper bound for the square of the Shannon capacity of the graph. We also describe a heuristic algorithm for the stable set problem based on semidefinite programming, Cholesky factorization, and eigenvector computation.
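
The bounding argument stated in the abstract can be written out in a few lines; the notation ι(G) for the inverse theta number is ours.

```latex
% Multiplicativity over the strong product \boxtimes, together with
% \alpha(G)^2 \le \iota(G) applied to the power graph G^{\boxtimes k},
% gives \alpha(G^{\boxtimes k})^2 \le \iota(G^{\boxtimes k}) = \iota(G)^k.
% Taking k-th roots and the limit defining the Shannon capacity
% \Theta(G) = \lim_{k\to\infty} \alpha(G^{\boxtimes k})^{1/k} yields:
\[
  \Theta(G)^2
  = \lim_{k\to\infty} \alpha\!\left(G^{\boxtimes k}\right)^{2/k}
  \le \lim_{k\to\infty} \left(\iota(G)^{k}\right)^{1/k}
  = \iota(G).
\]
```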

Journal ArticleDOI
TL;DR: The jSRML metalanguage is demonstrated, which provides a way to define more comprehensive and non-obtrusive validation rules for forms, and a system called jSRMLTool is created which can perform hybrid validation methods as well as propose jSRML validation rules using machine learning.
Abstract: Over the years the Internet has spread to most areas of our lives, ranging from reading news, ordering food, streaming music and playing games all the way to handling our finances online. With this rapid expansion came an increased need to ensure that the data being transmitted is valid. Validity is important not just to avoid data corruption but also to prevent possible security breaches. Whenever a user wants to interact with a website where information needs to be shared, they usually fill out forms and submit them for server-side processing. Web forms are very prone to input errors, external exploits like SQL injection attacks, automated bot submissions and several other security circumvention attempts. We will demonstrate our jSRML metalanguage which provides a way to define more comprehensive and non-obtrusive validation rules for forms. We used jQuery to allow asynchronous AJAX validation without posting the page, to provide a seamless experience for the user. Our approach also allows rules to be defined to correct mistakes in user input aside from performing validation, making it a valuable asset in the space of form validation. We have created a system called jSRMLTool which can perform hybrid validation methods as well as propose jSRML validation rules using machine learning.