
Showing papers in "Fundamenta Informaticae in 2014"


Journal ArticleDOI
TL;DR: This study examines the notion of inconsistency in pairwise comparisons in order to provide an axiomatization for it, and proposes two inconsistency indicators for pairwise comparisons.
Abstract: This study examines the notion of inconsistency in pairwise comparisons in order to provide an axiomatization for it. It also proposes two inconsistency indicators for pairwise comparisons. The primary motivation for inconsistency reduction is expressed by the computer-industry adage “garbage in, garbage out”: the quality of the output depends on the quality of the input.
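As a concrete point of reference (our own illustration; the function names and the specific indicator below are assumptions, not necessarily one of the two indicators axiomatized in the paper), a widely used triad-based inconsistency indicator measures how far each triad of comparisons is from the consistency condition M[i][j] * M[j][k] = M[i][k]:

```python
from itertools import combinations

def triad_inconsistency(a, b, c):
    """Inconsistency of a triad (a = M[i][j], b = M[i][k], c = M[j][k]);
    the triad is consistent exactly when a * c == b."""
    return min(abs(1 - b / (a * c)), abs(1 - (a * c) / b))

def matrix_inconsistency(M):
    """Worst-case triad inconsistency of a reciprocal pairwise comparison matrix."""
    n = len(M)
    return max(
        (triad_inconsistency(M[i][j], M[i][k], M[j][k])
         for i, j, k in combinations(range(n), 3)),
        default=0.0,
    )

M = [[1, 2, 6],
     [1/2, 1, 3],
     [1/6, 1/3, 1]]
print(matrix_inconsistency(M))            # 0.0 for a fully consistent matrix
M[0][2], M[2][0] = 5, 1/5                 # now 2 * 3 != 5
print(round(matrix_inconsistency(M), 3))  # 0.167
```

A value of 0 means every triad is perfectly consistent; larger values flag "garbage in" before the comparisons are used to derive weights.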

70 citations


Journal ArticleDOI
TL;DR: It is shown that the logic of subintervals, the fragment of the Halpern-Shoham logic where only the operator “during”, or D, is allowed, is undecidable over discrete structures.
Abstract: The Halpern-Shoham logic is a modal logic of time intervals. Some effort has been put over the last ten years into classifying fragments of this beautiful logic with respect to the decidability of their satisfiability problem. We complete this classification by showing (what we believe is quite an unexpected result) that the logic of subintervals, the fragment of the Halpern-Shoham logic where only the operator “during”, or D, is allowed, is undecidable over discrete structures. This is surprising, as this apparently very simple logic is decidable over dense orders and its reflexive variant is known to be decidable over discrete structures. Our result subsumes many previous undecidability results for fragments that include D.

54 citations


Journal ArticleDOI
TL;DR: This paper constructs a reaction system model based on a novel concept of dominance graph that captures the competition for resources in the ODE model, and discusses the expressivity of reaction systems as compared to that of ODE-based models.
Abstract: Reaction systems are a formal framework for modeling processes driven by biochemical reactions. They are based on the mechanisms of facilitation and inhibition. A main assumption is that if a resource is available, then it is present in sufficient amounts; as such, several reactions using the same resource will not compete with each other. This makes reaction systems, as a modeling framework, very different from traditional frameworks such as ODEs or continuous-time Markov chains. We demonstrate in this paper that reaction systems are rich enough to capture the essential characteristics of ODE-based models. We construct a reaction system model for the heat shock response in such a way that its qualitative behavior correlates well with the quantitative behavior of the corresponding ODE model. We construct our reaction system model based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We conclude with a discussion of the expressivity of reaction systems as compared to that of ODE-based models.
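As a rough illustration of the facilitation/inhibition mechanism described above (a generic, hypothetical example with abstract entity names, not the heat shock response model from the paper), a reaction (R, I, P) is enabled on a state T when all reactants in R are present and no inhibitor in I is, and the next state is simply the union of the products of all enabled reactions, with no competition for resources:

```python
def step(reactions, state):
    """One step of a reaction system: a reaction (reactants, inhibitors, products)
    fires iff all its reactants are present and none of its inhibitors is;
    the results of all enabled reactions are unioned."""
    result = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            result |= products
    return result

reactions = [
    ({"a"}, {"c"}, {"b"}),   # a produces b, unless c inhibits it
    ({"b"}, set(), {"c"}),   # b produces c
]
state = {"a"}
for _ in range(3):
    state = step(reactions, state)
    print(sorted(state))     # ['b'], then ['c'], then [] (nothing sustains 'a')
```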

46 citations


Journal ArticleDOI
TL;DR: Two new centrality measures are proposed: Diffusion Degree, for the independent cascade model of information diffusion, and Maximum Influence Degree, which provides the maximum theoretically possible influence (an upper bound) for a node.
Abstract: The paper addresses the problem of finding the top-k influential nodes in large-scale directed social networks. We propose two new centrality measures: Diffusion Degree, for the independent cascade model of information diffusion, and Maximum Influence Degree. Unlike other existing centrality measures, diffusion degree considers neighbors' contributions in addition to the degree of a node. The measure also works flawlessly with non-uniform propagation probability distributions. Maximum Influence Degree, on the other hand, provides the maximum theoretically possible influence (an upper bound) for a node. Extensive experiments are performed with five different real-life large-scale directed social networks. With the independent cascade model, we perform experiments for both uniform and non-uniform propagation probabilities. We use the Diffusion Degree Heuristic (DiDH) and the Maximum Influence Degree Heuristic (MIDH) to find the top-k influential individuals. The k seeds obtained through these, for both setups, show superior influence compared to the seeds obtained by the high-degree heuristic, degree discount heuristics, different variants of set-covering greedy algorithms, and the Prefix-excluding Maximum Influence Arborescence (PMIA) algorithm. The superiority of the proposed method is also found to be statistically significant according to a t-test.
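The precise definition of Diffusion Degree is given in the paper; as a hedged sketch of the underlying idea only (a node's own propagation-weighted degree plus its neighbours' contributions), one might compute something like the following, where p[(u, v)] is the propagation probability on edge (u, v) and all names are ours:

```python
def diffusion_degree(adj, p):
    """Sketch: a node's propagation-weighted out-degree plus the
    propagation-weighted out-degrees of its out-neighbours.
    adj: {node: set of out-neighbours}; p: {(u, v): propagation probability}."""
    weighted_deg = {v: sum(p[(v, u)] for u in adj[v]) for v in adj}
    return {
        v: weighted_deg[v] + sum(p[(v, u)] * weighted_deg[u] for u in adj[v])
        for v in adj
    }

adj = {"a": {"b", "c"}, "b": {"c"}, "c": set()}
p = {("a", "b"): 0.5, ("a", "c"): 0.1, ("b", "c"): 0.9}
print(diffusion_degree(adj, p))  # {'a': 1.05, 'b': 0.9, 'c': 0.0}
```

The appeal of such a measure is that it stays local to a node and its neighbourhood, which is what makes degree-style heuristics cheap on large networks.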

44 citations


Journal ArticleDOI
TL;DR: An abstract graphical language in the form of Petri nets is used to describe the behavior of Physarum polycephalum; Petri nets are a good formalism to assist designers and support hardware design tools, especially in developing concurrent systems.
Abstract: Our research is focused on the creation of a new object-oriented programming language for Physarum polycephalum computing. Physarum polycephalum is a one-cell organism that can be used for developing a biological architecture of different abstract devices, among others digital ones. In the paper, we use an abstract graphical language in the form of Petri nets to describe the behavior of Physarum polycephalum. Petri nets are a good formalism to assist designers and support hardware design tools, especially in developing concurrent systems. At the initial stage considered in this paper, we show how to build Petri net models of the basic logic gates AND, OR, and NOT and of simple combinational circuits, using the 1-to-2 demultiplexer as an example, and then implement them as Physarum polycephalum machines.
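Purely as a reference point (our own note, separate from the Petri net and Physarum constructions themselves), the Boolean behaviour that the modelled 1-to-2 demultiplexer is expected to realize from AND/NOT gates is:

```python
def demux_1_to_2(data, select):
    """1-to-2 demultiplexer: the data bit is routed to output 0 or output 1
    depending on the select bit (out0 = data AND NOT select, out1 = data AND select)."""
    return (data and not select, data and select)

for d in (False, True):
    for s in (False, True):
        print(int(d), int(s), demux_1_to_2(d, s))
```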

42 citations


Journal ArticleDOI
TL;DR: The presented framework, concepts and techniques can conveniently be used to boost the development of next-generation algorithms and advanced applications, which would benefit greatly from the CICT infocentric worldview.
Abstract: Discrete Tomography (DT), differently from GT and CT, focuses on the case where only a few specimen projections are known and the images contain a small number of different colours (e.g. black-and-white). A concise review of the main contemporary physical and mathematical CT system problems is offered. Stochastic vs. Combinatorially Optimized Noise generation is compared and presented through two visual examples, to emphasise a major double-bind problem at the core of the most advanced contemporary instrumentation systems. Automatically tailoring denoising procedures to real dynamic system characteristics and performance can get closer to an ideal self-registering and self-linearizing system that generates a virtual uniform and robust probing field during its whole designed service life-cycle. The first attempt to develop basic principles for the automatic characterization, profiling and identification of system background low-level noise sources by CICT, from discrete system parameters, is presented. As a matter of fact, CICT can supply us with cyclic numeric sequences perfectly tuned to their low-level multiplicative source generators, related to the experimental high-level overall perturbation (according to the high-level classic perturbation computational model, under either an additive or a multiplicative perturbation hypothesis). Numeric examples are presented. Furthermore, a practical NTT example is given. Specifically, advanced CT systems, HRO and Mission Critical Project (MCP) systems for very low Technological Risk (TR) and Crisis Management (CM) systems would benefit greatly from the CICT infocentric worldview. The presented framework, concepts and techniques can be used quite conveniently to boost the development of next-generation algorithms and advanced applications.

38 citations


Journal ArticleDOI
TL;DR: This paper starts by defining certain and possible rules based on non-deterministic information and then reconsiders the NIS-Apriori algorithm, which generates a given implication if and only if it is either a certain rule or a possible rule satisfying the constraints.
Abstract: This paper discusses issues related to incomplete information databases and considers a logical framework for rule generation. In our approach, a rule is an implication satisfying specified constraints. The term incomplete information databases covers many types of inexact data, such as non-deterministic information, data with missing values, incomplete information or interval-valued data. In the paper, we start by defining certain and possible rules based on non-deterministic information. We use their mathematical properties to solve computational problems related to rule generation. Then, we reconsider the NIS-Apriori algorithm, which generates a given implication if and only if it is either a certain rule or a possible rule satisfying the constraints. In this sense, NIS-Apriori is logically sound and complete. In this paper, we pay special attention to the soundness and completeness of the considered algorithmic framework, which is not necessarily obvious when switching from exact to inexact data sets. Moreover, we analyze different types of non-deterministic information corresponding to different types of underlying attributes, i.e., value sets for qualitative attributes and intervals for quantitative attributes, and we discuss various approaches to the construction of descriptors related to particular attributes within the rules' premises. An improved implementation of NIS-Apriori and some demonstrations of an experimental application of our approach to data sets taken from the UCI machine learning repository are also presented. Last but not least, we show simplified proofs of some of our theoretical results.
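To make the certain/possible distinction concrete, here is a brute-force illustration for small set-valued (non-deterministic) tables (our own naive enumeration of all completions; NIS-Apriori itself avoids this exponential blow-up, and all names below are hypothetical): a rule is certain if it satisfies the support/confidence constraints in every completion of the table, and possible if it does so in at least one.

```python
from itertools import product

def completions(table):
    """All exact tables obtained by fixing each set-valued cell to a single value."""
    cells = [sorted(cell) for row in table for cell in row]
    width = len(table[0])
    for choice in product(*cells):
        yield [choice[i * width:(i + 1) * width] for i in range(len(table))]

def holds(table, premise, conclusion, min_support=1, min_conf=1.0):
    """Does premise -> conclusion (dicts column -> value) meet the thresholds?"""
    match_p = [r for r in table if all(r[c] == v for c, v in premise.items())]
    match_pc = [r for r in match_p if all(r[c] == v for c, v in conclusion.items())]
    if not match_p:
        return False
    return len(match_pc) >= min_support and len(match_pc) / len(match_p) >= min_conf

def rule_status(table, premise, conclusion):
    results = [holds(t, premise, conclusion) for t in completions(table)]
    return "certain" if all(results) else ("possible" if any(results) else "neither")

# Column 0 = condition attribute, column 1 = decision; row 1 has a non-deterministic cell.
table = [[{"high"}, {"flu"}],
         [{"high", "low"}, {"flu"}]]
print(rule_status(table, {0: "high"}, {1: "flu"}))  # certain
```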

36 citations


Journal ArticleDOI
TL;DR: This work proposes an approach to decompose process mining problems into smaller problems using the notion of passages, supported through ProM plug-ins that automatically decomposes process discovery and conformance checking tasks.
Abstract: The two most prominent process mining tasks are process discovery (i.e., learning a process model from an event log) and conformance checking (i.e., diagnosing and quantifying differences between observed and modeled behavior). The increasing availability of event data makes these tasks highly relevant for process analysis and improvement. Therefore, process mining is considered to be one of the key technologies for Business Process Management (BPM). However, as event logs and process models grow, process mining becomes more challenging. Therefore, we propose an approach to decompose process mining problems into smaller problems using the notion of passages. A passage is a pair (X, Y) of two non-empty sets of activities such that the set of direct successors of X is Y and the set of direct predecessors of Y is X. Any Petri net can be partitioned using passages. Moreover, process discovery and conformance checking can be done per passage and the results can be aggregated. This has advantages in terms of efficiency and diagnostics. Moreover, passages can be used to distribute process mining problems over a network of computers. Passages are supported through ProM plug-ins that automatically decompose process discovery and conformance checking tasks.
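The passage condition translates directly into code; a small check of our own (the edge relation plays the role of the direct successor/predecessor relations):

```python
def is_passage(X, Y, edges):
    """(X, Y) is a passage iff X and Y are non-empty, the direct successors
    of X are exactly Y, and the direct predecessors of Y are exactly X."""
    successors_of_X = {b for (a, b) in edges if a in X}
    predecessors_of_Y = {a for (a, b) in edges if b in Y}
    return bool(X) and bool(Y) and successors_of_X == Y and predecessors_of_Y == X

edges = {("a", "c"), ("b", "c"), ("b", "d")}
print(is_passage({"a", "b"}, {"c", "d"}, edges))  # True
print(is_passage({"a"}, {"c"}, edges))            # False: c also has predecessor b
```

Each passage then delimits a fragment of the model and of the log on which discovery or conformance checking can be run independently, before the results are aggregated.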

33 citations


Journal ArticleDOI
TL;DR: As discussed by the authors, a nonconstructive proof can be used to prove the existence of an object with some properties without providing an explicit example of such an object; a special case is a probabilistic proof, where an object with the required properties appears with positive probability in some random process.
Abstract: A nonconstructive proof can be used to prove the existence of an object with some properties without providing an explicit example of such an object. A special case is a probabilistic proof, where we show that an object with the required properties appears with some positive probability in some random process. Can we use such arguments to prove the existence of a computable infinite object? Sometimes yes: following [8], we show how the notion of a layerwise computable mapping can be used to prove a computable version of the Lovász local lemma.

29 citations


Journal ArticleDOI
TL;DR: This paper proposes a new multi-step backward cloud transformation algorithm based on sampling with replacement (MBCT-SR), which is more precise than the existing methods; the effectiveness and convergence of the new method are analyzed in detail.
Abstract: The representation and processing of uncertain information is one of the key basic issues of intelligent information processing in the face of rapidly growing amounts of information, especially in the network era. There have been many theories, such as probability and statistics, evidence theory, fuzzy sets, rough sets, the cloud model, etc., to deal with uncertain information from different perspectives, and they have been applied to obtaining rules and knowledge from large amounts of data, for example in data mining, knowledge discovery, machine learning, expert systems, etc. Simply put, this is a cognitive transformation process from data to knowledge (FDtoK). However, the cognitive transformation process from knowledge to data (FKtoD) is what often happens in the human brain, yet it has received little research attention. As an effective cognition model, the cloud model provides a cognitive transformation way to realize both the FDtoK and FKtoD processes via the forward cloud transformation (FCT) and the backward cloud transformation (BCT). In this paper, we first introduce the FCT and BCT, and make an in-depth analysis of the two existing single-step BCT algorithms. We find that these two BCT algorithms lack stability and are sometimes invalid. For this reason we propose a new multi-step backward cloud transformation algorithm based on sampling with replacement (MBCT-SR), which is more precise than the existing methods. Furthermore, the effectiveness and convergence of the new method are analyzed in detail, and the setting of the parameters m and r appearing in MBCT-SR is also analyzed. Finally, we carry out error analysis and comparisons on simulation experiments to demonstrate the efficiency of the proposed backward cloud transformation algorithm.
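For readers unfamiliar with cloud models, here is a minimal sketch (ours) of a forward cloud transformation and of the basic single-step, moment-based backward cloud transformation; the multi-step MBCT-SR algorithm proposed in the paper is precisely meant to improve on the instability visible in this naive estimator:

```python
import math
import random

def forward_cloud(Ex, En, He, n):
    """Forward cloud transformation: generate n cloud drops from (Ex, En, He)."""
    drops = []
    for _ in range(n):
        En_i = random.gauss(En, He)              # per-drop entropy
        drops.append(random.gauss(Ex, abs(En_i)))
    return drops

def backward_cloud(drops):
    """Basic single-step backward cloud transformation (moment estimators)."""
    n = len(drops)
    Ex = sum(drops) / n
    En = math.sqrt(math.pi / 2) * sum(abs(x - Ex) for x in drops) / n
    var = sum((x - Ex) ** 2 for x in drops) / (n - 1)
    He = math.sqrt(max(var - En ** 2, 0.0))      # can collapse to 0 on unlucky samples
    return Ex, En, He

print(backward_cloud(forward_cloud(Ex=0.0, En=1.0, He=0.1, n=100_000)))
# roughly (0.0, 1.0, 0.1); the He estimate fluctuates noticeably between runs
```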

28 citations


Journal ArticleDOI
TL;DR: It is proved that the computational power of SN P systems with rules on synapses working in this way is reduced; specifically, they can only generate finite sets of numbers.
Abstract: Spiking neural P systems (SN P systems, for short) with rules on synapses are a new variant of SN P systems, where the spiking and forgetting rules are placed on synapses instead of in neurons. Recent studies illustrated that this variant of SN P systems is universal when working in the way that the synapses starting from the same neuron work in parallel (i.e., all synapses starting from the same neuron must apply their rules if they have rules that can be applied). In this work, we consider SN P systems with rules on synapses working in another way: the synapses starting from the same neuron are restricted to work sequentially (i.e., at each step at most one synapse starting from the same neuron applies its rule). It is proved that the computational power of SN P systems with rules on synapses working in this way is reduced; specifically, they can only generate finite sets of numbers. Such SN P systems with rules on synapses are proved to be universal if synapses are allowed to have weight at most 2 (if a rule which can generate n spikes is applied on a synapse with weight k, then the neuron linked to this synapse receives a total of nk spikes). Two small universal SN P systems with rules on synapses for computing functions are also constructed: a universal system with 26 neurons when using extended rules and each synapse having weight at most 2, and a universal system with 26 neurons when using standard rules and each synapse having weight at most 12. These results illustrate that the weight is an important feature for the computational power of SN P systems.

Journal ArticleDOI
TL;DR: This paper develops the first bisimulation-based method of concept learning in DLs for the following setting: given a knowledge base KB in a DL, a set of objects standing for positive examples and a set for negative examples, learn a concept C in that DL such that the positive examples are instances of C w.r.t. KB.
Abstract: Concept learning in description logics (DLs) is similar to binary classification in traditional machine learning. The difference is that in DLs objects are described not only by attributes but also by binary relationships between objects. In this paper, we develop the first bisimulation-based method of concept learning in DLs for the following setting: given a knowledge base KB in a DL, a set of objects standing for positive examples and a set of objects standing for negative examples, learn a concept C in that DL such that the positive examples are instances of C w.r.t. KB, while the negative examples are not instances of C w.r.t. KB. We also prove soundness of our method and investigate its C-learnability.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a threshold model of social networks, in which the nodes influenced by their neighbours can adopt one out of several alternatives, and characterize the social networks for which adoption of a product by the whole network is possible (respectively, necessary), as well as those for which a unique outcome is guaranteed.
Abstract: We introduce a new threshold model of social networks, in which the nodes influenced by their neighbours can adopt one out of several alternatives. We characterize the social networks for which adoption of a product by the whole network is possible (respectively, necessary), and those for which a unique outcome is guaranteed. These characterizations directly yield polynomial-time algorithms that allow us to determine whether a given social network satisfies one of the above properties. We also study algorithmic questions for networks without unique outcomes. We show that the problem of determining whether a final network exists in which all nodes adopted some product is NP-complete. In turn, we also resolve the complexity of the problems of determining whether a given node adopts some (respectively, a given) product in some (respectively, all) final networks. Further, we show that the problem of computing the minimum possible spread of a product is NP-hard to approximate within an approximation ratio better than Ω(n), in contrast to the maximum spread, which is efficiently computable. Finally, we clarify that some of the above problems can be solved in polynomial time when there are only two products.
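The exact model is defined in the paper; as a rough, simplified sketch of threshold dynamics with several alternative products (our assumption: a node adopts a product once the fraction of its neighbours that adopted that product reaches the node's threshold, with ties broken arbitrarily), one could simulate:

```python
def spread(adj, thresholds, initial):
    """Repeatedly let unadopted nodes adopt a product once enough neighbours
    have adopted it; stop at a fixed point.
    adj: {node: set of neighbours}; thresholds: {node: fraction in (0, 1]};
    initial: {node: product} for the seed adopters."""
    adopted = dict(initial)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in adopted or not adj[v]:
                continue
            counts = {}
            for u in adj[v]:
                if u in adopted:
                    counts[adopted[u]] = counts.get(adopted[u], 0) + 1
            for prod, c in counts.items():
                if c / len(adj[v]) >= thresholds[v]:
                    adopted[v] = prod      # tie-breaking is arbitrary here
                    changed = True
                    break
    return adopted

adj = {1: {2}, 2: {1, 3}, 3: {2}}
print(spread(adj, {1: 0.5, 2: 0.5, 3: 0.5}, {1: "A"}))
# {1: 'A', 2: 'A', 3: 'A'}: product A spreads along the whole path
```

With several competing products and non-trivial thresholds, different scan orders can lead to different final networks, which is exactly the non-uniqueness of outcomes studied in the paper.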

Journal ArticleDOI
TL;DR: The longest common palindromic subsequence (LCPS) problem, as discussed by the authors, is a variant of the classic LCS problem which finds a longest common subsequence between two given strings such that the computed subsequence is also a palindrome.
Abstract: The longest common subsequence (LCS) problem is a classic and well-studied problem in computer science. A palindrome is a word that reads the same forward as backward. The longest common palindromic subsequence (LCPS) problem is a variant of the classic LCS problem which finds a longest common subsequence between two given strings such that the computed subsequence is also a palindrome. In this paper, we study the LCPS problem and give two novel algorithms to solve it. To the best of our knowledge, this is the first attempt to study and solve this problem.
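A direct dynamic-programming sketch of the problem statement (our own illustration, not necessarily either of the two algorithms proposed in the paper): the LCPS of x[i..j] and y[k..l] grows by two whenever all four boundary characters coincide.

```python
from functools import lru_cache

def lcps(x, y):
    """Length of the longest common palindromic subsequence of x and y
    (plain O(|x|^2 * |y|^2) dynamic programming)."""
    @lru_cache(maxsize=None)
    def solve(i, j, k, l):
        # LCPS length of x[i..j] and y[k..l] (inclusive bounds).
        if i > j or k > l:
            return 0
        if x[i] == x[j] == y[k] == y[l]:
            if i == j or k == l:
                return 1                      # a single shared character
            return 2 + solve(i + 1, j - 1, k + 1, l - 1)
        return max(solve(i + 1, j, k, l), solve(i, j - 1, k, l),
                   solve(i, j, k + 1, l), solve(i, j, k, l - 1))
    return solve(0, len(x) - 1, 0, len(y) - 1) if x and y else 0

print(lcps("abcba", "abacb"))  # 3, e.g. the palindrome "aba"
```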

Journal ArticleDOI
TL;DR: Gs-graphs are argued to offer a simpler and more standard algebraic structure, based on monoidal categories, for representing both states and transitions, and can be equipped with a simple type system to check the well-formedness of legal gs-graphs, which are shown to characterise binding bigraphs.
Abstract: Compositional graph models for global computing systems must account for two relevant dimensions, namely structural containment and communication linking. In Milner's bigraphs the two dimensions are made explicit and represented as two loosely coupled structures: the place graph and the link graph. Here, bigraphs are compared with an earlier model, gs-graphs, originally conceived for modelling the syntactical structure of agents with α-convertible declarations. We show that gs-graphs are also quite convenient for the new purpose, since the two above-mentioned dimensions can be recovered by considering only a specific class of hyper-signatures. With respect to bigraphs, gs-graphs can be proved essentially equivalent, with minor differences at the interface level. We argue that gs-graphs offer a simpler and more standard algebraic structure, based on monoidal categories, for representing both states and transitions. Moreover, they can be equipped with a simple type system to check the well-formedness of legal gs-graphs, which are shown to characterise binding bigraphs. Another advantage concerns a textual form in terms of sets of assignments, which can make implementation easier in rewriting frameworks like Maude.

Journal ArticleDOI
TL;DR: In this article, the authors studied the unique recovery of cosparse signals from limited-view tomographic measurements of two- and three-dimensional domains by linear programming, and showed that the class of uniquely recoverable signals is large enough to cover practical applications, like contactless quality inspection of compound solid bodies composed of few materials.
Abstract: We study unique recovery of cosparse signals from limited-view tomographic measurements of two- and three-dimensional domains. Admissible signals belong to the union of subspaces defined by all cosupports of maximal cardinality l with respect to the discrete gradient operator. We relate l both to the number of measurements and to a nullspace condition with respect to the measurement matrix, so as to achieve unique recovery by linear programming. These results are supported by comprehensive numerical experiments that show a high correlation between performance in practice and theoretical predictions. Despite poor properties of the measurement matrix from the viewpoint of compressed sensing, the class of uniquely recoverable signals seems basically large enough to cover practical applications, like contactless quality inspection of compound solid bodies composed of few materials.

Journal ArticleDOI
TL;DR: In this publication, a simpler algorithm that lacks future pruning is presented and proven correct, and its performance is compared with that of future-pruning algorithms.
Abstract: Many algorithms for computing minimal coverability sets for Petri nets prune futures. That is, if a new marking strictly covers an old one, then not just the old marking but also some subset of its successor markings is discarded from the search. In this publication, a simpler algorithm that lacks future pruning is presented and proven correct. Its performance is compared with future pruning. It is demonstrated, using examples, that neither approach is systematically better than the other. However, the simple algorithm has some attractive features. It never needs to re-construct pruned parts of the minimal coverability set. It automatically gives most of the advantage of future pruning if the minimal coverability set is constructed in depth-first or most-tokens-first order, and if so-called history merging is applied. Some implementation aspects of minimal coverability set construction are also discussed. Some measurements are given to demonstrate the effect of construction order and other implementation aspects.

Journal ArticleDOI
TL;DR: In this paper, the authors propose to model complex systems by interactive computational systems (ICS) created by societies of agents, which are based on complex granules (c-granules, for short).
Abstract: Information granules (infogranules, for short) are widely discussed in the literature. In particular, let us mention here the rough granular computing approach based on the rough set approach and its combination with other approaches to soft computing. However, the issues related to the interaction of infogranules with the physical world, and to the perception of interactions in the physical world by infogranules, are not yet well elaborated. On the other hand, the understanding of interactions is a critical issue for complex systems. We propose to model complex systems by interactive computational systems (ICS) created by societies of agents. Computations in ICS are based on complex granules (c-granules, for short). In the paper we concentrate on some basic issues related to interactive computations based on c-granules performed by agents in the physical world.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a formal characterization of rough sets in terms of topologies or orders, reusing knowledge already contained in repositories of computer-checked mathematical knowledge.
Abstract: Theory exploration is a term describing the development of a formal approach to a selected topic, usually within mathematics or computer science, with the help of an automated proof assistant. This activity, however, usually does not reflect the view of science as a whole rather than as separate islands of knowledge. The primary aim of merging theories is essentially to bridge these gaps between specific disciplines. As we have provided a formal apparatus for basic notions within rough set theory (e.g. approximation operators and membership functions), we try to reuse the knowledge which is already contained in available repositories of computer-checked mathematical knowledge, or which can be obtained in a relatively easy way. We can point out at least three topics here: topological aspects of rough sets, as approximation operators have the properties of the topological interior and closure; possible connections with formal concept analysis; and a lattice-theoretic approach giving the algebraic viewpoint (e.g. Stone algebras). In the first case, we discovered semiautomatically some connections with Isomichi's classification of subsets of a topological space and with the problem of the fourteen Kuratowski sets. This paper is also a brief description of the computer source code which gives a feasible illustration of our approach: nearly two thousand lines containing all the formal proofs (we essentially omit them in the paper). In this way we can give a formal characterization of rough sets in terms of topologies or orders. Although fully formal, the approach can still be revised to maintain uniformity throughout.
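The topological reading mentioned above can be made concrete with the standard approximation operators (a plain Python sketch of the textbook definitions, ours, not the proof-assistant formalization the paper describes): the lower approximation behaves like an interior operator and the upper approximation like a closure operator.

```python
def partition_classes(universe, eq):
    """Equivalence classes of an indiscernibility relation eq(x, y) -> bool."""
    classes = []
    for x in universe:
        for cls in classes:
            if eq(x, next(iter(cls))):
                cls.add(x)
                break
        else:
            classes.append({x})
    return classes

def lower_upper(X, classes):
    """Lower approximation = union of classes contained in X (an 'interior');
    upper approximation = union of classes meeting X (a 'closure')."""
    lower = {x for c in classes if c <= X for x in c}
    upper = {x for c in classes if c & X for x in c}
    return lower, upper

universe = range(6)
classes = partition_classes(universe, lambda a, b: a % 3 == b % 3)  # {0,3},{1,4},{2,5}
print(lower_upper({0, 1, 3}, classes))  # ({0, 3}, {0, 1, 3, 4})
```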

Journal ArticleDOI
TL;DR: In this paper, the authors show how conventional verification tools can be used to verify unconventional programs, implemented in a non-standard programming framework, that realize a logical XOR gate.
Abstract: As unconventional computation matures and non-standard programming frameworks are demonstrated, the need for formal verification will become more prevalent. This is so because “programming” in unconventional substrates is difficult. In this paper we show how conventional verification tools can be used to verify unconventional programs implementing a logical XOR gate.

Journal ArticleDOI
TL;DR: In this article, the authors propose a method to discover cancellation regions from transition systems built on event logs, and show how to construct an equivalent workflow net with reset arcs to simplify the control flow structure.
Abstract: Process mining is a relatively new field of computer science which deals with process discovery and analysis based on event logs. In this work we consider the problem of discovering workflow nets with cancellation regions from event logs. Cancellations occur in the majority of real-life event logs. In spite of the huge number of process mining techniques, little has been done on the discovery of cancellation regions. We show that the state-based region algorithm gives labeled Petri nets with an overcomplicated control flow structure for logs with cancellations. We propose a novel method to discover cancellation regions from the transition systems built on event logs and show how to construct an equivalent workflow net with reset arcs to simplify the control flow structure.

Journal ArticleDOI
TL;DR: A formal description of a subset of the Alvis language designed for the modelling and formal verification of concurrent systems is presented; its most universal system layer, α0, makes Alvis similar to other formal languages like Petri nets, process algebras, timed automata, etc.
Abstract: The paper presents a formal description of a subset of the Alvis language designed for the modelling and formal verification of concurrent systems. Alvis combines the possibility of formal model verification with the flexibility and simplicity of practical programming languages. Alvis provides graphical modelling of interconnections among agents and a high-level programming language used for the description of agents' behaviour. Its semantics depends on the so-called system layer. The most universal system layer, α0, described in the paper, makes Alvis similar to other formal languages like Petri nets, process algebras, timed automata, etc.

Journal ArticleDOI
Ruisong Ye
TL;DR: Experimental results show that the new image encryption scheme has satisfactory security thanks to its large key space and robust permutation-diffusion mechanism, which makes it a potential candidate for designing image encryption schemes.
Abstract: In this paper, a generalized multi-sawtooth map based image encryption scheme with an efficient permutation-diffusion mechanism is proposed. In the permutation process, a generalized multi-sawtooth map is utilized to generate one chaotic orbit used to get one index order sequence for the permutation of image pixel positions, while in the diffusion process, two generalized multi-sawtooth maps are employed to yield two pseudo-random grey value sequences for a two-way diffusion of pixel grey values. The yielded grey value sequences are not only sensitive to the control parameters and initial conditions of the considered chaotic maps, but also strongly depend on the plain-image processed, therefore the proposed scheme can effectively resist statistical attacks, differential attacks, and known-plaintext as well as chosen-plaintext attacks. Experimental results show that the new image encryption scheme has satisfactory security thanks to its large key space and robust permutation-diffusion mechanism, which makes it a potential candidate for designing image encryption schemes.
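For orientation only, a deliberately skeletal permutation-diffusion round is sketched below, with an ordinary sawtooth map standing in for the paper's generalized multi-sawtooth map (our simplification; the actual scheme also makes the keystream depend on the plain-image and diffuses in both directions):

```python
def sawtooth_orbit(x0, k, n):
    """Plain sawtooth map x -> frac(k * x), used here as a stand-in
    for the generalized multi-sawtooth map."""
    xs, x = [], x0
    for _ in range(n):
        x = (k * x) % 1.0
        xs.append(x)
    return xs

def encrypt(pixels, x0=0.37, k=7.0):
    n = len(pixels)
    orbit = sawtooth_orbit(x0, k, n)
    perm = sorted(range(n), key=lambda i: orbit[i])        # permutation stage
    shuffled = [pixels[p] for p in perm]
    key = [int(v * 256) % 256 for v in sawtooth_orbit(x0 / 2, k, n)]
    cipher, prev = [], 0
    for p, s in zip(shuffled, key):                        # forward diffusion stage
        prev = (p + s + prev) % 256
        cipher.append(prev)
    return cipher, perm

print(encrypt([12, 200, 45, 99])[0])
```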

Journal ArticleDOI
TL;DR: An approach for dealing with uncertainty in complex systems, based on interactive computations over complex objects called c-granules and developed over years of work on different real-life projects, is discussed.
Abstract: We discuss an approach for dealing with uncertainty in complex systems. The approach is based on interactive computations over complex objects called here complex granules (c-granules, for short). Any c-granule consists of a physical part and a mental part linked in a special way. We begin with the rough set approach and then move toward interactive computations on c-granules. From our considerations it follows that the fundamental issues of intelligent systems based on interactive computations are related to risk management in such systems. Our approach is a step toward the realization of the Wisdom Technology (WisTech) program. The approach was developed over years of work on different real-life projects.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the previously proposed iterative HRE algorithm and present all the heuristics that create a generalized approach, accompanied by numerical examples demonstrating how the selected heuristics can be used in practice.
Abstract: The Heuristic Rating Estimation (HRE) approach proposes a new way of using the pairwise comparisons matrix. It allows the assumption that the weights of some alternatives (herein referred to as concepts) are known and fixed, so that the weight vector needs to be estimated only for the remaining unknown values. The main purpose of this paper is to extend the previously proposed iterative HRE algorithm and present all the heuristics that create a generalized approach. The theoretical considerations are accompanied by a few numerical examples demonstrating how the selected heuristics can be used in practice.
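To give a flavour of the setting (only our guess at the shape of the update; the actual heuristics are the subject of the paper), the weights of the known concepts are held fixed while each unknown concept's weight is re-estimated from the comparison matrix and the current estimates of the others, iterating to a fixed point:

```python
def hre_iterate(M, known, iterations=100):
    """Hedged sketch of an iterative HRE-style estimation.
    M[i][j] ~ w_i / w_j is a reciprocal pairwise comparison matrix;
    `known` maps indices of concepts with fixed weights to those weights."""
    n = len(M)
    w = [known.get(i, 1.0) for i in range(n)]
    for _ in range(iterations):
        for i in range(n):
            if i not in known:
                w[i] = sum(M[i][j] * w[j] for j in range(n) if j != i) / (n - 1)
    return w

# Two concepts with known weights (0: 0.4, 1: 0.2) and one unknown concept (2).
M = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
print(hre_iterate(M, {0: 0.4, 1: 0.2}))  # [0.4, 0.2, 0.1]
```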

Journal ArticleDOI
TL;DR: The paper presents a method of reasoning about the behaviour of asynchronous programs in denotational models designed with metric spaces and continuation semantics for concurrency.
Abstract: The paper presents a method of reasoning about the behaviour of asynchronous programs in denotational models designed with metric spaces and continuation semantics for concurrency.

Journal ArticleDOI
TL;DR: In the context of the exogenous coordination language Reo, this paper addresses the automatic construction of synthesized controllers as Reo connectors, built from a repertoire of basic channels, directly from a constraint automaton representation.
Abstract: In controller synthesis, i.e., the question of whether there is a controller or strategy to achieve some objective in a given system, the controller is often realized as some kind of automaton. In the context of the exogenous coordination language Reo, where the coordination glue code between the components is realized as a network of channels, it is desirable for such synthesized controllers to also take the form of a Reo connector built from a repertoire of basic channels. In this paper, we address the automatic construction of such Reo connectors directly from a constraint automaton representation.

Journal ArticleDOI
TL;DR: The synergistic combination of X-ray microtomography, in situ mechanical tests on material samples and full-field kinematic measurements by 3D-Volume Digital Image Correlation is discussed, with reference to a variety of biological and engineering materials.
Abstract: In this review paper the synergistic combination of X-ray microtomography, in situ mechanical tests on material samples, and full-field kinematic measurements by 3D-Volume Digital Image Correlation is discussed. First, basic features are outlined concerning X-ray microtomography by either laboratory sources or synchrotron radiation. The main equations for 3D-Volume Digital Image Correlation are then presented, and different provisions regularizing the ill-posed problem of motion estimation are outlined. Thereafter, a survey of the state of the art is provided, with reference to a variety of biological and engineering materials. Limitations and perspectives of the proposed methodology in diverse applications are highlighted. The rapid growth of this research topic is emphasized, due to its truly multi-disciplinary vocation, the synergy between algorithmic and technological solutions, and the fusion of experiments and numerical methods.

Journal ArticleDOI
TL;DR: The study of the geometry of the TDOA map, which encodes the noiseless model for the localization of a source from the range differences between three receivers in a plane, is completed by computing the Cartesian equation of the bifurcation curve in terms of the positions of the receivers.
Abstract: In this paper, we complete the study of the geometry of the TDOA map that encodes the noiseless model for the localization of a source from the range differences between three receivers in a plane, by computing the Cartesian equation of the bifurcation curve in terms of the positions of the receivers. From that equation, we can compute its real asymptotic lines. The present manuscript completes the analysis of [12]. Our result is useful for checking whether a source belongs to, or is close to, the bifurcation curve, where localization in a noisy scenario is ambiguous.
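Concretely (a small sketch of the standard noiseless model with three coplanar receivers, the first taken as reference; the bifurcation-curve computation itself is the subject of the paper), the TDOA map sends a source position to its two range differences:

```python
import math

def tdoa_map(source, receivers):
    """Noiseless TDOA map for three planar receivers: the range differences
    of the source to receivers 1 and 2, relative to receiver 0."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    d0 = dist(source, receivers[0])
    return (dist(source, receivers[1]) - d0,
            dist(source, receivers[2]) - d0)

receivers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(tdoa_map((2.0, 2.0), receivers))
# about (-0.59, -0.59): the source is closer to receivers 1 and 2 than to 0
```

Localization inverts this map; near the bifurcation curve studied in the paper, that inversion becomes ambiguous in the presence of noise.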

Journal ArticleDOI
TL;DR: In this paper, the authors introduce and briefly investigate P systems with controlled computations, study the relationships between the families of sets of numbers computed by the various classes of controlled P systems, and compare them with the length sets of languages in the Chomsky and Lindenmayer hierarchies, obtaining characterizations of the length sets of ET0L and of recursively enumerable languages.
Abstract: We introduce and briefly investigate P systems with controlled computations. First, P systems with label-restricted transitions are considered (in each step, all rules used have either the same label or, possibly, the empty label λ), then P systems with computations controlled by languages, as in context-free controlled grammars. The relationships between the families of sets of numbers computed by the various classes of controlled P systems are investigated, also comparing them with the length sets of languages in the Chomsky and Lindenmayer hierarchies; characterizations of the length sets of ET0L and of recursively enumerable languages are obtained in this framework. A series of open problems and research topics are formulated.