
Showing papers in "Informatica (lithuanian Academy of Sciences) in 2000"


Journal Article
TL;DR: An object-oriented extension to canonical attribute grammars is described, permitting attributes to be references to arbitrary nodes in the syntax tree, and attributes to be accessed via the reference attributes.
Abstract: An object-oriented extension to canonical attribute grammars is described, permitting attributes to be references to arbitrary nodes in the syntax tree, and attributes to be accessed via the reference attributes. Important practical problems such as name and type analysis for object-oriented languages can be expressed in a concise and modular manner in these grammars, and an optimal evaluation algorithm is available. An extensive example is given, capturing all the key constructs in object-oriented languages including block structure, classes, inheritance, qualified use, and assignment compatibility in the presence of subtyping. The formalism and algorithm have been implemented in APPLAB, an interactive language development tool.
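The central idea, attributes whose values are references to arbitrary nodes in the syntax tree, can be sketched outside any attribute-grammar tool. The classes below are an illustrative toy, not APPLAB's API:

```python
# Toy sketch of a reference attribute: each Use node exposes a computed
# attribute `decl` whose value is a reference to another node in the tree.
class Decl:
    def __init__(self, name):
        self.name = name

class Block:
    def __init__(self, decls, parent=None):
        self.decls = decls
        self.parent = parent

    def lookup(self, name):
        # Name analysis walks outward through the block structure.
        for d in self.decls:
            if d.name == name:
                return d
        return self.parent.lookup(name) if self.parent else None

class Use:
    def __init__(self, name, block):
        self.name = name
        self.block = block

    @property
    def decl(self):
        # The reference attribute: its value is a Decl node, not a string.
        return self.block.lookup(self.name)

outer = Block([Decl("x"), Decl("y")])
inner = Block([Decl("x")], parent=outer)   # inner x shadows outer x
use_x = Use("x", inner)                    # resolves to the inner declaration
use_y = Use("y", inner)                    # resolves in the enclosing block
```

Type analysis can then be expressed by following such references: `use_x.decl` gives direct access to the declaration node and to anything computed on it.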

192 citations


Journal ArticleDOI
TL;DR: A secure, nonrepudiable and known-signers threshold proxy signature scheme which remedies the weakness of Sun's scheme is proposed.
Abstract: In the (t, n) proxy signature scheme, the signature, originally signed by a signer, can be signed by t or more proxy signers out of a proxy group of n members. Recently, an efficient nonrepudiable threshold proxy signature scheme with known signers was proposed by H.-M. Sun. Sun's scheme has two advantages. One is nonrepudiation: the proxy group cannot deny having signed the proxy signature, and any verifier can identify the proxy group as the real signer. The other is identifiable signers: the verifier is able to identify the actual signers in the proxy group, and the signers cannot deny having generated the proxy signature. In this article, we present a cryptanalysis of Sun's scheme. Further, we propose a secure, nonrepudiable and known-signers threshold proxy signature scheme which remedies the weakness of Sun's scheme.

116 citations


Journal ArticleDOI
TL;DR: Two attacks against Harn's batch verification scheme for multiple RSA digital signatures are presented, showing a weakness in the scheme despite its reduced signature verification time.
Abstract: Recently, Harn proposed an efficient scheme that can batch-verify multiple RSA digital signatures. His scheme can reduce signature verification time. However, there is a weakness in his scheme. In this study, we present two attacks against his scheme.
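The general flavour of batch verification, and the kind of weakness such schemes must guard against, can be sketched with textbook RSA (toy parameters, no hashing or padding; this is a generic illustration, not Harn's exact protocol):

```python
# Toy RSA key (demonstration only: p, q far too small for real use).
p, q = 61, 53
n = p * q                        # 3233
e, d = 17, 2753                  # e*d = 1 (mod phi(n))

msgs = [123, 456, 789]
sigs = [pow(m, d, n) for m in msgs]   # individual signatures m^d mod n

def batch_verify(msgs, sigs):
    # One exponentiation checks (s1*...*sk)^e == m1*...*mk (mod n).
    s_prod, m_prod = 1, 1
    for s in sigs:
        s_prod = s_prod * s % n
    for m in msgs:
        m_prod = m_prod * m % n
    return pow(s_prod, e, n) == m_prod

ok = batch_verify(msgs, sigs)
# The pitfall: permuting the signatures leaves the product unchanged, so the
# batch test still passes even though each (m_i, s_i) pair is now mismatched.
swapped = [sigs[1], sigs[0], sigs[2]]
forged = batch_verify(msgs, swapped)
individual_ok = all(pow(s, e, n) == m for m, s in zip(msgs, swapped))
```

Because multiplication is commutative, any permutation of valid signatures passes the batch test while failing pairwise verification; attacks on batch schemes typically exploit exactly this kind of product-level collision.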

47 citations


Journal ArticleDOI
TL;DR: This paper surveys and generalizes known results on this topic and demonstrates how true shadow prices can be computed with or without modification to existing software.
Abstract: It is well known that in linear programming, the optimal values of the dual variables can be interpreted as shadow prices (marginal values) of the right-hand-side coefficients. However, this is true only under nondegeneracy assumptions. Since real problems are often degenerate, the output from conventional LP software regarding such marginal information can be misleading. This paper surveys and generalizes known results on this topic and demonstrates how true shadow prices can be computed with or without modification to existing software. Keywords: Optimization software.
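The shadow-price interpretation can be checked numerically on a small nondegenerate example. The LP below is hypothetical and is solved by brute-force vertex enumeration so the sketch needs no LP library:

```python
# Maximize 3x + 2y subject to x + y <= b1, x + 3y <= b2, x, y >= 0.
def solve(b1, b2):
    # Candidate vertices: intersections of the constraint lines and axes.
    cands = [(0.0, 0.0), (b1, 0.0), (0.0, b2 / 3.0),
             ((3 * b1 - b2) / 2.0, (b2 - b1) / 2.0)]  # x+y=b1 meets x+3y=b2
    feas = [(x, y) for x, y in cands
            if x >= -1e-9 and y >= -1e-9
            and x + y <= b1 + 1e-9 and x + 3 * y <= b2 + 1e-9]
    return max(3 * x + 2 * y for x, y in feas)

base = solve(4, 6)           # optimum 12, attained at (4, 0)
# Shadow price = marginal value of one extra unit of a right-hand side.
price1 = solve(5, 6) - base  # binding constraint: marginal value 3.0
price2 = solve(4, 7) - base  # slack constraint:   marginal value 0.0
```

In a degenerate problem the one-sided marginal values can differ from the dual values reported by a solver, which is precisely the situation the paper analyses.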

20 citations


Journal ArticleDOI
TL;DR: An algorithm for generating quadratic assignment problem (QAP) instances with known provably optimal solutions is presented; it generalizes existing algorithms based on the iterative selection of triangles only, and the generated instances are compared experimentally against those of several existing generators.
Abstract: In this paper we present an algorithm for generating quadratic assignment problem (QAP) instances with known provably optimal solutions. The flow matrix of such instances is constructed from the matrices corresponding to special graphs whose size may reach the dimension of the problem. In this respect, the algorithm generalizes some existing algorithms based on the iterative selection of triangles only. The set of instances which can be produced by the algorithm is NP-hard. Using a multi-start descent heuristic for the QAP, we compare such test cases experimentally against those created by several existing generators, as well as against Nugent-type problems from the QAPLIB.

16 citations


Journal ArticleDOI
TL;DR: The present work substantiates the necessity to divide words into fixed and variable parts used to build different grammatical forms, and to store only those parts rather than whole words in the dictionary.
Abstract: The paper deals with one of the components of text-to-speech synthesis of the Lithuanian language, namely automatic text stressing. The present work substantiates the necessity to divide words into fixed and variable parts used to build different grammatical forms, as well as to store only those parts rather than the whole words in the dictionary. According to the inflexion method, all words of the Lithuanian language are divided into three groups (nouns-adjectives, verbs and noninflectional words), and each group is analysed separately. The type of information, as well as the form in which it is to be stored, has been established for each group, and the algorithm by means of which the grammatical form of a word can be recognised and stressed has been presented.

16 citations


Journal ArticleDOI
TL;DR: The capabilities of the scripting language Open PROMOL and its processor are presented and the generation, computation, control, parameterization and gluing capabilities are described.
Abstract: We present the capabilities of the scripting language Open PROMOL and its processor. The intention of the language is to pre-program specifications for modifying programs written in a target language. We use its processor either as a tool for developing stand-alone reusable components or as a "component-from-the-shelf" in generative tools for generating domain-specific programs. The processor itself uses the module (lexical analyser and parser) produced by Lex & Yacc as a reusable component. We describe the generation, computation, control, parameterization and gluing capabilities of the language. We compare our approach with similar approaches known in the literature.

15 citations


Journal ArticleDOI
TL;DR: The existence and uniqueness theorem for the general case of vital rates is proved, the extinction and growth of the population are considered, and a class of the product (separable) solutions is obtained for these two models.
Abstract: Two models for an age-structured nonlimited population dynamics with maternal care of offspring are presented. One of them deals with a bisexual population and includes a harmonic mean type mating of sexes and females' pregnancy. The other one describes dynamics of an asexual population. Migration is not taken into account. The existence and uniqueness theorem for the general case of vital rates is proved, the extinction and growth of the population are considered, and a class of the product (separable) solutions is obtained for these two models. The long-time behavior of the asexual population is obtained in the stationary case of vital rates.

15 citations


Journal ArticleDOI
TL;DR: A deterministic heuristic algorithm for the quadratic assignment problem, referred to as the intensive search algorithm (or briefly, intensive search), is proposed; in many cases it appears superior to the well-known simulated annealing algorithm.
Abstract: Many heuristics, such as simulated annealing, genetic algorithms, and greedy randomized adaptive search procedures, are stochastic. In this paper, we propose a deterministic heuristic algorithm, which is applied to the quadratic assignment problem. We refer to this algorithm as the intensive search algorithm (or briefly, intensive search). We tested our algorithm on various instances from the library of QAP instances, QAPLIB. The results obtained from the experiments show that the proposed algorithm appears superior, in many cases, to the well-known simulated annealing algorithm.
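For contrast with the paper's deterministic intensive search, a minimal stochastic baseline is multi-start descent over the pairwise-exchange neighbourhood of the QAP. The instance below is a toy example, not one from QAPLIB:

```python
import random

# Toy QAP instance: assign facilities to locations, minimizing
# sum over i, j of F[i][j] * D[p[i]][p[j]].
F = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]   # flow matrix
D = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]   # distance matrix

def cost(p):
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def descent(p):
    # Repeatedly apply an improving pairwise exchange until none exists.
    improved = True
    while improved:
        improved = False
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                q = p[:]
                q[i], q[j] = q[j], q[i]
                if cost(q) < cost(p):
                    p, improved = q, True
    return p

def multi_start(starts=10, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(starts):
        p = list(range(len(F)))
        rng.shuffle(p)
        p = descent(p)
        if best is None or cost(p) < cost(best):
            best = p
    return best
```

On this 3x3 instance every start converges to the global optimum (cost 22); on larger instances such descent gets trapped in local optima, which is the weakness both intensive search and simulated annealing try to address.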

11 citations


Journal ArticleDOI
TL;DR: A rule for adjusting the Monte-Carlo sample size is introduced to ensure convergence and to find the solution of the stochastic optimization problem within an acceptable volume of Monte-Carlo trials.
Abstract: Methods for solving stochastic optimization problems by Monte-Carlo simulation are considered. The stopping rule and the accuracy of the solutions are treated in a statistical manner, testing the hypothesis of optimality according to statistical criteria. A rule for adjusting the Monte-Carlo sample size is introduced to ensure convergence and to find the solution of the stochastic optimization problem within an acceptable volume of Monte-Carlo trials. Examples of application of the developed method to importance sampling and the Weber location problem are also considered.
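A much-simplified sketch of the idea: enlarge the Monte-Carlo sample whenever the gradient estimate becomes statistically indistinguishable from zero, here on the toy problem of minimizing E[(x - xi)^2] with xi ~ N(1, 1). The adjustment rule and constants are assumptions for illustration, not the paper's exact procedure:

```python
import random, math

rng = random.Random(0)

def mc_gradient(x, n):
    # Monte-Carlo estimate of the gradient 2*(x - E[xi]) and its standard error.
    g = [2.0 * (x - rng.gauss(1.0, 1.0)) for _ in range(n)]
    mean = sum(g) / n
    var = sum((v - mean) ** 2 for v in g) / (n - 1)
    return mean, math.sqrt(var / n)

x, n = 5.0, 10
for _ in range(100):
    grad, se = mc_gradient(x, n)
    if abs(grad) < 2.0 * se:      # optimality hypothesis not rejected (~95%)
        n = min(2 * n, 10_000)    # demand more Monte-Carlo trials
    x -= 0.1 * grad               # gradient step
```

Far from the optimum the small sample suffices to reject optimality cheaply; near the optimum the sample grows, so the statistical test, not a fixed trial budget, governs the total simulation effort.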

10 citations


Journal ArticleDOI
TL;DR: An introduction to the simplest mathematical model of hormone interaction during the menstrual cycle, together with modifications including a time delay depending on the function researched and a dispersed time delay.
Abstract: This article is an introduction to the simplest mathematical model which describes the hormone interaction during the menstrual cycle. Modifications of the mathematical model of the menstrual cycle, including the mathematical model with the time delay depending on the function researched and the mathematical model with the dispersed time delay, are researched and described here. A numerical investigation was conducted, during which solutions for the above-mentioned models were calculated. The solutions found are compared mutually and with the clinical data.

Journal ArticleDOI
TL;DR: The aim of the paper is the development of the indirect approach to the estimation of parameters of a closed-loop discrete-time dynamic system in the case of additive correlated noise uniformly contaminated with outliers.
Abstract: In the previous paper (Pupeikis, 2000) the problem of closed-loop robust identification using the direct approach in the presence of outliers in observations has been considered. The aim of the given paper is the development of the indirect approach used for the estimation of parameters of a closed-loop discrete-time dynamic system in the case of additive correlated noise uniformly contaminated with outliers. To calculate current M-estimates of unknown parameters of such a system by processing input and noisy output observations obtained from closed-loop experiments, a recursive robust technique based on an ordinary recursive least squares (RLS) algorithm is applied here. The results of numerical simulation of the closed-loop system (Fig. 3) by computer (Figs. 4-7) are given.

Journal ArticleDOI
TL;DR: The possibilities of using The Brain, a personal desktop productivity tool, for visualisation of ontologies are outlined and compared with those of the Hyperbolic ontology viewer of Ontobroker.
Abstract: A survey of the current status of ontological engineering is presented: notion, peculiarities, applications, design and evaluation of ontologies. The possibilities of using The Brain, a personal desktop productivity tool, for visualisation of ontologies are outlined and compared with those of the Hyperbolic ontology viewer of Ontobroker.

Journal ArticleDOI
TL;DR: A known-plaintext attack on a redundancy reducing cipher method proposed by Wayner is discussed, and an extension of Wayner's redundancy reducing cipher scheme is proposed so that the security is improved greatly.
Abstract: This paper discusses a known-plaintext attack on a redundancy reducing cipher method proposed by Wayner. We also propose an extension of Wayner's redundancy reducing cipher scheme so that the security is improved greatly.

Journal ArticleDOI
TL;DR: Computer simulation of the hexagonal neural network indicated the suitability and promise of the proposed approach for creating an artificial neural network that realizes the most complicated processes taking place in the brains of living beings.
Abstract: In this paper, the hexagonal approach was proposed for modeling the functioning of the cerebral cortex, especially the processes of learning and recognition of visual information. This approach is based on real neurophysiological data on the structure and functions of the cerebral cortex. A distinctive characteristic of the proposed neural network is the hexagonal arrangement of excitatory connections between neurons that enables the spreading or cloning of information on the surface of the neuronal layer. Cloning of information and modification of the weights of connections between neurons are used as the basic principles for the learning and recognition processes. Computer simulation of the hexagonal neural network indicated the suitability and promise of the proposed approach for the creation, together with other modern concepts, of an artificial neural network that will realize the most complicated processes that take place in the brain of living beings, such as short-term and long-term memory, episodic and declarative memory, recall, recognition, categorisation, thinking, and others. The described neural network was realized in a computer program written in Delphi 3, named the first-order hexagon brainware (HBW-1).

Journal ArticleDOI
TL;DR: The analysis carried out by qualitative and numerical methods allows us to conclude that the mathematical model explains the functioning of the physiological system "insulin-blood glucose" in normal and pathological cases.
Abstract: A system of two nonlinear difference-differential equations is presented, constituting a mathematical model of the self-regulation of the glucose level in blood with a time delay, taking the insulin "age structure" into consideration. The analysis carried out by qualitative and numerical methods allows us to conclude that the mathematical model explains the functioning of the physiological system "insulin-blood glucose" in normal and pathological cases.

Journal ArticleDOI
TL;DR: A method of business process analysis in unstructured environments is presented; the creation of new business procedures is based on investigation of the communication acts and application of similar workflow patterns.
Abstract: One of the problems in business process reengineering is the identification and implementation of new workflow procedures for specific business processes, if they are not clearly defined. Analysis of unstructured (ad-hoc) activities cannot be based on traditional approaches using existing business procedures and expert knowledge. A method of business process analysis in unstructured environments is presented in this paper. The creation of new business procedures is based on investigation of the communication acts and application of similar workflow patterns. This method is useful in the earliest stages of business process reengineering. Preliminary analysis in the ad-hoc area can be done for process identification, applying the existing body of knowledge, reducing the analytical effort, and creating strong motivation for managers.

Journal ArticleDOI
TL;DR: This work demonstrates that the slope of the function T1/2 = f(R) depends both on the distance between the potential measurement place and the current electrode and on the measurement direction with respect to the fibers' direction.
Abstract: In this work, analytical expressions of the half-time T1/2 and its derivative with respect to distance, ∂T1/2/∂R, in a one-dimensional RC medium (a current electrode shaped as a segment) and in a two-dimensional RC medium (a current electrode shaped as a circle) were derived. First, by using the superposition principle, well known in electrostatics, the current electrodes were divided into elementary point sources positioned on the perimeter or the surface of the electrode. Second, with the help of computer simulation, the dependencies of T1/2 and ∂T1/2/∂R on the current electrode dimensions, the degree of electrotonic anisotropy, and the distance between the current electrode and the potential measurement place were calculated. Our calculations demonstrate that the slope of the function T1/2 = f(R) depends both on the distance between the potential measurement place and the current electrode and on the measurement direction with respect to the fibers' direction. Furthermore, the slope value can be less than or greater than 0.5. If we apply the linear dependency T1/2 = 0.5R + const to the analysis of electrotonic potential measurement data in close vicinity to the current electrode in the direction of the X-axis, we can obtain 40% smaller values of m. Analogous estimations of m on the Y-axis would lead to errors of up to +40%.

Journal ArticleDOI
TL;DR: Maximum likelihood and least squares segmentation of autoregressive random sequences with abruptly changing parameters is considered; the conditional distribution of the observations is derived and the objective function is modified into a form suitable for dynamic programming.
Abstract: This paper deals with maximum likelihood and least squares segmentation of autoregressive random sequences with abruptly changing parameters. The conditional distribution of the observations has been derived. The objective function was modified into a form suitable for applying the dynamic programming method for its optimization. Expressions of the Bellman functions for this case were obtained. The performance of the presented approach is illustrated with simulation examples and with examples of segmentation of speech signals.
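The dynamic-programming part can be sketched for the simpler case of piecewise-constant means instead of autoregressive models (a deliberate simplification of the paper's setting; function names are illustrative):

```python
# Least-squares segmentation of a sequence into K segments by dynamic
# programming over segment boundaries.
def sse(x, i, j):
    # Sum of squared errors of x[i:j] around its own mean.
    seg = x[i:j]
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def segment(x, K):
    n = len(x)
    INF = float("inf")
    # B[k][j]: minimal cost of splitting x[:j] into k segments
    # (the Bellman function of the recursion).
    B = [[INF] * (n + 1) for _ in range(K + 1)]
    arg = [[0] * (n + 1) for _ in range(K + 1)]
    B[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = B[k - 1][i] + sse(x, i, j)
                if c < B[k][j]:
                    B[k][j], arg[k][j] = c, i
    # Backtrack the optimal change points.
    cuts, j = [], n
    for k in range(K, 0, -1):
        j = arg[k][j]
        cuts.append(j)
    return sorted(cuts)[1:]   # drop the leading boundary at 0

data = [0.0] * 10 + [5.0] * 10
cuts = segment(data, 2)       # recovers the abrupt change at index 10
```

For the AR case, `sse` would be replaced by the segment's residual sum of squares under a fitted AR model (or the corresponding negative log-likelihood); the Bellman recursion itself is unchanged.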

Journal ArticleDOI
Rytis Stanik
TL;DR: A four-layer neural network for color constancy, with separate input channels for the test chip and for the background, was developed and was able to achieve color constancy.
Abstract: Color constancy is the perceived stability of the color of objects under different illuminants. A four-layer neural network for color constancy has been developed. It has separate input channels for the test chip and for the background. The network input consisted of RGB receptors. The second layer consisted of color opponent cells, and the output layer had three neurons signaling the x, y, Y coordinates (1931 CIE). The network was trained with the back-propagation algorithm. For training and testing we used nine illuminants with wide spectra. The neural network was able to achieve color constancy. The input of the background coordinates and the nonlinearity of the network had a crucial influence on training.

Journal ArticleDOI
TL;DR: The generalization error (mean expected probability of misclassification) is estimated for the randomized linear zero empirical error (RLZEE) classifier considered by Raudys, Diciūnas and Basalykas, yielding an "explicit" and simpler asymptotics.
Abstract: The estimation of the generalization performance of a classifier is one of the most important problems in pattern classification and neural network training theory. In this paper we estimate the generalization error (mean expected probability of misclassification) for the randomized linear zero empirical error (RLZEE) classifier which was considered by Raudys, Diciūnas and Basalykas. Instead of the "non-explicit" asymptotics of the generalization error of the RLZEE classifier for centered multivariate spherically Gaussian classes proposed by Basalykas et al. (1996), we obtain an "explicit" and simpler asymptotics. We also present numerical simulations illustrating our theoretical results and comparing them with each other and with previously obtained results.

Journal ArticleDOI
TL;DR: A compensation scheme is devised based on the Shapley value allocations, whereby participants who enjoy a greater payoff with respect to the technological cooperation compensate the participants who receive a relatively lesser payoff via cooperation.
Abstract: A concept of regional technological cooperation is developed based on a cooperative game theoretic model, in which a plan of payoff distributions induces an agreement that is acceptable to each participant. Under certain conditions, the underlying game is shown to be convex, and hence to have a nonempty core with the Shapley value allocations belonging to the core. A compensation scheme is devised based on the Shapley value allocations, whereby participants who enjoy a greater payoff with respect to the technological cooperation compensate the participants who receive a relatively lesser payoff via cooperation. In this manner, regional technological cooperation can bring overall benefits to all the involved players in the game. Some insightful examples are provided to illustrate the methodological concept.
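The Shapley value underlying the compensation scheme can be computed directly for a small convex game. The three players, weights, and characteristic function below are a hypothetical example, not the paper's model:

```python
from itertools import permutations

# Convex cooperative game: v(S) = (sum of the weights of S)^2.
weights = {"A": 1, "B": 2, "C": 3}

def v(coalition):
    return sum(weights[p] for p in coalition) ** 2

def shapley(players):
    # Average each player's marginal contribution over all arrival orders.
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = []
        for p in order:
            phi[p] += v(seen + [p]) - v(seen)
            seen.append(p)
    return {p: phi[p] / len(orders) for p in players}

phi = shapley(["A", "B", "C"])   # {'A': 6.0, 'B': 12.0, 'C': 18.0}
# Convexity guarantees the Shapley allocation lies in the core:
coalitions = [("A",), ("B",), ("C",), ("A", "B"), ("A", "C"), ("B", "C")]
in_core = all(sum(phi[p] for p in S) >= v(S) - 1e-9 for S in coalitions)
```

The allocation is efficient (6 + 12 + 18 = v(N) = 36) and no coalition can profitably break away; a compensation scheme in the paper's spirit would have participants whose cooperative payoff exceeds their allocation transfer the difference to those below it.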

Journal ArticleDOI
TL;DR: Simulation of an idealized thin wet film connecting fixed points in the Euclidean plane yields a length-minimizing curve; an investigation of dead-point situations shows ways of overcoming their difficulties and continuing the film evolution by temporarily decreasing the pressure.
Abstract: The result of simulation of an idealized thin wet film connecting fixed points in the Euclidean plane is a length-minimizing curve. Gradually increasing the exterior pressure, we are able to achieve a film configuration close to the Steiner minimal tree. This film evolution may be an interesting tool for solving the Euclidean Steiner problem, but several dead-point situations may occur for certain locations of the fixed points. A continuous evolution of the film by increasing the pressure is impossible in these situations. The investigation of dead-point situations shows ways of overcoming their difficulties and continuing the film evolution by temporarily decreasing the pressure.

Journal ArticleDOI
TL;DR: A method of fingerprint pre-classification based on replacing the ridge frequency by the density of edge points of the ridge boundary is proposed; it enables preliminary rejection of part of the fingerprints without heavy loss of recognition quality.
Abstract: Fingerprint ridge frequency is a global feature which differs most prominently between fingerprints of men and women, and it also changes within the maturing period of a person. This paper proposes a method of fingerprint pre-classification based on replacing the ridge frequency by the density of edge points of the ridge boundary. This method is to be used after applying the steps common to most fingerprint matching algorithms, namely fingerprint image filtering, binarization and marking of good/bad image areas. An experimental performance evaluation of the fingerprint pre-classification is presented. We have found that fingerprint pre-classification using the fingerprint ridge edge density is possible, and it enables preliminary rejection of part of the fingerprints without heavy loss of recognition quality. The paper presents an evaluation of two sources of variability of the fingerprint ridge edge density: a) different finger pressure during fingerprint scanning, b) different distance between the geometrical center of the fingerprint and the position of the fingerprint fragment.

Journal ArticleDOI
TL;DR: Sufficient conditions are obtained for the representation of a language by special sequences of simple bracketed languages, which makes it possible to decide the equivalence problem for some grammatical structures of programming languages that define neither regular nor deterministic context-free languages.
Abstract: We consider in this paper so-called simple bracketed languages having special limitations. They are sometimes used for the definitions of some grammatical structures of programming languages. Generally speaking, these languages are context-free, but not deterministic context-free, i.e., they cannot be defined by deterministic push-down automata. For the simple bracketed languages having special limitations, the equivalence problem is decidable. We obtain sufficient conditions for the representation of a language by special sequences of simple bracketed languages. We also consider examples of grammatical structures as simple bracketed languages. Therefore, we can decide the equivalence problem for some grammatical structures of programming languages, even though such structures define neither regular nor deterministic context-free languages.

Journal ArticleDOI
TL;DR: A three-language (3L) paradigm for building program generator models is suggested, based on a relationship model of the specification, scripting and target languages; a framework for implementing the paradigm and results from experimental systems developed to validate the approach are presented.
Abstract: In this paper we suggest a three-language (3L) paradigm for building program generator models. The basis of the paradigm is a relationship model of the specification, scripting and target languages. It is not necessary that all three languages be separate ones. We consider some internal relationships (roles) between the capabilities of a given language for specifying, scripting (gluing) and describing the domain functionality. We also assume that the target language is basic. We introduce the domain architecture (functionality) with generic components usually composed using the scripting and target languages. The specification language is for describing the user's needs for the domain functionality to be extracted from the system. We present a framework for implementing the 3L paradigm and some results from the experimental systems developed for a validation of the approach.

Journal ArticleDOI
TL;DR: Computer simulation results show that for comparatively small sample sizes, classification using a projection pursuit algorithm gives better accuracy of the estimates of a posteriori probabilities and a smaller classification error.
Abstract: The influence of projection pursuit on classification errors and on estimates of a posteriori probabilities from the sample is considered. The observed random variable is supposed to satisfy a multidimensional Gaussian mixture model. The presented computer simulation results show that for comparatively small sample sizes, classification using a projection pursuit algorithm gives better accuracy of the estimates of a posteriori probabilities and a smaller classification error.

Journal ArticleDOI
TL;DR: Security issues of mobile code (safelets for use in aircraft) delivered to a Remote Platform are discussed in the context of the aviation industry, together with issues of code effectiveness and safety, comparing the Java and Juice/Oberon technologies.
Abstract: The growing popularity of mobile code requires consideration of various aspects related to its security. In the aviation industry there is a case when additional information needs to be delivered to the pilot by uploading it from the ground station. This creates a need for a platform-independent solution, and it raises the problem of mobile code security as well. The organization of security in the Base System (similar to extranets), as well as the security issues of the mobile code (or safelets, for use in aircraft) delivered to the Remote Platform, are discussed in the paper. Safelet implementation technologies and issues of code effectiveness and safety itself are discussed, with the Java and Juice/Oberon technologies being compared.

Journal ArticleDOI
TL;DR: Measures are presented to describe the sensitivity of the matrix impulse response of state space multivariable systems with respect to parameter perturbations.
Abstract: This paper contains measures to describe the sensitivity of the matrix impulse response of state space multivariable systems with respect to parameter perturbations. The parameter sensitivity is defined as an integral measure of the matrix impulse response with respect to the coefficients. A state space approach is used to find a realization of the impulse response that minimizes a sensitivity measure. One possibility for converting linear systems into mathematical processes consists in state space representations. They are characterized by describing the behavior of physical systems with constant matrices and, therefore, they establish a direct relation to the methods of linear algebra. The state space representation of a given linear system can be used in order to perform a pole-zero determination (Dooren, 1981), a stability test (Bernett and Storey, 1970), and a test concerning passivity (Anderson and Vongpanitlerd, 1973). The solution of these problems using the methods of linear algebra is recommended from a numerical point of view, as powerful software packages are available (Thiele, 1986). The state space design of a system consists of finding a set of state space equations that realize a desired transfer function. The state space equations corresponding to a transfer function or impulse response are not unique; thus one may select among the realizations one that minimizes a suitable sensitivity measure. Although some system properties are invariant with respect to the realization, state space realizations do affect certain properties. One important property is the sensitivity of the system with respect to the realization parameters. In applications such as digital control and filter design, it is of practical importance to have a state space realization for which the system sensitivity is minimal.
One of the reasons for this is the existence of finite word length effects due to coefficient truncation and arithmetic roundoff in the implementation of a controller or filter. It is understandable that poor sensitivity may lead to degradation in the performance of an implementation (Yan and Moore, 1992).

Journal Article
TL;DR: Several parallel algorithms for computing a collection of group-by aggregates on a multiprocessor system with shared disks are presented, focusing on a special case of the aggregation problem: the "Cube" operator, which computes group-by aggregates over all possible combinations of a list of attributes.
Abstract: Computing multiple related group-by aggregates is one of the core operations of online analytical processing (OLAP) applications. This kind of computation involves a huge volume of data operations (megabytes or treabytes). The response time for such applications is crucial, so, using parallel processing techniques to handle such computation is inevitable. We present several parallel algorithms for computing a collection of group-by aggregates based on a multiprocessor system with shared disks. We focus on a special case of the aggregation problem-"Cube" operator which computes group-by aggregates over all possible combinations of a list of attributes. The proposed algorithms introduce a novel processor scheduling policy and a non-trivial decomposition approach for the problem in the parallel environment. Particularly, the hybrid algorithm has the best performance potential among the four proposed algorithms. All the proposed algorithms are scalable.