Showing papers by "Turku Centre for Computer Science" published in 2005


Journal ArticleDOI
TL;DR: An extended approach to decision making with OWA operators is formulated using maximal Rényi entropy OWA weights, introducing considerations of maximizing information content in the aggregation according to a general parametric class of dispersion measures.
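
The paper's closed-form maximal-entropy weights (obtained under a prescribed degree of orness) are not reproduced here; as a minimal sketch of the quantities involved, assuming numpy and arbitrary example weights and inputs with α = 2, the code below evaluates an OWA aggregation and the Rényi entropy of its weight vector.

```python
import numpy as np

def owa(values, weights):
    """OWA aggregation: the weights are applied to the values sorted in descending order."""
    b = np.sort(values)[::-1]
    return float(np.dot(weights, b))

def renyi_entropy(weights, alpha):
    """Renyi entropy of a weight vector: (1/(1-alpha)) * log(sum w_i^alpha), alpha > 0, alpha != 1."""
    w = np.asarray(weights, dtype=float)
    return float(np.log(np.sum(w ** alpha)) / (1.0 - alpha))

# uniform weights maximise the entropy; the skewed vector trades entropy for an or-like bias
w_uniform = np.array([0.25, 0.25, 0.25, 0.25])
w_skewed = np.array([0.70, 0.15, 0.10, 0.05])
x = np.array([0.3, 0.9, 0.5, 0.7])

print(owa(x, w_uniform), renyi_entropy(w_uniform, alpha=2.0))
print(owa(x, w_skewed), renyi_entropy(w_skewed, alpha=2.0))
```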

86 citations


Journal ArticleDOI
TL;DR: The concept of possibilistic correlation is presented, representing an average degree of interaction between the marginal distributions of a joint possibility distribution as compared to their respective dispersions, and the classical Cauchy-Schwarz inequality is formulated in this setting.

75 citations


Journal ArticleDOI
TL;DR: This paper continues the investigation of Parikh matrices and subword occurrences, and studies certain inequalities, as well as information about subword occurrences sufficient to determine the whole word uniquely.
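
For readers unfamiliar with the object of study, the sketch below computes a Parikh matrix as usually defined in the literature: the product of one elementary upper-triangular matrix per letter, whose entry (i, j+1) counts occurrences of the scattered subword a_i a_{i+1} ... a_j. The example word and alphabet are arbitrary, and none of the paper's inequalities are reproduced.

```python
import numpy as np

def parikh_matrix(word, alphabet):
    """Parikh matrix of `word` over the ordered `alphabet`."""
    k = len(alphabet)
    index = {a: i for i, a in enumerate(alphabet)}
    M = np.eye(k + 1, dtype=np.int64)
    for c in word:
        q = index[c]
        E = np.eye(k + 1, dtype=np.int64)
        E[q, q + 1] = 1          # elementary matrix for letter a_q
        M = M @ E
    return M

M = parikh_matrix("abcab", "abc")
# M[0, 1] = |w|_a, M[0, 2] = |w|_ab, M[0, 3] = |w|_abc, M[1, 3] = |w|_bc, ...
print(M)
```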

61 citations


Journal ArticleDOI
TL;DR: This paper adopts a multi-methodological approach to information systems research in order to produce new information through data mining, an approach particularly suitable for mining material that consists of both qualitative and quantitative information.

60 citations


Journal ArticleDOI
TL;DR: Data sets from ICAT experiments, analysed with liquid chromatography/tandem mass spectrometry (MS/MS), using an Applied Biosystems QSTAR® Pulsar quadrupole-TOF mass spectrometer, were processed in triplicate using separate mass spectrometry software packages.
Abstract: The options available for processing quantitative data from isotope coded affinity tag (ICAT) experiments have mostly been confined to software specific to the instrument of acquisition. However, recent developments with data format conversion have subsequently increased such processing opportunities. In the present study, data sets from ICAT experiments, analysed with liquid chromatography/tandem mass spectrometry (MS/MS), using an Applied Biosystems QSTAR® Pulsar quadrupole-TOF mass spectrometer, were processed in triplicate using separate mass spectrometry software packages. The programs Pro ICAT, Spectrum Mill and SEQUEST with XPRESS were employed. Attention was paid towards the extent of common identification and agreement of quantitative results, with additional interest in the flexibility and productivity of these programs. The comparisons were made with data from the analysis of a specifically prepared test mixture, nine proteins at a range of relative concentration ratios from 0.1 to 10 (light to heavy labelled forms), as a known control, and data selected from an ICAT study involving the measurement of cytokine induced protein expression in human lymphoblasts, as an applied example. Dissimilarities were detected in peptide identification that reflected how the associated scoring parameters favoured information from the MS/MS data sets. Accordingly, there were differences in the numbers of peptides and protein identifications, although from these it was apparent that both confirmatory and complementary information was present. In the quantitative results from the three programs, no statistically significant differences were observed.

33 citations


Journal ArticleDOI
TL;DR: It is shown that the performance of SVMs can be improved by the proposed weighting scheme, and the increase of the classification performance due to the weighting is greater than that obtained by selecting the underlying classifier or the kernel part of the SVM.
Abstract: The ability to distinguish between genes and proteins is essential for understanding biological text. Support Vector Machines (SVMs) have been proven to be very efficient in general data mining tasks. We explore their capability for the gene versus protein name disambiguation task. We incorporated into the conventional SVM a weighting scheme based on distances of context words from the word to be disambiguated. This weighting scheme increased the performance of SVMs by five percentage points, giving performance better than 85% as measured by the area under ROC curve, and outperformed the Weighted Additive Classifier, which also incorporates the weighting, and the Naive Bayes classifier. We show that the performance of SVMs can be improved by the proposed weighting scheme. Furthermore, our results suggest that in this study the increase of the classification performance due to the weighting is greater than that obtained by selecting the underlying classifier or the kernel part of the SVM.
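
The authors' exact weighting scheme is not reproduced here; the sketch below only illustrates the general idea with an assumed inverse-distance weight 1/(1 + d) applied to context words before training a scikit-learn SVM on a tiny, made-up gene/protein example.

```python
import numpy as np
from sklearn.svm import SVC

def context_features(tokens, target_index, vocabulary):
    """Bag-of-words vector in which each context word is weighted by 1/(1 + distance)
    from the token to be disambiguated (an illustrative scheme, not the paper's)."""
    x = np.zeros(len(vocabulary))
    for i, tok in enumerate(tokens):
        if i == target_index or tok not in vocabulary:
            continue
        x[vocabulary[tok]] += 1.0 / (1.0 + abs(i - target_index))
    return x

# toy data: (tokens, index of the ambiguous name, label) with gene = 1, protein = 0
sentences = [("the p53 gene was expressed".split(), 1, 1),
             ("binding of the p53 protein".split(), 3, 0),
             ("mutations in the gene p53".split(), 4, 1),
             ("the p53 protein complex".split(), 1, 0)]
vocab = {w: i for i, w in enumerate(sorted({t for s, _, _ in sentences for t in s}))}
X = np.array([context_features(s, idx, vocab) for s, idx, _ in sentences])
y = np.array([label for _, _, label in sentences])

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X))
```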

23 citations


Journal ArticleDOI
TL;DR: A linear-time algorithm is presented for filling regions defined by closed contours in raster format; it relies on a single-pass contour labeling, and the actual filling is done in a scan-line manner, visiting the interior pixels only once.
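
The single-pass contour-labeling step itself is not reproduced here; as a point of reference only, the sketch below implements the classic even-odd scan-line fill for a polygon given by its vertices, which shares the scan-line structure but not the paper's raster-contour labeling.

```python
import math

def scanline_fill(vertices, width, height):
    """Classic even-odd scan-line polygon fill (illustrative only)."""
    grid = [[0] * width for _ in range(height)]
    n = len(vertices)
    for y in range(height):
        yc = y + 0.5                                   # sample at pixel centres
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
            if (y0 <= yc < y1) or (y1 <= yc < y0):     # edge crosses this scan line
                t = (yc - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        xs.sort()
        for j in range(0, len(xs) - 1, 2):             # fill between crossing pairs
            left = math.ceil(xs[j] - 0.5)
            right = math.floor(xs[j + 1] - 0.5)
            for x in range(max(left, 0), min(right, width - 1) + 1):
                grid[y][x] = 1
    return grid

filled = scanline_fill([(1, 1), (8, 2), (4, 7)], width=10, height=9)
print("\n".join("".join("#" if v else "." for v in row) for row in filled))
```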

23 citations


Book ChapterDOI
18 Jul 2005
TL;DR: In this paper, the authors introduce a formal framework for reasoning about performance-style properties of probabilistic programs at the level of program code, drawing heavily on the refinement-style of program verification.
Abstract: We introduce a formal framework for reasoning about performance-style properties of probabilistic programs at the level of program code. Drawing heavily on the refinement-style of program verification, our approach promotes abstraction and proof re-use. The theory and proof tools to facilitate the verification have been implemented in HOL.
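
The HOL formalisation is not reproduced here. As a minimal illustration of the expectation-transformer style such frameworks build on (in the spirit of pGCL, and assuming that style rather than the paper's exact definitions), the weakest pre-expectation of a probabilistic choice averages the branches:

```latex
\begin{align*}
  wp(x := E,\; f) &= f[x := E] \\
  wp(S_1 \mathbin{{}_{p}\oplus} S_2,\; f) &= p \cdot wp(S_1, f) + (1 - p) \cdot wp(S_2, f) \\
  \text{e.g.}\quad wp\bigl((x := 1) \mathbin{{}_{1/2}\oplus} (x := 0),\; x\bigr)
    &= \tfrac12 \cdot 1 + \tfrac12 \cdot 0 = \tfrac12 .
\end{align*}
```

So the expected final value of x after a fair probabilistic choice is 1/2; performance-style properties are statements about such expectations.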

18 citations


Book ChapterDOI
08 Sep 2005
TL;DR: An adaptation of the Regularized Least-Squares algorithm for the rank learning problem and an application of the method to reranking of the parses produced by the Link Grammar (LG) dependency parser are presented.
Abstract: We present an adaptation of the Regularized Least-Squares algorithm for the rank learning problem and an application of the method to reranking of the parses produced by the Link Grammar (LG) dependency parser. We study the use of several grammatically motivated features extracted from parses and evaluate the ranker with individual features and the combination of all features on a set of biomedical sentences annotated for syntactic dependencies. Using a parse goodness function based on the F-score, we demonstrate that our method produces a statistically significant increase in rank correlation from 0.18 to 0.42 compared to the built-in ranking heuristics of the LG parser. Further, we analyze the performance of our ranker with respect to the number of sentences and parses per sentence used for training and illustrate that the method is applicable to sparse datasets, showing improved performance with as few as 100 training sentences.
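
The RankRLS-style adaptation and the grammatical features are not reproduced here; the sketch below only shows plain regularized least squares used as a scoring function on hypothetical parse feature vectors, with Kendall's tau as the rank-correlation measure (numpy and scipy assumed).

```python
import numpy as np
from scipy.stats import kendalltau

def rls_fit(X, y, lam=1.0):
    """Regularized least squares: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
true_w = rng.normal(size=10)                               # hypothetical "goodness" direction
X_train = rng.normal(size=(200, 10))                       # feature vectors of candidate parses
y_train = X_train @ true_w + 0.1 * rng.normal(size=200)    # noisy F-score-like targets

w = rls_fit(X_train, y_train, lam=1.0)

X_test = rng.normal(size=(20, 10))
y_test = X_test @ true_w
scores = X_test @ w                                        # rank candidates by predicted score
print("rank correlation:", kendalltau(y_test, scores)[0])
```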

16 citations


Journal ArticleDOI
TL;DR: It is shown that the centralizer of any rational code is rational and that, if the code is finite, the centralizer is finitely generated; an elementary proof is given for the prefix case.

15 citations


Book ChapterDOI
13 Dec 2005
TL;DR: It is shown, by comparing different settings for the peer-to-peer overlay and the underlying mobile ad hoc network, that a cross-layer approach performs better than separating the overlay from the access network.
Abstract: With the advance in mobile wireless communication technology and the increasing number of mobile users, peer-to-peer computing, in both academic research and industrial development, has recently begun to extend its scope to address problems relevant to mobile devices and wireless networks. This paper is a performance study of peer-to-peer systems over mobile ad hoc networks. By comparing different settings for the peer-to-peer overlay and the underlying mobile ad hoc network, we show that a cross-layer approach performs better than separating the overlay from the access network.

Proceedings ArticleDOI
11 Jul 2005
TL;DR: Results from the field experiment imply that a mobile system, once implemented in the case organization, could permanently change daily routines in customer service recovery and introduce new ways of reacting to problems, treating them in a collaborative manner and learning together.
Abstract: The paper focuses on developing the idea that process visibility is a key aspect of effective cross-boundary organizational processes, specifically customer care, and that it is potentially enhanced by the capabilities provided by mobile technology. Process visibility enables specific actors to track the dynamics of a process, empowers them and provides a higher degree of involvement, which may result in increased customer satisfaction. We introduce the visibility concept into the domain of mobile information systems, and within the following case study we outline the unique opportunities it can offer when incorporated into business customer solutions. Results from the field experiment imply that a mobile system, once implemented in the case organization, could permanently change daily routines in customer service recovery and introduce new ways of reacting to problems, treating them in a collaborative manner and learning together.

Journal ArticleDOI
TL;DR: Three algebraic constructions of authentication codes with secrecy are presented; they have simple algebraic structures, are easy to implement, and are asymptotically optimal with respect to certain bounds.
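
The paper's three constructions are not given here. As background only, the sketch below simulates the classical affine authentication code over a prime field (key (a, b), tag t = a·s + b mod p), whose substitution-attack success probability is 1/p; it illustrates the authentication side only, not the secrecy aspect.

```python
import random

p = 101                                   # a small prime, keys and tags live in Z_p

def tag(key, source):
    a, b = key
    return (a * source + b) % p

# empirical substitution attack: the adversary sees (s, t) and tries to forge (s', t')
random.seed(1)
trials, success = 20000, 0
for _ in range(trials):
    key = (random.randrange(p), random.randrange(p))
    s = random.randrange(p)
    t = tag(key, s)                       # observed authenticated message
    s_forged = (s + 1) % p                # any s' != s
    t_forged = random.randrange(p)        # best the adversary can do is guess the tag
    if tag(key, s_forged) == t_forged:
        success += 1
print(success / trials, "vs. theoretical 1/p =", 1 / p)
```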

Journal ArticleDOI
TL;DR: This paper presents a compressed shadow map algorithm, which reduces memory consumption by representing lit surfaces with endpoints of intermediate line segments as opposed to the conventional array-based pixel structures, and helps with the shadow map self-shadowing problems.
Abstract: Shadow mapping has been subject to extensive investigation, but previous shadow map algorithms cannot usually generate high-quality shadows with a small memory footprint. In this paper, we present compressed shadow maps as a solution to this problem. A compressed shadow map reduces memory consumption by representing lit surfaces with endpoints of intermediate line segments as opposed to the conventional array-based pixel structures. Compressed shadow maps are only discretized in the vertical direction while the horizontal direction is represented by floating-point accuracy. The compression also helps with the shadow map self-shadowing problems. We compare our algorithm against all of the most popular shadow map algorithms and show, on average, order of magnitude improvements in storage requirements in our test scenes. The algorithm is simple to implement, can be added easily to existing software renderers, and lets us use graphics hardware for shadow visualization.
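
As a toy illustration of the segment idea only (not the authors' data structure or compression criterion), the sketch below greedily replaces one row of per-pixel depth samples by line segments whose endpoint depths are kept at floating-point accuracy.

```python
def compress_row(depths, tolerance=1e-3):
    """Greedily compress one shadow-map row into segments (start, end, d_start, d_end)."""
    segments = []
    start, n = 0, len(depths)
    while start < n:
        end = start
        while end + 1 < n:
            d0, d1 = depths[start], depths[end + 1]
            length = end + 1 - start
            # accept the longer segment only if it predicts every covered depth within tolerance
            ok = all(abs(depths[start + i] - (d0 + (d1 - d0) * i / length)) <= tolerance
                     for i in range(length + 1))
            if not ok:
                break
            end += 1
        segments.append((start, end, depths[start], depths[end]))
        start = end + 1
    return segments

row = [1.0, 1.1, 1.2, 1.3, 5.0, 5.0, 5.0, 2.0]
print(compress_row(row))          # three segments instead of eight depth samples
```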

Journal ArticleDOI
TL;DR: It is proved that for any nonperiodic set of words F ⊆ Σ+ with at most three elements, the centralizer of F, i.e., the largest set commuting with F, is F*.
Abstract: We prove that for any nonperiodic set of words F ⊆ Σ+ with at most three elements, the centralizer of F, i.e., the largest set commuting with F, is F*. Moreover, any set X commuting with F is of the form X = F^I, for some I ⊆ ℕ. A boundary point is thus established, as these results do not hold for all languages with at least four words. This solves a conjecture of Karhumäki and Petre, and provides positive answers to special cases of some intriguing questions on commutation of languages, raised by Ratoandromanana and Conway.

Book ChapterDOI
22 Feb 2005
TL;DR: Examples of relational correspondences are presented for not necessarily distributive lattices with modal-like operators of possibility and sufficiency, as a first step towards a correspondence theory for lattices with operators.
Abstract: In this paper we present some examples of relational correspondences for not necessarily distributive lattices with modal-like operators of possibility (normal and additive operators) and sufficiency (co-normal and co-additive operators). Each of the algebras (P, ∨, ∧, 0, 1, f), where (P, ∨, ∧, 0, 1) is a bounded lattice and f is a unary operator on P, determines a relational system (frame) (X(P), ≲₁, ≲₂, R_f, S_f) with binary relations ≲₁, ≲₂, R_f, S_f appropriately defined from P and f. Similarly, any frame of the form (X, ≲₁, ≲₂, R, S) with two quasi-orders ≲₁ and ≲₂ and two binary relations R and S induces an algebra (L(X), ∨, ∧, 0, 1, f_{R,S}), where the operations ∨, ∧, and f_{R,S} and the constants 0 and 1 are defined from the resources of the frame. We investigate, on the one hand, how properties of an operator f in an algebra P correspond to the properties of the relations R_f and S_f in the induced frame and, on the other hand, how properties of relations in a frame relate to the properties of the operator f_{R,S} of an induced algebra. The general observations and the examples of correspondences presented in this paper are a first step towards the development of a correspondence theory for lattices with operators.


Journal ArticleDOI
TL;DR: The degree of dominance of an alternative x with respect to an available fuzzy set of alternatives is introduced and new congruence axioms for fuzzy choice functions are formulated.

Journal ArticleDOI
22 Nov 2005
TL;DR: A short, elementary proof is given of Hmelevskii's result that the set of solutions of xyz = zvx cannot be described using only finitely many parameters, contrary to the case of equations in three unknowns.
Abstract: Although Makanin proved the problem of satisfiability of word equations to be decidable, the general structure of solutions is difficult to describe. In particular, Hmelevskii proved that the set of solutions of xyz = zvx cannot be described using only finitely many parameters, contrary to the case of equations in three unknowns. In this paper we give a short, elementary proof of Hmelevskii's result.
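
To make the object of study concrete, the sketch below brute-forces short solutions of xyz = zvx over a binary alphabet; it says nothing about the parametrizability result itself, and the length bound is arbitrary.

```python
from itertools import product

def words(max_len, alphabet="ab"):
    """All words over the alphabet up to the given length, including the empty word."""
    for n in range(max_len + 1):
        for t in product(alphabet, repeat=n):
            yield "".join(t)

# enumerate short non-trivial solutions of the word equation xyz = zvx
solutions = [(x, y, z, v)
             for x in words(2) for y in words(2) for z in words(2) for v in words(2)
             if x + y + z == z + v + x and (x or y or z or v)]
for s in solutions[:10]:
    print(s)
```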

Proceedings ArticleDOI
30 Aug 2005
TL;DR: The need for a robust model interchange document format in order to realize the model driven engineering vision is discussed, concluding with a plea for continued discussion on the future and improvement of the XMI and XMI[DI] model interchange standards.
Abstract: We discuss the need for a robust model interchange document format in order to realize the model driven engineering vision. The OMG proposes XMI and XMI[DI] as the standard document formats to store and exchange models and diagrams between applications. However, there are still some open issues regarding the current version of these standards. In this article, we discuss some of these issues and conclude with a plea for a continued discussion on the future and improvement of the XMI and the XMI[DI] model interchange standards.

Book ChapterDOI
13 Jun 2005
TL;DR: An approach to empirical software engineering based on a combined software factory and software laboratory is described to define and evaluate a software process that combines practices from Extreme Programming with architectural design and documentation practices in order to find a balance between agility, maintainability and reliability.
Abstract: In this article, we describe an approach to empirical software engineering based on a combined software factory and software laboratory. The software factory develops software required by an external customer while the software laboratory monitors and improves the processes and methods used in the factory. We have used this approach during a period of four years to define and evaluate a software process that combines practices from Extreme Programming with architectural design and documentation practices in order to find a balance between agility, maintainability and reliability.

Journal ArticleDOI
TL;DR: The concepts behind local perception filters are examined and extended to cover artificially increased delays and are implemented in a testbench program, which is used to study the usability and limitations of the approach.

Journal ArticleDOI
01 Mar 2005
TL;DR: A method is proposed for synthesising a set of components from a high-level specification of the intended behaviour of the target system, proceeding via correctness-preserving transformation steps towards an implementable architecture of components which communicate asynchronously.
Abstract: We propose a method for synthesising a set of components from a high-level specification of the intended behaviour of the target system. The designer proceeds via correctness-preserving transformation steps towards an implementable architecture of components which communicate asynchronously. The interface model of each component specifies the communication protocol used. At each step a pre-defined component is extracted and the correctness of the step is proved. This ensures the compatibility of the components. We use Action Systems as our formal approach to system design. The method is inspired by hardware-oriented approaches with their component libraries, but is more general. We also explore the possibility of using tool support to administer the derivation, as well as to assist in correctness proofs. Here we rely on the tools supporting the B Method, as this method is closely related to Action Systems and has good tool support.

Journal ArticleDOI
TL;DR: A new algebraic model for program variables is introduced, suitable for reasoning about recursive procedures with parameters and local variables in a mechanical verification setting, and a special form of Hoare specification statement is introduced which alone is enough to fully specify a procedure.
Abstract: We introduce a new algebraic model for program variables, suitable for reasoning about recursive procedures with parameters and local variables in a mechanical verification setting. We give a predicate transformer semantics to recursive procedures and prove refinement rules for introducing recursive procedure calls, procedure parameters, and local variables. We also prove, based on the refinement rules, Hoare total correctness rules for recursive procedures, and parameters. We introduce a special form of Hoare specification statement which alone is enough to fully specify a procedure. Moreover, we prove that this Hoare specification statement is equivalent to a refinement specification. We implemented this theory in the PVS theorem prover.
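
As background in standard refinement-calculus notation (not the paper's exact PVS formalisation), the specification statement and its link to total-correctness Hoare triples can be summarised as follows, for a statement S that modifies only the variables x:

```latex
\begin{align*}
  wp(x := e,\; q) &= q[x := e] \\
  wp\bigl(x{:}[\,p,\ q\,],\; r\bigr) &= p \wedge \forall x'.\; q[x := x'] \Rightarrow r[x := x'] \\
  \{p\}\; S\; \{q\}\ \text{(total correctness)} &\iff \bigl(p \Rightarrow wp(S, q)\bigr)
    \iff x{:}[\,p,\ q\,] \sqsubseteq S .
\end{align*}
```

The paper's Hoare specification statement plays the role of x:[p, q] here: a single statement that fully specifies a procedure and is provably equivalent to a refinement specification.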

Journal ArticleDOI
TL;DR: Pin's variety theorem is proved for trees and a bijective correspondence between positive varieties of tree languages and varieties of finite ordered algebras is established, which corresponds to Steinby's generalized variety theorem.

Journal ArticleDOI
TL;DR: The number of primitive and unbordered words w with a fixed weight, that is, words for which the Parikh vector of w is a fixed vector, is studied.
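
For small cases the quantities in question can be enumerated directly; the sketch below counts primitive and unbordered words with a given Parikh vector by brute force (the example vector is arbitrary, and this is purely illustrative, not the paper's counting technique).

```python
from itertools import permutations

def is_primitive(w):
    """A word is primitive if it is not a proper power u^k, k >= 2."""
    n = len(w)
    return not any(n % d == 0 and w == w[:d] * (n // d) for d in range(1, n))

def is_unbordered(w):
    """A word is unbordered if no proper non-empty prefix equals a suffix."""
    return not any(w[:k] == w[-k:] for k in range(1, len(w)))

def count_with_parikh_vector(parikh):
    """Count primitive and unbordered words with Parikh vector e.g. {'a': 2, 'b': 2}."""
    letters = "".join(c * k for c, k in sorted(parikh.items()))
    words = {"".join(p) for p in permutations(letters)}
    return (sum(1 for w in words if is_primitive(w)),
            sum(1 for w in words if is_unbordered(w)))

print(count_with_parikh_vector({"a": 2, "b": 2}))   # (primitive, unbordered)
```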

Journal ArticleDOI
TL;DR: This paper assesses how location-based mobile support systems can support salespersons’ CRM efforts when they are operating within a highly mobile work environment, and suggests potential mobile location services and applications that can help salespersons perform their everyday CRM tasks effectively, linking such applications to the determinant of salespersons’ performance.
Abstract: This paper aims at assessing how location-based mobile support systems can support salespersons’ CRM efforts when they are operating within a highly mobile work environment. After briefly discussing the state-of-the-art issues associated with mobile location technologies, the paper conceptualizes key dimensions for location-based mobile support systems. The paper then discusses the dual role of salespersons in CRM. A fourth section suggests a categorization of salespersons’ CRM tasks based on both properties of location-based mobile support and the areas of salespersons’ CRM-related tasks that may be affected by mobile location technologies. Finally, the paper suggests potential mobile location services and applications that can help salespersons perform effectively their everyday CRM tasks and link such applications to the determinant of salespersons’ performance. The paper concludes with a discussion of some critical issues and suggests areas for further research.

Book ChapterDOI
TL;DR: This work introduces new constructs to model the client-server architecture of grid systems, as well as important features like communication and synchronisation, in such a manner that the necessary proof obligations are automatically generated and the system can be implemented in a straightforward manner.
Abstract: Computational grids have become widespread in organizations for handling their need for computational resources and the vast amount of available information. Grid systems, and other distributed systems, are often complex and formal reasoning about them is needed, in order to ensure their correctness and to structure their development. Event B is a formal method with tool support that is meant for stepwise development of distributed systems. To facilitate the implementation of grid systems we here propose extensions to Event B that take grid specific features into account. We add new constructs to model the client-server architecture of grid systems, as well as important features like communication and synchronisation. We introduce the extensions in such a manner that the necessary proof obligations are automatically generated and the system can be implemented in a straightforward manner.