Institution
Turku Centre for Computer Science
Facility • Turku, Finland
About: Turku Centre for Computer Science is a research facility based in Turku, Finland. It is known for research contributions in the topics Decidability and Word (group theory). The organization has 382 authors who have published 1027 publications receiving 19560 citations.
Papers published on a yearly basis
Papers
24 Jun 2007
TL;DR: In binary Hamming spaces, new 1-identifying codes are constructed which improve on previously known upper bounds on the cardinalities of 1-identifying codes for many lengths n ≥ 10.
Abstract: In binary Hamming spaces, we construct new 1-identifying codes which improve on previously known upper bounds on the cardinalities of 1-identifying codes for many lengths n ≥ 10. We also construct τ-identifying codes using the direct sum of τ codes that are 1-identifying.
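The identifying-set condition underlying the abstract can be checked by brute force for small lengths: a code C is 1-identifying when every word's identifying set I(x) = B₁(x) ∩ C is nonempty and distinct from every other word's. A minimal sketch in Python (the names `ball1` and `is_1_identifying` are illustrative, not from the paper):

```python
def ball1(x, n):
    """All words within Hamming distance 1 of x: x itself plus single-bit flips."""
    yield x
    for i in range(n):
        yield x ^ (1 << i)

def is_1_identifying(code, n):
    """Check whether `code` (a set of ints, each an n-bit word) is 1-identifying:
    every word's identifying set I(x) = B_1(x) ∩ C must be nonempty and
    distinct from every other word's identifying set."""
    seen = set()
    for x in range(1 << n):
        ident = frozenset(c for c in ball1(x, n) if c in code)
        if not ident or ident in seen:
            return False
        seen.add(ident)
    return True
```

For example, the whole space is 1-identifying for n = 2, while a single codeword never is; the exhaustive check is exponential in n and only meant to make the definition concrete.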
1 citations
01 Jan 2011
TL;DR: Energy-efficient hierarchical monitoring is presented on smart house platforms, using scalable and portable design interfaces to tackle the increasing number of embedded systems.
Abstract: Energy-efficient hierarchical monitoring is presented on smart house platforms. The rapid expansion of embedded systems requires scalable and portable design interfaces to tackle the increasing ...
1 citations
02 Nov 2005
TL;DR: A straightforward method for dynamically updating the lower bound on the lcs length is presented; the purpose is to refine the estimate gradually so as to prune the search space of the exact lcs algorithm more effectively.
Abstract: The running time of longest common subsequence (lcs) algorithms is shown to depend on several parameters, e.g. the size of the input alphabet, the distribution of characters in the input strings, and the degree of similarity between the strings. It is therefore very difficult to devise an lcs algorithm that is efficient on all relevant problem instances. As a consequence, many of these algorithms are designed for only a restricted set of possible inputs, and some of them are moreover quite tricky to implement.
In order to speed up the running time of lcs algorithms in general, one of the most crucial prerequisites is that preliminary information about the input strings can be utilized. In addition, this information should be available after a reasonably quick preprocessing phase. One informative a priori value to calculate is a lower bound on the length of the lcs. However, the obtained lower bound might not be as tight as desired, in which case the preprocessing yields no appreciable advantage.
In this paper, a straightforward method for dynamically updating the lower bound on the lcs length is presented. The purpose is to refine the estimate gradually so as to prune the search space of the exact lcs algorithm more effectively. Furthermore, simulation tests of the new method are performed to demonstrate its benefits.
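The idea of bounding the lcs from below before running an exact search can be illustrated with a simple greedy bound alongside the standard dynamic program. This is a hedged sketch with hypothetical names, and the greedy bound is far cruder than the dynamically refined estimate the paper describes:

```python
def lcs_lower_bound(a, b):
    """A quick greedy lower bound on lcs length: match each character of `a`
    against the earliest unused occurrence in `b`. Any common subsequence
    found this way bounds the true lcs from below."""
    j = 0
    count = 0
    for ch in a:
        k = b.find(ch, j)
        if k != -1:
            count += 1
            j = k + 1
    return count

def lcs_length(a, b):
    """Exact lcs length by standard dynamic programming, O(len(a) * len(b))."""
    prev = [0] * (len(b) + 1)
    for ch in a:
        cur = [0]
        for j, bj in enumerate(b):
            cur.append(prev[j] + 1 if ch == bj else max(cur[j], prev[j + 1]))
        prev = cur
    return prev[-1]
```

The greedy bound runs in near-linear time, so it fits the "reasonably quick preprocessing phase" requirement; a branch of the exact search can be pruned whenever it provably cannot beat the current bound.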
1 citations
TL;DR: The equivalence relation problem is studied, and the motivations to replace the equivalence relation by a data structure suitable for efficient computation are identified.
Abstract: Well-known strategies give incentive to the algorithmic refinement of programs and we ask in this paper whether general patterns also exist for data refinement. In order to answer this question, we study the equivalence relation problem and identify the motivations to replace the equivalence relation by a data structure suitable for efficient computation.
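A standard target for such a data refinement is the disjoint-set (union-find) forest, which represents an equivalence relation implicitly through class representatives rather than as an explicit set of pairs. A minimal sketch, assuming the classical structure rather than anything specific to the paper:

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by size:
    represents an equivalence relation on {0, ..., n-1} so that merging
    classes and testing equivalence both run in near-constant amortized time."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:      # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx               # attach smaller tree under larger
        self.size[rx] += self.size[ry]

    def equivalent(self, x, y):
        return self.find(x) == self.find(y)
```

The refinement replaces the abstract relation (a set of pairs closed under reflexivity, symmetry, and transitivity) with a forest whose invariant is that two elements share a root exactly when they are related.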
1 citations
15 Aug 2002
TL;DR: This paper makes explicit the cause of the Turing universality of models of DNA computing, reducing the matter to previously known issues in computability theory.
Abstract: Watson-Crick complementarity is one of the two central components of DNA computing, the other being the massive parallelism of DNA strands. While the parallelism drastically reduces the computational complexity (provided the laboratory techniques become adequate), the complementarity is the actual computational tool "freely" available. It is also the cause behind the Turing universality of models of DNA computing. This paper makes this cause explicit, reducing the matter to some previously known issues in computability theory. We also discuss some specific models.
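Watson-Crick complementarity itself is easy to state concretely: A pairs with T and C with G, and two strands anneal when each is the reverse complement of the other. A small illustrative sketch of the pairing rule (the function names are ours, not the paper's, and this only captures the base-pairing, not any particular DNA computing model):

```python
# Watson-Crick base pairing: A <-> T, C <-> G
WC = {"A": "T", "T": "A", "C": "G", "G": "C"}

def wk_complement(strand):
    """Watson-Crick complement of a DNA strand read in the opposite
    orientation, i.e. the reverse complement."""
    return "".join(WC[base] for base in reversed(strand))

def anneals(s, t):
    """Two strands can form a double helix exactly when each is the
    Watson-Crick complement of the other."""
    return t == wk_complement(s)
```

It is this involution on strands, applied for free by the chemistry, that universality arguments for DNA computing models exploit.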
1 citations
Authors
Showing all 383 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| José A. Teixeira | 101 | 1414 | 47329 |
| Cunsheng Ding | 61 | 254 | 11116 |
| Jun'ichi Tsujii | 59 | 389 | 15985 |
| Arto Salomaa | 56 | 374 | 17706 |
| Tero Aittokallio | 52 | 271 | 8689 |
| Risto Lahdelma | 48 | 149 | 6637 |
| Hannu Tenhunen | 45 | 819 | 11661 |
| Mats Gyllenberg | 44 | 204 | 8029 |
| Sampo Pyysalo | 42 | 153 | 8839 |
| Olli Polo | 42 | 140 | 5303 |
| Pasi Liljeberg | 40 | 306 | 6959 |
| Tapio Salakoski | 38 | 231 | 7271 |
| Filip Ginter | 37 | 156 | 7294 |
| Robert Fullér | 37 | 152 | 5848 |
| Juha Plosila | 35 | 342 | 4917 |