Institution

Turku Centre for Computer Science

Facility: Turku, Finland
About: Turku Centre for Computer Science is a facility organization based in Turku, Finland. It is known for research contributions in the topics: Decidability & Word (group theory). The organization has 382 authors who have published 1027 publications receiving 19560 citations.


Papers
Proceedings ArticleDOI
24 Jun 2007
TL;DR: In binary Hamming spaces, new 1-identifying codes are constructed which improve on previously known upper bounds on the cardinalities of 1-identifying codes for many lengths n ≥ 10.
Abstract: In binary Hamming spaces, we construct new 1-identifying codes which improve on previously known upper bounds on the cardinalities of 1-identifying codes for many lengths n ≥ 10. We also construct τ-identifying codes using the direct sum of τ codes that are 1-identifying.

1 citations
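The 1-identifying property used above can be checked by brute force for small lengths: a code C ⊆ F₂ⁿ is 1-identifying if the sets I(x) = B₁(x) ∩ C are nonempty and pairwise distinct over all words x. A minimal sketch of that check, not the paper's construction (the function names and the integer encoding of binary words are assumptions):

```python
def ball1(x, n):
    """Words of F_2^n within Hamming distance 1 of x: x itself plus its n one-bit flips."""
    yield x
    for i in range(n):
        yield x ^ (1 << i)

def is_1_identifying(code, n):
    """Brute-force check: C is 1-identifying iff I(x) = B_1(x) ∩ C is nonempty
    and pairwise distinct over all 2^n words x (words encoded as integers)."""
    code = set(code)
    seen = set()
    for x in range(1 << n):
        ident = frozenset(c for c in ball1(x, n) if c in code)
        if not ident or ident in seen:
            return False
        seen.add(ident)
    return True
```

Exhaustive search with such a check is feasible only for small n; the point of constructions like the direct sum in the abstract is precisely to obtain good codes at lengths where brute force is hopeless.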

Proceedings Article
01 Jan 2011
TL;DR: Energy-efficient hierarchical monitoring on smart house platforms is presented; scalable and portable design interfaces are needed to tackle the increasing number of embedded systems.
Abstract: Energy-efficient hierarchical monitoring is presented on smart house platforms. The rapid expansion of embedded systems requires scalable and portable design interfaces to tackle the increasing ...

1 citations

Book ChapterDOI
02 Nov 2005
TL;DR: A straightforward method for dynamically updating the lower bound value for the lcs is presented; the purpose is to refine the estimate gradually so as to prune the search space of the exact lcs algorithm in use more effectively.
Abstract: The running time of longest common subsequence (lcs) algorithms depends on several parameters, e.g. the size of the input alphabet, the distribution of characters in the input strings, and the degree of similarity between the strings. It is therefore very difficult to devise an lcs algorithm that is efficient enough for all relevant problem instances. As a consequence, many such algorithms are designed to be applied only to a restricted set of all possible inputs. Some of them are, moreover, quite tricky to implement. To speed up lcs algorithms in general, one of the most crucial prerequisites is that preliminary information about the input strings can be utilized. In addition, this information should be available after a reasonably quick preprocessing phase. One informative a priori value to calculate is a lower bound estimate for the length of the lcs. However, the obtained lower bound might not be as accurate as desired, in which case the preprocessing yields no appreciable advantage. In this paper, a straightforward method for dynamically updating the lower bound value for the lcs is presented. The purpose is to refine the estimate gradually in order to prune the search space of the exact lcs algorithm in use more effectively. Furthermore, simulation tests of the new method are reported to demonstrate its benefits.

1 citations
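One cheap a priori lower bound of the kind the abstract describes: any character c occurring in both strings yields the common subsequence c^k with k equal to the smaller of its occurrence counts, so the maximum such k over all characters bounds the lcs length from below. A sketch of this static bound only (not the paper's dynamic method; `lcs_lower_bound` is a hypothetical name):

```python
from collections import Counter

def lcs_lower_bound(s, t):
    """A priori lower bound on |lcs(s, t)|: for any character c, the run
    c^k with k = min(#c in s, #c in t) is a common subsequence of s and t,
    so the maximum such k over all shared characters is a valid lower bound."""
    cs, ct = Counter(s), Counter(t)
    return max((min(cs[c], ct[c]) for c in cs if c in ct), default=0)
```

The bound can be far below the true lcs length; the paper's contribution is to refine such an initial estimate dynamically during the search, while this snippet only illustrates what a quick preprocessing phase can provide.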

Book ChapterDOI
TL;DR: The equivalence relation problem is studied and the motivations to replace the equivalence relation by a data structure suitable for efficient computation are identified.
Abstract: Well-known strategies motivate the algorithmic refinement of programs, and we ask in this paper whether general patterns also exist for data refinement. To answer this question, we study the equivalence relation problem and identify the motivations for replacing the equivalence relation by a data structure suitable for efficient computation.

1 citations
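A canonical example of refining an equivalence relation into a data structure suited for efficient computation is the disjoint-set (union-find) forest; whether this is the refinement the chapter actually develops is not stated here, so the sketch below is illustrative only:

```python
class DisjointSet:
    """Union-find with path halving and union by rank: represents an
    equivalence relation over {0, ..., n-1} so that merging classes and
    testing equivalence both run in near-constant amortized time."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts in its own class
        self.rank = [0] * n

    def find(self, x):
        # Follow parent pointers to the class representative, halving the path.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        # Merge the classes of x and y, attaching the shallower tree under the deeper.
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

    def equivalent(self, x, y):
        return self.find(x) == self.find(y)
```

The refinement replaces the abstract relation (a set of pairs) by a forest of parent pointers, trading the mathematical object for one on which the two operations programs actually need — merge and membership test — are cheap.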

Book ChapterDOI
15 Aug 2002
TL;DR: This paper makes the cause of the Turing universality of models of DNA computing explicit, reducing the matter to some previously known issues in computability theory.
Abstract: Watson-Crick complementarity is one of the central components of DNA computing, the other central component being the massive parallelism of DNA strands. While the parallelism drastically reduces the computational complexity (provided that laboratory techniques become adequate), the complementarity is the actual computational tool "freely" available. It is also the cause behind the Turing universality of models of DNA computing. This paper makes this cause explicit, reducing the matter to some previously known issues in computability theory. We also discuss some specific models.

1 citations
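Watson-Crick complementarity is, formally, the letter-to-letter morphism A↔T, C↔G on the DNA alphabet, usually combined with reversal since strands bind antiparallel. A minimal sketch of that operation (illustrative only, not a model from the paper):

```python
# Watson-Crick pairing as a letter-to-letter morphism on the DNA alphabet.
WC = {"A": "T", "T": "A", "C": "G", "G": "C"}

def wc_complement(strand):
    """Reverse complement: the strand that binds to `strand` by A-T / C-G
    pairing, read in the conventional 5'->3' direction. The map is an
    involution: applying it twice returns the original strand."""
    return "".join(WC[b] for b in reversed(strand))
```

The involution property (complementing twice is the identity) is what lets formal models treat a strand and its complement as carrying the same information, which is the starting point for the universality arguments the abstract alludes to.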


Authors

Showing all 383 results

Name              H-index  Papers  Citations
José A. Teixeira  101      1414    47329
Cunsheng Ding     61       254     11116
Jun'ichi Tsujii   59       389     15985
Arto Salomaa      56       374     17706
Tero Aittokallio  52       271     8689
Risto Lahdelma    48       149     6637
Hannu Tenhunen    45       819     11661
Mats Gyllenberg   44       204     8029
Sampo Pyysalo     42       153     8839
Olli Polo         42       140     5303
Pasi Liljeberg    40       306     6959
Tapio Salakoski   38       231     7271
Filip Ginter      37       156     7294
Robert Fullér     37       152     5848
Juha Plosila      35       342     4917
Network Information
Related Institutions (5)
Vienna University of Technology
49.3K papers, 1.3M citations

84% related

Eindhoven University of Technology
52.9K papers, 1.5M citations

83% related

Aalto University
32.6K papers, 829.6K citations

83% related

Carnegie Mellon University
104.3K papers, 5.9M citations

82% related

Performance
Metrics
No. of papers from the Institution in previous years
Year  Papers
2023  1
2022  3
2021  3
2020  9
2019  8
2018  16