scispace - formally typeset
Institution

Helsinki Institute for Information Technology

Facility
Espoo, Finland
About: Helsinki Institute for Information Technology is a research facility based in Espoo, Finland. It is known for research contributions in the topics of Population and Bayesian networks. The organization has 630 authors who have published 1,962 publications receiving 63,426 citations.


Papers
Proceedings ArticleDOI
28 Oct 2004
TL;DR: Presents a set of merge rules derived from use cases on XML merging, a compact and versatile XML merge in accordance with these rules, and a classification of conflicts in the context of that merge.
Abstract: Three-way merging is a technique that may be employed for reintegrating changes to a document in cases where multiple independently modified copies have been made. While tools for three-way merge of ASCII text files exist in the form of the ubiquitous diff and patch tools, these are of limited applicability to XML documents. We present a method for three-way merging of XML which is targeted at merging XML formats that model human-authored documents as ordered trees (e.g. rich text formats, structured text, drawings, etc.). To this end we investigate a number of use cases on XML merging (collaborative editing, propagating changes across document variants), from which we derive a set of high-level merge rules. Our merge is based on these rules. We propose that our merge is easy to both understand and implement, yet sufficiently expressive to handle several important cases of merging on document structure that are beyond the capabilities of traditional text-based tools. In order to justify these claims we applied our merging method to the merging tasks contained in the use cases. The overall performance of the merge was found to be satisfactory. The key contributions of this work are: a set of merge rules derived from use cases on XML merging, a compact and versatile XML merge in accordance with these rules, and a classification of conflicts in the context of that merge.
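The paper's merge operates on ordered trees; the core three-way idea it builds on can be illustrated on flat, position-aligned sequences. A toy sketch of the rule structure (not the authors' tree algorithm — the aligned-sequence simplification is an assumption for brevity):

```python
def merge3(base, ours, theirs):
    """Three-way merge of equal-length, position-aligned sequences.

    For each position: keep whichever side changed relative to the base;
    if both sides changed to different values, record a conflict index.
    """
    merged, conflicts = [], []
    for i, (b, o, t) in enumerate(zip(base, ours, theirs)):
        if o == t:            # both sides agree (changed identically or not at all)
            merged.append(o)
        elif o == b:          # only "theirs" changed
            merged.append(t)
        elif t == b:          # only "ours" changed
            merged.append(o)
        else:                 # both changed differently: conflict
            merged.append(o)
            conflicts.append(i)
    return merged, conflicts
```

Non-overlapping edits from both copies combine cleanly, e.g. `merge3([1, 2, 3], [1, 9, 3], [1, 2, 7])` yields `([1, 9, 7], [])`, while overlapping edits surface as conflict positions.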

109 citations

MonographDOI
01 May 2015
TL;DR: This book presents the fundamental algorithms and data structures behind modern high-throughput sequence analysis, from alignments and hidden Markov models through k-mer indexes, suffix arrays and trees, and Burrows–Wheeler indexes, to graph algorithms and advanced omics applications.
Abstract: High-throughput sequencing has revolutionised the field of biological sequence analysis. Its application has enabled researchers to address important biological questions, often for the first time. This book provides an integrated presentation of the fundamental algorithms and data structures that power modern sequence analysis workflows. The topics covered range from the foundations of biological sequence analysis (alignments and hidden Markov models), to classical index structures (k-mer indexes, suffix arrays and suffix trees), Burrows–Wheeler indexes, graph algorithms and a number of advanced omics applications. The chapters feature numerous examples, algorithm visualisations, exercises and problems, each chosen to reflect the steps of large-scale sequencing projects, including read alignment, variant calling, haplotyping, fragment assembly, alignment-free genome comparison, transcript prediction and analysis of metagenomic samples. Each biological problem is accompanied by precise formulations, providing graduate students and researchers in bioinformatics and computer science with a powerful toolkit for the emerging applications of high-throughput sequencing.
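Among the classical index structures the book covers, a k-mer index is the simplest: map every length-k substring of the text to its start positions, then use seed-and-verify lookup for read alignment. A minimal sketch (illustrative, not taken from the book):

```python
from collections import defaultdict

def kmer_index(text, k):
    """Map each length-k substring of text to its list of start positions."""
    index = defaultdict(list)
    for i in range(len(text) - k + 1):
        index[text[i:i + k]].append(i)
    return index

def find(index, pattern, text, k):
    """Seed-and-verify: look up the pattern's first k-mer as a seed,
    then verify the full pattern at each candidate position."""
    hits = []
    for i in index.get(pattern[:k], []):
        if text[i:i + len(pattern)] == pattern:
            hits.append(i)
    return hits
```

For example, indexing `"ACGTACGT"` with k=3 and searching for `"ACGT"` returns positions `[0, 4]`. Suffix arrays and BWT-based indexes answer the same queries in far less space for genome-scale texts.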

109 citations

Proceedings ArticleDOI
10 Apr 2010
TL;DR: In the explicit biofeedback conditions, players were more immersed and positively affected, and they were able to manipulate the game play with the biosignal interface; the report recommends exploring the possibilities of using explicit biofeedback interaction in commercial games.
Abstract: To understand how implicit and explicit biofeedback work in games, we developed a first-person shooter (FPS) game to experiment with different biofeedback techniques. While this area has seen plenty of discussion, there is little rigorous experimentation addressing how biofeedback can enhance human-computer interaction. In our two-part study, (N=36) subjects first played eight different game stages with two implicit biofeedback conditions, with two simulation-based comparison and repetition rounds, then repeated the two biofeedback stages when given explicit information on the biofeedback. The biofeedback conditions were respiration and skin-conductance (EDA) adaptations. Adaptation targets were four balanced player avatar attributes. We collected data with psychophysiological measures (electromyography, respiration, and EDA), a game experience questionnaire, and game-play measures. According to our experiment, implicit biofeedback does not produce significant effects in player experience in an FPS game. In the explicit biofeedback conditions, players were more immersed and positively affected, and they were able to manipulate the game play with the biosignal interface. We recommend exploring the possibilities of using explicit biofeedback interaction in commercial games.
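As an illustration of how a biofeedback adaptation can drive an avatar attribute, a toy mapping from a skin-conductance reading to a bounded attribute multiplier. The mapping, baseline handling, and bounds here are assumptions for illustration, not the study's calibration:

```python
def adapt_attribute(eda, baseline, lo=0.5, hi=1.5):
    """Scale a game attribute (e.g. avatar speed) by relative arousal.

    eda / baseline > 1 means skin conductance above the resting level;
    the multiplier is clamped to [lo, hi] so the game stays playable.
    """
    return max(lo, min(hi, eda / baseline))
```

With a resting baseline of 1.0, a reading of 2.0 saturates at the upper bound 1.5, while 1.2 passes through as a 1.2x multiplier.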

109 citations

Journal ArticleDOI
TL;DR: An elegant recursion formula is derived which allows efficient computation of the stochastic complexity in the case of n observations of a single multinomial random variable with K values; the time complexity is O(n + K), as opposed to the O(n log n log K) obtained with previous results.
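The quantity being computed is the normalizing sum C_K(n) of the NML distribution for a K-valued multinomial with n observations. A sketch assuming the recurrence takes the form C_{k+2}(n) = C_{k+1}(n) + (n/k) C_k(n) with base cases C_1(n) = 1 and a direct O(n) binomial sum for C_2(n) — this form is my reading of the linear-time result, not quoted from the paper:

```python
from math import comb

def multinomial_nml(K, n):
    """Normalizing sum C_K(n) of the NML distribution for a multinomial
    with K outcomes and n observations, in O(n + K) time via the assumed
    recurrence C_{k+2} = C_{k+1} + (n / k) * C_k.
    """
    c_prev = 1.0                     # C_1(n) = 1
    if K == 1:
        return c_prev
    # C_2(n): direct binomial sum (0**0 evaluates to 1.0, as needed).
    c_curr = sum(comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
                 for h in range(n + 1))
    for k in range(1, K - 1):        # lift from C_2(n) up to C_K(n)
        c_prev, c_curr = c_curr, c_curr + (n / k) * c_prev
    return c_curr
```

A brute-force check for small cases: for K = 3 and n = 2, summing n!/(h1! h2! h3!) ∏ (h_i/n)^{h_i} over all compositions of 2 gives 4.5, which the recurrence reproduces.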

109 citations

Proceedings ArticleDOI
10 Apr 2011
TL;DR: By means of analytical models, it is shown that an opportunistic content sharing system, without any supporting infrastructure, can be a viable and surprisingly reliable option for content sharing as long as a certain criterion, referred to as the criticality condition, is met.
Abstract: We consider an opportunistic content sharing system designed to store and distribute local spatio-temporal “floating” information in an uncoordinated P2P fashion, relying solely on the mobile nodes passing through the area of interest, referred to as the anchor zone. Nodes within the anchor zone exchange the information in an opportunistic manner, i.e., whenever two nodes come within each other's transmission range. Outside the anchor zone, the nodes are free to delete the information, since it is deemed relevant only for the nodes residing inside the anchor zone. Due to the random nature of the operation, there are no guarantees, e.g., for the information availability. By means of analytical models, we show that such a system, without any supporting infrastructure, can be a viable and surprisingly reliable option for content sharing as long as a certain criterion, referred to as the criticality condition, is met. The important quantity is the average number of encounters a randomly chosen node experiences during its sojourn time in the anchor zone, which again depends on the communication range and the mobility pattern. The theoretical studies are complemented with simulation experiments with various mobility models showing good agreement with the analytical results.
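The key quantity is the mean number of encounters per sojourn. A back-of-the-envelope sketch using the standard strip-sweep estimate (a node of range r moving at relative speed v for time T meets roughly density * 2r * v * T others); the encounter model and the numeric threshold are illustrative assumptions, not the paper's exact criticality condition:

```python
def mean_encounters(density, comm_range, rel_speed, sojourn_time):
    """Expected encounters during a sojourn in the anchor zone:
    the node sweeps a strip of width 2 * comm_range at rel_speed,
    so it meets about density * 2r * v * T other nodes."""
    return density * 2 * comm_range * rel_speed * sojourn_time

def is_supercritical(n_enc, threshold=2.0):
    """Criticality check: content is expected to 'float' (persist) when
    the mean encounter count exceeds a threshold, as in branching-process
    survival arguments. The threshold value here is an assumption."""
    return n_enc > threshold
```

For example, with 0.01 nodes/m², a 50 m range, 1 m/s relative speed, and a 600 s sojourn, the estimate is 600 encounters, far above the threshold, so the content would be expected to float.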

108 citations


Authors

Showing all 632 results

Name                    H-index  Papers  Citations
Dimitri P. Bertsekas    94       332     85939
Olli Kallioniemi        90       353     42021
Heikki Mannila          72       295     26500
Jukka Corander          66       411     17220
Jaakko Kangasjärvi      62       146     17096
Aapo Hyvärinen          61       301     44146
Samuel Kaski            58       522     14180
Nadarajah Asokan        58       327     11947
Aristides Gionis        58       292     19300
Hannu Toivonen          56       192     19316
Nicola Zamboni          53       128     11397
Jorma Rissanen          52       151     22720
Tero Aittokallio        52       271     8689
Juha Veijola            52       261     19588
Juho Hamari             51       176     16631
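The H-index column above can be recomputed from per-paper citation counts; a minimal sketch of the definition (largest h such that h papers each have at least h citations):

```python
def h_index(citations):
    """Largest h such that at least h papers each have >= h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h
```

For instance, a researcher with papers cited [10, 8, 5, 4, 3] times has an h-index of 4: four papers each have at least four citations, but not five papers with at least five.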
Network Information
Related Institutions (5)
Google
39.8K papers, 2.1M citations

93% related

Microsoft
86.9K papers, 4.1M citations

93% related

Carnegie Mellon University
104.3K papers, 5.9M citations

91% related

Facebook
10.9K papers, 570.1K citations

91% related

Performance
Metrics
No. of papers from the Institution in previous years
Year  Papers
2023  1
2022  4
2021  85
2020  97
2019  140
2018  127