Institution

Helsinki Institute for Information Technology

Facility: Espoo, Finland
About: Helsinki Institute for Information Technology is a facility organization based in Espoo, Finland. It is known for research contributions in the topics: Population & Bayesian network. The organization has 630 authors who have published 1962 papers receiving 63426 citations.


Papers
Proceedings ArticleDOI
08 Feb 2009
TL;DR: A predictive text input technique based on association rules and item frequencies is introduced; it makes text input significantly faster, decreases typing error rates and increases overall user satisfaction.
Abstract: The fundamental nature of grocery shopping makes it an interesting domain for intelligent mobile assistants. Even though the central role of shopping lists is widely recognized, relatively little attention has been paid to facilitating shopping list creation and management. In this paper we introduce a predictive text input technique that is based on association rules and item frequencies. We also describe an interface design for integrating the predictive text input with a web-based mobile shopping assistant. In a user study we compared two interfaces, one with text input support and one without. Our results indicate that, even though shopping list entries are typically short, our technique makes text input significantly faster, decreases typing error rates and increases overall user satisfaction.
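The abstract only names the ingredients of the prediction scheme; the sketch below illustrates, under assumed data, how item frequencies and association rules could be combined to rank completions for a typed prefix. The item counts, rules, and boost factor are hypothetical and not taken from the paper.

```python
# Hedged sketch: ranking shopping-list completions from a typed prefix using
# item frequencies and simple association rules. All data below is made up
# for illustration; the paper's actual model may differ.

from collections import Counter

# How often each item appeared in past shopping lists (hypothetical counts).
item_frequencies = Counter({
    "milk": 120, "bread": 95, "butter": 60, "bananas": 55, "beer": 40,
})

# Association rules mined from past lists: antecedent items -> consequent item,
# with a confidence score (hypothetical values).
association_rules = [
    ({"bread"}, "butter", 0.7),
    ({"milk", "bread"}, "bananas", 0.4),
]

def suggest(prefix: str, current_list: set[str], k: int = 3) -> list[str]:
    """Rank items matching the typed prefix by frequency, boosted by any
    association rule whose antecedent is already on the list."""
    scores = {}
    for item, freq in item_frequencies.items():
        if not item.startswith(prefix) or item in current_list:
            continue
        score = float(freq)
        for antecedent, consequent, confidence in association_rules:
            if consequent == item and antecedent <= current_list:
                score *= 1.0 + confidence  # boost rule-supported items
        scores[item] = score
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Typing "b" with milk and bread already on the list favours "butter".
print(suggest("b", {"milk", "bread"}))
```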

15 citations

Proceedings Article
01 Jan 2010
TL;DR: The paper discovers which of the six goals (transparency, scrutability, effectiveness, persuasiveness, efficiency and trust) the different UI elements promote in existing music recommenders, and how they could be measured, in order to create a simple framework for evaluating recommender UIs.
Abstract: This paper provides a review of explanations, visualizations and interactive elements of user interfaces (UI) in music recommendation systems. We call these UI features "recommendation aids". Explanations are elements of the interface that inform the user why a certain recommendation was made. We highlight six possible goals for explanations, all of which contribute to overall satisfaction with the system. We found that most of the existing music recommenders of popular systems provide no explanations, or only very limited ones. Since explanations are not independent of other UI elements in the recommendation process, we consider how the other elements can be used to achieve the same goals. To this end, we evaluated several existing music recommenders. We wanted to discover which of the six goals (transparency, scrutability, effectiveness, persuasiveness, efficiency and trust) the different UI elements promote in the existing music recommenders, and how they could be measured, in order to create a simple framework for evaluating recommender UIs. By using this framework, designers of recommendation systems could promote users' trust and overall satisfaction towards a recommender system, thereby improving the user experience with the system.
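As a rough illustration of the evaluation framework the paper aims at, the sketch below maps hypothetical UI elements to the six goals and counts how well each goal is covered. The element names and goal assignments are invented for illustration, not taken from the paper.

```python
# Hedged sketch: scoring a recommender UI against the six explanation goals
# discussed above. The UI elements and goal mappings are invented for
# illustration; the paper's framework may assign them differently.

GOALS = {"transparency", "scrutability", "effectiveness",
         "persuasiveness", "efficiency", "trust"}

# Which goals each UI element is judged to promote (hypothetical mapping).
ui_elements = {
    "why-this-song text": {"transparency", "trust"},
    "editable taste profile": {"scrutability", "effectiveness"},
    "similar-artists graph": {"transparency", "persuasiveness"},
    "one-click feedback buttons": {"efficiency", "scrutability"},
}

def goal_coverage(elements: dict[str, set[str]]) -> dict[str, int]:
    """Count how many UI elements promote each goal."""
    coverage = {goal: 0 for goal in GOALS}
    for promoted in elements.values():
        for goal in promoted & GOALS:
            coverage[goal] += 1
    return coverage

print(goal_coverage(ui_elements))  # goals with count 0 are unsupported by the UI
```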

15 citations

Proceedings ArticleDOI
23 Nov 2010
TL;DR: In this paper, the authors introduce explicit (CCE), hidden (HCCE) and asymmetric (ACCE) variants of a procedure that eliminates covered clauses from CNF formulas.
Abstract: Generalizing the novel clause elimination procedures developed in [1], we introduce explicit (CCE), hidden (HCCE), and asymmetric (ACCE) variants of a procedure that eliminates covered clauses from CNF formulas. We show that these procedures are more effective in reducing CNF formulas than the respective variants of blocked clause elimination, and may hence be interesting as new preprocessing/simplification techniques for SAT solving.
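For readers unfamiliar with clause elimination, the sketch below shows the simpler blocked-clause test that covered clause elimination generalizes; it is not the CCE/HCCE/ACCE procedure itself, and the encoding of clauses as sets of signed integer literals is an assumption of the sketch.

```python
# Hedged sketch: the blocked-clause test that covered clause elimination (CCE)
# generalizes. Clauses are frozensets of integer literals (DIMACS-style:
# a negative integer is a negated variable). Illustrative only.

def is_tautology(clause: frozenset) -> bool:
    """A clause is a tautology if it contains a literal and its negation."""
    return any(-lit in clause for lit in clause)

def resolvent(c: frozenset, d: frozenset, lit: int) -> frozenset:
    """Resolve clause c (containing lit) with clause d (containing -lit)."""
    return (c - {lit}) | (d - {-lit})

def is_blocked(clause: frozenset, formula: list) -> bool:
    """Clause is blocked if some literal in it resolves only tautologically
    against every clause of the formula containing its negation."""
    for lit in clause:
        partners = [d for d in formula if -lit in d]
        if all(is_tautology(resolvent(clause, d, lit)) for d in partners):
            return True
    return False

def eliminate_blocked_clauses(formula: list) -> list:
    """Remove blocked clauses until a fixpoint; this preserves satisfiability."""
    formula = list(formula)
    changed = True
    while changed:
        changed = False
        for clause in list(formula):
            rest = [d for d in formula if d is not clause]
            if is_blocked(clause, rest):
                formula = rest
                changed = True
    return formula

# (x or y) and (-x or -y): both clauses end up removed.
print(eliminate_blocked_clauses([frozenset({1, 2}), frozenset({-1, -2})]))
```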

15 citations

Proceedings ArticleDOI
28 May 2008
TL;DR: PuppetWall is presented, a multi-user, multimodal system intended for digitally augmented puppeteering that allows natural interaction to control puppets and manipulate playgrounds comprising background, props, and puppets.
Abstract: Recently, multimodal and affective technologies have been adopted to support expressive and engaging interaction, bringing up a plethora of new research questions. Among the challenges, two essential topics are 1) how to devise truly multimodal systems that can be used seamlessly for customized performance and content generation, and 2) how to utilize the tracking of emotional cues and respond to them in order to create affective interaction loops. We present PuppetWall, a multi-user, multimodal system intended for digitally augmented puppeteering. This application allows natural interaction to control puppets and manipulate playgrounds comprising background, props, and puppets. PuppetWall utilizes hand movement tracking, a multi-touch display and emotion speech recognition input for interfacing. Here we document the technical features of the system and an initial evaluation. The evaluation involved two professional actors and also aimed at exploring naturally emerging expressive speech categories. We conclude by summarizing challenges in tracking emotional cues from acoustic features and their relevance for the design of affective interactive systems.
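The abstract's closing point concerns tracking emotional cues from acoustic features; the sketch below extracts two such cues (pitch and short-time energy) from a speech recording. Using librosa for this is an assumption of the sketch; the paper does not describe PuppetWall's actual audio pipeline here.

```python
# Hedged sketch: extracting two common acoustic cues (pitch and energy) that
# emotion-tracking systems such as the one discussed above often rely on.
# The choice of librosa is an assumption; the paper does not name its toolchain.

import numpy as np
import librosa

def acoustic_cues(wav_path: str) -> dict:
    """Return coarse prosodic statistics for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    # Fundamental frequency (pitch) track; unvoiced frames come back as NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    # Short-time energy via RMS.
    rms = librosa.feature.rms(y=y)[0]
    return {
        "mean_pitch_hz": float(np.nanmean(f0)),
        "pitch_range_hz": float(np.nanmax(f0) - np.nanmin(f0)),
        "mean_energy": float(rms.mean()),
    }

# Wide pitch range and high energy often correlate with aroused, expressive speech.
# print(acoustic_cues("utterance.wav"))
```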

15 citations

Posted Content
TL;DR: Zhang et al. apply two unsupervised learning algorithms, PCA and ICA, to the outputs of a deep Convolutional Neural Network trained on ImageNet with 1000 classes.
Abstract: The outputs of a trained neural network contain much richer information than just a one-hot classification. For example, a neural network might give an image of a dog a one-in-a-million probability of being a cat, but that is still much larger than its probability of being a car. To reveal the hidden structure in these outputs, we apply two unsupervised learning algorithms, PCA and ICA, to the outputs of a deep Convolutional Neural Network trained on ImageNet with 1000 classes. The PCA/ICA embedding of the object classes reveals their visual similarity, and the PCA/ICA components can be interpreted as common visual features shared by similar object classes. As an application, we propose a new zero-shot learning method in which the visual features learned by PCA/ICA are employed. Our zero-shot learning method achieves state-of-the-art results on ImageNet with over 20000 classes.
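A minimal sketch of the decomposition step is given below: PCA and ICA (here via scikit-learn, an assumed choice) are fitted to a matrix of class-probability outputs so that each class receives a low-dimensional embedding. The data, dimensions, and orientation of the decomposition are placeholders, not the paper's ImageNet setup.

```python
# Hedged sketch: applying PCA and ICA to class-probability outputs of a trained
# classifier to obtain class embeddings, in the spirit of the paper above.
# Shapes and data here are random placeholders, not real ImageNet outputs.

import numpy as np
from sklearn.decomposition import PCA, FastICA

n_images, n_classes, n_components = 2000, 1000, 64

# Stand-in for softmax outputs of a trained CNN over n_images images.
rng = np.random.default_rng(0)
outputs = rng.dirichlet(np.ones(n_classes), size=n_images)  # (n_images, n_classes)

# Each column of `outputs` is one class's response across images; embedding the
# classes here means decomposing the transposed matrix (classes x images).
pca = PCA(n_components=n_components)
class_embedding_pca = pca.fit_transform(outputs.T)  # (n_classes, n_components)

ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
class_embedding_ica = ica.fit_transform(outputs.T)  # (n_classes, n_components)

# Nearby rows correspond to visually similar classes; a zero-shot classifier can
# place an unseen class into this space and label images by nearest embedding.
print(class_embedding_pca.shape, class_embedding_ica.shape)
```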

15 citations


Authors

Showing all 632 results

Name                    H-index    Papers    Citations
Dimitri P. Bertsekas    94         332       85939
Olli Kallioniemi        90         353       42021
Heikki Mannila          72         295       26500
Jukka Corander          66         411       17220
Jaakko Kangasjärvi      62         146       17096
Aapo Hyvärinen          61         301       44146
Samuel Kaski            58         522       14180
Nadarajah Asokan        58         327       11947
Aristides Gionis        58         292       19300
Hannu Toivonen          56         192       19316
Nicola Zamboni          53         128       11397
Jorma Rissanen          52         151       22720
Tero Aittokallio        52         271       8689
Juha Veijola            52         261       19588
Juho Hamari             51         176       16631
Network Information
Related Institutions (5)
Google
39.8K papers, 2.1M citations

93% related

Microsoft
86.9K papers, 4.1M citations

93% related

Carnegie Mellon University
104.3K papers, 5.9M citations

91% related

Facebook
10.9K papers, 570.1K citations

91% related

Performance Metrics
No. of papers from the Institution in previous years
Year    Papers
2023    1
2022    4
2021    85
2020    97
2019    140
2018    127