
Showing papers by "Helsinki Institute for Information Technology" published in 2010


Proceedings Article
31 Mar 2010
TL;DR: A new estimation principle is presented: nonlinear logistic regression is performed to discriminate between the observed data and artificially generated noise, using the model log-density function in the regression nonlinearity; this leads to a consistent (convergent) estimator of the parameters.
Abstract: We present a new estimation principle for parameterized statistical models. The idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, using the model log-density function in the regression nonlinearity. We show that this leads to a consistent (convergent) estimator of the parameters, and analyze the asymptotic variance. In particular, the method is shown to directly work for unnormalized models, i.e. models where the density function does not integrate to one. The normalization constant can be estimated just like any other parameter. For a tractable ICA model, we compare the method with other estimation methods that can be used to learn unnormalized models, including score matching, contrastive divergence, and maximum-likelihood where the normalization constant is estimated with importance sampling. Simulations show that noise-contrastive estimation offers the best trade-off between computational and statistical efficiency. The method is then applied to the modeling of natural images: We show that the method can successfully estimate a large-scale two-layer model and a Markov random field.
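The core idea of noise-contrastive estimation can be illustrated with a small self-contained sketch (my toy example, not the paper's implementation): fit an unnormalized one-dimensional Gaussian, treating the negative log-normalizer c as a free parameter, via logistic regression between data and Gaussian noise.

```python
import math
import random

random.seed(0)

# Toy data from N(1.5, 1); the model is the *unnormalized* density
# exp(-(x - mu)^2 / 2 + c), where c plays the role of the negative
# log-normalizer and is estimated like any other parameter.
data = [random.gauss(1.5, 1.0) for _ in range(1000)]

# Noise distribution with a known log-density: N(0, 2).
NOISE_SIGMA = 2.0
noise = [random.gauss(0.0, NOISE_SIGMA) for _ in range(1000)]

def log_model(x, mu, c):
    return -0.5 * (x - mu) ** 2 + c

def log_noise(x):
    return (-0.5 * (x / NOISE_SIGMA) ** 2
            - math.log(NOISE_SIGMA * math.sqrt(2.0 * math.pi)))

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t)) if t > -30.0 else 0.0

# Gradient ascent on the NCE objective: classify data vs. noise with
# the log-density ratio G(u) = log_model(u) - log_noise(u) as logit.
mu, c = 0.0, 0.0
lr = 0.05
for _ in range(600):
    g_mu = g_c = 0.0
    for x in data:                  # data points: push G(x) up
        w = 1.0 - sigmoid(log_model(x, mu, c) - log_noise(x))
        g_mu += w * (x - mu)
        g_c += w
    for y in noise:                 # noise points: push G(y) down
        w = sigmoid(log_model(y, mu, c) - log_noise(y))
        g_mu -= w * (y - mu)
        g_c -= w
    mu += lr * g_mu / len(data)
    c += lr * g_c / len(data)

print(round(mu, 2), round(c, 2))   # mu near 1.5, c near -log sqrt(2*pi)
```

Because the normalizer is itself a parameter, c should approach −log √(2π) ≈ −0.92, illustrating the abstract's point that unnormalized models pose no special difficulty for this estimator.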

1,736 citations


Journal ArticleDOI
TL;DR: This work shows how to combine the non-Gaussian instantaneous model with autoregressive models, yielding what is effectively a structural vector autoregression (SVAR) model, and thereby contributes to the long-standing problem of how to estimate SVARs.
Abstract: Analysis of causal effects between continuous-valued variables typically uses either autoregressive models or structural equation models with instantaneous effects. Estimation of Gaussian, linear structural equation models poses serious identifiability problems, which is why it was recently proposed to use non-Gaussian models. Here, we show how to combine the non-Gaussian instantaneous model with autoregressive models. This is effectively what is called a structural vector autoregression (SVAR) model, and thus our work contributes to the long-standing problem of how to estimate SVARs. We show that such a non-Gaussian model is identifiable without prior knowledge of network structure. We propose computationally efficient methods for estimating the model, as well as methods to assess the significance of the causal influences. The model is successfully applied to financial and brain imaging data.

265 citations


Journal ArticleDOI
TL;DR: An in-depth look at HIP is provided, discussing its architecture, design, benefits, potential drawbacks, and ongoing work.
Abstract: The Host Identity Protocol (HIP) is an inter-networking architecture and an associated set of protocols, developed at the IETF since 1999 and reaching their first stable version in 2007. HIP enhances the original Internet architecture by adding a name space used between the IP layer and the transport protocols. This new name space consists of cryptographic identifiers, thereby implementing the so-called identifier/locator split. In the new architecture, the new identifiers are used in naming application level end-points (sockets), replacing the prior identification role of IP addresses in applications, sockets, TCP connections, and UDP-based send and receive system calls. IPv4 and IPv6 addresses are still used, but only as names for topological locations in the network. HIP can be deployed such that no changes are needed in applications or routers. Almost all pre-compiled legacy applications continue to work, without modifications, for communicating with both HIP-enabled and non-HIP-enabled peer hosts. The architectural enhancement implemented by HIP has profound consequences. A number of the previously hard networking problems become suddenly much easier. Mobility, multi-homing, and baseline end-to-end security integrate neatly into the new architecture. The use of cryptographic identifiers allows enhanced accountability, thereby providing a base for easier build up of trust. With privacy enhancements, HIP allows good location anonymity, assuring strong identity only towards relevant trusted parties. Finally, the HIP protocols have been carefully designed to take middle boxes into account, providing for overlay networks and enterprise deployment concerns. This article provides an in-depth look at HIP, discussing its architecture, design, benefits, potential drawbacks, and ongoing work.

250 citations


Proceedings ArticleDOI
10 Apr 2010
TL;DR: In the explicit biofeedback conditions, players were more immersed and positively affected, and they were able to manipulate the game play with the biosignal interface; the report recommends exploring the possibilities of using explicit biofeedback interaction in commercial games.
Abstract: To understand how implicit and explicit biofeedback work in games, we developed a first-person shooter (FPS) game to experiment with different biofeedback techniques. While this area has seen plenty of discussion, there is little rigorous experimentation addressing how biofeedback can enhance human-computer interaction. In our two-part study, (N=36) subjects first played eight different game stages with two implicit biofeedback conditions, with two simulation-based comparison and repetition rounds, then repeated the two biofeedback stages when given explicit information on the biofeedback. The biofeedback conditions were respiration and skin-conductance (EDA) adaptations. Adaptation targets were four balanced player avatar attributes. We collected data with psychophysiological measures (electromyography, respiration, and EDA), a game experience questionnaire, and game-play measures. According to our experiment, implicit biofeedback does not produce significant effects in player experience in an FPS game. In the explicit biofeedback conditions, players were more immersed and positively affected, and they were able to manipulate the game play with the biosignal interface. We recommend exploring the possibilities of using explicit biofeedback interaction in commercial games.

109 citations


Proceedings ArticleDOI
13 Apr 2010
TL;DR: A simple and practical power model for data transmission over an 802.11g WLAN is presented and its accuracy against physical data measured from three popular mobile platforms, Maemo, Android and Symbian is shown.
Abstract: Previous studies have shown that a significant part of the overall energy consumption of battery-powered mobile devices is caused by network data transmission. Power models that describe the power consumption behavior of the network data transmission are therefore an essential tool in estimating the battery lifetime and in minimizing the energy usage of mobile devices. In this paper, we present a simple and practical power model for data transmission over an 802.11g WLAN and show its accuracy against physical data measured from three popular mobile platforms, Maemo, Android and Symbian. Our model estimates the energy usage based on the data transmission flow characteristics which are easily available on all the platforms without modifications to low-level software components or hardware. Based on our measurements and experimentation on real networks we conclude that our model is easy to apply and of adequate accuracy.

85 citations


Journal ArticleDOI
TL;DR: Idea generation (IG), the process by which new designs are deliberately produced, is increasingly conceptualised as a predominantly memory-based activity.
Abstract: Recently, several investigators have begun to conceptualise idea generation (IG), the process by which new designs are deliberately produced, as a predominantly memory-based activity. In this paper...

69 citations


Journal ArticleDOI
01 Apr 2010
TL;DR: It is found that microblogging centers on selective, I-centered disclosure of current activities and experiences, making daily experiences visible for others, and the high frequency of brief and mundane status updates is a second-order effect resulting from posting becoming a routine executed to keep the audience interested.
Abstract: Microblogging is a "Mobile Web 2.0" service category that enables brief blog-like postings from mobile terminals and PCs to the World Wide Web. To shed light on microblogging as a communication genre, we report on multiple analyses of data from the first 10 months of a service called Jaiku. The main finding is that microblogging centers on selective, I-centered disclosure of current activities and experiences, making daily experiences visible for others. The high frequency of brief and mundane status updates, like "working," may be a second-order effect resulting from posting becoming a routine executed to keep the audience interested. The results highlight the importance of reciprocal activity and feedback in users' motivation to invest in this activity.

68 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: A distributed algorithm is presented that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network.
Abstract: We present a distributed algorithm that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network; here Δ is the maximum degree of the graph and W is the maximum weight. As a direct application, we have a distributed 2-approximation algorithm for minimum-weight vertex cover, with the same running time. We also show how to find an f-approximation of minimum-weight set cover in O(f²k² + fk log* W) rounds; here k is the maximum size of a subset in the set cover instance, f is the maximum frequency of an element, and W is the maximum weight of a subset. The algorithms are deterministic, and they can be applied in anonymous networks.

68 citations


Proceedings ArticleDOI
18 Dec 2010
TL;DR: This paper develops a linear regression model with nonnegative coefficients, which describes the aggregate power consumption of the processors, the wireless network interface and the display, and exhibits 2.62 percent median error on real mobile internet services.
Abstract: The growing popularity of mobile internet services, characterized by heavy network transmission, intensive computation and an always-on display, poses a great challenge to the battery lifetime of mobile devices. To manage the power consumption in an efficient way, it is essential to understand how the power is consumed at the system level and to be able to estimate the power consumption during runtime. Although the power modeling of each hardware component has been studied separately, there is no general solution at present of combining them into a system-level power model. In this paper we present a methodology for building a system-level power model without power measurement at the component level. We develop a linear regression model with nonnegative coefficients, which describes the aggregate power consumption of the processors, the wireless network interface and the display. Based on statistics and expert knowledge, we select three hardware performance counters, three network transmission parameters and one display parameter as regression variables. The power estimation, based on our model, exhibits 2.62 percent median error on real mobile internet services.
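As a hedged illustration of the modeling approach (with made-up regression variables, not the paper's actual performance counters), a linear power model with nonnegative coefficients can be fitted by projected gradient descent: ordinary least squares, but clipping coefficients back to zero after each step.

```python
import random

random.seed(1)

# Hypothetical regression variables for a system-level power model:
# CPU utilisation, network transmission rate, display brightness
# (all scaled to [0, 1]); true_w and base are invented for the demo.
true_w = [1.2, 0.8, 0.5]           # watts per unit of each variable
base = 0.3                          # idle (intercept) power in watts

samples = []
for _ in range(200):
    x = [random.random() for _ in range(3)]
    power = base + sum(w * xi for w, xi in zip(true_w, x))
    power += random.gauss(0.0, 0.02)        # measurement noise
    samples.append((x, power))

# Least squares with a nonnegativity constraint, via projected
# gradient descent: step along the squared-error gradient, then
# project any negative coefficient back to zero.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(1500):
    gw = [0.0, 0.0, 0.0]
    gb = 0.0
    for x, y in samples:
        err = b + sum(wi * xi for wi, xi in zip(w, x)) - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    n = len(samples)
    w = [max(0.0, wi - lr * gwi / n) for wi, gwi in zip(w, gw)]
    b -= lr * gb / n

print([round(wi, 2) for wi in w], round(b, 2))
```

The nonnegativity constraint keeps each recovered coefficient physically interpretable as a per-component power cost, which is presumably why the paper imposes it.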

67 citations


Journal ArticleDOI
01 Oct 2010
TL;DR: A trimmed variant of Moebius inversion is developed and applied to combinatorial optimisation problems on graphs with maximum degree Δ; for any constant Δ, the resulting time bounds are O((2−ε)^n) for some ε>0 independent of the number of vertices.
Abstract: We study ways to expedite Yates’s algorithm for computing the zeta and Moebius transforms of a function defined on the subset lattice. We develop a trimmed variant of Moebius inversion that proceeds point by point, finishing the calculation at a subset before considering its supersets. For an n-element universe U and a family ℱ of its subsets, trimmed Moebius inversion allows us to compute the number of packings, coverings, and partitions of U with k sets from ℱ in time within a polynomial factor (in n) of the number of supersets of the members of ℱ. Relying on a projection theorem of Chung et al. (J. Comb. Theory Ser. A 43:23–37, 1986) to bound the sizes of set families, we apply these ideas to well-studied combinatorial optimisation problems on graphs with maximum degree Δ. In particular, we show how to compute the domatic number in time within a polynomial factor of (2^(Δ+1)−2)^(n/(Δ+1)) and the chromatic number in time within a polynomial factor of (2^(Δ+1)−Δ−1)^(n/(Δ+1)). For any constant Δ, these bounds are O((2−ε)^n) for some ε>0 independent of the number of vertices n.
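The zeta and Moebius transforms on the subset lattice that Yates's algorithm computes can be sketched in a few lines of Python; the trimming and the packing/covering/partition counts from the paper are omitted here.

```python
# Yates-style fast zeta and Moebius transforms on the subset lattice
# of an n-element universe, each running in O(2^n * n) time.
def zeta(f, n):
    """g[S] = sum of f[T] over all subsets T of S."""
    f = list(f)
    for i in range(n):                       # handle one element at a time
        for mask in range(1 << n):
            if mask & (1 << i):
                f[mask] += f[mask ^ (1 << i)]
    return f

def moebius(g, n):
    """Inverse of zeta: recover f from its subset sums g."""
    g = list(g)
    for i in range(n):                       # same sweep, subtracting
        for mask in range(1 << n):
            if mask & (1 << i):
                g[mask] -= g[mask ^ (1 << i)]
    return g

n = 4
f = [mask * mask % 7 for mask in range(1 << n)]   # arbitrary test values
g = zeta(f, n)
S = 0b0101
# Check one entry against the naive O(4^n) definition:
assert g[S] == sum(f[T] for T in range(1 << n) if T & ~S == 0)
assert moebius(g, n) == f                          # inversion recovers f
print("ok")
```

The naive subset-sum definition costs O(4^n); exploiting one element per sweep brings it down to O(2^n · n), which is the starting point the paper then trims further.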

67 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the dynamics of carbon balance components: gross primary production (GPP) and total ecosystem respiration (TER), of a boreal Scots pine forest in Southern Finland.
Abstract: We analyzed the dynamics of carbon balance components: gross primary production (GPP) and total ecosystem respiration (TER), of a boreal Scots pine forest in Southern Finland. The main focus is on investigations of environmental drivers of GPP and TER and how they affect the inter-annual variation in the carbon balance in autumn (September–December). We used standard climate data and CO2 exchange measurements collected by the eddy covariance (EC) technique over 11 years. EC data revealed that increasing autumn temperature significantly enhances TER: the temperature sensitivity was 9.5 gC m−2 °C−1 for the period September–October (early autumn when high radiation levels still occur) and 3.8 gC m−2 °C−1 for November–December (late autumn with suppressed radiation level). The cumulative GPP was practically independent of the temperature in early autumn. In late autumn, air temperature could explain part of the variation in GPP but the temperature sensitivity was very weak, less than 1 gC m−2 °C−1. Two models, a stand photosynthesis model (COCA) and a global vegetation model (ORCHIDEE), were used for estimating stand GPP and its sensitivity to the temperature. The ORCHIDEE model was tested against the observations of GPP derived from EC data. The stand photosynthesis model COCA predicted that under a prescribed 3–6 °C temperature increase, a temperature sensitivity of 4–5 gC m−2 °C−1 in GPP may appear in early autumn. The analysis by the ORCHIDEE model revealed the model sensitivity to the temporal treatment of meteorological forcing. The model predictions were similar to the observations when the site level 1/2-hourly time step was applied, but the results calculated by using daily meteorological forcing, interpolated to 1/2-hourly time step, were biased. This is due to the nonlinear relationship between the processes and the environmental factors.

Journal ArticleDOI
TL;DR: It is shown that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem, allowing a distributed (2+ε)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
Abstract: We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2+ε)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
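The phenomenon is easy to observe in a toy experiment (my sketch, not the paper's analysis): run only a fixed number of synchronous propose–accept rounds of Gale–Shapley on random preference lists and count the blocking pairs that remain.

```python
import random

random.seed(2)
n = 30
# Random complete preference lists for n men and n women (by index).
men_pref = [random.sample(range(n), n) for _ in range(n)]
women_rank = []
for w in range(n):
    order = random.sample(range(n), n)
    rank = [0] * n
    for pos, m in enumerate(order):
        rank[m] = pos                # lower rank = more preferred
    women_rank.append(rank)

def truncated_gale_shapley(rounds):
    """Run only `rounds` synchronous propose-accept rounds."""
    next_choice = [0] * n            # next list position each man tries
    engaged_to = [None] * n          # engaged_to[w] = w's current partner
    for _ in range(rounds):
        proposals = [[] for _ in range(n)]
        for m in range(n):
            if m not in engaged_to and next_choice[m] < n:
                w = men_pref[m][next_choice[m]]
                next_choice[m] += 1
                proposals[w].append(m)
        for w in range(n):           # each woman keeps her best suitor
            cands = proposals[w] + (
                [engaged_to[w]] if engaged_to[w] is not None else [])
            if cands:
                engaged_to[w] = min(cands, key=lambda m: women_rank[w][m])
    return engaged_to

def blocking_pairs(engaged_to):
    match_of = {m: w for w, m in enumerate(engaged_to) if m is not None}
    count = 0
    for m in range(n):
        for w in range(n):
            if match_of.get(m) == w:
                continue             # matched pairs never block
            m_better = match_of.get(m) is None or \
                men_pref[m].index(w) < men_pref[m].index(match_of[m])
            w_better = engaged_to[w] is None or \
                women_rank[w][m] < women_rank[w][engaged_to[w]]
            if m_better and w_better:
                count += 1
    return count

for r in (1, 5, 20, 100):
    print(r, blocking_pairs(truncated_gale_shapley(r)))
```

As the abstract predicts, the count of blocking pairs drops rapidly with the number of rounds, so truncating the algorithm already yields an almost stable matching.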

Journal ArticleDOI
TL;DR: The applicability area of multivariate, multi-way ANOVA-type methods to multi-source cases is extended by introducing a novel Bayesian model capable of finding covariate-related dependencies between the sources and to source-specific ones.
Abstract: Motivation: Analysis of variance (ANOVA)-type methods are the default tool for the analysis of data with multiple covariates. These tools have been generalized to the multivariate analysis of high-throughput biological datasets, where the main challenge is the problem of small sample size and high dimensionality. However, the existing multi-way analysis methods are not designed for the currently increasingly important experiments where data is obtained from multiple sources. Common examples of such settings include integrated analysis of metabolic and gene expression profiles, or metabolic profiles from several tissues in our case, in a controlled multi-way experimental setup where disease status, medical treatment, gender and time-series are usual covariates. Results: We extend the applicability area of multivariate, multi-way ANOVA-type methods to multi-source cases by introducing a novel Bayesian model. The method is capable of finding covariate-related dependencies between the sources. It assumes the measurements consist of groups of similarly behaving variables, and estimates the multivariate covariate effects and their interaction effects for the discovered groups of variables. In particular, the method partitions the effects to those shared between the sources and to source-specific ones. The method is specifically designed for datasets with small sample sizes and high dimensionality. We apply the method to a lipidomics dataset from a lung cancer study with two-way experimental setup, where measurements from several tissues with mostly distinct lipids have been taken. The method is also directly applicable to gene expression and proteomics. Availability: An R-implementation is available at http://www.cis.hut.fi/projects/mi/software/multiWayCCA/ Contact: ilkka.huopaniemi@tkk.fi; samuel.kaski@tkk.fi

Journal ArticleDOI
TL;DR: Transitions were found to play a part in how the life interests of late middle-aged persons are often conflictive, forcing them to choose from among various 'possible selves'.
Abstract: More and more people entering the stage of retirement at around age 55-65 are healthy, active, and also very computer-literate. This trend is rapidly changing the common image of late-midlife technology users, which rests on the assumption that they find it difficult to embrace new technologies and also that their main interests are health related. Although technology use and lifestyles are changing, however, many other aspects of life remain the same. One of these aspects is that of the transitions, or life changes, that generally take place in these years. Besides retirement, these transitions include changes in health, housing, social interaction, work life, and personal finance. People develop different ways of coping with these transitions, which brings up interesting issues related to the late midlife stage. This paper presents a diary-aided interview study of late middle-age adults (N=24) in Finland and Sweden with a focus on the interplay between technologies and transitions. Transitions were found to play a part in how the life interests of late middle-aged persons are often conflictive, forcing them to choose from among various 'possible selves'. At its best, technology can help alleviate these tensions. This finding is exemplified in the paper's discussion of two design implications associated with particular clashes of interests, related to how daily activities are organized and how contact is maintained with one's friends and family.

Journal ArticleDOI
TL;DR: In this paper, various statistical and machine learning methods, such as graphical Gaussian models, L1-regularised regression with least absolute shrinkage and selection operator (LASSO), sparse Bayesian regression and Bayesian networks, have been applied to detect species interactions.

Journal ArticleDOI
TL;DR: Properties of the 11 084 874 829 Steiner triple systems of order 19 are examined; all are 3-resolvable, and there are exactly two 3-existentially closed STS(19).
Abstract: Properties of the 11 084 874 829 Steiner triple systems of order 19 are examined. In particular, there is exactly one 5-sparse, but no 6-sparse, STS(19); there is exactly one uniform STS(19); there are exactly two STS(19) with no almost parallel classes; all STS(19) have chromatic number 3; all have chromatic index 10, except for 4 075 designs with chromatic index 11 and two with chromatic index 12; all are 3-resolvable; and there are exactly two 3-existentially closed STS(19).

Journal ArticleDOI
TL;DR: The model unifies previous extensions to independent component analysis such as subspace and topographic models and provides new evidence that localized, oriented, phase-invariant features reflect the statistical properties of natural image patches.
Abstract: We consider a hierarchical two-layer model of natural signals in which both layers are learned from the data. Estimation is accomplished by score matching, a recently proposed estimation principle for energy-based models. If the first-layer outputs are squared and the second-layer weights are constrained to be nonnegative, the model learns responses similar to complex cells in primary visual cortex from natural images. The second layer pools a small number of features with similar orientation and frequency, but differing in spatial phase. For speech data, we obtain analogous results. The model unifies previous extensions to independent component analysis such as subspace and topographic models and provides new evidence that localized, oriented, phase-invariant features reflect the statistical properties of natural image patches.

Journal ArticleDOI
TL;DR: This paper proposes an alternative, information-theoretic framework which sidesteps some of the technical problems facing the Bayesian approach, based on the minimax optimal normalized maximum likelihood (NML) distribution, which is motivated by the minimum description length (MDL) principle.

Book ChapterDOI
18 Nov 2010
TL;DR: This work presents an algorithm for obtaining partial but in the large sample limit correct information about pairwise total causal effects in linear non-Gaussian acyclic models with hidden variables.
Abstract: Causal relationships among a set of observed variables are often modeled using directed acyclic graph (DAG) structures, and learning such structures from data is known as the causal discovery problem. We here consider the learning of linear non-Gaussian acyclic models [9] with hidden variables [5]. Estimation of such models is computationally challenging and hence only possible when the number of variables is small. We present an algorithm for obtaining partial but in the large sample limit correct information about pairwise total causal effects in such a model. In particular, we obtain consistent estimates of the total effects for all variable pairs for which there exists an unconfounded superset of observed variables. Simulations show that the estimated pairwise total effects are good approximations of the true total effects.

Proceedings ArticleDOI
26 Sep 2010
TL;DR: Key principles embodied in Energy Life are: situated and combined feedback, including knowledge and consumption information; intuitiveness and non-intrusiveness, via an always-at-hand solution on a touch-enabled smart phone and lighting as an ambient interface; and sustained interaction and engagement, via an applied game that connects players within and between households.
Abstract: We present Energy Life, a system utilizing wireless sensors, mobile and ambient interfaces that turns energy consumers into active players. Energy Life participants play through different levels, collecting scores in savings and through advice tip reading and quizzes. We describe the principles, logic of the game, implementation and user interfaces, providing the rationale for design choices. Key principles embodied in Energy Life are: situated and combined feedback, including knowledge and consumption information; intuitiveness and non-intrusiveness, by utilizing an always-at-hand solution on a touch-enabled smart phone and lighting as an ambient interface; and sustained interaction and engagement, by using an applied game that connects players within and between households.

Posted Content
TL;DR: In this paper, the authors examine the shift in consumer behavior and business models from a public policy perspective, and present three case studies to examine the key policy issues that virtual goods are giving rise to, and analyze some of the regulatory responses that have been effected so far.
Abstract: Millions of people around the world are spending billions of euros per year on virtual items, characters and currencies in online games, social networking sites, and other digital hangouts. In this paper, we examine this shift in consumer behavior and business models from a public policy perspective. We present three case studies to examine the key policy issues that virtual goods are giving rise to, and analyze some of the regulatory responses that have been effected so far: judicial protection of the possession of virtual goods in Finland and the Netherlands, statutory regulation of virtual goods trade in Korea, and application of consumer protection law to virtual goods sales in Finland. As with the debate over copyright, the first big content policy debate of the digital era, this new digital policy debate tends to pit individual consumers and entrepreneurs against the interests of publishers and established public policy. However, the roles are curiously reversed: it is not the publishers but the consumers who demand that pieces of digital content be respected as property, and turn to courts to enforce their view. While copyright and virtual goods both aim to impose artificial scarcity on non-rivalrous matter, copyright is designed to provide economic incentives to producers, while in virtual goods scarcity provides benefits to consumers directly.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: An alternative image translation solution based on crowdsourcing that is able to account for context and non-textual cues, and provide higher level information to the end-user is proposed.
Abstract: Travelers in countries that use an unfamiliar script cannot use pocket translators or online translation services to understand menus, maps, signs and other important information, because they are unable to write the text they see. Solutions based on optical character recognition provide very limited performance in real-world situations and for complex scripts such as Chinese and Japanese. In this paper, we propose an alternative image translation solution based on crowdsourcing. A large number of human workers on mobile terminals are used to carry out the tasks of image recognition, translation and quality assurance. Compared to purely technical solutions, this human computation approach is also able to account for context and non-textual cues, and provide higher level information to the end-user. In this paper, we describe a preliminary user study to create a model of end-user requirements.

Journal ArticleDOI
TL;DR: A new criterion based on sequentially minimized squared deviations is introduced, which is smaller than both the usual least squares and the squared prediction errors used in PLS, and has a probabilistic interpretation as a model which is asymptotically optimal within the given class of distributions.

Journal ArticleDOI
TL;DR: A generative probabilistic model for protein-protein interaction links and two ways for including gene expression data into the model are introduced and it is shown that these methods outperform a representative set of earlier models in the task of finding biologically relevant modules having enriched functional classes.
Abstract: Functional gene modules and protein complexes are being sought from combinations of gene expression and protein-protein interaction data with various clustering-type methods. Central features missing from most of these methods are handling of uncertainty in both protein interaction and gene expression measurements, and in particular capability of modeling overlapping clusters. It would make sense to assume that proteins may play different roles in different functional modules, and the roles are evidenced in their interactions. We formulate a generative probabilistic model for protein-protein interaction links and introduce two ways for including gene expression data into the model. The model finds interaction components, which can be interpreted as overlapping clusters or functional modules. We demonstrate the performance on two data sets of yeast Saccharomyces cerevisiae. Our methods outperform a representative set of earlier models in the task of finding biologically relevant modules having enriched functional classes. Combining protein interaction and gene expression data with a probabilistic generative model improves discovery of modules compared to approaches based on either data source alone. With a fairly simple model we can find biologically relevant modules better than with alternative methods, and in addition the modules may be inherently overlapping in the sense that different interactions may belong to different modules.

Book ChapterDOI
20 Sep 2010
TL;DR: A mixture of robust canonical correlation analyzers is introduced, using the t-distribution to make the model robust to outliers and variational Bayesian inference for learning from noisy data.
Abstract: We study the problem of extracting statistical dependencies between multivariate signals, to be used for exploratory analysis of complicated natural phenomena. In particular, we develop generative models for extracting the dependencies, made possible by the probabilistic interpretation of canonical correlation analysis (CCA). We introduce a mixture of robust canonical correlation analyzers, using the t-distribution to make the model robust to outliers and variational Bayesian inference for learning from noisy data. We demonstrate the improvements of the new model on artificial data, and further apply it for analyzing dependencies between MEG and measurements of the autonomic nervous system to illustrate potential use scenarios.

Proceedings Article
18 Feb 2010
TL;DR: This paper presents a two-step method, constrained nonlinear ICA followed by statistical independence tests, to distinguish the cause from the effect in the two-variable case, and successfully identifies causes from effects.
Abstract: Distinguishing causes from effects is an important problem in many areas. In this paper, we propose a very general but well defined nonlinear acyclic causal model, namely, post-nonlinear acyclic causal model with inner additive noise, to tackle this problem. In this model, each observed variable is generated by a nonlinear function of its parents, with additive noise, followed by a nonlinear distortion. The nonlinearity in the second stage takes into account the effect of sensor distortions, which are usually encountered in practice. In the two-variable case, if all the nonlinearities involved in the model are invertible, by relating the proposed model to the post-nonlinear independent component analysis (ICA) problem, we give the conditions under which the causal relation can be uniquely found. We present a two-step method, which is constrained nonlinear ICA followed by statistical independence tests, to distinguish the cause from the effect in the two-variable case. We apply this method to solve the problem "CauseEffectPairs" in the Pot-luck challenge, and successfully identify causes from effects.

Book ChapterDOI
19 May 2010
TL;DR: A generic framework and methods for simplification of large networks, which has relative neighborhood graphs, spanning trees, and certain Pathfinder graphs as its special cases, and is applicable to different kinds of graphs.
Abstract: We propose a generic framework and methods for simplification of large networks. The methods can be used to improve the understandability of a given network, to complement user-centric analysis methods, or as a pre-processing step for computationally more complex methods. The approach is path-oriented: edges are pruned while keeping the original quality of best paths between all pairs of nodes (but not necessarily all best paths). The framework is applicable to different kinds of graphs (for instance flow networks and random graphs) and connections can be measured in different ways (for instance by the shortest path, maximum flow, or maximum probability). It has relative neighborhood graphs, spanning trees, and certain Pathfinder graphs as its special cases. We give four algorithmic variants and report on experiments with 60 real biological networks. The simplification methods are part of ongoing projects for intelligent analysis of networked information.
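The path-oriented idea can be sketched with shortest-path distance as the connection quality: drop an edge whenever an equally short alternative route between its endpoints survives, so all pairwise best-path qualities are preserved (though not every best path, matching the abstract). This naive per-edge check is only an illustration under that assumption, not one of the paper's four algorithmic variants.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src in an undirected weighted graph
    given as {node: {neighbor: weight}}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def simplify(adj):
    """Prune each edge not needed for any best (shortest) path:
    drop (u, v) if an equally short alternative u-v route remains."""
    for u in list(adj):
        for v in list(adj[u]):
            if u < v:  # visit each undirected edge once
                w = adj[u].pop(v)
                adj[v].pop(u)
                alt = dijkstra(adj, u).get(v, float("inf"))
                if alt > w:  # edge was essential: restore it
                    adj[u][v] = w
                    adj[v][u] = w
    return adj

# toy network: the direct 1-3 edge is redundant (1-2-3 is equally short)
g = {1: {2: 1.0, 3: 2.0}, 2: {1: 1.0, 3: 1.0}, 3: {1: 2.0, 2: 1.0}}
simplify(g)
print(g)  # the weight-2.0 edge 1-3 is pruned; distances are unchanged
```

Because each removal preserves the distance between the removed edge's endpoints, all pairwise shortest-path distances in the pruned graph equal those in the original.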

Proceedings ArticleDOI
22 Nov 2010
TL;DR: This work aims to propose a space for discussion that can enliven and enrich the dialogue between human-computer interaction and creative practices, and reflects on the different participative strategies and the role of interaction technologies.
Abstract: ParticipART is an initiative aimed at exploring participation in interactive works using ubiquitous computing and mixed reality. It supports and analyses work of artists and creative practitioners incorporating or reflecting on participatory processes to support new roles and forms of engagement for art participants. We aim to propose a space for discussion that can enliven and enrich the dialogue between human-computer interaction and creative practices. We present several works that have been exhibited and experienced. The works are used to reflect on the different participative strategies and the role of interaction technologies: enabling authorship, affording connectivity, interacting with artificial beings, reinterpreting the visitor world, and engaging in performative acts.

Journal ArticleDOI
TL;DR: In this paper, the authors examine the shift in consumer behavior and business models from a public policy perspective, and present three case studies to examine the key policy issues that virtual goods are giving rise to, and analyze some of the regulatory responses that have been effected so far.
Abstract: Millions of people around the world are spending billions of euros per year on virtual items, characters and currencies in online games, social networking sites, and other digital hangouts. In this paper, we examine this shift in consumer behavior and business models from a public policy perspective. We present three case studies to examine the key policy issues that virtual goods are giving rise to, and analyze some of the regulatory responses that have been effected so far: judicial protection of the possession of virtual goods in Finland and the Netherlands, statutory regulation of virtual goods trade in Korea, and application of consumer protection law to virtual goods sales in Finland. As with the debate over copyright, the first big content policy debate of the digital era, this new digital policy debate tends to pit individual consumers and entrepreneurs against the interests of publishers and established public policy. However, the roles are curiously reversed: it is not the publishers but the consumers who demand that pieces of digital content be respected as property, and turn to courts to enforce their view. While copyright and virtual goods both aim to impose artificial scarcity on non-rivalrous matter, copyright is designed to provide economic incentives to producers, while in virtual goods scarcity provides benefits to consumers directly.

Proceedings ArticleDOI
22 Jun 2010
TL;DR: This paper examines roles of actors comprising an inter-domain pub/sub network, together with security requirements and minimal required trust associations arising from this setting, and introduces and analyzes a security design for a clean-slate pub/ sub network architecture that secures both the control and data planes.
Abstract: Several publish/subscribe (pub/sub) and data-oriented networking proposals have been presented to overcome limitations of the current message- and host-centric Internet. However, security issues of these solutions have not been addressed comprehensively. In this paper we examine roles of actors comprising an inter-domain pub/sub network, together with security requirements and minimal required trust associations arising from this setting. We then introduce and analyze a security design for a clean-slate pub/sub network architecture that secures both the control and data planes. The solution addresses availability and data integrity while remaining scalable and usable.