
Showing papers by "Helsinki Institute for Information Technology published in 2006"


Proceedings Article
13 Jul 2006
TL;DR: In this paper, the problem of learning the best Bayesian network structure with respect to a decomposable score such as BDe, BIC or AIC is studied; the problem is known to be NP-hard, so solving it quickly becomes infeasible as the number of variables increases.
Abstract: We study the problem of learning the best Bayesian network structure with respect to a decomposable score such as BDe, BIC or AIC. This problem is known to be NP-hard, which means that solving it becomes quickly infeasible as the number of variables increases. Nevertheless, in this paper we show that it is possible to learn the best Bayesian network structure with over 30 variables, which covers many practically interesting cases. Our algorithm is less complicated and more efficient than the techniques presented earlier. It can be easily parallelized, and offers a possibility for efficient exploration of the best networks consistent with different variable orderings. In the experimental part of the paper we compare the performance of the algorithm to the previous state-of-the-art algorithm. Free source-code and an online-demo can be found at http://b-course.hiit.fi/bene.
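The dynamic-programming idea behind exact structure learning can be illustrated with a simplified sketch: compute the best parent set for each variable within each candidate predecessor set, then build the best network over subsets by choosing a sink. This is a didactic, exponential-time illustration, not the paper's optimised algorithm; `local_score` is a placeholder for a decomposable score such as BDe, BIC, or AIC.

```python
from itertools import combinations

def best_network(variables, local_score):
    """Exact structure learning by dynamic programming over variable subsets.
    local_score(v, parents) -> float (higher is better); assumed decomposable.
    Exponential in len(variables): for small toy instances only."""
    n = len(variables)
    best_parents = {}  # (v, allowed predecessors) -> (score, best parent set)
    for v in variables:
        others = [u for u in variables if u != v]
        for r in range(len(others) + 1):
            for cand in combinations(others, r):
                cand = frozenset(cand)
                best_parents[(v, cand)] = max(
                    ((local_score(v, frozenset(p)), frozenset(p))
                     for k in range(len(cand) + 1)
                     for p in combinations(sorted(cand), k)),
                    key=lambda t: t[0])
    # best_net[S] = (best total score of a network over subset S, its sink)
    best_net = {frozenset(): (0.0, None)}
    for r in range(1, n + 1):
        for s in combinations(variables, r):
            s = frozenset(s)
            best_net[s] = max(
                ((best_net[s - {v}][0] + best_parents[(v, s - {v})][0], v)
                 for v in s),
                key=lambda t: t[0])
    order, s = [], frozenset(variables)
    while s:  # peel off sinks to recover an optimal variable ordering
        v = best_net[s][1]
        order.append(v)
        s = s - {v}
    order.reverse()
    return best_net[frozenset(variables)][0], order
```

The subset DP also makes the "different variable orderings" exploration mentioned in the abstract natural: each subset stores its best sink, so optimal orderings can be read off by backtracking.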

378 citations


Journal ArticleDOI
TL;DR: Playing against another human elicited higher spatial presence, engagement, anticipated threat, post-game challenge appraisals, and physiological arousal, as well as more positively valenced emotional responses, compared to playing against a computer.
Abstract: The authors examined whether the nature of the opponent (computer, friend, or stranger) influences spatial presence, emotional responses, and threat and challenge appraisals when playing video games. In a within-subjects design, participants played two different video games against a computer, a friend, and a stranger. In addition to self-report ratings, cardiac interbeat intervals (IBIs) and facial electromyography (EMG) were measured to index physiological arousal and emotional valence. When compared to playing against a computer, playing against another human elicited higher spatial presence, engagement, anticipated threat, post-game challenge appraisals, and physiological arousal, as well as more positively valenced emotional responses. In addition, playing against a friend elicited greater spatial presence, engagement, and self-reported and physiological arousal, as well as more positively valenced facial EMG responses, compared to playing against a stranger. The nature of the opponent influences spatial presence when playing video games, possibly through the mediating influence on arousal and attentional processes.

338 citations


Journal ArticleDOI
TL;DR: Three ways to support interruption tolerance by means of task and interface design are suggested: actively facilitating the development of memory skills, matching encoding speed to task processing demands, and supporting encoding-retrieval symmetry.
Abstract: Typically, we have several tasks at hand, some of which are in interrupted state while others are being carried out. Most of the time, such interruptions are not disruptive to task performance. Based on the theory of Long-Term Working Memory (LTWM; Ericsson, K.A., Kintsch, W., 1995. Long-term working memory. Psychological Review, 102, 211-245), we posit that unless there are enough mental skills and resources to encode task representations to retrieval structures in long-term memory, the resulting memory traces will not enable reinstating the information, which can lead to memory losses. However, once encoded to LTWM, they are virtually safeguarded. Implications of the theory were tested in a series of experiments in which the reading of an expository text was interrupted by a 30-s interactive task, after which the reading was continued. The results convey the remarkably robust nature of skilled memory-when LTWM encoding speed is fast enough for the task-processing imposed by the interface, interruptions have no effect on memory, regardless of their pacing, intensity, or difficulty. In the final experiment where presentation time in the main task was notably speeded up to match the limits of encoding speed, interruptions did hamper memory. Based on the results and the theory, we argue that auditory rehearsal or time-based retrieval cues were not utilized in surviving interruptions and that they are in general weaker strategies for surviving interruptions in complex cognitive tasks. We conclude the paper by suggesting three ways to support interruption tolerance by the means of task and interface design: (1) actively facilitating the development of memory skills, (2) matching encoding speed to task processing demands, and (3) supporting encoding-retrieval symmetry.

117 citations


Proceedings ArticleDOI
10 May 2006
TL;DR: This paper presents the design and prototype implementation of the context-awareness support of the Multi-Protocol Service Discovery and Access (MSDA) middleware, along with experimental results that demonstrate the advantages derived by introducing context awareness.
Abstract: Service discovery is a critical functionality of emerging pervasive computing environments. In such environments, service discovery mechanisms need to (i) overcome the heterogeneity of hardware devices, software platforms, and networking infrastructures; and (ii) provide users with an accurate selection of services that meet their current requirements. To address these issues, we have developed the Multi-Protocol Service Discovery and Access (MSDA) middleware platform, which provides context-aware service discovery and access in pervasive environments. This paper primarily focuses on the design and implementation of the context-awareness support of MSDA. Context-awareness not only provides a more accurate service selection, but also enables a more efficient dissemination of service requests across heterogeneous pervasive environments. We present the design and prototype implementation of MSDA, along with experimental results that demonstrate the advantages derived by introducing context awareness.

98 citations


Journal ArticleDOI
TL;DR: The dissonance between global versus local network divergence suggests that the interspecies similarity of the global network properties is of limited biological significance, at best, and that the biologically relevant aspects of the architectures of gene coexpression are specific and particular, rather than universal.
Abstract: A genome-wide comparative analysis of human and mouse gene expression patterns was performed in order to evaluate the evolutionary divergence of mammalian gene expression. Tissue-specific expression profiles were analyzed for 9,105 human-mouse orthologous gene pairs across 28 tissues. Expression profiles were resolved into species-specific coexpression networks, and the topological properties of the networks were compared between species. At the global level, the topological properties of the human and mouse gene coexpression networks are, essentially, identical. For instance, both networks have topologies with small-world and scale-free properties as well as closely similar average node degrees, clustering coefficients, and path lengths. However, the human and mouse coexpression networks are highly divergent at the local level: only a small fraction (<10%) of coexpressed gene pair relationships are conserved between the two species. A series of controls for experimental and biological variance show that most of this divergence does not result from experimental noise. We further show that, while the expression divergence between species is genuinely rapid, expression does not evolve free from selective (functional) constraint. Indeed, the coexpression networks analyzed here are demonstrably functionally coherent as indicated by the functional similarity of coexpressed gene pairs, and this pattern is most pronounced in the conserved human-mouse intersection network. Numerous dense network clusters show evidence of dedicated functions, such as spermatogenesis and immune response, that are clearly consistent with the coherence of the expression patterns of their constituent gene members. 
The dissonance between global versus local network divergence suggests that the interspecies similarity of the global network properties is of limited biological significance, at best, and that the biologically relevant aspects of the architectures of gene coexpression are specific and particular, rather than universal. Nevertheless, there is substantial evolutionary conservation of the local network structure which is compatible with the notion that gene coexpression networks are subject to purifying selection.
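The "<10% of coexpressed gene pair relationships are conserved" finding is, at its core, a set-overlap computation over orthologous gene pairs. A minimal sketch (the pair representation and function name are illustrative, not the paper's pipeline):

```python
def conserved_fraction(coexpr_a, coexpr_b):
    """Fraction of coexpressed pairs in network A that also appear in network B.
    coexpr_a, coexpr_b: sets of frozenset({gene1, gene2}) pairs, with genes
    already mapped to a common orthologous identifier space."""
    if not coexpr_a:
        return 0.0
    return len(coexpr_a & coexpr_b) / len(coexpr_a)
```

The intersection `coexpr_a & coexpr_b` is the conserved human-mouse network the abstract refers to, in which functional coherence was most pronounced.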

88 citations


Journal ArticleDOI
TL;DR: A probabilistic model, Bayesian networks, is applied to analyze direct influences between protein residues and exposure to treatment in clinical HIV-1 protease sequences from diverse subtypes, determining the specific role of many resistance mutations against the protease inhibitor nelfinavir and relationships between resistance mutations and polymorphisms.
Abstract: Human Immunodeficiency Virus-1 (HIV-1) antiviral resistance is a major cause of antiviral therapy failure and compromises future treatment options. As a consequence, resistance testing is the standard of care. Because of the high degree of HIV-1 natural variation and complex interactions, the role of resistance mutations is in many cases insufficiently understood. We applied a probabilistic model, Bayesian networks, to analyze direct influences between protein residues and exposure to treatment in clinical HIV-1 protease sequences from diverse subtypes. We can determine the specific role of many resistance mutations against the protease inhibitor nelfinavir, and determine relationships between resistance mutations and polymorphisms. We can show for example that in addition to the well-known major mutations 90M and 30N for nelfinavir resistance, 88S should not be treated as 88D but instead considered as a major mutation and explain the subtype-dependent prevalence of the 30N resistance pathway. Contact: koen.deforche@uz.kuleuven.ac.be Supplementary information: Supplementary data are available at Bioinformatics online.

84 citations


Proceedings ArticleDOI
01 Nov 2006
TL;DR: The design, prototype implementation, and experimental evaluation of Contory, a middleware specifically designed to accomplish efficient context provisioning on mobile devices, are presented; results obtained in a testbed of smart phones demonstrate the feasibility of the approach and quantify the cost of supporting context provisioning in terms of energy consumption.
Abstract: Context-awareness can serve to make ubiquitous applications deployed for mobile devices adaptive, personalized, and accessible in dynamically changing environments. Unfortunately, existing approaches for the provisioning of context information in ubiquitous computing environments rarely take into consideration the resource constraints of mobile devices and the uncertain availability of sensors and service infrastructures. This paper presents the design, prototype implementation, and experimental evaluation of Contory, a middleware specifically designed to accomplish efficient context provisioning on mobile devices. To make context provisioning flexible and adaptive based on dynamic operating conditions, Contory integrates multiple context provisioning strategies, namely internal sensors-based, external infrastructure-based, and distributed provisioning in ad hoc networks. Applications can request context information provided by Contory using a declarative query language which features on-demand, periodic, and event-based context queries. Experimental results obtained in a testbed of smart phones demonstrate the feasibility of our approach and quantify the cost of supporting context provisioning in terms of energy consumption.

75 citations


Proceedings ArticleDOI
17 Jul 2006
TL;DR: This work uses social identity theory to motivate why some locations really are significant to the user and considers a more realistic setting where the information consists of GSM cell transitions that are enriched with GPS information whenever a GPS device is available.
Abstract: Existing context-aware mobile applications often rely on location information. However, raw location data such as GPS coordinates or GSM cell identifiers are usually meaningless to the user and, as a consequence, researchers have proposed different methods for inferring so-called places from raw data. The places are locations that carry some meaning to the user and to which the user can potentially attach some (meaningful) semantics. Examples of places include home, work, and airport. A shortcoming of existing work is that the labeling has been done in an ad hoc fashion and no motivation has been given for why places would be interesting to the user. As our first contribution we use social identity theory to motivate why some locations really are significant to the user. We also discuss what potential uses for location information social identity theory implies. Another flaw in the existing work is that most of the proposed methods are not suited to realistic mobile settings as they rely on the availability of GPS information. As our second contribution we consider a more realistic setting where the information consists of GSM cell transitions that are enriched with GPS information whenever a GPS device is available. We present four different algorithms for this problem and compare them using real data gathered throughout Europe. In addition, we analyze the suitability of our algorithms for mobile devices.
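One simple baseline for inferring candidate places from cell transition data, not necessarily one of the paper's four algorithms, is to look for long stays within a single cell; the threshold and data shape below are illustrative assumptions:

```python
def detect_places(observations, min_stay=600):
    """Find candidate places as maximal runs in one GSM cell lasting
    at least min_stay seconds.
    observations: list of (timestamp_seconds, cell_id), sorted by time.
    Returns a list of (cell_id, start_time, last_seen_time)."""
    if not observations:
        return []
    places, current, run_start, prev_t = [], None, None, None
    for t, cell in observations:
        if cell != current:
            # Close the previous run if it was long enough to count as a place.
            if current is not None and prev_t - run_start >= min_stay:
                places.append((current, run_start, prev_t))
            current, run_start = cell, t
        prev_t = t
    if prev_t - run_start >= min_stay:
        places.append((current, run_start, prev_t))
    return places
```

A real system would additionally merge adjacent cells that oscillate (cell handovers near one physical location) and attach GPS coordinates to a run whenever they happen to be available, as the abstract describes.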

74 citations


Journal ArticleDOI
TL;DR: A requirement analysis for m-commerce transactions, a graph-based transaction model, and a Transaction Manager architecture for a wireless application that protects m-commerce workflows against communication link, application, or terminal crash are presented.

69 citations


Proceedings ArticleDOI
17 Jul 2006
TL;DR: This paper introduces a generic, highly customizable simulator designed for testing context-aware applications; it can output context information of individual entities both through an interactive GUI and as data streams of comma-separated values.
Abstract: The complexity associated with gathering and processing contextual data makes testing mobile context-aware applications and services difficult. Furthermore, the lack of standard data sets and simulation tools makes the evaluation of machine learning algorithms in context-aware settings an even harder task. To ease the situation, we introduce a generic simulator that has been designed with the above-mentioned purposes in mind. The simulator has also proven to be a good demonstration tool for mobile services and applications that are aimed at groups. The simulator is highly customizable and it can output context information of individual entities both through an interactive GUI and as data streams consisting of comma-separated values. To support a wide range of tasks and scenarios, we have separated the three main information sources: the behavior of agents, the scenario being simulated, and the context variables used. The simulator has been implemented in Java, and the data streams have been made available through a web service interface.

64 citations


Proceedings ArticleDOI
22 Apr 2006
TL;DR: A messaging application for camera phones with the aim of supporting spectator groups at large-scale events with the idea of collectively created albums called Media Stories, which indicates the centrality of collocated viewing and creation in the use of media.
Abstract: Traditionally, mobile media sharing and messaging has been studied from the perspective of an individual author making media available to other users. With the aim of supporting spectator groups at large-scale events, we developed a messaging application for camera phones with the idea of collectively created albums called Media Stories. The field trial at a rally competition pointed out the collective and participative practices involved in the creation and sense-making of media, challenging the view of individual authorship. Members contributed actively to producing chains of messages in Media Stories, with more than half of the members as authors on average in each story. Observations indicate the centrality of collocated viewing and creation in the use of media. Design implications include providing a "common space" and possibilities of creating collective objects, adding features that enrich collocated collective use, and supporting the active construction of awareness and social presence through the created media.

Proceedings ArticleDOI
10 Oct 2006
TL;DR: The results show the proposed method for differencing XML as ordered trees to be feasible and to have the potential to perform on par with tools of a more complex design in terms of both output size and execution time.
Abstract: With the advent of XML we have seen a renewed interest in methods for computing the difference between trees. Methods that include heuristic elements play an important role in practical applications due to the inherent complexity of the problem. We present a method for differencing XML as ordered trees based on mapping the problem to the domain of sequence alignment, applying simple and efficient heuristics in this domain, and transforming back to the tree domain. Our approach provides a method to quickly compute changes that are meaningful transformations on the XML tree level, and includes subtree move as a primitive operation. We evaluate the feasibility of our approach and benchmark it against a selection of existing differencing tools. The results show our approach to be feasible and to have the potential to perform on par with tools of a more complex design in terms of both output size and execution time.
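The core idea of mapping tree differencing to the sequence domain can be illustrated with a token-level diff based on longest common subsequence; the real method layers tree-aware heuristics and a subtree move operation on top of this kind of alignment:

```python
def lcs_diff(a, b):
    """Token-level diff via longest common subsequence (dynamic programming).
    Returns a list of ('keep' | 'del' | 'ins', token) operations."""
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[i:] and b[j:]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            dp[i][j] = (dp[i + 1][j + 1] + 1 if a[i] == b[j]
                        else max(dp[i + 1][j], dp[i][j + 1]))
    ops, i, j = [], 0, 0
    while i < m and j < n:
        if a[i] == b[j]:
            ops.append(("keep", a[i])); i += 1; j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            ops.append(("del", a[i])); i += 1
        else:
            ops.append(("ins", b[j])); j += 1
    ops += [("del", x) for x in a[i:]] + [("ins", x) for x in b[j:]]
    return ops

# Tokens would come from flattening the XML tree, e.g. by a pre-order walk.
edits = lcs_diff(["<r>", "<a/>", "</r>"], ["<r>", "<a/>", "<b/>", "</r>"])
```

Transforming the resulting insert/delete script back into well-formed tree edits (including detecting moved subtrees) is where the paper's heuristics do their work.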

Journal ArticleDOI
TL;DR: Performance, theatre and dramaturgy have begun to figure in the design of interactive systems, and the CRC Cards technique combines role-playing with scenario walkthroughs and use-cases to provide design teams with a software object’s perspective on the physical environment.

Journal ArticleDOI
TL;DR: EEL will predict the location and structure of conserved enhancers after being provided with two orthologous DNA sequences and binding specificity matrices for the transcription factors (TFs) that are expected to contribute to the function of the enhancers to be identified.
Abstract: This protocol describes the use of Enhancer Element Locator (EEL), a computer program that was designed to locate distal enhancer elements in long mammalian sequences. EEL will predict the location and structure of conserved enhancers after being provided with two orthologous DNA sequences and binding specificity matrices for the transcription factors (TFs) that are expected to contribute to the function of the enhancers to be identified. The freely available EEL software can analyze two 1-Mb sequences with 100 TF motifs in about 15 min on a modern Windows, Linux or Mac computer. The output provides several hypotheses about enhancer location and structure for further evaluation by an expert on enhancer function.

Journal Article
TL;DR: In this article, the authors study the computational complexity of relay placement in energy-constrained wireless sensor networks and prove that all of the considered problem classes are NP-hard, and that in some cases even finding approximate solutions is NP-hard.
Abstract: We study the computational complexity of relay placement in energy-constrained wireless sensor networks. The goal is to optimise balanced data gathering, where the utility function is a weighted sum of the minimum and average amounts of data collected from each sensor node. We define a number of classes of simplified relay placement problems, including a planar problem with a simple cost model for radio communication. We prove that all of these problem classes are NP-hard, and that in some cases even finding approximate solutions is NP-hard.

Book ChapterDOI
21 Jan 2006
TL;DR: This work defines a number of classes of simplified relay placement problems, including a planar problem with a simple cost model for radio communication, and proves that all of these problem classes are NP-hard, and that in some cases even finding approximate solutions is NP-hard.
Abstract: We study the computational complexity of relay placement in energy-constrained wireless sensor networks. The goal is to optimise balanced data gathering, where the utility function is a weighted sum of the minimum and average amounts of data collected from each sensor node. We define a number of classes of simplified relay placement problems, including a planar problem with a simple cost model for radio communication. We prove that all of these problem classes are NP-hard, and that in some cases even finding approximate solutions is NP-hard.
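The balanced data gathering objective described in the abstract is a weighted combination of the minimum and average per-node data amounts; written out as code (the weight name `lam` is illustrative, not the paper's notation):

```python
def balanced_utility(data_amounts, lam=0.5):
    """Utility of a data-gathering solution: a weighted sum of the minimum
    and the average amount of data collected per sensor node.
    lam = 1.0 rewards only the worst-served node; lam = 0.0 only the average."""
    return lam * min(data_amounts) + (1.0 - lam) * sum(data_amounts) / len(data_amounts)
```

The min term is what makes the objective "balanced": a relay placement that starves one node is penalised even if total throughput is high.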

Journal ArticleDOI
TL;DR: The key finding is that drama methods deepen the designers' involvement in the process and improve understanding of the user communities' behavior.

Proceedings ArticleDOI
11 Dec 2006
TL;DR: A summary of experiences with mobile middleware research in the four-year Fuego Core project is presented, covering three middleware services for data communication and synchronization (messaging, event, and file synchronizer) and their development and usage.
Abstract: In this paper, we present a summary of our experiences with mobile middleware research in the four-year Fuego Core project. The presented work focuses on data communication and synchronization. We present three middleware services for data communication and synchronization, namely the messaging, event, and file synchronizer services, and discuss their development and usage. We conclude with an integrated architecture of these services and the lessons we have learned.

Book ChapterDOI
18 Sep 2006
TL;DR: A new pruning method based on combining techniques for closed and non-derivable itemsets that allows further reductions of itemsets and shows that the reduction is significant in some datasets.
Abstract: Itemset mining typically results in large amounts of redundant itemsets. Several approaches such as closed itemsets, non-derivable itemsets and generators have been suggested for losslessly reducing the amount of itemsets. We propose a new pruning method based on combining techniques for closed and non-derivable itemsets that allows further reductions of itemsets. This reduction is done without loss of information, that is, the complete collection of frequent itemsets can still be derived from the collection of closed non-derivable itemsets. The number of closed non-derivable itemsets is bound both by the number of closed and the number of non-derivable itemsets, and never exceeds the smaller of these. Our experiments show that the reduction is significant in some datasets.
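The notion of a closed itemset used here (an itemset with no proper superset of equal support) can be shown with a brute-force sketch; real miners avoid enumerating all subsets, and the non-derivability bounds that the paper combines with closedness are omitted:

```python
from itertools import combinations

def closed_frequent_itemsets(transactions, min_support):
    """Enumerate frequent itemsets and keep only the closed ones.
    transactions: list of sets of items. Brute force: toy data only."""
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    frequent = {}
    for r in range(1, len(items) + 1):
        for c in combinations(items, r):
            s = support(frozenset(c))
            if s >= min_support:
                frequent[frozenset(c)] = s
    # Closed: no proper superset has the same support.
    return {x: s for x, s in frequent.items()
            if not any(x < y and s == frequent[y] for y in frequent)}
```

The reduction is lossless in the sense the abstract describes: every frequent itemset's support equals the support of its smallest closed superset.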

Journal ArticleDOI
TL;DR: In this article, an algorithm for learning PDGs from real-world data is presented; it is capable of learning optimal PDG representations in some cases, and the computational efficiency of PDG models learned from real-life data is very close to that of Bayesian network models.

Journal ArticleDOI
TL;DR: This paper addresses the optimization of content-based routing tables organized using the covering relation and presents novel configurations for improving local and distributed operation and presents the poset-derived forest data structure and variants that perform considerably better under frequent filter additions and removals than existing data structures.
Abstract: Event-based systems are seen as good candidates for supporting distributed applications in dynamic and ubiquitous environments because they support decoupled and asynchronous one-to-many and many-to-many information dissemination. Event systems are widely used because asynchronous messaging provides a flexible alternative to RPC. They are typically implemented using an overlay network of routers. A content-based router forwards event messages based on filters that are installed by subscribers and other routers. This paper addresses the optimization of content-based routing tables organized using the covering relation and presents novel configurations for improving local and distributed operation. We present the poset-derived forest data structure and variants that perform considerably better under frequent filter additions and removals than existing data structures. The results offer a significant performance increase to currently known covering-based routing mechanisms.
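The covering relation at the heart of these routing tables says that filter F1 covers F2 when every event matched by F2 is also matched by F1, so F2 need not be forwarded upstream. A sketch for a toy filter model of per-attribute interval constraints (not the paper's full filter language):

```python
def covers(f1, f2):
    """True if f1 covers f2: every event matched by f2 is matched by f1.
    Filters are dicts mapping attribute -> (low, high) interval constraints;
    an attribute absent from a filter is unconstrained."""
    for attr, (lo1, hi1) in f1.items():
        if attr not in f2:
            return False  # f2 admits any value here, but f1 restricts it
        lo2, hi2 = f2[attr]
        if lo2 < lo1 or hi2 > hi1:
            return False  # f2's interval is not contained in f1's
    return True
```

A poset-derived forest, as in the paper, organises installed filters by this partial order so that additions and removals only need to compare against a small frontier rather than the whole routing table.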

Journal ArticleDOI
TL;DR: A new statistical method for discovery of a causal ordering using non-normality of observed variables is developed to provide a partial solution to the problem.

Journal ArticleDOI
TL;DR: The Steiner quadruple systems of order 16 are classified up to isomorphism by means of an exhaustive computer search, and a consistency check based on double counting is carried out to gain confidence in the correctness of the classification.

Proceedings Article
30 May 2006
TL;DR: This paper extends the SSH and TLS protocols to support resilient connections that can span several sequential TCP connections, and allows sessions to survive both changes in IP addresses and long periods of disconnection.
Abstract: Disconnection of an SSH shell or a secure application session due to network outages or travel is a familiar problem to many Internet users today. In this paper, we extend the SSH and TLS protocols to support resilient connections that can span several sequential TCP connections. The extensions allow sessions to survive both changes in IP addresses and long periods of disconnection. Our design emphasizes deployability in real-world environments, and addresses many of the challenges identified in previous work, including assumptions made about network middleboxes such as firewalls and NATs. We have also implemented the extensions in the OpenSSH and PureTLS software packages and tested them in practice.
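The transport-resilience idea, keeping session state alive while the underlying TCP connection is replaced, can be caricatured with a retry wrapper; the real extensions operate inside the SSH/TLS protocols (resuming cryptographic session state), not at this application level, and all names here are illustrative:

```python
import socket
import time

def resilient_call(host, port, payload, attempts=3, backoff=1.0):
    """Caricature of session resilience: retry the transport while the
    caller's session state survives above it. Returns the peer's reply."""
    for attempt in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=5) as s:
                s.sendall(payload)
                return s.recv(4096)
        except OSError:
            # Transport failed; back off and try a fresh TCP connection.
            time.sleep(backoff * (attempt + 1))
    raise ConnectionError("session could not be resumed")
```

The hard parts the paper addresses are absent from this sketch: re-authenticating the resumed connection securely, replaying in-flight data, and coping with NATs and firewalls that change the visible endpoint between attempts.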

Proceedings ArticleDOI
20 Aug 2006
TL;DR: This work shows that the problem of aggregating partitions of sequential data can be solved optimally in polynomial time using dynamic programming, and proposes faster greedy heuristics that work well in practice.
Abstract: Partitions of sequential data exist either per se or as a result of sequence segmentation algorithms. It is often the case that the same timeline is partitioned in many different ways. For example, different segmentation algorithms produce different partitions of the same underlying data points. In such cases, we are interested in producing an aggregate partition, i.e., a segmentation that agrees as much as possible with the input segmentations. Each partition is defined as a set of continuous non-overlapping segments of the timeline. We show that this problem can be solved optimally in polynomial time using dynamic programming. We also propose faster greedy heuristics that work well in practice. We experiment with our algorithms and we demonstrate their utility in clustering the behavior of mobile-phone users and combining the results of different segmentation algorithms on genomic sequences.
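The polynomial-time result rests on the aggregation cost decomposing over contiguous segments, which admits a textbook O(n²) segmentation DP; `seg_cost` below is a placeholder for the paper's disagreement measure against the input segmentations:

```python
def optimal_partition(n, seg_cost):
    """Optimal partition of points 0..n-1 into contiguous segments.
    seg_cost(i, j) -> cost of making the half-open range [i, j) one segment.
    Returns (total cost, sorted list of segment end boundaries)."""
    INF = float("inf")
    best = [0.0] + [INF] * n   # best[j] = cheapest partition of [0, j)
    cut = [0] * (n + 1)        # cut[j] = start of the last segment in it
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + seg_cost(i, j)
            if c < best[j]:
                best[j], cut[j] = c, i
    bounds, j = [], n
    while j > 0:               # walk the cut points back to recover segments
        bounds.append(j)
        j = cut[j]
    return best[n], sorted(bounds)
```

With a disagreement-counting `seg_cost`, the optimum is the aggregate segmentation; the greedy heuristics mentioned in the abstract trade this optimality for speed on long sequences.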

Proceedings ArticleDOI
04 Jul 2006
TL;DR: Contory is presented, a middleware specifically deployed to support provisioning of context information on mobile devices such as smart phones that integrates multiple strategies for context provisioning, namely internal sensors- based, external infrastructure-based, and distributed provisioning in ad hoc networks.
Abstract: This paper presents Contory, a middleware specifically deployed to support provisioning of context information on mobile devices such as smart phones. Contory integrates multiple strategies for context provisioning, namely internal sensors-based, external infrastructure-based, and distributed provisioning in ad hoc networks. Applications can query Contory about context items of different types, using a declarative query language which features on-demand, periodic, and event-based context queries. Contory allows applications to utilize different provisioning mechanisms depending on resource availability and presence of external infrastructures. This paper illustrates our approach along with its design and implementation on smart phones.

Book ChapterDOI
29 Oct 2006
TL;DR: A system for learning and utilizing context-dependent user models that support rule based reasoning and tree augmented naive Bayesian classifiers (TAN) and is in use in the EU IST project MobiLife.
Abstract: We present a system for learning and utilizing context-dependent user models. The user models attempt to capture the interests of a user and link the interests to the situation of the user. The models are used for making recommendations to applications and services on what might interest the user in her current situation. In the design process we have analyzed several mock-ups of new mobile, context-aware services and applications. The mock-ups spanned rather diverse domains, which helped us to ensure that the system is applicable to a wide range of tasks, such as modality recommendations (e.g., switching to speech output when driving a car), service category recommendations (e.g., journey planners at a bus stop), and recommendations of group members (e.g., people with whom to share a car). The structure of the presented system is highly modular. First of all, this ensures that the algorithms that are used to build the user models can be easily replaced. Secondly, the modularity makes it easier to evaluate how well different algorithms perform in different domains. The current implementation of the system supports rule-based reasoning and tree-augmented naive Bayesian classifiers (TAN). The system consists of three components, each of which has been implemented as a web service. The entire system has been deployed and is in use in the EU IST project MobiLife. In this paper, we detail the components that are part of the system and introduce the interactions between the components. In addition, we briefly discuss the quality of the recommendations that our system produces.

Journal ArticleDOI
02 Oct 2006
TL;DR: It is shown that contrary to related works that deal with the security of spread spectrum and quantisation schemes, for non-iid host signals such as images, principal component analysis is not an appropriate technique to estimate the secret carrier.
Abstract: Security is one of the crucial requirements of a watermarking scheme, because hidden messages such as copyright information are likely to face hostile attacks. In this paper, we question the security of an important class of watermarking schemes based on dither modulation (DM). DM embedding schemes rely on the quantisation of a secret component according to an embedded message, and the strategies used to improve the security of these schemes are the use of a dither vector and the use of a secret carrier. In this paper we show that contrary to related works that deal with the security of spread spectrum and quantisation schemes, for non-iid host signals such as images, principal component analysis is not an appropriate technique to estimate the secret carrier. We propose the use of a blind source separation technique called independent component analysis (ICA) to estimate and remove the watermark. In the case of DM embedding, the watermark signal corresponds to a quantisation noise independent of the host signal. An attacking methodology using ICA is presented for digital images; this attack consists first in estimating the secret carrier by an examination of the high-order statistics of the independent components and second in removing the embedded message by erasing the component related to the watermark. The ICA-based attack scheme is compared with a classical attack that has been proposed for attacking DM schemes. The results reported in this paper demonstrate how changes in natural image statistics can be used to detect watermarks and devise attacks. Different implementations of DM watermarking schemes such as pixel, DCT and spread transform-DM embedding can be attacked successfully. Our attack provides an accurate estimate of the secret key and an average improvement of 2 dB in comparison with optimal additive attacks. 
Such natural image statistics-based attacks may pose a serious threat against watermarking schemes which are based on quantisation techniques.
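Dither modulation, the embedding scheme under attack, quantises a host component onto one of two lattices shifted by a secret dither; a scalar sketch of embed and decode:

```python
def dm_embed(x, bit, step, dither):
    """Embed one bit into scalar x by quantising it to the lattice chosen
    by the bit: lattice points are k*step + dither (+ step/2 for bit 1)."""
    offset = dither + bit * step / 2.0
    return round((x - offset) / step) * step + offset

def dm_decode(y, step, dither):
    """Decode by finding which of the two shifted lattices y is nearest to."""
    d0 = abs(y - dm_embed(y, 0, step, dither))
    d1 = abs(y - dm_embed(y, 1, step, dither))
    return 0 if d0 <= d1 else 1
```

The attack in the paper exploits exactly this structure: the embedding adds quantisation noise that is independent of the host signal, so for non-iid hosts such as images, ICA can isolate the watermark component and estimate the secret carrier.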

Book ChapterDOI
05 Mar 2006
TL;DR: Testing the significance of mixing and demixing coefficients in ICA is discussed, and test statistics for examining the significance of these coefficients are proposed.
Abstract: Independent component analysis (ICA) has been extensively studied since it originated in the field of signal processing. However, almost all of the research has focused on estimation and paid little attention to testing. In this paper, we discuss testing the significance of mixing and demixing coefficients in ICA. We propose test statistics to examine the significance of these coefficients statistically. A simulation experiment implies the good performance of our testing procedure. A real example in psychometrics, which is a new application area of ICA, is also presented.

Proceedings ArticleDOI
26 Jun 2006
TL;DR: This paper proposes the hybrid service provisioning model, which complements the traditional provider-to-consumer interaction model with peer-to-peer functionalities, and presents the design and implementation of a platform that supports this model.
Abstract: Mobile users of future ubiquitous environments require novel means for locating relevant services available in their daily surroundings, where relevance has a user-specific definition. In this paper, we propose the hybrid service provisioning model, which complements the traditional provider-to-consumer interaction model with peer-to-peer functionalities. In addition to being notified proactively and in a context-aware manner about services available in the surroundings, users can generate several types of contextual messages, attach them to services or to the environment, and share them with other peers. We have designed and implemented a platform that supports this hybrid service provisioning model. The current application prototype runs on commercial smart phones. To demonstrate the feasibility and technical deployability of our approach, we conducted field trials in which the research subject was a community of recreational boaters.