
Showing papers by "Jon Crowcroft published in 2015"


Journal ArticleDOI
21 Oct 2015
TL;DR: This special issue of ACM Transactions on Multimedia Computing, Communications and Applications (TOMM) provides an opportunity to attract and bring together mobile computing, cyber-physical systems, ubiquitous computing, social computing, wireless networking, and multimedia communications researchers along with user interface designers and practitioners with diverse backgrounds to contribute articles on theoretical, practical, and methodological issues for next-generation interactive technologies, systems, and applications using smartphones.
Abstract: Smartphones (or smart mobile devices) have now truly become the ubiquitous computing devices that the late Mark Weiser envisioned in his ubiquitous computing manifesto. Many applications that could only have been dreamed of have now become a reality due to the powerful computing resources, display, sensing, and networking capabilities of smartphones. With applications spanning productivity, entertainment, enterprise, social networking, communications, and mixed reality, the smartphone is the “Swiss army knife” of it all. However, there are still many untapped elements and unlimited possibilities: smartphones can provide next-generation interactive systems with more intuitive and intelligent technologies and applications that have not yet been explored in much detail, especially through the use of mobile computing, sensing, and networking capabilities. By making use of the cyber and physical data accessible to smartphones at a location, new cyber-physical interactive technologies and systems can be designed and integrated to create novel functionalities, methods, and intelligence for interacting with humans and environments, enabling better social experiences as well as more intelligent services and creative applications, such as recommendation systems, advertising platforms, and gaming applications that interact with users’ smartphones on the spot in a given physical environment and situation. This special issue of ACM Transactions on Multimedia Computing, Communications and Applications (TOMM) provides an opportunity to attract and bring together mobile computing, cyber-physical systems, ubiquitous computing, social computing, wireless networking, and multimedia communications researchers, along with user interface designers and practitioners with diverse backgrounds, to contribute articles on theoretical, practical, and methodological issues for next-generation interactive technologies, systems, and applications using smartphones. There were a record number of submissions (38 in total) for this special issue of ACM TOMM. Twelve high-quality, creative, and interesting articles were selected and accepted, which discuss various challenges and emerging directions of smartphone-based interactive technologies, systems, and applications. This special issue starts off with five articles concerning technologies and applications for better use and creation of visual/3D images and augmented reality (AR) on smartphones. The first article, by Zhu et al., is titled “ShotVis: Smartphone-Based Visualization of OCR Information from Images”; it presents an approach to help smartphone users easily read and organize text-based data captured by the smartphone’s camera. The captured images with textual data are first processed by optical character recognition, and the recognized information is made readable through various intuitive visualization techniques selected and refined by smartphone users through on-screen interactions.

323 citations


Journal ArticleDOI
TL;DR: This paper surveys the literature over the period 2004–2014 on the incentive strategies used in participatory sensing, covering the state of the art in theoretical frameworks, applications and system implementations, and experimental studies.
Abstract: Participatory sensing is now becoming more popular and has shown its great potential in various applications. It was originally proposed to recruit ordinary citizens to collect and share massive amounts of sensory data using their portable smart devices. By attracting participants and paying rewards in return, incentive mechanisms play an important role in guaranteeing a stable scale of participation and in improving the accuracy/coverage/timeliness of the sensing results. Along this direction, a considerable amount of research activity has been conducted recently, ranging from experimental studies to theoretical solutions and practical applications, aiming at providing more comprehensive incentive procedures and/or protecting the benefits of different system stakeholders. To this end, this paper surveys the literature over the period 2004–2014 on the incentive strategies used in participatory sensing, covering the state of the art in theoretical frameworks, applications and system implementations, and experimental studies. We also point out future directions for incentive strategies used in participatory sensing.

188 citations


Proceedings Article
04 May 2015
TL;DR: It is shown that QJUMP achieves bounded latency and reduces in-network interference by up to 300×, outperforming Ethernet Flow Control (802.3x), ECN (WRED), and DCTCP, and that it improves average flow completion times, performing close to or better than DCTCP and pFabric.
Abstract: QJUMP is a simple and immediately deployable approach to controlling network interference in datacenter networks. Network interference occurs when congestion from throughput-intensive applications causes queueing that delays traffic from latency-sensitive applications. To mitigate network interference, QJUMP applies Internet QoS-inspired techniques to datacenter applications. Each application is assigned to a latency sensitivity level (or class). Packets from higher levels are rate-limited in the end host, but once allowed into the network can "jump-the-queue" over packets from lower levels. In settings with known node counts and link speeds, QJUMP can support service levels ranging from strictly bounded latency (but with low rate) through to line-rate throughput (but with high latency variance). We have implemented QJUMP as a Linux Traffic Control module. We show that QJUMP achieves bounded latency and reduces in-network interference by up to 300×, outperforming Ethernet Flow Control (802.3x), ECN (WRED) and DCTCP. We also show that QJUMP improves average flow completion times, performing close to or better than DCTCP and pFabric.
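To make the mechanism concrete, the sketch below pairs a per-level token-bucket rate limiter (the end-host side) with a strict-priority queue (the in-network side) so that packets from higher latency-sensitivity levels are dequeued first. It is a minimal Python illustration under assumed level names, rates, and burst sizes; it is not the authors' Linux Traffic Control module.

```python
import heapq
import time

# Hypothetical latency-sensitivity levels: higher level = lower permitted rate,
# higher priority. The rates (bytes/sec) are illustrative, not QJUMP's parameters.
LEVELS = {
    0: {"rate": 1_250_000_000},   # bulk traffic, roughly line rate
    1: {"rate": 10_000_000},
    2: {"rate": 1_000_000},       # latency-critical, strictly rate-limited
}

class TokenBucket:
    """Per-level rate limiter applied at the end host."""
    def __init__(self, rate_bytes_per_s, burst):
        self.rate = rate_bytes_per_s
        self.tokens = burst
        self.burst = burst
        self.last = time.monotonic()

    def allow(self, size):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

class PrioritySwitchQueue:
    """In-network queue: higher levels 'jump the queue' over lower ones."""
    def __init__(self):
        self._q = []
        self._seq = 0

    def enqueue(self, level, packet):
        # Negate level so higher levels pop first; seq keeps FIFO order per level.
        heapq.heappush(self._q, (-level, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._q)[2] if self._q else None

if __name__ == "__main__":
    buckets = {lvl: TokenBucket(cfg["rate"], burst=9000) for lvl, cfg in LEVELS.items()}
    switch = PrioritySwitchQueue()
    # Host side: only admit packets that their level's rate limiter allows.
    for level, size in [(0, 1500), (2, 200), (0, 1500), (2, 200)]:
        if buckets[level].allow(size):
            switch.enqueue(level, f"level-{level} packet ({size}B)")
    # Network side: latency-sensitive packets are served first.
    while (pkt := switch.dequeue()) is not None:
        print(pkt)
```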

176 citations


Journal ArticleDOI
17 Aug 2015
TL;DR: The Databox is discussed, a personal networked device (and associated services) that collates and mediates access to personal data, allowing us to recover control of the authors' online lives.
Abstract: We are in a 'personal data gold rush' driven by advertising being the primary revenue source for most online companies. These companies accumulate extensive personal data about individuals with minimal concern for us, the subjects of this process. This can cause many harms: privacy infringement, personal and professional embarrassment, restricted access to labour markets, restricted access to best value pricing, and many others. There is a critical need to provide technologies that enable alternative practices, so that individuals can participate in the collection, management and consumption of their personal data. In this paper we discuss the Databox, a personal networked device (and associated services) that collates and mediates access to personal data, allowing us to recover control of our online lives. We hope the Databox is a first step to re-balancing power between us, the data subjects, and the corporations that collect and use our data.

142 citations


Proceedings Article
04 May 2015
TL;DR: Jitsu is presented, a new Xen toolstack that satisfies the demands of secure multitenant isolation on resource-constrained embedded ARM devices by using unikernels: lightweight, compact, single address space, memory-safe virtual machines (VMs) written in a high-level language.
Abstract: Network latency is a problem for all cloud services. It can be mitigated by moving computation out of remote datacenters by rapidly instantiating local services near the user. This requires an embedded cloud platform on which to deploy multiple applications securely and quickly. We present Jitsu, a new Xen toolstack that satisfies the demands of secure multitenant isolation on resource-constrained embedded ARM devices. It does this by using unikernels: lightweight, compact, single address space, memory-safe virtual machines (VMs) written in a high-level language. Using fast shared memory channels, Jitsu provides a directory service that launches unikernels in response to network traffic and masks boot latency. Our evaluation shows Jitsu to be a power-efficient and responsive platform for hosting cloud services in the edge network while preserving the strong isolation guarantees of a type-1 hypervisor.
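A rough sense of the launch-on-demand idea can be given by a tiny directory that boots a backend only when the first request for it arrives, holding the connection open so the client simply sees a slightly slower first reply. This is a Python sketch with a simulated boot delay and a hypothetical "hello" service; it is not Jitsu's Xen/unikernel toolstack.

```python
import socket
import threading
import time

# Hypothetical service registry: name -> backend factory. In Jitsu the backend
# would be a unikernel VM booted by the Xen toolstack; here we simulate the
# boot delay and serve a canned reply.
def make_hello_backend():
    time.sleep(0.02)  # stand-in for unikernel boot latency
    return lambda request: b"hello from freshly booted service\n"

REGISTRY = {"hello": make_hello_backend}
RUNNING = {}
LOCK = threading.Lock()

def get_or_launch(name):
    """Launch the backend on first demand; later requests reuse it."""
    with LOCK:
        if name not in RUNNING:
            RUNNING[name] = REGISTRY[name]()
        return RUNNING[name]

def serve(host="127.0.0.1", port=8080):
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024)
                backend = get_or_launch("hello")  # boot happens while the TCP
                conn.sendall(backend(request))    # connection is held open,
                                                  # masking the startup delay

if __name__ == "__main__":
    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.1)
    with socket.create_connection(("127.0.0.1", 8080)) as c:
        c.sendall(b"GET /\n")
        print(c.recv(1024).decode().strip())
```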

109 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive survey of the literature on network-layer multipath solutions and present a detailed investigation of two important design issues, namely, the control plane problem of how to compute and select the routes and the data plane problem for how to split the flow on the computed paths.
Abstract: The Internet is inherently a multipath network: an underlying network with only a single path connecting various nodes would have been debilitatingly fragile. Unfortunately, traditional Internet technologies have been designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even as the underlying network is richly connected and has redundant multiple paths. Computer networks can exploit the power of multiplicity, through which a diverse collection of paths is resource pooled as a single resource, to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent usage of multiple paths) and increased reliability and fault tolerance (through the use of multiple paths in backup/redundant arrangements). There are many emerging trends in networking that signify that the Internet's future will be multipath, including the use of multipath technology in data center computing; the ready availability of multiple heterogeneous radio interfaces (such as Wi-Fi and cellular) in wireless devices; the ubiquity of mobile devices that are multihomed with heterogeneous access networks; and the development and standardization of multipath transport protocols such as multipath TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues, namely, the control plane problem of how to compute and select the routes and the data plane problem of how to split the flow on the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing along with a broad-ranging survey of the vast literature on network-layer multipathing. We also highlight open issues and identify directions for future work.
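One common answer to the data-plane question of how to split a flow across the computed paths is weighted flow-level hashing, which keeps all packets of a flow on one path to avoid reordering. The sketch below illustrates that idea with assumed path names and weights; it is only one of the many splitting strategies the survey covers.

```python
import hashlib
from collections import Counter

# Hypothetical output of the control plane: candidate paths with weights
# proportional to their capacity (illustrative values).
PATHS = [("path-A", 5), ("path-B", 3), ("path-C", 2)]

def pick_path(flow_key, paths=PATHS):
    """Data-plane splitting by weighted flow hashing: every packet of a flow
    maps to the same path (avoiding reordering), while flows in aggregate are
    spread across the paths in proportion to the weights."""
    total = sum(w for _, w in paths)
    point = int.from_bytes(hashlib.sha256(flow_key.encode()).digest()[:8], "big") % total
    for name, weight in paths:
        if point < weight:
            return name
        point -= weight

if __name__ == "__main__":
    flows = [f"10.0.0.{i}:5000->10.0.1.{i}:80" for i in range(1000)]
    print(Counter(pick_path(f) for f in flows))   # roughly a 5:3:2 split
```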

87 citations


Journal ArticleDOI


TL;DR: A novel opportunistic routing approach, ML-SOR (Multi-layer Social Network based Routing), is proposed, which extracts social network information from a multi-layer social network model to perform routing decisions and measures the forwarding capability of a node, compared to an encountered node, in terms of node centrality, tie strength and link prediction.

75 citations


Journal ArticleDOI
TL;DR: This study develops a clean-slate implementation of the Raft protocol, builds an event-driven simulation framework for prototyping it on experimental topologies, empirically validates the correctness of the Raft protocol invariants, and evaluates Raft's understandability claims.
Abstract: The Paxos algorithm is famously difficult to reason about and even more so to implement, despite having been synonymous with distributed consensus for over a decade. The recently proposed Raft protocol lays claim to being a new, understandable consensus algorithm, improving on Paxos without making compromises in performance or correctness. In this study, we repeat the Raft authors' performance analysis. We developed a clean-slate implementation of the Raft protocol and built an event-driven simulation framework for prototyping it on experimental topologies. We propose several optimizations to the Raft protocol and demonstrate their effectiveness under contention. Finally, we empirically validate the correctness of the Raft protocol invariants and evaluate Raft's understandability claims.
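The event-driven simulation approach can be sketched as a priority queue of timestamped events driven by a single simulated clock, as below. This is a generic Python skeleton with an illustrative election-timeout event, not the authors' simulation framework.

```python
import heapq

class Simulator:
    """Minimal event-driven simulator: events are (time, seq, callback)."""
    def __init__(self):
        self.now = 0.0
        self._events = []
        self._seq = 0

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._events and self._events[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._events)
            callback(self)

def follower_timeout(sim, node_id=1):
    """Illustrative Raft-style event: if no heartbeat resets this timer,
    the follower would transition to candidate and start an election."""
    print(f"t={sim.now:.3f}s node {node_id} election timeout -> candidate")

if __name__ == "__main__":
    sim = Simulator()
    sim.schedule(0.15, follower_timeout)                                  # election timer
    sim.schedule(0.05, lambda s: print(f"t={s.now:.3f}s heartbeat delivered"))
    sim.run()
```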

74 citations


Proceedings ArticleDOI
30 Sep 2015
TL;DR: The proposed ring model is used and it is shown that flooding can be constrained within a very small neighbourhood to achieve most of the gains, which come from areas where the growth rate is relatively low, i.e., the network edge.
Abstract: Scoped-flooding is a technique for content discovery in a broad networking context. This paper investigates the effects of scoped-flooding on various topologies in information-centric networking. Using the proposed ring model, we show that flooding can be constrained within a very small neighbourhood to achieve most of the gains, which come from areas where the growth rate is relatively low, i.e., the network edge. We also study two flooding strategies and compare their behaviours. Given that caching schemes favour more popular items in the competition for cache space, popular items are expected to be stored in more diverse parts of the network than less popular items. We propose to exploit the resulting divergence in availability, along with the routers' topological properties, to fine-tune the flooding radius. Our results shed light on designing efficient content discovery mechanisms for future information-centric networks.
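The core idea, flooding bounded to a small radius chosen per item, can be sketched as a hop-limited breadth-first search whose radius shrinks for popular (and therefore widely cached) items. The topology, thresholds, and radius rule below are assumptions for illustration, not the paper's ring model.

```python
from collections import deque

# Illustrative adjacency list of routers; the topology is an assumption.
TOPOLOGY = {
    "r1": ["r2", "r3"], "r2": ["r1", "r4"], "r3": ["r1", "r4"],
    "r4": ["r2", "r3", "r5"], "r5": ["r4"],
}

def flooding_radius(popularity):
    """Toy rule: popular items are cached widely, so a small radius suffices;
    rare items get a slightly larger scope. The threshold is an assumption."""
    return 1 if popularity > 0.5 else 2

def scoped_flood(origin, radius):
    """Breadth-first flood limited to `radius` hops from the requesting router."""
    reached, frontier = {origin}, deque([(origin, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == radius:
            continue
        for nbr in TOPOLOGY[node]:
            if nbr not in reached:
                reached.add(nbr)
                frontier.append((nbr, dist + 1))
    return reached

if __name__ == "__main__":
    print(scoped_flood("r1", flooding_radius(popularity=0.9)))  # popular item: 1 hop
    print(scoped_flood("r1", flooding_radius(popularity=0.1)))  # rare item: 2 hops
```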

61 citations


Journal ArticleDOI
TL;DR: It is proved that deciding whether region-disjoint paths exist is NP-hard and a heuristic region- Disjoint path algorithm is proposed.
Abstract: Due to their importance to society, communication networks should be built and operated to withstand failures. However, cost considerations make network providers less inclined to take robustness measures against failures that are unlikely to manifest, like several failures coinciding simultaneously in different geographic regions of their network. Considering networks embedded in a two-dimensional plane, we study the problem of finding a critical region—a part of the network that can be enclosed by a given elementary figure of predetermined size—whose destruction would lead to the highest network disruption. We determine that only a polynomial (in the input) number of nontrivial positions for such a figure needs to be considered and propose a corresponding polynomial-time algorithm. In addition, we consider region-aware network augmentation to decrease the impact of a regional failure. We subsequently address the region-disjoint paths problem, which asks for two paths with minimum total weight between a source (s) and a destination (d) that cannot both be cut by a single regional failure of diameter D (unless that failure includes s or d). We prove that deciding whether region-disjoint paths exist is NP-hard and propose a heuristic region-disjoint paths algorithm.
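One plausible greedy heuristic, shown below for illustration only (it is not necessarily the paper's algorithm), computes a shortest s-d path and then searches for a second path after removing every node that lies within the failure diameter D of the first path. It assumes planar node coordinates and uses networkx for shortest paths.

```python
import math
import networkx as nx

def region_disjoint_paths(G, pos, s, d, D):
    """Greedy sketch of a region-disjoint paths heuristic: take a shortest s-d
    path, then remove every node lying within distance D of any interior node
    of that path (s and d are exempt, since a failure including them is out of
    scope) and look for a second path in what remains."""
    p1 = nx.shortest_path(G, s, d, weight="weight")
    interior = [u for u in p1 if u not in (s, d)]
    H = G.copy()
    for v in list(H.nodes):
        if v in (s, d):
            continue
        if any(math.dist(pos[v], pos[u]) <= D for u in interior):
            H.remove_node(v)
    try:
        p2 = nx.shortest_path(H, s, d, weight="weight")
    except nx.NetworkXNoPath:
        p2 = None
    return p1, p2

if __name__ == "__main__":
    # Small illustrative topology embedded in the plane.
    pos = {"s": (0, 0), "a": (1, 1), "b": (1, -1), "d": (2, 0)}
    G = nx.Graph()
    for u, v, w in [("s", "a", 1), ("a", "d", 1), ("s", "b", 1), ("b", "d", 1)]:
        G.add_edge(u, v, weight=w)
    print(region_disjoint_paths(G, pos, "s", "d", D=1.5))
```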

58 citations


Journal ArticleDOI
TL;DR: Exploring three interdisciplinary areas, the extent to which they overlap, and whether they are all part of the same larger domain.
Abstract: Exploring three interdisciplinary areas and the extent to which they overlap. Are they all part of the same larger domain?

Posted Content
TL;DR: In this article, the authors propose a technical platform enabling people to engage with the collection, management and consumption of personal data; and that this platform should itself be personal, under the direct control of the individual whose data it holds.
Abstract: We propose there is a need for a technical platform enabling people to engage with the collection, management and consumption of personal data; and that this platform should itself be personal, under the direct control of the individual whose data it holds. In what follows, we refer to this platform as the Databox, a personal, networked service that collates personal data and can be used to make those data available. While your Databox is likely to be a virtual platform, in that it will involve multiple devices and services, at least one instance of it will exist in physical form, on a computing device with associated storage and networking, such as a home hub.

Proceedings ArticleDOI
18 May 2015
TL;DR: A strawman SCN architecture that combines multiple transmission technologies for providing resilient SCN in challenged DIY networks is proposed and key challenges that need to be explored further are identified to realise the full potential of the architecture.
Abstract: Do-It-Yourself (DIY) networks are decentralised networks built by an (often) amateur community. As DIY networks do not rely on the need for backhaul Internet connectivity, these networks are mostly a mix of both offline and online networks. Although DIY networks have their own homegrown services, the current Internet-based cloud services are often useful, and access to some services could be beneficial to the community. Considering that most DIY networks have challenged Internet connectivity, migrating current service virtualisation instances could face great challenges. Service Centric Networking (SCN) has been recently proposed as a potential solution to managing services more efficiently using Information Centric Networking (ICN) principles. In this position paper, we present our arguments for the need for a resilient SCN architecture, propose a strawman SCN architecture that combines multiple transmission technologies for providing resilient SCN in challenged DIY networks and, finally, identify key challenges that need to be explored further to realise the full potential of our architecture.

Proceedings ArticleDOI
18 Nov 2015
TL;DR: The results reveal several interesting findings: rural users use online social networks, instant messaging applications and online games similarly to urban users; they install unnecessary applications on their mobile phones and are completely oblivious to their side effects.
Abstract: Community networks owned and operated by local communities have recently gained popularity as a low-cost solution for Internet access. In this paper, we seek to understand the characteristics of Internet usage in community networks and provide useful insights on designing and improving community networks in rural areas. We report the results of a socio-technical study carried out during a three-month measurement of a community wireless mesh network (CWMN) which has been operating for two years in a rural area of northern Thailand. An on-site social interview was also conducted to supplement our analysis. The results reveal several interesting findings: rural users use online social networks, instant messaging applications and online games similarly to urban users; they install unnecessary applications on their mobile phones and are completely oblivious to their side effects -- the traffic from these applications accounts for a major share of the total traffic, leading to numerous network anomalies. Finally, our analysis uncovers the locality characteristic of community networks, where users in close geographical proximity interact with each other.

Proceedings ArticleDOI
11 Sep 2015
TL;DR: This position paper introduces an open-source platform for WiFi offloading that leverages the programmable feature of software-defined networking (SDN) to enhance extensibility and deployability in a collaborative manner, and exploits context awareness as a use case to demonstrate the efficacy of the solution.
Abstract: Offloading mobile traffic to WiFi networks (WiFi offloading) is a cost-effective technique to alleviate the pressure on mobile networks in meeting the surge of data capacity demand. However, most existing proposals from standards developing organizations (SDOs) and research communities are facing a deployment dilemma, due to overlooking device limitations, lacking user incentives, or missing operator support. In this position paper, we introduce an open-source platform for WiFi offloading to tackle the deployment challenge. Our solution leverages the programmability of software-defined networking (SDN) to enhance extensibility and deployability in a collaborative manner. Inspired by our field measurements covering 4G/LTE and 802.11ac/n, we exploit context awareness as a use case to demonstrate the efficacy of our solution. We also discuss the potential usage by cloud service providers given the opportunities behind the growing popularity of mobile virtual network operators (MVNOs). We have released our platform under open-source licenses to encourage future collaboration and development with SDOs and research communities.

Journal ArticleDOI
TL;DR: This paper considers the problem of efficient data gathering in sensor networks for arbitrary sensor node deployments, and shows that in many cases the output-sensitive approximation solution performs better than the currently known best results for sensor networks.
Abstract: In this paper we consider the problem of efficient data gathering in sensor networks for arbitrary sensor node deployments. The efficiency of the solution is measured by a number of criteria: total energy consumption, total transport capacity, latency and quality of the transmissions. We present a number of different constructions with various tradeoffs between the aforementioned parameters. We provide theoretical performance analysis for our approaches, present their distributed implementation and discuss the different aspects of using each. We show that in many cases our output-sensitive approximation solution performs better than the currently known best results for sensor networks. We also consider our problem in a mobile sensor node environment, where the sensors have no information about each other. The only information a single sensor holds is its current location and future mobility plan. Our simulation results validate the theoretical findings.

Proceedings ArticleDOI
14 Jul 2015
TL;DR: It is described how more accurate models for data-center systems can be designed and used in order to create an evaluation framework that allows the exploration of the energy-performance trade-off in VM consolidation strategies with enhanced fidelity.
Abstract: In this paper we challenge the common evaluation practices used in Virtual Machine (VM) consolidation, such as simulation and small testbeds, which fail to capture the fundamental trade-off between energy consumption and performance. We identify a number of over-simplifying assumptions which are typically made about the energy consumption and performance characteristics of modern networked systems. In response, we describe how more accurate models for data-center systems can be designed and used in order to create an evaluation framework that allows the exploration of the energy-performance trade-off in VM consolidation strategies with enhanced fidelity.

Journal ArticleDOI
TL;DR: The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing along with a broad-ranging survey of the vast literature on network- layer multipath solutions.
Abstract: The Internet is inherently a multipath network: an underlying network with only a single path connecting various nodes would have been debilitatingly fragile. Unfortunately, traditional Internet technologies have been designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even as the underlying network is richly connected and has redundant multiple paths. Computer networks can exploit the power of multiplicity to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent usage of multiple paths) and increased reliability and fault-tolerance (through the use of multiple paths in backup/redundant arrangements). There are many emerging trends in networking that signify that the Internet's future will be unmistakably multipath, including the use of multipath technology in datacenter computing; multi-interface, multi-channel, and multi-antenna trends in wireless; the ubiquity of mobile devices that are multi-homed with heterogeneous access networks; and the development and standardization of multipath transport protocols such as MP-TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues, namely the control plane problem of how to compute and select the routes, and the data plane problem of how to split the flow on the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing along with a broad-ranging survey of the vast literature on network-layer multipathing. We also highlight open issues and identify directions for future work.

Proceedings ArticleDOI
17 Aug 2015
TL;DR: Coracle, a tool for evaluating distributed consensus algorithms in settings that more accurately represent realistic deployments, is developed and used to test two examples of network configurations that contradict the liveness claims of the Raft algorithm.
Abstract: Distributed consensus is fundamental in distributed systems for achieving fault-tolerance. The Paxos algorithm has long dominated this domain, although it has recently been challenged by algorithms such as Raft and Viewstamped Replication Revisited. These algorithms rely on Paxos's original assumptions; unfortunately, these assumptions are now at odds with the reality of the modern Internet. Our insight is that current consensus algorithms have significant availability issues when deployed outside the well-defined context of the datacenter. To illustrate this problem, we developed Coracle, a tool for evaluating distributed consensus algorithms in settings that more accurately represent realistic deployments. We have used Coracle to test two examples of network configurations that contradict the liveness claims of the Raft algorithm. Through the process of exercising these algorithms under more realistic assumptions, we demonstrate wider availability issues faced by consensus algorithms when deployed on real-world networks.

Journal ArticleDOI
TL;DR: This paper considers the problems of providing resilience against loss, and against unacceptable access as a dual, and sees that two apparently different solutions to different technical problems may be transformed into one another, and hence give better insight into both problems.
Abstract: Protecting information has long been an important problem. We would like to protect ourselves from the risk of loss: think of the library of Alexandria; and from unauthorized access: consider the very business of the 'Scandal Sheets', going back centuries. This has never been more true than today, when vast quantities of data (dare one say lesser quantities of information) are stored on computer systems, and routinely moved around the Internet, at almost no cost. Computer and communication systems are both fragile and vulnerable, and so the risk of catastrophic loss or theft is potentially much higher. A single keystroke can delete a public database, or expose a private dataset to the world. In this paper, I consider the problems of providing resilience against loss, and against unacceptable access, as a dual. Here, we see that two apparently different solutions to different technical problems may be transformed into one another, and hence give better insight into both problems.

Proceedings ArticleDOI
07 Sep 2015
TL;DR: This demonstration presents a novel software defined platform for achieving collaborative and energy-aware WiFi offloading that consists of an extensible central controller, programmable offloading agents, and offloading extensions on mobile devices.
Abstract: This demonstration presents a novel software defined platform for achieving collaborative and energy-aware WiFi offloading. The platform consists of an extensible central controller, programmable offloading agents, and offloading extensions on mobile devices. Driven by our extensive measurements of energy consumption on smartphones, we propose an effective energy-aware offloading algorithm and integrate it to our platform. By enabling collaboration between wireless networks and mobile users, our solution can make optimal offloading decisions that improve offloading efficiency for network operators and achieve energy saving for mobile users. To enhance deployability, we have released our platform under open-source licenses on GitHub.
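An energy-aware offloading decision of this kind can be sketched as comparing, per transfer, the estimated energy cost of WiFi versus cellular under a simple power-plus-tail model. The coefficients below are placeholders, not the measured values behind the demonstration.

```python
# Illustrative per-interface energy model: joules to transfer a payload at the
# currently measured throughput, plus a fixed tail/association cost.
# The constants are assumptions, not the paper's measurements.
ENERGY_MODEL = {
    "wifi":     {"power_w": 0.7, "tail_j": 0.5},
    "cellular": {"power_w": 1.2, "tail_j": 2.0},
}

def transfer_energy(iface, size_bytes, throughput_bps):
    m = ENERGY_MODEL[iface]
    seconds = size_bytes * 8 / throughput_bps
    return m["power_w"] * seconds + m["tail_j"]

def should_offload(size_bytes, wifi_bps, cell_bps):
    """Offload to WiFi when it costs less energy for this transfer."""
    e_wifi = transfer_energy("wifi", size_bytes, wifi_bps)
    e_cell = transfer_energy("cellular", size_bytes, cell_bps)
    return e_wifi < e_cell

if __name__ == "__main__":
    # A 50 MB transfer over a decent WiFi link favours offloading.
    print(should_offload(50_000_000, wifi_bps=40_000_000, cell_bps=20_000_000))
```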

Proceedings ArticleDOI
18 May 2015
TL;DR: In this paper, the authors explore how virtual currencies might be used to provide an end-to-end incentive scheme to convince forwarding nodes that it is profitable to send messages on via the lowest latency mechanism available.
Abstract: Devices connected to the Internet today have a wide range of local communication channels available, such as WiFi, Bluetooth or NFC, as well as wired backhaul. In densely populated areas it is possible to create heterogeneous, multihop communication paths using a combination of these technologies, and often to transmit data with lower latency than via a wired Internet connection. However, the potential for sharing meshed wireless radios in this way has never been realised due to the lack of economic incentives to do so on the part of individual nodes. In this paper, we explore how virtual currencies might be used to provide an end-to-end incentive scheme to convince forwarding nodes that it is profitable to send messages on via the lowest-latency mechanism available. Clients inject a small amount of money to transmit a message, and forwarding engines compete to solve a time-locked puzzle that can be claimed by the node that delivers the result with the lowest latency. Our approach naturally extends congestion control techniques to a surge pricing model when available bandwidth is low and does not require latency measurements.
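The time-locked puzzle that forwarders race to solve can be illustrated with the classic repeated-squaring construction, in which the solver must perform an inherently sequential chain of modular squarings. The sketch below uses toy parameters and illustrates the general technique rather than the paper's specific scheme.

```python
import time

def make_puzzle(seed, t, n):
    """Publish (seed, t, n); the solution is seed^(2^t) mod n, which takes t
    sequential squarings to compute (the classic time-lock construction)."""
    return {"seed": seed, "t": t, "n": n}

def solve_puzzle(puzzle):
    x = puzzle["seed"] % puzzle["n"]
    for _ in range(puzzle["t"]):   # inherently sequential work
        x = (x * x) % puzzle["n"]
    return x

if __name__ == "__main__":
    # Toy modulus for illustration; a real deployment would use an RSA modulus
    # whose factorisation lets the puzzle creator shortcut the computation.
    puzzle = make_puzzle(seed=7, t=200_000, n=0xC96E8B9C10A4F3D1)
    start = time.perf_counter()
    answer = solve_puzzle(puzzle)
    print(f"solved in {time.perf_counter() - start:.3f}s, answer={answer}")
```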

Proceedings ArticleDOI
25 May 2015
TL;DR: This paper studies the problem of efficient data recovery using the data mules approach, where a set of mobile sensors with advanced mobility capabilities re-acquire lost data by visiting the neighbors of failed sensors, thereby improving network resiliency.
Abstract: In this paper, we study the problem of efficient data recovery using the data mules approach, where a set of mobile sensors with advanced mobility capabilities re-acquire lost data by visiting the neighbors of failed sensors, thereby improving network resiliency. Our approach involves defining the optimal communication graph and mules' placements such that the overall traveling time and distance are minimized regardless of which sensors crashed. We explore this problem under different practical network topologies such as general graphs, grids and random linear networks, and provide approximation algorithms based on multiple combinatorial techniques. Simulation experiments demonstrate that our algorithms outperform various competitive solutions for different network models, and that they are applicable to practical scenarios.

Proceedings Article
18 May 2015
TL;DR: DIYnet 2015, hosted by ACM MobiSys, is the first in a series of interdisciplinary workshops on DIY networking, which aim to facilitate interdisciplinary exchanges around the complex design space defined by DIY networking solutions, for a more creative interplay between technological and human networks in the city.
Abstract: It is our great pleasure to welcome you to the first of a series of interdisciplinary workshops on DIY networking. They build on a recent successful Dagstuhl seminar by the name "DIY networking: an interdisciplinary perspective", which brought together a highly diverse group of researchers and practitioners to reflect on technological and social issues related to the use of local wireless networks operating outside the public Internet. The seminar initiated a process of bridging the communication gap between those who build technology (e.g. computer scientists, engineers, and hackers) and those who understand better the complex urban environment where this technology is deployed (e.g. social and political scientists, urban planners, designers, and artists). Now in DIYnet 2015 -- hosted by ACM MobiSys -- the participants take one more step toward facilitating interdisciplinary exchanges around the complex design space defined by DIY networking solutions, for a more creative interplay between technological and human networks in the city. The technical programme includes both conceptual and experiential entries with DIY networking applications, and novel scientific contributions on important technical questions. We are also very proud to have with us key people coming from different domains that are close to the common object of enquiry: DIY networking. More specifically, Michael Smyth (Edinburgh Napier University) will give a keynote talk on Urban Interaction Design, and highlight the interdisciplinary perspective of hybrid space design. Paul Dourish (University of California, Irvine) will give a second keynote talk on The Politics of Infrastructure Projects, which will bring in the political dimension. Andreas Unteidig (Berlin University of the Arts) will give a demo of the "hybrid letter box" and introduce the design research perspective. Mathias Jud (independent artist) will give a demo of the community art project http://www.qaul.net, which has received the "Prix Ars electronica [the next idea]", and bring the artistic and activist perspective.

Book ChapterDOI
26 May 2015
TL;DR: A novel one-way message routing scheme based on probabilistic forwarding that guarantees message privacy and sender anonymity through cryptographic means is proposed; utilising an additively homomorphic public-key cryptosystem along with a symmetric cipher.
Abstract: Opinions from people, evident in surveys and microblogging, for instance, may show bias or low user participation due to legitimate concerns about privacy and anonymity. To provide sender (participant) anonymity, the identity of the message sender must be hidden from the message recipient (the opinion collector), and the contents of the actual message must be hidden from any intermediate actors (such as routers) that may be responsible for relaying the message. We propose a novel one-way message routing scheme based on probabilistic forwarding that guarantees message privacy and sender anonymity through cryptographic means, utilising an additively homomorphic public-key cryptosystem along with a symmetric cipher. Our scheme involves intermediate relays and can work with either a centralised or a decentralised registry that helps connect the relays to each other. In addition to theoretical analysis, we demonstrate a real-world prototype built with HTML5 technologies and deployed on a public cloud environment. The prototype allows anonymous messaging over HTTP(S), and has been run inside HTML5 browsers on mobile application environments with no configuration at the network level. While we leave constructing the reverse path as future work, the proposal contained in this paper is complete and has practical applications in anonymous surveys and microblogging.
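The additively homomorphic ingredient can be illustrated with a toy Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is what lets intermediaries combine or re-randomise contributions without reading them. The primes below are tiny and insecure, and the sketch omits the symmetric cipher and the routing protocol itself.

```python
import math
import random

# Toy Paillier parameters: small primes for illustration only (not secure).
P, Q = 541, 547
N = P * Q
N2 = N * N
G = N + 1
LAM = math.lcm(P - 1, Q - 1)
MU = pow((pow(G, LAM, N2) - 1) // N, -1, N)   # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            break
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    return ((pow(c, LAM, N2) - 1) // N * MU) % N

if __name__ == "__main__":
    c1, c2 = encrypt(20), encrypt(22)
    # Additive homomorphism: the product of ciphertexts decrypts to the sum
    # of the plaintexts.
    print(decrypt((c1 * c2) % N2))   # -> 42
```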

Proceedings ArticleDOI
14 Apr 2015
TL;DR: A novel message routing scheme based on probabilistic forwarding that guarantees message privacy and sender anonymity through additively homomorphic public-key encryption and is applicable to anonymous surveys and microblogging is proposed.
Abstract: Opinions from people can either be biased or reflect low participation due to legitimate concerns about privacy and anonymity. To alleviate those concerns, the identity of a message sender should be disassociated from the message while the contents of the actual message should be hidden from any relaying nodes. We propose a novel message routing scheme based on probabilistic forwarding that guarantees message privacy and sender anonymity through additively homomorphic public-key encryption. Our scheme is applicable to anonymous surveys and microblogging.

Proceedings ArticleDOI
08 Jun 2015
TL;DR: An adaptive recommender system based on feedback control frameworks is proposed, which continuously monitors changes and estimates the loss of performance from two perspectives: data problems (data aging and data deficiency) in the training set, and changes of user behavior captured by a “revisiting ratio”.
Abstract: Recommender systems have changed the way people find products, information, and even their social circles. However, most existing research neglects their time-varying nature, i.e., the growing input data and changing user behavior. To sustain high recommendation accuracy, systems have to be updated regularly; however, the more often updates are performed, the higher the cost in time and other computational resources. Thus, it is critical to strike a balance between accuracy and cost. In this paper, we propose an adaptive recommender system using feedback control frameworks. The proposed solution continuously monitors changes and estimates the loss of performance (in terms of accuracy) from two perspectives: data problems (data aging and data deficiency) in the training set, and changes of user behavior captured by a “revisiting ratio”. When the benefit of performing an update exceeds the cost of resources, the system updates itself. Theoretical analysis and extensive results using a real data set show the advantages of the proposed system.
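The update-when-worthwhile control loop can be sketched as a monitor that estimates accuracy loss from data aging and from a drop in the revisiting ratio, and triggers a retrain only when that estimate exceeds the update cost. The weights, thresholds, and cost below are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class UpdateController:
    """Toy feedback controller: retrain only when the estimated accuracy loss
    (from aging data and shifting user behaviour) outweighs the update cost."""
    update_cost: float = 0.05     # normalised cost of one retraining run
    aging_weight: float = 0.6
    behaviour_weight: float = 0.4

    def estimated_loss(self, data_age_ratio, revisit_drop):
        # data_age_ratio: fraction of training data older than a freshness window
        # revisit_drop: drop in the observed "revisiting ratio" since the last update
        return (self.aging_weight * data_age_ratio
                + self.behaviour_weight * revisit_drop)

    def should_update(self, data_age_ratio, revisit_drop):
        return self.estimated_loss(data_age_ratio, revisit_drop) > self.update_cost

if __name__ == "__main__":
    ctl = UpdateController()
    print(ctl.should_update(data_age_ratio=0.02, revisit_drop=0.01))  # False: not worth it
    print(ctl.should_update(data_age_ratio=0.20, revisit_drop=0.15))  # True: retrain now
```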