Journal ArticleDOI

Radiator - efficient message propagation in context-aware systems

07 Apr 2014-Journal of Internet Services and Applications (Springer London)-Vol. 5, Iss: 1, pp 4
TL;DR: Radiator is a middleware that assists application programmers in implementing efficient context propagation mechanisms within their applications; it makes efficient use of network bandwidth, arguably the biggest bottleneck in the deployment of large-scale context propagation systems.
Abstract: Applications such as Facebook, Twitter and Foursquare have brought the mass adoption of personal short messages, distributed in (soft) real-time on the Internet to a large number of users. These messages are complemented with rich contextual information such as the identity, time and location of the person sending the message (e.g., Foursquare has millions of users sharing their location on a regular basis, with almost 1 million updates per day). Such contextual messages raise serious concerns in terms of scalability and delivery delay; this results not only from their huge number but also from the fact that the set of recipients changes for each message (as users' interests continuously change), preventing the use of well-known solutions such as pub-sub and multicast trees and leading instead to non-scalable broadcast-based solutions or point-to-point messaging. We propose Radiator, a middleware that assists application programmers in implementing efficient context propagation mechanisms within their applications. Based on each user's current context, Radiator continuously adapts each message's propagation path and delivery delay, making efficient use of network bandwidth, arguably the biggest bottleneck in the deployment of large-scale context propagation systems. Our experimental results demonstrate a 20x reduction in consumed bandwidth without affecting the real-time usefulness of the propagated messages.
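
To make the propagation idea concrete, here is a minimal Python sketch, under loose assumptions, of how a propagator could adapt delivery per recipient: messages highly relevant to a recipient's current context are forwarded immediately, while less relevant ones are buffered and shipped as periodic aggregated batches, trading a bounded delay for bandwidth. All names (AdaptivePropagator, relevance, far_flush_interval) are hypothetical and do not reflect Radiator's actual API.

import time
from collections import defaultdict

class AdaptivePropagator:
    """Illustrative sketch only; not Radiator's implementation."""

    def __init__(self, send_fn, far_flush_interval=30.0):
        self.send_fn = send_fn                # callback that actually transmits a batch
        self.far_flush_interval = far_flush_interval
        self.buffers = defaultdict(list)      # recipient id -> buffered messages
        self.last_flush = defaultdict(float)  # recipient id -> last flush timestamp

    def relevance(self, msg_ctx, user_ctx):
        # Toy relevance score: 1.0 for the same location, low otherwise.
        return 1.0 if msg_ctx["location"] == user_ctx["location"] else 0.2

    def publish(self, msg, recipients):
        now = time.time()
        for user in recipients:
            if self.relevance(msg["ctx"], user["ctx"]) > 0.8:
                self.send_fn(user["id"], [msg])          # deliver with no added delay
            else:
                self.buffers[user["id"]].append(msg)
                if now - self.last_flush[user["id"]] >= self.far_flush_interval:
                    self.send_fn(user["id"], self.buffers.pop(user["id"]))  # one aggregated batch
                    self.last_flush[user["id"]] = now

In the paper, the delay and path are derived from each user's continuously changing context; the fixed threshold and flush interval above merely stand in for that adaptive policy.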


Citations
Proceedings ArticleDOI
26 Nov 2018
TL;DR: A new dictionary maintenance algorithm called PreDict adjusts its operation over time by adapting its parameters to the message stream and amortizes the resulting compression-induced bandwidth overhead by enabling high compression ratios.
Abstract: Data usage is a significant concern, particularly in smartphone applications, M2M communications and Internet of Things (IoT) applications. Messages in these domains are often exchanged with a backend infrastructure using publish/subscribe (pub/sub). Shared dictionary compression has been shown to reduce data usage in pub/sub networks beyond that obtained using well-known techniques, such as DEFLATE, gzip and delta encoding, but such compression requires manual configuration, which increases operational complexity. To address this challenge, we design a new dictionary maintenance algorithm called PreDict that adjusts its operation over time by adapting its parameters to the message stream and that amortizes the resulting compression-induced bandwidth overhead by enabling high compression ratios. PreDict observes the message stream, takes the costs specific to pub/sub into account and uses machine learning and parameter fitting to adapt the parameters of dictionary compression to match the characteristics of the streaming messages continuously over time. The primary goal is to reduce the overall bandwidth of data dissemination without any manual parameterization. PreDict reduces the overall bandwidth by 72.6% on average. Furthermore, the technique reduces the computational overhead by 2x for publishers and by 1.4x for subscribers compared to the state of the art using manually selected parameters. In challenging configurations that have many more publishers (10k) than subscribers (1), the overall bandwidth reductions are more than 2x higher than those obtained by the state of the art.
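
As background for readers unfamiliar with the underlying mechanism, the following Python sketch shows one concrete way to apply shared-dictionary compression, using zlib's preset-dictionary (zdict) support. The dictionary-building heuristic is a deliberately naive placeholder and is not PreDict's adaptive, machine-learned maintenance policy.

import zlib

def build_dictionary(sample_messages, max_size=32 * 1024):
    # Naive placeholder: concatenate recent messages and keep the last max_size
    # bytes; zlib favors material placed near the end of a preset dictionary.
    return b"".join(sample_messages)[-max_size:]

def compress_with_dict(payload, dictionary):
    c = zlib.compressobj(level=9, zdict=dictionary)
    return c.compress(payload) + c.flush()

def decompress_with_dict(data, dictionary):
    d = zlib.decompressobj(zdict=dictionary)
    return d.decompress(data) + d.flush()

history = [b'{"sensor":"s1","temp":21.5}', b'{"sensor":"s2","temp":21.7}']
shared = build_dictionary(history)            # must also be distributed to subscribers
msg = b'{"sensor":"s3","temp":21.6}'
packed = compress_with_dict(msg, shared)
assert decompress_with_dict(packed, shared) == msg
print(len(msg), "->", len(packed), "bytes")

The hard part, which PreDict addresses, is deciding when and how to rebuild and redistribute such a dictionary so that its bandwidth cost is amortized by the improved compression ratio.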

6 citations


Cites background from "Radiator - efficient message propag..."

  • ...Other approaches for reducing bandwidth in pub/sub consider user-defined aggregation functions, which assist application programmers in implementing efficient context propagation [12]....


Proceedings ArticleDOI
02 Oct 2016
TL;DR: The proposed procedure allows a user more than three authentication attempts by switching to a more secure authentication protocol after two failures, keeping a balance between QoP and QoE measures.
Abstract: Authenticating users connecting to online services, social networks or m-banking has become an indispensable element of our everyday life. Reliable authentication is a foundation of the security of Internet services but, on the other hand, it is also a source of users' frustration due to possible account blocking after three failures. In this paper we propose a model of authentication service management which helps keep a balance between the authentication security level and users' positive perception of the procedure. The proposed procedure allows a user more than three authentication attempts by switching to a more secure authentication protocol after two failures, keeping a balance between QoP and QoE measures. Finally, the procedure determines an optimal path of authentication using a decision tree algorithm.
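
A minimal sketch of the escalation idea, assuming a hypothetical three-level protocol ladder and a caller-supplied verify callback; the paper's QoP/QoE scoring and decision-tree selection of the optimal authentication path are not modeled here.

PROTOCOL_LADDER = ["password", "password_plus_otp", "hardware_token"]  # hypothetical levels

def authenticate(user, verify, attempts_per_level=2):
    # verify(user, protocol) -> bool. Instead of blocking the account after a
    # third failure, escalate to a more secure protocol after two failures.
    for protocol in PROTOCOL_LADDER:
        for _ in range(attempts_per_level):
            if verify(user, protocol):
                return protocol           # authenticated at this security level
    return None                           # all levels exhausted; flag for review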

1 citation


Cites background from "Radiator - efficient message propag..."

  • ..., [13, 14, 15]) contextual information can be used to improve Internet systems work....


29 Mar 2008
TL;DR: The Second Workshop on Scalable Stream Processing System (SSPS) as discussed by the authors continued the success of the First Workshop on SSPS, focusing on the scalability issues of a stream processing system challenged by ever-increasing load and stringent requirements on the system.
Abstract: This Second Workshop on Scalable Stream Processing System (SSPS) continued the success of the First Workshop on SSPS. The focus of the workshop is on the scalability issues of a stream processing system challenged by ever-increasing load and stringent requirements on the system. Being co-located with EDBT '08 was an ideal setting for the workshop, considering the reputation of EDBT as a top conference on database technology.
01 Jan 2010
TL;DR: The 2010 ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN 2010) as mentioned in this paper received an impressive number of high quality contributions, totaling 58 in the SPOTS track and 117 in the IP track.
Abstract: It is our great pleasure to welcome you to the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN 2010). We hope you enjoy this conference that attracts a diverse set of attendees from both academia and industry and is a leading venue for publications and idea exchange on sensor networks. IPSN is unique in its broad coverage of the field, ranging from analytic foundations to system implementation and platforms. It is a meeting point of theory and practice embodied in its two tracks: Information Processing (IP) and Sensor Platforms, Tools and Design Methods (SPOTS). The field of sensor networks has seen a significant expansion and maturation over the years. In evidence of such growth, IPSN 2010 received an impressive number of high quality contributions, totaling 58 in the SPOTS track and 117 in the IP track. These submissions underwent a very careful review process, each receiving at least 3 reviews and occasionally up to 5 reviews. The review phase was followed by an online discussion by the technical program committee, culminating in a technical program committee meeting. Only 31 papers were selected for publication, of which 20 were in the IP track and 11 in the SPOTS track. All accepted papers were assigned shepherds to help further improve the quality of the final manuscripts.
References
Proceedings ArticleDOI
12 Apr 2010
TL;DR: Ear-Phone, for the first time, leverages Compressive Sensing to address the fundamental problem of recovering the noise map from incomplete and random samples obtained by crowdsourcing data collection.
Abstract: A noise map facilitates monitoring of environmental noise pollution in urban areas. It can raise citizen awareness of noise pollution levels, and aid in the development of mitigation strategies to cope with the adverse effects. However, state-of-the-art techniques for rendering noise maps in urban areas are expensive and rarely updated (months or even years), as they rely on population and traffic models rather than on real data. Participatory urban sensing can be leveraged to create an open and inexpensive platform for rendering up-to-date noise maps. In this paper, we present the design, implementation and performance evaluation of an end-to-end participatory urban noise mapping system called Ear-Phone. Ear-Phone, for the first time, leverages Compressive Sensing to address the fundamental problem of recovering the noise map from incomplete and random samples obtained by crowdsourcing data collection. Ear-Phone, implemented on Nokia N95 and HP iPAQ mobile devices, also addresses the challenge of collecting accurate noise pollution readings at a mobile device. Extensive simulations and outdoor experiments demonstrate that Ear-Phone is a feasible platform to assess noise pollution, incurring reasonable system resource consumption at mobile devices and providing high reconstruction accuracy of the noise map.
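
As a rough illustration of the recovery step only, the Python sketch below reconstructs a 1-D noise profile that is sparse in the DCT domain from a small number of randomly located samples, using orthogonal matching pursuit as a standard compressive-sensing solver. It is not Ear-Phone's implementation, and all sizes and values are illustrative.

import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n = 256                                       # grid points along a street segment

# Ground-truth profile built from a few spatial frequencies (sparse DCT spectrum).
Psi = idct(np.eye(n), norm="ortho", axis=0)   # synthesis matrix: signal = Psi @ coeffs
coeffs = np.zeros(n)
coeffs[[0, 3, 7]] = [880.0, 80.0, 40.0]       # ~55 dB baseline plus two gentle ripples
signal = Psi @ coeffs

m = 60                                        # randomly located crowdsourced samples
sample_idx = rng.choice(n, size=m, replace=False)
y = signal[sample_idx]

# Solve y = Psi[sample_idx] @ x for a sparse x, then resynthesize the full map.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
omp.fit(Psi[sample_idx, :], y)
recovered = Psi @ omp.coef_

print("max reconstruction error:", np.max(np.abs(recovered - signal)))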

741 citations

Journal ArticleDOI
TL;DR: This article identifies key challenges facing optimistic replication systems---ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence---and provides a comprehensive survey of techniques developed for addressing these challenges.
Abstract: Data replication is a key technology in distributed systems that enables higher availability and performance. This article surveys optimistic replication algorithms. They allow replica contents to diverge in the short term to support concurrent work practices and tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular. Optimistic replication deploys algorithms not seen in traditional “pessimistic” systems. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen, and reaches agreement on the final contents incrementally. We explore the solution space for optimistic replication algorithms. This article identifies key challenges facing optimistic replication systems---ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence---and provides a comprehensive survey of techniques developed for addressing these challenges.
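
One of the surveyed building blocks, detecting conflicts between optimistically updated replicas, is commonly implemented with version vectors. The sketch below shows only that detection step and is not tied to any particular system from the survey.

def dominates(a, b):
    # True if version vector a has seen every update that b has seen.
    return all(a.get(k, 0) >= b.get(k, 0) for k in set(a) | set(b))

def compare(a, b):
    if dominates(a, b) and dominates(b, a):
        return "equal"
    if dominates(a, b):
        return "a-newer"
    if dominates(b, a):
        return "b-newer"
    return "concurrent"   # neither dominates: a conflict that must be resolved

# Replicas r1 and r2 both updated independently after seeing the same state.
v1 = {"r1": 2, "r2": 1}
v2 = {"r1": 1, "r2": 2}
assert compare(v1, v2) == "concurrent"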

733 citations

Proceedings Article
01 Jan 2003
TL;DR: The architectural challenges facing the design of large-scale distributed stream processing systems are described, and novel approaches for addressing load management, high availability, and federated operation issues are discussed.
Abstract: Stream processing fits a large class of new applications for which conventional DBMSs fall short. Because many stream-oriented systems are inherently geographically distributed and because distribution offers scalable load management and higher availability, future stream processing systems will operate in a distributed fashion. They will run across the Internet on computers typically owned by multiple cooperating administrative domains. This paper describes the architectural challenges facing the design of large-scale distributed stream processing systems, and discusses novel approaches for addressing load management, high availability, and federated operation issues. We describe two stream processing systems, Aurora* and Medusa, which are being designed to explore complementary solutions to these challenges. We begin in Section 2 with a brief description of our centralized stream processing system, Aurora [4]. We then discuss two complementary efforts to extend Aurora to a distributed environment: Aurora* and Medusa. Aurora* assumes an environment in which all nodes fall under a single administrative domain. Medusa provides the infrastructure to support federated operation of nodes across administrative boundaries. After describing the architectures of these two systems in Section 3, we consider three design challenges common to both: infrastructures and protocols supporting communication amongst nodes (Section 4), load sharing in response to variable network conditions (Section 5), and high availability in the presence of failures (Section 6). We also discuss high-level policy specifications employed by the two systems in Section 7. For all of these issues, we believe that the push-based nature of stream-based applications not only raises new challenges but also offers the possibility of new domain-specific solutions.
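
As a generic illustration of the push-based operators such systems compose (not Aurora*'s or Medusa's actual mechanisms), the sketch below chains operators with bounded input queues and simple probabilistic load shedding, one elementary form of the load management discussed above. All names and thresholds are made up.

import random
from collections import deque

class Operator:
    def __init__(self, fn, downstream=None, max_queue=1000, shed_prob=0.5):
        self.fn = fn                    # per-tuple processing function
        self.downstream = downstream    # next operator in the chain, if any
        self.queue = deque()
        self.max_queue = max_queue
        self.shed_prob = shed_prob

    def push(self, item):
        if len(self.queue) >= self.max_queue and random.random() < self.shed_prob:
            return                      # shed load: drop the tuple under pressure
        self.queue.append(item)

    def drain(self):
        while self.queue:
            result = self.fn(self.queue.popleft())
            if result is not None and self.downstream is not None:
                self.downstream.push(result)

# Example chain: drop low temperature readings, then convert to Fahrenheit.
sink = Operator(lambda x: print("out:", x))
convert = Operator(lambda c: c * 9 / 5 + 32, downstream=sink)
filt = Operator(lambda c: c if c > 20 else None, downstream=convert)
for reading in [18.0, 22.5, 25.0]:
    filt.push(reading)
for op in (filt, convert, sink):
    op.drain()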

624 citations

Journal ArticleDOI
TL;DR: ContextPhone is a software platform consisting of four interconnected modules, provided as a set of open-source C++ libraries and source code components, that helps developers more easily create applications that integrate into both existing technologies and users' everyday lives.
Abstract: Smart phones are a particularly tempting platform for building context-aware applications because they're programmable and often use well-known operating systems. There's a gap, however, between the operating systems' functionality and the features that application developers need. To fill this gap, we've designed and developed ContextPhone, a software platform consisting of four interconnected modules provided as a set of open source C++ libraries and source code components. ContextPhone runs on off-the-shelf mobile phones using Symbian OS and the Nokia Series 60 Smartphone platform. ContextPhone was developed using an iterative, human-centered design strategy. It thus helps developers more easily create applications that integrate into both existing technologies and users' everyday lives.

549 citations

Proceedings ArticleDOI
16 Jul 2000
TL;DR: SIENA's data model for notifications, the covering relations that formally define the semantics of the data model, the distributed architectures the authors have studied for the service's implementation, and the processing strategies developed to exploit the covering relations for optimizing the routing of notifications are described.
Abstract: This paper describes the design of SIENA, an Internet-scale event notification middleware service for distributed event-based applications deployed over wide-area networks. SIENA is responsible for selecting the notifications that are of interest to clients (as expressed in client subscriptions) and then delivering those notifications to the clients via access points. The key design challenge for SIENA is maximizing expressiveness in the selection mechanism without sacrificing scalability of the delivery mechanism. This paper focuses on those aspects of the design of SIENA that fundamentally impact scalability and expressiveness. In particular, we describe SIENA's data model for notifications, the covering relations that formally define the semantics of the data model, the distributed architectures we have studied for SIENA's implementation, and the processing strategies we developed to exploit the covering relations for optimizing the routing of notifications.
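
The covering relation at the heart of this design can be illustrated with a much-simplified attribute-filter model. The sketch below is not SIENA's data model or code; filters are plain dicts and only equality and one-sided range constraints are supported.

def constraint_covers(c1, c2):
    # True if every attribute value satisfying constraint c2 also satisfies c1.
    op1, v1 = c1
    op2, v2 = c2
    if op1 == "=":
        return op2 == "=" and v2 == v1
    if op1 == "<=":
        return op2 in ("=", "<=") and v2 <= v1
    if op1 == ">=":
        return op2 in ("=", ">=") and v2 >= v1
    return False

def filter_covers(f1, f2):
    # f1 covers f2 if every notification matching f2 also matches f1.
    return all(attr in f2 and constraint_covers(c1, f2[attr])
               for attr, c1 in f1.items())

# A broker that already forwards f1 upstream need not forward the covered f2.
f1 = {"type": ("=", "temperature"), "value": (">=", 20)}
f2 = {"type": ("=", "temperature"), "value": (">=", 25), "room": ("=", "lab1")}
assert filter_covers(f1, f2) and not filter_covers(f2, f1)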

427 citations