Journal ArticleDOI

Radiator - efficient message propagation in context-aware systems

07 Apr 2014-Journal of Internet Services and Applications (Springer London)-Vol. 5, Iss: 1, pp 4
TL;DR: Radiator is a middleware that assists application programmers in implementing efficient context propagation mechanisms within their applications; it makes efficient use of network bandwidth, arguably the biggest bottleneck in the deployment of large-scale context propagation systems.
Abstract: Applications such as Facebook, Twitter and Foursquare have brought the mass adoption of personal short messages, distributed in (soft) real-time on the Internet to a large number of users. These messages are complemented with rich contextual information such as the identity, time and location of the person sending the message (e.g., Foursquare has millions of users sharing their location on a regular basis, with almost 1 million updates per day). Such contextual messages raise serious concerns in terms of scalability and delivery delay; this results not only from their huge number but also from the fact that the set of user recipients changes for each message (as users' interests continuously change), preventing the use of well-known solutions such as pub-sub and multicast trees. This leads to the use of non-scalable broadcast-based solutions or point-to-point messaging. We propose Radiator, a middleware that assists application programmers in implementing efficient context propagation mechanisms within their applications. Based on each user's current context, Radiator continuously adapts each message's propagation path and delivery delay, making efficient use of network bandwidth, arguably the biggest bottleneck in the deployment of large-scale context propagation systems. Our experimental results demonstrate a 20x reduction in consumed bandwidth without affecting the real-time usefulness of the propagated messages.
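The buffer-and-merge idea behind this kind of bandwidth saving can be sketched as follows. This is an illustrative sketch only, not Radiator's actual API: the class name, the fixed delay budget, and the last-writer-wins merge rule are all assumptions.

```python
import time
from collections import defaultdict

class ContextPropagator:
    """Illustrative sketch: buffer context messages per recipient and let a
    newer update from the same sender supersede the buffered one, trading a
    bounded delivery delay for reduced bandwidth (names/API are hypothetical)."""

    def __init__(self, delay_budget=2.0):
        self.delay_budget = delay_budget   # max extra delay per batch, in seconds
        self.buffers = defaultdict(dict)   # recipient -> {sender: latest message}
        self.deadlines = {}                # recipient -> flush deadline

    def publish(self, recipient, sender, message, now=None):
        now = time.monotonic() if now is None else now
        # Only the latest context from each sender (e.g., a location) survives,
        # so stale intermediate updates are never transmitted.
        self.buffers[recipient][sender] = message
        self.deadlines.setdefault(recipient, now + self.delay_budget)

    def flush_due(self, now=None):
        """Return, per recipient, the batches whose delay budget has expired."""
        now = time.monotonic() if now is None else now
        batches = {}
        for recipient, deadline in list(self.deadlines.items()):
            if now >= deadline:
                batches[recipient] = list(self.buffers.pop(recipient).values())
                del self.deadlines[recipient]
        return batches

p = ContextPropagator(delay_budget=1.0)
p.publish("alice", "bob", {"loc": (1, 2)}, now=100.0)
p.publish("alice", "bob", {"loc": (3, 4)}, now=100.5)  # supersedes the first
print(p.flush_due(now=101.5))  # one batch for alice, latest location only
```

The design choice being illustrated is the paper's core trade-off: accepting a small, controlled delivery delay makes merging possible, and merging is what reduces bandwidth.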


Citations
Proceedings ArticleDOI
26 Nov 2018
TL;DR: PreDict, a new dictionary maintenance algorithm, adjusts its operation over time by adapting its parameters to the message stream, and amortizes the resulting compression-induced bandwidth overhead by enabling high compression ratios.
Abstract: Data usage is a significant concern, particularly in smartphone applications, M2M communications and Internet of Things (IoT) applications. Messages in these domains are often exchanged with a backend infrastructure using publish/subscribe (pub/sub). Shared dictionary compression has been shown to reduce data usage in pub/sub networks beyond that obtained using well-known techniques, such as DEFLATE, gzip and delta encoding, but such compression requires manual configuration, which increases operational complexity. To address this challenge, we design a new dictionary maintenance algorithm called PreDict that adjusts its operation over time by adapting its parameters to the message stream and that amortizes the resulting compression-induced bandwidth overhead by enabling high compression ratios. PreDict observes the message stream, takes the costs specific to pub/sub into account and uses machine learning and parameter fitting to adapt the parameters of dictionary compression to match the characteristics of the streaming messages continuously over time. The primary goal is to reduce the overall bandwidth of data dissemination without any manual parameterization. PreDict reduces the overall bandwidth by 72.6% on average. Furthermore, the technique reduces the computational overhead by 2x for publishers and by 1.4x for subscribers compared to the state of the art using manually selected parameters. In challenging configurations that have many more publishers (10k) than subscribers (1), the overall bandwidth reductions are more than 2x higher than those obtained by the state of the art.
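The core mechanism here, shared-dictionary compression, can be tried directly with Python's zlib, which supports preset dictionaries: both sides agree on a dictionary of byte sequences common to the message stream, so short, similar messages compress far better than with plain DEFLATE. The sample message and dictionary below are invented; PreDict's contribution is maintaining such a dictionary automatically.

```python
import zlib

# A preset dictionary built from byte sequences common to the stream.
zdict = b'{"sensor":"temp","unit":"C","value":'

msg = b'{"sensor":"temp","unit":"C","value":21.5}'

plain = zlib.compress(msg)                 # no dictionary: little to exploit

co = zlib.compressobj(zdict=zdict)         # compressor primed with the dictionary
with_dict = co.compress(msg) + co.flush()

do = zlib.decompressobj(zdict=zdict)       # receiver must hold the same dictionary
assert do.decompress(with_dict) == msg

# The dictionary-primed stream is much shorter for messages that
# largely repeat dictionary content.
print(len(msg), len(plain), len(with_dict))
```

This also shows the coordination cost the abstract alludes to: compression only works if publisher and subscriber share the same dictionary, which is why dictionary maintenance and distribution incur the bandwidth overhead PreDict amortizes.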

6 citations


Cites background from "Radiator - efficient message propag..."

  • ...Other approaches for reducing bandwidth in pub/sub consider user-defined aggregation functions, which assist application programmers in implementing efficient context propagation [12]....


Proceedings ArticleDOI
02 Oct 2016
TL;DR: The proposed procedure allows a user more than three authentication attempts by switching, after two failures, to a more secure authentication protocol, keeping a balance between QoP and QoE measures.
Abstract: Authenticating users connecting to online services, social networks or m-banking has become an indispensable element of our everyday life. Reliable authentication is a foundation of the security of Internet services but, on the other hand, also a source of users' frustration due to possible account blocking after three failed attempts. In this paper we propose a model of authentication service management which helps keep a balance between the authentication security level and users' positive perception of the procedure. The proposed procedure allows a user more than three authentication attempts by switching, after two failures, to a more secure authentication protocol, keeping a balance between QoP and QoE measures. Finally, the procedure determines an optimal path of authentication using a decision tree algorithm.
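The escalation policy described above can be sketched as a simple attempt-budget loop. The protocol names, per-protocol budgets, and checker are illustrative assumptions, not the paper's actual model.

```python
def authenticate(check, credentials_stream):
    """Illustrative sketch of the escalation policy: two failures on the
    basic protocol trigger a switch to a more secure one, which grants a
    fresh attempt budget instead of blocking the account outright.
    (Protocols and budgets are hypothetical examples.)"""
    protocols = [("password", 2), ("password+OTP", 3)]  # (name, attempt budget)
    creds = iter(credentials_stream)
    for protocol, budget in protocols:
        for _ in range(budget):
            try:
                attempt = next(creds)
            except StopIteration:
                return False, protocol          # user gave up
            if check(protocol, attempt):
                return True, protocol           # authenticated
    return False, protocols[-1][0]              # all budgets exhausted: block

# Hypothetical checker: only the OTP-backed attempt "secret+otp" succeeds.
ok, used = authenticate(
    lambda proto, cred: proto == "password+OTP" and cred == "secret+otp",
    ["wrong", "wrong2", "secret+otp"],
)
print(ok, used)  # succeeds after escalating to the stronger protocol
```

The point of the structure is the QoP/QoE balance from the abstract: the user gets extra attempts (better experience) only under a stronger protocol (no loss of protection).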

1 citation


Cites background from "Radiator - efficient message propag..."

  • ...[13, 14, 15]) contextual information can be used to improve how Internet systems work....


29 Mar 2008
TL;DR: The Second Workshop on Scalable Stream Processing Systems (SSPS) continued the success of the first, focusing on the scalability issues of stream processing systems challenged by ever-increasing load and stringent system requirements.
Abstract: This Second Workshop on Scalable Stream Processing Systems (SSPS) continued the success of the First Workshop on SSPS. The focus of the workshop is on the scalability issues of a stream processing system challenged by ever-increasing load and stringent requirements on the system. Being co-located with EDBT '08 was an ideal setting for the workshop, considering the reputation of EDBT as a top conference on database technology.
01 Jan 2010
TL;DR: The 2010 ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN 2010) received an impressive number of high-quality contributions, totaling 58 in the SPOTS track and 117 in the IP track.
Abstract: It is our great pleasure to welcome you to the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN 2010). We hope you enjoy this conference that attracts a diverse set of attendees from both academia and industry and is a leading venue for publications and idea exchange on sensor networks. IPSN is unique in its broad coverage of the field, ranging from analytic foundations to system implementation and platforms. It is a meeting point of theory and practice embodied in its two tracks; Information Processing (IP) and Sensor Platforms, Tools and Design Methods (SPOTS). The field of sensor networks has seen a significant expansion and maturation over the years. In evidence of such growth, IPSN 2010 received an impressive number of high quality contributions, totaling 58 in the SPOTS track and 117 in the IP track. These submissions underwent a very careful review process, each receiving at least 3 reviews and occasionally up to 5 reviews. The review phase was followed by an online discussion by the technical program committee, culminating in a technical program committee meeting. Only 31 papers were selected for publication of which 20 were in the IP track and 11 in the SPOTS track. All accepted papers were assigned shepherds to help further improve the quality of the final manuscripts.
References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
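The continuous-case analysis summarized above culminates in the paper's best-known result, the capacity of a band-limited channel with average signal power $P$ perturbed by white Gaussian noise of power $N$:

```latex
C = W \log_2\!\left(1 + \frac{P}{N}\right) \ \text{bits per second},
```

where $W$ is the channel bandwidth in hertz. This is the limit toward which the discrete results converge as the region-subdivision described in the abstract is refined.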

65,425 citations

Journal ArticleDOI
TL;DR: The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment; the paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless the accompanying policies are respected.
Abstract: Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.
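The k-anonymity condition defined above is mechanical to check: group records by their quasi-identifier values and require every group to contain at least k records. A minimal sketch, with invented sample data:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True iff every combination of quasi-identifier values appears in at
    least k records, i.e. no record is distinguishable from fewer than
    k-1 others on those attributes."""
    groups = Counter(tuple(r[attr] for attr in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"zip": "021*", "age": "20-29", "disease": "flu"},
    {"zip": "021*", "age": "20-29", "disease": "cold"},
    {"zip": "021*", "age": "30-39", "disease": "flu"},
]
# The lone 30-39 record forms a group of size 1, so the release
# is not 2-anonymous on (zip, age).
print(is_k_anonymous(records, ["zip", "age"], k=2))
```

This is also the mechanism the Radiator excerpt below refers to: aggregating enough messages before release is one way to make every group reach size k.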

7,925 citations


"Radiator - efficient message propag..." refers background in this paper

  • ...The people type can be useful to implement k-Anonymity [51]-style privacy mechanisms; the idea is to aggregate as many messages as needed to ensure anonymity....


Proceedings ArticleDOI
27 Aug 2001
TL;DR: The concept of a Content-Addressable Network (CAN) as a distributed infrastructure that provides hash table-like functionality on Internet-like scales is introduced and its scalability, robustness and low-latency properties are demonstrated through simulation.
Abstract: Hash tables - which map "keys" onto "values" - are an essential building block in modern software systems. We believe a similar functionality would be equally valuable to large distributed systems. In this paper, we introduce the concept of a Content-Addressable Network (CAN) as a distributed infrastructure that provides hash table-like functionality on Internet-like scales. The CAN is scalable, fault-tolerant and completely self-organizing, and we demonstrate its scalability, robustness and low-latency properties through simulation.
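The CAN idea — hash each key deterministically to a point in a d-dimensional coordinate space and let the node owning that zone store the key — can be sketched in two dimensions. Note one deliberate simplification: real CAN zones split dynamically as nodes join, whereas this sketch uses a fixed grid.

```python
import hashlib

def key_to_point(key):
    """Hash a key deterministically to a point in the unit square [0,1)^2."""
    digest = hashlib.sha256(key.encode()).digest()
    x = int.from_bytes(digest[:8], "big") / 2**64
    y = int.from_bytes(digest[8:16], "big") / 2**64
    return x, y

def owner(key, grid=4):
    """Map a key's point to the node owning that grid cell.
    A static grid x grid partition stands in for CAN's dynamic zones."""
    x, y = key_to_point(key)
    return (int(x * grid), int(y * grid))

# Every node can compute the same mapping locally, so lookups need no
# central directory -- the hash-table "key -> value" contract survives
# distribution.
print(owner("user:42"), owner("user:43"))
```

Routing, which the sketch omits, is what makes CAN scale: each node keeps only its zone neighbors and greedily forwards a lookup toward the key's coordinates.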

6,703 citations

Proceedings ArticleDOI
27 Sep 1999
TL;DR: This panel discusses some of the research challenges in understanding context and in developing context-aware applications, which are increasingly important in the fields of handheld and ubiquitous computing, where the user's context changes rapidly.
Abstract: When humans talk with humans, they are able to use implicit situational information, or context, to increase the conversational bandwidth. Unfortunately, this ability to convey ideas does not transfer well to humans interacting with computers. In traditional interactive computing, users have an impoverished mechanism for providing input to computers. By improving the computer’s access to context, we increase the richness of communication in human-computer interaction and make it possible to produce more useful computational services. The use of context is increasingly important in the fields of handheld and ubiquitous computing, where the user’s context is changing rapidly. In this panel, we want to discuss some of the research challenges in understanding context and in developing context-aware applications.

4,842 citations

Journal ArticleDOI
TL;DR: This paper factors out the common denominator underlying these variants: full decoupling of the communicating entities in time, space, and synchronization to better identify commonalities and divergences with traditional interaction paradigms.
Abstract: Well adapted to the loosely coupled nature of distributed interaction in large-scale applications, the publish/subscribe communication paradigm has recently received increasing attention. With systems based on the publish/subscribe interaction scheme, subscribers register their interest in an event, or a pattern of events, and are subsequently asynchronously notified of events generated by publishers. Many variants of the paradigm have recently been proposed, each variant being specifically adapted to some given application or network model. This paper factors out the common denominator underlying these variants: full decoupling of the communicating entities in time, space, and synchronization. We use these three decoupling dimensions to better identify commonalities and divergences with traditional interaction paradigms. The many variations on the theme of publish/subscribe are classified and synthesized. In particular, their respective benefits and shortcomings are discussed both in terms of interfaces and implementations.
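Two of the paper's three decoupling dimensions show up even in a toy broker: publishers and subscribers reference only a topic, never each other (space decoupling), and retaining published messages lets a late subscriber receive events published before it existed (a crude form of time decoupling). The class below is a deliberately minimal sketch, not any real pub/sub system's API.

```python
from collections import defaultdict

class Broker:
    """Toy pub/sub broker: space decoupling via topics, plus message
    retention as a crude stand-in for time decoupling."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> callbacks
        self.retained = defaultdict(list)     # topic -> message history

    def publish(self, topic, message):
        self.retained[topic].append(message)
        for callback in self.subscribers[topic]:
            callback(message)                 # notify current subscribers

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
        for message in self.retained[topic]:  # replay history to latecomers
            callback(message)

broker = Broker()
broker.publish("checkins", {"user": "ana", "loc": "lisbon"})  # no subscribers yet
received = []
broker.subscribe("checkins", received.append)                 # late subscriber
broker.publish("checkins", {"user": "rui", "loc": "porto"})
print(len(received))  # 2: the replayed message plus the live one
```

Synchronization decoupling, the third dimension, is not shown: callbacks here run synchronously inside publish, whereas a real broker would deliver notifications asynchronously.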

3,380 citations