
Showing papers by "Deutsche Telekom" published in 2009


Journal ArticleDOI
TL;DR: In this article, the authors present an integrated concept for IT-supported idea competitions in virtual communities for leveraging the potential of crowds, evaluated in a real-world setting. Based on a literature review in the fields of Community Building and Innovation Management, they develop an integrated framework called "Community Engineering for Innovations".
Abstract: ‘Crowdsourcing’ is currently one of the most discussed key words within the open innovation community. The major question for both research and business is how to find and lever the enormous potential of the ‘collective brain’ to broaden the scope of ‘open R&D’. Based on a literature review in the fields of Community Building and Innovation Management, this work develops an integrated framework called ‘Community Engineering for Innovations’. This framework is evaluated in an Action Research project – the case of an ideas competition for an ERP Software company. The case ‘SAPiens’ includes the design, implementation and evaluation of an IT-supported ideas competition within the SAP University Competence Center (UCC) User Group. This group consists of approximately 60,000 people (lecturers and students) using SAP Software for educational purposes. The current challenges are twofold: on the one hand, there is not much activity yet in this community. On the other, SAP has not attempted to systematically address this highly educated group for idea generation or innovation development so far. Therefore, the objective of this research is to develop a framework for a community-based innovation development that generates innovations, process and product ideas in general and for SAP Research, in particular, combining the concepts of idea competitions and virtual communities. Furthermore, the concept aims at providing an interface to SAP Human Resources processes in order to identify the most promising students in this virtual community. This paper is the first to present an integrated concept for IT-supported idea competitions in virtual communities for leveraging the potential of crowds that is evaluated in a real-world setting.

430 citations


Proceedings ArticleDOI
20 Apr 2009
TL;DR: In this article, the authors use signed variants of global network characteristics such as the clustering coefficient, node-level characteristics such as centrality and popularity measures, and link-level characteristics such as distance and similarity measures to identify unpopular users and predict the sign of links.
Abstract: We analyse the corpus of user relationships of the Slashdot technology news site. The data was collected from the Slashdot Zoo feature where users of the website can tag other users as friends and foes, providing positive and negative endorsements. We adapt social network analysis techniques to the problem of negative edge weights. In particular, we consider signed variants of global network characteristics such as the clustering coefficient, node-level characteristics such as centrality and popularity measures, and link-level characteristics such as distances and similarity measures. We evaluate these measures on the task of identifying unpopular users, as well as on the task of predicting the sign of links and show that the network exhibits multiplicative transitivity which allows algebraic methods based on matrix multiplication to be used. We compare our methods to traditional methods which are only suitable for positively weighted edges.

402 citations
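
A minimal sketch of the multiplicative-transitivity idea from the Slashdot Zoo paper above: signs multiply along walks, so attenuated powers of the signed adjacency matrix can score unobserved links. The attenuation factor and walk-length cutoff are illustrative choices, not the paper's exact measures.

```python
import numpy as np

def predict_link_signs(A, beta=0.5, max_walk_len=4):
    """Score unobserved links in a signed adjacency matrix A
    (+1 friend, -1 foe, 0 unknown). Because signs multiply along a walk
    (multiplicative transitivity), powers of A aggregate the net sign
    evidence of all walks of a given length; shorter walks are weighted
    more heavily via the attenuation factor beta."""
    score = np.zeros_like(A, dtype=float)
    walk = np.eye(A.shape[0])
    for k in range(1, max_walk_len + 1):
        walk = walk @ A                  # signed walk counts of length k
        score += (beta ** k) * walk
    return np.sign(score)                # predicted sign (+1, -1, or 0)

# Toy example: 0-1 are friends, 1-2 are foes => predict 0-2 as foe
A = np.array([[0, 1, 0],
              [1, 0, -1],
              [0, -1, 0]], dtype=float)
print(predict_link_signs(A)[0, 2])       # -1.0
```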


Proceedings ArticleDOI
17 Aug 2009
TL;DR: A network virtualization architecture is described as a technology for enabling Internet innovation and some of its components are evaluated based on experimental results from a prototype implementation to gain insight about its viability.
Abstract: The tussle between reliability and functionality of the Internet is firmly biased on the side of reliability. New enabling technologies fail to achieve traction across the majority of ISPs. We believe that the greatest challenge is not in finding solutions and improvements to the Internet's many problems, but in how to actually deploy those solutions and re-balance the tussle between reliability and functionality. Network virtualization provides a promising approach to enable the coexistence of innovation and reliability. We describe a network virtualization architecture as a technology for enabling Internet innovation. This architecture is motivated from both business and technical perspectives and comprises four main players. In order to gain insight about its viability, we also evaluate some of its components based on experimental results from a prototype implementation.

227 citations


Proceedings ArticleDOI
07 May 2009
TL;DR: This work develops a formalisation of a detailed research process for design science that takes into account all requirements; the process combines qualitative and quantitative research and references well-known research methods.
Abstract: Discussions about the body of knowledge of information systems, including the research domain, relevant perspectives and methods have been going on for a long time. Many researchers vote for a combination of research perspectives and their respective research methodologies; rigour and relevance as requirements in design science are generally accepted. What has been lacking is a formalisation of a detailed research process for design science that takes into account all requirements. We have developed such a research process, building on top of existing processes and findings from design research. The process combines qualitative and quantitative research and references well-known research methods. Publication possibilities and self-contained work packages are recommended. Case studies using the process are presented and discussed.

218 citations


Patent
29 Sep 2009
TL;DR: In this article, the current activity of the user is determined and a reminder method is selected based on that activity; the reminder methods may include, for example, voice, ring, vibration, light, and/or text.
Abstract: An apparatus and method for schedule management includes storing at least one schedule for a user. The current activity of the user is determined. At a remind time for each schedule, the user is reminded of the schedule according to a reminder method. The reminder method selection is based at least on the determined current activity of the user. The reminder methods that may be selected include, for example, voice, ring, vibration, light, and/or text.

186 citations
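
A tiny illustration of the selection step described in the patent abstract above: a reminder method is chosen from the user's determined activity. The activity labels and the mapping are hypothetical; the patent does not prescribe concrete rules.

```python
# Hypothetical activity -> reminder-method mapping (illustrative only).
REMINDER_POLICY = {
    "in_meeting": "vibration",
    "driving":    "voice",
    "sleeping":   "light",
    "at_desk":    "text",
}

def remind(schedule_entry, current_activity):
    # Fall back to a ring if the activity is unknown.
    method = REMINDER_POLICY.get(current_activity, "ring")
    print(f"[{method}] reminder: {schedule_entry}")

remind("Project review at 15:00", "in_meeting")   # -> [vibration] reminder: Project review at 15:00
```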


Proceedings ArticleDOI
15 Jun 2009
TL;DR: This paper proposes transmitting multiterabyte data through commercial ISPs by taking advantage of already-paid-for off-peak bandwidth resulting from diurnal traffic patterns and percentile pricing, and shows that between sender-receiver pairs with small time-zone difference, simple source scheduling policies are able to take advantage of most of the existing off-peak capacity.
Abstract: Many emerging scientific and industrial applications require transferring multiple Tbytes of data on a daily basis. Examples include pushing scientific data from particle accelerators/colliders to laboratories around the world, synchronizing data-centers across continents, and replicating collections of high definition videos from events taking place at different time-zones. A key property of all above applications is their ability to tolerate delivery delays ranging from a few hours to a few days. Such Delay Tolerant Bulk (DTB) data are currently being serviced mostly by the postal system using hard drives and DVDs, or by expensive dedicated networks. In this work we propose transmitting such data through commercial ISPs by taking advantage of already-paid-for off-peak bandwidth resulting from diurnal traffic patterns and percentile pricing. We show that between sender-receiver pairs with small time-zone difference, simple source scheduling policies are able to take advantage of most of the existing off-peak capacity. When the time-zone difference increases, taking advantage of the full capacity requires performing store-and-forward through intermediate storage nodes. We present an extensive evaluation of the two options based on traffic data from 200+ links of a large transit provider with PoPs at three continents. Our results indicate that there exists huge potential for performing multi Tbyte transfers on a daily basis at little or no additional cost.

168 citations
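
A toy sketch of the "already-paid-for" bandwidth argument made above, assuming a single link billed on the 95th percentile of its 5-minute traffic samples: any slot below that level has headroom that delay-tolerant bulk data can fill without raising the bill. The traffic trace and capacity figures are synthetic.

```python
import numpy as np

def free_offpeak_volume(traffic_5min, link_capacity):
    """Per-slot volume of bulk data that can be sent at no extra cost under
    95th-percentile pricing: filling any slot up to the already-committed
    95th-percentile level (capped by link capacity) does not change the bill."""
    charged_level = min(np.percentile(traffic_5min, 95), link_capacity)
    return np.clip(charged_level - traffic_5min, 0.0, None)

# Synthetic diurnal load over one day of 5-minute slots (arbitrary units)
slots = np.arange(288)
load = 60 + 40 * np.sin(2 * np.pi * slots / 288)
print(free_offpeak_volume(load, link_capacity=120).sum())
```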


Proceedings ArticleDOI
02 Feb 2009
TL;DR: This paper is an attempt to identify the core functionalities necessary to build social networking applications and services, and the research challenges in realizing them in a decentralized setting, and presents the authors' own approach to realizing peer-to-peer social networks.
Abstract: Online Social Networks like Facebook, MySpace, Xing, etc. have become extremely popular. Yet they have some limitations that we want to overcome for a next generation of social networks: privacy concerns and requirements of Internet connectivity, both of which are due to web-based applications on a central site whose owner has access to all data. To overcome these limitations, we envision a paradigm shift from client-server to a peer-to-peer infrastructure coupled with encryption so that users keep control of their data and can use the social network also locally, without Internet access. This shift gives rise to many research questions intersecting networking, security, distributed systems and social network analysis, leading to a better understanding of how technology can support social interactions. This paper is an attempt to identify the core functionalities necessary to build social networking applications and services, and the research challenges in realizing them in a decentralized setting. In the tradition of research-path defining papers in the peer-to-peer community [5, 14], we highlight some challenges and opportunities for peer-to-peer in the era of social networks. We also present our own approach to realizing peer-to-peer social networks.

155 citations


Proceedings ArticleDOI
29 Jul 2009
TL;DR: A taxonomy of the most relevant QoS and QoE aspects resulting from multimodal human-machine interactions is developed, providing metrics that make system evaluation more systematic and comparable.
Abstract: Quality of Service (QoS) and Quality of Experience (QoE) are not only important for services transmitting multimedia data, but also for services involving multimodal human-machine interaction. In order to guide the assessment and evaluation of such services, we developed a taxonomy of the most relevant QoS and QoE aspects which result from multimodal human-machine interactions. It consists of three layers: (1) The QoS-influencing factors related to the user, the system, and the context of use; (2) the QoS interaction performance aspects describing user and system behavior and performance; and (3) the QoE aspects related to the quality perception and judgment processes taking place inside the user. For each of these layers, we provide metrics which make system evaluation more systematic and comparable.

130 citations


Proceedings ArticleDOI
06 Mar 2009
TL;DR: This paper proposes three hardware-software approaches to defend against software cache-based attacks - they present different tradeoffs between hardware complexity and performance overhead - and proposes a novel software permutation to replace the random permutation hardware in the RPcache.
Abstract: Software cache-based side channel attacks present serious threats to modern computer systems. Using caches as a side channel, these attacks are able to derive secret keys used in cryptographic operations through legitimate activities. Among existing countermeasures, software solutions are typically application specific and incur substantial performance overhead. Recent hardware proposals including the Partition-Locked cache (PLcache) and Random-Permutation cache (RPcache) [23], although very effective in reducing performance overhead while enhancing the security level, may still be vulnerable to advanced cache attacks. In this paper, we propose three hardware-software approaches to defend against software cache-based attacks - they present different tradeoffs between hardware complexity and performance overhead. First, we propose to use preloading to secure the PLcache. Second, we leverage informing loads, which is a lightweight architectural support originally proposed to improve memory performance, to protect the RPcache. Third, we propose novel software permutation to replace the random permutation hardware in the RPcache. This way, regular caches can be protected with hardware support for informing loads. In our experiments, we analyze various processor models for their vulnerability to cache attacks and demonstrate that even for the processor model that is most vulnerable to cache attacks, our proposed software-hardware integrated schemes provide strong security protection.

129 citations


Proceedings ArticleDOI
28 Dec 2009
TL;DR: A distributed CoMP transmission approach is implemented and tested in the downlink of an LTE-Advanced trial system operating in real time over 20 MHz bandwidth, and its benefits are studied over multi-cell channels recorded in an urban macro-cell scenario.
Abstract: Coordinated multi-point (CoMP) is a new class of transmission schemes for interference reduction in the next generation of mobile networks. We have implemented and tested a distributed CoMP transmission approach in the downlink of an LTE-Advanced trial system operating in real time over 20 MHz bandwidth. Enabling features such as network synchronization, cell- and user-specific pilots, feedback of multicell channel state information and synchronous data exchange between the base stations have been implemented. Interference-limited transmission experiments have been conducted using optimum combining with interference-aware link adaptation and cross-wise interference cancellation between the cells. The benefits of CoMP transmission have been studied over multi-cell channels recorded in an urban macro-cell scenario.

127 citations


Proceedings ArticleDOI
15 Sep 2009
TL;DR: This paper presents a novel around-device interaction interface that allows mobile devices to track coarse hand gestures performed above the device's screen, and provides a rough overview of the design space of ADI-based interfaces.
Abstract: In this paper we explore the design space of around-device interaction (ADI). This approach seeks to expand the interaction possibilities of mobile and wearable devices beyond the confines of the physical device itself to include the space around it. This enables rich 3D input, comprising coarse movement-based gestures, as well as static position-based gestures. ADI can help to solve occlusion problems and scales down to very small devices. We present a novel around-device interaction interface that allows mobile devices to track coarse hand gestures performed above the device's screen. Our prototype uses infrared proximity sensors to track hand and finger positions in the device's proximity. We present an algorithm for detecting hand gestures and provide a rough overview of the design space of ADI-based interfaces.
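
A minimal sketch of the kind of coarse gesture detection the prototype above performs with infrared proximity sensors, assuming just two sensors (left/right) and a simple threshold-crossing rule; the actual prototype uses several sensors and a richer algorithm.

```python
def detect_swipe(samples, threshold=0.5):
    """Toy detector for a coarse left/right swipe above the device.
    `samples` is a time-ordered list of (left_ir, right_ir) proximity
    readings normalised to [0, 1]; the sensor that fires first gives
    the swipe direction."""
    first = None
    for left, right in samples:
        if first is None:
            if left > threshold:
                first = "left"
            elif right > threshold:
                first = "right"
        elif first == "left" and right > threshold:
            return "swipe left-to-right"
        elif first == "right" and left > threshold:
            return "swipe right-to-left"
    return "no gesture"

print(detect_swipe([(0.1, 0.0), (0.7, 0.1), (0.6, 0.8), (0.1, 0.2)]))
```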

Proceedings ArticleDOI
16 Mar 2009
TL;DR: This paper proposes a rate-efficient codec designed for tree-based retrieval by encoding a tree histogram, which can achieve a more than 5x rate reduction compared to sending compressed feature descriptors.
Abstract: For mobile image matching applications, a mobile device captures a query image, extracts descriptive features, and transmits these features wirelessly to a server. The server recognizes the query image by comparing the extracted features to its database and returns information associated with the recognition result. For slow links, query feature compression is crucial for low-latency retrieval. Previous image retrieval systems transmit compressed feature descriptors, which is well suited for pairwise image matching. For fast retrieval from large databases, however, scalable vocabulary trees are commonly employed. In this paper, we propose a rate-efficient codec designed for tree-based retrieval. By encoding a tree histogram, our codec can achieve a more than 5x rate reduction compared to sending compressed feature descriptors. By discarding the order amongst a list of features, histogram coding requires 1.5x lower rate than sending a tree node index for every feature. A statistical analysis is performed to study how the entropy of encoded symbols varies with tree depth and the number of features.
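
A rough sketch of the tree-histogram idea described above: each query descriptor is quantised down a vocabulary tree and only the per-leaf counts are transmitted, discarding feature order. The flat `centroids[level][node]` layout is a simplifying assumption for illustration, not the paper's data structure or codec.

```python
import numpy as np

def tree_histogram(descriptors, centroids, branch=10, depth=3):
    """Quantise each query descriptor down a vocabulary tree and count how
    many land in each leaf; the (mostly zero) counts are what would then be
    entropy-coded and transmitted instead of per-feature node indices.
    centroids[level][node] is assumed to hold the (branch, dim) child
    centroids of `node` at that level."""
    hist = np.zeros(branch ** depth, dtype=int)
    for d in descriptors:
        node = 0
        for level in range(depth):
            children = centroids[level][node]
            best = int(np.argmin(np.linalg.norm(children - d, axis=1)))
            node = node * branch + best          # descend to the chosen child
        hist[node] += 1                          # leaf index after `depth` levels
    return hist
```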

Proceedings Article
22 Apr 2009
TL;DR: This paper presents a novel BFT state machine replication protocol called Zeno that trades consistency for higher availability, replacing strong consistency with a weaker guarantee (eventual consistency): clients can temporarily miss each other's updates, but when the network is stable the states from the individual partitions are merged by having the replicas agree on a total order for all requests.
Abstract: Many distributed services are hosted at large, shared, geographically diverse data centers, and they use replication to achieve high availability despite the unreachability of an entire data center. Recent events show that non-crash faults occur in these services and may lead to long outages. While Byzantine-Fault Tolerance (BFT) could be used to withstand these faults, current BFT protocols can become unavailable if a small fraction of their replicas are unreachable. This is because existing BFT protocols favor strong safety guarantees (consistency) over liveness (availability). This paper presents a novel BFT state machine replication protocol called Zeno that trades consistency for higher availability. In particular, Zeno replaces strong consistency (linearizability) with a weaker guarantee (eventual consistency): clients can temporarily miss each other's updates but when the network is stable the states from the individual partitions are merged by having the replicas agree on a total order for all requests. We have built a prototype of Zeno and our evaluation using micro-benchmarks shows that Zeno provides better availability than traditional BFT protocols.

Proceedings ArticleDOI
13 May 2009
TL;DR: In this paper, a stochastic game theoretic approach to security and intrusion detection in communication and computer networks is studied; the game is formulated as a non-cooperative zero-sum or nonzero-sum stochastic game.
Abstract: This paper studies a stochastic game theoretic approach to security and intrusion detection in communication and computer networks. Specifically, an Attacker and a Defender take part in a two-player game over a network of nodes whose security assets and vulnerabilities are correlated. Such a network can be modeled using weighted directed graphs with the edges representing the influence among the nodes. The game can be formulated as a non-cooperative zero-sum or nonzero-sum stochastic game. However, due to correlation among the nodes, if some nodes are compromised, the effective security assets and vulnerabilities of the remaining ones will not stay the same in general, which leads to complex system dynamics. We examine existence, uniqueness, and structure of the solution and also provide numerical examples to illustrate our model.
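
For the zero-sum formulation mentioned above, the Attacker's optimal mixed strategy at a single stage can be computed with a standard linear program. This is a generic sketch of that textbook step, independent of the paper's correlated-asset dynamics; the example payoff matrix is made up.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Optimal mixed strategy and game value for the row (maximizing) player
    of a zero-sum matrix game with payoff matrix A."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # linprog minimizes, so minimize -v
    # For every Defender column j:  v - sum_i A[i, j] * x_i <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]      # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Attacker payoff for attacking node 1 or 2 while the Defender protects one of them
A = np.array([[0.0, 3.0],
              [2.0, 0.0]])
x, v = solve_zero_sum(A)
print(x, v)        # mixed strategy ~[0.4, 0.6], game value 1.2
```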

Proceedings ArticleDOI
18 Jan 2009
TL;DR: It is shown that image and feature matching algorithms are robust to significantly compressed features, and a strong correlation between MSE and matching error for feature points and images is established.
Abstract: We investigate transform coding to efficiently store and transmit SIFT and SURF image descriptors. We show that image and feature matching algorithms are robust to significantly compressed features. We achieve near-perfect image matching and retrieval for both SIFT and SURF using ∼2 bits/dimension. When applied to SIFT and SURF, this provides a 16× compression relative to conventional floating point representation. We establish a strong correlation between MSE and matching error for feature points and images. Feature compression enables many applications that may not otherwise be possible, especially on mobile devices.
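
A minimal transform-coding sketch in the spirit of the result above: decorrelate descriptors with a PCA basis, then uniformly quantise each transformed dimension to about 2 bits. It only illustrates the ∼2 bits/dimension operating point; it is not the paper's codec.

```python
import numpy as np

def transform_encode(descriptors, bits_per_dim=2):
    """Decorrelate SIFT/SURF descriptors with PCA (the 'transform'), then
    uniformly quantise each transformed dimension to `bits_per_dim` bits.
    Returns the quantised indices plus the side information needed to
    reconstruct approximate descriptors at the server."""
    X = descriptors - descriptors.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA basis from the data
    Y = X @ Vt.T                                       # transformed coefficients
    levels = 2 ** bits_per_dim
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    step = (hi - lo) / levels + 1e-12
    q = np.clip(((Y - lo) / step).astype(int), 0, levels - 1)
    return q, (Vt, lo, step)

q, side_info = transform_encode(np.random.rand(500, 128).astype(np.float32))
print(q.shape, q.max())    # 500 descriptors, indices in [0, 3] at 2 bits/dimension
```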

Proceedings ArticleDOI
21 Sep 2009
TL;DR: This paper describes and discusses the deployment at Stanford University of OpenRoads, a testbed that allows multiple network experiments to be conducted concurrently in a production network.
Abstract: We have built and deployed OpenRoads [11], a testbed that allows multiple network experiments to be conducted concurrently in a production network. For example, multiple routing protocols, mobility managers and network access controllers can run simultaneously in the same network. In this paper, we describe and discuss our deployment of the testbed at Stanford University. We focus on the challenges we faced deploying in a production network, and the tools we built to overcome these challenges. Our goal is to gain enough experience for other groups to deploy OpenRoads in their campus network.

Proceedings ArticleDOI
19 Apr 2009
TL;DR: The authors find that mobile social networks are very robust to the distribution of altruism due to the availability of multiple paths, and they study the impact of topologies and traffic patterns.
Abstract: Many kinds of communication networks, in particular social and opportunistic networks, rely at least partly on humans to help move data across the network. Human altruistic behavior is an important factor determining the feasibility of such a system. In this paper, we study the impact of different distributions of altruism on the throughput and delay of a mobile social communication system. We evaluate the system performance using four experimental human mobility traces with uniform and community-biased traffic patterns. We found that mobile social networks are very robust to the distributions of altruism due to the nature of multiple paths. We further confirm the results by simulations on two popular social network models. To the best of our knowledge, this is the first complete study of the impact of altruism on mobile social networks, including the impact of topologies and traffic patterns.

Proceedings ArticleDOI
14 Jun 2009
TL;DR: This paper studies two-player security games which can be viewed as sequences of nonzero-sum matrix games played by an Attacker and a Defender and discusses both the classical FP and the stochastic FP, where for the latter the payoff function of each player includes an entropy term to randomize its own strategy, which could be interpreted as a way of concealing its true strategy.
Abstract: We study two-player security games which can be viewed as sequences of nonzero-sum matrix games played by an Attacker and a Defender. At each stage of the game iterations, the players make imperfect observations of each other's previous actions. The underlying decision process can be viewed as a fictitious play (FP) game, but what differentiates this class from the standard one is that the communication channels that carry action information from one player to the other, or the sensor systems, are error prone. Two possible scenarios are addressed in the paper: (i) if the error probabilities associated with the sensor systems are known to the players, then our analysis provides guidelines for each player to reach a Nash equilibrium (NE), which is related to the NE of the underlying static game; (ii) if the error probabilities are not known to the players, then we study the effect of observation errors on the convergence to the NE and the final outcome of the game. We discuss both the classical FP and the stochastic FP, where for the latter the payoff function of each player includes an entropy term to randomize its own strategy, which can be interpreted as a way of concealing its true strategy.
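
A small simulation sketch of classical fictitious play with error-prone observations, as studied above: each player best-responds to the observed empirical frequency of the other's actions, and every observation is corrupted with probability `err`. The payoff matrices and error model are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def noisy_fictitious_play(A_att, A_def, steps=1000, err=0.1, seed=0):
    """Classical two-player fictitious play over nonzero-sum payoff matrices
    A_att (Attacker) and A_def (Defender), both of shape (m, n). Each player
    observes the opponent's move through an error-prone channel that replaces
    it with a uniformly random action with probability `err`."""
    rng = np.random.default_rng(seed)
    m, n = A_att.shape
    att_counts, def_counts = np.ones(m), np.ones(n)   # observed action frequencies
    for _ in range(steps):
        a = int(np.argmax(A_att @ (def_counts / def_counts.sum())))  # Attacker best response
        d = int(np.argmax((att_counts / att_counts.sum()) @ A_def))  # Defender best response
        # each side observes the other's move through a noisy sensor
        a_obs = a if rng.random() > err else rng.integers(m)
        d_obs = d if rng.random() > err else rng.integers(n)
        att_counts[a_obs] += 1
        def_counts[d_obs] += 1
    return att_counts / att_counts.sum(), def_counts / def_counts.sum()
```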

Proceedings ArticleDOI
08 Jun 2009
TL;DR: The problem of Identity Theft is discussed and behavioral biometrics is proposed as a solution, a survey of existing studies and list the challenges and propose solutions.
Abstract: The increase of online services, such as eBanks, WebMails, in which users are verified by a username and password, is increasingly exploited by Identity Theft procedures. Identity Theft is a fraud, in which someone pretends to be someone else is order to steal money or get other benefits. To overcome the problem of Identity Theft an additional security layer is required. Within the last decades the option of verifying users based on their keystroke dynamics was proposed during login verification. Thus, the imposter has to be able to type in a similar way to the real user in addition to having the username and password. However, verifying users upon login is not enough, since a logged station/mobile is vulnerable for imposters when the user leaves her machine. Thus, verifying users continuously based on their activities is required. Within the last decade there is a growing interest and use of biometrics tools, however, these are often costly and require additional hardware. Behavioral biometrics, in which users are verified, based on their keyboard and mouse activities, present potentially a good solution. In this paper we discuss the problem of Identity Theft and propose behavioral biometrics as a solution. We survey existing studies and list the challenges and propose solutions.

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This work modifies the robust soliton distribution of LT codes at the broadcaster, based on the number of input symbols already decoded at the receivers, and shows that significant savings can be achieved even with a low number of feedback messages transmitted at a uniform rate.
Abstract: The erasure resilience of rateless codes, such as Luby-Transform (LT) codes, makes them particularly suitable to a wide variety of loss-prone wireless and sensor network applications, ranging from digital video broadcast to software updates. Yet, traditional rateless codes usually make no use of a feedback communication channel, a feature available in many wireless settings. As such, we generalize LT codes to situations where receiver(s) provide feedback to the broadcaster. Our approach, referred to as Shifted LT (SLT) code, modifies the robust soliton distribution of LT codes at the broadcaster, based on the number of input symbols already decoded at the receivers. While implementing this modification entails little change to the LT encoder and decoder, we show, both analytically and through real experiments, that it achieves significant savings in communication complexity, memory usage, and overall energy consumption. Furthermore, we show that significant savings can be achieved even with a low number of feedback messages (on the order of the square root of the total number of input symbols) transmitted at a uniform rate. The practical benefits of Shifted LT codes are demonstrated through the implementation of a real over-the-air programming application for sensor networks, based on the Deluge protocol.
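
A rough sketch of the degree-shifting idea described above: the encoder samples a degree from the robust soliton distribution over the symbols the receiver has not yet decoded, then scales it up, since a fraction decoded/k of a randomly chosen symbol's neighbours will already be known. The parameter choices (c, delta) are common LT defaults used only for illustration.

```python
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution over k input symbols (standard LT)."""
    R = c * np.log(k / delta) * np.sqrt(k)
    spike = min(max(int(k / R), 1), k)
    rho = np.array([1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)])
    tau = np.zeros(k)
    for d in range(1, spike):
        tau[d - 1] = R / (d * k)
    tau[spike - 1] += R * np.log(R / delta) / k
    mu = rho + tau
    return mu / mu.sum()

def shifted_lt_degree(k, decoded, rng=np.random.default_rng()):
    """Sample an encoding-symbol degree when `decoded` of the k input symbols
    are already known at the receiver: draw from the robust soliton over the
    remaining symbols, then shift (scale) the degree so that, after removing
    already-known neighbours, it behaves like a fresh LT code."""
    remaining = k - decoded
    d = rng.choice(np.arange(1, remaining + 1), p=robust_soliton(remaining))
    return min(k, round(d * k / remaining))

print(shifted_lt_degree(k=1000, decoded=600))
```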

Journal ArticleDOI
TL;DR: This paper introduces an Internet traffic anomaly detection mechanism based on large deviations results for empirical measures, using both a model-free approach and a model-based approach that models traffic with a Markov modulated process, and shows that even short-lived anomalies are identified within a small number of observations.
Abstract: We introduce an Internet traffic anomaly detection mechanism based on large deviations results for empirical measures. Using past traffic traces we characterize network traffic during various time-of-day intervals, assuming that it is anomaly-free. We present two different approaches to characterize traffic: (i) a model-free approach based on the method of types and Sanov's theorem, and (ii) a model-based approach modeling traffic using a Markov modulated process. Using these characterizations as a reference we continuously monitor traffic and employ large deviations and decision theory results to "compare" the empirical measure of the monitored traffic with the corresponding reference characterization, thus, identifying traffic anomalies in real-time. Our experimental results show that, applying our methodology, anomalies (even short-lived ones) are identified within a small number of observations. Throughout, we compare the two approaches presenting their advantages and disadvantages to identify and classify temporal network anomalies. We also demonstrate how our framework can be used to monitor traffic from multiple network elements in order to identify both spatial and temporal anomalies. We validate our techniques by analyzing real traffic traces with time-stamped anomalies.
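
A model-free sketch in the spirit of the method-of-types approach above: by Sanov's theorem, the probability that anomaly-free traffic produces an empirical distribution far from the reference decays roughly as exp(-n * KL(empirical || reference)), so a large scaled divergence over a monitoring window signals an anomaly. The binning and threshold are deployment-specific assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

def is_anomalous(window, reference_dist, bin_edges, threshold):
    """Flag a monitoring window as anomalous when n * KL(empirical || reference)
    exceeds a threshold, where n is the number of observations in the window.
    `reference_dist` is the anomaly-free histogram learned from past traces
    over the same `bin_edges`."""
    counts, _ = np.histogram(window, bins=bin_edges)
    n = max(counts.sum(), 1)
    p_hat = counts / n
    score = n * kl_divergence(p_hat, reference_dist)
    return score > threshold, score
```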

Proceedings Article
16 Oct 2009
TL;DR: In this article, it is shown that the energy consumption of a telecommunication network increases, but at a reduced slope compared to the assumed traffic volume increase, and that the major energy consumption portion shifts from access to backbone networks with rising traffic volume.
Abstract: The energy consumption of a telecommunication network increases, but at a reduced slope compared to the assumed traffic volume increase. The major energy consumption portion shifts from access to backbone networks with rising traffic volume.

Journal ArticleDOI
TL;DR: In this paper, an analytical framework for optimal rate allocation based on observed available bit rate (ABR) and round-trip time (RTT) over each access network and video distortion-rate (DR) characteristics is proposed.
Abstract: We consider the problem of rate allocation among multiple simultaneous video streams sharing multiple heterogeneous access networks. We develop and evaluate an analytical framework for optimal rate allocation based on observed available bit rate (ABR) and round-trip time (RTT) over each access network and video distortion-rate (DR) characteristics. The rate allocation is formulated as a convex optimization problem that minimizes the total expected distortion of all video streams. We present a distributed approximation of its solution and compare its performance against H∞-optimal control and two heuristic schemes based on TCP-style additive-increase-multiplicative-decrease (AIMD) principles. The various rate allocation schemes are evaluated in simulations of multiple high-definition (HD) video streams sharing multiple access networks. Our results demonstrate that, in comparison with heuristic AIMD-based schemes, both media-aware allocation and H∞-optimal control benefit from proactive congestion avoidance and reduce the average packet loss rate from 45% to below 2%. Improvement in average received video quality ranges from 1.5 to 10.7 dB in PSNR for various background traffic loads and video playout deadlines. Media-aware allocation further exploits its knowledge of the video DR characteristics to achieve a more balanced video quality among all streams.
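
A much-simplified worked instance of the media-aware allocation above, assuming one shared bottleneck, the common distortion-rate model D_i(R_i) = D0_i + theta_i/(R_i - R0_i), and parameters for which every stream stays above R0_i; the KKT conditions then give a closed form in which each stream's extra rate is proportional to sqrt(theta_i). The numbers in the example are made up.

```python
import numpy as np

def allocate_rates(theta, R0, capacity):
    """Minimise sum_i D0_i + theta_i / (R_i - R0_i)  s.t.  sum_i R_i = capacity.
    Equating the derivatives (KKT) yields R_i = R0_i + s * sqrt(theta_i),
    with s chosen so the rates use exactly the available capacity."""
    theta = np.asarray(theta, float)
    R0 = np.asarray(R0, float)
    s = (capacity - R0.sum()) / np.sqrt(theta).sum()
    return R0 + s * np.sqrt(theta)

# Two HD streams sharing a 6 Mbit/s access link (illustrative numbers)
print(allocate_rates(theta=[2.0, 8.0], R0=[0.3, 0.5], capacity=6.0))
# -> [2.03..., 3.96...]  (rates sum to 6.0; the steeper D-R curve gets more rate)
```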


Proceedings ArticleDOI
16 Feb 2009
TL;DR: A thrifty water faucet is presented that is able to move and behave in a life-like manner and to step into dialogue with the user; possible implications for the design of future human-machine interfaces are discussed.
Abstract: In this paper, we present a novel type of persuasive home appliance: a thrifty water faucet. Through a servo motor construction, it is enabled to move and behave in life-like manners and to step into dialogue with the user, for example about water consumption or hygiene. We sought to research the reactions of users to such an appliance, alongside possible implications for the design of future human-machine interfaces. This project is part of a larger series of experiments in the Living Interfaces project, exploring ways in which reduced and abstract life-like movements can be beneficial for Human-Machine Interaction.

Journal ArticleDOI
TL;DR: A theoretical analysis of the sensor capabilities via a design space is provided and concrete examples of how different sensors can facilitate interactive performance on these devices are shown.
Abstract: Mobile phones offer an attractive platform for interactive music performance. We provide a theoretical analysis of the sensor capabilities via a design space and show concrete examples of how different sensors can facilitate interactive performance on these devices. These sensors include cameras, microphones, accelerometers, magnetometers and multitouch screens. The interactivity through sensors in turn informs aspects of live performance as well as composition through persistence, scoring, and mapping to musical notes or abstract sounds.

Proceedings ArticleDOI
04 Apr 2009
TL;DR: This paper attempts to overcome the problem of switching attention between the magic lens and the information in the background by using a lightweight mobile camera projector unit to augment the paper map directly with additional information.
Abstract: The advantages of paper-based maps have been utilized in the field of mobile Augmented Reality (AR) in the last few years. Traditional paper-based maps provide high-resolution, large-scale information with zero power consumption. There are numerous implementations of magic lens interfaces that combine high-resolution paper maps with dynamic handheld displays. From an HCI perspective, the main challenge of magic lens interfaces is that users have to switch their attention between the magic lens and the information in the background. In this paper, we attempt to overcome this problem by using a lightweight mobile camera projector unit to augment the paper map directly with additional information. The "Map Torchlight" is tracked over a paper map and can precisely highlight points of interest, streets, and areas to give directions or other guidance for interacting with the map.


Proceedings ArticleDOI
25 Sep 2009
TL;DR: This paper embarks on a quest to find out what characterizes a potential killer application for Delay Tolerant Networking, and highlights some of the main challenges that need to be solved to realize such applications and make DTNs a part of the mainstream network landscape.
Abstract: Delay Tolerant Networking (DTN) has attracted a lot of attention from the research community in recent years. Much work has been done regarding network architectures and algorithms for routing and forwarding in such networks. At the same time as many show enthusiasm for this exciting new research area, there are also many sceptics who question the usefulness of research in this area. In the past, we have seen other research areas become over-hyped and later die out as there was no killer app for them that made them useful in real scenarios. Real deployments of DTN systems have so far mostly been limited to a few niche scenarios, where they have been done as proof-of-concept field tests in research projects. In this paper, we embark upon a quest to find out what characterizes a potential killer application for DTNs. Are there applications and situations where DTNs provide services that could not be achieved otherwise, or have the potential to do so in a better way than other techniques? Further, we highlight some of the main challenges that need to be solved to realize these applications and make DTNs a part of the mainstream network landscape.

Proceedings Article
04 Nov 2009
TL;DR: This year's IMC paid particular attention to upholding IMC's salient features: the explicit encouragement of publications re-appraising previous findings on new data sets, and the co-existence, in the program, of both full-length and short papers.
Abstract: It is our great pleasure to welcome you to the 9th ACM Internet Measurement Conference -- IMC 2009. This year's conference continues its tradition of being the premier forum for the dissemination of research results on furthering our understanding of how to collect or analyze Internet measurements, to give insight into how the Internet behaves. We paid particular attention to uphold IMC's salient features: the explicit encouragement of publications re-appraising previous findings on new data sets (something traditionally relegated to journal publications), and the co-existence, in the program, of both full-length and short papers. Short papers are intended to convey exciting work in progress with potentially less mature results. This year, we raised the short paper size by one page to 7 pages (as opposed to 14 pages for full papers), in an attempt to relieve the difficulty authors usually face in fitting a full set of references in their short submissions. IMC is a very selective conference. This year, however, in response to the will of the IMC steering committee and without compromising on quality, a record number of 41 papers were accepted out of 183 submissions. More precisely, 27 full papers were accepted out of 115 submissions, and 14 short papers were accepted out of 68 submissions. As a consequence, the conference has now grown, for the first time, to a full 3-day, more inclusive program. In selecting the final program, we were assisted by 22 highly skilled technical program committee members, who put an incredible amount of time, effort and professionalism into the selection process. We are indebted to them, as well as to our external reviewers, for their help and thank them all for their dedication. The paper selection process was carried out as 4 successive phases: in phase 1, all submitted papers received 2 reviews; this phase identified 123 papers for further consideration and that received at least an additional review during the second reviewing phase. This was followed by a phase of intensive on-line discussions which resulted in 80 papers being selected for final consideration and thorough discussions during the TPC meeting that was held in Berlin, Germany in July, and which was attended by the vast majority of the program committee members.