
Showing papers by "Hannes Hartenstein published in 2010"


BookDOI
01 Jan 2010
TL;DR: This book covers VANET Convenience and Efficiency Applications, a Design Framework for Realistic Vehicular Mobility Models, and the challenges of Data Security in Vehicular Networks, among other topics.
Abstract: Foreword. About the Editors. Preface. Acknowledgements. List of Contributors.
1 Introduction (Hannes Hartenstein and Kenneth P. Laberteaux). 1.1 Basic Principles and Challenges. 1.2 Past and Ongoing VANET Activities. 1.3 Chapter Outlines. 1.4 References.
2 Cooperative Vehicular Safety Applications (Derek Caveney). 2.1 Introduction. 2.2 Enabling Technologies. 2.3 Cooperative System Architecture. 2.4 Mapping for Safety Applications. 2.5 VANET-enabled Active Safety Applications. 2.6 References.
3 Information Dissemination in VANETs (Christian Lochert, Bjorn Scheuermann and Martin Mauve). 3.1 Introduction. 3.2 Obtaining Local Measurements. 3.3 Information Transport. 3.4 Summarizing Measurements. 3.5 Geographical Data Aggregation. 3.6 Conclusion. 3.7 References.
4 VANET Convenience and Efficiency Applications (Martin Mauve and Bjorn Scheuermann). 4.1 Introduction. 4.2 Limitations. 4.3 Applications. 4.4 Communication Paradigms. 4.5 Probabilistic, Area-based Aggregation. 4.6 Travel Time Aggregation. 4.7 Conclusion. 4.8 References.
5 Vehicular Mobility Modeling for VANETs (Jerome Harri). 5.1 Introduction. 5.2 Notation Description. 5.3 Random Models. 5.4 Flow Models. 5.5 Traffic Models. 5.6 Behavioral Models. 5.7 Trace or Survey-based Models. 5.8 Integration with Network Simulators. 5.9 A Design Framework for Realistic Vehicular Mobility Models. 5.10 Discussion and Outlook. 5.11 Conclusion. 5.12 References.
6 Physical Layer Considerations for Vehicular Communications (Ian Tan and Ahmad Bahai). 6.1 Standards Overview. 6.2 Previous Work. 6.3 Wireless Propagation Theory. 6.4 Channel Metrics. 6.5 Measurement Theory. 6.6 Empirical Channel Characterization at 5.9 GHz. 6.7 Future Directions. 6.8 Conclusion. 6.9 Appendix: Deterministic Multipath Channel Derivations. 6.10 Appendix: LTV Channel Response. 6.11 Appendix: Measurement Theory Details. 6.12 References.
7 MAC Layer and Scalability Aspects of Vehicular Communication Networks (Jens Mittag, Felix Schmidt-Eisenlohr, Moritz Killat, Marc Torrent-Moreno and Hannes Hartenstein). 7.1 Introduction: Challenges and Requirements. 7.2 A Survey on Proposed MAC Approaches for VANETs. 7.3 Communication Based on IEEE 802.11p. 7.4 Performance Evaluation and Modeling. 7.5 Aspects of Congestion Control. 7.6 Open Issues and Outlook. 7.7 References.
8 Efficient Application Level Message Coding and Composition (Craig L Robinson). 8.1 Introduction to the Application Environment. 8.2 Message Dispatcher. 8.3 Example Applications. 8.4 Data Sets. 8.5 Predictive Coding. 8.6 Architecture Analysis. 8.7 Conclusion. 8.8 References.
9 Data Security in Vehicular Communication Networks (Andre Weimerskirch, Jason J Haas, Yih-Chun Hu and Kenneth P Laberteaux). 9.1 Introduction. 9.2 Challenges of Data Security in Vehicular Networks. 9.3 Network, Applications, and Adversarial Model. 9.4 Security Infrastructure. 9.5 Cryptographic Protocols. 9.6 Privacy Protection Mechanisms. 9.7 Implementation Aspects. 9.8 Outlook and Conclusions. 9.9 References.
10 Standards and Regulations (John B Kenney). 10.1 Introduction. 10.2 Layered Architecture for VANETs. 10.3 DSRC Regulations. 10.4 DSRC Physical Layer Standard. 10.5 DSRC Data Link Layer Standard (MAC and LLC). 10.6 DSRC Middle Layers. 10.7 DSRC Message Sublayer. 10.8 Summary. 10.9 Abbreviations and Acronyms. 10.10 References.
Index.

702 citations


Proceedings ArticleDOI
30 Dec 2010
TL;DR: The results indicate that a suboptimal gear choice can void the benefits of the speed adaptation, and the first results of a scale-up simulation using a real-world inner-city road network are presented.
Abstract: “Smart” vehicles of the future are envisioned to aid their drivers in reducing fuel consumption and emissions by wirelessly receiving phase-shifting information from the traffic lights in their vicinity and computing an optimized speed in order to avoid braking and acceleration maneuvers. Previous studies have demonstrated the potential environmental benefit in small-scale simulation scenarios. To assess the overall benefit, large-scale simulations are required. To ensure computational feasibility, the applied simulation models need to be simplified as far as possible without sacrificing credibility. Therefore, this work presents the results of a sensitivity analysis and identifies gear choice and the distance from the traffic light at which vehicles are informed as key influencing factors. Our results indicate that a suboptimal gear choice can void the benefits of the speed adaptation. Furthermore, we present the first results of a scale-up simulation using a real-world inner-city road network and discuss the range in which we expect the real-world savings in fuel consumption to lie.
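
To make the speed-advisory idea concrete, here is a minimal sketch (our illustration, not the paper's model): a constant approach speed is chosen so that the vehicle reaches the stop line during a green window, avoiding a stop-and-go maneuver. All function names and parameter values are assumptions.

def advised_speed(distance_m, t_green_start, t_green_end, v_max=13.9, v_min=5.0):
    """Return a constant speed (m/s) that reaches the stop line within the green
    window, or None if no feasible speed exists and the vehicle must stop.
    v_max of 13.9 m/s corresponds to an assumed 50 km/h inner-city limit."""
    t_earliest = distance_m / v_max          # earliest possible arrival time
    t_latest = distance_m / v_min            # latest acceptable arrival time
    t_arrival = max(t_earliest, t_green_start)
    if t_arrival > min(t_latest, t_green_end):
        return None                          # cannot glide through on green
    return distance_m / t_arrival

# Example: informed 300 m before the light, green phase from t=30 s to t=50 s.
print(round(advised_speed(300.0, 30.0, 50.0), 1))   # -> 10.0 m/s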

167 citations


Proceedings ArticleDOI
01 Dec 2010
TL;DR: The study shows that UMTS will likely suffer from capacity limitations while LTE could perform reasonably well, and the focus is on the random access performance of the uplink channel.
Abstract: Vehicular safety communication promises to reduce accidents by assistance systems such as cross-traffic assistance. The information exchange is mostly foreseen to be handled via Dedicated Short Range Communication (DSRC). At intersections, DSRC reception is likely to be problematic due to Non-Line-Of-Sight reception conditions. Alternatively, the required information exchange could also be handled via cellular systems. While cellular systems provide potentially better coverage, they impose other performance constraints. This paper analyzes the suitability of UMTS and LTE for cross-traffic assistance as a worst case application in terms of load and latency demands. It investigates capacity and latency characteristics and discusses influence factors on performance as well as operational aspects. The focus is on the random access performance of the uplink channel. While cellular systems might have some advantages over DSRC, the study shows that UMTS will likely suffer from capacity limitations while LTE could perform reasonably well.
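
As a rough illustration of the uplink random access bottleneck discussed here (a textbook-style simplification, not the paper's model), the sketch below estimates the probability that a vehicle's randomly chosen preamble collides with no other vehicle in the same access opportunity; the value of 64 preambles is an assumption.

def p_no_collision(n_vehicles, n_preambles=64):
    """Probability that a tagged vehicle's preamble is picked by no other vehicle
    contending in the same random access opportunity (uniform random choice)."""
    return (1.0 - 1.0 / n_preambles) ** (n_vehicles - 1)

for n in (5, 20, 50, 100):
    print(n, round(p_no_collision(n), 3))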

63 citations


Journal ArticleDOI
TL;DR: This book brings together the various paper publications by Kerner and his coauthors in a concise and readable manner.
Abstract: This book brings together the various paper publications by Kerner and his coauthors in a concise and readable manner.

58 citations


Journal ArticleDOI
TL;DR: A novel proactive congestion control policy for vehicular ad-hoc networks is proposed, in which each vehicle's communication parameters are adapted based on its individual application requirements, while the channel load is globally minimised to prevent channel congestion.
Abstract: This letter proposes a novel proactive congestion control policy for vehicular ad-hoc networks, in which each vehicle's communication parameters are adapted based on its individual application requirements. Contrary to other approaches, where transmission resources tend to be assigned based on system-level performance metrics, the technique proposed in this letter aims to individually satisfy the target application performance of each vehicle, while globally minimising the channel load to prevent channel congestion.
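
A minimal sketch of the general idea, under our own assumptions rather than the letter's actual policy: each vehicle beacons at the rate its application needs, and rates are scaled back proportionally, but never below the per-vehicle minimum, whenever the aggregate channel load would exceed a target.

def channel_load(vehicles, airtime_s=0.0005):
    # Approximate load as the sum of beacon airtimes per second (fraction of channel time).
    return sum(v["rate_hz"] * airtime_s for v in vehicles)

def adapt(vehicles, max_load=0.6):
    load = channel_load(vehicles)
    if load <= max_load:
        return vehicles
    scale = max_load / load
    # Scale every vehicle's rate, but never below its application's minimum requirement.
    for v in vehicles:
        v["rate_hz"] = max(v["min_rate_hz"], v["rate_hz"] * scale)
    return vehicles

fleet = [{"rate_hz": 10.0, "min_rate_hz": 2.0} for _ in range(300)]
print(round(channel_load(fleet), 2))          # offered load before adaptation (1.5)
print(round(channel_load(adapt(fleet)), 2))   # load after proportional scaling (0.6)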

55 citations


Proceedings ArticleDOI
18 Apr 2010
TL;DR: The architecture of a physical layer emulator for OFDM-based IEEE 802.11 communications, incorporated into the popular NS-3 simulator, is outlined, and initial results are presented that highlight the promise of the new architecture in providing more detailed simulations to the networking community.
Abstract: Many of the simulations reported in wireless networking literature contain several abstractions at the physical layer and the corresponding channel models. In particular, the basic simulation unit assumed in such simulations is the frame (or packet), which omits considerations of the signal processing details at the physical layer, such as frame construction and reception. Due to this abstraction, available channel models for network simulators are applied to frames as a whole and cannot reflect properly the effects of fast fading or frequency-selective channels. Moreover, it is not possible to study the mechanisms of the physical layer and their impact on higher layers such as the MAC. Therefore, we propose to address the lack of accurate physical layer representation in modern network simulators by incorporating a physical layer emulator for OFDM-based IEEE 802.11 communications into the popular NS-3 simulator. In this paper, we outline the architecture of the physical layer emulator and present initial results which highlight the promise of the new architecture in providing more detailed simulations to the networking community. The additional memory and computational requirements of the new model are also discussed.
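
The abstraction gap can be illustrated with a toy example (ours, not the emulator's code): a frame-level model applies a single gain to the whole packet, whereas a per-subcarrier view of the same two-tap multipath channel exposes the frequency selectivity that the emulator is designed to capture. Uses numpy; the channel taps and 52 data subcarriers are assumptions.

import numpy as np

n_sub = 52                              # data subcarriers of a 20 MHz 802.11 OFDM symbol
taps = np.array([1.0, 0.6])             # simple two-tap multipath channel (assumed)
H = np.fft.fft(taps, n=64)[:n_sub]      # per-subcarrier channel transfer function

per_subcarrier_gain_db = 20 * np.log10(np.abs(H))
frame_level_gain_db = 20 * np.log10(np.mean(np.abs(H)))

print("frame-level gain  : %.1f dB" % frame_level_gain_db)
print("subcarrier spread : %.1f .. %.1f dB"
      % (per_subcarrier_gain_db.min(), per_subcarrier_gain_db.max()))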

51 citations


16 May 2010
TL;DR: This work extends the OMNeT++ Mobility Framework to support probabilistic propagation models, provides implementations of the Log-Normal Shadowing, Nakagami, Rayleigh and Rice wave propagation models, and sets up a framework that allows easy integration of additional models in the future.
Abstract: When performing wireless network simulations, the lack of precise channel modeling in simulator frameworks becomes a serious problem. Often, deterministic models are used for packet propagation, which describe real conditions insufficiently. To close this gap, we extended the OMNeT++ Mobility Framework to support probabilistic propagation models. We provide an implementation of the Log-Normal Shadowing, Nakagami, Rayleigh and Rice wave propagation models and set up a framework that allows easy integration of additional models in the future. Due to the characteristics of probabilistic radio models, a fixed maximum packet propagation range leads to inaccurate simulation results, as relevant events may be suppressed. On the other hand, unlimited packet propagation, which guarantees correct simulation runs, causes unnecessary simulation overhead. In this work we present an approach to limit the event delivery to the area where the probability that the event is relevant to the simulation exceeds an adjustable threshold. In order to validate our extensions, we successfully performed a detailed crosscheck with the network simulator NS-2 and ran a performance evaluation and comparison.
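
The range-limiting idea can be sketched as follows (parameter values and helper names are our assumptions, not the authors' implementation): a transmission event is delivered only to nodes within the largest distance at which the reception probability under log-normal shadowing still exceeds a configurable threshold.

import math

def reception_probability(d, pt_dbm=20.0, sens_dbm=-85.0, pl0_db=47.0,
                          d0=1.0, alpha=2.0, sigma_db=4.0):
    """P(received power >= receiver sensitivity) at distance d under log-normal shadowing."""
    mean_rx = pt_dbm - (pl0_db + 10 * alpha * math.log10(d / d0))
    z = (sens_dbm - mean_rx) / sigma_db
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def max_interaction_distance(threshold=0.01, step=1.0, d_max=5000.0):
    """Largest distance (m) whose reception probability still exceeds the threshold."""
    d = step
    while d < d_max and reception_probability(d) > threshold:
        d += step
    return d - step

print(max_interaction_distance())   # events beyond this range are not delivered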

41 citations


Proceedings ArticleDOI
13 Sep 2010
TL;DR: BitMON is a Java-based out-of-the-box platform for monitoring the BitTorrent DHT; it tracks the DHT's size in peers as well as the peers' IP addresses, port numbers, countries of origin and session lengths, and the long-term evolution of these indicators can be displayed graphically.
Abstract: The distributed hash table (DHT) formed by BitTorrent has become very popular as a basis for various kinds of services. Services based on this DHT often assume certain characteristics of the DHT. For instance, for realizing a decentralized bootstrapping service, a minimum number of peers running on a certain port is required. However, key characteristics change over time. Our measurements show that, for example, the number of concurrent users grew from 5 million to over 7 million during the last months. To make reliable assumptions it is thus essential to monitor the P2P network. This demo presents BitMON, a Java-based out-of-the-box platform for monitoring the BitTorrent DHT. The tool not only crawls the network, but also automatically analyzes the collected data and visualizes the results. BitMON monitors the DHT's size in peers as well as the peers' IP addresses, port numbers, countries of origin and session lengths. Also, the long-term evolution of these indicators can be displayed graphically. Furthermore, BitMON is designed as a framework and can easily be extended or adapted to monitor other P2P networks.
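
A minimal sketch of the kind of aggregation such a monitor performs (illustrative Python, whereas BitMON itself is written in Java): crawler snapshots of peer records are reduced to the indicators named above, such as network size, port distribution and session length. The record layout is an assumption.

from collections import Counter
from statistics import median

def analyze(snapshot):
    """Aggregate one crawl snapshot of peer records into summary indicators."""
    size = len({p["node_id"] for p in snapshot})
    ports = Counter(p["port"] for p in snapshot)
    sessions = [p["last_seen"] - p["first_seen"] for p in snapshot]
    return {"peers": size,
            "top_ports": ports.most_common(3),
            "median_session_s": median(sessions) if sessions else 0}

demo = [{"node_id": i, "ip": "198.51.100.%d" % (i % 250), "port": 6881,
         "first_seen": 0, "last_seen": 1800 + i} for i in range(1000)]
print(analyze(demo))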

20 citations


Proceedings ArticleDOI
23 May 2010
TL;DR: The paper presents a thorough analysis of the Sybil attack w.r.t. the resource requirements to operate Sybil nodes, a distributed approach to limit the impact of Sybil attacks effectively, and a new approach called RACING to improve the resistance of DHTs against Sybil attacks.
Abstract: Current peer-to-peer (P2P) systems are vulnerable to a variety of attacks due to the lack of a central authorization authority. The Sybil attack, i.e., the forging of multiple identities, is crucial as it can enable an attacker to control a substantial fraction or even the entire P2P system. However, the correlation between the resources available to an attacker and the resulting influence on the P2P system has not yet been studied in detail. The contributions of our paper are twofold: i) we present an approach for assessing the actual threats of Sybil attacks and ii) we propose a distributed approach to limit the impact of Sybil attacks effectively. To this end, we conduct a thorough analysis of the Sybil attack w.r.t. the resource requirements to operate Sybil nodes and we investigate the quantitative influence of Sybil nodes on the overall system. Our study focuses on Kademlia, a very popular distributed hash table (DHT) which is, for instance, used in BitTorrent. We ran extensive Internet measurements within the BitTorrent DHT to determine the resources actually required to operate nodes. To evaluate the quantitative influence of Sybil nodes, we additionally conducted a comprehensive simulation study. The results show that upstream network bandwidth is the dominating factor concerning resources. Furthermore, we illustrate that small portions of Sybil nodes are tolerable in terms of global system stability. Finally, we propose a new approach called RACING to improve the resistance of DHTs against Sybil attacks. By establishing a new distributed identity registration procedure based on IP addresses, we are able to effectively limit the number of Sybil nodes.
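
The registration idea behind an IP-based limit can be sketched roughly as follows (our reading, with assumed parameters, not the RACING protocol specification): node IDs are derived from a hash of the registrant's IP address plus a small index, so each IP can claim only a bounded, verifiable set of identities.

import hashlib

IDS_PER_IP = 4   # assumed policy limit, not a value from the paper

def eligible_ids(ip):
    """The full set of node IDs an IP address may legitimately register."""
    return {hashlib.sha1(("%s/%d" % (ip, i)).encode()).hexdigest()
            for i in range(IDS_PER_IP)}

def is_valid(node_id, ip):
    """Any peer can re-derive the set and reject IDs that do not match the claimed IP."""
    return node_id in eligible_ids(ip)

nid = sorted(eligible_ids("203.0.113.7"))[0]
print(is_valid(nid, "203.0.113.7"), is_valid(nid, "203.0.113.8"))  # True False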

19 citations


Proceedings ArticleDOI
16 May 2010
TL;DR: A "simulation-as-a-service" approach consisting of two building blocks: a web-interface/server front-end used to remotely configure the simulation of ITS applications and a back-end consisting of a controller and a High Capacity Computing platform using Kernel-based Virtual Machine (KVM) to run the remote simulations independently of the required simulation environment.
Abstract: Due to its high demand in processing power, the large-scale simulation of ITS applications is likely to also benefit from the separation of the usage of a resource or a service from its physical location on remote High Capacity Computing (HCC) platforms, as introduced by cloud computing. In this paper, we propose a "simulation-as-a-service" approach consisting of two building blocks: a web-interface/server front-end used to remotely configure the simulation of ITS applications, and a back-end consisting of a controller and a High Capacity Computing (HCC) platform using Kernel-based Virtual Machine (KVM) to run the remote simulations independently of the required simulation environment. Our demonstration illustrates how simulations of ITS applications requiring a large number of executions can be configured on a web-interface and remotely run in parallel on HCC platforms.
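
The dispatch pattern can be sketched with a local stand-in (the real back-end uses a web front-end and KVM-based virtual machines): a controller fans independent simulation configurations out to workers and collects the results. All names and the placeholder "simulation" are assumptions.

from concurrent.futures import ProcessPoolExecutor
import random

def run_simulation(config):
    # Placeholder for launching one simulation run inside a worker VM.
    random.seed(config["seed"])
    return {"config": config, "metric": random.random()}

def dispatch(configs, workers=4):
    # The controller: distribute independent runs and gather their results.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_simulation, configs))

if __name__ == "__main__":
    sweep = [{"density": d, "seed": s} for d in (50, 100, 200) for s in range(3)]
    for result in dispatch(sweep):
        print(result["config"], round(result["metric"], 3))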

12 citations


Proceedings ArticleDOI
25 Mar 2010
TL;DR: The thesis is that the distributed character and heterogeneity of involved systems requires appropriate information-consistency mechanisms that go beyond what is offered by current FIM protocols and software in order to avoid inconsistencies in identity information.
Abstract: Collaborations by the use of inter-organizational business processes can help companies to achieve a competitive edge over competing businesses. Typically, these collaborations require an efficient identity management (IdM) that ensures authorized access to services in different security domains. The successful implementation of an IdM in distributed systems requires coping with a diversity of systems and managing the challenges of integration. While integration should not introduce an unnecessary degree of dependence and complexity, various IdM goals should be achieved by integration: in particular, collaboration-wide consistency of identity information. Due to its decentralized and modular design, a federated identity management (FIM) approach is a promising strategy in distributed systems. Our thesis is that the distributed character and heterogeneity of the involved systems require appropriate information-consistency mechanisms that go beyond what is offered by current FIM protocols and software in order to avoid inconsistencies in identity information. In this paper we identify causes leading to inconsistencies in FIM. We present requirements necessary to cope with the consistency issue and analyze research, FIM standards and protocols w.r.t. the stated requirements. The analysis shows that FIM does not consider the consistency issue sufficiently. However, we point out which parts can be used as building blocks to achieve information consistency. Therefore, we design a system, called FedWare, that combines identity-related middleware services with existing FIM technologies. To provide an efficient integration of systems, we reduce development effort by providing reusable services. By decoupling systems, e.g., via a publish/subscribe mechanism, we reduce operation effort.

Proceedings ArticleDOI
19 Jul 2010
TL;DR: A consistency model for identity information in distributed systems named ID-consistency is introduced based on a formalization of identity information and considers semantic and causal relations as well as a so-called inconsistency window that denotes the time period between a change to information and the moment when the change is fully disseminated.
Abstract: In distributed IT systems, replication of information is commonly used to strengthen the fault tolerance on a technical level or the autonomy of an organization on a business level. In particular, information related to the identity of a user, which is used to authorize service access, is often replicated for these reasons. To ensure correct authorization decisions, replicas have to be kept consistent. However, an appropriate definition of “consistency” is required that takes into account the following aspects: (i) semantic and causal relations between identity information, and (ii) temporal aspects with respect to an acceptable duration of the dissemination of occurring attribute changes. Both identity-information specifics and temporal aspects are not addressed sufficiently by existing consistency models. In this paper we introduce a consistency model for identity information in distributed systems named ID-consistency. ID-consistency is based on a formalization of identity information and considers semantic and causal relations as well as a so-called inconsistency window that denotes the time period between a change to information and the moment when the change is fully disseminated. Thereby, the model reveals the fundamental structure of an IdM system and helps in the design and analysis of corresponding dissemination middleware in distributed systems. Using CardSpace for demonstration purposes, we show by example how to make use of the concept of ID-consistency to analyze and improve a real-world IdM system.
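
The inconsistency window can be illustrated in a few lines (function names are ours, not the paper's formalization): it is the time from an attribute change at the authoritative source until the last replica has applied it.

def inconsistency_window(change_time, replica_update_times):
    """Seconds between an attribute change and its full dissemination to all replicas."""
    return max(replica_update_times) - change_time

# A user's role change at t=0 s reaches three relying services at 2 s, 5 s and 31 s.
print(inconsistency_window(0.0, [2.0, 5.0, 31.0]))  # -> 31.0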

Proceedings ArticleDOI
01 Oct 2010
TL;DR: An approach is proposed that allows controlled data dissemination based on automated user approval by introducing an additional party called the Identity Delegate, designed with the following central ideas in mind: user centricity, privacy, and efficiency.
Abstract: The growing number of IT services in distributed systems increases the need to allow users to keep track of which personal data is retained by which service. User-centric federated identity management (FIM) tackles this goal by enabling users to approve each data dissemination between the providers of identity-related information, so-called identity providers (IdPs), and the consumers of this information, the service providers. To prevent a single IdP from gaining a comprehensive set of user information, user-centric FIM motivates the use of multiple IdPs, even though this distribution of responsibilities might result in information redundancy and therefore raises consistency issues. User-centric FIM systems do not cope with information consistency sufficiently, mainly because these systems require that each dissemination of user attributes is manually approved by the user. We propose an approach, named User-Controlled Automated Identity Delegation, that allows a controlled data dissemination based on an automated user approval by introducing an additional party called the Identity Delegate. The Identity Delegate is designed in consideration of the following central ideas: (i) user centricity - all data dissemination is still under user control, (ii) privacy - the delegate cannot read or gather personal data, (iii) efficiency - the effort to integrate and operate the delegate within an existing FIM system is kept low. We report on the experience gained with an implementation based on Windows CardSpace.

Proceedings ArticleDOI
24 Sep 2010
TL;DR: This work shows that the ratio of simulation-based capacity estimates and the upper bound is similar for a wide range of system configurations and that the communication system may only be used up to 22% of its upper capacity bound such that service requirements can still be fulfilled.
Abstract: The periodic transmission of status updates by all vehicles in a vehicular network represents a service primitive that forms the basis for many envisioned applications, in particular safety-related ones. Due to the limited resources that a wireless communication system like IEEE 802.11p is capable of providing, the question arises of how much data each node may offer to the system such that the information can still be delivered with the quality of service required by the applications. In this work, the local broadcast capacity is introduced together with straightforward upper and lower bounds, and is estimated by extensive detailed simulations. We show that the ratio of the simulation-based capacity estimates to the upper bound is similar for a wide range of system configurations and that the communication system may only be used up to 22% of its upper capacity bound such that service requirements can still be fulfilled.
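
A back-of-the-envelope version of the bound discussed above (the numbers are assumptions, not the paper's simulation setup): an idealized upper bound on the number of vehicles that can beacon within mutual range is the channel bit rate divided by each vehicle's offered bits per second, of which, per the study, only about 22% remains usable once service requirements are enforced.

def upper_bound_vehicles(bitrate_bps, msg_bytes, rate_hz):
    """Idealized bound: total channel bit rate divided by per-vehicle offered load."""
    return bitrate_bps / (msg_bytes * 8 * rate_hz)

# Assumed setup: 6 Mbit/s 802.11p channel, 400-byte beacons at 10 Hz per vehicle.
bound = upper_bound_vehicles(6_000_000, 400, 10)
print(int(bound), int(0.22 * bound))   # idealized bound vs. ~22% usable estimate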

Book ChapterDOI
01 Jan 2010
TL;DR: The authors focus on portal services and identity management as the centerpieces of technical innovation in information management, and present organizational innovations dedicated to questions of IT governance and IT compliance.
Abstract: This contribution describes key results achieved since 2005 at the University of Karlsruhe and the Karlsruhe Institute of Technology with respect to the technical and organizational integration of information management. In particular, the portal services and identity management are the focus of the technical innovation. In addition, two organizational innovations are presented that are dedicated specifically to questions of IT governance and IT compliance. Finally, the key lessons learned in the course of building up an integrated information management are discussed. As an outlook, we relate these to the ever-present problems and associated challenges of such undertakings: challenges that have been, are, and will remain.

Proceedings ArticleDOI
28 Jan 2010
TL;DR: This work proposes a system design framework for IAM that can be used to evaluate different design decisions in advance, since systematic engineering of IAM systems demands criteria and metrics to differentiate architectural approaches.
Abstract: Identity and access management (IAM) systems are used to assure authorized access to services in distributed environments. The design decisions of IAM systems, in particular the arrangement of the involved components, have a significant impact on the performance, access control accuracy, and costs of the overall system. Hence, the systematic engineering of IAM systems demands criteria and metrics to differentiate architectural approaches. Therefore, we propose a system design framework for IAM that can be used to evaluate different design decisions in advance.

Book ChapterDOI
01 Jan 2010
TL;DR: This work uses the HP XC4000 for an extensive and detailed sensitivity analysis in order to evaluate the robustness and performance of communication protocols as well as to capture the complex characteristics of such systems in terms of an empirical model.
Abstract: Over the past several years, there has been significant interest and progress in using wireless communication technologies for vehicular environments in order to increase traffic safety and efficiency. Due to the fact that these systems are still under development and large-scale tests based on real hardware are difficult to manage, simulations are a widely-used and cost-efficient method to explore such scenarios. Furthermore, simulations provide a possibility to look at specific aspects individually and to identify major influencing effects out of a wide range of configurations. In this context, we use the HP XC4000 for an extensive and detailed sensitivity analysis in order to evaluate the robustness and performance of communication protocols as well as to capture the complex characteristics of such systems in terms of an empirical model.

Journal ArticleDOI
TL;DR: This article describes how the cloud platform Apache Hadoop can be used for distributed simulations ("Howto") and presents the corresponding performance evaluations and usage experiences.
Abstract: This article describes how the cloud platform Apache Hadoop can be used for distributed simulations ("Howto") and presents the corresponding performance evaluations as well as usage experiences. Discrete event-based simulations typically require considerable computing capacity, since the associated models usually have a large parameter space to explore. Up to now, mostly dedicated clusters have been used to handle these tasks. Cloud computing now offers immense computing resources in a flexible and cost-efficient manner. Consequently, cloud computing can provide a potential basis for the distributed execution of simulation studies without having to operate an on-site computing cluster. However, the manual distribution of hundreds or thousands of simulation runs is very laborious. Therefore, this work presents an approach that automates the distribution of simulation runs using the so-called MapReduce concept. MapReduce makes it possible to divide very computation-intensive tasks into smaller, independent subtasks and is well established in the context of cloud computing. The focus of this work is on the distribution of mutually independent and thus trivially parallelizable simulation runs, i.e., for example, runs with different parameter values. First, the implementation of the two required functions, Map and Reduce, is presented. The subsequent performance evaluation was carried out using the open-source implementation Apache Hadoop and the cloud environment Amazon Elastic Compute Cloud (EC2).
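
A hypothetical Hadoop-Streaming-style sketch of the two functions (Python stand-ins; the article's implementation uses Apache Hadoop and EC2 directly): the mapper executes one independent simulation run per input parameter line, and the reducer merges per-run results per parameter value.

import sys, random

def mapper(lines):
    # Each input line is one parameter set, e.g. "density=100 seed=7".
    for line in lines:
        if not line.strip():
            continue
        params = dict(kv.split("=") for kv in line.split())
        random.seed(int(params["seed"]))
        result = random.random()                 # placeholder for a real simulation run
        print("%s\t%f" % (params["density"], result))

def reducer(lines):
    # Input lines are "key<TAB>value"; emit the mean result per parameter value.
    sums, counts = {}, {}
    for line in lines:
        if not line.strip():
            continue
        key, value = line.strip().split("\t")
        sums[key] = sums.get(key, 0.0) + float(value)
        counts[key] = counts.get(key, 0) + 1
    for key in sums:
        print("%s\t%f" % (key, sums[key] / counts[key]))

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)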

01 Jan 2010
TL;DR: In this article, the authors investigated first steps towards finding a theoretical foundation for inter-vehicle communication and presented a sketch of a roadmap for future work in this direction, based on the work of a working group.
Abstract: This working group investigated first steps towards finding a theoretical foundation for inter-vehicle communication. The main outcome is a sketch of a roadmap for future work in this direction.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: This paper introduces a simulation management approach that ensures reproducibility and traceability of simulation runs and improves the efficiency of simulation processes by automating common simulation tasks.
Abstract: For the credibility of simulation results, reproducibility of simulation runs is a must. However, reproducibility requires a thorough management of all data involved in the simulation process. The corresponding management of data can be error-prone and time-consuming if performed manually. In this paper we introduce a simulation management approach that ensures reproducibility and traceability of simulation runs and improves the efficiency of simulation processes by automating common simulation tasks. We implemented our approach as an Eclipse plugin. We show that information gained by explicit simulation management can be used to automatically organize and archive all data necessary to reproduce a simulation. While our tool was implemented with a focus on the network simulator ns-2, our concepts can be applied to other simulation environments, too.
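
The reproducibility bookkeeping can be sketched as follows (illustrative only; the authors' tool is an Eclipse plugin targeting ns-2): for every run, the exact input files (by content hash), the random seed and the simulator version are written to a small manifest. File names and fields are assumptions.

import hashlib, json, time

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def write_manifest(run_id, input_files, seed, simulator_version, out="manifest.json"):
    # Everything needed to re-run the simulation later, in one archivable record.
    manifest = {
        "run_id": run_id,
        "timestamp": time.time(),
        "seed": seed,
        "simulator_version": simulator_version,
        "inputs": {p: file_hash(p) for p in input_files},
    }
    with open(out, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest

# Example (assumed file names):
# write_manifest("run-042", ["scenario.tcl"], seed=12345, simulator_version="ns-2.35")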

01 Jan 2010
TL;DR: This entry lists award-winning papers from 2009 to 2011, including work on scalar quantization for relative error and an SAT-based scheme to determine optimal fix-free codes.
Abstract:
2011: John Z. Sun, Massachusetts Institute of Technology, "Scalar Quantization For Relative Error" (coauthored with Vivek Goyal).
2010: Navid Abedini, Texas A&M University, "A SAT-Based Scheme to Determine Optimal Fix-free Codes" (coauthored with Sunil Khatri and Serap Savari).
2009: Pavol Hanus, Technische Universitat Munchen, "Source Coding Scheme for Multiple Sequence Alignments" (coauthored with Janis Dingel, Georg Chalkidis, Joachim Hagenauer).