
Showing papers by "University of Passau" published in 2013


Journal ArticleDOI
TL;DR: A survey of current research in the Virtual Network Embedding (VNE) area is presented; based on a novel classification scheme, a taxonomy of current approaches to the VNE problem is provided, and opportunities for further research are discussed.
Abstract: Network virtualization is recognized as an enabling technology for the future Internet. It aims to overcome the resistance of the current Internet to architectural change. Application of this technology relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as "Virtual Network Embedding (VNE)" algorithms. This paper presents a survey of current research in the VNE area. Based upon a novel classification scheme for VNE algorithms a taxonomy of current approaches to the VNE problem is provided and opportunities for further research are discussed.

1,174 citations
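To make the embedding problem itself concrete, the following is a hypothetical toy sketch, not an algorithm from the survey: virtual nodes are placed greedily on the substrate nodes with the most spare CPU, while link mapping, bandwidth, and all service-relevant metrics are omitted.

```python
# Hypothetical toy sketch of the Virtual Network Embedding problem (not from
# the survey): greedily map each virtual node onto the substrate node with the
# most spare CPU capacity.

def greedy_node_mapping(virtual_cpu, substrate_cpu):
    """virtual_cpu: {node: CPU demand}; substrate_cpu: {node: CPU capacity}."""
    spare = dict(substrate_cpu)
    mapping = {}
    # place the most demanding virtual nodes first
    for v, demand in sorted(virtual_cpu.items(), key=lambda kv: -kv[1]):
        candidates = [s for s in spare
                      if spare[s] >= demand and s not in mapping.values()]
        if not candidates:
            return None  # embedding request rejected
        best = max(candidates, key=lambda s: spare[s])
        mapping[v] = best
        spare[best] -= demand
    return mapping

if __name__ == "__main__":
    request = {"a": 30, "b": 20}                    # virtual network request
    substrate = {"s1": 50, "s2": 40, "s3": 10}      # substrate node capacities
    print(greedy_node_mapping(request, substrate))  # {'a': 's1', 'b': 's2'}
```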


Proceedings ArticleDOI
02 Sep 2013
TL;DR: A sparse autoencoder method for feature transfer learning in speech emotion recognition, in which a common emotion-specific mapping rule is learnt from a small set of labelled data in a target domain, significantly improving performance relative to learning each source domain independently.
Abstract: In speech emotion recognition, training and test data used for system development usually tend to fit each other perfectly, but further 'similar' data may be available. Transfer learning helps to exploit such similar data for training despite the inherent dissimilarities in order to boost a recogniser's performance. In this context, this paper presents a sparse autoencoder method for feature transfer learning for speech emotion recognition. In our proposed method, a common emotion-specific mapping rule is learnt from a small set of labelled data in a target domain. Then, newly reconstructed data are obtained by applying this rule on the emotion-specific data in a different domain. The experimental results evaluated on six standard databases show that our approach significantly improves the performance relative to learning each source domain independently.

335 citations
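A rough, illustrative sketch of the general idea, assuming synthetic data: a reconstruction mapping is learnt on target-domain features and then applied to source-domain features. Here sklearn's MLPRegressor, fitted to reconstruct its own input, stands in for the paper's sparse autoencoder (no sparsity penalty).

```python
# Illustrative sketch only: a plain single-hidden-layer autoencoder stands in
# for the paper's *sparse* autoencoder. All feature dimensions and data are
# synthetic.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# small labelled target-domain feature set (e.g. frames of one emotion class)
X_target = rng.normal(size=(100, 20))
# larger feature set from a different (source) domain
X_source = rng.normal(loc=0.5, size=(1000, 20))

# 1) learn a compact reconstruction mapping on the target domain
autoencoder = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                           max_iter=2000, random_state=0)
autoencoder.fit(X_target, X_target)

# 2) apply the learnt mapping to source-domain data; the reconstructed features
#    are pulled towards the target-domain representation and can be used to
#    train a recogniser
X_source_reconstructed = autoencoder.predict(X_source)
print(X_source_reconstructed.shape)  # (1000, 20)
```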


Proceedings ArticleDOI
11 Nov 2013
TL;DR: A variability-aware approach to performance prediction via statistical learning that works progressively with random samples, without additional effort to detect feature interactions is proposed.
Abstract: Configurable software systems allow stakeholders to derive program variants by selecting features. Understanding the correlation between feature selections and performance is important for stakeholders to be able to derive a program variant that meets their requirements. A major challenge in practice is to accurately predict performance based on a small sample of measured variants, especially when features interact. We propose a variability-aware approach to performance prediction via statistical learning. The approach works progressively with random samples, without additional effort to detect feature interactions. Empirical results on six real-world case studies demonstrate an average of 94% prediction accuracy based on small random samples. Furthermore, we investigate why the approach works by a comparative analysis of performance distributions. Finally, we compare our approach to an existing technique and guide users to choose one or the other in practice.

180 citations
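The following toy illustrates the idea of predicting from a small random sample, under the assumption of a made-up system with one feature interaction; it uses a regression tree as the statistical learner and is not the paper's implementation.

```python
# Toy illustration: measure a small random sample of configurations, fit a
# regression tree on the binary feature selections, and predict the
# performance of unseen variants. The synthetic "system" is invented.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

def measure(config):
    """Synthetic performance: base cost + per-feature costs + one interaction."""
    f1, f2, f3 = config
    return 100 + 20 * f1 + 35 * f2 + 5 * f3 + 40 * f1 * f2

all_configs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

# measure only a small random sample of variants
sample = rng.choice(len(all_configs), size=5, replace=False)
X_train = np.array([all_configs[i] for i in sample])
y_train = np.array([measure(c) for c in X_train])

model = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# predict the performance of every variant, including unmeasured ones
for cfg in all_configs:
    print(cfg, "predicted:", model.predict([cfg])[0], "actual:", measure(cfg))
```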


Journal ArticleDOI
TL;DR: In this article, the benefits of deviating from the status quo, as well as the concerns unique to certain non-net-neutrality scenarios, are identified; a framework for policy decisions is derived, and it is discussed how the concept of neutrality extends to other parts of the Internet ecosystem.
Abstract: This paper is intended as an introduction to the debate on net neutrality and as a progress report on the growing body of academic literature on this issue. Different non-net neutrality scenarios are discussed and structured along the two dimensions of network and pricing regime. With this approach, the consensus on the benefits of a deviation from the status quo as well as the concerns that are unique to certain non-net neutrality scenarios can be identified. Moreover, a framework for policy decisions is derived and it is discussed how the concept of neutrality extends to other parts of the Internet ecosystem.

163 citations


Proceedings ArticleDOI
18 May 2013
TL;DR: A model-checking tool chain for C-based and Java-based product lines, called SPLverifier, is developed, which is used to compare sample-based and family-based strategies with regard to verification performance and the ability to find defects.
Abstract: Product-line technology is increasingly used in mission-critical and safety-critical applications. Hence, researchers are developing verification approaches that follow different strategies to cope with the specific properties of product lines. While the research community is discussing the mutual strengths and weaknesses of the different strategies - mostly at a conceptual level - there is a lack of evidence in terms of case studies, tool implementations, and experiments. We have collected and prepared six product lines as subject systems for experimentation. Furthermore, we have developed a model-checking tool chain for C-based and Java-based product lines, called SPLverifier, which we use to compare sample-based and family-based strategies with regard to verification performance and the ability to find defects. Based on the experimental results and an analytical model, we revisit the discussion of the strengths and weaknesses of product-line-verification strategies.

158 citations


Proceedings ArticleDOI
18 Aug 2013
TL;DR: A key finding is that variability-aware analysis outperforms most sampling heuristics with respect to analysis time while preserving completeness.
Abstract: The advent of variability management and generator technology enables users to derive individual variants from a variable code base based on a selection of desired configuration options. This approach gives rise to the generation of possibly billions of variants that, however, cannot be efficiently analyzed for errors with classic analysis techniques. To address this issue, researchers and practitioners usually apply sampling heuristics. While sampling reduces the analysis effort significantly, the information obtained is necessarily incomplete and it is unknown whether sampling heuristics scale to billions of variants. Recently, researchers have begun to develop variability-aware analyses that analyze the variable code base directly exploiting the similarities among individual variants to reduce analysis effort. However, while being promising, so far, variability-aware analyses have been applied mostly only to small academic systems. To learn about the mutual strengths and weaknesses of variability-aware and sampling-based analyses of software systems, we compared the two strategies by means of two concrete analysis implementations (type checking and liveness analysis), applied them to three subject systems: Busybox, the x86 Linux kernel, and OpenSSL. Our key finding is that variability-aware analysis outperforms most sampling heuristics with respect to analysis time while preserving completeness.

152 citations


Journal ArticleDOI
TL;DR: This work proposes an approach to predict a product's non-functional properties based on the product's feature selection, shows how even little domain knowledge can improve predictions, and discusses trade-offs between accuracy and the required number of measurements.
Abstract: Context: A software product line is a family of related software products, typically created from a set of common assets. Users select features to derive a product that fulfills their needs. Users often expect a product to have specific non-functional properties, such as a small footprint or a bounded response time. Because a product line may have an exponential number of products with respect to its features, it is usually not feasible to generate and measure non-functional properties for each possible product. Objective: Our overall goal is to derive optimal products with respect to non-functional requirements by showing customers which features must be selected. Method: We propose an approach to predict a product's non-functional properties based on the product's feature selection. We aggregate the influence of each selected feature on a non-functional property to predict a product's properties. We generate and measure a small set of products and, by comparing measurements, we approximate each feature's influence on the non-functional property in question. As a research method, we conducted controlled experiments and evaluated prediction accuracy for the non-functional properties footprint and main-memory consumption. But, in principle, our approach is applicable to all quantifiable non-functional properties. Results: With nine software product lines, we demonstrate that our approach predicts the footprint with an average accuracy of 94%, and an accuracy of over 99% on average if feature interactions are known. In a further series of experiments, we predicted main-memory consumption of six customizable programs and achieved an accuracy of 89% on average. Conclusion: Our experiments suggest that, with only a few measurements, it is possible to accurately predict non-functional properties of products of a product line. Furthermore, we show how even little domain knowledge can improve predictions and discuss trade-offs between accuracy and the required number of measurements. With this technique, we provide a basis for many reasoning and product-derivation approaches.

123 citations
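A hedged sketch of the basic mechanism, assuming a synthetic measurement function and ignoring feature interactions (which the paper handles with additional measurements): approximate each feature's influence by comparing a minimal product with products that each add one feature, then predict any variant by summing the influences of its selected features.

```python
# Sketch of influence-based prediction of a non-functional property (here:
# footprint). The measurement function below is synthetic; interactions are
# deliberately ignored.

FEATURES = ["compression", "encryption", "logging"]

def measure_footprint(selection):
    """Stand-in for building and measuring a real product (footprint in KB)."""
    cost = {"compression": 120, "encryption": 200, "logging": 40}
    return 500 + sum(cost[f] for f in selection)

# 1) measure the minimal product and one product per feature
base_value = measure_footprint(set())
influence = {f: measure_footprint({f}) - base_value for f in FEATURES}

# 2) predict an arbitrary product from the aggregated per-feature influences
def predict(selection):
    return base_value + sum(influence[f] for f in selection)

wanted = {"compression", "logging"}
print("predicted:", predict(wanted), "measured:", measure_footprint(wanted))
```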


Proceedings ArticleDOI
01 Oct 2013
TL;DR: The system classifies 30-second recordings of 10 different acoustic scenes using large-scale audio feature extraction; SVMs are compared with a nearest-neighbour classifier and an approach called Latent Perceptual Indexing, with SVMs achieving the best results.
Abstract: This work describes a system for acoustic scene classification using large-scale audio feature extraction. It is our contribution to the Scene Classification track of the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (D-CASE). The system classifies 30-second-long recordings of 10 different acoustic scenes. From the highly variable recordings, a large number of spectral, cepstral, energy and voicing-related audio features are extracted. Using a sliding-window approach, classification is performed on short windows. SVMs are used to classify these short segments, and a majority-voting scheme is employed to get a decision for longer recordings. On the official development set of the challenge, an accuracy of 73% is achieved. SVMs are compared with a nearest-neighbour classifier and an approach called Latent Perceptual Indexing, whereby SVMs achieve the best results. A feature analysis using the t-statistic shows that Mel spectra are among the most relevant features.

117 citations
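A minimal illustration of the pipeline described above, assuming synthetic feature vectors in place of the real spectral, cepstral, energy and voicing features: each short window is classified with an SVM and a majority vote over all windows yields the label of the whole recording.

```python
# Sliding-window SVM classification with majority voting over a recording.
# Feature vectors are synthetic stand-ins for real audio features.

import numpy as np
from collections import Counter
from sklearn.svm import SVC

rng = np.random.default_rng(42)
N_SCENES, DIM = 3, 24

# toy training data: windows of each scene cluster around a scene-specific mean
means = rng.normal(scale=3.0, size=(N_SCENES, DIM))
X_train = np.vstack([m + rng.normal(size=(200, DIM)) for m in means])
y_train = np.repeat(np.arange(N_SCENES), 200)

clf = SVC(kernel="linear").fit(X_train, y_train)

def classify_recording(window_features):
    """Classify each short window, then majority-vote for the recording."""
    window_labels = clf.predict(window_features)
    return Counter(window_labels).most_common(1)[0][0]

# a "30 s recording" of scene 1, chopped into 60 short windows
recording = means[1] + rng.normal(size=(60, DIM))
print("predicted scene:", classify_recording(recording))
```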


Journal ArticleDOI
TL;DR: In this work, several basic evaluation metrics are introduced, and state-of-the-art dynamic key management schemes are classified into different groups and summarized based on these metrics.

115 citations


Journal ArticleDOI
TL;DR: A holistic view of the FEATUREHOUSE approach is provided, based on rich experience with numerous languages and case studies and on reflections on several years of research; the approach unifies languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs).
Abstract: Superimposition is a composition technique that has been applied successfully in many areas of software development. Although superimposition is a general-purpose concept, it has been (re)invented and implemented individually for various kinds of software artifacts. We unify languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs). On the basis of the FST model, we propose a general approach to the composition of software artifacts written in different languages. Furthermore, we offer a supporting framework and tool chain, called FEATUREHOUSE. We use attribute grammars to automate the integration of additional languages. In particular, we have integrated Java, C#, C, Haskell, Alloy, and JavaCC. A substantial number of case studies demonstrate the practicality and scalability of our approach and reveal insights into the properties that a language must have in order to be ready for superimposition. We discuss perspectives of our approach and demonstrate how we extended FEATUREHOUSE with support for XML languages (in particular, XHTML, XMI/UML, and Ant) and alternative composition approaches (in particular, aspect weaving). Rounding off our previous work, we provide here a holistic view of the FEATUREHOUSE approach based on rich experience with numerous languages and case studies and reflections on several years of research.

112 citations
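A hedged, language-independent sketch of superimposition on feature structure trees: inner nodes with the same name and type are merged recursively and remaining children are concatenated. Real FEATUREHOUSE additionally composes leaves (e.g. method bodies) with language-specific rules; in this simplification the later feature's leaf simply overrides the earlier one.

```python
# Simplified FST superimposition: recursive merge of compatible inner nodes,
# leaf override instead of language-specific leaf composition.

class FSTNode:
    def __init__(self, name, kind, children=None, content=None):
        self.name, self.kind = name, kind
        self.children = children or []
        self.content = content  # only set for leaves (terminal nodes)

def superimpose(a, b):
    if a.name != b.name or a.kind != b.kind:
        raise ValueError("nodes are not compatible")
    if a.content is not None or b.content is not None:
        # leaf composition: later feature overrides (simplification)
        return FSTNode(a.name, a.kind,
                       content=b.content if b.content is not None else a.content)
    merged, b_children = [], list(b.children)
    for child_a in a.children:
        match = next((c for c in b_children
                      if c.name == child_a.name and c.kind == child_a.kind), None)
        if match:
            b_children.remove(match)
            merged.append(superimpose(child_a, match))
        else:
            merged.append(child_a)
    return FSTNode(a.name, a.kind, children=merged + b_children)

# two features contributing to the same class
base = FSTNode("Stack", "class", [FSTNode("push", "method", content="push v1")])
extra = FSTNode("Stack", "class", [FSTNode("push", "method", content="push v2"),
                                   FSTNode("undo", "method", content="undo v1")])
composed = superimpose(base, extra)
print([(c.name, c.content) for c in composed.children])
# [('push', 'push v2'), ('undo', 'undo v1')]
```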


Book ChapterDOI
16 Mar 2013
TL;DR: This work presents an approach that integrates abstraction and interpolation-based refinement into an explicit-value analysis, i.e., a program analysis that tracks explicit values for a specified set of variables (the precision).
Abstract: Abstraction, counterexample-guided refinement, and interpolation are techniques that are essential to the success of predicate-based program analysis. These techniques have not yet been applied together to explicit-value program analysis. We present an approach that integrates abstraction and interpolation-based refinement into an explicit-value analysis, i.e., a program analysis that tracks explicit values for a specified set of variables (the precision). The algorithm uses an abstract reachability graph as central data structure and a path-sensitive dynamic approach for precision adjustment. We evaluate our algorithm on the benchmark set of the Competition on Software Verification 2012 (SV-COMP'12) to show that our new approach is highly competitive. We also show that combining our new approach with an auxiliary predicate analysis scores significantly higher than the SV-COMP'12 winner.
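A small, hedged illustration of the core idea of an explicit-value analysis: concrete values are tracked only for the variables in the current precision, everything else is abstracted to "unknown". The tiny straight-line program and the manual precision refinement below are invented for the example; the paper's algorithm works on an abstract reachability graph with interpolation-based refinement.

```python
# Toy explicit-value analysis: values are tracked only for variables in the
# precision; untracked variables are abstracted to TOP ("any value").

TOP = "TOP"

def step(state, var, expr, precision):
    """Execute one assignment 'var := expr(state)' under the given precision."""
    new_state = dict(state)
    new_state[var] = expr(state) if var in precision else TOP
    return new_state

program = [
    ("x", lambda s: 1),
    ("y", lambda s: s["x"] + 2 if s["x"] != TOP else TOP),
]

def run(precision):
    state = {}
    for var, expr in program:
        state = step(state, var, expr, precision)
    return state

print(run(precision={"x"}))        # {'x': 1, 'y': 'TOP'}: too coarse to prove y == 3
print(run(precision={"x", "y"}))   # refined precision: {'x': 1, 'y': 3}
```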

Proceedings ArticleDOI
26 Oct 2013
TL;DR: The nature of feature interactions is explored systematically and comprehensively and classified in terms of order and visibility; understanding this nature will have significant implications for research in this area, for example, on the efficiency of interaction-detection or performance-prediction techniques.
Abstract: The feature-interaction problem has been keeping researchers and practitioners in suspense for years. Although there has been substantial progress in developing approaches for modeling, detecting, managing, and resolving feature interactions, we lack sufficient knowledge on the kind of feature interactions that occur in real-world systems. In this position paper, we set out the goal to explore the nature of feature interactions systematically and comprehensively, classified in terms of order and visibility. Understanding this nature will have significant implications on research in this area, for example, on the efficiency of interaction-detection or performance-prediction techniques. A set of preliminary results as well as a discussion of possible experimental setups and corresponding challenges give us confidence that this endeavor is within reach but requires a collaborative effort of the community.

Book ChapterDOI
Dirk Beyer1
16 Mar 2013
TL;DR: This report describes the 2nd International Competition on Software Verification (SV-COMP 2013), the second edition of this thorough evaluation of fully automatic verifiers for software programs.
Abstract: This report describes the 2nd International Competition on Software Verification (SV-COMP 2013), which is the second edition of this thorough evaluation of fully automatic verifiers for software programs. The reported results represent the 2012 state-of-the-art in automatic software verification, in terms of effectiveness and efficiency, and as available and participated. The benchmark set of verification tasks consists of 2 315 programs, written in C, and exposing features of integers, heap-data structures, bit-vector operations, and concurrency; the properties include reachability and memory safety. The competition is again organized as a satellite event of TACAS.

Journal ArticleDOI
TL;DR: It is demonstrated that background colors have the potential to improve program comprehension, independently of size and programming language of the underlying product.
Abstract: Software-product-line engineering aims at the development of variable and reusable software systems. In practice, software product lines are often implemented with preprocessors. Preprocessor directives are easy to use, and many mature tools are available for practitioners. However, preprocessor directives have been heavily criticized in academia and even referred to as "#ifdef hell", because they introduce threats to program comprehension and correctness. There are many voices that suggest to use other implementation techniques instead, but these voices ignore the fact that a transition from preprocessors to other languages and tools is tedious, erroneous, and expensive in practice. Instead, we and others propose to increase the readability of preprocessor directives by using background colors to highlight source code annotated with ifdef directives. In three controlled experiments with over 70 subjects in total, we evaluate whether and how background colors improve program comprehension in preprocessor-based implementations. Our results demonstrate that background colors have the potential to improve program comprehension, independently of size and programming language of the underlying product. Additionally, we found that subjects generally favor background colors. We integrate these and other findings in a tool called FeatureCommander, which facilitates program comprehension in practice and which can serve as a basis for further research.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: Methods and models for a map-based vehicle self-localization approach that reaches a global positioning accuracy significantly below one meter in both lateral and longitudinal direction, and an orientation accuracy below one degree, even at speeds of up to 100 km/h in real time.
Abstract: Cooperative driver assistance functions benefit from sharing information on the local environments of individual road users by means of communication technology and advanced sensor data fusion methods. However, the consistent integration of environment models as well as the subsequent interpretation of traffic situations impose high requirements on the self-localization accuracy of vehicles. This paper presents methods and models for a map-based vehicle self-localization approach. Basically, information from the vehicular environment perception (using a monocular camera and laser scanner) is associated with data of a high-precision digital map in order to deduce the vehicle's position. Within the Monte-Carlo localization approach, the association of road markings is reduced to a prototype fitting problem which can be solved efficiently due to a map model based on smooth arc splines. Experiments on a rural road show that the localization approach reaches a global positioning accuracy in both lateral and longitudinal direction significantly below one meter and an orientation accuracy below one degree even at a speed up to 100 km/h in real-time.
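A hedged, one-dimensional toy version of Monte Carlo localization to make the approach concrete: particles are moved by the odometry estimate, weighted by how well a noisy observed distance to a known map landmark matches each particle, and then resampled. The real system associates detected road markings with arc-spline map geometry and estimates position and orientation in the plane.

```python
# 1-D particle filter (Monte Carlo localization) with a single known landmark.
# All numbers are synthetic.

import numpy as np

rng = np.random.default_rng(7)

LANDMARK = 50.0                 # known map position of a landmark (metres)
true_pos, N = 10.0, 500
particles = rng.uniform(0, 100, size=N)

for _ in range(20):             # 20 time steps
    # motion update: vehicle drives ~2 m per step (noisy odometry)
    true_pos += 2.0
    particles += 2.0 + rng.normal(scale=0.3, size=N)

    # measurement update: noisy observed distance to the landmark
    observed = (LANDMARK - true_pos) + rng.normal(scale=0.5)
    expected = LANDMARK - particles
    weights = np.exp(-0.5 * ((observed - expected) / 0.5) ** 2)
    weights /= weights.sum()

    # resampling
    particles = particles[rng.choice(N, size=N, p=weights)]

print("true position:", round(true_pos, 2),
      "estimate:", round(particles.mean(), 2))
```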

Journal ArticleDOI
TL;DR: An exploratory study on 10 feature-oriented systems found that the majority of feature interactions could be detected based on feature-based specifications, but that some specifications have not been modularized properly and require undesirable workarounds.

Journal ArticleDOI
TL;DR: It is shown that the energy-optimal allocation of virtualized services in a heterogeneous server infrastructure is NP-hard and can be modeled as a variant of the multidimensional vector packing problem and a model to predict the performance degradation of a service when it is consolidated with other services is proposed.
Abstract: Increasing power consumption of IT infrastructures and growing electricity prices have led to the development of several energy-saving techniques in the last couple of years. Virtualization and consolidation of services is one of the key technologies in data centers to reduce overprovisioning and therefore increase energy savings. This paper shows that the energy-optimal allocation of virtualized services in a heterogeneous server infrastructure is NP-hard and can be modeled as a variant of the multidimensional vector packing problem. Furthermore, it proposes a model to predict the performance degradation of a service when it is consolidated with other services. The model allows considering the tradeoff between power consumption and service performance during service allocation. Finally, the paper presents two heuristics that approximate the energy-optimal and performance-aware resource allocation problem and shows that the allocations determined by the proposed heuristics are more energy-efficient than the widely applied maximum-density consolidation.
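A hedged sketch of the multidimensional vector-packing view of consolidation, using a generic first-fit-decreasing baseline rather than either of the two heuristics proposed in the paper, and ignoring the paper's performance-degradation model: each service has a (CPU, memory) demand vector and is placed on the first server with enough remaining capacity.

```python
# First-fit-decreasing heuristic for 2-dimensional vector packing of services
# onto identical servers. Demands and capacities below are made up.

def first_fit_decreasing(services, capacity):
    """services: {name: (cpu, mem)}, capacity: (cpu, mem) per server."""
    servers = []  # each: {"load": [cpu, mem], "services": [...]}
    # largest services (by summed normalized demand) first
    order = sorted(services, key=lambda s: -(services[s][0] / capacity[0] +
                                             services[s][1] / capacity[1]))
    for name in order:
        demand = services[name]
        for srv in servers:
            if all(srv["load"][d] + demand[d] <= capacity[d] for d in range(2)):
                srv["load"] = [srv["load"][d] + demand[d] for d in range(2)]
                srv["services"].append(name)
                break
        else:  # no existing server fits -> power on a new one
            servers.append({"load": list(demand), "services": [name]})
    return servers

demo = {"web": (2, 4), "db": (4, 16), "cache": (1, 8), "batch": (6, 8)}
for i, srv in enumerate(first_fit_decreasing(demo, capacity=(8, 32))):
    print(f"server {i}: {srv['services']} load={srv['load']}")
```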

Journal ArticleDOI
TL;DR: This work criticises the normative foundations upon which the new German law, expressly designed to permit male circumcision even outside of medical settings, seems to rest, and analyses two major flaws in the new law which are considered emblematic of the difficulty that any legal attempt to protect medically irrelevant genital cutting is bound to face.
Abstract: Non-therapeutic circumcision violates boys’ right to bodily integrity as well as to self-determination. There is neither any verifiable medical advantage connected with the intervention nor is it painless nor without significant risks. Possible negative consequences for the psychosexual development of circumcised boys (due to substantial loss of highly erogenous tissue) have not yet been sufficiently explored, but appear to ensue in a significant number of cases. According to standard legal criteria, these considerations would normally entail that the operation be deemed an ‘impermissible risk’— neither justifiable on grounds of parental rights nor of religious liberty: as with any other freedom right, these end where another person’s body begins. Nevertheless, after a resounding decision by a Cologne district court that non-therapeutic circumcision constitutes bodily assault, the German legislature responded by enacting a new statute expressly designed to permit male circumcision even outside of medical settings. We first criticise the normative foundations upon which such a legal concession seems to rest, and then analyse two major flaws in the new German law which we consider emblematic of the difficulty that any legal attempt to protect medically irrelevant genital cutting is bound to face.

Journal ArticleDOI
TL;DR: Kynoid is the first extension for Android that enables the enforcement of security policies on data items stored in shared resources; the feasibility of the framework is shown by a proof-of-concept implementation.

Journal ArticleDOI
TL;DR: This article examined the extent to which different types of substantive project contributions as well as social factors predict whether a scientist is named as author on a paper and inventor on a patent resulting from the same project.

Proceedings ArticleDOI
23 Jan 2013
TL;DR: It is argued that feature-oriented software evolution relying on automatic traceability, analyses, and recommendations reduces existing challenges in understanding and managing evolution.
Abstract: In this paper, we develop a vision of software evolution based on a feature-oriented perspective. From the fact that features provide a common ground to all stakeholders, we derive a hypothesis that changes can be effectively managed in a feature-oriented manner. Assuming that the hypothesis holds, we argue that feature-oriented software evolution relying on automatic traceability, analyses, and recommendations reduces existing challenges in understanding and managing evolution. We illustrate these ideas using an automotive example and raise research questions for the community.

Proceedings ArticleDOI
18 Aug 2013
TL;DR: The impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions is experimentally shown.
Abstract: Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions.
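A purely illustrative sketch of precision reuse across revisions: the precision discovered while verifying revision N is stored and used as the initial precision for revision N+1, so the refinement loop has less work to do. The "verifier" below is a stub that merely simulates how many refinement iterations are needed; it is not CPAchecker, and the precision is reduced to a set of tracked variable names.

```python
# Simulated CEGAR loop to show the effect of reusing abstraction precisions.

def verify(needed_precision, initial_precision=frozenset()):
    """Add one missing precision element per (simulated) refinement step."""
    precision = set(initial_precision)
    refinements = 0
    while not needed_precision <= precision:
        missing = sorted(needed_precision - precision)[0]
        precision.add(missing)           # one interpolation-based refinement
        refinements += 1
    return precision, refinements

# revision 1: start from the empty precision
prec_r1, steps_r1 = verify(needed_precision={"x", "y", "lock_state"})
# revision 2 (small code change, one new fact needed): reuse the stored precision
prec_r2, steps_r2 = verify(needed_precision={"x", "y", "lock_state", "retries"},
                           initial_precision=prec_r1)

print("refinements without reuse:", steps_r1)   # 3
print("refinements with reuse:   ", steps_r2)   # 1
```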

Book ChapterDOI
23 Sep 2013
TL;DR: It is shown that every 3-connected 1-planar graph has a straight-line drawing on an integer grid of quadratic size, with the exception of a single edge on the outer face that has one bend.
Abstract: A graph is 1-planar if it can be drawn in the plane such that each edge is crossed at most once. In general, 1-planar graphs do not admit straight-line drawings. We show that every 3-connected 1-planar graph has a straight-line drawing on an integer grid of quadratic size, with the exception of a single edge on the outer face that has one bend. The drawing can be computed in linear time from any given 1-planar embedding of the graph.

Journal ArticleDOI
TL;DR: In this paper, a positive degree-day glacier mass-balance model is used to constrain paleo-climate conditions associated with reconstructed LGM glacier extents of four central European upland regions: the Vosges Mountains, the Black Forest, the Bavarian Forest, and the Giant Mountains.
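To illustrate the kind of model named here, a minimal positive degree-day (PDD) melt computation is sketched below, under the assumption that ablation is proportional to the sum of daily mean temperatures above 0 °C. The degree-day factor and temperature series are invented; the paper additionally constrains accumulation and temperatures at the reconstructed LGM glacier margins.

```python
# Minimal positive degree-day melt sketch with invented parameters.

def pdd_melt(daily_mean_temps_c, degree_day_factor=4.0):
    """Melt (mm w.e.) = DDF [mm / (degC * day)] * sum of positive degree-days."""
    positive_degree_days = sum(t for t in daily_mean_temps_c if t > 0)
    return degree_day_factor * positive_degree_days

# one synthetic month of daily mean temperatures (degC)
temps = [-1.0, 0.5, 2.0, 3.5, 4.0, 1.0, -0.5] * 4 + [2.0, 2.5]
print("melt over the period:", pdd_melt(temps), "mm w.e.")
```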

Proceedings ArticleDOI
27 Apr 2013
TL;DR: A six-week self-reporting study on smartphone usage was conducted in order to investigate the accuracy of self-reported information, and logged data was used as ground truth to compare the subjects' reports against.
Abstract: Self-reporting techniques, such as data logging or a diary, are frequently used in long-term studies, but prone to subjects' forgetfulness and other sources of inaccuracy. We conducted a six-week self-reporting study on smartphone usage in order to investigate the accuracy of self-reported information, and used logged data as ground truth to compare the subjects' reports against. Subjects never recorded more than 70% and, depending on the requested reporting interval, down to less than 40% of actual app usages. They significantly overestimated how long they used apps. While subjects forgot self-reports when no automatic reminders were sent, a high reporting frequency was perceived as uncomfortable and burdensome. Most significantly, self-reporting even changed the actual app usage of users and hence can lead to deceptive measures if a study relies on no other data sources. With this contribution, we provide empirical quantitative long-term data on the reliability of self-reported data collected with mobile devices. We aim to make researchers aware of the caveats of self-reporting and give recommendations for maximizing the reliability of results when conducting large-scale, long-term app usage studies.

Book ChapterDOI
12 Sep 2013
TL;DR: This work strengthens the standard unlinkability definition by Brzuska et al. at PKC '10, making it robust against malicious or buggy signers; the construction uses standard digital signatures, which makes it compatible with existing infrastructure.
Abstract: Sanitizable signatures allow for controlled modification of signed data. The essential security requirements are accountability, privacy and unlinkability. Unlinkability is a strong notion of privacy. Namely, it makes it hard to link two sanitized messages that were derived from the same message-signature pair. In this work, we strengthen the standard unlinkability definition by Brzuska et al. at PKC ’10, making it robust against malicious or buggy signers. While state-of-the art schemes deploy costly group signatures to achieve unlinkability, our construction uses standard digital signatures, which makes them compatible with existing infrastructure.

Proceedings ArticleDOI
28 Oct 2013
TL;DR: The review concludes that the use of gameful design for in-vehicle applications seems to be promising; however, gamified applications related to the serious task of driving require thought-out rules and extensive testing in order to achieve the desired goal.
Abstract: In this paper, we review the use of gameful design in the automotive domain. Outside of vehicles the automotive industry is mainly using gameful design for marketing and brand forming. For in-vehicle applications and for applications directly connected to real vehicles, the main usage scenarios of gameful design are navigation, eco-driving and driving safety. The objective of this review is to answer the following questions: (1) What elements of gameful design are currently used in the automotive industry? (2) What other automotive applications could be realized or enhanced by applying gameful design? (3) What are the challenges and limitations of gameful design in this domain especially for in-vehicle applications? The review concludes that the use of gameful design for in-vehicle applications seems to be promising. However, gamified applications related to the serious task of driving require thought-out rules and extensive testing in order to achieve the desired goal.

Proceedings ArticleDOI
25 Aug 2013
TL;DR: This work proposes a novel overlap detection system using Long Short-Term Memory (LSTM) recurrent neural networks, which generate framewise overlap predictions that are then applied for overlap detection.
Abstract: Detecting segments of overlapping speech (when two or more speakers are active at the same time) is a challenging problem. Previously, mostly HMM-based systems have been used for overlap detection, employing various different audio features. In this work, we propose a novel overlap detection system using Long Short-Term Memory (LSTM) recurrent neural networks. LSTMs are used to generate framewise overlap predictions which are applied for overlap detection. Furthermore, a tandem HMM-LSTM system is obtained by adding LSTM predictions to the HMM feature set. Experiments with the AMI corpus show that overlap detection performance of LSTMs is comparable to HMMs. The combination of HMMs and LSTMs improves overlap detection by achieving higher recall.
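A hedged post-processing sketch: given framewise overlap probabilities (as an LSTM would produce them), overlap segments are derived by smoothing and thresholding, and "tandem" feature vectors are built by appending the prediction to the acoustic features. The probabilities and features below are synthetic, and the LSTM network itself is not shown.

```python
# Framewise-prediction post-processing and tandem feature construction.

import numpy as np

rng = np.random.default_rng(3)

frame_probs = np.clip(np.concatenate([rng.normal(0.1, 0.05, 40),
                                      rng.normal(0.9, 0.05, 25),
                                      rng.normal(0.1, 0.05, 35)]), 0, 1)

def to_segments(probs, threshold=0.5):
    """Return (start_frame, end_frame) pairs where the smoothed prob exceeds threshold."""
    smoothed = np.convolve(probs, np.ones(5) / 5, mode="same")
    active = smoothed > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

print("overlap segments (frames):", to_segments(frame_probs))

# tandem features: acoustic features + LSTM prediction as one extra dimension
acoustic = rng.normal(size=(len(frame_probs), 13))     # e.g. MFCC-like features
tandem = np.hstack([acoustic, frame_probs[:, None]])
print("tandem feature shape:", tandem.shape)           # (100, 14)
```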

Proceedings ArticleDOI
01 May 2013
TL;DR: It is found that (i) navigation in social networks appears to differ from human navigation in information networks in interesting ways and (ii) in order to apply decentralized search to information networks, stochastic adaptations are required.
Abstract: Models of human navigation play an important role for understanding and facilitating user behavior in hypertext systems. In this paper, we conduct a series of principled experiments with decentralized search - an established model of human navigation in social networks - and study its applicability to information networks. We apply several variations of decentralized search to model human navigation in information networks and we evaluate the outcome in a series of experiments. In these experiments, we study the validity of decentralized search by comparing it with human navigational paths from an actual information network - Wikipedia. We find that (i) navigation in social networks appears to differ from human navigation in information networks in interesting ways and (ii) in order to apply decentralized search to information networks, stochastic adaptations are required. Our work illuminates a way towards using decentralized search as a valid model for human navigation in information networks in future work. Our results are relevant for scientists who are interested in modeling human behavior in information networks and for engineers who are interested in using models and simulations of human behavior to improve on structural or user interface aspects of hypertextual systems.
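A hedged toy of decentralized (greedy) search and a simple stochastic variant, on a synthetic ring-with-shortcuts network rather than Wikipedia: at each step the walker sees only its neighbours and either moves to the one closest to the target or samples a neighbour with probability inversely related to its distance.

```python
# Greedy vs. stochastic decentralized search on a synthetic ring network.

import random

random.seed(0)
N = 100
graph = {i: {(i - 1) % N, (i + 1) % N, random.randrange(N)} for i in range(N)}

def distance(a, b):            # distance on the ring, used as the "similarity" cue
    return min(abs(a - b), N - abs(a - b))

def decentralized_search(source, target, stochastic=False, max_hops=200):
    current, hops = source, 0
    while current != target and hops < max_hops:
        neighbours = list(graph[current])
        if stochastic:
            weights = [1.0 / (1 + distance(n, target)) for n in neighbours]
            current = random.choices(neighbours, weights=weights)[0]
        else:
            current = min(neighbours, key=lambda n: distance(n, target))
        hops += 1
    return hops if current == target else None   # None: target not reached

print("greedy hops:    ", decentralized_search(0, 60))
print("stochastic hops:", decentralized_search(0, 60, stochastic=True))
```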

Journal ArticleDOI
TL;DR: The algorithm is easy to implement, its computational cost is close to the size of the quantization, and it achieves strong asymptotic optimality provided this property holds for the Brownian motion (bridge) quantization.
Abstract: We present a fully constructive method for quantization of the solution X of a scalar SDE in the path space Lp[0,1] or C[0,1]. The construction relies on a refinement strategy which takes into account the local regularity of X and uses Brownian motion (bridge) quantization as a building block. Our algorithm is easy to implement, its computational cost is close to the size of the quantization, and it achieves strong asymptotic optimality provided this property holds for the Brownian motion (bridge) quantization.