
Showing papers by "University of Passau" published in 2009


Journal ArticleDOI
TL;DR: Pay what you want (PWYW) as mentioned in this paper is a new participative pricing mechanism in which consumers have maximum control over the price they pay, which can even lead to an increase in seller revenues.
Abstract: Pay what you want (PWYW) is a new participative pricing mechanism in which consumers have maximum control over the price they pay. Previous research has suggested that participative pricing increases consumers' intent to purchase. However, sellers using PWYW face the risk that consumers will exploit their control and pay nothing at all or a price below the seller's costs. In three field studies, the authors find that prices paid are significantly greater than zero. They analyze factors that influence prices paid and show that PWYW can even lead to an increase in seller revenues.

342 citations


Book ChapterDOI
01 Jan 2009
TL;DR: After implementation of the identified improvements, the GI/BSI/DFKI Protection Profile constitutes the proposed evaluation methodology for remote electronic voting systems and can now be applied to available systems.
Abstract: The previous part discusses the GI/BSI/DFKI Protection Profile, which, after implementation of the identified improvements, constitutes the proposed evaluation methodology for remote electronic voting systems. The result can now be applied to available systems. Currently, no system has been evaluated against the GI/BSI/DFKI Protection Profile, or even against the improved version.

332 citations


Proceedings ArticleDOI
16 May 2009
TL;DR: This work unify languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs), and proposes a general approach to the composition of software artifacts written in different languages.
Abstract: Superimposition is a composition technique that has been applied successfully in many areas of software development. Although superimposition is a general-purpose concept, it has been (re)invented and implemented individually for various kinds of software artifacts. We unify languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs). On the basis of the FST model, we propose a general approach to the composition of software artifacts written in different languages. Furthermore, we offer a supporting framework and tool chain, called FEATUREHOUSE. We use attribute grammars to automate the integration of additional languages; in particular, we have integrated Java, C#, C, Haskell, JavaCC, and XML. Several case studies demonstrate the practicality and scalability of our approach and reveal insights into the properties a language must have in order to be ready for superimposition.
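The superimposition of FSTs described above can be sketched as a recursive tree merge; a minimal sketch in Python, with the terminal-composition rule simplified to "the later feature overrides the earlier one" (real composition rules are language-specific):

```python
class Node:
    """A node in a feature structure tree (FST): inner nodes (e.g. packages,
    classes) carry children; terminals (e.g. method bodies) carry content."""
    def __init__(self, name, type_, children=None, content=None):
        self.name, self.type = name, type_
        self.children = children or []
        self.content = content

def superimpose(a, b):
    """Merge FST b into FST a: inner nodes with matching name and type are
    merged recursively; for matching terminals we simply let b override a."""
    if a.content is not None or b.content is not None:
        return Node(a.name, a.type,
                    content=b.content if b.content is not None else a.content)
    merged = Node(a.name, a.type)
    remaining = {(c.name, c.type): c for c in b.children}
    for c in a.children:
        match = remaining.pop((c.name, c.type), None)
        merged.children.append(superimpose(c, match) if match else c)
    merged.children.extend(remaining.values())  # children only in b
    return merged
```

Superimposing a base `Stack` class with a feature that refines `push` and adds `size` yields one merged class with both methods, the refined `push` winning under the simplified rule above.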

231 citations


Proceedings ArticleDOI
16 May 2009
TL;DR: This work refactored FeatureIDE into an open source framework that encapsulates the common ideas of feature-oriented software development and can be reused and extended beyond AHEAD.
Abstract: Tools support is crucial for the acceptance of a new programming language. However, providing such tool support is a huge investment that can usually not be provided for a research language. With FeatureIDE, we have built an IDE for AHEAD that integrates all phases of feature-oriented software development. To reuse this investment for other tools and languages, we refactored FeatureIDE into an open source framework that encapsulates the common ideas of feature-oriented software development and that can be reused and extended beyond AHEAD. Among others, we implemented extensions for FeatureC++ and FeatureHouse, but in general, FeatureIDE is open for everybody to showcase new research results and make them usable to a wide audience of students, researchers, and practitioners.

228 citations


Journal ArticleDOI
TL;DR: In this article, the authors compared the forecast accuracy of different methods, namely prediction markets, tipsters and betting odds, and assesses the ability of prediction markets and tipsters to generate profits systematically in a betting market.
Abstract: This article compares the forecast accuracy of different methods, namely prediction markets, tipsters and betting odds, and assesses the ability of prediction markets and tipsters to generate profits systematically in a betting market. We present the results of an empirical study that uses data from 678–837 games of three seasons of the German premier soccer league. Prediction markets and betting odds perform equally well in terms of forecasting accuracy, but both methods strongly outperform tipsters. A weighting-based combination of the forecasts of these methods leads to a slightly higher forecast accuracy, whereas a rule-based combination improves forecast accuracy substantially. However, none of the forecasts leads to systematic monetary gains in betting markets because of the high fees (25%) charged by the state-owned bookmaker in Germany. Lower fees (e.g., approximately 12% or 0%) would provide systematic profits if punters exploited the information from prediction markets and bet only on a selected number of games. Copyright © 2008 John Wiley & Sons, Ltd.
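The fee argument can be made concrete with a one-line expected-value calculation (the probabilities and odds below are our own illustrative numbers, not from the study):

```python
def expected_profit(p_true, decimal_odds, fee):
    """Expected profit per unit stake when the forecast win probability is
    p_true and the bookmaker withholds `fee` from the payout."""
    return p_true * decimal_odds * (1.0 - fee) - 1.0

# With a genuine edge (forecast 60% vs. even-money odds of 2.0), a 25%
# fee erases the profit while a ~12% fee leaves it positive:
# expected_profit(0.60, 2.0, 0.25) ≈ -0.10
# expected_profit(0.60, 2.0, 0.12) ≈  0.056
```

This mirrors the abstract's point: the same informational edge that loses money at a 25% fee would be profitable at roughly 12% or 0%.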

180 citations


Posted ContentDOI
TL;DR: In this article, the authors investigated how patent applications and grants held by new ventures improve their ability to attract venture capital (VC) financing, and they found that VCs pay attention to patent quality, financing those ventures faster which later turn out to have high-quality patents.
Abstract: This paper investigates how patent applications and grants held by new ventures improve their ability to attract venture capital (VC) financing. We argue that investors are faced with considerable uncertainty and therefore rely on patents as signals when trying to assess the prospects of potential portfolio companies. For a sample of VC-seeking German and British biotechnology companies we have identified all patents filed at the European Patent Office (EPO). Applying hazard rate analysis, we find that in the presence of patent applications, VC financing occurs earlier. Our results also show that VCs pay attention to patent quality, financing those ventures faster which later turn out to have high-quality patents. Patent oppositions increase the likelihood of receiving VC, but ultimate grant decisions do not spur VC financing, presumably because they are anticipated. Our empirical results and interviews with VCs suggest that the process of patenting generates signals which help to overcome the liabilities of newness faced by new ventures.

179 citations


Journal ArticleDOI
TL;DR: A survey of vulnerabilities in the context of Web Services is given, showing that Web Services are exposed to attacks well-known from common Internet protocols and additionally to new kinds of attacks targeting Web Services in particular.
Abstract: Being regarded as the new paradigm for Internet communication, Web Services have introduced a large number of new standards and technologies. Though founded on decades of networking experience, Web Services are not more resistant to security attacks than other open network systems. Quite the opposite is true: Web Services are exposed to attacks well-known from common Internet protocols and additionally to new kinds of attacks targeting Web Services in particular. Along with their severe impact, most of these attacks can be performed with minimum effort from the attacker’s side. This article gives a survey of vulnerabilities in the context of Web Services. As a proof of the practical relevance of the threats, exemplary attacks on widespread Web Service implementations were performed. Further, general countermeasures for prevention and mitigation of such attacks are discussed.

136 citations


Book ChapterDOI
29 Jun 2009
TL;DR: The feasibility of superimposition for model composition is analyzed, the corresponding tool support is offered, and the experiences with three case studies are discussed.
Abstract: In software product line engineering, feature composition generates software tailored to specific requirements from a common set of artifacts. Superimposition is a technique to merge code pieces belonging to different features. The advent of model-driven development raises the question of how to support the variability of software product lines in modeling techniques. We propose to use superimposition as a model composition technique in order to support variability. We analyze the feasibility of superimposition for model composition, offer corresponding tool support, and discuss our experiences with three case studies (including an industrial case study).

117 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyze the feasibility of using virtual stock markets for the identification of lead users in consumer products markets and show that they can be an effective instrument to identify lead users.

114 citations


Proceedings ArticleDOI
04 Sep 2009
TL;DR: A method to infer the orientation of a mobile device carried in a pocket from the acceleration signal acquired while the user is walking is presented, and how to compute the orientation within the horizontal plane is demonstrated.
Abstract: We present a method to infer the orientation of a mobile device carried in a pocket from the acceleration signal acquired while the user is walking. Whereas previous work has shown how to determine the orientation in the vertical plane (angle towards earth gravity), we demonstrate how to compute the orientation within the horizontal plane. To validate our method, we compare its output with GPS heading information when walking in a straight line. On a total of 16 different orientations and traces, we have a mean difference of 5 degrees with 2.5 degrees standard deviation.
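Comparing an estimated heading with a GPS heading requires an angular difference that handles the 0°/360° wrap-around; a minimal sketch of such a comparison (our own formulation, not the paper's code):

```python
def heading_error_deg(estimated, reference):
    """Smallest absolute angular difference between two headings in degrees,
    handling the 0/360 wrap-around (e.g. 355° vs. 5° differ by 10°)."""
    return abs((estimated - reference + 180.0) % 360.0 - 180.0)

def mean_heading_error(estimates, gps_headings):
    """Mean absolute error between estimated and GPS headings, pairwise."""
    errs = [heading_error_deg(e, g) for e, g in zip(estimates, gps_headings)]
    return sum(errs) / len(errs)
```

A naive `abs(a - b)` would report a 350° error for headings of 355° and 5°; the modular form above reports 10°, which is what a validation against GPS traces needs.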

113 citations


Proceedings ArticleDOI
24 Aug 2009
TL;DR: The impact of the optional feature problem is examined in two case studies from the domain of embedded database systems, and different state-of-the-art solutions and their trade-offs are surveyed.
Abstract: A software product line is a family of related programs that are distinguished in terms of features. A feature implements a stakeholder's requirement. Different program variants specified by distinct feature selections are produced from a common code base. The optional feature problem describes a common mismatch between variability intended in the domain and dependencies in the implementation. When this situation occurs, some variants that are valid in the domain cannot be produced due to implementation issues. There are many different solutions to the optional feature problem, but they all suffer from drawbacks such as reduced variability, increased development effort, reduced efficiency, or reduced source code quality. We examine the impact of the optional feature problem in two case studies from the domain of embedded database systems, and we survey different state-of-the-art solutions and their trade-offs. Our intention is to raise awareness of the problem, to guide developers in selecting an appropriate solution for their product line, and to identify opportunities for future research.
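The mismatch can be illustrated in a few lines: a hypothetical two-feature example (the feature names and the dependency are our invention) where a variant that is valid in the domain cannot be produced:

```python
from itertools import product

# Hypothetically, the domain treats TRANSACTIONS and STATISTICS as
# independent options, but the implementation of STATISTICS happens to
# reference transaction code (an implementation dependency).
FEATURES = ["TRANSACTIONS", "STATISTICS"]
IMPL_REQUIRES = {"STATISTICS": {"TRANSACTIONS"}}

def producible(selection):
    """A variant can only be generated if every selected feature's
    implementation dependencies are also selected."""
    sel = set(selection)
    return all(req <= sel for feat, req in IMPL_REQUIRES.items() if feat in sel)

# All feature selections the domain considers valid (here: all subsets).
domain_valid = [tuple(f for f, bit in zip(FEATURES, bits) if bit)
                for bits in product([0, 1], repeat=len(FEATURES))]
lost_variants = [v for v in domain_valid if not producible(v)]
# lost_variants == [("STATISTICS",)] — valid in the domain, not producible
```

The variant with statistics but without transactions is exactly the kind of casualty the abstract describes; the surveyed solutions differ in how they recover it.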

Proceedings ArticleDOI
15 Jun 2009
TL;DR: The newly started European research project OPPORTUNITY, within which mobile opportunistic activity and context recognition systems are developed, is introduced; the approach spans opportunistic sensing, data processing and interpretation, and autonomous adaptation and evolution to environmental and user changes.
Abstract: Opportunistic sensing allows information about the physical world and the persons acting in it to be collected efficiently. This may mainstream human context and activity recognition in wearable and pervasive computing by removing requirements for a specific deployed infrastructure. In this paper we introduce the newly started European research project OPPORTUNITY, within which we develop mobile opportunistic activity and context recognition systems. We outline the project's objective, the approach we follow along opportunistic sensing, data processing and interpretation, and autonomous adaptation and evolution to environmental and user changes, and we outline preliminary results.

Journal ArticleDOI
TL;DR: The past, present, and future of wearable platform research are examined in light of the rise of the smart phone and its impact on wearable computing research.
Abstract: Do smart phones render wearable computers obsolete? Where does the rise of the smart phone leave wearable computing research? We answer these questions by examining the past, present, and future of wearable platform research.

Book ChapterDOI
29 Jun 2009
TL;DR: This paper presents CIDE, an SPL development tool that guarantees syntactic correctness for all variants of an SPL, and shows how the underlying mechanism abstracts from textual representation and generalizes it to arbitrary languages.
Abstract: A software product line (SPL) is a family of related program variants in a well-defined domain, generated from a set of features. A fundamental difference from classical application development is that engineers develop not a single program but a whole family with hundreds to millions of variants. This makes it infeasible to separately check every distinct variant for errors. Still, engineers want guarantees on the entire SPL. A further challenge is that an SPL may contain artifacts in different languages (code, documentation, models, etc.) that should be checked. In this paper, we present CIDE, an SPL development tool that guarantees syntactic correctness for all variants of an SPL. We show how CIDE’s underlying mechanism abstracts from textual representation and we generalize it to arbitrary languages. Furthermore, we automate the generation of plug-ins for additional languages from annotated grammars. To demonstrate the language-independent capabilities, we applied CIDE to a series of case studies with artifacts written in Java, C++, C, Haskell, ANTLR, HTML, and XML.

Proceedings ArticleDOI
04 Oct 2009
TL;DR: This paper provides a model that supports both physical and virtual separation and describes refactorings in both directions, so every virtually separated product line can be automatically transformed into a physically separated one (replacing annotations by refinements and vice versa).
Abstract: Physical separation with class refinements and method refinements à la AHEAD and virtual separation using annotations à la #ifdef or CIDE are two competing implementation approaches for software product lines with complementary advantages. Although both approaches have been mainly discussed in isolation, we strive for an integration to leverage the respective advantages. In this paper, we lay the foundation for such an integration by providing a model that supports both physical and virtual separation and by describing refactorings in both directions. We prove the refactorings complete, so every virtually separated product line can be automatically transformed into a physically separated one (replacing annotations by refinements) and vice versa. To demonstrate the feasibility of our approach, we have implemented the refactorings in our tool CIDE and conducted four case studies.
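The two representations and the round-trip refactoring can be illustrated with a toy model (our own simplification, not CIDE's actual data structures): annotated lines stand for virtual separation, a base program plus per-feature modules for physical separation:

```python
def to_physical(annotated):
    """Virtual -> physical: split (line, feature-or-None) pairs into a
    base program and per-feature modules, remembering original positions."""
    base, modules = [], {}
    for pos, (line, feat) in enumerate(annotated):
        if feat is None:
            base.append(line)
        else:
            modules.setdefault(feat, []).append((pos, line))
    return base, modules

def to_virtual(base, modules):
    """Physical -> virtual: re-inline feature-module lines at their
    recorded positions, annotating them with their feature."""
    slots = {pos: (line, feat)
             for feat, items in modules.items() for pos, line in items}
    result, rest = [], iter(base)
    for pos in range(len(base) + len(slots)):
        result.append(slots[pos] if pos in slots else (next(rest), None))
    return result
```

In this toy setting the refactorings are trivially complete: `to_virtual(*to_physical(p)) == p` for every annotated program, which is the flavor of the completeness property the paper proves for the real model.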

Journal ArticleDOI
TL;DR: In this article, the role of governments as lenders of last resort is discussed, and a formal illustration of a rescue mechanism that distinguishes between illiquid but solvent and insolvent banks is provided.
Abstract: The recent global financial crisis represents a major economic challenge. In order to prevent such market failure, it is vital to understand what caused the crisis and what lessons are to be learned. Given the tremendous bailout packages worldwide, we discuss the role of governments as lenders of last resort. In our view, it is important not to suspend the market mechanism of bankruptcy via granting rescue packages. Only those institutions which are illiquid but solvent should be rescued, and this should occur at a significant cost for the respective institution. We provide a formal illustration of a rescue mechanism which makes it possible to distinguish between illiquid but solvent and insolvent banks. Furthermore, we argue that stricter regulation cannot be the sole consequence of the crisis. There appears to be a need for improved risk awareness, more sophisticated risk management and an alignment of interest among the participants in the market for credit risk.

Journal ArticleDOI
TL;DR: This paper introduces a numerically stable Approximate Vanishing Ideal (AVI) Algorithm which computes a set of polynomials that almost vanish at the given points and almost form a border basis.

Book ChapterDOI
01 Jan 2009
TL;DR: In this paper, a survey of numerical integration with respect to measures μ on infinite-dimensional spaces, e.g., Gaussian measures on function spaces or distributions of diffusion processes on the path space, is presented.
Abstract: We survey recent results on numerical integration with respect to measures μ on infinite-dimensional spaces, e.g., Gaussian measures on function spaces or distributions of diffusion processes on the path space. Emphasis is given to the class of multi-level Monte Carlo algorithms and, more generally, to variable subspace sampling and the associated cost model. In particular we investigate integration of Lipschitz functionals. Here we establish a close relation between quadrature by means of randomized algorithms and Kolmogorov widths and quantization numbers of μ. Suitable multi-level algorithms turn out to be almost optimal in the Gaussian case and in the diffusion case.
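The multi-level idea can be illustrated with a toy example of our own (an Euler discretization of geometric Brownian motion, not one of the paper's settings): write the finest-level expectation as a telescoping sum and estimate each difference term with coupled fine/coarse paths:

```python
import math
import random

def euler_gbm(x0, mu, sigma, dt, increments):
    """Euler scheme for geometric Brownian motion dX = mu*X dt + sigma*X dW."""
    x = x0
    for dw in increments:
        x += mu * x * dt + sigma * x * dw
    return x

def mlmc_estimate(L, n_per_level, rng, x0=1.0, mu=0.05, sigma=0.2):
    """Multi-level Monte Carlo: E[Y_L] = E[Y_0] + sum_l E[Y_l - Y_{l-1}],
    where level l uses 2**l Euler steps on [0, 1] and the fine and coarse
    paths of each difference share the same Brownian increments."""
    total = 0.0
    for l in range(L + 1):
        n_steps = 2 ** l
        dt = 1.0 / n_steps
        level_sum = 0.0
        for _ in range(n_per_level):
            incs = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_steps)]
            fine = euler_gbm(x0, mu, sigma, dt, incs)
            if l == 0:
                level_sum += fine
            else:
                # coarse path: pairwise-summed increments, doubled step size
                coarse = [incs[2*i] + incs[2*i + 1] for i in range(n_steps // 2)]
                level_sum += fine - euler_gbm(x0, mu, sigma, 2 * dt, coarse)
        total += level_sum / n_per_level
    return total

# For these parameters the exact value is E[X_1] = exp(mu) ≈ 1.051.
```

The coupling is the point: because `fine` and the coarse path see the same randomness, the difference terms have small variance, so the higher (more expensive) levels need far fewer samples than a plain Monte Carlo estimate at the finest resolution.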

Proceedings ArticleDOI
20 Apr 2009
TL;DR: This work proposes a user-driven scheme where graphs of entities -- represented by globally identifiable declarative artifacts -- self-organize in a dynamic and probabilistic manner and lets end-users freely define associations between arbitrary entities.
Abstract: We tackle the problem of disambiguating entities on the Web. We propose a user-driven scheme where graphs of entities -- represented by globally identifiable declarative artifacts -- self-organize in a dynamic and probabilistic manner. Our solution has the following two desirable properties: i) it lets end-users freely define associations between arbitrary entities and ii) it probabilistically infers entity relationships based on uncertain links using constraint-satisfaction mechanisms. We outline the interface between our scheme and the current data Web, and show how higher-layer applications can take advantage of our approach to enhance search and update of information relating to online entities. We describe a decentralized infrastructure supporting efficient and scalable entity disambiguation and demonstrate the practicability of our approach in a deployment over several hundreds of machines.

Journal ArticleDOI
TL;DR: In this article, the authors use Habermas' idea of a post-secular society as a prism through which they examine the return of religion and its impact on secularization.
Abstract: The ‘return of religion’ as a social phenomenon has aroused at least three different debates, with the first being the ‘clash of civilizations’, the second criticizing ‘modernity’, and the third focusing on the public/private distinction. This article uses Habermas’ idea of a post-secular society as a prism through which we examine the return of religion and its impact on secularization. In doing so, we attempt to understand the new role of religion as a challenger of the liberal projects following the decline of communism. Against this background, section four focuses on Habermas’s central arguments in his proposal for a post-secular society. We claim that the problematique in Habermas’s analysis must be placed within the wider framework of an emerging global public sphere. In this context we examine the problem of religion’s place in the political process and the two readings of Habermas as suggested by Simone Chambers.

Journal ArticleDOI
TL;DR: SwiftMotif is suitable for real-time applications, accounts for the uncertainty associated with the occurrence of certain motifs, e.g., due to noise, and considers local variability in the time domain.

Book ChapterDOI
16 Sep 2009
TL;DR: A smart, easy-to-install sensor built to take the measurements is described, along with algorithms that can determine the consistency of the substance in a mixer, how many eggs are being boiled, what size of coffee has been prepared, or whether a cutting machine was used to cut bread or salami.
Abstract: This paper builds on previous work by different authors on monitoring the use of household devices through analysis of the power line current. Whereas previous work dealt with detecting which device is being used, we go a step further and analyze how the device is being used. We focus on a kitchen scenario where many different devices are relevant to activity recognition. The paper describes a smart, easy-to-install sensor that we have built to take the measurements and the algorithms which can, for example, determine the consistency of the substance in the mixer, how many eggs are being boiled (and whether they are soft or hard), what size of coffee has been prepared, or whether a cutting machine was used to cut bread or salami. A set of multi-user experiments has been performed to validate the algorithms.

Posted Content
Abstract: Entrepreneurs who decide to enter an industry are faced with different levels of effective entry costs in different countries. These costs are heavily influenced by economic policy. What is not well understood is how international trade affects governments' incentives to influence entry costs, and how entry subsidies can be used strategically in open economies. We present a general equilibrium model of monopolistic competition with two (potentially) asymmetric countries and heterogeneous firms where government subsidizes entry of domestic entrepreneurs. Under autarky the entry subsidy indirectly corrects for the monopoly pricing distortion. In the autarky equilibrium these subsidies trigger entry, but they ultimately lead not to more firms but to better firms in the market. In the open economy there is another, strategic motive for entry subsidies as the tightening of domestic market selection also affects exporting decisions for domestic and foreign firms. Our analysis shows that entry subsidies in the Nash equilibrium are first increasing, then decreasing in the level of trade openness. This implies a U-shaped relationship between openness and effective entry costs. Merging cross-country data on entry costs with international trade openness indices, we empirically confirm this theoretical prediction.

Book ChapterDOI
16 Sep 2009
TL;DR: It is demonstrated that synchronization actions can be automatically identified and used for stream synchronization across widely different sensors such as acceleration, sound, force, and a motion tracking system.
Abstract: A major challenge in using multi-modal, distributed sensor systems for activity recognition is to maintain a temporal synchronization between individually recorded data streams. A common approach is to use well-defined 'synchronization actions' performed by the user to generate easily identifiable pattern events in all recorded data streams. The events are then used to manually align data streams. This paper proposes an automatic method for this synchronization. We demonstrate that synchronization actions can be automatically identified and used for stream synchronization across widely different sensors such as acceleration, sound, force, and a motion tracking system. We describe fundamental properties and bounds of our event-based synchronization approach. In particular, we show that the event timing relation is transitive for sensor groups with shared members. We analyzed our synchronization approach in three studies. For a large dataset of 5 users and a total of 308 minutes of data streams, we achieved a synchronization error of 0.3 s for more than 80% of the stream.
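The core of event-based synchronization can be sketched in a few lines: estimate per-pair clock offsets from the timestamps of shared synchronization events (a median is robust to a few mis-detected events), then observe the transitivity property the abstract mentions (the event times below are illustrative, not from the dataset):

```python
def estimate_offset(events_a, events_b):
    """Offset to add to stream-A timestamps to align them with stream B,
    estimated as the median pairwise difference of matched event times."""
    diffs = sorted(tb - ta for ta, tb in zip(events_a, events_b))
    return diffs[len(diffs) // 2]

a = [10.0, 25.0, 42.0]   # event times on sensor A's clock
b = [10.4, 25.4, 42.4]   # same events on sensor B's clock (+0.4 s)
c = [9.3, 24.3, 41.3]    # same events on sensor C's clock (-0.7 s)

ab = estimate_offset(a, b)
bc = estimate_offset(b, c)
ac = estimate_offset(a, c)
# Transitivity for sensor groups with shared members: A->C = A->B + B->C
assert abs(ac - (ab + bc)) < 1e-9
```

Transitivity is what lets a shared sensor act as a bridge: two streams that never observe the same synchronization action can still be aligned through a third stream that shares events with both.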

Journal ArticleDOI
TL;DR: An affine equivariant, constrained heteroscedastic model and criterion with trimming for clustering contaminated, grouped data is established and breakdown points of the estimated parameters are computed thereby showing asymptotic robustness of the method.
Abstract: We establish an affine equivariant, constrained heteroscedastic model and criterion with trimming for clustering contaminated, grouped data. We show existence of the maximum likelihood estimator, propose a method for determining an appropriate constraint, and design a strategy for finding reasonable clusterings. We finally compute breakdown points of the estimated parameters thereby showing asymptotic robustness of the method.
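The robustness effect of trimming can be illustrated with a toy 1-D trimmed k-means (the paper's actual criterion is a constrained heteroscedastic ML model; this sketch only shows how discarding a fraction of the data keeps an outlier from dragging a cluster mean away):

```python
def trimmed_kmeans_1d(xs, k, alpha, iters=20):
    """Toy trimmed k-means: in each iteration, keep only the (1 - alpha)
    fraction of points closest to their nearest center, then update the
    centers as the means of the kept points assigned to them."""
    n_keep = round((1 - alpha) * len(xs))
    srt = sorted(xs)
    # initialize centers at interior quantiles (avoids seeding on outliers)
    centers = [srt[int((i + 0.5) / k * (len(srt) - 1))] for i in range(k)]
    for _ in range(iters):
        kept = sorted(xs, key=lambda x: min(abs(x - c) for c in centers))[:n_keep]
        groups = [[] for _ in range(k)]
        for x in kept:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sorted(centers), kept

# Two clusters around 0.45 and 10.45 plus one gross outlier at 100:
# with alpha ≈ 5% the outlier is trimmed and the centers stay put.
```

With alpha = 0 the outlier would be assigned to the upper cluster and bias its mean upward; the breakdown-point analysis in the paper formalizes how much contamination such trimmed estimators can withstand.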

Journal ArticleDOI
TL;DR: The authors analyzes subsidies for intracity and intercity commuting in an urban economics framework with two cities and agglomeration externalities, where workers may commute within and between cities.

Journal ArticleDOI
01 Dec 2009
TL;DR: An approach for decomposing data management software for embedded systems using feature-oriented programming is presented, and a software product line that allows tailor-made data management systems to be generated is demonstrated.
Abstract: Applications in the domain of embedded systems are diverse and store an increasing amount of data. In order to satisfy the varying requirements of these applications, data management functionality is needed that can be tailored to the applications' needs. Furthermore, the resource restrictions of embedded systems imply a need for data management that is customized to the hardware platform. In this paper, we present an approach for decomposing data management software for embedded systems using feature-oriented programming. The result of such a decomposition is a software product line that allows us to generate tailor-made data management systems. While existing approaches for tailoring software have significant drawbacks regarding customizability and performance, a feature-oriented approach overcomes these limitations, as we will demonstrate. In a non-trivial case study on Berkeley DB, we evaluate our approach and compare it to other approaches for tailoring DBMS.

Journal ArticleDOI
TL;DR: The max version of the crossing minimization problem, which attempts to minimize the discrimination against any permutation, is contributed, and the NP-hardness of the common crossing minimization problem for k=4 permutations is shown.

Journal ArticleDOI
TL;DR: The primary aim of this analysis is to show the effect of balking, impatience, and buffer size on the steady-state performance measures.

Proceedings ArticleDOI
20 Nov 2009
TL;DR: This is a position paper identifying the research orientation with a time horizon of 10 years, together with the key challenges for the capabilities in the Management and Service-aware Networking Architectures (MANA) part of the Future Internet (FI) allowing for parallel and federated Internet(s).
Abstract: Future Internet (FI) research and development threads have recently been gaining momentum all over the world and as such the international race to create a new generation Internet is in full swing: GENI [16], Asia Future Internet [19], Future Internet Forum Korea [18], European Union Future Internet Assembly (FIA) [8]. This is a position paper identifying the research orientation with a time horizon of 10 years, together with the key challenges for the capabilities in the Management and Service-aware Networking Architectures (MANA) part of the Future Internet (FI) allowing for parallel and federated Internet(s).