
Showing papers by "University of Mannheim" published in 1998


Journal ArticleDOI
TL;DR: This paper found that people tend to sell assets that have gained value ('winners') and keep assets that have lost value ('losers'), consistent with prospect theory's assumption that people value gains and losses relative to a reference point (the initial purchase price of the shares).
Abstract: The ‘disposition effect’ is the tendency to sell assets that have gained value (‘winners’) and keep assets that have lost value (‘losers’). Disposition effects can be explained by the two features of prospect theory: the idea that people value gains and losses relative to a reference point (the initial purchase price of shares), and the tendency to seek risk when faced with possible losses, and avoid risk when a certain gain is possible. Our experiments were designed to see if subjects would exhibit disposition effects. Subjects bought and sold shares in six risky assets. Asset prices fluctuated in each period. Contrary to Bayesian optimization, subjects did tend to sell winners and keep losers. When the shares were automatically sold after each period, the disposition effect was greatly reduced.
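For readers unfamiliar with the two prospect-theory features invoked above, the standard Tversky-Kahneman value function (stated here for orientation; it is not reproduced from the paper) captures both:

\[
v(x) \;=\;
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-\lambda\,(-x)^{\beta}, & x < 0,
\end{cases}
\qquad 0 < \alpha,\beta \le 1,\ \ \lambda > 1,
\]

where x is a gain or loss measured relative to the reference point (here, the purchase price). Concavity over gains and convexity over losses yield exactly the risk attitudes described: a certain gain is preferred for winners, while risk is sought to avoid realizing a loss.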

740 citations


Proceedings ArticleDOI
TL;DR: Comparative investigations of shot boundary detection algorithms, including recent ones designed explicitly to detect complex editing operations such as fades and dissolves, show that while hard cuts and fades can be detected reliably, dissolve detection is still an open research issue.
Abstract: Various methods of automatic shot boundary detection have been proposed and claimed to perform reliably. Although the detection of edits is fundamental to any kind of video analysis, since it segments a video into its basic components, the shots, only a few comparative investigations of early shot boundary detection algorithms have been published. These investigations mainly concentrate on measuring edit detection performance; however, they do not consider the algorithms' ability to classify the types and to locate the boundaries of the edits correctly. This paper extends these comparative investigations. More recent algorithms designed explicitly to detect specific complex editing operations such as fades and dissolves are taken into account, and their ability to classify the types and locate the boundaries of such edits is examined. The algorithms' performance is measured in terms of hit rate, number of false hits, and miss rate for hard cuts, fades, and dissolves over a large and diverse set of video sequences. The experiments show that while hard cuts and fades can be detected reliably, dissolves are still an open research issue. The false hit rate for dissolves is usually unacceptably high, ranging from 50% up to over 400%. Moreover, all algorithms seem to fail under roughly the same conditions.
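For orientation, the simplest kind of detector compared in such studies is a histogram-difference hard-cut detector; the sketch below is a generic illustration (bin count and threshold are arbitrary assumptions), not one of the surveyed algorithms.

import numpy as np

def gray_histogram(frame, bins=64):
    # Normalized gray-level histogram of a frame given as a 2D uint8 array.
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def detect_hard_cuts(frames, threshold=0.5, bins=64):
    # Declare a hard cut between frames i-1 and i whenever the L1 distance
    # of consecutive histograms exceeds a fixed threshold.  Gradual edits
    # (fades, dissolves) spread the change over many frames and are missed,
    # which is why dedicated detectors for them are needed.
    cuts = []
    prev = gray_histogram(frames[0], bins)
    for i in range(1, len(frames)):
        cur = gray_histogram(frames[i], bins)
        if np.abs(cur - prev).sum() > threshold:
            cuts.append(i)
        prev = cur
    return cuts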

510 citations


Book ChapterDOI
01 Nov 1998
TL;DR: The authors argue that the European Community (EC) is governed without government and is therefore bound to be governed in a particular way, and that EC governance penetrates the political life of member states, so its particular mode of governing may disseminate across national borders.
Abstract: The European Community (EC) is governed without government and, therefore, it is bound to be governed in a particular way. In addition, EC governance is penetrating into the political life of member states, and its particular mode of governing may disseminate across national borders. These, in a nutshell, are the two hypotheses that will be tested. The first is that Europe's supranational Community functions according to a logic different from that of the representative democracies of its member states. Its purpose and institutional architecture are distinctive, promoting a particular mode of governance. The second is that the process of 'Europeanisation', that is, extending the boundaries of the relevant political space beyond the member states, will contribute to a change of governance at national and sub-national levels. Being a member of the EU is concomitant with the interpenetration of systems of governance; any polity which is part of such a 'penetrated system' is bound to change in terms of established patterns of governing.

277 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the relation of bank loan terms to borrower risk as defined by the banks' internal credit ratings and found that riskier borrowers pay higher loan rate premiums and rely more on bank finance.
Abstract: This study examines the relation of bank loan terms to borrower risk as defined by the banks' internal credit ratings. The analysis is not restricted to a static view; it also incorporates rating transitions and their implications for this relation. Money illusion and phenomena linked with relationship banking are identified as important factors. The results show that riskier borrowers pay higher loan rate premiums and rely more on bank finance. Housebanks obtain more collateral and provide more finance. Owing to money illusion, loan rate premiums are relatively small in times of high market interest rates and relatively high in times of low market interest rates. No evidence was found of an appropriate adjustment of loan terms to rating changes.

246 citations


Journal ArticleDOI
TL;DR: A model checking algorithm is presented for verifying whether a concurrent probabilistic process satisfies a PBTL formula under fairness constraints, and adaptations of existing model checking algorithms for pCTL* are proposed to obtain procedures for PBTL* under fairness constraints.
Abstract: We consider concurrent probabilistic systems, based on probabilistic automata of Segala & Lynch [55], which allow non-deterministic choice between probability distributions. These systems can be decomposed into a collection of "computation trees" which arise by resolving the non-deterministic, but not probabilistic, choices. The presence of non-determinism means that certain liveness properties cannot be established unless fairness is assumed. We introduce a probabilistic branching time logic PBTL, based on the logic TPCTL of Hansson [30] and the logic PCTL of [55], resp. pCTL of [14]. The formulas of the logic express properties such as "every request is eventually granted with probability at least p". We give three interpretations for PBTL on concurrent probabilistic processes: the first is standard, while in the remaining two interpretations the branching time quantifiers are taken to range over a certain kind of fair computation trees. We then present a model checking algorithm for verifying whether a concurrent probabilistic process satisfies a PBTL formula assuming fairness constraints. We also propose adaptations of existing model checking algorithms for pCTL* [4, 14] to obtain procedures for PBTL* under fairness constraints. The techniques developed in this paper have applications in automatic verification of randomized distributed systems.
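Written in PCTL-style notation (the exact PBTL syntax in the paper differs in detail), the example property quoted above can be rendered as

\[
\mathit{request} \;\Rightarrow\; \mathrm{P}_{\ge p}\,[\,\Diamond\,\mathit{granted}\,],
\]

read informally as: from every state in which a request is pending, the probability of eventually reaching a state where it is granted is at least p. Under the fair interpretations this probability is computed over fair computation trees only.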

227 citations


Journal ArticleDOI
TL;DR: Men's rape myth acceptance (RMA) has been shown to correlate positively with self-reported rape proclivity (RP); the experiments in this paper explore the causal pathway underlying this correlation.
Abstract: Men's rape myth acceptance (RMA; prejudiced beliefs that serve to exonerate the rapist and blame the victim) has been shown to correlate positively with self-reported rape proclivity (RP). To explore the causal pathway underlying this correlation, two experiments were conducted in which the relative cognitive accessibility of RMA and RP was varied. Male students were asked to report their RP in the context of a scale assessing attraction toward sexual aggression (Experiment 1) or in response to five realistic date-rape scenarios (Experiment 2), either before or after they filled out a 20-item RMA scale. In both studies, the correlation of RMA and RP was significantly greater in the after than in the before condition, suggesting that the belief in rape myths has a causal influence on men's proclivity to rape. © 1998 John Wiley & Sons, Ltd.

221 citations


Posted Content
TL;DR: In this paper, the authors extend the two-stage approach by allowing the probability weighting function to depend on the type of uncertainty, and derive relations between decision weights, probability judgments and probability weighting under uncertainty.
Abstract: Decision weights are an important component in recent theories of decision making under uncertainty. To better explain these decision weights, a two-stage approach has been proposed: first, the probability of an event is judged, and then this probability is transformed by the probability weighting function known from decision making under risk. We extend the two-stage approach by allowing the probability weighting function to depend on the type of uncertainty. Using this more general approach, certain properties of decision weights can be attributed to certain properties of probability judgments and/or to certain properties of probability weighting. After deriving these relations between decision weights, probability judgments and probability weighting under uncertainty, we present an empirical study which shows that it is indeed necessary to allow the probability weighting function to be source dependent. The analysis includes an examination of properties of the probability weighting function under uncertainty which have not been considered previously.
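The source-dependent two-stage decomposition described above can be written compactly as follows (the notation is ours, not the paper's):

\[
W(E) \;=\; w_{S}\bigl(P(E)\bigr),
\]

where P(E) is the judged probability of event E, w_S is a probability weighting function, and the subscript S indicates that the weighting function may depend on the source of uncertainty; the original two-stage model is recovered when w_S = w for all sources.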

197 citations


Journal ArticleDOI
TL;DR: A general efficient method is proposed for the fast evaluation of trigonometric polynomials at nonequispaced nodes, based on approximating the polynomials by special linear combinations of translates of suitable functions ϕ.
Abstract: In this paper, we are concerned with fast Fourier transforms for nonequispaced grids. We propose a general efficient method for the fast evaluation of trigonometric polynomials at nonequispaced nodes based on the approximation of the polynomials by special linear combinations of translates of suitable functions ϕ. We derive estimates for the approximation error. In particular, we improve the estimates given by Dutt and Rokhlin [7]. As a practical consequence, we obtain a criterion for the choice of the parameters involved in the fast transforms.
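Schematically, and in generic notation rather than the paper's, the approach approximates a trigonometric polynomial by translates of a well-localized window function φ:

\[
f(x) \;=\; \sum_{k=-N/2}^{N/2-1} c_k\, e^{2\pi i k x}
\;\approx\; \sum_{l} g_l\, \varphi\!\Bigl(x - \tfrac{l}{n}\Bigr),
\]

so that evaluating f at arbitrary nodes x_j reduces to one FFT on an equispaced grid of size n (to obtain the coefficients g_l) plus, for each node, a short sum over the few translates of φ that are non-negligible near x_j; the error estimates in the paper govern the choice of n and of φ.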

148 citations


Journal ArticleDOI
TL;DR: This article shows how the traditional innovation models can be extended to incorporate competition and to map the process of substitution among successive product generations.
Abstract: The diffusion of innovations over time is a highly dynamic and complex problem. It is influenced by various factors like price, advertising, and product capabilities. Traditional models of innovation diffusion ignore the complexity underlying the process of diffusion. Their aim is normative decision support, but these models do not appropriately represent the structural fundamentals of the problem. The use of the system dynamics methodology allows the development of more complex models to investigate the process of innovation diffusion. These models can enhance insight in the problem structure and increase understanding of the complexity and the dynamics caused by the influencing elements. This article shows how the traditional innovation models can be extended to incorporate competition and to map the process of substitution among successive product generations. Several model simulations show the potential of using system dynamics as the modeling methodology in the field of new product diffusion models. © 1998 John Wiley & Sons, Ltd.
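The "traditional models" referred to here are typically of the Bass type; in its simplest form (stated for orientation, not quoted from the article) the diffusion dynamics read

\[
\frac{dN(t)}{dt} \;=\; \Bigl(p + q\,\frac{N(t)}{M}\Bigr)\bigl(M - N(t)\bigr),
\]

where N(t) is the cumulative number of adopters, M the market potential, p the coefficient of innovation and q the coefficient of imitation. The system dynamics models discussed in the article extend this feedback structure with influences such as price and advertising, with competition, and with substitution between successive product generations.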

147 citations


Proceedings Article
24 Aug 1998
TL;DR: It is shown that application of SMAs to TPC-D Query 1 results in a speed up of two orders of magnitude, and some further tuning possibilities for SMAs are discussed.
Abstract: Small Materialized Aggregates (SMAs for short) are considered a highly flexible and versatile alternative to materialized data cubes. The basic idea is to compute many aggregate values for small to medium-sized buckets of tuples. These aggregates are then used to speed up query processing. We present the general idea and describe an application of SMAs to the TPC-D benchmark. We show that applying SMAs to TPC-D Query 1 results in a speed-up of two orders of magnitude. Then, we elaborate on the problem of query processing in the presence of SMAs. Last, we briefly discuss some further tuning possibilities for SMAs.
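As an illustration of the basic idea (bucket-level aggregates used to prune and pre-aggregate; the table layout, bucket contents and query are assumptions, not the paper's TPC-D implementation), a minimal Python sketch:

from dataclasses import dataclass

@dataclass
class Bucket:
    min_date: int        # smallest ship date occurring in the bucket
    max_date: int        # largest ship date occurring in the bucket
    sum_qty: float       # precomputed SUM(quantity) over the bucket (the SMA)
    rows: list           # underlying tuples (date, quantity)

def sum_qty_shipped_up_to(buckets, cutoff):
    # SUM(quantity) WHERE date <= cutoff, answered with bucket aggregates:
    # fully qualifying buckets contribute their SMA, disqualified buckets
    # are skipped, and only boundary buckets are scanned tuple by tuple.
    total = 0.0
    for b in buckets:
        if b.max_date <= cutoff:
            total += b.sum_qty
        elif b.min_date > cutoff:
            continue
        else:
            total += sum(q for d, q in b.rows if d <= cutoff)
    return total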

134 citations


Journal ArticleDOI
TL;DR: In this article, the authors give a qualitative classification of all compact subgroups Γ ⊂ GL_n(F), where F is a local field and n is arbitrary, up to finite index and a finite number of abelian subquotients.

Journal ArticleDOI
TL;DR: The authors examined the influence of majority opinion on attitudes in the absence of persuasive argumentation; participants who were either high or low in accuracy motivation were presented with an opinion poll that conveyed consensus information and the poll's sample size.
Abstract: This study examined the influence of majority opinion on attitudes in the absence of persuasive argumentation. Participants who were either high or low in accuracy motivation were presented with an opinion poll that conveyed consensus information and the sample size of the poll. According to the law of large numbers (LLN), large polls provide more reliable estimates of consensus than smaller polls. Results generally supported predictions. Less-motivated participants tended to be influenced by consensus regardless of poll size, whereas highly motivated participants based attitudes on this information only if the poll was reliably large. Thus, participants who were highly motivated seemed to appreciate the LLN when making their attitude judgments. Consistent with the heuristic-systematic model, process measures indicated that consensus influenced attitudes through both heuristic and biased systematic processing under high motivation, but it influenced attitudes only via heuristic processing when motivation ...

Posted Content
TL;DR: Because the agenda matters even when players become arbitrarily patient, the question arises of which agenda should come up endogenously when agents bargain over a set of unrelated issues; it is found that simultaneous bargaining over “packages” should be a prevailing phenomenon.
Abstract: The first part of this paper shows that in a noncooperative bargaining model with alternating offers and time preferences, the timing of issues (the agenda) matters even if players become arbitrarily patient. This result raises the question of which agenda should come up endogenously when agents bargain over a set of unrelated issues. It is found that simultaneous bargaining over "packages" should be a prevailing phenomenon, but we also point to the possibility of multiple equilibria involving even considerable delay.

Journal ArticleDOI
TL;DR: A new fast algorithm for the computation of the matrix-vector product Pa in O(N log^2 N) arithmetical operations is presented; it divides into a fast transform which replaces Pa with C_{N+1} ā and a subsequent fast cosine transform.
Abstract: Consider the Vandermonde-like matrix P := (P_k(cos(jπ/N)))_{j,k=0}^{N}, where the polynomials P_k satisfy a three-term recurrence relation. If the P_k are the Chebyshev polynomials T_k, then P coincides with C_{N+1} := (cos(jkπ/N))_{j,k=0}^{N}. This paper presents a new fast algorithm for the computation of the matrix-vector product Pa in O(N log^2 N) arithmetical operations. The algorithm divides into a fast transform which replaces Pa with C_{N+1} ā and a subsequent fast cosine transform. The first and central part of the algorithm is realized by a straightforward cascade summation based on properties of associated polynomials and by fast polynomial multiplications. Numerical tests demonstrate that our fast polynomial transform realizes Pa with almost the same precision as the Clenshaw algorithm, but is much faster for N ≥ 128.
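For orientation, the Clenshaw evaluation used as the accuracy baseline can be sketched for the Chebyshev case P_k = T_k, in which P coincides with the cosine matrix C_{N+1} (a minimal sketch of the slow O(N^2) reference, not the paper's fast transform):

import numpy as np

def clenshaw_chebyshev(a, x):
    # Evaluate sum_k a[k] * T_k(x) with Clenshaw's backward recurrence.
    b1 = b2 = 0.0
    for k in range(len(a) - 1, 0, -1):
        b1, b2 = 2.0 * x * b1 - b2 + a[k], b1
    return x * b1 - b2 + a[0]

N = 8
a = np.random.rand(N + 1)                      # coefficients a_0, ..., a_N
nodes = np.cos(np.arange(N + 1) * np.pi / N)   # the points cos(j*pi/N)
Pa = np.array([clenshaw_chebyshev(a, x) for x in nodes])

# In the Chebyshev case P equals C_{N+1} = (cos(j*k*pi/N))_{j,k=0..N}.
C = np.cos(np.outer(np.arange(N + 1), np.arange(N + 1)) * np.pi / N)
assert np.allclose(Pa, C @ a)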

Journal ArticleDOI
TL;DR: In this article, the authors compared the call market, the continuous auction and the dealer market; single transaction prices are much more efficient in the call and continuous auction markets than in the dealer market, although the dealer market is very efficient when average prices are analyzed.
Abstract: This paper reports the results of 18 market experiments that were conducted in order to compare the call market, the continuous auction and the dealer market. The design incorporates asymmetric information but guarantees that the ex-ante quality of the private signals of all traders is identical. Therefore, the aggregation of diverse information can be analyzed in the absence of insider trading. Single transaction prices in the call and continuous auction market are found to be much more efficient than prices in the dealer market. The latter is, however, very efficient when average prices are analyzed. Averaging the prices of a trading period largely eliminates the bid-ask spread. The conclusion is therefore that prices in a dealer market convey high quality information, but at the expense of high transaction costs. The call market, although exhibiting small pricing errors, shows a systematic tendency towards underadjustment to new information. An analysis of market liquidity using various measures proposed in the literature shows that execution costs are lowest in the call market and highest in the dealer market. The analysis also reveals that both the trading volume and Roll's (1984) serial covariance estimator are inappropriate measures of execution costs in the present context. The quality of the private signals traders receive influences portfolio structure but does not influence end-of-period wealth. This result is consistent with efficient price discovery in the experimental markets.
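Roll's (1984) serial covariance estimator mentioned above infers the effective spread s from the first-order autocovariance of successive price changes,

\[
s \;=\; 2\sqrt{-\,\operatorname{Cov}\!\bigl(\Delta p_t,\ \Delta p_{t-1}\bigr)},
\]

which is defined only when that autocovariance is negative.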

Journal ArticleDOI
TL;DR: This paper found that post-message attitudes were more positive under high consensus than under low consensus; this effect was mediated via thought valence but not via thought convergence, and these effects were replicated if consensus information preceded message processing but not if it was presented after message processing.
Abstract: The authors studied effects of majority and minority support on persuasion for nondiscrepant positions. In two experiments, students (N = 188) read messages on previously unknown attitude objects. These messages were attributed to numerical majorities (high consensus) or minorities (low consensus). The results show that consensus information can bias systematic processing of message content. High consensus evoked positively biased cognitive responses that focused on message content (convergent processing), whereas low consensus elicited negatively biased processing that pertained to new aspects of the issue (divergent processing). Post-message attitudes were more positive under high consensus than under low consensus; this effect was mediated via thought valence but not via thought convergence. In Experiment 2, these effects were replicated if consensus information preceded message processing but not if it was presented after message processing. Furthermore, in both experiments, cognitive activity was low...

Journal ArticleDOI
TL;DR: The HIT'88/'89 regimen was well tolerated and efficacious with regard to response rates and early PFS, particularly in medulloblastoma and anaplastic ependymoma; based on these results, the prospectively randomized trial HIT'91 was designed to investigate the optimal timing of chemotherapy.
Abstract: Background: Preradiation chemotherapy could be beneficial in malignant brain tumors, because the blood-brain tumor barrier is disrupted after surgery, bone marrow recovery--essential for intense chemotherapy--is still intact, and the CNS toxicity and ototoxicity of active drugs are lower before irradiation of a child's brain. Patients and methods: A neoadjuvant phase 2 and a single-arm pilot trial were initiated to investigate the efficacy and toxicity of an intense multidrug regimen before radiotherapy in 147 patients aged between 3 and 29;9 years with medulloblastoma (94), malignant glioma (22), ependymoma (21), and stPNET (10). They were treated with one or two cycles consisting of procarbazine, ifosfamide/mesna with etoposide, high-dose methotrexate/CF, and cisplatin with cytarabine. Results: Radiation therapy was delayed for 17-30 weeks (median 23 weeks) in the 112 patients who received two cycles. Chemotherapy was well tolerated. Serious infections were observed in 20 patients, with one fatal fungal septicemia. In 69 high-risk patients with a residual tumor and/or solid CNS metastases, an objective response (CR plus PR) was achieved in 67% of medulloblastoma, 57% of stPNET, 55% of anaplastic ependymoma and 25% of malignant glioma cases. Progression-free survival (PFS) at 5 years was 57% in the 14 high-risk patients with medulloblastoma who achieved a complete response (CR); after less than a CR, the PFS was 20% (p = 0.01). Overall survival at 5 years was 57% in medulloblastoma, 62% in ependymoma, 36% in malignant glioma and 30% in stPNET. Conclusion: The HIT'88/'89 regimen was well tolerated and efficacious with regard to response rates and early PFS, particularly in medulloblastoma and anaplastic ependymoma. Based on these results, the prospectively randomized trial HIT'91 was designed to investigate the optimal timing of chemotherapy. Preradiation chemotherapy according to the HIT'88/'89 regimen is compared with the standard regimen using CCNU, cisplatin, and vincristine after radiation therapy. Additionally, strict quality control of the three treatment modalities was instituted to help improve the survival rates in both trial arms.

Journal ArticleDOI
TL;DR: A function with small-size, unbounded-weight threshold-AND circuits for which all threshold-XOR circuits have exponentially many nodes is presented; this answers a basic question about separating subsets of the hypercube by hypersurfaces induced by sparse real polynomials.
Abstract: We investigate the computational power of threshold-AND circuits versus threshold-XOR circuits. In contrast to the observation that small-weight threshold-AND circuits can be simulated by small-weight threshold-XOR circuits, we present a function with small-size, unbounded-weight threshold-AND circuits for which all threshold-XOR circuits have exponentially many nodes. This answers the basic question of separating subsets of the hypercube by hypersurfaces induced by sparse real polynomials. We prove our main result by a new lower bound argument for threshold circuits. Finally, we show that unbounded-weight threshold gates cannot simulate alternation: there are \( AC^{0,3} \)-functions which need exponential-size threshold-AND circuits.
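For orientation, in the ±1 encoding of inputs a depth-two threshold-XOR circuit computes a function of the form (standard notation, not quoted from the paper)

\[
f(x) \;=\; \operatorname{sign}\Bigl(\sum_{S} w_S \prod_{i \in S} x_i\Bigr),
\qquad x \in \{-1,+1\}^n,
\]

where each XOR gate over the inputs indexed by S corresponds to the monomial \( \prod_{i \in S} x_i \). The number of XOR gates equals the number of monomials, so a small threshold-XOR circuit is precisely the sign of a sparse real polynomial, which is the hypersurface-separation question the paper answers; in threshold-AND circuits the parities are replaced by conjunctions.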

Journal ArticleDOI
TL;DR: The paper applies the method of comparative dynamic analysis to the full Grossman model and derives the equations implicitly defining the complete trajectories of the endogenous variables, relying on the concept of Frisch decision functions.

Book ChapterDOI
11 Oct 1998
TL;DR: The Enhanced ChainMail algorithm is presented, which extends the capabilities of an existing algorithm for modeling deformable tissue, 3D ChainMail, by enabling the modeling of inhomogeneous material.
Abstract: The focus of this paper is the newly developed Enhanced ChainMail Algorithm that will be used for modeling the vitreous humor in the eye during surgical simulation. The simulator incorporates both visualization and biomechanical modeling of a vitrectomy, an intra-ocular surgical procedure for removing the vitreous humor. The Enhanced ChainMail algorithm extends the capabilities of an existing algorithm for modeling deformable tissue, 3D ChainMail, by enabling the modeling of inhomogeneous material. In this paper, we present the enhanced algorithm and demonstrate its capabilities in 2D.
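As a toy illustration of the ChainMail idea that the enhanced algorithm builds on (a 1D sketch with arbitrary link constraints, not the Enhanced ChainMail algorithm itself; inhomogeneous material would simply use per-link bounds):

from collections import deque

def chainmail_1d(x, moved, new_pos, min_d=0.5, max_d=1.5):
    # Move element `moved` to `new_pos`, then propagate the disturbance so
    # that every link length x[i+1] - x[i] stays within [min_d, max_d].
    x = list(x)
    x[moved] = new_pos
    queue = deque([moved])
    while queue:
        i = queue.popleft()
        if i + 1 < len(x):                     # check the right neighbour
            d = x[i + 1] - x[i]
            if d < min_d:
                x[i + 1] = x[i] + min_d; queue.append(i + 1)
            elif d > max_d:
                x[i + 1] = x[i] + max_d; queue.append(i + 1)
        if i - 1 >= 0:                         # check the left neighbour
            d = x[i] - x[i - 1]
            if d < min_d:
                x[i - 1] = x[i] - min_d; queue.append(i - 1)
            elif d > max_d:
                x[i - 1] = x[i] - max_d; queue.append(i - 1)
    return x

print(chainmail_1d([0.0, 1.0, 2.0, 3.0], moved=0, new_pos=2.2))
# prints approximately [2.2, 2.7, 3.2, 3.7]: the displacement propagates link by link.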

Journal ArticleDOI
TL;DR: Using a particular kind of pseudomonotonicity for multivalued mappings, an existence result is proved for equilibria, variational inequalities, and a combination of both.
Abstract: Using a particular kind of pseudomonotonicity for multivalued mappings, an existence result is proved for equilibria, variational inequalities, and a combination of both.
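One standard way to state the combined problem for a bifunction f and a multivalued mapping T on a convex set K (schematic notation, not necessarily the paper's) is:

\[
\text{find } \bar{x} \in K \ \text{and}\ t \in T(\bar{x}) \ \text{such that}\quad
f(\bar{x}, y) \;+\; \langle t,\, y - \bar{x} \rangle \;\ge\; 0
\quad \text{for all } y \in K;
\]

taking T to be the zero mapping gives a pure equilibrium problem, while taking f to be identically zero gives a multivalued variational inequality. Pseudomonotonicity of T is the condition used in place of monotonicity in the existence argument.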

Book ChapterDOI
23 Mar 1998
TL;DR: More efficient attacks on triple DES than the standard meet-in-the-middle attack are presented: one reduces the overall number of steps to roughly 2^108, while other attacks optimize the number of encryptions at the cost of increasing the number of other operations.
Abstract: The standard technique to attack triple encryption is the meet-in-the-middle attack, which requires 2^112 encryption steps. In this paper, more efficient attacks are presented. One of our attacks reduces the overall number of steps to roughly 2^108. Other attacks optimize the number of encryptions at the cost of increasing the number of other operations. It is possible to break triple DES doing 2^90 single encryptions and no more than 2^113 faster operations.
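To make the baseline concrete, a toy meet-in-the-middle attack on double encryption (the standard technique the paper improves on) is sketched below; the 8-bit "cipher" and key space are purely illustrative assumptions.

from collections import defaultdict

def toy_encrypt(key, block):
    # An invertible 8-bit toy "cipher"; it only serves to illustrate the attack.
    return ((block ^ key) * 5 + key) % 256

def toy_decrypt(key, block):
    return (((block - key) * 205) % 256) ^ key   # 205 = 5^(-1) mod 256

def meet_in_the_middle(plain, cipher, keyspace=range(256)):
    # Tabulate E_k1(plain) for all k1, then for each k2 look up D_k2(cipher):
    # every match yields a candidate key pair (k1, k2) for double encryption.
    middle = defaultdict(list)
    for k1 in keyspace:
        middle[toy_encrypt(k1, plain)].append(k1)
    return [(k1, k2) for k2 in keyspace
            for k1 in middle.get(toy_decrypt(k2, cipher), [])]

k1, k2, plain = 42, 199, 123
cipher = toy_encrypt(k2, toy_encrypt(k1, plain))
print((k1, k2) in meet_in_the_middle(plain, cipher))   # True

Against triple encryption with three independent keys, the same idea tabulates over two of the three keys, which is where the 2^112 step count improved on by the paper comes from.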

Journal ArticleDOI
TL;DR: The visual process method as mentioned in this paper is a computer-aided procedure that allows for the recording of the viewing sequence, viewing time, and amount of information used with respect to various face and body parts in judgments of physical attractiveness.

Journal ArticleDOI
TL;DR: In this paper, it was shown that if every player is using either a belief-based learning scheme with bounded recall or a generalized fictitious play learning scheme, then for sufficiently large time, the players' bids are in equilibrium in the one-shot auction in which the types are commonly known.

Journal Article
TL;DR: In this paper, more efficient attacks on triple encryption than the standard meet-in-the-middle attack (which requires 2^112 encryption steps) are presented, including an attack that breaks triple DES with 2^90 single encryptions and no more than 2^113 faster operations.
Abstract: The standard technique to attack triple encryption is the meet-in-the-middle attack, which requires 2^112 encryption steps. In this paper, more efficient attacks are presented. One of our attacks reduces the overall number of steps to roughly 2^108. Other attacks optimize the number of encryptions at the cost of increasing the number of other operations. It is possible to break triple DES doing 2^90 single encryptions and no more than 2^113 faster operations.

Journal ArticleDOI
TL;DR: In this paper, the authors consider existence, characterization, and calculation of equilibria in transportation networks, when the route capacities and demand requirements depend on time, and they express the problem in terms of a variational inequality and is situated in a Banach space setting.
Abstract: We consider existence, characterization, and calculation of equilibria in transportation networks, when the route capacities and demand requirements depend on time. The problem is expressed in terms of a variational inequality and is situated in a Banach space setting. © Academie des Sciences/Elsevier, Paris
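Schematically (notation assumed, not quoted from the note), the time-dependent equilibrium conditions take the variational-inequality form

\[
\text{find } u \in K \ \text{such that}\quad
\int_0^T \bigl\langle F(u)(t),\, v(t) - u(t) \bigr\rangle \, dt \;\ge\; 0
\quad \text{for all } v \in K,
\]

where K is the set of feasible flows respecting the time-dependent route capacities and demand requirements and F(u) assigns to each flow pattern its cost; the Banach-space setting mentioned above is the function space in which these time-dependent flows live.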

Journal ArticleDOI
TL;DR: The soundness of proving the validity of qualitative properties of probabilistic processes under fairness constraints is shown, which generalizes the soundness results for extreme and α-fairness established by Pnueli in 1983 and 1993.

Journal ArticleDOI
TL;DR: In this article, it was shown that the second-best sharing rule is a linear function of aggregate output, and that the first-best rule is also linear in aggregate output.

01 Jan 1998
TL;DR: A novel, integrated teaching and learning tool, called the digital lecture board, is presented; it takes into account the requirements of synchronous, computer-based distance education and also allows for asynchronous usage modes, for instance the preparation of lectures.
Abstract: This paper presents a novel, integrated teaching and learning tool - called digital lecture board - which takes into account the requirements of synchronous, computer-based distance education. For almost two years, the TeleTeaching project Mannheim-Heidelberg has been using video conferencing tools for transmitting lectures and seminars. These tools prove to be insufficient for the purpose of teleteaching since they are not powerful enough to support team work, they are not flexible enough for the use of media, and are somewhat difficult to handle by non-experts. We discuss shortcomings of the existing tools and disclose features we had in mind while designing the digital lecture board. Embedded in a teaching and learning system, the digital lecture board even allows for asynchronous usage modes, for instance, the preparation of lectures. Moreover, we cover implementation issues of the current prototype.

Journal ArticleDOI
TL;DR: In this article, the authors investigate whether respondents' answers to survey questions on the desired number of working hours contain additional information on the preferences of the individuals, and analyze whether deviations between desired hours and actual hours of work help to predict future job changes or changes in hours worked.
Abstract: Empirical implementation of labour supply theories is usually based on realized labour market behaviour. This requires strong assumptions about the impact of labour demand. One way to avoid these assumptions is to make use of subjective data on desired labour supply. In this paper we investigate whether respondents' answers to survey questions on the desired number of working hours contain additional information on the preferences of the individuals. Using panel data for the Netherlands, we analyze whether deviations between desired hours and actual hours of work help to predict future job changes or changes in hours worked. We use parametric and recently developed nonparametric tests. The results suggest that subjective information on desired working hours is helpful in explaining female labour supply. For males the evidence is mixed.