
Showing papers in "Computing in Economics and Finance in 2005"


Journal ArticleDOI
TL;DR: In this article, the authors introduce a simple agent-based model in which the ubiquitous stylized facts (fat tails, volatility clustering) are emergent properties of the interaction among traders.
Abstract: The behavioral origins of the stylized facts of financial returns have been addressed in a growing body of agent-based models of financial markets. While the traditional efficient market viewpoint explains all statistical properties of returns by similar features of the news arrival process, the more recent behavioral finance models explain them as imprints of universal patterns of interaction in these markets. In this paper we contribute to this literature by introducing a very simple agent-based model in which the ubiquitous stylized facts (fat tails, volatility clustering) are emergent properties of the interaction among traders. The simplicity of the model allows us to estimate the underlying parameters, since it is possible to derive a closed form solution for the distribution of returns. We show that the tail shape characterizing the fatness of the unconditional distribution of returns can be directly derived from some structural variables that govern the traders' interactions, namely the herding propensity and the autonomous switching tendency.
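A minimal simulation sketch of the switching mechanism the abstract describes, assuming a two-state noise-trader population; the parameter names (a for the autonomous switching tendency, b for the herding propensity), functional form, and values are illustrative and not the paper's exact specification.

```python
import numpy as np

def simulate_returns(n_agents=100, a=0.002, b=0.05, periods=5_000,
                     micro_steps=50, seed=0):
    """Two-state herding model: agents switch between an optimistic and a
    pessimistic camp at an autonomous rate `a` plus a herding term `b`.
    Returns are taken as changes in average sentiment between periods, each
    period aggregating several micro-level switches. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n_opt = n_agents // 2
    sentiment = np.empty(periods)
    for p in range(periods):
        for _ in range(micro_steps):
            n_pes = n_agents - n_opt
            rate_up = n_pes * (a + b * n_opt / n_agents)   # pessimist -> optimist
            rate_dn = n_opt * (a + b * n_pes / n_agents)   # optimist -> pessimist
            if rng.random() < rate_up / (rate_up + rate_dn):
                n_opt = min(n_agents, n_opt + 1)
            else:
                n_opt = max(0, n_opt - 1)
        sentiment[p] = 2 * n_opt / n_agents - 1            # average mood in [-1, 1]
    return np.diff(sentiment)

rets = simulate_returns()
excess_kurt = ((rets - rets.mean()) ** 4).mean() / rets.var() ** 2 - 3
print("excess kurtosis of simulated returns:", round(float(excess_kurt), 2))
```

Stronger herding (larger b relative to a) pushes sentiment into prolonged one-sided phases, which is what produces fat tails and volatility clustering in the simulated returns.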

386 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered the problem of averaging opinions under bounded confidence when agents employ, besides an arithmetic mean, means such as a geometric mean, a power mean, or a random mean in aggregating opinions.
Abstract: The paper treats opinion dynamics under bounded confidence when agents employ, besides an arithmetic mean, means such as a geometric mean, a power mean, or a random mean in aggregating opinions. The different kinds of collective dynamics resulting from these various ways of averaging are studied and compared by simulations. Particular attention is given to the random mean, which is a new concept introduced in this paper. All these concrete means are particular cases of a partial abstract mean, which is also a new concept. This comprehensive concept of averaging opinions is also investigated analytically, and it is shown in particular that the dynamics driven by it always stabilize in a certain pattern of opinions.
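A minimal sketch of one bounded-confidence update step with interchangeable means; the confidence bound, opinion range, and the Dirichlet weighting used here for the "random mean" are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def update(opinions, eps=0.2, kind="arithmetic", p=2.0, rng=None):
    """One bounded-confidence step: each agent averages the opinions lying
    within distance eps of its own, using the chosen mean."""
    rng = rng or np.random.default_rng()
    new = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        peers = opinions[np.abs(opinions - x) <= eps]
        if kind == "arithmetic":
            new[i] = peers.mean()
        elif kind == "geometric":            # requires strictly positive opinions
            new[i] = np.exp(np.log(peers).mean())
        elif kind == "power":                # power mean of order p
            new[i] = np.mean(peers ** p) ** (1.0 / p)
        elif kind == "random":               # random convex weights, one draw per agent
            new[i] = rng.dirichlet(np.ones(len(peers))) @ peers
    return new

ops = np.random.default_rng(0).uniform(0.01, 1.0, size=200)
for _ in range(50):
    ops = update(ops, kind="geometric")
print("surviving opinion clusters:", np.unique(ops.round(3)))
```

Swapping `kind` lets the different collective dynamics (consensus versus several surviving opinion clusters) be compared directly by simulation, as in the paper.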

179 citations


Posted Content
TL;DR: This paper developed a dynamic general equilibrium model where workers can engage in search while on the job and showed that on-the-job search is a key component in explaining labor market dynamics in models of equilibrium unemployment.
Abstract: We develop a dynamic general equilibrium model where workers can engage in search while on the job. We show that on-the-job search is a key component in explaining labor market dynamics in models of equilibrium unemployment. The model predicts fluctuations of unemployment, vacancies, and labor productivity whose relative magnitudes replicate the data. A standard search and matching model suggests much lower volatilities of these variables. Intuitively, in a boom, rising search activity on the job avoids excessive tightening of the labor market for expanding firms. This keeps wage pressures low, thus further increasing firms' incentives to post new jobs. Labor market tightness as measured by the vacancy-unemployment ratio is as volatile as in the data. The interaction between on-the-job search and job creation also generates a strong internal propagation mechanism.

135 citations


Book ChapterDOI
TL;DR: In this article, a simple model of market entry and exit is used to calculate hysteresis indices for economic time series, and the explanatory power of these indices with regard to the equilibrium rate of unemployment in the UK is assessed.
Abstract: This paper explains what hysteresis is, using a simple model of market entry and exit. A procedure for calculating hysteresis indices for economic time series is outlined. Some preliminary results are presented to assess the explanatory power of hysteresis variables with regard to the equilibrium rate of unemployment in the UK. We find that both natural and “unnatural” variables enter a cointegrating vector for UK unemployment over 1959-1996. The natural variable is the replacement ratio. The “unnatural” variables are the hysteresis index of the exchange rate and hysteresis indices for the real oil price and the real interest rate.

105 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the spread of innovations on a social network and investigate whether firms can learn about the network structure and consumer characteristics when only limited information is available, and use this information to evolve a successful directed advertising strategy.
Abstract: We investigate the spread of innovations on a social network. The network consists of agents that are exposed to the introduction of a new product. Consumers decide whether or not to buy the product based on their own preferences and the decisions of their neighbors in the social network. We use and extend concepts from the literature on epidemics and herd behavior to study this problem. The central question of this paper is whether firms can learn about the network structure and consumer characteristics when only limited information is available, and use this information to evolve a successful directed-advertising strategy. In order to do so, we extend existing models to allow for heterogeneous agents and both positive and negative externalities. The firm can learn a directed-advertising strategy that takes into account both the topology of the social consumer network and the characteristics of the consumer. Such directed-advertising strategies outperform random advertising.
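A compact threshold-adoption sketch on a random network, contrasting degree-targeted seeding with random seeding; the network model, preference distribution, externality signs, and threshold are illustrative assumptions rather than the paper's calibration.

```python
import numpy as np

def simulate_adoption(n=500, p_edge=0.02, ad_budget=25, targeted=True,
                      steps=30, seed=1):
    """Agents adopt when private preference plus neighbour influence (positive
    or negative externalities) exceeds a threshold; advertising seeds either
    the highest-degree agents or a random set."""
    rng = np.random.default_rng(seed)
    adj = rng.random((n, n)) < p_edge
    adj = np.triu(adj, 1)
    adj = (adj | adj.T).astype(float)                  # undirected social network
    degree = adj.sum(1)
    pref = rng.normal(0.0, 1.0, n)                     # heterogeneous preferences
    influence = rng.choice([0.5, -0.5], size=n)        # positive / negative externality
    adopted = np.zeros(n, dtype=bool)
    seeds = (np.argsort(-degree)[:ad_budget] if targeted
             else rng.choice(n, ad_budget, replace=False))
    adopted[seeds] = True
    for _ in range(steps):
        neigh = adj @ (adopted * influence) / np.maximum(degree, 1.0)
        adopted |= (pref + neigh) > 1.0                # adoption threshold
    return adopted.mean()

print("targeted seeding :", simulate_adoption(targeted=True))
print("random seeding   :", simulate_adoption(targeted=False))
```

In the paper the firm learns such a targeting rule from limited information; here the degree-based rule is hard-coded purely to illustrate why directed advertising can outperform random advertising.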

92 citations


Journal ArticleDOI
TL;DR: In this paper, an effective and easy-to-implement frequency filter is proposed, obtained by convolving a raised-cosine window with the ideal rectangular filter response function.
Abstract: An effective and easy-to-implement frequency filter is proposed, obtained by convolving a raised-cosine window with the ideal rectangular filter response function. Three other filters, Hodrick--Prescott, Baxter--King, and Christiano--Fitzgerald, are thoroughly reviewed. A bandpass version of the Hodrick--Prescott filter is also introduced and used. The behavior of the windowed filter is compared to the others through their frequency responses and by applying them to both quarterly and monthly artificial, known-structure series and real macroeconomic data. The windowed filter has almost no leakage and is better than the others at eliminating high-frequency components. Its response in the passband is significantly flatter, and its behavior at low frequencies ensures a better removal of undesired long-term components. These improvements are particularly evident when working with short time series, which are common in macroeconomics. The proposed filter is stationary and symmetric and therefore induces no phase shift. It uses all the information contained in the input data and stationarizes series integrated up to order two. It thus proves to be a good candidate for extracting frequency-defined series components.
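One common way to build such a filter is to taper the ideal band-pass impulse response with a raised-cosine window, which is equivalent to convolving the ideal rectangular frequency response with the window's transform. The sketch below follows that reading with illustrative band edges and filter length; it is not the paper's exact construction, and the zero-sum normalization is borrowed from Baxter--King-style filters.

```python
import numpy as np

def windowed_bandpass_weights(low_period=6, high_period=32, half_length=12):
    """Band-pass weights: ideal (sinc-type) impulse response for the band
    [2*pi/high_period, 2*pi/low_period], tapered by a raised-cosine window.
    Periods are in observations (6-32 quarters = business-cycle band)."""
    w1, w2 = 2 * np.pi / high_period, 2 * np.pi / low_period    # band edges (rad)
    k = np.arange(-half_length, half_length + 1)
    safe_k = np.where(k == 0, 1, k)
    ideal = np.where(k == 0, (w2 - w1) / np.pi,
                     (np.sin(w2 * safe_k) - np.sin(w1 * safe_k)) / (np.pi * safe_k))
    window = 0.5 * (1 + np.cos(np.pi * k / (half_length + 1)))  # raised cosine
    h = ideal * window
    return h - h.mean()        # force zero gain at frequency zero (trend removal)

def bandpass(x, **kwargs):
    """Symmetric two-sided filtering; trims half_length points at each end,
    so no phase shift is introduced."""
    return np.convolve(np.asarray(x, dtype=float),
                       windowed_bandpass_weights(**kwargs), mode="valid")
```

Plotting `np.abs(np.fft.rfft(windowed_bandpass_weights(), 512))` against frequency gives the gain function, which is how leakage and passband-flatness comparisons like those in the paper are usually made.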

80 citations


Journal ArticleDOI
TL;DR: A new numerical algorithm for solving the Sylvester equation involved in higher-order perturbation methods developed for solving stochastic dynamic general equilibrium models surpasses other methods used so far in terms of computational time, memory consumption, and numerical stability.
Abstract: This paper presents a new numerical algorithm for solving the Sylvester equation involved in higher-order perturbation methods developed for solving stochastic dynamic general equilibrium models. The new algorithm surpasses other methods used so far (including the very popular doubling algorithm) in terms of computational time, memory consumption, and numerical stability.
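For orientation, the Sylvester equation in question has the generic form AX + XB = C. A dense Bartels--Stewart solver (available in SciPy) can serve as a correctness baseline; the paper's algorithm instead exploits the special structure these equations inherit from second-order perturbation solutions. The matrices below are random placeholders.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Baseline solve of A X + X B = C and a residual check. In a perturbation
# context, A and B would come from the model's first-order solution and C
# from second-order terms; here they are random stand-ins.
rng = np.random.default_rng(0)
n, m = 50, 30
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))
X = solve_sylvester(A, B, C)
print("residual norm:", np.linalg.norm(A @ X + X @ B - C))
```

A specialized solver of the kind the paper proposes would be benchmarked against such a baseline on computational time, memory use, and the size of this residual.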

80 citations


Journal ArticleDOI
TL;DR: The results indicate that using publicly available financial data, it is possible to replicate the credit ratings of the firms with a satisfactory accuracy.
Abstract: Credit ratings issued by international agencies are extensively used in practice to support investment and financing decisions. Furthermore, a considerable portion of financial research has been devoted to the analysis of credit ratings in terms of their effectiveness and practical implications. This paper explores the development of appropriate models to replicate the credit ratings issued by a rating agency. The analysis is based on a multicriteria classification method used in the development of the model. Special focus is laid on testing the out-of-time and out-of-sample effectiveness of the models, and a comparison is performed with other parametric and non-parametric classification methods. The results indicate that, using publicly available financial data, it is possible to replicate the credit ratings of the firms with satisfactory accuracy.

71 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyzed the behavior of some of the most used tests of long memory: the R/S analysis, the modified R/S, the Geweke and Porter-Hudak (GPH) test and the detrended fluctuation analysis (DFA).
Abstract: Many time series in diverse fields have been found to exhibit long memory. This paper analyzes the behaviour of some of the most used tests of long memory: the R/S analysis, the modified R/S, the Geweke and Porter-Hudak (GPH) test and the detrended fluctuation analysis (DFA). Some of these tests exhibit size distortions in small samples, which the bootstrap procedure may correct. Here I examine the size and power of those tests for finite samples and different distributions, such as the normal, uniform, and lognormal. For short-memory processes such as AR, MA and ARCH, and long-memory processes such as ARFIMA, p-values are calculated using the post-blackening moving-block bootstrap. The Monte Carlo study suggests that the bootstrap critical values perform better. The results are applied to financial return time series.
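To illustrate the bootstrap idea, the sketch below computes the classical R/S statistic and a moving-block bootstrap p-value; the "post-blackening" step (pre-whitening with a fitted short-memory model and re-colouring the resampled series) is omitted, and the block length and replication count are arbitrary choices.

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic: range of the cumulative demeaned series
    divided by its standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std(ddof=1)

def moving_block_bootstrap(x, block=20, reps=999, seed=0):
    """Bootstrap distribution of R/S built from overlapping blocks of the data."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block))
    stats = np.empty(reps)
    for r in range(reps):
        starts = rng.integers(0, n - block + 1, size=n_blocks)
        resampled = np.concatenate([x[s:s + block] for s in starts])[:n]
        stats[r] = rescaled_range(resampled)
    return stats

x = np.random.default_rng(1).standard_normal(1000)   # replace with a return series
obs = rescaled_range(x)
boot = moving_block_bootstrap(x)
print("bootstrap p-value:", float((boot >= obs).mean()))
```

Because block resampling preserves short-range dependence but breaks long-range dependence, comparing the observed statistic with this bootstrap distribution yields critical values that behave better in small samples, which is the paper's point.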

62 citations


Journal ArticleDOI
TL;DR: This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations.
Abstract: This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. The implementation of parallelization is done in a way such that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. Detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
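The paper works with GNU Octave and clusters built from a bootable CD; the underlying pattern, an investigator-supplied replication function handed to a pool of workers, can be illustrated in Python (the multiprocessing module here is only a stand-in for the paper's toolchain).

```python
import numpy as np
from multiprocessing import Pool

def one_replication(seed):
    """A single Monte Carlo replication: OLS slope on simulated data.
    Any embarrassingly parallel estimation step could be substituted."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(200)
    y = 1.0 + 0.5 * x + rng.standard_normal(200)
    return np.polyfit(x, y, 1)[0]

if __name__ == "__main__":
    # The investigator only writes one_replication; the pool hides the
    # parallelism, mirroring the paper's point that users need no knowledge
    # of parallel programming.
    with Pool() as pool:
        slopes = pool.map(one_replication, range(1000))
    print("mean slope:", np.mean(slopes), " std:", np.std(slopes))
```

Replications are independent, so wall-clock time falls roughly in proportion to the number of workers, which is the source of the important reductions in computational time the examples report.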

44 citations


Posted Content
TL;DR: In this article, the authors explore whether forecasting an aggregate variable using information on its disaggregate components can improve the prediction mean squared error over forecasting the disaggregates and aggregating those forecasts, or using only aggregate information in forecasting the aggregate.
Abstract: We explore whether forecasting an aggregate variable using information on its disaggregate components can improve the prediction mean squared error over forecasting the disaggregates and aggregating those forecasts, or using only aggregate information in forecasting the aggregate. An implication of a theory of prediction is that the first should outperform the alternative methods of forecasting the aggregate in population. However, forecast models are based on sample information. The data generation process and the forecast model selected might differ. We show how changes in collinearity between regressors affect the bias-variance trade-off in model selection and how the criterion used to select variables in the forecasting model affects forecast accuracy. We investigate why forecasting the aggregate using information on its disaggregate components improves forecast accuracy of the aggregate forecast of Euro area inflation in some situations, but not in others. The empirical evidence on Euro-zone inflation forecasts suggests that more information can help, more so by including macroeconomic variables than disaggregate components.
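A toy simulation makes the comparison concrete: forecast the aggregate directly with an AR(1), or fit an AR(1) to each component and sum the forecasts. The data-generating process, persistence parameters, and sample size below are illustrative only and say nothing about which method wins in general.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_comp = 400, 3
phi = np.array([0.8, 0.3, 0.5])                  # heterogeneous component persistence
comps = np.zeros((T, n_comp))
for t in range(1, T):
    comps[t] = phi * comps[t - 1] + rng.standard_normal(n_comp)
agg = comps.sum(axis=1)

def ar1_forecast(y):
    """One-step-ahead AR(1) forecast of y[-1], fitted on all earlier data."""
    slope, intercept = np.polyfit(y[:-2], y[1:-1], 1)
    return intercept + slope * y[-2]

fc_from_components = sum(ar1_forecast(comps[:, j]) for j in range(n_comp))
fc_direct = ar1_forecast(agg)
print("realised:", agg[-1],
      "| sum of component forecasts:", fc_from_components,
      "| direct aggregate forecast:", fc_direct)
```

Repeating this over many simulated samples, and with misspecified or collinear regressors, is how the bias-variance trade-off discussed in the abstract would show up in forecast mean squared errors.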

Posted Content
TL;DR: In this paper, an extension of McKean's (1965) incomplete Fourier transform method was used to solve the two-factor partial differential equation for the price and early exercise surface of an American call option, in the case where the volatility of the underlying evolves randomly.
Abstract: This paper provides an extension of McKean's (1965) incomplete Fourier transform method to solve the two-factor partial differential equation for the price and early exercise surface of an American call option, in the case where the volatility of the underlying evolves randomly. The Heston (1993) square-root process is used for the volatility dynamics. The price is given by an integral equation dependent upon the early exercise surface, using a free boundary approximation that is linear in volatility. By evaluating the pricing equation along the free surface boundary, we provide a corresponding integral equation for the early exercise region. An algorithm is proposed for solving the integral equation system, based upon numerical integration techniques for Volterra integral equations. The method is implemented, and the resulting prices are compared with the constant volatility model. The computational efficiency of the numerical integration scheme is also considered.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the possible existence of business cycle asymmetries in Canada, France, Japan, UK, and USA real GDP growth rates using neural network nonlinearity tests and tests based on a number of nonlinear time series models.
Abstract: This study examines the possible existence of business cycle asymmetries in Canada, France, Japan, UK, and USA real GDP growth rates using neural network nonlinearity tests and tests based on a number of nonlinear time series models. These tests are constructed using in-sample forecasts from artificial neural networks (ANN) as well as time series models. Our results based on neural network tests show statistically significant evidence of business cycle asymmetries in these industrialized countries. Similarly, our results based on a number of time series models also show that business cycle asymmetries prevail in these countries. Consequently, the impact of monetary policy or other shocks on GDP in these countries cannot be adequately evaluated with linear models.

Journal ArticleDOI
TL;DR: In this paper, the authors presented three approaches to value American continuous-installment options written on assets without dividends or with continuous dividend yield, and derived closed-form formulas by approximating the optimal stopping and exercise boundaries as multipiece exponential functions, which is compared to the finite difference method to solve the inhomogeneous Black-Scholes PDE and a Monte Carlo approach.
Abstract: We present three approaches to value American continuous-installment options written on assets without dividends or with continuous dividend yield. In an American continuous-installment option, the premium is paid continuously instead of up-front. At or before maturity, the holder may terminate payments by either exercising the option or stopping the option contract. Under the usual assumptions, we are able to construct an instantaneous riskless dynamic hedging portfolio and derive an inhomogeneous Black--Scholes partial differential equation for the initial value of this option. This key result allows us to derive valuation formulas for American continuous-installment options using the integral representation method and consequently to obtain closed-form formulas by approximating the optimal stopping and exercise boundaries as multipiece exponential functions. This process is compared to the finite difference method to solve the inhomogeneous Black--Scholes PDE and a Monte Carlo approach.

Journal ArticleDOI
TL;DR: In this article, the authors present a model that integrates the discrete working time choice of heterogenous households into a general equilibrium setting where wages are determined by sectoral bargaining between firms and trade unions.
Abstract: We present a model that integrates the discrete working time choice of heterogeneous households into a general equilibrium setting where wages are determined by sectoral bargaining between firms and trade unions. The model is calibrated to German micro and macro data. We then use it to analyse a stylised policy reform designed to stimulate labour supply.

Journal ArticleDOI
TL;DR: In this paper, the authors define an extension of Dantzig-Wolfe decomposition for the variational inequality (VI) problem, a modeling framework that is widely used for models of competitive or oligopolistic markets.
Abstract: The creation and ongoing management of a large economic model can be greatly simplified if the model is managed in separate smaller pieces defined, e.g. by region or commodity. For this purpose, we define an extension of Dantzig--Wolfe decomposition for the variational inequality (VI) problem, a modeling framework that is widely used for models of competitive or oligopolistic markets. The subproblem, a collection of independent smaller models, is a relaxed VI missing some "difficult" constraints. The subproblem is modified at each iteration by information passed from the last solution of the master problem in a manner analogous to Dantzig--Wolfe decomposition for optimization models. The master problem is a VI which forms convex combinations of proposals from the subproblem, and enforces the difficult constraints. A valid stopping condition is derived in which a scalar quantity, called the "convergence gap," is monitored. The convergence gap is a generalization of the primal-dual gap that is commonly monitored in implementations of Dantzig--Wolfe decomposition for optimization models. Convergence is proved under conditions general enough to be applicable to many models. An illustration is provided for a two-region competitive model of Canadian energy markets.

Posted ContentDOI
TL;DR: In this article, the authors consider the decision making process in the context of a model in which inflation forecast targeting is used but there is heterogeneity among the members of the committee and find that internally generated forecasts of output and market generated expectations of medium term inflation provide the best description of discrete changes in interest rates.
Abstract: The transparency and openness of the monetary policymaking process at the Bank of England has provided very detailed information on both the decisions of individual members of the Monetary Policy Committee and the information on which they are based. In this paper we consider this decision making process in the context of a model in which inflation forecast targeting is used but there is heterogeneity among the members of the committee. We find that internally generated forecasts of output and market generated expectations of medium term inflation provide the best description of discrete changes in interest rates. We also find a role for asset prices through the equity market, foreign exchange market and housing prices. There are also identifiable forms of heterogeneity among members of the committee that improve the predictability of interest rate changes. This can be thought of as supporting the argument that full transparency of monetary policy decision making can be welfare enhancing.

Journal ArticleDOI
TL;DR: The approach described here greatly increases gains from parallel execution and opens possibilities for re-writing objective functions to make further efficiency gains.
Abstract: Many economic models are completed by finding a parameter vector θ that optimizes a function f(θ), a task that can only be accomplished by iterating from a starting vector θ0. Use of a generic iterative optimizer to carry out this task can waste enormous amounts of computation when applied to a class of problems defined here as finite mixture models. The finite mixture class is large and important in economics and eliminating wasted computations requires only limited changes to standard code. Further, the approach described here greatly increases gains from parallel execution and opens possibilities for re-writing objective functions to make further efficiency gains.


Posted Content
TL;DR: In this paper, the authors develop the Generalized Taylor Economy (GTE), in which there are many sectors with overlapping contracts of different lengths, and show that monetary shocks will be more persistent when there are longer contracts.
Abstract: In this paper we develop the Generalized Taylor Economy (GTE), in which there are many sectors with overlapping contracts of different lengths. We are able to show that even in economies with the same average contract length, monetary shocks will be more persistent when there are longer contracts. In particular, we are able to solve the puzzle of why Calvo contracts appear to be more persistent than simple Taylor contracts: it is because the standard calibration of Calvo contracts is not correct.

Journal ArticleDOI
TL;DR: In this paper, double higher-order hidden Markov chain models (DHHMMs) were developed for extracting information about the hidden sequence of the states of an economy from the spot interest rates and credit ratings of bonds.
Abstract: Estimating and forecasting the unobservable states of an economy are important and practically relevant topics in economics. Central bankers and regulators can use information about market expectations of the hidden states of the economy as a reference for decision and policy making, for instance when setting monetary policy. Spot interest rates and credit ratings of bonds contain important information about the hidden sequence of the states of the economy. In this paper, we develop double higher-order hidden Markov chain models (DHHMMs) for extracting information about the hidden sequence of the states of an economy from the spot interest rates and credit ratings of bonds. We consider a discrete-state model described by DHHMMs and focus on the qualitative aspect of the unobservable states of the economy. The observable spot interest rates and credit ratings of bonds depend on the hidden states of the economy, which are modelled by DHHMMs. The DHHMMs can incorporate the persistence of the time series of spot interest rates and credit ratings. We employ the maximum likelihood method and the Viterbi algorithm to uncover the optimal hidden sequence of the states of the economy, which can be interpreted as the "best" estimate of the sequence of underlying economic states generating the spot interest rates and credit ratings of the bonds. We then develop an efficient maximum likelihood estimation method to estimate the unknown parameters in the model. A numerical experiment is conducted to illustrate the implementation of the model.
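The decoding step can be illustrated with a plain first-order hidden Markov model; the paper's double higher-order chains generalize the same dynamic-programming recursion to longer state and observation memories. The states, observation categories, and probabilities below are toy values.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden state path for a first-order HMM, in log space.
    obs    : observation indices, length T
    log_pi : (S,)   initial log-probabilities
    log_A  : (S, S) transition log-probabilities
    log_B  : (S, O) emission log-probabilities"""
    T, S = len(obs), len(log_pi)
    delta = np.zeros((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # scores[i, j]: from state i to j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy example: two hidden economic states, three observed rating categories.
log = np.log
pi = log(np.array([0.6, 0.4]))
A = log(np.array([[0.9, 0.1], [0.2, 0.8]]))
B = log(np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]))
print(viterbi([0, 0, 1, 2, 2, 1], pi, A, B))
```

In the paper, the analogous recursion runs over composite states that encode several lags of the economy's state, and the transition and emission parameters are themselves estimated by maximum likelihood.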

Book ChapterDOI
TL;DR: In this article, an agent-based, evolutionary model that formalizes individual behaviors and interactions in both product and labor markets from the bottom up is presented; the model robustly reproduces the Beveridge, Wage, and Okun curves under quite broad behavioral and institutional settings.
Abstract: Three well-known aggregate regularities (i.e. the Beveridge, Wage, and Okun curves) seem to provide a quite complete picture of the interplay between labor market macro-dynamics and the business cycle. Nevertheless, the existing theoretical literature still lacks micro-founded models able to account jointly for these three crucial stylized facts. In this paper, we present an agent-based, evolutionary model that formalizes from the bottom up individual behaviors and interactions in both product and labor markets. We describe as endogenous processes vacancy and wage setting, matching and bargaining, and demand and price formation. Firms enjoy labor productivity improvements (technological progress) and are selected on the basis of their revealed competitiveness (which is also affected by their hiring- and wage-setting behaviors). Simulations show that the model robustly reproduces the Beveridge, Wage, and Okun curves under quite broad behavioral and institutional settings. Moreover, the system endogenously generates an Okun coefficient greater than one even if individual firms employ production functions exhibiting constant returns to labor. Monte Carlo simulations also indicate that statistically detectable shifts in the Okun and Beveridge curves emerge as the result of changes in institutional, behavioral, and technological parameters. Finally, the model generates quite sharp predictions about how system parameters affect aggregate performance (i.e. average GDP growth) and its volatility.

Journal ArticleDOI
TL;DR: In this paper, the Koehler and Symanovski copula function with specific marginals, such as the skew Student-t, the skew generalized secant hyperbolic, and skew generalized exponential power distributions, was examined for modeling financial returns and measuring dependent risks.
Abstract: This study examines the Koehler and Symanovski copula function with specific marginals, such as the skew Student-t, the skew generalized secant hyperbolic, and the skew generalized exponential power distributions, in modelling financial returns and measuring dependent risks. The copula function can be specified by adding interaction terms to the cumulative distribution function for the case of independence. It can also be derived using a particular transformation of independent gamma functions. The advantage of using this distribution relative to others lies in its ability to model complex dependence structures among subsets of marginals, as we show for aggregate dependent risks of some market indices.

Journal ArticleDOI
TL;DR: It is argued that in the typical macroeconomic model with valuable leisure, the labor function is particularly convenient to parameterize, and it is found that using the labor-function parameterization instead of the standard consumption-function parameterization reduces computational time by more than a factor of 10.
Abstract: Euler-equation methods for solving nonlinear dynamic models involve parameterizing some policy functions. We argue that in the typical macroeconomic model with valuable leisure, the labor function is particularly convenient to parameterize. This is because under the labor-function parameterization, the intratemporal first-order condition admits a closed-form solution, whereas under other parameterizations it must be solved numerically. In the context of a simulation-based parameterized expectations algorithm, we find that using the labor-function parameterization instead of the standard consumption-function parameterization reduces computational time by more than a factor of 10.
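The convenience the authors point to can be shown under stock functional forms (log consumption and log leisure utility with Cobb-Douglas technology, which need not be the paper's exact setup): once the labor function is parameterized, the intratemporal condition delivers consumption in closed form, whereas parameterizing consumption leaves an equation that must be solved numerically for hours.

```python
import numpy as np

# Assumed illustrative forms: u(c, 1-n) = ln(c) + b*ln(1-n),
# y = A * k**alpha * n**(1-alpha).
alpha, A, b = 0.36, 1.0, 2.0

def consumption_from_labor(k, n):
    """Intratemporal FOC  b*c/(1-n) = (1-alpha)*A*k**alpha*n**(-alpha),
    solved for c in closed form once labor n is the parameterized object."""
    return (1 - alpha) * A * k**alpha * n**(-alpha) * (1 - n) / b

def labor_from_consumption(k, c, grid=np.linspace(1e-4, 0.999, 10_000)):
    """Under a consumption-function parameterization the same FOC has to be
    solved numerically for n; a crude grid search is used here."""
    residual = b * c / (1 - grid) - (1 - alpha) * A * k**alpha * grid**(-alpha)
    return grid[np.argmin(np.abs(residual))]

c_star = consumption_from_labor(k=10.0, n=0.3)
print("closed-form consumption  :", c_star)
print("labor recovered by search:", labor_from_consumption(k=10.0, c=c_star))
```

Inside a simulation-based parameterized expectations loop this intratemporal condition is evaluated a very large number of times, which is where a closed form can translate into the reported factor-of-ten saving.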

Journal ArticleDOI
TL;DR: In this paper, the performance of a grid-based Euler-equation method in the given context was studied. And they found that such a method converges to an interior solution in a wide range of parameter values, not only in the "test" model with the closed-form solution but also in more general settings, including those with uncertainty.
Abstract: The standard neoclassical growth model with quasi-geometric discounting is shown elsewhere (Krusell, P. and Smith, A., CEPR Discussion Paper No. 2651, 2000) to have multiple solutions. As a result, value-iterative methods fail to converge. The set of equilibria is however reduced if we restrict our attention to the interior (satisfying the Euler equation) solution. We study the performance of a grid-based Euler-equation method in the given context. We find that such a method converges to an interior solution in a wide range of parameter values, not only in the "test" model with the closed-form solution but also in more general settings, including those with uncertainty.

Posted Content
TL;DR: In this article, the authors examine the choice between internal or external provision of information technology services for US credit unions and find that the likelihood that a credit union outsources its system is increasing in the number of other credit unions in the same geographic market who outsource.
Abstract: We examine the choice between internal or external provision of information technology ("IT") services for US credit unions. Credit unions may provide their own systems for tracking loans and deposit accounts or they may choose to outsource these systems from external providers. Empirically, the likelihood that a credit union outsources its system is increasing in the number of other credit unions in the same geographic market who outsource. This empirical regularity may be due to common characteristics of credit unions in close proximity that make them more or less likely to outsource. It may also be due to complementarities whereby one credit union's decision to outsource affects the relative costs associated with outsourcing for the other credit unions in the market. Anecdotal evidence suggests that the credit unions communicate with one another at the local level due to their non-profit status and lack of significant competitive overlap. This level of communication may give rise to strategic complementarities present in models of social interactions as well as network effects. In this study, we estimate a structural model in which credit unions outsource if doing so achieves a cost savings relative to maintaining their own systems. The decision to outsource is modeled as a game in which the simultaneous outsourcing decisions of the other credit unions in the market are contained as an argument in the relative costs associated with outsourcing. Ours is an example of a game with strategic complementarities in the sense that a given agent's payoff is increasing in the number of other players who take the same action. The problems arising from multiple equilibria, which are endemic to these games, have received significantly less attention, in large part because interest in the economics of social interactions and network externalities (two leading examples involving complementarities among agents' actions) has been a relatively recent phenomenon. To estimate the extent of complementarities in credit union outsourcing decisions, we adapt the intuition in recent work by Ciliberto and Tamer (2004) and propose a method for estimating payoff parameters that places no restrictions on which outcome obtains when the model is consistent with multiple equilibria. Unlike previous work, however, we are able to overcome the curse of dimensionality that makes estimation of the model impossible for markets with more than five or so firms. We present an algorithm to find the fixed points of the best reply correspondence using known properties of the set of pure strategy Nash equilibria. Our model is also powerful enough to assess coordination failures in different sized markets. Our main empirical finding is that the probability that the observed outcome is Pareto dominated by another outcome, when the observed outcome is consistent with multiple PSNE, is U-shaped, reaching its minimum at five credit unions.

Journal ArticleDOI
TL;DR: This article developed a model of currency crises that helps explain the evidence that real depreciations are perceived to be very costly ("fear of floating") and tried to understand the reasons behind this fear.
Abstract: Currency crises are usually associated with large real depreciations. In some countries real depreciations are perceived to be very costly ("fear of floating"). In this paper we try to understand the reasons behind this fear. We first look at episodes of currency crises in the '90s and establish that countries entering a crisis with high levels of foreign debt tend to experience large real exchange rate overshooting (devaluation beyond the long-run equilibrium level) and large output contractions. We develop a model of currency crises that helps explain this evidence. The key element of the model is the presence of a margin constraint on the domestic country. Real devaluations, by reducing the value of domestic assets relative to international liabilities, make countries with high foreign debt more likely to hit the constraint. When countries hit the constraint they are forced to sell domestic assets, and this causes a further devaluation of the currency (overshooting) and a reduction of their stock prices (overreaction). This fire sale can have a significant negative wealth effect. The model highlights a key tradeoff between fixed and flexible regimes: a fixed exchange rate regime can, by avoiding exchange rate overshooting, mitigate the negative wealth effect, but at the cost of additional distortions and output drops in the short run. There are plausible parameter values under which fixed exchange rates dominate flexible ones.

Journal ArticleDOI
TL;DR: In this article, a constrained bivariate switching model was developed to explore empirically the behavior of wage and price Phillips-curves for high and low-inflation regimes, respectively.
Abstract: We develop a constrained bivariate switching model to explore empirically the behavior of wage and price Phillips-curves for high- and low-inflation regimes. Using this switching regression technique with a structural simultaneous equations model of Phillips curves, we identify significant lower floors for wage and price inflation. We interpret these lower floors as the relevant downward rigidity for wages and prices. Such floors imply that the adverse real-wage adjustment mechanism that can be identified in the high-inflation regime may disappear in the low-inflation regime, where money-wage inflation and price inflation, and thus real-wage movements, may become rigid. Consequently, the economy may be stabilized then, but trapped in a long period of stagnation in such a low-inflation situation. Such properties of kinked wage and price Phillips-curves are thus important and could also be of help to break another important destabilizing feedback channel, the Fisher debt deflation mechanism, according to which economies, in which highly indebted firms are unable to prevent price deflation, will experience severe crisis or even economic breakdown if the resulting deflationary spiral cannot be stopped.

Journal ArticleDOI
TL;DR: This work develops a model that uses not only individual learning but also the “social learning” that operates through evolutionary selection; with it, the efficient and fair outcome emerges relatively quickly in a repeated Battle of the Sexes game.
Abstract: We use adaptive models to understand the dynamics that lead to efficient and fair outcomes in a repeated Battle of the Sexes game. Human subjects appear to easily recognize the possibility of a coordinated alternation of actions as a means of generating an efficient and fair outcome. Yet such typical learning models as Fictitious Play and Reinforcement Learning have found it hard to replicate this particular result. We develop a model that not only uses individual learning but also the "social learning" that operates through evolutionary selection. We find that the efficient and fair outcome emerges relatively quickly in our model.
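For context, a bare-bones Roth-Erev style reinforcement learner in the repeated Battle of the Sexes looks as follows; the payoffs and propensity-updating rule are illustrative assumptions. As the abstract notes, this kind of purely individual learning rarely discovers the alternating, efficient-and-fair pattern, which is what motivates adding the evolutionary "social learning" component.

```python
import numpy as np

# Battle of the Sexes: both players want to coordinate, but the row player
# prefers joint action 0 and the column player prefers joint action 1.
payoff_row = np.array([[2, 0], [0, 1]])
payoff_col = np.array([[1, 0], [0, 2]])

def play(periods=5000, seed=0):
    """Roth-Erev style reinforcement: each realized payoff is added to the
    propensity of the action just taken; actions are chosen with probability
    proportional to current propensities."""
    rng = np.random.default_rng(seed)
    prop_row, prop_col = np.ones(2), np.ones(2)
    history = []
    for _ in range(periods):
        a = rng.choice(2, p=prop_row / prop_row.sum())
        b = rng.choice(2, p=prop_col / prop_col.sum())
        prop_row[a] += payoff_row[a, b]
        prop_col[b] += payoff_col[a, b]
        history.append((a, b))
    return history

h = play()
coord_rate = np.mean([a == b for a, b in h[-1000:]])
print("coordination rate in last 1000 periods:", coord_rate)
```

Individual learners of this type typically lock into one of the two pure coordination outcomes, which favors one player, or miscoordinate, rather than alternating; the paper's evolutionary selection across strategies is what lets the alternating outcome emerge quickly.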

Posted Content
TL;DR: In this article, the authors consider the quadratic optimal control problem with regime shifts and forward-looking agents and compute the optimal (non-linear) monetary policy in a small open economy subject to (symmetric or asymmetric) risks of change in some of its key parameters such as inflation inertia, degree of exchange rate pass-through, elasticity of aggregate demand to interest rate, etc.
Abstract: In this paper we consider the quadratic optimal control problem with regime shifts and forward-looking agents. This extends the results of Zampolli (2003), who considered models without forward-looking expectations. Two algorithms are presented: the first computes the solution of a rational expectations model with random parameters or regime shifts; the second computes the time-consistent policy and the resulting Nash-Stackelberg equilibrium. The formulation of the problem is of general form and allows for model uncertainty and the incorporation of the policymaker's judgement. We apply these methods to compute the optimal (non-linear) monetary policy in a small open economy subject to (symmetric or asymmetric) risks of change in some of its key parameters, such as inflation inertia, the degree of exchange rate pass-through, and the elasticity of aggregate demand to the interest rate. We normally find that the time-consistent response to risk is more cautious. Furthermore, the optimal response is in some cases non-monotonic as a function of uncertainty. We also simulate the model under the assumption that the policymaker and the private sector hold the same beliefs over the probabilities of structural change, and under different beliefs (as well as different assumptions about the knowledge of each other's reaction function).