Journal ArticleDOI

25 years of European merger control

TL;DR: This article studies the determinants of common European merger policy over its first 25 years, from 1990 to 2014, using a novel dataset at the level of relevant antitrust markets that contains all relevant merger cases notified to the European Commission.
About: This article is published in the International Journal of Industrial Organization. The article was published on 2021-05-01 and is currently open access. It has received 4 citations to date. The article focuses on the topics: Merger control & Dominance (economics).

Summary (6 min read)

1 Introduction

  • Competition policy, that is, the design and enforcement of competition rules, is a cornerstone of the European Union (EU)’s program to enhance the European single market and foster growth.
  • The three overturned prohibitions by the Court of First Instance at the beginning of the 2000s marked the peak of this process.
  • Thus, instead of only looking at the determinants of a merger decision in the aggregate, the authors also investigate the factors that caused competitive concerns in specific sub-markets and how they have changed over time.
  • The authors find that the existence of barriers to entry, the increase of concentration measures and, in particular, the share of product markets with competitive concerns are positively associated with the likelihood of an intervention.
  • After this static investigation, the authors then study the dynamics of the impact of a number of key determinants over time.

2.1 Institutional Details

  • The European Communities Merger Regulation (ECMR) was passed in 1989 and came into force in September 1990.
  • Following this second investigation phase, the EC can again unconditionally clear the merger (phase-2 clearance), clear the merger subject to commitments by the merging parties (phase-2 remedy) or prohibit the merger (phase-2 prohibition).
  • Significant changes to European merger control were introduced in 2004 through an amendment to ECMR with the aim of bringing merger control closer to economic principles: the concept of an efficiency defense was introduced, a chief economist was appointed, the timetable for remedies was improved and horizontal merger guidelines were issued.
  • After the 2004 reform, the test used by the European Commission can be most accurately described as a significant impediment of effective competition (SIEC) test, which is more closely aligned with US practice (Bergman et al., 2007; Szücs, 2012).

2.2 Previous Literature

  • Mergers are studied extensively, with a large body of both theoretical and empirical literature on questions such as firms’ incentives to merge and merger policy effectiveness.
  • Post-reform, mergers between US firms, full mergers, and cross-border mergers decrease the probability of intervention, while conglomerate mergers are more likely to be challenged.
  • While the authors use control variables measuring relative market size and market concentration, both HHI and market size are based on European-wide industry sales data rather than on the market shares of merging parties and competitors as reported in the case documents.
  • He then estimates probit models at this concerned market level for horizontal overlap markets, interacting all explanatory variables with a post-reform indicator variable.
  • Thus, Mini (2018) is the only paper that studies the determinants of merger policy interventions at the relevant product and geographic market level based on the population of European merger decisions as the authors do.

3 Data and Descriptives

  • The data contain almost the entire population of DG Comp’s merger decisions, both in the dimension of time and with regard to the scope of the decisions encompassed.
  • The data set contains information on the name and country of the merging parties (acquirer and target), the date of the notification, the date of the decision9 and the type of decision eventually taken by DG Comp (clearance, remedy, and prohibition) or whether the proposing parties withdrew the notification.
  • The dataset further contains information on the nature of mergers.
  • If, for example, the market share range indicated is [0-10] percent, the authors record a single point value within that range. Table 3 shows summary statistics for the market share related variables.
  • The variable Post-merger HHI (low) is a lower bound of the post-merger HHI: it is calculated as the square of the merging parties’ joint market share plus the sum of squared market shares of competitors, whenever information on competitors’ market shares is available.
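The lower-bound HHI described above can be sketched in a few lines (a minimal illustration with made-up shares; the function name is not taken from the paper):

```python
# Sketch of the paper's lower-bound post-merger HHI, assuming market shares
# are recorded in percentage points (shares below are invented).

def post_merger_hhi_low(joint_share, competitor_shares):
    """Lower bound of the post-merger HHI: the square of the merging
    parties' joint share plus the sum of squared competitor shares.
    Competitors with unreported shares contribute nothing, which is
    why this is only a lower bound."""
    return joint_share ** 2 + sum(s ** 2 for s in competitor_shares)

# Merging parties hold 40% jointly; two reported competitors hold 20% and 10%.
hhi = post_merger_hhi_low(40, [20, 10])
print(hhi)  # 40^2 + 20^2 + 10^2 = 2100
```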

4 Linear probability model

  • The authors explore the association between merger characteristics and the intervention decision by DG Comp within a parametric approach.
  • The authors first replicate the results of the existing literature, which explain a competition authority’s decision as a function of merger characteristics at the merger level.
  • In contrast to previous studies, the authors explicitly estimate different models in various sub-samples to assess the issue of sample selection, which could arise because some important indicators – prominently market share and concentration measures – are only observable for ca. 60% of the mergers.
  • Second, as a merger often affects many different markets, while its characteristics and effects on competition can be heterogeneous across these affected markets, the authors investigate in a second step the correlation between merger characteristics and DG Comp’s intervention decision at the market level.
  • Lastly, in order to allow for heterogeneity in the correlation between merger characteristics and intervention decisions, the authors look at the evolution of these relationships over time.

4.1 Methodology

  • The authors employ a linear probability model to estimate the relationship between merger characteristics and the intervention decisions of DG Comp.
  • The authors define the indicator variable intervention to be equal to one if DG Comp prohibited the merger, cleared the merger subject to remedies in phase-1, cleared the merger subject to remedies in phase-2, or the merging parties withdrew the merger proposal in phase-2.
  • Thus, in a second step, the authors estimate the correlation between market and merger characteristics and DG Comp’s assessment at the level of the concerned product/geographic market.
  • In each of the year-specific OLS regressions, the authors include industry fixed effects.
  • The authors also run models using the level of market shares rather than the dummy variable for high market shares.
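For the special case of a single dummy regressor, the linear probability model reduces to a difference in group means, which can be sketched as follows (data and variable names are invented for illustration; the paper's actual specifications include further controls and industry fixed effects):

```python
# Minimal sketch of a linear probability model with one dummy regressor
# (e.g. the high-market-share indicator). With a single dummy, the OLS
# slope equals the difference in intervention rates between the groups.

def lpm_dummy(y, x):
    """OLS of binary outcome y on a single dummy x plus an intercept.
    Returns (intercept, slope): intercept = mean(y | x == 0),
    slope = mean(y | x == 1) - mean(y | x == 0)."""
    y1 = [yi for yi, xi in zip(y, x) if xi == 1]
    y0 = [yi for yi, xi in zip(y, x) if xi == 0]
    p1 = sum(y1) / len(y1)
    p0 = sum(y0) / len(y0)
    return p0, p1 - p0

# intervention = 1 for prohibition / remedy / phase-2 withdrawal, else 0
intervention = [1, 0, 1, 0, 0, 1, 0, 0]
high_share   = [1, 0, 1, 0, 1, 1, 0, 0]
intercept, slope = lpm_dummy(intervention, high_share)
print(intercept, slope)  # 0.0 0.75
```

Here interventions are 75 percentage points more likely in the (made-up) high-share group.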

4.2.1 Determinants of Intervention - Merger Level

  • The authors present four specifications run at both the merger and market levels.
  • Hence, this specification includes essentially all mergers decided by DG Comp.
  • Hence, specifications 2 and 3 present the results for the same specification as 1 split into those cases without information on market shares (specification 2) and those with information on market shares (specification 3).
  • Neither merger characteristics (full mergers and joint ventures) nor the variables indicating alternative theories of harm (foreclosure concerns, vertical mergers, conglomerate mergers) significantly affect the Commission’s decisions.
  • Finally, in the sample including market share information (column 4), the indicator for a joint market share above 50% has no effect whereas the indicator pertaining to HHIs strongly and significantly increases the probability of challenge.

4.2.2 Determinants of Concern - Market Level

  • Table 6 contains the same sets of regressions at the concerned market level.
  • In general, more covariates appear to be significantly associated with competitive concerns at the market level than what is observed at the merger level.
  • While this might be a statistical artifact of the larger number of observations in these regressions, it is likely that the aggregation to the merger level hides some of the EC’s more fine-grained considerations concerning specific markets.
  • In addition, the risk of foreclosure also has a positive and significant, though smaller, effect.
  • Market size now plays a more decisive role, with national markets increasing the probability of concerns in all specifications except (2).

4.2.3 Determinants of Concern - Market Level - Split Sample over Time

  • The authors explore the heterogeneity in the correlation between merger characteristics and competitive concerns by DG Comp over time by running separate OLS regressions splitting the market-level dataset over years (regrouping notification years 1990-1994).
  • The authors consider market share and concentration to be important determinants of merger decisions, thus these are included in the analysis.
  • As discussed in the previous section, while the estimated coefficients might differ across samples, the relevant determinants of intervention or competitive concerns are the same across the different subsamples.
  • Thus, in the last six years of the data, 2008 - 2013, high concentration was not a significant determinant of competitive concerns.
  • Additional OLS regression results, as well as coefficient plots equivalent to the ones shown here, are also reported.
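The split-sample exercise can be sketched as follows: markets are grouped by notification year (pooling 1990-1994, as the authors do) and a simple high-concentration effect is re-estimated within each group. The data are invented, and the paper's actual regressions also include industry fixed effects:

```python
# Toy split-sample estimation: per year group, compute the difference in
# concern rates between high- and low-concentration markets.

def year_group(year):
    """Pool 1990-1994 into one group, keep later years separate."""
    return "1990-1994" if year <= 1994 else str(year)

def group_effects(obs):
    """obs: list of (year, high_conc, concern). Returns, per year group,
    mean(concern | high_conc == 1) - mean(concern | high_conc == 0)."""
    groups = {}
    for year, x, y in obs:
        groups.setdefault(year_group(year), []).append((x, y))
    out = {}
    for g, pairs in groups.items():
        p1 = [y for x, y in pairs if x == 1]
        p0 = [y for x, y in pairs if x == 0]
        out[g] = sum(p1) / len(p1) - sum(p0) / len(p0)
    return out

obs = [(1991, 1, 1), (1993, 0, 0), (1992, 1, 1), (1994, 0, 0),
       (2010, 1, 0), (2010, 0, 0), (2012, 1, 1), (2012, 0, 0)]
print(group_effects(obs))
```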

5 Machine Learning/Causal Forests

  • In Section 4, the authors explore the association between concentration, market shares, entry barriers, and the risk of foreclosure with the intervention decision by DG Comp parametrically.
  • Causal forests are a flexible tool to uncover heterogeneous effects, in particular when there are many covariates and potentially complex interactions between them.
  • First, this approach allows a much better modelling of the process that leads to a particular decision by taking into account the specificities of each merger.
  • While the authors should still be cautious about interpreting the coefficient estimates causally, the potential bias in the coefficient estimates should be reduced.
  • Therefore, some of their key concepts are measured by means of simple dichotomous dummy variables rather than more complex metrics.

5.1.1 Background on Heterogeneous Treatment Effects

  • This question relates to the literature on heterogeneous treatment effects, where one major problem is the fear that researchers might iteratively search for subgroups with high treatment effects and only report results for these subgroups.
  • The reported heterogeneity in treatment effects might then be purely spurious.
  • Yij is the outcome variable (binary in the present case) for market i in merger j, Wij is a binary treatment variable (i.e. their structural indicators), τ(Xij) is the effect of Wij on Yij at point Xij in covariate space, and eij is an error term that may be correlated with Wij.
  • In these instances, methods such as nearest-neighbor matching or other local methods allow for consistently estimating τ(x).
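In a standard partially linear form (assumed here, not quoted from the paper), the model reads Yij = m(Xij) + τ(Xij)·Wij + eij, and a local estimator of τ(x) compares nearby treated and control units. A minimal nearest-neighbor sketch, with invented data:

```python
# Illustrative nearest-neighbor estimator of a heterogeneous treatment
# effect tau(x): average the outcomes of the k treated units closest to the
# query point x and subtract the average over the k closest control units.

def tau_knn(x, data, k=2):
    """data: list of (x_i, w_i, y_i) with scalar covariate x_i, binary
    treatment w_i and outcome y_i. Returns the local treated-minus-control
    mean outcome around x."""
    def knn_mean(group):
        nearest = sorted(group, key=lambda d: abs(d[0] - x))[:k]
        return sum(d[2] for d in nearest) / len(nearest)
    treated = [d for d in data if d[1] == 1]
    control = [d for d in data if d[1] == 0]
    return knn_mean(treated) - knn_mean(control)

# Toy data where the treatment effect grows with the covariate x.
data = [(0.0, 0, 0.0), (0.1, 1, 0.1), (0.2, 0, 0.0), (0.3, 1, 0.3),
        (0.8, 0, 0.0), (0.9, 1, 0.9), (1.0, 0, 0.0), (1.0, 1, 1.0)]
print(tau_knn(0.1, data), tau_knn(0.95, data))  # small effect near 0, large near 1
```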

5.1.2 Estimation using Causal Forests

  • The authors use the causal forest algorithm by Athey et al. (2017) implemented in the generalized random forest (grf) package in R to investigate how the correlation between the treatment variables and DG Comp’s intervention decision varies with merger characteristics.
  • Causal forests are based on the random forest methodology by Breiman (2001).
  • The outcome Yij for observation ij is then predicted by identifying the leaf containing observation ij based on its characteristics Xij and setting the prediction to the mean outcome within that leaf.
  • Minimizing the expected mean squared error of predicted treatment effects (rather than the infeasible mean squared error), is shown to be equivalent to maximizing the variance of the predicted treatment effects across leaves with a penalty for within-leaf variance (variance of treatment and control group mean outcomes within leaves).
  • These are the same four indicator variables as those used in the previous regressions: high post-merger concentration, joint market share above 50%, barriers to entry, and risk of foreclosure.
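The splitting idea behind causal trees can be illustrated in miniature: among candidate split points on one covariate, pick the cut that maximizes the spread of the estimated treatment effects across the two child leaves. This is a bare-bones sketch with invented data, not the grf implementation (which additionally uses honesty, subsampling, and a within-leaf variance penalty):

```python
# Toy causal-tree split: choose the cut point that maximises the variance
# of leaf-level treatment-effect estimates.

def leaf_effect(obs):
    """Difference in mean outcomes between treated and control units."""
    t = [y for x, w, y in obs if w == 1]
    c = [y for x, w, y in obs if w == 0]
    return sum(t) / len(t) - sum(c) / len(c)

def best_split(obs, candidates):
    """Pick the candidate cut maximising the spread of leaf effects."""
    def spread(cut):
        left = [o for o in obs if o[0] <= cut]
        right = [o for o in obs if o[0] > cut]
        effects = [leaf_effect(left), leaf_effect(right)]
        m = sum(effects) / 2
        return sum((e - m) ** 2 for e in effects)
    return max(candidates, key=spread)

# The treatment effect is 0 for x <= 0.5 and 1 for x > 0.5, so the
# heterogeneity-maximising split should be found at 0.5.
obs = [(0.1, 0, 0.0), (0.1, 1, 0.0), (0.4, 0, 0.0), (0.4, 1, 0.0),
       (0.6, 0, 0.0), (0.6, 1, 1.0), (0.9, 0, 0.0), (0.9, 1, 1.0)]
print(best_split(obs, [0.25, 0.5, 0.75]))  # 0.5
```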

5.2 Estimation Results

  • The authors present the results of the correlation analysis between the four main variables of interest and the competitive concerns by DG Comp using causal forests.
  • The authors set all the other covariates included in X to their mean or median sample value, respectively.
  • The authors then predict the treatment effects at the data points of this prediction dataset using the estimated causal forest and plot the treatment effect along with the point-wise 95% confidence intervals.
  • The estimated conditional average treatment effect did not change much using these different node sizes.
  • 20Rather than taking the mean merger over the entire sample, the authors also created a prediction dataset based on the mean merger for which they have information on the market shares and concentration variables.
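The construction of such a prediction dataset can be sketched as follows: fix every covariate at its sample mean (or median) and vary only the notification year, then feed these synthetic points to the fitted forest. Covariate names below are illustrative only:

```python
# Build a "mean merger" prediction grid: one synthetic observation per
# year, with all other covariates held at a chosen sample statistic.
from statistics import mean, median

def prediction_grid(rows, years, stat=mean):
    """rows: list of dicts of covariates (including 'year'). Returns one
    synthetic row per requested year, all other covariates fixed at stat."""
    keys = [k for k in rows[0] if k != "year"]
    base = {k: stat([r[k] for r in rows]) for k in keys}
    return [{**base, "year": y} for y in years]

rows = [{"year": 1995, "entry_barriers": 1, "hhi_high": 0},
        {"year": 2003, "entry_barriers": 0, "hhi_high": 1},
        {"year": 2010, "entry_barriers": 1, "hhi_high": 1}]
grid = prediction_grid(rows, [1990, 2000, 2010])
print(grid[0])  # covariates at their means, year set to 1990
```

Swapping `stat=median` reproduces the median-merger variant used for the light-blue series in the figures.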

5.2.1 Treatment - High Concentration

  • Figure 6 shows the predicted correlation between the high concentration indicator variable and competitive concerns of DG Comp over time, setting all other covariates to their mean (dark blue) or median (light blue) values, respectively.
  • The conditional average treatment effect predicted by the causal forest is 0.14, which is slightly higher than the coefficient on the high concentration indicator in specification 4 in Table 6.
  • This indicates that, once the authors use a richer model that better describes the process behind DG Comp’s decisions, the impact of this structural indicator is less volatile and much more consistent over time.
  • Nonetheless, the importance of concentration appears to follow a downward trend over the years.
  • For the predicted correlation setting all other covariates to median rather than mean values, the drop in the correlation in 2001/2002 is even more pronounced, and the correlation becomes insignificant as of 2001.

5.2.2 Treatment - Joint Market Share above 50%

  • Figure 7 shows the predicted correlation between the indicator variable for merging parties’ market shares above 50% and competitive concerns of DG Comp over time, as before setting all other covariates to their mean (dark blue) or median (light blue) values, respectively.
  • While the predicted correlation is positive and significant up until 2010 (at least when setting all other covariates to their mean), market shares seem to have become a less important intervention criterion since the early 2000s and even become insignificant as of 2011.
  • Notice again that, as for concentration, the correlations estimated by means of the causal forest seem to be much less volatile and more consistent over time than those estimated based on the simple linear probability model.
  • Putting the developments of the correlation between concentration and market share measures with the intervention decision by DG Comp together highlights the shift away from evaluating mergers based on structural indicators towards a more economics based approach.
  • As the results do not differ by much, the authors only report the predictions based on the mean merger over the entire sample.

5.2.3 Treatment - Barriers to Entry

  • Figure 8 shows the predicted correlation between the presence of entry barriers in the concerned market and competitive concerns of DG Comp over time, again setting all other covariates to their mean (dark blue) or median (light blue) values, respectively.
  • The conditional average treatment effect predicted by the causal forest is 0.46, which is higher than the coefficient on the entry barrier indicator in any specification in Table 6.
  • Furthermore, there is considerable heterogeneity in the predicted correlation between the existence of entry barriers and competitive concerns over time.
  • While the predicted correlation with concerns was essentially zero up to 1997, it becomes positive, significant, and of increasing importance since 1998.
  • This development is also in line with the shift of DG Comp’s merger policy toward a more economics based approach.

5.2.4 Treatment - Risk of Foreclosure

  • Lastly, Figure 9 shows the predicted correlation between the indicator variable for risk of foreclosure in the concerned market and competitive concerns of DG Comp over time, setting all other covariates to their mean (dark blue) or median (light blue) values, respectively.
  • The conditional average treatment effect predicted by the causal forest is 0.51, which is more than double the coefficient on the foreclosure indicator in the specifications in Table 6.
  • The confidence intervals for the predicted correlation are very wide, especially in the early years with fewer merger cases, so no clear pattern emerges in that period.
  • Overall, however, there is a positive and mostly significant correlation that, if anything, seems to become more important over time.

6 Conclusion

  • The authors study the time-dynamics of the EC’s merger decision procedure over the first 25 years of European merger control using a new dataset containing all merger cases with an official decision documented by DG Comp (more than 5000 individual decisions).
  • The authors find that the existence of barriers to entry, the increase of concentration measures and, in particular, the share of product markets with competitive concerns increase the likelihood of an intervention.
  • In order to obtain a more fine-grained picture of the decision determinants, the authors extend their analysis to the specific product and geographic markets concerned by a merger.
  • The parametric estimations are quite volatile and do not allow for uncovering clear patterns over time.
  • In particular, the authors find that concentration as well as the merging parties’ market shares have become less important decision determinants over time and are even insignificant in most recent years.


Citations
Journal ArticleDOI
TL;DR: This article analyzes the effects of mergers and acquisitions on the markups of non-merging rival firms across a broad set of industries, showing that rivals significantly increase their markups after mergers relative to a matched control group.
Abstract: This paper analyzes the effects of mergers and acquisitions on the markups of non-merging rival firms across a broad set of industries. We exploit expert market definitions from the European Commission's merger decisions to identify relevant competitors in narrowly defined product markets. Applying recent methodological advances in the estimation of production functions, we estimate markups as a measure of market power. Our results indicate that rivals significantly increase their markups after mergers relative to a matched control group. Consistent with increases in market power, the effects are particularly pronounced in markets with few players, high initial markups and concentration. We also provide evidence that merger rivals reduce their employment, sales and investment, while their profits increase around the time of a merger.

22 citations

Journal ArticleDOI
TL;DR: Using the highly flexible, non-parametric random forest algorithm to predict DG Comp’s assessment of competitive concerns in markets affected by a merger, it is found that the predictive performance of the random forests is much better than the performance of simple linear models.
Abstract: I study the predictability of the EC’s merger decision procedure before and after the 2004 merger policy reform based on a dataset covering all affected markets of mergers with an official decision documented by DG Comp between 1990 and 2014. Using the highly flexible, non-parametric random forest algorithm to predict DG Comp’s assessment of competitive concerns in markets affected by a merger, I find that the predictive performance of the random forests is much better than the performance of simple linear models. In particular, the random forests do much better in predicting the rare event of competitive concerns. Secondly, post-reform, DG Comp seems to base its assessment on a more complex interaction of merger and market characteristics than pre-reform. The highly flexible random forest algorithm is able to detect these potentially complex interactions and, therefore, still allows for high prediction precision.

2 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigated whether and when differences in the financial and fiscal regulatory systems between the countries of acquirers and targets that engage in cross-border M&As influence rivals’ corporate responses.
Book ChapterDOI
01 Jan 2022
References
Journal ArticleDOI
01 Oct 2001
TL;DR: Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 1996, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation, and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.

79,257 citations

Journal ArticleDOI
TL;DR: The Elements of Statistical Learning: Data Mining, Inference, and Prediction as discussed by the authors is a popular book for data mining and machine learning, focusing on data mining, inference, and prediction.
Abstract: (2004). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Journal of the American Statistical Association: Vol. 99, No. 466, pp. 567-567.

10,549 citations

Journal ArticleDOI
TL;DR: A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented in this paper, where the objective is to specify the benefits of randomization in estimating causal effects of treatments.
Abstract: A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented. The objective is to specify the benefits of randomization in estimating causal effects of treatments. The basic conclusion is that randomization should be employed whenever possible but that the use of carefully controlled nonrandomized data to estimate causal effects is a reasonable and necessary procedure in many cases. Recent psychological and educational literature has included extensive criticism of the use of nonrandomized studies to estimate causal effects of treatments (e.g., Campbell & Erlebacher, 1970). The implication in much of this literature is that only properly randomized experiments can lead to useful estimates of causal effects. If taken as applying to all fields of study, this position is untenable. Since the extensive use of randomized experiments is limited to the last half century, and in fact is not used in much scientific investigation today, one is led to the conclusion that most scientific "truths" have been established without using randomized experiments. In addition, most of us successfully determine the causal effects of many of our everyday actions, even interpersonal behaviors, without the benefit of randomization.

8,377 citations

Journal ArticleDOI
TL;DR: This paper developed a non-parametric causal forest for estimating heterogeneous treatment effects that extends Breiman's widely used random forest algorithm, and showed that causal forests are pointwise consistent for the true treatment effect and have an asymptotically Gaussian and centered sampling distribution.
Abstract: Many scientific and engineering challenges—ranging from personalized medicine to customized marketing recommendations—require an understanding of treatment effect heterogeneity. In this paper, we develop a non-parametric causal forest for estimating heterogeneous treatment effects that extends Breiman's widely used random forest algorithm. In the potential outcomes framework with unconfoundedness, we show that causal forests are pointwise consistent for the true treatment effect, and have an asymptotically Gaussian and centered sampling distribution. We also discuss a practical method for constructing asymptotic confidence intervals for the true treatment effect that are centered at the causal forest estimates. Our theoretical results rely on a generic Gaussian theory for a large family of random forest algorithms. To our knowledge, this is the first set of results that allows any type of random forest, including classification and regression forests, to be used for provably valid statistical inference.

1,156 citations

Journal ArticleDOI
TL;DR: This paper provides a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects, and proposes an “honest” approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation.
Abstract: In this paper we propose methods for estimating heterogeneity in causal effects in experimental and observational studies and for conducting hypothesis tests about the magnitude of differences in treatment effects across subsets of the population. We provide a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects. The approach enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size, and without “sparsity” assumptions. We propose an “honest” approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation. Our approach builds on regression tree methods, modified to optimize for goodness of fit in treatment effects and to account for honest estimation. Our model selection criterion anticipates that bias will be eliminated by honest estimation and also accounts for the effect of making additional splits on the variance of treatment effect estimates within each subpopulation. We address the challenge that the “ground truth” for a causal effect is not observed for any individual unit, so that standard approaches to cross-validation must be modified. Through a simulation study, we show that for our preferred method honest estimation results in nominal coverage for 90% confidence intervals, whereas coverage ranges between 74% and 84% for nonhonest approaches. Honest estimation requires estimating the model with a smaller sample size; the cost in terms of mean squared error of treatment effects for our preferred method ranges between 7–22%.

913 citations

Frequently Asked Questions (2)
Q1. What are the contributions in this paper?

The authors study the evolution of the EC’s merger decision procedure over the first 25 years of European competition policy. Using non-parametric machine learning techniques, the authors find that the importance of market shares and concentration measures has declined while the importance of barriers to entry and the risk of foreclosure has increased in the EC’s merger assessment following the 2004 merger policy reform.

In order to obtain a more fine-grained picture of the decision determinants, the authors extend their analysis to the specific product and geographic markets concerned by a merger. The authors find that more determinants significantly affect the Commission’s competitive concerns at the market level than seen at the merger level. In particular, the authors find that concentration as well as the merging parties’ market shares have become less important decision determinants over time and are even insignificant in most recent years.