
Taylorizing business school research: On the ‘one best way’ performative effects of journal ranking lists

John Mingers and Hugh Willmott
- 01 Aug 2013 -
- Human Relations, Vol. 66, Iss. 8, pp. 1051-1073
Abstract
The article critically examines how work is shaped by performance measures. Its specific focus is upon the use of journal lists, rather than the detail of their construction, in conditioning the research activity of academics. It is argued that an effect of the ‘one size fits all’ logic of journal lists is to endorse and cultivate a research monoculture in which particular criteria, favoured by a given list, assume the status of a universal benchmark of performance (‘research quality’). The article demonstrates, with reference to the Association of Business Schools (ABS) ‘Journal Guide’, how use of a journal list can come to dominate and define the focus and trajectory of a field of research, with detrimental consequences for the development of scholarship.


Kent Academic Repository
Citation for published version
Mingers, John and Willmott, Hugh (2013) Taylorizing Business School Research: On the 'One
Best Way' Performative Effects of Journal Ranking Lists. Human Relations, 66 (8). pp. 1051-1073.
ISSN 0018-7267.
DOI
https://doi.org/10.1177/0018726712467048
Link to record in KAR
http://kar.kent.ac.uk/32785/
Document Version
Author's Accepted Manuscript

Taylorizing business school research: On the ‘one best way’
performative effects of journal ranking lists
Abstract
The paper critically examines how work is shaped by performance measures. Its specific
focus is upon the use of journal lists, rather than the detail of their construction, in
conditioning the research activity of academics. It is argued that an effect of the ‘one size
fits all’ logic of journal lists is to endorse and cultivate a research monoculture in which
particular criteria, favoured by a given list, assume the status of a universal benchmark of
performance (‘research quality’). The paper demonstrates, with reference to the
Association of Business Schools (ABS) ‘Journal Guide’, how use of a journal list can come to
dominate and define the focus and trajectory of a field of research, with detrimental
consequences for the development of scholarship.
Keywords
performance measurement, work culture, journal lists, Taylorization, knowledge
development, research evaluation, performativity

Introduction
The creation of ‘journal quality lists’ in the field of business and management in the UK has
coincided with the growing importance and formalization of national research evaluation
exercises (Geary, Marriott and Rowlinson, 2004; Keenoy, 2005; see also Gendron, 2008).
The compilers and advocates of these lists say that their intention is to provide an objective
measure of the comparative esteem of journals using a standardized quality metric, thereby
overcoming information asymmetries associated with the use of ‘insider’ knowledge (e.g.
Rowlinson et al, 2011). Their use, it is further suggested, can correct the biases ascribed to
evaluators of research quality (see, for example, Taylor, 2011). However, when the lists are
used as a standard to calculate the equivalent of an exchange value of outputs (e.g. journal
articles) and authors on the academic hiring, promotion and transfer markets, such
justifications largely disregard the extent to which lists contribute to, and have further
potential to promote, a commodification of academic labour and a narrowing of scholarship
(Bryson, 2004; Willmott, 1995; Harley and Lee, 1997; Van Fleet et al, 2011).
The pressures upon business school academics are particularly intense where these schools
have become amongst the largest of University departments, with corresponding implications
for institutional funding and reputation. The significance and influence of journal lists
increases as competition between institutions for resources, symbolic as well as material,
intensifies. As journal quality lists (e.g. those created by the Financial Times and the
Association of Business Schools) become influential for processes of recruitment, promotion
and the selection of staff/outputs for submission to evaluation exercises, they come to shape
the nature, structure and conditions of academic work (Espeland and Sauder, 2007; Sauder
and Espeland, 2009). Such performative effects are, of course, greatest when they weaken or
marginalize alternative criteria and processes of evaluation. Examining the use and effects of

journal lists is therefore important not simply for better understanding, or refining, how such
metrics are devised (see Truex et al, 2011 for a critical review) but also, and more
significantly, for appreciating and questioning their constitutive role in defining and policing
the focus and direction of research activity.
Regardless of the particular methodology or algorithm used to compute journal lists (see
Morris, Harvey and Kelly, 2009 for a typology), their design shoehorns horizontal diversity
of research and scholarship into a single, seemingly authoritative vertical order. By valorising
the ‘research agenda’ institutionalized in the topics, methods and perspectives favoured by
‘A’ category journals, the use of journal lists to assess the quality of research sends out a
strong ‘market signal’: it privileges the research agenda pursued in those journals; and,
conversely, it devalues research published elsewhere, irrespective of its content and
contribution. When an article’s place of publication, as indicated by its ranking in a journal
list, becomes more significant or valued than its scholarly content, faculty find themselves
increasingly in receipt of the following kind of ‘advice’ from Deans, research directors and
senior colleagues: “If you wish to be counted as ‘research active’ and so be submitted to the
XXX evaluation exercise, or to improve your promotion prospects, your work should be
designed, shaped and honed to emulate the genre of research published in journals most
highly ranked in the prescribed journal list. Failure to demonstrate this competence risks
staying on probation / not being counted as research-active / not being considered for
promotion / being moved to a teaching-only contract”. Whatever its intended purpose, the
journal list has become a potent instrument of managerial decision-making whose use, we
will argue, has the performative effect of homogenizing, in addition to commodifying and
individualizing, research activity.
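To make this shoehorning concrete, the following minimal sketch is purely illustrative: the journals, metrics and weights are invented for the example and do not represent the ABS Guide or the method of any actual list. It shows how a weighted composite collapses incommensurable strengths into a single vertical order:

```python
# Hypothetical illustration: collapsing heterogeneous journal profiles into one rank.
# The journals, metrics and weights are invented; no actual list is computed this way.

# Each journal is strong on a different, incommensurable dimension.
journals = {
    "Journal A": {"citations": 0.9, "rigour": 0.5, "practitioner_relevance": 0.2},
    "Journal B": {"citations": 0.4, "rigour": 0.9, "practitioner_relevance": 0.5},
    "Journal C": {"citations": 0.3, "rigour": 0.4, "practitioner_relevance": 0.9},
}

# One fixed set of weights, the 'one best way', decides what counts as quality.
weights = {"citations": 0.6, "rigour": 0.3, "practitioner_relevance": 0.1}

def composite_score(profile):
    """Weighted sum: horizontal diversity in, one vertical number out."""
    return sum(weights[metric] * value for metric, value in profile.items())

# The resulting total order erases the dimension on which each journal excels.
for name, profile in sorted(journals.items(),
                            key=lambda item: composite_score(item[1]),
                            reverse=True):
    print(f"{name}: {composite_score(profile):.2f}")
```

Changing the weights reorders the list entirely, which is the point: any single ordering encodes a contestable judgement about what counts as ‘quality’.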

This concern complements a number of other objections levelled against journal lists which
range from issues about the technicalities of their construction, through criticisms of their
neglect or devaluation of other kinds of publication (e.g. monographs), to their exclusion of
research that matters and their obsessive or fetishised use (Özbilgin, 2009: 113; see also
Harzing and Metz, 2012; Worthington and Hodgson, 2005; Keenoy, 2005; Clarke, Knights
and Jarvis, 2012; Knights, Clarke and Jarvis, 2011; Willmott, 2011). Our focus here, in
contrast, is upon the performative effects of a “one size fits all” logic of research evaluation
(see also Nkomo, 2009). To illustrate these effects, our analysis examines in some detail the
development, justification and application of the Association of Business Schools (ABS)
Academic Journal Quality Guide. Our example is taken from the UK context where the use of
journal lists is probably most widely and deeply embedded. But, of course, their use has been
widespread, and seems set to become more influential. Downloads of the ‘Guide’ from the
ABS website are reported to have been, in one year (2010), ‘90,000 from nearly 100
countries’ (Rowlinson et al, 2011: 443).
We begin by considering the squeeze on heterogeneity by the “one size fits all” philosophy
enshrined in the compilation of journal lists, a restrictive process that is increasingly
reinforced by reliance upon citation counts and impact factors. To underscore the
homogenizing influence of journal lists, we draw a parallel between the “one best way”
design of industrial production advocated by Frederick Taylor (see Kanigel, 1995) and the
“one size fits all” design philosophy enshrined in journal lists. Detailed consideration is given
to the establishment and use of the ABS ‘Quality Guide’ before we assess claims that its use
brings cultural and economic benefits.
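For concreteness, the most widely used of these metrics, the standard two-year journal impact factor, reduces a journal’s standing to a single ratio:

\[
\mathrm{JIF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
\]

where \(C_{y}(t)\) is the number of citations received in year \(y\) to items the journal published in year \(t\), and \(N_{t}\) is the number of citable items it published in year \(t\). Nothing in this ratio registers what is cited, by whom, or to what end; it is this flattening of qualitative difference into a single number that the argument below addresses.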
Measuring scholarship
