Journal ArticleDOI

Taylorizing business school research: On the ‘one best way’ performative effects of journal ranking lists

01 Aug 2013 - Human Relations (SAGE Publications) - Vol. 66, Iss. 8, pp. 1051-1073
TL;DR: In this article, the authors examine how work is shaped by performance measures, focusing on the use of journal lists, rather than the detail of their construction, in conditioning the research activity of academics.
Abstract: The article critically examines how work is shaped by performance measures. Its specific focus is upon the use of journal lists, rather than the detail of their construction, in conditioning the research activity of academics. It is argued that an effect of the ‘one size fits all’ logic of journal lists is to endorse and cultivate a research monoculture in which particular criteria, favoured by a given list, assume the status of a universal benchmark of performance (‘research quality’). The article demonstrates, with reference to the Association of Business Schools (ABS) ‘Journal Guide’, how use of a journal list can come to dominate and define the focus and trajectory of a field of research, with detrimental consequences for the development of scholarship.

Summary (3 min read)

Introduction

  • Examining the use and effects of journal lists is therefore important not simply for better understanding, or refining, how such metrics are devised (see Truex et al, 2011 for a critical review) but also, and more significantly, for appreciating and questioning their constitutive role in defining and policing the focus and direction of research activity.
  • Detailed consideration is given to the establishment and use of the ABS ‘Quality Guide’ before the authors assess claims that its use brings cultural and economic benefits.

Heterogeneity at risk

  • The image of sporting league tables, with their organization into divisions and top clubs, is frequently invoked to characterize and legitimize journal rankings.
  • The aptness of the parallel is, however, limited and misleading.
  • In competitive sports, team managers have broadly the same objective: in many ball sports, for example, the aim is to score goals without conceding.
  • At the same time, it would be naïve to deny that editors are subject to reputational and commercial pressures – intensified by the competitive use of citation scores and journal impact factors (JIFs) as performance indicators – which may induce them, inter alia, to emulate the genre of scholarship published in top-tier journals (the standard JIF calculation is sketched after this list).
  • Such publications can then be “cashed in” in the promotion and transfer markets, and in the selection process for submission of staff/outputs to research evaluation exercises.
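For orientation, since the bullets above and below lean on the metric without defining it: the standard two-year journal impact factor (a textbook definition, not one spelled out in this article) is the ratio

$$\mathrm{JIF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}$$

where $C_Y(y)$ is the number of citations received in year $Y$ by items the journal published in year $y$, and $N_y$ is the number of citable items it published in year $y$. Both editorial tactics discussed in the next section – favouring articles with “shooting star” potential and inviting authors to cite the journal's own articles – work by inflating the numerator for this two-year window.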

Marginalizing innovation

  • It would be surprising if many journal editors were able to resist all temptation, including material incentives, to game the system in order to increase their JIFs – for example, by favouring articles that are assessed to have “shooting star” potential (see above), or by inviting authors to consider including reference to articles that have been published in the journal.
  • Well-documented examples include the areas of sustainability (see Wells, 2010) and innovation studies (see Rafols et al, 2011).
  • When challenged by a letter in OR/MS Today (Ackerman and 48 others, 2009), signed by academics from around the world, the response from the Editor was that, as far as Operations Research was concerned, non-mathematical OR was not OR and therefore not publishable.
  • To the extent that a journal list broadly reflects, endorses and reproduces the hegemony of this neo-positivist tradition of scholarship, its most potent effect is to devalue and marginalize, if not exclude, heterodox forms of scholarly contribution, and thereby induce a homogenization of research activity.
  • For a majority of senior managers – from research directors, through deans, to vice-chancellors – the ranking of their university’s departments, including its business school, in national evaluation exercises is the key indicator of its status in relation to other universities.

Performance anxiety

  • Determining whether the outputs of staff are of adequate quality to be submitted to research evaluation exercises and, if so, which publications to select, presents university managers with a major challenge, especially when the reputation of the university and the business school is research-based, and so is heavily dependent upon the outcome.
  • When faced with such troublesome decisions, the availability of a journal list is a seductive decision-making aid as it purports to provide an impersonal and objective basis for assessing the quality of research published by staff, and thus offers a basis for making and justifying difficult and divisive decisions.
  • The appeal of a list is especially strong in contexts of diversity where there are multiple paradigmatic differences over what counts as quality - differences that can be as acute and delicate within business schools as they are between schools.

PLACE EXHIBIT 2 ABOUT HERE

  • Some indications of how the 2008 Panel would operate were provided in its statement of Working Methods, published in January 2006 (see Exhibit 2).
  • A lingering question remained, however, about how, practically, this undertaking could be fulfilled in the time available (a few months) by a Panel comprising fewer than twenty members.
  • It could be inferred from its Working Methods statement that the Panel would rely upon its “professional judgment” to assess the remaining outputs; and that it would do this without reading the outputs “in detail”.
  • For members of research committees, research directors and deans of business schools, a possible remedy for their performance anxieties (see above) presented itself in the form of the indicator of publication quality provided by a journal rankings list.
  • Given this application of the ABS list, and its continuing use in preparations for the 2014 evaluation exercise, it is relevant to consider further how its construction and justification have contributed to exerting influence over the range and direction of business and management research.

Managing by numbers: The ABS ‘Journal Quality Guide’

  • In the run-up to the 2008 evaluation exercise, the ABS “Journal Quality Guide” was adopted as the “de facto standard” across UK business schools (Mingers, Watson and Scaparra, 2012: 3).
  • Its relevance for this purpose was crudely signalled by Version 1 of the ABS list, in which journals were grouped using a five-point scale that directly mimicked the scale used by the Panel.
  • In place of any principled articulation or defence of the ABS list – for example, by indicating how it might contribute to promoting more innovative research and scholarship – its architects simply anticipate and openly commend its managerial use.
  • In other words, the list is commended as a handy, expedient tool for those charged with making onerous decisions about their colleagues’ careers, yet who are disinclined to prioritize time for reading and assessing the work itself, or seeking advice from subject specialists.

Performative Effects of the ABS List as a Policy Tool

  • Their analysis necessarily relies upon available, aggregated information on the outcomes of the 2008 exercise profiled as the percentage of outputs awarded a particular grade from 4 to 1 and ungraded (see Exhibit 1).
  • The implication is that academics should direct their research activity primarily to what is publishable in journals rather than through other media (e.g. monographs).
  • This shows the vast range of research carried out within business and management that does not have the “ABS stamp” of recognition.
  • There is evidence that the ABS list was used in making submissions: comparing the ABS-listed journals that were submitted with those that were not, 45% of those not submitted were rated ABS 1*, while only 4% were rated ABS 4*.

Discussion: The ABS List and the Taylorization of Business School Research

  • Rather as with Taylor’s system, any problems associated with using the list are attributed to its imperfect or misdirected application or, more recently, to “predilections and prejudices” amongst journal editors and referees that the ABS list merely “reflects” and renders more “visible” and available to “challenge”, thereby ignoring its performative effects (Morris et al, 2011: 563).
  • In order to create a metric with general applicability or universality, its particularity must be obscured – a particularity that unavoidably privileges the values of certain research traditions while it marginalizes others.
  • But a moratorium on the use of journal lists for making decisions on recruitment, promotion and research evaluation submission is more consistent with their analysis, and so is the preferred policy recommendation.



Kent Academic Repository – full text document (pdf)
Citation for published version
Mingers, John and Willmott, Hugh (2013) Taylorizing Business School Research: On the ‘One Best Way’ Performative Effects of Journal Ranking Lists. Human Relations, 66 (8), pp. 1051-1073. ISSN 0018-7267.
DOI: https://doi.org/10.1177/0018726712467048
Link to record in KAR: http://kar.kent.ac.uk/32785/
Document Version: Author's Accepted Manuscript

Taylorizing business school research: On the ‘one best way’
performative effects of journal ranking lists
Abstract
The paper critically examines how work is shaped by performance measures. Its specific
focus is upon the use of journal lists, rather than the detail of their construction, in
conditioning the research activity of academics. It is argued that an effect of the ‘one size fits
all’ logic of journal lists is to endorse and cultivate a research monoculture in which
particular criteria, favoured by a given list, assume the status of a universal benchmark of
performance (‘research quality’). The paper demonstrates, with reference to the
Association of Business Schools (ABS) ‘Journal Guide’, how use of a journal list can come to
dominate and define the focus and trajectory of a field of research, with detrimental
consequences for the development of scholarship.
Keywords
performance measurement, work culture, journal lists, Taylorization, knowledge
development, research evaluation, performativity

Introduction
The creation of ‘journal quality lists’ in the field of business and management in the UK has
coincided with the growing importance and formalization of national research evaluation
exercises (Geary, Marriott and Rowlinson, 2004; Keenoy, 2005; see also Gendron, 2008).
The compilers and advocates of these lists say that their intention is to provide an objective
measure of the comparative esteem of journals using a standardized quality metric, thereby
overcoming information asymmetries associated with the use of ‘insider’ knowledge (e.g.
Rowlinson et al, 2011). Their use, it is further suggested, can correct the biases ascribed to
evaluators of research quality (see, for example, Taylor, 2011). However, when the lists are
used as a standard to calculate the equivalent of an exchange value of outputs (e.g. journal
articles) and authors on the academic hiring, promotion and transfer markets, such
justifications largely disregard the extent to which lists contribute to, and have further
potential to promote, a commodification of academic labour and a narrowing of scholarship
(Bryson, 2004; Willmott, 1995; Harley and Lee, 1997; Van Fleet et al, 2011).
The pressures upon business school academics are particularly intense where these schools
have become amongst the largest of University departments, with corresponding implications
for institutional funding and reputation. The significance and influence of journal lists
increase as competition between institutions for resources, symbolic as well as material,
intensifies. As journal quality lists (e.g. those created by the Financial Times and the
Association of Business Schools) become influential for processes of recruitment, promotion
and the selection of staff/outputs for submission to evaluation exercises, they come to shape
the nature, structure and conditions of academic work (Espeland and Sauder, 2007; Sauder
and Espeland, 2009). Such performative effects are, of course, greatest when they weaken or
marginalize alternative criteria and processes of evaluation. Examining the use and effects of

journal lists is therefore important not simply for better understanding, or refining, how such
metrics are devised (see Truex et al, 2011 for a critical review) but also, and more
significantly, for appreciating and questioning their constitutive role in defining and policing
the focus and direction of research activity.
Regardless of the particular methodology or algorithm used to compute journal lists (see
Morris, Harvey and Kelly, 2009 for a typology), their design shoehorns horizontal diversity
of research and scholarship into a single, seemingly authoritative vertical order. By valorising
the ‘research agenda’ institutionalized in the topics, methods and perspectives favoured by
‘A’ category journals, the use of journal lists to assess the quality of research sends out a
strong ‘market signal’: it privileges the research agenda pursued in those journals; and,
conversely, it devalues research published elsewhere, irrespective of its content and
contribution. When an article’s place of publication, as indicated by its ranking in a journal
list, becomes more significant or valued than its scholarly content, faculty find themselves
increasingly in receipt of the following kind of ‘advice’ from Deans, research directors and
senior colleagues: “If you wish to be counted as ‘research active’ and so be submitted to the
XXX evaluation exercise, or to improve your promotion prospects, your work should be
designed, shaped and honed to emulate the genre of research published in journals most
highly ranked in the prescribed journal list. Failure to demonstrate this competence risks
staying on probation / not being counted as research-active / not being considered for
promotion / being moved to a teaching-only contract”. Whatever its intended purpose, the
journal list has become a potent instrument of managerial decision-making whose use, we
will argue, has the performative effect of homogenizing, in addition to commodifying and
individualizing, research activity.

This concern complements a number of other objections levelled against journal lists which
range from issues about the technicalities of their construction, through criticisms of their
neglect or devaluation of other kinds of publication (e.g. monographs), to their exclusion of
research that matters and their obsessive or fetishised use (Özbilgin, 2009: 113; see also
Harzing and Metz, 2012; Worthington and Hodgson, 2005; Keenoy, 2005; Clarke, Knights
and Jarvis, 2012; Knights, Clarke and Jarvis, 2011; Willmott, 2011). Our focus here, in
contrast, is upon the performative effects of a “one size fits all” logic of research evaluation
(see also Nkomo, 2009). To illustrate these effects, our analysis examines in some detail the
development, justification and application of the Association of Business Schools (ABS)
Academic Journal Quality Guide. Our example is taken from the UK context where the use of
journal lists is probably most widely and deeply embedded. But, of course, their use has been
widespread, and seems set to become more influential. Downloads of the ‘Guide’ from the
ABS website are reported to have been, in one year (2010), ‘90,000 from nearly 100
countries’ (Rowlinson et al, 2011: 443).
We begin by considering the squeeze on heterogeneity by the “one size fits all” philosophy
enshrined in the compilation of journal lists – a restrictive process that is increasingly
reinforced by reliance upon citation counts and impact factors. To underscore the
homogenizing influence of journal lists, we draw a parallel between the “one best way”
design of industrial production advocated by Frederick Taylor (see Kanigel, 1995) and the
“one size fits all” design philosophy enshrined in journal lists. Detailed consideration is given
to the establishment and use of the ABS ‘Quality Guide’ before we assess claims that its use
brings cultural and economic benefits.
Measuring scholarship

Citations
Journal ArticleDOI
TL;DR: A longitudinal comparison of eight data points between 2013 and 2015 shows a consistent and reasonably stable quarterly growth for both publications and citations across the three databases, suggesting that all three databases provide sufficient stability of coverage to be used for more detailed cross-disciplinary comparisons.
Abstract: This article aims to provide a systematic and comprehensive comparison of the coverage of the three major bibliometric databases: Google Scholar, Scopus and the Web of Science. Based on a sample of 146 senior academics in five broad disciplinary areas, we therefore provide both a longitudinal and a cross-disciplinary comparison of the three databases. Our longitudinal comparison of eight data points between 2013 and 2015 shows a consistent and reasonably stable quarterly growth for both publications and citations across the three databases. This suggests that all three databases provide sufficient stability of coverage to be used for more detailed cross-disciplinary comparisons. Our cross-disciplinary comparison of the three databases includes four key research metrics (publications, citations, h-index, and hI,annual, an annualised individual h-index) and five major disciplines (Humanities, Social Sciences, Engineering, Sciences and Life Sciences). We show that both the data source and the specific metrics used change the conclusions that can be drawn from cross-disciplinary comparisons.
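As an aside on the metrics named in this abstract: the h-index is the largest h such that the author has at least h papers with at least h citations each. The sketch below is a minimal illustration of that definition (it is not code from either paper, and the citation counts are invented):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the paper at this rank still clears the bar
            h = rank
        else:
            break           # every later paper has even fewer citations
    return h

# Five papers with these invented citation counts give an h-index of 3.
print(h_index([10, 8, 3, 1, 0]))  # -> 3
```

The hI,annual named alongside it is, per the abstract, an annualised individual h-index – a variant that adjusts for co-authorship and career length.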

930 citations


Cites background from "Taylorizing business school researc..."

  • ...The adverse impact of this “audit culture” is well documented (see e.g. Adler & Harzing, 2009; Mingers & Willmott, 2013)....

Journal ArticleDOI
TL;DR: The historical development of scientometrics, sources of citation data, citation metrics and the “laws” of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments are considered.

560 citations


Cites background from "Taylorizing business school researc..."

  • ...Indeed journal ranking lists such as the UK Association of Business Schools’ (ABS) has a huge effect on research behaviour (Mingers & Willmott, 2013)....


Journal ArticleDOI
TL;DR: The authors adopted a systematic literature review methodology combined with bibliometric, network and content analysis based on 348 papers identified from mainstream academic databases, which provided insights not previously fully captured or evaluated by other reviews on this topic, including key authors, key journals and the prestige of the reviewed papers.

361 citations

Journal ArticleDOI
TL;DR: A review of the international literature on evaluation systems, evaluation practices, and metrics (mis)uses was written as part of a larger review commissioned by the Higher Education Funding Co....
Abstract: This review of the international literature on evaluation systems, evaluation practices, and metrics (mis)uses was written as part of a larger review commissioned by the Higher Education Funding Co ...

290 citations


Cites background from "Taylorizing business school researc..."

  • ...In economics and many departments in business studies, however, publication productivity has been strongly stimulated by the ubiquitous use of journal rankings as obligatory publication outlets for faculty (Mingers and Willmott 2013)....


Journal ArticleDOI
TL;DR: In this article, the authors reflect upon careering, securing identities and ethical subjectivities in academia in the context of audit, accountability and control surrounding new managerialism in UK Business Schools, and illustrate how rather than resisting an ever-proliferating array of governmental technologies of power, academics chase the illusive sense of a secure self through "careering"; a frantic and frenetic individualistic strategy designed to moderate the pressures of excessive managerial competitive demands.
Abstract: This paper reflects upon careering, securing identities and ethical subjectivities in academia in the context of audit, accountability and control surrounding new managerialism in UK Business Schools. Drawing upon empirical research, we illustrate how rather than resisting an ever-proliferating array of governmental technologies of power, academics chase the illusive sense of a secure self through ‘careering’; a frantic and frenetic individualistic strategy designed to moderate the pressures of excessive managerial competitive demands. Emerging from our data was an increased portrayal of academics as subjected to technologies of power and self, simultaneously being objects of an organizational gaze through normalizing judgements, hierarchical observations and examinations. Still this was not a monolithic response, as there were those who expressed considerable disquiet as well as a minority who reported ways to seek out a more embodied engagement with their work. In analyzing the careerism and preoccupation with securing identities that these technologies of visibility and self-discipline produce, we draw on certain philosophical deliberations and especially the later Foucault on ethics and active engagement to explore how academics might refuse the ways they have been constituted as subjects through new managerial regimes.

180 citations


Cites background from "Taylorizing business school researc..."

  • ...…managerialist pressures to perform, and these disciplinary techniques of evaluating research output, teaching quality and public/social impact assessments have become normalized and naturalized (Clarke et al., 2012; Harley, 2002; Harley and Lee, 1997; Keenoy, 2003; Mingers and Willmott, 2013)....


References
Book
01 Jan 1981
TL;DR: The Soft Systems Methodology (SSM) as discussed by the authors is an alternative approach which enables managers of all kinds and at any level to deal with the subtleties and confusions of the situations they face.
Abstract: “Whether by design, accident or merely synchronicity, Checkland appears to have developed a habit of writing seminal publications near the start of each decade which establish the basis and framework for systems methodology research for that decade.” (Hamish Rennie, Journal of the Operational Research Society, 1992). Thirty years ago Peter Checkland set out to test whether the Systems Engineering (SE) approach, highly successful in technical problems, could be used by managers coping with the unfolding complexities of organizational life. The straightforward transfer of SE to the broader situations of management was not possible, but by insisting on a combination of systems thinking strongly linked to real-world practice Checkland and his collaborators developed an alternative approach - Soft Systems Methodology (SSM) - which enables managers of all kinds and at any level to deal with the subtleties and confusions of the situations they face. This work established the now accepted distinction between hard systems thinking, in which parts of the world are taken to be systems which can be engineered, and soft systems thinking in which the focus is on making sure the process of inquiry into real-world complexity is itself a system for learning. Systems Thinking, Systems Practice (1981) and Soft Systems Methodology in Action (1990), together with an earlier paper, Towards a Systems-based Methodology for Real-World Problem Solving (1972), have long been recognized as classics in the field. Now Peter Checkland has looked back over the three decades of SSM development, brought the account of it up to date, and reflected on the whole evolutionary process which has produced a mature SSM. SSM: A 30-Year Retrospective, here included with Systems Thinking, Systems Practice, closes a chapter on what is undoubtedly the most significant single research programme on the use of systems ideas in problem solving. Now retired from full-time university work, Peter Checkland continues his research as a Leverhulme Emeritus Fellow.

7,467 citations


"Taylorizing business school researc..." refers background in this paper

  • ...Operational Research (OR) has mathematical roots but since the 1970s, especially in the UK, the adequacy of mathematical modelling of complex real-world problems has been questioned, resulting in the development of a new area of OR, known as ‘soft OR’ or ‘soft systems’ (Checkland, 1981)....


Book
28 Dec 2001
TL;DR: In this article, the authors discuss academic disciplines; overlaps, boundaries and specialisms; aspects of community life; patterns of communication; academic careers; and the wider context, with implications for theory and practice.
Abstract: Points of departure; academic disciplines; overlaps, boundaries and specialisms; aspects of community life; patterns of communication; academic careers; the wider context; implications for theory and practice. Appendix: research issues.

2,981 citations


"Taylorizing business school researc..." refers background in this paper

  • ...…of the ABS list themselves acknowledge, has shown considerable resilience, up until now at least, in ‘resist(ing) normative pressures to coalesce around a set of ontological, epistemological and methodological norms (Tranfield and Starkey, 1998)’ (Morris et al., 2009: 1444; see also Becher, 1989)....


Journal ArticleDOI
TL;DR: Investigating the methodological sophistication of case studies as a tool for generating and testing theory by analyzing case studies published during the period 1995-2000 in 10 influential management journals finds that case studies emphasized external validity at the expense of the more fundamental quality measures, internal and construct validity.
Abstract: This article investigates the methodological sophistication of case studies as a tool for generating and testing theory by analyzing case studies published during the period 1995-2000 in 10 influential management journals. We find that case studies emphasized external validity at the expense of the two more fundamental quality measures, internal and construct validity. Comparing case studies published in the three highest-ranking journals with the other seven, we reveal strategies that may be useful for authors wishing to conduct methodologically rigorous case study research.

1,878 citations


"Taylorizing business school researc..." refers background in this paper

  • ...…to internal, external or construct validity; and no preoccupation with the operationalization, measurement and statistical analysis of variables – the ethos of positivism, in the numerous guises of neo-positivism, tends to hold sway (e.g. Gibbert et al., 2008; Scandura and Williams, 2000)....


Journal ArticleDOI
TL;DR: In this paper, a framework for investigating the consequences, both intended and unintended, of public measures has been proposed, identifying two mechanisms, self-fulfilling prophecy and commensuration, that induce reactivity and then distinguishing patterns of effects produced by reactivity.
Abstract: Recently, there has been a proliferation of measures responding to demands for accountability and transparency. Using the example of media rankings of law schools, this article argues that the methodological concept of reactivity—the idea that people change their behavior in reaction to being evaluated, observed, or measured—offers a useful lens for disclosing how these measures effect change. A framework is proposed for investigating the consequences, both intended and unintended, of public measures. The article first identifies two mechanisms, self‐fulfilling prophecy and commensuration, that induce reactivity and then distinguishes patterns of effects produced by reactivity. This approach demonstrates how these increasingly fateful public measures change expectations and permeate institutions, suggesting why it is important for scholars to investigate the impact of these measures more systematically.

1,638 citations


"Taylorizing business school researc..." refers background in this paper

  • ...those created by the Financial Times and the Association of Business Schools) become influential for processes of recruitment, promotion and the selection of staff/outputs for submission to evaluation exercises, they come to shape the nature, structure and conditions of academic work (Espeland and Sauder, 2007; Sauder and Espeland, 2009)....

Journal ArticleDOI
TL;DR: In this article, a comparison of the strategies employed in management research in two periods, 1995-97 and 1985-87, was conducted through a content analysis of articles from the Academy of Management Journal.
Abstract: This study is a comparison of the strategies employed in management research in two periods, 1995–97 and 1985–87. Through a content analysis of articles from the Academy of Management Journal, Admi...

1,162 citations


"Taylorizing business school researc..." refers background in this paper

  • ...Even when the data are qualitative – and when there is no explicit hypothesis testing of propositions; no reference to internal, external or construct validity; and no preoccupation with the operationalization, measurement and statistical analysis of variables – the ethos of positivism, in the numerous guises of neo-positivism, tends to hold sway (e.g. Gibbert et al., 2008; Scandura and Williams, 2000)....

Frequently Asked Questions (2)
Q1. What are the contributions in this paper?

The paper critically examines how work is shaped by performance measures. The paper demonstrates, with reference to the Association of Business Schools (ABS) “Journal Guide”, how use of a journal list can come to dominate and define the focus and trajectory of a field of research, with detrimental consequences for the development of scholarship.

In the UK context, the authors suggest that the Business and Management Panel for any future evaluation exercises might: 1. Reiterate the exclusion of all use of journal lists from the evaluation process. It is hoped that the evidence and arguments presented in this paper will stimulate further discussion of the pros and cons of the use of journal lists.