Journal ArticleDOI

A Behavior-based Evaluation Instrument for Judges

01 Sep 1995-Justice System Journal (Routledge)-Vol. 18, Iss: 2, pp 173-184
TL;DR: In this article, the authors developed an instrument that uses "critical incidents" of actual judicial behavior as benchmarks for scales to measure judicial performance across six dimensions; field testing showed the instrument to be free from common rating biases (i.e., halo/horn and leniency effects).
Abstract: The belief that judicial performance should be evaluated has gained increasing momentum. A number of states have used surveys of attorneys as a primary source of information about judicial performance. The evaluation survey, if not carefully constructed, may lead to biased evaluations. This research reports on an effort to create a survey that would be easily administered while at the same time providing information free from bias. Using a procedure developed in other occupational fields, the researchers developed an instrument that uses "critical incidents" of actual judicial behavior as benchmarks for scales to measure judicial performance across six dimensions. Each of the six dimensions consists of five items. The instrument was field tested and shown to be free from biases (i.e., halo/horn and leniency effects) often found in evaluation instruments.

The people of the United States believe that voting for public officials is the ultimate expression of democracy. In some states, this adherence to the democratic ideal means judges are elected rather than appointed to office (these elections may be partisan or nonpartisan). Other states initially appoint judges and still provide for a sense of democratic accountability by holding retention elections. Thus, on some periodic basis, voters decide whether to retain a judge. Regardless of the circumstances, we expect the electorate to become informed about the candidates and the issues. However, research has found that the citizenry pays little heed to judicial elections (Jacob, 1990). Complicating the issue for voters is the lack of information available regarding judicial candidates. Underlying this problem in many cases is the strong norm to protect the judicial system's independence, including a desire to permit judges to make decisions without fear of retribution or favoritism (Farthing-Capowich, 1985). In many states, rules of judicial conduct severely limit what can be said by or about a jurist or his or her judicial record.
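The halo/horn and leniency checks described above can be screened for on any matrix of ratings (ratees × dimensions). The following is a minimal sketch of that idea using standard psychometric heuristics, not the authors' actual procedure; the data, function name, and thresholds are hypothetical.

```python
import numpy as np

def bias_diagnostics(ratings, scale_min=1, scale_max=5):
    """Rough screening for two common rating biases.

    ratings: 2-D array, rows = judges being rated, cols = dimensions.
    Leniency: how far the grand mean sits above the scale midpoint.
    Halo/horn: mean inter-dimension correlation; values near 1 suggest
    raters give one global impression rather than six distinct judgments.
    """
    ratings = np.asarray(ratings, dtype=float)
    midpoint = (scale_min + scale_max) / 2
    leniency = ratings.mean() - midpoint

    corr = np.corrcoef(ratings, rowvar=False)       # dimension x dimension
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    halo = off_diag.mean()
    return leniency, halo

# Hypothetical data: 4 judges rated on 3 dimensions (1-5 scale).
data = [[4, 4, 5],
        [2, 3, 2],
        [5, 5, 5],
        [3, 2, 3]]
leniency, halo = bias_diagnostics(data)
```

A leniency value near zero and moderate (rather than near-perfect) inter-dimension correlations are consistent with the article's claim that behaviorally anchored scales resist these effects.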
As a result, there has been a strong desire
Citations
Journal ArticleDOI
TL;DR: In this article, the authors show that gender and race bias still exist in attorney surveys conducted in accordance with the ABA's Guidelines, which results in predictable problems with the reliability and validity of the information obtained through these survey instruments.
Abstract: Judicial performance evaluations (JPEs) are a critical part of selecting judges, especially in states using merit-based selection systems. This article shows empirical evidence that gender and race bias still exist in attorney surveys conducted in accordance with the ABA's Guidelines. This systematic bias is related to a more general problem with the design and implementation of JPE surveys, which results in predictable problems with the reliability and validity of the information obtained through these survey instruments. This analysis raises questions about the validity and reliability of the JPE. This is a particularly poor outcome, as it means that we are subjecting many judges to state-sponsored evaluations that are systematically biased against women and minorities.

13 citations

Posted Content
TL;DR: In this paper, the focus is on the individual judicial officer, identifying how judges ought to perform their judicial work and assessing any departures from that model; however, there is considerable diversity in judging that abstract models of JPE may not anticipate.
Abstract: Judicial performance evaluation processes and programs tend to imply an abstract, normative model of the proper judge. The focus is on the individual judicial officer, identifying how judges ought to perform their judicial work and assessing any departures from the model. However, there is considerable diversity in judging which abstract models of JPE may not anticipate. Importantly, judicial performance occurs within a context – the practical and natural settings in which everyday judicial work is undertaken. This entails time constraints, workload patterns, and dependence on the activities of others, factors over which the judicial officer may have little control, but which in turn may affect his/her behaviour. Often, judicial performance is taken to refer to in-court work only. Judicial work also occurs outside court and outside regular court hours and so may be less visible for judicial performance evaluation. Although there is considerable variety in judicial experiences of judging, JPE only sometimes includes self-perceptions or judges' own reflections on their work. Social science and socio-legal research, including original empirical data from Australia, investigates judging in various contexts and explores judicial officers' experiences of their work. Such empirical research can widen understandings of judicial performance and evaluation.

12 citations


Cites background from "A Behavior-based Evaluation Instrum..."

  • ...A central premise of Judicial Performance Evaluation (JPE) is that, in order to evaluate judicial performance and judicial quality, it is essential to understand judicial behaviour (Bernick and Pratto 1995)....

    [...]

02 Jul 2014
TL;DR: In this paper, the authors investigate judging in various contexts and explore judicial officers' experiences of their work, focusing on the individual judicial officer, identifying how judges ought to perform their judicial work and assessing any departures from the model.
DOWNLOAD THIS PAPER FROM SSRN: http://ssrn.com/abstract=2533861

10 citations

Journal ArticleDOI
TL;DR: The authors found that the survey component has difficulty distinguishing among judges on the basis of relevant criteria, that the question prompts intended to measure performance on different ABA Categories are likewise indistinguishable, and that on some measures female judges fare disproportionately worse than male judges.
Abstract: Judicial Performance Evaluation (JPE) is generally seen as an important part of the merit system, which often suffers from a lack of relevant voter information. Utah's JPE system has undergone significant change in recent years. Using data from the two most recent JPE surveys, we provide a preliminary look at the operation of this new system. Our results suggest that the survey component has difficulty distinguishing among the judges on the basis of relevant criteria. The question prompts intended to measure performance on different ABA Categories are also indistinguishable. We also find evidence that, on some measures, female judges do disproportionately worse than male judges. We suggest that the free response comments and the new Court Observation Program results may improve the ability of the commission to make meaningful distinctions among the judges on the basis of appropriate criteria.
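The finding that the survey "has difficulty distinguishing among the judges" is, in effect, a claim about between-judge variance in the ratings. One standard way to probe such a claim is a one-way ANOVA across judges; the sketch below computes the F statistic from scratch. The data and function name are hypothetical illustrations, not the study's actual analysis.

```python
import numpy as np

def between_judge_f(ratings_by_judge):
    """One-way ANOVA F statistic: do mean ratings differ across judges?

    ratings_by_judge: one sequence of ratings per judge.
    An F near 1 means the survey barely separates the judges;
    a large F means ratings differ systematically by judge.
    """
    groups = [np.asarray(g, dtype=float) for g in ratings_by_judge]
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()

    k = len(groups)                      # number of judges
    n = all_vals.size                    # total number of ratings
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical attorney ratings for three judges (1-5 scale).
f_stat = between_judge_f([[4, 5, 4, 4], [3, 3, 2, 3], [5, 4, 5, 5]])
```

In practice one would compare the statistic against the F distribution with (k−1, n−k) degrees of freedom to get a p-value, e.g. via scipy.stats.f_oneway.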

2 citations


Cites background from "A Behavior-based Evaluation Instrum..."

  • ...This is where things seem to go wrong (Bernick and Pratto 1995; Elek, Rottman and Cutler 2012; Gill 2014)....

    [...]

  • ...Many other works have emphasized the lack of question uniformity and reliability in these JPEs (Aynes 1981; Bernick and Pratto 1995; White 2001)....

    [...]

References
Book
01 Jan 1989
TL;DR: This book presents an integrated treatment of survey errors and costs, covering coverage error, nonresponse, sampling error, and measurement error arising from the interviewer, the respondent, the questionnaire, and the mode of data collection.
Abstract (table of contents):
1. An Introduction to Survey Errors. 1.1 Diverse Perspectives on Survey Research. 1.2 The Themes of Integration: Errors and Costs. 1.3 The Languages of Error. 1.4 Classifications of Error Within Survey Statistics. 1.5 Terminology of Errors in Psychological Measurement. 1.6 The Language of Errors in Econometrics. 1.7 Debates About Inferential Errors. 1.8 Important Features of Language Differences. 1.9 Summary: The Tyranny of the Measurable. 1.10 Summary and Plan of This Book.
2. An Introduction to Survey Costs. 2.1 Rationale for a Joint Concern About Costs and Errors. 2.2 Use of Cost and Error Models in Sample Design. 2.3 Criticisms of Cost-Error Modeling to Guide Survey Decisions. 2.4 Nonlinear Cost Models Often Apply to Practical Survey Administration. 2.5 Survey Cost Models Are Inherently Discontinuous. 2.6 Cost Models Often Have Stochastic Features. 2.7 Domains of Applicability of Cost Models Must Be Specified. 2.8 Simulation Studies Might Be Best Suited to Design Decisions. 2.9 Is Time Money? 2.10 Summary: Cost Models and Survey Errors.
3. Costs and Errors of Covering the Population. 3.1 Definitions of Populations Relevant to the Survey. 3.2 Coverage Error in Descriptive Statistics. 3.3 An Approach to Coverage Error in Analytic Statistics. 3.4 Components of Coverage Error. 3.5 Coverage Problems with the Target Population of Households. 3.6 Measurement of and Adjustments for Noncoverage Error. 3.7 Survey Cost Issues Involving Coverage Error. 3.8 Summary.
4. Nonresponse in Sample Surveys. 4.1 Nonresponse Rates. 4.2 Response Rate Calculation. 4.3 Temporal Change in Response Rates. 4.4 Item Missing Data. 4.5 Statistical Treatment of Nonresponse in Surveys. 4.6 Summary.
5. Probing the Causes of Nonresponse and Efforts to Reduce Nonresponse. 5.1 Empirical Correlates of Survey Participation. 5.2 Efforts by Survey Methodologists to Reduce Refusals. 5.3 Sociological Concepts Relevant to Survey Nonresponse. 5.4 Psychological Attributes of Nonrespondents. 5.5 Summary.
6. Costs and Errors Arising from Sampling. 6.1 Introduction. 6.2 The Nature of Sampling Error. 6.3 Measuring Sampling Error. 6.4 Four Effects of the Design on Sampling Error. 6.5 The Effect of Nonsampling Errors on Sampling Error Estimates. 6.6 Measuring Sampling Errors on Sample Means and Proportions from Complex Samples. 6.7 The Debate on Reflecting the Sample Design in Estimating Analytic Statistics. 6.8 Reflecting the Sample Design When Estimating Complex Statistics. 6.9 Summary.
7. Empirical Estimation of Survey Measurement Error. 7.1 A First Estimation of Observational Errors Versus Errors of Nonobservation. 7.2 Laboratory Experiments Resembling the Survey Interview. 7.3 Measures External to the Survey. 7.4 Randomized Assignment of Measurement Procedures to Sample Persons. 7.5 Repeated Measurements of the Same Persons. 7.6 Summary of Individual Techniques of Measurement Errors Estimation. 7.7 Combinations of Design Features. 7.8 Summary.
8. The Interviewer as a Source of Survey Measurement Error. 8.1 Alternative Views on the Role of the Observer. 8.2 The Roles of the Survey Interviewer. 8.3 Designs for Measuring Interview Variance. 8.4 Interviewer Effects in Personal Interview Surveys. 8.5 Interviewer Effects in Centralized Telephone Surveys. 8.6 Explaining the Magnitude of Interviewer Effects. 8.7 Summary of Research on Interviewer Variance. 8.8 Measurement of Interviewer Compliance With Training Guidelines. 8.9 Experiments in Manipulating Interviewer Behavior. 8.10 Social Psychological and Sociological Explanations for Response Error Associated with Interviewers. 8.11 Summary.
9. The Respondent as a Source of Measurement Error. 9.1 Components of Response Formation Process. 9.2 The Encoding Process and the Absence of Knowledge Relevant to the Survey Question. 9.3 Comprehension of the Survey Question. 9.4 Retrieval of Information from Memory. 9.5 Judgment of Appropriate Answer. 9.6 Communication of Response. 9.7 Sociological and Demographic Correlates of Respondent Error. 9.8 Summary.
10. Measurement Errors Associated with the Questionnaire. 10.1 Properties of Words in Questions. 10.2 Properties of Question Structure. 10.3 Properties of Question Order. 10.4 Conclusions About Measurement Error Related to the Questionnaire. 10.5 Estimates of Measurement Error from Multiple Indicator Models. 10.6 Cost Models Associated with Measurement Error Through the Questionnaire. 10.7 Summary.
11. Response Effects of the Mode of Data Collection. 11.1 Two Very Different Questions About Mode of Data Collection. 11.2 Properties of Media of Communication. 11.3 Applying Theories of Mediated Communication to Survey Research. 11.4 Findings from the Telephone Survey Literature. 11.5 Interaction Effects of Mode and Other Design. 11.6 Survey Costs and the Mode of Data Collection. 11.7 Cost and Error Modeling with Telephone and Face to Face Surveys. 11.8 Summary.
References. Index.

1,470 citations

Book
01 Jan 1982
TL;DR: A review of the book "Applied Psychology in Personnel Management" by Wayne Cascio.
Abstract: The article reviews the book “Applied Psychology in Personnel Management,” by Wayne Cascio.

638 citations