
Showing papers in "Canadian Psychology in 1993"


Journal ArticleDOI
TL;DR: In this article, the authors present a new paradigm that uses experimental mathematics (in the methodological literature, often called Monte Carlo simulation) to examine the claims made in the levels of measurement controversy, and demonstrate that the approach is closely linked to representational measurement theory.
Abstract: The notion that nonparametric methods are required as a replacement for parametric statistical methods when the scale of measurement in a research study does not achieve a certain level was discussed in light of recent developments in representational measurement theory. A new approach to examining the problem via computer simulation was introduced. Some of the beliefs that have been widely held by psychologists for several decades were examined by means of a computer simulation study that mimicked measurement of an underlying empirical structure and performed two-sample Student t-tests on the resulting sample data. It was concluded that there is no need to replace parametric statistical tests by nonparametric methods when the scale of measurement is ordinal and not interval.

Stevens' (1946) classic paper on the theory of scales of measurement triggered one of the longest standing debates in behavioural science methodology. The debate -- referred to as the levels of measurement controversy, or measurement-statistics debate -- is over the use of parametric and nonparametric statistics and its relation to levels of measurement. Stevens (1946; 1951; 1959; 1968), Siegel (1956), and most recently Siegel and Castellan (1988) and Conover (1980) argue that parametric statistics should be restricted to data of interval scale or higher. Furthermore, nonparametric statistics should be used on data of ordinal scale. Of course, since each scale of measurement has all of the properties of the weaker scales, statistical methods requiring only a weaker scale may be used with the stronger scales. A detailed historical review linking Stevens' work on scales of measurement to the acceptance of psychology as a science, and a pedagogical presentation of fundamental axiomatic (i.e., representational) measurement, can be found in Zumbo and Zimmerman (1991).

Many modes of argumentation can be seen in the debate about levels of measurement and statistics. This paper focusses almost exclusively on an empirical form of rhetoric using experimental mathematics (Ripley, 1987). The term experimental mathematics comes from mathematical physics. It is loosely defined as the mimicking of the rules of a model of some kind via random processes. In the methodological literature this is often referred to as Monte Carlo simulation. However, for the purpose of this paper, the terms experimental mathematics or computer simulation are preferred to Monte Carlo because the latter typically refers to examining the robustness of a test in relation to particular statistical assumptions. Measurement level is not an assumption of the parametric statistical model (see Zumbo & Zimmerman, 1991, for a discussion of this issue), and to call the method used herein "Monte Carlo" would imply otherwise. The term experimental mathematics emphasizes the modelling aspect of the present approach to the debate.

The purpose of this paper is to present a new paradigm using experimental mathematics to examine the claims made in the levels of measurement controversy. As Michell (1986) demonstrated, the concern over levels of measurement is inextricably tied to the differing notions of measurement and scaling. Michell further argued that fundamental axiomatic measurement, or representational theory (see, for example, Narens & Luce, 1986), is the only measurement theory which implies a relation between measurement scales and statistics. Therefore, the approach advocated in this paper is linked closely to representational theory.

The novelty of this approach, to the authors' knowledge, is in the use of experimental mathematics to mimic representational measurement. Before describing the methodology used in this paper, we will briefly review its motivation.

Admissible Transformations

Representational theory began in the late 1950s with Scott and Suppes (1958) and later with Suppes and Zinnes (1963), Pfanzagl (1968), and Krantz, Luce, Suppes & Tversky (1971). …
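To make the method concrete, here is a minimal sketch (not the authors' code; the latent structure, the monotone transformation, and all parameters are illustrative assumptions) of this kind of experimental-mathematics study: latent interval-scale data are degraded to an ordinal scale, and two-sample t-tests are run under a true null hypothesis.

```python
# A sketch of an experimental-mathematics study of the measurement-statistics
# debate: mimic ordinal measurement of an underlying interval-scale structure
# and check whether the two-sample t-test keeps its nominal Type I error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1993)
n_reps, n, alpha = 10_000, 20, 0.05

def ordinalize(x):
    """A monotone (order-preserving) but nonlinear mapping, mimicking
    measurement that achieves only ordinal scale (6 ordered categories)."""
    return np.digitize(x, bins=[-1.5, -0.5, 0.0, 0.75, 2.0])

rejections = 0
for _ in range(n_reps):
    # Both groups share the same latent distribution, so H0 is true.
    g1 = ordinalize(rng.normal(size=n))
    g2 = ordinalize(rng.normal(size=n))
    _, p = stats.ttest_ind(g1, g2)
    rejections += p < alpha

print(f"Empirical Type I error: {rejections / n_reps:.3f} (nominal {alpha})")
# A value near .05 illustrates the paper's conclusion that ordinal
# measurement alone does not invalidate the parametric t-test.
```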

247 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present two divergent theories of immigrants' adaptation implied by the terms assimilation and acculturation, review the linear and bidimensional models these theories have generated, and propose a third model, orthogonal cultural identification.
Abstract: This paper will review the predominant models developed by researchers to assess the psychological adaptation of immigrants in the host society. The use of the terms assimilation and acculturation to reflect the process of change undergone by immigrants will be discussed. Although these terms have been used interchangeably, the outcome of change is very different in each. The difference between assimilation and acculturation is reflected in the models of adaptation regrouped under linear and bidimensional models. A third model, called orthogonal cultural identification, is presented in light of the criticisms made of the first two types of models.

The psychosocial changes undergone by immigrants who move from one country of residence to another have been subsumed under the terms assimilation or acculturation. Although these two terms refer to similar processes of change within individuals, their outcome is quite different. Eisenstadt (1954) identified three stages in the migration process. The first consists of the needs or dispositions which motivate a person to migrate; the second stage is the physical transition itself, from the original society to the new one; the third stage refers to the absorption of the immigrant within the social and cultural framework of the new society. Researchers agree that psychosocial changes experienced by immigrants in this third phase include "the learning of new roles, the transformation of primary group values, and the extension of participation, beyond the primary group, in the main spheres of the social system" (Eisenstadt, 1954, p. 9). There is much disagreement, however, about whether successful adaptation is marked by loss of identification with the heritage culture (Eisenstadt, 1954) or whether adaptation can occur without any such loss (Spindler, 1978).

This paper will present two divergent theories of immigrants' adaptation implied by the terms assimilation and acculturation. The models which these theories have generated will be divided into two broad categories: 1) linear models -- representing cultural change on a linear bipolar continuum, going from the heritage culture to the host culture; and 2) bidimensional models -- in which two independent dimensions of cultural change are crossed at right angles to each other, resulting in four adaptation styles which immigrants can adopt. Some of the early models of acculturation and assimilation will be presented, since many of these have served as guidelines for later research. The critical review of models will be followed by findings on the outcome of immigrants' adaptation, according to the two types of models. Our intent is to assess whether various Canadian ethnic groups tend to maintain their heritage culture or to replace it with the host culture. In light of the criticisms made of the models reviewed and of the findings summarized, a third model, orthogonal cultural identification, will be presented.

Assimilation

Assimilation is a term used as far back as 1677, in reference to conformity with the country in which one lives (Oxford English Dictionary, 1989). Simons (1900) defined assimilation at the turn of the century as "that process of adjustment or accommodation which occurs between the members of two different races, if their contact is prolonged and if the necessary psychic conditions are present" (p. 791). Park and Burgess (in International Encyclopedia of the Social Sciences [IESS], 1968) defined it as "a process of interpenetration and fusion in which persons or groups acquire the memories, sentiments, and attitudes of other persons or groups, and, by sharing their experience and history, are incorporated with them in a common cultural life" (p. 438).

Eisenstadt (1985) established a very clear interaction between immigrants and the host society during the process of assimilation. Successful assimilation, according to Eisenstadt, occurs when immigrants have become full participants in the "institutions" of the host society and identify completely with that society. …

168 citations


Journal ArticleDOI
TL;DR: In this paper, a sequence of developments is proposed which would have the effect of reworking the elementary sensorimotor schemata present at birth, which are causal in nature, into the propositional representational states which develop in two- to four-year-olds and which operate on the basis of meaning, significance and intentionality.
Abstract: Developmental psychology, like anthropological psychology, has allowed us to see that others, including the young, must be described not merely as failed attempts to achieve modern adult norms but as peoples in their own right. Secondly, it has shown that the structures and processes needed for the explanation of adult minds, specifically symbolic representational systems, cannot be assumed to be present in children. In this paper I examine the notion of representation and its role in cognitive theory. The representational theory of mind simply takes for granted the existence of such representational states and processes as symbol use, belief, meaning and intention. The problem for developmental psychology is to explain the origin and development of such states. A sequence of developments is proposed which would have the effect of reworking the elementary sensorimotor schemata present at birth, which are causal in nature, into the propositional representational states which develop in two- to four-year-olds and which operate on the basis of meaning, significance and intentionality. The theory is used to explain a series of intellectual achievements of young children.

In her introduction to this symposium Callaghan (this issue) cited the contradictory claims of Wundt and Baldwin as to the relation between developmental theory and general psychological theory, Wundt claiming that development cannot be studied fruitfully without reference to adult norms and Baldwin that adult norms can be understood only in terms of their developmental history. In my view both are right. A theory of adult cognition is required if we are to understand developmental changes as solutions to problems which are more or less universal and which have been solved in particular ways by adults in that culture.

Equally important, however, is the significance of developmental theory for theories of adult cognition. Baldwin's claim was that no process can be understood unless we understand how it came to be -- a view which would be hotly contested by structuralists who insist that we distinguish synchronic from diachronic descriptions. That, after all, was what made possible the development of modern semantic theory as opposed to the traditional etymological accounts of word and sentence meaning.

Developmental studies make at least two other contributions to general psychology. First, they help free us from our adult ethnocentrism, which sees all members of all cultures other than the currently dominant one as faltering steps toward or failed attempts at achieving modern adult norms, by insisting that other cultural or age groups be understood on their own terms and in their own right. That, after all, was the legacy of modern anthropology to cultural studies, and it was the legacy of Piaget to developmental studies.

But secondly, developmental studies may permit us to critically examine the underlying assumptions about mental functions and to either reject them as unwarranted or, in the best case, provide a basis for justifying them. This is the case for the topic I shall develop, namely, the assumption that the mind is a representational system, a system which operates on the basis of symbols and meanings rather than simply chemical and biological causes.

The Problem

A central question in cognitive psychology is how the brain, which is a purely physical-biological causal system, can ever produce or come to be a mental system, one which operates on the basis of meanings, beliefs and intentions. It seems inescapable that we experience the actions of ourselves and others in terms of beliefs, desires and intentions -- mental states -- and yet as scientists we are committed to the view that the brain is a purely causal system. How are we to reconcile these seemingly incommensurable notions?

We are all familiar with the traditional alternatives. Abolish the notion of mind as the behaviourists attempted to do and explain behaviour in terms of complex causal states which reflect accumulated patterns of experienced objects and events. …

73 citations


Journal ArticleDOI
TL;DR: In this paper, a correlational analysis of attendance records and grades in a first-year psychology course was performed, and a correlation between attendance and final grades in the course yielded r =.66, p <.01.
Abstract: A correlational analysis of attendance records and grades in a first-year psychology class was performed. Subjects were informed that the attendance records would not affect their grades in the course. A correlation between attendance and final grades in the course yielded r = .66, p < .01.

Most psychology professors will teach a first-year psychology course at some point in their careers. Most, if not all, will find that attendance is a more serious problem in this course than in any other they ever teach. There are many potential factors producing this result, some of which cannot be overcome. For example, assume that students put greater effort into courses in the realm of their intended major. First-year psychology, or any first-year course, has a greater proportion of students for whom the material is outside their intended major as compared to upper-year courses. This problem is inherent in the structure of university programmes. Someone who intends to major in economics can probably only take one introductory economics course in first year. Indeed, most fields of study are structured this way with the intention that students receive a broad education, as well as a general overview of their intended area of study. This is clearly a worthwhile approach; however, it also means first-year courses have many students whose interests lie elsewhere.

Research has revealed significant relationships between attendance and grades (Gussell, 1976; Jones, 1984; Street, 1975; Vidler, 1980). Buckalew, Daly and Coffield (1986) correlated initial class attendance of undergraduates with final grades and found a significant correlation of r = .31. They concluded that initial attendance is a fair predictor of future academic performance.

The present paper offers a correlational analysis of the relationship between attendance during the second semester of a two-semester first-year psychology course and final grades in the course. The implications of this study were obvious: a non-significant relationship would suggest that restructuring of the course was necessary, because the course itself was not offering the students any knowledge that a thorough reading of the textbook could not impart; a positive relationship would provide information to students as to the relevance of attendance, which may be important to them when making their own scheduling decisions.
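For readers who want to reproduce this kind of analysis, a minimal sketch follows; the attendance counts and grades below are fabricated for illustration (the study's raw data are not reproduced in the abstract).

```python
# A sketch of the correlational analysis described above, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 100
classes_attended = rng.integers(0, 25, size=n_students)  # hypothetical attendance counts
final_grade = 50 + 1.5 * classes_attended + rng.normal(0, 10, n_students)  # hypothetical grades

r, p = stats.pearsonr(classes_attended, final_grade)
print(f"r = {r:.2f}, p = {p:.4f}")  # the paper reports r = .66, p < .01
```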

60 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss five myths concerning the application of parametric and nonparametric tests and argue that the victory of parametric inference over nonparametric inference is more illusory than real.
Abstract: Five myths concerning the application of parametric and nonparametric tests are discussed. Well known considerations of power, robustness, and scale of measurement are reviewed briefly. Less well known ideas about the nature of the null hypothesis and generality of application are outlined. It is concluded that in many applications behavioural researchers are using what appear to be parametric tests, but actually are evaluating nonparametric hypotheses and estimating the probability of a Type I error that would be obtained with a nonparametric test.

Statisticians have debated the relative merits of parametric and nonparametric inference for over 60 years, and increasingly that literature favours nonparametric inference when applied to data from behavioural research. Nevertheless, even a cursory look at the psychological research literature reveals that the parametric platform has been more convincing to psychologists. Why is there a schism between statisticians and researchers? We hope to answer this question by suggesting that the issue of parametric versus nonparametric inference has been dominated by a collection of interrelated myths and half-truths that have misled researchers into using, or believing they are using, parametric tests. In debunking these myths we argue that the victory of parametric inference over nonparametric inference is more illusory than real. The myths to be discussed are:

1. Parametric tests are more powerful than nonparametric tests.
2. Parametric tests are robust.
3. Nonparametric tests are tests on non-interval data -- and t- and F-tests are exclusively parametric tests.
4. The null hypotheses evaluated by parametric tests are direct and clear, whereas the null hypotheses evaluated by nonparametric tests are indirect and vague.
5. Nonparametric tests are restricted in their application.

These myths are all tied in one way or another to evaluating the validity of statistical tests performed on real-life data -- that is, evaluating the "believability" of obtained probabilities of Type I error. Since the validity of all statistical tests relies on satisfying certain assumptions, we begin by briefly reviewing the assumptions underlying parametric and nonparametric tests.

Assumptions

The assumptions underlying the best-known and most frequently used parametric statistics include:

1. All observations are randomly and independently sampled from their parent populations.
2. The population distributions from which samples are selected are normal.
3. All populations have the same variance.
4. The data are measured on at least an interval scale.

In contrast, nonparametric statistics make fewer and generally much weaker assumptions. Most importantly, though less well known, nonparametric tests need not assume random sampling. Although all nonparametric tests assume independence of sample observations, that assumption can be tied to random assignment in experiments or exchangeability in observational studies, rather than to random sampling.

Given only this information, the behavioural researcher should be convinced to use nonparametric tests. Rarely do we know the extent to which population assumptions are met, and even less often do we randomly sample our subjects. Nevertheless, we persist in using parametric tests primarily because we are persuaded by the myths listed above. The validity of these myths is examined next.

I. Parametric Tests are More Powerful than Nonparametric Tests

The power of a test is the probability of correctly rejecting a null hypothesis. Efficiency is a relative term comparing the power of one test to another when both are used to test the same null hypothesis; the relative efficiency of one test with respect to another is the ratio of the sample sizes needed for the two tests to achieve the same power. Thus, another way to phrase the first myth is to say that nonparametric tests require more subjects to achieve the same power as parametric tests. …
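A simulation makes the efficiency comparison concrete. The sketch below (my own illustration, not the authors' code) estimates the power of the two-sample t-test and the Wilcoxon-Mann-Whitney test under a normal parent, where the t-test holds a slight edge (the Wilcoxon's asymptotic relative efficiency is 3/pi, about .955), and under a heavy-tailed parent, where the ordering can reverse.

```python
# A sketch comparing the power of the t-test and the Wilcoxon rank-sum test
# by simulation, under a true mean shift (so rejections count as power).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_reps, n, shift, alpha = 5_000, 25, 0.5, 0.05

def power(sampler):
    hits_t = hits_w = 0
    for _ in range(n_reps):
        g1, g2 = sampler(n), sampler(n) + shift
        hits_t += stats.ttest_ind(g1, g2).pvalue < alpha
        hits_w += stats.mannwhitneyu(g1, g2, alternative="two-sided").pvalue < alpha
    return hits_t / n_reps, hits_w / n_reps

for name, sampler in [("normal", lambda m: rng.normal(size=m)),
                      ("heavy-tailed t(2)", lambda m: rng.standard_t(2, size=m))]:
    pt, pw = power(sampler)
    print(f"{name:18s} t-test: {pt:.3f}  Wilcoxon: {pw:.3f}")
```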

55 citations


Journal ArticleDOI
TL;DR: The normal curve model was originally designed for parameter estimation and expresses the null hypothesis in terms of population parameters. In this paper, the less well known permutation model is contrasted with the normal model, and the case is made that the permutation model is often more appropriate for the analysis of psychological data.
Abstract: There are two especially useful models of statistical inference, although only one, the normal curve model, is universally taught. The less well known permutation model is contrasted with the normal model and the case made that the permutation model is often more appropriate for the analysis of psychological data. Inappropriate interpretations generated by teaching only the normal model are illustrated. It is recommended that both models be taught so that students and applied workers have a better chance both of understanding the nature of the hypothesis that is being tested, and of correctly discriminating the statistical conditions that support causal inference and generality inference.

Nearly all statistical testing is introduced using the "normal curve" model that leads to analysis of data by t and F ratios and to a preoccupation with generalizability of research results. The model is so common that many applied workers assume that it is the only model for statistical testing. The normal model was originally designed for parameter estimation and expresses the null hypothesis in terms of population parameters. For example, the null hypothesis for assessing the difference between two means is often expressed as H0: μ1 = μ2. As we all know, the ideal conditions for using the normal model to test statistical hypotheses such as this involve cases randomly selected from normally distributed parent populations with equal variances.

Even textbook authors who introduce statistical testing with the binomial test emphasize the necessity of random sampling from a specified population. In most empirical research, however, the concept of population enters statistical analysis not because the experimenter has actually randomly sampled some population to which he or she wishes to generalize, but because the only way researchers have been taught to interpret the results of statistical tests is in terms of inferences about populations.

Kempthorne (1979) proposed that attempts to place all inferential situations in the normal model are misguided. He suggests that trying to encompass all types of investigation under one framework has led to the pooling of different types of investigation that have strongly different logical natures. For example, surveys and comparative experiments have different methods of data collection and different inferential goals. A major concern of surveys is external validity and generality inference, whereas comparative experiments are more concerned with internal validity and causal inference. Thus, we may need different statistical models for different research contexts.

The Permutation Model

An alternative to the normal model is the permutation or randomization model initiated by Fisher (1935) and developed by Pitman (1937a, 1937b, 1938). The permutation model is nonparametric because no formal assumptions are made about the population parameters of the reference distribution, i.e., the distribution to which an obtained result is compared to determine its probability when the null hypothesis is true. Typically the reference distribution is a sampling distribution for parametric tests and a permutation distribution for many nonparametric tests.

For many applied workers, "nonparametric" has been equated with rank tests. That is, either the data are ranks, or scores are transformed to ranks before conducting a statistical test. It is important to point out that the familiar rank tests, such as the Wilcoxon Rank-Sum test or Mann-Whitney U test, are members of a family of tests called permutation tests or randomization tests. It is less well known that there are two sets of permutation tests, those based on ranks and those based on scores. The tests on ranks traditionally have been named after the persons who were important in their development (e.g., Mann-Whitney, Wilcoxon, Kruskal-Wallis, Friedman). Often, tests on scores are referred to just as permutation tests, although some reference has been made to the Fisher-Pitman test (e. …
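As an illustration of the score-based (Fisher-Pitman type) branch of this family, here is a minimal permutation-test sketch; the data and the number of permutations are assumptions for illustration.

```python
# A sketch of a permutation test on raw scores: the reference distribution
# is built by re-randomizing group labels, not by appeal to a sampled population.
import numpy as np

rng = np.random.default_rng(7)
g1 = np.array([12.1, 9.8, 11.4, 10.2, 13.0])  # hypothetical scores, group 1
g2 = np.array([8.7, 10.1, 9.2, 7.9, 9.5])     # hypothetical scores, group 2

observed = g1.mean() - g2.mean()
pooled = np.concatenate([g1, g2])

n_perms, count = 20_000, 0
for _ in range(n_perms):
    perm = rng.permutation(pooled)                      # re-randomize labels
    diff = perm[:len(g1)].mean() - perm[len(g1):].mean()
    count += abs(diff) >= abs(observed)                 # as-or-more-extreme results

print(f"two-sided permutation p = {count / n_perms:.4f}")
# Replacing the scores with their ranks before this procedure yields the
# Wilcoxon/Mann-Whitney test, illustrating that rank tests are a special
# case of permutation tests.
```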

44 citations


Journal ArticleDOI
TL;DR: This paper examines some of the potentially problematic ethical issues that can arise when personal therapy is included in graduate programs for psychologists.
Abstract: Clinical and counselling psychologists in B.C. were surveyed about their opinions on personal therapy as a component of professional training, and about the circumstances under which personal therapy had been provided to them while they were in graduate school. Respondents identified potential benefits and risks of personal therapy. The majority, 88%, saw at least one benefit of the experience, and 83% saw at least one risk. Forty-one percent of respondents had undergone therapy as part of their clinical training, and in many cases this experience was provided in a manner inconsistent with ethical guidelines. Of those receiving personal therapy, 46% reported that therapy was required rather than optional, 62% were not able to choose their therapist, and 69% received therapy from an academic colleague. Several ethical issues concerning therapeutic interventions in the training of psychotherapists are discussed.

When being trained to work in the helping professions, many future therapists undergo some type of psychotherapeutic treatment as part of their preparation. This "personal therapy" experience has been used in training psychoanalysts (Caligor, 1985), other psychodynamic therapists (Battegay, 1983; Strupp, Butler, & Rosser, 1988), family therapists (Forman, 1984; Francis, 1988; Guldner, 1978), group therapists (Salvendy, 1985), behaviour therapists (McNamara, 1986) and clinical psychologists (Garfield & Kurtz, 1976; Guy, Stark, & Poelstra, 1988).

Although psychoanalysis has traditionally asserted that personal therapy, the "training analysis", is a necessary prerequisite for developing professional competence (Caligor, 1985), opinion varies concerning the importance of this experience for psychologists. In their survey of 87 APA-approved clinical training programs, Wampler and Strupp (1976) found that 4% of the programs required personal therapy, 67% encouraged it, and the rest neither encouraged nor discouraged it. As a result, not all psychologists receive the same exposure to personal therapy. A recent survey of APA members working in the areas of clinical psychology, psychotherapy, and independent practice (Guy et al., 1988) showed that 70% had received therapy before completing their degree.

The presumed benefits of this experience include mastery of technique as a result of exposure to a role model, increased self-awareness, a sense of conviction about the validity of the therapeutic model, and resolution of personal problems or "emotional baggage" which could interfere with the therapist's effectiveness in the treatment situation (Nierenberg, 1972). Arguments against such experiences include the risk of limiting the trainees' openness to a variety of therapeutic models, emotional and financial stress on trainees, and the lack of conclusive evidence that personal therapy is an effective method of training professional helpers (Clark, 1986; Macaskill, 1988).

A variety of ethical issues can also be raised if personal therapy is mandatory for students, or if it is provided by faculty members who are also serving as instructors of those students. However, these issues are seldom addressed in the literature on personal therapy, and most of the published studies deal with the format and content of the therapy experience (e.g., Francis, 1988; McNamara, 1986) rather than with the ethical implications (e.g., Newman, 1981). In contrast, the current study examines some of the potentially problematic ethical issues that can arise when personal therapy is included in graduate programs.

In response to concerns that a CPA member raised about the use of "encounter sessions" in a group process course, the Committee on Ethics of the Canadian Psychological Association (CECPA) considered the potentially serious ethical ramifications of providing therapeutic processes within the context of an educational setting. The Committee recommended several guidelines (CECPA, 1988) for the inclusion of personal therapy experiences as part of any training program, and its suggestions are congruent with a report from the Ethics Committee of the American Psychological Association (APA, 1987) on the ethical issues raised by required psychotherapy for trainees. …

38 citations


Journal ArticleDOI
TL;DR: The authors introduce a computerized memory assessment battery comprising ten tests, organized around working memory, levels of processing in long-term memory, and explicit and implicit retrieval components.
Abstract: Recent developments in the cognitive approach, together with the arrival of the computer in the working environment of neuropsychologists and cognition researchers, have made it possible to refine the tools for exploring memory functions in normal and neuropathological populations. The objective of this article is to introduce a computerized memory assessment battery comprising ten tests. These tests are organized around three central concepts: working memory, levels of processing in long-term memory, and explicit and implicit retrieval components. The relevance and usefulness of such a battery for assessing memory functions in certain clinical populations presenting memory disorders is illustrated by a case study.

Neuropsychological assessment has traditionally shared anatomo-clinical aims with neurological expertise: the common objective is to identify the neuropathological sites underlying behavioural symptoms. However, the rapid development of neurological investigation technologies (brain imaging, magnetic resonance, etc.) has appreciably reduced the contribution of neuropsychological assessment to the localization of cerebral lesions. In fact, its current contribution is oriented more toward understanding the cognitive disorders observed, both at the theoretical level (Ellis & Young, 1988) and with a view to the patient's eventual social reintegration (Glisky & Schacter, 1988; Glisky, Schacter & Tulving, 1986; Van der Linden, 1989). This readjustment of the objectives pursued by neuropsychology calls for a corresponding readjustment of the methods used.

Traditional psychometric methods are called into question here. In contrast to these methods, the cognitive approach, inspired by recent information-processing theories, appears more promising for reaching these new objectives. The aim of this article is to present a computerized battery of tests inspired by this latter approach. The battery is intended for neuropsychologists called upon to assess the residual and dysfunctional memory capacities of various neuropathological populations. We will see how the cognitive approach differs from the one traditionally found in clinical settings, both in the design of the tests and in the assessment objectives pursued.

THE PSYCHOMETRIC APPROACH

The classical psychometric approach in neuropsychology rests on the use of large batteries of standardized tests such as the Wechsler Memory Scale. These batteries, widely used in clinical practice, make it possible in principle to situate a patient's performance by comparing it with that of a reference group. This is a first advantage of using these tests. Moreover, from one clinical setting to another, intelligence quotients (Wechsler Adult Intelligence Scale) and memory quotients (Wechsler Memory Scale), for example, provide an initial basis of comparison between case reports.

The availability of norms for most of these psychometric tools does not, however, guarantee their validity. Many of them were designed several decades ago, when sociocultural and demographic variables were very different from those of today. This obsolescence may explain, in particular, the absence of appropriate norms for elderly individuals. Yet it is now accepted that the cognitive profile differs from one age subgroup to another and that, accordingly, it is important to have norms for the elderly population, which accounts for the bulk of the neuropsychological clientele.

The second problematic aspect of psychometric tests is the absence of objective criteria in the selection of materials. …

28 citations


Journal ArticleDOI
TL;DR: The rights of children will be considered, children's legal and developmental competence to consent to treatment will be explored, and the ethical issues associated with treating minors will be addressed.
Abstract: This paper examines children's rights under the Charter, the law of consent, and the ethics associated with the consent to treatment issue. Consistent with the Charter, the common law recognizes the right of competent minors to consent on their own behalf. Decisions regarding competence to consent are made on the basis of cognitive capacity, and not age. In contrast, consent legislation is largely silent on the question of capacity and instead specifies arbitrary ages at which minors may consent. Considerable variation exists across provinces both in the legal age of consent and in the extent to which common law principles are reflected in consent legislation. As a result of the complexity and apparent contradictions of the law, the circumstances under which minors may consent remain unclear in the minds of many practitioners. Equally problematic from the perspective of the psychologist is the fact that much of consent legislation is directed towards treatment in hospitals and/or treatment by physicians and dentists. It is argued that in the absence of relevant consent legislation, psychologists have both a legal and an ethical responsibility to determine their minor clients' capacity to consent. Revisions to the existing Code of Ethics that recognize the potential capacity of minors to consent are discussed.

One of the most difficult legal and ethical issues faced by health professionals is that of the minor who seeks treatment without parental consent. Equally difficult, from both a legal and an ethical perspective, are the issues of treating a child against the child's wishes, or voluntarily committing a child who does not want to be committed. At present Canada has no uniform law of consent; the onus is thus placed upon the provider of services to determine whether a child has the capacity to consent. In the present paper, the rights of children will be considered, children's legal and developmental competence to consent to treatment will be explored, and the ethical issues associated with treating minors will be addressed.

Historically, the courts respected the rights of parents to exercise control over their children's activities, welfare and destiny (Weithorn, 1983). It was assumed that parents were the natural advocates for their children and would, in most instances, act in their best interest (Landau, 1986). Presumed by law to lack the cognitive ability and capacity of adults, children were denied the rights accorded to adults, and instead were afforded special protection by the State. In situations in which parents abused their parental rights, the State was prepared to intervene to supervise and, if necessary, remove a child from his/her parents. Thus, both parents and the State exercised control over children; at no point in this process was the child's right to separate consultation or representation considered.

Early in the seventies the focus of the children's rights movement shifted from an emphasis on protection and nurturance rights to a consideration of self-determination rights (Hart, 1991; Margolin, 1978). Children's rights activists argued that children should be afforded the same constitutional guarantees as adults including, in certain instances, the right to act independently of parental control and/or authority (Hart, 1991; Mulvey, Reppucci & Weithorn, 1984). Considerable constitutionally based litigation followed, with the U.S. Supreme Court ultimately extending constitutional protections to children as individuals, and subsequently recognizing children's rights to treatment and privacy (Hart, 1991; Mulvey et al., 1984).

Children and the Canadian Charter of Rights and Freedoms

Canadian courts have only recently begun to address the issue of children's rights under the Canadian Charter of Rights and Freedoms (1982). Drafted explicitly to protect individuals from unjustified discrimination or unwarranted state intrusion in their lives, the Charter is considered the supreme law of Canada (s. …

22 citations




Journal ArticleDOI
TL;DR: In this article, the authors extend the M and M motif to parametric and nonparametric statistics, particularly with reference to power, robustness, scale of measurement, the null hypothesis, and generality of application.
Abstract: Some Myths Concerning Parametric and Nonparametric Tests by Hunter and May. Hunter and May offer a paper on myths and misconceptions (M and M's) that is an excellent companion article to Brewer (1985), who wrote on M and M's in statistical textbooks. Brewer addressed hypothesis testing, confidence intervals, and sampling distributions and the Central Limit Theorem. Hunter and May extend the M and M motif to parametric and nonparametric statistics, particularly with reference to power, robustness, scale of measurement, the null hypothesis, and generality of application.

In the section on power, Hunter and May point out that when underlying assumptions of the parametric test are violated, nonparametric tests may be more powerful. They call this a "knee-jerk argument" because this fact is usually ignored in selecting tests. In considering alternatives to normal theory statistics, they offer what they consider to be the definitive argument: "... the reason some nonparametric tests are less powerful than parametric tests is not because they are nonparametric tests per se, but because they are rank or nominal-scale tests and therefore are based on less information".

In contradistinction to their reasoning, consider the following analogy: an accomplished opera singer sings, and an off-key beginning tuba player plays, the dots and dashes of the International Morse Code. While some may consider the opera singer's notes to be sounds of music, there is, in fact, no more information in those dots and dashes than in the off-key notes of the beginning tuba player, with respect to the code. If the complexity and subtlety of what is often imagined to be included in interval scales is noise and not signal, parametric tests will have no more information available than a rank test, and will be less efficient by trying to discriminate a signal from noise when in fact there isn't any. This is my interpretation of Hemelrijk (1961): the cost of being robust with respect to both Type I and Type II error under nonnormality precludes the t test from remaining the Uniformly Most Powerful Unbiased test under nonnormality.

In the M and M section on the robustness of parametric tests, they cite Micceri (1989) as evidence of the widespread problem of nonnormality in psychology and education data. Yet, there are many, many Monte Carlo studies that demonstrate that normal theory tests such as the F and t test are robust to departures from normality. These studies used well known mathematical functions (e.g., Cauchy, chi-square, exponential, uniform) to model real data and showed that so long as sample sizes are about equal, sample sizes are at least 20-25 per group, and the tests are two-tailed rather than one-tailed, the t test is robust.

Micceri's (1989) argument, echoed by Hunter and May, was that those mathematical functions are poor models of psychology and education data, and consequently Monte Carlo studies based on them are not convincing. His study pointed out how radical real distributions may be, such as the so-called multi-modal lumpy, extreme bimodal, extreme asymmetric, digit preference, and discrete mass at zero with gap distributions. Nevertheless, a Monte Carlo study by Sawilowsky and Blair (1992) demonstrated, by sampling with replacement from Micceri's data sets, that so long as sample sizes were equal, about 20-25, and tests were two-tailed, the independent and dependent samples t tests were robust by any definition.

The real issue of the effects of nonnormality, as indicated by Sawilowsky and Blair (1992), is on the comparative power, not robustness, of the t test. For example, a Monte Carlo comparison (10,000 repetitions) of the power of the t test and the Wilcoxon test with a sample size of (5,15) drawn from an extreme asymmetric distribution identified by Micceri (1989) indicated that at the .05 alpha level and an effect size of .20 [Greek not transcribed], the power of the Wilcoxon test was …
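The kind of Monte Carlo robustness check this commentary cites can be sketched in a few lines. The sketch below is a simplified illustration under assumed parent distributions, not a reproduction of any cited study.

```python
# A sketch of a robustness check: equal-sized samples from markedly non-normal
# parents under a true null; the two-tailed t-test's Type I error should stay
# near the nominal level when n is about 20-25 per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(123)
n_reps, n, alpha = 10_000, 25, 0.05

samplers = {
    "exponential": lambda: rng.exponential(size=n),
    "uniform":     lambda: rng.uniform(size=n),
    "chi-square":  lambda: rng.chisquare(3, size=n),
}

for name, sampler in samplers.items():
    # Both samples come from the same distribution, so H0 is true.
    rej = sum(stats.ttest_ind(sampler(), sampler()).pvalue < alpha
              for _ in range(n_reps))
    print(f"{name:12s} empirical Type I error: {rej / n_reps:.3f}")
```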

Journal ArticleDOI
TL;DR: In this paper, the authors describe common sources of nonnormality in psychological data, examine the distinction between data cleaning and robust estimation, discuss robust estimation using M-estimators, and make recommendations for using these techniques in practice.
Abstract: Research in statistics has demonstrated that the classical estimates of means, variances and correlations are sensitive to small departures from the normal curve. Statisticians have urged caution in the use of classical statistics and have proposed a variety of alternatives which are robust with respect to departures from normality. Robust statistics continue, however, to be little used in psychological research. In this paper we describe common sources of nonnormality in psychological data and examine the distinction between data cleaning and robust estimation. Robust estimation using M-estimators is discussed, and recommendations for using these techniques in practice are presented.

It is common practice among social and life scientists to adopt an implied continuity principle when interpreting the results of a statistical analysis. It is often assumed, for example, that data which are observed to deviate only slightly in form from the familiar normal curve will only slightly distort the usual estimates of means, standard deviations, correlations and associated hypothesis tests. The greater the departure from an underlying normal model, it is assumed, the greater will be the inaccuracy of the computed statistics.

Over the past several decades, research in statistics has demonstrated that a continuity principle of the form described above for normal theory based statistics is invalid. The classical estimates of means, variances and correlations have been shown to be highly sensitive to even small departures from an underlying normal model. A single outlying observation, for example, can strongly bias these statistics and thereby provide misleading or invalid results (see, for example, Huber, 1981; Hampel, Ronchetti, Rousseeuw, & Stahel, 1986; Zimmerman & Zumbo, 1993). For an example where the presence of a single outlier in a sample of 29 observations results in a change of the correlation coefficient from .99 to 0, see Devlin, Gnanadesikan, and Kettenring (1981).

The sensitivity of classical statistics to small deviations from normality has important implications for the analysis of research data in psychology. The sensitivity of standard estimates of means and variances to nonnormality can adversely affect analysis of variance (ANOVA) results, while in the case of product moment correlations the lack of robustness will often bias results obtained from principal component analysis, common factor analysis and the analysis of covariance structures (i.e., structural equation modelling). Factor analysis results, for example, which initially appear to provide meaningful factors are often, on a closer examination of the data, simply the result of one or two outliers (Huber, 1981, p. 199).

The poor performance of classical statistics in the presence of small departures from normality has led some statisticians (Tukey, 1977, pp. 103-106; Hogg, 1977, pp. 1-17) to warn that the routine use of classical statistics is unsafe. They recommend that classical estimates of means, variances and correlations only be used in conjunction with alternative methods that are robust with respect to departures from normality. Although there is an increasing amount of statistical software which incorporates robust methods, these methods continue, despite some urging by statisticians (Stahel, 1989), to be little used in applied research. In the behavioural sciences, this is likely in part a result of undergraduate methodology courses that often describe the ANOVA as being robust with respect to Type I error and nonnormality (see, for example, Glass & Stanley, 1970, p. 372; Glass, Peckham, & Sanders, 1972). Although ANOVA has some moderate robustness properties with respect to Type I error and nonnormality, it is, in relation to Type II error, very nonrobust (Hampel et al., 1986, p. 344; Zimmerman & Zumbo, 1993). This places a researcher in an unusual situation when interpreting ANOVA results. …
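As a concrete illustration of M-estimation, here is a sketch of the standard Huber location estimate computed by iteratively reweighted averaging (my own sketch, not code from the paper; the tuning constant c = 1.345 is the conventional choice). Outlying observations are progressively down-weighted rather than deleted, which is precisely the distinction the authors draw between robust estimation and data cleaning.

```python
# A minimal Huber M-estimate of location via iteratively reweighted averaging.
import numpy as np

def huber_location(x, c=1.345, tol=1e-6, max_iter=100):
    mu = np.median(x)                            # robust starting value
    scale = np.median(np.abs(x - mu)) / 0.6745   # MAD estimate of scale
    for _ in range(max_iter):
        r = (x - mu) / scale                     # standardized residuals
        w = np.ones_like(r)
        big = np.abs(r) > c
        w[big] = c / np.abs(r[big])              # Huber weights: down-weight outliers
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

data = np.array([4.2, 3.9, 4.5, 4.1, 3.8, 4.0, 15.0])  # one gross outlier
print("mean:", data.mean())                       # pulled toward the outlier
print("Huber M-estimate:", huber_location(data))  # stays near the bulk of the data
```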


Journal ArticleDOI
TL;DR: In this article, the author reviews evidence that four-month-old infants preferentially attend to infant-directed over adult-directed talk when spoken by a female; however, no similar studies using male speakers had been published.
Abstract: This article begins with a brief review of research concerning the possible functional roles of the prosodic aspects of infant-directed speech ("baby-talk"), showing that this style of speaking could have both attentional and affective functions (in addition to linguistic ones). It is then argued that studies of other speech registers could profit from taking similar approaches, and even using similar techniques to those used in infant-directed speech research, to tease apart the linguistic, attentional and affective components of these other speaking styles. Nursing-home register is used to exemplify the importance of examining the social, emotional and linguistic ramifications of a communication style separately.

Studies of the special speech register we use with infants both inform us about psychology in general and raise questions about other communicative behaviours that might not have been asked if it were not for these developmental studies. Since knowing what questions to ask is often the keystone to important research, this is an important contribution developmental psychology can make to psychology. The methods used in infant-directed speech studies also suggest how questions about other styles of verbal communication might best be answered.

Baby Register

One of the more interesting features of interactions with babies is the particular way in which people modify their speech when addressing infants. This type of speech, which is usually referred to as "baby-talk" or "motherese", differs from normal adult-directed talk in many ways (on the order of 100 documented characteristics; Brown, 1986), including having simplified syntax, shorter utterances, more questions, more repetition, and special prosodic features. Since not only mothers but all adults, and even preschool children, have been shown to use this specific style of speech when addressing infants, it is more appropriately referred to as "infant-directed speech" or "infant-directed talk" (Werker & McLeod, 1989).

The characteristic prosodic features of infant-directed talk include higher pitch, exaggerated pitch modulation, elongated vowels, longer pauses and increased rhythmicity: what Darwin (1877) called the "sweet music of the species". These characteristic modifications are found in a wide range of diverse languages such as Japanese, French, Italian, German, Mandarin and English (e.g., Fernald et al., 1989), a fact that has led people to speculate that infant-directed talk prosody may be an important functional aspect of the infant's social environment.

Three categories of functions have been suggested for infant-directed talk prosody. 1) Linguistic: these modifications simplify and highlight relevant linguistic components of speech. 2) Attentional: the modifications may be effective in gaining and maintaining infants' attention. 3) Affective: the modifications may contribute to positive affective interactions between parents and infants. Note that these are not mutually exclusive functions.

INFANT PREFERENCE

To examine the evidence for the latter two possible functions of infant-directed talk, Janet Werker and I did several experiments that suggest ways in which speech registers more generally might be studied. Our first simple experiment was designed to determine whether infants prefer to attend to infant-directed over adult-directed talk when the speaker is a male. Prior to our work, four-month-old infants had been shown to preferentially attend to infant-directed over adult-directed talk when spoken by a female (e.g., Fernald, 1985); however, there were no similar studies published using male speakers.

The stimuli used in this first study were audio-video recordings of one actor and one actress reciting an identical script to both a six-month-old (in infant-directed prosody) and to an adult (in adult-directed prosody). By having an identical script, we controlled for many linguistic differences between conditions, and by using video tapes we ensured that subjects could not influence the speaker and that all subjects within a treatment condition received the same stimuli. …

Journal ArticleDOI
TL;DR: In this article, the authors focus on methods for analyzing complex data, i.e., data that do not conform to the assumptions of independence and homoscedasticity on which many classical procedures are based.
Abstract: This paper focusses on methods for analyzing "complex" data, i.e., data that do not conform to the assumptions of independence and homoscedasticity on which many classical procedures are based. Primary attention will be given to regression analysis, with ANOVA as a special case, though reference to related work on loglinear models and logit analysis will also be made.

Complex survey data typically arise from surveys involving stratification and several levels of unit selection, i.e., several levels of clustering involving, in area surveys for example, city blocks, dwellings within block, and individuals within dwellings. Since individuals within a cluster are likely to be more similar, one to another, than to individuals in different clusters, a simple statistical model based on independent observations is not appropriate. An additional complexity often encountered in large surveys is that the first-level clusters, or primary sampling units (psu's), may be selected from the target population with unequal probability. Complex data also arise in experimental setups, for example when more than one animal from a litter is included in the experiment, or when an experiment includes measurements of both of a subject's eyes (Rosner, 1982) or both of a subject's ears (Coren and Hakstian, 1990).

Major advances have been made over the last decade and a half in understanding the effects on classical statistical analyses of ignoring data complexity. Ignoring clustering can result in inflated Type I errors for test statistics (Scott and Holt, 1982; Rao and Scott, 1981, 1984; Rao and Thomas, 1988; Zumbo and Zimmerman, 1991). Ignoring the survey selection mechanism, i.e., the survey design, can in some cases result in biased estimates of regression parameters (Nathan and Holt, 1980; Holt, Smith and Winter, 1980). Succinct reviews of these issues have been given by Nathan (1988) and by Nathan and Smith (1989). Various methods for analyzing complex data that take account of the complexity have now been developed, several of which are described in detail by Skinner, Holt and Smith (1989). These methods are not yet well known to psychologists and other behavioural researchers, and it is hoped that this paper will encourage these practitioners to familiarize themselves with the new analytic tools that are becoming available.

The paper is organized around three sub-themes. First, the problems associated with using standard methods and software on complex data are discussed; a simple example explaining and illustrating the dangers of ignoring clustering is given in Section 2. The second sub-theme is that much of the work on alternative strategies for complex data analysis is based on an inferential framework (design-based inference) that is fundamentally different from the model-based inference familiar to most psychologists. Sections 3, 4 and 5 of the paper provide an introduction to some aspects of design-based (or finite population) inference, and contrast it with the more familiar model-based approach. Examples are given. The third sub-theme relates to the analysis of complex experimental data. Though model-based inference is by far the most popular approach to analyzing experiments in psychology, the randomization approach is increasingly being advocated as an alternative (see the paper by May in this issue). In Section 6, it will be argued that design-based inference provides a third approach to analyzing some experimental setups involving clustered data. An example involving rat litters is described.

The Effect of Ignoring Sample Structure

This section concentrates on the dangers of ignoring clustering, a common feature of complex survey and experimental data. Table 1 provides a hypothetical data set containing 12 observations of a single character y. The hypothesis to be tested is that the mean μ of the population from which the y values are drawn is equal to two. The second column of Table 1 presents the data with no information about sample structure, in which case the analyst can do little but assume independence and homoscedasticity of the observations and try a one-sample t-test (here we ignore distributional subtleties). …
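The danger is easy to demonstrate by simulation. The sketch below (parameters are mine, loosely mirroring the hypothetical 12-observation example as four clusters of three) generates clustered data under a true null and applies a naive one-sample t-test that ignores the clustering.

```python
# A sketch of how ignoring clustering inflates the Type I error of a naive
# one-sample t-test. Observations share a random cluster effect, so they are
# positively correlated within clusters even though H0 (mean = 2) is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_reps, clusters, per_cluster, alpha = 5_000, 4, 3, 0.05
between_sd = 1.0   # between-cluster spread; drives the intra-cluster correlation

rej = 0
for _ in range(n_reps):
    cluster_effects = rng.normal(0, between_sd, size=clusters)
    y = (2.0 + np.repeat(cluster_effects, per_cluster)
             + rng.normal(0, 1, clusters * per_cluster))   # 12 observations
    rej += stats.ttest_1samp(y, popmean=2.0).pvalue < alpha

print(f"empirical Type I error ignoring clustering: {rej / n_reps:.3f} "
      f"(nominal {alpha})")
```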

Journal ArticleDOI
TL;DR: Although the use of personality measures in personnel selection has not met with substantial success in the past, recent evidence suggests that personality measures are related to performance criteria that are unrelated to cognitive ability when the traits measured are conceptually related to those criteria.
Abstract: P. GREGORY IRVINGThe University of Western OntarioAbstractAlthough the use of personality as a predictor in personnel selection has not met with substantial success in the past, recent evidence has suggested that personality measures are related to performance criteria which are unrelated to cognitive ability when the traits measured are conceptually related to these criteria. It seems that personality measures may predict job performance dimensions which cannot be predicted by cognitive ability measures. The use of personality measures in personnel selection may be warranted when a careful job analysis is undertaken to determine which performance dimensions may be related to personality traits. As early as 1923, Freyd (cited in Guion, 1983) recognized that certain steps must be undertaken to ensure that the personnel selection procedures used by organizations are valid. These steps included conducting a job analysis to determine the characteristics which led to success or failure on the job, designating a single - measure criterion of success, developing an exhaustive list of abilities required for success, finding or devising a measure of these abilities, and statistically comparing the test scores with the criterion scores. Freyd's steps continue to represent sound personnel selection practises in major Industrial/Organizational psychology textbooks (e.g., Cascio, 1987; Landy, 1989).Despite the fact that Freyd outlined these steps some 70 years ago, researchers investigating the relationship between personality and job performance have tended to ignore his advice. Early attempts to use personality traits to predict various job criteria have generally used a shotgun approach in which a large number of scales were correlated with a large number of criteria. Such an approach has been used to predict accident rates of truck drivers (Parker, 1953) and job satisfaction of farmers (Brayfield & Marsh, 1957). The predictor in both studies was the clinical scales of the Minnesota Multiphasic Personality Inventory (MMPI). The problem with this empirical approach to predicting job - related criteria is that one may expect at least some relationships to be significant by chance alone and attempts at cross - validation would likely result in substantial shrinkage in the validity coefficients. The assessment of personality has been a controversial topic in personnel selection. Over the past several decades, a number of literature reviews have been conducted resulting in conflicting viewpoints regarding the predictability of job performance based on personality traits (Ghiselli & Barthol, 1953; Guion & Gottier, 1965; Schmitt, Gooding, Noe & Kirsch, 1984; Tett, Jackson & Rothstein, 1991). Although Guion and Gottier (1965) stated that "there is no generalizable evidence that personality measures can be recommended as good or practical tools for employee selection" (p. 159), they observed the importance of predicting job criteria which are unrelated to cognitive ability. Without exception, these reviews have cautioned against the shotgun approach to prediction which has plagued previous attempts to validate personality measures in personnel selection. Hollenbeck and Whitener (1988) have suggested that one of the reasons for the poor predictive ability of personality variables in previous studies is that many of the validation studies lacked statistical power (see Schmidt, Hunter, & Urry, 1976, for a discussion of the lack of power in validation studies). 
As evidence, they point to the fact that in Guion and Gottier's (1965) review, 62 of 100 validation studies had sample sizes of less than 84.Personality assessments have long been used as a part of the selection process for several professions including police officers (Burbeck & Furnham, 1985; Inwald & Shusman, 1984), flight attendants (Ferris, Bergin & Gilmore, 1986), and firefighters (Johnson, 1983). Much of the recent research involving personality measures in personnel selection, however, has employed such measures in screening for psychological problems (e. …
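The chance-capitalization problem that plagues the shotgun approach is easy to demonstrate with a short simulation, in the spirit of the experimental mathematics discussed elsewhere in this issue. The sketch below is purely illustrative: the numbers of scales and criteria, the sample size of 84 (echoing the Guion and Gottier observation), and all variable names are hypothetical, and no true scale-criterion relationships are built into the data.

```python
import numpy as np
from scipy import stats

# Hypothetical shotgun validation: 10 personality scales correlated with
# 10 job criteria for n = 84 applicants. No true relationships exist in
# these data, so every "significant" validity is a chance finding.
rng = np.random.default_rng(1)
n, n_scales, n_criteria = 84, 10, 10

scales = rng.normal(size=(n, n_scales))
criteria = rng.normal(size=(n, n_criteria))

false_hits = 0
for i in range(n_scales):
    for j in range(n_criteria):
        r, p = stats.pearsonr(scales[:, i], criteria[:, j])
        if p < 0.05:
            false_hits += 1

# With 100 tests at alpha = .05, about 5 spurious "validities" are expected.
print(f"Spurious significant validities: {false_hits} of {n_scales * n_criteria}")
```

Validities found this way would be expected to shrink toward zero on cross-validation, which is precisely the pattern the reviews cited above caution against.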

Journal ArticleDOI
TL;DR: For instance, this paper found that adults consistently underestimate the distance between dots within Gestalt groups and overestimate distances between dots that fall in different groups, and that these distortions are larger for younger subjects, who are less able to ignore the configurational aspects of the Gestalt displays.
Abstract: This paper stresses the importance of developmental research in evaluating theories of visual attention. Although researchers have long used theories developed with the help of college - age subjects to better understand children, there has been a reluctance to use developmental data to better understand theories of attention. I try to show that developmental studies can and do provide a unique vantage point from which to assess these theories. Three research steps are proposed: (1) theoretically - important differences and similarities between age groups are established, (2) theoretical constructs are mapped to these age differences/similarities, (3) data are collected to examine the relation between the constructs and age. Several examples of the use of this strategy are summarized. The variable of age is shown to play a role in testing theories of Gestalt grouping, perceptual organization, spatial orienting, and attentional filtering.How can research in the development of attention inform our understanding of the psychology of attention? I welcome the opportunity to discuss this question explicitly in a paper, largely because this is the way that I have often implicitly framed my research questions. I hasten to add, however, that I admit to this with some trepidation since I have not always felt encouraged to ask this question.I was trained in a graduate program that did not have a separate division called "Developmental." Instead, studies of the development of various processes and functions were carried out by professors who gathered together under such umbrellas as "Perception and Cognition," "Neuroscience," "Social Psychology," and "Learning." Right or wrong, the implicit assumption was that development was not a topic to pursue in its own right. Rather, one always studied the development of something. In my case, this was the development of perception and attention in humans. The first piece of research I could call my own in graduate school was a developmental exploration of a perception phenomenon that my advisor, Joan Girgus, had recently published with Stanley Coren (Coren & Girgus, 1980). They had shown that there are reliable distortions in the perceived distance associated with the traditional displays of Gestalt grouping. Adult subjects consistently underestimated the distance between dots within Gestalt groups and overestimated distances between dots that fall in different Gestalt groups.My project involved testing for the presence of these distortions in a total of 100 subjects - 20 each in five age groups between the ages of 5 and 24 years. The results were very clear. Although all age groups were equally accurate in estimating the distances between dots in control figures, the subjective distortions of distances in the Gestalt displays were much larger for the younger subjects.Now the traditional developmental approach to these data would be to consider their implications for theories of normative perceptual development. For instance, one implication is that human observers are better able to attend selectively to the task - relevant dots with increasing age, and conversely, to successfully ignore the configurational aspects of the Gestalt displays. I don't want to belittle this aspect of the data. I believe it is necessary and important to outline the normal course of perceptual development with these sorts of tasks. 
However, the particular angle on these data that interested me was their potential for shedding light on a long-standing debate within the mainstream of perception. What causes perceptual grouping per se? Accounts of perceptual grouping span a full range of possibilities, from those that rely on sensory or "hard-wired" mechanisms such as spatial filtering (Ginsburg, 1978; Uttal, 1975), to those that attribute grouping to preattentive mechanisms in the early stages of visual processing (Julesz, 1975; Kahneman, 1973; Neisser, 1967), to those that appeal to "intelligent" or "constructive" mechanisms in later stages of processing (Gregory, 1978; Hochberg, 1982). …

Journal ArticleDOI
TL;DR: In this article, the author examines the specific relevance of criticisms of null hypothesis testing for research on sex differences, and concludes that descriptions of the results of sex comparisons must reflect the data more accurately than they now do.
Abstract: The application of null hypothesis testing to psychological research has been much criticized (e.g., Bakan, 1966; Gigerenzer & Murray, 1987). I examine the specific relevance of these and other criticisms for research on sex differences. Four specific problems are identified: (1) drawing inferences about general properties that are attributed to all members of a population; (2) the distinction between the size of p and the size and theoretical importance of a difference; (3) the frequently unjustified assumption of normality; and (4) the semantic problems inherent in the language of interpretation. A few solutions are explored, and it is concluded that descriptions of the results of sex comparisons, as well as others, must reflect the data more accurately than they now do. A number of years ago, in an invited address that I gave at the annual meeting of the Canadian Psychological Association, I criticized the way in which standard null hypothesis statistics are used in research on sex differences. I argued that in just about every known comparison of the behaviour of females and males there is generally a substantial amount of overlap between the distributions of the dependent variable for the members of each sex (if the behaviour in question is physically possible for both). This is true even if a statistical test has led to the inference that the difference is "significant" (p < .01, p < .001, and so on). Notwithstanding such extreme cases of overlap, the mere fact that any overlap occurs at all, I reasoned, renders logically false the descriptive statements that are made when differences are said to be "significant". For example, to cite a currently contentious issue, suppose it is found that the average score obtained by boys on a math test is "significantly" higher than that obtained by girls. Since these scores invariably overlap (see, e.g., Benbow, 1988, p. 219; Hyde et al., 1990), it is false to translate the statistical result into an affirmation which states that boys are better at math than are girls. I should perhaps add that this logic also applies when girls obtain higher average scores than boys, a result which has been observed in a larger variety of situations than seems to be commonly known (Kimball, 1989). During the question period, I was asked how it is possible to make a special case that null hypothesis testing is inappropriate for sex differences, when these tests were in general use for most other kinds of psychological data. My questioner clearly did not mean to imply that significance testing should be discarded altogether (or so I understood him) but rather, since these types of statistical procedures are the standard tools of contemporary experimental psychology, and therefore must be valid, then it could hardly be legitimate to make a special case for their irrelevance to sex differences. To my embarrassment, I was unable at that time to give a satisfactory reply, neither to the questioner nor to myself. On the one hand, I was convinced that my reasoning was correct as far as sex differences are concerned; yet, on the other hand, I was still so attached to significance tests as a general tool for psychological research that I was not yet ready to think that their application could be more generally problematical. Over the intervening years I have discovered a rather substantial literature that is critical of the ways in which null hypothesis testing has been integrated into the entire psychological research process. …
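The overlap argument is easy to make concrete. The following minimal sketch uses simulated, hypothetical data (the group labels merely echo the math-test example above): with large samples, a small mean difference yields a comfortably "significant" t-test even though the two distributions overlap almost completely.

```python
import numpy as np
from scipy import stats

# Two groups whose true means differ by d = 0.2 standard deviations
# (a small effect). The data are simulated and purely illustrative.
rng = np.random.default_rng(7)
d = 0.2
boys = rng.normal(loc=d, scale=1.0, size=2000)
girls = rng.normal(loc=0.0, scale=1.0, size=2000)

t, p = stats.ttest_ind(boys, girls)

# For two equal-variance normal distributions whose means differ by d
# standard deviations, the overlapping proportion is 2 * Phi(-|d| / 2).
overlap = 2 * stats.norm.cdf(-abs(d) / 2)

print(f"t = {t:.2f}, p = {p:.4f}")                # p is typically well below .01
print(f"distributional overlap = {overlap:.0%}")  # about 92%
```

A p-value near zero alongside roughly 92% overlap illustrates why translating "significant" into "boys are better at math than girls" misdescribes the data.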

Journal ArticleDOI
TL;DR: It will be argued that ensuring sound consent practices is both a legal and an ethical obligation; moreover, to the extent that consent is based on regard for autonomy, observing proper informed consent is consistent with fundamental goals of offering psychological services.
Abstract: The professional literature on informed consent has been critically reviewed and its implications for clinical psychology practice are discussed. The legal and ethical rights of patients and obligations of psychologists are detailed. Specific examples of possible problem areas in professional practice are highlighted, and practical recommendations are suggested for guiding the practitioner through issues on which legal doctrine is sometimes vague, ambiguous, or yet to be established.Courts have traditionally "deferred to the presumed expertise of the professionals" when complained against on issues of patient rights (Bloom & Asher, 1982, p. 19). Recent years, however, have brought increases in litigation, court decisions and legislative statutes bearing on the rights of psychiatric patients. These changes have provided the impetus behind a proliferation of mental health and legal professional literature dedicated to patient rights, which has, in turn, led to the re - evaluation of patient care policies in mental health facilities (Bloom & Asher, 1982). One of the most important patient rights is that of informed consent (Ludlam, 1978). It must be noted that the law of informed consent is inherently medically oriented; explicit reference to the duties and rights of physicians and patients and to medical procedures is typical in most legislation and case law. This orientation derives from the prevalence of medical litigation in the case law defining the requirements of informed consent. Concomitantly, much of the professional literature discusses consent in a medical context.Despite its emphasis on medicine, however, the legalities of informed consent are equally applicable to mental health care. This discussion will focus specifically on the legal doctrine of consent to treatment. Its implications for mental health professions, generally, and clinical psychology practice, specifically, will be highlighted. It will be argued that ensuring sound consent practices is both a legal and ethical obligation. Moreover, to the extent that consent is based on regard for autonomy, observing proper informed consent is consistent with fundamental goals for offering psychological services.Informed ConsentThe definition of informed consent is a highly complex issue involving law, ethics, and morality (Ludlam, 1978); it is also a source of continuing controversy in the professional literature. Informed consent has been described as a myth, a fiction, and an unattainable goal that has become a legal requirement (Sprung & Winick, 1989). Less cynically, it has been typically construed as a process ideally involving the mutual participation of both the professional and the client in a shared decision making process regarding treatment (President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, 1982; cited in Sprung & Winick, 1989).The philosophical or moral basis of informed consent relies on an assumption of a fundamental right of autonomy, or self - determination (Arboleda - Florez, 1988). The legal right of autonomy, however, is not absolute, and each society has an obligation to set appropriate restrictions that balance the rights of the individual with the rights of society (Eth & Robb, 1986; Rozovsky & Rozovsky, 1990). In regards to the consent issue, there are five typical exceptions to the rule requiring that informed consent be obtained from the patient prior to treatment. 
They include: 1) medical emergencies, where there is immediate danger to life and the delay that would be necessary to obtain consent might be harmful; 2) incompetency, where the patient is unable to give a legally valid consent; 3) therapeutic privilege, where there is discretion to withhold information which might have a detrimental effect on patient health if disclosed; 4) waiver, where the patient can waive his or her right to be informed, to make the treatment decision, or both; and 5) mentally ill and dangerous, where mentally ill patients who are imminently in danger of harming themselves or others may be involuntarily committed and treated (Faden & Beauchamp, 1986; Sprung & Winick, 1989; Alberta Mental Health Act, 1990). …

Journal ArticleDOI
TL;DR: In this article, an alternative system of graduate training is proposed which attempts to integrate training and broaden its relevance while reducing the time it will likely take each student to complete their program; the proposal is motivated by the observation that far too many doctoral students fail to pursue research after graduation.
Abstract: Consideration is given to the present form of graduate training and the presumed goals of such training are noted. This review led to the conclusion that the present form that research training takes follows archaic traditions which in fact represent an obstacle to effective training. In particular, the master's thesis, the doctoral dissertation, and the comprehensive examinations are seen as the principal stumbling blocks which together take the joy out of research, present discontinuities in training, and seriously delay student progress. An alternative system is proposed which attempts to integrate training and broaden its relevance while reducing the likely time it will take for each student to complete their program. I was prompted to write this paper by my observation that far too many doctoral students fail to pursue research after graduation. This is true for most clinical students but also for quite a number of nonclinical students. Since so much of our graduate training, and especially the efforts of supervisors, is focussed on teaching students to be researchers, such an outcome is extremely disappointing and calls for a reconsideration of our approach to training graduate students. The main thrust of my concern here is the failure of our training programs to instill in our students appropriate attitudes toward research, as well as the necessary skills, that might foster a post-graduate research career. The doctoral dissertation is the main mechanism by which we attempt to train students to do, and enjoy, research; it is my view that the traditional requirements for the dissertation are outmoded and actually present an obstacle to effective training, particularly in instilling a passion for research. However, the doctoral dissertation is but one component (albeit a large component in terms of time investment) of graduate training, and we must consider the whole process if we are to revamp our programmes. Possibly we should reconsider the goals we have in mind for graduate training, but I do not find that to be a sufficiently attractive alternative. I believe that the scientist-practitioner model is the most appropriate yet articulated for applied students, and this requires that they be trained in research as an integral part of their applied work. Non-applied students presumably share the same goals as their teachers since, in choosing to enter graduate training which does not lead to applied work, they have declared their desire to train as researchers. The goals, then, of graduate training appear to be to inform students of current relevant knowledge, to teach them to apply such knowledge, and to train them to be researchers. The latter goal, you will note, is to train them to be researchers; not just to be able to do research, but to continue to do so after graduation. Others may disagree with me about the goals of graduate training, but for the rest of this paper I will assume that these are our goals. First I will describe current graduate training and point to the confusion that seems to exist in terms of the purpose of its various components, to its failure to achieve the goals articulated above, and to the financial burden to universities that current training seems to create. Indeed, my main aim here is to point to the failings of the present system, a system that in much of its form has been with us since psychology became an academic discipline.
It would be surprising if a 19th century educational programme was suited to the needs of the late 20th century. In fact, what is not surprising (but should be) is that universities, which typically see themselves at the forefront of knowledge and as leading society in constructive directions, should so stubbornly hold to an antiquated training system. We are faced with a system that presently does not achieve its goals at all well; we must set aside our traditions and examine other ways to achieve our aims. …

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the scientific value of a two-tier model intended to account for psychological distress in organizational settings.
Abstract: The objective of the present research is to examine the scientific value of a two-tier model that would account for psychological distress in organizational settings. In this model, workers' degree of distress is influenced first by the perception of three specific work stressors: quantitative work overload, role conflict, and role ambiguity. In turn, the perception of these stressors is determined by the presence of individual variables (i.e., locus of control and the Type A behaviour pattern), interpersonal variables (i.e., social support received), and organizational variables (i.e., the worker's decisional latitude). The sample consists of 636 subjects drawn from four service-sector organizations. The results of multiple regression analyses show that perceptions of overload are related to the Type A behaviour pattern and to support from the immediate superior. Perceived role ambiguity is associated with support from the superior and with certain sociodemographic variables. Perceived role conflict is likewise associated with support from the immediate superior and with the Type A behaviour pattern, but also with decisional latitude. Finally, the degree of psychological distress is a function of perceptions of overload and role ambiguity, support from the superior, biographical events, and education. The theoretical implications of these results are discussed. Over the past fifteen years, the study of the determinants of psychological distress in the workplace has attracted the attention of researchers in organizational psychology (Kahn, Wolf, Quinn, Snoek, & Rosenthal, 1964), in management (Ivancevich, Matteson, & Preston, 1982), and in sociology (Pearlin & Schooler, 1978). The analysis of organizational productivity, the examination of the functions of work in the construction of a sense of personal identity, and the search for a balance between individual competencies and the needs of society are themes of interest that lead necessarily to the study of psychological distress tied to the sphere of work. In this context, psychological distress is no doubt the symptom of an individual, organizational, or social dysfunction whose vicissitudes must be understood in order to prevent its consequences: absenteeism, alienation, reduced performance, unemployment, deterioration of psychosocial functioning, and so on. To date, research has focussed mainly on individual, interpersonal, or organizational predictors of psychological distress, and very rarely on the determinants of the perception of work stressors. The fragmentary character of this research must nevertheless be underscored (Van Sell, Brief, & Schuler, 1981); studies generally address only one of these aspects (e.g., Brief, Rude, & Rabinowitz, 1983; Dignam & West, 1988) or a limited combination of the factors involved (e.g., Frew & Bruning, 1987). The objective and originality of the present research is to examine the scientific value of a multidimensional two-tier model that would account both for the perception of work stressors and for the psychological distress associated with work.
In this model, workers' degree of distress is influenced first by the perception of specific work stressors: work overload, role ambiguity, and role conflict. In turn, we believe that the perception of these stressors is itself determined by the presence of individual variables (i.e., locus of control and the Type A behaviour pattern), interpersonal variables (i.e., social support received), and organizational variables (i. …
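A two-tier model of this kind can be expressed as two regression stages. The sketch below is a minimal illustration with simulated data: the coefficients, variable names, and error structure are invented for the example (they are not the study's estimates), and only the sample size of 636 is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 636  # sample size reported in the abstract

# Antecedent variables (all hypothetical, standardized for simplicity).
type_a = rng.normal(size=n)    # Type A behaviour pattern
support = rng.normal(size=n)   # support from the immediate superior
latitude = rng.normal(size=n)  # decisional latitude

# Tier 1: perceived overload as a function of the antecedents
# (illustrative coefficients only).
overload = 0.4 * type_a - 0.3 * support + rng.normal(size=n)

# Tier 2: psychological distress as a function of the perceived stressor
# and of support (again, illustrative coefficients).
distress = 0.5 * overload - 0.2 * support + rng.normal(size=n)

def ols(y, *predictors):
    """Ordinary least squares; returns slopes followed by the intercept."""
    X = np.column_stack(predictors + (np.ones_like(y),))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print("Tier 1 (overload ~ type_a + support + latitude):",
      np.round(ols(overload, type_a, support, latitude), 2))
print("Tier 2 (distress ~ overload + support):",
      np.round(ols(distress, overload, support), 2))
```

Estimating the two tiers separately, as here, mirrors the sequential logic of the model: antecedents shape stressor perceptions, and those perceptions in turn shape distress.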


Journal ArticleDOI
TL;DR: Challenges to the underlying logic of the Piagetian position have been raised for the past 60 years (e.g., Siegel, 1978, 1982a; Siegel & Hodkin, 1982), and Macnamara and Austin's critique largely repeats these earlier challenges, as discussed by the authors.
Abstract: There is a great deal of validity in the position outlined by Macnamara and Austin; however, their ideas are part of an historical tradition. For the past 60 years, there have been a number of individuals who have noted major inconsistencies in Piagetian theory and/or significant problems with the methodology of the studies conducted within this framework. We welcome the addition of another voice to this chorus. "We must know history in order not to be condemned to repeat it." - George Santayana. "The only thing new in the world is the history you don't know." - Harry Truman. It was with a combination of surprise, delight, frustration, and anger that I read "Physics and Plasticine" by Macnamara and Austin. As these are words that one does not typically find in a scientific journal, I shall explain why an article would engender such strong emotions. While I agree with the basic premises and conclusions of Macnamara and Austin (hence the delight), I feel that this article represents exceptionally poor scholarship (hence the surprise and the anger). In various ways, most of what Macnamara and Austin have written has been noted before, and they seem to have ignored the challenges to the Piagetian system that have been made over the past 60 years and reviewed, for example, in Brainerd (1978a,b), Donovan and McIntyre (1990), Siegel (1978), and Siegel and Hodkin (1982), among others (hence the frustration). These challenges to Piagetian ideas over the past 60 years (see Siegel & Hodkin, 1982, for a detailed review) start, as far as I have been able to determine, with the famous anthropologist Margaret Mead (1932), who was critical of Piaget's concept of animism and what she perceived as Piaget erroneously attributing animistic thinking to the young child. Macnamara and Austin claim that "There is opposition [to Piagetian theory] but it is mainly addressed to matters of experimental design and of problems relating to the transition from one stage to the next." Macnamara and Austin are incorrect. The opposition to the Piagetian approach was based both on the presence of logical fallacies within the Piagetian position and on criticisms of the methodology. While it is true that we criticized the methodology used by Piaget and his supporters, we also demonstrated that there were problems with the underlying assumptions of this theory. We showed that some of the premises of the theory were fundamentally flawed and that, because of logical errors in their reasoning, the conclusions reached by the Piagetians were simply wrong. I will address some of these issues in this article. Faulty Logic: Language vs. Thought. There has been criticism of the underlying logic of the Piagetian position (e.g., Siegel, 1978, 1982a). The Piagetian position is that cognitive operations emerge and develop independently of language. But the Piagetians rely on the child's verbal justifications and explanations to infer the existence of a particular cognitive structure. This use of language to measure thought is one of the fundamental problems with the theory, because language is required to measure the cognitive operations that are supposed to exist without language. It is a paradox to rely on language to infer the existence or the nonexistence of a particular cognitive structure.
In addition, there is ample evidence that preschool children, and in some cases even older children, do not understand the terminology of the questions used in Piagetian tasks or misunderstand or misinterpret the meaning of the question (e.g., Baron, Lawson, & Siegel, 1975; Lawson, Baron, & Siegel, 1974; Siegel, 1971a,b; Siegel 1977; Siegel, 1982a; Siegel & Goldstein, 1969). Therefore, "if language is necessary to the measurement of a cognitive operation, the absence of such an operation cannot logically be inferred in a child ... whose language production and comprehension is immature and inadequate for the task. For such a child, the existence of a preoperational stage becomes at best, indeterminate". …

Journal ArticleDOI
TL;DR: In this paper, a literature review is situated within this newer current of research and aims to document the qualitative aspects of the roles of mother, wife, and worker that may affect women's health and well-being.
Abstract: The increase in the number of women in the labour force over the past two decades has raised many concerns and much speculation in both the popular press and the scientific community. Initially studying only the effect of the number of roles occupied, research has since evolved to concentrate more on the conditions that enhance or harm women's physical health and emotional well-being. This literature review is part of this newer current and aims to document the qualitative aspects of the roles of mother, wife, and worker likely to affect women's health and well-being. Despite the great diversity of the characteristics studied and the results obtained, two recurrent themes emerge regardless of the role studied: 1) the importance of a sense of control over one's life, and 2) the importance of emotional support for women's emotional and physical well-being. Finally, the authors propose the construction of a unified theoretical model that would go beyond individual roles to permit an evaluation of a woman's situation in its totality, in terms of potential effects on her physical and mental health. The increase in the number of women in the labour force over the past two decades counts among the major social changes in Canada (Marshall, 1989). Today, even the presence of young children no longer curbs women's active participation. Drawing on Canadian statistics, Parliament (1990) shows that the overall participation rate of mothers of preschool-aged children jumped from 43% in 1979 to 62% in 1989. However, despite their massive entry into the labour force, mothers continue to assume the greater part of the tasks related to housekeeping and children. According to Hochschild and Machung's (1989) estimate, employed mothers work in total (including domestic responsibilities and outside employment) one month more per year than their spouses. This paradox has raised many concerns and much speculation in both the popular press and the scientific community (Sorensen & Verbrugge, 1987). Several researchers have examined the effect on women's physical and mental health of adding the worker role to the more traditional roles of mother and wife. Two main hypotheses have guided them. The scarcity hypothesis stipulates that every human being possesses a limited quantity of time and energy and that each role an individual occupies uses a part of this energy (Coser, 1974; Goode, 1960). The more roles a woman occupies, the more demands she has to meet and the more she is exposed to overload. This multiplicity of roles also opens the door to conflicts between roles (e.g., mother versus worker). In sum, this hypothesis posits that simultaneously occupying the roles of mother, wife, and worker leads to physical and emotional exhaustion. In contrast to this negative vision, a second, more recent theoretical formulation underscores the benefits associated with role multiplicity. The enhancement hypothesis proposes that this accumulation of roles leads to personal growth and development (Marks, 1977; Sieber, 1974). According to this theory, roles provide a mechanism, reinforced by monetary rewards (e.g., a salary)
and non-monetary rewards (e.g., prestige), put forward by society to encourage participation (Hirsch & Rapkin, 1986). From a source of conflict and stress, the worker role thus becomes a source of advantages and benefits of all kinds (Helson, Elliott, & Leigh, 1990). These two hypotheses have given rise to several scientific studies. On balance, the enhancement hypothesis has gathered more empirical support. Indeed, the great majority of these studies conclude that accumulating several roles per se (Barnett & Baruch, 1985; Pietromonaco, Manis, & Frohardt-Lane, 1986; Thoits, 1983; 1986; Verbrugge, 1983; Verbrugge & Madans, 1985; Waldron & Jacobs, 1989) does not entail negative effects on women's health and well-being. …

Journal ArticleDOI
TL;DR: The issue of confidentiality as it pertains to mandatory child abuse reporting, the duty to protect, informed consent, and third party and client access to records is addressed.
Abstract: This paper focusses on the ethical and legal aspects of confidentiality for Canadian psychologists, with particular emphasis on clinical psychology. The concepts of confidentiality, privileged communication, and privacy are clarified. The law of privileged communication in Canada is presented. Ethical standards, provincial and federal legislation, and case law bearing on confidentiality in clinical practice are discussed. Issues of mandatory child abuse reporting, the duty to protect, informed consent, and third party and client access to records are explored. Suggestions are made to the psychologist regarding the management of confidentiality. The helping relationship is the most fragile of all professional relationships. More than any other, it requires the client to disclose intimate personal information. The confidentiality of the relationship has long been regarded as its cornerstone (Keith-Spiegel & Koocher, 1985). The area of confidentiality is complicated, however, by misunderstandings about the commonly used terms confidentiality, privacy, and privilege (Keith-Spiegel & Koocher, 1985). Further, the ethical and legal requirements surrounding the topic of confidentiality have rarely been clarified, and never in the Canadian context. The purpose of this paper is to explore the ethical and legal aspects of confidentiality for Canadian psychologists, with particular emphasis on clinical psychology. Space limitations preclude covering all of the relevant issues. It is not possible, for example, to cover the literature relating to research, the educational system, or the prison system. Ethical standards related to confidentiality in clinical practice will be reviewed. The issue of confidentiality as it pertains to mandatory child abuse reporting, the duty to protect, informed consent, third party access, and client access to records will be addressed. The issue of privileged communication in Canada will be reviewed. In addition, provincial and federal legislation, and case law bearing on the practice of psychology, will be considered. Privacy, Privilege, and Confidentiality. Privacy is a basic human right accorded to all Canadian citizens. It reflects the right of an individual to control how much of his or her thoughts, feelings, or other personal information can be shared with others (Keith-Spiegel & Koocher, 1985). Section 7 of the Canadian Charter of Rights and Freedoms (1982) guarantees that "Everyone has the right to life, liberty, and security of the person and the right not to be deprived thereof except in accordance with the principles of fundamental justice" (p. 260). Further, British Columbia, Saskatchewan, Manitoba, and Quebec have all adopted privacy legislation: the Privacy Act of British Columbia (1979), the Privacy Act of Saskatchewan (1978), the Privacy Act of Manitoba (1987), and the Quebec Charter of Human Rights and Freedoms (1977). Confidentiality and privilege developed from the individual's right to privacy. Privilege is a "legal term that describes the quality of certain specific types of relationships that prevent information, acquired in such relationships, from being disclosed in court or other legal proceedings" (Keith-Spiegel & Koocher, 1985, p. 58). Historically, privileged communication was applied to the solicitor-client relationship, and grew from the belief that discussions between a client and solicitor had to be protected in an adversarial justice system if justice was to be served (Picard, 1984).
In those circumstances where a psychologist has been accorded privileged communication in court, he or she is required to retain the client's confidence, unless the client has waived the right to privilege, or has made mental status an element in legal proceedings. Confidentiality implies "an explicit contract or promise not to reveal anything about a client, except under circumstances agreed to by both source and subject" (Keith - Spiegel & Koocher, 1985, p. …

Journal ArticleDOI
TL;DR: In this paper, a qualitatively oriented project was developed to train students to understand the description and interpretation of social phenomena in the real world, where participants were instructed to define a specific problem, select and interpret relevant examples of social episodes, and then compare their own ideas with concepts and findings in the literature.
Abstract: Current teaching methods in empirical psychology favour an uncritical learning of the "literature" over the direct observation of social events. To redress this imbalance, a qualitatively oriented project was developed to sensitize students to the description and interpretation of social phenomena. Students were instructed to define a specific problem, select and interpret relevant examples of social episodes, and then compare their own ideas with concepts and findings in the "literature." The study reported here examined the students' experiences of success, pleasure, and interest after undertaking this assignment. Regression analysis indicated that the assignment was found more interesting by students whose guiding motive for studying psychology was to gain wisdom. Perceived meaningfulness and personal relevance of the project also played a role. The guiding motive of a search for wisdom also shaped pleasure, as did the student's relative comfort with method. Subjective judgements of success were primarily affected by relative ease at articulating one's thoughts. Grades were accurately predicted by the guiding motive of a search for wisdom, difficulty articulating thoughts, and comfort with method. How can we best train students to understand the social world? By social world I mean the "lived-world" (Giorgi, 1970), filled with events which can be described in natural everyday language. It is a world prior to scientific analysis. Whether we speak of jealousy, sympathy, or divorce, these phenomena do not need scientific psychology in order to exist. Observing the lived-world can yield a rich resource of natural data which, when examined in a disciplined manner, affords an understanding of the dynamics of thought, action, feeling, interaction, etc. It is therefore important that students develop skills of observation and interpretation. They should also become aware of their frames of reference and learn to discount personal biases, beliefs, and expectations. While these skills are developed during clinical training, mainstream empirical psychology does not actively encourage their use. Instead, a different kind of discourse is emphasized, one which proposes general principles to explain and predict social phenomena. This discourse, in the form of the psychological "literature", sets problems, specifies research procedures, and provides theories to integrate the resulting findings (Kuhn, 1970). The "significant effects" that result from the use of specific experimental paradigms are highly attractive because they are the main vehicle of publication. Students may simply adopt operations which "work" (i.e., produce reliable effects), uncritically accepting the underlying assumptions and supporting theory. Much can be learned from the area of ethology, which used the careful observation of animal behaviour as a basis for developing theories regarding the animal social world. Descriptive observation should precede the formalization of laws to account for events which occur in the lived-world. I am advocating a balance between the direct observation of naturally occurring events and the critical examination of relevant literatures. One approach that unites observation with critical evaluation is described in this paper. A Harvest of Social Phenomena. The observers, undergraduate students in an advanced social psychology course, were instructed to adopt a sequence of steps in examining a social phenomenon of their choice.
First, they individually (in a private consultation) identified a specific social phenomenon or problem for in - depth examination. Simply encouraging students to go out and observe the world produced a remarkable harvest of problems and social phenomena. Sample topics were: jealousy, vicarious embarrassment, drug abuse, shyness, flirtation, alcoholism in the family, immigrant experiences, aggression in sports, the world of the derelict, problems associated with revealing one's homosexuality, and racial discrimination. …

Journal ArticleDOI
TL;DR: In this paper, the authors review the literature on the informal group, one of the informal structures that emerge spontaneously in organizations parallel to the formal structure.
Abstract: In every organization, a certain number of interpersonal relationships emerge spontaneously without having been planned by the legitimate authority (Zaremba, 1988). The ties that members of the organization establish among themselves on the basis of feelings and personal interests, whether in conformity with, neutral toward, or opposed to the objectives of the dominant coalition in the organization or its units, form an informal network of exchanges, a structure parallel to the official structure of the organization (Scott, 1981). According to Starapoli (1975), the informal organization is like an iceberg, often invisible from the outside but usually more powerful than the formal organization. It is a force that can go so far as to overturn official goals, or that can make the organization more effective despite poor management (Baker, 1981). Already identified in the era of Julius Caesar, as attested by the use of the terms de jure as opposed to de facto, the formal is what is intended and planned, whereas the informal emerges spontaneously (Jacques, 1979a; Tichy & Fombrun, 1979). In an organization, the informal corresponds to "interpersonal relationships that are not mandated under the rules of the formal organization, but that emerge spontaneously so as to satisfy the needs of individuals" (Farris, 1979). Whereas the formal organization answers to a certain logic of cost and efficiency, the informal organization answers to the logic of sentiments and human needs (Roethlisberger & Dickson, 1939/1967). The present article deals with one of the informal structures of the organization, namely the informal group. There is some confusion about the meaning of the term informal when applied to the group: informal may refer to "relative independence from the formal structure of the organization" or to "the relative absence of formal internal structure" (Stevenson, Pearce & Porter, 1985). In this research, only groups whose status remains informal are considered, that is, those in which the relations between members are not prescribed by the organization, whatever their degree of internal structuring and formalization. A number of studies (Baker, 1981; Muti, 1968; Polsky, 1978; Roethlisberger & Dickson, 1939/1967; Tichy, 1973; Wilson, 1978) have already specifically addressed the subject of the informal group. However, most of the writings remain scattered, without evident links among them. This fragmentation of information stems in part from the multidisciplinary origin of the research and reflection involved: social psychology, industrial psychology, sociology, anthropology, management, and communications. In a historical analysis of the question, Tichy (1981) observes that the different research currents have evolved separately, without true integration. Moreover, there is little empirical research in the field (Tichy, 1973; Polsky, 1978; Stevenson, Pearce & Porter, 1985; Farris, 1979), and studies conducted within organizations are exceedingly rare (Cobb, 1986a). The objective of the present paper is therefore to take stock of the literature dealing, directly or indirectly, with the informal group.
More specifically, this review aims to identify how the scientific literature conceives of the nature of the informal group, the conditions of its emergence, its modes of functioning, and its interactions with the organization. Brief History. The informal structure has appeared as a very real and increasingly recognized phenomenon since the first (fortuitous) discoveries, and the subsequent study that followed, at the Western Electric Hawthorne plant near Chicago. The original research aimed simply to assess the effect of lighting on worker productivity. …

Journal ArticleDOI
TL;DR: A number of attempts have been made to identify the sources of disunity in psychology; as discussed by the authors, the emphasis is on more recent events and discussions of the issue, and consideration is given to their implications for the profession of psychology in the immediate future.
Abstract: The paper examines the issue of disunity within the discipline of psychology, within the profession of psychology, and between the two. References are made to tensions that have existed in or between academic/scientific and applied/professional psychology throughout the history of psychology in Canada and the United States. The emphasis, however, is on more recent events and discussions of the issue; and consideration is given to their implications for the profession of psychology in the immediate future. The paper attempts to identify factors within the discipline, the profession, and in society that could operate to strengthen the links between professional and academic psychology, and that, given sufficient interest and determination on the part of organized psychology, could offset the forces which have threatened the sense of community among psychologists.In deciding to devote this paper to a discussion of the unity, or disunity, of the discipline of psychology I have chosen a topic that has been debated mainly by psychologists whose work is in the universities. And although the paper is intended for an audience that I expected would consist mainly of professional psychologists, I thought it could be useful to examine the implications that the views of disunity within the discipline may have for professional psychology, and to consider what professional psychologists might be able to do to salvage a sense of community among scientist and professional psychologists, to our mutual benefit.A number of attempts have been made to identify the sources of disunity in psychology. Many of these papers, reports and symposia have pointed, usually with regret, to the increasing diversity and specialization in both the discipline and the profession. Some have considered the role of differences in theoretical orientation among psychologists, or differences in their view of the proper direction that research should take. Others have identified an increasing divergence between the aims and interests of the psychologist as scientist/academic and those of the psychologist as practitioner. Still other discussions have been concerned with actual or potential splits in psychological organizations.Unity in the Early YearsTensions in and between the discipline and the profession. Early accounts suggest that not only have psychologists been engaging in practice for a long time, but tension, dissension and schisms, within the discipline or between the discipline and the profession, have been part of the style of North American psychology for at least one hundred years.As long ago as 1896 an American psychologist, Lightner Witmer, founded the first psychological clinic in the United States at the University of Pennsylvania (Fagan, 1992, p. 237). In these years both Witmer and G. Stanley Hall, a founder not only of the APA but of the child study movement as well, were considered by many of their psychologist colleagues to be engaged in work that was "less than scientific" (p. 239). In the 1930s and 40s Kurt Lewin's field theory and his research, described by Danziger (1990) and Ash (1992), "was either ignored or met with complete incomprehension" (Ash, p. 205) by mainstream psychology in the United States. Another well - known example may be found in Henry A. Murray's radical differences with other members of the Department of Psychology at Harvard, notably with Boring and Lashley, on theoretical issues, the appropriate questions for research, and the content of the curriculum. 
In Murray's words, academic psychologists were "looking critically at the wrong things" (Triplet, 1992, p. 304). The history of psychology is replete with examples of psychologists viewing each other as looking critically at the wrong things, often with unfortunate consequences, not only for individual psychologists but for the field as a whole.The professional activities of academic psychologists and their attitudes toward the profession. …

Journal ArticleDOI
TL;DR: Four contributions of attachment theory and research are discussed: 1) the construct of parent/child bonds as an enduring rather than short-term phenomenon; 2) a validated measure of that construct in infancy; 3) an organizational approach to behavioural observation; and 4) new connections between researchers and clinicians.
Abstract: Four contributions of attachment theory and research are discussed: 1) the construct of parent/child bonds as an enduring rather than short-term phenomenon; 2) a validated measure of that construct in infancy; 3) an organizational approach to behavioural observation; and 4) new connections between researchers and clinicians. It is suggested that future research will focus on ontogenetic development and intergenerational transmission of attachment. The study of parent-child relationships has a long and venerable history. "Attachment" is the most recent and current label for the emotional bonds between children and parents, the most immediate predecessors being "dependency" and "object relations". Each represented a theoretical and empirical approach to the phenomenon which experienced a period of enthusiasm. Is "attachment" simply the latest buzzword for the same old thing? In this paper I review the contributions of attachment theory and research to our understanding of human behaviour and make some predictions about future contributions. Since the primary task is to evaluate the contributions of attachment theory and research, I will not document limitations or focus upon controversies in the field, and the reader is referred to existing sources for this perspective (e.g., Campos, Barrett, Lamb, Goldsmith and Stenberg, 1983; Kagan, 1982; Lamb, Thompson, Gardner, Charnov, and Estes, 1984). Attachment theory as articulated by Bowlby (1969) combined and integrated ideas from psychoanalysis and ethology. Bowlby argued that affectional ties between caregiver and offspring have a biological basis best understood in an evolutionary context. Since the survival of human young depends on adult caregiving, our evolutionary history has selected a genetic bias among infants to behave in ways that maintain and enhance proximity to the caregiver and elicit caregiver attention and investment. A complementary evolutionary history biases adults to behave reciprocally. Psychoanalytic theory emphasizes the caregiver's initial role in reducing physiological arousal. Social learning theory emphasizes the caregiver as teacher. Attachment theory focusses on the parent's role as protector and provider of security. (All of these views acknowledge that parents play multiple roles: teacher, caregiver, playmate, etc. They differ with respect to which role is considered most influential.) Furthermore, psychoanalytic and learning theory viewed the child as initially passive; Bowlby credited the infant with active participation. Prior theories viewed infants as dependent on caregivers and considered dependency a state to be outgrown, but an attachment is expected to endure. Thus, attachment is a quality of relationships that is a life-span construct. I believe this is a major shift in orientation attributable to attachment theory. Components of Attachment. The concept of attachment includes social components (it is a property of social relationships), emotional components (each participant in the relationship feels emotional bonds with the other), cognitive components (each participant forms a cognitive scheme - a working model of the relationship and its participants), and behavioural components (participants engage in behaviours that reflect and maintain the relationship). The nature and interrelationships of these components change with development, but the relationship endures. Over the first year, the infant's proximity-promoting behaviours (orienting to the caregiver, signals such as cries and vocalizations, and direct
actions such as approaching and clinging) become organized into a goal-corrected system focussed on a specific caregiver, usually the mother. When the attachment system is in its goal state (i.e., there is adequate proximity and contact), attachment behaviours subside; when the goal state is threatened, attachment behaviours are activated. Furthermore, because the attachment system operates in the context of other related systems (e. …

Journal ArticleDOI
TL;DR: In this article, the authors survey alternatives to normal-theory statistics in the social sciences and argue, using reaction time research as an example, that analyses should be grounded in the generating process of the measures (e.g., Ex-Gaussian or Gamma distributions) rather than in routine tests of mean differences.
Abstract: Upon reflection, we were struck by the magnitude of the task at hand. It was impossible to treat all of the alternatives to normal theory in a matter of two to three hours. Alternatives come from many perspectives and clearly demonstrate that methodology is "vibrant" in the social sciences. The topics selected were, in part, circumstantial and partly driven by a consideration of the fundamental assumptions of classical (normal theory) statistics. However, one very important point was not clearly stated in the proceedings. This point is most clearly articulated by Jacob Cohen in his 1965 monograph entitled "Some statistical issues in psychological research" (cf. Schutz & Gessaroli, 1992): Statistical analysis is a tool, not a ritualistic religion. It is for use, not for reverence, and it should be approached in the spirit that it was made for psychologists rather than vice versa. As one of many tools in the psychologist's kit, it is frequently not relevant and is sometimes of considerable utility. It is certainly not as important as, nor can it even partly replace good ideas or well-conceived experimental strategems, although it may be virtually indispensable in testing out an idea or rounding out a good experiment. (p. 95) This quotation is important when discussing alternatives to normal theory. It is not enough to simply go through the statistical gyrations hoping to sanctify the findings with a p-value less than .05. We believe that what is lacking in psychological research is a strong theoretical foundation which drives the conceptualization of the random variables (i.e., the measures), the type of study, and the statistical analyses. An excellent example of this arises in experiments involving reaction time or response time measures. Reaction time experiments were chosen simply because some excellent theoretical work has clarified the generating process of reaction times, and therefore the conceptualization of the random variable. But the approach to this psychological phenomenon also clearly highlights how knowledge of the random variable is underutilized in psychological experimentation. Most psychological studies have rather heuristic models of the process being studied; an exception is decision making and cognition, where many explicit models (both quantitative and otherwise) are proposed. It has been argued quite convincingly that reaction times are a realization of the Ex-Gaussian or Gamma distribution (distributions which have very long tails and are often not symmetrical). At this point, researchers usually design experiments to test the equality of means in various experimental conditions and use reaction time as the dependent variable. The data are then processed through ordinary least-squares ANOVA. From our point of view, this process of experimentation has two problems. First, the researcher is not making use of the knowledge of the generating process; ordinary least squares breaks down quickly with asymmetric and long-tailed data. Second, conceptualizing the study according to mean differences between groups discards the intricate theory which went into arguing for Ex-Gaussian or Gamma generating processes. Zumbo and McMorran (1991) propose that rather than examine mean differences in reaction time, it may be beneficial to model the data in each condition as an Ex-Gaussian or Gamma process by methods of maximum-likelihood estimation.
At this point, a likelihood ratio test can be used to test the difference in the rate parameters (i.e., response time parameters) in the various conditions. The Zumbo and McMorran approach makes use of the detailed substantive theory for modelling the process and testing parameters in different conditions rather than simply testing mean differences between groups. Psychologists need to leave testing mean differences behind them.Reflections on the SymposiumThe papers in this symposium reflect a change in statistical practice and data analysis in the last three decades. …
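As a concrete illustration of this modelling strategy, the sketch below fits an Ex-Gaussian to simulated reaction times in each of two conditions by maximum likelihood and compares the conditions with a likelihood ratio test. It is a simplification of the Zumbo and McMorran proposal: the test here compares the conditions' full parameter sets rather than the rate parameters alone, the data are simulated, and scipy's exponnorm parameterization (shape K = tau / sigma) is assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_rt(n, mu, sigma, tau):
    """Ex-Gaussian reaction times: a normal component plus an exponential tail."""
    return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

cond_a = simulate_rt(200, mu=0.40, sigma=0.05, tau=0.10)  # seconds
cond_b = simulate_rt(200, mu=0.40, sigma=0.05, tau=0.18)  # longer tail

def max_loglik(data):
    """Maximized Ex-Gaussian log-likelihood via scipy's generic ML fit."""
    k, loc, scale = stats.exponnorm.fit(data)
    return np.sum(stats.exponnorm.logpdf(data, k, loc, scale))

# Null model: one parameter set for the pooled data.
ll_pooled = max_loglik(np.concatenate([cond_a, cond_b]))
# Alternative model: separate parameter sets for each condition.
ll_separate = max_loglik(cond_a) + max_loglik(cond_b)

lr = 2 * (ll_separate - ll_pooled)  # likelihood ratio statistic
p = stats.chi2.sf(lr, df=3)         # 3 extra parameters in the alternative
print(f"LR = {lr:.1f}, p = {p:.4g}")
```

Unlike an ordinary least-squares ANOVA on the means, this test is driven by the hypothesized generating process of the reaction times themselves.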