
Showing papers in "Behavior Research Methods Instruments & Computers in 2004"


Journal ArticleDOI
TL;DR: The importance of directly testing the significance of indirect effects is argued, and SPSS and SAS macros are provided that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, with the aim of enhancing the frequency of formal mediation tests in the psychology literature.
Abstract: Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
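
As an illustration of the bootstrap approach the macros implement, here is a minimal sketch in Python (not the authors' SPSS/SAS code); the function name and the simple OLS estimation are illustrative assumptions.

```python
# Hypothetical sketch of a percentile-bootstrap test of an indirect effect
# (a*b), in the spirit of the macros described above; names are illustrative.
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect of x on y through m."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]             # slope of m ~ x
        X = np.column_stack([np.ones(n), xs, ms])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]  # slope of m in y ~ x + m
        est[i] = a * b
    return np.percentile(est, [2.5, 97.5])       # 95% CI for a*b
```

If the interval excludes zero, the indirect effect is deemed significant, which is the logic the abstract contrasts with the traditional Baron and Kenny steps.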

15,041 citations


Journal ArticleDOI
TL;DR: The database will be useful for investigators interested in cuing, priming, recognition, network theory, linguistics, and implicit testing applications, and for evaluating the predictive value of free association probabilities as compared with other measures, such as similarity ratings and co-occurrence norms.
Abstract: Preexisting word knowledge is accessed in many cognitive tasks, and this article offers a means for indexing this knowledge so that it can be manipulated or controlled. We offer free association data for 72,000 word pairs, along with over a million entries of related data, such as forward and backward strength, number of competing associates, and printed frequency. A separate file contains the 5,019 normed words, their statistics, and thousands of independently normed rhyme, stem, and fragment cues. Other files provide n x n associative networks for more than 4,000 words and a list of idiosyncratic responses for each normed word. The database will be useful for investigators interested in cuing, priming, recognition, network theory, linguistics, and implicit testing applications. The norms also will be useful for evaluating the predictive value of free association probabilities as compared with other measures, such as similarity ratings and co-occurrence norms. Of several procedures for measuring preexisting strength between two words, the best remains to be determined. The norms may be downloaded from www.psychonomic.org/archive/.
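
For readers unfamiliar with the forward-strength measure in such norms, here is a minimal sketch of the conventional computation (the proportion of participants producing each response to a cue); the toy data and function are illustrative, not drawn from the database itself.

```python
# Forward associative strength for one cue: response production proportions.
from collections import Counter

def forward_strength(responses):
    """Map each response to its production proportion for one cue."""
    counts = Counter(responses)
    total = len(responses)
    return {word: n / total for word, n in counts.items()}

# e.g., 5 hypothetical participants responding to the cue "doctor"
print(forward_strength(["nurse", "nurse", "nurse", "hospital", "sick"]))
# {'nurse': 0.6, 'hospital': 0.2, 'sick': 0.2}
```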

2,012 citations


Journal ArticleDOI
TL;DR: Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.
Abstract: Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, Coh-Metrix returns measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.
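
For contrast with Coh-Metrix's cohesion-sensitive measures, the kind of standard readability formula the abstract refers to can be computed from surface features alone; a minimal sketch of Flesch Reading Ease, with a deliberately crude syllable heuristic.

```python
# Flesch Reading Ease depends only on sentence length and word length
# (in syllables): 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
import re

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # crude heuristic: count vowel groups as syllables, at least 1 per word
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It purred."), 1))
```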

1,271 citations


Journal ArticleDOI
TL;DR: A database of 575 individual faces ranging from ages 18 to 93 is described, developed to be more representative of age groups across the lifespan, with a special emphasis on recruiting older adults.
Abstract: Faces constitute a unique and widely used category of stimuli. In spite of their importance, there are few collections of faces for use in research, none of which adequately represent the different ages of faces across the lifespan. This lack of a range of ages has limited the majority of researchers to using predominantly young faces as stimuli even when their hypotheses concern both young and old participants. We describe a database of 575 individual faces ranging from ages 18 to 93. Our database was developed to be more representative of age groups across the lifespan, with a special emphasis on recruiting older adults. The resulting database has faces of 218 adults age 18-29, 76 adults age 30-49, 123 adults age 50-69, and 158 adults age 70 and older. These faces may be acquired for research purposes from http://agingmind.cns.uiuc.edu/facedb/. This will allow researchers interested in using facial stimuli access to a wider age range of adult faces than has previously been available.

1,182 citations


Journal ArticleDOI
TL;DR: A new lexical database for French, Lexique, is presented that supplements classical word information such as gender, number, and grammatical category with a series of interesting new characteristics, including Web-based frequency cues and a metasearch engine, Open Lexique, to which new databases can be added very easily.
Abstract: In this article, we present a new lexical database for French: Lexique. In addition to classical word information such as gender, number, and grammatical category, Lexique includes a series of interesting new characteristics. First, word frequencies are based on two cues: a contemporary corpus of texts and the number of Web pages containing the word. Second, the database is split into a graphemic table with all the relevant frequencies, a table structured around lemmas (particularly interesting for the study of the inflectional family), and a table about surface frequency cues. Third, Lexique is distributed under a GNU-like license, allowing people to contribute to it. Finally, a metasearch engine, Open Lexique, has been developed so that new databases can be added very easily to the existing ones. Lexique can either be downloaded or interrogated freely from http://www.lexique.org.

776 citations


Journal ArticleDOI
TL;DR: The design was inspired by explanation-based constructivist theories of learning, intelligent tutoring systems that adaptively respond to student knowledge, and empirical research on dialogue patterns in tutorial discourse.
Abstract: AutoTutor is a learning environment that tutors students by holding a conversation in natural language. AutoTutor has been developed for Newtonian qualitative physics and computer literacy. Its design was inspired by explanation-based constructivist theories of learning, intelligent tutoring systems that adaptively respond to student knowledge, and empirical research on dialogue patterns in tutorial discourse. AutoTutor presents challenging problems (formulated as questions) from a curriculum script and then engages in mixed initiative dialogue that guides the student in building an answer. It provides the student with positive, neutral, or negative feedback on the student’s typed responses, pumps the student for more information, prompts the student to fill in missing words, gives hints, fills in missing information with assertions, identifies and corrects erroneous ideas, answers the student’s questions, and summarizes answers. AutoTutor has produced learning gains of approximately .70 sigma for deep levels of comprehension.

458 citations


Journal ArticleDOI
TL;DR: One method of estimating phonotactic probabilities based on words in American English is described; the estimates have been used in a number of previous studies and are now being made available to other researchers via a Web-based interface.
Abstract: Phonotactic probability refers to the frequency with which phonological segments and sequences of phonological segments occur in words in a given language. We describe one method of estimating phonotactic probabilities based on words in American English. These estimates of phonotactic probability have been used in a number of previous studies and are now being made available to other researchers via a Web-based interface. Instructions for using the interface, as well as details regarding how the measures were derived, are provided in the present article. The Phonotactic Probability Calculator can be accessed at http://www.people.ku.edu/~mvitevit/PhonoProbHome.html.
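
A hedged sketch of the general idea behind position-specific phonotactic estimates (log-frequency-weighted proportions of lexicon words with a given segment in a given position); the toy lexicon and exact weighting are illustrative, and the article should be consulted for the authors' precise formulas.

```python
# Position-specific segment probability, log-frequency weighted (illustrative).
import math

def positional_segment_prob(lexicon, segment, position):
    """lexicon: dict mapping phoneme-string words to raw frequencies."""
    num = sum(math.log10(f) for w, f in lexicon.items()
              if len(w) > position and w[position] == segment)
    den = sum(math.log10(f) for w, f in lexicon.items() if len(w) > position)
    return num / den if den else 0.0

lexicon = {"kat": 120, "kap": 45, "tap": 300, "pat": 80}   # toy data
print(positional_segment_prob(lexicon, "k", 0))   # P of /k/ word-initially
```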

437 citations


Journal ArticleDOI
TL;DR: MANULEX is a Web-accessible database that provides grade-level word frequency lists of nonlemmatized and lemmatized words computed from the 1.9 million words taken from 54 French elementary school readers.
Abstract: This article presents MANULEX, a Web-accessible database that provides grade-level word frequency lists of nonlemmatized and lemmatized words (48,886 and 23,812 entries, respectively) computed from the 1.9 million words taken from 54 French elementary school readers. Word frequencies are provided for four levels: first grade (G1), second grade (G2), third to fifth grades (G3-5), and all grades (G1-5). The frequencies were computed following the methods described by Carroll, Davies, and Richman (1971) and Zeno, Ivenz, Millard, and Duvvuri (1995), with four statistics at each level (F, overall word frequency; D, index of dispersion across the selected readers; U, estimated frequency per million words; and SFI, standard frequency index). The database also provides the number of letters in the word and syntactic category information. MANULEX is intended to be a useful tool for studying language development through the selection of stimuli based on precise frequency norms. Researchers in artificial intelligence can also use it as a source of information on natural language processing to simulate written language acquisition in children. Finally, it may serve an educational purpose by providing basic vocabulary lists.
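
Two of the statistics named above have standard definitions that can be sketched directly: SFI is conventionally 40 + 10 * log10(U), with U the estimated frequency per million words. The dispersion sketch below uses Juilland's D as a stand-in; MANULEX follows Carroll et al.'s (1971) definitions, which may differ in detail and should be consulted for the exact formulas.

```python
# SFI and a common dispersion index, as illustrations of the statistics above.
import math
import statistics

def sfi(u_per_million):
    """Standard frequency index: a word at 1 per million has SFI 40."""
    return 40 + 10 * math.log10(u_per_million)

def juilland_d(subfreqs):
    """Dispersion in [0, 1] from a word's frequencies in n subcorpora."""
    n, mean = len(subfreqs), statistics.mean(subfreqs)
    if mean == 0:
        return 0.0
    cv = statistics.pstdev(subfreqs) / mean
    return 1 - cv / math.sqrt(n - 1)

print(round(sfi(1.0), 1))                 # 40.0
print(round(juilland_d([5, 4, 6, 5]), 3)) # near 1: evenly dispersed
```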

424 citations


Journal ArticleDOI
TL;DR: WordGen uses the CELEX and Lexique lexical databases for word selection and nonword generation in Dutch, English, German, and French, and is especially useful for Dutch and German item generation, for nonword generation, and for psycholinguistic experiments on bilingualism.
Abstract: WordGen is an easy-to-use program that uses the CELEX and Lexique lexical databases for word selection and nonword generation in Dutch, English, German, and French. Items can be generated in these four languages, specifying any combination of seven linguistic constraints: number of letters, neighborhood size, frequency, summated position-nonspecific bigram frequency, minimum position-nonspecific bigram frequency, position-specific frequency of the initial and final bigram, and orthographic relatedness. The program also has a module to calculate the respective values of these variables for items that have already been constructed, either with the program or taken from earlier studies. Stimulus queries can be entered through WordGen’s graphical user interface or by means of batch files. WordGen is especially useful for (1) Dutch and German item generation, because no such stimulus-selection tool exists for these languages, (2) the generation of nonwords for all four languages, because our program has some important advantages over previous nonword generation approaches, and (3) psycholinguistic experiments on bilingualism, because the possibility of using the same tool for different languages increases the cross-linguistic comparability of the generated item lists. WordGen is free and available at http://expsy.ugent.be/wordgen.htm.
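
As a minimal sketch of one of the constraints WordGen computes, summated position-nonspecific bigram frequency sums the lexicon-wide, frequency-weighted counts of each of a word's bigrams; the toy lexicon here is illustrative, whereas WordGen itself draws on CELEX and Lexique.

```python
# Summated position-nonspecific bigram frequency (illustrative lexicon).
from collections import Counter

def bigram_table(lexicon):
    """lexicon: dict of word -> corpus frequency."""
    table = Counter()
    for word, freq in lexicon.items():
        for i in range(len(word) - 1):
            table[word[i:i + 2]] += freq
    return table

def summated_bigram_freq(word, table):
    return sum(table[word[i:i + 2]] for i in range(len(word) - 1))

table = bigram_table({"tafel": 50, "tas": 120, "fee": 30})
print(summated_bigram_freq("taf", table))  # freq("ta") + freq("af") = 220
```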

321 citations


Journal ArticleDOI
TL;DR: iSTART is a Web-based application that provides young adolescent to college-age students with high-level reading strategy training to improve comprehension of science texts and is modeled after an effective, human-delivered intervention called self-explanation reading training (SERT).
Abstract: Interactive Strategy Training for Active Reading and Thinking (iSTART) is a Web-based application that provides young adolescent to college-age students with high-level reading strategy training to improve comprehension of science texts. iSTART is modeled after an effective, human-delivered intervention called self-explanation reading training (SERT), which trains readers to use active reading strategies to self-explain difficult texts more effectively. To make the training more widely available, the Web-based trainer has been developed. Transforming the training from a human-delivered application to a computer-based one has resulted in a highly interactive trainer that adapts its methods to the performance of the students. The iSTART trainer introduces the strategies in a simulated classroom setting with interaction between three animated characters-an instructor character and two student characters-and the human trainee. Thereafter, the trainee identifies the strategies in the explanations of a student character who is guided by an instructor character. Finally, the trainee practices self-explanation under the guidance of an instructor character. We describe this system and discuss how appropriate feedback is generated.

297 citations


Journal ArticleDOI
TL;DR: Imageability ratings on a 1–7 scale and reaction times were obtained for 3,000 monosyllabic words; analyses comparing these ratings with those for 1,153 common words from Toglia and Battig (1978) indicate that the ratings are valid.
Abstract: Imageability ratings made on a 1–7 scale and reaction times for 3,000 monosyllabic words were obtained from 31 participants. Analyses comparing these ratings to 1,153 common words from Toglia and Battig (1978) indicate that these ratings are valid. Reliability was assessed (α = .95). The information obtained in this study adds to that of other normative studies and is useful to researchers interested in manipulating or controlling imageability in word recognition and memory studies. These norms can be downloaded from www.psychonomic.org/archive/.
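
The reliability figure quoted (α = .95) is Cronbach's alpha; here is a minimal sketch of the textbook formula, treating each rater as an "item" and using simulated stand-in data rather than the published ratings.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/var(total score)).
import numpy as np

def cronbach_alpha(ratings):
    """ratings: (n_words, n_raters) array; returns alpha."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()   # per-rater variances
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of sum scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(4, 1, size=(3000, 1))       # shared signal per word
ratings = true_score + rng.normal(0, 1, size=(3000, 31))  # 31 simulated raters
print(round(cronbach_alpha(ratings), 2))
```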

Journal ArticleDOI
TL;DR: A large-scale picture naming experiment is reported in which the potential contribution of nine theoretically relevant factors to naming latencies was evaluated, with the effects of the various variables assessed using multiple regression techniques.
Abstract: We report the results of a large-scale picture naming experiment in which we evaluated the potential contribution of nine theoretically relevant factors to naming latencies. The experiment included a large number of items and a large sample of participants. In order to make this experiment as similar as possible to classic picture naming experiments, participants were familiarized with the materials during a training session. Speeded naming latencies were determined by a software key on the basis of the digital recording of the responses. The effects of various variables on these latencies were assessed with multiple regression techniques, using a repeated measures design. The interpretation of the observed effects is discussed in relation to previous studies and current views on lexical access during speech production.

Journal ArticleDOI
TL;DR: The shorter the task sampling time, the less sensitive the test is to sleepiness; nevertheless, the 5-min PVT may provide a viable alternative to the 10-min psychomotor vigilance task for some performance metrics.
Abstract: The 10-min psychomotor vigilance task (PVT) has often been used to assess the impact of sleep loss on performance. Due to time constraints, however, regular testing may not be practical in field studies. The aim of the present study was to examine the suitability of tests shorter than 10 min in duration. Changes in performance across a night of sustained wakefulness were compared during a standard 10-min PVT, the first 5 min of the PVT, and the first 2 min of the PVT. Four performance metrics were assessed: (1) mean reaction time (RT), (2) fastest 10% of RT, (3) lapse percentage, and (4) slowest 10% of RT. Performance during the 10-min PVT significantly deteriorated with increasing wakefulness for all metrics. Performance during the first 5 min and the first 2 min of the PVT deteriorated in a manner similar to that observed for the whole 10-min task, with all metrics except lapse percentage displaying significant impairment across the night. However, the shorter the task sampling time, the less sensitive the test is to sleepiness. Nevertheless, the 5-min PVT may provide a viable alternative to the 10-min PVT for some performance metrics.
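
A hedged sketch of the four PVT metrics named above, computed from a single test bout's reaction times; the 500-ms lapse criterion is a common convention assumed here, not stated in the abstract.

```python
# The four PVT performance metrics from one bout of reaction times (ms).
import numpy as np

def pvt_metrics(rt_ms):
    rt = np.sort(np.asarray(rt_ms, dtype=float))
    k = max(1, int(len(rt) * 0.10))                # size of the 10% tails
    return {
        "mean_rt": rt.mean(),
        "fastest_10pct_mean": rt[:k].mean(),
        "slowest_10pct_mean": rt[-k:].mean(),
        "lapse_pct": 100.0 * (rt >= 500).mean(),   # assumed lapse criterion
    }

print(pvt_metrics([210, 250, 265, 300, 320, 340, 410, 520, 610, 800]))
```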

Journal ArticleDOI
TL;DR: Equal numbers of male and female participants judged which of seven facial expressions were displayed by a set of 336 faces, and participants rated how well the expression was displayed (i.e., the intensity of the expression).
Abstract: Equal numbers of male and female participants judged which of seven facial expressions (anger, disgust, fear, happiness, neutrality, sadness, and surprise) were displayed by a set of 336 faces, and we measured both accuracy and response times. In addition, the participants rated how well the expression was displayed (i.e., the intensity of the expression). These three measures are reported for each face. Sex of the rater did not interact with any of the three measures. However, analyses revealed that some expressions were recognized more accurately in female than in male faces. The full set of these norms may be downloaded from www.psychonomic.org/archive/.

Journal ArticleDOI
TL;DR: The Paivio, Yuille, and Madigan (1968) norms for 925 nouns were extended in two ways: a much more extensive and diverse set of properties was collected from original ratings and other sources, and factor analysis demonstrated both the redundancy among various measures and the tendency for some attributes to load on multiple factors.
Abstract: The Paivio, Yuille, and Madigan (1968) norms for 925 nouns were extended in two ways. The first extension involved the collecting of a much more extensive and diverse set of properties from original ratings and other sources. Factor analysis of 32 properties identified 9 orthogonal factors and demonstrated both the redundancy among various measures and the tendency for some attributes (e.g., age of acquisition) to load on multiple factors. The second extension collected basic ratings of imagery, familiarity, and a new age of acquisition measure for a larger pool of 2,311 words, including parts of speech other than nouns. The analysis of these ratings and supplementary statistics computed for the words (e.g., number of syllables, Kucera-Francis frequency) demonstrated again the relative independence of various measures and the importance of obtaining diverse properties for such norms. Implications and directions for future research are considered. The full set of new norms may be downloaded from www.psychonomic.org/archive/.

Journal ArticleDOI
TL;DR: A set of stimuli representing human actions under point-light conditions, as seen from different viewpoints, is presented, allowing researchers to construct customized versions of these actions.
Abstract: We present a set of stimuli representing human actions under point-light conditions, as seen from different viewpoints. The set contains 22 fairly short, well-delineated, and visually "loopable" actions. For each action, we provide movie files from five different viewpoints as well as a text file with the three spatial coordinates of the point lights, allowing researchers to construct customized versions. The full set of stimuli may be downloaded from www.psychonomic.org/archive/.
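
Because the three spatial coordinates are provided, custom viewpoints can be rendered by rotating and projecting the point lights; a minimal sketch under the assumption of a simple (x, y, z)-per-point frame layout, which is illustrative rather than the set's documented file format.

```python
# Rotate 3-D point lights about the vertical axis and project orthographically.
import numpy as np

def rotate_and_project(points_xyz, azimuth_deg):
    """points_xyz: (n_points, 3) array; returns (n_points, 2) screen coords."""
    a = np.radians(azimuth_deg)
    rot = np.array([[np.cos(a), 0, np.sin(a)],
                    [0,         1, 0        ],
                    [-np.sin(a), 0, np.cos(a)]])
    rotated = points_xyz @ rot.T
    return rotated[:, :2]            # orthographic projection onto the x-y plane

frame = np.array([[0.0, 1.7, 0.0], [0.2, 1.4, 0.1]])  # e.g., head, shoulder
print(rotate_and_project(frame, 45))
```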

Journal ArticleDOI
TL;DR: The general approach to performing distribution fitting with maximum likelihood (ML) and a method based on quantiles (quantile maximum probability, QMP) are reviewed and it is shown that QMP has both small bias and good efficiency when used with common distribution functions.
Abstract: The most powerful tests of response time (RT) models often involve the whole shape of the RT distribution, thus avoiding mimicking that can occur at the level of RT means and variances. Nonparametric distribution estimation is, in principle, the most appropriate approach, but such estimators are sometimes difficult to obtain. On the other hand, distribution fitting, given an algebraic function, is both easy and compact. We review the general approach to performing distribution fitting with maximum likelihood (ML) and a method based on quantiles (quantile maximum probability, QMP). We show that QMP has both small bias and good efficiency when used with common distribution functions (the ex-Gaussian, Gumbel, lognormal, Wald, and Weibull distributions). In addition, we review some software packages performing ML (PASTIS, QMPE, DISFIT, and MATHEMATICA) and compare their results. In general, the differences between packages have little influence on the optimal solution found, but the form of the distribution function has: Both the lognormal and the Wald distributions have non-linear dependencies between the parameter estimates that tend to increase the overall bias in parameter recovery and to decrease efficiency. We conclude by laying out a few pointers on how to relate descriptive models of RT to cognitive models of RT. A program that generated the random deviates used in our studies may be downloaded from www.psychonomic.org/archive/.
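
As a modern stand-in for the ML fitting the packages above perform, SciPy's exponnorm implements the ex-Gaussian; a minimal sketch of fitting simulated RTs and converting SciPy's shape parameter back to the usual (mu, sigma, tau).

```python
# ML fit of an ex-Gaussian to simulated RTs via SciPy (not PASTIS/QMPE/DISFIT).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulate RTs: Gaussian (mu=400, sigma=40) plus exponential (tau=100), in ms
rts = rng.normal(400, 40, 1000) + rng.exponential(100, 1000)

K, loc, scale = stats.exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale     # exponnorm: tau = K * sigma
print(f"mu={mu:.0f}  sigma={sigma:.0f}  tau={tau:.0f}")
```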

Journal ArticleDOI
TL;DR: The quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation, is described and tested; QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three “shifted” distributions.
Abstract: We describe and test the quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three “shifted” distributions (i.e., distributions with a parameter-dependent lower bound): the lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
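
A hedged sketch of the core QMP idea as described: choose parameters that maximize the multinomial probability of the observed counts falling between sample quantiles, with bin probabilities taken from the model CDF. This uses SciPy and an ex-Gaussian for illustration; it is a sketch of the principle, not the QMPE program.

```python
# Quantile-based fitting: maximize multinomial likelihood of inter-quantile counts.
import numpy as np
from scipy import stats, optimize

def qmp_negloglik(params, quantiles, counts):
    mu, sigma, tau = params
    if sigma <= 0 or tau <= 0:
        return np.inf
    K = tau / sigma                                  # exponnorm shape parameter
    edges = np.concatenate([[-np.inf], quantiles, [np.inf]])
    p = np.diff(stats.exponnorm.cdf(edges, K, loc=mu, scale=sigma))
    if np.any(p <= 0):
        return np.inf
    return -np.sum(counts * np.log(p))

rng = np.random.default_rng(2)
rts = rng.normal(400, 40, 500) + rng.exponential(100, 500)
qs = np.quantile(rts, [0.1, 0.3, 0.5, 0.7, 0.9])     # sample quantile cut points
counts = np.histogram(rts, np.concatenate([[-np.inf], qs, [np.inf]]))[0]
res = optimize.minimize(qmp_negloglik, x0=[350, 60, 120],
                        args=(qs, counts), method="Nelder-Mead")
print(res.x)   # recovered mu, sigma, tau
```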

Journal ArticleDOI
TL;DR: A Matlab function is presented that makes it easy to specify any of the general choice models for paired-comparison data (EBA, Pretree, or BTL) and to estimate their parameters, eliminating the time-consuming task of constructing the likelihood function by hand for every single model.
Abstract: Tversky (1972) has proposed a family of models for paired-comparison data that generalize the Bradley-Terry-Luce (BTL) model and can, therefore, apply to a diversity of situations in which the BTL model is doomed to fail. In this article, we present a Matlab function that makes it easy to specify any of these general models (EBA, Pretree, or BTL) and to estimate their parameters. The program eliminates the time-consuming task of constructing the likelihood function by hand for every single model. The usage of the program is illustrated by several examples. Features of the algorithm are outlined. The purpose of this article is to facilitate the use of probabilistic choice models in the analysis of data resulting from paired comparisons.
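
For the simplest member of the family, the BTL model sets P(i chosen over j) = v_i / (v_i + v_j); a minimal Python sketch of maximum likelihood fitting from a pairwise win-count matrix, mirroring the purpose (not the code) of the Matlab function described above.

```python
# ML fit of the Bradley-Terry-Luce model from pairwise choice counts.
import numpy as np
from scipy import optimize

wins = np.array([[0, 8, 9],     # wins[i, j]: times option i was chosen over j
                 [2, 0, 7],
                 [1, 3, 0]])

def negloglik(log_v):
    v = np.exp(np.concatenate([[0.0], log_v]))   # fix v_0 = 1 for identifiability
    p = v[:, None] / (v[:, None] + v[None, :])   # P(i beats j)
    return -np.sum(wins * np.log(p))             # diagonal counts are zero

res = optimize.minimize(negloglik, x0=np.zeros(2), method="BFGS")
v = np.exp(np.concatenate([[0.0], res.x]))
print(v / v.sum())    # estimated choice weights
```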

Journal ArticleDOI
TL;DR: A letter-naming experiment is reported in which uppercase frequency predicted response time to uppercase letters better than did lowercase frequency, suggesting that subjects are sensitive to frequency relationships among letters.
Abstract: We tabulated upper- and lowercase letter frequency using several large-scale English corpora (∼183 million words in total). The results indicate that the relative frequencies for upper- and lowercase letters are not equivalent. We report a letter-naming experiment in which uppercase frequency predicted response time to uppercase letters better than did lowercase frequency. Tables of case-sensitive letter and bigram frequency are provided, including common nonalphabetic characters. Because subjects are sensitive to frequency relationships among letters, we recommend that experimenters use case-sensitive counts when constructing stimuli from letters.
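
A minimal sketch of producing case-sensitive letter and bigram counts from text, in the spirit of the tables described (which were tabulated over roughly 183 million words of corpus text rather than a toy string).

```python
# Case-sensitive letter and bigram counts; 'T' and 't' are kept distinct.
from collections import Counter

def case_sensitive_counts(text):
    letters = Counter(text)                                   # keeps case
    bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    return letters, bigrams

letters, bigrams = case_sensitive_counts("The quick brown Fox.")
print(letters["T"], letters["t"])   # counted separately
print(bigrams["Th"], bigrams["he"])
```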

Journal ArticleDOI
TL;DR: A data set is described that includes eight variables gathered for 13 common superordinate natural language categories and a representative set of 338 exemplars in Dutch, which contains 6 animal categories and 2 borderline artifact-natural-kind categories.
Abstract: A data set is described that includes eight variables gathered for 13 common superordinate natural language categories and a representative set of 338 exemplars in Dutch. The category set contains 6 animal categories (reptiles, amphibians, mammals, birds, fish, and insects), 3 artifact categories (musical instruments, tools, and vehicles), 2 borderline artifact-natural-kind categories (vegetables and fruit), and 2 activity categories (sports and professions). In an exemplar and a feature generation task for the category nouns, frequency data were collected. For each of the 13 categories, a representative sample of 5–30 exemplars was selected. For all exemplars, feature generation frequencies, typicality ratings, pairwise similarity ratings, age-of-acquisition ratings, word frequencies, and word associations were gathered. Reliability estimates and some additional measures are presented. The full set of these norms is available in Excel format at the Psychonomic Society Web archive, www.psychonomic.org/archive/.

Journal ArticleDOI
TL;DR: Functions for the S-PLUS package are provided that produce maximum likelihood estimates of the parameters of the ex-Wald, shifted Wald, and ex-Gaussian distributions; a Monte Carlo study indicates that samples of at least 400 are necessary for adequate ex-Wald estimates and that, for some parameter ranges, much larger samples may be required.
Abstract: Schwarz (2001, 2002) proposed the ex-Wald distribution, obtained from the convolution of Wald and exponential random variables, as a model of simple and go/no-go response time. This article provides functions for the S-PLUS package that produce maximum likelihood estimates of the parameters for the ex-Wald, as well as for the shifted Wald and ex-Gaussian, distributions. In a Monte Carlo study, the efficiency and bias of parameter estimates were examined. Results indicated that samples of at least 400 are necessary to obtain adequate estimates of the ex-Wald and that, for some parameter ranges, much larger samples may be required. For shifted Wald estimation, smaller samples of around 100 were adequate, at least when fits identified by the software as having ill-conditioned maximums were excluded. The use of all functions is illustrated using data from Schwarz (2001). The S-PLUS functions and Schwarz's data may be downloaded from the Psychonomic Society's Web archive, www.psychonomic.org/archive/.
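
Because the ex-Wald is defined as the sum (convolution) of Wald and exponential random variables, deviates are straightforward to simulate; a minimal NumPy sketch with illustrative parameter values, as a stand-in for the article's S-PLUS functions.

```python
# Simulate ex-Wald RTs as Wald + exponential deviates.
import numpy as np

rng = np.random.default_rng(3)
n = 400                                   # the article's minimum adequate sample
wald_part = rng.wald(mean=0.4, scale=2.0, size=n)   # first-passage component
expo_part = rng.exponential(scale=0.1, size=n)      # exponential component
rts = wald_part + expo_part               # ex-Wald RTs (seconds, illustrative)
print(rts.mean(), rts.std())
```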

Journal ArticleDOI
TL;DR: It is concluded that a face-to-face intervention for pediatric traumatic brain injury can be successfully adapted to the Web for families with varied computer experience.
Abstract: We developed a Web-based intervention for pediatric traumatic brain injury (TBI) and examined its feasibility for participants with limited computer experience. Six families, including parents, siblings, and children with TBI, were given computers, Web cameras, and high-speed Internet access. Weekly videoconferences with the therapist were conducted after participants completed on-line interactive experiences on problem solving, communication, and TBI-specific behavior management. Families were assigned to videoconference with NetMeeting (iBOT cameras) or ViaVideo. Participants ranked the Web site and videoconferences as moderately to very easy to use. ViaVideo participants rated videoconferencing significantly more favorably relative to face-to-face meetings than did NetMeeting participants. Both the Web site and videoconferencing were rated as very helpful. All families demonstrated improved outcomes on one or more target behaviors, including increased understanding of the injury and improved parent-child relationships. All parents and siblings and all but 1 child with TBI said they would recommend the program to others. We conclude that a face-to-face intervention can be successfully adapted to the Web for families with varied computer experience.

Journal ArticleDOI
TL;DR: Five studies from a large-scale research program on contour-based object identification and segmentation are summarized, describing the stimuli used (outlines derived from the standard Snodgrass and Vanderwart, 1980, set of line drawings of everyday objects), the norms and data collected, and the software tools developed.
Abstract: We summarize five studies of our large-scale research program, in which we examined aspects of contour-based object identification and segmentation, and we report on the stimuli we used, the norms and data we collected, and the software tools we developed. The stimuli were outlines derived from the standard set of line drawings of everyday objects by Snodgrass and Vanderwart (1980). We used contour curvature as a major variable in all the studies. The total number of 1,500 participants produced very solid, normative identification rates of silhouettes and contours, straight-line versions, and fragmented versions, and quite reliable benchmark data about saliency of points and object segmentation into parts. We also developed several software tools to generate stimuli and to analyze the data in nonstandard ways. Our stimuli, norms and data, and software tools have great potential for further exploration of factors influencing contour-based object identification, and are also useful for researchers in many different disciplines (including computer vision) on a wide variety of research topics (e.g., priming, agnosia, perceptual organization, and picture naming). The full set of norms, data, and stimuli may be downloaded from www.psychonomic.org/archive/.

Journal ArticleDOI
TL;DR: The most frequent names in Spanish corresponding to a set of 247 pictures in the Snodgrass and Vanderwart (1980) norms were used as stimuli in a discrete free-association task.
Abstract: The most frequent names in Spanish corresponding to a set of 247 pictures in the Snodgrass and Vanderwart (1980) norms were used as stimuli in a discrete free-association task. A sample of 525 Spanish-speaking participants provided the first word that came to mind for each of the verbal stimuli. Responses were organized according to frequency of production in order to prepare word-association norms for the set of stimuli.

Journal ArticleDOI
TL;DR: An algorithm is described to efficiently compute the cumulative distribution and probability density functions of the diffusion process with trial-to-trial variability in mean drift rate, starting point, and residual reaction time with closed-form solutions.
Abstract: An algorithm is described to efficiently compute the cumulative distribution and probability density functions of the diffusion process (Ratcliff, 1978) with trial-to-trial variability in mean drift rate, starting point, and residual reaction time. Some, but not all, of the integrals appearing in the model’s equations have closed-form solutions, and thus we can avoid computationally expensive numerical approximations. Depending on the number of quadrature nodes used for the remaining numerical integrations, the final algorithm is at least 10 times faster than a classical algorithm using only numerical integration, and the accuracy is slightly higher. Next, we discuss some special cases with an alternative distribution for the residual reaction time or with fewer than three parameters exhibiting trial-to-trial variability.
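
A hedged sketch of the quadrature step mentioned above: integrating a conditional CDF over normal trial-to-trial drift variability with Gauss-Hermite nodes. The toy conditional CDF stands in for the diffusion model's series expression, which is given in the article.

```python
# Marginalize a conditional CDF over Normal(nu, eta^2) drift by quadrature.
import numpy as np

def marginal_cdf(t, cdf_given_drift, nu, eta, n_nodes=20):
    """E_v[F(t | v)] with v ~ Normal(nu, eta^2), via Gauss-Hermite nodes."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    drifts = nu + np.sqrt(2.0) * eta * x          # change of variables
    vals = np.array([cdf_given_drift(t, v) for v in drifts])
    return np.sum(w * vals) / np.sqrt(np.pi)

# toy conditional CDF (a logistic in t, steeper for larger drift magnitude)
toy = lambda t, v: 1.0 / (1.0 + np.exp(-(t - 0.5) * (1.0 + abs(v)) * 10))
print(marginal_cdf(0.6, toy, nu=0.3, eta=0.1))
```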

Journal ArticleDOI
TL;DR: A set of pictorial and auditory stimuli is described that was developed for use in word learning tasks in which the participant learns pairings of novel auditory sound patterns (names) with pictorial depictions of novel objects (referents).
Abstract: We describe a set of pictorial and auditory stimuli that we have developed for use in word learning tasks in which the participant learns pairings of novel auditory sound patterns (names) with pictorial depictions of novel objects (referents). The pictorial referents are drawings of "space aliens," consisting of images that are variants of 144 different aliens. The auditory names are possible nonwords of English; the stimulus set consists of over 2,500 nonword stimuli recorded in a single voice, with controlled onsets, varying from one to seven syllables in length. The pictorial and nonword stimuli can also serve as independent stimulus sets for purposes other than word learning. The full set of these stimuli may be downloaded from www.psychonomic.org/archive/.

Journal ArticleDOI
TL;DR: Semantic distance as derived from WordNet appears distinct from other measures of word pair relatedness and is psychologically functional.
Abstract: WordNet, an electronic dictionary (or lexical database), is a valuable resource for computational and cognitive scientists. Recent work on the computing of semantic distances among nodes (synsets) in WordNet has made it possible to build a large database of semantic distances for use in selecting word pairs for psychological research. The database now contains nearly 50,000 pairs of words that have values for semantic distance, associative strength, and similarity based on co-occurrence. Semantic distance was found to correlate weakly with these other measures but to correlate more strongly with another measure of semantic relatedness, featural similarity. Hierarchical clustering analysis suggested that the knowledge structure underlying semantic distance is similar in gross form to that underlying featural similarity. In experiments in which semantic similarity ratings were used, human participants were able to discriminate semantic distance. Thus, semantic distance as derived from WordNet appears distinct from other measures of word pair relatedness and is psychologically functional. This database may be downloaded from www.psychonomic.org/archive/.
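
A minimal modern sketch of querying one WordNet-based relatedness measure (path similarity) through NLTK; this illustrates the kind of computation involved, not the authors' specific distance metric or their database.

```python
# Query WordNet path similarity via NLTK (requires the wordnet corpus).
import nltk
nltk.download("wordnet", quiet=True)    # one-time corpus download
from nltk.corpus import wordnet as wn

dog = wn.synsets("dog")[0]
cat = wn.synsets("cat")[0]
car = wn.synsets("car")[0]
print(dog.path_similarity(cat))   # semantically close pair -> higher value
print(dog.path_similarity(car))   # distant pair -> lower value
```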

Journal ArticleDOI
TL;DR: The construction of a new operant chamber incorporating modern computer, touchscreen, and display technologies is reported; the chamber is simple and inexpensive to construct yet powerful and flexible enough to explore a broad range of issues in animal learning and behavior.
Abstract: We report the construction of a new operant chamber that incorporates modern computer, touchscreen, and display technologies. An LCD display was housed in the front wall of a lightweight Plexiglas chamber. An Apple eMac computer was used to present visual stimuli on the monitor and to control other chamber events. Responses to the stimuli were recorded using a transparent resistive-type touchscreen that overlaid the monitor. The resulting system is simple and inexpensive to construct but powerful and flexible enough to explore a broad range of issues in animal learning and behavior.

Journal ArticleDOI
TL;DR: A commercial device and associated software are described that enable researchers to benchmark most computer-based paradigms in situ and without modification, giving them the opportunity to correct timing errors where practicable.
Abstract: With the proliferation of commercial experiment generators and custom software within cognitive psychology and the behavioral sciences, many have assumed that issues regarding millisecond timing accuracy have been largely solved. However, through empirical investigation of a variety of paradigms, we have discovered numerous sources of timing error. These can range from poor scripting practices, to incorrect timing specifications, to hardware variability. Building upon earlier research, we have developed a commercial device and associated software that enables researchers to benchmark most computer-based paradigms in situ and without modification. This gives them the opportunity to correct timing errors where practicable, increase replicability, and reduce variability by altering onset times for stimuli, by replacing inaccurate hardware, or by post hoc statistical manipulation should the source of error be constant. We outline the features of the device and accompanying software suite, stress the importance of such independent validation, and highlight typical areas that can be subject to error.