Author

Danielle S. McNamara

Bio: Danielle S. McNamara is an academic researcher at Arizona State University. Her research focuses on reading comprehension and reading processes. She has an h-index of 70 and has co-authored 539 publications that have received 22,142 citations. Previous affiliations of Danielle S. McNamara include Wichita State University and Georgia State University.
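
The h-index mentioned above is a simple arithmetic summary of a citation record: the largest value h such that the author has at least h papers with at least h citations each. A minimal sketch of that computation (the citation counts in the example are invented, not the author's actual record):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts, for illustration only.
print(h_index([1276, 1271, 1180, 708, 569, 40, 12, 3]))  # -> 7
```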


Papers
Journal ArticleDOI
TL;DR: This paper investigated the role of text coherence in the comprehension of science texts and found that readers who know little about the domain of the text benefit from a coherent text, whereas high-knowledge readers benefit from minimally coherent text.
Abstract: Two experiments, theoretically motivated by the construction-integration model of text comprehension (W. Kintsch, 1988), investigated the role of text coherence in the comprehension of science texts. In Experiment 1, junior high school students' comprehension of one of three versions of a biology text was examined via free recall, written questions, and a key-word sorting task. This study demonstrates advantages for globally coherent text and for more explanatory text. In Experiment 2, interactions among local and global text coherence, readers' background knowledge, and levels of understanding were examined. Using the same methods as in Experiment 1, we examined students' comprehension of one of four versions of a text, orthogonally varying local and global coherence. We found that readers who know little about the domain of the text benefit from a coherent text, whereas high-knowledge readers benefit from a minimally coherent text. We argue that the poorly written text forces the knowledgeable readers to process the text more actively, compensating for its gaps by drawing on their own knowledge.

1,276 citations

Journal ArticleDOI
TL;DR: Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.
Abstract: Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, Coh-Metrix returns measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.
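
To make the contrast in the last sentence concrete: a traditional readability formula uses only surface counts such as words per sentence and syllables per word, whereas a cohesion-oriented measure looks at relations between sentences. The sketch below is not Coh-Metrix; the Flesch-Kincaid constants are standard, but the syllable counter and the adjacent-sentence overlap score are deliberately crude stand-ins for illustration:

```python
import re

def sentences(text):
    return [s for s in re.split(r"[.!?]+\s*", text) if s]

def words(s):
    return re.findall(r"[a-z']+", s.lower())

def syllables(word):
    # Crude vowel-group count; real tools use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def flesch_kincaid_grade(text):
    sents = sentences(text)
    all_words = [w for s in sents for w in words(s)]
    total_syllables = sum(syllables(w) for w in all_words)
    return (0.39 * len(all_words) / len(sents)
            + 11.8 * total_syllables / len(all_words) - 15.59)

def adjacent_overlap(text):
    # Proportion of adjacent sentence pairs sharing at least one word longer
    # than three letters -- a toy stand-in for referential cohesion.
    content = [set(w for w in words(s) if len(w) > 3) for s in sentences(text)]
    pairs = list(zip(content, content[1:]))
    return sum(bool(a & b) for a, b in pairs) / len(pairs) if pairs else 0.0

text = ("Mitochondria produce energy for the cell. "
        "The cell uses this energy to drive its chemical reactions. "
        "Energy is stored in molecules of ATP.")
print(round(flesch_kincaid_grade(text), 1), adjacent_overlap(text))
```

Two texts with identical word and sentence lengths can receive very different overlap scores, which is the kind of difference the readability formula alone cannot see.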

1,271 citations

BookDOI
15 Feb 2007
TL;DR: This book discusses Latent Semantic Analysis as a theory of meaning, its place in cognitive theory, and its applications in education, information retrieval, and human-computer interaction.
Abstract: Contents:
Part I: Introduction to LSA: Theory and Methods. T.K. Landauer, LSA as a Theory of Meaning. D. Martin, M. Berry, Mathematical Foundations Behind Latent Semantic Analysis. S. Dennis, How to Use the LSA Website. J. Quesada, Creating Your Own LSA Spaces.
Part II: LSA in Cognitive Theory. W. Kintsch, Meaning in Context. M. Louwerse, Symbolic or Embodied Representations: A Case for Symbol Interdependency. M.W. Howard, K. Addis, B. Jing, M.K. Kahana, Semantic Structure and Episodic Memory. G. Denhière, B. Lemaire, C. Bellissens, S. Jhean-Larose, A Semantic Space for Modeling Children's Semantic Memory. P. Foltz, Discourse Coherence and LSA. J. Quesada, Spaces for Problem Solving.
Part III: LSA in Educational Applications. K. Millis, J. Magliano, K. Wiemer-Hastings, S. Todaro, D.S. McNamara, Assessing and Improving Comprehension With Latent Semantic Analysis. D.S. McNamara, C. Boonthum, I. Levinstein, K. Millis, Evaluating Self-Explanations in iSTART: Comparing Word-Based and LSA Algorithms. A. Graesser, P. Penumatsa, M. Ventura, Z. Cai, X. Hu, Using LSA in AutoTutor: Learning Through Mixed-Initiative Dialog in Natural Language. E. Kintsch, D. Caccamise, M. Franzke, N. Johnson, S. Dooley, Summary Street: Computer-Guided Summary Writing. L. Streeter, K. Lochbaum, N. LaVoie, J.E. Psotka, Automated Tools for Collaborative Learning Environments.
Part IV: Information Retrieval and HCI Applications of LSA. S.T. Dumais, LSA and Information Retrieval: Getting Back to Basics. P.K. Foltz, T.K. Landauer, Helping People Find and Learn From Documents: Exploiting Synergies Between Human and Computer Retrieval With SuperManual. M.H. Blackmon, M. Kitajima, D.R. Mandalia, P.G. Polson, Automating Usability Evaluation: Cognitive Walkthrough for the Web Puts LSA to Work on Real-World HCI Design Problems.
Part V: Extensions to LSA. D.S. McNamara, Z. Cai, M.M. Louwerse, Optimizing LSA Measures of Cohesion. X. Hu, Z. Cai, P. Wiemer-Hastings, A.C. Graesser, D.S. McNamara, Strength, Weakness, and Extensions of LSA. M. Steyvers, T. Griffiths, Probabilistic Topic Models. S. Dennis, Introducing Word Order Within the LSA Framework.
Part VI: Conclusion. W. Kintsch, D.S. McNamara, S. Dennis, T.K. Landauer, LSA and Meaning: In Theory and Application.
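
For readers unfamiliar with the technique the volume is organized around: latent semantic analysis builds a term-by-document matrix, reduces it with a truncated singular value decomposition, and measures similarity between the resulting vectors by their cosine. A minimal sketch with NumPy (the toy corpus and the choice of two latent dimensions are for illustration only; real LSA spaces are built from large collections):

```python
import numpy as np

docs = [
    "the cat chased the mouse",
    "a cat and a mouse played in the house",
    "stock prices fell on the market today",
    "the market rallied as prices rose",
]

# Term-by-document count matrix.
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD: keep k latent dimensions.
k = 2
U, S, Vt = np.linalg.svd(A, full_matrices=False)
doc_vecs = (np.diag(S[:k]) @ Vt[:k]).T   # one k-dimensional vector per document

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pairwise cosine similarities of the documents in the reduced space.
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        print(i, j, round(cosine(doc_vecs[i], doc_vecs[j]), 2))
```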

1,180 citations

Journal ArticleDOI
TL;DR: Two experiments, theoretically motivated by the construction‐integration model of comprehension, investigated effects of prior knowledge on learning from high‐ and low‐coherence history texts and indicated that the low‐ coherence text requires more inference processes.
Abstract: Two experiments, theoretically motivated by the construction‐integration model of comprehension (W. Kintsch, 1988), investigated effects of prior knowledge on learning from high‐ and low‐coherence history texts. In Experiment 1, participants’ comprehension was examined through free recall, multiple‐choice questions, and a keyword sorting task. An advantage was found for the high‐coherence text on recall and multiple‐choice questions. However, high‐knowledge readers performed better on the sorting task after reading the low‐coherence text. In Experiment 2, participants’ comprehension was examined through open‐ended questions and the sorting task both immediately and after a 1‐week delay. Little effect of delay was found, and the previous sorting task results failed to replicate. As predicted, high‐knowledge readers performed better on the open‐ended questions after reading the low‐coherence text. Reading times from both experiments indicated that the low‐coherence text requires more inference processes. Th...

708 citations

Book ChapterDOI
TL;DR: Current models of comprehension are not necessarily contradictory but rather cover different spectra of comprehension processes; no single model adequately accounts for the wide variety of reading situations that have been observed, and the range of comprehension considered thus far in comprehension models is too limited.
Abstract: The goal of this chapter is to provide the foundation for developing a more comprehensive model of reading comprehension. To this end, seven prominent comprehension models (Construction–Integration, Structure-Building, Resonance, Event-Indexing, Causal Network, Constructionist, and Landscape) are described, evaluated, and compared. We describe what comprehension models have offered thus far, the differences and similarities between them, and which comprehension processes are not included in any of the models and thus should be included in a comprehensive model. Our primary conclusion from this review is that current models of comprehension are not necessarily contradictory but rather cover different spectra of comprehension processes. Further, no single model adequately accounts for the wide variety of reading situations that have been observed, and the range of comprehension considered thus far in comprehension models is too limited.
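
As a concrete illustration of one of the models compared in the chapter, the Construction-Integration model is commonly described as first constructing a network of text propositions and associated knowledge, then integrating it by letting activation settle over the network through iterative constraint satisfaction. The sketch below shows only that settling step; the propositions, link weights, and stopping rule are invented for illustration and are not taken from the chapter:

```python
import numpy as np

# Hypothetical connectivity matrix over four propositions: positive weights
# link mutually supporting propositions, negative weights link conflicting ones.
W = np.array([
    [0.0, 0.6, 0.3, -0.4],
    [0.6, 0.0, 0.5,  0.0],
    [0.3, 0.5, 0.0,  0.2],
    [-0.4, 0.0, 0.2, 0.0],
])

a = np.ones(4) / 4              # start with uniform activation

for _ in range(100):            # spread activation until the pattern settles
    a_new = W @ a
    a_new = np.clip(a_new, 0.0, None)      # no negative activation
    if a_new.sum() == 0:
        break
    a_new = a_new / a_new.sum()            # renormalize each cycle
    if np.allclose(a_new, a, atol=1e-6):
        break
    a = a_new

print(a)   # propositions with more mutual support retain more activation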

569 citations


Cited by
01 Jan 2016

14,604 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
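
The fourth category above, a mail filter learned from the messages a particular user rejects, is straightforward to make concrete. A minimal sketch using scikit-learn's bag-of-words features and a naive Bayes classifier (the tiny training set is invented; a real filter would train on the user's own accept/reject history):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Messages the user has already kept (0) or rejected (1).
messages = [
    "meeting moved to friday afternoon",
    "quarterly report attached for review",
    "win a free prize claim your reward now",
    "limited time offer click here for cheap loans",
]
labels = [0, 0, 1, 1]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Predict whether new messages should be filtered out.
print(model.predict(["claim your free loans now"]))       # likely [1]
print(model.predict(["agenda for the friday meeting"]))   # likely [0]
```

Retraining the model as the user keeps accepting or rejecting mail is what "maintain the filtering rules automatically" amounts to in practice.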

13,246 citations

Book
01 Jan 1993
TL;DR: This guide to the methods of usability engineering provides cost-effective methods that will help developers improve their user interfaces immediately and shows you how to avoid the four most frequently listed reasons for delay in software projects.
Abstract: From the Publisher: Written by the author of the best-selling HyperText & HyperMedia, this book provides an excellent guide to the methods of usability engineering. It emphasizes cost-effective methods that will help developers improve their user interfaces immediately, shows how to avoid the four most frequently listed reasons for delay in software projects, provides step-by-step guidance on which methods to use at various stages of the development life cycle, and covers the unique issues of international usability. You do not need previous knowledge of usability to implement the methods provided, yet all of the latest research is covered.

11,929 citations

01 Jan 2002

9,314 citations