scispace - formally typeset
Author

Ronnie B. Wilbur

Bio: Ronnie B. Wilbur is an academic researcher from Purdue University. The author has contributed to research in topics: Sign language & American Sign Language. The author has an h-index of 34, has co-authored 153 publications, and has received 3,342 citations. Previous affiliations of Ronnie B. Wilbur include the University of Illinois at Urbana–Champaign and Boston University.


Papers
Journal ArticleDOI
TL;DR: Reviews some of the literature indicating that early learning of ASL need not create concerns for the future development of English structure, speech, or other cognitive skills.
Abstract: The purpose of this article is to review research dealing with the use of ASL in teaching English and literacy. I review some of the literature (and direct readers to additional sources) that indicates that early learning of ASL need not create concerns for future development of English structure, speech, or other cognitive skills. I also suggest ways in which ASL can contribute directly to developing more of the high-level skills needed for fluent reading and writing. The global benefit of learning ASL as a first language is that it creates a standard bilingual situation in which teachers and learners can take advantage of one language to assist in acquiring the other and in the transfer of general knowledge. As part of this discussion, I compare English and ASL as natural languages for similarities and differences.

179 citations

Book
01 Jan 1979

127 citations

Book ChapterDOI
01 Jan 1993
TL;DR: Argues that the facts of ASL phonology can be stated using only feature trees, tiers, syllables, and moras, and asks whether certain spoken languages such as English need to refer to segments in their phonological generalizations, or whether SYLLABLE and FEATURE suffice.
Abstract: This chapter presents a current model that places segments inside ASL signs/syllables. It highlights Edmondson's claim that segments do not exist in signed languages. The chapter also discusses why the proposed segments are not relevant to ASL phonology, and the implications of this observation for theoretical phonology in general. It also presents four prevailing interpretations of "segment" in modern linguistic theory. There is no apparent phonological utility to the segments that have been proposed. The chapter discusses how the facts of ASL phonology can be stated using only feature trees, tiers, syllables, and moras, and raises the question of whether certain spoken languages such as English need to make reference to segments in the statement of their own phonological generalizations, or whether SYLLABLE and FEATURE suffice.

99 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
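The mail-filtering application described above can be sketched as a toy naive Bayes text classifier that learns from a user's labeled examples; the messages, labels, and function names below are illustrative, not from the paper:

```python
import math
from collections import Counter

def train(messages):
    """Count word occurrences per label from (text, label) examples."""
    counts = {"spam": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals, smoothing=1.0):
    """Pick the label with the highest smoothed log-likelihood (naive Bayes)."""
    words = text.lower().split()
    vocab = len(set().union(*counts.values()))
    best_label, best_score = None, float("-inf")
    for label in counts:
        n = sum(counts[label].values())
        # Log prior plus per-word log likelihoods with add-one smoothing.
        score = math.log(totals[label] / sum(totals.values()))
        for w in words:
            score += math.log((counts[label][w] + smoothing) / (n + smoothing * vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical examples of messages a user has rejected ("spam") or kept ("ok").
examples = [
    ("win a free prize now", "spam"),
    ("free offer claim prize", "spam"),
    ("meeting notes attached", "ok"),
    ("project schedule for next week", "ok"),
]
counts, totals = train(examples)
print(classify("claim your free prize", counts, totals))  # → spam
```

Retraining on each new rejected or kept message is how such a system "maintains the filtering rules automatically"; the smoothing term keeps unseen words from zeroing out a label's score.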

13,246 citations

Journal ArticleDOI
TL;DR: A review of David McNeill's Hand and Mind: What Gestures Reveal about Thought (Chicago and London: University of Chicago Press, 1992, 416 pp.).
Abstract: Hand and Mind: What Gestures Reveal about Thought. David McNeill. Chicago and London: University of Chicago Press, 1992. 416 pp.

988 citations

Journal ArticleDOI
TL;DR: Findings show that collaboration and social negotiation are not limited to the participants of an EVE, but also exist between participants and avatars, offering a new dimension to computer-assisted learning.
Abstract: This study is a ten-year critical review of empirical research on the educational applications of Virtual Reality (VR). Results show that although the majority of the 53 reviewed articles refer to science and mathematics, researchers from the social sciences also seem to appreciate the educational value of VR and incorporate their learning goals in Educational Virtual Environments (EVEs). Although VR supports multisensory interaction channels, visual representations predominate. Few studies incorporate intuitive interactivity, indicating a research trend in this direction. Few settings use immersive EVEs, and those that do report positive results on users' attitudes and learning outcomes, indicating that there is a need for further research on the capabilities of such systems. Features of VR that contribute to learning, such as first-order experiences, natural semantics, size, transduction, reification, autonomy, and presence, are exploited according to the educational context and content. Presence seems to play an important role in learning, and it is a subject needing further intensive study. Constructivism seems to be the theoretical model on which the majority of the EVEs are based. The studies present real-world, authentic tasks that enable context- and content-dependent knowledge construction. They also provide multiple representations of reality by representing the natural complexity of the world. Findings show that collaboration and social negotiation are not limited to the participants of an EVE, but also exist between participants and avatars, offering a new dimension to computer-assisted learning. Little can yet be concluded regarding the retention of the knowledge acquired in EVEs. Longitudinal studies are necessary, and we believe that the main outcome of this study is the future research perspectives it brings to light.

740 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of 154 studies that apply deep learning to EEG, published between 2010 and 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring.
Abstract: Context Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies, to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective In this work, we review 154 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to (1) the data, (2) the preprocessing methodology, (3) the DL design choices, (4) the results, and (5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozen to several million, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years.
About [Formula: see text] of the studies used convolutional neural networks (CNNs), while [Formula: see text] used recurrent neural networks (RNNs), most often with a total of 3-10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was [Formula: see text] across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. Significance To help the community progress and share work more effectively, we provide a list of recommendations for future studies and emphasize the need for more reproducible research. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly. A planned follow-up to this work will be an online public benchmarking portal listing reproducible results.

699 citations