Book

American Sign Language

TL;DR: This revised dictionary of American Sign Language introduces more than 1500 new signs and 4000 new illustrations; the illustrations depict the motions that express each sign and allow the user to begin signing immediately.
Abstract: This is a revised dictionary of American sign language, introducing more than 1500 new signs and 4000 new illustrations. The 7000 signs in the book represent a large part of the ASL lexicon, and are supported by more than 10,000 illustrations of the hand, arm and facial motions that express each sign and allow the user to begin signing immediately. The dictionary includes signs which have recently come into vogue, such as "log on", "downtime" and "digital".
Citations
Journal ArticleDOI
TL;DR: This paper reviews the literature on the prevalence of ASL use in the U.S., focusing on two demographic research categories: (1) ASL as a language of national origin and (2) deafness.
Abstract: IN THE UNITED STATES, home language use surveys are now commonplace. The decennial census has included inquiries about home language use within immigrant households since 1890 and within all U.S. homes since 1970 (see U.S. Census Bureau 2002, hereafter cited as Measuring America). Public schools, originally to comply with the Bilingual Education Act of 1968, authorized in Title VII, Part A, of the Elementary and Secondary Education Act, routinely collect home language use data for each student enrolled. The number of languages used in homes in the United States, as identified by the various federal and state surveys, is quite large. However, American Sign Language (ASL) is not on the list of non-English languages used in the home, and no state in the union counts its users in either the general or the school population. Conspicuous by its absence in U.S. language census data is an estimate of how many people use American Sign Language in the United States. We have found that California records sign language use in the home when children enter school (e.g., California Department of Education 2004); the Annual Survey of Deaf and Hard of Hearing Children and Youth (hereafter cited as Annual Survey) collects data on sign language use by family members with their deaf or hard of hearing children (e.g., see Mitchell and Karchmer 2005). However, there is no systematic and routine collection of data on sign language or ASL use in the general population. Given that estimates of the number of people who use ASL are relatively easy to find in research and practitioner publications, as well as scattered across the Internet, and range from 100,000 to 15,000,000, we decided to track down their sources. In this review of the literature on the prevalence of ASL use in the United States, we identify a number of misunderstandings. To make sense of them, we focus on two documents in particular: first, a statement presented during the U.S. Senate hearings for the Bilingual Courts Act of 1974 about how sign language use ranks in comparison to other non-English languages in the United States (Beale 1974) and, second, the findings from the National Census of the Deaf Population (NCDP; see Schein and Delk 1974). This in-depth review clarifies the meaning of the original statement from the Bilingual Courts Act of 1974 hearings and provides a more justifiable estimate of the number of signers, based upon the NCDP, the only research study from which data-based estimates may be derived; this number does not necessarily include all ASL users. Before we consider these earlier works, however, we offer some background on the problems of obtaining accurate (let alone current) estimates of how many people use ASL in the United States from large-scale, ongoing national data collection efforts. These include the decennial census of the U.S. population and its companion projects, the Current Population Survey (CPS) and the American Community Survey (ACS), as well as surveys commissioned by other federal agencies, in particular, the National Health Survey (NHS) and the Survey of Income and Program Participation (SIPP).

Demography of Language and Deafness

We focus on two demographic research categories: (1) ASL as a language of national origin and (2) deafness. For more than a century, the federal government has mandated national census counts, or census-based survey estimates, of non-English language use in the U.S. population. Also, originally as an activity of the U.S. Bureau of the Census and then, after a delay of several decades, a U.S. Public Health Service responsibility, there have been regular estimates of the prevalence of deafness and other disabilities in the country. In this section we review some of the specifics of these two demographic categories, language and deafness, and suggest that these distinct projects require a unified perspective before ASL use is likely to be included as part of the demographic description of the U. …

264 citations

Journal ArticleDOI
TL;DR: It is argued that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture.
Abstract: How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

193 citations

Book ChapterDOI
Feng Jiang, Shengping Zhang, Shen Wu, Yang Gao, Debin Zhao
TL;DR: Two essential linguistic characteristics of gestures, the concurrent character of their components and their sequential organization, are explored in a multi-layered framework that extracts features from both the segmented semantic units and the whole gesture sequence and then sequentially classifies the motion, location, and shape components.
Abstract: This paper proposes a novel multi-layered gesture recognition method with Kinect. We explore two essential linguistic characteristics of gestures, the concurrent character of their components and their sequential organization, in a multi-layered framework that extracts features from both the segmented semantic units and the whole gesture sequence and then sequentially classifies the motion, location, and shape components. In the first layer, an improved principal motion model is applied to the motion component. In the second layer, a particle-based descriptor and a weighted dynamic time warping are proposed for classifying the location component. In the last layer, a spatial path warping is further proposed to classify the shape component, represented by unclosed shape context. The proposed method obtains relatively high performance for one-shot-learning gesture recognition on the ChaLearn Gesture Dataset, which comprises more than 50,000 gesture sequences recorded with Kinect.
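To make the second layer's matching step concrete, here is a minimal sketch of weighted dynamic time warping in Python. It is not the paper's exact formulation: the particle-based descriptor, learned per-dimension weights, and any warping-band constraints are simplified assumptions, with uniform weights standing in for the learned ones.

```python
import numpy as np

def weighted_dtw(a, b, w):
    """Weighted DTW distance between two feature sequences.

    a, b: sequences of feature vectors, shapes (T1, D) and (T2, D).
    w: per-dimension weights (D,) emphasizing discriminative features;
       in the paper these would come from the particle-based descriptor
       (assumed computed elsewhere).
    """
    t1, t2 = len(a), len(b)
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            # weighted Euclidean distance between the two frames
            d = np.sqrt(np.sum(w * (a[i - 1] - b[j - 1]) ** 2))
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[t1, t2]

# Toy example: two short 2-D location trajectories, uniform weights.
a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
b = np.array([[0.0, 0.0], [2.0, 2.0]])
print(weighted_dtw(a, b, np.ones(2)))
```

A gesture would then be assigned the class of the nearest template under this distance; the weighting lets dimensions that matter for location dominate the alignment cost.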

97 citations

Journal ArticleDOI
TL;DR: This article investigated the bilingual abilities of 55 Deaf individuals, examining both American Sign Language (ASL) competency and English reading skills; highly competent signers scored higher on a measure of reading comprehension, and family characteristics (e.g., parental education level, family hearing status) were entered into the analysis to ascertain their effect on Deaf individuals' bilingual abilities.
Abstract: The current study investigated the bilingual abilities of 55 Deaf individuals, examining both American Sign Language (ASL) competency and English reading skills. Results revealed a positive relationship between ASL competency and English skills, with highly competent signers scoring higher on a measure of reading comprehension. Additionally, family characteristics (e.g., parental education level, family hearing status) were entered into the analysis to ascertain their effect on Deaf individuals’ bilingual abilities. The findings support the theory that competency in ASL may serve as a bridge to the acquisition of English print. Moreover, the findings provide support for the critical period hypothesis for first language acquisition and its later impact on other cognitive and academic skills.
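The analysis described here, relating ASL competency to reading comprehension while entering family characteristics as covariates, amounts to a multiple regression. A minimal sketch with synthetic stand-in data (variable names, sample generation, and coefficients are all hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical toy data standing in for the study's variables:
# asl = ASL competency score, parent_ed = parental education level,
# deaf_family = family hearing status (1 = Deaf parents),
# reading = reading comprehension score.
rng = np.random.default_rng(0)
n = 55  # matches the study's sample size
asl = rng.normal(size=n)
parent_ed = rng.normal(size=n)
deaf_family = rng.integers(0, 2, size=n).astype(float)
reading = 0.6 * asl + 0.2 * parent_ed + 0.3 * deaf_family \
          + rng.normal(scale=0.5, size=n)

# Regress reading on ASL competency plus the family covariates,
# mirroring the design of entering covariates into the analysis.
X = np.column_stack([np.ones(n), asl, parent_ed, deaf_family])
coefs, *_ = np.linalg.lstsq(X, reading, rcond=None)
print(dict(zip(["intercept", "asl", "parent_ed", "deaf_family"],
               coefs.round(3))))
```

A positive coefficient on the ASL term after controlling for the family covariates would correspond to the reported relationship between ASL competency and English print skills.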

96 citations

Journal ArticleDOI
TL;DR: Evaluation results indicate that the proposed model outperforms state-of-the-art models in hand sign language recognition, hand pose estimation, and hand action recognition.
Abstract: Hand sign language recognition from video is a challenging research area in computer vision whose performance is affected by hand occlusion, fast hand movement, illumination changes, and background complexity, to mention just a few factors. In recent years, deep learning approaches have achieved state-of-the-art results in the field, though these challenges are not completely solved. In this work, we propose a novel deep learning-based pipeline architecture for efficient automatic hand sign language recognition from RGB input videos using a Single Shot Detector (SSD), a 2D Convolutional Neural Network (2DCNN), a 3D Convolutional Neural Network (3DCNN), and a Long Short-Term Memory (LSTM) network. We use a CNN-based model that estimates the 3D hand keypoints from 2D input frames. After that, we connect these estimated keypoints to build the hand skeleton by using a midpoint algorithm. In order to obtain a more discriminative representation of hands, we project the 3D hand skeleton onto surface images from three views. We further employ the heatmap image of the detected keypoints as input for refinement in a stacked fashion. We apply 3DCNNs to the stacked hand features, including pixel-level, multi-view hand skeleton, and heatmap features, to extract discriminative local spatio-temporal features from these stacked inputs. The outputs of the 3DCNNs are fused and fed to an LSTM to model the long-term dynamics of hand sign gestures. Analyzing the 2DCNN vs. the 3DCNN with different numbers of stacked inputs to the network, we demonstrate that the 3DCNN better captures the spatio-temporal dynamics of hands. To the best of our knowledge, this is the first time that this multi-modal and multi-view set of hand skeleton features has been applied to hand sign language recognition. Furthermore, we present a new large-scale hand sign language dataset, namely RKS-PERSIANSIGN, including 10,000 RGB videos of 100 Persian sign words. Evaluation results of the proposed model on three datasets, NYU, First-Person, and RKS-PERSIANSIGN, indicate that our model outperforms state-of-the-art models in hand sign language recognition, hand pose estimation, and hand action recognition.
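As a rough illustration of the final fusion stage, here is a minimal PyTorch sketch of a 3DCNN feeding an LSTM classifier. It assumes hand crops (or stacked skeleton/heatmap features) have already been produced upstream; the SSD detector, keypoint estimation, and multi-view projection are omitted, and all layer sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class SignPipelineSketch(nn.Module):
    """Toy 3DCNN + LSTM stage, assuming pre-extracted hand feature clips."""

    def __init__(self, in_channels=3, hidden=256, num_classes=100):
        super().__init__()
        self.cnn3d = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),           # pool space, keep the time axis
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # collapse space, keep time
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):
        # clips: (batch, C, T, H, W) stacked hand features
        feats = self.cnn3d(clips)               # (batch, 64, T, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1)   # (batch, 64, T)
        feats = feats.transpose(1, 2)           # (batch, T, 64) for the LSTM
        out, _ = self.lstm(feats)               # long-term temporal dynamics
        return self.head(out[:, -1])            # classify from the last step

model = SignPipelineSketch()
dummy = torch.randn(2, 3, 16, 64, 64)  # 2 clips, 16 frames of 64x64 crops
print(model(dummy).shape)  # torch.Size([2, 100])
```

In the paper's full design, several such 3DCNN streams (pixel, skeleton-view, heatmap) are fused before the LSTM; this sketch shows a single stream to keep the temporal modeling idea visible.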

88 citations