Topic

American Sign Language

About: American Sign Language is a research topic. Over its lifetime, 3,190 publications have been published within this topic, receiving 79,790 citations. The topic is also known as: ASL & ase.


Papers
Posted Content · DOI
26 Jun 2023
TL;DR: YouTube-ASL is a large-scale, open-domain corpus of American Sign Language (ASL) videos and accompanying English captions drawn from YouTube, used to train baseline models for ASL-to-English translation.
Abstract: Machine learning for sign languages is bottlenecked by data. In this paper, we present YouTube-ASL, a large-scale, open-domain corpus of American Sign Language (ASL) videos and accompanying English captions drawn from YouTube. With ~1000 hours of videos and >2500 unique signers, YouTube-ASL is ~3x as large and has ~10x as many unique signers as the largest prior ASL dataset. We train baseline models for ASL to English translation on YouTube-ASL and evaluate them on How2Sign, where we achieve a new finetuned state of the art of 12.39 BLEU and, for the first time, report zero-shot results.
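As an illustration of the evaluation reported above, the sketch below shows how a corpus-level BLEU score (such as the 12.39 BLEU reported on How2Sign) is typically computed with the sacrebleu library. The example sentences are placeholders, not data from the paper.

```python
# Minimal sketch of corpus-level BLEU scoring for sign-to-text translation.
# The sentences below are hypothetical examples, not data from the paper.
import sacrebleu

# Model outputs (English translations decoded from ASL video).
hypotheses = [
    "the weather is nice today",
    "i am going to the store",
]

# Ground-truth English captions, one reference per hypothesis.
references = [
    "the weather is nice today",
    "i am going to the store later",
]

# sacrebleu expects a list of reference streams (to support multiple references).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```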
Posted Content · DOI
13 Oct 2022
TL;DR: The ASL dataset contains 30k sentences, 416k words, and a vocabulary of 18k words in a total of 104 hours, making it the largest continuous sign language dataset published to date in terms of video duration.
Abstract: Despite tremendous progress in natural language processing using deep learning techniques in recent years, sign language production and comprehension have advanced very little. One critical barrier is the lack of large-scale datasets available to the public, due to the prohibitive cost of labeled data generation. Efforts to provide public data for American Sign Language (ASL) comprehension have yielded two datasets, comprising more than a thousand video clips. These datasets are large enough to enable a meaningful start to deep learning research on sign languages but are far too small to lead to any solution that can be practically deployed. So far, there is still no suitable dataset for ASL production. We propose a system that can generate large-scale ASL datasets for continuous ASL. It is suitable for general ASL processing and is particularly useful for ASL production. The continuous ASL dataset contains English-labeled human articulations in condensed body pose data formats. To better serve the research community, we are releasing the first version of our ASL dataset, which contains 30k sentences, 416k words, and a vocabulary of 18k words, in a total of 104 hours. This is the largest continuous sign language dataset published to date in terms of video duration. We also describe a system that can evolve and expand the dataset to incorporate better data processing techniques and more content as it becomes available. It is our hope that releasing this ASL dataset and the sustainable dataset generation system to the public will propel better deep-learning research in ASL natural language processing.
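For readers curious how corpus statistics like those above (sentence count, word count, vocabulary size) are typically derived, the following is a minimal sketch. It assumes the English sentence labels are available as a plain-text file with one sentence per line; the file name and format are hypothetical, not the dataset's actual release format.

```python
# Minimal sketch of computing corpus statistics such as "30k sentences,
# 416k words, 18k-word vocabulary" from English sentence labels.
# Assumes one labeled sentence per line; file name is hypothetical.
from collections import Counter

def corpus_stats(path: str) -> dict:
    sentences = 0
    vocab = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = line.lower().split()
            if not tokens:
                continue
            sentences += 1
            vocab.update(tokens)
    return {
        "sentences": sentences,
        "words": sum(vocab.values()),
        "vocabulary": len(vocab),
    }

if __name__ == "__main__":
    print(corpus_stats("asl_english_labels.txt"))
```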
Journal Article · DOI
TL;DR: The authors discuss logistical and ideological practices used by signed language corpus projects around the world to develop their corpora, starting with Johnston's early call to develop signed language corpora in 2004.
Abstract: In this chapter, we discuss logistical and ideological practices used by signed language corpus projects around the world to develop their corpora, starting with Johnston's early call to develop signed language corpora in 2004. We then outline a brief history of existing American Sign Language (ASL) corpora or corpus-like collections, most of which are specialized and/or inaccessible. We unpack the lessons we have learned as we, North American linguists interested in signed language research and corpora, have sought to create a national-level corpus following the standards set by earlier projects. We conclude that our contexts are somewhat different, and describe projects that we think would be better suited for our ASL communities. In other words, though our dreams of having a single national-level ASL corpus have 'withered', we remain optimistic that corpus methods can be used to catalog and analyze the wealth of ASL signing data currently available.
09 Jul 2015
TL;DR: A novel method is explained for recognizing the 24 static, image-based alphabet letters of American Sign Language (ASL), from A to Z excluding the dynamic letters J and Z, using two different features, achieving an average recognition rate of 93.36 percent.
Abstract: Communicating with hearing-impaired people without the help of an interpreter is a significant challenge for the general public, so efficient computer-based recognition of sign language is an important research problem. A number of techniques have been developed to date. This article explains a novel method to recognize the 24 static, image-based alphabet letters of American Sign Language (ASL), from A to Z excluding the dynamic letters J and Z, using two different features. The method extracts a feature vector for each image based on a simple orientation histogram along with statistical parameters. A neural network is then used to classify these letters. The method achieves an average recognition rate of 93.36 percent.
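The sketch below illustrates the general pipeline described in this abstract: an orientation histogram computed over image gradients as the feature vector, fed to a small neural network classifier. The bin count, image size, classifier configuration, and placeholder data are assumptions for illustration, not the paper's exact setup.

```python
# Illustrative sketch: orientation-histogram features plus a small neural
# network for static ASL alphabet recognition. Bin count, image size, and
# classifier settings are assumptions, not the paper's configuration.
import numpy as np
from sklearn.neural_network import MLPClassifier

def orientation_histogram(gray: np.ndarray, bins: int = 36) -> np.ndarray:
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(gray.astype(np.float32))
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)  # orientations in [-pi, pi]
    hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Placeholder training data: replace with real grayscale hand images for the
# 24 static letters (J and Z excluded because they involve motion).
rng = np.random.default_rng(0)
images = rng.random((240, 64, 64))        # 240 placeholder 64x64 images
labels = np.repeat(np.arange(24), 10)     # 10 samples per letter class

features = np.stack([orientation_histogram(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```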

Network Information
Related Topics (5)
Language acquisition: 33.9K papers, 957.2K citations, 81% related
Vocabulary: 44.6K papers, 941.5K citations, 80% related
Literacy: 58.3K papers, 1.1M citations, 77% related
Sentence: 41.2K papers, 929.6K citations, 76% related
Recall: 23.6K papers, 989.7K citations, 73% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  105
2022  267
2021  143
2020  176
2019  113
2018  143