Author

Yue-Ting Siu

Bio: Yue-Ting Siu is an academic researcher from San Francisco State University. The author has contributed to research in the topics of video production and professional development. The author has an h-index of 6, has co-authored 12 publications, and has received 82 citations. Previous affiliations of Yue-Ting Siu include University of California, Berkeley.

Papers
Journal ArticleDOI
TL;DR: Two methods of employing novice Web workers to author descriptions of science, technology, engineering, and mathematics images to make them accessible to individuals with visual and print-reading disabilities are compared.
Abstract: This article compares two methods of employing novice Web workers to author descriptions of science, technology, engineering, and mathematics images to make them accessible to individuals with visual and print-reading disabilities. The goal is to identify methods of creating image descriptions that are inexpensive, effective, and follow established accessibility guidelines. The first method explicitly presented the guidelines to the worker, then the worker constructed the image description in an empty text box and table. The second method queried the worker for image information and then used responses to construct a template-based description according to established guidelines. The descriptions generated through queried image description (QID) were more likely to include information on the image category, title, caption, and units. They were also more similar to one another, based on Jaccard distances of q-grams, indicating that their word usage and structure were more standardized. Last, the workers preferred describing images using QID and found the task easier. Therefore, explicit instruction on image-description guidelines is not sufficient to produce quality image descriptions when using novice Web workers. Instead, it is better to provide information about images, then generate descriptions from responses using templates.
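The standardization measure cited above, the Jaccard distance of q-grams, can be sketched in a few lines. The q-gram size, tokenization, and example descriptions below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): comparing two image descriptions by the
# Jaccard distance of their character q-grams, the similarity measure named in the abstract.

def qgrams(text: str, q: int = 3) -> set[str]:
    """Return the set of character q-grams of a whitespace-normalized, lower-cased description."""
    text = " ".join(text.lower().split())
    return {text[i:i + q] for i in range(len(text) - q + 1)}

def jaccard_distance(a: str, b: str, q: int = 3) -> float:
    """Jaccard distance = 1 - |intersection| / |union| of the two q-gram sets."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    if not ga and not gb:
        return 0.0
    return 1.0 - len(ga & gb) / len(ga | gb)

# Hypothetical example descriptions; lower distances across a set of descriptions
# indicate more standardized wording and structure.
d1 = "Bar graph titled Rainfall. X axis: month. Y axis: rainfall in millimeters."
d2 = "Bar graph titled Rainfall. X axis: months. Y axis: rainfall (mm)."
print(jaccard_distance(d1, d2))  # prints a value in [0, 1]; smaller means more similar wording
```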

47 citations

Journal ArticleDOI
01 May 2019 - ZDM
TL;DR: In this paper, the authors argue for the utility of two additional frameworks to enhance UDL efforts: enactivism, a cognitive-sciences view of learning, knowing, and reasoning as modal activity; and ethnomethodological conversation analysis, which investigates participants' multimodal methods for coordinating action and meaning.
Abstract: Blind and visually impaired mathematics students must rely on accessible materials such as tactile diagrams to learn mathematics. However, these compensatory materials are frequently found to offer students inferior opportunities for engaging in mathematical practice and do not allow sensorily heterogeneous students to collaborate. Such prevailing problems of access and interaction are central concerns of Universal Design for Learning (UDL), an engineering paradigm for inclusive participation in cultural praxis like mathematics. Rather than directly adapting existing artifacts for broader usage, the UDL process begins by interrogating the praxis these artifacts serve and then radically re-imagining tools and ecologies to optimize usability for all learners. We argue for the utility of two additional frameworks to enhance UDL efforts: (a) enactivism, a cognitive-sciences view of learning, knowing, and reasoning as modal activity; and (b) ethnomethodological conversation analysis (EMCA), which investigates participants' multimodal methods for coordinating action and meaning. Combined, these approaches help frame the design and evaluation of opportunities for heterogeneous students to learn mathematics collaboratively in inclusive classrooms by coordinating perceptuo-motor solutions to joint manipulation problems. We contextualize the thesis with a proposal for a pluralist design for proportions, in which a pair of students jointly operate an interactive technological device.

30 citations

Proceedings ArticleDOI
03 Jul 2020
TL;DR: The HILML approach facilitates human-machine collaboration to produce high quality video descriptions while keeping a low barrier to entry for volunteer describers and was significantly faster and easier to use for first-time video describers compared to a human-only control condition with no machine learning assistance.
Abstract: Video accessibility is crucial for blind and visually impaired individuals for education, employment, and entertainment purposes. However, professional video descriptions are costly and time-consuming. Volunteer-created video descriptions could be a promising alternative, however, they can vary in quality and can be intimidating for novice describers. We developed a Human-in-the-Loop Machine Learning (HILML) approach to video description by automating video text generation and scene segmentation and allowing humans to edit the output. The HILML approach facilitates human-machine collaboration to produce high quality video descriptions while keeping a low barrier to entry for volunteer describers. Our HILML system was significantly faster and easier to use for first-time video describers compared to a human-only control condition with no machine learning assistance. The quality of the video descriptions and understanding of the topic created by the HILML system compared to the human-only condition were rated as being significantly higher by blind and visually impaired users.
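A minimal sketch of the human-in-the-loop flow described in this abstract, under stated assumptions: the scene segmenter, captioner, and edit loop below are hypothetical stand-ins for the models and interface in the HILML system, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    start: float  # scene start time in seconds
    end: float    # scene end time in seconds

@dataclass
class SceneDescription:
    scene: Scene
    draft: str        # machine-generated description
    final: str = ""   # human-edited description

def detect_scenes(video_path: str) -> list[Scene]:
    # Stand-in for a scene-segmentation model; returns fixed cuts for illustration.
    return [Scene(0.0, 4.2), Scene(4.2, 9.8)]

def generate_caption(video_path: str, scene: Scene) -> str:
    # Stand-in for a video-captioning model.
    return f"A scene from {scene.start:.1f}s to {scene.end:.1f}s."

def describe_video(video_path: str) -> list[SceneDescription]:
    """Machine pass: segment the video and draft a description for each scene."""
    return [SceneDescription(s, generate_caption(video_path, s))
            for s in detect_scenes(video_path)]

def review(descriptions: list[SceneDescription]) -> None:
    """Human pass: a volunteer describer accepts or edits each machine draft."""
    for d in descriptions:
        print(f"[{d.scene.start:.1f}-{d.scene.end:.1f}s] draft: {d.draft}")
        edited = input("Edit (press Enter to accept): ").strip()
        d.final = edited or d.draft

if __name__ == "__main__":
    review(describe_video("lecture.mp4"))
```

The point of the split is the one the abstract makes: the machine pass removes the blank-page burden for novice describers, while the human pass keeps final quality under the volunteer's control.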

24 citations

Journal ArticleDOI
TL;DR: An instrument that measures the assistive technology proficiency of teachers of students with visual impairments and their identification with a community of practice that values technology is presented.
Abstract: This article presents an instrument that measures the assistive technology proficiency of teachers of students with visual impairments and their identification with a community of practice that values technology.

15 citations

Proceedings ArticleDOI
25 Apr 2020
TL;DR: A Human-in-the-Loop Machine Learning (HILML) approach to video description is developed by automating video text generation and scene segmentation while allowing humans to edit the output.
Abstract: Video accessibility is crucial for blind and visually impaired individuals for education, employment, and entertainment purposes. However, professional video descriptions are costly and time-consuming. Volunteer-created video descriptions could be a promising alternative, however, they can vary in quality and can be intimidating for novice describers. We developed a Human-in-the-Loop Machine Learning (HILML) approach to video description by automating video text generation and scene segmentation while allowing humans to edit the output. Our HILML system was significantly faster and easier to use for first-time video describers compared to a human-only control condition with no machine learning assistance. The quality of the video descriptions and understanding of the topic created by the HILML system compared to the human-only condition were rated as being significantly higher by blind and visually impaired users.

15 citations


Cited by
01 Jan 2016
TL;DR: A chapter on Universal Design for Learning (UDL) in the digital age, with reflection questions to help teachers apply UDL principles in their classrooms.

Abstract: This chapter introduces Universal Design for Learning (UDL) principles for teaching in the digital age. Through reflection questions, it asks readers to consider how to build on students' strengths, how to define goals before choosing an instructional model, and what roles digital media and digitized text can play in reaching all learners, including students with disabilities.

239 citations

Proceedings ArticleDOI
02 May 2017
TL;DR: How blind and visually impaired people experience automatically generated captions in two studies about social media images is explored and the role of phrasing in encouraging trust or skepticism in captions is investigated.
Abstract: Research advancements allow computational systems to automatically caption social media images. Often, these captions are evaluated with sighted humans using the image as a reference. Here, we explore how blind and visually impaired people experience these captions in two studies about social media images. Using a contextual inquiry approach (n=6 blind/visually impaired), we found that blind people place a lot of trust in automatically generated captions, filling in details to resolve differences between an image's context and an incongruent caption. We built on this in-person study with a second, larger online experiment (n=100 blind/visually impaired) to investigate the role of phrasing in encouraging trust or skepticism in captions. We found that captions emphasizing the probability of error, rather than correctness, encouraged people to attribute incongruence to an incorrect caption, rather than missing details. Where existing research has focused on encouraging trust in intelligent systems, we conclude by challenging this assumption and consider the benefits of encouraging appropriate skepticism.

148 citations