
Showing papers on "Interactive video" published in 2019


Journal ArticleDOI
TL;DR: In this paper, a hybrid spatio-temporal feature-based student behavior recognition (SBR) system that recognizes student-student behaviors from sequences of digital images is presented, and a survey is performed to evaluate the effectiveness of video-based interactive learning using the proposed SBR system.
Abstract: Rapid growth and recent developments in the education sector and information technologies have promoted E-learning and collaborative sessions among learning communities and business incubator centers. Traditional practices are being replaced with webinars (live online classes), E-Quizzes (online testing), and video lectures for effective learning and performance evaluation. These E-learning methods use sensors and multimedia tools to contribute to resource sharing, social networking, interactivity, and corporate training. Meanwhile, artificial intelligence tools are also being integrated into various industries and organizations for students' engagement and adaptability to the digital world. Predicting students' behaviors and providing intelligent feedback is an important parameter in the E-learning domain. To optimize students' behaviors in virtual environments, we have proposed the idea of embedding cognitive processes into information technologies. This paper presents a hybrid spatio-temporal feature-based student behavior recognition (SBR) system that recognizes student-student behaviors from sequences of digital images. The proposed SBR system segments student silhouettes by observing neighboring data points and extracts co-occurring, robust spatio-temporal features using both full-body and key-body-point techniques. Then, an artificial neural network is used to measure student interactions taken from the UT-Interaction and classroom behaviors datasets. Finally, a survey is performed to evaluate the effectiveness of video-based interactive learning using the proposed SBR system.

95 citations
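The feature-then-classify pipeline the abstract describes can be caricatured in a few lines of Python. This is a toy sketch, not the paper's method: the key-point coordinates are invented, the temporal part is reduced to frame-to-frame deltas, and a nearest-centroid rule stands in for the artificial neural network.

```python
# Toy sketch of a spatio-temporal behavior descriptor plus a
# nearest-centroid classifier (a simplified stand-in for the
# paper's artificial neural network). All data are illustrative.

def descriptor(frames):
    """frames: list of [(x, y), ...] key body points, one list per frame.
    Returns positions plus frame-to-frame deltas (the temporal part)."""
    feat = []
    for t, pts in enumerate(frames):
        for i, (x, y) in enumerate(pts):
            feat.extend([x, y])
            if t > 0:                      # motion between frames
                px, py = frames[t - 1][i]
                feat.extend([x - px, y - py])
    return feat

def classify(feat, centroids):
    """Pick the behavior label whose centroid is closest (Euclidean)."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist(feat, centroids[label]))
```

In practice the centroids would be replaced by a trained network over many labeled interaction sequences; the descriptor shape (positions plus deltas) is the part that mirrors the "spatio-temporal" idea.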


Proceedings ArticleDOI
15 Jun 2019
TL;DR: This work proposes a new multi-round training scheme for interactive video object segmentation so that the networks can learn to understand the user's intention and update incorrect estimations during training.
Abstract: We present a deep learning method for interactive video object segmentation. Our method is built upon two core operations, interaction and propagation, each conducted by a convolutional neural network. The two networks are connected both internally and externally so that they are trained jointly and interact with each other to solve the complex video object segmentation problem. We propose a new multi-round training scheme for interactive video object segmentation so that the networks can learn to understand the user's intention and update incorrect estimations during training. At test time, our method produces high-quality results and also runs fast enough to work with users interactively. We evaluated the proposed method quantitatively on the interactive track benchmark of the DAVIS Challenge 2018, outperforming other competing methods by a significant margin in both speed and accuracy. We also demonstrated that our method works well with real user interactions.

53 citations
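The multi-round interaction/propagation idea can be illustrated with a toy loop in which per-frame mask quality is a single number: each round the "user" corrects the worst frame and the correction is propagated outward with decaying strength. The `gain` and `decay` values are invented; the real method uses two jointly trained CNNs instead of these updates.

```python
# Schematic multi-round loop for interactive segmentation. Quality is a
# per-frame score in [0, 1]; interaction = fix the worst frame, and
# propagation = boost neighbors with exponentially decaying strength.

def run_rounds(quality, n_rounds, gain=0.5, decay=0.5):
    quality = list(quality)
    for _ in range(n_rounds):
        worst = min(range(len(quality)), key=quality.__getitem__)
        quality[worst] = 1.0                       # "user" correction
        for f in range(len(quality)):              # propagation step
            if f != worst:
                boost = gain * decay ** abs(f - worst)
                quality[f] = min(1.0, quality[f] + boost * (1 - quality[f]))
    return quality
```

The point of the multi-round structure is visible even in this caricature: each round's correction lands where it helps most, and propagation spreads it to nearby frames.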


Journal ArticleDOI
TL;DR: It is argued that DMN activation enhances learning of temporal and spatial context, that this type of learning is characteristic of receptive media, and that receptive and interactive screen media enhance different types of comprehension and learning.

43 citations


Journal Article
TL;DR: The results showed significant differences between the learning achievement of the experimental group and the control group, and that the interactive motion graphic media is effective for improving fifth graders' knowledge in the science subject.
Abstract: The purpose of this study was to investigate the effectiveness of motion graphic animation video media developed for Natural Sciences subjects in elementary schools. This study uses a Research and Development design with quantitative tests, employing the experimental research method with 27 students in the control group and 27 students in the experimental group. The research was conducted with 5th-grade students at 2 different elementary schools. Data collection used interviews for the preliminary study, plus observations and tests to measure the effectiveness of the motion graphic animation video media. The results showed significant differences between the learning achievement of the experimental group and the control group, and the motion graphic animation video media proved effective in improving student achievement, especially in the experimental group. Hence, the interactive motion graphic media is effective for improving fifth graders' knowledge in the science subject.

33 citations


Proceedings ArticleDOI
01 Apr 2019
TL;DR: The architecture, called iView, intelligently determines video quality and reduces latency without pre-programmed models or assumptions; multimodal learning and deep reinforcement learning are advocated in its design.
Abstract: Recently, the fusion of 360° video and multi-viewpoint video, called multi-viewpoint (MVP) 360° interactive video, has emerged and created a much more immersive and interactive user experience, but it calls for a low-latency solution for requesting high-definition content. Viewing-related features such as head movement have recently been studied, but several key issues still need to be addressed. On the viewer side, it is not clear how to effectively integrate different types of viewing-related features. At the session level, questions such as how to optimize video quality under dynamic networking conditions and how to build an end-to-end mapping between these features and the quality selection remain to be answered. The solutions to these questions are further complicated by many practical challenges, e.g., incomplete feature extraction and inaccurate prediction. This paper presents an architecture, called iView, to address the aforementioned issues in an MVP 360° interactive video scenario. To fully understand the viewing-related features and provide a one-step solution, we advocate multimodal learning and deep reinforcement learning in the design. iView intelligently determines video quality and reduces latency without pre-programmed models or assumptions. We have evaluated iView with multiple real-world video and network datasets. The results showed that our solution effectively utilizes the features of video frames, networking throughput, head movements, and viewpoint selections, achieving at least 27.2%, 15.4%, and 2.8% improvements on the three video datasets, respectively, compared with several state-of-the-art methods.

32 citations
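iView's learned quality selection can be hinted at with a much simpler stand-in: an epsilon-greedy bandit that learns which quality level maximizes a quality-minus-rebuffering reward under observed throughput. The bitrates, reward shape, and hyperparameters below are all illustrative assumptions, not iView's.

```python
import random

# Toy epsilon-greedy learner choosing a video quality level. Reward is
# perceived quality minus a rebuffer penalty when the chosen bitrate
# exceeds throughput — a crude stand-in for a deep RL policy.

BITRATES = [1.0, 2.5, 5.0]          # Mbps per quality level (assumed)

def reward(level, throughput):
    quality = level + 1              # higher level, higher perceived quality
    penalty = 4.0 if BITRATES[level] > throughput else 0.0
    return quality - penalty

def train(throughputs, eps=0.1, seed=0):
    rng = random.Random(seed)
    value = [0.0] * len(BITRATES)    # running mean reward per level
    count = [0] * len(BITRATES)
    for tput in throughputs:
        explore = rng.random() < eps
        a = rng.randrange(len(BITRATES)) if explore \
            else max(range(len(BITRATES)), key=value.__getitem__)
        r = reward(a, tput)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]   # incremental mean
    return value
```

With a steady 3.0 Mbps link, the learner settles on the middle level: the top bitrate rebuffers and the bottom one wastes quality headroom.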


Proceedings ArticleDOI
04 Jun 2019
TL;DR: A critical analysis of the work itself, including audience reactions, and an initial user study using Roth's measurement toolbox examine the factors driving users' enjoyment and the factors that might mitigate the experience.
Abstract: The Netflix production Bandersnatch represents a potentially crucial step for interactive digital narrative videos, due to the platform's reach, popularity, and ability to finance costly experimental productions. Indeed, Netflix has announced that it will invest more in interactive narratives – moving into romance and other genres – which makes Bandersnatch even more important as a first step and harbinger of things to come. For us, the question was therefore how audiences react to Bandersnatch: what are the factors driving users' enjoyment, and what factors might mitigate the experience? For example, the novelty value of an interactive experience on Netflix might be a crucial aspect, as might the combination with the successful series Black Mirror. We approach these questions from two angles – with a critical analysis of the work itself, including audience reactions, and an initial user study using Roth's measurement toolbox (N = 32).

23 citations


Journal ArticleDOI
TL;DR: A conceptual model for the design of instructional scenarios integrating hypervideo as an instructional tool is provided, based on the following two layers of design decisions: the first pertains to the interactivity features and the second is connected with the instructional strategy.
Abstract: In this article, we provide a conceptual model for the design of instructional scenarios integrating hypervideo as an instructional tool. The model provides a structural aid for making design decisions...

21 citations


Book ChapterDOI
08 Jan 2019
TL;DR: The most recent version of diveXplore, which has been successfully used for the latest two Video Browser Showdown competitions, is presented, with a Feature Map Autopilot, which ensures time-efficient inspection of feature maps without gaps and unnecessary visits.
Abstract: We present the most recent version of our Deep Interactive Video Exploration (diveXplore) system, which has been successfully used for the latest two Video Browser Showdown competitions (VBS2017 and VBS2018) as well as for the first Lifelog Search Challenge (LSC2018). diveXplore is based on a plethora of video content analysis and processing methods, such as simple color, texture, and motion analysis, self-organizing feature maps, and semantic concept extraction with different deep convolutional neural networks. The biggest strength of the system, however, is that it provides a variety of video search and rich interaction features. One of the novelties in the most recent version is a Feature Map Autopilot, which ensures time-efficient inspection of feature maps without gaps and unnecessary visits.

19 citations


Journal ArticleDOI
TL;DR: Findings presented indicate that second-level students predominantly value the use of digital video as a learning tool due to its motivational value, its ability to explain concepts, and its provision of examples and real-world scenarios.
Abstract: Research indicates that teenagers and young adults of second-level school age in Ireland are increasingly immersed in a world of technology. Online video accounts for much of their time spent online and is often used for educational purposes. While Irish government initiatives such as the Digital Strategy for Schools (2015–2020) aim to encourage the integration of technology in the school system, exposure to technology continues to occur predominantly outside the school setting. This study begins by examining this context, paying particular attention to the growth of online video. Following this, the educational value of video is discussed, along with strategies and tools for its integration in the classroom. In the methodology section, the process of integrating digital video into eight second-level classes is explained, including trainee teachers' involvement. The findings indicate that second-level students predominantly value the use of digital video as a learning tool due to its motivational value, its ability to explain concepts, and its provision of examples and real-world scenarios.

16 citations


Journal ArticleDOI
TL;DR: An interactive video-based smart learning system has been designed, which allows for the streaming video of live as well as prerecorded lecture sessions offering an interactive teaching-learning experience and supports both mobile devices and desktop computers.
Abstract: The popularity of smart learning has soared due to its flexibility, ubiquity, context-awareness, and adaptiveness. In particular, video-based m-learning has the biggest impact on the learning process: its live and realistic features make learning interactive, easy, and fast. This article establishes the importance of video-based learning and m-learning in smart learning while discussing the basics of a smart learning environment and its requirements. A framework and model for smart learning are presented, and a streaming video adaptation model is proposed for mobile devices. Based on the model, an interactive video-based smart learning system has been designed that streams both live and prerecorded lecture sessions, offering an interactive teaching-learning experience. The application supports both mobile devices and desktop computers. The model was implemented with a group of students, and their feedback shows a high rate of acceptance of the system, while a sizable percentage of them acknowledged that it improved their teaching-learning process significantly.

16 citations
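A minimal sketch of the kind of streaming adaptation model such a player needs: smooth the measured throughput with an exponentially weighted moving average, then pick the highest rendition that fits under a safety margin. The rendition ladder, `alpha`, and `headroom` below are assumed illustrative values, not the paper's model.

```python
# Pick a video rendition from smoothed throughput measurements.
# (height, Mbps) pairs and parameters are illustrative.

RENDITIONS = [(240, 0.4), (360, 0.8), (720, 1.5), (1080, 3.0)]

def pick_rendition(samples, alpha=0.3, headroom=0.8):
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est       # EWMA smoothing
    budget = est * headroom                       # keep a safety margin
    best = RENDITIONS[0]                          # fall back to the lowest
    for height, rate in RENDITIONS:
        if rate <= budget:
            best = (height, rate)
    return best
```

On a ~2 Mbps link this picks 720p (1.5 Mbps fits under the 80% budget), and on a very poor link it falls back to the lowest rendition rather than stalling.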



Journal ArticleDOI
TL;DR: The study shows that the proposed learning setting could become a promising means of promoting self-paced interactive learning in the classroom, with students demonstrating impressive self-control, self-discipline, and learning autonomy.
Abstract: Studies show that interactive educational video can reduce cognitive overload, guide viewers' attention, and trigger reflection; moreover, tablets can help students to increase self-directed learning, take ownership of the learning process, and collaborate with one another. In this study, we examine whether interactive video together with tablets and an online course learning environment can become the means for promoting efficient and effective self-paced learning in the classroom. In traditional elementary classes, students most often play a somewhat passive role in pacing and organizing their learning progress. Students in our study were asked to follow a learning path of interactive videos and other learning units in pairs while the teacher played only a supportive role. Two classes of fifth grade (30 students) and two classes of sixth grade (30 students) used the proposed environment for two 90-minute sessions. The interactive videos and learning activities were designed to address students' misconceptions about heat transfer. Data were collected through pre-post tests, focus groups, attitude questionnaires for students/teachers, and researchers' observations. Students scored significantly higher in the post-test than they did in the pretest, and they were very positive about the prospects of the proposed approach, which they associated with pros such as learning efficiency, learning effectiveness, self-directed learning, enjoyment, and better classroom dynamics. Students demonstrated impressive self-control, self-discipline, and learning autonomy and successfully managed their own progress. The study shows that the proposed learning setting could become a promising means of promoting self-paced interactive learning in the classroom.

Journal ArticleDOI
11 Nov 2019
TL;DR: Integration of H5P content within course material provides opportunities for students as learners to think critically about what they are being taught and supports the flexibility students are requesting by extending the learning environment.
Abstract: Active learning is a popular and proven method in contemporary educational design and practice. H5P (https://h5p.org/) facilitates the easy creation of rich HTML5 content. Integrating H5P content within course material provides opportunities for students as learners to think critically about what they are being taught, and supports the flexibility students are requesting by extending the learning environment. A variety of activities can be developed: case-study scenarios, interactive technical demonstrations, 3D images with identification of regions of interest (hotspots, roll-over information, animation), as well as quiz questions in a wide variety of formats, including fill in the blanks, image- and text-based drag and drop, mark the word, interactive video, and branching scenario tasks. H5P content can be easily shared across multiple learning management systems (Canvas, Moodle, and Blackboard). Learners receive comprehensive, automatic feedback, and their engagement with H5P activities can be tracked by teachers.

Journal ArticleDOI
TL;DR: In this article, a survey examines faculty and student perceptions of videos in the online classroom with an emphasis on the practical factors that influence video integration, finding that faculty desire more opportunities to interact with their students (i.e., video-based discussions, video-conferencing, and student-generated videos).
Abstract: While instructors and students generally value the integration of videos in the online classroom, there are a number of practical considerations that may mediate the utility of videos as a teaching and learning tool. The current survey examines faculty and student perceptions of videos in the online classroom with an emphasis on the practical factors that influence video integration. Results indicate differences in faculty and student acceptance and endorsements of videos for content presentation compared to assignment feedback. Faculty desire more opportunities to interact with their students (i.e., video-based discussions, video-conferencing, and student-generated videos) and highlighted efficiency as a key consideration. Students emphasized a desire for multiple opportunities to engage with course material; while students value text-based resources, they also want to have options to learn and interact via video and audio. Key to student recommendations is an awareness of the time involved to engage with online videos. Discussion highlights practical approaches to maximize the value and utility of videos in the online classroom.

Proceedings ArticleDOI
01 Sep 2019
TL;DR: This paper considers four potentially useful types of videos for CrowdRE and how to produce them, and describes the essential steps of creating a useful video, making it interactive, and presenting it to stakeholders.
Abstract: In CrowdRE, heterogeneous crowds of stakeholders are involved in requirements elicitation. One major challenge is to inform several people about a complex and sophisticated piece of software so that they can effectively contextualize and contribute their opinions and insights. Overly technical or boring textual representations might lead to misunderstandings or even repel some people. Videos may be better suited for this purpose, and several variants of video are available: linear videos have been used for tutorials on YouTube and similar platforms; interactive media have been proposed for activating commitment and valuable feedback; and vision videos were explicitly introduced to solicit feedback about product visions and software requirements. In this paper, we describe the essential steps of creating a useful video, making it interactive, and presenting it to stakeholders. We consider four potentially useful types of videos for CrowdRE and how to produce them. To evaluate the feasibility of this approach for creating video variants, all presented steps were performed in a case study.

Journal Article
TL;DR: In this article, the authors designed and tested interactive multimedia games to enhance the emotional intelligence (EI) of deaf and hard of hearing (DHH) learners age 13 to 15 in Thailand.
Abstract: The purpose of this study was to design and test interactive multimedia games to enhance the emotional intelligence (EI) of deaf and hard of hearing (DHH) learners aged 13 to 15 in Thailand. The main content of each of the six games focused on improving EI. The interactive multimedia games were tested with 10 DHH learners in a school for the deaf in the eastern part of Thailand over a 12-week period. The Thai Emotional Intelligence Screening Test (TEIST) served as a pre- and post-test, and the results were analyzed using the Wilcoxon signed-rank test. The results of the study showed a significant improvement in emotional self-control, empathy, problem-solving, self-regard, life satisfaction, and peace in all participants. Overall, the results showed that interactive multimedia games can have beneficial effects on the EI of DHH learners.
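The study's analysis step, a Wilcoxon signed-rank test on pre/post TEIST scores, can be reproduced for small samples in plain Python by enumerating the exact null distribution. This sketch ignores tied ranks, so it is only for illustration with distinct differences.

```python
from itertools import product

# Exact Wilcoxon signed-rank test for small samples (like the study's
# n = 10): rank |post - pre|, sum the positive ranks, then enumerate all
# 2^n sign assignments for the exact two-sided p-value. No tie handling.

def wilcoxon_exact(pre, post):
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    for r, i in enumerate(ranked):
        ranks[i] = r + 1.0
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    n = len(diffs)
    # Under the null, each rank is positive or negative with equal chance.
    stats = [sum(c * r for c, r in zip(signs, ranks))
             for signs in product([0, 1], repeat=n)]
    mean = sum(ranks) / 2.0
    extreme = sum(1 for s in stats if abs(s - mean) >= abs(w_plus - mean))
    return w_plus, extreme / len(stats)
```

With all eight participants improving, the statistic takes its maximum value and the exact two-sided p-value is 2/256 ≈ 0.0078, matching the textbook result for n = 8.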

Proceedings Article
01 Jan 2019
TL;DR: In this paper, a neural network-based attention module is proposed to detect audio-visual synchronization in a multimedia presentation, which is capable of weighting different portions (spatio-temporal blocks) of the video based on their respective discriminative power.
Abstract: With the development of media and networking technologies, multimedia applications ranging from feature presentation in a cinema setting to video on demand to interactive video conferencing are in great demand. Good synchronization between audio and video modalities is a key factor towards defining the quality of a multimedia presentation. The audio and visual signals of a multimedia presentation are commonly managed by independent workflows - they are often separately authored, processed, stored and even delivered to the playback system. This opens up the possibility of temporal misalignment between the two modalities - such a tendency is often more pronounced in the case of produced content (such as movies). To judge whether audio and video signals of a multimedia presentation are synchronized, we as humans often pay close attention to discriminative spatio-temporal blocks of the video (e.g. synchronizing the lip movement with the utterance of words, or the sound of a bouncing ball at the moment it hits the ground). At the same time, we ignore large portions of the video in which no discriminative sounds exist (e.g. background music playing in a movie). Inspired by this observation, we study leveraging attention modules for automatically detecting audio-visual synchronization. We propose neural network based attention modules, capable of weighting different portions (spatio-temporal blocks) of the video based on their respective discriminative power. Our experiments indicate that incorporating attention modules yields state-of-the-art results for the audio-visual synchronization classification problem.
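The attention idea is easy to sketch: score each spatio-temporal block, softmax the scores into weights, and pool per-block synchronization evidence into one clip-level score. In the paper both the scores and the evidence are learned by the network; in this toy they are given numbers.

```python
import math

# Attention-weighted pooling over spatio-temporal blocks: discriminative
# blocks (lip movement, a bouncing ball) dominate the clip-level score,
# while uninformative blocks (background music) are down-weighted.

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attended_sync_score(block_scores, sync_evidence):
    """block_scores: discriminative power per block.
    sync_evidence: per-block audio-visual agreement in [0, 1]."""
    weights = softmax(block_scores)
    return sum(w * e for w, e in zip(weights, sync_evidence))
```

When only the first block is discriminative and in sync, the attended score stays near 1, whereas a uniform average over all blocks would be dragged down by the uninformative ones.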

Journal ArticleDOI
TL;DR: In this paper, games from the perspectives of design and analysis are presented to describe how games might employ pedagogical strategies that capitalize on their strengths as interactive media, such as games' ability to be interactive and educational.
Abstract: This article approaches games from the perspectives of design and analysis in order to describe how games might employ pedagogical strategies that capitalize on their strengths as interactive media...

Journal ArticleDOI
25 Dec 2019
TL;DR: The results showed that the interactive video tutorial learning media is highly feasible for use as learning media.
Abstract: This study aimed to develop interactive video tutorial learning media feasible to be applied as learning media for C++ Basic Programming, and to determine the feasibility of the interactive video tutorial learning media as basic programming learning media for the Informatics Education Study Program of Hamzanwadi University. This is a research and development (R&D) study using the ADDIE development model. The data used are quantitative, collected via a questionnaire, and analyzed with quantitative descriptive techniques. The results showed that the interactive tutorial video learning media product was rated feasible by material experts with a percentage of 85%, rated very feasible by media experts with a percentage of 89%, and received user responses rated Very High with a percentage of 83%. Thus, the interactive video tutorial learning media is highly feasible for use as learning media.

Journal ArticleDOI
04 Dec 2019
TL;DR: The main aim of this study was to address inclusive practices for children with autism; the proposed CNN-algorithm-based LIV4Smile intervention resulted in high accuracy in facial smile detection.
Abstract: The purpose of this paper is to propose and develop a live interaction-based video player system named LIV4Smile for the improvement of the social smile in individuals with autism spectrum disorder (ASD). The proposed LIV4Smile intervention is a video player operated by detecting smiles using a convolutional neural network (CNN)-based algorithm; to maintain live interaction, a CNN-based smile detector was configured and used in the system. A statistical test was conducted to validate the system's performance, and a significant improvement was observed in the smile responses of individuals with ASD when the proposed LIV4Smile system was used in a real-time environment. The small sample size, the clinical utilization for validation, and the initial training of ASD individuals on LIV4Smile should be considered limitations. The main aim of this study was to address inclusive practices for children with autism, and the proposed CNN-algorithm-based LIV4Smile intervention resulted in high accuracy in facial smile detection.


Proceedings ArticleDOI
19 Jun 2019
TL;DR: The tool will enable content producers and editors to create stories in 360° videos and in combination with non-360° media content by providing an easy-to-use web-based editor that targets devices with various characteristics and input capabilities such as TVs, tablets and head-mounted displays (HMDs).
Abstract: Existing 360° video players on the market playback only one type of media in their timeline. This paper introduces a new tool that allows creating interactive 360° videos with storytelling elements that can switch between flat and 360° videos in one timeline. The tool also integrates other media types like images, audio, and web resources that can be interlinked with each other to create one or multiple branched stories via non-linear video technology. The benefits of this approach are far-reaching, i.e., it allows broadcasters to create genres of programming that combine both traditional footage and immersive 360° video segments in one seamless experience. It enables a highly effective and attractive lean-back and lean-in experience for the audience. Moments of immersion can then be coupled with traditional storytelling, which is an attractive option for content creators who can enhance existing material with immersive moments in a way that is cost effective and allows them to fall back on their pre-existing skills as story and program creators. It will pave the way for much broader adoption and production of spherical content among content providers of the first wave, as it lowers the barrier of entry. The tool will enable content producers and editors to create stories in 360° videos and in combination with non-360° media content by providing an easy-to-use web-based editor that targets devices with various characteristics and input capabilities such as TVs, tablets and head-mounted displays (HMDs).

Journal ArticleDOI
TL;DR: An interactive Non-negative Matrix Factorization (NMF) method for representative action video discovery is developed that can generate personalized results; it is tested on the public Weizmann dataset.
Abstract: Automatic video summarization, a typical cognition-inspired task that attempts to select a small set of the most representative images or video clips from a specific video sequence, is vital for enabling many tasks. In this work, we develop an interactive Non-negative Matrix Factorization (NMF) method for representative action video discovery. The original video is first evenly segmented into short clips, and the bag-of-words model is used to describe each clip. A temporally consistent NMF model is subsequently used for clustering and action segmentation. Because the clustering and segmentation results may not satisfy the user's intention, the user-controlled operations MERGE and ADD are provided to let the user adjust the results in line with expectations. The newly developed interactive NMF method can therefore generate personalized results. Experimental results on the public Weizmann dataset demonstrate that our approach provides satisfactory action discovery and segmentation results.
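The user-facing side of this can be sketched independently of the NMF step: clips carry cluster labels, MERGE relabels one cluster into another, and contiguous runs of equal labels become action segments. Label values and boundaries below are illustrative, not the paper's data.

```python
# Sketch of the interactive operations on top of automatic clustering:
# MERGE joins two action clusters the user judges to be one action, and
# segments() turns a clip-label sequence into action segments.

def merge(labels, a, b):
    """Relabel cluster b as cluster a."""
    return [a if l == b else l for l in labels]

def segments(labels):
    """Contiguous runs of equal labels -> (start, end, label) segments."""
    out, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[i - 1]:
            out.append((start, i - 1, labels[start]))
            start = i
    return out
```

For example, merging clusters 0 and 1 collapses a three-segment labeling into a single action segment, which is exactly the kind of personalized adjustment the MERGE operation is for.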

Patent
31 May 2019
TL;DR: In this article, an interactive video control method, a terminal, and a computer-readable storage medium are presented, in which the home-terminal user's facial expressions are given special-effect processing during a video call before being sent to the opposite-terminal user; the opposite end thus sees processed expressions, which adds mystery and fun to the video call, enriches its functions, better meets users' diverse requirements, and improves the satisfaction of the user experience.
Abstract: The invention discloses an interactive video control method, a terminal, and a computer-readable storage medium. During a video call through the terminal, it is detected whether a facial special-effect processing condition is currently triggered; if so, the facial special-effect processing mode to be adopted is determined, the collected facial expressions of the home-terminal user are processed according to that mode, and the processed facial expressions are sent to the opposite-terminal user in the video call. Through this scheme, the expressions seen by opposite-end users have undergone special-effect processing, adding mystery and fun to the video call, enriching its functions, better meeting users' diverse requirements, and thus improving the satisfaction of the user experience.

Proceedings ArticleDOI
03 Jun 2019
TL;DR: VideoWhiz is a novel interactive video summarization tool that provides a non-linear overview design allowing easy access to the key stages or milestones within the recipe and inter-milestone relationships and is found to be effective and useful in providing quick overviews of recipe videos.
Abstract: With millions of recipe videos increasingly available online, viewers often face the challenge of browsing through these videos and deciding among different styles of recipe demonstrations and instructions. Although state-of-the-art video summarization techniques using linear presentation formats have been shown to be effective in domains such as surveillance, sports or lecture videos, recipe videos are often more complex and may require a different summarization approach. We first investigated how viewers navigate recipe videos and what information they look for when seeking quick overviews of such videos. Based on our findings, we designed VideoWhiz, a novel interactive video summarization tool that provides a non-linear overview design allowing easy access to the key stages or milestones within the recipe and inter-milestone relationships. VideoWhiz uses a combination of computer vision techniques and an annotation workflow to generate these interactive overviews. Our evaluation showed that viewers found VideoWhiz to be effective and useful in providing quick overviews of recipe videos. We discuss the potential for future work to investigate non-linear overviews for other types of instructional videos and to explore more powerful representations for video summarization.

Patent
Zhao Fengli1, Chen Yingzhong, He Huiyu, Zhang Yeqi, Xu Yue 
08 Jan 2019
TL;DR: In this article, the authors present a video recording method for interactive video comprising at least two video segments that have an interaction relationship. While playing one video clip, the user can jump to other video clips in the same interactive video that have an interaction relationship with the current clip by triggering an interactive control point.
Abstract: The present application relates to a video recording method for recording an interactive video including at least two video segments having an interaction relationship. The method includes: displaying an interactive editing control corresponding to a first video segment obtained by recording; determining a second video clip upon receipt of a trigger operation performed on the interactive editing control, and generating interactive control information indicating an interactive control point displayed on the first video clip for triggering playback of the second video clip upon receipt of a trigger operation by a user; and obtaining the interactive video according to the first video segment, the second video segment and the interaction control information. When playing one video clip in the interactive video, the user can jump to play other video clips in the same interactive video which have an interaction relationship with the current video clip by triggering the interactive control point, thereby expanding the interaction mode between the user and the video.
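The branching structure described above — video segments as nodes, interactive control points as edges to follow-up segments — amounts to a small directed graph. A minimal sketch, with class and field names that are illustrative rather than taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One recorded video clip in the interactive video."""
    segment_id: str
    # Maps an on-screen control label to the segment that plays
    # when the viewer triggers that control.
    control_points: dict = field(default_factory=dict)

@dataclass
class InteractiveVideo:
    segments: dict = field(default_factory=dict)

    def record_segment(self, segment_id):
        self.segments[segment_id] = Segment(segment_id)
        return self.segments[segment_id]

    def add_control_point(self, from_id, label, to_id):
        # The "interactive control information": which control on which
        # clip triggers playback of which other clip.
        self.segments[from_id].control_points[label] = to_id

    def next_segment(self, current_id, triggered_label):
        """Segment to jump to when the viewer triggers a control, or None."""
        return self.segments[current_id].control_points.get(triggered_label)

video = InteractiveVideo()
video.record_segment("intro")
video.record_segment("ending_a")
video.add_control_point("intro", "choose_a", "ending_a")
print(video.next_segment("intro", "choose_a"))  # -> ending_a
```

During playback, a player would overlay each segment's control points on the video and call something like `next_segment` when one is triggered, which is the jump behavior the abstract describes.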

Proceedings ArticleDOI
19 Aug 2019
TL;DR: It is shown for the first time that information such as the choices made by viewers can be revealed based on the characteristics of encrypted control traffic exchanged with Netflix.
Abstract: Privacy leaks from Netflix videos/movies are well researched. Current state-of-the-art works have been able to obtain coarse-grained information such as the genre and the title of videos by passive observation of encrypted traffic. However, leakage of fine-grained information from encrypted video traffic has not been studied so far. Such information can be used to build behavioral profiles of viewers. Recently, Netflix released the first mainstream interactive movie called 'Black Mirror: Bandersnatch'. In this work, we use this movie as a case study to develop techniques for revealing information from encrypted interactive video traffic. We show for the first time that information such as the choices made by viewers can be revealed based on the characteristics of encrypted control traffic exchanged with Netflix. To evaluate our proposed technique, we built the first interactive video traffic dataset of 100 viewers, which we will be releasing. Our technique was able to reveal the choices 96% of the time in the case of 'Black Mirror: Bandersnatch', and it was equally or more successful for all other interactive movies released by Netflix so far.
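The general idea behind this kind of side-channel — matching the size pattern of encrypted control-traffic bursts against known fingerprints for each choice — can be sketched with a simple nearest-neighbor match. The fingerprint values and choice labels below are invented for illustration; the paper's actual features and matching method may differ:

```python
import math

# Hypothetical fingerprints: choice label -> typical control-burst
# sizes in bytes, as an eavesdropper might have profiled in advance.
fingerprints = {
    "frosties":    [820, 1460, 640],
    "sugar_puffs": [910, 1460, 300],
}

def classify(observed):
    """Return the choice whose fingerprint is closest (Euclidean
    distance) to the observed burst-size vector."""
    return min(fingerprints,
               key=lambda c: math.dist(fingerprints[c], observed))

print(classify([815, 1460, 655]))  # -> frosties
```

The point of the sketch is that the payload stays encrypted throughout: only sizes of the traffic bursts are used, which is what makes this a traffic-analysis attack rather than a decryption attack.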

Patent
21 Jun 2019
TL;DR: Wang et al. present a live broadcast interaction method and device, a live broadcast system, and electronic equipment that detect in real time when an anchor initiates an interaction action in anchor video frames collected by a video collection device.
Abstract: The application of the invention is a divisional application of the application 201910251306.7 in China. The invention provides a live broadcast interaction method and a device thereof, a live broadcast system and electronic equipment. When it is detected in real time that an anchor initiates an anchor interaction action in an anchor video frame collected by a video collection device, the action posture and action type of the anchor interaction action are detected, where the anchor interaction action comprises wearing of a target prop. Then, an interactive video stream of a virtual image corresponding to the anchor is generated according to the action posture and the action type of the anchor interaction action, and the interactive video stream of the virtual image is sent to a live broadcast receiving terminal for playing through a live broadcast server. Therefore, the interaction content of the virtual image of the anchor is associated with the action posture and the action type of the anchor interaction action, so that the interaction effect in the live broadcast process can be improved, the manual operation when the anchor initiates virtual image interaction is reduced, and the automatic interaction of the virtual image is realized.

01 Jan 2019
TL;DR: The authors used a video, an interactive website, and accompanying curriculum to engage middle school students in historical thinking and learning of history content and found that treatment groups had greater gains in historical knowledge and thinking and exhibited greater student engagement than comparison groups.
Abstract: Two teacher educators collaborated with teachers, media designers, and evaluators to utilize a video, an interactive website, and accompanying curriculum to engage middle school students in historical thinking and learning of history content. The resulting multiplatform project, based on a young Frederick Douglass’ life, was piloted in three schools of varying demographics. Results indicated that treatment groups had greater gains in historical knowledge and thinking and exhibited greater student engagement than comparison groups. While students empathized with the young Douglass portrayed in the video and in autobiographical texts, their abilities to interpret primary sources required significant scaffolding. Though none of the pilot teachers perceived themselves as technology users, they responded positively to the experience and used more student-centered lessons with treatment groups than with comparison groups.

Journal ArticleDOI
TL;DR: This article reflects on the collaborative team approach to multimedia design and development by examining the team's experiences and practices through the lens of existing multimedia research, in order to understand the convergence between multimedia theory and the practicalities of developing multimedia within the constraints of large-scale online curriculum development.
Abstract: Transformations in contemporary higher education have led to an explosion in the number of degrees delivered online, a significant characteristic of which is the incorporation of multimedia to support learning. Despite the proliferation of multimedia and growing literature about the affordances of various technologies, there are relatively few examples of how judgements are made regarding choosing and actioning multimedia development decisions for educational developers. The case study presented here is framed within an institution-wide project for the development of fully online degrees that utilised a collaborative approach to curriculum and multimedia development. This example focuses on the establishment and operation of a collaborative approach to curriculum development in which multidisciplinary development teams invested considerable resources in researching improvements to their multimedia practices and processes. This article reflects on the collaborative team approach to multimedia design and development by examining the team's experiences and practices through the lens of existing multimedia research, in order to understand the convergence between multimedia theory and the practicalities of developing multimedia within the constraints of large-scale online curriculum development. Through these reflections, four lessons learned will be explicated which will inform those engaged in employing similar approaches in other contexts. These lessons learned identify the benefits and potential issues associated with: (1) the approach used by the collaborative development team to support the production of multimedia; (2) the practices and processes used by the collaborative development team to facilitate the creation of concise multimedia presentations; (3) the impacts of establishing teaching presence through videos created by the course writer and online course facilitator; and (4) the presentation styles used by course writers and the tools they used during multimedia production.