Author

Andreas Emch

Bio: Andreas Emch is an academic researcher. The author has contributed to research in topics: Photography & Front (military). The author has an h-index of 1 and has co-authored 1 publication receiving 2 citations.

Papers
Proceedings Article
01 Jul 2020
TL;DR: Demonstrates the applicability of the setting and processing pipeline to affective state prediction based on front camera recordings during math-solving tasks and emotional stimuli from pictures shown on a tablet.
Abstract: Front camera data from tablets used in educational settings offer valuable clues to student behavior, attention, and affective state. Due to the camera’s angle of view, the face of the student is partially occluded and skewed. This hinders the ability of experts to adequately capture the learning process and student states. In this paper, we present a pipeline and techniques for image reconstruction of front camera recordings. Our setting consists of a cheap and unobtrusive mirror construction to improve the visibility of the face. We then process the image and use neural inpainting to reconstruct missing data in the recordings. We demonstrate the applicability of our setting and processing pipeline on affective state prediction based on front camera recordings (i.e., action units, eye gaze, eye blinks, and movement) during math-solving tasks (active) and emotional stimuli from pictures (passive) shown on a tablet. We show that our setup provides comparable performance for affective state prediction to recordings taken with an external and more obtrusive GoPro camera.
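The reconstruction step described above can be illustrated with a minimal sketch. The paper uses neural inpainting; the classical diffusion fill below is only a simple stand-in to show where such a step sits in the pipeline, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def inpaint_masked(img, mask, iters=200):
    """Fill masked (occluded) pixels by iterative neighbor averaging.

    A classical diffusion fill used here as a simple stand-in for the
    neural inpainting step described in the paper. `img` is a 2-D
    grayscale array; `mask` is True where data is missing.
    """
    out = img.astype(float).copy()
    out[mask] = 0.0
    for _ in range(iters):
        # Average the four axis-aligned neighbors (edges reuse themselves).
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1]
               + padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]  # only the occluded region is updated
    return out

# Example: a horizontal gradient image with one occluded pixel.
img = np.tile(np.arange(8.0), (8, 1))
mask = np.zeros_like(img, dtype=bool)
mask[4, 4] = True
filled = inpaint_masked(img, mask)
```

On a smooth gradient the diffusion fill recovers the missing value from its neighbors; a learned inpainter would instead hallucinate plausible facial structure, which is why the paper relies on a neural model for faces.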

2 citations


Cited by
Journal Article
TL;DR: Proposes a method of representing audience behavior through facial and body motions from a single video stream, and of using these features to predict ratings for feature-length movies.
Abstract: We propose a method of representing audience behavior through facial and body motions from a single video stream, and use these features to predict the rating for feature-length movies. This is a very challenging problem because: i) the movie viewing environment is dark and contains views of people at different scales and viewpoints; ii) the duration of feature-length movies is long (80-120 mins), so tracking people uninterrupted for this length of time is still an unsolved problem; and iii) expressions and motions of audience members are subtle, short, and sparse, making labeling of activities unreliable. To circumvent these issues, we use an infrared-illuminated test-bed to obtain a visually uniform input. We then utilize motion-history features, which capture the subtle movements of a person within a pre-defined volume, and form a group representation of the audience as a histogram of pair-wise correlations over a small window of time. Using this group representation, we learn our movie rating classifier from crowd-sourced ratings collected by rottentomatoes.com and show our prediction capability on audiences from 30 movies across 250 subjects (> 50 hrs).
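The group representation described above — a histogram of pair-wise correlations of per-person motion signals over a short time window — can be sketched as follows. The function name, bin count, and input layout are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def group_correlation_histogram(features, bins=10):
    """Histogram of pairwise Pearson correlations between audience members.

    `features` is an (n_people, n_frames) array of per-person motion
    signals over a short time window. Each unordered pair of people
    contributes one correlation value; the normalized histogram of those
    values serves as the group descriptor.
    """
    corr = np.corrcoef(features)            # (n, n) correlation matrix
    iu = np.triu_indices_from(corr, k=1)    # unique unordered pairs
    hist, _ = np.histogram(corr[iu], bins=bins, range=(-1.0, 1.0))
    return hist / hist.sum()                # normalize to a distribution

# Example: two people moving in sync, one moving in opposition.
t = np.arange(5.0)
features = np.stack([t, t, -t])
descriptor = group_correlation_histogram(features)
```

Because the descriptor aggregates over pairs rather than individuals, it sidesteps the need to track any single person for the full 80-120 minutes, which the abstract identifies as an unsolved problem.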

3 citations