Author

Claus Marberger

Bio: Claus Marberger is an academic researcher from Bosch. The author has contributed to research topics including Drivetrain and Input device, has an h-index of 5, and has co-authored 26 publications receiving 297 citations.
Topics: Drivetrain, Input device, Setpoint, Trajectory, Torque

Papers
Proceedings ArticleDOI
02 Oct 2018
TL;DR: This work introduces WESAD, a new publicly available dataset for wearable stress and affect detection that bridges the gap between previous lab studies on stress and emotions, by containing three different affective states (neutral, stress, amusement).
Abstract: Affect recognition aims to detect a person's affective state based on observables, with the goal of, e.g., improving human-computer interaction. Long-term stress is known to have severe implications for wellbeing, which calls for continuous and automated stress monitoring systems. However, the affective computing community lacks commonly used standard datasets for wearable stress detection which a) provide multimodal high-quality data, and b) include multiple affective states. Therefore, we introduce WESAD, a new publicly available dataset for wearable stress and affect detection. This multimodal dataset features physiological and motion data, recorded from both a wrist- and a chest-worn device, of 15 subjects during a lab study. The following sensor modalities are included: blood volume pulse, electrocardiogram, electrodermal activity, electromyogram, respiration, body temperature, and three-axis acceleration. Moreover, the dataset bridges the gap between previous lab studies on stress and emotions by containing three different affective states (neutral, stress, amusement). In addition, self-reports of the subjects, obtained using several established questionnaires, are contained in the dataset. Furthermore, a benchmark is created on the dataset, using well-known features and standard machine learning methods. Considering the three-class classification problem (baseline vs. stress vs. amusement), we achieved classification accuracies of up to 80%. In the binary case (stress vs. non-stress), accuracies of up to 93% were reached. Finally, we provide a detailed analysis and comparison of the two device locations (chest vs. wrist) as well as the different sensor modalities.
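The benchmark described above (windowed features plus standard machine learning methods) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's pipeline: the WESAD files, their sensor channels, and the exact feature set are not used, and all names are illustrative.

```python
# Minimal sketch of a WESAD-style benchmark: per-window statistical features
# fed into a standard classifier. Data here is synthetic; real WESAD signals
# and the paper's full feature set are not reproduced.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def window_features(signal, win=700):
    """Mean/std/min/max per non-overlapping window (a common minimal feature set)."""
    n = len(signal) // win
    w = signal[: n * win].reshape(n, win)
    return np.stack([w.mean(1), w.std(1), w.min(1), w.max(1)], axis=1)

# Synthetic stand-ins for the three affective states (baseline/stress/amusement):
X, y = [], []
for label, (mu, sigma) in enumerate([(0.0, 1.0), (1.5, 1.3), (0.5, 0.8)]):
    feats = window_features(rng.normal(mu, sigma, 700 * 200))
    X.append(feats)
    y.append(np.full(len(feats), label))
X, y = np.concatenate(X), np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

The same window-then-classify structure applies per sensor modality; the paper additionally compares chest- vs. wrist-worn device signals.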

486 citations

Journal ArticleDOI
TL;DR: This study investigated drivers' takeover performance when switching from working on different non-driving related tasks to manual vehicle control, while driving with a conditionally automated driving function (SAE L3) simulated by a Wizard of Oz vehicle under naturalistic driving conditions; the timings found can be used to design comfortable and safe takeover concepts for automated vehicles.
Abstract: Objective: This study aimed at investigating the driver’s takeover performance when switching from working on different non–driving related tasks (NDRTs) while driving with a conditionally automated...

59 citations

Book ChapterDOI
17 Jul 2017
TL;DR: This work introduces a comprehensive model of the transition process from automated driving to manual driving, specifies relevant time stamps and time windows, and outlines potential influencing factors on driver availability.
Abstract: Several levels of automated driving functions require the human as a fallback driver in case system performance limits are exceeded. Human factors research in this area is especially concerned with human performance in these take-over situations and the influence of the driver state. Based on work of the publicly funded project Ko-HAF the paper introduces a comprehensive model of the transition process from automated driving to manual driving and specifies relevant time stamps and time windows. The concept of Driver Availability is regarded as a quantitative measure that relates the estimated time required to safely take-over manual control to the available time budget. A conceptual framework outlines potential influencing factors on driver availability as well as ways to apply the measure in a real-time application.
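The abstract relates two quantities: the estimated time required to safely take over manual control and the available time budget. A natural way to express that relation is a ratio; the sketch below uses that ratio form as an assumption, since the paper's exact formulation is not given here, and the function name is invented.

```python
# Hedged sketch of the Driver Availability measure as a ratio. The paper
# relates estimated take-over time to the available time budget; this exact
# ratio form and the threshold interpretation are assumptions.
def driver_availability(time_required_s, time_budget_s):
    """Values >= 1.0 would mean the time budget covers the estimated
    take-over time; values < 1.0 would flag an unavailable driver."""
    if time_required_s <= 0:
        raise ValueError("estimated take-over time must be positive")
    return time_budget_s / time_required_s

# e.g. a 10 s time budget vs. an estimated 4 s to regain manual control:
da = driver_availability(time_required_s=4.0, time_budget_s=10.0)
```

In a real-time application, the estimated take-over time would itself depend on the driver-state factors the framework outlines.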

51 citations

Proceedings ArticleDOI
05 Oct 2020
TL;DR: In this paper, the authors describe the evolution of the sensor approach supplemented by related research findings from Bosch Corporate Research and integrate it in the technical, business, and regulatory context.
Abstract: About twelve years ago, driver drowsiness detection systems were introduced to the market. The degree of drowsiness was assessed based on a performance dimension by analyzing the driver's steering behavior. With partial and conditional vehicle automation on the horizon, these input signals will no longer be available. Nevertheless, knowledge about the driver's state, e.g. the attentional level or take-over readiness, becomes even more important. Currently and in the near future, interior vehicle cameras are a major source of driver status information. Furthermore, various built-in sensors are under development to complement in-vehicle cameras and possibly vehicle operation. For specific tasks, wearable sensors connected via smartphone can play a prominent role as well. The paper describes this evolution of the sensor approach supplemented by related research findings from Bosch Corporate Research and integrates it in the technical, business, and regulatory context. Thus, the industry perspective should stimulate the discussion and initiate future academic research in the field.

12 citations

Patent
28 Nov 2012
TL;DR: In this patent, the authors present a method for visualizing the surroundings of a vehicle, including the following steps: determining and storing an instantaneous distance between the vehicle and present obstacles in the surroundings with the aid of at least one sensor; calculating an at least two-dimensional model of the surroundings from the stored data; and calculating a virtual view of the model from a selected virtual observer position.
Abstract: A method for visualizing the surroundings of a vehicle, including the following steps: determining and storing an instantaneous distance between the vehicle and present obstacles in the surroundings of the vehicle with the aid of at least one sensor; determining and storing a present position of the vehicle; calculating an at least two-dimensional model of the surroundings from the stored data; calculating a virtual view of the model of the surroundings from a selected virtual observer position; recording a video depiction of at least a portion of the surroundings with the aid of at least one video camera and integrating the video depiction into the virtual view; and outputting the virtual view together with the integrated video depiction to a driver of the vehicle.
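The first three claimed steps (accumulate distance measurements into a 2-D surroundings model, then render it from a virtual observer position) can be sketched geometrically. Everything below is illustrative: the patent prescribes no concrete data structure, and all function and parameter names are invented.

```python
# Hedged sketch of the claimed method: distance measurements become obstacle
# points in a 2-D model; a "virtual view" is the model re-expressed in the
# frame of a selected observer position. The video-integration step is omitted.
import math

def update_model(model, vehicle_pos, bearing_deg, distance_m):
    """Store an obstacle point derived from one sensor distance measurement."""
    x = vehicle_pos[0] + distance_m * math.cos(math.radians(bearing_deg))
    y = vehicle_pos[1] + distance_m * math.sin(math.radians(bearing_deg))
    model.append((round(x, 2), round(y, 2)))
    return model

def virtual_view(model, observer_pos):
    """Obstacle points translated into the virtual observer's frame."""
    ox, oy = observer_pos
    return [(x - ox, y - oy) for x, y in model]

model = []
update_model(model, vehicle_pos=(0.0, 0.0), bearing_deg=0.0, distance_m=2.0)
update_model(model, vehicle_pos=(0.0, 0.0), bearing_deg=90.0, distance_m=1.5)
view = virtual_view(model, observer_pos=(0.0, 5.0))
```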

11 citations


Cited by
Patent
06 Mar 2017
TL;DR: In this patent, a vehicle control device mounted on a vehicle, and a method for controlling the vehicle, are presented. The controller controls the display module, based on the driving information, to output on the second region at least one of the graphic objects output on the first region.
Abstract: The present invention relates to a vehicle control device mounted on a vehicle, and a method for controlling the vehicle. The vehicle control device includes a communication module configured to receive driving information regarding the vehicle, a display module configured to output visual information on a display region formed on a windshield of the vehicle, and a controller configured to control the display module based on the driving information to output graphic objects guiding a path of driving of the vehicle on a first region of the display region, the display region divided into the first region and a second region. The controller controls the display module based on the driving information to output on the second region at least one of the graphic objects output on the first region.
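The claimed control flow (a controller that decides, from driving information, which path-guidance objects drawn on the first display region also appear on the second region) can be sketched as below. The class, field names, and the "urgent" selection rule are all invented for illustration; the patent does not specify the selection criterion.

```python
# Hedged sketch of the claimed device structure: two windshield display
# regions, with a controller mirroring a subset of the first region's
# path-guidance objects onto the second region based on driving information.
class VehicleDisplayController:
    def __init__(self):
        self.first_region = []   # graphic objects guiding the driving path
        self.second_region = []  # subset shown again, per driving information

    def update(self, driving_info):
        self.first_region = list(driving_info["path_objects"])
        # Illustrative rule: mirror only urgent objects onto the second region.
        self.second_region = [o for o in self.first_region
                              if o.get("urgent", False)]

ctrl = VehicleDisplayController()
ctrl.update({"path_objects": [{"name": "turn-arrow", "urgent": True},
                              {"name": "lane-band", "urgent": False}]})
```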

211 citations

Journal ArticleDOI
12 Jul 2019-Sensors
TL;DR: The end-to-end learning approach takes the time-frequency spectra of synchronised PPG and accelerometer signals as input and provides the estimated heart rate as output; on large datasets, the deep learning model significantly outperforms other methods.
Abstract: Photoplethysmography (PPG)-based continuous heart rate monitoring is essential in a number of domains, e.g., for healthcare or fitness applications. Recently, methods based on time-frequency spectra emerged to address the challenges of motion artefact compensation. However, existing approaches are highly parametrised and optimised for specific scenarios of small, public datasets. We address this fragmentation by contributing research into the robustness and generalisation capabilities of PPG-based heart rate estimation approaches. First, we introduce a novel large-scale dataset (called PPG-DaLiA), including a wide range of activities performed under close to real-life conditions. Second, we extend a state-of-the-art algorithm, significantly improving its performance on several datasets. Third, we introduce deep learning to this domain, and investigate various convolutional neural network architectures. Our end-to-end learning approach takes the time-frequency spectra of synchronised PPG- and accelerometer-signals as input, and provides the estimated heart rate as output. Finally, we compare the novel deep learning approach to classical methods, performing evaluation on four public datasets. We show that on large datasets the deep learning model significantly outperforms other methods: the mean absolute error could be reduced by 31% on the new dataset PPG-DaLiA, and by 21% on the dataset WESAD.
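The input representation the paper builds on (a time-frequency spectrum of a PPG window) can be illustrated with a synthetic signal; reading the spectral peak off directly corresponds to the classical baseline, not the CNN the paper proposes. The sampling rate, window length, and heart-rate band below are common choices, assumed for illustration.

```python
# Hedged sketch: spectrum of a synthetic PPG window, with a spectral-peak
# heart-rate estimate (the classical baseline; the paper's CNN consumes such
# spectra of PPG and accelerometer signals instead).
import numpy as np

fs = 64                       # assumed wrist-PPG sampling rate
t = np.arange(0, 8, 1 / fs)   # 8-second analysis window
ppg = np.sin(2 * np.pi * 2.0 * t)   # synthetic pulse wave at 2 Hz = 120 bpm

spectrum = np.abs(np.fft.rfft(ppg * np.hanning(len(ppg))))
freqs = np.fft.rfftfreq(len(ppg), d=1 / fs)

# Restrict to a plausible heart-rate band (0.5-3 Hz, i.e. 30-180 bpm):
band = (freqs >= 0.5) & (freqs <= 3.0)
hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
```

Motion artefacts shift or mask this peak, which is why the accelerometer spectrum is provided to the model as a second input channel.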

176 citations

Journal ArticleDOI
TL;DR: In this article, a self-supervised deep multi-task learning framework for electrocardiogram (ECG)-based emotion recognition is proposed, which consists of two stages of learning a) learning ECG representations and b) learning to classify emotions.
Abstract: We exploit a self-supervised deep multi-task learning framework for electrocardiogram (ECG) -based emotion recognition. The proposed solution consists of two stages of learning a) learning ECG representations and b) learning to classify emotions. ECG representations are learned by a signal transformation recognition network. The network learns high-level abstract representations from unlabeled ECG data. Six different signal transformations are applied to the ECG signals, and transformation recognition is performed as pretext tasks. Training the model on pretext tasks helps the network learn spatiotemporal representations that generalize well across different datasets and different emotion categories. We transfer the weights of the self-supervised network to an emotion recognition network, where the convolutional layers are kept frozen and the dense layers are trained with labelled ECG data. We show that the proposed solution considerably improves the performance compared to a network trained using fully-supervised learning. New state-of-the-art results are set in classification of arousal, valence, affective states, and stress for the four utilized datasets. Extensive experiments are performed, providing interesting insights into the impact of using a multi-task self-supervised structure instead of a single-task model, as well as the optimum level of difficulty required for the pretext self-supervised tasks.
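The pretext-task setup described above (transform unlabeled ECG windows, then train a network to recognise which transformation was applied) can be sketched as follows. The six transformations here are common choices (identity, noise, scaling, negation, flip, permutation) assumed for illustration; the paper's exact transformation set and the recognition network itself are not reproduced.

```python
# Hedged sketch of self-supervised pretext-label generation for ECG windows:
# each window gets a random transformation, and the transformation id serves
# as the training label for the pretext task.
import numpy as np

rng = np.random.default_rng(0)

TRANSFORMS = [
    lambda x: x,                                   # 0: identity
    lambda x: x + rng.normal(0, 0.05, len(x)),     # 1: additive noise
    lambda x: 1.5 * x,                             # 2: amplitude scaling
    lambda x: -x,                                  # 3: negation
    lambda x: x[::-1],                             # 4: temporal flip
    lambda x: np.concatenate([x[len(x) // 2:], x[:len(x) // 2]]),  # 5: permutation
]

def make_pretext_batch(windows):
    """Return (transformed windows, transformation-id labels)."""
    X, y = [], []
    for w in windows:
        k = int(rng.integers(len(TRANSFORMS)))
        X.append(TRANSFORMS[k](w))
        y.append(k)
    return np.stack(X), np.array(y)

ecg = [np.sin(np.linspace(0, 4 * np.pi, 256)) for _ in range(32)]
X, y = make_pretext_batch(ecg)
```

After pretext training, the learned convolutional weights are transferred and frozen, and only the dense layers are trained on labelled emotion data.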

116 citations

Journal ArticleDOI
TL;DR: An introduction to the field of affective computing is presented through the description of key theoretical concepts, and the current state-of-the-art of emotion recognition is described, tracing the developments that helped foster the growth of the field.
Abstract: The seminal work on Affective Computing in 1995 by Picard set the base for computing that relates to, arises from, or influences emotions. Affective computing is a multidisciplinary field of research spanning the areas of computer science, psychology, and cognitive science. Potential applications include automated driver assistance, healthcare, human-computer interaction, entertainment, marketing, teaching and many others. The field thus quickly attracted high interest, with enormous growth in the number of papers published on the topic since its inception. This paper aims to (1) present an introduction to the field of affective computing through the description of key theoretical concepts; (2) describe the current state-of-the-art of emotion recognition, tracing the developments that helped foster the growth of the field; and lastly, (3) point out the literature's take-home messages and conclusions, highlighting the main challenges and future opportunities that lie ahead, in particular for the development of novel machine learning (ML) algorithms in the context of emotion recognition using physiological signals.

114 citations

Journal ArticleDOI
20 Sep 2019-Sensors
TL;DR: A broad overview and in-depth understanding of the theoretical background, methods and best practices of wearable affect and stress recognition is provided to enable other researchers in the field to conduct and evaluate user studies and develop wearable systems.
Abstract: Affect recognition is an interdisciplinary research field bringing together researchers from natural and social sciences. Affect recognition research aims to detect the affective state of a person based on observables, with the goal to, for example, provide reasoning for the person's decision making or to support mental wellbeing (e.g., stress monitoring). Recently, besides approaches based on audio, visual or text information, solutions relying on wearable sensors as observables, recording mainly physiological and inertial parameters, have received increasing attention. Wearable systems provide an ideal platform for long-term affect recognition applications due to their rich functionality and form factor, while providing valuable insights during everyday life through integrated sensors. However, existing literature surveys lack a comprehensive overview of state-of-the-art research in wearable-based affect recognition. Therefore, the aim of this paper is to provide a broad overview and in-depth understanding of the theoretical background, methods and best practices of wearable affect and stress recognition. Following a summary of different psychological models, we detail the influence of affective states on the human physiology and the sensors commonly employed to measure physiological changes. Then, we outline lab protocols eliciting affective states and provide guidelines for ground truth generation in field studies. We also describe the standard data processing chain and review common approaches related to the preprocessing, feature extraction and classification steps. By providing a comprehensive summary of the state-of-the-art and guidelines to various aspects, we would like to enable other researchers in the field to conduct and evaluate user studies and develop wearable systems.
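The standard data processing chain the survey reviews (preprocessing, then feature extraction, then classification) can be sketched for one physiological channel. The filter choice, window length, and feature set below are illustrative assumptions, not the survey's recommendations.

```python
# Hedged sketch of the processing chain for one physiological signal:
# preprocess (smooth) the raw signal, then extract per-window features
# that a downstream classifier would consume.
import numpy as np

def moving_average(signal, k=5):
    """Preprocessing step: k-sample moving-average smoothing."""
    kernel = np.ones(k) / k
    return np.convolve(signal, kernel, mode="same")

def extract_features(signal, win=64):
    """Feature extraction: per-window mean and standard deviation."""
    n = len(signal) // win
    w = signal[: n * win].reshape(n, win)
    return np.column_stack([w.mean(axis=1), w.std(axis=1)])

raw = (np.sin(np.linspace(0, 6 * np.pi, 640))
       + np.random.default_rng(0).normal(0, 0.2, 640))
features = extract_features(moving_average(raw))  # one row per window
```

The classification step would then map each feature row to an affective state, exactly as in the lab-study benchmarks the survey covers.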

111 citations