Author

Rodrigo Solis-Torres

Bio: Rodrigo Solis-Torres is an academic researcher from Monterrey Institute of Technology and Higher Education. The author has an h-index of 1 and has co-authored 1 publication receiving 1 citation.

Papers
Proceedings ArticleDOI
01 Nov 2018
TL;DR: An experiment is presented that relates facial expressions to acoustic stimuli, supporting the development of an algorithm for generating music sequences with potential therapeutic, educational, and productivity-improvement applications.
Abstract: In this paper we present the analyzed results of an experiment that relates facial expressions to acoustic stimuli. Music therapy can be used for treating autistic children, diseases and disorders, and in software development and personalized musical systems [5]. The data obtained using iMotions was processed with Matlab and Excel; Matlab's data analysis tools were used to process the raw data in order to determine the impact of artificially synthesized versus human-composed audio. The data came from three experiments, each with different conditions, a different number of participants, and a different length and type of audio segment. The data from these three experiments were analyzed using the FACET module of iMotions. In the next stage we analyzed the Neutral emotion; this analysis is of utmost importance for future stages, enabling the proper development of a recurrent neural network and an algorithm to create music sequences, with potential therapeutic, educational, and productivity-improvement applications.
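The comparison described in the abstract, contrasting facial-expression responses to artificially synthesized versus human-composed audio, could be sketched as below. The scores, variable names, and the choice of Welch's t statistic are illustrative assumptions for a minimal sketch, not the paper's actual pipeline (which used iMotions FACET exports with Matlab and Excel).

```python
from statistics import mean, variance

# Hypothetical per-frame "Neutral" evidence scores exported from a
# facial-expression analysis tool, grouped by audio condition.
# These numbers are illustrative, not the study's data.
synthesized = [0.62, 0.71, 0.58, 0.66, 0.69, 0.64]
human_composed = [0.48, 0.55, 0.51, 0.47, 0.53, 0.50]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

t = welch_t(synthesized, human_composed)
print(f"mean(synthesized)={mean(synthesized):.3f}  "
      f"mean(human)={mean(human_composed):.3f}  t={t:.2f}")
```

A positive t here would indicate higher mean Neutral scores for the synthesized condition; in practice one would compute a p-value (e.g. via SciPy's `ttest_ind` with `equal_var=False`) before drawing conclusions.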

1 citation


Cited by
Journal ArticleDOI
TL;DR: A systematic review of the literature on affective computing in the context of music therapy is presented; the authors assess AI methods for automatic emotion recognition applied to Human-Machine Musical Interfaces (HMMI).
Abstract: Music therapy is an effective tool to slow the progress of dementia, since interaction with music may evoke emotions that stimulate brain areas responsible for memory. This therapy is most successful when therapists provide adequate, personalized stimuli for each patient, but such personalization is often hard, so Artificial Intelligence (AI) methods may help with this task. This paper presents a systematic review of the literature in the field of affective computing in the context of music therapy. We particularly aim to assess AI methods for automatic emotion recognition applied to Human-Machine Musical Interfaces (HMMI). To perform the review, we conducted an automatic search in five of the main scientific databases in the fields of intelligent computing, engineering, and medicine. We searched all papers published between 2016 and 2020 whose metadata, title, or abstract contained the terms defined in the search string. The systematic review protocol resulted in the inclusion of 144 works from the 290 publications returned by the search. Through this review of the state of the art, it was possible to list the current challenges in the automatic recognition of emotions, to identify the potential of automatic emotion recognition for building non-invasive assistive solutions based on human-machine musical interfaces, and to survey the artificial intelligence techniques in use for emotion recognition from multimodal data. Machine learning for recognizing emotions from different data sources can thus be an important approach to optimizing the clinical goals to be achieved through music therapy.

2 citations