
Do people understand human-sounding voices better than robotic voices?


Best insight from top research papers

People's understanding of human-sounding voices compared to robotic voices has been explored in several studies. Research has shown that the accuracy of emotion perception differs depending on the presented emotions, with characterized voices being more appropriate for expressing emotions and yielding significantly higher accuracy in emotion perception. Participants have also rated voices with a higher degree of human realism as more pleasant and less eerie, indicating a positive relationship between human-likeness and user acceptance. Furthermore, participants in one study made equivalent perspective choices for human and robot addressees, suggesting that people generally assume robots and humans have equal capabilities in understanding spatial descriptions. Another study found that people reacted faster to robotic sounds when they were preceded by a congruent robot image, highlighting the role of predictive processes in robot perception. Overall, these findings suggest that people perceive and understand human-sounding and robotic voices differently.

Answers from top 5 papers

The paper does not directly answer whether people understand human-sounding voices better than robotic voices; it focuses on how an agent's appearance affects how we perceive its voice in human-robot interaction.
The paper does not directly answer whether people understand human-sounding voices better than robotic voices.
The provided paper does not directly address whether people understand human-sounding voices better than robotic voices. The paper focuses on the relationship between human-likeness of synthetic voices and user acceptance in different application contexts.
The provided paper does not address whether people understand human-sounding voices better than robotic voices; it focuses on whether people regard robots as human-like social partners based on their perspective choices in spatial descriptions.
The paper states that a regular human voice received higher user preference and naturalness ratings, suggesting that people may understand human-sounding voices better than robotic voices.

Related Questions

What are the current advances in natural language processing and understanding technologies used for bidirectional human-robot interaction? (5 answers)
Current advances in natural language processing (NLP) and understanding technologies for bidirectional human-robot interaction involve multimodal approaches, large-scale language models, and transformer-based architectures. Multimodal human-robot interaction (HRI) integrates modalities such as voice, image, text, and bio-signals for more natural communication. Integrating OpenAI's GPT-3 language model enables verbal dialogue with robots, enhancing interaction capabilities. Transformer-based models like ALBERT show promise in NLU tasks, offering performance comparable to BERT with significantly faster inference, making them suitable for real-time applications on embedded systems in social robots (see the sketch below). These advancements in NLP technologies are shaping more efficient and effective bidirectional communication between humans and robots.
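To make the real-time NLU point concrete, here is a minimal sketch of intent classification for a robot command interface built on an ALBERT encoder via the Hugging Face transformers library. This is not taken from the cited papers: the intent label set is a made-up example, and the classification head loaded here is untrained, so a real system would first fine-tune it on command/intent data.

```python
# Minimal sketch: mapping a transcribed utterance to a robot intent with an
# ALBERT encoder (Hugging Face transformers). "albert-base-v2" is the public
# pretrained encoder; the classification head created here is untrained, so
# predictions are arbitrary until the model is fine-tuned on labeled commands.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

INTENTS = ["navigate", "grasp", "stop", "chitchat"]  # hypothetical label set

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained(
    "albert-base-v2", num_labels=len(INTENTS)
)

def classify_command(utterance: str) -> str:
    """Return the highest-scoring intent label for one utterance."""
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, len(INTENTS))
    return INTENTS[int(logits.argmax(dim=-1))]

# With the untrained head, this output is arbitrary; it shows the API shape.
print(classify_command("Please bring the cup to the table"))
```

ALBERT's cross-layer parameter sharing and factorized embeddings keep the model small relative to BERT, which is why the answer above singles it out for embedded deployment in social robots.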
Analogy between human and auditory-vocal neural system? (5 answers)
The analogy between the human and songbird auditory-vocal neural systems is evident in their shared mechanisms of vocal learning and control. Both humans and songbirds exhibit complex vocal learning behaviors involving auditory feedback and specialized brain regions for vocal motor control. Additionally, the evolution of direct cortical control of vocal muscles and the development of an auditory-vocal articulatory circuit played crucial roles in the expansion of vocal plasticity and the elaboration of speech in early humans. Studies have shown distinct neural pathways for speech and emotional vocalizations, indicating partially overlapping structures for motor control. Furthermore, the involvement of the prefrontal cortex in cognitive control over audio-vocal interactions highlights the importance of higher-order auditory processing in vocal motor output. Overall, these findings suggest a continuum in the evolution of auditory-vocal systems from non-human primates to modern humans.
How are humans great at speaking? (4 answers)
Human speech is a unique and complex ability that distinguishes Homo sapiens from other species. It involves a combination of phonation and articulation, where phonation conveys emotions and articulation conveys thoughts. The evolution of human speech has been a subject of interest, with studies exploring how preexisting biological and cognitive foundations led to the emergence of language. The human language system's generative power lies in its tripartite architecture, which encodes information for meaning, syntax, and sound structures, enabling the expression of thoughts through speech and writing. Achieving super-human performance in speech recognition has been a goal, with recent advances in neural models improving accuracy and latency when processing conversational speech. Human speech production requires precise coordination of various physiological systems, showcasing the specialized and finely controlled nature of this behavior from early life.
How can humans talk to robots? (3 answers)
Human-robot interaction can be achieved through various methods. One approach is the development of a bi-directional natural language interface that allows users to communicate with robots in natural language. Another involves affective vocalizations, where human teachers provide vocal input to robotic learners based on their performance history. Mechanisms for error avoidance and error recovery are also important in human-robot communication, as they help prevent misunderstandings and problematic situations (see the sketch below). Additionally, verbal robot utterances play a role in shaping users' behavior and understanding of the robot's affordances. In some cases, the cooperative behavior of multiple robots can be used to initiate a conversation smoothly and attract a person's attention. Together, these approaches enable effective communication between humans and robots.
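As a toy illustration of the error-avoidance and error-recovery idea, here is a self-contained sketch in which the robot acts only on confidently parsed commands and otherwise asks the human to confirm or rephrase. The command set, confidence scoring, and threshold are illustrative assumptions, not details from the cited papers.

```python
# Sketch of an error-recovery loop in a spoken human-robot interface: act on
# confident parses, ask for confirmation on ambiguous ones, and ask the user
# to rephrase otherwise. All names and values here are illustrative.

KNOWN_COMMANDS = {"stop", "forward", "backward", "left", "right"}  # assumed set
CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for acting without clarification

def parse_with_confidence(utterance: str) -> tuple[str | None, float]:
    """Toy parser: an exact word match is certain; a prefix match is uncertain."""
    words = utterance.lower().split()
    for command in KNOWN_COMMANDS:
        if command in words:
            return command, 1.0
        if any(word.startswith(command[:3]) for word in words):
            return command, 0.5  # ambiguous: only a prefix matched
    return None, 0.0

def respond(utterance: str) -> str:
    command, confidence = parse_with_confidence(utterance)
    if command is None:
        return "Sorry, I did not understand. Could you rephrase?"  # error avoidance
    if confidence < CONFIDENCE_THRESHOLD:
        return f"Did you mean '{command}'? Please confirm."  # error recovery
    return f"Executing: {command}"

print(respond("please move forward a little"))  # Executing: forward
print(respond("go forw"))                       # Did you mean 'forward'? ...
print(respond("sing a song"))                   # Sorry, I did not understand...
```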
How can we distinguish artificial intelligence from humans? (5 answers)
Artificial intelligence (AI) and human intelligence can be distinguished in several ways. First, human intelligence is a product of the development and interaction of processes such as the formation of the human personality, complex network relationships within the social community, and collective activity. AI, by contrast, is a complex of technological solutions that imitate human cognitive processes, but it will always be tied to human activity and dependent on human values. Second, comparing AI with human intelligence can help us understand both forms better, focusing on issues such as embodiment, autonomy, and judgment. Finally, autonomous systems have key features that are fundamentally different from those of automated systems; they are expected to replace humans in complex operations and to achieve strong artificial intelligence. Therefore, while AI may simulate human intelligence, it exists in parallel to human intelligence.
Why is ethics important in artificial intelligence of the human voice for speech synthesis? (3 answers)
Ethics is important in AI-based human voice synthesis for several reasons. First, deep-learning-based technologies can now produce speech that is almost indistinguishable from human speech, and focusing solely on producing human-like voices raises ethical, security, and societal concerns. Second, the rapid advancement of AI in speech synthesis has outpaced the ability of institutions and organizations to provide legal and ethical responses, creating a need to address the ethical challenges of ecosystems where technology interacts with users and learns from human routines. Additionally, the use of AI in speech synthesis raises concerns about the trustworthiness of media and the potential for biases to be replicated or amplified by algorithms. Overall, considering the ethical implications is crucial to ensure responsible and accountable use of AI in human voice synthesis.