
Showing papers by "Josh Andres" published in 2023


Book Chapter DOI
TL;DR: In this paper, a method for data sonification is presented that employs the GPT-3 model to create semantically relevant mappings between artificial-intelligence-generated natural language descriptions of data and human-generated descriptions of sounds.
Abstract: Large Language Models such as GPT-3 exhibit generative language capabilities with multiple potential applications in creative practice. In this paper, we present a method for data sonification that employs the GPT-3 model to create semantically relevant mappings between artificial-intelligence-generated natural language descriptions of data and human-generated descriptions of sounds. We implemented this method in a public art installation to generate a soundscape based on data from different systems. While common sonification approaches rely on arbitrary mappings between data values and sonic values, our approach explores the use of language models to achieve a mapping not via values but via meaning. We find that our approach is a useful tool for musification practice and that it demonstrates a new application of generative language models in creative new media arts practice. We show how different prompts influence data-to-sound mappings, and highlight that matching the embeddings of texts of different lengths produces undesired behavior.

2 citations
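The paper does not include an implementation, but the core idea of mapping data to sound via meaning rather than values can be sketched with sentence embeddings: embed an LLM-generated description of the data alongside a set of human-written sound descriptions, then select the sound whose description is semantically closest. The encoder choice, the sound library, and the example description below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumptions noted): map a data description to the
# semantically closest human-written sound description via embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder; any works

# Hypothetical human-generated descriptions of available sound samples.
sound_descriptions = [
    "a low, slowly swelling drone",
    "bright, rapid bell-like plucks",
    "soft rain on a tin roof",
]

def pick_sound(data_description: str) -> str:
    """Return the sound whose description best matches the data description."""
    emb = model.encode([data_description] + sound_descriptions,
                       normalize_embeddings=True)
    sims = emb[1:] @ emb[0]  # cosine similarities (vectors are unit-norm)
    return sound_descriptions[int(np.argmax(sims))]

# In the installation, the description would come from GPT-3 prompted with raw data.
print(pick_sound("network traffic is spiking sharply after a quiet period"))
```

Note the abstract's caveat: matching embeddings of texts of very different lengths produced undesired behavior, so data and sound descriptions of comparable length are likely to match more reliably.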


Journal Article DOI
TL;DR: In this paper, the authors focus on a particular form of integration in which the user and the computational machine share agency over the user's body, that is, can simultaneously (in contrast to a traditional turn-taking approach) control the user's body.
Abstract: Human-computer integration is an HCI trend in which computational machines can have agency, i.e., take control. Our work focuses on a particular form of integration in which the user and the computational machine share agency over the user's body, that is, can simultaneously (in contrast to a traditional turn-taking approach) control the user's body. The result is a user experience in which the agency of the user and the computational machine is so intertwined that it is often no longer discernible who contributed what and to what extent; we call this “intertwined integration”. Because the advanced technologies enabling intertwined integration systems are so recent, little understanding and documented design knowledge exist. To begin constructing such an understanding, we use three case studies to propose two key dimensions (“awareness of machine's agency” and “alignment of machine's agency”) that articulate a design space for intertwined integration systems. We differentiate four roles that computational machines can assume in this design space (angel, butler, influencer, and adversary). Based on our craft knowledge gained through designing such intertwined integration systems, we discuss strategies to help designers create future systems. Ultimately, we aim to advance the HCI field's emerging understanding of sharing agency.
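Read as binary dimensions, the two axes yield the four roles as a 2x2 grid. The quadrant assignments below are one plausible reading for illustration, not the authors' exact definitions.

```python
# Illustrative 2x2 reading of the intertwined-integration design space.
# The role-to-quadrant mapping is an assumption, not taken from the paper.
ROLES = {
    # (user aware of machine's agency, machine's agency aligned with user)
    (False, True):  "angel",       # acts unnoticed, in the user's interest
    (True,  True):  "butler",      # acts openly, in the user's interest
    (False, False): "influencer",  # acts unnoticed, steering the user
    (True,  False): "adversary",   # acts openly, against the user's goals
}

def machine_role(aware: bool, aligned: bool) -> str:
    """Look up the machine's role in the assumed 2x2 design space."""
    return ROLES[(aware, aligned)]

print(machine_role(aware=True, aligned=True))  # -> "butler"
```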

Proceedings Article DOI
27 Mar 2023
TL;DR: In this paper, the "System of a Sound" interface uses a large language model to choose music samples that elicit matching emotions and combines them with an artificial intelligence music engine to compose a live soundscape that reveals interaction patterns between human activity, the built environment, and the surrounding natural environment.
Abstract: The relationship between human activity, the built environment (such as homes, offices, and schools), and the surrounding natural environment hides interaction patterns that could be better understood. Intelligent user interfaces today use dashboards with tables and figures, offering no way for people to innately relate to this relationship. We explored emotion-oriented data sonification as an experimental way for people to engage with data. Our audio-visual and gestural intelligent user interface, "System of a Sound", takes real-time data streams centred on the location where it is installed and uses a large language model to choose music samples that elicit matching emotions. The samples are combined using an artificial intelligence music engine to compose a live soundscape that reveals interaction patterns between human activity, the built environment, and the surrounding natural environment. This IUI Demo offers conference participants emotion-oriented data sonification for data exploration.
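The abstract describes a pipeline of real-time data, an LLM-chosen emotion, and an emotion-matched sample handed to a music engine. The sketch below illustrates that flow; the llm_emotion stand-in, the emotion labels, and the sample library are hypothetical, since the installation's actual prompts and music engine are not published in the abstract.

```python
# Hypothetical sketch of the described pipeline: summarize a live data
# reading, ask an LLM which emotion it evokes, then pick a sample
# tagged with that emotion for the music engine to combine.
import random

EMOTIONS = ["calm", "tense", "joyful", "melancholic"]  # assumed label set

SAMPLES = {  # hypothetical emotion-tagged sample library
    "calm": ["pads_slow.wav"],
    "tense": ["pulse_dark.wav"],
    "joyful": ["bells_up.wav"],
    "melancholic": ["strings_low.wav"],
}

def llm_emotion(data_summary: str) -> str:
    """Stand-in for a large-language-model call that returns one label
    from EMOTIONS; a real system would send the prompt to an LLM API."""
    prompt = (f"Data: {data_summary}\n"
              f"Which single emotion best matches? Options: {EMOTIONS}")
    del prompt  # placeholder: no model is called in this sketch
    return random.choice(EMOTIONS)

def next_sample(data_summary: str) -> str:
    """Choose an emotion-matched sample for the current data reading."""
    return random.choice(SAMPLES[llm_emotion(data_summary)])

print(next_sample("foot traffic rising while outdoor temperature drops"))
```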

Journal Article DOI
TL;DR: In this article, a design framework for ingestible play is presented, in which users swallow ingestible sensors whose interior body sensing provides the data on which play experiences can be built.
Abstract: Ingestible sensors have become smaller and more powerful and allow us to envisage new human-computer interactions and bodily play experiences inside our bodies. Users can swallow ingestible sensors, which facilitate interior body sensing functions that provide data on which play experiences can be built. We call bodily play that uses ingestible sensors as play technologies “ingestible play”, and we have adopted a research-through-design approach to investigate three prototypes. For each prototype, we conducted a field study to understand the player experiences. Based upon these results and practical design experiences, we have developed a design framework for ingestible play. We hope this work can guide future design of ingestible play; inspire the design of play technologies inside the human body to expand the current bodily play design space; and ultimately extend our understanding of how to design for the human body by considering the bodily experience of one’s interior body.