Author

Salvatore Sorce

Other affiliations: Brunel University London
Bio: Salvatore Sorce is an academic researcher from the University of Palermo. The author has contributed to research in topics including Gesture and Mobile device. The author has an h-index of 12 and has co-authored 70 publications receiving 441 citations. Previous affiliations of Salvatore Sorce include Brunel University London.


Papers
Journal ArticleDOI
TL;DR: The Agent Network for Bluetooth Devices, a system that uses personal mobile devices as adaptive human-environment interfaces to supply people with ad hoc information and high-level services, is proposed.
Abstract: We propose the Agent Network for Bluetooth Devices, a system that uses personal mobile devices as adaptive human-environment interfaces to supply people with ad hoc information and high-level services. The ANBD system operates with a hierarchical framework of service-providing nodes, dynamically composed and managed by mobile agents.

26 citations
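The ANBD paper does not publish code, but the architecture it describes lends itself to a compact illustration. The following is a minimal, hypothetical sketch of a hierarchy of service-providing nodes in which a request from a personal mobile device is either served by the nearest node offering the service or escalated up the hierarchy; all names (Node, MobileAgent, handle_request) and the example data are invented for illustration and are not from the paper.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MobileAgent:
    service: str   # requested service, e.g. "timetable"
    payload: str   # information delivered back to the requesting device

@dataclass
class Node:
    name: str
    services: dict = field(default_factory=dict)   # service name -> content
    parent: Optional["Node"] = None

    def handle_request(self, service: str) -> Optional[MobileAgent]:
        """Serve the request locally, or escalate it up the node hierarchy."""
        if service in self.services:
            return MobileAgent(service, self.services[service])
        if self.parent is not None:
            return self.parent.handle_request(service)
        return None  # no node in the hierarchy provides this service

# Example: a room-level node escalates a request to its building-level parent.
building = Node("building", {"map": "Floor plan of building 9"})
room = Node("room-3", {"timetable": "Lectures today: 9-11, 14-16"}, parent=building)
agent = room.handle_request("map")
print(agent.payload if agent else "service unavailable")
```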

Proceedings ArticleDOI
07 Jun 2017
TL;DR: The influence of a passive audience on people's engagement with a public display is investigated; the passive audience is found to affect where interacting users position themselves relative to both the display and the audience, as well as their behavior.
Abstract: It is well known from prior work that people interacting with, as well as attending to, a public display attract further people to interact. This behavior is commonly referred to as the honeypot effect. At the same time, there are often situations where an audience is present in the vicinity of a public display that does not actively engage with or pay attention to the display or an approaching user. However, it is largely unknown how such a passive audience affects users or people who intend to interact. In this paper, we investigate the influence of a passive audience on the engagement of people with a public display. In more detail, we report on the deployment of a display in a public space. We collected and analyzed video logs to understand how people react to a passive audience in the vicinity of public displays. We found an influence on where interacting users position themselves relative to both the display and the passive audience, as well as on their behavior. Our findings are valuable for display providers and space owners who want to maximize the display's benefits.

23 citations

Proceedings ArticleDOI
07 Jun 2016
TL;DR: A pilot study is introduced comparing two interfaces: one based on the Microsoft Human Interface Guidelines (HIG), a de facto standard in the field, and a novel interface designed by the authors that displays an avatar and does not require any activation gesture to trigger actions.
Abstract: Public displays have lately become ubiquitous thanks to the decreasing cost of such technology and to public policies supporting the development of smart cities. Depending on form factor, these displays may use touchless gestural interfaces, which are therefore increasingly the subject of public and private research. In this paper, we focus on touchless interaction with situated public displays and introduce a pilot study comparing two interfaces: an interface based on the Microsoft Human Interface Guidelines (HIG), a de facto standard in the field, and a novel interface designed by us. Differently from the HIG-based one, our interface displays an avatar and does not require any activation gesture to trigger actions. Our aim is to study how the two interfaces address the so-called interaction blindness --- the inability of users to recognize the interactive capabilities of such displays. According to our pilot study, although they take different approaches, both interfaces proved effective in the proposed scenario: a public display in a hall inside a University campus building.

22 citations
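Neither interface's implementation is published with the abstract; the sketch below only illustrates the conceptual difference it describes, namely that a HIG-style interface waits for an explicit engagement gesture before mapping the hand to a cursor, while an avatar-style interface reacts to any tracked hand immediately. All class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HandFrame:
    x: float       # normalized horizontal hand position, 0..1
    y: float       # normalized vertical hand position, 0..1
    raised: bool   # True when the hand is raised (the assumed engagement gesture)

class HigStyleInterface:
    """Waits for an explicit activation gesture before mapping the hand to a cursor."""
    def __init__(self) -> None:
        self.engaged = False

    def update(self, frame: HandFrame) -> str:
        if not self.engaged:
            if frame.raised:        # explicit engagement gesture detected
                self.engaged = True
                return "engaged"
            return "idle"
        return f"cursor at ({frame.x:.2f}, {frame.y:.2f})"

class AvatarInterface:
    """No activation gesture: the avatar mirrors the user as soon as a hand is tracked."""
    def update(self, frame: HandFrame) -> str:
        return f"avatar mirrors hand at ({frame.x:.2f}, {frame.y:.2f})"

frames = [HandFrame(0.4, 0.3, raised=False), HandFrame(0.5, 0.8, raised=True)]
hig, avatar = HigStyleInterface(), AvatarInterface()
for f in frames:
    print(hig.update(f), "|", avatar.update(f))
```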

Journal ArticleDOI
TL;DR: A new research domain, human-to-human interaction (HHI), is presented that describes how today's human interaction is largely indirect and mediated by a wide variety of technologies and devices.
Abstract: We present a new research domain, human-to-human interaction (HHI), which describes how today's human interaction is largely indirect and mediated by a wide variety of technologies and devices. We show how this new and exciting field of design originates from the convergence of a few well-established research areas, such as traditional graphical user interfaces (GUIs), tangible user interfaces (TUIs), touchless gesture user interfaces (TGUIs), voice user interfaces (VUIs), and brain-computer interfaces (BCIs). We analyse and describe current research in those areas and offer a first-hand view and presentation of its salient aspects for the human-to-human interaction domain.

21 citations

01 Jan 2006
TL;DR: The goal of this paper is to build a user-friendly virtual-guide system adaptable to the user's mobility needs and therefore usable on different mobile devices (e.g. PDAs, smartphones).
Abstract: The use of a PDA with ad hoc built-in information retrieval and auto-localization functionalities can help people visit a museum in a natural manner, instead of relying on traditional pre-recorded audio/visual guides. The goal of this paper is to build a user-friendly virtual-guide system adaptable to the user's mobility needs and therefore usable on different mobile devices (e.g. PDAs, smartphones). An information retrieval service is included, easily accessible through spoken-language interaction or an auto-localization service. The system takes advantage of chatbot and speech recognition technologies and RFID detection, allowing natural interaction with the user.

20 citations
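As a rough illustration of the interaction loop this abstract describes (RFID-based auto-localization plus a chatbot-style dialogue), here is a hypothetical sketch; the exhibit data, tag IDs, and function names are invented, and a real deployment would use actual speech recognition rather than typed questions.

```python
from typing import Optional

# Invented exhibit catalogue keyed by RFID tag ID.
EXHIBITS = {
    "tag-017": {"name": "Greek amphora", "period": "5th century BC",
                "description": "A two-handled vase used to store wine and oil."},
}

def on_rfid_detected(tag_id: str) -> Optional[dict]:
    """Auto-localization: map a detected RFID tag to the exhibit the visitor is near."""
    return EXHIBITS.get(tag_id)

def chatbot_reply(question: str, exhibit: dict) -> str:
    """Tiny keyword-based chatbot; a real system would answer recognized speech."""
    q = question.lower()
    if "when" in q or "period" in q:
        return f"The {exhibit['name']} dates to the {exhibit['period']}."
    if "what" in q:
        return exhibit["description"]
    return f"You are looking at the {exhibit['name']}. Ask me what it is or when it was made."

exhibit = on_rfid_detected("tag-017")
if exhibit is not None:
    print(chatbot_reply("What is this?", exhibit))
    print(chatbot_reply("When was it made?", exhibit))
```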


Cited by
Journal ArticleDOI
TL;DR: McNeill's Hand and Mind: What Gestures Reveal about Thought (Chicago and London: University of Chicago Press, 1992, 416 pp.) examines what gestures reveal about thought.
Abstract: Hand and Mind: What Gestures Reveal about Thought. David McNeill. Chicago and London: University of Chicago Press, 1992. 416 pp.

988 citations

Journal ArticleDOI
TL;DR: The goal of this paper is to review the works that were published in journals, suggest a new classification framework for context-aware systems, and explore each feature of the classification framework, based on a keyword index and article title search.
Abstract: Nowadays, numerous journals and conferences have published articles related to context-aware systems, indicating many researchers' interest. Therefore, the goal of this paper is to review the works that were published in journals, suggest a new classification framework for context-aware systems, and explore each feature of the classification framework. This paper is based on a literature review of context-aware systems from 2000 to 2007 using a keyword index and article title search. The classification framework is developed based on the architecture of context-aware systems, which consists of the following five layers: concept and research layer, network layer, middleware layer, application layer, and user infrastructure layer. The articles are categorized based on the classification framework. This paper allows researchers to extract several lessons learned that are important for the implementation of context-aware systems.

624 citations

Journal ArticleDOI
TL;DR: Ambient intelligence is surveyed, including its applications, some of the technologies it uses (for example planning, learning, event-condition-action rules, temporal reasoning, and agent-oriented technologies), and its social and ethical implications.
Abstract: In this article we survey ambient intelligence (AmI), including its applications, some of the technologies it uses, and its social and ethical implications. The applications include AmI at home, care of the elderly, healthcare, commerce, and business, recommender systems, museums and tourist scenarios, and group decision making. Among technologies, we focus on ambient data management and artificial intelligence; for example planning, learning, event-condition-action rules, temporal reasoning, and agent-oriented technologies. The survey is not intended to be exhaustive, but to convey a broad range of applications, technologies, and technical, social, and ethical challenges.

373 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Augmented Reality is interactive, registered in 3D, and combines real and virtual objects; Milgram's Reality-Virtuality Continuum spans from the real environment to the virtual environment, with Augmented Reality and Augmented Virtuality in between.
Abstract: We define Augmented Reality (AR) as a real-time direct or indirect view of a physical real-world environment that has been enhanced/augmented by adding virtual computer-generated information to it [1]. AR is both interactive and registered in 3D, and combines real and virtual objects. Milgram's Reality-Virtuality Continuum, defined by Paul Milgram and Fumio Kishino, spans between the real environment and the virtual environment, comprising Augmented Reality and Augmented Virtuality (AV) in between, where AR is closer to the real world and AV is closer to a pure virtual environment, as seen in Fig. 1.1 [2].

320 citations

Journal ArticleDOI
TL;DR: This paper presents a survey of some recent works on hand gesture recognition using 3D depth sensors, and reviews the commercial depth sensors and public data sets that are widely used in this field.
Abstract: Three-dimensional hand gesture recognition has attracted increasing research interest in computer vision, pattern recognition, and human-computer interaction. The emerging depth sensors greatly inspired various hand gesture recognition approaches and applications, which were severely limited in the 2D domain with conventional cameras. This paper presents a survey of some recent works on hand gesture recognition using 3D depth sensors. We first review the commercial depth sensors and public data sets that are widely used in this field. Then, we review the state-of-the-art research for 3D hand gesture recognition in four aspects: 1) 3D hand modeling; 2) static hand gesture recognition; 3) hand trajectory gesture recognition; and 4) continuous hand gesture recognition. While the emphasis is on 3D hand gesture recognition approaches, the related applications and typical systems are also briefly summarized for practitioners.

291 citations
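The survey above covers a wide range of methods; purely as a toy illustration of the static hand gesture recognition setting it reviews, the sketch below classifies a depth patch by nearest-neighbour matching of a hand-crafted feature vector. The features, templates, and synthetic patches are invented for the example and are not taken from any surveyed system.

```python
import numpy as np

def depth_features(depth_patch: np.ndarray) -> np.ndarray:
    """Crude features from a hand-sized depth patch: mean depth, depth spread,
    and the fraction of foreground pixels in each image quadrant."""
    fg = depth_patch > 0                 # zero marks background / missing depth
    h, w = depth_patch.shape
    quads = [fg[:h//2, :w//2], fg[:h//2, w//2:], fg[h//2:, :w//2], fg[h//2:, w//2:]]
    vals = depth_patch[fg]
    return np.array([vals.mean(), vals.std(), *[q.mean() for q in quads]])

def classify(patch: np.ndarray, templates: dict) -> str:
    """Assign the gesture label whose template features are closest in L2 distance."""
    feats = depth_features(patch)
    return min(templates, key=lambda label: np.linalg.norm(feats - templates[label]))

# Hypothetical templates built from two synthetic example patches.
rng = np.random.default_rng(0)
open_hand = rng.uniform(400, 600, (64, 64))     # foreground everywhere
fist = np.zeros((64, 64))
fist[16:48, 16:48] = 500.0                      # compact central blob
templates = {"open_hand": depth_features(open_hand), "fist": depth_features(fist)}
print(classify(fist, templates))                # -> fist
```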