
Balancing physical and digital properties in mixed objects

28 May 2008, pp. 305-308
TL;DR: Based on the Mixed Interaction Model, the authors introduce a new characterization space of the physical and digital properties of a mixed object from an intrinsic viewpoint, without taking into account the context of use of the object.
Abstract: Mixed interactive systems seek to smoothly merge physical and digital worlds. In this paper we focus on mixed objects that take part in the interaction. Based on our Mixed Interaction Model, we introduce a new characterization space of the physical and digital properties of a mixed object from an intrinsic viewpoint without taking into account the context of use of the object. The resulting enriched Mixed Interaction Model aims at balancing physical and digital properties in the design process of mixed objects. The model extends and generalizes previous studies on the design of mixed systems and covers existing approaches of mixed systems including tangible user interfaces, augmented reality and augmented virtuality. A mixed system called ORBIS that we developed is used to illustrate the discussion: we highlight how the model informs the design alternatives of ORBIS.

Summary (3 min read)

1. INTRODUCTION

  • Mixed interactive systems seek to smoothly merge physical and digital worlds.
  • The design of such mixed systems gives rise to further design challenges due to the new roles that physical objects can play in an interactive system.
  • Addressing this challenge, in [7], the authors introduced the Mixed Interaction Model.
  • The authors first present the main features of ORBIS, a mixed system that they designed and developed.
  • The authors then recall the key elements of their model before presenting the intrinsic characterization scheme of a mixed object.

2. ILLUSTRATIVE EXAMPLE: ORBIS

  • ORBIS is a system providing new ways to enjoy personal pictures, music and videos in a family house.
  • Pictures are embedded in a silicone object, displayed as a slideshow through a mini screen, and are always correctly displayed according to the orientation of the silicone shape, thanks to embedded accelerometers.
  • ORBIS then allows the user to perform tasks including play/pause the presentation, shuffle or navigate the list of pictures (Table 1) by interacting with the mixed object.
  • For example, to play/pause the presentation, the user presses a tool; to navigate the pictures, the user rotates a tool with an embedded potentiometer.
  • Nevertheless in the rest of the paper, the authors focus on the mixed object “List of pictures” from an intrinsic point of view without considering its context of use.

3. MODELING OF A MIXED OBJECT

  • The key concept of the Mixed Interaction Model is a mixed object.
  • The Mixed Interaction Model enables us to model both mixed objects and interaction with them.
  • The authors recall here the main principles of the model for defining a mixed object only, since they focus on intrinsic characteristics of an object without considering the interaction with it.

3.1 Definition

  • Objects existing in both the physical and digital worlds are depicted in the literature as mixed objects [4], augmented objects or physical-digital objects, but there is no precise definition of such objects.
  • The link between the physical and the digital parts of an object is defined by linking modalities.
  • The authors reuse these two levels of abstraction, device and language.
  • Two accelerometers each acquire 1D acceleration from physical properties.
  • The output linking language translates the digital properties of the object in order to present the list of pictures as a slideshow.

3.2.1 Sensed/Generated Physical Properties

  • The authors consider physical properties independently of the linking modalities.
  • In order to take into account the user in the design process, the authors relate the perceived affordance [12] of physical properties, cultural constraints and predictability [1] to the sensed physical properties.
  • Biologists explore (move, turn, resize) molecules shown as a graph projected on the table.
  • Physical properties taken into account for the NAVRNA tool, i.e., the blue token, include the physical position and the color of the tokens.
  • The physical property of a token, its position, which was Sensed/Non Generated, is now Sensed/Generated, as in [14][15].

3.2.2 Acquired/Materialized Digital Properties

  • In order to take into account the user in the design process at the digital properties level, the authors may relate the materialized characteristic to the observability property [1].
  • By considering the same example, NAVRNA, designers may have a top-down approach, starting from the digital side.

4. INTRINSIC DESIGN OF A MIXED OBJECT: ORBIS EXAMPLE

  • Physical and digital properties of a mixed object are characterized by two orthogonal design axes, respectively Sensed/Generated and Acquired/Materialized as schematized in Figure 3.
  • On the one hand, the design approach can be bottom-up, starting from a physical object with a set of physical properties and then defining its generated physical properties as well as its sensed physical properties, before deciding the linking modalities.
  • The first obvious digital property is the digital list of pictures (Image 0, …, Image n).
  • Thus the authors need to define an input linking modality, linking the physical top of the silicone shape to the digital top of the pictures.
  • The non-generated physical property, i.e. the top of the silicone shape, is sensed by an input linking modality, such as (accelerometers, orientation).

6. CONCLUSION

  • Based on their Mixed Interaction Model, the authors introduce a new characterization space of the physical and digital properties of a mixed object from an intrinsic viewpoint.
  • According to [13], the fact that their model facilitates interconnection between existing approaches demonstrates its usefulness.
  • The authors have so far not found examples of design solutions in the literature that their model leaves out.
  • Applying their model and its intrinsic characteristics, the authors were able to make a fine distinction between these interfaces, where other taxonomies only partially capture these differences.
  • Going further than describing and classifying existing mixed systems, in order to assess if the model is useful for design, the authors use another form of empirical evaluation: they applied the model in real design situations.


Balancing Physical and Digital Properties
in Mixed Objects
Céline Coutrix and Laurence Nigay
Grenoble Informatics Laboratory (LIG), University of Grenoble 1, BP 53, 38041 Grenoble Cedex 9, France
33 4 76 51 44 40
{Celine.Coutrix, Laurence.Nigay}@imag.fr
ABSTRACT
Mixed interactive systems seek to smoothly merge physical and
digital worlds. In this paper we focus on mixed objects that take
part in the interaction. Based on our Mixed Interaction Model, we
introduce a new characterization space of the physical and digital
properties of a mixed object from an intrinsic viewpoint without
taking into account the context of use of the object. The resulting
enriched Mixed Interaction Model aims at balancing physical and
digital properties in the design process of mixed objects. The
model extends and generalizes previous studies on the design of
mixed systems and covers existing approaches of mixed systems
including tangible user interfaces, augmented reality and
augmented virtuality. A mixed system called ORBIS that we
developed is used to illustrate the discussion: we highlight how
the model informs the design alternatives of ORBIS.
Categories and Subject Descriptors
H.5.2 [User Interfaces] Theory and methods, User-centered
design. D.2.2 [Design Tools and Techniques] User interfaces
General Terms
Design, Human Factors.
Keywords
Mixed Systems, Mixed Objects, Augmented Reality, Tangible
User Interfaces, Design Space.
1. INTRODUCTION
Mixed interactive systems seek to smoothly merge physical and
digital worlds. Examples include tangible user interfaces,
augmented reality and augmented virtuality. The design of such
mixed systems gives rise to further design challenges due to the
new roles that physical objects can play in an interactive system.
The design challenge lies in the fluid and harmonious fusion of
the physical and digital worlds. Addressing this challenge, in [7],
we introduced the Mixed Interaction Model: Our contribution is a
new way of thinking of interaction design with mixed systems in
terms of mixed objects, putting on equal footing physical and
digital properties of an object since combining physical and
digital worlds is the essence of mixed systems. In mixed systems,
a mixed object is involved in the interaction. As identified in our
ASUR (Adapter, System, User, Real object) design notation [8]
for mixed systems, an object is either a tool used by the user to
perform her/his task or the object that is the focus of the task (i.e.,
task object).
In this paper, we focus on the physical and digital properties of a
mixed object in the light of our mixed interaction model. We
present a new characterization space of the physical and digital
properties of a mixed object from an intrinsic viewpoint. Intrinsic
characteristics of a mixed object are independent of its context of
use. Intrinsic properties can then be applied to an object that plays
the role of a tool or of a task object in the interaction. By
characterizing mixed objects, we enrich our model by providing a
better and unified understanding of the design possibilities.
The paper is organized as follows: We first present the main
features of ORBIS, a mixed system that we designed and
developed. ORBIS is used to illustrate our intrinsic
characterization space. We then recall the key elements of our
model before presenting the intrinsic characterization scheme of a
mixed object. We illustrate it by considering a mixed object in
ORBIS. We finally consider related studies and show how our
characterization scheme unifies existing approaches.
2. ILLUSTRATIVE EXAMPLE: ORBIS
ORBIS is a system providing new ways to enjoy personal
pictures, music and videos in a family house. As part of a
multidisciplinary project involving HCI researchers, computer
scientists and a product designer, we designed and developed the
functional prototype of Figure 1. The list of personal media is to
be imported into the system beforehand. In the first version of the
system, we only consider pictures. Pictures are embedded in a
silicone object (Figure 1-a), displayed as a slideshow through a
mini screen and are always correctly displayed according to the
orientation of the silicone shape (Figure 1-b), thanks to embedded
accelerometers. This mixed object is called “List of pictures”.
Figure 1: -a- ORBIS prototype. -b- Rotating the mixed object.
ORBIS then allows the user to perform tasks including play/pause
the presentation, shuffle or navigate the list of pictures (Table 1)
by interacting with the mixed object. For example, to play/pause
the presentation, the user presses a tool. This action is sensed by a
balloon fixed to an atmospheric pressure sensor. To navigate the
pictures, the user rotates the tool where a potentiometer is
embedded. We considered different solutions, for example a
design with only one mixed object (Table 1, left column) that
plays the role of both task object and tool, and another one with
two distinct objects (Table 1, right column). Table 1 shows
different design solutions for interacting with the ORBIS mixed
object “List of pictures”. These are examples to show how the
mixed object can be used in ORBIS. Nevertheless in the rest of
the paper, we focus on the mixed object “List of pictures” from an
intrinsic point of view without considering its context of use.
Table 1: Interacting with the mixed object “List of Pictures” in ORBIS: different design solutions (tasks: Play/Pause, Navigate; left column: a single object acting as both task object and tool; right column: two distinct objects).
3. MODELING OF A MIXED OBJECT
The key concept of the Mixed Interaction Model is a mixed
object. The Mixed Interaction Model enables us to model both
mixed objects and interaction with them. We recall here the main
principles of the model for defining a mixed object only, since we
focus on intrinsic characteristics of an object without considering
the interaction with it.
3.1 Definition
Objects existing in both the physical and digital worlds are
depicted in the literature as mixed objects [4], augmented objects
or physical-digital objects, but there is no precise definition of
such objects. In the Mixed Interaction Model, a mixed object is
defined by its physical and digital properties as well as the link
between these two sets of properties. The link between the
physical and the digital parts of an object is defined by linking
modalities. We base the definition of a linking modality on that of
an interaction modality [17]: Given that d is a physical device that
acquires or delivers information, and l is an interaction language
that defines a set of well-formed expressions that convey
meaning, an interaction modality [17] is a pair (d,l), such as
(camera, computer vision) or (microphone, pseudo natural
language). We reuse these two levels of abstraction, device and
language. But as opposed to interaction modalities used by the
user to interact with mixed environments, the modalities that
define the link between physical and digital properties of an object
are called linking modalities. There are two types of linking
modalities that compose a mixed object: An input linking
modality (d_i, l_i) is responsible for (1) acquiring a subset of physical properties, using a device d_i (input device), (2) interpreting these acquired physical data in terms of digital properties, using a language l_i (input language). An output linking modality is in charge of (1) generating data based on the set of digital properties, using a language l_o (output language), (2) translating these generated physical data into perceivable physical properties thanks to a device d_o (output device).
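To make the anatomy of this definition easier to follow, here is a minimal Python sketch of a mixed object as two sets of properties joined by input and output linking modalities. The class and method names (LinkingModality, MixedObject, sense, materialize) are our own illustrative choices, not part of the model's notation.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class LinkingModality:
    """A (device, language) pair linking the physical and digital parts of an object."""
    device: str                                            # e.g. "accelerometer" or "mini screen"
    language: Callable[[Dict[str, Any]], Dict[str, Any]]   # interprets or generates property values

@dataclass
class MixedObject:
    """A mixed object: physical properties, digital properties, and the linking modalities."""
    physical_properties: Dict[str, Any]
    digital_properties: Dict[str, Any]
    input_modalities: List[LinkingModality] = field(default_factory=list)
    output_modalities: List[LinkingModality] = field(default_factory=list)

    def sense(self) -> None:
        """Input link: acquire physical data and interpret them as digital properties."""
        for modality in self.input_modalities:
            self.digital_properties.update(modality.language(self.physical_properties))

    def materialize(self) -> List[Dict[str, Any]]:
        """Output link: generate presentable data from the digital properties."""
        return [modality.language(self.digital_properties) for modality in self.output_modalities]
```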
As an example of a mixed object, we consider the list of pictures
in ORBIS presented in Figure 1 and modeled in Figure 2. Two
accelerometers each acquire 1D acceleration from physical
properties. The resulting data are combined: for the composition
of linking modalities at both device and language levels, we reuse
the CARE properties [17]. The input linking language then
translates the resulting combined data into the digital property
top, which can have four possible values corresponding to each
side of a picture. Figure 1 illustrates this process by showing how
the changes of physical properties (rotation of the mixed object)
impact on the digital properties of the object (orientation of the
displayed picture) thanks to the linking modalities. The output
linking language translates the digital properties of the object
(Figure 2) in order to present the list of pictures as a slideshow.
Finally the device of the output linking modality (i.e., the mini
screen in Figure 1 and 2) makes the slideshow perceivable by the
user.
Figure 2: Mixed object “List of pictures” in ORBIS.
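As a purely illustrative instantiation of the sketch above, the “List of pictures” object could be wired up as follows. The threshold logic, property values and file names are assumptions made for the example, not the actual ORBIS implementation.

```python
def orientation_language(physical: dict) -> dict:
    """Input linking language: combine the two 1D accelerations into the digital
    property 'top', one of the four possible sides of a picture (assumed logic)."""
    ax, ay = physical["accel_x"], physical["accel_y"]
    if abs(ax) > abs(ay):
        return {"top": "right" if ax > 0 else "left"}
    return {"top": "up" if ay >= 0 else "down"}

def slideshow_language(digital: dict) -> dict:
    """Output linking language: pick the current picture and rotate it so that
    its top matches the physical top of the silicone shape."""
    rotation = {"up": 0, "right": 90, "down": 180, "left": 270}[digital["top"]]
    return {"image": digital["pictures"][digital["current"]], "rotation_deg": rotation}

list_of_pictures = MixedObject(
    physical_properties={"accel_x": 0.0, "accel_y": 9.8,     # gravity along y: 'up'
                         "silicone shape": "rounded"},       # neither sensed nor generated
    digital_properties={"pictures": ["img0.jpg", "img1.jpg"],  # Image 0, ..., Image n
                        "current": 0, "isPresented": True, "top": "up"},
    input_modalities=[LinkingModality("2 accelerometers (combined)", orientation_language)],
    output_modalities=[LinkingModality("mini screen", slideshow_language)],
)

list_of_pictures.sense()                   # rotating the object updates the 'top' property
frame = list_of_pictures.materialize()[0]  # what the mini screen displays
```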
3.2 Intrinsic Characteristics
The intrinsic characterization space is based on two orthogonal
axes that describe the physical and digital properties of a mixed
object.
3.2.1 Sensed/Generated Physical Properties
We consider physical properties independently of the linking
modalities. Without specifying the linking modalities, a physical
property can be sensed or not by an input linking modality, and
generated or not by an output linking modality, as shown in
Figure 3.
Figure 3: Characterization of the physical and digital
properties of a mixed object.
In order to take into account the user in the design process, we
relate the perceived affordance [12] of physical properties,
cultural constraints and predictability [1] to the sensed physical
properties. Affordance [12] is defined as the physical properties
the user can act on. Cultural constraints are conventions shared by
users from the same cultural group. For example, if a ball has the appearance of a soccer ball, this suggests to the users to hit the
ball with their feet. Such actions, called expected actions in [3],
should then correspond to sensed physical properties to ensure
partial predictability [1]. The complete predictability will then be
ensured by designing the proper input linking modality.
To fully illustrate the Sensed/Generated physical properties of
Figure 3, we consider the example of the NAVRNA system, a
system that we have designed and developed for the manipulation
of RNA molecules [2]. Biologists move blue tokens around a
table instrumented with camera and projector (Figure 4). The
physical position of a token is sensed by the video camera.
Biologists explore (move, turn, resize) molecules shown as a
graph projected on the table.
Figure 4: NAVRNA.
Physical properties taken into account for the NAVRNA tool, i.e., the blue token, include the physical position and the color of the
tokens. Instead of having a non-generated color, we can envision
generating the color of the token as a feedback of the sensed
physical position, as in [14]. Nevertheless in NAVRNA the color
is sensed by the computer vision linking modality. We could then
change the linking modality and consider infrared as in [11][14].
In that case, the color of a token, which was initially a
Sensed/Non Generated physical property, is now a Non Sensed/
Generated physical property. Considering the second physical
property, the position of the tokens, we may decide that when the
user moves a molecule, all tools (and therefore tokens) move
accordingly: The physical property of a token, its position, which
was Sensed/Non Generated, is now Sensed/Generated, as in
[14][15]. Identifying such a physical property during the design
phase leads the designer to decide the protocol for modifying this
shared resource (i.e., the physical property). For example in [14],
a mode is used: the object is either in sensing or generating mode.
As shown with the NAVRNA example, the two characteristics
Sensed/Generated of a physical property allow the designer to
systematically explore the design space independently of the
linking modalities and therefore the technological considerations.
3.2.2 Acquired/Materialized Digital Properties
In a symmetric way, digital properties can be acquired or not, and
materialized or not. In order to take into account the user in the
design process at the digital properties level, we may relate the
materialized characteristic to the observability property [1]. By
considering the same example, NAVRNA, designers may have a
top-down approach, starting from the digital side. The digital
property is [x,y]. It is an acquired digital property as explained
above. For enhancing the observability of the state of the object,
the property can be materialized for example by projecting a color
on top of the token as in [14]. The digital property is then
Acquired/Materialized.
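The two orthogonal axes can be read as two pairs of boolean flags, one pair for each physical property and one for each digital property. The sketch below tags the NAVRNA token and its design alternatives this way; the flag and variable names are ours, introduced only as a notational aid.

```python
from dataclasses import dataclass

@dataclass
class PhysicalProperty:
    name: str
    sensed: bool        # read by some input linking modality
    generated: bool     # produced by some output linking modality

@dataclass
class DigitalProperty:
    name: str
    acquired: bool      # obtained through an input linking modality
    materialized: bool  # presented through an output linking modality

# NAVRNA blue token, camera-based version described in the text:
token_position = PhysicalProperty("position", sensed=True, generated=False)
token_color    = PhysicalProperty("color",    sensed=True, generated=False)
token_xy       = DigitalProperty("[x, y]",    acquired=True, materialized=False)

# Design alternatives discussed in the text:
infrared_color    = PhysicalProperty("color",    sensed=False, generated=True)  # IR sensing + projected feedback
actuated_position = PhysicalProperty("position", sensed=True,  generated=True)  # actuated table moves tokens
projected_xy      = DigitalProperty("[x, y]",    acquired=True, materialized=True)  # color projected on the token
```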
4. INTRINSIC DESIGN OF A MIXED
OBJECT: ORBIS EXAMPLE
Physical and digital properties of a mixed object are characterized
by two orthogonal design axes, respectively Sensed/Generated
and Acquired/Materialized as schematized in Figure 3. The
characterization scheme does not constrain the order of design
activity. On the one hand, the design approach can be bottom-up,
starting from a physical object with a set of physical properties
and then defining its generated physical properties as well as its
sensed physical properties, before deciding the linking modalities.
On the other hand, the approach can be top-down, starting from a set
of digital properties and defining the acquired and materialized
digital properties, as in the ORBIS example. Going back and
forth, considering alternately the physical properties and the digital properties in the light of our characterization scheme defines a smooth combination of bottom-up and top-down design approaches for a mixed object. We illustrate this point by
considering the design of the “list of pictures” object in ORBIS.
In the context of the design of ORBIS, the list of pictures is
originally a digital object. As we wanted it to be more anchored in
the physical world, we designed it as a mixed object. The first
obvious digital property is the digital list of pictures
(Image 0, …, Image n). We identify further digital
properties attached to it: The order of the pictures, initially
arranged (0, …, n), the boolean digital property
isPresented, initially false, and current, initially 0.
Digital properties can be acquired and/or materialized. In this case
of purely digital pictures (non-acquired), we decided to
materialize these digital properties by choosing the (mini-screen,
slideshow) modality. Based on this digital part of the object, we
explore alternatives for linking devices and languages (i.e.,
linking modalities) in order to augment this object with a physical
part. Physical properties can be sensed/generated or not by linking
modalities. The choice of physical properties that are neither sensed nor generated was driven by aesthetic and portability requirements, such as the silicone shape around the screen (Figure 1). We also consider a physical property to be sensed, such as the
top of the silicone shape, since we want the picture to be always
correctly displayed according to the orientation of the silicone
shape (Figure 1). Thus we need to define an input linking
modality, linking the physical top of the silicone shape to the digital top of the pictures. The non-generated physical property, i.e. the top of the silicone shape, is sensed by an input linking modality, such as (accelerometers,
orientation). The input linking modality being defined, a new
digital property is identified, having four values corresponding to
the four possible sides of a picture. This new digital property is
acquired thanks to the input linking modality, as opposed to the
other digital properties that are not acquired. Figure 2 shows the
corresponding design, with an input linking modality based on
accelerometers as well as the acquired digital property, top.
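Using the same flags sketched earlier, the ORBIS walk-through can be summarized as a short inventory of properties. The property names follow the text; tagging top as materialized reflects our reading that the displayed orientation presents it, and is not stated explicitly.

```python
orbis_digital = [
    DigitalProperty("list of pictures (Image 0..n)", acquired=False, materialized=True),
    DigitalProperty("order",                         acquired=False, materialized=True),
    DigitalProperty("isPresented",                   acquired=False, materialized=True),
    DigitalProperty("current",                       acquired=False, materialized=True),
    DigitalProperty("top",                           acquired=True,  materialized=True),  # via accelerometers
]

orbis_physical = [
    PhysicalProperty("silicone shape (aesthetics, portability)", sensed=False, generated=False),
    PhysicalProperty("top of the silicone shape",                sensed=True,  generated=False),
]
```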
5. RELATED WORK
The Sensed/Generated and Acquired/Materialized characteristics
of the physical and digital properties generalize the Input &
Output axis presented in [9], the characterization of physical
properties in MCRit [16] and the sensed movements in [3].
First, the Input & Output axis [9] characterizes the system
inputs and outputs without considering the two levels of a
linking modality, device and language, as well as the two
types of properties, physical and digital. These levels of abstraction are also presented in [5] and [10]. For example,
we refine “Light (photoelectric cell)” from [9] into: the
sensed physical luminosity, the input linking modality
(photoelectric cell, language-filter), and resulting digital
properties. Such a refinement helps explore the design
possibilities by systematically considering the design choices
at each level of abstraction.
Second, MCRit [16] splits the output of the system between
tangible and intangible representation. Our model extends
this definition by considering both inputs and outputs.
Moreover since our framework is not dedicated to tangible
UI only, we consider tangible and non-tangible mixed
objects. For example, an object superimposed on the
physical world through semi transparent glasses is mixed,
but not tangible.
Finally in the framework for designing sensing-based
interaction [3], sensed movements can be related to the
sensed properties of a mixed object: the sensed
movements/properties that are measured by a computer. Our
model extends this notion by also considering the generated
physical properties. Moreover our model not only considers
the physical properties but proposes a symmetric analysis of
the digital properties.
6. CONCLUSION
Based on our Mixed Interaction Model, we introduce a new
characterization space of the physical and digital properties of a
mixed object from an intrinsic viewpoint. Our intrinsic
characterization scheme unifies existing design spaces while
extending them. According to [13], the fact that our model facilitates interconnection between existing approaches demonstrates its usefulness. Moreover, the model has been used to analyze
existing mixed systems. We have so far not found examples of design solutions in the literature that our model leaves out. This shows that the model can be used to design a wide and relevant range of mixed objects, since reverse engineering was possible.
This demonstrates the soundness of the underlying concepts of the
model. More importantly the modeling of existing systems
enables us to describe in detail the systems and to make a fine
distinction between them. As a benchmark, we chose similar
interfaces like NAVRNA [2], IRPhicon [11] and the Actuated
Workbench [14]. Differences between them are not obvious: in all
of them the user interacts by moving an object on a surface.
Applying our model and its intrinsic characteristics, we were able
to make a fine distinction between these interfaces, where other
taxonomies only partially capture these differences. This shows
that our model provides a useful framework for better
understanding existing mixed systems.
Going further than describing and classifying existing mixed
systems, in order to assess whether the model is useful for design, we used another form of empirical evaluation: we applied the model in
real design situations. Although we presented here only one
example, the model has been used to design new mixed systems
such as ORBIS, RAZZLE [7] or Snap2Play [6] with real end users
of the model, i.e. the designer and the software engineer, not the
authors of the model. As on-going work, we are currently further
evaluating the model by considering three groups of designers in
the context of a mixed system for museum exhibits: one group
working with this model, another with the ASUR model [8], and a
third group without any model.
7. REFERENCES
[1] Abowd, G., Coutaz, J., Nigay, L., 1992. Structuring the
Space of Interactive System Properties. In proceedings of
EHCI’92, North Holland Publ., 113-130.
[2] Bailly, G., Nigay, L., Auber, D., 2006. NAVRNA:
Visualization-Exploration-Edition of RNA. In proceedings of
AVI’06, ACM Press, NY, 504-507.
[3] Benford, S., et al., 2005. Expected, Sensed, and Desired: A
Framework for Designing Sensing-Based Interaction. ACM
TOCHI, 12, 1 (March 2005), 3-30.
[4] Binder, T., et al., 2004. Supporting Configurability in a
Mixed Media Environment for Design Students. Springer
Personal and Ubiquitous Computing, 8, 5 (Sept. 2004), 310 -
325.
[5] Buxton, W., 1983. Lexical and pragmatic considerations of
input structures. ACM SIGGRAPH Computer Graphics, 17, 1
(Jan. 1983), 31-37.
[6] Chin, T., You, Y., Coutrix, C., Lim, J., Chevallet, J.-P.,
Nigay, L., 2008. Snap2Play: A Mixed-Reality Game based
on Scene Identification. In proceedings of ACM IEEE
MMM’08, LNCS, Springer, 220-229.
[7] Coutrix, C., Nigay, L., 2006. Mixed Reality: A Model of
Mixed Interaction. In proceedings of AVI’06, ACM Press,
NY, 43-50.
[8] Dubois, E., Nigay, L., Troccaz, J., 2001. Consistency in
Augmented Reality Systems. In proceedings of EHCI'01,
LNCS, Springer, 117-130.
[9] Fitzmaurice, G., Ishii, H., Buxton, W., 1995. Bricks: Laying
the foundations for Graspable User Interfaces. In proceedings
of CHI’95, ACM Press, NY, 442-449.
[10] Mackinlay, J., Card, S., Robertson, G., 1990. A Semantic
Analysis of the Design Space of Input Devices. Lawrence
Erlbaum HCI, 5, 2&3 (1990), 145-190.
[11] Moore, D., Want, R., Harrison, B., Gujar, A., Fishkin, K.,
1999. Implementing Phicons: Combining Computer Vision
with InfraRed Technology for Interactive Physical Icons. In
proceedings of UIST’99, ACM Press, NY, 67-68.
[12] Norman, D., 1999. Affordance, Conventions and Design.
ACM Interactions, 6, 3 (May-June 1999), 38-43.
[13] Olsen, D., 2007. Evaluating user interface systems research.
In proceedings of UIST’07, ACM Press, NY, 251-258.
[14] Pangaro, G., Maynes-Aminzade, D., Ishii, H., 2002. The
Actuated Workbench: Computer-Controlled Actuation in
Tabletop Tangible Interfaces. In proceedings of UIST’02,
ACM Press, NY, 181-190.
[15] Patten, J., Ishii, H., 2007. Mechanical Constraints as
Computational Constraints in Tabletop Tangible Interfaces.
In proceedings of CHI’07, ACM Press, NY, 809-818.
[16] Ullmer, B., Ishii, H., Jacob, R., 2005. Token+constraint
systems for tangible interaction with digital information.
ACM TOCHI, 12, 1 (March 2005), 81-118.
[17] Vernier, F., Nigay, L., 2000. A Framework for the
Combination and Characterization of Output Modalities. In
proceedings of DSVIS'00, LNCS, Springer, 32-48.
