
Design Study Methodology:
Reflections from the Trenches and the Stacks
Michael Sedlmair, Member, IEEE, Miriah Meyer, Member, IEEE, and Tamara Munzner, Member, IEEE
Abstract—Design studies are an increasingly popular form of problem-driven visualization research, yet there is little guidance avail-
able about how to do them effectively. In this paper we reflect on our combined experience of conducting twenty-one design studies,
as well as reading and reviewing many more, and on an extensive literature review of other field work methods and methodologies.
Based on this foundation we provide definitions, propose a methodological framework, and provide practical guidance for conducting
design studies. We define a design study as a project in which visualization researchers analyze a specific real-world problem faced
by domain experts, design a visualization system that supports solving this problem, validate the design, and reflect about lessons
learned in order to refine visualization design guidelines. We characterize two axes—a task clarity axis from fuzzy to crisp and an
information location axis from the domain expert’s head to the computer—and use these axes to reason about design study contribu-
tions, their suitability, and uniqueness from other approaches. The proposed methodological framework consists of 9 stages: learn,
winnow, cast, discover, design, implement, deploy, reflect, and write. For each stage we provide practical guidance and outline poten-
tial pitfalls. We also conducted an extensive literature survey of related methodological approaches that involve a significant amount
of qualitative field work, and compare design study methodology to that of ethnography, grounded theory, and action research.
Index Terms—Design study, methodology, visualization, framework.
1 INTRODUCTION
Over the last decade design studies have become an increasingly pop-
ular approach for conducting problem-driven visualization research.
Design study papers are explicitly welcomed at several visualization
venues as a way to explore the choices made when applying visualiza-
tion techniques to a particular application area [55], and many exem-
plary design studies now exist [17, 34, 35, 56, 94]. A careful reading
of these papers reveals multiple steps in the process of conducting a
design study, including analyzing the problem, abstracting data and
tasks, designing and implementing a visualization solution, evaluating
the solution with real users, and writing up the findings.
And yet there is a lack of specific guidance in the visualization liter-
ature that describes holistic methodological approaches for conducting
design studies—currently only three paragraphs exist [49, 55]. The
relevant literature instead focuses on methods for designing [1, 42, 66,
79, 82, 90, 91] and evaluating [13, 33, 39, 50, 68, 69, 76, 80, 85, 86, 95]
visualization tools. We distinguish between methods and methodology
with the analogy of cooking; methods are like ingredients, whereas
methodology is like a recipe. More formally, we use Crotty’s defini-
tions that methods are “techniques or procedures” and a methodology
is the “strategy, plan of action, process, or design lying behind the
choice and use of particular methods” [18].
From our personal experience we know that the process of con-
ducting a design study is hard to do well and contains many potential
pitfalls. We make this statement after reflecting on our own design
studies, in total 21 between the 3 authors, and our experiences of re-
viewing many more design study papers. We consider at least 3 of our
own design study attempts to be failures [51, 54, 72]; the other 18
were more successful [4, 5, 10, 40, 43, 44, 45, 46, 52, 53, 67, 70, 71,
73, 74, 75, 77, 78].
In the process of conducting these design studies we grappled with
many recurring questions: What are the steps you should perform, and
in what order? Which methods work, and which do not? What are the
potential research contributions of a design study? When is the use
of visualization a good idea at all? How should we go about collab-
orating with experts from other domains? What are pitfalls to avoid?
How and when should we write a design study paper? These questions
motivated and guided our methodological work and we present a set
of answers in this paper.
We conducted an extensive literature review in the fields of human
computer interaction (HCI) [7, 8, 9, 12, 16, 19, 20, 21, 22, 25, 26,
27, 28, 29, 30, 31, 38, 47, 57, 63, 64, 65, 83] and social science [6,
14, 18, 24, 32, 62, 81, 87, 93] in hopes of finding methodologies that
we could apply directly to design study research. Instead, we found
an intellectual territory full of quagmires where the very issues we
ourselves struggled with were active subjects of nuanced debate. We
did not find any off-the-shelf answers that we consider suitable for
wholesale assimilation; after careful gleaning we have synthesized a
framing of how the concerns of visualization design studies both align
with and differ from several other qualitative approaches.
This paper is the result of a careful analysis of both our experi-
ences in the “trenches” while doing our own work, and our foray into
the library “stacks” to investigate the ideas of others. We provide, for
the first time, a discussion about design study methodology, includ-
ing a clear definition of design studies as well as practical guidance
for conducting them effectively. We articulate two axes, task clarity
and information location, to reason about what contributions design
studies can make, when they are an appropriate research device, and
how they are unique from other approaches. For practical guidance we
propose a process for conducting design studies, called the nine-stage
framework, consisting of the following stages: learn, winnow, cast,
discover, design, implement, deploy, reflect, and write. At each stage
we identify pitfalls that can endanger the success of a design study, as
well as strategies and methods to help avoid them. Finally, we contrast
design study methodology to related research methodologies used in
other fields, in particular those used or discussed in HCI, and elaborate
on similarities and differences. In summary, the main contributions of
this paper are:
• definitions for design study methodology, including articulation of the task clarity and information location axes;
• a nine-stage framework for practical guidance in conducting design studies and collaborating with domain experts;
• 32 identified pitfalls occurring throughout the framework;
• a comparison of design study methodology to that of ethnography, grounded theory and action research.
We anticipate that a wide range of readers will find this paper use-
ful, including people new to visualization research, researchers experienced in technique-driven visualization work who are transitioning
to problem-driven work, experienced design-study researchers seek-
ing comparison with other methodologies, reviewers of design study
papers, and readers outside of the visualization community who are
considering when to employ visualization versus full automation.
2 RELATED WORK
Only two sources discuss design study methodology, and both are
brief. The original call for design study papers [55] contains only a
paragraph about expectations, while Munzner [49] elaborates slightly
further by defining the critical parts of a design study. Neither of
these sources provide specific methodological and practical guidance
on how to conduct design studies.
There is, however, a rich source of papers elaborating on models
and methods, particularly evaluation methods, that pertain to design
studies. Some of the most relevant for design studies include the in-
vestigation of Lloyd and Dykes into the early steps of problem anal-
ysis and paper prototyping in a longitudinal geovisualization design
study, providing interesting insights into which human-centered meth-
ods work and which do not [42]; van Wijk’s model for understanding
and reasoning about the “value of visualization” [88] that provides a
lens on the interplay between data, user, and visualization; Amar and
Stasko’s guidance for problem-driven visualization research by identi-
fying and articulating gaps between the representation and the analysis
of data, and providing precepts for bridging these gaps [3]; and Pretorius
and van Wijk’s arguments for the importance of considering not just
the needs of the user, but also the structure and semantics of the data
when designing a visualization tool [61].
The majority of other related work on methods deals with the ques-
tion of how to evaluate visualization designs and tools in real-world
settings. Carpendale provides an overview of relevant validation meth-
ods in visualization [13] while Munzner provides guidance on when to
use which method [50]. Lam et al. conduct a broad literature survey of
more than 800 visualization papers and derive seven guiding scenar-
ios describing visualization evaluation [39]. Sedlmair et al. provide
practical advice on how to validate visualizations in large company
settings, one of many settings in which a design study may be con-
ducted [76]. Finally, many proposed evaluation methods address the
specific needs of validating the usefulness of visualization tools such
as the multidimensional in-depth long-term case study approach [80],
the insight-based method [68, 69], and grounded evaluation [33].
While these papers are excellent resources for specific methods ap-
plicable to design studies, the goal of this paper is a higher level artic-
ulation of a methodology for conducting design studies.
3 CHARACTERIZING DESIGN STUDIES
This section defines key terms, proposes two axes that clarify the po-
tential contributions of design studies, and uses these axes to charac-
terize their contributions and suitability.
3.1 Definitions
Design studies are one particular form of the more general category
of problem-driven research, where the goal is to work with real users
to solve their real-world problems. At the other end of the spectrum
is technique-driven research, where the goal is to develop new and
better techniques without necessarily establishing a strong connection
to a particular documented user need. The focus in this paper is on
problem-driven research, but in the larger picture we argue that the
field of visualization benefits from a mix of both to maintain vitality.
We define a design study as follows:
A design study is a project in which visualization re-
searchers analyze a specific real-world problem faced by
domain experts, design a visualization system that sup-
ports solving this problem, validate the design, and reflect
about lessons learned in order to refine visualization design
guidelines.
Our definition implies the following:
analysis: Design studies require analysis to translate tasks and
data from domain-specific form into abstractions that a user can
address through visualization.
real-world problem: At the heart of a design study is a contri-
bution toward solving a real-world problem: real users and real
data are mandatory.
design: Our definition of design is the creative process of search-
ing through a vast space of possibilities to select one of many
possible good choices from the backdrop of the far larger set
of bad choices. Successful design typically requires the explicit
consideration of multiple alternatives and a thorough knowledge
of the space of possibilities.
validation: A crucial aspect of our definition is the validation of
the problem analysis and the visualization design in the broad
sense of Munzner’s nested model [50]. We advocate choosing
from a wide variety of methods according to their suitability for
evaluating the different framework stages, including justification
according to known principles, qualitative analysis of results, in-
formal expert feedback, and post-deployment field studies.
reflection: Design becomes research when reflection leads to im-
proving the process of design itself, by confirming, refining, re-
jecting, or proposing guidelines.
In this paper, we propose a specific process for conducting design
studies, the nine-stage framework, described in detail in Section 4.
We offer it as a scaffold to provide guidance for those who wish to
begin conducting design studies, and as a starting point for further
methodological discussion; we do not imply that our framework is the
only possible effective approach.
Collaboration between visualization researchers and domain ex-
perts is a fundamental and mandatory part of the nine-stage frame-
work; in the rest of the paper we distinguish between these roles.
While strong problem-driven work can result from situations where
the same person holds both of these roles, we do not address this case
further here. The domain expert role is crucial; attempts to simply ap-
ply techniques without a thorough understanding of the domain con-
text can fail dramatically [92].
Conducting a design study using the nine-stage framework can lead
to three types of design study research contributions, the first of
which is a problem characterization and abstraction. Characterizing
a domain problem through an abstraction into tasks and data has mul-
tiple potential benefits. First, this characterization is a crucial step in
achieving shared understanding between visualization researchers and
domain experts. Second, it establishes the requirements against which
a design proposal should be judged. It can thus be used not only by
the researchers conducting the design study, but also by subsequent
researchers who might propose a different solution to the same prob-
lem. Finally, it can enable progress towards a fully automatic approach
that does not require a human in the loop by causing relevant domain
knowledge to be articulated and externalized. We thus argue for con-
sidering this characterization and abstraction as a first-class contribu-
tion of a design study.
A validated visualization design is the second type of possible con-
tribution. A visualization tool is a common outcome of a design study
project. Our definition of design study requires that the tool must be
appropriately validated with evidence that it does in fact help solve
the target domain problem and is useful to the experts. The validated
design of a visualization tool or system is currently the most common
form of design study contribution claim.
The third type of contribution is the reflection on the design study
and its retrospective analysis in comparison to other related work.
Lessons learned can improve current guidelines, for example visu-
alization and interaction design guidelines, evaluation guidelines, or
process guidelines.
A design study paper is a paper about a design study. Reviewers
of design study papers should consider the sum of contributions of all
three types described above, rather than expecting that a single design
study paper have strong contributions of all three. For instance, a de-
sign study with only a moderate visual encoding design contribution
might have an interesting and strong problem characterization and abstraction, and a decent reflection on guidelines. On the other hand, a
very thorough design and evaluation might counterbalance a moder-
ate problem characterization or reflection. Our definitions imply that
a design study paper does not require a novel algorithm or technique
contribution. Instead, a proposed visualization design is often a well-
justified combination of existing techniques. While a design study
paper is the most common outcome of a design study, other types of
research papers are also possible such as technique or algorithm, eval-
uation, system, or even a pure problem characterization paper [50].
3.2 Task Clarity and Information Location Axes
We introduce two axes, task clarity and information location, as shown
in Figure 1. The two axes can be used as a way to think and reason
about problem characterization and abstraction contributions which,
although common in design studies, are often difficult to capture and
communicate.
The task clarity axis depicts how precisely a task is defined, with
fuzzy on one side and crisp on the other. An example of a crisp
task is “buy a train ticket”. This task has a clearly defined goal with a
known set of steps. For such crisp tasks it is relatively straightforward
to design and evaluate solutions. Although similarly crisp low-level
visualization tasks exist, such as correlate, cluster or find outliers [2],
reducing a real-world problem to these tasks is challenging and time
consuming. Most often, visualization researchers are confronted with
complex and fuzzy domain tasks. Data analysts might, for instance,
be interested in understanding the evolutionary relationship between
genomes [45], comparing the jaw movement between pigs [34], or exploring the relationship between voting behavior and ballot design [94]. These
domain tasks are inherently ill-defined and exploratory in nature. The
challenge of evaluating solutions against such fuzzy tasks is well-
understood in the information visualization community [59].
Task clarity could be considered the combination of many other fac-
tors; we have identified two in particular. The scope of the task is one:
the goal in a design study is to decompose high-level domain tasks of
broad scope into a set of more narrow and low-level abstract tasks.
The stability of the task is another: the task might change over the
course of the design study collaboration. It is common, and in fact
a sign of success, for the tasks of the experts to change after the re-
searcher introduces visualization tools, or after new abstractions cause
them to re-conceptualize their work. Changes from external factors,
however, such as strategic priority changes in a company setting or
research focus changes in an academic setting, can be dangerous.
The second axis is the information location, characterizing how
much information is only available in the head of the expert versus
what has been made explicit in the computer. In other words, when
considering all the information required to carry out a specific task,
this axis characterizes how much of the information and context sur-
rounding the domain problem remains as implicit knowledge in the
expert’s head, versus how much data or metadata is available in a dig-
ital form that can be incorporated into the visualization.
We define moving forward along either of these axes as a design
study contribution. Note that movement along one axis often causes
movement along the other: increased task clarity can facilitate a bet-
ter understanding of derived data needs, while increased information
articulation can facilitate a better understanding of analysis needs [61].
3.3 Design Study Methodology Suitability
The two axes characterize the range of situations in which design study
methodology is a suitable choice. This rough characterization is not
intended to define precise boundaries, but rather for guiding the under-
standing of when, and when not, to use design studies for approaching
certain domain problems.
Figure 1 shows how design studies fall along a two-dimensional
space spanned by the task clarity and the information location axes.
The red and the blue areas at the periphery represent situations for
which design studies may be the wrong methodological choice. The
red vertical area on the left indicates situations where no or very lit-
tle data is available. This area is a dangerous territory because an
effective visualization design is not likely to be possible; we provide
Fig. 1. The task clarity axis (fuzzy to crisp) and the information location axis (expert's head to computer) as a way to analyze the suitability of design study methodology. Red ("not enough data") and blue ("algorithm automation possible") areas mark regions where design studies may be the wrong methodological choice.
ways to identify this region when winnowing potential collaborations
in Section 4.1.2.
The blue triangular area on the top right is also dangerous terri-
tory, but for the opposite reason. Visualization might be the wrong
approach here because the task is crisply defined and enough infor-
mation is computerized for the design of an automatic solution. Con-
versely, we can use this area to define when an automatic solution is
not possible; automatic algorithmic solutions such as machine learning
techniques make strong assumptions about crisp task clarity and avail-
ability of all necessary information. Because many real-world data
analysis problems have not yet progressed to the crisp/computer ends
of the axes, we argue that design studies can be a useful step towards
a final goal of a fully automatic solution.
The remaining white area indicates situations where design studies
are a good approach. This area is large, hinting that different design
studies will have different characteristics. For example, the regions
towards the top left at the beginning of both axes require significant
problem characterization and data abstraction before a visualization
can be designed—a paper about such a project is likely to have a sig-
nificant contribution of this type. Design studies that are farther along
both axes will have a stronger focus on visual encoding and design
aspects, with a more modest emphasis on the other contribution types.
These studies may also make use of combined automatic and visual
solutions, a common approach in visual analytics [84].
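To make this reasoning concrete, the following minimal Python sketch (our illustration; the paper deliberately avoids defining precise boundaries) treats both axes as normalized scores and classifies a candidate problem into the red, blue, or white regions of Figure 1. The threshold values are invented assumptions.

```python
# Minimal sketch (illustration only, not from the paper): the suitability
# regions of Figure 1 as a simple decision rule. Both axes are normalized to
# [0, 1]: task_clarity 0 = fuzzy, 1 = crisp; information_location 0 = in the
# expert's head, 1 = fully available in the computer. Thresholds are assumed.

def design_study_suitability(task_clarity: float, information_location: float) -> str:
    """Classify a candidate problem into the regions sketched in Figure 1."""
    if information_location < 0.2:
        # Red region: little or no data is available in digital form.
        return "not enough data: effective visualization design unlikely"
    if task_clarity > 0.8 and information_location > 0.8:
        # Blue region: crisp task and computerized information.
        return "algorithm automation possible: visualization may be unnecessary"
    # White region: design study methodology is a suitable choice.
    return "design study methodology suitable"

# Example: a fuzzy analysis task whose data is largely digitized.
print(design_study_suitability(task_clarity=0.3, information_location=0.7))
```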
The axes can also associate visualization with, and differentiate it
from, other fields. While research in some subfields of HCI, such as
human factors, deals with crisply defined tasks, several other subfields,
such as computer supported cooperative work and ubiquitous comput-
ing, face similar challenges in terms of ill-defined and fuzzy tasks.
They differ from visualization, however, because they do not require
significant data analysis on the part of the target users. Conversely,
fields such as machine learning and statistics focus on data analysis,
but assume crisply defined tasks.
4 NINE-STAGE FRAMEWORK
Figure 2 shows an overview of our nine-stage framework with the
stages organized into three categories: a precondition phase that de-
scribes what must be done before starting a design study; a core phase
presenting the main steps of conducting a design study; and an analy-
sis phase depicting the analytical reasoning at the end. For each stage
we provide practical advice based on our own experience, and out-
line pitfalls that point to common mistakes. Table 1 at the end of this
section summarizes all 32 pitfalls (PF).
The general layout of the framework is linear to suggest that one
stage follows another. Certain actions rely on artifacts from earlier
stages—deploying a system is, for instance, not possible without some

Fig. 2. Nine-stage design study methodology framework classified into three top-level categories: a precondition phase (learn, winnow, cast) with personal validation, a core phase (discover, design, implement, deploy) with inward-facing validation, and an analysis phase (reflect, write) with outward-facing validation. While outlined as a linear process, the overlapping stages and gray arrows imply the iterative dynamics of this process.
kind of implementation—and it is all too common to jump forward
over stages without even considering or starting them. This forward
jumping is the first pitfall that we identify (PF-1). A typical example
of this pitfall is to start implementing a system before talking to the
domain experts, usually resulting in a tool that does not meet their
specific needs. We have reviewed many papers that have fatal flaws
due to this pitfall.
The linearity of the diagram, however, does not mean that previous
stages must be fully completed before advancing to the next. Many
of the stages often overlap and the process is highly iterative. In fact,
jumping backwards to previous stages is the common case in order
to gradually refine preliminary ideas and understanding. For exam-
ple, we inevitably find ourselves jumping backwards to refine
the abstractions while writing a design study paper. The overlapping
stages and gray arrows in Figure 2 imply these dynamics.
Validation crosscuts the framework; that is, validation is important
for every stage, but the appropriate validation is different for each. We
categorize validation following the three framework phases. In the pre-
condition phase, validation is personal: it hinges on the preparation of
the researcher for the project, including due diligence before commit-
ting to a collaboration. In the core phase, validation is inward-facing:
it emphasizes evaluating findings and artifacts with domain experts. In
the analysis phase, validation is outward-facing: it focuses on justi-
fying the results of a design study to the outside world, including the
readers and reviewers of a paper. Munzner’s nested model elaborates
further on how to choose appropriate methods at each stage [50].
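For quick reference, the phase, stage, and validation structure described above can also be written down as plain data; the sketch below is our own Python summary of Figure 2 and the validation discussion, not an artifact of the original work.

```python
# Our paraphrase of the nine-stage framework as data (not code from the paper).
NINE_STAGE_FRAMEWORK = {
    "precondition": {
        "stages": ["learn", "winnow", "cast"],
        "validation": "personal",        # preparation and due diligence
    },
    "core": {
        "stages": ["discover", "design", "implement", "deploy"],
        "validation": "inward-facing",   # evaluate findings and artifacts with domain experts
    },
    "analysis": {
        "stages": ["reflect", "write"],
        "validation": "outward-facing",  # justify results to readers and reviewers
    },
}

# The linear order is only a suggestion: stages overlap, the process iterates,
# and jumping backwards to refine earlier stages is the common case.
STAGE_ORDER = [stage for phase in NINE_STAGE_FRAMEWORK.values() for stage in phase["stages"]]
```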
4.1 Precondition Phase
The precondition stages of learn, winnow, and cast focus on prepar-
ing the visualization researcher for the work, and finding and filtering
synergistic collaborations with domain experts.
4.1.1 Learn: Visualization Literature
A crucial precondition for conducting an effective design study is a
solid knowledge of the visualization literature, including visual en-
coding and interaction techniques, design guidelines, and evaluation
methods. This visualization knowledge will inform all later stages: in
the winnow stage it guides the selection of collaborators with interest-
ing problems relevant to visualization; in the discover stage it focuses
the problem analysis and informs the data and task abstraction; in the
design stage it helps to broaden the consideration space of possible
solutions, and to select good solutions over bad ones; in the imple-
ment stage knowledge about visualization toolkits and algorithms al-
lows fast development of stable tool releases; in the deploy stage it
assists in knowing how to properly evaluate the tool in the field; in the
reflect stage, knowledge of the current state-of-the-art is crucial for
comparing and contrasting findings; and in the write stage, effective
framing of contributions relies on knowledge of previous work.
Of course, a researcher’s knowledge will gradually grow over time
and encyclopedic knowledge of the field is not a requirement before
conducting a first design study. Nevertheless, starting a design study
without enough prior knowledge of the visualization literature is a pit-
fall (PF-2). This pitfall is particularly common when researchers who
are expert in other fields make their first foray into visualization [37];
we have seen many examples of this as reviewers.
4.1.2 Winnow: Select Promising Collaborations
The goal of this stage is to identify the most promising collaborations.
We name this strategy winnowing, suggesting a lengthy process of sep-
arating the good from the bad and implying that careful selection is
necessary: not all potential collaborations are a good match. Prema-
ture commitment to a collaboration is a very common pitfall that can
result in much unprofitable time and effort (PF-3).
We suggest talking to a broad set of people in initial meetings, and
then gradually narrowing down this set to a small number of actual col-
laborations based on the considerations that we discuss in detail below.
Because this process takes considerable calendar time, it should begin
well before the intended start date of the implement stage. Initial meet-
ings last only a few hours, and thus can easily occur in parallel with
other projects. Only some of these initial meetings will lead to further
discussions, and only a fraction of these will continue with a closer
collaboration in the form of developing requirements in the discover
stage. Finally, these closer collaborations should only continue on into
the design stage if there is a clear match between the interests of the
domain experts and the visualization researcher. We recommend com-
mitting to a collaboration only after this due diligence is conducted; in
particular, decisions to seek grant funding for a collaborative project
after only a single meeting with a domain expert are often premature.
We also suggest maintaining a steady stream of initial meetings at all
times. In short, our strategy is: talk with many but stay with few, start
early, and always keep looking.
The questions to ask during the winnow stage are framed as rea-
sons to decide against, rather than for, a potential collaboration. We
choose this framing because continued investigation has a high time
cost for both parties, so the decision to pull out is best made as early as
possible. Two of our failure cases underline the cost of late decision-
making: the PowerSetViewer [54] design study lasted two years with
four researchers, and WikeVis [72] half a year with two researchers.
Both projects fell victim to several pitfalls in the winnow and cast
stages, as we describe below; if we had known what questions to con-
sider at these early stages we could have avoided much wasted effort.
The questions are categorized into practical, intellectual, and inter-
personal considerations. We use the pronouns I for the visualization
researcher, and they for the domain experts.
PRACTICAL CONSIDERATIONS: These questions can be easily
checked in initial meetings.
Data: Does real data exist, is it enough, and can I have it?
Some potential collaborators will try to initiate a project before real
data is available. They may promise to have the data “soon”, or “next

week/month/term”; these promises should be considered a red flag
for design studies (PF-4). Data gathering and generation is prone to
delays, and the over-optimistic ambitions of potential collaborators
can entice visualization researchers to move forward using inappropri-
ate “toy” or synthetic data as a stopgap until real data becomes avail-
able. Other aspects of this pitfall are that not enough of the data exists
in digital form to adequately solve the problem, or that the researcher
cannot gain access to the data.
In our failed PowerSetViewer [54] design study, for instance, real
data from collaborators did not materialize until after the design and
implement phases were already completed. While waiting for real
data, we invested major resources into developing an elegant and
highly scalable algorithm. Unfortunately, we did not realize that this
algorithm was targeted at the wrong abstraction until we tested it on
real rather than synthetic data.
Engagement: How much time do they have for the project, and how
much time do I have? How much time can I spend in their environ-
ment?
Design studies require significant time commitments from both do-
main experts and visualization researchers. Although there are ways
to reduce the demands on domain experts [76], if there is not enough
time available for activities such as problem analysis, design discus-
sions, and field evaluations, then success is unlikely (PF-5). Some of
these activities also strongly benefit when they can be conducted in situ
at the workplace of the collaborators, as we discuss with RelEx [74].
INTELLECTUAL CONSIDERATIONS: These important questions
can be hard to conclusively answer early on in a design study, but they
should be kept in mind during initial meetings. It is also useful to refer
back to these questions later when monitoring progress; if a negative
answer is discovered, it might be wise to pull out of the collaboration.
Problem: Is there an interesting visualization research question in this
problem?
This question points to three possible pitfalls. First, the researcher
might be faced with a problem that can be automated (PF-6). Second,
the problem, or its solution, may not interest the researcher (PF-7). Or
third, the problem requires engineering, not research, to solve (PF-8).
In one of our projects, we identified this latter pitfall after several
months of requirements analysis in the discover stage. We provided
the domain experts with a concise list of suggestions for an engineer-
ing solution to their problem, and both sides parted ways satisfied.
Need: Is there a real need or are existing approaches good enough?
If current approaches are sufficient then domain experts are unlikely to
go to the effort of changing their workflow to adopt a new tool, mak-
ing validation of the benefits of a proposed design difficult to acquire
(PF-9).
Task: Am I addressing a real task? How long will the need persist?
How central is the task, and to how many people?
It is risky to devote major resources to designing for a task of only pe-
ripheral relevance to the domain experts, especially if there are only a
few of them. Full validation of the design’s effectiveness will be diffi-
cult or impossible if they spend only a small fraction of their time per-
forming the task, or if the task becomes moot before the design process
is complete (PF-10). We encountered this pitfall with Constellation
when the computational linguists moved on to other research ques-
tions, away from the task the tool was designed to support, before the
implementation was complete. We were able to salvage the project by
focusing on the contributions of our work in terms of the abstractions
developed, the techniques proposed, and the lessons learned [52, 48].
We also brushed against this pitfall with MizBee when the first domain
expert finished the targeted parts of her biological analysis before the
tool was ready for use; finding a second domain expert who was just
beginning that analysis phase, however, yielded strong validation re-
sults for the design study [45]. These examples also point to how a
design study resulting in a tool aimed at a small group of domain ex-
perts can still lead to strong contributions. In this sense, the value of
design studies differs from van Wijk’s definition of the value of visu-
alization which advocates for targeting larger numbers of users [88].
INTERPERSONAL CONSIDERATIONS: Interpersonal considera-
tions, although easy to overlook, play an important role in the success
or failure of a design study (PF-11). In anthropology and ethnography,
the establishment of rapport between a researcher and study partici-
pants is seen as a core factor for successful field work [24, 63]. While
this factor is less crucial in design studies, we have found that rapport
and project success do go hand in hand.
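The winnowing questions above lend themselves to an explicit checklist. The sketch below is one hypothetical way to record answers for a candidate collaboration and flag the pitfalls associated with negative answers; the wording paraphrases the questions in this section and is not code from the paper.

```python
# Hypothetical winnowing checklist (our illustration). Each entry pairs a
# question paraphrased from this section with the pitfall a negative answer points to.
WINNOW_CHECKLIST = [
    # practical considerations: easy to check in initial meetings
    ("Does real data exist, is it enough, and can I have it?", "PF-4"),
    ("Do both sides have enough time for analysis, design, and evaluation?", "PF-5"),
    # intellectual considerations: revisit as the collaboration progresses
    ("Is there an interesting visualization research question (not automatable)?", "PF-6"),
    ("Does the problem, or its solution, interest me as a researcher?", "PF-7"),
    ("Does the problem require research rather than engineering alone?", "PF-8"),
    ("Is there a real need that existing approaches do not already meet?", "PF-9"),
    ("Is the task real, central, and persistent enough to validate against?", "PF-10"),
    # interpersonal considerations
    ("Is there rapport between the researcher and the domain experts?", "PF-11"),
]

def flag_pitfalls(answers):
    """Return the pitfalls flagged by 'no' answers; any flag is a reason to
    consider pulling out of the collaboration as early as possible."""
    return [pf for (question, pf), ok in zip(WINNOW_CHECKLIST, answers) if not ok]
```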
4.1.3 Cast: Identify Collaborator Roles
The awareness of different roles in collaborations is a common theme
in other research areas: the user-centered design literature, for in-
stance, distinguishes many user, stakeholder and researcher roles [7,
9, 38, 60], while the anthropology literature distinguishes key actors
who connect researchers with other key people and key informants
who researchers can easily learn from [6]. Informed by these ideas,
we define roles that we repeatedly observed in design studies.
There are two critical roles in a design study collaboration. The
front-line analyst is the domain expert end user doing the actual data
analysis, and is the person who will use the new visualization tool.
The gatekeeper is the person with the power to approve or block the
project, including authorizing people to spend time on the project and
the release of the data. In an academic environment, the front-line analysts
are often graduate students or postdocs, with the faculty member who
is the principal investigator of a lab serving as the gatekeeper. While it
is common to identify additional front-line analysts over the course of
a project, starting a design study before contact is established with at
least one front-line analyst and approval is obtained from the central
gatekeeper is a major pitfall (PF-12).
We distinguish roles from people; that is, a single person might hold
multiple roles at once. However, the distribution of roles to people can
be different for different design studies—expecting them to be the same
for each project is another pitfall (PF-13). After several projects where
the front-line analyst was also the gatekeeper, we were surprised by a
situation where a gatekeeper objected to central parts of the validation
in a design study paper extremely late in the publication process, de-
spite the approval from several front-line analysts [46]. The situation
was resolved to everyone’s satisfaction by anonymizing the data, but
sharper awareness of the split between these roles on our part would
have led us to consult directly with the gatekeeper much earlier.
Several additional roles are useful, but not crucial, and thus do not
need to be filled before starting a project. Connectors are people who
connect the visualization researcher to other interesting people, usu-
ally front-line analysts. Translators are people who are very good at
abstracting their domain problems into a more generic form, and relat-
ing them to larger-context domain goals. Co-authors are part of the
paper writing process; often it is not obvious until the very end of the
project which, if any, collaborators might be appropriate for this role.
We have identified one role that should be treated with great care:
fellow tool builders. Fellow tool builders often want to augment a
tool they have designed with visualization capabilities. They may not
have had direct contact with front-line analysts themselves, however,
and thus may not have correctly characterized the visualization needs.
Mistaking fellow tool builders for front-line analysts is thus a pitfall
(PF-14); it was also a major contributing factor in the PowerSetViewer
failure case [54].
At its worst, this pitfall can cascade into triggering most of the other
winnow-stage pitfalls. In one of our other failure cases, WikeVis [72],
we prematurely committed to a collaboration with a fellow tool builder
(PF-3, PF-12, PF-14). Excited about visualization, he promised to
connect us “promptly” to front-line analysts with data. When we met
the gatekeeper, however, we discovered that no real data was available
yet (PF-4), and that we would not be allowed to meet with the ex-
tremely busy front-line analysts until we had a visualization tool ready
for them to use (PF-5). We tried to rescue the project by immediately
implementing a software prototype based on the vague task descrip-
tion of the fellow tool builder and using synthetic data we generated
ourselves, skipping over our planned problem analysis (PF-1). The
resulting ineffective prototype coupled with a continued unavailability
of real data led us to pull out of the project after months of work.
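As a closing summary of this stage, the collaborator roles described above can be listed compactly; the sketch below is our paraphrase in Python and not part of the original paper. A single person may hold several roles, and the mapping of roles to people differs between projects (PF-13).

```python
# Our summary of the collaborator roles from the cast stage (paraphrase, not from the paper).
COLLABORATOR_ROLES = {
    # critical roles: establish before starting the design study (PF-12)
    "front-line analyst": "domain expert end user doing the data analysis; will use the tool",
    "gatekeeper": "has the power to approve or block the project and the release of the data",
    # useful but not crucial roles
    "connector": "links the researcher to other interesting people, usually front-line analysts",
    "translator": "abstracts domain problems into generic form and relates them to larger goals",
    "co-author": "joins the paper writing, often identified only near the end of the project",
    # handle with care: mistaking this role for a front-line analyst is PF-14
    "fellow tool builder": "wants to add visualization to an existing tool; may lack analyst contact",
}
```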
