
Suthers, D. D., Dwyer, N., Medina, R., & Vatrapu, R. (2010). A framework for conceptualizing, representing, and
analyzing distributed interaction. International Journal of Computer Supported Collaborative Learning, 5(1), 5-42.
(page numbers do not correspond to final publication)
A Framework for Conceptualizing, Representing, and Analyzing Distributed
Interaction
Daniel Suthers, Nathan Dwyer, Richard Medina, and Ravi Vatrapu

Laboratory for Interactive Learning Technologies
Dept. of Information and Computer Sciences, University of Hawai‘i at Manoa
1680 East West Road, POST 309, Honolulu, HI 96822, USA
http://lilt.ics.hawaii.edu
collaborative-representations@hawaii.edu

Center for Applied ICT
Copenhagen Business School
Howitzvej 60, 2. floor
DK-2000 Frederiksberg, Denmark
vatrapu@cbs.dk
Abstract: The relationship between interaction and learning is a central concern of the learning sciences,
and analysis of interaction has emerged as a major theme within the current literature on computer-
supported collaborative learning. The nature of technology-mediated interaction poses analytic
challenges. Interaction may be distributed across actors, space, and time, and may vary among synchronous,
quasi-synchronous, and asynchronous modes, even within one data set. Often multiple media are involved and
the data comes in a variety of formats. As a consequence, there are multiple analytic artifacts to inspect
and the interaction may not be apparent upon inspection, being distributed across these artifacts. To
address these problems as they were encountered in several studies in our own laboratory, we developed a
framework for conceptualizing and representing distributed interaction. The framework assumes an
analytic concern with uncovering or characterizing the organization of interaction in sequential records of
events. The framework includes a media independent characterization of the most fundamental unit of
interaction, which we call uptake. Uptake is present when a participant takes aspects of prior events as
having relevance for ongoing activity. Uptake can be refined into interactional relationships of
argumentation, information sharing, transactivity, and so forth for specific analytic objectives. Faced
with the myriad of ways in which uptake can manifest in practice, we represent data using graphs of
relationships between events that capture the potential ways in which one act can be contingent upon
another. These contingency graphs serve as abstract transcripts that document in one representation
interaction that is distributed across multiple media. This paper summarizes the requirements that
motivate the framework, and discusses the theoretical foundations on which it is based. It then presents
the framework and its application in detail, with examples from our work to illustrate how we have used it
to support both idiographic and nomothetic research, using qualitative and quantitative methods. The
paper concludes with a discussion of the framework’s potential role in supporting dialogue between
various analytic concerns and methods represented in CSCL.
Keywords: interaction analysis, distributed learning, uptake, contingency graphs

A Framework for Analyzing Distributed Interaction – To appear in ijCSCL 5(1), 5-42, 2010.
Introduction
Researchers, designers, and practitioners in the learning sciences and allied fields study a variety of
technology-supported settings for learning. These settings may include tightly coupled small group
collaboration, distributed cooperative activity involving several to dozens of persons, or large groups of
loosely linked individuals. Examples include asynchronous learning networks (Bourne, McMaster,
Rieger, & Campbell, 1997; Mayadas, 1997; Wegerif, 1998), knowledge building communities (Bielaczyc,
2006; Scardamalia & Bereiter, 1993), mobile and ubiquitous learning environments (Rogers & Price,
2008; Spikol & Milrad, 2008), online communities (Barab, Kling, & Gray, 2004; Renninger & Shumar,
2002), and learning in the context of “networked individualism” (Castells, 2001; Jones, Dirckinck-
Holmfeld, & Lindstrom, 2006). These settings are diverse in many ways, including the degree of coupling
between participants’ activities, varying temporal and social scales, and the supporting technologies used.
However, they all rely on interaction to enhance learning. “Interaction” is used here in a broad sense,
including direct encounters and exchanges with others and indirect associations via persistent artifacts
that lead to individual and group-level learning. The common element is how participants benefit from the
presence of others in ways mediated by technological environments.
The distributed nature of interaction in technology-mediated learning environments poses analytic
challenges. Interaction may be distributed across actors, media, space, and time. Mixtures of synchronous,
quasi-synchronous, and asynchronous interaction may be included, and relevant phenomena may take
place over varying temporal granularities. Participants may be either co-present or distributed spatially,
and often multiple media are involved (e.g., multiple interaction tools in a given environment, or multiple
devices). Furthermore, the data obtained through instrumentation comes in a variety of formats. There
may be multiple data artifacts for analysts to inspect and share, and interaction may not be immediately
visible or apparent, particularly when interaction that is distributed across media is consequentially
recorded across multiple data artifacts. Interpretation of this data requires tracing many individual paths
of activity as they traverse multiple tools as well as identifying the myriad of occasions where these paths
intersect and affect each other.
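Tracing activity across tools presupposes that per-medium logs can be consolidated into one time-ordered stream. The following is a minimal sketch of that consolidation step; the log formats and field names are illustrative assumptions, not those used by the authors:

```python
# Merge per-tool event logs into a single time-ordered stream so that
# paths of activity can be traced as they traverse multiple tools.
# Log structures and field names are invented for illustration.
import heapq

chat_log = [
    {"t": 10.0, "actor": "A", "act": "post", "content": "see my map node"},
    {"t": 14.5, "actor": "B", "act": "post", "content": "which one?"},
]
map_log = [
    {"t": 12.2, "actor": "A", "act": "create-node", "content": "hypothesis 1"},
]

def merged_stream(*logs):
    """Yield events from all media logs in timestamp order."""
    ordered = (sorted(log, key=lambda e: e["t"]) for log in logs)
    yield from heapq.merge(*ordered, key=lambda e: e["t"])

timeline = list(merged_stream(chat_log, map_log))
# The two tools' events now interleave by time, so an analyst can see
# that A's chat post precedes A's map action, which precedes B's reply.
```

This only orders events; identifying where the interleaved paths intersect and affect each other remains the analyst's interpretive work.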
Other analytic challenges are also exacerbated by technology-mediated interaction. Human action
is contingent upon its context and setting in many subtle ways. These contingencies take new forms and
may be harder to see in distributed settings. Interpreting nonverbal behavior is also a challenge. When
users of a multimedia environment manipulate and organize artifacts in ways implicitly supported by the
environment, it may be difficult to determine which manipulations are significant for meaning making.
The large data sets that can be collected in technology-mediated settings lead to tensions between the
need to examine the sequential organization of interaction within an episode and the need to scale up such
analyses to more episodes and larger scale organization. We are challenged to understand phenomena at
multiple temporal or social scales, and to understand relationships between phenomena across scales
(Lemke, 2001). See Suthers and Medina (in press) for further discussion of these analytic challenges.
We have encountered many of these challenges in our own research. This research includes a
diverse portfolio of studies of co-present and distributed interaction, via various synchronous and
asynchronous media, and at scales including dyads, small groups, and online communities. Our research
methods have included experimental studies (Suthers & Hundhausen, 2003; Suthers, Vatrapu, Medina,
Joseph, & Dwyer, 2008; Vatrapu & Suthers, 2009), activity-theoretic and narrative analysis of cases
(Suthers, Yukawa, & Harada, 2007; Yukawa, 2006), adaptations of conversation analysis (Medina &
Suthers, 2008; Medina, Suthers, & Vatrapu, 2009), and hybrid methods (Dwyer, 2007; Dwyer & Suthers,
2006). Through the diversity of our work, we have come to appreciate that the analytic challenges
outlined above are not specific to one setting or method, and we have been motivated to find a solution
that gives our work conceptual coherence rather than solutions that are specific to one type of
environment and/or type of analysis.

In order to address these challenges in a principled way, we developed the uptake analysis
framework for conceptualizing, representing, and analyzing distributed (technology-mediated) interaction.
We offer that framework in this paper in hopes that some aspects of it may also be useful to others. The
representational foundation of this framework is an abstract transcript notation—the contingency graph
that can unify data derived from various media and interactional situations and has been used to support
multiple analytic practices. The conceptual foundation of this framework includes uptake as a
fundamental building block of interaction, and the basis for construing interaction as an object of study.
Like any analytic framework, the uptake analysis framework carries theoretical assumptions. However, it
is not primarily a theory: It provides a theoretical perspective on how to look at interaction, but it does not
provide explanations or make predictions. Nor is it primarily a single method: It is a coordinated set of
concepts and representations with associated practices that support multiple methods of analyzing
distributed interaction. These distinctions are why we call it a “framework.”
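The contingency graph described above can be pictured as events (nodes) connected by directed, typed relationships. A minimal data-structure sketch follows; the specific edge labels and fields are hypothetical simplifications of the framework, not its definitive form:

```python
# Sketch of a contingency graph: events are nodes, and directed edges
# record how a later act may be contingent on an earlier one. Edge
# types such as "uptake" stand in for analyst-chosen refinements.
from dataclasses import dataclass, field

@dataclass
class Event:
    eid: str          # unique event identifier
    actor: str
    medium: str       # e.g. "chat" or "evidence-map"
    description: str

@dataclass
class ContingencyGraph:
    events: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (later, earlier, kind)

    def add_event(self, ev):
        self.events[ev.eid] = ev

    def add_contingency(self, later, earlier, kind):
        """Record that event `later` is potentially contingent on `earlier`."""
        self.edges.append((later, earlier, kind))

    def antecedents(self, eid):
        """Earlier events on which `eid` may be contingent."""
        return [(e, k) for (l, e, k) in self.edges if l == eid]

g = ContingencyGraph()
g.add_event(Event("e1", "A", "chat", "proposes hypothesis"))
g.add_event(Event("e2", "B", "evidence-map", "links evidence to hypothesis"))
g.add_contingency("e2", "e1", "uptake")
```

Because edges cross media (here, chat to evidence map), a single graph can document interaction that no individual tool's log makes visible.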
This paper begins by elaborating on our motivations and requirements in the next section. The
following section presents the conceptual, empirical, and representational foundations of the uptake
analysis framework. We then detail practical aspects of applying the framework, and provide selected
examples from our work to illustrate how it supports several types of analyses with multiple data sources.
After a summary and discussion of limitations and extensions, we conclude with a discussion of its
potential role in supporting dialogue between various analytic concerns and practices represented in
CSCL.
Motivations and Requirements
This work had its origins in our recognition of the analytic limitations of our prior work and our attempts
to reconcile the strengths and weaknesses of two methodological traditions. The first author’s earlier
research program tested hypotheses concerning “representational guidance” for collaborative learning in
experimental studies where participants’ talk and actions were coded according to categories relevant to
the hypotheses, and frequencies of these codes were compared across experimental groups (Suthers &
Hundhausen, 2003; Suthers, Hundhausen, & Girardeau, 2003; Suthers et al., 2008). While these studies
suggested that representational influences were present, the statistical analyses as they were conceived did
little to shed light on the actual collaborative processes involved and, hence, of the actual roles that the
representations played. To address this problem, we began several years of analytic work to expose the
practices of mediated collaborative learning in data from our prior experimental studies, beginning with
microanalytic approaches inspired by the work of Tim Koschmann, Gerry Stahl, and colleagues
(Koschmann et al., 2005; Koschmann, Stahl & Zemel, 2004). In an analysis undertaken in order to
understand how knowledge building was accomplished via synchronous chat and evidence mapping tools,
we applied the concept of uptake to track interaction distributed across these tools (Suthers, 2006a).
Subsequently, we began analyzing asynchronous interaction involving threaded discussion and evidence
mapping tools (Suthers, Dwyer, Vatrapu, & Medina, 2007). In conducting this work, we encountered
limitations of microanalytic methods, discussed below. In response, we developed our analytic framework
to handle the asynchronicity and multiple workspaces of our data, and with hopes of scaling up
interaction analysis to larger data sets (Suthers, Dwyer, Medina, & Vatrapu, 2007). Concurrently, we
were pursuing a separate line of work on analyzing participation in online communities through various
artifact-mediated associations (Joseph, Lid, & Suthers, 2007; Suthers, Chu, & Joseph, 2009). This work
further motivated the development of a way of thinking about mediated interaction that would inform and
unify the diverse studies that we were conducting. In this section, we discuss several recurring concerns
that arose, including addressing the respective strengths and weaknesses of statistical and micro-genetic
interaction analyses, and handling the diverse data derived from distributed settings in a manner that
supports multiple approaches to understanding the organization of interaction.

Statistical Analysis
Many empirical studies of online learning follow a paradigm in which contributions (or elements of
contributions) are annotated according to a well-specified coding scheme (e.g., De Wever, Schellens,
Valcke, & Van Keer, 2006; Rourke, Anderson, Garrison, & Archer, 2001), and then instances of codes
are counted up for statistical analysis of their distribution (e.g., across experimental conditions). Research
in this tradition is nomothetic, seeking law-like generalities, and, in particular, is typically oriented toward
hypothesis testing. This approach has significant strengths. Coding schemes support methods for
quantifying consistency (reliability) between multiple analysts. Well-defined statistical methods are
available for comparing results from multiple sources of data such as experimental conditions and
replications of studies. Also, it is straightforward to scale up statistical analysis by coding more data.
A limitation is that these practices of coding and counting for statistical analysis obscure the
sequential structure and situated methods of the interaction through which meaning is constructed
(Blumer, 1986). Coding assigns each act an isolated meaning, and, therefore, does not adequately record
the indexicality of this meaning or the contextual evidence on which the analyst relied in making a
judgment. Frequency counts obscure the sequential methods by which media affordances are used in
particular learning accomplishments, making it more difficult to map results of analysis back to design
recommendations. Another limitation is that in common practice statistical significance testing is applied
to preconceived hypotheses to be tested rather than oriented toward discovery. An analysis of interaction
might help researchers discover what actually happened that led to the statistical results—whether
statistical significance was obtained as predicted, obtained in patterns that were not predicted, or absent.
Such an analysis is only possible if the data was recorded in a form that retains its interactional structure.
Our framework is intended to support statistical analysis in two ways: by providing sequential structures
(as well as single acts) that can be coded and counted, and by recording these structures for interaction
analysis that helps make sense of statistical results.
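The coding-and-counting paradigm discussed in this section can be reduced to a few lines. The sketch below uses invented codes and conditions; its final comment makes explicit what the tally discards:

```python
# "Coding and counting" in miniature: each contribution is assigned a
# code, and code frequencies are tallied per experimental condition
# for later statistical comparison. Codes and data are invented.
from collections import Counter

coded_contributions = [
    ("map-condition",  "claim"),
    ("map-condition",  "evidence"),
    ("map-condition",  "claim"),
    ("text-condition", "claim"),
    ("text-condition", "off-task"),
]

def code_frequencies(data):
    """Return {condition: Counter of codes} for statistical comparison."""
    freqs = {}
    for condition, code in data:
        freqs.setdefault(condition, Counter())[code] += 1
    return freqs

freqs = code_frequencies(coded_contributions)
# Note what the tally discards: the order of contributions and the
# context of each act, i.e. exactly the sequential structure and
# indexicality whose loss is the limitation at issue here.
```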
Sequential Analysis
Several analytic traditions find the significance of each act in the context of the unfolding interaction.
These traditions include Conversation Analysis (Goodwin & Heritage, 1990; Sacks, Schegloff, &
Jefferson, 1974), Interaction Analysis (Jordan & Henderson, 1995), and Narrative Analysis (Hermann,
2003). Some of these traditions (especially the first two cited) draw upon the assertion that the rational
organization of social life is produced and sustained in participants' interaction (Garfinkel, 1967). A
common practice is microanalysis, in which short recordings of interaction are carefully examined to
uncover the methods by which participants accomplish their objectives. Microanalysis is becoming
increasingly important in computer-supported collaborative learning because a focus on accomplishment
through mediated action is necessary to truly understand the role of technology affordances (Stahl,
Koschmann, & Suthers, 2006). For examples applied to the analysis of learning, see Baker (2003),
Enyedy (2005), Koschmann and LeBaron (2003), Koschmann et al. (2005), Roschelle (1996), and Stahl
(2006, 2009).
Microanalysis has somewhat complementary strengths and weaknesses compared to statistical
analysis. It documents participants’ practices by attending to the sequential structure of the interaction,
producing detailed descriptions that are situated in the medium of interaction. Yet analyses are often time
consuming to produce, and are difficult to scale up. As a result, microanalysis is usually applied to only a
few selected cases, leading to questions about representativeness or “generality” (but see Lee &
Baskerville, 2003, for arguments against basing generalization solely on sampling theory). Microanalysis
is most easily and most often applied to episodes of synchronous interaction occurring in one physical or
virtual medium that can be recorded in a single inspectable artifact, such as a video recording or
replayable software log. Distributed interaction may occur in more than one place, and learning may take

place over multiple episodes, problematizing approaches that assume that a single analytic artifact
recorded in the medium of interaction is available for review and interpretation.
The family of methods loosely classified as exploratory sequential data analysis (ESDA,
Sanderson & Fisher, 1994) provide a collection of operations for transforming data logs into
representations that are successively more suitable for analytic interpretation. In Sanderson and Fisher’s
(1994) terms, the operations are chunking, commenting, coding, connecting, comparing, constraining,
converting, and computing. ESDA draws on computational support for constructing statistical and
grammatical models of recurring sequential patterns or processes (e.g., Olson, Herbsleb, & Rueter, 1994).
Because of this computational support, ESDA can be scaled up to large data sets while still attending to
the sequential structure of the data. On these points, ESDA compares favorably to the respective
limitations of microanalysis and “coding and counting.” However, like statistical analysis, computational
support risks distancing the analyst from the source data. Another limitation is that many of the modeling
approaches use a state-based representation that reduces the sequential history of interaction to the most
recently occurring event category. Reimann (2009) presents a cogent argument for basing process
analysis on an ontology of events rather than variables, and describes Petri net process models (from van
der Aalst & Weijters, 2005) that capture longer sequential patterns than state transitions. These
approaches will be discussed further at the end of the paper. Our framework is intended to support both
distributed extensions of microanalysis and ESDA approaches.
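One ESDA-style transformation that retains more sequential history than a state-transition model is to count recurring n-grams over a coded event sequence. This is a simplified sketch of the idea, not Sanderson and Fisher's actual operations or toolset:

```python
# ESDA-flavored pattern extraction: convert a coded event sequence
# into counts of recurring n-grams, retaining sequential structure
# that plain frequency counts discard. Event codes are invented.
from collections import Counter

event_sequence = ["post", "reply", "link", "post", "reply", "link", "post"]

def ngram_counts(seq, n):
    """Count contiguous subsequences of length n."""
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

bigrams = ngram_counts(event_sequence, 2)
trigrams = ngram_counts(event_sequence, 3)
# A first-order (state-based) model sees only pairwise transitions
# such as ("reply", "link"); raising n keeps longer histories, e.g.
# the recurring three-step pattern ("post", "reply", "link").
```

Larger n trades frequency for specificity, which is one computational face of the tension between scaling up and attending to sequential organization.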
Media Generality
Some analytic traditions use units of analysis and data representations that are based on the interactional
properties of the media under study. Much of the foundational work in sequential analysis of interaction
has focused on spoken interaction. The difficulty of speaking while listening and the ephemerality of
spoken utterances constrain communication in such a manner that turns (Sacks et al., 1974) and adjacency
pairs (Schegloff & Sacks, 1973) have been found to be appropriate units of interaction for analysis of
spoken data. These units of analysis are not as appropriate for interactions in media that differ in some of
their fundamental constraints (Clark & Brennan, 1991). For example, online media may support
simultaneous production and reading of contributions, or may be asynchronous, and contributions may
persist for review in either case. Consequentially, contributions may not be immediately available to other
participants or may become available in unpredictable orders, and may address earlier contributions at
any time (Garcia & Jacobs, 1999; Herring, 1999). It is not appropriate to treat computer-mediated
communication as a degenerate form of face-to-face interaction, because people use attributes of new
media to create new forms of interaction (Dwyer & Suthers, 2006; Herring, 1999). Because conceptual
coherence of a set of contributions can be decoupled from their temporal or spatial adjacency, our
framework is based on a unit of interaction that does not assume adjacency or other media-specific
properties.
Similarly, properties of distributed interaction place different demands on representations of data
and analytic structures. Because technology-mediated interaction draws on many different semiotic
resources, analysis of interactional processes must reassemble interaction from the separate records of
multiple media, while also being sensitive to the social affordances of each specific medium being
analyzed to distinguish their roles. A framework for analysis of mediated interaction must be media
agnostic—independent of the form of the data under analysis—yet media aware—able to record how
people make use of the specific affordances of media. This is required to allow analysis to speak to design
and empirically drive the creation of new, more effective media. Our framework provides a means of
gathering together distributed data into a single representation of interaction that does not make
assumptions about media properties but indexes back to the original media records.
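A record structure that is media agnostic yet media aware might look like the following sketch: the analytic fields are identical for every medium, a source field indexes back to the original media record, and medium-specific properties are carried along rather than assumed. The field names are illustrative, not the framework's actual schema:

```python
# A media-agnostic event record: analytic fields are uniform across
# media, `source` points back to the original record, and
# `affordances` preserves medium-specific detail for the analyst.
from dataclasses import dataclass

@dataclass
class MediatedEvent:
    actor: str
    timestamp: float
    source: str        # index back to the original media record
    medium: str
    affordances: dict  # medium-specific properties, consulted as needed

chat_event = MediatedEvent("A", 10.0, "chat.log#42", "chat",
                           {"persistent": True, "quasi_synchronous": True})
map_event = MediatedEvent("B", 12.5, "map.xml#node7", "evidence-map",
                          {"persistent": True, "spatial": True})
# Both events can enter the same analysis without assuming adjacency,
# turn-taking, or any other media-specific unit of interaction.
```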

References
Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.