
Beyond intelligent interfaces: Exploring, analyzing, and creating success models of cooperative problem solving

Gerhard Fischer and Brent Reeves
- 01 May 1992
- Applied Intelligence, Vol. 1, Iss. 4, pp. 311-332
TLDR
This work shows how the conceptual framework behind a given system determines crucial aspects of the system's behavior, and conducts an empirical study of a success model of cooperative problem solving between people in a large hardware store.
Abstract
Cooperative problem-solving systems are computer-based systems that augment a person's ability to create, reflect, design, decide, and reason. Our work focuses on supporting cooperative problem solving in the context of high-functionality computer systems. We show how the conceptual framework behind a given system determines crucial aspects of the system's behavior. Several systems are described that attempted to address specific shortcomings of prevailing assumptions, resulting in a new conceptual framework. To further test this resulting framework, we conducted an empirical study of a success model of cooperative problem solving between people in a large hardware store. The conceptual framework is instantiated in a number of new system-building efforts, which are described and discussed.


Applied Intelligence 1, 311-332 (1992)
© 1992 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

Beyond Intelligent Interfaces: Exploring, Analyzing, and Creating Success Models of Cooperative Problem Solving

GERHARD FISCHER & BRENT REEVES
Department of Computer Science and Institute of Cognitive Science, University of Colorado, Campus Box 430, Boulder, CO 80309

Received March 1991; Revised August 1991

Abstract. Cooperative problem-solving systems are computer-based systems that augment a person's ability to create, reflect, design, decide, and reason. Our work focuses on supporting cooperative problem solving in the context of high-functionality computer systems. We show how the conceptual framework behind a given system determines crucial aspects of the system's behavior. Several systems are described that attempted to address specific shortcomings of prevailing assumptions, resulting in a new conceptual framework. To further test this resulting framework, we conducted an empirical study of a success model of cooperative problem solving between people in a large hardware store. The conceptual framework is instantiated in a number of new system-building efforts, which are described and discussed.

Key words: Success model, knowledge-based cooperative problem solving, intelligent interfaces, empirical studies, integrating problem setting and problem solving, shared understanding, high-functionality systems.

1. Introduction

We explore conceptual frameworks, methodologies, and technologies to develop cooperative problem-solving systems and exploit the unique opportunity offered by powerful computer systems. The purpose is to augment human potential and productivity [1-3], and not to replace humans with automated systems.

Our research approach, and the structure of this paper, is illustrated in Figure 1. We first review work that has been done in cooperative problem-solving systems and discuss why they are a better conceptual framework for joint human-computer systems than intelligent interfaces. Shortcomings of these systems motivated us to look for success models [4-6] and alternative conceptual frameworks (such as situated cognition approaches [7-10]). The major portion of this paper addresses how success models help to confirm intuitions and introduce new challenges. We describe integrated, domain-oriented, knowledge-based design environments as prototypes of a second generation of cooperative problem-solving systems. From these sources, we draw lessons for a new conceptual framework for joint human-computer systems.

Fig. 1. Research approach. [Diagram relating previous work on cooperative problem-solving systems, a success model of human advice giving (McGuckin Hardware), frameworks for cooperative problem-solving systems, and a situated cognition perspective.] The basic approach taken in the research described in this paper: by analyzing previous shortcomings and looking to existing success models in domains other than computer science, and then placing the lessons learned into the larger context of situated cognition research, we are building integrated design environments, which help us to incrementally refine an evolving conceptual framework for cooperative problem-solving systems.

2. First Generation of Cooperative Problem-Solving Systems

This section describes several issues that have surfaced in research on cooperative problem-solving systems. First, an analysis is made of different conceptual frameworks for integrating user interfaces and knowledge-based systems. Next, several dimensions of cooperative problem-solving systems are discussed, followed by a brief description of earlier prototypes and how they fell short of being truly cooperative problem-solving systems. We conclude by arguing that systems need to be both usable and useful [11], leading to high-functionality systems.

2.1. Interfaces to Intelligent Systems and Intelligent Interfaces

Traditionally the Artificial Intelligence community has classified user interface research into two subareas: "interfaces to intelligent systems" and "intelligent interfaces." Although these terms have been used mostly without any effort to define them, we will use a classification effort (inspired by a model from [12]; see Figure 2) to clarify how these terms may be defined.

Intelligent interfaces can now be defined by an attempt to put intelligence into the user discourse machine. The WEST system [13] can be considered an example of an intelligent interface. The underlying problem domain (computing algebraic expressions to satisfy certain objectives) is rather simple, but the user discourse machine of WEST consists of a number of interesting components such as a user modelling component, an explanation component, and a tutoring component.

[Figure 2 diagram: User, User Discourse Machine, and Task Machine.]

Fig. 2. Intelligent interfaces vs. interfaces to intelligent systems. A simplification of Card's [12] Triple Agent Model of Human-Computer Interaction (which, in turn, was inspired by Sheridan, Fischhoff, Posner, and Pew, 1983, Fig. 4-1). Card used the original figure to illustrate the different perspectives of three agents: User, Task Machine, and User Discourse Machine. We use this simplified version to show how intelligent interfaces focus on the User Discourse Machine, whereas interfaces to intelligent systems focus on the Task Machine.

Alternatively, interfaces to intelligent systems are an attempt to put intelligence into the task machine. MYCIN [14] is an example of such a system. Although there has been an effort in MYCIN to put some intelligence into the user discourse machine (e.g., to support explanations [15]), these efforts have been modest compared to that of modelling the task.

Separating the interface from the underlying application is inadequate for many system-building efforts. We support this claim with a human analogy: a person who can communicate well but knows very little has severe limitations as a cooperative partner, just as a person who knows a lot but cannot communicate. Cooperative problem-solving systems are an attempt to avoid this separation and to increase usefulness and usability by a tight integration of interaction mechanisms with the underlying domain knowledge.

2.2. Dimensions of Cooperative Problem-Solving Systems

Our original system-building efforts were very much influenced by some of the major ideas of expert systems. The major difference between classical expert systems (such as MYCIN [14] and R1 [16]) and cooperative problem-solving systems is that the human is a much more active agent and participant in the latter. Traditional expert systems asked the user many questions and then returned an answer. In a cooperative problem-solving system, the user and the system share the problem solving and decision making. Thus different role distributions may be chosen depending on the user's knowledge, the user's goals, and the task domain. A cooperative system requires much richer communication facilities than those offered by traditional expert systems.

The following issues are important dimensions of research in cooperative problem-solving systems:

Understanding complex task domains. The interaction paradigms for dealing with complex information stores have often been based on the unfounded assumption that people using these systems approach them with a precisely described task. But in most problem-solving and information-retrieval tasks, the precise articulation of a task is the most difficult problem [17]. Users of such systems suffer from a lack of knowledge about the interdependencies between problem setting and solving, and they do not know about the tools that exist for solving these problems. Ignorant of these mappings, users are unable to develop a complete specification of what they want; therefore specifications must be constructed incrementally.

The level of cooperation between human and computer. Cooperative problem-solving systems consisting of a human and a computer can exploit the asymmetry of the two communication partners. Humans contribute what they do best (e.g., use of common sense, goal definition, decomposition into subproblems, etc.), whereas the computer should be used for what it is good for (e.g., external memory support, consistency maintenance, hiding irrelevant information, intelligent summarizing, visualization support, etc.) [18].

The impact of communication breakdowns. Effective communication depends on a collaborative effort in which advisor and client work together to detect and repair troubles that arise. In cooperative problem-solving systems, breakdowns are not as detrimental as in expert systems, because humans are part of the overall system and can step in if necessary. One can never anticipate or "design away" all of the misunderstandings and problems that might arise during the use of these systems. We need to recognize and develop system resources for dealing with the unexpected: "The problem is not that communicative trouble arises that does not arise in human-to-human communication, but rather that when these inevitable troubles do arise, there are not the same resources available for their detection and repair" [8]. A cooperative agent needs to understand the nature of open problems, the intentions of the problem solver, and the fact that goals are modified during the problem-solving process.

The role of background assumptions. We need a better understanding of the possibilities and limitations of expert systems research. We have to define the characteristics of problems that are suitable for expert systems research to generate realistic expectations. When we talk of a human expert, we mean someone whose depth of understanding serves not only to solve specific well-formulated problems, but also to put them in a larger context [9]. The nature of expertise lies not only in solving a problem or explaining the results (which some expert systems can do to some extent), but in learning incrementally and restructuring one's knowledge, in breaking rules, in determining the relevance of something, and in degrading gracefully if a problem is not within the core of the expertise. Knowledge-based systems should be built on the premise that background assumptions can never be fully articulated.

Semi-formal versus formal approaches. Semi-formal systems [19, 20] do not require the computer to interpret all information structures, but just to serve as a delivery system of information to be read and interpreted by people. Semi-formal systems can be used more extensively in cooperative systems than in expert systems, and will play a large role in the design of effective joint human-computer systems.

Humans enjoy "doing" and "deciding." Humans often enjoy the process and not just the final product: they want to take part in something. This is why they build model trains, plan their vacations, and design their own kitchens. Automation is a two-edged sword. At one extreme, it is a servant, relieving humans of the tedium of low-level operations and freeing them for higher cognitive functions: many people do not enjoy checking documents for spelling errors, and they welcome the automation provided by spelling checkers in word processors. At the other extreme, automation can reduce the status of humans to "button pushers" and can strip their work of its meaning and satisfaction. The challenge is to automate tasks that people consider tedious or uninteresting, but these change as technology changes.

2.3. Brief Discussion of Our Earlier Prototypes

Many knowledge-based systems are built based on some of the following assumptions: (1) users of these systems can fully articulate their problems in advance, (2) users will ask for help, (3) a consultation model of interaction (in which users serve mostly as data sources) is behaviorally acceptable, and (4) general purpose programming environments are sufficient for supporting cooperative problem solving [3]. We believe these assumptions are unfounded.

The assumption that users can fully articulate problems in advance has been refuted in several studies [10]. Curtis, Krasner, and Iscoe [21] observed in an empirical study of large software projects: "Even when a customized system was developed for one client, the requirements often provided a moving target for designers. During system development, the customer, as well as the developer, learned about the application domain." Many current software development methodologies (such as the waterfall model) falsely assume that problems are well defined.

The assumption that users are always capable of asking for help breaks down as soon as the system becomes very complex. Users are unable to ask about information they do not know exists.

MYCIN [14] is an example of a system that was based on the assumption that human-computer interaction is well supported by a consultation model in which the computer asks the human questions. From an engineering point of view, MYCIN had the advantage of being clear and simple: the program controlled the dialogue. But empirical studies have shown that these programs are behaviorally unacceptable [22].

General purpose tools are fundamentally limiting because the solution space represented by them is too far away from the problem space. In order to bridge the gap between general purpose tools and complex problem-solving environments, we need stable subsystems at various levels in between. Complex systems develop faster if they can build on stable subsystems [23] and if they can be based on a marketplace of developed pieces of knowledge [2].

In order to overcome some of these conceptual deficiencies, we have previously built a number of prototype systems (this brief annotated list is restricted to our own efforts; other research groups have addressed these problems as well).

HELGON: Incremental Construction of Queries by Reformulation. HELGON [24] is based on the retrieval-by-reformulation paradigm [25], which was derived from a theory of human remembering. This theory postulates that humans incrementally construct queries and naturally think about categories of things in terms of specific examples. HELGON supports the incremental description of a desired object with multiple specification techniques.
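
To illustrate the retrieval-by-reformulation idea in its simplest form, here is a hedged sketch (the catalog, attribute names, and refinement steps are invented for this example and are not HELGON's data or interface): a partial query retrieves concrete examples, and the query is reformulated after the user sees them.

```python
# A minimal sketch, assuming an invented catalog and attribute names (not HELGON's
# data or interface): partial queries retrieve examples; the query is then refined.

CATALOG = [
    {"name": "oak-table", "material": "oak",  "legs": 4, "use": "dining"},
    {"name": "pine-desk", "material": "pine", "legs": 4, "use": "office"},
    {"name": "oak-stool", "material": "oak",  "legs": 3, "use": "kitchen"},
]

def matches(item, query):
    """An item matches if it satisfies every attribute constraint in the query."""
    return all(item.get(attribute) == value for attribute, value in query.items())

def retrieve(query):
    """Return all catalog items consistent with the (possibly partial) query."""
    return [item["name"] for item in CATALOG if matches(item, query)]

query = {"material": "oak"}    # a vague initial description
print(retrieve(query))         # concrete examples come back: oak-table, oak-stool

query["use"] = "dining"        # reformulation: a constraint added after seeing examples
print(retrieve(query))         # narrowed to the oak dining table
```

The point of the paradigm is that the specification is never written down completely in advance; it emerges from the interplay between partial queries and the retrieved examples.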
LISP-CRITIC
AN!)
ACTlVIS']':
Criliqlling
Uscrs'
Work
([nd
Volllllleering
Injil/'llwlio!l.
LISP-CRITIC
[26) is a
knowledge-hased
sys-
tem
that
critiques
LISP
rrllgrams.
The
interac-
tion
is
controlled
hy
the
user,
who
selects
rarts
of
programs
and
asks
the
systems
for
heir
in
imrroving
the
code
either
for
human
comrlT-
hensibilitv
or
machine
efficiencv.
HUlllans
often
learn
by
receiving
answers
to
questions
that
they
did
not
or
could
not
pose.
The
active
help
system
ACTIVIST
127]
volunteers
information
that
wa"
not
requested.
ACTIVIST
"looks
over
the
shoulder"
of
a
lIser
working
with
an
editor.
infers
the
intended
goal
from
user
actions,
and
volunteers
editing
advice.
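
The critiquing mechanism can be pictured as a set of condition-advice rules applied to whatever the user selects. The sketch below is only a toy rendering of that idea under invented rules; it is not LISP-CRITIC's or ACTIVIST's actual knowledge base or pattern matcher.

```python
# Toy condition-advice rules for a code critic (invented; real critics such as
# LISP-CRITIC work on parsed program structure, not raw strings).

CRITIC_RULES = [
    ("if-with-nil-else",
     lambda code: code.startswith("(if ") and code.rstrip().endswith("nil)"),
     "An `if` whose else-branch is nil can usually be written more clearly with `when`."),
    ("append-single-element",
     lambda code: "(append " in code and "(list " in code,
     "Appending a one-element list builds an intermediate list; a simpler idiom may do."),
]

def critique(code_fragment):
    """Return the advice of every rule whose condition holds for the fragment."""
    return [advice for _, applies, advice in CRITIC_RULES if applies(code_fragment)]

# LISP-CRITIC-style use: the user selects a fragment and explicitly asks for a critique.
print(critique("(append xs (list x))"))

# ACTIVIST-style use: an active helper could instead call critique() after every
# editing action and volunteer the resulting advice without being asked.
```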

SYSTEMS' ASSISTANT: Information Volunteering by Users. Despite the fact that communication capabilities such as mixed-initiative dialogues [28-29] have been found to be crucial for cooperative systems, the progress in supporting them has been rather modest. SYSTEMS' ASSISTANT was an effort to support more mixed-initiative dialogues by allowing users to volunteer information. One of the major findings in building SYSTEMS' ASSISTANT was that providing a more mixed-initiative interaction style requires more elaborate underlying knowledge structures and not just a change of the interface.
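
As a rough illustration of what user-volunteered information adds to a consultation-style dialogue, here is a hedged sketch (the slot names and dialogue policy are invented; SYSTEMS' ASSISTANT itself was far richer): the system asks for what it still needs, but the user may assert facts that were never requested, and those facts immediately become part of the shared state.

```python
# Hedged sketch of a mixed-initiative exchange (names and policy invented;
# this is not SYSTEMS' ASSISTANT). Either party may take the initiative.

class MixedInitiativeDialogue:
    def __init__(self, needed_slots):
        self.needed = set(needed_slots)   # what the system wants to find out
        self.facts = {}                    # shared state built up by both parties

    def system_turn(self):
        """System initiative: ask about one still-missing slot, if any remain."""
        missing = sorted(self.needed - set(self.facts))
        return f"What {missing[0]} are you using?" if missing else "I have enough to make a suggestion."

    def user_turn(self, volunteered):
        """User initiative: accept any facts the user offers, requested or not."""
        self.facts.update(volunteered)

dialogue = MixedInitiativeDialogue(["editor", "operating system"])
print(dialogue.system_turn())                        # system asks about a missing slot
dialogue.user_turn({"editor": "Emacs",               # the user answers ...
                    "task": "renaming many files"})  # ... and volunteers an unasked fact
print(dialogue.system_turn())                        # dialogue continues from richer shared state
```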

FINANZ: Enriching Systems with Domain-Oriented Abstractions. FINANZ is a knowledge-based spreadsheet system [6]. Rather than forcing financial experts to describe their problems to programmers, who then build spreadsheet models, FINANZ builds higher-level abstractions related to the financial expert's problem domain. This allows the expert to interact with the computer system using abstractions specific to the problem domain, thereby supporting human problem-domain communication [31].

Building HELGON, LISP-CRITIC, ACTIVIST, SYSTEMS' ASSISTANT, and FINANZ deepened our understanding of iterative problem specification, information volunteering, mixed-initiative dialogues, and human problem-domain communication. Although each of these systems explored issues of importance and each made an identifiable step forward, they all fell short in supporting truly cooperative systems. The systems were isolated efforts and were built for relatively simple domains. One of the lessons learned was that cooperative problem-solving systems are very resource intensive. The next section argues why high-functionality computer systems are the foundations upon which these systems must be built.

2.4. High-Functionality Computer Systems

Computer systems should be both usable and useful [11]. For a system to be useful for a broad class of different tasks, it must offer broad functionality. Computing systems have been moving more and more toward high-functionality systems. In our own work, we have analyzed the Symbolics Lisp machine as a high-functionality computer system. To get a feel for how complexity has evolved from simple programming languages, consider a comparison of the Pascal language with the Lisp Machine programming environment shown in Figure 3.

Fig. 3. Low vs. high-functionality systems. [Table comparing the Pascal language (29 functions and procedures, 19 infix operators, no classes) with the Lisp Machine programming environment (627 Common Lisp functions out of 31,352 functions total, classes with 17,305 methods, 38 special forms, 45 CL macros).]

The more powerful systems become, the more difficult they are to use. Before users will be able to take advantage of the power of high-functionality computer systems, the cognitive costs of mastering them must be reduced. The following problems of high-functionality systems (as identified by Draper [32], Fischer [11], and Lemke [33]) must be overcome:

Users do not know about the existence of tools. Users cannot develop complete mental models of high-functionality systems. Without complete models, users are sometimes unaware of the existence of tools. A passive help system is of no assistance in these situations. Active systems and browsing tools let users explore a system, and critics [34] point out useful information.

Users do not know how to access tools. Knowing that something exists does not necessarily imply that users know how to find it.

Users do not know when to use tools. In many cases, users lack the applicability conditions for tools or components. Features of a computer system may have a sensible design rationale from the viewpoint of system engineers, but this rationale is frequently beyond the grasp of users, even those who are familiar with the basic functions of the system. Systems seem imponderable because users have to search through a large list of options and do not know how to choose among them.

Users cannot combine, adapt, and modify tools according to their specific needs. Even after having overcome all of the previous problems (i.e., a tool was found, its functioning was understood, etc.), in many cases the tool does not do exactly what the user wants. This problem requires system support to carry out modifications at a level with which the user is familiar.

One major issue that is not directly related to high-functionality systems but nevertheless plays an important part in their effectiveness is that users do not have well-formed goals and plans.

Citations
Journal ArticleDOI

A fuzzy coding approach for the analysis of long‐term ecological data

TL;DR: An unconventional procedure (fuzzy coding) for structuring biological and environmental information is presented; it uses positive scores to describe the affinity of a species for different modalities (i.e., categories) of a given variable.
Journal ArticleDOI

Can We Ever Escape from Data Overload? A Cognitive Systems Diagnosis

TL;DR: It is proposed that (a) data overload is difficult because of the context sensitivity problem – meaning lies, not in data, but in relationships of data to interests and expectations and (b) new waves of technology exacerbate data overload when they ignore or try to finesse context sensitivity.
Journal ArticleDOI

Overview of human-computer collaboration

TL;DR: The paper derives a set of fundamental issues from a definition of collaboration, introduces two major approaches to human-computer collaboration, and surveys each approach, showing how it formulates and addresses the issues.
Proceedings ArticleDOI

Supporting knowledge-base evolution with incremental formalization

TL;DR: Experiences with the domain independent Hyper-Object Substrate show that its flexibility for incrementally adding and formalizing information is useful for the rapid prototyping and modification of semi-formal information spaces.
References
Book

The Sciences of the Artificial

TL;DR: A new edition of Simon's classic work on artificial intelligence adds a chapter that sorts out current themes and tools for analyzing complexity and complex systems, taking into account important advances in cognitive psychology and the science of design, while confirming and extending Simon's basic thesis that a physical symbol system has the necessary and sufficient means for intelligent action.
Book

Case-based reasoning

TL;DR: Case-based reasoning is one of the fastest growing areas in the field of knowledge-based systems; this is the first comprehensive text on the subject, presented by a leader in the field.
Journal ArticleDOI

The psychology of everyday things

TL;DR: The author argues that even the smartest among us can feel inept as we fail to figure out which light switch or oven burner to turn on, or whether to push, pull, or slide a door.