Open Access · Journal Article · DOI

Representing Knowledge of the Visual World

William Havens and Alan Mackworth
- 01 Oct 1983
- Vol. 16, Iss: 10, pp 90-96
TLDR
Current scene analysis methodology is examined under two criteria: descriptive adequacy, the ability of a representational formalism to capture the essential visual properties of objects and the relationships among objects in the visual world, and procedural adequacy, the capability of the representation to support efficient processes of recognition and search.
Abstract: 
The central issue in artificial intelligence, the representation and use of knowledge, unifies areas as diverse as natural-language understanding, speech recognition, story understanding, planning, problem solving, and vision. This article focuses on how computational vision systems represent knowledge of the visual world. It examines current methodology under two criteria: descriptive adequacy, the ability of a representational formalism to capture the essential visual properties of objects and the relationships among objects in the visual world, and procedural adequacy, the capability of the representation to support efficient processes of recognition and search. A major theme in computational vision has been the distinction between the methodology of image analysis (or early vision) and scene analysis (or high-level vision). Briefly, image analysis can be characterized as the science of extracting from images useful descriptions of lines, regions, edges, and surface characteristics up to the level of Marr's 2½-D sketch. It is generally assumed that image analysis is domain independent and passive, that is, data driven. Scene analysis attempts to recognize visual objects and their configurations. It is viewed as domain dependent and goal driven, motivated by the necessity of identifying particular objects expected to be present in a scene. Although some may disagree, these distinctions should be seen not as a strict dichotomy but as a spectrum. Early vision exploits constraints that are usually valid in the particular visual world for which it has evolved (or been designed). Although early vision is predominantly data driven, high-level visual processes must be able to establish parameters for and control the attention of lower level processes. As we argue later, efficient scene analysis systems must combine goal-driven and data-driven recognition processes.
If that dichotomy is actually a spectrum, then establishing the exact boundary is not a research issue. In this article, we outline current scene analysis methodology (early vision is ably described elsewhere1,2) and identify a number of its deficiencies. In response to these problems, some recent systems use schema-based knowledge representations. Examples taken from one called Mapsee2 illustrate our arguments.



Mapsee2's schema-based representations support efficient recognition and search, as well as overcoming some inherent limitations of the well-known network consistency approach to scene analysis.
Representing Knowledge of the Visual World

William Havens and Alan Mackworth, University of British Columbia
Progress in high-level vision

The necessity of adequate representations for visual knowledge has been a constant theme in high-level vision research. The very early work of Roberts3 established an initial research paradigm that has persisted for 20 years. Roberts' system consisted of two programs. An image analysis program constructed a line drawing that served as input to his scene analysis program. From a gray-scale image the image analysis line-finder constructed a line drawing using spatial differentiation, clipping, and line-following techniques. The subsequent scene analysis program assumed that the visual world consisted of instances of three simple polyhedral models: a cube, a wedge, and a hexagonal prism. These primitives were allowed to be scaled, translated, and rotated. Composite objects were constructed of instances of the primitives glued together.

The scene analysis program iterated through a cycle of four processes: cue discovery, model invocation, model verification, and model elaboration.4 A variety of topological image cues used to index into the set of primitive models found candidate matches without exhaustive analysis by synthesis. The model fragment thus invoked was then subjected to metrical tests to judge its fit to the image. If a successful partial fit was obtained, the appearance of the rest of the model was predicted in the image. A good match between the prediction and the image indicated a successful model hypothesis. The predicted appearance of the model was then used to produce a new line drawing of the scene with that portion of the scene deleted from the image. The cycle repeated until the entire image had been accounted for.

0018-9162/83/1000-0090$01.00 © 1983 IEEE

Although limited, Roberts' program provided a major impetus to computational vision research,5 and his blocks world approach was the main one for the subsequent decade.
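Roberts' four-process cycle lends itself to a compact sketch. The cue names, models, and matching rule below are invented stand-ins rather than Roberts' actual catalog of polyhedral models; only the loop structure (cue discovery, invocation, verification, elaboration-and-deletion) follows the description above.

```python
# Illustrative sketch of Roberts' recognition cycle. The toy "image" is a
# set of junction cues; the models and their expected cue sets are invented.
MODELS = {
    "cube":  {"Y-junction", "arrow-junction", "L-junction"},
    "wedge": {"arrow-junction", "L-junction"},
}

def interpret(image_cues):
    """Iterate cue discovery -> model invocation -> verification ->
    elaboration until the whole image has been accounted for."""
    remaining = set(image_cues)
    scene = []
    while remaining:
        cue = next(iter(remaining))                  # 1. cue discovery
        candidates = [m for m, cues in MODELS.items()
                      if cue in cues]                # 2. model invocation
        for model in candidates:
            expected = MODELS[model]
            if expected <= remaining:                # 3. verification of fit
                scene.append(model)                  # 4. elaboration: predict
                remaining -= expected                #    the rest, delete it
                break
        else:
            remaining.discard(cue)                   # unexplained cue: drop it
    return scene

print(interpret({"Y-junction", "arrow-junction", "L-junction"}))  # ['cube']
```

The `for`/`else` fall-through stands in for a failed hypothesis: if no invoked model survives verification, the cue is abandoned rather than retried.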
The Huffman-Clowes labeling scheme, introduced in the early 1970's, was a crucial breakthrough. Its key ideas are that edge types (convex, concave, and occluding) in the scene domain can be determined from image domain evidence (junction shapes) and that an edge cannot change its type from one end to the other (a scene domain coherence rule). In the cue-model paradigm, a junction shape acts as a cue for a number of corner models in the scene domain. This local ambiguity can be globally reduced by enforcing the edge object coherence rule between adjacent corners.
Extending these ideas, Waltz6 made two contributions. He extended the descriptive adequacy of this scheme by allowing additional edge types such as cracks and shadows. He enhanced the procedural adequacy by introducing a filtering algorithm that removes local inconsistencies before constructing global solutions. He gave some experimental evidence that this could be more efficient than backtracking. The filtering algorithm has been generalized to a class of formal network consistency algorithms for problems in which a number of variables have to be instantiated in associated domains while satisfying a set of binary constraints.7
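Waltz-style filtering, in its generalized binary network consistency (arc consistency) form, can be sketched as follows. The variable names, label sets, and allowed-pair tables are toy assumptions; only the revise-and-requeue structure reflects the class of algorithms described above.

```python
def filter_network(domains, constraints):
    """Remove locally inconsistent labels: a label survives at variable x
    only if every constrained neighbour y still has some compatible label.
    `constraints` maps an ordered pair (x, y) to the set of allowed
    (label_x, label_y) combinations."""
    queue = list(constraints)
    while queue:
        x, y = queue.pop()
        allowed = constraints[(x, y)]
        supported = {lx for lx in domains[x]
                     if any((lx, ly) in allowed for ly in domains[y])}
        if supported != domains[x]:
            domains[x] = supported
            # x's domain shrank, so recheck every arc pointing at x
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains

# Toy example: an edge seen from two junctions must keep one type
# ('+' convex, '-' concave) from one end to the other.
doms = {"j1": {"+", "-"}, "j2": {"+"}}
cons = {("j1", "j2"): {("+", "+")},    # the two ends must agree; j2 is fixed
        ("j2", "j1"): {("+", "+")}}
print(filter_network(doms, cons))  # {'j1': {'+'}, 'j2': {'+'}}
```

As in Waltz's filtering, a deleted label can never appear in any global interpretation, so pruning is sound; search over the surviving labels may still be needed afterwards.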
The constraint-based approach to knowledge representation in vision has been applied to other visual domains. Mapsee8 interprets freehand geographical sketch maps. In this world, image lines or chains can be scene roads, rivers, bridges, mountains, towns, lakeshores, or seashores, while image regions can be land, lake, or ocean. The constraint approach uses these entities as the objects to be instantiated, while the models are derived from scene domain knowledge of how the objects can interact. For example, a T-junction of two image chains could be a road junction or a river junction or a river going under a bridge, etc. The models are thus n-ary constraints on the objects, and the network consistency algorithms are generalized to cope with that extension.
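The view of models as n-ary constraints can be made concrete with the T-junction example above. The allowed interpretation tuples paraphrase the text; the `prune` helper and its interface are illustrative, not Mapsee's actual machinery.

```python
# A model as an n-ary constraint: the jointly allowed interpretations for
# the objects meeting at a T-junction of two chains, as (stem, bar) pairs.
T_JUNCTION_MODEL = {
    ("road",  "road"),     # a road junction
    ("river", "river"),    # a river junction
    ("river", "road"),     # a river going under a bridge
}

def prune(stem_labels, bar_labels, model):
    """Generalized consistency step: keep only labels that take part in at
    least one allowed joint interpretation of the whole junction."""
    ok = {(s, b) for s in stem_labels for b in bar_labels if (s, b) in model}
    return {s for s, _ in ok}, {b for _, b in ok}

stem, bar = prune({"road", "river", "shore"}, {"road"}, T_JUNCTION_MODEL)
# 'shore' is pruned from the stem labels; only 'road' and 'river' survive
```

With n-ary relations, a label survives only if some complete tuple of the model supports it, which is exactly the generalization the network consistency algorithms must make.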
The complexity barrier

The computational paradigm introduced by Roberts and developed by others is now mature. It has resulted in a uniform representational framework for encoding and manipulating knowledge about the visual world. Unfortunately, network consistency has reached its inherent limitations. It does not easily scale upward to more complex domains and exhibits a number of shortcomings:
Limitation 1. The objects defined in the representation correspond only to primitive scene entities. Complex scene interpretations must be expressed solely as atomic labels for these primitive objects. Consequently, abstract high-level scene interpretations are represented only implicitly by projection onto the low-level label sets of the objects and must be reconstructed from the low-level interpretations after the recognition process has terminated. Projecting abstract interpretations onto an object's label set causes set size to grow exponentially with the complexity of the scene domain. This phenomenon was a major obstacle in Waltz's research. We conclude that objects at the lowest level of description in a system are not appropriate hooks for attaching high-level interpretations.
Limitation 2. The models are impoverished. Each model is represented as a relation over the label sets of a small number of neighboring objects in the network and, therefore, can express only local constraints on the scene. No explicit descriptions of the structural relationships appearing in the overall scene are represented. Instead, they are implicit in the relations themselves.
Limitation 3. The extension of the label set for each object has been represented explicitly. Network consistency methods proceed by deleting from the label set of each object any label that does not satisfy every model constraining that object. Any deleted label cannot be part of a global scene interpretation. Label sets are usually represented explicitly as a list of atoms, each naming a particular interpretation. Furthermore, each label must be considered independently, even though many of the labels in a given label set have a partial common interpretation. More efficient, intensional representations for interpretations are needed.
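The cost of extensional label sets, and the saving an intensional (factored) representation offers, can be illustrated numerically. The attributes and value counts below are invented purely for the demonstration; the point is that composite atomic labels multiply while factored value sets only add.

```python
from itertools import product

# Extensional: every composite interpretation is one atomic label, so
# projecting k independent attributes with n values each onto a single
# label set needs n**k explicit atoms.
attributes = {"kind":   ["road", "river"],
              "status": ["certain", "hypothesised"],
              "level":  ["primitive", "composite"]}

extensional = [" / ".join(combo) for combo in product(*attributes.values())]
print(len(extensional))      # 8 atoms for 2 * 2 * 2

# Intensional: keep one small set per attribute and combine on demand.
intensional_size = sum(len(v) for v in attributes.values())
print(intensional_size)      # 6 stored values instead of 8
```

With more attributes the gap widens rapidly: ten binary attributes cost 1024 atomic labels extensionally but only 20 factored values, which is the growth Waltz ran into.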
Limitation 4. A compiler must be constructed to compute the label sets for each type of object in the system. This compiler, given a suitable description of the semantics of the scene domain, considers exhaustively all possible scene configurations and represents those configurations in the label sets of the primitive objects.
Limitation 5. Network consistency relies on a single level of cues and models. Cues are image properties computed context-free from the input image. Once discovered, they are used to invoke appropriate models directly. Since each model depicts relationships among objects at a single level of abstraction, its semantics must be tied closely to the invoking image cue. Therefore, models for high-level abstract scene relationships are not possible. Attempts at using low-level image cues to invoke high-level models have been disappointing.2 What is needed is a hierarchy of cues and models. Low-level, context-free cues should be used to invoke low-level scene models, and high-level, context-sensitive cues, which have been computed as a result of recognition, should be used to invoke high-level models.9
Limitation 6. Procedural knowledge is absent. Network consistency employs a uniform constraint propagation control structure to guide the search process. Although its performance is often more efficient than that of parallel or automatic backtrack search,10 no procedural knowledge specific to the scene domain is used. What is needed are procedures, called methods, attached to each model that can efficiently guide the search process for instances of the model. These methods must be able to use a combination of data-driven and goal-driven techniques.
Figure 1. Input sketch for Mapsee2 shows the lower mainland of British Columbia.
Table 1. A Geo-System schema instance.

TYPE:
  Class: Geo-System
  Name: Geo-System-3
  Labelset: {Landmass, Mainland}
  Part-of: World
  Composition: {River-System, Road-System, Town, Shore, Mountain-Range}

COMPONENTS:
  World: World-1
  Road-Systems: {Road-System-1}
  River-Systems: {River-System-1, River-System-2}
  Shores: {Shore-2, Shore-8, Shore-9}
  Towns: {Town-1, Town-2, Town-3, Town-4}
  Mountain-Ranges: {Mtn-Range-1, Mtn-Range-2, Mtn-Range-3, Mtn-Range-4}
  Chains: {C3, C4, C5, C6, C7, C29, C27, C19, C30, C32, C33, C31, C25, C26, C20, C21, C34, C35, C38, C39, C41, C42, C43, C24, C28, C22, C23, C36, C37, C40, C45, C17, C8, C9, C10, C11, C12}
  Regions: {R1, R2, R3, R10, R11, R12, R13, R14, R15, R16, R17, R18, R19, R20, R22, R21, …}
Limitation 7. A correct segmentation of the input image is necessary. Erroneous cues resulting from a poor segmentation will inevitably invoke inappropriate models leading to improper or empty scene interpretations. The problem can be ameliorated by a conservative initial segmentation designed to invoke only appropriate models. The resulting partial interpretations can then be used in a cycle of perception4 to refine the parameters of the segmentation in a context-sensitive way. However, this approach appeals to a control mechanism, which is external to the basic methodology itself. Furthermore, for complex imagery, there may be no appropriate segmentation strategy that yields sufficient "correct" cues to drive the interpretation process. The disappointing performance of classification and region-growing algorithms for interpretation illustrates this phenomenon.

Of these seven shortcomings, the first four can be considered descriptive adequacy issues while the last three concern procedural adequacy.
Achieving descriptive adequacy

In response to the shortcomings discussed above, we have been exploring schemata as a suitable representation for knowledge.9 Others have also advocated this representation.11 Our experiments using schemata for visual perception have resulted in a program called Mapsee2. It automatically interprets hand-drawn sketch maps of cartographic scenes, producing a hierarchical structural description of the scene.

Figure 1 is an input sketch map of the lower mainland in the Vancouver, British Columbia, area. It depicts a large body of water, the Strait of Georgia, on the left, the mainland on the right, and three islands in Howe Sound at upper left. On the mainland, the cities of Vancouver, North Vancouver, West Vancouver, and Surrey are represented by the "squiggly" lines. The "peaks" north of the cities are the North Shore Mountains. The cities are connected by roads, which cross the Fraser River at various points and cross Burrard Inlet at the Lions' Gate Bridge. (Some features of the Vancouver area have been stylized in this map to conform with the symbols understood by the system.)

The sketch map domain was chosen for the following reasons:

(1) Sketch maps capture in a simple form fundamental problems in representing and applying visual knowledge.
(2) Techniques for understanding maps have application in interpreting real imagery. In particular, sketch maps have been used to guide the cooperative interpretation of aerial photography.12
(3) By using the same task domain, the capabilities of schema-based systems can be compared directly with the well-understood properties of network consistency methodology.
The knowledge base used in Mapsee2 is a network of schema models. Each model represents a class of objects, providing a description of the generic properties of every member of the class and specifying the possible relationships of the class with other schemata in the network. When a schema is used to represent a particular scene object, known or hypothesized to exist in a given sketch map, the class is used to generate a schema instance. For example, Table 1 shows an instance of the Geo-System class. This instance, named Geo-System-3, represents the Vancouver metropolitan area in the sketch map. The instance contains a number of defining properties, including a Labelset, indicating that the instance has been interpreted both as a Landmass and the Mainland; a set of relations with other schema classes; and a set of components, which are also schema instances.
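A schema instance of the kind shown in Table 1 might be rendered as a small data structure. The class and field names below are illustrative, not Mapsee2's actual implementation (Mapsee2 predates this notation entirely), and the component lists are abbreviated from the table.

```python
from dataclasses import dataclass, field

# A minimal rendering of a Table-1-style schema instance.
@dataclass
class SchemaInstance:
    schema_class: str                              # Class
    name: str                                      # Name
    labelset: set = field(default_factory=set)     # current interpretations
    part_of: str = ""                              # Part-of (parent instance)
    components: dict = field(default_factory=dict) # named component instances

geo3 = SchemaInstance(
    schema_class="Geo-System",
    name="Geo-System-3",
    labelset={"Landmass", "Mainland"},
    part_of="World-1",
    components={
        "Road-Systems": ["Road-System-1"],
        "River-Systems": ["River-System-1", "River-System-2"],
        "Towns": ["Town-1", "Town-2", "Town-3", "Town-4"],
    },
)
print(geo3.part_of)  # World-1
```

Note that the components are themselves names of schema instances, which is what makes the representation recursive rather than a flat label set.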
Schemata represent complex scene interpretations as specific compositions of simpler schemata, forming a composition hierarchy. A complex scene object is recognized by recursively recognizing its component parts so that the internal constraints of its schema are satisfied. Figure 2 shows the composition hierarchy used in Mapsee2. In this hierarchy, each node is a schema class and the arcs between nodes depict relations between schemata. Looking downward, the arcs represent composition, whereas in the upward direction they represent its inverse relation, Part-of. The intuitive interpretation of the hierarchy is that a cartographic World is composed of some number of geographic systems, called Geo-Systems, which are, in turn, composed of combinations of River-Systems, Road-Systems, Mountain-Ranges, Shorelines, and Towns. Each of these is, in turn, composed of simpler subschemata, finally terminating in the primitive input sketch lines, called chains, and the "empty space" regions bounded by the chains. Conversely, the hierarchy can be viewed as a part-of hierarchy, representing, for example, that Town schemata are component parts of both Geo-Systems and Road-Systems.
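Recognition by recursively recognizing component parts can be sketched over a miniature version of the Figure 2 hierarchy. The composition table below is a drastically reduced invention, and treating every listed component as required is a simplification (the text says Geo-Systems contain combinations of subsystems); the recursion, bottoming out at primitive chain/region evidence, is the point.

```python
# Miniature composition hierarchy in the spirit of Figure 2: a composite
# schema is recognized when its component schemata are recognized.
COMPOSITION = {
    "World": ["Geo-System"],
    "Geo-System": ["Road-System", "Shore"],
    "Road-System": ["Road", "Town"],
}

def recognized(schema, primitives):
    """A primitive is recognized if evidence for it was found in the
    sketch; a composite is recognized if every component recursively is."""
    parts = COMPOSITION.get(schema)
    if parts is None:                 # primitive: chain/region evidence
        return schema in primitives
    return all(recognized(p, primitives) for p in parts)

found = {"Road", "Town", "Shore"}
print(recognized("World", found))              # True
print(recognized("World", found - {"Shore"}))  # False
```

Read upward, the same table gives the Part-of relation: a Town is a part of a Road-System, which is a part of a Geo-System, which is a part of the World.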
Schemata provide an important improvement in descriptive adequacy over network consistency and related representations. To substantiate this claim, in this section we examine how schemata overcome the first four of the seven objections outlined above.
Overcoming Limitation 1. The distinction between models and objects is unnecessary. Instead, schemata are models for scene objects at various levels of abstraction. The interpretation of a scene is expressed as a structural network of instantiated schema instances instead of being projected onto atomic labels for primitive objects. The interpretation is represented explicitly and need not be reconstructed from the labels. For example, Mapsee2 produces a network description of the lower mainland, which is shown as a color-coded image in Figure 3. The description consists of seven Geo-Systems: four are Islands, one is Sea, one is Lake, and the land area bordering the frame is interpreted as the Mainland. Mapsee2 discovers two separate Road-Systems, one of which is located on the Vancouver Mainland and contains the Roads, Bridges, and Towns in that area. The second Road-System is an isolated Town and Road on the Sechelt Peninsula located in the upper left corner of the map. Finally, the Mainland has two River-Systems, one representing the Fraser River system and the other the First Narrows connection between the Sea and Burrard Inlet (which is interpreted as a Lake).

Figure 2. Mapsee2 composition hierarchy.
Figure 3. Color-coded interpretation of the lower mainland of British Columbia. Roads are red, the shore and bridges are purple, the land mass is green, rivers and bodies of water are blue, and towns and mountains are yellow.
Overcoming Limitation 2. Schema models express scene relationships at an appropriate level of abstraction. A model constrains both the possible relationships of its components lower in the composition hierarchy and of the higher schemata of which it can be a part. Thus, constraints need not be localized to small neighborhoods of the image but can express global scene relationships in a natural way. For example, in Figure 2, Road-Systems constrain their component parts to be connected Roads, Towns, and Bridges and simultaneously force the Geo-Systems, of which they are parts, to be Landmasses, as shown in the interpretation in Figure 3.
Overcoming Limitation 3. Schemata support an intensional representation for object label sets. There is no explicit representation of all possible final interpretations

Figure 4. Geo-System specialization hierarchy.

Figure 5. Sketch map superimposed on image of Ashcroft, B.C.

Figure 6. Mapsee2 River-1 interpretation.

References

- Consistency in Networks of Relations. TL;DR: The primary aim is to provide an accessible, unified framework within which to present the algorithms, including a new path consistency algorithm, and to discuss their relationships and the many applications, both realized and potential, of network consistency algorithms.
- Understanding Line Drawings of Scenes with Shadows. TL;DR: A detailed discussion of the standard approach to computer interpretation of line drawings as three-dimensional scenes, as well as some alternative approaches.
- Recovering Intrinsic Scene Characteristics from Images. TL;DR: It is suggested that an appropriate role of early visual processing is to describe a scene in terms of intrinsic characteristics -- such as range, orientation, reflectance, and incident illumination -- of the surface element visible at each point in the image.
- The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problems. TL;DR: The time complexity of several node, arc, and path consistency algorithms is analyzed, and it is proved that arc consistency is achievable in time linear in the number of binary constraints.
- Frame Representations and the Declarative/Procedural Controversy. TL;DR: This chapter presents some criteria for evaluating ideas for representation, presents a rough sketch of a particular version of a frame representation, and discusses the ways in which it can deal with the issues raised.