
Frameworks for Cooperation in Distributed Problem Solving

01 Jan 1981-Vol. 11, Iss: 1, pp 61-70
TL;DR: Two forms of cooperation in distributed problem solving are considered: task-sharing and result-sharing. For each, the basic methodology is presented and systems in which it has been used are described.
Abstract: Two forms of cooperation in distributed problem solving are considered: task-sharing and result-sharing. In the former, nodes assist each other by sharing the computational load for the execution of subtasks of the overall problem. In the latter, nodes assist each other by sharing partial results which are based on somewhat different perspectives on the overall problem. Different perspectives arise because the nodes use different knowledge sources (KS's) (e.g., syntax versus acoustics in the case of a speech-understanding system) or different data (e.g., data that is sensed at different locations in the case of a distributed sensing system). Particular attention is given to control and to internode communication for the two forms of cooperation. For each, the basic methodology is presented and systems in which it has been used are described. The two forms are then compared and the types of applications for which they are suitable are considered.

Summary

Introduction

  • Two forms of cooperation in distributed problem solving are considered: task-sharing and result-sharing.
  • In the former, nodes assist each other by sharing the computational load for the execution of subtasks of the overall problem.
  • Different perspectives arise because the nodes use different knowledge sources (KS’s) (e.g., syntax versus acoustics in the case of a speech-understanding system) or different data (e.g., data that is sensed at different locations in the case of a distributed sensing system).
  • Particular attention is given to control and to internode communication for the two forms of cooperation.
  • For each, the basic methodology is presented and systems in which it has been used are described.

I. DISTRIBUTED PROBLEM SOLVING

  • Distributed problem solving is the cooperative solution of problems by a decentralized and loosely coupled collection of knowledge sources (KS’s) (procedures, sets of rules, etc.), located in a number of distinct processor nodes.
  • Perhaps the most important distinction between distributed problem solving and distributed processing systems can be found by examining the origin of the systems and the motivations for interconnecting machines.
  • The authors’ concerns are thus with developing frameworks for cooperative behavior between willing entities, rather than frameworks for enforcing cooperation as a form of compromise between potentially incompatible entities.
  • In the former, nodes assist each other by sharing the computational load for the execution of subtasks of the overall problem.
  • For each form, the basic methodology is presented, and systems in which it has been used are described.

II. COOPERATING EXPERTS

  • In such a situation each expert may spend most of his time working alone on various subtasks that have been partitioned from the main task, pausing occasionally to interact with other members of the group.
  • If another expert (E2) believes he is capable of carrying out the task that E1 described, he informs E1 of his availability and perhaps indicates any especially relevant skills he may have.
  • An expert (E1) reports a partial result for his subproblem to his neighbors (E2 and E3) when that result may have some bearing on the processing being done by them.
  • First, communication among the members does not needlessly distract the entire group.

III. A PERSPECTIVE ON DISTRIBUTED PROBLEM SOLVING

  • In this section the authors present a model for the phases that a distributed problem solver passes through as it solves a problem (Fig. 1).
  • In the first phase, the problem is decomposed into subproblems.
  • The distinct phases, however, are more obvious in a distributed problem solver, primarily because communication and cooperation must be dealt with explicitly in this case.
  • There is also no answer synthesis phase for traffic-light control.
  • In the answer synthesis phase, the superatoms are replaced by the actual structural fragments they represent and are embedded in the generated structures.

IV. CAVEATS FOR COOPERATION

  • This obviously depends on the problem itself (e.g., there are problems for which data or computation cannot be partitioned into enough mostly independent pieces to occupy all of the processors).
  • In a speech-understanding problem, for example, knowledge is available from the speech signal itself, from the syntax of the utterances, and from the semantics of the task domain [7].
  • There are, of course, many architectures that do not lead to channel bandwidths of the same magnitude.
  • The framework for cooperation must also distribute the processing load among the nodes in order to avoid computation and communication bottlenecks.
  • It is also the case that the control of processing must itself be distributed.

V. TASK-SHARING

  • Task-sharing is a form of cooperation in which individual nodes assist each other by sharing the computational load for the execution of subtasks of the overall problem.
  • In order to maximize system concurrency, both nodes with tasks to be executed and nodes ready to execute tasks can proceed simultaneously, engaging each other in a process that resembles contract negotiation to solve the connection problem.
  • Available nodes (potential contractors) evaluate task announcements made by several managers (Fig. 4) and submit bids on those for which they are suited (Fig. 5).
  • The slots have been chosen to capture the types of information that are usefully passed between nodes to determine appropriate connections without excessive communication.
  • Negotiation offers a more powerful mechanism for connection than is available in current problem-solving systems.

VI. RESULT-SHARING

  • Result-sharing is a form of cooperation in which individual nodes assist each other by sharing partial results, based on somewhat different perspectives on the overall problem.
  • Thus the key to achieving consistent image labeling is to compare the label set of each vertex with those of its neighbors and discard inconsistent labels.
  • This process continues until unique labels have been established for all nodes (i.e., one label at each node has a large certainty measure with respect to those associated with the other labels for that node) or no further updating is possible.
  • Lesser and Erman [14] have experimented with distribution of the HEARSAY-II speech-understanding system [12].
  • It achieves cooperation both by mutual restriction and by mutual aggregation of results achieved by individual nodes (i.e., partial interpretations achieved at neighboring nodes are combined to form more complete interpretations).
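The label-discarding process summarized in the bullets above (compare each vertex's label set with its neighbors', discard inconsistent labels, repeat until labels are unique or no further updating is possible) can be sketched as a small relaxation loop. The graph, labels, and compatibility relation below are illustrative assumptions, not from the paper.

```python
# Minimal sketch of iterative label discarding (Waltz-style constraint
# filtering). All names and the toy data are illustrative.

def filter_labels(labels, neighbors, compatible):
    """labels: {vertex: set of candidate labels};
    neighbors: {vertex: [adjacent vertices]};
    compatible(a, b): whether labels a, b may coexist on adjacent vertices."""
    changed = True
    while changed:                      # repeat until no further updating
        changed = False
        for v, cands in labels.items():
            for lab in list(cands):
                # discard lab if some neighbor has no compatible label left
                if any(not any(compatible(lab, nl) for nl in labels[n])
                       for n in neighbors[v]):
                    cands.discard(lab)
                    changed = True
    return labels

# Toy example: adjacent vertices must carry different labels.
labels = {"v1": {"A", "B"}, "v2": {"B"}}
neighbors = {"v1": ["v2"], "v2": ["v1"]}
result = filter_labels(labels, neighbors, lambda a, b: a != b)
print(result)  # v1 keeps only "A"; its label "B" is inconsistent with v2
```

In a distributed setting each vertex's update would run at its own node, with label sets exchanged as the shared partial results.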

VII. TASK-SHARING AND RESULT-SHARING: A COMPARISON

  • Task-sharing is used to organize problem decomposition through formation of explicit task–subtask connections between nodes.
  • Task-sharing assumes that kernel subproblems can be solved by individual nodes working independently with minimal internode communication and that the major concern is efficient matching of nodes and tasks for high-speed problem solving.
  • Nodes independently interpret the data but then, instead of communicating their partial interpretations directly to each other, they communicate them to a fourth node (a manager in contract net terms) that has the task of sorting out the inconsistencies.
  • Just as partial results received from a remote node can suggest fruitful new lines of attack for a problem, they can also be distracting.

VIII. CONCLUSION

  • Two complementary forms of cooperation in distributed problem solving have been discussed: task-sharing and result-sharing.
  • These forms are useful for different types of problems and for different phases of distributed problem solving.
  • It assumes that subproblem solution can be achieved with minimal communication between nodes.
  • Result-sharing is useful in the subproblem solution phase when kernel subproblems cannot be solved by nodes working independently without communication with other nodes.
  • The authors eventually expect to see systems in which both forms of cooperation are used, drawing upon their individual strengths to attack problems for which neither form is sufficiently powerful by itself.


IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. SMC-11, NO. 1, JANUARY 1981 61
Frameworks for Cooperation in Distributed
Problem Solving
REID G. SMITH, MEMBER, IEEE, AND RANDALL DAVIS
Abstract — Two forms of cooperation in distributed problem solving are
considered: task-sharing and result-sharing. In the former, nodes assist each
other by sharing the computational load for the execution of subtasks of
the overall problem. In the latter, nodes assist each other by sharing partial
results which are based on somewhat different perspectives on the overall
problem. Different perspectives arise because the nodes use different
knowledge sources (KS’s) (e.g., syntax versus acoustics in the case of a
speech-understanding system) or different data (e.g., data that is sensed at
different locations in the case of a distributed sensing system). Particular
attention is given to control and to internode communication for the two
forms of cooperation. For each, the basic methodology is presented and
systems in which it has been used are described. The two forms are then
compared and the types of applications for which they are suitable are
considered.
I. DISTRIBUTED PROBLEM SOLVING
DISTRIBUTED problem solving is the cooperative
solution of problems by a decentralized and loosely
coupled collection of knowledge sources (KS’s) (proce-
dures, sets of rules, etc.), located in a number of distinct
processor nodes. The KS’s cooperate in the sense that no
one of them has sufficient information to solve the entire
problem; mutual sharing of information is necessary to
allow the group as a whole to produce an answer. By
decentralized we mean that both control and data are
logically and often geographically distributed; there is
neither global control nor global data storage. Loosely
coupled means that individual KS’s spend the great per-
centage of their time in computation rather than communi-
cation.
Distributed problem solvers offer advantages of speed,
reliability, extensibility, the ability to handle applications
with a natural spatial distribution, and the ability to tolerate
uncertain data and knowledge. Because such systems are
highly modular they also offer conceptual clarity and sim-
plicity of design.
Although much work has been done in distributed
processing, most of the applications have not addressed
issues that are important for the design of artificial intelli-
gence (AI) problem solvers. For example, the bulk of the
Manuscript received January 28, 1980; revised September 1, 1980. This
work was supported by the Department of National Defence of Canada,
Research and Development Branch, and by the Advanced Research
Projects Agency of the United States Department of Defense under Office
of Naval Research Contract N00014-75-C-0643.
R. G. Smith is with the Defence Research Establishment Atlantic,
Dartmouth, NS, Canada, B2Y 3Z7.
R. Davis is with the Artificial Intelligence Laboratory, Massachusetts
Institute of Technology, Cambridge, MA 02139.
processing is usually done at a central site with remote
processors limited to basic data collection (e.g., credit card
verification). While it is common to distribute data and
processing, it is not common to distribute control, and the
processors do not cooperate in a substantive manner.
Researchers in the area of distributed processing have
not taken problem solving as their primary focus. It has
generally been assumed, for example, that a well-defined
and a priori partitioned problem exists and that the major
concerns lie in an optimal static distribution of tasks,
methods for interconnecting processor nodes, resource al-
location, and prevention of deadlock. Complete knowledge
of timing and precedence relations between tasks has gen-
erally been assumed, and the major reason for distribution
has been taken to be load balancing (see for example [1],
[3]). Distributed problem solving, on the other hand, in-
cludes as part of its basic task the partitioning of a
problem.
Perhaps the most important distinction between dis-
tributed problem solving and distributed processing sys-
tems can be found by examining the origin of the systems
and the motivations for interconnecting machines. Dis-
tributed processing systems often have their origin in an
attempt to synthesize a network of machines capable of
carrying out a number of widely disparate tasks. Typically,
several distinct applications are envisioned, with each ap-
plication concentrated at a single node (as for example in a
three-node system intended to do payroll, order entry, and
process control). The aim is to find a way to reconcile any
conflicts and disadvantages arising from the desire to carry
out disparate tasks, in order to gain the benefits of using
multiple machines (sharing of data bases, graceful degrada-
tion, etc.). Unfortunately, the conflicts that arise are often
not simply technical (e.g., word sizes and data base for-
mats) but include sociological and political problems as
well [6]. The attempt to synthesize a number of disparate
tasks leads to a concern with issues such as access control
and protection, and results in viewing cooperation as a
form of compromise between potentially conflicting per-
spectives and desires at the level of system design and
configuration.
In distributed problem solving, on the other hand, a
single task is envisioned for the system, and the resources
to be applied have no other predefined roles to carry out.
A system is constructed de novo, and as a result the
hardware and software can be chosen with one aim in

mind: the selection that leads to the most effective environ-
ment for cooperative behavior. This also means that coop-
eration is viewed in terms of benevolent problem-solving
behavior; that is, how can systems that are perfectly willing
to accommodate one another act so as to be an effective
team? Our concerns are thus with developing frameworks
for cooperative behavior between willing entities, rather than
frameworks for enforcing cooperation as a form of com-
promise between potentially incompatible entities.
This leads us to investigate the structure of interactions
between cooperating nodes. We are primarily concerned
with the content of the information to be communicated
between nodes and the use of the information by a node
for cooperative problem solving. We are less concerned
with the specific form in which the communication is
effected.
In this paper two forms of cooperation in distributed
problem solving are considered: task-sharing and result-
sharing. In the former, nodes assist each other by sharing
the computational load for the execution of subtasks of the
overall problem. In the latter, nodes assist each other by
sharing partial results which are based on somewhat differ-
ent perspectives on the overall problem. Different perspec-
tives arise because the nodes use different KS’s (e.g., syntax
versus acoustics in the case of a speech-understanding
system) or different data (e.g., data that is sensed at
different locations in the case of a distributed sensing
system).
For each form, the basic methodology is presented, and
systems in which it has been used are described. The utility
of the two forms is examined, and their complementary
nature is discussed.
The physical architecture of the problem solver is not of
primary interest here. It is assumed to be a network of
loosely coupled, asynchronous nodes. Each node contains a
number of distinct KS’s. The nodes are interconnected so
that each node can communicate with every other node by
sending messages. No memory is shared by the nodes.
II. COOPERATING EXPERTS
A familiar metaphor for a problem solver operating in a
distributed processor is a group of human experts experi-
enced at working together, trying to complete a large task.
This metaphor has been used in several AI systems [10]-
[12], [18]. Of primary interest to us in examining the
operation of a group of human experts is the way in which
they interact to solve the overall problem, the manner in
which the workload is distributed among them, and how
results are integrated for communication outside the group.
It is assumed that no one expert is in total control of the
others, although one expert may be ultimately responsible
for communicating the solution of the top-level problem to
the customer outside the group. In such a situation each
expert may spend most of his time working alone on
various subtasks that have been partitioned from the main
task, pausing occasionally to interact with other members
of the group. These interactions generally involve requests
for assistance on subtasks or the exchange of results.
Individual experts can assist each other in at least two
ways. First, they can divide the workload among them-
selves, and each node can independently solve some sub-
problems of the overall problem. We call this task-sharing
(as in [11] and [18]). In this mode of cooperation, we are
primarily concerned with the way in which experts decide
who will perform which task. We postulate that one inter-
esting method of effecting this agreement is via negotia-
tion.
An expert (E1) may request assistance because he en-
counters a task too large to handle alone, or a task for
which he has no expertise. If the task is too large, he will
first partition it into manageable subtasks, and then at-
tempt to find other experts who have the appropriate skills
to handle the new tasks. If the original task is beyond his
expertise, he immediately attempts to find another more
appropriate expert to handle it.
In either case, if E1 knows which other experts have the
necessary expertise, he can notify them directly. If he does
not know anyone in particular who may be able to assist
him (or if the task requires no special expertise), then he
can simply describe the task to the entire group.
If another expert (E2) believes he is capable of carrying
out the task that E1 described, he informs E1 of his
availability and perhaps indicates any especially relevant
skills he may have. E1 may discover several such volunteers
and can choose from among them. The chosen volunteer
then requests additional details from E1, and the two
engage in further direct communication for the duration of
the task.
Those with tasks to be executed and those capable of
executing the tasks thus engage each other in a simple form
of negotiation to distribute the workload. They form sub-
groups dynamically as they progress towards a solution.¹
When subproblems cannot be solved by independent
experts working alone, a second form of cooperation is
appropriate. In this form, the experts periodically report to
each other the partial results they have obtained during
execution of individual tasks. We call this result-sharing
(as, for example, in [12] and [13]). It is assumed in this
mode of cooperation that problem partitioning has been
effected a priori and that individual experts work on sub-
problems that have some degree of commonality (e.g.,
interpreting data from overlapping portions of an image).
An expert (E1) reports a partial result for his subprob-
lem to his neighbors (E2 and E3) when that result may
have some bearing on the processing being done by them.
(For example, a partial result may be the best result that
E1 can derive using only the data and knowledge available
to him.) E2 and E3 attempt 1) to use E1’s result to confirm
or deny competing results for their subproblems, or 2) to
¹Subgroups offer two advantages. First, communication among the
members does not needlessly distract the entire group. This is important
because communication itself can be a major source of distraction and
difficulty in large groups (see, for example, [9]). Thus one of the major
purposes of organization is to reduce the amount of communication that
is needed. Second, the subgroup members may be able to communicate
with each other in a language that is more efficient for their purpose than
the language in use by the entire group.

Fig. 1. Phases of distributed problem solving.
aggregate partial results of their own with E1’s result to
produce a result that is relevant to E1’s subproblem as well
as their own, or 3) to use E1’s result to indicate alternative
lines of attack that they might take to solve their own
subproblems.
III. A PERSPECTIVE ON DISTRIBUTED PROBLEM SOLVING
In this section we present a model for the phases that a
distributed problem solver passes through as it solves a
problem (Fig. 1). The model offers a framework in which
to anchor the two forms of cooperation that are the primary
focus of this paper. It enables us to see the utility of the
two forms, the types of problems for which they are best
suited, and the way in which they are complementary.²
In the first phase, the problem is decomposed into
subproblems. As Fig. 1 shows, the decomposition process
may involve a hierarchy of partitionings. In addition, the
process may itself be distributed in order to avoid bot-
tlenecks. Decomposition proceeds until kernel (nondecom-
posable) subproblems are generated. Consider as an
example a simple distributed sensing system (DSS). In the
problem decomposition phase, the subproblems of detect-
ing objects in specific portions of the overall area of
interest are defined and distributed among the available
sensors.
The second phase involves solution of the kernel sub-
problems. As shown in the figure, this may necessitate
communication and cooperation among the nodes attempt-
ing to solve the individual subproblems. In the DSS exam-
ple, communication is required in the subproblem solution
phase 1) if objects can move from one area to another so
that it is helpful for sensors to inform their neighbors of
the movement of objects they have detected, or 2) if it is
difficult for a single sensor to reliably detect objects without
assistance from other sensors.
Answer synthesis is performed in the third phase; that is,
integration of subproblem results to achieve a solution to
the overall problem. Like problem decomposition, answer
synthesis may be hierarchical and distributed. In the DSS
example, the answer synthesis phase involves generation of
a map of the objects in the overall area of interest.
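The three phases as they play out in the DSS example can be sketched in a single process. The sector scheme and detection rule below are illustrative assumptions; in a real DSS each kernel subproblem would run at its own sensor node.

```python
# Single-process sketch of the three phases (decomposition, kernel-
# subproblem solution, answer synthesis) for the distributed sensing
# example. Data layout and detection logic are illustrative.

def decompose(area, n_sectors):
    """Phase 1: split the area of interest into kernel subproblems."""
    step = len(area) // n_sectors
    return [area[i * step:(i + 1) * step] for i in range(n_sectors)]

def solve_kernel(sector):
    """Phase 2: a 'sensor' detects objects in its own sector."""
    return [cell for cell in sector if cell == "object"]

def synthesize(partial_results):
    """Phase 3: integrate subproblem results into an overall map."""
    return [obj for part in partial_results for obj in part]

area = ["empty", "object", "empty", "empty", "object", "empty"]
sectors = decompose(area, 3)
answer = synthesize([solve_kernel(s) for s in sectors])
print(answer)  # → ['object', 'object']
```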
²It will be apparent that the model is also applicable to centralized
problem solving. The distinct phases, however, are more obvious in a
distributed problem solver, primarily because communication and cooper-
ation must be dealt with explicitly in this case.
For any given problem, the three phases may vary in
complexity and importance. Some phases may either be
missing or trivial. For example, in the traffic-light control
problem considered in [13], the problem decomposition
phase involves no computation. Traffic-light controllers are
simply placed at each intersection. For a DSS, the problem
decomposition is suggested directly by the spatial distribu-
tion of the problem.³
There is also no answer synthesis phase for traffic-light
control. The solution to a kernel subproblem is a smooth
flow of traffic through the associated intersection. There is
no need to synthesize an overall map of the traffic. Thus
the solution to the overall problem is the solution to the
kernel subproblems. (This is generally true of control prob-
lems; note that it does not mean, however, that communi-
cation among the nodes solving individual subproblems is
not required.)
Many search problems (like symbolic integration [16])
also involve a minimal answer synthesis phase. Once the
problem has been decomposed into kernel subproblems
and they have been solved, the only answer synthesis
required is recapitulation of the list of steps that have been
followed to obtain the solution. However, for some prob-
lems the answer synthesis phase is the dominant phase. An
example is the CONGEN program [4]. CONGEN is used
in molecular structure elucidation. It generates all struct-
ural isomers that are both consistent with a given chemical
formula and that include structural fragments known to be
present in the substance (superatoms). In the problem
decomposition phase, CONGEN generates all structures
that are consistent with the data (by first generating inter-
mediate structures, then decomposing those structures, and
so on until only structures that contain atoms or super-
atoms remain). At this point, the superatoms (like the
atoms) are considered by name and valence only. In the
answer synthesis phase, the superatoms are replaced by the
actual structural fragments they represent and are em-
bedded in the generated structures. Because embedding can
often be done in many ways, a sizable portion of the
overall computation is accounted for by this phase.
IV. CAVEATS FOR COOPERATION
One of the main aims in adopting a distributed approach
is to achieve high-speed problem solving. In order to do
this, situations in which processors “get in each other’s
way” must be avoided. This obviously depends on the
problem itself (e.g., there are problems for which data or
computation cannot be partitioned into enough mostly
independent pieces to occupy all of the processors). Perfor-
mance also depends, however, on the problem-solving archi-
tecture. It is therefore appropriate to consider frameworks
for cooperation.
³Note that the problem solver must still implement even an obvious
decomposition. Nodes must still come to an agreement as to which node
is to handle which portion of the overall area.

It is common in AI problem solvers to partition exper-
tise into domain-specific KS’s, each of which is expert in a
particular part of the overall problem. KS’s are typically
formed empirically, based on examination of different
types of knowledge that can be brought to bear on a
particular problem. In a speech-understanding problem,
for example, knowledge is available from the speech signal
itself, from the syntax of the utterances, and from the
semantics of the task domain [7]. The decision about
which KS’s are to be formed is often made in concert with
the formation of a hierarchy of levels of data abstraction
for a problem. For example, the levels used in the hierarchy
of the HEARSAY-II speech-understanding system were
parametric, segmental, phonetic, surface-phonemic, syl-
labic, lexical, phrasal, and conceptual [7]. KS’s are typically
chosen to handle data at one level of abstraction or to
bridge two levels (see, for example, [7] and [15]).
Interactions among the KS’s in a distributed processor
are more expensive than in a uniprocessor because com-
munication in a distributed architecture is generally much
slower than computation. The framework for cooperation
must therefore minimize communication among processors.
Otherwise, the available communication channels may be
saturated so that nodes are forced to remain idle while
messages are transmitted.⁴
As a simple example of the difficulty that excessive
communication can cause, consider a distributed processor
with 100 nodes that are interconnected with a single broad-
cast communication channel. Assume that each of the
nodes operates at 10⁸ instructions per second; the compu-
tation and communication load is shared equally by all
nodes, and the problem-solving architecture is such that
one bit must be communicated by each node for every ten
instructions that it executes. With these parameters it is
readily shown that the communications channel must have
a bandwidth of at least 1 Gbit/s (even ignoring the effect
of contention for the channel) [18]. With a smaller band-
width, processors are forced to stand idle waiting for
messages.
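The arithmetic behind the 1-Gbit/s figure in the text can be checked directly:

```python
# Back-of-the-envelope check of the channel-bandwidth figure: 100 nodes,
# 10**8 instructions per second each, one bit communicated per ten
# instructions executed, all sharing one broadcast channel.

nodes = 100
instr_per_sec = 10**8
bits_per_instr = 1 / 10          # one bit per ten instructions

required_bandwidth = nodes * instr_per_sec * bits_per_instr
print(required_bandwidth)        # 1e9 bits/s, i.e., 1 Gbit/s
```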
There are, of course, many architectures that do not lead
to channel bandwidths of the same magnitude. However,
the point remains that special attention must be paid to
internode communication and control in distributed prob-
lem solving if large numbers of fast processors are to be
connected.
The framework for cooperation must also distribute the
processing load among the nodes in order to avoid compu-
tation and communication bottlenecks. Otherwise, overall
performance may be limited by concentration of dispro-
portionate amounts of computation or communication at a
small number of processors. It is also the case that the
control of processing must itself be distributed. Otherwise,
requests for decisions about what to do next could in time
⁴The focus here is on speed but the other reasons for adopting a
distributed approach are also relevant — for example, reliability (i.e., the
capability to recover from the failure of individual components, with
graceful degradation in performance) and extensibility (i.e., the capability
to alter the number of processors applied to a problem).
Fig. 2. Task-sharing.
accumulate at a “controller” node faster than they could be
processed.⁵
Distribution of control does, however, lead to
difficulties in achieving globally coherent behavior since
control decisions are made by individual nodes without the
benefit of an overall view of the problem. We will illustrate
this problem in Section VII.
V. TASK-SHARING
Task-sharing is a form of cooperation in which individ-
ual nodes assist each other by sharing the computational
load for the execution of subtasks of the overall problem.
Control in systems that use task-sharing is typically goal-
directed; that is, the processing done by individual nodes is
directed to achieve subgoals whose results can be in-
tegrated to solve the overall problem.
Task-sharing is shown schematically in Fig. 2. The indi-
vidual nodes are represented by the tasks in whose execu-
tion they are engaged.
The key issue to be resolved in task-sharing is how tasks
are to be distributed among the processor nodes. There
must be a means whereby nodes with tasks to be executed
can find the most appropriate idle nodes to execute those
tasks. We call this the connection problem. Solving the
connection problem is crucial to maintaining the focus of
the problem solver. This is especially true in AI applica-
tions because they do not generally have well-defined
algorithms for their solution. The most appropriate KS to
invoke for the execution of any given task generally cannot
be identified a priori, and there are usually far too many
possibilities to try all of them.
In the remainder of this section, we consider negotiation
as a mechanism that can be used to structure node interac-
tions and solve the connection problem in task-shared
systems. Negotiation is suggested by the observation that
the connection problem can also be viewed from the per-
spective of an idle node. It must find another node with an
appropriate task that is available for execution. In order to
maximize system concurrency, both nodes with tasks to be
executed and nodes ready to execute tasks can proceed
simultaneously, engaging each other in a process that re-
sembles contract negotiation to solve the connection prob-
lem.
In the contract net approach to negotiation [18], [19], a
contract is an explicit agreement between a node that
⁵Such a node would also be a hazard to reliability since its failure
would result in total failure of the system.

Fig. 3. Sending a task announcement.
Fig. 4. Receiving task announcements.
generates a task (the manager) and a node willing to
execute the task (the contractor). The manager is responsi-
ble for monitoring the execution of a task and processing
the results of its execution. The contractor is responsible
for the actual execution of the task. Individual nodes are
not designated a priori as manager or contractor; these are
only roles, and any node can take on either role dynami-
cally during the course of problem solving. Nodes are
therefore not statically tied to a control hierarchy.
A contract is established by a process of local mutual selection based on a two-way transfer of information. In brief, the manager for a task advertises the existence of the task to other nodes with a task announcement message (Fig. 3). Available nodes (potential contractors) evaluate task announcements made by several managers (Fig. 4) and submit bids on those for which they are suited (Fig. 5). An individual manager evaluates the bids and awards contracts for execution of the task to the nodes it determines to be most appropriate (Fig. 6). Manager and contractor are thus linked by a contract (Fig. 7) and communicate privately while the contract is being executed.
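The announcement-bid-award cycle just described can be sketched in a few lines of Python. This is a minimal illustration only: the message fields and the bid rule (a crude capability count) are assumptions for the example, not the contract net protocol's actual message formats or evaluation procedures.

```python
# Illustrative sketch of one negotiation round in the style of the contract
# net. Field names and the bid rule are invented for this example.
from dataclasses import dataclass, field

@dataclass
class TaskAnnouncement:
    task_id: str
    abstraction: str      # brief task description, used to rank announcements
    eligibility: set      # capabilities a node must have in order to bid

@dataclass
class Node:
    name: str
    capabilities: set
    contracts: dict = field(default_factory=dict)

    def evaluate(self, ann):
        """Return a bid for the announced task, or None if this node is ineligible."""
        if not ann.eligibility <= self.capabilities:
            return None
        return len(self.capabilities)   # crude stand-in for a real bid rating

def negotiate(manager, announcement, idle_nodes):
    """One announcement-bid-award cycle; returns the chosen contractor (or None)."""
    bids = [(node.evaluate(announcement), node) for node in idle_nodes]
    bids = [(b, node) for b, node in bids if b is not None]
    if not bids:
        return None           # no suitable node; the manager may re-announce later
    _, contractor = max(bids, key=lambda bid: bid[0])
    manager.contracts[announcement.task_id] = contractor.name
    return contractor
```

The full protocol [19] carries more structure than this; the point here is only the two-way transfer of information: potential contractors rank announcements, while managers rank bidders.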
Fig. 5. Bidding.
Fig. 6. Making an award.
Fig. 7. Manager-contractor linkage.

The negotiation process may then recur. A contractor may further partition a task and award contracts to other nodes. It is then the manager for those contracts. This leads to the hierarchical control structure that is typical of task-sharing. Control is distributed because processing and communication are not focused at particular nodes; rather, every node is capable of accepting and assigning tasks. This avoids bottlenecks that could degrade performance. It also enhances reliability and permits graceful degradation of performance in the case of individual node failures. There are no nodes whose failure can completely block the contract negotiation process.
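The recursion by which a contractor becomes a manager for its own subtasks can be sketched as follows. The dict-based task and node records and the trivial award rule are invented for illustration; they stand in for the negotiation process and are not taken from [19].

```python
# Illustrative sketch of recursive contracting: a node that partitions its
# task becomes the manager for the resulting subcontracts, so the control
# hierarchy is built dynamically rather than fixed in advance.

def award(manager, subtask, idle_nodes):
    """Trivial stand-in for negotiation: first idle node with the needed skill."""
    for node in idle_nodes:
        if subtask["skill"] in node["skills"]:
            return node
    return None

def execute(node, task, idle_nodes):
    """Run a task; if it has subtasks, act as manager and subcontract each one."""
    if "subtasks" not in task:
        return [(node["name"], task["name"])]       # leaf task: execute locally
    assignments = []
    for sub in task["subtasks"]:
        contractor = award(node, sub, idle_nodes)   # this node is manager here
        assignments.extend(execute(contractor, sub, idle_nodes))
    return assignments
```

Because any node may play either role, no single node's failure blocks the negotiation process, in contrast to a fixed master/slave arrangement.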
We have only briefly sketched the negotiation process.
Several complications arise in its implementation, and a
number of extensions to the basic method exist that enable
efficient handling of specialized interactions where the full
complexity is not required (e.g., when simple requests for
information are made). See [19] for a full treatment.
The following is an example of negotiation for a task that involves gathering of sensed data and extraction of signal features. It is taken from a simulation of a distributed sensing system (DSS) [17]. The sensing problem is partitioned into a number of tasks. We will consider one of these tasks, the signal task, that arises during the initialization phase of DSS operation.
6. The DSS in general is an example of a system that uses both task-sharing and result-sharing. Task-sharing is used to initialize the system (the problem decomposition phase of Fig. 1).
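Negotiation over the signal task might begin with an announcement like the one sketched below. The field names and eligibility traits here are invented for illustration; they are not taken from the DSS simulation in [17].

```python
# Hypothetical announcement for the signal task (gathering sensed data and
# extracting signal features). The structure is illustrative only.

def make_signal_task_announcement(area, features):
    """Build a task announcement a DSS manager might broadcast at startup."""
    return {
        "type": "task_announcement",
        "task": "signal",
        "eligibility": {"has_sensor", "can_extract_features"},  # who may bid
        "abstraction": {"area": area, "features": features},    # for ranking
    }

def eligible(node_traits, announcement):
    """An idle sensor node checks the eligibility specification before bidding."""
    return announcement["eligibility"] <= node_traits
```

Only nodes that pass the eligibility check go on to prepare bids, which keeps announcement traffic from distracting nodes that could never perform the task.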

References

Journal article: The contract net protocol has been developed to specify problem-solving communication and control for nodes in a distributed problem solver. Task distribution is effected by a negotiation process, a discussion carried on between nodes with tasks to be executed and nodes that may be able to execute those tasks.

Journal article: The purpose of this paper is to explain why task uncertainty is related to organizational form. The cognitive-limits theory of Herbert Simon was the guiding influence; as the consequences of cognitive limits were traced through the framework, various organization design strategies were articulated. The framework provides a basis for integrating organizational interventions, such as information systems and group problem solving, which had previously been treated separately.

Book (1971): Problem Solving Methods in Artificial Intelligence.

Journal article: A framework called the contract net is presented that specifies communication and control in a distributed problem solver. Comparisons with PLANNER, CONNIVER, HEARSAY-II, and PUP6 are used to demonstrate that negotiation is a natural extension to the transfer-of-control mechanisms used in earlier problem-solving systems.

Journal article: A light sensing apparatus is described which employs a GaAsP MOS light-receiving element to which a potential is applied for creating a depletion region.