Book ChapterDOI

Distributed Search and Rescue with Robot and Sensor Teams

TL;DR: A network of distributed mobile sensor systems is developed as a solution to the emergency response problem, and it is shown how such networks can assist human users in finding an exit.
Abstract: We develop a network of distributed mobile sensor systems as a solution to the emergency response problem. The mobile sensors are inside a building and they form a connected ad-hoc network. We discuss cooperative localization algorithms for these nodes. The sensors collect temperature data and run a distributed algorithm to assemble a temperature gradient. The mobile nodes are controlled to navigate using this temperature gradient. We also discuss how such networks can assist human users to find an exit. We have conducted an experiment at a facility used to train firefighters in order to understand the environment and to test component technology. Results from experiments at this facility as well as from simulations are presented here.

Summary (2 min read)

Introduction

  • An ad-hoc network is formed by a group of mobile hosts over a wireless local network interface.
  • It is a temporary network formed without the aid of any established infrastructure or centralized administration.
  • The authors combine networking, sensing, and control to control the flow of information in search and rescue in unknown environments.

1 Motivation

  • The authors consider search and rescue applications in which heterogeneous groups of agents (humans, robots, static and mobile sensors) enter an unknown building and disperse while following gradients in temperature and concentration of toxins, and looking for immobile humans.
  • The agents deploy the static sensors and maintain line of sight visibility and communication connectivity whenever possible.
  • It is a temporary network formed without the aid of any established infrastructure or centralized administration.
  • A sensor network consists of a collection of sensors distributed over some area that form an ad-hoc network.
  • The authors combine networking, sensing, and control to control the flow of information in search and rescue in unknown environments.

2 Localization

  • Localization in dynamic environments such as those posed by search and rescue operations is difficult because no infrastructure can be presumed and because simple assumptions such as line of sight to known features cannot be guaranteed.
  • The authors have been investigating the use of low cost radio beacons that can be placed in the environment by rescue personnel or carried by robots.
  • The authors have adapted the well-known estimation techniques of Kalman filtering, Markov methods, and Monte Carlo localization to solve the problem of robot localization from range-only measurements [KS02] [SKS02].
  • The primary difficulty stems from the annular distribution of potential relative locations that results from a range only measurement.
  • Markov methods (probability grids) and Monte Carlo methods (particle filtering) have the flexibility to handle annular distributions.

3 Information Flow

  • Sensors detect information about the area they cover.
  • Users of the network (robots or people) can use this information as they traverse the network.
  • Figure 1 shows the layout of a room in which a fire was started.
  • The sensors computed multi-hop communication paths to a base station placed at the door.
  • For each interaction, the user did a rotation scan until the Flashlight was pointed in the direction computed from the sensor data.

4 Control of a Network of Robots

  • Robots augment the surveillance capabilities of a sensor network by using mobility.
  • Each robot must use partial state information derived from its sensors and from the communication network to control, in cooperation with other robots, the distribution of robots and the motion of the team.
  • The authors seek abstractions and control laws that allow partial state information to be used effectively and in a scalable manner.
  • A Mote runs for approximately one month on two AA batteries.
  • The robots can also switch between the potential fields (or temperature gradients) computed and stored in the sensor network (see Figure 2).

5 User Feedback

  • When robots or people interact with the sensor network, it becomes an extension of their capabilities, basically extending their sensory systems and ability to act over a much larger range.
  • The authors have developed software that allows an intuitive, immersive display of environments.
  • Using panoramic imaging sensors that can be carried by small robots into the heart of a damaged structure, the display can be coupled to head mounted, head tracking sensors that enable a remote operator to look around in the environment without the delay associated with mechanical pan and tilt mechanisms.
  • The data collected from imaging systems such as visible cameras and IR cameras are displayed on a wearable computer to give the responder the most accurate and current information.
  • Distributed protocols collect data from the geographically dispersed sensor network and integrate this data into a global map such as a temperature gradient that can also be displayed on a wearable computer to the user.



Distributed Search and Rescue with Robot and Sensor Teams
Aveek Das, George Kantor, Vijay Kumar, Guilherme Pereira, Ron Peterson, Daniela Rus, Sanjiv Singh, John Spletzer

Department of Computer Science, University of Pennsylvania; Department of Computer Science, Dartmouth; Robotics Institute, Carnegie Mellon University
1 Motivation
We consider search and rescue applications in which heterogeneous groups of agents (humans, robots, static and mobile sensors) enter an unknown building and disperse while following gradients in temperature and concentration of toxins, and looking for immobile humans. The agents deploy the static sensors and maintain line of sight visibility and communication connectivity whenever possible. Since different agents have different sensors and therefore different pieces of information, communication is necessary for tasking the network, sharing information, and for control.
An ad-hoc network is formed by a group of mobile hosts over a wireless local network interface. It is a temporary network formed without the aid of any established infrastructure or centralized administration. A sensor network consists of a collection of sensors distributed over some area that form an ad-hoc network. Our heterogeneous teams of agents (sensors, robots, and humans) constitute distributed adaptive sensor networks and are well-suited for tasks in extreme environments, especially when the environmental model and the task specifications are uncertain and the system has to adapt to them. Applications of this work cover search and rescue for first responders, monitoring and surveillance, and infrastructure protection.
We combine networking, sensing, and control to control the flow of information in search and rescue in unknown environments. Specifically, this research examines (1) localization in an environment with no infrastructure, such as a burning building, for both sensors and robots; (2) information flow across a sensor network that can localize on the fly, for delivering the most relevant and current information to its consumer, maintaining current maps, and automating localization; (3) using feedback from the sensor network to control the autonomous robots for placing sensors, collecting data from sensors, and locating targets; and (4) delivering the information gathered from the sensor network (integrated as a global picture) to human users. The paper will detail our technical results in these four areas and describe an integrated experiment for navigation in burning buildings.
2 Localization
Localization in dynamic environments such as those posed by search and rescue operations is difficult because no infrastructure can be presumed and because simple assumptions such as line of sight to known features cannot be guaranteed. We have been investigating the use of low-cost radio beacons that can be placed in the environment by rescue personnel or carried by robots. These radio beacons provide range to a receiver, and since their position is unknown at the start and can potentially change during operation, it is necessary to localize both the receiver and the beacons simultaneously. This problem is often known as Simultaneous Localization and Mapping (SLAM), although typically a receiver is able to measure both range and bearing to features.

Figure 1: (Left) An ad-hoc network of robots and Mote sensors deployed in a burning building at the Allegheny Fire Academy, Aug 23, 2002 (from an experimental exercise involving CMU, Dartmouth, and U. Penn). (Right) The temperature gradient graph collected using an ad-hoc network of Mote sensors.
We have adapted the well-known estimation techniques of Kalman filtering, Markov methods, and Monte Carlo localization to solve the problem of robot localization from range-only measurements [KS02] [SKS02]. All three of these methods estimate robot position as a distribution of probabilities over the space of possible robot positions. In the same work we presented an algorithm capable of solving SLAM in cases where approximate a priori estimates of robot and landmark locations exist. The primary difficulty stems from the annular distribution of potential relative locations that results from a range-only measurement. Since the distribution is highly non-Gaussian, SLAM solutions based on Kalman filtering falter. In theory, Markov methods (probability grids) and Monte Carlo methods (particle filtering) have the flexibility to handle annular distributions. Unfortunately, the scaling properties of these methods severely limit the number of landmarks that can be mapped. In truth, Markov and Monte Carlo methods have much more flexibility than we need; they can represent arbitrary distributions while we need only to deal with very well structured annular distributions. What is needed is a compact way to represent annular distributions together with a computationally efficient way of combining annular distributions with each other and with Gaussian distributions. In most cases, we expect the results of these combinations to be well approximated by mixtures of Gaussians so that standard techniques such as Kalman filtering or multiple hypothesis tracking could be applied to solve the remaining estimation problem.
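To make the annular-distribution issue concrete, the following minimal sketch (our own illustration, not code from the paper; the function names and noise parameters are assumptions) weights a cloud of position hypotheses by a single range-only beacon measurement. The likelihood depends only on the difference between predicted and measured range, so the surviving probability mass forms a ring around the beacon, which is exactly the non-Gaussian shape that plain Kalman filtering handles poorly.

```python
import numpy as np

def range_only_update(particles, weights, beacon_xy, measured_range, sigma_r=0.3):
    """One particle-filter measurement update for a single range-only beacon.

    particles : (N, 2) array of candidate robot positions (x, y) in meters
    weights   : (N,) array of particle weights (renormalized on return)
    beacon_xy : (2,) beacon position
    measured_range : scalar range reading in meters
    sigma_r   : assumed standard deviation of the range noise
    """
    predicted = np.linalg.norm(particles - beacon_xy, axis=1)
    # Likelihood depends only on |predicted - measured|, so high-weight
    # particles form an annulus of radius `measured_range` around the beacon.
    likelihood = np.exp(-0.5 * ((predicted - measured_range) / sigma_r) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.uniform(-10, 10, size=(5000, 2))          # uniform prior over a room
    weights = np.full(len(particles), 1.0 / len(particles))
    weights = range_only_update(particles, weights,
                                beacon_xy=np.array([2.0, 1.0]), measured_range=4.0)
    # The weighted mean sits near the beacon and the spread stays large: a single
    # range measurement cannot resolve the bearing, only the ring.
    mean = (weights[:, None] * particles).sum(axis=0)
    print("weighted mean:", mean, " effective sample size:", 1.0 / np.sum(weights ** 2))
```

A second beacon, or odometry applied between updates, collapses the ring to one or two compact modes, which is where the mixture-of-Gaussians approximation suggested above becomes attractive.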
3 Information Flow
Sensors detect information about the area they cover. They can store this information locally or forward it to a base station for further analysis and use. Sensors can also use communication to integrate their sensed values with the rest of the sensor landscape. Users of the network (robots or people) can use this information as they traverse the network.
We have developed distributed protocols for navigation tasks in which a distributed sensor field guides a user across the field [LdRR03]. We use the localization techniques presented above to compute environmental maps and sensor maps, such as temperature gradients. These maps are then used for human and robot navigation to a target, while avoiding danger (hot areas).
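As an illustration of how such maps can drive navigation, the sketch below (a simplified construction of our own, not the protocol of [LdRR03]; the gains, influence radius, and gridless gradient descent are assumptions) combines an attractive potential toward a goal with short-range repulsion around hot sensor readings and follows the negative gradient.

```python
import numpy as np

GOAL_GAIN = 1.0     # attraction toward the exit
HEAT_GAIN = 1.0     # repulsion from sensors reporting dangerous temperatures
HEAT_RADIUS = 1.5   # meters beyond which a hot sensor exerts no influence

def potential(p, goal, hot_spots):
    """Quadratic well at the goal plus short-range repulsion around hot sensors."""
    u = 0.5 * GOAL_GAIN * np.sum((p - goal) ** 2)
    for h in hot_spots:
        d = np.linalg.norm(p - h)
        if d < HEAT_RADIUS:
            u += 0.5 * HEAT_GAIN * (1.0 / max(d, 1e-3) - 1.0 / HEAT_RADIUS) ** 2
    return u

def descend(start, goal, hot_spots, step=0.05, iters=500):
    """Follow the negative (numerical) gradient; returns the visited waypoints."""
    p = np.asarray(start, float)
    path = [p.copy()]
    eps = 1e-4
    for _ in range(iters):
        grad = np.array([
            (potential(p + e, goal, hot_spots) - potential(p - e, goal, hot_spots)) / (2 * eps)
            for e in (np.array([eps, 0.0]), np.array([0.0, eps]))
        ])
        p = p - step * grad / (np.linalg.norm(grad) + 1e-9)   # unit step downhill
        path.append(p.copy())
        if np.linalg.norm(p - goal) < 0.1:
            break
    return path

if __name__ == "__main__":
    hot = [np.array([4.0, 2.3])]                   # a hot reading near the direct route
    route = descend(start=(8.0, 2.0), goal=(0.0, 2.0), hot_spots=hot)
    print("reached", np.round(route[-1], 2), "after", len(route), "waypoints")
```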
Figure 1 (Right) shows the layout of a room in which a fire was started. We have collected a temperature gradient map during the fire burning experiment as shown in Figure 1. The Mote sensors [1] were deployed by hand at the locations marked in the figure. The sensors computed multi-hop communication paths to a base station placed at the door. Data was sent to the base station over a period of 30 minutes.
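Such multi-hop paths can be computed with a simple hop-count gradient: the base station floods a beacon, every node remembers the smallest hop count it hears, and data is forwarded to a neighbor with a smaller count. The sketch below is a centralized simulation of that idea under assumed node positions and radio range; it is not the Motes' actual TinyOS routing code.

```python
from collections import deque
import math

def build_routes(positions, base_id, radio_range):
    """positions: {node_id: (x, y)}.  Returns {node_id: (hop_count, parent_id)}.
    Emulates flooding a hop-count beacon outward from the base station."""
    def neighbors(u):
        ux, uy = positions[u]
        return [v for v, (vx, vy) in positions.items()
                if v != u and math.hypot(ux - vx, uy - vy) <= radio_range]

    table = {base_id: (0, None)}            # the base station is 0 hops from itself
    queue = deque([base_id])
    while queue:                            # breadth-first flood = shortest hop paths
        u = queue.popleft()
        for v in neighbors(u):
            if v not in table:
                table[v] = (table[u][0] + 1, u)   # parent = first node heard closer to base
                queue.append(v)
    return table

if __name__ == "__main__":
    nodes = {0: (0, 0), 1: (3, 0), 2: (6, 0), 3: (6, 3), 4: (9, 3)}   # hypothetical layout (m)
    routes = build_routes(nodes, base_id=0, radio_range=3.5)
    for n, (hops, parent) in sorted(routes.items()):
        print(f"node {n}: {hops} hop(s) to base via {parent}")
```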
We used the structure of the data we collected during the fire burning exercise to develop a navigation guidance algorithm designed to guide a user to the door in a hop-by-hop fashion. We have deployed 12 Mote sensors along corridors in our building and guided a human user out of the building. Using an interactive device that can transmit directional feedback, called a Flashlight [PR02], a human user was directed across the field. For each interaction, the user did a rotation scan until the Flashlight was pointed in the direction computed from the sensor data. The user then walked in that direction to the next sensor. Each time, we recorded the correct direction and the direction detected by the Flashlight.
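A minimal version of the guidance step (our own reconstruction of the idea, not the code of [PR02]; the node layout and alignment tolerance are assumptions) is shown below: the node nearest the user looks up its next hop toward the exit, converts the displacement to that neighbor into a bearing, and the Flashlight is considered aligned once the user's heading falls within a small tolerance of that bearing during the rotation scan.

```python
import math

def bearing_to(src, dst):
    """Bearing (degrees, 0..360) of dst as seen from src."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def guidance_direction(user_xy, positions, next_hop):
    """positions: {node_id: (x, y)}; next_hop: {node_id: id of next node toward exit}.
    Returns (nearest node, bearing the user should walk)."""
    nearest = min(positions, key=lambda n: math.dist(user_xy, positions[n]))
    return nearest, bearing_to(positions[nearest], positions[next_hop[nearest]])

def flashlight_aligned(user_heading_deg, target_bearing_deg, tolerance_deg=10.0):
    """True when the handheld device points (within tolerance) along the guidance bearing."""
    err = (user_heading_deg - target_bearing_deg + 180.0) % 360.0 - 180.0
    return abs(err) <= tolerance_deg

if __name__ == "__main__":
    positions = {1: (0.0, 0.0), 2: (5.0, 0.0), 3: (10.0, 0.0)}   # hypothetical corridor nodes
    next_hop = {3: 2, 2: 1, 1: 1}                                # node 1 sits at the exit door
    node, bearing = guidance_direction((9.0, 1.0), positions, next_hop)
    print(f"nearest node {node}, walk toward {bearing:.0f} deg")
    # During the rotation scan the device signals once the heading matches:
    print([flashlight_aligned(h, bearing) for h in (0.0, 90.0, 175.0, 181.0)])
```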
4 Control of a Network of Robots
Robots augment the surveillance capabilities of a sensor network by using mobility. Each robot must use partial state information derived from its sensors and from the communication network to control, in cooperation with other robots, the distribution of robots and the motion of the team. We treat this as a problem of formation control where the motion of the team is modeled as an element of a Lie group, while the shape of the formation is a point in shape space. We seek abstractions and control laws that allow partial state information to be used effectively and in a scalable manner.
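The following small sketch is illustrative only (the paper does not spell out this decomposition): it splits a planar team configuration into a group element (the pose of a team frame in SE(2), taken here as the centroid plus a reference heading) and a shape (the robot coordinates expressed in that frame). A formation controller can then act on the two parts separately, since the shape is invariant under rigid motions of the whole team.

```python
import numpy as np

def team_decomposition(positions, team_heading):
    """positions: (N, 2) robot positions; team_heading: team orientation in radians.
    Returns (g, shape): g = (x, y, theta), the pose of the team frame in SE(2),
    and shape = the positions rewritten in that frame."""
    centroid = positions.mean(axis=0)
    c, s = np.cos(team_heading), np.sin(team_heading)
    R = np.array([[c, -s], [s, c]])
    shape = (positions - centroid) @ R          # world -> team frame (R^T on column vectors)
    return (centroid[0], centroid[1], team_heading), shape

def team_composition(g, shape):
    """Inverse map: place the shape back into the world at group element g."""
    x, y, theta = g
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return shape @ R.T + np.array([x, y])

if __name__ == "__main__":
    world = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # a three-robot triangle
    g, shape = team_decomposition(world, team_heading=np.pi / 4)
    # Moving the whole team (changing g) leaves the shape untouched:
    moved = team_composition((3.0, -2.0, np.pi / 2), shape)
    _, shape_after = team_decomposition(moved, team_heading=np.pi / 2)
    print(np.allclose(shape, shape_after))                   # True
```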
Our platforms are car-like robots equipped with omnidirectional cameras as their primary sensors. The communication among the robots relies on IEEE 802.11 networking. By using information from its camera system, each robot is only able to estimate its distance and bearing to its teammates. However, if two robots exchange their bearings to each other, they are also able to estimate their relative orientations [SDF+01]. We use this idea to combine the information of a group of two or more robots in order to improve the knowledge of the group about their relative positions.
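The geometric core of that idea fits in a few lines. In the hedged sketch below (our notation, not that of [SDF+01]), each robot measures the bearing of the other in its own body frame; exchanging the two bearings fixes the relative heading, and the measured distance then fixes the relative position.

```python
import math

def relative_pose(range_ab, bearing_ab, bearing_ba):
    """Pose (x, y, theta) of robot B expressed in robot A's body frame.

    range_ab   : measured distance between the two robots
    bearing_ab : bearing of B measured by A, in A's frame (radians)
    bearing_ba : bearing of A measured by B, in B's frame (radians)
    """
    # The B->A direction is (bearing_ab + pi) in A's frame and bearing_ba in B's
    # frame; the mismatch between the two is B's heading relative to A.
    theta = (bearing_ab + math.pi - bearing_ba + math.pi) % (2 * math.pi) - math.pi
    return range_ab * math.cos(bearing_ab), range_ab * math.sin(bearing_ab), theta

if __name__ == "__main__":
    # Quick self-check: A at the origin facing +x, B at (2, 1) facing 90 degrees.
    xb, yb, hb = 2.0, 1.0, math.pi / 2
    bearing_ab = math.atan2(yb, xb)                  # what A's camera would report
    bearing_ba = math.atan2(-yb, -xb) - hb           # what B's camera would report
    x, y, theta = relative_pose(math.hypot(xb, yb), bearing_ab, bearing_ba)
    print(round(x, 3), round(y, 3), round(theta, 3))  # 2.0 1.0 1.571
```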
We have developed control protocols for using such a team of robots in connection with a sensor network to explore a known building. We assume that a network of Mote sensors previously deployed in the environment guides the robots towards the source of heat. The robots can modify their trajectories and still find the building exit. The robots can also switch between the potential fields (or temperature gradients) computed and stored in the sensor network (see Figure 2). The first switch occurs automatically when the first robot encounters a Mote sensor at a given location. The robots move toward the fire and stop at a safer distance (given by the temperature gradient). They stay there until they are asked to evacuate the building, at which point they use the original potential field to find the exit.

Figure 2: Three robots switching motion plans in real time in order to get information from the hottest spot of the building. In (b) a gradient of temperature is obtained from a network of Mote sensors distributed on the ground. (Both panels plot the room in x (cm) versus y (cm), with the FIRE and EXIT locations marked.)

[1] Each Mote sensor (http://today.CS.Berkeley.EDU/tos/) consists of an Atmel ATmega128 microcontroller, a 916 MHz RF transceiver, a UART, and a 4 Mbit serial flash. A Mote runs for approximately one month on two AA batteries. It includes light, sound, and temperature sensors, but other types of sensors may be added. Each Mote runs the TinyOS operating system.
5 User Feedback
When robots or people interact with the sensor network, it becomes an extension of their capabilities, basically extending their sensory systems and ability to act over a much larger range. We have developed software that allows an intuitive, immersive display of environments. Using panoramic imaging sensors that can be carried by small robots into the heart of a damaged structure, the display can be coupled to head mounted, head tracking sensors that enable a remote operator to look around in the environment without the delay associated with mechanical pan and tilt mechanisms.
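As an illustration of why a head-tracked display avoids pan-and-tilt latency, the sketch below (an assumed rendering step, not the authors' software) slices a virtual camera window out of a 360-degree cylindrical panorama purely in software, so updating the view for a new head yaw is an array crop rather than a mechanical motion.

```python
import numpy as np

def view_from_panorama(panorama, yaw_deg, fov_deg=90.0):
    """Return the sub-image of a cylindrical 360-degree panorama centered on yaw_deg.

    panorama : (H, W, 3) image whose columns span 0..360 degrees of azimuth
    yaw_deg  : current head yaw reported by the head tracker
    fov_deg  : horizontal field of view of the rendered window
    """
    h, w, _ = panorama.shape
    cols = int(round(w * fov_deg / 360.0))
    center = int(round((yaw_deg % 360.0) / 360.0 * w))
    # np.take with mode="wrap" handles views that straddle the 0/360 seam.
    idx = np.arange(center - cols // 2, center + cols // 2)
    return np.take(panorama, idx, axis=1, mode="wrap")

if __name__ == "__main__":
    fake_pano = np.random.randint(0, 255, size=(480, 3600, 3), dtype=np.uint8)
    for yaw in (0.0, 90.0, 359.0):            # re-rendering is instant for any head pose
        view = view_from_panorama(fake_pano, yaw)
        print(yaw, view.shape)                # each view is 480 x 900 pixels
```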
The data collected from imaging systems such as visible cameras and IR cameras are displayed on a wearable computer to give the responder the most accurate and current information. Distributed protocols collect data from the geographically dispersed sensor network and integrate this data into a global map, such as a temperature gradient, that can also be displayed on a wearable computer to the user.
References

[KS02] G. Kantor and S. Singh. Preliminary results in range only localization and mapping. In IEEE Intl. Conf. on Robotics and Automation, pages 1819–1825, 2002.

[LdRR03] Q. Li, M. de Rosa, and D. Rus. Distributed algorithms for guiding navigation across a sensor net. Submitted to MobiHoc 2003, 2003.

[PR02] R. Peterson and D. Rus. Interacting with a sensor network. In Proc. of Australian Conf. on Robotics and Automation, 2002.

[SDF+01] J. Spletzer, A. K. Das, R. Fierro, C. J. Taylor, V. Kumar, and J. P. Ostrowski. Cooperative localization and control for multi-robot manipulation. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, 2001.

[SKS02] S. Singh, G. Kantor, and D. Strelow. Recent results in extensions to simultaneous localization and mapping. In Proc. of International Symposium of Experimental Robotics, 2002.
Citations
Journal ArticleDOI
TL;DR: This article reviews research activities in wireless sensor networks (WSN) and surveys CPS platforms and systems that have been developed recently, including health care, navigation, rescue, intelligent transportation, social networking, and gaming applications.

323 citations


Cites background from "Distributed Search and Rescue with ..."

  • ...For rescuers, infrared, smoke, camera sensors, and life detectors may be needed [120,121]....


Journal ArticleDOI
TL;DR: The current research on the swarm robotic algorithms are presented in detail, including cooperative control mechanisms in swarm robotics for flocking, navigating and searching applications.

282 citations


Cites background from "Distributed Search and Rescue with ..."

  • ...searching for the targets [28], detecting the odor sources [29], locating the ore veins in wild field [30], foraging, rescuing the victims in disaster areas [31] and etc....


  • ...[31] Kantor G, Singh S, Ronald Peterson, Rus D, Das A, Kumar V, et al....


Journal ArticleDOI
TL;DR: The component technologies required to deploy a networked-robot system that can augment human firefighters and first responders, significantly enhancing their firefighting capabilities are described.
Abstract: The need to collect, integrate, and communicate information effectively in emergency response scenarios exceeds the state of the art in information technology. This emergency response problem provides an interesting and important test bed for studying networks of distributed mobile robots and sensors. Here, we describe the component technologies required to deploy a networked-robot system that can augment human firefighters and first responders, significantly enhancing their firefighting capabilities. In a burning building at a firefighting training facility, we deployed a network of stationary Mote sensors, mobile robots with cameras, and stationary radio tags to test their ability to guide firefighters to targets and warn them of potential dangers. Our long-term vision is a physical network that can sense, move, compute, and reason, letting network users (firefighters and first responders) Google for physical information - that is, information about the location and properties of physical objects in the real world.

258 citations

Proceedings ArticleDOI
01 Apr 2007
TL;DR: A multi-robot search algorithm inspired by particle swarm optimization is presented; the particle swarm optimization algorithm is also modified to mimic the multi-robot search process, allowing the effects of changing aspects and parameters of the system to be modeled at an abstracted level.
Abstract: Within the field of multi-robot systems, multi-robot search is one area which is currently receiving a lot of research attention. One major challenge within this area is to design effective algorithms that allow a team of robots to work together to find their targets. Techniques have been adopted for multi-robot search from the particle swarm optimization algorithm, which uses a virtual multi-agent search to find optima in a multi-dimensional function space. We present here a multi-search algorithm inspired by particle swarm optimization. Additionally, we exploit this inspiration by modifying the particle swarm optimization algorithm to mimic the multi-robot search process, thereby allowing us to model at an abstracted level the effects of changing aspects and parameters of the system such as number of robots and communication range

233 citations


Cites background from "Distributed Search and Rescue with ..."

  • ...Examples include locating mines for de-mining [1], [8], finding victims in a disaster area [10], and planeta ry exploration [13]....


Journal ArticleDOI
TL;DR: The systems being considered are a special instance of real-time cyber-physical-human systems that have become a crucial component of all large scale physical infrastructures such as buildings, campuses, sports and entertainment venues, and transportation hubs.
Abstract: This paper surveys recent research on the use of sensor networks, communications and computer systems to enhance the human outcome of emergency situations. Areas covered include sensing, communication with evacuees and emergency personnel, path finding algorithms for safe evacuation, simulation and prediction, and decision tools. The systems being considered are a special instance of real-time cyber-physical-human systems that have become a crucial component of all large scale physical infrastructures such as buildings, campuses, sports and entertainment venues, and transportation hubs.

146 citations


Cites background from "Distributed Search and Rescue with ..."

  • ...[64] describes a navigation system, termed robot-and-sensor team, to control moving robots along safe paths....

    [...]

  • ...To connect with civilians in an emergency, [48,49] form a communication backbone, while [64,66] uses robots to search for and guide victims....

    [...]

References
Proceedings ArticleDOI
14 Sep 2003
TL;DR: A protocol that combines the artificial potential field of the sensors with the goal location for the moving object guides the object incrementally across the network to the goal, while maintaining the safest distance to the danger areas.
Abstract: We develop distributed algorithms for self-organizing sensor networks that respond to directing a target through a region. The sensor network models the danger levels sensed across its area and has the ability to adapt to changes. It represents the dangerous areas as obstacles. A protocol that combines the artificial potential field of the sensors with the goal location for the moving object guides the object incrementally across the network to the goal, while maintaining the safest distance to the danger areas. We give the analysis of the protocol and report on hardware experiments using a physical sensor network consisting of Mote sensors.

325 citations

Proceedings ArticleDOI
29 Oct 2001
TL;DR: A cooperative scheme for localizing the robots based on visual imagery that is more robust than decentralized localization and a set of control algorithms that allow the robots to maintain a prescribed formation are described.
Abstract: We describe a framework for coordinating multiple robots in cooperative manipulation tasks in which vision is used for establishing relative position and orientation and maintaining formation. The two key contributions are a cooperative scheme for localizing the robots based on visual imagery that is more robust than decentralized localization, and a set of control algorithms that allow the robots to maintain a prescribed formation (shape and size). The ability to maintain a prescribed formation allows the robots to "trap" objects in their midst, and to "flow" the formation to a desired position. We derive the cooperative localization and control algorithms and present experimental results that illustrate the implementation and the performance of these algorithms.

211 citations

Proceedings ArticleDOI
07 Aug 2002
TL;DR: Methods of localization are presented that use cooperating landmarks (beacons) providing range-only measurements, and that can be used to solve the simultaneous localization and mapping (SLAM) problem when beacon locations are uncertain.
Abstract: This paper presents methods of localization using cooperating landmarks (beacons) that provide the ability to measure range only. Recent advances in radio frequency technology make it possible to measure range between inexpensive beacons and a transponder. Such a method has tremendous benefit since line of sight is not required between the beacons and the transponder and because the data association problem can be completely avoided. If the positions of the beacons are known, measurements from multiple beacons can be combined using probability grids to provide an accurate estimate of robot location. This estimate can be improved by using Monte Carlo techniques and Kalman filters to incorporate odometry data. Similar methods can be used to solve the simultaneous localization and mapping problem (SLAM) when beacon locations are uncertain. Experimental results are presented for robot localization. Tracking and SLAM algorithms are demonstrated in simulation.

161 citations

Proceedings ArticleDOI
08 Dec 2003
TL;DR: This work performs an experiment in which a mobile robot localizes using dead reckoning and range measurements to stationary radio-frequency beacons in its environment, incorporating the range measurements into the position estimate using a Kalman filter.
Abstract: We present an early experimental result toward solving the localization problem with range-only sensors. We perform an experiment in which a mobile robot localizes using dead reckoning and range measurements to stationary radio-frequency beacons in its environment, incorporating the range measurements into the position estimate using a Kalman filter. This data set involves over 20,000 range readings to surveyed beacons while a robot moved continuously over a path for nearly 1 hour. Careful groundtruth accurate to a few centimeters was recorded during this motion. We show the improvement of the robot's position estimate over dead reckoning even when the range readings are very noisy. We extend this approach to the problem of simultaneous localization and mapping (SLAM), localizing both the robot and tag positions from noisy initial estimates.

84 citations

01 Jan 2002
TL;DR: A device the authors call a Flashlight for interacting with the sensor field for collecting navigation information from the sensors in the local neighborhood, activating and deactivating specified areas of the sensors network, and detecting events in the sensor network.
Abstract: We develop distributed algorithms for sensor networks that respond by directing a target (robot or human) through a region. The sensor network models the event levels sensed across a geographical area, adapts to changes, and guides a moving object incrementally across the network. We describe a device we call a Flashlight for interacting with the sensor field. This interaction includes collecting navigation information from the sensors in the local neighborhood, activating and deactivating specified areas of the sensor network, and detecting events in the sensor network. We report on hardware experiments using a physical sensor network consisting of Mote sensors.

19 citations
