Abstract: Single switch scanning is the access method of last resort for powered wheelchairs, primarily because drift is a significant problem. To correct a drift to the left or the right, the user must stop moving forward, wait for the scanning device to reach the arrow for the desired direction, click to turn the chair, stop turning, wait for the scan to return to forward, and then click to move forward again. Robotic assisted control can improve the ease and speed of driving with single switch scanning. Under robotic control, sensors are used to correct drift and to avoid obstacles; the user is only required to give commands to change direction, for example "left" at an intersection.

BACKGROUND

Powered wheelchairs can be driven with a variety of access methods. The method of first choice is a joystick. If a person is unable to drive with a joystick, a multiple switch array such as a sip and puff system or a head switch array can be used. If a person cannot use a multiple switch array, a single switch scanning device is used; single switch scanning is the access method of last resort. With traditional powered wheelchairs, the need for frequent corrections to counteract drift and to maneuver around obstacles makes driving difficult for single switch scanning users.

Work on robotic wheelchairs has produced systems that can navigate indoor environments by taking commands from the user and carrying them out safely using sensors on the robot (for example, [Levine et al., 1990] and [Miller, in press]). Most of the work on robotic wheelchairs does not address access methods; the primary focus is on the navigation system. While it is important to have a safe navigation system, it is also important to consider how a person will be able to use the system. Simpson and Levine [1997] studied voice control as an access method for the NavChair system. Yanco and Gips [1997] investigated eye control as an access method.
In this paper, we study single switch scanning as an access method for our robotic wheelchair system, Wheelesley, and compare the results to traditional control of a powered wheelchair with a single switch scanning device. The wheelchair system [Yanco, in press] consists of a robotic wheelchair and a user interface. To provide robotic assistance, the wheelchair uses infrared, sonar, and bump sensors and an on-board processor to avoid obstacles and to keep the wheelchair centered in a hallway. The robotic wheelchair makes the necessary corrections to the current heading whenever one or more sensors indicate that an obstacle or wall is getting too close to the wheelchair. The user gives commands through the user interface, which runs on a Macintosh PowerBook. The switch is a Prentke Romich rocking lever switch, connected to the PowerBook through a Don Johnston Macintosh switch interface.

For these experiments, the user interface consists of four large arrows and a stop button, and was designed to look and function like a standard single switch scanning device. The interface scans through the forward, right, left, and back arrows until the user selects a command by hitting the switch, pausing at each possible selection for two seconds. Since all test subjects were able-bodied, the commands are latched: to stop driving or turning, the user hits the switch again. After the stop command is given, scanning restarts on the forward arrow.

RESEARCH QUESTION

Does robotic assistance improve driving performance compared to traditional manual control for a person using single switch scanning as an access method for a powered wheelchair?

SINGLE SWITCH ROBOTIC WHEELCHAIR CONTROL METHODS

To determine the answer, we designed an experiment to test the performance of subjects under robotic assisted control and under traditional manual control.
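The shared-control behavior described above, in which the chair corrects its heading whenever a sensor reports a wall or obstacle getting too close, can be sketched as follows. This is a minimal illustration only: the clearance threshold, gain, and function names are assumptions for the sketch, not Wheelesley's actual implementation.

```python
def heading_correction(left_dist, right_dist, clearance=0.5, gain=0.4):
    """Sketch of sensor-based heading correction (illustrative only).

    left_dist / right_dist: nearest range readings (metres) on each side.
    Returns a steering correction: positive = steer right, negative = steer left.
    """
    correction = 0.0
    if left_dist < clearance:
        # Too close on the left: steer right, proportionally to the intrusion.
        correction += gain * (clearance - left_dist)
    if right_dist < clearance:
        # Too close on the right: steer left.
        correction -= gain * (clearance - right_dist)
    return correction
```

With symmetric readings the corrections cancel, which is what keeps the chair centered in a hallway; the user never has to issue the small corrective turns manually.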
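The cost of a manual drift correction can be made concrete from the scanning parameters given above (four arrows scanned in order, a two-second dwell on each, and scanning restarting on the forward arrow after a stop). The sketch below models worst-case scanning delay; the figures it produces are illustrative of the model, not measured values from the study.

```python
# Scan order and dwell time as described in the interface section.
SCAN_ORDER = ["forward", "right", "left", "back"]
DWELL = 2.0  # seconds the interface pauses on each arrow

def worst_case_scan_time(command):
    """Worst-case seconds spent waiting for the scanner to highlight
    `command`, given that scanning starts on the forward arrow."""
    return (SCAN_ORDER.index(command) + 1) * DWELL

def drift_correction_scan_overhead(direction):
    """Scanning overhead of one manual drift correction: after stopping,
    the user waits for the scan to reach the turn arrow; after stopping
    the turn, scanning restarts on forward, costing at most one more dwell."""
    return worst_case_scan_time(direction) + worst_case_scan_time("forward")
```

Under this model, a single leftward drift correction can cost up to 8 seconds of scanning alone, which is exactly the overhead robotic assistance is intended to remove.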
Fourteen able-bodied subjects (7 men and 7 women), ranging in age from 18 to 43, were tested. At the beginning of a session, the subject was shown the wheelchair. The sensors used in robotic assisted control were pointed out and briefly explained, and safety measures, such as the power button, were discussed. The two driving methods were then explained to the subject.

After this introduction, the subject was seated in the wheelchair and the user interface was connected to the wheelchair. The single switch scanning interface was explained to the subject, who first practiced using it with the motors turned off. Once the subject was comfortable with the interface, the session entered a practice phase in which the subject tried robotic assisted control and then traditional manual control. The subject practiced both methods until he expressed an understanding of each control method; subjects usually spent about two minutes on each. All practice was done off the test course, so that the subject could not learn anything that would assist him during the test phase.

The course was designed to include obstacles (several couches and chairs, a fire extinguisher mounted on the wall 30 cm above the ground, a trash can, and a table) and turns to the left and to the right. A diagram of the course is given in Figure 1.

The test phase consisted of four up-and-back traversals of the test course, alternating between the two control methods. Half of the subjects started with robotic assisted control and the other half started with traditional manual control. Each up-and-back traversal consists of two parts: running the course from the couch area to the hallway, and then the return trip. The turn in the middle of the course is not counted as part of the run, since turning completely around in the middle of a hallway is not a normal driving occurrence. The total session time for each subject was approximately 45 minutes.
Most data collection was done by the computer running the user interface; the researcher recorded only the number of scrapes made by the chair. At the completion of the test, the user was asked to rate traditional manual control and robotic assisted control on a scale from 1 (worst) to 10 (best).

RESULTS

Four experimental performance measures were collected by the computer: (1) the number of clicks required to navigate the course, (2) the amount of time spent scanning to reach the necessary commands, (3) the amount of time spent moving or executing the given commands, and (4) the total amount of time spent on the course (scanning time plus moving time). Results are summarized in Table 1.

Data for each experimental measure were analyzed using an ANOVA test. The differences between robotic control and manual control were highly significant, with p<.0001 for all measures. On average, robotic control saved 60 clicks over manual control, a 71% improvement. Total time for robotic assisted control averaged 101 seconds less than manual control, a 25% improvement. The differences between the two trials were significant for clicks (p=.003) and for time spent scanning (p=.015); there was no significant difference between trials for moving time or total time.

[Figure 1: Diagram of the test course, showing the couch, chairs, and table.]
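As a sanity check, the reported percent improvements can be reproduced from per-condition means. The means below are illustrative back-calculations from the stated savings (60 clicks at roughly 71%, 101 seconds at 25%), not values taken from Table 1.

```python
def percent_improvement(manual, robotic):
    """Improvement of robotic over manual control,
    as a percentage of the manual value."""
    return 100.0 * (manual - robotic) / manual

# Illustrative means consistent with the reported savings:
#   ~84 manual clicks vs ~24 robotic clicks  -> 60 clicks saved, ~71%
#   ~404 s manual total vs ~303 s robotic    -> 101 s saved, 25%
```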