
Showing papers in "Journal of the Experimental Analysis of Behavior in 1963"


Journal ArticleDOI
TL;DR: Performance following discrimination learning without errors lacks three characteristics that are found following learning with errors, and only those birds that learned the discrimination with errors showed "emotional" responses.
Abstract: Responses to S- ("errors") are not a necessary condition for the formation of an operant discrimination of color. Errors do not occur if discrimination training begins early in conditioning and if S+ and S- initially differ with respect to brightness, duration and wavelength. After training starts, S-'s duration and brightness are progressively increased until S+ and S- differ only with respect to wavelength. Errors do occur if training starts after much conditioning in the presence of S+ has occurred or if S+ and S- differ only with respect to wavelength throughout training. Performance following discrimination learning without errors lacks three characteristics that are found following learning with errors. Only those birds that learned the discrimination with errors showed (1) "emotional" responses in the presence of S-, (2) an increase in the rate (or a decrease in the latency) of their responses to S+, and (3) occasional bursts of responses to S-. The acquisition of an operant discrimination may be defined as the process whereby an organism comes to respond more frequently to a stimulus correlated with reinforcement (S+) than to a stimulus correlated with nonreinforcement (S-). In popular terminology, responses made to S+ are "correct responses"

719 citations


Journal ArticleDOI
TL;DR: A procedure developed earlier (Terrace, 1963) successfully trained a red-green discrimination without the occurrence of any errors in 12 out of 12 cases.
Abstract: A procedure developed earlier (Terrace, 1963) successfully trained a red-green discrimination without the occurrence of any errors in 12 out of 12 cases. Errorless transfer from the red-green discrimination to a discrimination between a vertical and a horizontal line was accomplished by first superimposing the vertical and the horizontal lines on the red and green backgrounds, respectively, and then fading out the red and the green backgrounds. Superimposition of the two sets of stimuli without fading, or an abrupt transfer from the first to the second set of stimuli, resulted in the occurrence of errors during transfer. Superimposition, however, did result in some "incidental learning". Performance following acquisition of the vertical-horizontal discrimination with errors differed from performance following acquisition without errors. If the vertical-horizontal discrimination was learned with errors, the latency of the response to S+ was permanently shortened and errors occurred during subsequent testing on the red-green discrimination even though the red-green discrimination was originally acquired without errors. If the vertical-horizontal discrimination was learned without errors, the latency of the response to S+ was unaffected and no errors occurred during subsequent testing on the red-green discrimination.

316 citations


Journal ArticleDOI
TL;DR: When a pigeon's pecking on a single key was reinforced by a variable-interval (VI) schedule of reinforcement, the rate of pecking was insensitive to changes in the duration of reinforcement from 3 to 6 sec.
Abstract: When a pigeon's pecking on a single key was reinforced by a variable-interval (VI) schedule of reinforcement, the rate of pecking was insensitive to changes in the duration of reinforcement from 3 to 6 sec. When, however, the pigeon's pecking on each of two keys was concurrently reinforced by two independent VI schedules, one for each key, the rate of pecking was directly proportional to the duration of reinforcement.

315 citations


Journal ArticleDOI
TL;DR: The results indicate that the number of responses in the final completed ratio run increases as a function of the size of the ratio increment, but when small increments are used, progressive satiation results in a decline in performance with the larger volumes of liquid.
Abstract: The progressive ratio schedule requires the subject to emit an increasing number of responses for each successive reinforcement. Eventually, the response requirement becomes so large that the subject fails to respond for a period of 15 min and thereby terminates the session. This point is arbitrarily defined as the "breaking point" of the subject's performance. The measure is quantified in terms of the number of responses in the final completed (i.e., reinforced) ratio run of the session. Previous work has shown that this measure varies as a function of several motivational variables and may thus be useful as an index of reinforcement strength. The present study is an extension of that work. The subjects were four rats. In the first experiment, the effects of the size of the increment by which each ratio run increased were studied. In two additional experiments, the volume of a liquid reinforcer was varied using both large and small ratio increments. The results indicate that the number of responses in the final completed ratio run increases as a function of the size of the ratio increment. However, the number of reinforcements obtained by the animals per session declines sharply. When large ratio increments are used, the number of responses in the final ratio increases as a function of the volume of the reinforcer, but when small increments are used, progressive satiation results in a decline in performance with the larger volumes of liquid.
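A minimal sketch of the bookkeeping behind the breaking-point measure (the function name, first ratio, and increment below are illustrative assumptions, not the authors' apparatus code):

```python
# Each reinforced run is `increment` responses longer than the last; the
# breaking-point measure is the size of the final ratio the subject
# completes before a 15-min pause terminates the session.

def breaking_point(completed_runs: int, first_ratio: int, increment: int) -> int:
    """Responses in the final completed (i.e., reinforced) ratio run."""
    if completed_runs == 0:
        return 0
    return first_ratio + increment * (completed_runs - 1)

# Example: with a first ratio of 5 and an increment of 10, a rat that
# completes 12 runs has a breaking point of 115 responses (and has
# emitted 5 + 15 + ... + 115 = 720 reinforced responses in all).
print(breaking_point(completed_runs=12, first_ratio=5, increment=10))  # 115
```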

307 citations


Journal ArticleDOI
TL;DR: When a pigeon's pecks on two keys were reinforced concurrently by two independent variable-interval (VI) schedules, one for each key, the response rate on either key was given by the equation: R(1)=Kr(1)/(r(1)+r(2))^(5/6), where R is response rate, r is reinforcement rate, and the subscripts 1 and 2 indicate keys 1 and 2.
Abstract: When a pigeon's pecks on two keys were reinforced concurrently by two independent variable-interval (VI) schedules, one for each key, the response rate on either key was given by the equation: R(1)=Kr(1)/(r(1)+r(2))^(5/6), where R is response rate, r is reinforcement rate, and the subscripts 1 and 2 indicate keys 1 and 2. When the constant, K, was determined for a given pigeon in one schedule sequence, the equation predicted that pigeon's response rates in a second schedule sequence. The equation derived from two characteristics of the performance: the total response rate on the two keys was proportional to the one-sixth power of the total reinforcement rate provided by the two VI schedules; and, the pigeon matched the relative response rate on a key to the relative reinforcement rate for that key. The equation states that response rate on one key depends in part on reinforcement rate for the other key, but implies that it does not depend on response rate on the other key. This independence of response rates on the two keys was demonstrated by presenting a stimulus to the pigeon whenever one key's schedule programmed reinforcement. This maintained the reinforcement rate for that key, but reduced the response rate almost to zero. The response rate on the other key, nevertheless, continued to vary with reinforcement rates according to the equation.
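As a check on the algebra, the equation follows directly from the two properties named in the abstract (a sketch; K is the fitted constant):

```latex
% Matching of relative rates, and total rate proportional to the
% one-sixth power of the total reinforcement rate:
\begin{align*}
\frac{R_1}{R_1+R_2} = \frac{r_1}{r_1+r_2}, \qquad R_1+R_2 = K\,(r_1+r_2)^{1/6}\\
\Rightarrow\; R_1 = K\,(r_1+r_2)^{1/6}\cdot\frac{r_1}{r_1+r_2}
              = \frac{K\,r_1}{(r_1+r_2)^{5/6}}
\end{align*}
```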

241 citations


Journal ArticleDOI
Douglas Anger
TL;DR: Under some conditions the reinforcement in Sidman avoidance seems to be primarily due to the decrease in aversiveness of temporal stimuli; under other conditions there probably is reinforcement from the termination of conditioned aversive responses.
Abstract: Animals learn to avoid with the Sidman procedure even though the avoidance response is not followed by the termination of any warning stimulus in the environment. What reinforces this response? The accepted explanation has been that the avoidance response is reinforced when it terminates other behavior that has become aversive by pairing with shock. However, the reinforcement may also be derived from the temporal discriminations that develop with Sidman avoidance. These and other temporal discriminations show that the animal has available some events that vary with the postresponse time. The shock will closely follow the temporal stimuli at long postresponse times and would be expected to make them aversive. The stimuli at short postresponse times would have a relatively low aversiveness due to their more remote relation to shock. Since the avoidance response changes a long postresponse time to a short one, that response would be followed by a decrease in aversiveness which would reinforce it. When sharp temporal discriminations are present, reinforcement from the decrease in aversiveness of temporal stimuli probably plays a dominant role in maintaining the avoidance response. This formulation fits the available data and has adequate answers for the objections that have been raised to earlier conceptions of the role temporal discriminations might play in Sidman avoidance. Although under some conditions the reinforcement in Sidman avoidance seems to be primarily due to the decrease in aversiveness of temporal stimuli, under other conditions there probably is reinforcement from the termination of conditioned aversive responses.

202 citations


Journal ArticleDOI
TL;DR: It is suggested that "response rate" as a measure usually includes a response-dependent component that is insensitive to changes in other variables; the method also displays IRTs on several schedules and the results of stimulus generalization tests.
Abstract: A cathode-ray oscilloscope and a Polaroid camera record interresponse times as a function of time, stimulus wavelength, and similar variables. Each response flashes a point of light on the oscilloscope screen; the vertical position of the point gives IRT, the horizontal position gives the value of the other variable. Several thousand such points may be recorded on a single frame of film, and the density of the points indicates the relative frequency of various IRTs. The method has the advantages of a two-dimensional display of continuous variables, flexibility, speed, and relatively low cost. It lacks the advantage of a digital output. Figures show IRTs of pigeons on VI, FR, DRL and extinction, and transitions among these, and also the results of stimulus generalization tests. The results have some provocative features that require much further exploration. Among other things, they suggest that “response rate” as a measure usually includes a response-dependent component that is insensitive to changes in other variables.

164 citations


Journal ArticleDOI
TL;DR: In this paper, responses maintained by a variable-interval schedule of food reinforcement were also punished after every nth response (fixed-ratio punishment), which produced an initial phase during which the emission of responses was positively accelerated between punishments; eventually, a uniform but reduced rate of responding emerged.
Abstract: Responses were maintained by a variable-interval schedule of food reinforcement. At the same time, punishment was delivered following every nth response (fixed-ratio punishment). The introduction of fixed-ratio punishment produced an initial phase during which the emission of responses was positively accelerated between punishments. Eventually, the degree of positive acceleration was reduced and a uniform but reduced rate of responding emerged. Large changes in the over-all level of responding were produced by the intensity of punishment, the value of the punishment ratio, and the level of food deprivation. The uniformity of response rate between punishments was invariant in spite of these changes in over-all rate and contrary to some plausible a priori theoretical considerations. Fixed-ratio punishment also produced phenomena previously observed under continuous punishment: warm-up effect and a compensatory increase. This type of intermittent punishment produced less rapid and less complete suppression than did continuous punishment.
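A compact sketch of the joint contingency (the VI implementation and parameter values here are assumptions for illustration; the experiment manipulated punishment intensity, ratio value, and deprivation):

```python
import random

# Food is set up by a variable-interval timer; independently, every nth
# response also produces shock (fixed-ratio punishment), so a single
# response can be reinforced, punished, both, or neither.

def classify_responses(response_times, vi_mean_s=60.0, punish_n=50):
    next_food = random.expovariate(1.0 / vi_mean_s)
    outcomes, count = [], 0
    for t in response_times:
        events = []
        if t >= next_food:             # a VI reinforcement is waiting
            events.append("food")
            next_food = t + random.expovariate(1.0 / vi_mean_s)
        count += 1
        if count % punish_n == 0:      # every nth response is shocked
            events.append("shock")
        outcomes.append((t, events))
    return outcomes
```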

153 citations



Journal ArticleDOI
TL;DR: The introduction of a changeover delay (COD) reduced or eliminated the superstitious responding of human subjects when presses on one button were reinforced on a VI 30-sec schedule while presses on a second were never reinforced.
Abstract: Superstitions were demonstrated with human subjects when presses on one button were reinforced on a VI 30-sec schedule while presses on a second were never reinforced. Superstitious responding, on the second button, was often maintained because presses on that button were frequently followed by reinforcement for a subsequent press on the first button. The introduction of a changeover delay (COD), which separated in time presses on the second button and subsequent reinforced presses on the first button, reduced or eliminated the superstitious responding of these subjects. Some complex superstitions were also demonstrated with other subjects for which the COD was in effect from the beginning of the session.
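The COD reduces to a single timing rule; a minimal sketch with assumed names and an assumed 2-sec value:

```python
# A press on the reinforced button may collect a set-up reinforcer only
# if at least `cod_s` seconds have elapsed since the most recent press
# on the other button, so presses on the never-reinforced button can no
# longer be closely followed by reinforcement.

def reinforcement_allowed(t_press: float, t_last_other_press: float,
                          cod_s: float = 2.0) -> bool:
    return (t_press - t_last_other_press) >= cod_s
```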

141 citations



Journal ArticleDOI
TL;DR: Pigeons exposed to three successive matching-to-sample procedures eventually acquired the delay performance and were able to match effectively at delays of about 4 sec.
Abstract: Pigeons were exposed to three successive matching-to-sample procedures. On a given trial, the sample (red, green or blue light) appeared on a center key; observing responses to this key produced the comparison stimuli on two side keys. Seven different experimental conditions could govern the temporal relations between the sample and comparison stimuli. In the “simultaneous” condition, the center key response was followed immediately by illumination of the side key comparison stimuli, with the center key remaining on. In “zero delay” the center key response simultaneously turned the side keys on and the center key off, while in the “variable delay” conditions, intervals of 1, 2, 4, 10, and 24 sec were interposed between the offset of the sample and the appearance of the comparison stimuli on the side keys. In all conditions, a response to the side key of matching hue produced reinforcement, while a response to the non-matching side key was followed by a blackout. In Procedure I all seven experimental conditions were presented in randomly permuted order. After nine sessions of exposure (at 191 trials per session, for a total of 1719 trials) the birds gave no evidence of acquisition in any of the conditions. They were therefore transferred to Procedure II, which required them to match only in the “simultaneous” condition, with both the sample and comparison stimuli present at the same time. With the exception of one bird, all subjects acquired this performance to near 100% levels. Next, in Procedure III, they were once more exposed to presentation of all seven experimental conditions in random order. In contrast to Procedure I, they now acquired the delay performance, and were able to match effectively at delays of about 4 sec.
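The seven temporal conditions can be summarized compactly (an illustrative encoding, not the authors' program):

```python
# "simultaneous": the sample stays on with the comparisons; "zero delay":
# the comparisons replace the sample immediately; otherwise a fixed delay
# separates sample offset from comparison onset.
CONDITIONS = (
    [{"name": "simultaneous", "sample_stays_on": True, "delay_s": 0.0},
     {"name": "zero delay", "sample_stays_on": False, "delay_s": 0.0}]
    + [{"name": f"{d}-sec delay", "sample_stays_on": False, "delay_s": float(d)}
       for d in (1, 2, 4, 10, 24)]
)
```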


Journal ArticleDOI
TL;DR: It appears that even very mild punishment may be effective if the over-all frequency of reinforcement can be maintained by means of an alternative unpunished response.
Abstract: Mental hospital patients were conditioned to respond at a high rate. Then an attempt was made to eliminate the response by means of a mild punishment consisting of a period of timeout from reinforcement (response-produced extinction). When only one response was available for obtaining the reinforcement, the mild punishment was not effective in eliminating that response. When an alternative response was also made available for obtaining the reinforcement, the mild punishment was completely effective. It appears that even very mild punishment may be effective if the over-all frequency of reinforcement can be maintained by means of an alternative unpunished response.

Journal ArticleDOI
TL;DR: The present data suggest that the magnitude of contrast is very small if pecking on the red key is reinforced at a high enough frequency, and given that interactions occur, induction rather than contrast may result from small changes in a low frequency of reinforcement associated with green.
Abstract: A pigeon's rate of pecking on a red key, reinforced at a constant frequency, may be changed by increasing or decreasing the frequency of reinforcement of pecking on a successively presented green key. The changes in the rate of pecking on red, called interactions, are of two types: contrast, in which the changes in the rates of pecking on the two colors are in opposite directions; and, induction, in which the changes in the rates are in the same direction. In previous data, a change in the frequency of reinforcement associated with the green key produced a corresponding change in the rate of pecking the green key and an opposite change (contrast) in the rate of pecking on the red key. The present data suggest that the magnitude of contrast is very small if pecking on the red key is reinforced at a high enough frequency (about 40 reinforcements per hr in the present experiment). Also, given that interactions occur, induction rather than contrast may result from small changes in a low frequency of reinforcement associated with green.

Journal ArticleDOI
James B. Appel
TL;DR: Six male White Carneaux pigeons were trained to peck at one of two keys to obtain food on several fixed-ratio schedules of reinforcement; the duration of stimulus change periods was an exponential function of the number of responses required for reinforcement when the possibility for reinforcement was not disturbed by periods of stimulus change.
Abstract: Six male White Carneaux pigeons were trained to peck at one of two keys to obtain food on several fixed-ratio schedules of reinforcement. Concurrently, the first response on a second key could (I) change the conditions of visual stimulation and remove the food reinforcement contingency, (II) change the conditions of stimulation and have no effect upon the reinforcement contingency, or (III) do nothing. The second response on the stimulus change key always restored baseline conditions. When second-key responses produced a stimulus change, the number of such responses was a function of the ratio value on the first key. Typically, second-key responses occurred before the start of fixed-ratio runs. The duration of stimulus change periods was an exponential function of the number of responses required for reinforcement when the possibility for reinforcement was not disturbed by periods of stimulus change (Condition II).

Journal ArticleDOI
TL;DR: Punishment reduced the frequency of the short inter-response times to a greater extent than did either extinction or satiation, and actually increased the efficiency of the DRL responding.
Abstract: The pecking response of pigeons was reinforced when a minimum period of time had elapsed since the last response (DRL schedule of food-reinforcement). Punishment, satiation, extinction, and stimulus change were employed separately to reduce responding. When the effects of the four procedures were compared, punishment was found capable of producing a more immediate, complete, and long-lasting response reduction than the others. Punishment had its maximum effect on the responses that were least relevant to reinforcement. The punishment reduced the frequency of the short inter-response times to a greater extent than did either extinction or satiation. In this way, punishment actually increased the efficiency of the DRL responding.
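The DRL contingency itself is a one-line timing rule (a sketch; the 10-sec value is an assumption, as the abstract does not give the DRL parameter):

```python
# Only inter-response times (IRTs) of at least `drl_s` seconds are
# reinforced; short IRTs are therefore the responses "least relevant to
# reinforcement" that punishment suppressed most.

def drl_reinforced(irt_s: float, drl_s: float = 10.0) -> bool:
    return irt_s >= drl_s
```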

Journal ArticleDOI
TL;DR: The results of the present study appear to be a special case of the general SD enhancement effect demonstrated by Hanson (1959) and by Pierrel and Sherman (1960).
Abstract: When the individual SD components of a multiple schedule were combined, their control over a response summated, thus increasing the response probability to a point over that controlled by either of the SDs independently. Summation was concluded to be a phenomenon relevant for operant as well as respondent stimulus control (Pavlov, in Kimble, 1960; Hull, 1943). The results of the present study appear to be a special case of the general SD enhancement effect demonstrated by Hanson (1959) and by Pierrel and Sherman (1960).

Journal ArticleDOI
John Farmer
TL;DR: Key-pecking rates were found to be: inversely related to T/P; higher at T=1.0 second than at other T parameter values; and low and linear at several T and T/P values. The mean post-reinforcement pause, if initially small, increased, and if initially large, decreased, as T/P increased.
Abstract: In a temporally defined system of reinforcement schedules, the fixed interval case is defined when reinforcement probability, P, is equal to unity for the first response in any cycle length, T; when P is less than 1.0, random interval schedules emerge wherein T/P specifies the expected interval between reinforcements. Key-pecking rates were found to be: (a) inversely related to T/P; (b) higher at T=1.0 second than at other T parameter values; (c) low and linear at several T and T/P values. The mean post-reinforcement pause, if initially small, increased, and if initially large, decreased, as T/P increased.
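The expected interval follows from simple arithmetic: reinforcement is assigned on each T-sec cycle with probability P, so the wait is geometric with a mean of 1/P cycles, i.e., T/P seconds. A sketch under assumed names:

```python
import random

def seconds_until_setup(T: float, P: float) -> float:
    """Sample the time until a reinforcement is assigned."""
    cycles = 1
    while random.random() >= P:   # each cycle assigns with probability P
        cycles += 1
    return cycles * T

# Example: T=1.0 sec, P=0.1 gives an expected interval of 1.0/0.1 = 10 sec;
# P=1.0 recovers the fixed interval case with period T.
```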

Journal ArticleDOI
TL;DR: A method for generating a reinforcement schedule that closely approximates idealized VI schedules in which reinforcement assignments occur randomly in time (RI schedules) is described and response rates of pigeons exposed for 20 sessions appeared very similar to response rates characteristic of arithmetic series VIs.
Abstract: A method for generating a reinforcement schedule that closely approximates idealized VI schedules in which reinforcement assignments occur randomly in time (RI schedules) is described. Response rates of pigeons exposed for 20 sessions to this schedule appeared very similar to response rates characteristic of arithmetic series VIs. The distribution function describing these schedules was derived, and its relations to other VI distributions, as well as to FI and random ratio (RR) schedules, were shown.
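One way to realize reinforcement assignments that occur randomly in time (an assumed implementation, not necessarily the authors' method) is to draw exponentially distributed inter-assignment intervals, the continuous-time limit of small-cycle random-interval schedules:

```python
import random

def ri_setup_times(mean_s: float, session_s: float):
    """Yield Poisson-process times at which reinforcement is assigned."""
    t = 0.0
    while True:
        t += random.expovariate(1.0 / mean_s)  # exponential interval
        if t > session_s:
            return
        yield t

# Example: list(ri_setup_times(60.0, 3600.0)) gives roughly 60 assignment
# times in a 1-hr session on an RI 60-sec schedule.
```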

Journal ArticleDOI
TL;DR: Pigeons that had pecked a key for food while a tone ending in unavoidable electric shock was periodically presented were retested after a 2½-yr interruption; the presentation of free shocks caused a reappearance of the suppression gradient, and this effect persisted in reduced amount for several sessions after the shock condition was terminated.
Abstract: Three years ago a tone ending in unavoidable electrical shock was periodically presented to pigeons while they pecked a key for food. When pecking was disrupted by tone, shock was disconnected and the training tone as well as tones of different frequencies were presented. At first, all tones caused a reduction in the rate of pecking, but as testing proceeded, suppression began to extinguish and the gradient narrowed. In the present work, testing was resumed after a 2½-yr interruption. Analysis of the gradients obtained just before and just after the interruption yielded no evidence of changes with the passage of time. As testing proceeded, however, extinction of suppression continued and the gradient all but disappeared. In subsequent experiments with these subjects (Ss) it was found that the presentation of free shocks caused a reappearance of the gradient and that this effect persisted in reduced amount for several sessions after the shock condition was terminated.

Journal ArticleDOI
TL;DR: Results show that in monkeys that have been trained on a continuous avoidance schedule, unavoidable shocks can maintain responding even under conditions where responses have no programmed consequences.
Abstract: Squirrel monkeys were trained on a multiple schedule in which 10-min periods on a continuous shock avoidance schedule, indicated by a yellow light, alternated with 10-min periods on a 1.5-min variable interval schedule of food reinforcement (VI 1.5). A white light indicated that VI 1.5 was in effect, except for the middle 2 min of the period on VI 1.5, in which a blue light appeared and terminated with the delivery of a 0.5-sec unavoidable shock. Stable response rates developed in the avoidance and VI 1.5 components. However, the highest response rates occurred in the blue, preshock stimulus. A series of experiments showed that responding in the blue stimulus persisted even when responding had been extinguished on both the VI schedule of food reinforcement and the shock avoidance schedule. Responding in the blue stimulus ceased when the blue stimulus terminated without shock or when it terminated with a response-contingent shock. Each time responding ceased, it was restored by terminating the blue stimulus with an unavoidable shock. When the blue stimulus was on throughout each session and unavoidable shocks were delivered at regular 10-min intervals, responding was well maintained. These results show that in monkeys that have been trained on a continuous avoidance schedule, unavoidable shocks can maintain responding even under conditions where responses have no programmed consequences.

Journal ArticleDOI
TL;DR: The present study was concerned with the effects of schedules of reinforcement upon the rate of verbal responding to written material in children; rates under CRF were lower than under VR, somewhat higher than under VI, and much higher than under extinction.
Abstract: The present study was concerned with the effects of schedules of reinforcement upon the rate of verbal responding to written material in children. Four multiple schedules were used: multiple CRF-EXT, multiple CRF-VR, multiple CRF-VI, and multiple VR-VI, one subject being run on each schedule. Rates under CRF were lower than under VR, somewhat higher than under VI, and much higher than under extinction. The subject run on multiple VR-VI showed little rate difference in the two components.

Journal ArticleDOI
TL;DR: An invisibly small thumb-contraction was conditioned under secondary positive reinforcement (money) in four adult human subjects without their observation of the response, and the original skew was strikingly restored in three of the four cases.
Abstract: An invisibly small thumb-contraction was conditioned under secondary positive reinforcement (money) in four adult human subjects without their observation of the response. Electromyographic detection enabled the experimenter to reinforce the response by advancing on the subject's illuminated scoreboard the count of nickels earned. A light-beam galvanometer recorded on photosensitive paper not only those instances of the response which were of the size pre-selected for reinforcement but also those too small or too large to qualify. From the developed record, cumulative response curves were constructed for each of the variously sized subclasses of the operant. Histograms, too, were plotted showing response-frequency by subclass for each 10-min interval of the experimental session. Before conditioning, response frequency was radically skewed toward the large-amplitude end of the distribution. The effect of conditioning was to normalize the distribution, with the middle-sized subclass (the one reinforced) becoming modal. This entailed reduced frequency of responses in subclasses smaller than the one reinforced. In extinction the original skew was strikingly restored in three of the four cases.

Journal ArticleDOI
TL;DR: The results indicate that the development and maintenance of human avoidance and escape behavior may, in part, be dependent upon response cost conditions and that aversive control of human operant behavior may be limited without an adequate specification of response-cost conditions.
Abstract: The effects of cost (point-loss per response) upon human avoidance, escape, and avoidance-escape behavior maintained by PLPs (point-loss periods) were investigated. Cost had a marked but differentially suppressive effect upon responding under all schedules. The greatest number of PLPs taken under cost occurred on the escape schedule. In most instances PLPs were more frequent on the avoidance-escape schedule than on the avoidance schedule under cost. Inferior avoidance performance appeared only under cost conditions. Under no-cost, all subjects (Ss) successfully avoided all PLPs after the first hour of conditioning. These results indicate that the development and maintenance of human avoidance and escape behavior may, in part, be dependent upon response cost conditions. Aversive control of human operant behavior may be limited without an adequate specification of response-cost conditions.

Journal ArticleDOI
TL;DR: The response of intermediate probability reinforced the response of least, but not the one of greatest, probability, indicating that a reinforcer cannot be identified absolutely, but only relative to the base response.
Abstract: A set of four manipulanda were presented to four Cebus monkeys, individually, and later in pairs. Step 1 provided an estimate of each S's probability of operating each item, while Step 2 determined whether pairing the items would disturb the ordinal relations among individual response probabilities. Both procedures provided information necessary for testing the assumption that a reinforcer is simply a contingent response whose independent probability of occurrence is greater than that of the associated instrumental response. Step 3 tested this assumption by again presenting pairs of items, but with one locked and its operation made contingent upon operation of the free item of the pair. The four Ss differed markedly in the extent to which the items produced different independent response probabilities, and correspondingly, in the extent to which the contingent pairs subsequently produced reinforcement. Confirmation of the present assumptions came primarily from one S, which differed substantially on the individual items, and showed five cases of reinforcement, all in the predicted direction. Further, reinforcement was shown by an increase in both contingency and extinction sessions. Finally, the response of intermediate probability reinforced the response of least, but not the one of greatest, probability, indicating that a reinforcer cannot be identified absolutely, but only relative to the base response.

Journal ArticleDOI
TL;DR: Daily food intake in rats was temporarily reduced by the introduction of an activity wheel and temporarily increased by the subsequent removal of the wheel, suggesting that the total behavior output of the organism may be regulated as such.
Abstract: Daily food intake in rats was temporarily reduced by the introduction of an activity wheel and temporarily increased by the subsequent removal of the wheel. When this outcome is coupled with the positive relation between food deprivation and running—and food deprivation is seen as a loss of eating rather than as a physiological state—there is the suggestion that the total behavior output of the organism may be regulated as such. Specifically, when the rat is deprived of a behavior that recurrently comprises a large part of its total daily activity, an increase may occur in some other behavior.

Journal ArticleDOI
TL;DR: Three white rats trained to press a bar while being shocked produced a white noise, indicating that the effectiveness of the noise as a reinforcer did not depend on its status as a discriminative stimulus for some other form of operant behavior.
Abstract: Three white rats were trained to press a bar while being shocked. This produced a white noise. After 30 sec they were allowed to terminate both the shock and the noise by nosing a pigeon key. Comparison of the rates of pressing before and after the onset of the noise indicated that the noise itself was the immediate reinforcing agent for pressing. Furthermore, control tests showed that pressing was maintained only if it produced the noise: either omission of the noise or elimination of the dependency of the noise on the occurrence of the response led to a gradual abolition of pressing. When automatic termination of the shock was substituted for the key nosing requirement, however, only the key nosing extinguished. This indicated that the effectiveness of the noise as a reinforcer did not depend on its status as a discriminative stimulus for some other form of operant behavior.


Journal ArticleDOI
TL;DR: Methylphenidate had a methamphetamine-like effect under fixed interval and a caffeine-like effect under fixed number; under fixed number the percentage change was significantly smaller with methamphetamine than with caffeine or methylphenidate.
Abstract: It was possible to distinguish three closely-related psychomotor stimulants, caffeine, methamphetamine, and methylphenidate, by means of two operant behavior procedures, fixed interval and fixed number. Under the fixed interval procedure, the percentage change in the number of responses per reinforcement was significantly smaller with caffeine than with methamphetamine or methylphenidate (p < .001). Under the fixed number procedure, the percentage change was significantly smaller with methamphetamine than with caffeine or methylphenidate (p < .001). Thus, methylphenidate had a methamphetamine-like effect under fixed interval and a caffeine-like effect under fixed number.