Skinner and Operant Conditioning

In 1938, B. F. Skinner took up Thorndike's law of effect and reformulated it as the law of reinforcement, in an attempt to measure and objectify the instrumental response as fully as possible and, at the same time, to free it from the mentalistic terms (such as "satisfying" or "annoying" consequences) that Thorndike had employed.

Under the law of reinforcement, organisms learn new behaviors when those behaviors are followed by reinforcement; we know that a behavior has been reinforced because the probability that it will reappear in the future increases.
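The core of the law of reinforcement — that reinforcement raises the future probability of the behavior it follows — can be illustrated with a toy simulation. The initial probability and the `increment` parameter below are illustrative assumptions, not figures from Skinner's work:

```python
import random

def simulate(trials=200, p_init=0.05, increment=0.05, seed=0):
    """Toy model: each time the behavior is emitted and reinforced,
    the probability of emitting it again is nudged upward."""
    random.seed(seed)
    p = p_init
    for _ in range(trials):
        if random.random() < p:          # behavior is emitted...
            p = min(1.0, p + increment)  # ...and reinforced, so it strengthens
    return p

print(simulate())  # final probability is at least the initial 0.05
```

With `increment=0`, reinforcement has no effect and the probability never changes, which is the contrast the law of reinforcement draws.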

Skinner's box

Skinner demonstrated the law of reinforcement by establishing the operant conditioning procedure, using a cage of his own design that came to be known as the Skinner box.

The term operant conditioning refers to the process by which the frequency with which a behavior is presented is modified by its consequences. Thus, the probability that an operant behavior will appear is determined mainly by the events that followed that behavior in the past. Skinner introduced the term operant behavior to refer to all responses that have the same effect on the environment. In this sense, the operant behavior of pressing the lever can be executed by a rat through different responses, such as pressing with a paw, with the nose, or with the tail. All these responses constitute the same operant.

This device allowed an animal such as a rat (pigeons were also used as experimental subjects) to learn an arbitrary behavior, such as pressing a lever, provided that performing this behavior was immediately followed by the presentation of food, which reinforced the operant behavior. The box Skinner designed is based on the following elements:

  • A lever located inside the cage that the animal must press to obtain food, automatically connected to the mechanism that dispenses a small food pellet into a feeder located next to the lever.
  • A cumulative recording mechanism that graphically shows when, and how often, the animal responds during the session.
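As a sketch of what the cumulative recorder produces, the running total of responses can be computed from a session log; the slope of the resulting curve is the animal's response rate. The session data below are made up for illustration:

```python
from itertools import accumulate

# Hypothetical session log: 1 = lever press in that second, 0 = no press.
session = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]

# The cumulative record: total presses up to each moment. The curve
# rises by one step at every press; its slope is the response rate.
record = list(accumulate(session))
print(record)  # [0, 0, 1, 1, 2, 3, 3, 3, 4, 4, 5, 6, 7, 7]

rate = record[-1] / len(session)  # overall responses per time unit
print(f"{rate:.2f} presses per second")  # 0.50 presses per second
```

A flat stretch in the record means the animal stopped responding; a steep stretch means a high response rate, which is exactly what the paper-and-pen recorder made visible at a glance.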

The operant conditioning process

A typical operant conditioning procedure consists of the following steps:

Food deprivation

The rat to be conditioned is first deprived of food until it reaches 80% of its usual weight, that is, a 20% reduction in body weight.

Cage adaptation

During the adaptation sessions, the animal is placed inside the cage so that the rat's typical exploratory responses habituate, and to observe the animal's initial operant level, that is, how often it touches the lever before conditioning of this response begins. This operant baseline serves as a reference against which to verify the subsequent increase in response rate produced by reinforcement.

Training in the food dispenser

This phase has a double objective: on the one hand, for the animal to learn where the food will appear and, on the other, for it to learn, through classical conditioning, to associate the noise of the food-dispensing mechanism (a sound that becomes a conditioned stimulus signaling the presence of food) with the availability of food in the feeder.

Shaping by differential reinforcement or successive approximations

In this phase the animal learns the operant behavior of pressing the lever. To achieve this, every time the animal makes a movement that forms part of, or approximates, the final behavior, it receives food. Reinforcement is thus presented only when the following behaviors appear: first, the rat is reinforced for orienting toward the lever; then, for approaching it; then, for raising its paws above the lever; and, finally, for pressing the lever properly.

The technique of shaping by differential reinforcement is used to increase the likelihood of a behavior that is virtually absent from the organism's habitual repertoire, but that the organism has no physical limitation preventing it from executing. The technique requires planning and sequencing, in increasing order of difficulty, the different steps that lead to the final target behavior, and administering reinforcement only after each of these intermediate behaviors. It can thus be used to teach a rat to press a lever, to teach a pigeon to play the piano, or to train a blind person's guide dog. Shaping is especially useful, however, for teaching behaviors of a certain complexity to children (for whom they would be unlikely to occur naturally) or to people with intellectual disabilities. The technique also underlies various behavioral therapies, such as systematic desensitization, which is used to help a person gradually approach an object or situation that initially causes aversion.
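The shaping logic described above — reinforce only the current approximation, and raise the criterion once that approximation is mastered — can be sketched as a small simulation. The step names, success probability, and mastery threshold are illustrative assumptions, not parameters from the experimental literature:

```python
import random

# Hypothetical successive approximations, in increasing order of
# difficulty, ending with the target behavior.
STEPS = ["orients toward lever", "approaches lever",
         "paws above lever", "presses lever"]

def shape(max_attempts=500, p_success=0.3, mastery=3, seed=1):
    """Reinforce only the current criterion behavior; once it has been
    reinforced `mastery` times, move the criterion to the next step.
    Returns the attempt at which the target behavior was acquired,
    or None if shaping did not finish within max_attempts."""
    random.seed(seed)
    criterion = 0   # index of the approximation currently reinforced
    reinforced = 0  # reinforcements delivered at this criterion
    for attempt in range(1, max_attempts + 1):
        if random.random() < p_success:   # animal meets the criterion
            reinforced += 1               # deliver food (reinforcement)
            if reinforced == mastery:     # approximation mastered:
                criterion += 1            # raise the criterion
                reinforced = 0
                if criterion == len(STEPS):
                    return attempt        # target behavior acquired
    return None

attempts = shape()
print(f"target behavior shaped after {attempts} attempts")
```

The key design point is that earlier approximations stop being reinforced once the criterion moves on, which is what "differential" reinforcement means here.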

To assess the strength of operant conditioning, Skinner continuously measured the frequency or rate at which the rat pressed the lever. The experimental method Skinner designed is called the free operant, since the animal can freely repeat the instrumental response as many times as it wants, without the researcher's intervention. The free-operant method contrasts with the trial-by-trial method initially used by Thorndike, in which the experimenter marks the beginning of each trial; the animal must therefore be handled directly to place it back in the experimental chamber each time the instrumental behavior has been performed.

The operant conditioning procedure initiated and developed by Skinner allowed him to predict and control behavior. This knowledge has been essential in several fields of psychology: it has had repercussions mainly in clinical practice and behavioral therapies, in education, and in social applications.
