
Stimulus codes form automatically

One of the assumptions shared by all coding models of reaction time is that mental codes for irrelevant stimuli are formed automatically, even though they are not necessary to carry out a task. Coding models explain consistency effects in terms of a match or a mismatch between this irrelevant stimulus code and one of the mental codes required to perform the task.

This kind of explanation is very general, because mental codes can be about anything. As a result, any kind of irrelevant stimulus can give rise to a consistency effect: letters, words, locations, colors, and so on.  What these models must specify is exactly when and how irrelevant stimulus codes influence the formation of one or more of the mental codes that are required to carry out a task.

Wallace (1971, 1972) first suggested that the S-R consistency effect in a Simon task appears because people automatically, involuntarily form a spatial code, even though the stimulus position is irrelevant. Eriksen and Eriksen (1974; see also Eriksen & Schultz, 1979) similarly suggested that flanker letters in a Flanker task are identified (forming their own letter codes) even though they are known to be irrelevant to the task. This view has been generalized since then to apply to any irrelevant stimulus characteristic, and is an assumption made by all coding models of consistency effects.

Once formed, these irrelevant stimulus codes influence the formation of other mental codes.  Many coding models also assume that mental codes form gradually, and that selective attention eventually suppresses the formation of the irrelevant stimulus code once it is identified as irrelevant.  This attentional mechanism produces a rising-then-falling, inverted U-shaped activation of the irrelevant stimulus code over time.

 

What are mental codes?

Consider what has to go on in your mind in order for you to carry out the instructions for a typical Choice Reaction Time task, such as: “press the left key when you see the color green and the right key when you see the color blue.”

When a stimulus appears, at least three things have to happen: 1) you have to figure out the color of the stimulus, 2) you have to decide which key to press, and 3) you have to actually press the key. These are usually considered the three most basic, broadly-defined processes involved in carrying out a task, and are usually called stimulus identification, response selection, and motor programming, respectively (although they can be broken down into more specific sub-processes, as well; see Sanders, 1980, 1990).

These processes can be described more concretely in terms of information and mental codes. Your senses give you signals that contain information about what is going on in the world around you. In order to understand and react to the world, you use that information to create a mental picture of your environment. In other words, you form a stimulus code: a mental representation of the properties of the stimulus environment that produced the sensory signals you received.

Stimulus identification can be thought of as the process of forming stimulus codes based on sensory information. Those stimulus codes, in turn, contain information that can be used to decide on a response. In order to act on the world, you form a response code: a mental representation of what actions you want to carry out. Response selection can be thought of as the process of forming response codes based on information from stimulus codes. Finally, motor programming is the process of using information from response codes to prepare specific muscular movements that carry out your response. This can be thought of as the formation of motor codes, which are the programs your muscles use to make a response.

The idea that thought and action in the world consist of operations on mental codes (representations of stimulus properties and response actions) is called the information processing approach, and models of performance based on this framework are called information processing models (see Anderson, 1995; Bower, 1975; Miller, 1988). Performing a task requires transforming information from the world into a stimulus code, a response code, and then a motor code, through a sequence of mental processes.

These processes clearly depend on one another. In the example above, what key you press depends on what side (left or right) you decide is correct, and what side you decide is correct depends on what you think the color of the stimulus is. In the language of information processing models, the output of stimulus identification, which contains information about the stimulus code, is used as the input for response selection. Similarly, the output of response selection, which contains information about the response code, is used as input for motor programming. Information processing models use terms like “input” and “output” a lot, because they were originally motivated by the idea that mental processes are like computer programs, and mental codes are like computer data (see Newell, Rosenbloom, & Laird, 1989; Simon, 1981; Simon & Kaplan, 1989).
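As a concrete illustration of this input-output chaining, here is a minimal sketch in Python of the three processes as a pipeline for the green/blue task described above. The function names and the instruction dictionary are purely illustrative, not part of any published model; the point is only that each process consumes the code produced by the one before it.

```python
# A minimal sketch of the three-process pipeline for the task
# "press the left key for green, the right key for blue".
# Function names and data structures are illustrative only.

def stimulus_identification(sensory_signal):
    """Form a stimulus code (here, just the color name) from sensory input."""
    return sensory_signal["color"]          # e.g. "green" or "blue"

def response_selection(stimulus_code):
    """Form a response code by applying the task instructions."""
    instructions = {"green": "left", "blue": "right"}
    return instructions[stimulus_code]      # e.g. "left"

def motor_programming(response_code):
    """Form a motor code: the concrete movement that carries out the response."""
    return f"press the {response_code} key"

# The output of each process is the input to the next.
stimulus_code = stimulus_identification({"color": "green"})
response_code = response_selection(stimulus_code)
motor_code = motor_programming(response_code)
print(stimulus_code, "->", response_code, "->", motor_code)
```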

Psychologists want to know exactly what is going on in these processes; that is, how information is represented in these codes, and how the codes are actually formed. One way to approach this question is to measure people’s performance (their speed and accuracy in carrying out a task) under different kinds of task conditions. The amount of time it takes for you to make a response is related to how difficult each of these processes is: when something about the task makes your response faster or slower, it is because one (or more) of these processes has been helped or hindered. By examining how different kinds of task conditions influence performance, psychologists are able to get an idea about what is actually going on in the formation of these different mental codes. This approach is called mental chronometry (see Meyer et al., 1988; Sanders, 1993).

There are a number of specific questions one can ask about the formation of mental codes during choice reaction time tasks. Is input information compared to items in memory one by one, until a match is found? Or is the input information compared to all possible items in memory at once? Does input information for a process cause mental codes to form gradually, or do mental codes form in chunks, like “yes” and “no” decisions? Does incomplete information get used by later processes, or do they have to wait until the previous process is completed?

Even more questions can be asked about consistency effects in classification tasks. How does irrelevant information affect the formation of mental codes? Does it influence the formation of stimulus codes, response codes, or motor codes? Does irrelevant information always have the same kind of influence on mental codes, or does it depend on task conditions?

Today, most models of consistency effects share a few basic assumptions about mental codes and how they behave during classification tasks (see, e.g., Barber & O’Leary, 1997; Kornblum et al., 1990; O’Leary & Barber, 1993; Lu & Proctor, 1995; Prinz, 1990; Proctor, Reeve, & van Zandt, 1992; Umilta & Nicoletti, 1990; Wallace, 1971). For example, they assume that irrelevant stimulus codes form automatically, that codes for different stimulus features are formed by multiple parallel identification processes, that mental codes are abstract representations, and that mental codes form gradually over time.

However, they also disagree on a few key assumptions about mental processing. For example, different models often disagree about where selective inhibition happens. They can also disagree about whether the formation of response codes from stimulus information is continuous or happens only in discrete, stage-like chunks. Finally, they can disagree about whether irrelevant stimulus information influences the formation of stimulus codes, the formation of response codes, or both.

This last question is the key issue that differentiates the Dimensional Overlap Model from other models of consistency effects. Most models of consistency effects assume that irrelevant stimulus information influences the formation of response codes, whereas the Dimensional Overlap Model assumes that the influence of the irrelevant stimulus depends on what the irrelevant stimulus dimension overlaps with.

 

Where does selective inhibition happen?

One question about which there is little consensus among connectionist models of reaction time is the locus of inhibition between alternatives.  Are inhibitory mechanisms active within each cognitive process, or between processes?

According to the within-process inhibition view, the activation of one mental code in a particular process (i.e. within a module) will inhibit all of the other mental codes in that process (i.e. within the same module).  According to the between-process inhibition view, activation of a mental code in one process (e.g. stimulus identification) will directly inhibit non-corresponding codes in the following process (e.g. response selection).

Most of the earliest connectionist network models, including McClelland’s (1979) Cascade model, were feed-forward networks (see Rumelhart, McClelland, et al., 1986).  This meant that any given unit could only supply input to units in later processes, and there were no connections at all between units within the same module.  As a result, these network models naturally had to implement between-process inhibition: each stimulus unit had a positive association with one or more response units, and negative associations with all of the rest of the response units.

This pattern of between-process inhibition also arises from many “learning algorithms”: processes through which the association strengths in a network can change from trial to trial, “learning” based on experience.  For example, the model of the Stroop effect by Cohen, Dunbar and McClelland (1990) assigns association strengths based on a standard learning algorithm (known as backpropagation) that “trains” the network on the process of reading (giving color word responses to word inputs) and on the process of color-naming (giving color word responses to color inputs).  The result of this algorithm is that, for example, the stimulus unit for the word “red” has a positive association with the response unit for the word “red,” and also a negative association with the response unit for the word “green”.  Similarly, the stimulus unit for the word “green” has a positive association with the response unit for the word “green”, and also a negative association with the response unit for the word “red”.  The result is between-process inhibition: negative associations connect stimulus units to response units.

McClelland and Rumelhart (1981), however, introduced a framework with a different assumption about connections between units.  They describe an interactive activation model with a feature module (with units representing mental codes for individual visual features), a letter module (with units representing mental codes for individual letters), and a word module (with units representing mental codes for whole words). They suggest that units are connected with positive or negative associations based on their consistency: that is, because the existence of the word “THE” is consistent with the existence of the letter “T” in the initial position, the initial “T” letter unit and the “THE” word unit have a positive association; on the other hand, because the existence of the word “ARE” is inconsistent with the existence of the letter “T” in the initial position, the initial “T” letter unit and the “ARE” word unit have a negative association.

One result of this kind of assumption is that all of the units within the same module are mutually inhibitory: the existence of a letter “T” in the initial position of a word is inconsistent with there being any other letter in that initial position, the existence of the word “ARE” is inconsistent with there being any other word, and so on.  As a result, this model implements within-process inhibition: activation of any unit within a module (process) inhibits activation of all of the other units in the same module.  However, this model also implements between-process inhibition, because the same rules apply to connections between units in different modules (e.g. letter units and word units).
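The difference between the two connectivity schemes can be sketched with a toy network: two stimulus units (the words “red” and “green”) feeding two response units. This is only an illustration of the wiring patterns described above, not a reimplementation of any of the cited models; the weight values, rates, and output-clamping rule are made-up choices.

```python
import numpy as np

# Toy network: stimulus units for the words "red" and "green" feed response
# units for saying "red" and "green".  All values are illustrative.

stim = np.array([1.0, 0.0])            # the word "red" is presented
dt, steps = 0.1, 30

# --- Between-process inhibition ---------------------------------------------
# Each stimulus unit excites its corresponding response unit and directly
# inhibits the non-corresponding one (negative cross-module connections).
W_between = np.array([[ 1.0, -1.0],    # from stimulus "red"   to responses (red, green)
                      [-1.0,  1.0]])   # from stimulus "green" to responses (red, green)
resp_between = np.zeros(2)
for _ in range(steps):
    resp_between += dt * (stim @ W_between - resp_between)   # leaky accumulation

# --- Within-process inhibition ----------------------------------------------
# Cross-module connections are excitatory only; the response units inhibit
# each other inside their own module (lateral inhibition).  Outputs are
# clamped to [0, 1], as several of the models discussed here do.
W_excite  = np.eye(2)                  # stimulus -> corresponding response
W_lateral = np.array([[ 0.0, -1.0],
                      [-1.0,  0.0]])   # response <-> response
resp_within = np.zeros(2)
for _ in range(steps):
    output = np.clip(resp_within, 0.0, 1.0)
    net = stim @ W_excite + output @ W_lateral
    resp_within += dt * (net - resp_within)

print("between-process inhibition:", resp_between.round(2))
print("within-process inhibition: ", resp_within.round(2))
```

In both versions the correct response unit ends up more active than its competitor; what differs is whether the suppression of the competitor comes directly from the stimulus module or from its neighbor within the response module.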

This set of assumptions is used as the basis for the Stroop model developed by Phaf, van der Heijden, and Hudson (1990).  In this model, color units (representing alternative possible color codes) inhibit both each other (within-process inhibition) and inconsistent color-word response units (between-process inhibition).  Phaf et al. (1990) further argue in support of within-process inhibition by citing neurophysiological evidence: they claim that the neurophysiological phenomenon of “lateral inhibition” (inhibition among all of the nearby neurons in the same layer of cortical tissue) can be thought of as evidence that the brain implements within-process inhibition.

It was soon noted, of course, that having both of these kinds of inhibition is computationally redundant.  Several years later, McClelland (1993) proposed a normative framework for connectionist network models of performance called the Graded Random And Interactive Network (GRAIN) framework, in which he suggested that models should use only inhibitory connections between units in the same module, and only excitatory connections between units in different modules.  This effectively called for models of performance to constrain themselves to implementing within-module inhibition.  Most of the motivation for this move was computational, not psychological, and almost every “advantage” described by McClelland (1993, pp. 659-660) for within-process inhibition could also be found in an appropriately structured model with between-process inhibition.

Many models of consistency effects, nonetheless, have followed this normative suggestion (e.g. Barber & O’Leary, 1997; Cohen & Huston, 1994; Cohen, Servan-Schreiber, & McClelland, 1992; O’Leary & Barber, 1993; Zhang & Kornblum, 1998; Zhang, Zhang, & Kornblum, 1999; Zorzi & Umilta, 1995).  Zorzi and Umilta (1995), however, did explore the performance of an alternative version of their model that implemented between-process inhibition.  They found that both versions could account for performance equally well, and that in the end the only motivation for preferring within-process inhibition was theoretical consistency: everyone else was doing it.

Kornblum et al. (1999) pointed out that within-process inhibition can lead to explosive inhibitory feedback effects if activation values are allowed to go below zero (see Kornblum et al., 1999, p. 706).  Zorzi and Umilta (1995) and Cohen and Huston (1994) dealt with this by constraining the output of units to lie between 0 and 1.  Instead of adding this additional constraint on the dynamics of processing, Kornblum et al. (1999) implemented between-process inhibition.  By using this alternative inhibition mechanism, and imposing fewer constraints on processing (because output could be either positive or negative with no ill consequences to the model), they were still easily able to account for performance in Simon tasks, Stroop-like tasks, and their factorial combinations.
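A small numerical sketch makes the feedback problem concrete. Two mutually inhibitory units in the same module are simulated twice, once with raw (possibly negative) outputs and once with outputs clamped to [0, 1]; the weights, input, and starting values are invented for illustration and are not taken from any of the models above.

```python
# Two mutually inhibitory units in the same module; illustrative parameters only.

def simulate(clamp_output, steps=60, dt=0.1, w=2.0, ext=1.0):
    a = [0.0, -0.1]                      # unit 1 starts slightly below zero
    for _ in range(steps):
        if clamp_output:
            out = [min(max(x, 0.0), 1.0) for x in a]   # outputs bounded in [0, 1]
        else:
            out = list(a)                               # raw outputs, can be negative
        # unit 0 receives external input; each unit receives -w times the
        # other unit's output, plus passive decay toward zero
        a = [a[0] + dt * (ext - w * out[1] - a[0]),
             a[1] + dt * (      - w * out[0] - a[1])]
    return [round(x, 2) for x in a]

print("raw outputs:    ", simulate(clamp_output=False))  # activations run away
print("clamped outputs:", simulate(clamp_output=True))   # activations settle
```

With raw outputs, the negative activation of unit 1 becomes excitatory input to unit 0, which in turn drives unit 1 further below zero, and the activations grow without bound; clamping the output (or moving the inhibition between modules, as Kornblum et al. did) breaks this loop.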

 

 

Debate: Explaining the reverse-Simon effect

Placeholder

 

Debate: Processing Stages or Continuous Activation

Do stimulus and response processing happen in stages? Or does information get continuously transmitted from stimulus-processing to response-processing over time?

This is an important question that models of performance heatedly disagree about. According to the continuous transfer view, any activation that accumulates for a stimulus code is immediately used as input to any associated response codes, which in turn has an immediate influence on response code activation.  According to the discrete transfer view, on the other hand, activation of a stimulus code must accumulate to some critical level, indicating that stimulus identification has been completed with some degree of certainty, before a signal is sent to the response selection process and activation of the response codes can accumulate.
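The two assumptions can be sketched with a single stimulus unit feeding a single response unit; the growth rates, criterion, and time step below are arbitrary illustrative values, not parameters of any published model.

```python
# Sketch of continuous vs. discrete transfer with one stimulus unit feeding
# one response unit.  Rates, criteria, and time step are illustrative only.

def activations(transfer, t_end=0.5, dt=0.01, rate=3.0, stim_criterion=0.8):
    stim, resp, t = 0.0, 0.0, 0.0
    while t < t_end:
        t += dt
        stim += dt * rate * (1.0 - stim)             # stimulus code forms gradually
        if transfer == "continuous":
            drive = stim                              # partial activation flows on at once
        else:                                         # "discrete"
            drive = 1.0 if stim >= stim_criterion else 0.0  # wait for identification
        resp += dt * rate * (drive - resp)            # response code accumulates
    return round(stim, 2), round(resp, 2)

# Before the stimulus code is fully formed, only the continuous version has
# begun to activate the response code.
print("continuous:", activations("continuous"))   # stimulus partial, response partial
print("discrete:  ", activations("discrete"))     # stimulus partial, response still 0
```

Before the stimulus code has finished forming, the continuous version already shows partial response activation while the discrete version shows none; this is exactly the kind of difference the experiments discussed below try to detect.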

Historically, popularity has swung back and forth between these alternatives.  Over thirty years ago, Sternberg (1969) proposed a method of interpreting reaction time data called the additive factors method (AFM). This framework assumes that information is transferred discretely between processes; given this assumption, if the effect of manipulating some factor A changes (i.e. becomes larger or smaller) as one manipulates another factor B, then these two manipulations must influence the same underlying process.  The logic of this method was very compelling, and was used with much success for interpreting empirical results (see Sanders, 1980, 1990; Sternberg, 1971); so, people were happy, for a while, to assume discrete information transfer.
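To illustrate the additive-factors logic with made-up numbers, suppose mean reaction times are collected in a 2 × 2 design crossing a factor A (say, stimulus contrast) with a factor B (say, S-R mapping); the factor names and millisecond values below are invented purely for the example.

```python
# Hypothetical mean RTs (ms) for a 2 x 2 design; all values are invented.
rt = {("high_contrast", "compatible"):   400,
      ("high_contrast", "incompatible"): 450,
      ("low_contrast",  "compatible"):   460,
      ("low_contrast",  "incompatible"): 510}

# Effect of factor B (mapping) at each level of factor A (contrast):
effect_b_high = rt[("high_contrast", "incompatible")] - rt[("high_contrast", "compatible")]
effect_b_low  = rt[("low_contrast",  "incompatible")] - rt[("low_contrast",  "compatible")]
interaction   = effect_b_low - effect_b_high

# Under the AFM, a zero interaction (purely additive effects) is read as the two
# factors influencing different stages; a nonzero interaction is read as the two
# factors influencing at least one shared stage.
verdict = "different stages (additive)" if interaction == 0 else "a shared stage (interactive)"
print(f"mapping effect: {effect_b_high} ms vs {effect_b_low} ms; "
      f"interaction = {interaction} ms -> {verdict}")
```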

A decade later, however, McClelland (1979) developed the Cascade model, in which information is transferred continuously from stimulus to response processes, and showed that this model could account for many of the same kinds of empirical data that AFM could account for, but also allowed for different inferences to be drawn about what processes are influenced by experimental manipulations.  Around the same time, a large number of other criticisms of AFM also arose, as well as new models favoring the assumption of continuous information transfer (e.g. Eriksen & Schultz, 1979; Taylor, 1976; Wickelgren, 1977).  The tide had turned toward continuous models.

Approximately a decade after that, Miller (1988) brought extensive criticism against this shift, saying sharply: “We consider the en masse abandonment of discrete models in favor of continuous ones to be wholly unjustified given the evidence currently available, and thus scientifically premature.”  He analyzed much of the empirical data that had been adduced in support of continuous models, and found that it could all be accounted for by models that did not assume continuous information transfer.  Most often, the empirical evidence spoke against models that consisted of only a single unitary stimulus identification process, but could easily be accounted for by models that assumed the formation of multiple parallel stimulus codes.  Miller showed that his Asynchronous Discrete Coding (ADC) model, in which multiple stimulus identification processes each give separate and independent discrete outputs to response processing, is able to account for much of the critical data (see also Miller 1982a, 1982b, 1983).

Increasingly sophisticated methods have been proposed to empirically test whether information transfer is discrete or continuous (Roberts & Sternberg, 1993), and increasingly complex models have been proposed that manipulate assumptions about different ways in which information transmission can be discrete, continuous, or varying from one to the other on a continuum (e.g. Liu, 1996; Miller, 1993).  No overall agreement, however, has been reached in this debate.

Finally, it should be noted that the terms “discrete” and “continuous” can be, and have been, used in other ways when talking about models of mental processes.  Miller (1988) distinguishes between three different ways in which a model can be “discrete” or “continuous”.  First, it may have discrete or continuous representation: mental representations may either vary freely across a continuum, or may be restricted to a limited number of mental codes.  Second, it may have discrete or continuous transformation: mental representations may vary continuously in the degree to which they are formed or activated, or may have only a limited number of states they can be in (e.g. formed or not, prepared or not).  Third, it may have discrete or continuous transfer of information, as has been discussed so far.  Most of the empirical tests and theoretical debates have been focused on the question of information transmission, although there has been some work in trying to empirically establish whether the transformation of information within response selection is discrete or continuous (e.g. Meyer, Irwin, Osman, & Kounios, 1988; Meyer, Yantis, Osman, & Smith, 1985).

Connectionist models of performance and consistency effects all have discrete representation (a finite number of units, representing discrete mental codes) and continuous transformation (continuous accumulation of activation within each unit).  However, although most of these models assume continuous information transfer (e.g. Barber & O’Leary, 1997; Cohen, Dunbar & McClelland, 1990; Cohen & Huston, 1994; Cohen, Servan-Schreiber, & McClelland, 1992; O’Leary & Barber, 1993; Phaf, van der Heijden, & Hudson, 1990; Servan-Schreiber, 1990; Zhang & Kornblum, 1998), at least one of these models explicitly assumes discrete transfer (Kornblum, et al., 1999), and another implies it (Zorzi and Umilta, 1995; this case will be discussed below).

This debate is important when evaluating computational implementations of generic dimensional overlap and response selection models.  The way a model implements the transfer of information could seriously affect the predictions that it makes, and therefore any comparison that is made between it and other models.

 

Debate: What is the locus of the S-S Overlap Effect?

Placeholder.

 

Inverted-U activation of irrelevant stimuli

Most connectionist models of reaction time that try to account for the effects of irrelevant stimuli capture the influence of attentional focus by having the activation of the irrelevant stimulus unit increase at first, and then decrease again, producing a non-monotonic, inverted U-shaped activation function over time (Kornblum, et al., 1999; Lu, 1997).  This activation curve can be implemented in a computational model in a number of ways.  For example, if the input to the irrelevant stimulus unit is turned off or decreased shortly after it is initially turned on, activation would follow a rising and falling course over time (e.g. Kornblum et al., 1999).  Alternatively, an actual “attentional inhibition” mechanism could be made explicit, in which other units become activated in response to irrelevant stimulus unit activation, and these units subsequently inhibit activation in the irrelevant stimulus units (e.g. Houghton & Tipper, 1994).  Regardless of the mechanism, the basic assumption of these models is that irrelevant stimulus unit activation increases and then decreases again over time.
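The first of these mechanisms (gating the input off shortly after onset) is easy to sketch with a single leaky unit; the onset duration, rate, and time step below are arbitrary illustrative values.

```python
# Sketch of one way to produce the inverted-U time course: drive the irrelevant
# stimulus unit with external input for a short time, then turn the input off
# and let activation decay.  All values are illustrative.

dt, rate, input_off_time = 0.01, 4.0, 0.25
activation, trace = 0.0, []
for step in range(100):                       # simulate 1.0 time unit
    t = step * dt
    ext = 1.0 if t < input_off_time else 0.0  # external input gated off after onset
    activation += dt * rate * (ext - activation)
    trace.append(activation)

peak = max(trace)
print("peak activation %.2f at t = %.2f" % (peak, trace.index(peak) * dt))
print("activation at end of trial: %.2f" % trace[-1])  # has fallen back toward 0
```

Activation rises while the external input is on and decays back toward zero once the input is gated off, tracing the inverted-U time course described above.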

This characteristic of irrelevant stimulus unit activation has been inferred primarily from empirical results that have used stimulus onset asynchrony (SOA), or the relative timing of the relevant and irrelevant stimulus characteristics, to measure how the size of consistency effects changes over time.  All consistency effects seem to follow a non-monotonic, inverted U-shaped time-course, with different effects differing only in the shape and peak of this curve (see Kornblum et al., 1999; Lu, 1997).  Moreover, in connectionist network models, the size of a consistency effect reflects the amount of activation in the irrelevant stimulus unit during processing.

It should be noted that most early models actually did not include this assumption (Cohen, Dunbar, & McClelland, 1990; Cohen & Huston, 1994; Phaf, et al., 1990).  However, these models also do a poor job of accounting for the time-course of consistency effects, and are unable to account for the decrease in the size of consistency effects at long SOA values (Cohen & Huston, 1994).  More recent models have either included this assumption from the outset (Kornblum, et al., 1999; Zorzi & Umilta, 1995), or incorporated the assumption at some point later on (Barber & O’Leary, 1997; Zhang, Zhang, & Kornblum, 1999).

 

The link from stimulus to response

In connectionist network models of reaction time, stimulus codes are transformed into response codes through links that are set up between stimulus and response units.  These connections are thought of as associative links between concepts, stored in memory.  There can be both long-term memory (LTM) associations, and short-term memory (STM) associations (Barber & O’Leary, 1997).

LTM associations form between two units through repeated exposure over an extended period of time, such as the link between the stimulus unit for a written word and the response unit for saying that word, or the link between a stimulus unit for a particular location and a response unit for acting toward that location (since we often respond toward stimuli in our environment).

STM associations are temporary links formed between stimulus and response units based on a particular task at hand.  For example, if you are told to press a left key when you see a blue stimulus and to press a right key when you see a green stimulus, then a STM association would form between the “blue” stimulus code and the “left” response code, and between the “green” stimulus code and the “right” response code.  In the language of connectionist network models, there would be a temporary link between the “blue” stimulus unit and the “left” response unit.  This means that activation in the “blue” stimulus unit would be used as input to the “left” response unit, causing activation in the “left” response unit to increase.

According to these models, both controlled (intentional, deliberate, conscious) and automatic (unintentional, reflexive, unconscious) translation of a stimulus code into a response code happens through the same mechanism: activation in a stimulus unit is transformed into output, passed along an associative link, and used as input to a response unit, causing that response unit to accumulate activation.  STM associations implement the controlled translation from stimulus to response that is determined by the specific instructions of the task at hand.  Because STM associations encode the instructions of the task, they always link a stimulus to the correct corresponding response.  LTM associations, on the other hand, are based on previous experience, rather than the task at hand.  As a result, STM associations have also been called “controlled lines,” while LTM associations have been called “automatic lines” (Kornblum et al., 1999).

It is easy to see how the combination of automatic and controlled lines can allow these models to account for the S-R consistency effect (regardless of whether one is implementing a dimensional overlap model or a response selection model, because the mechanism accounting for S-R consistency is the same in both).  Consider, for example,  how information processing might proceed in a typical Simon task: a blue stimulus appears on the left side, causing activation to accumulate in the blue relevant stimulus unit and the left irrelevant stimulus unit; if the instructions assign a left key-press to blue stimuli, then activation from the blue relevant stimulus is passed along a STM association to the left response unit; activation of the left irrelevant stimulus unit is passed along a LTM association to the left response unit; because the response unit is getting input from both stimulus units, it has a high input, and activation accumulates quickly, reaching the threshold for completion in a short amount of time.  On the other hand, if the blue stimulus had appeared on the right side, the right irrelevant stimulus unit would have activated the right response unit through the LTM association, and input to the (correct) left response unit would have been lower.  Lower input, of course, means activation accumulates more slowly, and the decision threshold is reached later.
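The consistent and inconsistent trials described above can be sketched as a small simulation. The weights, rates, and threshold below are illustrative values only, and inhibition between the response alternatives is omitted for brevity, so this is a caricature of the mechanism rather than any of the published models.

```python
# Sketch of the consistent vs. inconsistent Simon trial described above.
# Units: relevant stimulus (blue), irrelevant stimulus (left or right position),
# and responses (left, right).  Weights, rates, and threshold are illustrative.

W_stm = {"blue": {"left": 1.0}}                                    # controlled line (instructions)
W_ltm = {"pos_left": {"left": 0.5}, "pos_right": {"right": 0.5}}   # automatic lines (LTM)

def simulate(position, dt=0.01, rate=3.0, threshold=0.75):
    stim = {"blue": 0.0, "pos_left": 0.0, "pos_right": 0.0}
    resp = {"left": 0.0, "right": 0.0}
    t = 0.0
    while resp["left"] < threshold:                 # "left" is the correct response
        t += dt
        # stimulus codes for the color and the position form gradually
        stim["blue"] += dt * rate * (1.0 - stim["blue"])
        stim[position] += dt * rate * (1.0 - stim[position])
        # response units sum input arriving over both STM and LTM associations
        net = {"left": 0.0, "right": 0.0}
        for weights in (W_stm, W_ltm):
            for s_unit, links in weights.items():
                for r_unit, w in links.items():
                    net[r_unit] += w * stim[s_unit]
        for r_unit in resp:
            resp[r_unit] += dt * rate * (net[r_unit] - resp[r_unit])
    return round(t, 3)

print("consistent (blue on the left):   ", simulate("pos_left"))
print("inconsistent (blue on the right):", simulate("pos_right"))
```

The consistent trial reaches the threshold sooner because the correct response unit receives input over both the controlled (STM) line and the automatic (LTM) line, whereas on the inconsistent trial the automatic line feeds the competing response instead.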

 

Mental codes form gradually

One of the basic assumptions shared by all connectionist models of reaction time is that mental codes form gradually, through the  accumulation of evidence over time based on input information.  For example, when a blue stimulus appears, information from the sensory signals gradually causes evidence to accumulate in favor of the stimulus code representing the color blue.  In the language of connectionist networks, the activation of the stimulus unit for the color blue gradually increases over time. A mental code can be thought of as “completely formed” once activation in the appropriate unit has reached some criterion level.  Activation  in a stimulus unit can trigger the accumulation of activation in a response unit either right away, or after a decision threshold is met, depending on whether the model assumes stages or continuous processing. The ultimate speed of performance is determined by how long it takes for activation of the motor units to reach some “decision criterion,” indicating that the motor codes have been fully formed.
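A minimal sketch of this assumption, with noise added to the accumulation to reflect the statistical-decision roots described in the next paragraphs: two stimulus units (blue and green) accumulate evidence until one of them reaches a criterion. The drift rates, noise level, and criterion are invented for illustration.

```python
import random

# Minimal sketch of gradual code formation as a race between two stimulus
# units (blue vs. green) accumulating noisy evidence toward a criterion.
# Drift rates, noise level, and criterion are illustrative values.

def identify(dt=0.001, drift_blue=1.0, drift_green=0.2, noise=0.3, criterion=1.0):
    act = {"blue": 0.0, "green": 0.0}
    t = 0.0
    while max(act.values()) < criterion:
        t += dt
        # the presented (blue) stimulus supports its own code more strongly
        act["blue"] += drift_blue * dt + random.gauss(0, noise) * dt ** 0.5
        act["green"] += drift_green * dt + random.gauss(0, noise) * dt ** 0.5
    winner = max(act, key=act.get)
    return winner, round(t, 3)

random.seed(1)
print(identify())   # usually ("blue", ~1 time unit); occasionally the wrong code wins
```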

This idea evolved out of the combination of three ideas.  Signal detection theory (Green & Swets, 1966; Swets, 1964; Tanner & Swets, 1954) suggested that detecting a particular stimulus (i.e. forming a particular stimulus code) is a statistical decision: the signals that you get from your sensory system are subject to variability, so although a particular stimulus characteristic on average produces a particular sensory signal, there will be times when the signal appears without the stimulus being there, and times when the signal fails to appear even though the stimulus is there.  So you have to set a decision threshold on signal strength that makes you as likely as possible to detect the stimulus when it is there, while keeping you unlikely to report it when it is not.

Stimulus sampling theory (Estes, 1950, 1955) introduced the idea that perception involves repeated sampling of a stimulus over time, allowing signal detection theory to be extended over time (see, e.g., Pike, 1973).  The statistical decision tool of sequential sampling with optional stopping (Wald, 1947) determines how many times a signal has to be sampled in order to make a confident decision about its value, and is used whenever you do not want to sample more data than necessary.  According to this method, each additional sample of information modifies your cumulative level of confidence, allowing you to continue sampling information until your confidence is high enough to meet some decision criterion, at which point you stop sampling.
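Here is a minimal sketch of sequential sampling with optional stopping, under the assumption of Gaussian samples (a choice made here for illustration, not something specified in the text): each noisy sample updates a cumulative log-likelihood ratio for “signal present” versus “signal absent,” and sampling stops when that total crosses an upper or lower bound.

```python
import random

# Sketch of sequential sampling with optional stopping (Wald's idea described
# above), assuming Gaussian samples.  All parameter values are illustrative.

def sequential_test(signal_present, mean_signal=0.5, sd=1.0, bound=2.2):
    log_lr, n = 0.0, 0
    while abs(log_lr) < bound:
        n += 1
        x = random.gauss(mean_signal if signal_present else 0.0, sd)
        # log-likelihood ratio of this sample under N(mean_signal, sd) vs. N(0, sd)
        log_lr += (x * mean_signal - mean_signal ** 2 / 2) / sd ** 2
    decision = "present" if log_lr >= bound else "absent"
    return decision, n                     # decision and number of samples used

random.seed(2)
print(sequential_test(signal_present=True))
print(sequential_test(signal_present=False))
```

The number of samples taken before stopping plays the same role as the gradual accumulation time in the reaction time models described here.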

This idea was immediately incorporated into a large number of psychological models of performance in CRT tasks (e.g. Audley, 1960; Audley & Pike, 1965; LaBerge, 1962; McGill, 1963, 1967; Stone, 1960; Vickers, 1970).  Although the details of these models differ, the basic premise is the same: evidence for each stimulus code accumulates over time, due to repeated sampling of sensory information, until a decision criterion of some sort is reached, indicating that the code has been fully formed and the stimulus has therefore been fully identified (see Luce, 1986, for more details).  Currently, two major types of models based on this premise are being pursued: diffusion models (Ratcliff, 1978, 1980, 1981, 1988) and the connectionist models discussed here.

Most connectionist models use the same equation to determine how activation changes over time, drawing on the first connectionist model of performance, McClelland’s (1979) Cascade model.  McClelland proposed that units be understood as first-order linear integrators, so that their activation at any given point in time is a time-averaging function of their input.  When units like this are given a constant input, their activation will asymptotically approach that input value according to a “loading curve”: approaching the input level at a rate proportional to how far away it is from the input.  This function actually first appeared in a psychological model proposed by Grice (1972, 1977; Grice, Nullmeyer, & Spiker, 1982), although he rarely gets credit for this contribution (see Luce, 1986).
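Under this assumption, a unit with constant input I and rate parameter k changes according to da/dt = k(I − a), so activation follows the loading curve a(t) = I(1 − e^(−kt)). The short sketch below iterates this update and compares it with the closed form; the values of I, k, and the time step are arbitrary.

```python
import math

# Sketch of a unit as a first-order linear integrator: with constant input I
# and rate k, activation follows the loading curve a(t) = I * (1 - exp(-k*t)).
# The values of I, k, and dt are illustrative.

I, k, dt = 1.0, 2.0, 0.01
a = 0.0
for step in range(1, 101):                    # simulate 1.0 time unit
    a += dt * k * (I - a)                     # change proportional to distance from the input
    if step % 25 == 0:
        t = step * dt
        closed_form = I * (1.0 - math.exp(-k * t))
        print(f"t={t:.2f}  iterated={a:.3f}  closed form={closed_form:.3f}")
```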

 

Connectionist models of consistency effects

Before the DO2000 model, there were already a number of computational models of consistency effects.  All of these models can be classified as specific instances of either the generic response selection model or the generic dimensional overlap model.  Interestingly, the response selection models are each designed to account for performance in only one kind of task: Cohen (Cohen, Dunbar & McClelland, 1990; Cohen & Huston, 1994) and Phaf, van der Heijden, and Hudson (1990) have described response selection models of performance in the Stroop task;  Servan-Schreiber (1990; Cohen, Servan-Schreiber, & McClelland, 1992) has described a response selection model of performance in the Eriksen task; and Zorzi and Umilta (1995) have described a model of performance in the Simon task (which could be classified as either a dimensional overlap model or a response selection model, since both models agree on their explanation of the S-R consistency effect).

The three models that have specifically been designed as general models of consistency effects, on the other hand, are all dimensional overlap models: O’Leary and Barber (1993; Barber & O’Leary, 1997) have described a dimensional overlap model of performance in Simon and Stroop tasks, and their variants; and both Zhang and Kornblum (1998) and Kornblum et al. (1999) have described dimensional overlap models of performance in consistency tasks in general, including Eriksen, Simon, Stroop and Stroop-like tasks, and their variants and factorial combinations.

All of these models also have a common computational heritage, and therefore share a number of common assumptions, as well as a common descriptive language.  They can generally be classified as connectionist network models (Quinlan, 1991; Rumelhart, McClelland, et al., 1986).  Connectionist models consist of a network of interconnected processing units, where each unit is very simple, usually involving a single variable (called the unit’s “activation”) that changes as a function of input to the unit, and determines the output of the unit to be transferred to other connected units.

More specifically, these models are localist connectionist models (see, e.g. Grainger & Jacobs, 1998; Page, in press).  This means that each unit in the network represents a mental code.  In models of performance in classification tasks, the units in the network can be divided into three groups or modules: a relevant stimulus module, containing units corresponding to each of the elements of the relevant stimulus set; an irrelevant stimulus module, containing units corresponding to each of the elements in the irrelevant stimulus set; and a response module, containing units corresponding to each of the elements in the response set.  Some models also include modules of units representing executive cognitive functions, such as “task demand units,” which represent mental codes that specify which of the two stimulus sets is relevant (e.g. Cohen & Huston, 1994).
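For concreteness, here is a minimal sketch of this module/unit organization for a Simon-like task in which color is relevant and position is irrelevant; the module names, unit names, and activation values are illustrative only.

```python
# Sketch of the module/unit structure described above for a Simon-like task.
# Each unit is a named activation value; names and values are illustrative.

network = {
    "relevant_stimulus":   {"blue": 0.0, "green": 0.0},     # one unit per relevant element
    "irrelevant_stimulus": {"left": 0.0, "right": 0.0},     # one unit per irrelevant element
    "response":            {"left_key": 0.0, "right_key": 0.0},
    # some models add executive units, e.g. task-demand units that mark
    # which stimulus dimension is currently relevant
    "task_demand":         {"attend_color": 1.0, "attend_position": 0.0},
}

for module, units in network.items():
    print(f"{module}: {list(units)}")
```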

 