
Inverted-U activation of irrelevant stimuli

Most connectionist models of reaction time that account for the effects of irrelevant stimuli model the influence of attentional focus by having the activation of the irrelevant stimulus unit first increase and then decrease, producing a non-monotonic, inverted U-shaped activation function over time (Kornblum et al., 1999; Lu, 1997).  This activation curve can be implemented in a computational model in several ways.  For example, if the input to the irrelevant stimulus unit is turned off or reduced shortly after it is initially turned on, activation will rise and then fall over time (e.g., Kornblum et al., 1999).  Alternatively, an explicit “attentional inhibition” mechanism can be built in, whereby other units become activated in response to irrelevant stimulus unit activation and then inhibit the irrelevant stimulus units (e.g., Houghton & Tipper, 1994).  Regardless of the mechanism, the basic assumption of these models is that irrelevant stimulus unit activation increases and then decreases over time.
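The first implementation above (transient input to a leaky unit) can be sketched in a few lines. This is a minimal illustration, not a reproduction of any published model; the function name, parameters (`t_off`, `tau`), and values are hypothetical choices made here for clarity.

```python
def simulate_irrelevant_activation(t_off=10, tau=5.0, steps=60, dt=1.0):
    """Leaky-integrator unit whose external input is on for the first
    t_off steps and then switched off.  Activation rises toward the
    input while it is present, then decays back toward zero, tracing
    an inverted U-shaped curve over time."""
    a = 0.0          # current activation of the irrelevant stimulus unit
    trace = []       # activation at each time step
    for step in range(steps):
        inp = 1.0 if step < t_off else 0.0   # input turned off after t_off
        a += (dt / tau) * (inp - a)          # da/dt = (input - a) / tau
        trace.append(a)
    return trace

trace = simulate_irrelevant_activation()
i_peak = trace.index(max(trace))   # activation peaks near the input offset
```

The inhibition-based alternative would produce a qualitatively similar trace, but the falling limb would be driven by a second population of units rather than by input offset.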

This characteristic of irrelevant stimulus unit activation has been inferred primarily from empirical studies that manipulate stimulus onset asynchrony (SOA), the relative timing of the relevant and irrelevant stimulus components, to measure how the size of consistency effects changes over time.  All consistency effects seem to follow a non-monotonic, inverted U-shaped time course, with different effects differing only in the shape and peak of this curve (see Kornblum et al., 1999; Lu, 1997).  Because, in connectionist network models, the size of a consistency effect reflects the amount of activation in the irrelevant stimulus unit during processing, this inverted U-shaped time course implies a similarly shaped activation function.

It should be noted that most early models did not include this assumption (Cohen, Dunbar, & McClelland, 1990; Cohen & Huston, 1994; Phaf et al., 1990).  However, these models also do a poor job of accounting for the time course of consistency effects, and in particular cannot account for the decrease in effect size at long SOAs (Cohen & Huston, 1994).  More recent models have either included this assumption from the outset (Kornblum et al., 1999; Zorzi & Umiltà, 1995) or incorporated it later (Barber & O’Leary, 1997; Zhang, Zhang, & Kornblum, 1999).
