Privileged access to action for objects relative to words

Psychonomic Bulletin & Review, Jun 2002

We compared action (pour or twist?) and contextual/semantic (found in kitchen?) decisions made to pictures of objects, nonobjects, and words. Although there was no advantage for objects over words in contextual/semantic decisions, there was an advantage for objects over words and nonobjects in action decisions. For objects, both action and contextual/semantic decisions were faster than naming; for words, the opposite occurred. These results extend the early results of Potter and Faulconer (1975) that there is privileged access to semantic memory for objects relative to that for words and privileged access to phonology for words. Our data suggest that, for objects, there is privileged access to action knowledge rather than to all forms of semantic knowledge and that this is contingent on learned associations between objects and actions.


HANNA CHAINAY, INSERM, Paris, France, and GLYN W. HUMPHREYS, University of Birmingham, Birmingham, England

This work was supported by grants from the Medical Research Council, the Wellcome Trust, and the Fyssen Foundation. Correspondence should be addressed to H. Chainay, Neuropsychologie et Neurobiologie du Vieillissement Cérébral, INSERM, l'Unité 324, 2 ter, rue d'Alésia, 75014 Paris, France.

In a now classic study, Potter and Faulconer (1975) examined access to semantic knowledge and to phonology from pictures of objects and from words. They had participants either name these stimuli or make verification decisions to category names (e.g., tools, clothing, etc.). They found that words were named substantially faster than objects. In contrast, there was a smaller but nevertheless reliable advantage for category decisions to objects when compared with words. They suggested that words gained privileged access to phonology. For example, words may have direct connections between their orthographic and phonological representations, in addition to any connections between word meaning (semantic knowledge) and phonology (see, e.g., Coltheart, Curtis, Atkins, & Haller, 1993; Plaut, McClelland, Seidenberg, & Patterson, 1996). Objects, on the other hand, may only be named by accessing stored knowledge of their meaning (see, e.g., Humphreys, Riddoch, & Quinlan, 1988; Levelt, Roelofs, & Meyer, 1999, for explicit models of object naming). Words can be named faster than objects by dint of the direct naming route. On the other hand, the faster category decisions for objects than for words suggest that there is privileged access to semantic memory for objects relative to that for words.

This conclusion has formed the basis of contemporary accounts of access to semantic memory. For instance, the OUCH model of Caramazza and colleagues (Caramazza, Hillis, Rapp, & Romani, 1990) holds that objects access semantic knowledge faster than words do because, uniquely for objects, perceptual features correlate with their function. This differential speed of access to semantic and name information can also be used to explain asymmetrical interference effects between pictures and words (e.g., Glaser, 1992; Glaser & Düngelhoff, 1984).

In this study, we question the generality of this conclusion. In particular, we ask whether objects gain faster access to all forms of semantic knowledge or whether there is faster access only to restricted forms of knowledge, such as knowledge of action rather than knowledge of semantic context.
Neuropsychological evidence suggests that it is possible to gain access to knowledge about object usage even when there is impaired access to associative, contextual knowledge about objects. Optic aphasic patients show impaired naming of visually presented objects along with a relatively spared ability to gesture how objects can be used (e.g., Lhermitte & Beauvois, 1973). Such patients can also have a deficit in accessing semantic knowledge from vision (e.g., Hillis & Caramazza, 1995; Riddoch & Humphreys, 1987). For example, they can be impaired in judging whether associatively related objects belong together, though they may be able to show how the individual objects can be used. Their good gesturing, then, suggests that objects can access action knowledge even when the retrieval of associative, semantic knowledge is deficient: there is privileged access to action knowledge relative to associative, contextual knowledge. In one case reported by Riddoch and Humphreys (1987), the patient even showed very specific gestures, using his right hand to mime a cutting action to a knife but his left hand to mime a prodding action to a fork. Such specific actions suggest that performance was contingent on access to stored information about actions for the basic categories of object (knife vs. fork).

A contrasting pattern of performance to that found in optic aphasia can be found in visual apraxia, a term used to describe patients who are impaired at gesturing to visually presented objects (e.g., De Renzi, Faglioni, & Sorgato, 1982; Pilgrim & Humphreys, 1991; Riddoch, Humphreys, & Price, 1989; Rothi, Mack, & Heilman, 1986). Interestingly, such patients may be able to name objects from vision, and they can also gesture to names. Visual apraxia may represent the flip side of privileged access to action knowledge. In this case, there is damage to the privileged access process (from vision). For example, noise within a direct visual route may prevent a patient from retrieving action knowledge with the use of associative, contextual information, even though the semantic route is relatively unimpaired, judging by the patient's performance in gesturing to presented words. An explicit simulation of this pattern of results is provided by Yoon, Heinke, and Humphreys (in press).

There is also consistent evidence for privileged access to action knowledge from vision from experimental studies with normal participants. Rumiati and Humphreys (1998) had normal participants make gestures under a fast response deadline. They found that proportionately high numbers of visual relative to semantic errors occurred when gestures were made to pictures of objects (e.g., making the gesture for a razor when the object was a hammer). The reverse was found when objects were named, when semantic errors predominated (e.g., naming the razor as a shaving brush). Rumiati and Humphreys suggested that visual errors arise in gesturing because action knowledge can be activated from the visual properties of objects. Semantic errors predominate in naming because name retrieval from objects is more strongly constrained by semantic knowledge. Tucker and Ellis (1998) and Craighero, Fadiga, Rizzolatti, and Umiltà (1998, 1999) reported consistent findings. For example, Craighero et al. (1998, 1999) reported priming of a motor response based on the orientation of a visual stimulus, even when this visual information was irrelevant to the task.
Participants had to make a speeded grasp response to an oriented bar, and the signal to start the action was cued by a rectangle that could have the same or a different orientation from the bar. Responses were initiated faster when the orientation of the start cue was compatible with that of the bar, relative to when the cue had an incompatible orientation.

If there is privileged access to action knowledge for objects, we may expect to find an advantage for objects over words in tasks requiring access to action knowledge. Moreover, this advantage may be greater when action decisions are made relative to when decisions are required for other forms of associative, contextual knowledge. Now, there is a large body of evidence showing that we can make faster decisions about superordinate and basic-level properties of objects, when compared with decisions about subordinate properties specific to individual exemplars (one may make a faster decision about orange squeezers in general, relative to decisions about a particular type of squeezer; see Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976, for a classic study). The interesting point about our comparison between action and associative, contextual decisions is that both decisions relate to basic-level information about objects: for example, whether orange squeezers in general are used by means of a twisting action or whether they are typically found in a kitchen. Differences in the efficiency with which these decisions are made are unlikely to reflect access to contrasting levels of knowledge (superordinate, basic level, or subordinate), nor are the data likely to reflect the fuzziness of the response categories if differences emerge with objects but not with words.

We tested this in Experiment 1 by comparing decisions about the basic kind of action that might be employed with stimuli (would you make a twisting or pouring action with this object?) with associative, contextual decisions about where the items might be found (would you typically find this in a kitchen?). We ask whether objects are advantaged relative to words for action decisions, when compared with associative, contextual decisions. In addition, we attempted to assess the role, if any, of direct relations between the visual properties of objects and the required action responses. We did this in two ways. First, we had some of the participants respond by using a manipulandum with which a twisting or a pouring action was actually made, whereas we had other participants respond by means of a buttonpress. Direct relations between the stimuli and the responses were stressed when the manipulandum response was made. Second, we presented another group of participants with pictures of nonobjects chosen because they had the critical features of the real objects that could be linked to the action responses (as assessed by another, independent group of raters). If the fast action decisions for objects were due to the use of such features, relatively efficient action decisions should be found with nonobjects too. Finally, we also had some of the participants name words and pictures, in order to provide a comparison with Potter and Faulconer's (1975) original study comparing category decisions and naming.

EXPERIMENT 1

Method

Stimuli. The stimuli comprised 40 line drawings of known objects (common tools and kitchen utensils) and 40 drawings of nonobjects (see Figure 1 for examples).
Prior to the study, 60 line drawings of common objects were shown to an independent set of 80 participants, who had to decide whether each item was used for pouring or for twisting. There were no time limitations. The items used for the study were chosen according to whether they required a pouring or a twisting action, as evaluated by over 85% of the original participants. Twenty pouring and 20 twisting objects were thus selected. An attempt was made to select items that did not have consistent visual features that could be associated with pouring and twisting actions. To evaluate this, another group of 20 independent participants were given drawings of the objects and were asked to mark what they thought was the critical part of each stimulus that indicated whether it should be used for twisting or for pouring. A feature was judged as being consistently related to the action decision if it was listed by 15 of the 20 participants. The consistent features were essentially a long handle or a thread (for twisting objects) and a spout or an open-topped container (for pouring items). Sixteen pouring objects and 12 twisting objects had one of the critical features for their category, but the features were about equally divided across these items. Hence, no one feature was constant for all the known objects within a given class of response (pouring vs. twisting). Also, within each response category, we attempted to include some objects that had what was judged to be a critical feature for the opposite response (e.g., a frying pan was included as a pouring object, despite its having a long handle and its being an open-topped container; a meat grinder was included as a twisting object despite its being an open-topped container as well as having a long handle). The Appendix contains a list of all the objects used.

The nonobjects were drawn so that each picture contained at least one of the critical features that the participants had judged as relating to pouring or to twisting (see above). Since these features were more consistent across the nonobject set than across the object set, it was more likely that responses could be linked to these critical features when the nonobjects were presented. Prior to the study, the nonobjects were shown to an independent set of 80 participants, who had to decide whether each item would naturally be responded to by means of a pouring or a twisting action. There were no time limitations. There was 76% agreement among participants on the category of action for each nonobject.

All drawings were computerized and adapted for MEL 2 Professional, which was used to program and run all the experiments. The size of the drawings was, on average, 440 × 512 pixels. Forty written words, corresponding to the names of the objects shown as line drawings, were also used. The names were written in 20-point Times New Roman.
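For concreteness, the agreement criterion used to select items can be expressed as a short script. The sketch below is hypothetical: the file name, column names, and data layout are ours, not taken from the original study; it simply keeps items whose modal action was chosen by more than 85% of the raters.

```python
import pandas as pd

# Hypothetical norming data: one row per (rater, item) pair, with the
# action ("pour" or "twist") that the rater assigned to the item.
ratings = pd.read_csv("norming_ratings.csv")  # columns: rater, item, action

def modal_agreement(group):
    """Return an item's modal action and the proportion of raters choosing it."""
    counts = group["action"].value_counts()
    return pd.Series({"action": counts.idxmax(),
                      "agreement": counts.max() / len(group)})

items = ratings.groupby("item").apply(modal_agreement)

# Keep only items on which more than 85% of the raters agreed,
# as in the selection criterion described above.
selected = items[items["agreement"] > 0.85]
print(selected.groupby("action").size())  # ideally 20 "pour" and 20 "twist"
```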
Participants. Eighty participants, all students at the University of Birmingham, took part in the study. All had normal or corrected-to-normal vision. The participants were either part of the department's subject scheme or were paid £6. Twenty participants were used in the naming task, 20 in the action- and semantic-decision tasks with the manipulandum, 20 in the action-decision task with a buttonpress response, and 20 in the action decisions to nonobjects. Action decisions to nonobjects were made with the manipulandum in order to maximize the opportunity for stimulus-response links to affect performance.

Procedure. There were three experimental tasks: (1) naming of objects and words, (2) action decision to objects, nonobjects, and words (the participants had to decide whether pouring or twisting was the more appropriate action for a given item), and (3) semantic decision, in which the participants were asked whether an object would typically be found in a kitchen.

In the naming task, the participants took part in two sessions. In each session, they named objects in one block and words in the other. For the participants given the action- and semantic-decision tasks with the manipulandum, there were four blocks per session: action-decision objects, semantic-decision objects, action-decision words, and semantic-decision words. For the participants given the action task with buttonpress responses, there were two sessions, with the words and the objects presented in different sessions, counterbalanced for order across participants. For the participants presented with nonobjects, there was a single session.

The manipulandum allowed the participants to make horizontal and vertical movements that corresponded to schematic gestures of pouring and twisting (see Figure 2). A pouring or twisting movement of the top part of the instrument broke separate circuits to signal that a particular response had been made. The twisting or pouring movement was always compatible with the pouring or twisting action required for each object or nonobject, and it was always carried out with the preferred hand. The twisting action was akin to that made when taking a screw top off a container; the pouring action was the same as that made when pouring from a pint-sized liquid container. For the semantic decision, a pouring action corresponded to a kitchen utensil and a twisting action to a nonkitchen item.

The 40 drawings of objects and the 40 words were each divided into two sets, with each set containing 10 pourers and 10 twisters. In the naming task, the sets were allocated to the sessions in the following way. For half the participants, one set of objects was named in the first block of Session 1, and then one set of words (the names of objects not presented in Block 1) was named in Block 2. In Session 2, the set of words not presented in Session 1 was named in the first block, followed by the set of objects not presented in Session 1. These orders were reversed for the other participants. For the action- and semantic-decision tasks carried out with the manipulandum, half the participants performed action decisions in Session 1 and semantic decisions in Session 2, whereas the other half had the tasks in the reverse order. Within each session there were two blocks, one with object and the other with word stimuli. The order of the action- and semantic-decision tasks, and of the object and word stimuli, was counterbalanced across participants. For the participants who carried out action decisions by means of a buttonpress, the order of the sessions (pictures and words) was counterbalanced, as was the key used for the response. Half the participants responded twist with their preferred hand and pour with their nonpreferred hand; this was reversed for the remaining participants.

Figure 2. The manipulandum used in the study. The top part of the instrument could be used in a twisting or a pouring action.

The participants were placed 50 cm from a computer screen. There was a 4-min interval between each session and 2 min between each block. Each block started with four practice stimuli.
Before the practice stimuli, an instruction appeared on the screen informing the participants about the task. The participants were asked to perform the task as quickly and accurately as possible. After the practice trials, a message appeared on the screen asking the participants whether they understood the task and, if so, to continue with the experimental task. Additional explanation was given orally by the experimenter if a participant was not sure what to do. In each block, stimuli (either pictures or words) were presented in a random order in the middle of the screen, immediately after the offset of a fixation point, which appeared for 200 msec in the middle of the screen. In the naming tasks, the participants spoke the name of the stimulus into a microphone, and the reaction times (RTs) were recorded by the computer. The names produced were recorded separately by the experimenter.

Results

The mean correct RTs and the mean number of correct responses per item for each task are given in Figures 3 and 4, respectively.

Naming. The participants named words faster than they named objects [t(38) = 13.2, p < .0001]. Words were also named more accurately than objects [t(38) = 12.46, p < .0001].1

Action versus semantic decision (manipulandum response, objects, and words). An analysis of variance (ANOVA) with the factors task (action vs. semantic decision) and stimulus (objects vs. words) showed that, overall, semantic decisions were faster than action decisions [F(1,79) = 9.54, p < .003]. There was no overall difference between words and objects [F(1,79) = 2.54, p = .115]. However, there was a reliable interaction between task and stimulus [F(1,79) = 4.94, p < .03]. This interaction arose because action decisions were faster for objects (1,052 msec) than for words (1,174 msec, p < .03). In contrast, semantic decisions did not differ between words (1,002 msec) and objects (1,023 msec). Similar effects were apparent in the accuracy data. There was a main effect of task [F(1,79) = 14.32, p < .0001]: the action-decision task (.91) was generally better performed than the semantic-decision task (.82). There was a significant main effect of stimulus type [F(1,79) = 67.34, p < .008], with accuracy for objects (.91) higher than that for words (.87). There was also a significant interaction [F(1,79) = 6.77, p < .01]. This interaction occurred because action decisions to objects (.94) were more accurate than those to words (.88; p < .0001). There was no significant difference (p = .92) between these two types of stimuli in the semantic-decision task (accuracy levels: words = .87; objects = .87).

Effects of response type on action decisions (objects and words). The effect of response type (manipulandum vs. buttonpress) on action decisions was assessed in a mixed-design ANOVA, with response type as the between-subjects factor and stimulus type as the within-subjects factor. There were significant effects of response and stimulus type [F(1,79) = 88.64 and F(1,79) = 15.3, both ps < .0001]. The interaction did not approach significance [F(1,79) = 0.032, p = .86]. In general, responses with buttonpresses (805 msec) were faster than responses with the manipulandum (1,113 msec), and responses to objects (896 msec) were faster than responses to words (1,023 msec). Errors followed a similar pattern. There was a significant effect of stimulus type [F(1,79) = 32.3, p < .0001] and of response type [F(1,79) = 4.93, p < .03].
Overall, accuracy for objects (.95) was higher than that for words (.89), and accuracy for the buttonpress (.94) was higher than that for the manipulandum (.91). The interaction did not approach significance [F(1,79) = 0.02, p = .9].

Action decisions with objects, nonobjects, and words (manipulandum responses). There was an overall difference in RTs [F(2,57) = 3.05, p < .05] for these three types of stimuli. Complementary analyses showed that the participants were faster in responding to objects (1,052 msec) than to words [1,174 msec; t(38) = 2.29, p < .03] and to nonobjects [1,171 msec; t(38) = 2.8, p < .03]. There was no significant difference between words and nonobjects [t(38) = 0.18, p = .86]. There was also an overall difference between the conditions in the accuracy data [F(2,57) = 107.8, p < .0001]. The participants were more accurate with objects (.94) than with words (.88) [t(38) = 3.8, p < .0001] and with nonobjects (.68) [t(38) = 13.11, p < .0001]. They were also more accurate with words than with nonobjects [t(38) = 9.9, p < .0001].

Naming versus action decision (manipulandum).2 A two-way ANOVA with the factors task (naming vs. action decision) and stimulus (objects vs. words) showed that naming (914 msec) was generally faster than action decisions (1,118 msec) [F(1,79) = 31.2, p < .001]. Responses to words (900 msec) were faster than those to objects (1,113 msec) [F(1,79) = 41.03, p < .0001]. There was also a significant interaction between the two factors [F(1,79) = 99.89, p < .0001]. For words, naming (616 msec) was faster than action decisions (1,174 msec) [t(38) = 10.59, p < .0001]. For objects, the opposite result held [t(38) = 3.25, p < .002] (naming = 1,213 msec; action decision = 1,052 msec). The accuracy data followed the same pattern. There were main effects of task [F(1,79) = 11.79, p < .001] and stimulus [F(1,79) = 65.24, p < .0001]. The interaction between task and stimulus was also significant [F(1,79) = 154.5, p < .0001]. For words, naming (1.0) was more accurate than action decisions (.88) [t(38) = 10.64, p < .0001]. For objects, on the other hand, action decisions (.94) were more accurate than naming (.74) [t(38) = 8.75, p < .0001].

Naming versus semantic decision. There was a reliable main effect of task [F(1,79) = 11.9, p < .001] and of stimulus [F(1,79) = 109.6, p < .001]. The interaction between task and stimulus was also significant [F(1,79) = 94.94, p < .0001]. For words, naming (616 msec) was faster than semantic decisions (1,002 msec) [t(38) = 10.66, p < .0001]. For objects, in contrast, semantic decisions (1,023 msec) were faster than naming (1,213 msec) [t(38) = 4.06, p < .0001]. For the accuracy data, there was a significant main effect of stimulus [F(1,79) = 101.88, p < .0001]. The main effect of task was not significant [F(1,79) = 0.112, p = .74], but there was a significant interaction between task and stimulus [F(1,79) = 103.9, p < .0001]. For words, naming (1.0) was more accurate than semantic decisions (.87) [t(38) = 13.35, p < .0001]. For objects, the opposite held: semantic decisions (.87) were more accurate than naming (.74).
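The 2 × 2 analyses above have the structure of standard repeated-measures ANOVAs. A minimal Python sketch of the task × stimulus analysis for the manipulandum group follows; it assumes a hypothetical long-format file with one mean correct RT per participant per cell (the file and column names are illustrative, not from the original study).

```python
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Hypothetical data: one mean correct RT per participant per cell of the
# 2 (task: action vs. semantic) x 2 (stimulus: object vs. word) design.
rts = pd.read_csv("manipulandum_rts.csv")  # columns: participant, task, stimulus, rt

# Repeated-measures ANOVA: main effects of task and stimulus, plus
# the task x stimulus interaction.
print(AnovaRM(data=rts, depvar="rt", subject="participant",
              within=["task", "stimulus"]).fit())

# A significant interaction can then be decomposed with paired
# comparisons, e.g., objects vs. words within the action-decision task.
action = rts[rts["task"] == "action"].pivot(index="participant",
                                            columns="stimulus", values="rt")
print(ttest_rel(action["object"], action["word"]))
```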
Discussion

The main result of interest is that objects had an advantage relative to words for action decisions rather than for contextual/semantic decisions. This was not because action decisions were intrinsically easy. For objects, there were no reliable differences between action and semantic decisions, whereas for words semantic decisions were faster than action decisions. Nevertheless, there was a relative benefit for action decisions for objects.

This finding indicates that there is privileged access to action knowledge for objects, relative to access to at least some other forms of associative, contextual knowledge. These results are unlikely to be due to participants' responding directly to features of objects that correlate with the response categories, pour and twist. As noted in the Method section, we took care to try to ensure that there was no single feature that occurred across all pouring and all twisting objects (see the Appendix). Nevertheless, it may be that even a partial correlation between perceptual features and action categories is sufficient to convey an object advantage for action decisions. We examined this by having some of the participants make action decisions to nonobjects, each of which was drawn so that it contained at least one of the critical features that the participants had judged could be linked to the action decision with known objects. Despite this, action decisions to nonobjects were not facilitated relative to action decisions to words (and indeed, accuracy levels for nonobjects were lower than those for words). It appears that the presence of critical features was not crucial for the action advantage.

A second finding that goes against the idea of the participants' using direct stimulus-response links to make the action decisions comes from our manipulation of the mode of response. Some participants made action decisions by making a response-compatible action with a manipulandum (either twisting it or turning the top part down to make a pouring action), whereas others responded with a buttonpress. The size of the advantage for action decisions for objects over words was the same, irrespective of the mode of response. Thus, the advantage was not contingent on the participants' linking critical features of objects to a compatible motor response.3 Instead, we suggest that the action advantage arose because stored associations with action can be readily activated by the visual properties of objects, and, for objects, this action knowledge can be retrieved faster than knowledge about the associative context in which objects usually occur.

This fast access to action knowledge may be useful for survival purposes. There can be circumstances in which one would need to act upon a stimulus even if it occurs outside of its normal context. Making action retrieval contingent on retrieval of associative, contextual knowledge about the object could slow performance too much when some more immediate action is required. It is interesting that the present results indicated that fast retrieval of action knowledge for objects can be based on stored object-action links and is not solely dependent on associations between critical features and motor responses (note our results with nonobjects chosen to have the critical features present and our lack of an effect of response mode). As noted in the introduction, this fits with some of the data from optic aphasia, in which patients can make gestures specific to individual items (by selecting different hands for knives and forks), despite having impaired access to associative, contextual knowledge (e.g., Riddoch & Humphreys, 1987). In order to account for optic aphasic patients' having access to action knowledge even when retrieval of associative knowledge was impaired, Riddoch and Humphreys (1987) proposed that there existed a direct route to action, linking stored visual representations of objects with stored actions (see also Rumiati & Humphreys, 1998).
They suggested that this direct route to action mirrored the direct route to phonology that may exist for words, bypassing access to associative knowledge. The direct route to phonology can explain why naming is faster than semantic decisions for words (see also Potter & Faulconer, 1975). In the domain of reading, patients with semantic dementia who are able to read irregular words without knowing their meaning provide evidence for this learned, direct route (see, e.g., Schwartz, Saffran, & Marin, 1980), and they may be analogous to optic aphasics in the domain of object recognition and action.

According to Riddoch and Humphreys's (1987) account, there may be dual routes to action, one involving direct links from object representations and one mediated by the retrieval of associative, contextual knowledge. However, on a strict dual-route account, it is difficult to explain disorders such as visual apraxia, in which an impaired direct route to action appears to block an otherwise unimpaired associative route (see the introduction). In standard dual-route accounts, damage to the direct, visual route should leave the associative, contextual route to action intact. Yoon et al. (in press) instead suggested a convergent-route account, in which visual and associative, contextual knowledge are used together to help select actions to objects. According to this view, selection is made when activation within an action knowledge system falls into an appropriate basin of attraction, corresponding to a learned action. Visual information provides the initial activation profile for this process, pushing activation toward the appropriate basin, with activation subsequently being guided by associative, contextual knowledge. Visual apraxia can be explained if there is noise within the visual route, which pushes activation away from the normal basin of attraction, disrupting effects of associative knowledge on selection. The present results also fit with this account, with the visual representations of objects giving fast access to basic action decisions, prior to decisions based on associative, contextual knowledge. Such basic action decisions may be made from relatively broad basins of attraction within a system representing action knowledge.
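The convergent-route idea can be caricatured with a toy attractor network. The sketch below is only an illustration under simplified assumptions (it is not the Yoon et al. simulation): two action patterns are stored Hopfield-style, a noisy visual input sets the initial state, and a weaker associative input biases the settling trajectory toward a basin.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two stored action patterns, standing in for learned actions such as
# "twist" and "pour"; the codes themselves are arbitrary illustrations.
actions = {"twist": np.array([1.0, 1.0, -1.0, -1.0]),
           "pour":  np.array([-1.0, -1.0, 1.0, 1.0])}

# Hebbian weight matrix storing both patterns, so that each learned
# action sits at the bottom of its own basin of attraction.
W = sum(np.outer(p, p) for p in actions.values())
np.fill_diagonal(W, 0.0)

def settle(visual, associative, steps=30, assoc_gain=0.3):
    """Settle from a visually set starting state; associative input
    acts as a weaker, sustained bias on the trajectory."""
    state = np.tanh(visual)
    for _ in range(steps):
        state = np.tanh(W @ state + assoc_gain * associative)
    # Report the stored action whose basin the state has fallen into.
    return max(actions, key=lambda name: float(actions[name] @ state))

# Visual input dominates the initial state; contextual knowledge then
# nudges settling toward the compatible basin.
visual_input = actions["twist"] + rng.normal(0.0, 0.5, size=4)
assoc_input = 0.5 * actions["twist"]
print(settle(visual_input, assoc_input))  # -> "twist"
```

On this caricature, injecting heavy noise into the visual input mimics the visual-route damage invoked for visual apraxia, whereas setting the associative gain to zero mimics the impaired associative retrieval seen in optic aphasia.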
Theories of semantic memory have traditionally stressed the notion of a unitary system, in which the efficiency of memory retrieval is contingent on the vertical position occupied by a concept within a hierarchy of knowledge representation (e.g., Collins & Quillian, 1969; Rosch et al., 1976; Rumelhart & Todd, 1993). Here, differences may be expected between the efficiency of retrieving superordinate and subordinate knowledge, since superordinate knowledge is stored at a higher vertical position within the representational hierarchy. Differences may not be expected in the retrieval of different kinds of knowledge represented at the same level. According to other views, though, semantic representations take a more distributed form, with contrasting types of knowledge being represented in different brain regions and accessed in a relatively independent manner. This distributed view may provide a better account of the different breakdowns in access to stored knowledge that can be found in neuropsychological patients (see Humphreys & Forde, in press, for one recent summary).

According to a distributed account, differences may arise in the efficiency with which particular types of knowledge can be accessed by particular stimuli, even when the knowledge occupies the same level in a traditional hierarchy. Thus, knowledge about basic categories of action may be retrieved faster than forms of contextual knowledge about objects, owing to stored associations between objects and basic actions that exist independently of contextual knowledge. It is for future work to assess the generality of this advantage and how it may relate to notions about hierarchical representation. For example, although there may be fast access to basic categories of action from objects (twisting, pouring, etc.), the retrieval of subordinate actions, specific to individual exemplars, may be somewhat slower (e.g., one's knowledge that a particular object needs to be twisted slowly because of a worn thread). Indeed, access to subordinate action knowledge may even be relatively retarded for objects compared with words, since fast access to a basic action category from objects may create competition for retrieving an atypical action. A full understanding of how different kinds of information are accessed in normal participants will require us to specify the dynamics of knowledge retrieval and how they vary across levels of representation that may be specific to the knowledge being retrieved (e.g., a hierarchy of action knowledge as opposed to associative, contextual knowledge). Nevertheless, the present results indicate that there can be differences in the efficiency of accessing knowledge that is putatively stored at the same level within a traditional unitary model of semantics, with objects achieving privileged access to action knowledge.

APPENDIX

List of Objects Used in Experiment 1



Hanna Chainay, Glyn W. Humphreys. Privileged access to action for objects relative to words, Psychonomic Bulletin & Review, 2002, 348-355, DOI: 10.3758/BF03196292