Orientation priming of grasping decision for drawings of objects and blocks, and words

Memory & Cognition, May 2011

This study tested the influence of orientation priming on grasping decisions. Two groups of 20 healthy participants had to select a preferred grasping orientation (horizontal, vertical) based on drawings of everyday objects, geometric blocks or object names. Three priming conditions were used: congruent, incongruent and neutral. The facilitating effects of priming were observed in the grasping decision task for drawings of objects and blocks but not object names. The visual information about congruent orientation in the prime quickened participants’ responses but had no effect on response accuracy. The results are discussed in the context of the hypothesis that an object automatically potentiates grasping associated with it, and that the on-line visual information is necessary for grasping potentiation to occur. The possibility that the most frequent orientation of familiar objects might be included in object-action representation is also discussed.



Hanna Chainay, Lucie Naouri, Alice Pavec
Laboratoire d'Etude des Mécanismes Cognitifs, Université Lumière Lyon 2, 5 avenue Pierre Mendès France, 69676 Bron cedex, France

Performance of an object-directed action can be based on two different types of processing: one using conceptual knowledge about the object, the other using visual information independent of that conceptual knowledge. Some cognitive models have proposed that action may be evoked without access to conceptual knowledge (e.g., Riddoch, Humphreys, & Price, 1989), insofar as the perceptual information contained in a visually presented object is sufficient for action selection (Rumiati & Humphreys, 1998; Humphreys & Riddoch, 2003). However, a growing body of evidence suggests that conceptual and visual information are combined to ensure correct use of an object (Chainay & Humphreys, 2002; for a review see Borghi, 2005).
Tucker and Ellis (1998) proposed that observing an object, even when it is not a target for action, automatically activates motor representations appropriate for reaching, grasping and manipulating it. Furthermore, they proposed that the visual representation of an object includes the motor patterns associated with the actions it affords (Ellis & Tucker, 2000). Numerous behavioural studies using visuo-motor priming (e.g., Craighero, Fadiga, Rizzolatti, & Umiltà, 1998) and stimulus-response compatibility paradigms have supported the idea that seeing an object re-activates the action knowledge associated with it and generates affordance effects (Tucker & Ellis, 1998, 2001; Phillips & Ward, 2002; Hommel, 2002; Derbyshire, Ellis, & Tucker, 2006). In parallel, neuroimaging studies have provided evidence that the cortical representation of tools and manipulable objects engages motor-related areas (e.g., Creem-Regehr, 2009; Gerlach, Law, & Paulson, 2002; Grafton, Fadiga, Arbib, & Rizzolatti, 1997). Creem-Regehr and Lee (2005) pointed out that objects can have multiple affordances that define how they can be grasped. However, familiar objects such as tools have one specific use associated with their identity, and that use may constrain action representation. In their fMRI study, participants viewed images of 3D tools and 3D graspable shapes (cone, cylinder) presented in different orientations, or imagined grasping them. For imagined grasping, the region of activation observed in the left posterior parietal cortex was larger for tools than for graspable shapes. Activation for tools was also observed in the middle temporal gyrus and fusiform gyrus. These regions are thought to play a special role in the generation of actions based on internal representations related to the functional identity of objects.
Of particular interest for the re-activation of action knowledge associated with an object is the influence of the object's orientation on grasping decisions. The orientation of an object, along with its location, is a viewpoint-dependent property that varies continuously as the object or the observer moves. This kind of information is particularly important for real-time processing when actual grasping is required and a precise parameterisation of the particular grasp is crucial. When grasping is not required, however, information about object orientation does not seem particularly relevant for the potentiation of a certain type of grasp. Tucker and Ellis (2004) proposed that intrinsic properties such as object size and shape are especially important for re-activating an appropriate type of grasp. However, they did not rule out the possibility that our knowledge of familiar objects and their usual orientation may also provide information about the type of grasp required. More recently, Derbyshire, Ellis, and Tucker (2006) failed to find a compatibility effect between object orientation and the hand used to respond, and questioned this possibility. One of the aims of the present study is to investigate whether the orientation of an object has a role to play in action potentiation, particularly with respect to grasping decisions. Very few studies have looked at the effect of object orientation on action decisions, and to the best of our knowledge none of them has examined its effect on grasping decisions. Yoon and Humphreys (2007) examined the effects of object orientation (handle facing towards vs. away from the subject) on action and semantic decisions. In the action decision task, participants had to decide whether an object was associated with a twisting action or not. In the semantic decision task, they had to decide whether an object was a kitchen item or not. Unlike semantic decisions, action decisions were affected by orientation.
Participants made more errors in the action decision than in the semantic decision task, particularly when objects were presented with the handle away from the subject. These results are consistent with the hypothesis that a seen object automatically re-activates the action knowledge associated with it (Tucker & Ellis, 1998). In addition, Yoon and Humphreys also examined the effects of priming (twisting / non-twisting primes) on action decisions. No effect of priming was observed in their study. The authors interpreted this as indicating that the same processing is involved in action decisions and actual action execution, both being performed on the basis of current visual stimuli. Thus, Yoon and Humphreys (2007) suggested that this real-time processing drives the selection and guidance of action in both action decisions and actual action execution (see also Yoon, Heinke, & Humphreys, 2002). However, these data are inconsistent with the suggestion by Tucker and Ellis (2004) that some effects of object orientation may be observed in off-line tasks (tasks not requiring actual interaction with the object) and could be based on past interactions with the object encountered in a similar orientation. According to these authors, the most frequent orientation of an object might be included in action knowledge and be used to evoke broad categories of action. If this were the case, and contrary to the results obtained by Yoon and Humphreys, orientation priming could be expected to affect action decision tasks (off-line tasks) with familiar objects. Experiment 1 of the present study was set up to investigate this. The exact nature of action potentiation processes is not well established: whether these processes are based on on-line visual information or on stored knowledge of the object and its associated actions remains controversial.
Using a stimulus-response compatibility paradigm, Tucker and Ellis (2004) explored whether the presence of a visual stimulus is necessary to potentiate a particular type of grasp. In one of their experiments, participants were presented with the names and images of small and large objects and had to categorize them (natural versus manufactured) using a precision or a power grip. The compatibility effect was observed for both images and names. Because in the name condition on-line action-related visual information is absent, these data rule out the possibility that the action potentiation effects were based on current visual information. According to Tucker and Ellis (2004), the presence of a visual object is not necessary for this kind of affordance effect to be generated. They concluded that when on-line reaching and grasping are not actually occurring, action potentiation depends more on stored knowledge of the object and its associated actions than on the detailed visual characteristics of the viewed object. However, the data reported by Chainay and Humphreys (2002) appear to contradict this suggestion. In their study, participants had to make an action decision (twisting or pouring) or a semantic decision (object usually/not usually found in a kitchen) for drawings of objects or their names. Action decisions were made more quickly for pictures of objects than for their names, but this was not the case with semantic decisions. These data suggest that the presence of visual information is necessary for action potentiation to occur, at least when the task involves an explicit action decision. The authors concluded that both visual and conceptual information may be involved in generating action potentiation. However, they also suggested that there is privileged access to action knowledge from visual information and that this kind of information is better suited to potentiating action.
It is possible that the involvement of this kind of information in the generation of action potentiation differs depending on the purpose of the task, insofar as in the study conducted by Chainay and Humphreys participants had to perform an explicit action decision task, whereas in the Tucker and Ellis study no action decision was required. This suggestion is compatible with that of Glover, Rosenbaum, Graham, and Dixon (2004), who drew a distinction between the planning of actions and their on-line control. They proposed that the planning of action is based on a visual representation of an object, and that this representation includes both visual and conceptual information. In their study, words were observed to interfere with the planning of grasping. This is compatible with the hypothesis that potentiation of action may occur even in the absence of a visual stimulus. The present study set out to investigate further the nature of action potentiation mechanisms in grasping decisions. We were particularly interested in two questions: (1) Is object orientation relevant to the potentiation of grasping decisions? and (2) Is on-line visual information necessary for eliciting action potentiation in grasping decision tasks? We used a priming paradigm to examine the effect of object orientation on grasping decisions. We worked on the assumption that the grasping decision is sensitive to an object's orientation, on the basis of two postulates: (1) action knowledge includes the most frequent orientation of an object, and (2) the visual properties of an object automatically re-activate the action knowledge associated with it. In Experiment 1 we used two kinds of stimuli: drawings of objects and drawings of blocks. We predicted that orientation priming ought to be observed for drawings of objects, which provide appropriate visual information about the type of grasp suitable for grasping them.
For example, for many objects the hand must be suitably oriented relative to the object for efficient grasping to occur. With respect to the visual field, two distinct hand orientations lie along the horizontal and vertical axes. Accordingly, in the present study participants had to decide whether the preferential orientation of the hand for grasping an object was horizontal or vertical. We predicted that for drawings of objects this grasping decision task should be performed faster with congruent primes than with incongruent or neutral primes. In addition, we predicted that if the benefit of orientation priming is specific to learned actions and based in part on stored action knowledge that includes the most frequent orientation of an object, it should be greater for drawings of objects than for blocks, because only the drawings of objects would be expected to activate object-specific grasping. In Experiment 2 our stimuli were drawings of blocks and the names of the objects presented in Experiment 1. Some authors have suggested that an on-line visual stimulus is not necessary for action potentiation to occur (Tucker & Ellis, 2004; Derbyshire et al., 2006). If that is the case, we would expect to observe an orientation priming effect for the names of objects. However, this suggestion was mostly based on stimulus-response compatibility effects observed in tasks not involving explicit action decisions. Some studies (Chainay & Humphreys, 2002) have shown privileged access to action knowledge from drawings of objects (containing visual information), as opposed to names of objects, when explicit action decisions were required. On this basis, and because the name of an object contains no on-line visual information about the object, we predicted that there would be no orientation effects on grasping decisions for the names of objects.
In this experiment, drawings of blocks were used as a baseline condition for Experiment 1 because different groups of participants took part in Experiments 1 and 2.

Forty participants, all students at the University of Lyon 2, took part in this experiment. All had normal or corrected-to-normal vision and were right-handed. Their right-handedness was tested by means of a short questionnaire derived from the Edinburgh Handedness Inventory (Oldfield, 1971), and they gave their written, informed consent to participate in this study. Twenty of them, with a mean age of 29 years (SD = 11.95), took part in the pre-test with the experimental stimuli. The remaining 20 participants took part in the experimental study (mean age 21 years, SD = 2.13).

Stimuli for the experiment were selected using a pre-test in which 20 participants were shown 72 images of graspable objects selected from Snodgrass and Vanderwart (1980) and had to decide whether the object was graspable horizontally or vertically (see Appendix 1 for an example). No time limit was set. The items used for the subsequent experiment were those judged by over 80% of the participants to be horizontally or vertically graspable. It is worth mentioning that in some cases, although participants judged an object as vertically or horizontally graspable, actual grasping would not necessarily take place in one of these two orientations; rather, the grasp associated with the most common use of the object would. A total of 21 horizontally and 21 vertically graspable items were selected. Thus the experimental stimuli consisted of 42 line drawings of everyday objects (39 non-living and 3 living). Twenty-two stimuli (most of them having handles) were presented inclined 45° to the right (11 stimuli) or to the left (11 stimuli) of the midline of the participant sitting in front of the computer. The remaining 20 stimuli (all without handles) were presented in their most common position in everyday life (see Appendix 2 for a list of stimuli).
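As an illustration, the over-80%-agreement rule used to select pre-test items can be sketched as follows (the item names and judgement data here are made up; only the selection rule comes from the text):

```python
from collections import Counter

def select_items(ratings, threshold=0.8):
    """Keep items on which at least `threshold` of the pre-test
    participants agreed on one grasping orientation ('H' or 'V')."""
    selected = {}
    for item, judgements in ratings.items():
        orientation, votes = Counter(judgements).most_common(1)[0]
        if votes / len(judgements) >= threshold:
            selected[item] = orientation
    return selected

# Hypothetical judgements from 20 raters for two items:
ratings = {
    "bottle": ["V"] * 18 + ["H"] * 2,  # 90% agreement -> selected
    "box":    ["V"] * 11 + ["H"] * 9,  # 55% agreement -> excluded
}
print(select_items(ratings))  # {'bottle': 'V'}
```

Applied to the 72 pre-test images, this rule yielded the 21 horizontally and 21 vertically graspable items described above.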
Two drawings of blocks (one in a horizontal and one in a vertical position) were also used as stimuli. In addition, drawings of two rectangles and one circle were used as primes. The drawings of objects, blocks and primes measured 6.5 cm in width by 7.5 cm in height. Each object subtended a visual angle of between 6.2° and 7.12°.

In this experiment we tested the effects of priming on grasping decisions with drawings of objects and drawings of blocks. Participants were asked to decide whether the preferential grasp for an object was horizontal or vertical. For a bottle, for example, the preferential grasp is vertical, as opposed to horizontal in the case of a comb. The stimuli were primed by congruent, incongruent or neutral primes (a circle). The congruent and incongruent primes were horizontal and vertical bars. For example, in a congruent prime-target condition, a cup, which has a preferential vertical grasp, was primed by a vertical bar. For each experimental stimulus type, i.e., drawings of objects and drawings of blocks, 42 prime-target pairs were constructed, with 14 pairs for each priming condition (congruent, incongruent, neutral). Half of the items in the congruent and incongruent pairs were horizontally graspable and the other half vertically graspable. Thus participants saw 42 prime-target pairs with the drawings of objects and 42 prime-target pairs with the drawings of blocks. Presentation of the prime-target pairs was random. Drawings of objects and blocks were presented in separate sessions. The order of the sessions was counterbalanced: half of the participants saw the drawings of objects first and the drawings of blocks second, and the other half performed the sessions in the reverse order. The experiment was programmed and run using DMDX software (K. I. Forster, University of Arizona, 2002).
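The prime-target pair construction described above can be sketched like this (item labels and data structures are hypothetical; the counts and the congruency logic follow the text):

```python
import random

def build_pairs(h_items, v_items, seed=0):
    """Build 42 prime-target pairs from 21 horizontally and 21
    vertically graspable items: 14 pairs per priming condition, with
    half of the congruent and incongruent targets horizontal and half
    vertical. Primes are bars ('H'/'V') or a neutral circle."""
    assert len(h_items) == len(v_items) == 21
    rng = random.Random(seed)
    h, v = list(h_items), list(v_items)
    rng.shuffle(h)
    rng.shuffle(v)
    pairs = []
    for condition in ("congruent", "incongruent", "neutral"):
        for _ in range(7):  # 7 H + 7 V targets = 14 pairs per condition
            for target, orientation in ((h.pop(), "H"), (v.pop(), "V")):
                if condition == "congruent":
                    prime = orientation              # bar matches grasp axis
                elif condition == "incongruent":
                    prime = "V" if orientation == "H" else "H"
                else:
                    prime = "circle"                 # neutral prime
                pairs.append((prime, target, condition))
    rng.shuffle(pairs)  # presentation order was random
    return pairs
```

The same construction is applied once to the drawings of objects and once to the drawings of blocks, giving the two 42-pair sessions.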
There was a 2-min interval between the two experimental sessions (drawings of objects, drawings of blocks). Each session started with five practice stimuli, before which instructions appeared on the screen informing participants about the task ahead. After the practice, additional oral explanations were given to participants when necessary. They were asked to perform the task as quickly and accurately as possible. Each trial started with a fixation cross displayed in the middle of the computer screen for 500 ms, followed by a prime displayed for 150 ms. Participants were not informed that the target stimulus would be preceded by a prime, although they were likely aware of the orientation cues. Immediately after the prime, the stimulus was presented, also in the middle of the screen, for a maximum of 1500 ms, and disappeared once a response had been given. Responses were made using the shift key on the right side of the keyboard (horizontal grasping) and the shift key on the left side (vertical grasping); accuracy and reaction times were recorded.

Results and discussion

The statistical analyses were performed on the reaction times for correct responses and on the mean number of correct responses. A repeated-measures ANOVA with the factors Stimulus (drawings of objects vs. drawings of blocks) and Prime (congruent vs. incongruent vs. neutral) showed that, overall, participants responded significantly faster for drawings of blocks (mean = 663.2 ms, MSE = 21.1) than for drawings of objects (mean = 1258.4 ms, MSE = 86.6), F(1,19) = 72.39, p < .0001. The effect of Prime was also significant, F(2,38) = 6.34, p < .005.

Fig. 1 Mean reaction time for drawings of objects and blocks in the congruent, neutral and incongruent priming conditions. The error bars represent the standard error of the mean.

Planned comparisons were performed to decompose the main effect of Prime.
Participants responded significantly faster to stimuli primed by a congruent prime (mean = 802.4 ms, MSE = 76.5) than by a neutral (mean = 851.6 ms, MSE = 88.4; F(1,19) = 9.73, p < .006) or an incongruent prime (mean = 870.6 ms, MSE = 89.2; F(1,19) = 7.94, p < .01). There was no significant difference in mean reaction times between targets primed by neutral primes and those primed by incongruent primes, F(1,19) = .5, p = .48. The interaction between Stimulus and Prime was not significant, F(2,38) = 1.66, p = .2 (see Fig. 1). We had predicted that priming by congruent orientation should be more influential for drawings of objects than for drawings of blocks, because only the former may elicit object-specific grasping. Contrary to our prediction, there was no significant interaction between Stimulus and Prime, suggesting no difference in the priming effect for these two types of stimuli. However, in order to better understand the effects of congruent priming, we examined the magnitude of this priming effect (neutral condition minus congruent condition) for drawings of objects and for blocks. Participants benefited more from congruent priming in the case of drawings of objects (mean = 103.27 ms) than in the case of drawings of blocks (mean = 29.4 ms). A t-test showed that the magnitude of the priming effect was significantly larger for drawings of objects than for drawings of blocks, t(19) = 2.16, p < .05. A similar effect of Stimulus was observed for the accuracy data, F(1,19) = 19.6, p < .001. Participants generally performed better for drawings of blocks (mean = 38.65/42 maximum correct responses, MSE = .56) than for drawings of objects (mean = 32.65/42, MSE = 1.29). There was no significant effect of Prime, F(2,38) = 1.48, p = .24, however, and no significant interaction, F(2,38) = .89, p = .42.
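The priming magnitude used above is simply the difference between condition means. As a back-of-envelope check using the overall means reported in this section (an illustration, not a reanalysis of the raw data):

```python
def priming_benefit(neutral_rt_ms, congruent_rt_ms):
    """Congruent-priming benefit as defined in the text:
    neutral mean RT minus congruent mean RT, in ms."""
    return neutral_rt_ms - congruent_rt_ms

# Overall condition means reported above (ms):
overall = priming_benefit(851.6, 802.4)
print(round(overall, 1))  # 49.2 ms overall benefit

# The per-stimulus magnitudes reported in the text are computed the
# same way from the per-stimulus means: 103.27 ms for drawings of
# objects, 29.4 ms for drawings of blocks.
```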
Participants responded in a similar way to stimuli primed by congruent (mean = 23.5/42, MSE = .69), incongruent (mean = 24.35/42, MSE = .53) and neutral (mean = 23.45/42, MSE = .55) primes. In other words, priming did not influence the accuracy of responses for drawings of blocks or drawings of objects. The main result of this experiment was that visual priming affected the speed of grasping decisions made on the basis of drawings of objects and drawings of blocks. In addition, the magnitude of the priming effect was larger for the former than for the latter. In order to examine whether the observed priming effect depends on the processing of visual information, specifically of the objects represented in the drawings, we conducted a second experiment using as stimuli the written names of the objects presented in Experiment 1. In addition, we repeated the condition with the drawings of blocks.

Twenty participants, all students at the University of Lyon 2, took part in this experiment (mean age 21.3 years, SD = 1.9). All had normal or corrected-to-normal vision and were right-handed. As for the participants in Experiment 1, right-handedness was tested by means of a short questionnaire derived from the Edinburgh Handedness Inventory. The participants gave their written, informed consent to participate in this study.

Fig. 2 Mean reaction time for words and drawings of blocks in the congruent, neutral and incongruent priming conditions. The error bars represent the standard error of the mean.

Written words corresponding to the names of the 42 objects shown as line drawings in Experiment 1 were used as stimuli. The names appeared in Times New Roman, font size 20. As in Experiment 1, two drawings of blocks (one in a horizontal and one in a vertical position) were also used as stimuli, and drawings of two rectangles and one circle were used as primes. The sizes of the drawings of blocks and primes were identical to those used in Experiment 1.
The procedure was the same as in Experiment 1, except that instead of performing a session with the drawings of objects and a session with the drawings of blocks, participants performed a session with the written names of objects and a session with the drawings of blocks.

Results and discussion

A repeated-measures ANOVA with the factors Stimulus (words vs. drawings of blocks) and Prime (congruent vs. incongruent vs. neutral) showed that, overall, participants responded significantly faster for drawings of blocks (mean = 603.2 ms, MSE = 21.1) than for words (mean = 1119.6 ms, MSE = 23.7), F(1,19) = 71.57, p < .0001. The effect of Prime was not significant, F(2,38) = 1.76, p = .19. The interaction between Stimulus and Prime was significant, F(2,38) = 6.01, p < .01 (see Fig. 2). Planned comparisons showed no significant differences between the priming conditions for words: congruent versus incongruent, F(1,19) = 2.22, p = .15; congruent versus neutral, F(1,19) = .04, p = .87; and incongruent versus neutral, F(1,19) = 1.61, p = .21. For drawings of blocks, a significant difference, F(1,19) = 11.66, p < .003, was observed between congruent (mean = 584.67 ms) and incongruent (mean = 625.2 ms) priming. There was no significant difference between congruent and neutral priming, F(1,19) = .57, p = .46, nor between neutral and incongruent priming, F(1,19) = 1.81, p = .19. A similar effect of Stimulus was observed for the accuracy data, F(1,19) = 22.13, p < .001. Participants generally performed better for drawings of blocks (mean = 40.6/42 maximum correct responses, MSE = .42) than for words (mean = 27.6/42, MSE = .81). There was no significant effect of Prime, F(2,38) = 2.62, p = .086, and no significant interaction, F(2,38) = .99, p = .38.
These analyses are consistent with the prediction that priming by orientation should be observed only for drawings of objects and drawings of blocks.

The aim of this study was to investigate further the nature of action potentiation mechanisms as regards grasping decisions. In particular, we looked at the role of object orientation in action decision judgements with the help of a priming paradigm. We based our two experiments on the assumption that the grasping decision is sensitive to an object's orientation. This assumption derives from the suggestion that action knowledge includes information about an object's most frequent orientation (Tucker & Ellis, 2004), and that the visual properties of an object automatically re-activate the action knowledge associated with it (Tucker & Ellis, 1998, 2004; Derbyshire et al., 2006). It seems possible that, as a result of multiple experiences with an object, we acquire knowledge about its most frequent orientation. This knowledge is probably included in action knowledge about the object and would be used to potentiate action. Thus we predicted that effects of priming by orientation should be observed on action decisions for drawings of objects. The main result of Experiment 1 supports our prediction: a significant orientation priming effect was observed when action judgements were made for drawings of objects. Participants responded faster to target stimuli following congruent primes than following incongruent or neutral primes. However, the congruent primes did not influence accuracy, insofar as there was no significant priming effect on response accuracy. Our reaction time data, but not our accuracy data, are compatible with those reported by Tucker and Ellis (1998), who showed a stimulus-response compatibility effect based on object orientation.
The data of Tucker and Ellis suggest that an object's orientation (an extrinsic characteristic) may, like its other visual characteristics (e.g., size, shape), potentiate actions associated with it, even those not appropriate for an action's current purpose. According to Tucker and Ellis (2004), the orientation of an object plays an important role in on-line action control during grasping, but they did not rule out the possibility that some knowledge about object orientation may be included in object-action knowledge. Our data favour this suggestion. In a more recent study, however, Derbyshire, Ellis, and Tucker (2006) observed no stimulus-response compatibility effects related to object orientation. In fact, the left/right orientation of the objects had no influence on the speed and accuracy of right/left hand power responses in a categorization task (kitchen utensils/garage tools). According to the authors, these data run counter to the idea that object orientation may be included in object-action knowledge. Our data do not support this conclusion, since in our study orientation priming effects on the grasping decision were observed. However, we used a priming paradigm and a grasping decision task, rather than a stimulus-response compatibility paradigm and a conceptual decision task not directly involving action knowledge. The different results may therefore be due to the fact that in our study, unlike in the Derbyshire et al. study, participants made an explicit decision about grasping. In explaining the absence of any compatibility effect between object orientation and hand of response, Derbyshire and colleagues rejected the possibility that object orientation may afford actions associated with it via a memory-based representation. In their view, the orientation of an object cannot be stored in object-action representation because it is constantly changing.
The facilitating effect of orientation priming observed in our study calls this suggestion into question. Our reaction time data, unlike our accuracy data, are inconsistent with those reported by Yoon and Humphreys (2007), since these authors observed no effects of priming on either reaction times or accuracy. In contrast, we observed faster grasping decisions for drawings of objects primed by congruent primes than by incongruent or neutral primes. This discrepancy may be due to the fact that our participants had to decide on a category of grasping (horizontal or vertical) rather than a category of action (twisting). It is possible that object orientation plays a differential role in these two tasks, being more relevant for the grasping decision. In addition, the nature of the primes we used was also different. In the Yoon and Humphreys study, primes depicted objects belonging to the same or a different action category as the target stimulus. This rather complex kind of prime may activate information not relevant to action judgement tasks, which interferes with the effects of priming. In our study the primes were simple bars that activated only the information relevant for positioning the hand (vertical or horizontal). Yoon and Humphreys interpreted the absence of a priming effect on action judgement as supporting the idea that such a decision is made using an on-line link between vision and action judgement. In their view, the absence of priming effects on action judgement is evidence that any task concerning action, irrespective of whether that action is actually carried out, is mediated by the same mechanisms. This on-line link between vision and action was originally proposed as an explanation for the absence of priming effects on actual grasping (e.g., Cant, Westwood, Valyear, & Goodale, 2005). Cant et al. (2005) and Garofeanu, Króliczak, Goodale, and Humphrey
(2004) observed effects of repetition priming on naming but not on grasping. According to Garofeanu et al., real-time visuo-motor processing is sufficient for object grasping under visual guidance, which requires no memory-based visual representation of an object. Interestingly, in one of their experiments participants' naming responses were faster when grasping preceded naming. These data seem compatible with the idea, supported by our results, that the visual representation of an object includes action-related information. Some fMRI studies provide interesting results about object orientation processing. Valyear, Culham, Sharif, Westwood and Goodale (2006) have shown that the lateral occipito-parietal junction (OPJ), a dorsal stream region, is particularly sensitive to changes in object orientation but not to changes in object identity. These activations were observed while participants were passively viewing the stimuli. In addition, Rice, Valyear, Goodale, Milner, and Culham (2007) suggested that this sensitivity is specific to graspable objects. In their experiment, participants had to decide whether two sequential masked stimuli had the same orientation or not. Some of the stimuli were graspable and others were not. Differences in activation in the occipito-parietal junction related to changes in orientation were observed only for graspable objects. Although the authors did not use a priming paradigm, in both studies this sensitivity was observed while participants viewed changes between two consecutively presented visual stimuli. Unfortunately, the behavioural data are not presented. It would also be interesting to know whether participants' responses to graspable objects differed from their responses to non-graspable objects. Tucker and Ellis (2004) proposed that some knowledge about object orientation might be included in object-action knowledge; more recently, however, they did not support this proposition (Derbyshire et al., 2006).
However, given that familiar objects are frequently encountered in the same orientation, it seems possible that, as a result of our experience with them, some information about their most frequent orientation is stored within the object-action representation. Insofar as the notion of a most frequent orientation is relevant only for familiar objects, we compared the orientation priming effect on grasping decisions for drawings of objects with the same effect for drawings of blocks. Given that only familiar objects may have a most frequent orientation, we predicted that the effect of object orientation on the grasping decision would be greater with drawings of objects than with drawings of blocks. Our results support this prediction. An effect of priming was observed for drawings of both objects and blocks, but the benefit of congruent priming for the grasping decision was significantly greater for drawings of objects. Our explanation is that, unlike blocks, familiar objects have a familiar orientation and a functional identity, and that the object-action representations of familiar objects include both visual (intrinsic, and to some extent extrinsic) and functional information.

This interpretation is in keeping with the fMRI study of Creem-Regehr and Lee (2005), which examined whether the potentiation of action by familiar objects such as tools comes from their visual property of being graspable or from their functional identity. The authors compared cerebral activation for tools and for graspable 3D shapes (cylinder and cone) during simple observation or imagery tasks. During imagined grasping, the extent and location of parietotemporal activation was not the same for tools as for graspable shapes. In the case of tools, additional activation was also found in the middle temporal gyrus and the fusiform gyrus.
The authors suggested that both functional knowledge and an object's visual characteristics contribute to its object-action representation. Another possibility is that the difference in the benefit of priming observed with drawings of objects as compared to drawings of blocks is due to a convergence between the coarse type of grasping pre-activated by the prime and the specific grasping evoked by a drawing of an object. For example, drawings may automatically evoke object-specific grasping. Faster responses for objects may thus reflect the convergence of the coarse grasp type pre-activated by the prime and the object-specific grasp activated by the drawing.

In Experiment 2 we investigated whether an on-line visual representation is necessary for potentiation of action to occur. Participants were asked to make grasping decisions for the names of objects, as this kind of stimulus provides no on-line visual information about the object. We hypothesized that potentiation of action is based essentially on the visual characteristics of an object and thus should be preferentially elicited by drawings of objects. Chainay and Humphreys (2002) observed that action decisions (twisting/pouring) were faster and more accurate for drawings of objects than for their names. By contrast, no such difference between the two types of stimuli was observed for conceptual decisions (usually found in a kitchen or not). The authors suggested that objects have privileged access to action; in other words, potentiation of action is based in particular on the visual characteristics of the stimulus. Our results support this suggestion, insofar as we observed no effect of priming for the names of objects. Of particular interest here is the study by Tucker and Ellis (2004).
In one of Tucker and Ellis's experiments, participants had to categorize names and images of small and large objects as natural versus manufactured using a precision or power grip. Contrary to their own previous suggestion (Tucker & Ellis, 1998), the authors observed a compatibility effect for both images and names of objects. Insofar as on-line action-related visual information is absent with names, they concluded that the presence of the visual object is not necessary for action affordance to occur. Thus, when on-line reaching and grasping are not actually taking place, the potentiation of action depends more on stored knowledge of the object and the actions associated with it than on its detailed visual characteristics. Our data contradict this proposition: if names could potentiate an action, a priming effect would have been expected in the present study, yet no such effect was observed. It may be that action potentiation resulting from an object's orientation depends more on visual information than on information based on intrinsic characteristics such as size or shape. Another possibility is that the effect observed by Tucker and Ellis is specific to a stimulus-response compatibility paradigm, in which no explicit action decision is required.

In addition, independently of priming, grasping decisions were more accurate for drawings of objects than for words, probably because visual information about grasping an object is not directly present in words; access to this information from words relies on the semantic system. These results are compatible with those reported by Chainay and Humphreys (2002), who found that action judgements were faster and more accurate for depicted objects than for words. However, the reaction times for the grasping decision were quite similar for drawings of objects and words.
One would expect slower performance for words than for drawings of objects, because with words the grasp must be imagined. It is possible that participants prioritized accuracy over speed in their judgements of drawings of objects. Another possibility is that in the case of drawings, both routes, direct and indirect, contributed to the grasping decision. Riddoch et al. (1989) and Chainay and Humphreys (2002) suggested that visual and semantic action routes normally converge to govern action selection. Any discrepancy between the information activated in these two routes may, for example, slow down the grasping decision. The absence of a difference in speed between drawings of objects and words needs to be investigated further.

In the present study we observed a priming effect of orientation on grasping decisions for drawings of objects but not for words. However, as only a few studies have investigated effects of orientation on action decisions, further work is required to confirm the present data.

Fig. 3 Examples of the experimental stimuli. A. Horizontal, vertical and neutral primes. B. Horizontal and vertical blocks. C. Examples of the pictures of objects selected for the horizontal hand grasping condition. D. Examples of the pictures of objects selected for the vertical hand grasping condition.

Table 1 List of the experimental stimuli

Horizontal hand grasping: 1. Knife, 2. Potato peeler, 3. Handbag, 4. Frying pan, 5. Tube paste, 6. Clothes peg, 7. Loaf of bread, 8. Painting brush, 9. Hammer
Vertical hand grasping: 1. Salter, 2. Lipstick, 3. Water pot, 4. Yogurt, 5. Bottle, 6. Cornet of ice cream, 7. Glass, 8. Candle, 9. Cup



Hanna Chainay, Lucie Naouri, Alice Pavec. Orientation priming of grasping decision for drawings of objects and blocks, and words, Memory & Cognition, 2011, 614-624, DOI: 10.3758/s13421-010-0049-9