A Common Representation of Spatial Features Drives Action and Perception: Grasping and Judging Object Features within Trials
Research output: Contribution to journal › Journal article › Research › peer-review
Spatial features of an object can be specified using two different response types: either symbolically, by use of symbols, or motorically, by directly acting upon the object. Is this response dichotomy reflected in a dual representation of the visual world: one for perception and one for action? Previously, symbolic and motoric responses specifying location have been shown to rely on a common representation. What about more elaborate features such as length and orientation? Here we show that when motoric and symbolic responses are made within the same trial, the probability of making the same symbolic and motoric response is well above chance for both length and orientation. This suggests that motoric and symbolic responses to length and orientation are driven by a common representation. We also show that, for both response types, the spatial features of an object are processed independently. This finding of matching object-processing characteristics is also in agreement with the idea of a common representation driving both response types.
Number of pages: 14
Publication status: Published - 2014