1.
Abstract

Participants learned circular layouts of six objects presented haptically or visually, then indicated the direction from a start target to an end target of the same or different modality (intramodal versus intermodal). When objects from the two modalities were learned separately, superior performance for intramodal trials indicated a cost of switching between modalities. When a bimodal layout intermixing modalities was learned, intra- and intermodal trials did not differ reliably. These findings indicate that a spatial image, independent of input modality, can be formed when inputs are spatially and temporally congruent, but not when modalities are temporally segregated in learning.
2.
We examined the use of hand gestures while people solved spatial reasoning problems in which they had to infer motion from static diagrams (mental animation problems). In Experiment 1, participants were asked to think aloud while solving mental animation problems. They gestured on more than 90% of problems, and most gestures expressed information about the component motions that was not stated in words. Two further experiments examined whether the gestures functioned in the mechanical inference process, or whether they merely served to express or communicate the results of that process. In these experiments, we examined the effects of think-aloud instructions, restriction of participants' hand motions, and secondary tasks on mental animation performance. Although participants who were instructed to think aloud gestured more than control groups, some gestures occurred even in control conditions. A concurrent spatial tapping task impaired performance on mechanical reasoning, whereas a simple tapping task and restricting hand motions did not. These results indicate that gestures are a natural way of expressing the results of mental animation processes and suggest that spatial working memory and premotor representations are involved in mental animation. They provide no direct evidence that gestures are functional in the thought process itself, but they do not rule out a role for overt gestures in this type of spatial thinking.
3.
This paper tests the generality and implications of an encoding-error model (Fujita et al. 1993) of humans' ability to keep track of their position in space in the absence of visual cues (i.e., by nonvisual path integration). The model proposes that when people undergo nonvisually guided travel, they encode the distances and turns that they experience, and their errors reflect systematic inaccuracies in the encoding process. Thus when people try to return to the origin of travel, they base their response on mis-encoded values of the outbound distances and turns. The two experiments reported here addressed three issues related to the model: (i) whether path integration is context-dependent and, if so, how rapidly it adapts to recently experienced distances and turns; (ii) whether effects of experience can be specifically attributed to changes in the encoding process, and if so, what changes; and (iii) whether the encoding process represents distances and turns in the individual paths without considering their spatial relationship to one another (i.e., an object-centered representation). Testing these issues allows us to evaluate and develop the model.

Subjects who were blindfolded or had restricted vision were led through two legs of a triangle and the turn between, then tried to return to the origin. Paths varied in whether experienced legs and turns were small or large (Experiment 1) and in variability of return and outbound course (Experiment 2). Response turn, distance, and course were determined. The assumption of immutable encoding functions was not supported; encoding processes were context-dependent, although they did not adapt within a block of trials. Although effects of experience could be accounted for by the model, the affected parameters were not always as predicted, and in some cases additional parameters were necessary. Results of manipulating variability in return course were consistent with the model's assumption of an object-centered representation.
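A minimal sketch may help fix the model's core idea: the homing response is computed correctly, but from systematically mis-encoded values of the outbound distances and turns. The linear encoding functions and their parameters below are illustrative assumptions for a triangle-completion path, not the fitted functions reported by Fujita et al. (1993).

```python
import math

# Hypothetical encoding functions: the model assumes distances and turns
# are encoded with systematic error; the linear forms and parameter
# values here are placeholders, not fitted values.
def encode_distance(d, slope=0.8, intercept=1.0):
    return slope * d + intercept

def encode_turn(turn_deg, slope=0.9, intercept=5.0):
    return slope * turn_deg + intercept

def predicted_return(leg1, turn_deg, leg2):
    """Encoding-error model of triangle completion: encode the two
    outbound legs and the turn between them, then compute the return
    response correctly from those (mis-encoded) values."""
    d1 = encode_distance(leg1)
    d2 = encode_distance(leg2)
    t = math.radians(encode_turn(turn_deg))

    # Position at the end of leg 2, starting at the origin heading
    # along +x and turning counterclockwise by t after leg 1.
    x = d1 + d2 * math.cos(t)
    y = d2 * math.sin(t)

    return_distance = math.hypot(x, y)
    # Turn required at the end of leg 2 to face the origin,
    # wrapped to the range [-180, 180).
    turn_to_origin = math.degrees(math.atan2(-y, -x) - t)
    turn_to_origin = (turn_to_origin + 180.0) % 360.0 - 180.0
    return turn_to_origin, return_distance

# Example: a 4 m leg, a 90-degree turn, then a 6 m leg.
print(predicted_return(leg1=4.0, turn_deg=90.0, leg2=6.0))
```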
4.
Abstract

This research uses a novel integration paradigm to investigate whether target locations read in from long-term memory (LTM) differ from perceptually encoded inputs in spatial working memory (SWM) with respect to systematic spatial error and/or noise, and whether SWM can simultaneously encompass both of these sources. Our results provide evidence for a composite representation of space in SWM derived from both perception and LTM, albeit with a loss in spatial precision of locations retrieved from LTM. More generally, the data support the concept of a spatial image in working memory and extend its potential sources to representations retrieved from LTM.