Research in Developmental Disabilities 32 (2011) 2084–2091
The impact of vision in spatial coding

Konstantinos Papadopoulos *, Eleni Koustriava

Department of Educational and Social Policy, University of Macedonia, 156 Egnatia St., P.O. Box 1591, 54006 Thessaloniki, Greece
Article history: Received 23 July 2011; Accepted 25 July 2011; Available online 15 September 2011.

Abstract
The aim of this study is to examine performance in coding and representing near space in relation to vision status (blindness vs. normal vision) and sensory modality (touch vs. vision). Forty-eight children and teenagers participated: 16 of the participants were totally blind or had only light perception, 16 were blindfolded sighted individuals, and 16 were non-blindfolded sighted individuals. Participants were given eight different object patterns in different arrays and were asked to code and represent each of them. The results suggest that vision influences performance in spatial coding and spatial representation of near space. However, there was no statistically significant difference between participants with blindness who used the most effective haptic strategy and blindfolded sighted participants. Thus, the significance of haptic strategies is highlighted.

© 2011 Elsevier Ltd. All rights reserved.
Keywords: Vision; Visual impairments; Spatial coding; Haptic strategies
1. Introduction

Many behavioral studies reveal the detrimental effect of vision loss on the acquisition of spatial knowledge (see for a review Cattaneo et al., 2008). According to one view, the mental representation of space is essentially visual in character (Huttenlocher & Presson, 1973). However, there is another point of view according to which mental representation neither results from nor reflects visual perception (Millar, 1976). Although Millar (1988) accepts that the type of information used by individuals with congenital blindness may cause difficulties in mental spatial reorganization tasks, she states that vision is neither necessary nor sufficient for spatial coding. In the same direction, Paivio (1986) suggests that imagery can result from every sensory modality.

A number of studies provide evidence of quite similar performance between individuals with blindness and individuals with normal vision when the experimental task concerns visual imagery (see for a review Cattaneo et al., 2008). Different cognitive strategies, such as mental spatial representations or haptic imagery, may intervene, resulting in a performance equivalent to one based on visual imagery (Cattaneo et al., 2008). However, it has been suggested that there may be a distinction between visual imagery and representational spatial imagery; the latter seems to be an amodal and abstract mental representation (Corballis, 1982). Thus, while visual input is considered by many researchers as necessary for imagery, others argue that the lack of it can be compensated for through the development of another sensory modality. For instance, part of the research underlines the effectiveness of touch in specific tasks, in which individuals with visual impairments performed as well as or even better than their sighted counterparts (Heller, 1989; Heller, Brackett, Scroggs, & Allen, 2001; Postma, Zuidhoek, Noordzij, & Kappers, 2007).
Heller, Wilson, Steffen, and Yoneyama (2003) suggested that haptics might surpass visual experience when haptic selectivity is required. In addition, Fiehler, Reuschel, and Rösler (2009) showed that early instruction in orientation and
mobility training may sharpen spatial perception in individuals with congenital blindness, who performed as well as individuals without visual impairment in the spatial tasks of the study in question.

What happens, though, when individuals with total blindness code and represent near (peripersonal) space? Does vision still have superiority, or does the haptic experience of individuals with blindness compensate for the lack of vision and enable an equal or even better performance? Moreover, what happens when sighted individuals, with and without a blindfold, code and represent near space? Could touch be as effective as vision? Does visual experience influence performance?

Several studies have examined the ability of people with visual impairments in spatial coding and spatial representation of near space (Hollins & Kelley, 1988; Millar, 1979; Monegato, Cattaneo, Pece, & Vecchi, 2007; Papadopoulos, Koustriava, & Kartasidou, 2009, 2010; Pasqualotto & Newell, 2007; Postma et al., 2007; Vanlierde & Wanet-Defalque, 2004), with significant findings. The majority of researchers who compared the spatial performance of individuals with visual impairments to that of sighted individuals, or the performance of individuals with blindness to that of peers with residual vision, have concluded that visual experience decisively influences the management of the spatial environment. The role of visual experience in spatial cognition is thus considered a major one (Cattaneo et al., 2008). Apart from enabling spatial imagery (Hollins, 1985; Vanlierde & Wanet-Defalque, 2004), visual experience is also considered very important for the effective coding and representation of spatial information, as well as for the updating of spatial haptic representations (Pasqualotto & Newell, 2007).
However, research has indicated that a person with congenital visual impairment may perform better at spatial tasks than a person who is adventitiously impaired (Monegato et al., 2007). There are previous studies of sighted participants which examine near-space coding through various sensory modalities (Newell, Woods, Mernagh, & Bülthoff, 2005), different frames of reference (Kappers, 2007; Waller, Lippa, & Richardson, 2008) and viewpoints (Mou, Fan, McNamara, & Owen, 2008; Mou, McNamara, Valiquette, & Rump, 2004), under blindfolded conditions or conditions of non-informative vision (Newport, Rabb, & Jackson, 2002; Newell et al., 2005). The same procedures cannot be used, however, nor can the same conclusions be validated, for individuals with blindness. That is because individuals with blindness rely on haptic strategies to code and represent space and tend to apply different, more egocentric, coding systems (Spencer, Blades, & Morsley, 1989; Warren, 1994).

Thinus-Blanc and Gaunet (1997) define strategy as 'the set of functional rules implemented by the participant at the various phases of information processing, from the very first encounter with a new situation until the externalization of the spatial knowledge'. Studying the results of research in large-scale space, they concluded that strategies may be the cause of similar performance levels among participants with blindness, participants with late blindness and blindfolded sighted participants (Thinus-Blanc & Gaunet, 1997). Egocentric representation expresses the relationship between the position of objects and the viewer (Wang, 2003). It originates in sensory data and can provide a starting-point for action in space (Nardini, Burgess, Breckenridge, & Atkinson, 2006; Pick, 2004).
In the case of haptic exploration of space the egocenter – the reference point according to which the distance and/or orientation of an object in space is encoded – is not the eye or head, and not necessarily the body of the viewer, but could be the shoulder, elbow or wrist (Kappers, 2007). Allocentric representation contains no spatial information relative to a viewer (Wang, 2003) and expresses a location in relation to an external point of reference (Nardini et al., 2006).

Children with blindness have difficulty in changing their frame of reference from egocentric to allocentric (Morsley, Spencer, & Baybutt, 1991; Ochaita & Huertas, 1993; Warren, 1994). They seem to abide by egocentric strategies and to be slow to make the transition to an allocentric encoding system (Ochaita & Huertas, 1993; Warren, 1994). Visual experience seems to be responsible for the adoption of specific haptic strategies (Spencer et al., 1989; Ungar, Blades, & Spencer, 1995), which in turn define the performance of an individual with visual impairments in spatial tasks (Ungar et al., 1995). However, visual experience does not seem to be the only factor that influences the adoption of a specific coding strategy. Since individuals with congenital blindness have been found to use allocentric coding strategies, other factors, such as education in orientation and mobility, may mediate. Ungar et al. (1995) showed that the participants (whether those with congenital blindness or those with residual vision) who used methodical strategies for coding near space – calculating the position of each shape from the distances between the shapes and from each shape to the external frame – performed better than the participants who used simpler coding strategies.
The results of studies showing that individuals with blindness can perform similarly to individuals with normal vision in spatial tasks are usually discussed through the prism of compensatory mechanisms (Cattaneo et al., 2008). What if, in near-space coding, haptics can carry approximately equal force to vision? It has been suggested that if a spatial point of reference is detectable by an individual who is visually impaired, it could lead the person to allocentric coding (Millar, 1979). Millar (1979) designed her research supposing that children with blindness rely on an egocentric system – specifically, on movements – because they fail to internalize external cues that are not present for them. According to Millar, individuals with visual impairments fail to 'internalize' spatial information when the external cues cannot be perceived; in other words, they internalize spatial cues when these are present (Millar, 1979).

Vision facilitates mental combinations of three or more items, whereas haptics leads to a sequential way of receiving information (Cattaneo et al., 2008). For this reason it is assumed that individuals with blindness face serious difficulties in simultaneously processing a considerable amount of haptic information (Cattaneo et al., 2008). This hypothesis is further supported by the fact that the efficiency of vision in collecting and processing a large amount of spatial information is
decisively reduced when visual exploration is made sequential so as to resemble haptic exploration (Loomis, Klatzky, & Lederman, 1991). What happens, though, when there are haptic strategies that permit touch to combine a significant amount of spatial information? Is it possible that haptic strategies compensate for the properties of vision? And what really happens when sighted individuals are forced to use haptic strategies?

2. Study

The aim of this study was to examine performance in coding and representing near space in relation to vision status (blindness vs. normal vision) and sensory modality (touch vs. vision). For this purpose, individuals with blindness, individuals with normal vision and blindfolded sighted individuals were compared on the basis of their performance in coding and representing near space. The influence of haptic strategies on the performance of participants with blindness was examined as well.

2.1. Participants

Forty-eight children and teenagers participated in the present study. Sixteen of the participants were totally blind or had only light perception, 16 were blindfolded sighted individuals, and 16 were non-blindfolded sighted individuals. The three groups were matched in terms of gender, age and educational level. Eight boys and eight girls, aged from seven years and 11 months to 17 years (M = 12.58 years, SD = 2.42), comprised the group of participants with blindness (group A). Fourteen of them were congenitally blind, and two became blind before the age of six. Concerning their educational level, 11 of the participants were primary school students, 3 were secondary school students and 2 were high school students. The participants came from two cities in Greece: the capital, Athens, and the second largest city, Thessaloniki. First, we compiled a list of students with blindness who studied in special schools for children with visual impairments or in mainstream schools.
However, only students whose parents gave their consent and who had no additional disabilities participated. Eight boys and eight girls, aged from seven years and six months to 17 years (M = 12.57 years, SD = 2.96), constituted the group of blindfolded sighted participants (group B). Eight boys and eight girls, aged from seven years and four months to 17 years (M = 12.51 years, SD = 2.82), constituted the group of sighted participants without blindfold (group C).

2.2. Experiment

The experiment examined the ability of each participant in spatial coding and spatial representation of near space. The participants were required to memorize the type, position and orientation of the various shapes which were given to them in turn, and subsequently to place the correct shapes in the right position and orient them correctly. The experiment is similar to tests that were used by Platsidou (1993) for the evaluation of ability in spatial coding and spatial representation in sighted individuals. Moreover, the same experiment was used in a previous study by Papadopoulos et al. (2010) for the evaluation of individuals with visual impairments.

The experiment consisted of two tests. In the first test the shapes were placed in their original orientation; in other words, the shapes were not rotated. In the second test the shapes were rotated (see Fig. 1 for the correct orientation of shapes and Fig. 2 for rotated shapes). Each test included four different sub-tests (2, 3, 4 and 5 shapes, respectively), so the experiment consisted of eight sub-tests in total.

2.3. Materials and design

Eight surfaces and a base were constructed. The base was wooden, A3 size (42 cm × 29.7 cm), and its edges were defined by a prominent black frame which could easily be identified through touch. Consequently, the participant could use the frame or some of its points (the corners) as reference points when coding and representing the shapes, and could therefore use allocentric references for spatial coding.
Spatial coding based on allocentric references brings about better results (Nardini et al., 2006; Pick, 2004). The surfaces were constructed out of white A3-size paper, onto which geometric shapes made of black-painted cork were glued. The height of the shapes was about 3 mm. The black color used for the frame of the base and for the shapes was chosen to ensure proper contrast with the white paper. Moreover, a set of loose shapes (not glued to the paper), also made of black-painted cork and approximately 3 mm high, was used in the representation phase.

2.4. Procedure

Each participant was examined alone in a quiet room. Initially, the researcher explained in detail the procedure to be followed during the test, and a short period of time (5 min) then followed for the participant to become familiar with the procedure.
Fig. 1. The four surfaces used in the first test, with the shapes in their correct orientation.
The participant sat comfortably in a chair in front of a table. The base was placed on the table, directly in front of the participant. On this base the researcher placed the surface with the glued-on shapes (initial surface) (see Figs. 1 and 2). A specific amount of time was given for the coding of each surface: 30 s for the surface with two shapes, with an extra 15 s allowed for each additional shape. That is, 45 s were given for three shapes, 60 s for four shapes and 75 s for five shapes. Participants with blindness and blindfolded sighted participants read (coded) the surface through touch; non-blindfolded sighted participants could only use their vision. In order to examine the impact of haptic strategies on performance, we observed and recorded the way each participant read/coded each surface. When the participant stated that he/she had completed the coding phase, or when the available time ran out, the researcher took the surface with the shapes away from the participant. Then, after a period of 10 s, a blank surface (a piece of A3-size paper) was placed on the base and a set of shapes was placed on the table. The delay was inevitable as this was the time
Fig. 2. The four surfaces used in the second test, with the shapes rotated.
Fig. 3. Specimen of the four shapes correctly orientated (d, distance between the centers of the original shape and the final shape – location error; a, divergence of direction – orientation error).
needed to place the blank surface on the base and the base in the appropriate position in front of the participant, as well as to place the set of shapes on the table, next to the base.

In each sub-test the set of shapes given to the participant numbered twice as many as the shapes on the surface. The participant had to choose the correct shapes (the same ones that were placed on the initial surface) and place them in the correct position and orientation. For example, for the representation of the two shapes the participant had to choose the two correct shapes out of the four different ones given to him/her. Under each of the given shapes was a piece of modeling clay to ensure that the shapes could be securely placed on the paper, but also to allow swift corrections of a shape's position. The surfaces were presented to the participant in a specific order; that is, first the surface with the two shapes, then the surface with the three shapes, and so on. No time limits were applied for the representation of each surface. When the participant stated that he/she had completed reproducing each surface, the researcher registered the position of each shape by drawing its outline on the surface with a pencil. Out of this procedure a total of 384 A3 surfaces emerged (8 for each participant), on which the positions of the shapes were drawn as the participants had defined them during the representation procedure (final surfaces).

For the evaluation of the participants' performance the following measurements were taken: (1) the location error, i.e. the distance (d) between the centers of the original shape (the one the participant read on the initial surface) and the final shape (the one that resulted from the representation procedure) (see Fig. 3); (2) the orientation error, i.e. the divergence angle (a) between the direction of the shape placed by the participant and the initial, correct direction of the shape (see Fig. 3) – whenever this angle exceeded 15° the participant's answer was marked as wrong (this measurement was not applied when, during the selection and placement of the shapes, the participant replaced one shape with another); and (3) the number of replacements, i.e. the sum of two errors: object identity errors and object-to-position assignment errors. An object identity error was counted for each shape that, after the representation procedure, did not match one from the initial surface of the sub-test. An object-to-position assignment error was counted when a shape appeared on both the initial and the final surface but in a different position.

3. Results

Test scores for each group of participants were calculated in relation to location error, orientation error, and replacements (see Table 1). As far as the first test is concerned, one-way ANOVA revealed statistically significant differences among the three groups regarding the location error (F = 8.510, p < .01) and orientation error (F = 7.997, p < .01). In particular, the individuals of group A (participants with blindness) have a greater location error than the individuals of group B (blindfolded sighted participants) (Bonferroni post hoc test, p < .05) and those of group C (sighted) (p < .01). Moreover, the individuals of group A have a greater orientation error than the individuals of group B (p < .05) and those of group C (p < .01).

Table 1
Mean score of wrong answers of each group in the first and second test.

                                     First test           Second test
Group                                LE     OE    RP      LE     OE    RP
Participants with blindness          73.59  2.44  3.94    74.83  4.44  5.00
Blindfolded sighted participants     56.48  1.13  3.13    48.01  4.63  2.31
Sighted participants                 46.59  0.75  2.94    41.42  3.50  1.38

LE, location error (cm); OE, orientation error (number of errors); RP, replacements (number of errors).
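The scoring rules above are simple arithmetic; the sketch below illustrates them in Python. The function names and the encoding of a surface as a {slot: shape_id} dictionary are assumptions made for this sketch, not part of the study's materials.

```python
import math

def coding_time(n_shapes):
    """Seconds allowed for coding a surface: 30 s for two shapes,
    plus 15 s for every additional shape."""
    return 30 + 15 * (n_shapes - 2)

def location_error(original_center, final_center):
    """Location error: distance d (cm) between the centers of the
    original shape and the reproduced shape (see Fig. 3)."""
    (x0, y0), (x1, y1) = original_center, final_center
    return math.hypot(x1 - x0, y1 - y0)

def orientation_wrong(original_deg, final_deg, threshold=15.0):
    """Orientation error: the answer counts as wrong when the
    divergence angle a exceeds 15 degrees."""
    diff = abs(final_deg - original_deg) % 360.0
    diff = min(diff, 360.0 - diff)  # smallest angular difference
    return diff > threshold

def replacements(original, final):
    """Replacements: object identity errors (a reproduced shape that was
    not on the initial surface) plus object-to-position assignment
    errors (a correct shape placed at the slot of a different one).
    Assumes unique shape ids per surface, given as {slot: shape_id}."""
    identity = sum(1 for s in final.values() if s not in original.values())
    position = sum(1 for slot, s in final.items()
                   if s in original.values() and original.get(slot) != s)
    return identity + position
```

For example, `coding_time(5)` yields the 75 s allowed for the five-shape surface described in Section 2.4.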
Table 2
Mean score of the wrong answers of participants with visual impairments in relation to haptic strategy in the first and second test.

           First test            Second test
Group      LE      OE    RP      LE     OE    RP
Group-1    54.27   2.00  1.50    63.50  4.50  3.33
Group-2    76.83   2.00  3.67    65.37  6.00  2.00
Group-3    71.63   1.67  5.00    78.87  5.33  4.33
Group-4    101.60  4.00  7.00    95.88  2.50  10.25

LE, location error (cm); OE, orientation error (number of errors); RP, replacements (number of errors).
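The F values reported in this section come from one-way ANOVAs, i.e. the between-group mean square divided by the within-group mean square. For readers who want the mechanics, here is a minimal pure-Python sketch of that computation; the data are made up for illustration and are not the study's raw scores.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    over within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    group_means = [sum(g) / len(g) for g in groups]
    # Sum of squares between groups (weighted by group size).
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Sum of squares within groups (residual variation).
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical per-participant location errors (cm) for three groups;
# clearly separated groups yield a large F.
blind = [80.1, 71.2, 68.5, 75.0]
blindfolded = [55.4, 58.9, 52.1, 60.3]
sighted = [44.2, 48.8, 45.1, 47.9]
print(round(one_way_anova_f([blind, blindfolded, sighted]), 3))
```

A Bonferroni post hoc test, as used here, then compares the groups pairwise while multiplying each pairwise p-value (equivalently, dividing alpha) by the number of comparisons, three in this design.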
Concerning the second test, one-way ANOVA revealed statistically significant differences among the three groups regarding the location error (F = 15.350, p < .01) and the number of replacements (F = 7.097, p < .01). The individuals of group A have a greater location error than the individuals of group B (Bonferroni post hoc test, p < .01) and the individuals of group C (p < .01). Moreover, the individuals of group A have a greater number of replacements than the individuals of group B (p < .05) and those of group C (p < .01). On the other hand, in neither the first test nor the second are there any statistically significant differences between blindfolded sighted individuals and sighted individuals with reference to the location error, orientation error, and replacements.

As previously mentioned (Section 2.4), the researcher observed and recorded the way each participant read/coded each surface. Depending on the strategy used by each participant, five groups emerged with the following characteristics: (a) the participants of group 1 scanned the surface with their hands and simultaneously measured the distances of the shapes from the points of reference, using as points of reference both the ends of the frame and the nearest other shapes placed on the surface (resulting in an allocentric – both intrinsic and extrinsic – mental representation); (b) the participants of group 2 used only the other shapes as points of reference (resulting in an intrinsic allocentric mental representation); (c) the participants of group 3 used only the ends of the frame as points of reference (resulting in an extrinsic allocentric mental representation); (d) the participants of group 4 did not use any external points of reference, but simply touched the shapes without measuring distances; and (e) the participants of group 5 used their vision to code the arrangement of the shapes.
Of the 16 participants with blindness, six fell into group 1, three into group 2, three into group 3, and four into group 4. Of the 16 blindfolded sighted participants, eight fell into group 1, seven into group 3 and only one into group 4. All the participants who used haptic strategies used both hands to explore the configurations; none kept one hand anchored to the frame while exploring with the other.

Participants with blindness who formed group 1, which had the best haptic strategy, appear to have better performance than the other groups in the first and second tests (see Table 2). Moreover, a one-way ANOVA was implemented to see whether there were any statistically significant differences between group 1 of the participants with blindness and the two groups of sighted participants (blindfolded and non-blindfolded). Concerning the first test, ANOVA revealed statistically significant differences among the three groups regarding the orientation error (F = 4.157, p < .05). The participants with blindness have a greater orientation error than the sighted participants without blindfold (Bonferroni post hoc test, p < .05). No statistically significant differences emerged between participants with blindness and blindfolded sighted participants. Concerning the second test, ANOVA revealed statistically significant differences among the three groups regarding the location error (F = 4.164, p < .05). Participants with blindness have a greater location error than the sighted participants without blindfold (Bonferroni post hoc test, p < .05). There are no statistically significant differences between participants with blindness and blindfolded sighted individuals.

4. Discussion

The present study concludes that vision influences performance in spatial coding and spatial representation of near space.
The participants with blindness performed worse than the sighted participants, whether the latter used touch to code and represent near space (group B) or used only their vision (group C). Moreover, it seems that this does not derive mainly from the fact that individuals with blindness code and represent near space through touch; otherwise, one would anticipate a statistically significant difference between blindfolded sighted participants and sighted participants without blindfold, and no such result was detected here. Although the sighted participants who used their vision outperformed the sighted participants who used only touch (blindfolded), the differences between the former and the latter were not statistically significant.

Previous studies that examined the ability of people with visual impairments in spatial coding and spatial representation of near space have concluded that vision – even if reduced – is important for coding and representing space (Papadopoulos et al., 2009, 2010; Pasqualotto & Newell, 2007; Ungar et al., 1995). However, individuals with congenital blindness are able to improve their performance by using the proper strategies to explore and code space (Papadopoulos et al., 2010); the proper strategies in this context seem to be the allocentric ones. In the present study, the statistical data concerning the performance of participants with blindness who used the most effective haptic strategy
compared with the performance of blindfolded sighted participants are of great importance. They reveal the significance of another influential factor: haptic strategies. It has been suggested that individuals with congenital blindness tend to use self-referent strategies when they code small-scale places (Hollins & Kelley, 1988). Papadopoulos et al. (2010) found, however, that only around a third of participants with blindness appeared to use egocentric strategies alone to code near space. Moreover, only a quarter of participants who were congenitally blind used self-referent strategies. Similarly, in the study of Ungar et al. (1995) only a small proportion of the participants with congenital blindness (10% in the three-shape layout and 20% in the five-shape layout) used self-referent strategies to code and represent space. Here, we argue not only that individuals with blindness are capable of adopting allocentric strategies but also that their performance in coding space can equal that of blindfolded sighted individuals.

In research by Postma et al. (2007), no significant differences were observed between the performances of participants with blindness and blindfolded sighted participants in coding near space. However, the vast majority of the participants were over 40 years old. Moreover, participants had the chance to code the locations in previous trials through movement, which according to Millar (1994) is a basic mode of spatial coding. Millar (1979) suggested that in near space (hand) movements are involved in the memorization of the distance and direction of objects. Movements and proprioceptive information used to code space are by definition egocentric strategies; thus, one could not argue in favor of an allocentric representation of objects. Although Postma et al.
(2007) mention that the descriptions given by participants with blindness reveal an allocentric–intrinsic mental representation of objects, this same representation could be just an indication of serial memory, which seems to be essential for individuals with blindness trying to generate mental representations (Cattaneo et al., 2008). Moreover, in the study of Papadopoulos et al. (2010) it also became apparent that the ability for independent movement influences haptic strategy selection: the group of participants with the most efficient haptic strategy appeared to have a greater ability for independent movement. Furthermore, Fiehler et al. (2009) suggested that the early training of children with congenital blindness in orientation and mobility results in acute spatial perception as well as in allocentric coding performance similar to that of individuals with normal vision.

To summarize, vision is very important for coding and representing near space. The proper allocentric strategy, however, can compensate for the properties of vision. In the present study we show that individuals with blindness – even those with congenital blindness – are indeed capable of applying allocentric strategies to code and represent space. For the first time, it appears that when proper allocentric strategies are used during the coding and representation of near space, the performance of individuals with blindness can equal that of blindfolded sighted individuals. These findings are significant enough to be applied to the orientation and mobility training of individuals with visual impairments, to the field of environmental accessibility, or even to the adaptation of the environment. Previous studies with sighted participants have also concluded that there are correlations between spatial abilities in small-scale space and spatial knowledge of far space (see Hegarty, Montello, Richardson, Ishikawa, & Lovelace, 2006).
References

Cattaneo, Z., Vecchi, T., Cornoldi, C., Mammarella, I., Bonino, D., Ricciardi, E., et al. (2008). Imagery and spatial processes in blindness and visual impairment. Neuroscience and Biobehavioral Reviews, 32, 1346–1360.
Corballis, M. C. (1982). Mental rotation: Anatomy of a paradigm. In M. Potegal (Ed.), Spatial abilities: Developmental and physiological foundations. New York: Academic Press.
Fiehler, K., Reuschel, J., & Rösler, F. (2009). Early non-visual experience influences proprioceptive-spatial discrimination acuity in adulthood. Neuropsychologia, 47, 897–906.
Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., & Lovelace, K. (2006). Spatial abilities at different scales: Individual differences in aptitude-test performance and spatial-layout learning. Intelligence, 34, 151–176.
Heller, M. (1989). Picture and pattern perception in the sighted and the blind: The advantage of the late blind. Perception, 18, 379–389.
Heller, M. A., Brackett, D. D., Scroggs, E., & Allen, A. C. (2001). Haptic perception of the horizontal by blind and low-vision individuals. Perception, 30, 601–610.
Heller, M. A., Wilson, K., Steffen, H., & Yoneyama, K. (2003). Superior haptic perceptual selectivity in late-blind and very-low-vision subjects. Perception, 32, 499–511.
Hollins, M. (1985). Styles of mental imagery in blind adults. Neuropsychologia, 23, 561–566.
Hollins, M., & Kelley, K. E. (1988). Spatial updating in blind and sighted people. Perception and Psychophysics, 43(4), 380–388.
Huttenlocher, J., & Presson, C. C. (1973). Mental rotation and the perspective problem. Cognitive Psychology, 4, 277–299.
Kappers, A. M. L. (2007). Haptic space processing – Allocentric and egocentric reference frames. Canadian Journal of Experimental Psychology, 61(3), 208–218.
Loomis, J. M., Klatzky, R. L., & Lederman, S. J. (1991). Similarity of tactual and visual picture recognition with limited field of view. Perception, 20, 167–177.
Millar, S. (1976).
Spatial representation by blind and sighted children. Journal of Experimental Child Psychology, 21, 460–479. Millar, S. (1979). The utilization of external and movement cues in simple spatial tasks by blind and sighted children. Perception, 8, 11–20. Millar, S. (1988). Models of sensory deprivation: The nature/nurture dichotomy and spatial representation in the blind. International Journal of Behavioral Development, 11(1), 69–87. Millar, S. (1994). Understanding and representing space: Theory and evidence from studies with blind and sighted children. New York: Oxford University Press. Monegato, M., Cattaneo, Z., Pece, A., & Vecchi, T. (2007). Comparing the effects of congenital and late visual impairments on visuospatial mental abilities. Journal of Visual Impairment and Blindness, 101(5), 278–295. Morsley, K., Spencer, B., & Baybutt, K. (1991). Is there any relationship between a child’s body image and spatial skills? The British Journal of Visual Impairment, 9(2), 41–43. Mou, W., Fan, Y., McNamara, P. T., & Owen, B. C. (2008). Intrinsic frames of reference and egocentric viewpoints in scene recognition. Cognition, 106, 750–769. Mou, W., McNamara, P. T., Valiquette, M. C., & Rump, B. (2004). Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(1), 142–157. Nardini, M., Burgess, N., Breckenridge, K., & Atkinson, J. (2006). Differential developmental trajectories for egocentric, environmental and intrinsic frames of reference in spatial memory. Cognition, 101, 153–172. Newell, F. N., Woods, A. T., Mernagh, M., & Bu¨lthoff, H. H. (2005). Visual, haptic and crossmodal recognition of scenes. Experimental Brain Research, 161, 233–242. Newport, R., Rabb, B., & Jackson, R. S. (2002). Noninformative vision improves haptic spatial perception. Current Biology, 12, 1661–1664.
Ochaita, E., & Huertas, J. A. (1993). Spatial representation by persons who are blind: A study. Journal of Visual Impairment and Blindness, 87(2), 37–41.
Paivio, A. (1986). Mental representations: A dual coding approach. Oxford: Oxford University Press.
Papadopoulos, K., Koustriava, E., & Kartasidou, L. (2009). The impact of residual vision in spatial skills of individuals with visual impairments. Journal of Special Education. doi:10.1177/0022466909354339
Papadopoulos, K., Koustriava, E., & Kartasidou, L. (2010). Spatial coding of individuals with visual impairment. Journal of Special Education. doi:10.1177/0022466910383016
Pasqualotto, A., & Newell, F. N. (2007). The role of visual experience on the representation and updating of novel haptic scenes. Brain and Cognition, 65, 184–194.
Pick, H. L., Jr. (2004). Mental maps, psychology of. In P. B. Baltes & N. J. Smelser (Eds.), International encyclopedia of the social & behavioral sciences (pp. 9681–9683). Amsterdam, Netherlands: Elsevier.
Platsidou, M. (1993). Information processing system: Structure, development and interaction with specialized cognitive abilities. Doctoral dissertation. Thessaloniki: Aristotle University of Thessaloniki.
Postma, A., Zuidhoek, S., Noordzij, M. L., & Kappers, A. M. L. (2007). Differences between early-blind, late-blind, and blindfolded-sighted people in haptic spatial-configuration learning and resulting memory traces. Perception, 36, 1253–1265.
Spencer, C., Blades, M., & Morsley, K. (1989). The child in the physical environment: The development of spatial knowledge and cognition. Chichester, NY: Wiley.
Thinus-Blanc, C., & Gaunet, F. (1997). Representation of space in blind persons: Vision as a spatial sense? Psychological Bulletin, 121(1), 20–42.
Ungar, S., Blades, M., & Spencer, C. (1995). Mental rotation of a tactile layout by young visually impaired children. Perception, 24(8), 891–900.
Vanlierde, A., & Wanet-Defalque, M. C. (2004). Abilities and strategies of blind and sighted subjects in visuo-spatial imagery. Acta Psychologica, 116, 205–222.
Waller, D., Lippa, Y., & Richardson, A. (2008). Isolating observer-based reference directions in human spatial memory: Head, body, and self-to-array axis. Cognition, 106, 157–183.
Wang, R. F. (2003). Spatial representations and spatial updating. The Psychology of Learning and Motivation, 42, 109–155.
Warren, D. H. (1994). Blindness and children: An individual differences approach. Cambridge: Cambridge University Press.