



Interacting with Computers 24 (2012) 439–449



Rating reflection on experience: A case study of teachers’ and tutors’ reflection around images

Rowanne Fleck
Department of Informatics, University of Sussex, Falmer, Brighton BN1 9QJ, UK


Article history: Received 5 March 2012; received in revised form 3 July 2012; accepted 23 July 2012; available online 2 August 2012.

Keywords: Rating framework; Reflection on experience; Reflective practice; Teacher training; SenseCam

Abstract

Reflection on personal experience is described as a means to learn from experience, enable self-development and improve professional practice amongst other things. Recently there has been a move in HCI to explore new ways technology may support us in doing this. However, within this community there is little use made of existing literature to evaluate how well such tools support this reflection. In this paper we present a case study of the development of a ‘levels of reflection’ framework for the purposes of evaluating a wearable digital camera (SenseCam) to support teachers’ and tutors’ reflective practice. The framework enabled us to rate and compare reflection achieved by participants in different situations, and to explore the relationship between the ways images were used by participants and the level of reflection this led to, with implications for designing future SenseCam use to better support teachers’ and tutors’ reflection on experience. Beyond our particular case study, we suggest that the framework and associated methodological approach for rating reflection is of value to those within the HCI community interested in designing for reflection on experience. Rating reflection in this way can enable new tools or techniques for supporting reflection to be explored over time, across similar situations or with adaptations, and to build understandings of how reflection is being most effectively supported – ultimately inspiring the design of future technologies by building up an understanding of the most effective ways of supporting reflection on experience.

© 2012 British Informatics Society Limited. All rights reserved.

1. Introduction

As the field of HCI moves away from concerns of functional usability and technology becomes more pervasive, there is an increased focus on how we experience our lives using and through technology. One step further is considering how technology might enable us not only to experience new things, but also to encourage us to reflect on our everyday life experiences. Outside this community a vast literature exists on reflection, with reflection on personal experience described as a means to learn from experience (Boud et al., 1985; Moon, 1999), enable self-development (Moon, 1999) and improve professional practice (Schön, 1983) amongst other things [1]. Whilst previous work within HCI tends to describe how technology might support reflection on experience in various ways, there is less emphasis on evaluating or rating the reflection such technologies engender.

This paper has been recommended for acceptance by Paul Cairns.

Present address: UCL Interaction Centre, MPEB 8th Floor, University College London, Gower Street, London WC1E 6BT, UK. Tel.: +44 020 7679 2867. E-mail address: r.fl[email protected]

[1] Related to reflection is the concept of reflexivity, which some readers may be more familiar with. Being reflexive in the research process, for example, requires reflection on how our own values, experiences, interests and beliefs amongst other things have shaped the research, and how any assumptions made in the course of the research have implications for the research and its findings.

This paper begins to address this issue by presenting a case study of the development and application of a framework for rating the reflection observed when SenseCam, a wearable digital camera, was used and evaluated as a tool to support teachers’ and tutors’ reflection on experience. While this research has domain-specific implications for ways to design SenseCam use to better support teachers’ reflective practice, we draw from this case study some more general implications for using this framework and methodological approach to evaluate and design for reflection on experience.

1.1. Background: Teachers’ reflection and SenseCam

We start this paper by introducing the domain of the case study, teachers’ reflective practice, and the technology trialled, SenseCam, before discussing in the next sections why and how we developed our framework for rating reflection in this domain. Largely inspired by the work of Schön (1983), within the teaching domain reflective practice is regarded as important to allow teachers to develop a more complex understanding of teaching: trainee teachers are encouraged to reflect after all lessons they have taught or observed.



Specifically, this reflection is meant to allow teachers to analyse what they are doing (Reinman, 1999), integrate and reconstruct knowledge and ideas (Reinman, 1999; Davis, 2006), so they can adapt and improve their practice (Reinman, 1999; Ward and McCotter, 2004), and have a positive impact on their students’ learning (Ward and McCotter, 2004; Parsons and Stephenson, 2005; Leung and Kember, 2003). Reflective practice is considered important for ‘bridging the gap’ between theory and practice (Parsons and Stephenson, 2005; Cannings and Talley, 2003). Features indicative of teachers’ productive reflection on practice are suggested by Davis (2006) to involve: providing reasons for decisions; giving evidence for claims; generating alternatives; questioning assumptions; identifying the results of teaching decisions; and evaluating one’s teaching.

Most of trainee teachers’ reflection happens either alone (self-reflection), or with the support of a mentor or other supervisor who may have observed them teaching (social-reflection). Formal self-reflections include keeping a written journal containing reflections on each lesson taught or observed. These are structured by reflective questions such as what went well or badly in the lesson and why that might be. In addition, as part of their coursework they are also required to write essays where they are encouraged to consider how ideas in the literature about teaching and learning relate to their own experiences in the classroom. Mentors will discuss with trainees aspects of their teaching – prompting them to reflect further on their experiences and offering practical advice to develop their practice. Trainees also reflect with their peers, either ad hoc in the staff room between classes or back at university, or following more formal peer support exercises within the school which may involve peers observing each other’s teaching or even teaching lessons together. There are fewer situations in which trainee university tutors are encouraged to reflect on their practice, though peer reflection activities are promoted. In practice there is rarely any technology used to support these reflective activities, though video is occasionally used and the benefits of doing so are widely reported in the literature (e.g., Hutchinson and Bryson, 1997; Sherin and van Es, 2002; Zuber-Skerritt, 1984).

In previous research SenseCam, a wearable digital stills camera, has been found to be of value for supporting teachers’ and tutors’ reflective practice in much the same way video is reported to, but with a flexibility that may enable its easier integration into their current reflective practices (Fleck, 2008; Fleck and Fitzpatrick, 2009). SenseCam is worn round the neck like a pendant and automatically takes a series of still images (approx. 3 or 4 a minute) triggered by built-in sensors (see Fig. 1). The record it produces differs from, and lies somewhere between, that produced by a stills camera and that produced by a video camera.

While it takes still images, unlike a traditional stills camera many more images are collected, no one is required to take the photographs, and the images are from the approximate perspective of the wearer. It also contrasts with video in its unusual perspective, in that the images provide only a few snapshots of events rather than a continuous stream, and in that there is no sound associated with the recording. However, SenseCam is small and wearable, and has enough battery and memory capacity to produce a visual record of the wearer’s whole day which can be played back later via a PC in a way which resembles a speeded-up movie (see Hodges et al., 2006). In this way, SenseCam suggests itself as an interesting technology for encouraging people’s reflection on experience. Indeed, as well as supporting teachers’ and tutors’ reflective practice (Fleck and Fitzpatrick, 2009), previous research has found it a useful tool to support reflection on everyday experience (Harper et al., 2007, 2008; Lindley et al., 2009) and on students’ field trip experiences (Fleck and Fitzpatrick, 2006).

Previously published research (Fleck and Fitzpatrick, 2009) suggested three main themes in the ways SenseCam images supported teachers’ and tutors’ reflective practice:

1. Images were found to structure and support participants’ recall of the experience.
2. Images allowed participants to see or become more aware of more aspects of the lesson.
3. Images were brought in to support participants’ ongoing reflection and reflective discussions.

These themes describe the role images can play in supporting the self- and social reflection of teachers and tutors around them, and images were found to play a part in interactions described in the literature as indicative of good reflective practice (e.g., Davis, 2006; Lee, 2005; Parry, in press). However, such an approach taken alone is limited in the extent to which it can identify and evaluate the reflection observed, which would be of value to inform design for reflection on experience. Therefore this paper extends previous work by presenting a framework developed to identify and evaluate the reflection on experience of trainee teachers and tutors as supported by SenseCam. We now go on to discuss why and how we developed our framework for rating reflection in this domain, and demonstrate its value by using it to evaluate the reflection we observed when teachers and tutors trialled SenseCam in authentic classroom settings. From these findings we draw some implications for the future use of SenseCam as a tool in this setting, and discuss how this methodological approach may have applicability beyond this specific context to the field of HCI and reflection on experience more generally.

2. Evaluating teachers’ reflection on experience

Fig. 1. Worn SenseCam.

In order to evaluate teachers’ reflection as they used SenseCam, we turned to previous literature in the field. Initially we considered how video has been evaluated as a tool to support reflection in the field of teachers’ reflective practice and in other domains, as it was expected that our tool would be used, and would support reflection, in a comparable way. However, we identified a number of limitations in these approaches. For example, whilst video is reported to support teachers’ reflection on experience, on closer examination most claims result from asking participants afterwards about what happened (Chuang and Rosenbusch, 2005; Whitehead and Fitzgerald, 2006), or from researchers reporting on their own experiences (Jones and McNamara, 2004). Such self-reports can only go so far in informing us of exactly how video supports various types of reflection, or how valuable that reflection is to the trainee teachers. As Hatton and Smith (1995) comment:


‘‘it is necessary to move beyond self reports to the identification of ways in which reflective processes can be evidenced. It is not sufficient to assert that reflection is encouraged by a procedure or technique, rather means must be specified to demonstrate that particular kinds of reflecting are taking place’’ (p. 36).

Others, for example Hutchinson and Bryson (1997) and Sherin and van Es (2002), describe types of reflective thought which occurred at the time in relation to what their technology was offering, but do not report in detail on their methods for establishing this. More promisingly, Thomson et al. (2005a,b) video recorded conversations of trainee teachers and researchers discussing short video clips of examples of the trainees interacting with children in their classroom, and then broke these down into categories of talk including affirmative responses, reflection, remembering and questioning, with reflection further sub-categorised into what it was about. This approach was able to reveal what the teachers’ focus of reflection was, but there was no real definition given of what constituted evidence of reflection, or detail of the role of the video in supporting this discussion.

Therefore, to demonstrate that particular kinds of reflection are taking place when teachers and tutors reflect with SenseCam, we video recorded their reflective discussion around collected images; then, in order to evaluate this reflection, it was necessary to go back to the wider teacher reflective practice literature and beyond. A review of this literature revealed that where reflection is evaluated in the sense we would like, this is often through the use of questionnaires (e.g., Kember et al., 1999), or by looking for evidence of reflection in journals, essays and in discussion (e.g., Moon, 1999; Kember et al., 1999; Hatton and Smith, 1995). Various frameworks are used to understand and measure this reflection (see Lee (2005) for a review). One approach commonly used is to identify within the reflection different types or levels of reflection, where higher levels are considered to involve more or deeper reflective thought, and therefore to be more desirable (e.g., Hatton and Smith, 1995; Ward and McCotter, 2004; Lee, 2005; Manouchehri, 2002). Certainly a great deal of research supports the idea that over time and with support, reflectors can move from reflection in the lower levels to occurrences of reflection at the higher levels, though the very highest level of reflection, often termed ‘critical’ reflection, is very rarely reached (Hatton and Smith, 1995; Ward and McCotter, 2004). We took this idea of levels of reflection as the basis of our framework for evaluating teachers’ and tutors’ reflection around SenseCam images.

2.1. Levels of reflection – an initial literature-based framework

Whilst there are some commonalities in previous research as to what constitutes evidence of teachers’ reflection at various levels, there is no definitive measure or agreed common definition of levels. A valid approach could be to choose one ‘levels’ model; however, there is not enough detail published in papers to make it possible simply to lift another framework and apply it immediately to our data. Also, all have been developed by application to a particular (and unpublished) dataset, which will have a number of different features depending on the nature and context of the research being conducted.
For example, most previous research has been concerned with identifying reflection in trainee teachers’ written reflection, whereas in our research we are interested in looking for evidence of reflection in conversation; other researchers have found that the two are not always equivalent (Lee, 2005). Therefore, as a starting point for our research we synthesised a framework (see Appendix A) from a selection of existing frameworks developed to rate teachers’ reflection (Hatton and Smith, 1995; Ward and McCotter, 2004; Lee, 2005; Manouchehri, 2002), all of which are based on more extensive reviews of the literature, including literature beyond teachers’ reflective practice.


This initial framework, composed of five levels R0–R4, where R4 is the highest level of reflection, is introduced and discussed in relation to the literature below.

R0: Description

A description or statement about events without further elaboration or explanation.

Hatton and Smith’s (1995) thorough analysis of trainee teachers’ written reflection concluded that a good deal of this writing was not reflective at all – it was purely descriptive. Manouchehri (2002) also reports an amount of storytelling or recall of classroom events which she does not consider reflective. However, others have been more generous with what they consider as reflective. Ward and McCotter (2004), for example, considered everything students wrote that focused on a specific teaching action to be reflective, as this implied deliberate thinking about that action and a desire for improvement. Similarly, Davis (2006) considered everything students wrote in response to a request to reflect on their practice to be reflective, but distinguished between productive and unproductive reflection. Nevertheless, a category of non-reflective talk is included in this research.

R1: Descriptive reflection

Description including justification or reasons for action, but in a reportive or descriptive way. No alternate explanations explored, limited analysis and no change of perspective.

For the purposes of this research, this is the first level where there is considered to be evidence of reflective thought rather than simply recall of events. This follows mainly from Hatton and Smith’s (1995) definition of descriptive reflection where, unlike descriptive writing, there is evidence of justification or reasons given for action. They go on to say this is, however, in a reportive or descriptive way – which has been equated here with other authors’ descriptions of analysis being limited (Ward and McCotter, 2004), there being no alternative explanations explored (Lee, 2005) and certainly no change of perspective (Ward and McCotter, 2004). Arguably not all authors would consider this enough to constitute evidence of reflective thought (e.g., Kember et al., 1999) and it might still fall into the category of unproductive reflection in Davis’ (2006) opinion.

R2: Dialogic reflection

A different level of thinking about. Looking for relationships between pieces of experience, evidence of cycles of interpreting and questioning, consideration of different explanations, hypothesis and other points of view. A number of authors distinguish a level of reflection that involves a considering of alternatives (e.g., Davis, 2006; Ward and McCotter, 2004), and being able to see things from a different perspective (e.g., Jay and Johnson, 2002; Manouchehri, 2002). This is often referred to as dialogic reflection, as ideally two people in dialog would each be offering their own perspective whilst taking on board that of their conversant. However, it is possible for one person to show evidence of considering more than one version of events. Related to this idea of being able to see things in a different way is to search for relationships between ideas and experiences and generalise from them to get a different level of understanding (Davis, 2006; Lee, 2005).


R3: Transformative reflection

Revisiting an event with intent to re-organise and do something differently. Asking of fundamental questions and challenging personal assumptions leading to a change in practice.

One of the stated outcomes for teachers’ reflective practice, and indeed engaging in reflective practice in general, is that it will ultimately lead to a change and improvement in practice (Moon, 1999; Reinman, 1999; Ward and McCotter, 2004). It is thought that this occurs by building on the earlier levels of reflection, where other points of view or alternate explanations are considered, to the point where the reflectors’ own initial assumptions are challenged and their ideas restructured or reframed, finally leading to this change in practice. The idea of ‘reframing’ (of the problem) is central to Schön’s concept of reflective practice (1983), where the reflector considers the situation and then, initially with support, is encouraged to see it in a different way: reframed. This sounds very similar to R2 reflection, but in this case there is the idea that the reflector’s original point of view is somehow altered or transformed to take into account the new perspectives he has been presented with, which might ultimately lead to a change in practice. In order to achieve this perspective transformation: ‘‘it is necessary to recognise that many of our actions are governed by a set of beliefs and values which have been almost unconsciously assimilated from the particular environment’’ (p. 23, Kember et al., 1999). There is some confusion in the literature as to what this level of reflection is called, with a number of authors referring to this as ‘critical reflection’ (see p. 251, Ward and McCotter, 2004). However, other authors describe critical reflection rather more stringently (see next section), so we refer to this category as transformative reflection.

R4: Critical reflection

Where social and ethical issues are taken into consideration. Generally considering the (much wider) picture.

Although the term critical reflection is often used to describe the type of reflection classified as transformative above, others have described it as requiring consideration of moral and ethical issues, for example whether one’s actions are equitable and just (Alder in Hatton and Smith, 1995) and of one’s personal action within wider socio-historical and politico-cultural contexts (e.g., Noffke and Brennan in Hatton and Smith, 1995). Based on this, Hatton and Smith (1995) include a category of reflection in their framework for rating written reflection, which they define as ‘‘Demonstrates an awareness that actions and events are not only located in, and explicable by, reference to multiple perspectives but are located in, and influenced by multiple historical, and socio-political contexts’’. Little more description is provided than this, but as these issues do not seem to be covered by the levels above, we include here a category for reflections that involve such considerations which could be considered outside the immediate context.

3. Application and development of the framework

We now go on to describe how this initial framework was applied to the field data collected in a series of case studies of teachers and tutors reflecting on their lesson experiences around images, and explain how it was iteratively developed to produce an operationalised ‘levels of reflection’ framework for rating reflection in this context.
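For reference, the initial literature-based levels of Section 2.1 can be written down as a small lookup structure. The following is purely an illustrative sketch in Python: the one-line glosses paraphrase the level definitions above, the names are ours rather than the study’s, and the actual rating remained a manual, interpretive judgement rather than anything automated.

from enum import IntEnum

class ReflectionLevel(IntEnum):
    """Initial literature-based levels; higher values indicate deeper reflection."""
    R0_DESCRIPTION = 0     # events stated without elaboration or explanation
    R1_DESCRIPTIVE = 1     # reasons or justification given, but reportive; no alternatives
    R2_DIALOGIC = 2        # alternative explanations, other perspectives, hypothesising
    R3_TRANSFORMATIVE = 3  # assumptions challenged, leading towards a change in practice
    R4_CRITICAL = 4        # wider social, ethical and political issues considered

# The ordering supports the 'higher level = deeper, more desirable reflection'
# comparisons made when chunks of talk are rated later in the paper.
assert ReflectionLevel.R2_DIALOGIC > ReflectionLevel.R1_DESCRIPTIVE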

Table 1
Details of cases discussed in this paper (Case 27 included three participants with different experience).

Cases used to develop the framework (described in this section):
  Self-reflection
    Trainee teachers: Worn SenseCam; Cases 1, 8, 15
    Tutors: Worn SenseCam; Cases 2, 5, 6, 9, 10, 16
  Social reflection
    Trainee teachers: Worn SenseCam; Cases 7, 14, 17, 18
    Tutors: Worn SenseCam; Cases 3, 4, 11, 12, 13

Further cases (discussed in Section 4):
  Trainee teachers: Worn and Static SenseCam; Cases 27, 28
  Newly qualified teachers: Worn and Static SenseCam; Cases 21, 23, 24, 25, 27
  Recently qualified teachers (1 yr experience): Worn and Static SenseCam; Cases 20, 22, 27


3.1. Field data

The field data for which, and on which, the ‘levels of reflection’ framework was developed consist of a series of 18 cases exploring the potential of SenseCam to support the reflective practice of trainee teachers and tutors. SenseCam was trialled in a number of situations within their existing reflective practices (see Section 1.1): for self-reflection, peer-reflection and for trainee teachers’ mentor-supported reflection. A total of 9 trainee teachers (8 Post Graduate Certificate of Education and 1 Graduate Teacher Program) and 3 of their mentors, and 13 university tutors participated in this research (see Table 1).

The details of each case and how SenseCam was used within it varied; however, the broad structure of each case was similar. Each involved a classroom session in which a teacher or tutor wore SenseCam whilst teaching a class of students, and resulted in a series of around 130 fisheye photographs of the progress of the lesson, from a first-person perspective. This session was followed shortly afterwards by a review session in which the captured images of the lesson were looked at (as described below): in 9 cases the lesson was reflected on by an individual participant (self-reflection), and in the other 9 cases it was discussed between two or more participants (social reflection). In all cases the review session was attended by the researcher and was conducted as soon as practical after the classroom session, as favoured in the literature on reflective practice (Zuber-Skerritt, 1984). In preparation for it, the images from the lesson were downloaded to a laptop PC.

The structure of the review session depended on whether it was a self-reflection or social-reflection case. In all cases participants were instructed on how to use the viewing software, which enabled them to rapidly ‘click’ through the images. In self-reflection cases participants were asked to talk out loud about what they were thinking as they went through the images; this was followed by an interview with the researcher in which they clarified a few points raised, were asked whether the images allowed them to pick up on certain aspects of the lesson, and finally were asked about the design and potential of SenseCam and whether they found the exercise useful. In social-reflection situations participants were asked to click through the images together and discuss the lesson. In both self-reflection and social-reflection sessions, session lengths varied widely – from 9 to 48 min – with restrictions imposed only by participant schedules. All review sessions were video recorded for further analysis. More details on the cases and empirical procedures can be found in Fleck (2008).

R. Fleck / Interacting with Computers 24 (2012) 439–449

3.2. Applying and operationalising the framework

The process of applying and operationalising the initial framework derived from the literature happened through a series of steps described below, of which steps 2–4 were iterated both within and across steps:

1. Producing transcripts of talk around images.
2. Breaking transcripts down into reflective chunks.
3. Rating each chunk for evidence of reflection against the initial framework.
4. Clarifying and adapting the initial framework to describe the reflection observed.
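To make these steps concrete, the sketch below shows one hypothetical way the resulting codes could be recorded and summarised. It is illustrative only: chunk boundaries and level ratings were assigned by the human analyst, nothing here automates that judgement, and all function and field names are ours rather than the study’s. The word-count weighting reflects the observation, reported in Section 4, that reflective chunks tended to be much longer than purely descriptive ones.

from dataclasses import dataclass
from collections import Counter

@dataclass
class Chunk:
    case_id: int   # case number as in Table 1, e.g. 9
    text: str      # transcript section flowing around one idea (step 2)
    level: str     # analyst-assigned rating, e.g. "R0", "R1.4", "R2" (steps 3 and 4)

def level_counts(chunks, case_id):
    """Distribution of analyst-assigned levels for one case, for cross-case comparison."""
    return Counter(c.level for c in chunks if c.case_id == case_id)

def talk_share_at_or_above(chunks, case_id, min_level="R1"):
    """Word-count-weighted share of a case's talk rated at or above min_level.
    Labels sort lexicographically in the intended order (R0 < R1 < R1.1 < ... < R4),
    so plain string comparison is sufficient here."""
    case = [c for c in chunks if c.case_id == case_id]
    total = sum(len(c.text.split()) for c in case)
    if total == 0:
        return 0.0
    reflective = sum(len(c.text.split()) for c in case if c.level >= min_level)
    return reflective / total

# Example with two abbreviated chunks quoted in Section 3.3 (case numbers illustrative):
chunks = [
    Chunk(9, "Here I am at the board, just explaining to them what I want them to do", "R0"),
    Chunk(9, "He was actually asking me quite an intelligent question about ...", "R1.4"),
]
print(level_counts(chunks, case_id=9))            # Counter({'R0': 1, 'R1.4': 1})
print(talk_share_at_or_above(chunks, case_id=9))  # share of words in chunks rated R1 or above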


3.2.1. Step 2: Chunking

Transcripts were broken down into topic chunks, defined as a section of dialog that flowed naturally around an idea or a number of related ideas. If a new, seemingly unrelated idea was then discussed, or there was a long pause between comments that were not obviously linked, this was considered the beginning of a new topic chunk. Chunks ranged in length from just a few words, to whole sentences, to multiple sentences (some of the longest chunks were up to 300 words long). This is an approach adopted regularly by researchers in the field of teachers’ reflective practice (e.g., Hatton and Smith, 1995; Manouchehri, 2002; Ward and McCotter, 2004; Lee, 2005; Davis, 2006) and we found it made most sense in terms of understanding the data in terms of reflection: it was not possible using smaller or more regular segmentation to meaningfully capture the development of a reflection.

In most cases the process of chunking was quite straightforward: comments made tended to flow around one topic or related topics, then break and move onto another. When participants were looking through the images the pictures tended naturally to move them on, and interview questions seemed to serve a similar structuring role. However, there were some situations where chunking was more problematic and required a greater degree of judgement informed by iterative familiarity with the topic domain. Therefore the data was returned to regularly throughout the process of going through steps 2–4 and sections re-coded, blind to how they had initially been chunked, to maintain as much consistency as possible in how this was done across chunks and cases. However, there were some extreme examples where it was simply not possible to break up a chunk which clearly moved through a number of ideas without a natural break – these examples caused us to reconsider and refine the initial levels of reflection framework, which we will discuss below (see level R1.4).

3.2.2. Steps 3 and 4: Rating chunks for reflection and clarifying and adapting the initial framework

With the exception of comments made about the task or comments that were completely off task, the framework was used to rate each chunk for evidence of reflection. In order to do this, an approach similar to a card-sorting technique was initially applied. A sub-set of transcripts from self-reflection cases were cut up along chunk separations and sorted according to the initial framework as best as possible (and further where necessary – resulting in subgroups for R1). This approach made it very easy to compare across chunks from different cases in order to determine what constituted evidence of each level of reflection in our own data. The original framework was then supplemented with descriptions that captured all examples seen so far of each level of reflection, and a set of examples was built up alongside to help classification of further cases. This framework was then applied to the remaining self-reflection cases. Whenever an example was found that was not adequately captured by the current version of the framework, one of two actions occurred. If it was clear where the chunk fitted, the description of a current definition in the framework was updated. If, however, the chunk was hard to classify in the existing framework, other chunks which contained examples of reflection most similar to the problem chunk were sought, and a rethink of the boundaries between levels was considered – though in reality this happened less as the framework developed. A similar process was followed to rate and update the framework to cover social-reflection cases. Periodically throughout this whole process (steps 2–4), transcripts that had been previously coded and chunked were chunked and rated for reflection blind to how they had previously been coded, to maintain consistency within and across cases.

3.3. Operationalised ‘levels of reflection’

The following presents our final ‘operationalised’ levels of reflection framework illustrated in its specific use for the purpose of rating teachers’ and tutors’ reflection around SenseCam images. By ‘operationalised’ we mean expanded and adapted to clearly describe what constitutes evidence of each of the levels so it could be accurately applied across our dataset. We include explanations of why and how the initial framework was adapted, and illustrate it with examples from our dataset. All additions and changes to the original framework are in italics.

R0: Non-reflective description

A description or statement about events or other interpretation of the image without further elaboration or explanation. Can be of/about images, or just from memory of events.

Chunks which involved participants recalling and describing events that went on in the classroom without attempting to provide reasons or justifications for these events were rated as R0, or descriptive talk. This rating included descriptions of what the participants were doing, what the students were doing, or other general descriptions of events going on at the time. For example:

‘‘Here I am at the board, just explaining to them what I want them to do’’

Statements made about the images themselves were also rated R0 if they did not include any explanation or elaboration, since these too fitted most closely the outline of non-reflective description in the initial framework, and this case was added to the description in the operationalised framework, e.g.,

‘‘I don’t know what I’m looking at here’’

Participants in social-reflection cases deferring to each other to clarify how events unfolded, or what they thought the images were showing, were also included. The following example between two peer tutors illustrates this:

T10: and I’ve gone back, and what am I doing? I can’t remember why I went back to the board
T11: you were. . .demonstrating something to. . .[?]
T10: is this when I actually gave an example?
T11: yes

For clarification we added the statement ‘can be of/about images, or just from memory of events’ to cover the two classes of description we observed in our data.


R1: Reflective description

Description including justification or reasons for action, but in a reportive or descriptive way. No alternate explanations explored, limited analysis and no change of perspective. May include discussion of things to change in future. Four types identified.

There was much variation between examples of reflection we considered to reach the level of R1 descriptive reflection, which on closer examination seemed to fall into four main types of reflection in self-reflection cases: description and explanation, description and theory, evaluation, and storytelling. They were also evident in social-reflection cases, with the exception of storytelling, which did not have a clear parallel. These 4 sub-types of R1 reflection were therefore derived directly from our data rather than the literature, and we added them to our operationalised framework. We also added the phrase ‘may include discussion of things to change in future’. This too was not something included in other literature descriptions – the closest being in definitions of R3 transformative reflection, ‘revisiting an event with intent to reorganise and do something differently’. However, as we explain in more detail in the section on R3 reflection, there were many examples where participants made comments about changing their behaviour in the future that did not otherwise suggest a level of transformative reflection. Such chunks were therefore rated at the level of reflection the rest of the chunk suggested.

R1.1. Description and explanation

Description of events or behaviours and explanation for why they happened or why they are worth noting. Appears that reflector is reporting explanations they were already aware of.

Often participants presented explanations almost as though they were fact: as known and accepted justifications or explanations for the situation. For example in this extract, the tutor T8 provides an explanation for her own actions:

‘‘Um, she was side tracking so I moved over to talk to her because she wanted to talk about the assignment. As her team had already done all their work, and weren’t directly involved in the proceedings at that point.’’

R1.2. Description and theory

Description/interpretation of events and an explanation or theory of why; appears that reflector is not sure what the reasons for action or events were and is generating a new theory or explanation.

However, there were cases where the participant seemed less sure what the reasons for action or events were at the time, and was generating a new theory or explanation to make sense of the situation, as in the following example:

‘‘I think I naturally look more to one side, but I think it’s more to do with that it’s in a different classroom, so in that classroom I stand on the left of the board and I always look to the right, naturally, first glance.’’

Although the distinction between providing a known explanation and theorising is extremely difficult to make, in some cases indicators such as ‘‘I think’’ or ‘‘I wasn’t sure’’ tended to suggest the participant was actively theorising.

As theorising or hypothesising is an indicator that the reflector is becoming more aware of and trying to understand the situation more clearly, this distinction is useful to make.

R1.3. Evaluation

Evaluation of events depicted in images.

There were also some examples of participants evaluating things: such as their own teaching behaviours or decisions; their students’ behaviour or understanding; and also, on a number of occasions, the SenseCam as a tool to support reflection. In the example below the tutor (T1) is evaluating one of her students’ performances in a presentation task.

‘‘It was quite good actually, she was. . . she was a very good presenter. She was so nervous, because her English wasn’t her first language, but um. She just did it really well. She was completely in control of the class. Even though the technology wasn’t there’’

R1.4. Storytelling

Longer chunks containing multiple descriptions and evaluations as above which still do not quite classify as Reflection Level 2 below.

As discussed in Section 3.2, there were some chunks that were very difficult to separate as, unlike in other cases where talk moved on neatly to another focus as the participants progressed through the images, these examples included chains of descriptions, explanations and evaluations of events which followed on from each other. It was almost as though the participant were telling a story about events that went on during the lesson, as in the following (short) extract from Case 9 which contains a chain of description and explanation. Often these would go on for up to 2 or 3 min.

‘‘He was actually asking me quite an intelligent question about, I was asking for the first question was about does this, give me an example of this or that, and I was trying to [?] things in the middle ground. Um, which didn’t really fit the question, and we were getting into a discussion about ‘pick the things that make a good answer to the question’’’

As separating such sequences into smaller chunks would have required a number of arbitrary cuts that would be hard to replicate, we decided to consider these chunks as one unit – albeit one that contained multiple examples of phrases which would be considered indicative of R1. These we refer to as storytelling, where the components of the chunks fit the descriptions of R0 and R1 above; but, even when taken together as one larger reflective chunk, the reflection does not reach a level which could be considered R2 as described in the initial reflection levels framework. The subcategories R1.1–R1.4 could simply be listed as examples observed of R1 (as we have done with R2 and R3 below). However, these categories were quite robust and useful in practice, as we found different image uses led to different subcategories of R1. Also, storytelling (R1.4) was something that we observed a lot with some participants but not at all with others.

R2: Dialogical reflection

A different level of thinking about. Looking for relationships between pieces of experience, evidence of cycles of interpreting and questioning, consideration of different explanations, hypothesis and other points of view. May include discussion of suggestions for change.

R. Fleck / Interacting with Computers 24 (2012) 439–449

Features observed: questioning assumptions, referencing to past experiences, relating experience to theoretical concepts, interpreting, hypothesising, considering different explanations, considering implications of observations, interpretations and suggestions, generalising from experience.

As in R1, we included at this level chunks which mentioned things to change in future but showed no other evidence of R3, and which would otherwise be considered R2. We also added to the definition of R2 a list of all the features we observed in our data that we considered to be at this level, in order to clarify our framework better for others.

The example of R2 below is from a tutor (T7) self-reflecting whilst going through SenseCam images and considering a number of factors related to an observation he had made about the students. He talks about his initial assumptions about their behaviour, and the teaching decisions he has made based on these assumptions, which now come into question given this new awareness of what the students actually do when he is not watching them. This could be described as ‘questioning assumptions’ and although no concrete conclusions are drawn, this example does come close to ‘a different level of thinking about. . . evidence of cycles of interpreting and questioning, consideration of different explanations, hypothesis. . .’ as described in the initial framework.

‘‘It’s actually interesting what they’re doing as well. . . actually one thing, when you were asking about what I was expecting beforehand, I’m surprised to see that they’re looking at me all the time. [laughs] because obviously I’m only looking at. I try to flick my eyes around the room, and as you can tell, I’m actually quite a new teacher, um, so I try and make contact with all of them all the time, and not just talk to one or whatever. It would be quite tempting to talk to her, or [?] or two of the girls down here. So, I don’t know, I think I just assumed that when I’m not looking at them they’re probably looking at their notes or something, but they’re all, in all these pictures, they are actually looking towards me’’

Examples from the discussion with the experimenter following self-reflection around images showed evidence of referencing to past experiences, interpreting, hypothesising and considering different explanations – as illustrated in the example from Case 6 below:

‘‘It’s. . . I got this feeling actually with some of the earlier groups, that they have a tendency to see how the other people in the group are. . . taking on board what I’m saying before responding. There’s very much this element of, they don’t want to seem too keen. There’s this sort of thing of. . . if I’ve said something they don’t necessarily understand, instead of saying to me I don’t understand it, they’ll turn to each other and make faces. But watching that, I didn’t get the feeling that it was turning round and saying, ‘‘oh god, what’s going on here’’ it’s just, they’re looking at each other. They’re looking at me and listening to what I’m saying, but they also look around to see what’s going on in other people’s faces. Maybe to see if they look like they’re understanding what’s been said. So they’re assessing each others’ response to my tutorial, not just their own.’’

There were also examples of participants relating their experiences to theoretical concepts they had learned about in their training, as in the next example (where E is the experimenter):


E: do you think it’s a good or bad thing that you move around the room more than you thought?
T7: I don’t know. I’m always self-critical I think, so I wonder if that’s distracting to them? But actually, from the things that A’s been telling us on the course. . . [referring to the tutor training course] I think, um, I prefer the idea that I’m not just standing at the front, giving the lesson and preaching sort of thing, that I’m kind of involved with them. So I think it probably is quite a good thing. And just also because, one thing that I hadn’t thought about before when teaching, was how do you sort of know that they’ve got what you’re explaining, and in the course A’s been saying eavesdropping is usually quite a good way of overhearing them, rather than. . . if you start and try to get everyone and say, ‘‘right Lisa, tell me the difference between’’ which is what I did at the start. Then they’re more likely to just not want to answer because they’re on stage in front of everybody. Whereas, if you just wander around the room quite a lot, and just, not necessarily, I don’t necessarily stop and talk to them each time I’ve been round either. I maybe just listen to what they’re saying. Yeah. So I think it probably is quite a good thing. I don’t know how they feel about it, but. . .

In addition to the above reflection themes, there was evidence in some of the social-reflection cases of participants evaluating each other’s comments which could lead them to consider the implications of any new interpretations of events or suggestions for change in practice, as in the next example where the two tutors (T3 was teaching and T2 observing) are discussing an observation, implications of it, and possible changes to make in the future:

T3: {would it be} [points at images] if I’d have thought about it, I could have done something to include her a bit more.
T2: yeah, {you can see that she’s physically. . .}
T3: {she sat there last time} [points]
T2: . . .quite far away from them
T3: yeah. She sat there [points] last time, and it was better.
T2: yeah
T3: because she had much closer peers here [pointing throughout]
T2: because these girls had all come round to sort of be close, and they were actually talking very well as a whole group together [pointing] whereas you can see these two guys here at the back [points again] were just chatting to each other
T2: and she is, yeah, just physically a long way away.
T3: yeah
T2: I mean she’s leaning forward, so she was trying to be involved

There was also evidence of stepping back to generalise from the specifics of an event in the classroom to consider it in terms of either the whole lesson or teaching practice in general, as in the next example where the Mentor reassures the teacher that the issue she raises is one which causes problems to all teachers:

P8: but that’s got, I think that’s got to the stage then, and I said they had ‘til 11.55 to do it.
M: yeah
P8: so you’ve got, you know, 7 minutes
M: oh it’s so hard, so hard the timing of something like this. Different groups take different times, it’s so hard. What do you do when you’re finished?

R3: Transformative reflection

Revisiting an event with intent to re-organise and do something differently. Asking of fundamental questions and challenging personal assumptions leading to a change in practice.


Features observed: (features of R2), becoming more aware of and questioning own motivations, challenging and reconsidering assumptions, considering need to change (teaching) practices.

As with R2, we added the features we observed in our data and considered to be at a level of R3 to our framework for the benefit of future research – we expect other features could be added to this list with further data as, in line with others’ findings (e.g., Hatton and Smith, 1995; Ward and McCotter, 2004), this level of reflection occurred on only a few occasions. Also, with this kind of measure it is always difficult to say whether a real transformation of perspective did take place to the extent that it may really lead to a fundamental change in practice. However, in the following example the reflector does seem to do this. The tutor (T6) describes how an observation from the images helped her to question her initial assumptions and consider an alternate explanation for her students’ behaviour (their ‘unnecessary’ writing of notes whilst she was talking), and as a result the need for her to reconsider and change her teaching practices:

T6: Yeah, I mean there’s a few times where I do actually say to them, just as a reminder to them, this is in the handout. But it’s. . . obviously the students want to write down in their own words what I’m saying. So, I have to be aware of that when I’m lecturing, when I’m tutoring. I’ve got to go at a speed where they can write things down. And yeah. Watching that has made me realise how many instances during the time that I’m talking they are trying to put something down in their own words
E: Ok, so it maybe made you more aware of something you were already aware of?
T6: Yeah. But the importance of it. I had kind of thought it was slightly annoying that I’m trying to get them to focus on what I’m saying, and what they’re to do is write down every single word. But it’s obviously, for them, it’s an important way of learning. So I’ve got to take that into account.

In contrast, there were other examples where participants suggested how they might do things differently next time, but without the accompanying suggestion of a fundamental change of perspective – more as an experimentation or idea about how things might be better. These are the examples we discussed earlier in descriptions of R1 and R2, as they were examples of intention to change behaviour that did not otherwise suggest a level of transformative reflection.

A note about R4: Critical reflection

We did not observe any examples that we felt matched the description of critical reflection we proposed in the initial framework (i.e. where social and ethical issues are taken into consideration, generally considering the (much wider) picture). Whilst Hatton and Smith (1995) report that reaching this level of reflection was very rare, there may be a few reasons why this was not observed. For example, it may have been that we were not sensitive enough to indications of this level of thought and missed them where they did occur in our dataset. Alternatively, this level of reflection may really be very rare in teachers’ and tutors’ lesson reflections (even more so in spoken reflection), or the technology we trialled may not have been conducive to encouraging this level of reflection.
One possibility is that even if examples that did include discussion of wider issues had been found, they may not have suggested themselves as belonging to a distinct level of reflection (and might instead have fit within the levels outlined by Manouchehri (2002), Ward and McCotter (2004) and Lee (2005)).

However, without any examples it is not possible for us to discuss this here; it is something to be explored in future work.

4. Findings and discussion

Having operationalised our initial framework, we now describe how it was used. Firstly we used it to rate the reflection observed in the 18 case studies of trainee teachers’ and tutors’ reflective practice used to develop the framework. We also then applied it to a further 9 cases of mainly more experienced newly and recently qualified teachers (see Table 1 in Section 3.1). Secondly we used it to relate the levels of reflection achieved within these cases to the 3 types of image use identified in previous research (Fleck, 2008; Fleck and Fitzpatrick, 2009), i.e. to structure and support participants’ recall of the experience, to allow participants to see or become more aware of more aspects of their lesson, and to support participants’ ongoing reflection and reflective discussions. In doing this we were able to understand more about the most effective uses of SenseCam within the domain of teachers’ reflective practice, and make suggestions for how to improve on this. After an evaluation of the framework for application in the current domain, we then go on to discuss how some of these findings have wider implications for designing for reflection in general, and conclude with an outline of the applicability of our framework and methodological approach beyond our specific case.

4.1. Informing the use of SenseCam for teachers’ and tutors’ reflection

4.1.1. Rating reflection to compare situations of use

This methodological process allowed us to rate the reflection observed in the 18 case studies of trainee teachers and tutors reflecting on their practice that were used to develop the framework. When doing so, more than half of the chunks in all self-reflection cases but one were rated as non-reflective (R0) and the remaining chunks were considered to be at the level of reflective description (R1), with one reaching dialogic reflection (R2). However, the chunks rated as R1 or R2 were on average considerably longer than R0 chunks, with reflections ranging in length from 1 to 12 but typically 1 to 3 sentences, and non-reflective chunks ranging from one or two words to, rarely, a couple of sentences at most. Taking this into account, around half the talk was considered to reach level R1. When participants were interviewed by the researcher after their review of the images, they reflected further on the lesson, sometimes reaching a level of dialogic reflection (R2).

In comparison, the conversations between participants in the social-reflection cases were, on average, more reflective than self-reflections around images: there were both relatively fewer chunks considered non-reflective and descriptive (R0), and more chunks considered at the dialogical (R2) level of reflection. Also, most chunks tended to be longer than chunks rated at the same level of reflection in self-reflection cases. There was still little evidence of reflection at the higher transformative (R3) level and none at the critical (R4) level, but there was evidence of some further reflection themes which did not occur frequently, if at all, in self-reflection cases. (A more detailed description and comparison of patterns of reflection observed in the different cases is outlined in Fleck (2008).) Overall, we found that participants were generally able to reach higher levels of reflection with the support of others – i.e.
in social-reflection situations, including when in discussion with a non-domain-expert researcher. This approach also enabled us to compare the reflection observed in these initial 18 cases with the further 9 cases conducted with more experienced newly and recently qualified teachers, who trialled both the worn SenseCam and one statically located in their classroom (see Table 1).


We found, using the same levels of reflection rating framework, both that the reflection of more experienced teachers was generally rated as higher than that of less experienced teachers, and that, in comparison to images collected from a worn SenseCam, those from a static camera more often led to higher levels of reflection (Fleck, 2008). Therefore rating reflection in this way enabled us to compare and contrast different situations of use of SenseCam, and identify those in which better reflection was achieved. The framework could be used beyond these initial studies to explore further situations of use, and to compare reflection supported with SenseCam to reflection supported by alterations to the technology, by different technologies (e.g., traditional video) or without the support of any technology at all.

4.1.2. Rating to understand how the technology leads to reflection

The framework also allowed us to relate the levels of reflection achieved to the 3 types of image use identified in previous research (Fleck, 2008; Fleck and Fitzpatrick, 2009), i.e. to structure and support participants’ recall of the experience, to allow participants to see or become more aware of more aspects of their lesson, and to support participants’ ongoing reflection and reflective discussions. This was done by coding transcripts in terms of how images were used in these three ways, then observing how this led to levels of reflection as rated by the operationalised levels of reflection framework. What became clear was that the link between image use and reflection was not always direct. For illustrative purposes these findings are summarised below; further detail can be found in Fleck (2008).

Firstly, where images are used to structure discussion and recall of events, they seem to trigger discussion, which could lead to participants describing or discussing events for periods of time without necessarily returning to the images. This was particularly the case in storytelling examples (R1.4) and makes images’ role in leading to subsequent reflection hard to establish. Although most of this talk is description of events, and therefore not reflective at all (R0), it did often then lead to higher levels of reflection. In particular, such descriptions were often what triggered a mentor or the researcher to prompt for more information, and so indirectly led to teachers’ and tutors’ reflection in this way. Therefore, images can be used to structure recall of the lesson leading to reflection, and another person can guide and prompt further reflection from this recall.

Secondly, where images enable participants to see more than they did at the time, they were more likely to lead to higher levels of reflection than the other two main uses of images, suggesting this is one of the most valuable ways in which images can support teachers’ and tutors’ reflection on practice. When becoming aware of these new things, participants are more inclined to reflect on them, and in most situations where images are interpreted or evaluated, rather than events in them just described, a level of descriptive reflection (R1) is usually reached.
Firstly, where images are used to structure discussion and recall of events, they seem to trigger discussion, which could lead to participants describing or discussing events for periods of time without necessarily returning to the images. This was particularly the case in storytelling examples (R1.4) and makes the images' role in leading to subsequent reflection hard to establish. Although most of this talk is description of events, and therefore not reflective at all (R0), it did often then lead to higher levels of reflection. In particular, such descriptions were often what triggered a mentor or the researcher to prompt for more information, and so indirectly led to teachers' and tutors' reflection in this way. Therefore, images can be used to structure recall of the lesson leading to reflection, and another person can guide and prompt further reflection from this recall.

Secondly, where images enable participants to see more than they did at the time, they are more likely to lead to higher levels of reflection than the other two main uses of images, suggesting this is one of the most valuable ways in which images can support teachers' and tutors' reflection on practice. When becoming aware of these new things, participants are more inclined to reflect on them, and in most situations where images are interpreted or evaluated, rather than events in them simply described, a level of descriptive reflection (R1) is usually reached. Also, as these observations from images are less likely to be things participants have been aware of or thought much about previously, reflections more often involve participants considering the implications and the assumptions they had previously made about the lesson based on their perceptions at the time, which can lead to a change in perspective (all themes associated with R2 reflection). In particular, specifically looking for evidence of things missed, or for patterns of events in the images, triggers reflection; however, these require a certain amount of effort and understanding on the part of the participants as to how images might allow them to see more. Therefore, participants should be made aware of all the potential ways in which they could see more in the images. Guidance or prompting from another person can aid this process and add a further perspective to see more.


Finally, where images are brought in to support ongoing discussion, the role of the images is more explicit (i.e., they are pointed to or referred to specifically); however, how directly this leads to various types of reflection is still often unclear. For example, a number of chunks which reach levels of dialogic reflection (R2), particularly in social situations, involve extensive use of images in this way: both to act as evidence or illustrate discussion points made, and to ground conversation. However, such use of images does not always correlate with higher levels of reflection, and some participants made little use of images in this way. Where this kind of image use does lead to higher levels of reflection, it is often a result of the reconsideration of events that occurs as participants use images as evidence to support their own arguments and are faced with others' interpretations in return, leading to a challenging of their initial assumptions and potentially a change of perspective. Therefore, participants should be made aware of the potential of images to ground their discussion or to provide evidence of events to be interpreted and evaluated.

Overall, these observations point to a few key things for the future use of images to support teachers' and tutors' reflective practice. For example, however images are used, the presence of another person makes higher levels of reflection more likely. They also highlight the importance of guidance to support reflection with images, and we have suggested that awareness of how images can lead to reflection (including, most importantly, helping participants see things they previously missed, including patterns of events; helping to structure their return to the experience; and supporting ongoing conversation by grounding that conversation or providing evidence of events that can be reinterpreted and evaluated to challenge initial assumptions) may lead to better reflection. These observations have been organised into specific guidelines for the use of SenseCam in this particular domain (see Fleck, 2008), and could be evaluated in future research by making use of the levels of reflection framework.

4.1.3. Evaluating the framework

Rating reflection using our levels of reflection framework has helped us build up a picture of the most effective situations of use for SenseCam as a tool to support teachers' and tutors' reflective practice. It has also enabled us to identify the most effective uses of images, which in turn leads to suggestions for how to design SenseCam use in the future to achieve better reflection. In addition, it could be used to evaluate these suggestions and any changes to the design (for example, the adding of structure or support for reflection in the viewing software), as well as to compare it to other technologies or techniques.

In terms of the validity of our framework within the current domain and the reliability of our findings, there are a few points we wish to raise. Firstly, there are issues related to any attempt at rating something as hard to get at as reflection: it is widely reported in the literature that any reflective exercise can only provide indirect evidence of reflection and may not accurately reflect ability to reflect (Davis, 2006; Hatton and Smith, 1995; Sumison and Fleet, 1996), and it is very hard to measure the intentionality behind any comments (Sumison and Fleet, 1996).
Therefore we accept that our framework may miss or misattribute reflection, which could explain both why self-reflections appear less reflective and why we did not observe much reflection at all above level R2; however, these difficulties are not limited to our framework but apply to any attempt to rate reflection. Secondly, the operationalised framework was developed in relation to our data by a single researcher, a valid inductive research approach which fitted the exploratory nature and context of our research. Whilst, as described earlier, this process was iterative and involved much returning to the data and blind re-coding to ensure as much consistency within and across cases as possible, other researchers may disagree with the choices made about what constitutes evidence of the various levels of reflection. Therefore, further application of our framework in this domain will be of great value in establishing its validity; one conventional way in which independent applications of the level codes could be checked for agreement is sketched below.
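As an illustrative aside rather than a step taken in the original study, the following sketch shows one standard way agreement between two independent coders, each assigning level codes to the same chunks, could be quantified, using Cohen's kappa. The level labels and example codings are invented.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' level codes over the same chunks.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e is the agreement expected by chance from each rater's marginal
    distribution of codes.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    levels = set(marg_a) | set(marg_b)
    p_e = sum((marg_a[lvl] / n) * (marg_b[lvl] / n) for lvl in levels)
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)

# Hypothetical level codes assigned independently by two coders.
coder_1 = ["R0", "R1", "R1", "R2", "R0", "R2", "R1"]
coder_2 = ["R0", "R1", "R2", "R2", "R0", "R1", "R1"]
print(round(cohen_kappa(coder_1, coder_2), 2))
```

A value near 1 would indicate near-perfect agreement, while a value near 0 would indicate agreement no better than chance, suggesting that the operationalised level descriptions need further refinement for that context.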



Finally, as every context in which reflection occurs is different from every other context, and especially as there is little detail in previous research to describe exactly what constitutes reflection at the various levels, it is very difficult to make any comparison between the findings of this and previous research. However, whilst it is not possible to make a direct comparison, the pattern of reflection we observed is similar to those widely reported in the teacher reflective practice literature where reflection is supported in a variety of ways, suggesting our ratings are along the right lines. For example, Hatton and Smith (1995) report that up to 60–70% of their participants' chunks of reflective writing were coded at a level of non-reflective description, and examples of the higher levels of reflection that we term transformative (R3) or critical reflection (R4) are always reported to be rare (e.g., Kember et al., 1999; Moon, 1999; Ward and McCotter, 2004). Taking a step back, this pattern makes sense: there are only so many fundamental changes in perspective that it would be meaningful to make in a short period of time. Hatton and Smith (1995) emphasise the importance of lower levels of reflection as essential precursors to higher levels, and suggest that the levels should not be considered as an increasingly desirable hierarchy.

Also, since developing this framework on the initial 18 cases, we have subsequently applied it successfully to the additional dataset of 9 cases (see Table 1), which included both trainee teachers and more experienced teachers (see Fleck (2008) for more details), and found a pattern of reflection similar to that reported in the literature, i.e., that more experienced teachers are better able to reflect (e.g., Hatton and Smith, 1995). Again, further application of our framework would be invaluable to establish its validity and robustness; by presenting our operationalised framework, discussing how it was developed and providing extensive examples of how it was applied, we hope to make it possible for other researchers to do this in a way previous research has not allowed.

4.2. Beyond SenseCam and teachers' reflective practice

Some of the findings we have outlined above have implications beyond SenseCam use within the domain of teachers' and tutors' reflective practice. In particular, they highlight that it is not just the technology, but the wider framework in which it is used, that is important for supporting reflection. Whilst this observation is not new to the literature of reflection (e.g., Hatton and Smith, 1995), it is perhaps worth highlighting for the benefit of the HCI community. However, the main contribution we feel this work makes beyond our specific context is the value of the methodological approach for rating reflection.

Firstly, the operationalised framework as it stands will have applicability to other image-based reflection tools. We have already suggested how it may be used in future research to make comparisons between SenseCam, future versions or situations of use of SenseCam, and video within this domain. More research is needed to establish the robustness of our framework beyond our specific case, especially as the framework was developed largely from the literature of teachers' reflective practice, although care was taken to base it on frameworks derived from theory beyond this domain.
As shown by previous research and the proliferation of frameworks for rating reflection, any new context in which reflection is being rated (and so any new technology or domain) will require some interpretation and iterative adaptation of any initial framework. Also, we chose to use our framework to conduct qualitative comparisons between cases, as we felt this was most meaningful when dealing with a relatively small sample of cases, each varying in nature along a number of dimensions. However, the framework could be used as the basis for a quantitative comparison where cases were more homogeneous in nature and some means had been decided upon to deal with varying chunk sizes; one possible approach is sketched below.
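The following minimal sketch illustrates, under our own assumptions rather than anything prescribed by the framework, how such a quantitative comparison might deal with varying chunk sizes: each case or condition is summarised as the proportion of coded material at each level, weighted by chunk length in words. The condition names and figures are invented for illustration.

```python
from collections import defaultdict

LEVELS = ("R0", "R1", "R2", "R3", "R4")

def level_profile(chunks):
    """Proportion of coded material at each level, weighted by chunk length.

    `chunks` is an iterable of (level, n_words) pairs for one case or condition;
    weighting by word count is just one possible way of handling the fact that
    coded chunks vary considerably in size.
    """
    totals = defaultdict(float)
    for level, n_words in chunks:
        totals[level] += n_words
    grand_total = sum(totals.values()) or 1.0
    return {lvl: totals[lvl] / grand_total for lvl in LEVELS}

# Hypothetical conditions: worn vs. statically located SenseCam.
worn = [("R0", 300), ("R1", 220), ("R1", 150), ("R2", 60)]
static = [("R0", 180), ("R1", 200), ("R2", 240), ("R2", 110)]

for name, condition in (("worn", worn), ("static", static)):
    profile = level_profile(condition)
    print(name, {lvl: round(p, 2) for lvl, p in profile.items()})
```

Other weightings (for example, treating every chunk equally, or normalising per lesson) would be equally defensible; the point is simply that some explicit decision of this kind is needed before profiles of cases can be compared numerically.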

Therefore, we present our operationalised levels of reflection framework as a starting point for other similar research. By giving extensive and detailed examples of what we took to constitute evidence of reflection at the various levels, and by demonstrating both how the framework was developed and how it was applied to our data, we hope to enable other researchers to build on our research. They will be able to see clearly how the framework does and does not capture their own data, which will in addition allow for comparison between our application context and others. Whilst there are some unique features of SenseCam images (such as the number of images and the perspective from which they are taken) which may affect the kind of reflection they support and the best ways to make use of such images to support better reflection, we would expect reflection around any still images to be similar enough that our framework would be largely applicable.

Secondly, in the field of HCI there is increasing interest in designing for reflection in different contexts, making use of a wide variety of technologies and techniques, from learning (e.g., Price et al., 2003) to managing health (e.g., Mamykina et al., 2008). When moving to rate reflection in domains further from ours, or reflection supported by technologies that are not image-based reflection tools, our operationalised framework may be too specific to capture the observed data. This will particularly be the case for the lower levels of reflection (R1.1–4) and non-reflective description (R0), where the operationalised framework describes discussion that is closely tied to the images on which it is based. There should be less difficulty with the higher levels, which describe more general types of reflection that are less tied to the images themselves. This may well also be the case with other technologies, and it has been suggested that technology has a greater role in directly supporting lower levels of reflection, enabling people to build on these to reach the higher levels (Fleck and Fitzpatrick, 2010). To establish this, we suggest that a methodological approach similar to ours would be of great value. By recording reflection supported by the technology or technique and then iteratively adapting our operationalised framework (with reference to the initial framework where ours is too specific) to produce an operationalised framework for the specific context, researchers have a tool which allows them to compare reflection supported over time, across similar situations or with small adaptations. This tool will also allow them to understand the most effective ways in which their technique or technology supports reflection. Somewhere between the initial framework and any operationalised version of it, there may be a more generic framework which includes enough detail to ensure consistency of ratings across contexts, allowing an evaluation and comparison of the effectiveness of various ways of supporting reflection on experience.

Finally, the framework can also be used to inspire future design for reflection on experience. It could be used to conduct investigations into existing reflective practices, and in other work we describe how a wide range of techniques and technologies have been used to encourage the aspects of reflection on experience embodied in the levels of reflection of our framework (Fleck and Fitzpatrick, 2010).
For example, the findings we have summarised here of SenseCam use in teachers' and tutors' reflective practice suggest that, amongst other things, image-based technologies can be effective in supporting R2 aspects of reflection by enabling participants to 'see more' (e.g., things they missed at the time, or patterns which emerged over the lesson), thereby encouraging them to consider different explanations, hypotheses and points of view. Other technologies have the potential to support 'seeing more' in a different way, such as sensor technologies that can make available aspects of an experience not otherwise perceivable. Application of the framework to rate reflection supported by these technologies and techniques can reveal how successful they are within various contexts and further inform this work.


5. Conclusion

As the field of HCI becomes more interested in designing for reflection on experience, we suggest that a tool for evaluating reflection may be invaluable to inform this research. We have described how we designed and iteratively developed a framework for rating levels of reflection whilst conducting fieldwork to explore the potential of an image-based reflection tool, SenseCam, to support teachers' and tutors' reflection on experience. This process allowed us to make a few key observations about how images support reflection in this context. For example, however images were used, the presence of another person made higher levels of reflection more likely. The observations also highlight the importance of guidance to support reflection with images, and we have suggested that awareness of how images can lead to reflection (including, most importantly, helping participants see things they have previously missed) may lead to better reflection. These observations have been organised into specific guidelines for the use of SenseCam in this particular domain (see Fleck, 2008), and could be evaluated in future research by making use of our operationalised levels of reflection framework.

However, we argue that the most important contribution this research makes to the field of HCI in general is the framework and associated methodological approach for rating reflection. Our operationalised framework will provide a starting point for researchers interested in rating reflection supported by other image-based reflection tools in order to explore how best to make use of them. We also suggest that following a process similar to ours, iteratively adapting and operationalising the initial framework we synthesised from the literature, will allow researchers of other reflective tools, in other contexts, to compare reflection over time, across similar situations or with adaptations. This approach will also allow them to understand the most effective ways in which their technique or technology supports reflection, and can ultimately inspire the design of future technologies for supporting reflection on experience by building up an understanding of the most effective ways of doing so.

Acknowledgments

This research was funded by an EPSRC studentship as part of the Equator IRC project (www.equator.ac.uk) funded by EPSRC Grant No. GR/N15986/01. The SenseCams were provided by Microsoft Research in Cambridge, UK. Many thanks to Geraldine Fitzpatrick and Rose Luckin for supervising this work. Thanks also to Geraldine Fitzpatrick, James Fleck, Yvonne Rogers and the peer reviewers for their comments on the drafts.

Appendix A. Supplementary material

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.intcom.2012.07.003.

References

Boud, D., Keogh, R., Walker, D., 1985. Promoting reflection in learning. In: Boud, D., Keogh, R., Walker, D. (Eds.), Reflection: Turning Experience into Learning. Kogan Page, London, pp. 18–40.
Cannings, T., Talley, S., 2003. Bridging the gap between theory and practice in preservice education: the use of video case studies. In: Proceedings of the 3.1 and 3.3 Working Groups Conference on International Federation for Information Processing: ICT and the Teacher of the Future, vol. 23. Australian Computer Society, Inc., Melbourne, Australia.


Chuang, H.-H., Rosenbusch, M.H., 2005. Use of digital video technology in an elementary school foreign language methods course. British Journal of Educational Technology 36 (5), 869–880.
Davis, E.A., 2006. Characterizing productive reflection among preservice elementary teachers: seeing what matters. Teaching and Teacher Education 22 (3), 281.
Fleck, R., 2008. Exploring the potential of passive image capture to support reflection on experience. Unpublished DPhil thesis, University of Sussex.
Fleck, R., Fitzpatrick, G., 2006. Supporting collaborative reflection with passive image capture. In: COOP '06, Carry-le-Rouet, France, pp. 41–48.
Fleck, R., Fitzpatrick, G., 2009. Teachers' and tutors' social reflection around SenseCam images. International Journal of Human Computer Studies 67, 1024–1036.
Fleck, R., Fitzpatrick, G., 2010. Reflecting on reflection: framing a design landscape. In: Proceedings of OzChi, 22–26 November 2010, Brisbane, Australia.
Harper, R., Randell, D., Smythe, N., Evans, C., Heledd, L., Moore, R., 2007. Thanks for the memory. In: Proceedings of the 21st BCS HCI Group Conference, vol. 2. Lancaster University, British Computer Society, UK, pp. 39–42.
Harper, R., Randall, D., Smythe, N., Evans, C., Heledd, L., Moore, R., 2008. The past is a different place: they do things differently there. In: DIS 2008. ACM, Cape Town, South Africa, pp. 271–280.
Hatton, N., Smith, D., 1995. Reflection in teacher education: towards definition and implementation. Teaching and Teacher Education 11 (1), 33–49.
Hodges, S., Williams, L., Berry, E., Izadi, S., Srinivasan, J., Butler, A., Smyth, G., Kapur, N., Wood, K., 2006. SenseCam: a retrospective memory aid. In: UbiComp 2006, California, USA, pp. 177–193.
Hutchinson, B., Bryson, P., 1997. Video, reflection and transformation: action research in vocational education and training. Educational Action Research 5 (2), 283–303.
Jay, J.K., Johnson, K.L., 2002. Capturing complexity: a typology of reflective practice for teacher education. Teaching and Teacher Education 18 (1), 73–85.
Jones, L., McNamara, O., 2004. The possibilities and constraints of multimedia as a basis for critical reflection. Cambridge Journal of Education 34 (3), 279–296.
Kember, D., Jones, A., Loke, A., McKay, J., Sinclair, K., Tse, H., Webb, C., Wong, F., Wong, M., Yeung, E., 1999. Determining the level of reflective thinking from students' written journals using a coding scheme based on the work of Mezirow. International Journal of Lifelong Education 18 (1), 18–30.
Lee, H.-J., 2005. Understanding and assessing preservice teachers' reflective thinking. Teaching and Teacher Education 21 (6), 699–715.
Leung, D.Y.P., Kember, D., 2003. The relationship between approaches to learning and reflection upon practice. Educational Psychology 23 (1), 61–71.
Lindley, S.E., Harper, R., et al., 2009. Reflecting on oneself and on others: multiple perspectives via SenseCam. In: Designing for Reflection on Experience, CHI 2009 Workshop, Boston, MA.
Mamykina, L., Mynatt, E., Davidson, P., Greenblatt, D., 2008. MAHI: investigation of social scaffolding for reflective thinking in diabetes management. In: CHI 2008. ACM Press.
Manouchehri, A., 2002. Developing teaching knowledge through peer discourse. Teaching and Teacher Education 18 (6), 715–737.
Moon, J.A., 1999. Reflection in Learning and Professional Development. Kogan Page Limited, London.
Parsons, M., Stephenson, M., 2005. Developing reflective practice in student teachers: collaboration and critical partnerships. Teachers and Teaching: Theory and Practice 11 (1), 95–116.
Price, S., Rogers, Y., Stanton, D., Smith, H., 2003. A new conceptual framework for CSCL: supporting diverse forms of reflection through multiple interactions. In: Proceedings of the International Conference on CSCL '03. Kluwer, pp. 513–522.
Reinman, A.J., 1999. The evolution of social roletaking and guided reflection framework in teacher education: recent theory and quantitative synthesis of research. Teaching and Teacher Education, 597–612.
Schön, D.A., 1983. The Reflective Practitioner: How Professionals Think in Action. Temple Smith, London.
Sherin, M.G., van Es, E.A., 2002. Using video to support teachers' ability to interpret classroom interactions. In: Society for Information Technology and Teacher Education International Conference 2002, Nashville, Tennessee, USA, pp. 2532–2536.
Sumison, J., Fleet, A., 1996. Reflection: can we assess it? Should we assess it? Assessment and Evaluation in Higher Education 21 (2), 121–130.
Thomson, C., MacDougall, L., McFarlane, M., Bryson, M., 2005a. Using Video Interaction Guidance to Assist Student Teachers' and Teacher Educators' Reflections on their Interactions with Learners and Bring About Change in Practice. Report for the Scottish Executive Education Department.
Thomson, C., MacDougall, L., McFarlane, M., Bryson, M., 2005b. Using video interaction guidance to assist student teachers' and teacher educators' reflections on their interactions with learners and bring about change in practice. In: British Educational Research Association Annual Conference, University of Glamorgan.
Ward, J.R., McCotter, S.S., 2004. Reflection as a visible outcome for preservice teachers. Teaching and Teacher Education 20 (3), 243–257.
Whitehead, J., Fitzgerald, B., 2006. Professional learning through a generative approach to mentoring: lessons from a training school partnership and their wider implications. Journal of Education for Teaching: International Research and Pedagogy 32 (1), 37–52.
Zuber-Skerritt, O. (Ed.), 1984. Video in Higher Education. Billing & Sons Limited, Worcester.