Why Did My Mentor Teacher Only Give Me a Credit? The Lonely Task of Grading Your Pre-service Teacher

TEACH Journal of Christian Education, v6 n2, December 2012

Beverly Christian, Senior Lecturer, Faculty of Education and Science, Avondale College of Higher Education, Cooranbong, NSW
Peter Kilgour, Senior Lecturer, Faculty of Education and Science, Avondale College of Higher Education, Cooranbong, NSW
Andrew Kilgour, Associate Lecturer, Work Integrated Learning, Faculty of Health Sciences, The University of Sydney, Lidcombe, NSW


Abstract
The placement of pre-service teachers in schools to integrate theoretical learning with practical experience is an integral component of many tertiary education courses. Issues with both the reliability and validity of assessment grades in a workplace environment suggest the need to strengthen the level of academic rigour of these placements. In this study, professional development lecturers in one education program (Avondale College of Higher Education, NSW) constructed a standards-based grading rubric designed to assist mentor teachers to assess the performance of pre-service teachers. After implementation of the rubric for two Professional Experience sessions, mentor teachers were surveyed to assess the effectiveness and usefulness of the grading rubric. Results from quantitative and qualitative data found the grading rubric to be a vital tool in the assessment process. Benefits of the grading rubric included accuracy and consistency of grading, the ability to identify specific areas of desired development, and facilitation of mentor to pre-service teacher feedback. This research asserts that the assessment grading rubric was a useful tool for all three parties concerned: the course supervisor, the mentor teacher and the pre-service teacher.

Introduction
While the assessment of students in the tertiary setting is complicated enough to plan, administer, mark and justify, the assessment of tertiary students in the workplace while on practical placement creates a whole new set of issues. Kegan (1994) made the insightful observation that "people grow best when they continuously experience an ingenious blend of support and challenge; the rest is commentary" (p. 42). These are the types of experiences tertiary institutions desire their students to have while on placement. The question then arises as to the best way to facilitate this. This paper reports on a study conducted into the attitudes and beliefs of onsite mentor teachers who were responsible for implementing a trial rubric to assess the practical performance of pre-service teachers while on placement. Also reported in this paper are a theoretical platform for the practical assessment process, common thought on practical assessment found in the literature, and the history of how this research became an area of interest. Issues identified by the authors in the practical assessment area mostly revolve around the lack of control tertiary staff have over the way mentor teachers administer the assessment regimes the faculties ask them to implement.
Specific issues related to this concern include:
• The mentor teacher may understand little about the assessment of pre-service teachers.
• The mentor teacher may care little about the assessment of the pre-service teachers on placement.
• The mentor teacher may shortcut the evaluative part of the placement for various reasons, including time pressures, priority allocations or feelings of inadequacy.
• The mentor teacher may both feel intimidated by the student and give a higher grade than deserved, or they may discourage the student with an undeserved poor grade for their stage of development.
• Tertiary institutions generally have no authority over the mentor teachers on location but rely on their support and cooperation to train the next generation in the profession.
• Some tertiary institutions have not historically provided mentor teachers with the tools to carry out an objective assessment.

The depth of this issue became apparent during debriefing sessions with a group of pre-service teachers after a professional experience placement. Many of these pre-service teachers were either elated at their grade because it was significantly better than last time, or genuinely discouraged at their low grade, given their excellent previous grades. Pre-service teachers reported some mentor teachers quickly and randomly ticking boxes on the last day of placement. It is for these reasons that it could be argued that workplace assessment supervisors should only be required to grade the pre-service teachers' performances 'satisfactory' or 'not satisfactory'.

Foundational assertions for this research were:
• That excellence can only be aspired to when levels of performance are identified in the student.
• That mentor teachers have been expected to provide a grade with no real guidance or scale to use.
• That if the validity and reliability of the assessment process were to be improved, a scale needed to be provided.

A rubric for assessment of pre-service teachers was consequently developed. This paper reports on responses to a survey designed to measure the attitudes mentor teachers have towards the use of the rubric.

What research is saying
Equipping pre-service teachers with the skills and confidence they need to function in a classroom requires collaboration between the training institution and mentor teachers. The worth of workplace experience as a complement to more theoretical coursework is well documented (Pungur, 2010; Billett, 2009; Gowing, Taylor & McGregor, 1997). While Professional Experience placements offer a balanced practical component to teacher education courses, the associated assessment process is somewhat challenging. Assessment may be impacted by variables including the diversity of school demographics and localities, and schools adapting to different assessment criteria and expectations from different tertiary institutions (Sadler, 2009a). A further significant variable is the status of mentor teachers. Their experience can range from two to forty years (see Figure 1), and their positions extend from classroom teachers to department coordinators, assistant principals and, in the case of smaller schools, teaching principals. The position and experience of the mentor teacher also impact on both understanding of the mentoring / assessment process and the time available to administer it. To complicate the process further, the assessment is sometimes shared between two mentor teachers.
This occurs either because of job sharing or, in the case of high school teachers, mentors in two teaching fields. Considering these potential variables, Sadler (2011) claims that the consistency of individual assessors cannot be relied on in practice. This view is supported by Tillema (2009), who found considerable variation in how mentor teachers carried out assessment of pre-service teachers, both in relation to the perceived purpose of assessment and the criteria used. Yet, if assessment of pre-service teachers is to be useful, both inter-consistency and intra-consistency are essential (Sadler, 2009b).

To grade or not to grade?
The assessment practices of universities vary in regard to pre-service teachers on school placements. Anecdotal evidence pointing to the challenges of attaining consistency across a range of external assessors has resulted in some institutions adopting a pass / fail paradigm. Supporters of this assessment model claim that it is the fairest form of assessment given the complexity of different locations and assessors. Not all research supports this paradigm, however. Tillema (2009) asked three categories of participants in a Professional Experience program (university supervisors, mentor teachers and pre-service teachers) to prioritise perceived problems in the assessment of pre-service teachers. Out of 13 identified problems, the "Lack of guidelines and grading rules for assessors" ranked at number one for top priority, level of agreement and congruence, with a 95% certainty that this result did not occur by chance (p. 161). This clearly indicates that all three groups experienced a measure of frustration when there were no clear assessment guidelines. From this and other research (Blanton, Sindelar & Correa, 2006) it becomes evident that grading criteria are important because "they have a substantial affective impact on learners and their learning, influencing both students' sense of achievement, and their motivation and level of engagement in future courses" (Sadler, 2009a, p. 159). It appears that the literature in this area gives measured support to assessment procedures that establish standards against which performance is judged (Cochran-Smith & Fries, 2002; Weisz & Smith, 2005).

Rubrics as an assessment tool
Having established the importance of a grading system for professional experience placements, this review now focuses on the assessment tool. A variety of assessment methods have been used to assess practical components of higher education courses. These include observation and note taking, checklists, continuums, journals and rubrics. The last of these is the assessment tool under investigation. Reddy (2011, p. 84) defines a rubric as an "assessment tool that is used to describe and score observable qualitative differences in performances." Walvoord (2010, p. 18) additionally states that "the rubric is a format for expressing criteria and standards." It is these characteristics that make rubrics suitable for the purpose of grading professional experience placements.
The use of evaluation criteria emerges in the literature as an important point in teacher education, as a study on assessment by Pindiprolu, Lignugaris/Kraft, Rule, Peterson, and Slocum (2005) points out. This study concluded that the increasing demands on pre-service teachers to meet performance-based criteria highlighted a need to develop effective scoring rubrics. Also supporting the need for criteria are the supervisors, mentors and pre-service teachers in Tillema's (2009) study, who ranked 'Not having clear criteria in appraisal' in fourth place out of 13 identified problems in the assessment of a practice teaching lesson (p. 161). There were several other problems identified in Tillema's (2009) study that could be addressed by the use of a common grading rubric. These were 'Using different appraisal sources / information', 'Conducting a supervision conversation', 'Maintaining supervision standards', 'Giving directions for future learning', 'Giving feedback to students' and 'Alignment in ratings among assessors' (p. 161). In each of these instances a grading rubric could provide both a common language and a rating scale that would not only provide criteria standards but also offer a starting point for professional conversations between the mentor teacher and the pre-service teacher.

Reddy (2011) introduces a note of caution to the use of rubrics in a higher education setting. This relates to the nature of the rubric, its construction and its implementation. Problems occur when performance descriptors lack clarity, inconsistency exists in descriptors across levels, and rating scales are mismatched to descriptors. Also noted is the preference to train assessors by offering opportunities for debate and discussion about the rubric, providing practice opportunities, and giving assessors pre-marked samples as a reference (Reddy, 2011). While this may work within a faculty or department, it is not traditionally feasible when the mentor teachers who will be assessing pre-service teachers are widespread geographically. A further challenge is to create an assessment tool that is detailed enough to accurately measure performance yet does not discourage mentor teachers from using it because it is time intensive.

Rubrics do more than provide clear criteria and descriptions of desired performance for summative assessment. Rubrics can also provide formative assessment by giving pre-service teachers a clear picture of their interim skill set and, as Taylor (2007) points out, this assists mentor teachers in giving helpful and specific feedback. This has a positive effect on student professional experience learning. Using rubrics for the assessment of practical tasks is beneficial for all participants. Pre-service teachers benefit from the detailed descriptors' support of increased understanding of assessments and are able to build on their performance and improve. Mentor teachers find it easier to assess their own effectiveness and give helpful feedback, and university supervisors are informed about the effectiveness and quality of their course (Reddy, 2011).
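The rubric trialled in this study is not reproduced in this article, but the criteria-and-standards structure that Reddy and Walvoord describe can be sketched in outline. The following Python sketch is purely illustrative: the criterion names, level labels and descriptors are hypothetical placeholders, and the grading rule is an invented example rather than the scheme used at Avondale.

# Minimal sketch of a standards-based grading rubric as a data structure:
# each criterion carries one descriptor per performance level, and a grade
# is read off from the levels a mentor teacher selects. All criteria,
# levels and descriptors below are hypothetical placeholders.

LEVELS = ["fail", "pass", "credit", "distinction", "high distinction"]

RUBRIC = {
    "Lesson planning": [
        "Plans are absent or unworkable",
        "Plans cover content but lack clear learning outcomes",
        "Plans state outcomes and a coherent lesson sequence",
        "Plans differentiate tasks for identified learner needs",
        "Plans integrate outcomes, differentiation and assessment evidence",
    ],
    "Classroom management": [
        "Loses the attention of most of the class",
        "Maintains order only with frequent mentor intervention",
        "Manages routines and transitions independently",
        "Anticipates and defuses off-task behaviour",
        "Builds a positive learning climate that sustains engagement",
    ],
}

def grade(selected_levels: dict[str, str]) -> str:
    """Convert the level chosen for each criterion into an overall grade,
    here simply the average level index rounded down (an illustrative rule,
    not the scheme used in the study)."""
    indices = [LEVELS.index(selected_levels[criterion]) for criterion in RUBRIC]
    return LEVELS[sum(indices) // len(indices)]

if __name__ == "__main__":
    choice = {"Lesson planning": "credit", "Classroom management": "distinction"}
    print(grade(choice))  # prints "credit"

The point of the sketch is simply that a rubric pairs each criterion with an ordered set of standards, so a grade can be traced back to explicit descriptors rather than to an assessor's general impression.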
Aligning assessment with course objectives
The importance of assessment which informs course structure and content should not be overlooked. With the move towards Graduate Teaching Standards, there is a need to align assessment with course outcomes. Several authors on this topic speak in favour of the alignment of assessment with course objectives. McCarthy, Niederjohn and Bosack (2011) present a case for embedded assessment, which allows "faculty to take an active and intentional role in specifying student learning and determining whether students meet specified criteria" (p. 81). Biggs (1999) takes the argument one step further, stating that desired learning and understandings will occur when all course components are aligned. Consequently, mentor teachers should be assessing pre-service teachers according to course objectives, rather than according to their own personal opinions. The grading rubric referred to in this article is an attempt to bring school-based assessment into alignment with evidence-based assessment practices, thus validating the assessment process. The use of valid, standardised assessment criteria generally supports a consistent and fair assessment system. What remains unanswered is to what extent the use of a standardised assessment tool can assure uniformity of assessment across all mentor teachers who participate in the professional experience program, and also what their affective response to implementing it will be.

Methodology
A cross-sectional survey instrument was constructed to determine how workplace supervisors used the rubric provided. It also collected their opinions on its ease of use, its accuracy and its effectiveness. Demographic data sought included length of teaching experience in years and the qualifications of placement mentors / supervisors. The survey featured both closed and open items exploring how assessors valued the grading rubric. The closed items used a five-point Likert scale ranging from 'strongly agree' to 'strongly disagree'. The face validity of the survey was ascertained by iterative consultation with teacher education academics, and comments arising from their review were absorbed into the survey content. Qualitative data from the survey was aligned with informal or unsolicited comments received by the authors.
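The raw survey data are not reproduced in the article, but the kind of summary reported in the next section (counts per Likert category, percentage agreement, and the number of surveys implied by 112 returns at a 30% response rate) can be illustrated with a short script. The category counts below are invented for illustration; only the figures of 112 responses and a 30% response rate are taken from the study, and the label of the middle scale point is assumed.

# Illustrative tally of five-point Likert responses for one survey item.
# The per-category counts are invented; only the totals reported in the
# article (112 returned surveys, a 30% response rate) come from the study.
from collections import Counter

RETURNED = 112
RESPONSE_RATE = 0.30
print(f"Surveys distributed (approx.): {round(RETURNED / RESPONSE_RATE)}")  # about 373

# Hypothetical responses to an item such as "The rubric provided an accurate
# assessment of pre-service teacher performance".
responses = (["strongly agree"] * 40 + ["agree"] * 52 + ["neutral"] * 10
             + ["disagree"] * 7 + ["strongly disagree"] * 3)

counts = Counter(responses)
agreement = (counts["strongly agree"] + counts["agree"]) / len(responses)
for category in ["strongly agree", "agree", "neutral", "disagree", "strongly disagree"]:
    share = counts[category] / len(responses)
    print(f"{category:>17}: {counts[category]:3d} ({share:.0%})")
print(f"Overall agreement: {agreement:.0%}")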
Analysis of results
Responses from mentor teachers to the survey numbered 112, representing a response rate of 30%. From the survey, key information was collected, collated, and is outlined graphically below along with qualitative data for each item. The mentor teachers' years of experience are illustrated in Figure 1. It is clear from the chart that there is a wide distribution of years of experience, and all age groups are represented.

[Figure 1: Years of teaching experience of responding mentor teachers, grouped from fewer than 6 years to more than 30 years, plus an n/a category]

Figure 2 shows the perceived ease of use of the rubric, and Figure 3 indicates the percentage of respondents who believed that the rubric provided an accurate assessment of pre-service teacher performance. While the survey data showed that mentor teachers found the rubric easy to use and accurate (Figure 3), the qualitative comments collected in the course of the research implied that ease of use improved assessment accuracy, and that these two outcomes were very closely linked. Each of the following comments by mentor teachers shows how the elements of 'ease' and 'accuracy' are placed in the same category:

I found the rubric essential for my final assessment of [pre-service teacher] and it made it incredibly easy to identify her exact level of achievement. In fact I felt that it was almost too quick and easy to use, so I was able to spend more time and effort on my written comments for [pre-service teacher].

Makes assessing students a lot simpler and clearly defines to them areas that they are achieving well in and areas that need improvement.

I have found using the grading scale / rubric easy to follow; it allows you to make / give a grading instead of relying on your own judgment.

I love the grading / rubric as I was able to clearly identify what marks the students I was working with should receive. I found it also very beneficial in being able to use the right words to properly articulate my observations. I have kept a copy for personal reference.

The data indicate strong agreement that the rubric does in fact simplify the task of assessing the practical performance of pre-service teachers.

[Figure 4: Responses to 'Using the rubric has had a positive effect on student learning', on a scale from strongly agree to strongly disagree, plus n/a]

There were a small number of mentor teachers who disagreed that this is the case. It appears that this was based on the length of time it takes to do the assessment thoroughly using the rubric compared to the less structured way they had completed it in the past. Further comments from mentor teachers added depth to the idea that the process of using the rubric increased their confidence in the overall process of assessment and justified for them the grade they allocated. Two such comments follow:

It gave me confidence to give the grade I did because I knew my PT (pre-service teacher) had covered the requirements.

My staff had already determined the grades we were awarding without looking at the rubrics; however, the rubrics provided not only confirmation of our decisions but also focused discussion when determining the grades to be awarded.

Apart from the assessment benefits of using the rubric, the survey asked whether it may also be utilised as a tool to enhance pre-service teacher learning. Figure 4 indicates that there is agreement that the use of the rubric does help pre-service teachers learn. The mechanism at work is that the pre-service teacher can use the rubric as an indication of the standard expected for each graduate outcome and plan how they are going to achieve it. They may even seek advice as to how they can do better so as to achieve the standards. When the mentor teacher reviews the performance of the pre-service teacher with them at the end of the professional experience placement, the standards can again be used as the basis for the evaluation process, and valuable learning can occur. The following comments from mentor teachers illustrate the pre-service teacher learning that they believe occurs while using the rubric:

Feedback and discussion, verbal and written, is valuable for student learning. The grading scale also allows me to give the student specific feedback that relates to their course.

The rubric provides a target for the students to know what they could / should be aiming for.

The rubric states clearly the various levels that students can obtain and therefore gives them key performance indicators on which to focus. The grading is incremental and allows students to see what they need to do to advance to the next level.

This research hypothesised that the use of the rubric may result in the mentor teachers thinking a little more carefully about the whole assessment process for their pre-service teacher (Figure 5).
The survey asked this question, and around 75% of the respondents agreed that using the rubric as a basis for assessment of their pre-service teacher had caused them to think more, and probably more carefully, about the assessment process. Figure 5 illustrates this response. The following comment is indicative of several that showed how much the mentor teachers relied on the rubric in the assessment process:

We have discussed the rubric many times, particularly when trying to come to a decision about [pre-service teacher's] professional conduct and teaching practice... and [mentor teacher] kept referring back to it to help her assess [the pre-service teacher's] performance... and to confirm her decisions.

Discussion
Both the quantitative and qualitative data showed that the inclusion of the grading rubric in the pack of materials and resources sent out to the mentor teachers has been a popular strategy. The results are very comprehensive, and the authors believe this is the case not only for the reasons surveyed and reported above, but because the grading rubric has filled a vacuum and given mentor teachers a tool to complete a task that has historically been approached in a somewhat random manner.

In addition to comments relating to their own situation, mentor teachers were able to see a wider application of the benefits of a grading rubric. Some teachers felt the rubric would improve inter-consistency. One typical comment stated, "it seems like an instrument that will develop a level playing field for you." Other teachers saw its application as a diagnostic tool, not just for the pre-service teacher, but for course content and structure, with one stating that it could "identify areas of weakness within the student / cohort which need to be addressed." There were also teachers who appreciated the fact that pre-service teachers were being assessed against teaching standards, and that it was "scaffolded to the New Scheme Teacher requirements." Some mentor teachers from states other than NSW, however, saw this as irrelevant to their situation. With the implementation of the National Teaching Standards in 2013, the rubric will be redefined according to the graduate level of the National Standards, thus addressing this problem. Each of the above points highlights an issue raised in the literature (Sadler, 2009a; Cochran-Smith & Fries, 2002; Sadler, 2011; Tillema, 2009) and affirms the decision to move to a grading rubric for assessing pre-service teachers.

It is important to recognise that this study revealed a small number of perceived issues relating to the rubric. Criticisms from mentor teachers pertained to the construction of the rubric, in particular a lack of clarity in performance descriptors ("Grading is important, however the examples supplied seem a little complicated / cumbersome") and rating scales mismatched to descriptors ("I feel that some of the distinctions between the levels were ambiguous"). These comments were in line with the cautions raised by Reddy (2011) in regard to the development of rubrics. The grading rubric is continually being refined in response to these observations. Despite some minor criticisms, it appears that the overall impact of introducing the grading rubric was one of relief and perceived support for mentor teachers, pre-service teachers and college supervisors.
Other phrases used by mentor teachers included: "Great help", "gives all teachers common ground", "it was a helping guide", "I hope other universities adopt this practice", "it allowed me to sort my thoughts", "it allowed me to focus on judgments that were relevant", "it helped them identify their 'next steps'", "it provides language and details".

Conclusion
Historically, the practical assessment of pre-service teachers in the school setting has presented many issues. These concerns have usually been focussed around questioning the reliability and consistency in the way mentor teachers have allocated grades to pre-service teachers. The mentor teachers have felt under-resourced to decide on a grade, the pre-service teachers have been bewildered by the inconsistencies in the way they have been graded, and the college supervisors had not adequately addressed either of these situations. The authors believe that the rubric has achieved a satisfactory balance between providing adequate outcomes for assessment and not being so onerous as to discourage the supervisors from using it. This style of assessment is built on sound theory. It accepts that it is unfair and unreasonable to ask anybody to grade anything without valid criteria from which to work. The introduction of the grading rubric has empowered mentor teachers to assess confidently while encouraging pre-service teachers to attain pre-determined levels of competence. It would be overstating the use of the grading rubric to say it had eliminated inconsistencies, but the appraisal and level of acceptance of the rubric initiative suggest that with continued assessment, review and development the rubric will continue to provide an effective means of assessment of pre-service teachers in the workplace.

References
Biggs, J. (1999). What the student does: Teaching for enhanced learning. Higher Education Research and Development, 18(1).
Billett, S. (2009). Realising the educational worth of integrating work experiences in higher education. Studies in Higher Education, 34(7), 827-843.
Blanton, L. P., Sindelar, P. T., & Correa, V. I. (2006). Models and measures of beginning teacher quality. Journal of Special Education, 40(2), 115-127.
Cochran-Smith, M., & Fries, M. K. (2002). The discourse of reform in teacher education: Extending the dialogue. Educational Researcher, 31(6), 26-28.
Gowing, R., Taylor, E., & McGregor, H. (1997). Making your work placement effective: A student guide to enriching workplace learning. Melbourne: RMIT Publishing.
Kegan, R. (1994). In over our heads: The mental demands of modern life. Cambridge, MA: Harvard University Press.
McCarthy, M. A., Niederjohn, D. M., & Bosack, T. N. (2011). Embedded assessment: A measure of student learning and teaching effectiveness. Teaching of Psychology, 38(2), 78-82.
Pindiprolu, S. S., Lignugaris/Kraft, B., Rule, S., Peterson, S., & Slocum, T. (2005). Scoring rubric for assessing students' performance on functional behavior assessment cases. Teacher Education and Special Education, 28(2), 79-91.
Pungur, L. (2010). Mentoring as the key to a successful student teaching practicum: A comparative analysis. In T. Townsend & R. Bates (Eds.), Handbook of teacher education: Globalization, standards and professionalism in times of change. Dordrecht, The Netherlands: Springer.
Reddy, M. Y. (2011). Design and development of rubrics to improve assessment outcomes. Quality Assurance in Education, 19(1), 84.
Sadler, D. R. (2009a). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159-179.
Sadler, D. R. (2009b). Grade integrity and the representation of academic achievement. Studies in Higher Education, 34(7), 807-826.
Sadler, D. R. (2011). Academic freedom, achievement standards and professional identity. Quality in Higher Education, 17(1), 103-118.
Taylor, S. S. (2007). Comments on lab reports by mechanical engineering teaching assistants: Typical practices and effects of using a grading rubric. Journal of Business and Technical Communication, 21(4), 402-424.
Tillema, H. H. (2009). Assessment for learning to teach: Appraisal of practice teaching lessons by mentors, supervisors and student teachers. Journal of Teacher Education, 60(2), 155-167.
Walvoord, B. E. (2010). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass.
Weisz, M., & Smith, S. (2005). Critical changes for successful cooperative education. In A. Brew & C. Asmar (Eds.), Higher education in a changing world: Research & development in higher education, 28, 605-615.

