CEP813 – Assessment Genre

As a part of my school’s accreditation process, we have selected two goals for each teacher to focus on in their own instruction. These goals are to improve students’ ability to evaluate and synthesize information. The goals were chosen based on low data points on two standardized tests that students take at our school, the Terra Nova and the PSAT. To make sure that we are making progress on our goals, we conduct a local assessment three times a year. This assessment is created by the staff under strict guidance from the district superintendent and our accreditors. Though it is supposed to be used as a formative assessment piece to see whether our strategies are working, it is often treated as a summative assessment by staff.

Our school started by identifying two skills in which to increase student achievement. This was a good first step in the UbD design process: to “begin with an important skill” (Wiggins & McTighe, 1998, p. 258). The skills chosen, however, were a poor design choice; they do not correspond with any of our standards, as none of our standardized tests are aligned to them. Additionally, we did not develop any strategies for helping students use these skills effectively before designing the assessments. This is where we as a school and district failed. None of our assessments seemed to truly resonate with our goals. It is an ever-evolving process; the assessments are not written in stone, and each iteration is adjusted. Right now the staff is disquieted because, with all of the “assessment evidence” gathered, no “learning plan” was ever put in place (Wiggins & McTighe, 1998, pp. 257-258).

The first local assessment was administered in the first few weeks of school. It is predominantly thought of as a pretest for each class, and for many teachers it is perhaps the only pretest of the year. “Teachers should not think of prior-knowledge assessment as a discrete pre-test to use from time to time. Rather, it should be common classroom practice” (Shepard, 2005). Unfortunately, that is not the prevailing practice. Certainly this local assessment identifies deficiencies in student performance and achievement, but how it is used afterward is mixed. Students often do not even see the results. This is something I am guilty of participating in. Take these two examples, for instance.

In one of my classes I administer a math diagnostic test to get a sense of my students’ abilities as well as to identify any students for possible acceleration. However, I do not go over the results with my students, which I justify by telling myself that I lack the time and must get to the content right away. This is in opposition to both Shepard’s article and Wiggins and McTighe’s book. Shepard (2005) would want to see “how the student could improve in relation to the standards,” while Wiggins and McTighe (1998) would cite my “sin” of coverage: attempting to march students through content to meet all of my standards (p. 16).

Students in my geography class had a different sort of assessment. They have been researching various types of governments and historical examples of each, identifying the strengths and weaknesses of each. As a final piece of this lesson, students had to create their own government, picking out the best features of others to add to their own. I used this piece as their local assessment, as it gave me a clear means to evaluate how well they could synthesize the information they learned into something new (and strange). The results were not great. Many students opted to choose governments that already existed and added nothing relevant to improve them. For this assessment I have given specific feedback and will have students revise and resubmit to gain better mastery of the standard. Trumbull and Lash (2013) suggest for mastery learning that “student effort is increased when small groups of two or three students meet regularly for as long as an hour to review their test results and to help one another overcome the difficulties identified by means of the test” (p. 41). I think this is a good opportunity for students to share their work with their peers and gain ideas and insight into their own governments.

Based on the readings and my experiences, I would recommend that my school re-evaluate the purpose of our goals and how they align to our standards. Before we can assess acquisition of these skills, we need to “identify strategies that are helpful in using such skills effectively” and “devise learning activities that will enable learners to use such skill in context and to self-assess and self-adjust” (Wiggins & McTighe, 1998, p. 258). Without a plan for improving these areas, any assessment we do will be without purpose.

For future iterations of this local assessment, it may be more prudent to scale down the size and increase the frequency. “With formative assessments, teachers can evaluate students frequently via different strategies that can be tailored to the particular students” (Black & Wiliam, 1998, p. 8). An excellent method of administering formative assessments quickly and frequently is the use of Smart Response technology, which allows students to digitally enter their responses to a question posed on a Smartboard. Students can then get instant, more focused feedback, which Shepard (2005) has shown to dramatically improve achievement compared to standard controls.

References

Black, P., & Wiliam, D. (1998). Assessment and Classroom Learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-74. doi: 10.1080/0969595980050102

Shepard, L. A. (2005). Linking formative assessment to scaffolding. Educational Leadership, 63(3), 66-70.

Trumbull, E., & Lash, A. (2013). Understanding formative assessment: Insights from learning theory and measurement theory. San Francisco: WestEd.

Wiggins, G. P., & McTighe, J. (1998). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.
