Research Project
What is "good" writing?
Research Question
What is the reliability of grading practices for essays in high school English classrooms?
What can the findings of this research study teach us about the way in which we define "good" writing?
Do English teachers (within a district) evaluate writing differently?
The Rationale: (Context)
"The stakes have never been higher for the teaching of writing, and especially for the teaching of argument writing."
Evaluate Multiple Perspectives
In schools across the nation, the emphasis on argumentative writing is apparent in countless curriculum initiatives. However, few can agree on what a proper product looks like, in part because of the subjectivity inherent in writing assessment. Despite this innate issue, this study hopes to identify common markers of “quality writing” that could be used to instruct pre-service teachers, newly hired teachers, and veteran teachers on the criteria deemed appropriate for “good writing” in a synthesis-argument setting. In a local context, this study is intended to shed light on the grading practices within a course learning team (AP Seminar). Through this research, light can be shed not only on the ways in which educators define good writing (synthesis argument), but also on AP Seminar writing expectations.
This study is inspired by my personal lack of confidence when it comes to assessing student essays and by my own experience, as a student, with inconsistent grading. The rationale of this study is therefore twofold: (1) Can we identify what good writing looks like based on the perceptions of educators working in the same course with common assessments? (2) Can we shed light on the reliability of grading practices here in Crystal Lake, and can this information inform pre-service teachers and beyond?
Review of Sources:
Brimi, H. M. (2011). Reliability of Grading High School Work in English. Practical Assessment, Research & Evaluation, 16(17).
Beck, S. W., Llosa, L., Black, K., & Trzeszkowski-Giese, A. (2015). Beyond the Rubric: Think-Alouds as a Diagnostic Assessment Tool for High School Writing Teachers. Journal of Adolescent & Adult Literacy, 58(8), 670-681.
Penketh, C., & Beaumont, C. (2014). ‘Turnitin said it wasn’t happy’: can the regulatory discourse of plagiarism detection operate as a change artefact for writing development? Innovations in Education & Teaching International, 51(1), 95-104.
Synthesize Ideas: Claim / Solution(s)
This study uses a verbal-protocol (“think-aloud”) design. The data will be collected via think-alouds with multiple AP Seminar instructors.
Each instructor will be recorded thinking aloud through their assessment of a Section II synthesis essay. The first portion of the think-aloud will be unprompted: instructors will be asked to verbalize their thoughts as best they can while grading the essay.
Next, instructors will be asked a series of questions related to the essay rubric.
This data will then be analyzed: what consistencies and differences appear in instructors’ grading practices?
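If the think-aloud sessions also yield rubric scores from each instructor, one common way to quantify grading consistency is percent agreement alongside Cohen’s kappa, which corrects raw agreement for chance. A minimal sketch in Python, using hypothetical scores for illustration (not data from this study):

```python
from collections import Counter

def percent_agreement(a, b):
    """Share of essays on which two raters assign the same rubric score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    # Chance agreement: probability both raters pick the same score at random,
    # given each rater's own score distribution.
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(a) | set(b))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical rubric scores (0-6 scale) from two instructors on ten essays
rater1 = [4, 5, 3, 6, 4, 2, 5, 4, 3, 5]
rater2 = [4, 4, 3, 5, 4, 2, 5, 3, 3, 5]

print(round(percent_agreement(rater1, rater2), 2))  # → 0.7
print(round(cohens_kappa(rater1, rater2), 2))       # → 0.6
```

With more than two instructors, the same comparison could be run pairwise, or a multi-rater statistic such as Fleiss’ kappa could be substituted.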