Placerita Jr. High - October 15, 2018
Welcome to the Last Week of Quarter One
This was the whole crowd left at the end of the game; see the video of the empty stadium below.
On Wednesday, October 17, Presto Pasta will donate 20% of its sales to Placerita. The fundraiser runs all day. Presto Pasta is one of the Principal's favorite new places to eat in town; they serve great Italian food at a reasonable price. See the flyer below for the address and more information.
Wednesday, October 17 at 2:45 in the Library: a special guest presenter from Nearpod will show us how to make better use of this resource.
Friday, October 19 at Brunch in the Library: don't forget to come get your Munchies.
Finding Clarity in Assessment and Grading
Ensuring that grades are valid and reliable for all students starts with educators examining their assessment and evaluation practices.
By Laura Thomas
September 6, 2018
A recent Edutopia article about the question of zeros in classroom assessment set off a passionate debate filled with differing philosophies:
- “Kids need to learn how the real world works!”
- “One zero shouldn’t tank a kid in a whole class!”
- “There’s no reason not to let kids redo work until they demonstrate mastery!”
- “If we let them redo the work, they’ll never learn about deadlines!”
Teachers also discussed practices like dropping a student’s lowest grade, allowing homework passes, and grading homework solely for attempt. Reading through the conversation, one thing became clear to me: Teachers are in many different places when it comes to assessment—what its purpose is, how to do it, and why it’s necessary.
As universities drop SAT and ACT requirements and depend more heavily on students’ grade point averages for admissions decisions, it’s more important than ever for educators to gain clarity around our assessment philosophies and how they impact our assessment, evaluation, and grading practices.
SEPARATING ASSESSMENT, EVALUATION, AND GRADING
The word assessment comes from the Latin assidere, which means “to sit beside.” When we assess, we’re simply gaining information about what a student knows and can do. We need to ensure that our methods for gaining that information are as valid and reliable as possible, which means we must be clear about what we expect students to know and be able to do, and we have to be certain that the tools we’re using don’t prevent students from really showing us that.
If I give a paper-and-pencil exam to a student with fine-motor delays or one who can’t read well or can’t write well, or who doesn’t speak the language in which I’ve written the exam, I’m not assessing what the student knows, I’m assessing how well they can read or write or understand the language. The data I’m collecting is flawed.
Evaluation is where that flaw really comes into play because that’s when I put the data up against a standard and decide if it’s good enough. That means I have to know what “good enough” looks like, and I have to be able to trust my data. I have to be sure, among other things, that the lens I’m using is free of bias and that “good enough” is the same for all of my kids.
When I put my data up against my standards, I have to double-check that I'm seeing a good distribution of students at all levels. If all of my white, cisgendered, affluent kids are meeting the standard and everyone else is falling below, for example, the problem generally isn't with the kids—it's with me and my system.
Grades are how we communicate to the outside world what a student knows and can do according to our standards and data collection systems. The problems here are myriad, as we know. Is a C (or an S) the same as “meets the standard,” or should that be an A or an E? Is a 70 percent a C, or should it be 75 percent? Do students automatically drop a letter grade for a certain number of absences? What about extra credit? Should we do a standards-based report card? What about those pesky zeros—do we use them, or is an F a 50 percent?
Add in subjective issues like citizenship grades and a teacher’s prerogative to round a grade up or down for effort or extenuating circumstances, not to mention biases around what a “good student” should look like, and we’re looking at a mess.
FINDING CLARITY IN OUR SYSTEMS
To get clear with ourselves and our colleagues about these issues, we should have conversations about questions like:
- What is the purpose of school? The role of the student? The role of the teacher?
- How do our instructional strategies align with that purpose and those roles?
- How do students earn points in our classes? Are there points for non-academic tasks? If so, why?
- Which students seem to have an easier time earning points? Which students don’t? What does that communicate to them about our priorities?
- Why do we think some kids succeed and others fail? What assumptions are we making about why kids succeed and fail?
- What are our go-to tools for assessment? Who do they favor? Who do they leave out?
- How can we be sure that the data we collect in our assessments is valid and reliable?
- What are the benefits and costs of our grading systems?
- What if our current systems aren’t serving our kids? What if they’re really about the convenience of adults?
Classroom teachers have a lot of power. Until we're sure our grades are valid and reliable for all students, we need to think hard about how our assumptions about what it means to be a good student are colored by our biases, experiences, and personal beliefs. Are there things we could adjust to make our assessments fairer and more valid?