In this week’s episode of the Steve Barkley Ponders Out Loud podcast, Steve is joined by author Tom Schimmer to discuss modernizing assessment in the evolving landscape of education.
Get in touch with Tom: email@example.com
Subscribe to the Steve Barkley Ponders Out Loud podcast on iTunes or visit BarkleyPD.com to find new episodes. Thanks for listening!
Announcer: 00:00 Steve Barkley Ponders Out Loud is brought to you by Academy for Educators. Online professional development for teachers and leaders. Online courses, modules, and micro-credential programs for teachers to enhance their skillsets. Now featuring the instructional coaching micro-credential, including five online modules framed around the work of Steve Barkley. Learn, grow, inspire. www.academyforeducators.org.
Steve [Intro]: 00:25 Hello and welcome to the Steve Barkley Ponders Out Loud podcast. For over three decades, I’ve had the opportunity to learn with educators at all levels, both nationally and internationally. I invite you to listen as I explore my thoughts and learning on a variety of topics connected to teaching, learning, and leading with some of the best and brightest educators from around the globe. Thanks for listening in.
Steve: 00:52 Modernizing Assessment: A Conversation with Tom Schimmer. A few months back, while attending the NESA conference (that’s the Near East South Asia conference) in Bangkok, I had the opportunity to meet up with Tom Schimmer, who was also presenting there. Tom is the co-author of “Grading From the Inside Out,” “Instructional Agility,” and “Standards-Based Learning in Action.” And having heard Tom present and having had the chance to have a little dinner conversation with him, I asked him if he would join us for a podcast, and he agreed. So Tom, welcome.
Tom: 01:35 Yeah, thank you. Thanks, Steve. Thanks for having me.
Steve: 01:37 Would you take just a moment or two, Tom, and give people a little introduction to your background?
Tom: 01:45 So as of today, which is May 21st, 2019, this is finishing out my 28th year in education, and the majority of that time was spent in a school system, you know, in various roles and responsibilities. I spent seven years as a classroom teacher and 11 years as a school-based administrator, both of those at the middle school and high school level combined. And then the last two years of my role were at central office, where I worked primarily in the areas of curriculum, assessment, and instructional capacity, as well as a variety of other roles. So 2011 capped off 20 years in the school system. And in 2011, my first book, which was called “Ten Things That Matter From Assessment to Grading,” was published. As a result, I decided in that moment to resign from my position and take the leap, I suppose, into the world of full-time speaking, consulting, and authoring. For the last eight years I’ve been doing this full time. So that’s, you know, really a synopsis of the 28 years, very quickly, in terms of the span of my career.
Steve: 03:08 So as I read through several of your blogs and worked my way around your website, I latched onto this term of modernizing assessment. So I am wondering if you’d kind of kick us off with what that means to you.
Tom: 03:19 Well, what modernizing assessment means is that, you know, like any industry, education continues to grow and evolve. We continue to learn about the roles that all the various practices and systems and structures and strategies play in advancing achievement. And in the late 1990s and the early part of the 2000s, we certainly had this renaissance in assessment practices. I think there are very few contrasts in education that are so stark as what we need to do from an assessment perspective versus what has traditionally been our assessment paradigm, if you will. So the traditional point accumulation, all of the factors and facets that went into producing grades, and the kind of inadvertent or intentional grade grubbing that was produced by environments that were so fixated on scores and percentages, et cetera. And we’ve seen that shift over the last, you know, 15 to 20 years.
Tom: 04:16 So in some respects, when we talk about modernizing assessment, we have been needing to modernize assessment for the past 10, 15, 20 years. But modernizing assessment, for me, really means bringing assessment practices into alignment with what our current goals and outcomes are for our students. So as we think about, for example, helping students develop 21st century skills or competencies like critical thinking, collaborative thinking, creative thinking, innovation, social competence, digital citizenship, et cetera, all of those require us to, on the one hand, lean on our assessment fundamentals and practices that are timeless and universal, but at the same time force us to think about assessment in a different light. As we think, for example, about critical thinking, it’s very hard to produce a percentage score when you’re assessing critical thinking, which is most likely to be done through performance criteria in the form of a rubric. That’s just one example of how we need to shift, and it’s not as though rubrics are new, but it’s the increased use of criteria that are clear and transparent. Critical thinking also connects to the idea of project-based learning or problem-based learning, where students are digging into authentic environments. So all of that is forcing us to continually modernize our assessment approach.
Steve: 05:34 Well, I’m gonna push you – that we need to go back further than 20 years. I have often looked back at the frightening thought that as a first year teacher, I actually put grades on some poor kid’s report card. And the truth is, I had no idea what that meant.
Tom: 05:59 Yeah. Oh, you’re absolutely right. I put marks and scores on worksheets back in the day.
Steve: 06:07 And 20 years later, my daughter came home with an 89 and creative dramatics. And I so wanted to go talk to the teacher because my daughter wasn’t a kid with a whole lot of A’s, but I knew that all she would do is open up a book and show me a list of numbers, that when you added them up and divided it came out to 89. So I didn’t even bother to contact her.
Tom: 06:37 Right. I mean, you know, my reference to 20 years was really about the renaissance in assessment practices. If you go back to 1998, October of ’98 was when Paul Black and Dylan Wiliam published their sort of seminal research around assessment, and we all sort of sat up and took notice 20 years ago. And part of it, especially in the United States and even in Canada and other places around the world, was that the ’90s brought about the standards movement, right? So when you think of something like standards-based grading, it’s interesting to me that in 2019 that is still a controversial idea, given the fact that most teachers in the system have been teaching to curricular standards for the better part of two decades. So if we peg 1998 as a particular, you know, an arbitrary time, you could safely say that most teachers were teaching to curricular standards, and yet in 2019 basing grades on the achievement of those standards is still a controversial idea.
Tom: 07:30 So my 20-year reference was really about the fact that that was when the renaissance happened. But you are definitely, for me, on point in thinking that we could go back a hundred years and look at a lot of the practices. I mean, with some of the traditional grading practices that get employed in schools, and this is not to impugn most teachers, because I don’t think most teachers are there, it would not be far-fetched to think that there are some grading practices in some classrooms that resemble 1958 or 1935. I mean, some of those things have not changed. And I think the majority of people have evolved in their assessment practices, but I do think those remnants are there, and I think there are enough of them to kind of put a drag on the system.
Steve: 08:12 Talk a little bit about the standards-based movement and standards-based grading. I’ve worked with several districts that started standards-based in elementary, moved to middle, and when they got to high school they just ran into a wall. And the thought that I had along the way is that they went to standards-based grading before they really got to standards-based teaching.
Tom: 08:44 Right. No, I think that’s an excellent point, because when you think of the flow-through: my book, “Grading From the Inside Out,” is really about developing a standards-based mindset. And the basic premise of the book is, don’t make any physical changes to report cards or policies until you have a different mindset around what grades are supposed to represent. So the standards-based mindset is thinking along the lines of the instructional paradigm. If you work your way backwards, you realize that our standards-based report card needs standards-based grades, and standards-based grades come from standards-based evidence, and that evidence comes from standards-organized or standards-focused assessments, which are a natural outflow or derivative of standards-based instruction. So to introduce a brand-new report card without looking at all of those parts that lead through that process, that to me is, I would say, a critical error in the implementation model. And too often people try to take on too much too soon, and it backfires.
Steve: 09:50 So if you really got embedded in standards-based teaching, it would drive you to standards-based grading, just because the old report card becomes obsolete because of the work that you’re doing with students.
Tom: 10:07 Right. It becomes disjointed from the instructional flow. Versus introducing a new report card without any changes to the instructional paradigm: now the standards-based report card feels very clunky and awkward, because we’re not eliciting evidence that way and trying to make interpretations. The reason it fractures is that, even in 2019, a lot of teachers still organize their grade books by task type. They’ll have categories called exams, tests, quizzes, assignments, projects. Our standards have never been organized that way. And so what ends up happening is you elicit evidence through tasks, and the tasks can be quite robust and rich and all of that, but you end up splintering them apart because you titled one a quiz and you titled one a test and you titled one an assignment and another a project.
Tom: 10:59 And what you’ll end up with is overlapping evidence that is hard to parse out or discern along the learning goal. So right now, our fixation with task type has us missing opportunities for reassessment. For example, if a quiz item has the same cognitive complexity as section A on a unit test, I have a built-in reassessment. I don’t even have to take any of my free time; I just have to learn to reconcile the new evidence with the old evidence. That takes practice, and eliciting evidence in that way is a helpful way to start changing that mindset.
Steve: 11:33 So am I following that the teacher’s grade book needs to be set up around the proficiency rather than around the task that measured the proficiency?
Tom: 11:44 Right, and usually at the elementary school level, the grade book would be organized by specific standards. Often, not always, but often at the middle school and high school level, we’re usually talking about organizing evidence by strand, category, or domain. That would account for the fact that as students get older, their demonstrations of learning are quite robust, and sometimes it’s hard to parse out every individual standard. But if we’ve got several standards within the domain of, say, statistics and probability or geometry, or if it’s a category like writing or reading, we can categorize evidence into those big headings as opposed to trying to parse out the standards. Because again, a lot of the more sophisticated learning at the high school and middle school level includes many standards, and trying to parse those out almost gets you to minutia that’s unnecessary when it comes to aggregating up to an overall score anyway. So it’s just being thoughtful about how you organize your evidence.
Steve: 12:45 Well, that kind of leads me into another phrase that I pulled from your writing that I really liked. You talked about how the over-quantification of learning can distract teachers.
Tom: 13:02 Yeah. You know, that over-quantification, for me, I think, is the reason my colleagues and I wrote the book “Instructional Agility.” When we conceptualize assessment, two things happened over the last two decades. One is, we had this renaissance in assessment practices, which was very much a positive. But the other thing that happened was the increase in sophistication with technology. And so the introduction of the online grade book and 24/7 access and decimal points and all of that sort of ramped up in the last 20 years. So that got us talking about data, and again, a lot of it is positive. We’re talking about making data-informed decisions: where are our students in the learning? But it led us, I think, in some places down a pathway where we tried to over-quantify. Like every move students were making was being recorded and quantified.
Tom: 14:00 And what that was doing was interfering with our ability to use assessments formatively, to use them more organically. So the idea of being instructionally agile is to make real-time decisions based on emerging evidence without necessarily quantifying, but focusing on feedback. Assessment is just the idea of eliciting or gathering information about where students are in their learning. We either use it formatively or we use it summatively, and a lot of people understand that. They understand the definitions, but the thing that we sometimes lose sight of is that formative grades are an oxymoron. The formative purpose of assessment is really designed to initiate more learning. And we do that through [unintelligible]. If you’re verifying that learning has occurred and putting a score on something, that’s the summative purpose. So what’s really challenging is to initiate more learning while verifying that learning has occurred. It’s hard to do both. So often we need to pick one or the other and decide what our primary purpose is. You can do both, don’t get me wrong, but the key would be to have that primary purpose in mind. If you’re focused on initiating or advancing or reacting in that moment, being instructionally agile means I get information that allows me to turn that information around and help the students advance their learning. I don’t need to quantify that. I just need to do what coaches do and give good feedback.
Steve: 15:22 I was just going to ask you if the performing arts and athletic folks aren’t an example for us. The music teacher knows when to bring a more difficult piece out, and nothing has to get written down and turned into a number and stuck away somewhere.
Tom: 15:39 Yeah. And scores aren’t instructional, right? Even if 4/10 is true, 4/10 does not tell me what to do next in order to get to a six or an eight or a 10. It just tells me how it was. It’s like the volleyball coach watching someone serve a volleyball and then saying 6/10. It doesn’t help the server unless you follow that up with some description. Now here’s the rub. In the orthodoxy of assessment research, as I mentioned earlier, formative grades are an oxymoron. That doesn’t always play out in practical terms. I think there is a place for formative scores if, and this is the caveat, students are productively responding to feedback; then providing them with a formative score as a “here’s where you’re currently at” is okay. What’s happened in the past often, and the research has sort of borne this out over several decades, is that students will look at the score, and if the score is of a satisfactory nature, they stop learning.
Tom: 16:40 They just settle for that score, and so they deem the feedback to be unnecessary. Or on the other end of the scale, if the work is so substandard, they might deem the feedback to be undesirable. So now you’ve got a situation where you’ve provided the formative score. On the surface, there’s actually nothing wrong with providing a formative score, provided the students use the feedback. So that’s the key for teachers: if you’re providing formative scores, watch how your students respond to the feedback. If they take your feedback and at least attempt to address the deficiencies or try to advance their learning, no harm, no foul. But if students are ignoring your feedback as a result of the formative score, it could be problematic. So the research says the most favorable course of action is formative assessment in the absence of grades and scores, because we can’t read their minds. We don’t know how students are going to react, and it’s not worth it if your primary goal is to focus on the formative process and systems and structures and routines.
Steve: 17:41 I’d like your help with a specific that I have run into in several locations.
Tom: 17:47 Okay.
Steve: 17:48 And that is, when high schools and middle schools went standards-based and went to standards-based grading, they dragged along a remnant, which was that they use grades every two weeks to decide sports eligibility. So now the teacher is forced to stick a grade into a grade book, which basically runs counter to the message that the teacher is giving to the student. And I’m wondering if you’ve dealt with that as a specific anyplace, and what recommendations you were able to give to the folks you work with.
Tom: 18:36 It’s definitely one of those imperfect situations. And this is the part where I think I’m just going to take one quick step back. In the implementation of standards-based grading, it really is about what I’ve been recently calling a million little shifts. It’s not one big movement.
Tom: 18:54 It is. You’ve got a million little questions to answer. Are we going to continue to use letters of the alphabet? Are we going to use numbers? Are we going to use descriptors? What are they based on? How do we describe them? How many opportunities for reassessment? What does homework look like? These are all the questions that we have to answer. One of the questions that has to be answered is how we handle athletic eligibility, since more often than not, those rules come from an external body. So we have no choice over that. And if the external body is calling for a grade update every two weeks, then we have to comply with that if we want our kids to play sports. So what I recommend first is to make sure that the assessment practices that are producing those grades are producing accurate information, and accurate information that relates to the learning goals.
Tom: 19:43 So we want to make sure that we’re not getting into a situation where a student is indirectly (and I’ll clarify this in a moment) behaving their way out of eligibility. Meaning, they handed the teacher something three days after it was due, which turned a 70 into a 49. When I say indirectly, I mean these are not things that you would traditionally dock a student for directly. So attendance is an example: attendance has an indirect impact on achievement, because the more you attend, the better you’ll do, you know, odds are. But what happens is, if you have poor attendance, we don’t lower your grade. A lower grade may happen because you have poor attendance, but it’s not the direct result of your poor attendance. Now, directly behaving your way out of eligibility might be if you were suspended, or if you did something illegal, or anything like that.
Tom: 20:41 So, you know, there are ways to behave your way out, but from an academic perspective, you don’t behave your way out of eligibility. The other thing would be that grades are only about achievement. Grades are at a reasonable level of, I suppose you’d say, rigor; sometimes [unintelligible] or intentionally making school exponentially more difficult for kids by making the A scarce and using a kind of bell-curve mentality. So we want to make sure that we have clear criteria; everybody knows the rules of the game. The other thing to think about, especially for sports that happen early in the year, is when you’re assessing on year-end or semester-end standards that really are a progression through the whole, because there are those standards that run longitudinally. When they run the entire year or the entire semester, often schools will employ a system of benchmarking, which creates an early-in-the-semester standard, a mid-semester standard, and a late-semester standard, so that you’re not filling your grade book with a bunch of ones and twos, you know, or Cs and Ds, early in the semester so kids become ineligible.
Tom: 21:45 I would always recommend that that be very transparent and easily accessible to anybody who thinks that you’re just “dumbing it down” so kids can play football or anything like that. Making that transparent allows everyone to see that there is a progression of learning happening here. But those early scores would have to be eliminated from the grade book as the sophistication increases, because we would want the grades to be a more accurate picture of the more robust part of the standard. So it’s looking at the idiosyncrasies of every state policy or every governing body’s eligibility rules, and just making sure that students who are authentically invested in their learning are not inadvertently becoming ineligible to participate.
Steve: 22:27 So it’s kind of having an “on target” status. If it’s a year-long standard and we’re a month into the school year, this would say you’re on target to meet a passing level of the proficiency.
Tom: 22:43 Yeah. You’re on the trajectory, right? You’re on that pathway. I think, sometimes, it’s in those million little questions that I mentioned earlier; some of them are imperfect. But see, for me, when you have a standards-based mindset, you’ll answer those dilemmas in a different way than you will if you have a traditional mindset, right?
Steve: 23:04 Yeah.
Tom: 23:05 And so for me it would just be about making sure that anybody who is fully invested in their learning, or at least invested in their learning to a point that is satisfactory, is protected. I’m not going to suggest that every 15-year-old is going to be deeply immersed in all of the subjects they’re enrolled in. I get it. But if they are active participants, we want to make sure they’re not becoming ineligible. It would be different if it was a behavioral issue where someone was not doing their work, not fully participating, acting inappropriately around the school, violating social norms, things like that. That’s a different question about whether they should represent our school. But from an academic perspective, let’s just make sure that the systems and structures and routines and processes are transparent and clear, and that everybody understands the rules of the game. If you’ve got to put grades in every two weeks, then you’re just going to have to do that in the most accurate way possible.
Steve: 24:03 Tom, kind of one last area here. A lot of the people who listen to this podcast are working as instructional coaches alongside teachers. And I’m wondering if you might have some thoughts on some of the most critical questions or issues for instructional coaches to be raising as they’re working with teachers, either individually or in PLCs. From an assessment standpoint is what I was thinking.
Tom: 24:37 Well, I think for instructional coaches, and I would say this of school leaders as well (I think it’s true of everyone), you have to know what you’re talking about. So I’ve become very clear in my own mind that an investment in your assessment literacy has the greatest impact, as far as I’m concerned. I would say that investing in your assessment literacy is the most efficient and effective professional investment any educator can make. There are very few things that can operate in our system without fundamental assessment practices. A PLC is a great example. When you look at the PLC model, especially the PLC at Work model, there are the four guiding questions: what do we want kids to know and be able to do, and how will we know that they’ve learned it or can do it or know it?
Tom: 25:28 Those first two questions that guide PLCs are assessment questions. The ability to design assessments that elicit accurate information is critical. The interpretation of assessment evidence is becoming more critical because the more we use performance assessment, the more we use project-based learning, the more we use inquiry-driven models, the more we use rubrics. Whenever you’re using a rubric, you’re making a scoring inference. You have to infer quality. You have to be able to match a student’s performance to what’s described in the rubric, the criteria. There’s no straight line and there’s no percentage score. That requires practice, calibration, et cetera. You know, a continuum of behavioral support, differentiation; I mean, I could go on. English language learners, social competence, 21st century skills: everything hinges on our assessment fundamentals.
Tom: 26:15 So from an instructional coach’s perspective, I would say wherever you are in your assessment journey, continue to grow your understanding of sound assessment practices and principles because that will only help you coach others on how to use assessment practices that are on point.
Steve: 26:31 When you said the word practice, that’s where I was in my head. It takes a ton. I was just working with five Algebra teachers in a large high school. They’re all teaching Algebra One. And the first task I gave them was to select a student assessment that just met the standard and one that just missed it, then hand them to each other and see how the rest of the people at the table assessed them. Because it really comes down to a critical point: you’re deciding as a group whether the same student met a standard or didn’t. You can’t do it without practice, rehearsals, and the kind of feedback teachers need.
Tom: 27:27 That’s a great exercise, because that speaks to a clear fundamental in assessment, especially from a grading and reporting perspective. The one issue that has to be addressed is the issue of reliability. If an assessment is reliable and if an assessor is reliable, then results are repeatable. So in a standards-based instructional paradigm, there should be no such thing as a tough grader or an easy grader.
Steve: 27:55 Yeah.
Tom: 27:57 You have [unintelligible], you have sophistication. So when you calibrate on criteria like that, really what you’re doing is ensuring what’s referred to as inter-rater reliability. And without reliability you have no consistency. In assessment, we often talk about validity and reliability, but it actually works in the other direction: you first need to have a reliable measure, and once you have a reliable measure, then you can have a valid interpretation of it in terms of what’s next for the learner.
Tom: 28:27 So with the reliability issue, there’s no other way to do it other than to sit down and do exactly what you described there, Steve: have people consume the work and have conversations to make sure that we are on the same page. Now, to a point, we’re only human, and there will be some disagreement, but the disagreements should be anomalies. The disagreements should be when it’s just really tough to tell and it could go either way. But in most instances we should be able to recognize what quality work looks like. And again, for an instructional coach, understanding the concepts of reliability and validity will help you engineer those opportunities, including within individual teachers. Because here’s the other part of this: it’s not just the inter-rater reliability. It’s the intra-rater reliability, which means that as I’m scoring a stack of papers, I need to be consistent with myself from sample one down to sample 10 down to sample 20. I can’t be moving all over.
Steve: 29:27 I can’t get tired after five, right?
Tom: 29:30 I’ve got to be on point with myself, in fairness to students. So that’s just one piece of why understanding assessment is so critical, and why, to say it again, I would say that assessment literacy is the most efficient and effective professional investment any educator can make. We have to know what we’re talking about, and the same goes for principals. Principals need to be able to at the very least have credible conversations with their colleagues about assessment practices, and they have to know what the research indicates. Principals, and even instructional coaches, don’t necessarily have to be the experts in the school on everything, but they have to know enough in order to coach. You may be an instructional coach coaching someone who is not in your content area. You can’t know everything about everything, but you can know enough to have a repertoire that you can draw from to help push that person professionally.
Steve: 30:23 I saw that I didn’t need to be able to tell whether the kid met the Algebra One standard or not, but I needed to know how to orchestrate five teachers having the conversation and reaching an agreement. Yeah.
Tom: 30:36 Yeah. And it doesn’t matter if you know; what matters is that they know.
Steve: 30:40 Yeah, exactly. Exactly. Tom, thank you. This has been a delight. Really appreciate it, Tom. Thank you.
Tom: 31:47 Well, thank you Steve. Appreciate being on.
Steve: 31:50 All right. Take care.
Tom: 31:52 Bye, Steve.
Steve [Outro]: 15:04 Thanks again for listening. You can subscribe to Steve Barkley Ponders Out Loud on iTunes and Podbean, and please remember to rate and review us on iTunes. I also want to hear what you’re pondering. You can find me on Twitter @stevebarkley, or send me your questions and find my videos and blogs at barkleypd.com.