Monday, February 24, 2014

The Hot Mess of RTTT, Teacher Evaluations and Student Testing Data In NY

THIS POST IS MY TAKE ON THE CONTEXT AROUND TEACHER EVALUATION AND STUDENT ASSESSMENT DATA IN NEW YORK (AND OTHER STATES).  IF YOU WANT TO SKIP THIS AND GO RIGHT TO READING MY MIDDLE-PATH PLAN ON MOVING FORWARD, HERE IT IS.


I’ve started this post five times this morning.  Because it’s about Race to the Top, teacher evaluation systems, and New York education, I have a lot I’d like to say, and even then I wouldn’t be able to scratch the surface.  We’ll have to start in the present, then, and assume that if you’re reading this, you’re probably aware of the history of our federal government’s intervention in public education, which includes No Child Left Behind and Race to the Top.  Truthfully, I hope someone is putting together an objective history of these last several years.  I don’t believe the debate or the stances are anything new, but the period between A Nation at Risk and 2020 will make for an extremely interesting case study, one in which all of the arguments and attempted fixes are pushed to an extreme that our modern world has become far too good at producing.


I want to focus on a tiny piece of this milieu: teacher evaluation and student testing data.  I’m talking about the idea of using data on students’ performance on standardized tests as a large piece of a teacher’s overall evaluation.  In order to measure this, “experts” have created a method called VAM (Value-Added Measurement) to ostensibly assess how valuable a student’s time in any particular teacher’s class has been.  If you want more background on it, try this, this, this, and/or this article/feature/blog.  This is statistics and algorithms at a whole new level; it’s highly controversial, and it’s one of the two pieces that have brought legislators and the Board of Regents in NY to a deeply unhealthy level of uncertainty and infighting.
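(For readers who’ve never peeked under the hood, here’s a minimal toy sketch, in Python, of the basic intuition behind value-added measurement.  The teacher names and scores below are entirely made up, and the single prior-score regression is a drastic simplification; New York’s actual growth model is far more elaborate.  The idea is simply: predict each student’s current score from prior achievement, then call the average amount a teacher’s students beat or miss that prediction the teacher’s “value added.”)

```python
# Toy illustration of the intuition behind VAM (Value-Added Measurement).
# This is NOT New York's actual growth model. Teacher names and scores
# are made up, and the single prior-score regression is a drastic
# simplification of what real value-added models do.

from collections import defaultdict

# (teacher, prior-year score, current-year score) -- hypothetical data
students = [
    ("Smith", 60, 68), ("Smith", 75, 80), ("Smith", 50, 59),
    ("Jones", 60, 62), ("Jones", 75, 74), ("Jones", 50, 52),
]

# Fit a simple least-squares line: predicted_current = a + b * prior
n = len(students)
mean_x = sum(s[1] for s in students) / n
mean_y = sum(s[2] for s in students) / n
b = (sum((s[1] - mean_x) * (s[2] - mean_y) for s in students)
     / sum((s[1] - mean_x) ** 2 for s in students))
a = mean_y - b * mean_x

# A teacher's "value added" is the average gap between their students'
# actual scores and the scores the model predicted for them.
residuals = defaultdict(list)
for teacher, prior, current in students:
    predicted = a + b * prior
    residuals[teacher].append(current - predicted)

for teacher, gaps in residuals.items():
    print(f"{teacher}: value-added estimate = {sum(gaps) / len(gaps):+.1f} points")
```

Even in this toy version you can see why it’s contentious: the “value added” number depends entirely on how good the prediction is, and on everything the prediction leaves out.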


I believe, when it comes to implementing top-down initiatives, that I should work my hardest to push toward what is good for students and take a “path of least compliance” with anything else.  This means that I know I work for people and for a system, but if my team(s) and I don’t see how something is beneficial to students, I’m not going to push to be ahead of the curve on it. I’d rather see how things play out.  NY, however, opted to be an early adopter of not only the Common Core Standards, of which I’m a supporter, but also the idea that grades 3-8 testing must be “aligned” to the Common Core.  If this isn’t making sense, realize that all RTTT states are in a transition between their own testing systems from NCLB and either the PARCC or the Smarter Balanced (SB) consortiums’ tests, which have release dates that are coming soon but haven’t yet hit.  While these new tests are expected to differ in many ways, the most important difference is supposedly going to be their ability to accurately track how much a student is learning with a given amount of time in a particular teacher’s room: VAM. NY’s desire to be out in front meant that its own tests were “transitioned” to a version that “aligned” more closely with the Common Core’s increased “rigor.”  These would better measure what students “actually” knew and could do, which would in turn allow them to be used to assess what is being learned in teachers’ classrooms.


This plan has been unravelling for a few years now for the following reasons, in no particular order:


1) Parents, teachers and students are very upset about the amount of testing many states’ plans involve.  This includes the states’ tests and, in NY, the assessments being used as “local measures” of student performance.  While few people are listening to teachers’ opinions, the parents’ and students’ voices are disrupting the process.  The “opt out movement” (see this as a general version of grassroots action) is generating a great amount of anti-testing press and threatening schools’ ability to have enough students taking the tests to meet AYP (Adequate Yearly Progress) requirements.


2) The tests themselves have issues.  From poor questions, to marketing embedded within test questions, to Lexile levels that seem absurdly inappropriate for kids, there have just been too many questions about the tests themselves for most people to feel okay about them.  Let’s also add the truth that the big test makers (in NY, it’s Pearson) are also profiteering by selling their own curricula and workbooks to schools as a way to prepare for their tests.


3) You can’t sell failing grades to parents with a smile and expect to walk away.  It’s truly unbelievable to me that those in charge of NY’s education plans - the governor included - haven’t done a better job predicting and handling public backlash over all of this.  When the tests became more “rigorous,” the grades were expected to drop.  This makes sense, but NY went ahead and opted to count these exams for all sorts of things, including teacher evaluations, instead of finding a way to pilot new questions.  They then went about insulting parents and dismissing all concerns.  They’re taking a “we don’t need your hearts and minds behind this since we’re more powerful than you are” approach that’s now haunting them.  They simply could have said: “We’re not going to have cut scores that define student achievement this year.  It’s a trial. We’re going to analyze our data to figure out where our students are and where we have to improve our efforts so that we’re all ready for PARCC.”  That just sounds better than: “Many more of you will fail this year than last because your teachers aren’t that good and you aren’t as smart as you’ve always thought you were.” Duh.


4) Although both PARCC and SB claim that their tests are going to be able to measure students’ abilities with higher-order thinking and the upper reaches of Bloom’s Taxonomy, nobody has shown how our current tests are doing that, so - at least for now - very few people are interested in what the tests can actually tell us.


5) Increasing the rigor of a test while new curricula are still being written around standards that were only recently released isn’t going to give accurate measures of a student’s learning in a class.  Rushing the curriculum writing and the testing rigor only makes VAM less believable.  Teachers’ unions now hold the moral high ground on what’s good for students (alongside parents, which makes their case stronger than ever) and the professional validity to publicly denounce VAM and therefore the overall APPR (Annual Professional Performance Review) program that came along with it.


Whew, we’re almost caught up.  Just this month, NY realized it has to bail, at least a bit.  If you read this, you’ll rejoice that NY is going to follow Massachusetts’ lead and take some time to reconsider its initial plan and adjust things as needed before students and teachers are held accountable for the Common Core curricula to which schools are still transitioning.  This leaves a ton of questions on the table, but it means that fewer people (students and adults) will be crying at night over the attached judgments and career-defining evaluations the way they were last year.  Questions I have:


1) Is there now a plan to ensure schools are moving forward with curriculum transitions?
2) Is there now a plan to gauge test validity and reliability?
3) What does this delay do to NY’s relationship with RTTT funds?
4) What are this year’s exams going to look like?
5) Is PARCC going to become hesitant to go ahead with its plan for rigorous assessments?
6) What exactly will teacher evaluations look like, since 20% of NY’s teacher-evaluation scores were based on these assessments?


And finally, there’s this article, in which we learn that NY’s teacher evaluation system is to somehow move forward, as is.  What a hot mess.

