We can and should use data for accountability, at least a little bit. We should also use it, in a much more important way, for continuous learning and program development. This weekend I have been following Diane Ravitch’s stream of tweets, and I appreciated that, as rightfully furious as she is about the misuse and abuse of data for accountability, particularly in what is called “high stakes” testing, she also recognizes that “Measurement is fine so long as there are no stakes attached. That’s why NAEP is credible but state tests are not.”
She also declares, somewhat rhetorically: “Not everything that matters can be measured. Can you measure friendship? decency? love? Sometimes, what is measured matters least.”
My thesis is that as problematic as high stakes, basic skills testing has become, there is still value in collecting and using the right data, the data that do measure what matters most in our schools, and that there are data tools out there to do so.
I believe many of us who are leading in 21st century learning place a high priority, in our educational missions and throughout our school cultures, upon (at least) these three core purposes:
- delivering personalized and differentiated learning that has a significant, positive impact on the educational progress of individual learners across a wide range of abilities, maintaining a focus upon the individual rather than the mass of learners;
- forging and sustaining a connected community of engaged, active, intrinsically motivated, extracurricularly involved, technology-using, hard-working learners;
- and developing significant growth not only in our students’ basic skills, but also in their higher order thinking skills, including critical thinking, written communication, and creative problem-solving.
And yet none of the common measurements we use, the myriad of multiple choice, scantron tests of reading comprehension, mathematics, and other basic skills, gives us very meaningful or significant data on any of these three core goals.
Measurement matters in a third way too: to paraphrase McLuhan, the measurement can become the message. It is not just that what we measure is what gets done, though this is important and can be compelling. If our teachers know we are carefully measuring students’ individual growth, their engagement in learning, and their higher order thinking development in addition to their basic skills and content mastery, they will teach these things more carefully. And if our students see that we are measuring these things, they get the message about what we think is most important, and they will change their own view of what matters in their learning. The measurement is the message.
However, we should delight in the fact that in recent years a valuable trio of powerful and empowering national assessment tools has come online, each aligned with one of these three core and common goals, and each providing valuable data for schools and for school improvement.
None of these is a high stakes tool; none is for firing teachers or classifying schools. They can serve in a small way to demonstrate to parents, boards, or accrediting agencies an accountability for excellence and progress, but they are primarily tools for continuous self-improvement.
The MAP (Measures of Academic Progress) allows us to assess each individual student’s academic achievement efficiently, multiple times a year, and gives us in real time, not delayed, the information and gap analysis we need to meet a wide range of learners’ needs and to improve each learner’s performance: the low, the median, and the high. This is something conventional standardized testing doesn’t do, because it is so much more about categorizing the mass of learners, and about determining whether the lowest performers rose to basic competency levels.
The HSSSE (High School Survey of Student Engagement) surveys students annually, asking whether they feel engaged in their learning and at their schools, whether they feel safe, enjoy good rapport with fellow students and teachers, are motivated to learn, are active in their school community, and are finding leadership and collaboration opportunities at school. Participating schools receive results compared against the full national sample, allowing comparative analysis to identify areas for focused improvement.
The CWRA (College and Work Readiness Assessment) tests students in the fall of 9th grade and the spring of 12th grade, using an open-ended, non-multiple-choice, authentic assessment of their problem-solving, critical thinking, and written communication skills, in a test format called performance assessment.
By using these three, we can measure our success at exactly the things that are most important to us, and we can use the data collected to improve our performance at those things. Let’s get going.
This post is a preview of, and preparation for, a panel presentation I am giving in September at the US Dept of Education in DC; I welcome and invite readers to use the comment box to give me input, feedback, supportive quotes, or examples to assist me in preparing the presentation.