Aligning Data with School Mission: Collecting and Using the Right Data for School Improvement

Data matters, but most data stinks.

We can and should use data for accountability, at least a little bit. We should also use it, in a much more important way, for continuous learning and program development. This weekend I have been following Diane Ravitch’s stream of tweets, and I appreciated that, as furious as she rightfully is about the misuse and abuse of data for accountability, particularly in what is called “high stakes” testing, she also recognizes that “Measurement is fine so long as there are no stakes attached. That’s why NAEP is credible but state tests are not.”

She also declares, somewhat rhetorically: “Not everything that matters can be measured. Can you measure friendship? decency? love? Sometimes, what is measured matters least.”

My thesis is that as problematic as high-stakes, basic-skills testing has become, there is still value in collecting and using the right data, the data that do measure what matters most in our schools, and that there are data tools out there to do so.

I believe many of us who are leading in 21st century learning place a high priority, in our educational missions and throughout our school cultures, upon (at least) these three core purposes:

  • delivering personalized and differentiated learning that has a significant, positive impact on the educational progress of individual learners across a wide range of abilities, maintaining a focus upon the individual and not the mass of learners;
  • forging and sustaining a connected community of engaged, active, intrinsically motivated, extracurricularly involved, technology-using, hard-working learners;
  • and developing significant growth not only in our students’ basic skills, but also in their higher order thinking skills, including critical thinking, written communication, and creative problem-solving.

And yet, none of the common measurements we use, the myriad multiple-choice, Scantron reading comprehension, mathematics, and other basic-skills tests, gives us very meaningful or significant data on any of these three core and key goals.

Measurement matters in a third way too: to paraphrase McLuhan, the measurement can become the message. It is not just that what we measure is what gets done, though this is important and can be compelling. If our teachers know we are carefully measuring students’ individual growth, their engagement in learning, and their higher order thinking development in addition to their basic skills and content mastery, they will teach these things more carefully. And if our students see that we are measuring these things, they get the message from us about what we think is most important, and they will change their own view of what is important in their learning. The measurement is the message.

However, we should delight in the fact that a trio of powerful and empowering national assessment tools has come online in recent years, each of them aligned with one of these three core and common goals, and each providing valuable data for schools and for school improvement.

None of these is a high-stakes tool; none is for firing teachers or classifying schools. They can serve in a small way to demonstrate to parents, boards, or accrediting bureaus an accountability for excellence and progress, but they are primarily tools for continuous self-improvement.

The MAP (Measures of Academic Progress) allows us to gather, efficiently and multiple times a year, data on the academic achievement of each individual student, and gives us in real time, not delayed, the information and gap analysis we need to meet a wide range of learners’ needs and improve each learner’s performance: the low, the median, and the high. This is something standardized testing doesn’t do, because it is so much more about categorizing the mass of learners, and about determining whether the low rose up to basic competency levels.

The HSSSE (High School Survey of Student Engagement) surveys students annually, asking whether they feel engaged in their learning and at their schools, whether they feel safe, enjoy good rapport with fellow students and teachers, are motivated to learn, are active in their school community, and find leadership and collaboration opportunities at school. Schools receive results compared against the full national sample, allowing for comparative analysis to determine areas for focused improvement.

The CWRA (College and Work Readiness Assessment) tests students in the fall of 9th grade and the spring of 12th grade, in an open-ended, non-multiple-choice, authentic assessment of their problem-solving, critical thinking, and written communication skills, via a test format called performance assessment.

By using these three, we can measure our success at exactly the things that are most important to us, and we can use the data collected to improve our performance. Let’s get going.

This post is a preview of and preparation for a panel presentation I am giving in September at the US Dept of Education in DC; I welcome and invite readers to use the comment box to give me input, feedback, supportive quotes, or examples to assist me in preparing the presentation.

3 Comments

  1. Susan Wells said:

    I too followed Diane Ravitch’s tweets through the weekend. My least favorite was her dig at computers, “kids need human beings not computers…” I find this frustrating in a world where we’re working so hard to move toward innovative technologies.

So data… I was interested to read your thoughts on assessment tools. Those you’ve discussed could be used well. My concern is always time. How much time is given away to assess? How quickly can teachers move to change and differentiate their lessons so that those assessments become living instruments of change rather than static proof of success or failure?

In our school, with 1:1 iPod touches, we use Google Forms, with students taking pre-tests on their handhelds as they enter a room. Teachers can literally make changes to their lessons at that moment, for individuals or for a group. The same happens at the conclusion of a lesson, with students taking a short quiz as their “ticket” out. Teachers work in learning communities to study the success of their lessons and make changes as necessary before the next day.

Formative assessment has been our focus, and this embedded technology has helped meet our needs.

    August 8, 2010
    • Hi Susan:

      It took me over a month to respond, but here we go. I agree, time is of the essence.

      I don’t think these three tools overtax the precious resource of time. Certainly HSSSE and CWRA don’t.

      HSSSE takes students one hour a year, and we use part of our morning meeting/assembly time for that.

      CWRA takes students about two hours, but it is only your 9th and 12th graders, so in the course of their four years they’d spend four hours on this assessment.

MAP, well, MAP takes more time: maybe four hours per administration, three times a year. It is a bigger commitment, to be sure. We are piloting it this year and very much evaluating whether it is worth the time, but we think it might be.

      Jonathan

      October 6, 2010
