How the education nonprofit City Year tackled “measurement drift” by reorienting its measurement activities around one simple premise: Data should support better decision-making.
In 2014, City Year—the well-known national education nonprofit that leverages young adults in national service to help students and schools succeed—was outgrowing the methods it used for collecting, managing, and using performance data. As the organization established its strategy for long-term impact, leaders identified a business problem: The current system for data collection and use would need to evolve to address the more complex challenges the organization was undertaking. Staff throughout the organization were citing pain points one might expect, including onerous manual data collection and long lag times to get much-needed data and reports on student attendance, grades, and academic and social-emotional assessments. After digging deeper, leaders realized they couldn’t fix the organization’s challenges with technology or improved methods without first addressing more fundamental issues. They saw that City Year lacked a common “language” for the data it collected and used. Staff varied widely in their levels of data literacy, as did the scope of data-sharing agreements with the 27 urban school districts where City Year was working at the time. What’s more, its evaluation group had gradually become a default clearinghouse for a wide variety of service requests from across the organization that the group was neither designed nor staffed to address. The situation was far more complex than it first appeared.
With significant technology roadmap decisions looming, City Year engaged with us to help it develop its data strategy. Together we came to realize that these symptoms reflected a single issue, one that exists in many organizations: City Year’s focus on data wasn’t targeted to the very different kinds of decisions that each staff member—from the front office to the front lines—needed to make. Its strategy served national-level needs well, where data were used for broad, aggregated, periodic tracking to inform reports to funders and to evaluate overall program effectiveness. But reports and dashboards built to meet top-down requirements didn’t provide the operational insights most users needed. In the field, reporting wasn’t optimized to support the work of City Year’s 3,000 AmeriCorps members, who were providing direct academic and social-emotional supports to students in nearly 300 schools. To drive better outcomes, they needed access to a consistent, high-quality data set, delivered more frequently and in a format that would help them monitor an individual student’s progress and make decisions about that student’s intervention needs. That real-time, on-the-ground decision-making is fundamental to City Year’s ability to improve educational outcomes for students and schools. Yet delivery and evaluation of services lacked a consistent data set, and measurement structures and standards didn’t support this core activity.
We’ve seen many organizations face similar challenges as they strive to measure their social impact and make timely adjustments for continuous improvement. Given the high level of effort required to carry out social impact measurement, the resulting measures ought to provide leaders with useful, relevant information that makes decision-making clearer and easier. Yet for so many organizations, measurement and evaluation have become an albatross. We see many leaders’ well-intentioned efforts to measure performance become saddled with unrealistic expectations, imprecise tools, and misaligned incentives. All too commonly, measurement activities drift away from what should be their central goal: to help individuals across the organization make better decisions.
Many of us in the social sector have probably seen elements of this dynamic. Many organizations create impact reports designed to satisfy external demands from donors, but these reports have little relevance to the operational or strategic choices the organizations face every day, much less address harder-to-measure, system-level outcomes. As a result, over time and in the face of constrained resources, measurement is relegated to a compliance activity, disconnected from identifying and collecting the information that directly enables individuals within the organization to drive impact. Gathering data becomes an end in itself, rather than a means of enabling ground-level work and learning how to improve the organization’s impact.
Overcoming this all-too-common “measurement drift” requires that we challenge the underlying orthodoxies that drive it and reorient measurement activities around one simple premise: Data should support better decision-making. This enables organizations to not only shed a significant burden of unproductive activity, but also drive themselves to new heights of performance.
In the case of City Year, leaders realized that to take full advantage of existing technology platforms, they needed a broader mindset shift. Through our work together, City Year was able to flip a number of unspoken orthodoxies about data that had gradually and unintentionally built up within the organization.
City Year resolved to put its most critical decision-makers—its AmeriCorps members, who serve nearly 200,000 students every day—at the center of the organization’s approach to measurement. We helped City Year redesign its approach to impact data to emphasize real-time monitoring over backward-looking measurement. This ultimately required changes to data protocols, processes, and behaviors. And it provided clarity around the collection and management of data that would most effectively support City Year’s broader goals.
Originally published in the Stanford Social Innovation Review.