Warnings on SEL and Accountability

Social Emotional Learning (SEL) continues to spread as we deepen our understanding of what really contributes to youth success. As it grows, so do the data collection tools and reports. That’s great for sparking conversations about how we can support students, but there are reasons to be cautious:

  • The most ubiquitous measure of SEL is currently the self-reported survey. Surveys have 99 problems, including:

    • Reference bias. With surveys, we’re trusting young people to grade themselves on their own mindsets and beliefs. Like this:

      Survey: Do you care about other people?

      Student: Sure do!

      Actual Person: Can you hold the door for me?

      Student: Hold it yourself!

      Would we also ask them to grade themselves in Math? No, because a student’s perception of his own skills is subject to reference bias: a student might report being good at math because he is the best in his class, even though he would be below average in another school. I worked with a group of programs that found the students who were lowest performing academically actually rated themselves highly in Academic Self-Efficacy. Researchers agree that reference bias can “produce results opposite of the truth.” The Learning Policy Institute recommends NOT holding schools accountable for self-report survey data, both because of reference bias and because SEL surveys are relatively new and not designed for accountability purposes.

    • Surveys are administered and taken by humans, and humans are subject to many factors that can affect results: weather, literacy, mood, relationships, understanding of the survey’s purpose, etc. In my experience working with dozens of programs on survey administration, the results don’t always turn out the way you expect, and that’s not necessarily a reflection of your work. That’s why I often hear skepticism from staff when we look at survey results.

      • Some issues highlighted by the Student Success Network:

        • How the survey is framed: is it personal, genuine, and does it make sense? Students may not understand or pay attention to the questions.

        • How the survey is perceived based on norms and culture. Students may give the “right” answer rather than the honest one.

    • Other administration considerations:

      • The survey is not given to students who don’t show up (skewing the data toward those who do).

      • The survey is given right before a state math test vs. during a happy graduation ceremony.

  • Will accountability help? Some organizations are in the early stages of learning about and institutionalizing SEL. Before we grade staff on their ability to move the needle, do they know what they’re being graded on? Have they been trained? Does the organization have supports in place for them to develop practices? And if we use the data to identify weak spots, how will that help?

I believe non-profits need to be held accountable, and SEL is a key focus of our work. Surveys can provide thought-provoking data, and we’ve seen important connections between SEL growth and achievement. We still need to be very careful about the messaging and administration plan before holding people accountable for survey results. I’ll try to have a growth mindset about it, but we’re not there…yet. Our measures are not sharp enough to say to funders: Don’t give us money if we fail to move the needle on this survey.

Before You Collect Data, Ask Some Questions:

We take for granted now that schools and non-profits run on data. You’ll never hear a non-profit leader say “we have too much data”, or “we shouldn’t be collecting this or that”. However, before you say “let’s collect that”, ask yourself:

1) What’s the point? You might not hear it easily, but within each data request there is a subtle distinction between “Show me data so I can learn what works” and “Show me data so I can prove it works.” In one case the purpose is learning and improvement; in the other, it’s probably funding. Both purposes serve an organization’s mission, but when bringing data into a conversation, make sure you’ve got data that responds to the right question. For example, you may set a shoo-in goal for your funder that 75% of participants will achieve the outcome. When you meet with your internal leadership team or staff, celebrate that outcome, but then look at the participants who did NOT achieve it. In an improvement culture, we’re combing the data for problems: any participant the program didn’t work for, or who may need more interventions. In funder reports, we’re highlighting success and growth.
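The two framings above can be seen in the same few lines of code. This is a minimal sketch with invented participant records (the names, field names, and numbers are all hypothetical, not real program data):

```python
# Hypothetical participant outcomes, invented for illustration.
participants = [
    {"name": "A", "met_outcome": True},
    {"name": "B", "met_outcome": False},
    {"name": "C", "met_outcome": True},
    {"name": "D", "met_outcome": False},
]

# Funder framing: highlight the success rate.
met = [p for p in participants if p["met_outcome"]]
success_rate = len(met) / len(participants)
print(f"{success_rate:.0%} of participants achieved the outcome")

# Improvement framing: list everyone the program didn't work for,
# so staff can follow up with more support.
needs_followup = [p["name"] for p in participants if not p["met_outcome"]]
print("Follow up with:", needs_followup)
```

Same dataset, two different questions: the first number goes in the funder report, the second list goes to the team meeting.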

2) Is the juice worth the squeeze? I was chatting with a high school counselor recently who told me he spends three half-days a week inputting data, mostly case notes. I thought I must have misheard: that’s a full day and a half each week! Why does he do it? The boss needs it to keep the organization running. While I appreciate his commitment to the organization and to data collection, that’s too much “squeeze” for something the funders aren’t actually asking for. When considering whether to collect data on something, know what you’re going to do with it; it’s not enough to say “It would be cool if we knew…”. Think of the time staff spend collecting data as a monetary cost to the organization. How much would you pay? Pay for data that benefits participants because it will: (1) help fund your organization, (2) help staff do their jobs more effectively or efficiently (even after accounting for the time spent collecting it), or (3) play a key role in leadership’s decision-making process.
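To put a dollar figure on the “squeeze,” here is a back-of-the-envelope sketch. Only the three half-days come from the anecdote; the workday length, weeks per year, and hourly rate are assumptions you would replace with your own numbers:

```python
# Back-of-the-envelope cost of data entry time.
half_days_per_week = 3      # from the anecdote: 3 half-days on case notes
hours_per_half_day = 4      # assumption: 8-hour workday
weeks_per_year = 44         # assumption: school-year schedule
hourly_cost = 30.0          # assumption: fully loaded hourly cost in dollars

hours_per_week = half_days_per_week * hours_per_half_day  # 12 hours = 1.5 days
annual_cost = hours_per_week * weeks_per_year * hourly_cost
print(f"{hours_per_week} hours/week on data entry, roughly ${annual_cost:,.0f}/year")
```

Under these assumptions the case notes cost five figures a year. That is the number to weigh against what the data actually does for participants.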

3) Are we going to look at the data? If there’s no plan and no team to review the data, and the purpose is learning and improvement (not funder-driven), don’t waste your time. Let everyone leave a bit early today. OR, consider how the data will inform key services before you collect it. If half of your organization’s work is helping young people plan for college, what do counselors need to know, and when? If the transcript report takes 5 minutes to run every time they meet with a student, the data can be used for reflection, but it’s not a working tool. Do leaders and supervisors know how to talk about data, and do they value it? If not, gather feedback to inform capacity building.

Three questions is pretty standard. I’ll stop there. Anything I missed?