Warnings on SEL and Accountability

Social Emotional Learning (SEL) continues to spread as we deepen our understanding of what really contributes to youth success. As it grows, so do the data collection tools and reports. That’s great for sparking conversations about how we can support students, BUT there are reasons to be cautious:

  • The most ubiquitous measure of SEL right now is the self-report survey. Surveys have 99 problems, including:

    • Reference bias. With surveys, we’re trusting young people to grade themselves on their own mindsets and beliefs. Like this:

      Survey: Do you care about other people?

      Student: Sure do!

      Actual Person: Can you hold the door for me?

      Student: Hold it yourself!

      Would we also ask them to grade themselves in math? No, because a student’s perception of his or her own skills is subject to reference bias. A student might report being good at math because he is the best in his class, even though he would be below average at another school. I worked with a group of programs that found the students who were lowest performing academically actually rated themselves highly in Academic Self-Efficacy. Experts agree that reference bias can “produce results opposite of the truth.” The Learning Policy Institute recommends NOT holding schools accountable for self-report survey data, both because of reference bias and because SEL surveys are relatively new and were not designed for accountability purposes.

    • Surveys are administered and taken by humans. And humans are subject to a lot of factors that might impact results: weather, literacy, mood, relationships, understanding of the survey’s purpose, etc. In my experience working with dozens of programs on survey administration, the results don’t always turn out the way you expect, and that’s not necessarily a reflection of your work. That’s why I always hear skepticism from staff when we look at survey results.

      • Some issues highlighted by the Student Success Network:

        • How the survey is framed: is it personal, genuine, and does it make sense? Students may not understand or pay attention to the questions.

        • How the survey is perceived based on norms and culture. Students may give the “right” answer rather than the honest one.

    • Other administration considerations:

      • The survey is not given to students who don’t show up (skewing the data).

      • The survey is given right before a state math test vs. during a happy graduation ceremony.

  • Will accountability help? Some organizations are in the early stages of learning about and institutionalizing SEL. Before we grade staff on their ability to move the needle, do they know what they’re being graded on? Have they been trained? Does the organization have supports in place for them to develop practices? If we use the data to identify weak spots - how will that help?

I believe non-profits need to be held accountable, and SEL is a key focus of our work. Surveys can provide thought-provoking data, and we’ve seen important connections between SEL growth and achievement. We still need to be very careful about the messaging and administration plan before holding people accountable for survey results. I’ll try to have a growth mindset about it, but we’re not there…yet. Our measures are not sharp enough to say to funders: Don’t give us money if we fail to move the needle on this survey.

Building and Measuring Student Relationships: Two Examples

Relationships with students are important, duh. They engage, motivate, and buffer trauma. Good ones result in higher attendance and graduation rates. Two of my recent conference presentations can be tied together by the ideas that (1) Relationships with students are vital, (2) we can get better at building relationships, and (3) we can measure their impact.

1) Through the Good Shepherd Services Improvement Fellowship I co-designed and facilitated, we brought together schools and programs to tackle the issue of chronic absenteeism with improvement science methods. Each site developed a theory for why students are not present (e.g., negative peer influences, academic fatigue) and then tested ideas. The result: three interventions that focused on building adult and peer relationships. That might sound weird. We were trying to raise attendance, but the solution was NOT outreach, attendance awareness campaigns, or incentives. The solutions were peer groups, afterschool career exploration, and case conferencing. And, for some students, we saw increases in their attendance. See more about the fellowship and our outcomes here.

Ready by 21 National Meeting, Forum for Youth Investment, April 2019: The Problem with Attendance: Using Continuous Improvement to Tackle the Root Causes of Chronic Absenteeism.

2) At South Brooklyn Community High School, we found that when we asked students which adults they were connected to, the students with more connections had higher attendance, on average (a rough sketch of that tally follows the conference reference below). So, we adapted some activities with staff to figure out who we are connected to and WHY. It turns out staff were more connected to female students, interns, students who had been enrolled longer, and students who were either struggling or excelling academically. This reflection, with staff and again at the Transfer School conference, was important in starting the conversation about HOW to build better connections going forward.

NYC Transfer School Conference, June 2019: Building Connections for Student Success
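For the data-curious, here’s a minimal sketch of what that connection tally could look like in Python with pandas. The file and column names (student_connections.csv, connected_adults, attendance_rate) are hypothetical stand-ins, not our actual setup.

```python
import pandas as pd

# One row per student: an attendance rate and a comma-separated list of
# the adults the student named as connections (hypothetical format).
students = pd.read_csv("student_connections.csv")

# Count non-empty names in each student's connection list.
students["num_connections"] = (
    students["connected_adults"]
    .fillna("")
    .str.split(",")
    .apply(lambda names: sum(1 for n in names if n.strip()))
)

# Average attendance by number of connections: the pattern we saw was
# that students with more connections had higher attendance, on average.
summary = (
    students.groupby("num_connections")["attendance_rate"]
    .agg(avg_attendance="mean", n_students="count")
)
print(summary)
```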

How Funders Can Help Nonprofits With Data

I recently attended the Ready by 21 National Meeting, where I sat in a networking discussion cleverly titled “I Got 99 Problems, and Data Portals Are All of Them.” The group of mostly nonprofit leaders realized we share some of the same frustrating data challenges. Back in NYC, I heard many of those same concerns echoed. Here are some things funders could do that might help:

  • Do not require nonprofits to enter data directly into a funder’s system. Government agencies seem to do this the most. For small nonprofits that lack good data systems, a government system could be helpful. But first check whether the nonprofit already captures the information somewhere else, and consider how it could be formatted for your purposes or connected to your system.

  • If you ignore my first bullet and create a separate system, make sure this system is not a “black hole.” The worst example: front-line staff enter case notes they can never refer back to, the grantee can’t produce any aggregate report, and no insights come back from the funder. Yes, this still happens.

  • Help us advocate for data sharing. Help us gain the appropriate access to existing databases, rather than creating duplicate systems. If a nonprofit serves a school-based population, why not have some way to sync rosters? I’ve worked with programs where staff are responsible for attendance outreach, but they do not have direct access to attendance data. As a result, they take attendance for the same students in TWO systems! (A rough sketch of what a sync could look like follows this list.) In NYC, nonprofit community school leaders can access academic data through the New Visions data portal - how can that be expanded to other partnerships? And how can a system share data both ways?

  • Fund data quality, literacy, and integration. Most people who know how to create sophisticated, integrated data systems do NOT work at a nonprofit. But nonprofits need this outside skill set to make sure systems are efficient. They also need internal capacity to manipulate the data to answer questions, train staff to enter data, maintain systems as the work evolves, and develop the skill/will of staff to USE the data to improve. Few nonprofits have dedicated full-time staff to assess performance.

  • Consider programming cycles when creating report deadlines. We know board meetings and fiscal years make it difficult to be flexible, but you also don’t want rushed and incomplete information. If you want to see outcomes for a college retention program, make sure the semester is over and there is enough time for staff to input the data.

  • Know your power. Here’s what’s tricky: nonprofits want to know how they stack up so they can identify growth areas. However, they don’t want to do this in full view of funders. Anything you touch is high-stakes and could impact budgets, systems, and jobs. Improvement cultures grow in low-stakes environments. Funders could help connect agencies to develop common measures, but make sure no one is punished for coming to the table.
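To make the roster-sync ask concrete, here’s a hypothetical sketch in Python. None of the file or field names refer to real systems; the point is flagging mismatches for staff review instead of re-keying the same students twice.

```python
import csv

def load_roster(path, id_field="student_id"):
    """Load a roster CSV into a dict keyed by student ID."""
    with open(path, newline="") as f:
        return {row[id_field]: row for row in csv.DictReader(f)}

school = load_roster("school_roster_export.csv")   # hypothetical school export
program = load_roster("program_roster.csv")        # nonprofit's own list

# Surface mismatches for staff review instead of double data entry.
only_in_program = sorted(set(program) - set(school))
only_in_school = sorted(set(school) - set(program))
print("Enrolled in program, missing from school export:", only_in_program)
print("In school export, not enrolled in program:", only_in_school)
```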

Did I miss anything?

How Data Helps You Know More About The Kids You Know

Why would staff in a small school or program do a needs and assets assessment? A counselor already feels like they know their young people better than any survey or database. They are probably right. Here’s why we should do it anyway:

  • Prioritizing Resources and Advocating: In a program that has 160 students on four caseloads, each counselor may have a few students with mental health needs. They are addressing those needs (or not) on an individual basis. However, if an assessment reveals that a few kids on each caseload add up to 10% of the program, there is now motivation to identify partners, train staff, talk to funders, etc. One alternative school I worked with administered a survey that revealed ONE THIRD of students reported being sexually assaulted. While I may not recommend giving a survey that asks such intimate questions, WOW does that give some insight into what supports students need.

  • Programming to Build Upon Strengths and Interests: A key piece of any youth development program is getting students to learn more about themselves. As they share these revelations with us, where does that information go? Despite the lack of advanced degrees, young people are the most qualified to design programs that work for them. So we should collect data on their dreams, learning goals, desired experiences, strengths and growth areas, hobbies, etc. to inform program design. Don’t forget the assets, even if they seem less urgent. The assets give us the best clues as to how we can engage students.

I’m still trying to figure out needs and assets assessments, but here’s what I’m learning:

  • They can be ongoing. You don’t need to know all the needs and assets from a survey administered on Day 1. And sometimes, we must build relationships first to understand what questions to ask. If you are planning trips or resource fairs, do it collaboratively with the youth who can best identify their interests and needs. Photovoice is a great activity for this. Focus groups can be excellent. Community schools in NYC also have family forums each spring where they incorporate feedback activities into fun barbecues or carnivals. Assessments should be important, cyclical moments where we collect data on the bigger picture, but the assessment mindset can be applied daily.

  • They can be separate. In one school, we had counselors fill out a basic spreadsheet with each student listed and 12 columns of “Needs,” because we realized we didn’t have a high-level view of many needs we had identified at the student level (substance abuse, work conflicts, family responsibilities, etc.). With a “Yes,” “No,” or “Unknown” response for each, we got to see that over 20% of students (as identified by their counselors) had a mental health issue. By looking at the needs alongside academic data, we found that our attendance problem was also a WORK CONFLICT problem: all 18 students with a work conflict were severely chronically absent. How can the attendance team address THAT? (A sketch of this kind of analysis follows this list.)

  • They can be irresponsible. If we ask students and families what they need, and they tell us, we have a responsibility to respond to the best of our ability. And we should be cognizant of how someone might feel when answering questions about his or her home, family, or trauma. I work with a program where we’ve heard anecdotally about youth experiencing trauma, and we considered administering the Adverse Childhood Experiences (ACEs) Questionnaire to learn more and help advocate for resources. However, without mental health specialists on site and a comprehensive plan to avoid triggering young people, we decided to hold off.
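For what that spreadsheet analysis could look like once exported to CSV, here’s a rough Python sketch. The column names (mental_health, work_conflict, severely_chronically_absent) are illustrative stand-ins, not the school’s actual fields.

```python
import pandas as pd

# One row per student; each "need" column holds "Yes", "No", or "Unknown",
# as identified by the student's counselor (illustrative format).
needs = pd.read_csv("counselor_needs.csv")

# Program-level view: what share of students is flagged for each need?
for col in ["mental_health", "work_conflict", "substance_abuse"]:
    pct = (needs[col] == "Yes").mean()
    print(f"{col}: {pct:.0%} of students")

# Cross one need against academic data. This is the kind of table that
# surfaces a pattern like "every student with a work conflict is severely
# chronically absent."
print(pd.crosstab(needs["work_conflict"], needs["severely_chronically_absent"]))
```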

Overall, staff being empathetic, curious, active listeners does not automatically translate to program design. The better we know our young people, at every level of an organization, the better they are served.


Graduation AND Chronic Absenteeism Up in NYC Traditional High Schools

How does a high school have an 82% 4-year grad rate while 70% of its students are chronically absent (missing at least 10% of school days)?

While digging around NYC’s school quality data recently, I saw many schools that had high graduation rates AND a high rate of chronic absenteeism. Of course, schools with 100% grad rates are generally doing better with attendance than schools with 70% grad rates, but the data’s all over the place: see the scatter plot below of traditional high schools (and a sketch of the underlying comparison after it). On average, 4-year grad rates for traditional high schools are up 3 percentage points in the past two years, and chronic absenteeism is also up slightly, at just over 1 percentage point. Of the schools that improved their grad rates, 56% have HIGHER rates of chronic absenteeism and 10% have the same rate, compared to the previous year. So improved attendance is not the reason for that growth. My optimistic hypotheses: more graduation pathways, more support for absent students, inaccurate data. My investigative-journalist hypothesis: lower and varied standards across schools.

This is what normal people do for fun. Right, guys?

[Scatter plot: chronic absenteeism rate vs. 4-year graduation rate for NYC traditional high schools.]

Source: NYC DOE School Quality Reports 2017-18
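If you want to dig around yourself, here’s a sketch of the comparison, assuming two years of School Quality Report data have been downloaded as CSVs. The file and column names are guesses; the DOE’s actual files use longer labels.

```python
import pandas as pd

# Merge two years of (hypothetically named) School Quality Report extracts
# on the school's DBN identifier.
curr = pd.read_csv("sqr_traditional_hs_2017_18.csv")
prev = pd.read_csv("sqr_traditional_hs_2016_17.csv")
df = curr.merge(prev, on="dbn", suffixes=("", "_prev"))

# Among schools that improved their 4-year grad rate, what share also
# saw chronic absenteeism rise?
improved = df[df["grad_rate_4yr"] > df["grad_rate_4yr_prev"]]
worse_attendance = (
    improved["chronic_absenteeism"] > improved["chronic_absenteeism_prev"]
).mean()
print(f"Improved grad rate AND higher chronic absenteeism: {worse_attendance:.0%}")

# The scatter plot above: absenteeism vs. graduation rate (needs matplotlib).
ax = df.plot.scatter(x="chronic_absenteeism", y="grad_rate_4yr")
ax.figure.savefig("grad_vs_absenteeism.png")
```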

Before You Collect Data, Ask Some Questions:

We take for granted now that schools and nonprofits run on data. You’ll never hear a nonprofit leader say “we have too much data” or “we shouldn’t be collecting this or that.” However, before you say “let’s collect that,” ask yourself:

1) What’s the point? You might not hear it easily, but within each data request there is a subtle distinction between “Show me data so I can learn what works” and “Show me data so I can prove it works.” In one case, the purpose is learning and improvement; in the other, it’s probably funding. Both purposes serve an organization’s mission. When bringing data into a conversation, make sure you’ve got data that responds to the right question. For example, you may set a shoo-in goal for your funder that 75% of participants will achieve the outcome. When you meet with your internal leadership team or staff, celebrate that outcome, but then look at the participants who did NOT achieve it. In an improvement culture, we’re combing the data for problems: any participant the program didn’t work for or who may need more interventions. In funder reports, we’re highlighting success and growth. (A small sketch of these two lenses appears after this list.)

2) Is the juice worth the squeeze? I was chatting with a high school counselor recently who told me he spends three half-days a week inputting data, mostly case notes. I must have misheard. That’s a full day and a half each week! Why does he do this? Well, the boss needs it to keep the organization running. While I appreciate his commitment to the organization and to data collection, that’s too much “squeeze” for something the funders aren’t actually asking for. When considering whether to collect data on something, know your plan for what you’re going to do with it. It’s not enough to say “It would be cool if we knew…”. We have to treat the time staff spend collecting data as a monetary cost to the organization. How much would you pay? You should pay for data that benefits participants because it will (1) help fund your organization, (2) help staff do their jobs more effectively or efficiently (even accounting for the time spent collecting it), or (3) play a key role in leadership’s decision-making process.

3) Are we going to look at the data? If there’s no plan and no team to review the data, and the purpose is learning and improvement (not funder-driven), don’t waste your time. Let everyone leave a bit early today. OR, consider how the data will inform key services before you collect it. If half of your organization’s work is helping young people plan for college, what do counselors need to know, and when? If the transcript report takes 5 minutes to run every time they meet with a student, the data can be used for reflection afterward, but it’s not a tool in the moment. Do leaders and supervisors know how to talk about data, or value it? If not, get feedback to inform capacity building.
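To make the learn-vs-prove distinction in point 1 concrete, here’s a small sketch using made-up field names: the same outcome file answers the funder question and then feeds the internal improvement conversation.

```python
import pandas as pd

participants = pd.read_csv("program_outcomes.csv")  # hypothetical export
achieved = participants["met_outcome"] == "Yes"

# "Prove it works": did we hit the 75% goal we promised the funder?
print(f"Outcome rate: {achieved.mean():.0%} (goal: 75%)")

# "Learn what works": who did the program NOT work for, and what do we
# know about them? This list drives the internal improvement conversation.
not_achieved = participants[~achieved]
print(not_achieved[["student", "counselor", "barriers_noted"]])
```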

Three questions is pretty standard. I’ll stop there. Anything I missed?