Today’s post looks at program evaluation from both sides now – from the perspective of Jackie Gelfand, MA, a recent addition to our team here at The Measurement Group. Jackie has worn the hats of both evaluator and executive director of several community-based organizations (although not at the same time).
I have had the opportunity to participate in program evaluation as an evaluator, as well as a program director. The dual role allowed me to see the benefits of conducting program evaluation. It also helped me explain to staff that strangers would be hanging around with surveys, interrupting their workflow, and wanting to speak to “their” clients about intimate issues. Both roles required patience, an understanding of the process of evaluation, and excellent communication skills (to pass on that good understanding and patience).
In my experience as an executive director, the majority of staff come to work, do a good job, and like what they do. They complete required forms, attend required meetings, and grouse about other things. However, I think they can sometimes lack awareness of why they are doing things in a particular way. Do they understand how they have impacted the client? Are the clients getting better? Getting worse?
When unsuspecting staff members’ lives are turned upside down by evaluation teams, they learn how their “interventions” are helping the clients. They learn what they are doing right for clients, and what they may be doing wrong. Funders want to know how we are impacting clients. How can we tell that what we’re doing is good stuff? The funder also wants to see measurable goals and objectives, and how our procedures deliver positive outcomes.
Preparation is an important factor in getting staff ready. People are coming. The evaluation team will talk to you about what you do and what the clients are doing. They’re going to talk to the clients. They will talk to administrators, the City, the County, and whoever else is around that deals with your organization and your clients. If you’re dealing with youth, they might even talk to parents (if they’re around). They could also speak with neighbors near your agency who have never liked you being there. And, over the length of a year (perhaps more), they will process data continually so that they can give you the answers to questions you didn’t even know you were asking.
To illustrate, an agency I worked with received funds from a foundation to conduct program evaluation for a group home and foster family agency. The organization housed children from birth through their teens, many of whom had medical problems. What we learned from an informal internal evaluation was that many more of our youth ended up in permanent foster care than did youth from other agencies. Our youth received a variety of services, including mental health, 24-hour nursing care, case management, and wellness programs. We were successful in our mental health program. We provided the support clients needed to function at school and sometimes within their families of origin. These youth were getting the assistance they needed to separate from the child welfare system. The Department of Children & Family Services liked what we did. And we had the data to prove it.
Evaluating the Second Program
From an evaluator’s perspective, the group home was the dream part of the evaluation. However, it wasn’t the only program being evaluated. Our Family Support Services Program was raucous and independent. They appeared not to understand what we were trying to do in this more formal evaluation. Interestingly enough, these particular staff members were more highly educated than the residential staff. They were therapists and therapy interns and case managers.
There were some negative outcomes. Parents complained that they weren’t getting their children back fast enough (this had to do with the Court System and DCFS, not our program). Our new foster family agency had difficulty signing up foster parents. This was true throughout Los Angeles County. However, we had promised to deliver certain numbers, and ultimately we did not meet them. Because of the evaluation, though, we had a place to start in examining why we weren’t meeting our numbers.
I felt the frustration of the evaluators. I had experienced some of the same frustration on the evaluator side when I was trying to get organizations to return calls or required forms. Staff arrived for weekly meetings without having completed the tasks they were assigned. How do we report numbers or input them into a database system if we don’t have any to report? Apparently I wasn’t doing that great a job of explaining the concept of evaluation, and the buy-in was tenuous. While we had success with the group home, we were not doing well in the foster family program.
Despite the good and the bad, I believe that the money from the foundation was well spent. We received information that was helpful to one program and that showed that we were benefiting the youth. The staff ended up questioning whether the new foster family program was viable in the current environment or with the particular population we served.
Without evaluation, we would not have been having discussions about any of this. Conducting the evaluation gave us the opportunity to look carefully at what we were doing. We were able to see how it impacted the clients, the neighborhood, the government entity, the staff, and, I would guess, the evaluators.