What is mixed methods evaluation? Simply put, it refers to an evaluation design that combines both quantitative (numeric) and qualitative (descriptive) elements. In this blog post, I thought I’d share some thoughts about quantitative, qualitative, and mixed methods evaluation approaches.
Although my professional training as a research psychologist originally emphasized the quantitative side of the field, I have come to appreciate that numbers don’t always tell the whole story of a program’s characteristics, outcomes, or impacts. A recent article in the Guardian described four common misconceptions about data that illustrate some of these issues and limitations:
[the quotes in italics below are from the Guardian article; the comments that follow in regular text are my annotations]
- “Not everything that counts can be counted.” In other words, there are important factors that, although they cannot be quantified, can be described in other ways (e.g., words, pictures, etc.). Themes can be identified that capture what is most important or noteworthy. [Also, I would argue that the converse is just as meaningful – that just because something can be counted does not necessarily mean that it is important to measure.]
- “Data is not the same as statistics.” The words “data” and “statistics” are frequently used interchangeably, but as the author noted, they are not the same. Data are descriptive and observational (and data can be either quantitative or qualitative). Statistics are tools used to identify trends in quantitative data by building and testing models.
- “More data does not mean better decisions.” More data can mean better decisions, or at least more informed decisions, but many other contextual factors (such as the values of the decision makers, the validity of the data, and available resources, to name a few) must be considered in making data-informed decisions.
- “There are other methods to knowing than through counting.” In other words, it is just as important to be able to describe the context, nuance, and descriptive richness that come with qualitative data. Again, if something cannot be counted, that doesn’t necessarily mean it’s unimportant to understanding the big picture.
The field of research and evaluation is getting better at recognizing a diversity of approaches to measurement and research design. For example, one of my professional “homes” – the American Psychological Association’s Division 5, formerly known as the Division of Evaluation, Measurement, and Statistics – is now the Division of Quantitative and Qualitative Methods. Another of my professional homes – the American Evaluation Association – is currently hosting Qualitative Evaluation Week on its AEA365 blog. So far the AEA blog posts have shared resources and perspectives on the value of qualitative evaluation techniques, as well as identified important competencies for evaluators using qualitative methods.
Which approach is better? It depends; every project is different. When our team develops an evaluation design, we work with the people who will be using the evaluation and its results to understand what they hope to learn from the evaluation study. The evaluation needs to describe the program’s processes, outcomes, and impacts accurately, but it also often needs to accommodate data reporting requirements from one or more funders. More often than not, we end up with a mixed methods design. Counts are important but don’t tell the whole story; descriptions and stories are extremely valuable but may not convey the full scope of the program’s activities, outcomes, and impacts without numbers to put them in context. What mixed methods approaches provide is the flexibility to best help a program tell its story with data – whether those data are quantitative, qualitative, or a combination.
Image courtesy of Master isolated images at FreeDigitalPhotos.net