What is a systematic review? Our evaluator explains how this important decision-making tool works and what it can tell you
What is a systematic review?
A systematic review is a summary of available research on a particular topic, similar to a literature review. It identifies, assesses and synthesises the findings from multiple studies to answer specific questions. The key difference from a literature review is that it approaches the evidence systematically, thereby avoiding potential bias in the findings.
What did we do in our systematic review?
Last year, the EIB Group’s Evaluation Division conducted a systematic review of residential energy efficiency. This was the first time we used a systematic review. To ensure that we would get the most from this pilot, we worked with the International Initiative for Impact Evaluation, known as 3ie, a global leader in producing and synthesising rigorous evidence. Along the way we learned some lessons that may be useful for other evaluators, which we’d like to share:
- Advantages of using a systematic review
- Why starting with an evidence map can be helpful
- Trade-offs to bear in mind when undertaking a systematic review
What are the advantages of a systematic review?
Often there is a large amount of existing evidence available. What’s more, findings from the evidence can at times appear to be contradictory, either due to the quality of the research, or because results are influenced by the circumstances under which the research has taken place. By collating, appraising and then summarising the evidence, systematic reviews facilitate the use of the evidence.
This makes them especially useful for decision makers without much time on their hands, who want to understand what works, and what doesn’t work under which circumstances. But systematic reviews are equally useful for evaluators and researchers who can see where a lot of solid evidence already exists and where gaps remain.
In a systematic review:
- The scope and methods are defined upfront in a study protocol, which sets out clear inclusion criteria (transparent)
- The search is comprehensive and includes published and unpublished studies, in any language (unbiased)
- A critical appraisal of the quality of studies is carried out to assess the reliability of studies that are included (rigorous)
- Data extraction and organisation is systematic and reproducible (reproducible)
What is an evidence (gap) map?
An evidence gap map is a tool designed to provide an overview of the existing evidence on a topic.
Why starting with an evidence map can be helpful
Instead of agreeing on a topic and research question for our systematic review, then collecting the available evidence and synthesising the findings, we started the review by mapping the evidence. The advantage of this approach is that it produces an inventory of all the available evidence, which can help identify future research priorities and avoid duplication of effort.
The evidence gap map thus:
- Makes existing evidence more accessible
- Identifies synthesis gaps (areas with a lot of primary research that hasn’t yet been summarised in a systematic review)
- Shows primary evidence gaps (areas with little or no evidence)
- Can guide decisions for future evaluation work
This two-step approach (evidence map before systematic review) helped us avoid an ‘empty’ systematic review. An empty review wouldn’t answer our research questions; instead, it would tell us that more primary research needs to be conducted.
What lessons did we learn about evidence maps and systematic reviews?
Evidence gap maps can help avoid an empty systematic review, but they:
- Take considerable time to produce (6-12 months depending on the volume of evidence)
- Do not tell us what the evidence says
- Do not assess the quality of the primary evidence identified (especially important for grey literature/unpublished studies)
- Are most useful for evaluators and researchers, and are of less interest to operational staff
Findings from systematic reviews are more robust than findings from a single study and allow you to draw conclusions about what works and what doesn’t work beyond specific contexts, but:
- The questions they are able to answer will always be guided by the available evidence
- If focused on effectiveness studies (randomised controlled trials/quasi-experimental evidence), they are unlikely to tell us much about the mechanisms (why and how) that explain the findings
We were able to identify a good number of studies that measured the same outcome (energy consumption), which meant we could conduct a meta-analysis. The advantage of a meta-analysis is that it can combine several smaller studies, and by doing so, boost the sample size and increase the accuracy of the results.
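To illustrate how pooling works, here is a minimal sketch of fixed-effect inverse-variance weighting, one common way a meta-analysis combines effect estimates from several studies. The study numbers below are purely illustrative (not drawn from our review): each study's estimate is weighted by the inverse of its variance, so larger, more precise studies count for more, and the pooled estimate ends up more precise than any single study.

```python
import math

def inverse_variance_pool(effects, std_errors):
    """Fixed-effect inverse-variance meta-analysis.

    Weights each study's effect estimate by 1/variance, so more
    precise studies contribute more. Returns the pooled effect
    and its standard error.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical effect sizes: proportional change in energy consumption
# reported by three retrofit studies, with their standard errors.
effects = [-0.12, -0.08, -0.15]
std_errors = [0.05, 0.03, 0.06]

pooled, se = inverse_variance_pool(effects, std_errors)
print(f"pooled effect: {pooled:.3f}, standard error: {se:.3f}")
```

Note that the pooled standard error comes out smaller than that of any individual study, which is the "boost in sample size and accuracy" described above. Real meta-analyses also test for heterogeneity between studies and may use random-effects models when contexts differ.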
The evaluation function conducts independent evaluations of the EIB Group’s activities. It helps the EIB Group be accountable to its stakeholders and draw lessons on how to continuously improve its work, thereby contributing to a culture of learning and evidence-based decision-making.
Find out more about our work at Evaluation.