The process of establishing an evidence base for a particular therapy is complex. When a researcher designs an intervention, it is usually first tested with a small group of volunteer clients, often with a ‘mild’ presentation of (in this context) a mental health or AOD concern, in what’s known as a pilot (or feasibility) study. If results are encouraging, a larger study is arranged. Over time, other researchers are likely to study the effectiveness of the intervention with different groups within the population. Eventually, groups of researchers pool the results of all previous studies and examine the overall effectiveness of the intervention (a meta-analysis or systematic review). The strongest evidence comes from these meta-analyses, but it can take many years to accumulate enough studies to review.

It follows that if an intervention doesn’t have an established evidence base, it is not necessarily ineffective; rather, it may be an emerging approach whose evidence base hasn’t yet been established. So, rather than the simple presence or absence of evidence, it’s important to consider the level, or strength, of the evidence. One small study showing weak positive effects of an intervention is not nearly as compelling as several large review studies with large effect sizes.
Many studies are designed with a group of people not receiving the intervention (for example, receiving a placebo medication) or having their intervention delayed. This design strengthens the conclusions that can be drawn. For obvious reasons, it is morally and ethically difficult to use a no-intervention or delayed-intervention condition when researching interventions for infants, children and adolescents with moderate to severe mental health and/or AOD issues. Researchers therefore often test interventions on children and young people with less significant needs (subclinical populations) or with no co-morbidities, leading to concerns that interventions may not work as well with infants, children and young people presenting with more complex concerns in the ‘real world’ (Weisz et al., 2013).
It’s also been suggested that evidence-based approaches may not allow health professionals to personalise the intervention to meet individual clients’ needs (Weisz et al., 2013). Many evidence-based interventions have been manualised (i.e. a manual has been written outlining how to deliver the intervention) to ensure consistency in how they’re delivered. This can be reassuring for therapists (knowing that they’re providing the ‘science’), but it can also diminish some of the ‘art’ of therapy.
Despite these concerns, evidence-based interventions have been shown to be more effective than usual care (Weisz et al., 2013).