
The Randomized Controlled Trial is only one research method.

“Quants or quals?” A distinction that some colleagues like to draw about their own research, or indeed a label to apply to others. In some ways it is rather a silly question, to which the answer should really be “whatever is appropriate to answer the research question”. However, we all know that many of us have a preference for either qualitative research (sometimes dismissed as touchy-feely research) or quantitative research, which colleagues may describe either as ‘scientific research’ or as ‘number-crunching research’, depending in part upon the degree of prestige they feel a quantitative focus confers on the research.

Clearly, in educational research there is a need for both quantitative and qualitative studies. But I have sometimes had conversations with colleagues (such as those from the clinical sciences) who feel that anything ‘less’ (their emphasis) than a randomized controlled trial (RCT) is just tinkering at the edges, even when it comes to medical education. And yet the demand for evidence-based practice in medical education is not pursued with the same rigor as it is in clinical medicine.

Scriven (2008: 11) refers to an ‘exclusionary policy’ that requires RCT-based evidence before practice can move forward. Even if this were sensible in medicine (and that is debatable), within education a policy of requiring RCT-based evidence to inform policy would be nonsensical. Grossman & Mackenzie (2005) discuss the contexts in which RCTs should be applied, and argue that ‘The RCT design is a theoretical construct of considerable interest, but it has essentially zero practical application to the field of human affairs.’ They go on to explain that ‘The real “gold standard” for causal claims is the same ultimate standard as for all scientific claims; it is critical observation’. The implication is that case study research involving rigorous analysis is what is needed to inform practice.

Let me offer an anecdote about a recent occasion when RCTs failed to help me in a clinical situation. I have used an inhaler to control asthma for several years. A while ago I went to collect my prescription and noticed that the design of the inhaler appeared to have been modified. I thought little of it. When my old inhaler ran out and I switched to the new one, the drug made me quite unwell. Being a little slow to recognise the cause and effect, I increased my dosage for a day or so, with the result that I got worse. It turned out that the design of the inhaler hadn’t been changed at all: the pharmacist had given me the wrong drug. Whilst apologetic, the pharmacist seemed less concerned about the mix-up than I was, and assured me that it was a similar drug that had undergone rigorous clinical testing, so there shouldn’t have been any danger. Hmmm. It shouldn’t have made me ill either!

Examining the literature on the subject, it is clear that a number of RCTs have compared the effectiveness of the two drugs (Formoterol and Salmeterol), with little difference detected on average between them (e.g. Bodzenta-Lukaszyk et al., 2011; Papi et al., 2007; Remington et al., 2002). However, none of these trials included patients who were also on the same cocktail of other medications for other illnesses that I was on. So they were not proof that the new inhaler would work for me; they only gave a likelihood. This was of little comfort while I was struggling for breath.

So RCTs have their place, and may be essential at certain stages of clinical trials, but they still do not take things down to the level of the individual patient. My individual case study (not scientifically recorded anywhere) shows that the data from the RCTs is only a guideline. Judging by the various chat rooms on the internet devoted to this exact issue, it seems that I am not the only one to have had an adverse reaction to Formoterol. But these accounts are considered merely ‘anecdotal’, as they have not been observed under controlled conditions. So unfortunately, my case study (along with the others reported online) has not been analysed rigorously (Grossman & Mackenzie, 2005). If only there had been someone to offer a critical analysis of my case study.

Scriven (2008: 11) states clearly his view that ‘the randomised control trial (RCT) is not a gold standard: it is a good experimental design in some circumstances, but that’s all.’ Direct observation seems to be the real gold standard, and I appreciate that this is not always practical in a study that includes hundreds of participants. However, in our classes, direct observation of practice would seem to offer the best way to analyse the quality of teaching and learning. Anything less is a proxy. So for all the big data, learning analytics and national surveys, we cannot be sure that any intervention will be right for a given student in a given context at a given time. Perhaps we all just need more time to observe those students who are ‘struggling for breath’.

References

Bodzenta-Lukaszyk, A., Dymek, A., McAulay, K., & Mansikka, H. (2011). Fluticasone/formoterol combination therapy is as effective as fluticasone/salmeterol in the treatment of asthma, but has a more rapid onset of action: an open-label, randomized study. BMC Pulmonary Medicine, 11(1), 28.

Grossman, J., & Mackenzie, F. J. (2005). The randomised control trial: Gold standard, or merely standard? Perspectives in Biology and Medicine, 48(4), 516-534.

Papi, A., Paggiaro, P., Nicolini, G., Vignola, A. M., & Fabbri, L. M. (2007). Beclomethasone/formoterol vs. fluticasone/salmeterol inhaled combination in moderate to severe asthma. Allergy, 62(10), 1182-1188.

Remington, T. L., Heaberlin, A. M., & DiGiovine, B. (2002). Combined budesonide/formoterol turbuhaler treatment of asthma. Annals of Pharmacotherapy, 36(12), 1918-1928.

Scriven, M. (2008). A summative evaluation of RCT methodology: & an alternative approach to causal research. Journal of Multi-Disciplinary Evaluation, 5(9), 11-24.