
The Randomized Controlled Trial is only one research method.

“Quants or quals?” It is a distinction that some colleagues like to make about their own research, or indeed a label to apply to others. In some ways it is rather a silly question, to which the answer should really be “whatever is appropriate to answer the research question”. However, we all know that many of us have a preference for either qualitative research (sometimes dismissed as touchy-feely research) or quantitative research, which colleagues may describe either as ‘scientific research’ or as ‘number-crunching’, depending in part upon the degree of prestige a quantitative focus is felt to confer on the research.


Clearly, in educational research there is a need for both quantitative and qualitative studies. But I have sometimes had conversations with colleagues (such as those from the clinical sciences) who feel that anything ‘less’ (their emphasis) than a randomized controlled trial (RCT) is just tinkering at the edges, even when it comes to medical education. Even so, it seems that the demand for evidence-based practice in medical education is not pursued with the same rigour as it is in clinical medicine.


Scriven (2008: 11) refers to an ‘exclusionary policy’ that requires RCT-based evidence for practice to move forward. Even if this were sensible in medicine (and that is debatable), within education a policy of requiring RCT-based evidence to inform policy would be nonsensical. Grossman and Mackenzie (2005) discuss the contexts in which RCTs should be applied, and argue that ‘The RCT design is a theoretical construct of considerable interest, but it has essentially zero practical application to the field of human affairs.’ They go on to explain that ‘The real “gold standard” for causal claims is the same ultimate standard as for all scientific claims; it is critical observation’. The implication is that case study research involving rigorous analysis is what is required to inform practice.


Let me offer an anecdote about a recent occasion when RCTs failed to help me in a clinical situation. I have used an inhaler to control asthma for several years. A while ago I went to collect my prescription and noticed that the design of the inhaler appeared to have been modified. I thought little of it. When my old inhaler ran out and I switched to the new one, the drug made me quite unwell. Being a little slow to recognise the cause and effect, I increased my dosage for a day or so, with the result that I got worse. It turned out that the design of the inhaler hadn’t been changed; the pharmacist had given me the wrong drug. Whilst apologetic, the pharmacist seemed less concerned about the mix-up than I was, and assured me that it was a similar drug that had undergone rigorous clinical testing, so there shouldn’t have been any danger. Hmmm. However, it shouldn’t have made me ill either!


Examining the literature on the subject, it is clear that a number of RCTs have compared the effectiveness of the two drugs (formoterol and salmeterol), with little difference detected on average between them (e.g. Bodzenta-Lukaszyk et al., 2011; Papi et al., 2007; Remington et al., 2002). However, none of these trials included patients who were also on the same cocktail of other medications for other illnesses that I was on. So they were not proof that the new inhaler would work for me; they only indicated a likelihood, from which I gained little comfort as I was struggling for breath.


So RCTs have their place, and may be essential at certain stages of clinical testing, but they still do not take things to the level of the individual patient. My individual case study (not scientifically recorded anywhere) shows that the data from the RCTs are only a guideline. Judging by the various internet forums devoted to this exact issue, it seems that I am not the only one to have had an adverse reaction to formoterol. But such reports are considered merely ‘anecdotal’ because they have not been observed under controlled conditions. So, unfortunately, my case study (along with others reported online) has not been analysed rigorously (Grossman & Mackenzie, 2005). If only there had been someone to offer a critical analysis of my case study.


Scriven (2008: 11) clearly states his view that ‘the randomised control trial (RCT) is not a gold standard: it is a good experimental design in some circumstances, but that’s all.’ Direct observation seems to be the real gold standard, and I appreciate that this is not always practical in a study that includes hundreds of participants. However, in our classes, direct observation of practice would seem to offer the best way to analyse the quality of teaching and learning. Anything less is a proxy. So, for all the big data, learning analytics and national surveys, we cannot be sure that any intervention will be right for a given student in a given context at a given time. Perhaps we all just need more time to observe those students who are ‘struggling for breath’.



Bodzenta-Lukaszyk, A., Dymek, A., McAulay, K., & Mansikka, H. (2011). Fluticasone/formoterol combination therapy is as effective as fluticasone/salmeterol in the treatment of asthma, but has a more rapid onset of action: An open-label, randomized study. BMC Pulmonary Medicine, 11(1), 28.

Grossman, J., & Mackenzie, F. J. (2005). The randomised control trial: Gold standard, or merely standard? Perspectives in Biology and Medicine, 48(4), 516-534.

Papi, A., Paggiaro, P., Nicolini, G., Vignola, A. M., & Fabbri, L. M. (2007). Beclomethasone/formoterol vs. fluticasone/salmeterol inhaled combination in moderate to severe asthma. Allergy, 62(10), 1182-1188.

Remington, T. L., Heaberlin, A. M., & DiGiovine, B. (2002). Combined budesonide/formoterol turbuhaler treatment of asthma. Annals of Pharmacotherapy, 36(12), 1918-1928.

Scriven, M. (2008). A summative evaluation of RCT methodology: & an alternative approach to causal research. Journal of MultiDisciplinary Evaluation, 5(9), 11-24.


Innovation in qualitative research

Colleagues who are new to educational research, such as those engaged in teacher preparation and faculty development programmes, often ask questions such as “how many references should be cited in a good assignment?” or “how many interviews are enough?”. Of course, there is no simple answer to such questions. It depends on the research question being asked and the context in which the research is being undertaken.

One of the problems that arises from such research projects is that they tend towards the ‘accepted’ and the ‘tried-and-tested’. Inevitably this means that many of the resulting projects use semi-structured interviews or questionnaires, almost as a default setting. Some will stretch to focus groups, but beyond that the methods chosen avoid the risky or the unfamiliar.

All too often, then, the research has to be qualified at the end because of small sample sizes and low questionnaire return rates, and so the aim of achieving generalizability is lost. But what does the holy grail of generalizability offer? Within clinical research, large sample sizes and rigorous randomised controlled trials are considered to be the gold standard. However, whatever the sample size within a trial, there is no way to be assured that a particular treatment will work for a particular patient. Generalizability can only tell us that, all things being equal, a treatment is likely to work for a certain percentage of patients over a given period of time.

Many teaching colleagues shy away from qualitative studies because they feel that a case study or an observation of a particular event cannot confer generalizability, and so is not worth undertaking. But how many instances of an event have to occur before it becomes significant? There are plenty of one-off events that can be seen to have had significance, not just on a personal level but also on an international level; I’ll leave it to you to think of some.

In the literature on autoethnography (where the researcher and the researched become one) there are some interesting ideas that might help novice education researchers to feel less constrained by the orthodoxy of ‘accepted approaches’ to research. With regard to ‘generalizability’, Ellis (2004) points out that autoethnographic research seeks generalizability not just from the respondents but also from the readers. Ellis writes, “I would argue that a story’s generalizability is always being tested – not in the traditional way through random samples of respondents, but by readers as they determine if a story speaks to them about their experience or about the lives of others they know. Readers provide theoretical validation by comparing their lives to ours, by thinking about how our lives are similar and different and the reasons why.”

Many of the small-scale research projects undertaken by participants on PGCAPs or Grad Certs are unlikely to change the world, but they do have the potential to inform the professional practice of their authors. This is most likely to happen when there is a strong resonance between the observations and the participant’s professional context, whatever the sample size.


Ellis, C. (2004). The ethnographic I: A methodological novel about autoethnography. Walnut Creek, CA: AltaMira Press.