# The Randomized Control Trial is only one research method.

“Quants or Quals?” It is a distinction that some colleagues like to make about their own research, or indeed a label to apply to others. In some ways it is rather a silly question, to which the answer should really be “whatever is appropriate to answer the research question”. However, we all know that many of us have a preference for either qualitative research (sometimes referred to as touchy-feely research) or quantitative research – which can be described by colleagues either as ‘scientific research’ or as ‘number-crunching research’, depending in part upon the degree of prestige it is felt that a quantitative focus confers on the research.

Clearly in educational research, there is a need for both quantitative and qualitative studies. But sometimes I have had conversations with colleagues (such as those from the clinical sciences) who feel that anything ‘less’ (their emphasis) than a randomized control trial (RCT) is just tinkering at the edges – even when it comes to medical education. Even so, it seems that the demand for evidence-based practice in medical education is not pursued with the same rigour as it is in clinical medicine.

Scriven (2008: 11) refers to an ‘exclusionary policy’ that requires RCT-based evidence for practice to move forward. Even if this were sensible in medicine (and that is debatable), within education a policy of requiring RCT-based evidence to inform policy would be nonsensical. Grossman & Mackenzie (2005) discuss the contexts in which RCTs should be applied, and argue that ‘The RCT design is a theoretical construct of considerable interest, but it has essentially zero practical application to the field of human affairs.’ They go on to explain that ‘The real “gold standard” for causal claims is the same ultimate standard as for all scientific claims; it is critical observation’. The implication is that case study research involving rigorous analysis is required to inform practice.

Let me give an anecdote of a recent occasion when RCTs failed to help me in a clinical situation. I have used an inhaler to control asthma for several years. A while ago I went to collect my prescription and noticed that the design of the inhaler appeared to have been modified. I thought little of it. When my old inhaler ran out and I switched to the new one, the drug made me quite unwell. Being a little slow to recognise cause and effect, I increased my dosage for a day or so, with the effect that I got worse. It turned out that the design of the inhaler hadn’t been changed at all: the pharmacist had given me the wrong drug. Whilst apologetic, the pharmacist seemed less concerned about the mix-up than I was, and assured me that it was a similar drug that had undergone rigorous clinical testing, so there shouldn’t have been any danger. Hmmm. However, it shouldn’t have made me ill either!

Examining the literature on the subject, it is clear that a number of RCTs have compared the effectiveness of the two drugs (formoterol and salmeterol), with little difference detected on average between them (e.g. Bodzenta-Lukaszyk et al., 2011; Papi et al., 2007; Remington et al., 2002). However, none of these trials included patients who were also on the same cocktail of other medications for other illnesses that I was on. So they were not proof that the new inhaler would work for me; they only gave a likelihood. From this I gained little satisfaction as I was struggling for breath.

So RCTs have their place, and may be essential at certain stages of clinical trials, but they still do not take things to the level of the individual patient. My individual case study (not scientifically recorded anywhere) shows that the data from the RCTs are only a guideline. Judging by the various chat rooms on the internet that are devoted to this exact issue, it seems that I am not the only one to have had an adverse reaction to formoterol. But these accounts are considered merely ‘anecdotal’ as they have not been observed under controlled conditions. So unfortunately, my case study (along with others reported online) has not been analysed rigorously (Grossman & Mackenzie, 2005). If only there had been someone to offer a critical analysis of my case study.

Scriven (2008: 11) states clearly his view that ‘the randomised control trial (RCT) is not a gold standard: it is a good experimental design in some circumstances, but that’s all.’ Direct observation seems to be the real gold standard, and I appreciate that this is not always practical in a study that includes hundreds of participants. However, in our classes, direct observation of practice would seem to offer the best way to analyse the quality of teaching and learning. Anything less is a proxy. So for all the big data, learning analytics and national surveys, we cannot be sure that any intervention will be right for a given student in a given context at a given time. Perhaps we just all need more time to observe those students who are ‘struggling for breath’.

References

Bodzenta-Lukaszyk, A., Dymek, A., McAulay, K. & Mansikka, H. (2011) Fluticasone/formoterol combination therapy is as effective as fluticasone/salmeterol in the treatment of asthma, but has a more rapid onset of action: an open-label, randomized study. BMC Pulmonary Medicine, 11(1): 28.

Grossman, J. & Mackenzie, F.J. (2005) The randomised control trial: Gold standard, or merely standard? Perspectives in Biology and Medicine, 48(4): 516–534.

Papi, A., Paggiaro, P., Nicolini, G., Vignola, A.M. & Fabbri, L.M. (2007) Beclomethasone/formoterol vs. fluticasone/salmeterol inhaled combination in moderate to severe asthma. Allergy, 62(10): 1182–1188.

Remington, T.L., Heaberlin, A.M. & DiGiovine, B. (2002) Combined budesonide/formoterol turbuhaler treatment of asthma. Annals of Pharmacotherapy, 36(12): 1918–1928.

Scriven, M. (2008) A summative evaluation of RCT methodology: & an alternative approach to causal research. Journal of MultiDisciplinary Evaluation, 5(9): 11–24.

# Why doesn’t everyone become an expert?

It is clear that expertise is often confused with experience. And yet there seem to be many people engaged in a variety of activities for a considerable time who never become expert. Let me give two simple examples that annoyed me today:

1. A visit to the corner minimarket. When I have a few groceries to buy I often nip into our local minimarket rather than drive to the big supermarket. The one thing that annoys me about my local shop is that none of the sales people – however polite and friendly they are – seems to know how to pack a bag. Today I bought (fairly typically) some eggs, a loaf of bread, some chicken breasts and some potatoes. However I arrange them in the basket, the sales assistant always attempts to put the eggs and the bread at the bottom of the bag and then to throw the much heavier and less fragile potatoes and chicken on top – with the result that I have crushed bread and cracked eggs. Every time I have to stop the process to ensure the eggs and the bread go at the top. Admittedly, I don’t take a great deal of time to explain my actions to the sales assistant, but as I am talking about ‘not breaking the eggs’ or ‘flattening the bread’ I guess they might pick up on the general trend. But, no.
2. The second example of experience not transferring into expertise comes when I cross the road on the way home, just by the roundabout. As I wait for a gap in the traffic, I always notice how few drivers feel the need to use their indicators to show where they are going. I have learned not to trust their indicators: so many of them either fail to indicate at all or actually indicate the wrong way as they exit the roundabout and zoom past me.

It may be that these two examples of a failure to achieve expertise in repetitive tasks occur for two different reasons. I assume that the sales assistants are not particularly interested in packing my shopping and so pay little attention to what is going on. This lack of interest and lack of focus would explain why they plateau at a basic level of competence where the shopping gets into the bag, I pay and go. Job done. The drivers, on the other hand, probably have a very different perspective, and I suspect that if I challenged them about the underuse or inappropriate use of their indicators they would explain that they are in fact excellent drivers and that the fault was mine for wanting to cross the road at a silly point.

So how does this translate to observations of teaching? There are evidently some academics who don’t really pay attention to their teaching and just want to get through the lecture – job done. Likewise, there are some academics who have been teaching for a long time and still fail (metaphorically) to ‘use their indicators’. They will claim to be ‘good teachers’ and state that students are just ‘not what they used to be’ – they don’t have the skills and want to learn in ‘silly ways’.

So much for experience.


# International Symposium on Pedagogic Frailty and Resilience – Registration now open.

## The University of Surrey – 6th September 2017.

http://www.surrey.ac.uk/dhe/cpd/pedagogic-frailty-resilience/index.htm

The programme will include the following presentations:

“The Origins and Potential of Pedagogic Frailty”
Prof. Ian Kinchin, University of Surrey, UK.

“Safe Spaces or Strange Places?  Pedagogic Frailty and the Quality of Learning in Higher Education”
Prof. Ray Land, University of Durham, UK.

“Bend or Break? Dimensions of Intrapersonal and Organisational Resilience”
Dr. Naomi Winstone, University of Surrey, UK.

“Do No Harm: Risk Aversion versus Risk Management in the context of Pedagogic Frailty”
Dr. Julie Hulme, Keele University, UK.

“Profiling Pedagogic Frailty”
Prof. Paulo Correia, University of São Paulo, Brazil.

“Developing Online Resources to Support the Exploration of Pedagogic Frailty”
Miss Irina Niculescu, University of Surrey, UK.

# From ‘evidence-based’ to ‘post-truth’: is this a trend in higher education?

Is there a trend within higher education that parallels the general trend in society, from ‘evidence-based’ to ‘post-truth’? There has been a trend (that I have been aware of for several months, though it has probably been going on for very much longer) of a move away from research and data towards a justification of claims in the media by using statements such as, ‘a lot of people think that’. This trend has been played out very publicly in elections in the UK and in the US in the past year, where it seems that if you say something often enough and loud enough, then it will be accepted as part of the canon. Maybe that has always been so? But when we have Government ministers on the TV telling us that we shouldn’t listen to experts because sometimes they can get things wrong, it does sound like Homer-Simpson-reasoning.

We seem to be witnessing a similar trend in higher education, where ideas seem to be distorted to fit political and economic aims. If you are really cynical, you might go back through press cuttings and see a move from ‘evidence-based’ to ‘student-centred’ to ‘post-truth’. I am not arguing against student-centredness here, but I am aware of the ways that it can be misrepresented, so that the phrase ‘but the students want it’ seems to trump other arguments without any real analysis of what or why. But there is a question (probably many) here about what students want, which students want it and why students want it – whatever ‘it’ might be. There also seem to be a number of contradictions in what ‘students want’. We are told that students want more online learning. So it seems sensible to capture lectures and allow students to review the content in their own time. All very sensible. However, I have been told by a lot of people (in a post-truth sense) that if we insist that lecturers are filmed teaching, then they adopt a more conservative approach in the classroom for fear of being ‘YouTubed’. This might lead to increasingly teacher-centred, didactic lectures – after all, discussion and dialogue don’t always play well in recordings of lectures. But, hang on. I am also told that students want more engagement in class – something that might be inhibited by lecture capture. So the students want it both ways? Problem.

In the media there seems to be a polarisation of the community, in which some elements now refer to students as a ‘snowflake generation’ whom we cannot challenge or upset, for fear of unleashing their displeasure as customers. Such an approach to students seems to be a device to increase the distance between teachers and students. And when I have interacted with students recently, they actually seem to want challenge in their education. So again, what is it that students want? We seem to be in danger of assuming there is a single ‘student voice’ that is truly representative. But ‘a lot of people think’ there are actually a lot of different voices within the student body, and among the academics. As the philosopher said, ‘all generalisations are incorrect’. Perhaps we should be looking at ways to exploit diversity rather than seek homogenisation?

Politicians seem to be in the same position of power over universities as the students. After years of criticism about the NSS and the way it informs us (or not) about teaching quality, we are now set to employ selected elements of it in the TEF to evaluate the teaching quality of institutions. By referring to this as a ‘metric’, we have managed to confer some level of credibility on the numbers generated, so that interested parties may call the whole process a rigorous and tested procedure. In the face of such post-truth pronouncements, universities seem to have rolled over and accepted their fate – ready to be measured up (either for their new ball gown or their coffin, depending upon which axe you are interested in grinding).

There may also be a difference between what students want and what students need. To take a health analogy here – over the years many patients have wanted (and got) antibiotics from their doctors when they are suffering from a virus. This is despite all the evidence that antibiotics have no effect on viruses. Any expert can tell you this. However, after years of overprescribing, we are now in the situation where antibiotics are becoming less effective against bacteria – bacterial resistance. This pandering to patients has had a harmful effect on the overall population. Without proper debate and analysis of the issues, some colleagues view acknowledgement of the student voice as a similar kind of pandering – and such a dismissive view does not help the development of more informed pathways to student-centredness and student engagement.

So where are we and where do we want to go? It appears that we now live in a society where if we don’t provide evidence, then the fake news sources will make it up anyway. I am now wondering if I will soon read a research report that offers ‘post-truth’ as a theoretical framework to underpin conclusions. Perhaps peer review of research will become even more interesting in the coming years?

# Christmas mindbender – “Learning Jounce”: real or imaginary?

Here’s an idea that may or may not make sense. It stems from one of those after-dinner conversations that ended up with a “what if … ?”. Start from the idea that things move at a certain speed and in a certain direction – velocity. This has some resonance with current discourse on learning: ‘learning gain’ being the distance travelled by a student over a period of time. So students learn at a certain speed, and we can see an analogy between velocity and learning gain – see the left side of the figure below. Learning gain has been under-theorised up until now – so let us problematize!

Now we can consider things like accelerated learning – where there is a change in the velocity of learning. We could have learning acceleration as an idea. So far so good. So the first derivative of the position vector (of understanding) would be learning gain, and the second derivative of the position vector would be learning acceleration:

[Figure: learning-jounce]
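Written out in the kinematic notation used below for jerk (with $x$ standing for the ‘position’ of understanding), this mapping would be: $v = \frac{dx}{dt}$ (learning gain) and $a = \frac{dv}{dt} = \frac{d^2x}{dt^2}$ (learning acceleration).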

Accelerated learning is widely known, where students are taken along at a faster speed than is typically anticipated. So what if we push the analogy along? It might be more fun than trying to figure out the jokes in your Christmas cracker. And the discussion might make more sense after a glass or two of mulled wine! Anyway, here goes:

A change in acceleration is known as ‘Jerk’. So if the rate of acceleration increases or decreases we would have +ve or -ve Jerk. So if we had a change in the acceleration of learning we would have ‘Learning Jerk’. This is something that perhaps we could get our minds around. If learning gain has a ‘normal speed’ (e.g. one module per semester), then accelerated learning would have an increased speed (e.g. one module per semester, then two modules per semester, then three, and so on). A change in that rate of acceleration (e.g. suddenly back to one module per semester or up to four modules per semester) would be a ‘learning jerk’. So learning jerk would be variation in learning acceleration – a break in the smooth pattern of acceleration. If we then take the student voice into consideration, we could have ‘student-initiated acceleration’, where students felt they could move ahead more quickly, or ‘student-initiated jerk’ where the student body were allowed to vary the rate of learning acceleration at different points in their learning journey in response to changes in other factors.
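To keep the analogy honest, jerk can also be illustrated numerically. Here is a minimal sketch – the ‘modules per semester’ figures are invented purely for the example – in which successive differences of the learning gain series stand in for the derivatives:

```python
# Toy illustration: modules completed per semester = 'learning gain' (velocity).
# First differences approximate acceleration; differences of those approximate
# jerk (all per time step of one semester).

def differences(series):
    """Return the first differences of a sequence."""
    return [b - a for a, b in zip(series, series[1:])]

gain = [1, 2, 3, 4, 2]             # modules per semester (invented figures)
acceleration = differences(gain)   # change in learning gain
jerk = differences(acceleration)   # change in learning acceleration

print(acceleration)  # [1, 1, 1, -2] - steady acceleration, then a sudden drop
print(jerk)          # [0, 0, -3]    - zero jerk until the abrupt change
```

The jerk series stays at zero while the acceleration is smooth, and spikes only where the pattern breaks – which is exactly the ‘break in the smooth pattern of acceleration’ described above.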

For those who would like to apply the maths for Jerk:    $j = \frac{da}{dt} = \frac{d^2v}{dt^2} = \frac{d^3x}{dt^3}$  .

So logically (perhaps after another glass of wine), we should be able to proceed to a change in learning jerk – learning jounce. This would be a change in the change in the change of learning gain. In a student-led institution there would be variation in jerk across the student body, and this would need a dedicated administrative team: possibly overseen by a new senior post (PVC – JOUNCE). Just imagine the learning analytics (jounce analytics) – wouldn’t they be fun? But what would learning jounce look like? And more importantly, what would be the metric that we could apply to the TEF?
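And for those who would like the corresponding maths for jounce (extending the jerk formula above, with $s$ for jounce):    $s = \frac{dj}{dt} = \frac{d^2a}{dt^2} = \frac{d^3v}{dt^3} = \frac{d^4x}{dt^4}$ .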

The ‘Jounced University’ would certainly be student-focussed, and may even come to the realisation that assessment inhibits Jounce. Within the TEF, universities that acknowledge Jounce could be awarded Bronze, those that implement ‘Assessed Jounce’ Silver, with Gold reserved for those institutions that manage to operate ‘Free Jounce’.

As an aside (and I think we need one here so we don’t get too serious), Jounce (the 4th derivative) is also known as ‘snap’. So can you guess what the 5th and 6th derivatives are called?  Yep – ‘crackle’ and ‘pop’. Wouldn’t it be great if we could have metrics for snap, crackle and pop with which to confuse our political masters? The Boxing Day game here is therefore to imagine your university league table for 2017, ranking institutions for “snap, crackle and pop”.

Merry Christmas!
