The origins and potential of pedagogic frailty


This presentation from the 1st International Symposium on Pedagogic Frailty is available by clicking on the link below:


Symposium presentation






Concept mapping at the pedagogic frailty symposium.



Participants constructing concept maps at the pedagogic frailty symposium.

Values underpinning flipped pedagogy


Flipped pedagogy is not really an issue of technology. It is a problem of teaching. What guides that teaching is the values that underpin our decisions in classroom management. Whilst we could write books on this subject, for practical purposes here, I would suggest four guiding principles that should be considered:

Appreciate students’ prior knowledge. This is not to say that we have to assess each student to see what they are bringing with them. If you have a class of 400 students, such one-to-one interrogation is not practical. It is more important that students activate their own prior knowledge and understand what is important in the ‘new context’ of the current course.

Consider the relationship between meaningful and rote learning. Do you want students to memorise facts to be regurgitated or do you want them to be able to apply those higher order thinking skills that require synthesis, evaluation and creation of knowledge?

Consider the value of formative assessment. This can help to activate prior knowledge and get students to better appreciate our expectations of meaningful learning. Well-constructed formative assignments can also help students to organise and structure their knowledge so that it can be used more effectively in the future.

Consider where you want to be on the student-centred / teacher-centred spectrum. Again, this relates to the ways in which the previous principles are enacted.

These four guiding principles are not isolated from each other. The relationships between these elements are dynamic – see the figure below. Therefore, if we fail with respect to one of these guiding principles, we are in danger of letting the whole enterprise collapse.

In the PowerPoint slides that are included below, I show how the neglect of meaningful learning (possibly through lack of constructive alignment between learning outcomes and assessments) allows the dominance of rote learning to negate any interest in formative assessment or prior knowledge. The outcome of this will be a focus on lower order thinking skills and a retreat into traditional, conservative modes of teaching:



For a set of PowerPoint slides that show what happens when meaningful learning is replaced by rote learning, click values and principles for flipping

Where the two models presented in the attached slides operate simultaneously within a department, students will probably take the line of least resistance and strategically focus on the lower order thinking skills that are rewarded by rote learning. They are therefore less well prepared for study in the following year (where understanding of previous modules will be assumed), or indeed for professional practice, where they will have to apply theory in novel situations. So, if the underlying values of the curriculum are not explicitly shared across a faculty, there is a danger of the environment exhibiting pedagogic frailty, and the typical outcome will be a retreat into conservative and ‘safe’ pedagogic practices. Where this happens, the energy expended on developing a flipped classroom will have been wasted.

Values should be the starting point for the development of the flipped classroom, not content or technology.





The Randomized Control Trial is only one research method.

“Quants. or Quals.?” – a distinction that some colleagues like to make about their own research, or indeed a label to apply to others. In some ways it is rather a silly question, to which the answer should really be “whatever is appropriate to answer the research question”. However, we all know that many of us have a preference for either qualitative research (sometimes referred to as touchy-feely research) or quantitative research – which can be described by colleagues either as ‘scientific research’ or as ‘number-crunching research’, depending in part upon the degree of prestige it is felt that a quantitative focus confers on the research.


Clearly in educational research, there is a need for both quantitative and qualitative studies. But sometimes I have had conversations with colleagues (such as those from the clinical sciences) who feel that anything ‘less’ (their emphasis) than a randomized control trial (RCT) is just tinkering at the edges – even when it comes to medical education. Even so, it seems that the demand for evidence-based practice in medical education is not pursued with the same rigor as it is in clinical medicine.


Scriven (2008: 11) refers to an ‘exclusionary policy’ that requires RCT-based evidence before practice can move forward. Even if this were sensible in medicine (and that is debatable), within education a policy of requiring RCT-based evidence to inform decisions would be nonsensical. Grossman & Mackenzie (2005) discuss the contexts in which RCTs should be applied, concluding that ‘The RCT design is a theoretical construct of considerable interest, but it has essentially zero practical application to the field of human affairs.’ They go on to explain that ‘The real ‘gold standard’ for causal claims is the same ultimate standard as for all scientific claims; it is critical observation‘. The implication is that rigorously analysed case study research is required to inform practice.


Let me give an anecdote of a recent occasion when RCTs failed to help me in a clinical situation. I have used an inhaler to control asthma for several years. A while ago I went to collect my prescription and noticed that the design of the inhaler appeared to have been modified. I thought little of it. When my old inhaler ran out and I switched to the new one, the drug made me quite unwell. Being a little slow to recognise the cause and effect, I increased my dosage for a day or so, with the effect that I got worse. It turned out that the design of the inhaler hadn’t been changed; the pharmacist had given me the wrong drug. Whilst apologetic, the pharmacist seemed less concerned about the mix-up than I was, and assured me that it was a similar drug that had undergone rigorous clinical testing, so there shouldn’t have been any danger. Hmmm. However, it shouldn’t have made me ill either!


Examining the literature on the subject, it is clear that a number of RCTs have compared the effectiveness of the two drugs (formoterol and salmeterol), with little difference detected on average between them (e.g. Bodzenta-Lukaszyk et al., 2011; Papi et al., 2007; Remington et al., 2002). However, none of these trials included patients who were also on the same cocktail of other medications for other illnesses that I was on. So they were not proof that the new inhaler would work for me; they only indicated a likelihood. From this I gained little satisfaction as I was struggling for breath.
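The statistical point here – that a trial’s average outcome can conceal a serious reaction in a small subgroup of patients – can be sketched with a toy simulation. All of the numbers below (scores, effect sizes, subgroup fraction, sample size) are invented purely for illustration and bear no relation to the actual trials cited:

```python
import random

random.seed(42)

def simulate_patient(drug, on_interacting_meds):
    """Return a hypothetical symptom-control score (higher is better)."""
    score = random.gauss(70, 10)
    # Invented assumption: drug "B" causes a severe adverse reaction,
    # but only in the small subgroup on an interacting medication.
    if drug == "B" and on_interacting_meds:
        score -= 40
    return score

def trial(drug, n=1000, interacting_fraction=0.02):
    """Mean score across a simulated trial arm of n patients."""
    scores = [simulate_patient(drug, random.random() < interacting_fraction)
              for _ in range(n)]
    return sum(scores) / n

mean_a = trial("A")
mean_b = trial("B")
print(f"Drug A mean score: {mean_a:.1f}")
print(f"Drug B mean score: {mean_b:.1f}")
# The two arm means are very close, so a comparison of averages reports
# "little difference" - even though roughly 2% of patients on drug B
# suffer a large drop in symptom control that the mean conceals.
```

The design choice is deliberate: the adverse effect is confined to a subgroup too small to move the arm-level average, which is exactly the situation a patient on an unusual combination of medications faces when reading trial results.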


So RCTs have their place, and may be essential at certain stages of clinical trials, but they still do not take things to the level of the individual patient. My individual case study (not scientifically recorded anywhere) shows that the data from the RCTs are only a guideline. Judging by the various chat rooms on the internet that are devoted to this exact issue, it seems that I am not the only one to have had an adverse reaction to formoterol. But these accounts are only considered ‘anecdotal’ as they have not been observed under controlled conditions. So unfortunately, my case study (along with others reported online) has not been analysed rigorously (Grossman & Mackenzie, 2005). If only there had been someone to offer a critical analysis of my case.


Scriven (2008: 11) states clearly his view that ‘the randomised control trial (RCT) is not a gold standard: it is a good experimental design in some circumstances, but that’s all.’  Direct observation seems to be the real gold standard, and I appreciate that this is not always practical in a study that includes hundreds of participants. However, in our classes, direct observation of practice would seem to offer the best way to analyse the quality of teaching and learning. Anything less is a proxy. So for all the big data, learning analytics and national surveys, we cannot be sure that any intervention will be right for a given student in a given context at a given time. Perhaps we just all need more time to observe those students who are ‘struggling for breath’.



Bodzenta-Lukaszyk, A., Dymek, A., McAulay, K., & Mansikka, H. (2011). Fluticasone/formoterol combination therapy is as effective as fluticasone/salmeterol in the treatment of asthma, but has a more rapid onset of action: an open-label, randomized study. BMC Pulmonary Medicine, 11(1), 28.

Grossman, J., & Mackenzie, F. J. (2005). The randomised control trial: Gold standard, or merely standard? Perspectives in Biology and Medicine, 48(4), 516-534.

Papi, A., Paggiaro, P., Nicolini, G., Vignola, A. M., & Fabbri, L. M. (2007). Beclomethasone/formoterol vs. fluticasone/salmeterol inhaled combination in moderate to severe asthma. Allergy, 62(10), 1182-1188.

Remington, T. L., Heaberlin, A. M., & DiGiovine, B. (2002). Combined budesonide/formoterol turbuhaler treatment of asthma. Annals of Pharmacotherapy, 36(12), 1918-1928.

Scriven, M. (2008). A summative evaluation of RCT methodology: and an alternative approach to causal research. Journal of Multi-Disciplinary Evaluation, 5(9), 11-24.












Why doesn’t everyone become an expert?

It is clear that expertise is often confused with experience. And yet there seem to be many people engaged in a variety of activities for a considerable time who never become expert. Let me give two simple examples that annoyed me today:

  1. A visit to the corner minimarket. When I have a few groceries to buy I often nip into our local minimarket rather than drive to the big supermarket. The one thing that annoys me about my local shop is that none of the sales assistants – however polite and friendly they are – seems to know how to pack a bag. Today I bought (fairly typically) some eggs, a loaf of bread, some chicken breasts and some potatoes. However I arrange them in the basket, the sales assistant always attempts to put the eggs and the bread at the bottom of the bag and then to throw the much heavier and less fragile potatoes and chicken on top – with the result that I have crushed bread and cracked eggs. Every time, I have to stop the process to ensure that the eggs and the bread go at the top. Admittedly, I don’t take a great deal of time to explain my actions to the sales assistant, but as I am talking about ‘not breaking the eggs’ or ‘flattening the bread’, I would guess they might pick up on the general trend. But, no.
  2. The second example of experience not transferring into expertise comes when I cross the road on the way home just by the roundabout. As I wait for a gap in the traffic, I always notice how few drivers feel the need to use their indicators to show where they are going. I don’t trust their indicators, but I notice that so many of them either fail to indicate at all or actually indicate the wrong way as they exit the roundabout and zoom past me.

It may be that these two examples of a failure to achieve expertise in repetitive tasks occur for two different reasons. I assume that the sales assistants are not particularly interested in packing my shopping and so pay little attention to what is going on. This lack of interest and focus would explain why they plateau at a basic level of competence where the shopping gets into the bag, I pay and go. Job done. The drivers, on the other hand, probably have a very different perspective, and I suspect that if I challenged them about the underuse or inappropriate use of their indicators they would explain that they are in fact excellent drivers and that the fault was mine for wanting to cross the road at a silly point.

So how does this translate to observations of teaching? There are evidently some academics who don’t really pay attention to their teaching and just want to get through the lecture – job done. Likewise, there are some academics who have been teaching for a long time and still fail (metaphorically) to ‘use their indicators’. They will claim to be ‘good teachers’ and state that students are just ‘not what they used to be’ – they don’t have the skills and want to learn in ‘silly ways’.

So much for experience.