Christmas mindbender – “Learning Jounce”: real or imaginary?

Here’s an idea that may or may not make sense, stemming from one of those after-dinner conversations that ended with a “what if … ?”. Start from the idea that things move at a certain speed and in a certain direction – velocity. This has some resonance with current discourse on learning, ‘learning gain’ being the distance travelled by a student over a period of time. So students learn at a certain speed, and we can see an analogy between velocity and learning gain – see the left side of the figure below. Learning gain has been under-theorised up until now – so let us problematize!

Now we can consider things like accelerated learning – where there is a change in the velocity of learning. We could have learning acceleration as an idea. So far so good. So the first derivative of the position vector (of understanding) would be learning gain, and the second derivative of the position vector would be learning acceleration:

PDF figure: learning-jounce
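
In symbols (borrowing the notation applied to jerk below, and treating ‘understanding’ as a position x – an assumption of the analogy rather than anything rigorous): learning gain is   v = \frac{dx}{dt}   and learning acceleration is   a = \frac{dv}{dt} = \frac{d^2x}{dt^2}  .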

Accelerated learning is a widely known idea, where students are taken along at a faster speed than is typically anticipated. So what if we push the analogy along? It might be more fun than trying to figure out the jokes in your Christmas cracker, and the discussion might make more sense after a glass or two of mulled wine! Anyway, here goes:

A change in acceleration is known as ‘Jerk’. So if the acceleration increases or decreases we would have +ve or -ve Jerk, and if we had a change in the acceleration of learning we would have ‘Learning Jerk’. This is something that perhaps we could get our minds around. If learning gain has a ‘normal speed’ (e.g. one module per semester), then accelerated learning would have an increasing speed (e.g. one module per semester, then two modules per semester, then three, and so on). A change in that pattern of acceleration (e.g. suddenly dropping back to a steady one module per semester, or leaping ahead even faster) would be a ‘learning jerk’. So learning jerk would be variation in learning acceleration – a break in the smooth pattern of acceleration. If we then take the student voice into consideration, we could have ‘student-initiated acceleration’, where students felt they could move ahead more quickly, or ‘student-initiated jerk’, where the student body was allowed to vary the rate of learning acceleration at different points in their learning journey in response to changes in other factors.

For those who would like to apply the maths for Jerk:   j = \frac{da}{dt} = \frac{d^2v}{dt^2} = \frac{d^3x}{dt^3}  .
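
To put some (entirely hypothetical) numbers on that, using the modules-per-semester example above: a pace of one, then two, then three modules per semester is a steady acceleration of +1 module/semester², so the jerk is zero. If the pace then drops suddenly from three modules per semester back to one, the acceleration for that semester becomes   \frac{\Delta v}{\Delta t} = \frac{1-3}{1} = -2   modules/semester², giving a learning jerk of   j = \frac{\Delta a}{\Delta t} = \frac{-2 - (+1)}{1} = -3   modules/semester³.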

So logically (perhaps after another glass of wine), we should be able to proceed to a change in learning jerk – learning jounce. This would be a change in the change in the change of learning gain. In a student-led institution there will be variation in jerk across the student body, and this will need a dedicated administrative team: possibly overseen by a new senior post (PVC – JOUNCE). Just imagine the learning analytics (Jounce analytics) – wouldn’t they be fun? But what would learning jounce look like? And more importantly, what would be the metric that we could apply to the TEF?
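
And for those still keeping score, the maths for jounce follows the same pattern – it is simply the fourth derivative of position:   s = \frac{dj}{dt} = \frac{d^2a}{dt^2} = \frac{d^4x}{dt^4}  .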

The ‘Jounced University’ would certainly be student-focussed, and may even come to the realisation that assessment inhibits Jounce. Within the TEF, universities that acknowledge Jounce could be awarded Bronze, those that implement ‘Assessed Jounce’ Silver, with Gold reserved for those institutions that manage to operate ‘Free Jounce’.

As an aside (and I think we need one here so we don’t get too serious), Jounce (the 4th derivative) is also known as ‘snap’. So can you guess what the 5th and 6th derivatives are called? Yep – ‘crackle’ and ‘pop’. Wouldn’t it be great if we could have metrics for snap, crackle and pop with which to confuse our political masters? The Boxing Day game here is therefore to imagine your university league table for 2017, ranking institutions for “snap, crackle and pop”.

Merry Christmas!

Reference:

MrReid.org at: http://wordpress.mrreid.org/2013/12/11/jerk-jounce-snap-crackle-and-pop/

Student evaluation of teaching: are we reaching for the wrong type of excellence?

Over twenty years ago Carr (1994: 49) wrote:

‘It is a shallow and false view of education and teaching which takes it to be a matter of the technical transmission of pre-packaged knowledge and skills in the context of efficient management’

However, it seems that this false view is still able to obscure more contemporary and research-informed views of teaching. The ongoing drive for ‘teaching excellence’ still seems to focus on actions of the teacher that promote Carr’s ‘shallow view’. That is not to say that the student voice is not important, but we need to ensure that students are asked the right questions, so that we do not promote student passivity as learners and do not subvert the student voice for purely political ends.

Fitzgerald et al (2002) wrote, ‘I value student perspectives in thinking about my practice. However, the institutional instrument designed to assess student perspectives focuses on a form of practice that ill fits my own values. Each semester students have been asked to rate the course and instructor on a nineteen-item rating scale. Many of the items are consistent with a teacher-directed pedagogy and linear information-processing model of learning (for example, Objectives are Clear, The Instructor Enhanced Knowledge of Subject, Organised Class Sessions Well, Demonstrates Knowledge of the Subject). When I first came to Uni, I was intimidated by these student evaluations because I believed them to be overly focused on clear objectives and a class structure predicated on teacher control of the classroom. These survey items do not adequately capture what I hope to accomplish in the classroom, and I correctly anticipated receiving mixed review on this measure. Students may desire and need clearly presented knowledge, attained in a highly structured teacher-directed context, but educational opportunities are impoverished if this is the only form of pedagogy provided. A rather monovocal assessment tool inscribes a particular vision of education, and fails to provide useful feedback to educators who teach in alternative ways. This narrow representation of education limits our vision of what ‘good’ education might be, and privileges a particular mode of learning.’

So are we still promoting ‘monovocal assessment tools’? If so, why? Many commentators ask why those who purportedly revere the power of critical thinking go on to employ simplistic, quantitative tools to ‘measure’ teaching quality. Clearly, in the UK, the Government’s agenda to assess teaching is pushing things along with a single purpose in mind. Katzner (2012) has asserted that, in their quest to describe, analyze, understand, know, and make decisions, western societies have accepted the myth of synonymy between objective science and measurement. He comments that what we cannot measure gets demoted as ‘less important’. If it has been measured, it must be ‘scientific’ and ‘rigorous’ – especially if we can apply statistical analysis that the common man/woman will not understand.

So we go from monovocal to monocular (possibly also myopic): a system in which ‘ideas diversity’ and ‘methodological variation’ (typically seen as indicators of health for an academic community) are apparently no longer valued. We then end up with a ‘hard core’ set of unquestioned statements and assumptions that are not supported by evidence. The result is an academic community that will survive by ‘maintaining their autonomy and academic freedom through demonstrating symbolic compliance or pragmatic behaviour’ (Teelken, 2012: 287). This could result in innovative teaching being driven underground, like a resistance movement – a situation that is likely to promote pedagogic frailty (Kinchin et al., 2016), an outcome that is the opposite of that intended. Fitzgerald et al. (2002) talked about values as the underpinning concept that drives things forwards with any meaning. I wonder if the explication of values (particularly shared values, rather than any spurious mission statement placed on a web site) by universities will form part of the TEF, which will inform teaching evaluations in the UK over the coming years. Or is teaching supposed to be ‘values-free’ in the modern era? I didn’t get that memo.

References:

Carr, D. (1994) Educational enquiry and professional knowledge: Towards a Copernican revolution. Educational Studies, 20(1): 33-54.

Fitzgerald, L.M., Farstad, J.E. & Deemer, D. (2002) What gets ‘mythed’ in the student evaluations of their teacher education professors? In: Loughran, J. and Russell, T. (Eds.) Improving teacher education practices through self-study. London, Routledge/Falmer (pp. 203-214).

Katzner, D.W. (2012) Unmeasured information and the methodology of social scientific inquiry. Springer Science & Business Media.

Kinchin, I.M., Alpay, E., Curtis, K., Franklin, J., Rivers, C. and Winstone, N.E. (2016) Charting the elements of pedagogic frailty. Educational Research, 58(1): 1-23.

Teelken, C. (2012) Compliance or pragmatism: how do academics deal with managerialism in higher education? A comparative study in three countries. Studies in Higher Education, 37(3): 271-290.

Are you already researching pedagogic frailty?

The concept of pedagogic frailty (see earlier post: https://profkinchinblog.wordpress.com/2016/01/20/pedagogic-frailty-a-new-lens-to-examine-university-teaching/) consists of a number of dimensions that are directly related to established fields of research in higher education. As a result, you may already be involved in research that informs the development of the model. If you are engaged in research that looks at values, academic identity, academic leadership, teaching quality, the research-teaching nexus, authenticity or academic resilience, then your work will resonate with studies into pedagogic frailty. If you are investigating the application of concept mapping (and in particular the development of excellent maps), then again your work will be of relevance here, as concept maps have been instrumental in the visualisation of the model.

PDF figure: Pedagogic Frailty relations (frailty-relations)

The value of pedagogic frailty is that it helps to bring these elements into simultaneous focus, so that the dynamic relationships between these ideas, and the ways in which they interact to influence the development of pedagogy, can be better understood.

In a special issue of “Knowledge Management & E-Learning” we are hoping to bring together some of the international research that can help to inform the development of the concept and to interrogate the model. The call for papers in a previous post (https://wordpress.com/post/profkinchinblog.wordpress.com/593) is still open (until March 2017), and I would be interested to hear from anyone who is considering a submission.

CMC 2016 Tallinn, Estonia

Congratulations to the organisers of the Concept Mapping Conference in Tallinn for an excellent event.

Photo: Street view of Tallinn

I attach the slides from my presentation, “The mapping of pedagogic frailty: A concept in which connectedness is everything”:

PDF: cmc-2016-pedagogic-frailty-concept-maps

Access to paper at: http://link.springer.com/chapter/10.1007/978-3-319-45501-3_18?no-access=true

What does your academic web page say about you?

I often look people up on the web to see who they are and what they do. What I have noticed is a tremendous variation in the quality and quantity of information that people offer on their home pages. It is also interesting to speculate on the intended audience for these web pages: students or other academics? So what do we find, and what questions does it raise?

  1. Picture or no picture. It is often helpful to have a picture of the person on the page, particularly if you are arranging to meet that person in a venue other than their office. There is probably an interesting study to be done on this alone – why do some academics post a photo while others choose not to?
  2. Office hours. It is fascinating that some academics appear to guard their time more than others. Some post limited office hours because they work part-time at the university, whilst others appear to be trying to restrict student access. The pages that list office hours as ‘Friday 17:00 – 18:00’ appear particularly restrictive.

The bulk of most pages appears to be taken up with listing research activity in terms of papers and books that have been published, grants won and collaborations. There may also be some indication of awards that have been gained, and some more technologically savvy academics will also have Twitter feeds, Facebook pages, video links and links to other resources.

The relative space that is given to research and teaching is very interesting. A while ago I compared the staff pages of all the staff in two academic departments in the same university, measuring the space given to different activities in terms of the lines of text dedicated to each. Within the pages of the Physics Department the averages were 2.9% teaching and 78.6% research. Most academics list their publications in chronological order, and this takes up considerable space. In contrast, the space given to teaching is succinct in the extreme and will typically say something like, “modules taught: Physics 101; Physics 201”. Nothing is offered about the philosophy of teaching, or even a link to the module catalogue to allow a student to see what Physics 101 entails. Over 30% of the staff pages in the department had no mention of any teaching at all. Interestingly, even among the home pages of Teaching Fellows (who are not engaged in research), the information about teaching is typically no richer.

In comparison, the Sociology Department in the same university showed a slightly different pattern, with teaching taking up an average of 9% of the space and research an average of 64%. This raises some questions about disciplinary differences, although there are a number of practical reasons why these averages might not express a difference in philosophical emphasis: the sociologists rarely included more than three or four authors on a research paper, whereas it is quite common for physicists to include 10, 20 or more co-authors. In addition, many of the outputs in the Sociology Department were books rather than journal papers, so on average their outputs were much longer, with fewer produced each year – hence taking up less space on the web.
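
For anyone tempted to repeat this little exercise, here is a minimal sketch (in Python) of the kind of tally involved. It assumes, purely for illustration, that each staff page has been saved as text and every line hand-labelled with the activity it describes; the function name, labels and example data are my own inventions, not part of any existing tool.

from collections import Counter

# Each staff page is represented as a list of (category, line) pairs,
# produced by hand-labelling every line of the page.
def space_given(labelled_lines):
    """Return the percentage of lines devoted to each activity."""
    counts = Counter(category for category, _ in labelled_lines)
    total = sum(counts.values())
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

# Made-up example for a single page:
page = [
    ("research", "Smith, J. (2015) A paper about physics ..."),
    ("research", "Smith, J. (2014) Another paper ..."),
    ("research", "Smith, J. (2013) Yet another paper ..."),
    ("teaching", "Modules taught: Physics 101; Physics 201"),
]
print(space_given(page))  # {'research': 75.0, 'teaching': 25.0}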

The other interesting thing to note is that some academics clearly see their home page as their shop window to the world, and the information included is up to date and complete, whilst others apparently see little value in their web page: even their research record is not up to date, with publication output mysteriously stopping four or five years ago (presumably the last time the page was updated), or with papers listed as “submitted 2006”. Such sloppiness doesn’t really offer a very professional image – whether the intended audience is other academics or students.

The variation that is observed does raise a question about how institutions view their own web sites. Who are they for, and what are they intended to convey? I think there is some interesting research to be done here. Before you check, I now have the summer to make sure my own profile is up to date.