So what’s this educational research stuff all about anyway?

Over the years I have had numerous interesting conversations with colleagues about the nature of research in education and how it compares with research as it is perceived in other disciplines.

Conversations with colleagues attempting assignments on Grad Cert or PGCAP-type (faculty development) courses have also touched on the problems raised by undertaking a research project, with questions such as “how many interviews are enough?” and “how do I show statistical significance when I have only studied the work of my two PhD students?”. Some of these conversations have been quite animated, with some colleagues presuming that, because many studies use small numbers of observations, “educationalists just don’t like quantitative studies”.

Research design is often constrained by what you have in front of you. If you want to study what is going on in your class of six students, you cannot simply increase the size of your sample to make the results statistically significant. So, very often, educational researchers will use qualitative methodologies to explore a phenomenon. This is not a choice of last resort, however. Quantitative methods have their limitations and will often show “what” is going on, but qualitative studies are often then required to show “why” it is going on (see Hanson et al., 2011).
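As a rough, hypothetical illustration of the sample-size point (the numbers are invented for this sketch, not taken from any study cited here), a quick simulation shows how little chance a class of six has of producing a statistically significant result, even when the true effect is large:

# Minimal sketch: even with a large true effect (0.8 SD), a two-group
# comparison with three students per group is badly underpowered.
# Figures here are invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, true_effect, trials = 3, 0.8, 10_000  # effect in SD units

significant = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    significant += p < 0.05

print(f"Estimated power with n = {n_per_group} per group: {significant / trials:.2f}")
# Typically well under the conventional 0.8 target: most real effects
# in a class this small would simply go undetected.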

Wieman (2014) has compared education research with research in the hard sciences in some detail. He notes that while research in physics and chemistry is commonly perceived to be more controlled and precise than research in education, this is largely a consequence of the relative maturity of those sciences: in most of the research that reaches textbooks and the media, the “messiness” has already been “cleaned up”.

Wieman concludes that the fundamental property of good research is its predictive power, by which he means that “one can use the results to predict with reasonable accuracy what will happen in some new situation”. He likens the messiness that might be observed in education research to the messiness that exists at the cutting edge of the hard sciences, where the key influencing variables may still be unclear and where that messiness is rarely reported beyond those directly involved.

Other challenges with education research concern the ethics of the endeavour. The gold standard of randomised controlled trials (RCTs) is simply not always possible in education, and the ethics of control groups (groups from which the innovation intended to enhance learning is intentionally withheld) is somewhat problematic. There is also the question of what the ideal control would be. A group where no teaching is offered? Not one that would sit comfortably against the background of £9,000 tuition fees. The control group can be a difficult one to justify. It is clear that the ethics of educational research has changed over the years. For example, Claparède (1911) reports the positive results of an experiment to raise student attainment by administering low doses of nerve gas in the classroom (!!!). This is clearly something that no ethics committee would approve today, particularly if you took it to the extreme and sought to carry out an LD50 test on your students.

We also have to be clear that, even in large-scale quantitative studies, statistical significance is not the same as educational significance. Having one of your students fail their finals may (in the big picture) not have any statistical significance to the institution overall, but it will certainly have educational significance to the parties concerned. Equally, complex statistical analysis on large data sets will not provide anything of worth if the researcher starts with the wrong question.
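To make the distinction concrete, here is a hypothetical example (all numbers invented): with a large enough sample, a practically negligible difference in exam marks becomes “statistically significant”, even though its effect size is far too small to matter educationally.

# Minimal sketch: large n makes a tiny difference statistically significant,
# yet the effect size (Cohen's d) shows it is educationally negligible.
# Sample sizes and marks are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50_000                                  # hypothetical sample size
control = rng.normal(60.0, 10.0, n)         # exam marks, mean 60, SD 10
treated = rng.normal(60.3, 10.0, n)         # a 0.3-mark "improvement"

t, p = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd

print(f"p = {p:.6f}, Cohen's d = {d:.3f}")
# p will almost always be < 0.05 here, yet d is only about 0.03:
# statistically significant, educationally meaningless.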

Finally, we must acknowledge that not all research (in any field) is good research. For an example of some genuinely stupid research, see: http://www.totallystupid.com/?what=analysis

References

Claparède, E. (1911). Experimental pedagogy and the psychology of the child (4th ed., English translation). Edward Arnold.

Hanson, J. L., Balmer, D. F., & Giardino, A. P. (2011). Qualitative research methods for medical educators. Academic Pediatrics, 11, 375–386.

Wieman, C. E. (2014). The similarities between research in education and research in the hard sciences. Educational Researcher, 43(1), 12–14.

 
