Teachers sometimes experience that students who attend a lecture strongly believe that they have learnt the material or even complain that this was known material. Then they fail subsequent tests on the material they so confidently claimed to know. This failure to assess own understanding seems to be commonplace.
Is anyone aware of research papers which can back this subjective impression?
See the bottom of this note for some additional possible sources of references that I have drawn upon in trying to analyse our experience - I plan to write up something more formal on this subject in due course.
I agree that there is a problem with the perception by students of what constitutes an appropriate and sufficient depth of learning of the material to hand, and that this may reflect the varying rates at which individuals acquire 'learning maturity' progressing through a programme. I routinely perceive there to be at least three major styles of learning across my second year group of 200+ medical students, shading from immediate and fluid engagement with deeper learning concepts, through varying attempts to fit facts into some sort of rational framework, to a tenacious reliance on memory and rote learning, 'because memorising stuff got me to university, so why should I change now?' Formats such as PBL, task-based and problematized learning, while sometimes moderated by peer pressure, are often still undermined by a superficial engagement with the material, while in my experience flipped teaching works only for the small number of students who have engaged with the material provided in preparation for the class. I would absolutely caution however against the response from students that they need more lectures as a preferred way of learning since, without additional reinforcement and deductive problem solving, this simply represents a naive opportunity for students to engage passively with material in the way noted by Dr Gould. This also overlooks that it should not be the primary role of teachers in tertiary institutions to fire gobbets of information at students, as if through peashooters, but rather to inspire their interest in and desire to know more about a subject and to teach them how to learn in general, such that in class students are simply presented with the toolkit with which to work, and true learning begins when a student leaves the classroom.
The problem perhaps then reduces to that noted above, namely how to encourage students to appreciate what constitutes a sufficient depth of learning and how to put that into practice. I adopt several strategies during my teaching to the group of 200+ medical students, including revisiting the basic biomedical sciences information delivered in 'curated' lectures in a more clinical context in case-based interactive tutorials and PBLs, such that the majority of students who are not inclined to reinforce the lecture material themselves are encouraged to do so anyway through such formal teaching. Furthermore, taken together, these 'curated' lectures (consisting of information selected by me, as opposed to giving students a random chapter or article and a crystal ball to work out what may or may not be relevant), followed by case-based tutorials and PBLs, represent my flipped classroom. Students then have the opportunity to reinforce this material further in their own time by completing a series of interactive Computer-Assisted Learning packages with question and explanatory answer components – in effect on-line tutorials undertaken at the students' own pace – authored by staff and increasingly by students who have previously undertaken the course, as part of an ongoing student-developed curriculum project.
Promoting student authoring of curriculum content, albeit under appropriate supervision and quality control arrangements, takes inspiration in part from the old adage that if one wants to learn something properly, then one should teach it … in other words 'learning by doing'. This also speaks to the observation that many of the popular learning techniques favoured by students – such as re-reading and highlighting their texts – are relatively poor at promoting student learning, while the most effective, though possibly not the most popular, techniques in promoting long-term retention are centred on some form of practice testing. For the majority who are not inclined to get involved in CAL authoring we offer 'PeerWise', an open-access learning resource developed at the University of Auckland, NZ, designed to promote deep formative learning by encouraging students to create assessment-type questions and provide explanations for their answers and distractors based on course material, and also to discuss questions, answers and distractors created by their peers. The means by which students choose to utilise this resource are again intriguing, with perhaps 10% of the class authoring questions and increasing proportions answering, discussing and critiquing questions formatively, while the majority simply practise answering questions as part of their examination revision. Interestingly, this latter group most frequently complain about the 'small' number of questions available to them – usually 100-200 questions across a 12 week semester – while requesting access (always denied) to question banks from other years.
The key issue is that this group seem not to distinguish between the superficial learning prompted by the simple response to questions authored by someone else and the deeper concerted effort required to review and organise material to compose good questions and answers and, crucially, credible distractors that are not obviously wrong but require to be discarded by a process of deduction.
It follows from this that the relatively superficial learning practised by many students probably reflects a lack of mature learning techniques that can be and should be taught from an early stage by instructors, rather than assuming that these have already been acquired during secondary education or can simply be absorbed by 'osmosis'.
References:
PeerWise has generated a considerable literature both in terms of formal publications and supporting experience and comments on the Instructor pages of the PeerWise website (https://peerwise.cs.auckland.ac.nz/). Another useful starting point is the overarching review (and references therein) 'How Students Learn - and How We Can Help Them?' by John F. Kihlstrom, at the Department of Psychology, University of California, Berkeley
http://socrates.berkeley.edu/~kihlstrm/GSI_2011.htm
I hope that helps, and please 'up-vote' my answers if you think that they are useful!
The "Learning Pyramid" (google it for dozens of versions!), derived in part from Edgar Dale's "Cone of Experience", is widely cited as suggesting that a student retains 5% of a lecture - although as you allude to, the empirical evidence to justify this is very limited. A [critical] entry into the limited literature would be: Lalley JP, Miller RH (2007) The Learning Pyramid: Does it point teachers in the right direction? Education. Good luck.
Thank you. That looks like something I should read.
However, it only answers half my question at best. So to rephrase it: Why do students genuinely believe that they retain 95% (or so) when indeed they only do retain 5%?
What evidence do you have that students genuinely believe they retain 95%? Anecdotally and from my own experience, I think I would suggest the complete opposite. When you ask, students will deny any prior learning, even though I know that, for instance, the previous year I gave them that lecture and material! I get through this by giving them a summary of any prior learning at the start of the lecture by way of context, and sometimes as preparatory material beforehand. Definitely a challenging area! Are Norwegian students overly confident and Scottish students lacking this?!
Fair point, Simon. Like you, I have only anecdotal evidence. The clearest anecdote is from one colleague of mine who started the semester repeating material from the previous year. The students complained that he should have told them that it was revision so that they could skip it. Then they failed the exam because they did not know the revision material. He had better results the next year when he started with a diagnostic, peer-assessed test before starting revision ...
I do agree with you however, that if I ask the students if they know a specific theory or technique they will most likely answer no, and if I ask if they had it last year, they do not know.
What I had in mind, however, was rather next-day or next-week retention than next-year retention. My own observations are not as definite as my colleague's, but talking to students and discussing exercises, I have a clear impression that the students /feel/ they understand and learn the lecture material, yet the first exercise demonstrates that nothing has stuck.
No, I do not think this is special for Norwegian students. I have taught many English and Greek students in England, and I would be very surprised if Scottish students are fundamentally different from any of the nationalities I am familiar with.
Come to think of it, when I hear a lecture I commonly think that I understand, that it is easy, meaning that I am able to validate the reasoning. Yet, that does not mean that I have learnt anything usable.
Dear Anders and Hans
Thank you for both your commentaries and thoughts.
My educational interests have been heading towards providing students with choices and autonomy, so they can take ownership of their own learning that is experiential in a research-rich environment.
I agree with you both. I really don't think that our students here in Scotland will differ that much from yours in Norway - the similarities will be greater than any perceived differences - and I thought that it may generate an interesting debate :-)
It is an interesting potential contrast in learning behaviour, where students may be over-confident over a week or two and think they know [almost] everything [Anders - that repetition, and use, application and context of the material is essential], whereas much later on, say after a year, they deny knowledge.
There is likely a significant bias in how we ask them about what they know. If they are subjected to more robust testing, perhaps these will begin to converge, i.e. short term = shows they have less understanding than they think; long term = shows understanding they thought they had forgotten?
Interesting and important discussion ... I am currently wrestling with such apparently universal issues in my University of Edinburgh MBChB2 Endocrine Module, for which I am trying to introduce some more innovatory learning experiences. One hopes that some student-led research or reasoning-based sessions will be a valuable contribution and at least part of the answer. Some useful references and discussion in one of the papers that I have been looking at:
Serrano A, Liebner J, Hines JK (2016) Cannibalism, Kuru, and Mad Cows: Prion Disease As a “Choose-Your-Own-Experiment” Case Study to Simulate Scientific Inquiry in Large Lectures. PLoS Biol 14(1): e1002351. doi:10.1371/journal.pbio.1002351
The following is NOT the reference I was thinking of - I will keep looking - but I hope that it may help:
Distinct apical and basolateral mechanisms drive planar cell polarity-dependent convergent extension of the mouse neural plate.
Williams M, Yen W, Lu X, Sutherland A.
Dev Cell. 2014 Apr 14;29(1):34-46. doi: 10.1016/j.devcel.2014.02.007.
You might want to search for articles on 'metacognition', 'judgment of learning', 'self-monitoring', or 'monitoring accuracy'. For instance, John Dunlosky has written some insightful articles on why students cannot accurately estimate how much they truly know, how that affects their study behavior (e.g. if you overestimate your knowledge, you will (re)study less, which is detrimental for learning), and how to improve students' metacognitive skills (e.g. only estimating how much you know after a delay). Another relevant researcher is Anique de Bruin.
Hope this helps!
Whoops - please disregard my second answer 'The following is NOT the reference I was thinking of ...' - it somehow got posted into the wrong thread!
Sorry about that!
One issue that confounds the responses is the measure of learning. Are we focussing on shallow or deep learning, or some untested combination of them? My students often believe they have learned (deeply) until tested in some way. It is like an hypothesis they hold, simply because they have engaged with the work (usually superficially). It is similar to their beliefs when researching. Simply because they have engaged with the language in their research (meaning they can read it and recognise the words) they believe they 'know' what the research means, again until they test the ideas, usually in class with me. The solution is to teach them how to self test, and it has to be fairly specific, over a long time, possibly because it is a sign of a mature mind to self test, to run their understanding through some form of testing procedure. It is possible, as I said, that it is a sign of cognitive maturity to self test, in which case it may not be developed easily. If that is the case, then it may be better to teach in a way that focusses on the necessity to learn deeply, such as PBL, task-based learning, problematized learning, flipped teaching etc.
That's a good observation, Mark, thank you.
I totally agree that the discrepancy is probably rooted in some misunderstanding about the level of learning required. I am still surprised though. I teach mathematics where the students are used to being drilled with rather predictable exercises and problems. One would think that self-assessment would be trivial.
I do think your proposed cure is the right way to go, but it is not straightforward. Existing research shows that PBL and whole-task learning are not effective. Straightforward implementation of flipped classroom is also not effective, even though flipped classroom provides a framework where effective methods may fit in. Getting it right takes years of trial and error in my experience.
Anyway, I am still after more references which can illuminate the matter based on research.
Have you followed up on the researchers/keywords that I suggested? Do they answer your question?
Vincent,
I have started. The keywords you suggested throw the net widely, so it will take time to plough through the false hits. Metacognition looks like a very appropriate keyword, which I had not thought of, so I am sure that I will find something useful as well. Thanks a lot for the ideas.
And, thanks to Steven too. His suggestion also looks like an interesting step on the way.
Ha! Re the comments from Welie Schaathun, you have it exactly right, I am afraid. There is no problem engaging the motivated few, but how to capture those who are less inclined to drill down? The answer has to be in part to design sessions of the type referred to in the PLOS Biology reference above, where all the learning and engagement takes place within the session, and also to utilise 'peer pressure' in something like PeerWise, where competition to provide the best answers and explanations can become quite intense, though I am not sure that I can continue to offer the annual subscription to iTunes as first prize out of my own pocket!
I am just about to pilot a new 75 minute interactive learning session on physiological homoeostatic feedback with my second year MBChB Endocrinology Class, so watch this space and wait for the howls of agony!
This is definitely a matter of metacognition. Having the ability to know what I am learning and what I am not learning is crucial for learning. We should dedicate time with our students to practicing metacognitive techniques and strengthening their metacognition skills.
I recommend the following article: Building a Metacognitive Curriculum: An Educational Psychology to Teach Metacognition (http://cgi.stanford.edu/~dept-ctl/tomprof/posting.php?ID=1048)
Let me share a few observations from a long time of teaching. I hate to admit how long, as I should have learned better from it. All of them center around why students (humans) are generally terrible judges of how well they understand something.
1. "Multiple choice exam scores jumped between 8 and 14%. . . . only difference was that I had asked them to prepare to write about the overarching principle for each chapter."
Knowing something for recognition is different from knowing relationships well enough to be able to communicate them. New students rarely distinguish between them. Seasoned students often don't either. I think that this is consistent with Wellie and Steven's comments.
One semester, starting at midterm, I used the exact same tests in Winter term as I had in the Fall for a GE Physical Science course, except that I added 3-4 essay questions, of which they were to choose 2-3 to answer. When they asked for topics for the essay questions, I reworded the theme of each chapter. They thanked me as if I had given away the store!
Multiple choice exam scores jumped between 8 and 14%. The only difference was that I had asked them to prepare to write about the overarching principle for each chapter.
This is more significant than it sounds, because Winter scores normally averaged worse than Fall term, since Winter contained, among others, all the students who had postponed taking their science until their last semester out of fear of failure.
2. Describing the complete reasoning in complete sentences "affixes" some learning in the brain.
Teaching Physics by Inquiry, I routinely interview students at regular points in their work. In interviews I commonly observe students making correct predictions, whose speech tells me they are getting the ideas. In other courses this might be represented by their success completing the homework problems. However, this understanding often evaporates before the exam unless they can communicate the complete answer (and reasoning) in full sentences to me and their partners.
3. Students (humans) are adept at filtering out what they don't expect to be important, and interpreting new material in ways which support their original ideas. That skill is essential for our survival. It may sometimes be biologically impossible to do otherwise.
We must filter out the vast majority of what our senses take in. It would be overwhelming and bewildering otherwise. Some evidence exists that not being able to do that might have something to do with autism. Fundamental in the biology of the brain are mechanical systems for doing this.
The eye and the ear are strong examples. The LGN filters information from the eye before it is assembled into an image in the occipital lobe. It is significant that 80% of the connections which end in the LGN come from the rest of the brain. That is, our expectations can "edit" images before they are images which we could be conscious of.
We similarly edit sounds to look for what we are expecting to hear with the outer hair cells in the cochlea.
I personally feel that peer pressure is the big hammer in our tool box. We need to be careful with it. I feel that the hardest thing to do is cultivate an environment in which students feel safe to share what they really think. Then we have a chance of changing what they think.