#6981087 Leedale wrote:
I sincerely love how you stir up more discussion. This style of MOOC seems to suit you down to the ground. :-)
...
Quantum 2: How so? I thought the point of assessment was to assess the knowledge or skill-level of a student/participant. Keep in mind I'm coming at this as a very straightforward technical person. :D
Quantum 3: I'm not sure why these are exclusive to one another?
...
I would also say that there are some features of this kind of MOOC that would degrade with the kind of assessment we're talking about. We educators tend to be assessing creatures, though. Is it even possible to avoid any kind of assessment? Is there any kind of assessment that wouldn't tend to "degrade the learning behavior" as Michael seems to be indicating?
Ahhh, busted! My MOOC Assessment Quanta were (a little) satirical. However, they spring from discussions surrounding assessment, cheating, plagiarism and institutional learning.
Assessment -- the external force -- changes the objectives of the learner and their engagement with learning. A student asking
"Is this on the test?" causes their teacher's heart to sink, on a couple of levels. The educator pauses before they answer, because they know that assessment is driving that student's engagement with the subject material. In the old days, this was called "cart before the horse" or "the tail wagging the dog."
The student is behaving perfectly rationally, concentrating on the outcomes (grades) that their teacher has communicated to them.
Resource constraints prevent us from engaging directly with the learning of a hundred-plus students, so we rely on summary data from a variety of sources.
The summary data is open to "gaming" of various kinds. Coupled with the disruptive effect of high stakes, this puts us in the same situation as professional sports -- for many students, the rewards of cheating and the punishments of failure create an irresistible vortex.
Daniel Pink's TED talk outlines what happens to performance when the stakes become higher -- our attention is taken by the stakes, and the quality of engagement suffers. The more complex the task, the greater the disruption.
Education is very complex, and the stakes are much, much higher than the kind of monetary rewards Pink used in his experiments.
However, the things that are easiest to measure -- multiple-choice responses, for example -- are less useful. Imagine the following question on a final test:
I have learned the following amount in this subject:
A: A great deal of fascinating knowledge from an excellent teacher.
B: Somewhat worthwhile information from a moderately engaging teacher.
C: Very little. The teacher is a knob.
Even if no grade were allocated to the question, and the test were anonymous, certain students would take perverse glee in responding "C".
Further imagine, however, that the "correct" response ("A") constitutes 100% of the student's grade for the subject. It would be a courageous student who answered "C". If that student (or their parents) had taken out a sizeable student loan, there would be only one rational choice.
Assessment has an opportunity cost in time: more, or finer-grained, assessment comes at the expense of other activities. Those other activities are, broadly, teaching/learning, school administration, or personal life.
The educator who behaved rationally would refuse to allow unpaid overtime to consume their personal time. Fortunately for students, their teachers tend not to be rational!