Some thoughts on self-checking / auto-grading

Hey folks – I’m the Chief Academic Officer at Desmos and it’s been so wonderful to see all of your interest in using Computation Layer in new and novel ways. School closure has produced lots of chaos and misery, but also flashes of brilliance and creativity, like your work here.

I’ve tuned in, especially, to your interest in self-checking activities and auto-graded questions. I understand why that interest is especially intense now, during school closure, and I wanted to share some thoughts about why we’ve moved cautiously and slowly to support those interests. Please check out my blog post and feel free to comment here or there.

1 Like

Thanks for your post, Dan. I have a different view, partly because I’m using Desmos Activities for a variety of purposes, not just for exploring new concepts in mathematics. (In particular, the most common activities I make are formative assessments on a variety of skills, and retrieval practice.) In an ideal world I might have pursued different quizzing platforms for those purposes, but Desmos is also the easiest and fastest platform for creating this type of activity, and I don’t want my students having to use more platforms than necessary.

I think one mistake in the current situation is that the nature and timing of feedback are decided almost exclusively by the activity author, rather than the teacher (I realise the Feedback beta is an exception to this). This means we are more likely to end up with activities that offer only “you’re wrong, try that question again”, which leads to guess-and-check.

Another phenomenon I see, which seems ridiculously inefficient, is having a teacher project the dashboard in anonymous mode for the class.

For my purposes, it is often the case that as a teacher I want to tell students to “go back and think about this screen” or “check that screen again”, and I want to control the timing of that as well. I am aware that I can now copy that message to the clipboard and frantically click into each screen marked with a cross to paste it into the feedback box.

To me, a possible way forward would be to give teachers the ability, through the dashboard, to do things like this:

  1. Annotate students’ screens with their current dashboard status. (Literally, just the ticks/crosses/dots placed at the top of each screen, or a selection of screens that the teacher chooses to release.)
  2. Inform students that particular screens currently seem to have a mistake, and allow them to resubmit up to … times, with a minimum delay of … seconds between attempts (to prevent blind guess-and-check).

I like this type of solution because it gives the classroom teacher control over when to provide feedback on particular screens, and it also allows you at Desmos to guide teachers towards better feedback practices.

1 Like

Thanks for the comments here, Bryn. The difference between the activity author giving the feedback and the teacher giving the feedback is a really interesting one I need to think more about.

However, I’m not sure I agree that students guess and check because they have infinite opportunities to answer or because they can get feedback without delay. That explanation feels incomplete to me. I suspect we see students guess and check because the feedback doesn’t support them in doing anything more than guessing and checking.

What else can you do when all you know about your work is that it’s wrong?

Thanks Dan,

Regarding your question “What else can you do when all you know about your work is that it’s wrong?” I think this is the condition that adults and independent learners find themselves in regularly (for instance, I tried to poach some eggs the other day and I could see it was a failure; I didn’t have a professional chef next to me giving me personalised feedback, so it was up to me to work out the next steps.)

As such, an important part of our job as teachers is to train students to deal with this, and here are a few options I would expect a student to consider:

  1. Look over the process that led them to the wrong answer. Ideally they could find the mistake themselves.
  2. Look in a textbook or tutorial video to see if they have followed the correct procedure/applied the correct definitions.
  3. Ask another person (e.g. a student, a teacher) to help.

My views on feedback have been shaped a lot by listening to Dylan Wiliam’s research, for instance this summary article.

One principle he advocates is that feedback should be more work for the learner than for the teacher. He also talks about making feedback into detective work, which contrasts with the very detailed feedback a teacher might want to give, which can actually produce less thinking. In fact, it could be better to simply tell a student to check their working again.

I think another pertinent question is “what is the alternative to providing this type of semi-automatic feedback?” Even before remote teaching, in most classrooms I teach in, I could maybe provide great feedback to one student per minute (let’s say 20 pieces of great feedback per activity). Plus I have students asking me “is my answer correct on this screen?” (Yes, I could then have a conversation with them, but that conversation counts towards the total of 20.)

If an activity has 15 screens and a class has 20 students, that is 300 possible feedback points, which means maybe 280 student-screens will go without any direct feedback.

I can provide whole-class feedback at certain points, but by then students have potentially been labouring under a mistaken belief for a while, and other students might not be up to that part of the activity yet. Also, I tend to choose a selection of screens where the most issues arose, so there are likely to be mistakes that students make that they never find out about.

How do you envisage giving effective feedback on a 15-screen activity to a class of 20?

1 Like

Rather than simply right or wrong, this (from @Omar_Ibarra) is more what I’d envision for feedback. This is not feedback per se, but does offer direction when work is incorrect.

Instant, specific feedback takes an experienced teacher who can predict where students will have mistakes or misconceptions (e.g. negatives, in a variety of contexts, are a common source of errors). This can be useful when students are working on activities unattended, as they often are in our current situation. Nothing really replaces facilitation. Distance learning is hard when you can’t ask a student 30 questions to really help them gain understanding instead of a rote algorithm.

I’m wondering if it would be helpful to list out different types of feedback (I like Daniel’s “direction when work is incorrect”). I feel I want to get that on the table and do a card sort. I’m envisioning that if I had some sort of list like that, I might be able to make better choices when designing or implementing lessons. Maybe that already exists?

Thanks for the follow-up, Bryn. This is a pretty important anecdote, IMO.

I tried to poach some eggs the other day and I could see it was a failure.

I notice:

  1. You identified the egg as a failure. A computer didn’t do that. A pro chef didn’t do that. You had enough identity and agency as a chef / food eater to know you could make the call yourself.
  2. You had substantial feedback about the eggs. You didn’t just see an x or a check above the eggs. Maybe you saw that the yolk was too hard or the white wasn’t hanging together. That feedback will determine your next steps.

That’s what we want for students as well. Lots of feedback about their task. Lots of development of their identities as mathematicians.

PS. Personally I find a splash of white vinegar really keeps the whites hanging together nicely! :smiley:

1 Like

I agree, in part. While we want our students to have “enough identity and agency” as mathematicians, that’s just not always the case. If you’ve never had a poached egg, how would you know anything was wrong, even if you were following a recipe to the best of your ability?

In trying to teach curriculum/standards, particularly to a population that may be upwards of 5 years behind, students are sometimes more akin to the machines of The Matrix trying to match the flavor of Tasty Wheat or chicken. They just have no frame of reference. We can do our best to facilitate understanding, but certain skills may be lacking that just don’t allow for appropriate judgement.

I guess I walked into that!

My analogy was meant to answer the question of “what to do if you know something is wrong” rather than a case study in how to determine something is wrong.

Even if I keep in the world of my analogy though, I think I can illustrate the problem with “authentic feedback only”.

Let’s say the key components of poaching an egg are “adding a splash of vinegar”, “having the right water temperature”, “swirling the water in a way so the yolk doesn’t break”, and “having sufficiently fresh eggs”. (I don’t know if these are the components, as I only watched one YouTube video before attempting)

If I want to practise any of those components, I can’t see a way of knowing whether I am getting them right without, essentially, a master chef by my side saying “no, you’re swirling the water too fast” or “that is too much vinegar” or “the eggs were not fresh enough”. If I go down a discovery-learning route, then I think I need to experiment with all those components simultaneously and get the natural feedback at the end about how the eggs turned out.

That would be fine if I had strong motivation and plenty of time to learn this skill. But in the absence of a caring master chef advising me through the process, I would have preferred my stove to simply say “no, the temperature of the water is incorrect”, or “no, the speed at which you are swirling the water is wrong” rather than wasting lots of time and lots of eggs as I try to master the process.

If you’ve never had a poached egg, how would you know anything was wrong, even if you were following a recipe to the best of your ability?

It seems like a mistake to ask someone to poach an egg who’s never experienced a poached egg before.

We need to poach that person an egg. Not with a step-by-step explanation. Not so they’ll write down a recipe. Just so they’ll experience it.

We need to invite them to describe the egg’s taste, smell, texture, and sight. We need them to compare their notes with other people’s who are on a continuum from novice to expert. We need them to experience several poached eggs – some poached well and others not – and describe their differences.

Same with math. Students need concrete experiences with mathematical ideas before we ask them to do more complex or abstract work with those ideas.

1 Like

But in the absence of a caring master chef advising me through the process, I would have preferred my stove to simply say “no, the temperature of the water is incorrect”, or “no, the speed at which you are swirling the water is wrong” rather than wasting lots of time and lots of eggs as I try to master the process.

I largely agree but wanted to add a couple of follow-up comments:

  1. Computer-based feedback in math class simply isn’t master chef-level good. For some extremely operational problems, it’s possible to guess with good confidence at the ways students are thinking about the math based on their answers, but I see those guesses miss the mark constantly. (Certainly, computers can’t offer feedback at that level of quality for our card sorts.)
  2. Even if computers were that good, that feedback would come at the cost of agency & authority. Amateur cooks need to know that, to some very large degree, the chef’s opinion about the poached egg matters less than their own. Who died and made this chef the arbiter of good taste? Same with math. A student’s solution to an equation is correct whether or not a computer says it is. A student’s argument is correct if the community says it is.

Yes, 100%, but some of us don’t have the luxury of doing that with all that we teach. I’m totally an advocate of teaching higher or deeper math concepts even when students lack certain lower basic skills (e.g. students can understand a concept without being able to multiply in their head).

I teach at a continuation high school, so most have decided that school in general, let alone math, isn’t that important to their survival (I use that word very intentionally). When I survey my Integrated Math I classes, mostly juniors and seniors, at least 80% (and that’s really conservative) say they have never passed a math class. They don’t want to cook poached eggs, or even boil one. They’d rather get an Egg McMuffin. Three-act tasks and PBL can be effective, but they’re often just viewed as contrived.

I agree with Dan philosophically, but in practice I had to forge somewhat of a middle ground.

The structure of schooling (grades, standardized tests, pacing, prescriptive teaching, etc.) distorts parents’ and students’ views of the purpose of mathematics. Thus, teaching from a constructivist epistemology or an invention (“discovery”) vantage point can be difficult. By high school many students have bought in to prescriptive methods of learning, so they are often waiting for steps to manifest on their own. The realities of grading, testing, pacing guides, and students’ own differing beliefs about mathematics made it difficult to operate entirely from this perspective.

My favorite lessons, and many of my students’ favorites, were those where they were able to abstract mathematics from a meaningful context. We move from concrete to abstract with different levels of immersion. A given unit might consist of five phases:

(1) Spend 4-5 days in one context before creating some type of generalization.
(2) Move to a similar situation for 2-3 days.
(3) Work through 2-3 contexts in a single day.
(4) Tackle 10-minute-ish problems set in new or familiar contexts.
(5) Do the typical abstract math that a lot of curricula start with.

Autograding helped me with phases (4) and (5). I used a “star” system for feedback and students would then try again, ask a friend, or ask me. Just having simple on-path/off-path feedback was meaningful for students when we got to these stages. Most of my students found this to be incredibly useful and wondered why we didn’t always use it (because I had not taught myself that CL yet).
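To give a rough idea of the shape of this, here is a minimal CL sketch of simple on-path/off-path feedback. The component names (“answer”, “feedback”) and the target value 24 are made-up placeholders, not the actual scripts from my activities.

```
# CL on the math input "answer" (hypothetical name):
# mark the screen correct/incorrect for the dashboard summary.
correct: this.numericValue = 24

# CL on the note "feedback" (hypothetical name):
# show a star when the submitted answer is on-path, and a nudge otherwise.
content:
  when not(answer.submitted) "Enter your answer, then press Submit."
  when answer.numericValue = 24 "⭐ You're on the right track!"
  otherwise "Not quite yet. Check your steps, then try again."
```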

It was such a success that I built quizzes and tests the same way (if interested, see: https://sites.google.com/site/panettamath/desmos/precalculus?authuser=0). Students got feedback on their assessments and had the ability to improve their grade while taking them. The assessments auto-graded, which also made it easy to re-test students who did not perform well.

Although the amount of time it took me to build these was immense, it saved time on the back end. The biggest payoff was that students could literally practice the test at home and re-take it at school when they were ready. It saved me time on grading at home, and it allowed me to dedicate more time to struggling students during phases 4/5.

However, using Desmos in this way was a privilege I had only because I had learned CL. This certainly isn’t something other teachers at my school could do.

1 Like

Things that should be considered:

(1) If I were in your position, I too would be hesitant to open the proverbial Pandora’s box. But it’s kind of like when the government makes something illegal – it just means it’s regulated by criminal organizations instead of government organizations. If Desmos doesn’t provide this type of support, teachers will still get it somewhere else. That will make the learning experience less cohesive as students switch platforms, and the problems won’t be able to build meaningfully from their experience on Desmos. This same gap typically occurs moving to the assessment, which is why I began building my assessments in Desmos. I understand the potential for abuse, but the teachers who would abuse this feature are probably already abusing a copy machine. So maybe we could at least save trees.

(2) Historically, correct answers have been determined by a mathematical community as new mathematics emerges. I wonder if, in a synchronous environment, there is a way for students to opt to get feedback from their peers after “X” number of wrong attempts, similar to teacher-written feedback. Other students could go back to a screen and give feedback in some way.

(3) If the fear is what teachers would do with this feature, maybe it’s a limited feature based on some sort of Desmos certification? Not ideal, but I suppose it’d be a way to moderate.

(4) Again, the potential for students to (re)assess. Assessment is only valuable if a student is doing the assessing. Unfortunately, some students do not give themselves a hard look in the mirror until it’s black and white. It’s a byproduct of the “illusion of knowledge”, which is our brain’s way of protecting itself.

Unfortunately, I am an instructional coach who supports half the building (not just math), so I probably won’t be able to provide support in either creating these activities or teaching others to use CL in this way. I miss working on these and implementing them with students. But I can say that although I felt automated feedback was incongruent with my own philosophy, it was much appreciated by all students. It helped grow their self-efficacy and increased the amount of time they’d spend helping others.

I teach at a continuation high school, so most have decided that school in general, let alone math, isn’t that important to their survival (I use that word very intentionally).

I take as axioms that a) no one wants to feel stupid and b) everyone has brilliance to offer math class, even and especially students in a continuation setting.

That said, by the time students reach your classroom, many of them have spent years digesting negative messages about their worth and intellect. Some students have metabolized more negative messages than a single math teacher can subvert in one school year, particularly this school year.

It sounds like you’re doing everything you can with a population that needed teachers like you a lot more often in their lives.

3 Likes

It’s funny, because on numerous occasions, after I’ve gone through a series of questions, students have told me that I’ve made them feel stupid because I made something seem so easy. I usually have to apologize that that wasn’t my intent; I just want them to understand! One of my mantras is “math is not magic”, but that’s not what their educational career has shown them.

And that’s exactly it.

While I’m sure Dan is rightly concerned with where automation might lead, and his goals are largely altruistic at first blush, when the priorities of the developer conflict with those of the user, you have to listen to the user. In our case that means both teachers and, more importantly, students.

In this particular environment we’re seeing the volume of this discourse tip ever further towards the needs of the users, and that is no bad thing; we’re only seeing it clearly now because of the need and the volume.

The history of frameworks is littered with the remains of those that did otherwise. I’m not trying to be overly dramatic about it, but it’s just one of those lessons of history that comes to mind.

UX > DX. Every time.

1 Like

Some people might be interested in the idea of “safety-net” questions (https://www.researchgate.net/publication/27686094_Assessment_and_realistic_mathematics_education). Often we ask great open questions, but the answer might not really give us certainty that a student understands what we are aiming for. Safety-net questions are often a narrower follow-up problem (or set of problems) which allows designers to check more specifically for understanding. The linked book is really worth a read-- as is anything that Marja van den Heuvel-Panhuizen writes (for that matter, anything that comes out of the Freudenthal Institute is worth reading).

1 Like

I really like the idea of more robust ways for students to give feedback to each other. This happens in a normal class setting with well-functioning groups, but in a distance learning setting it’s not nearly as easy. Zoom meetings and breakout rooms help, but tools built into Desmos would be great. Something like the upvote function on Reddit has always been attractive to me: “Here are the top 2 suggestions the class decided on. See if either of these helps you.”

1 Like

Circling back-- here is a quick starter list of some forms of feedback I can see how to provide via AB and CL. (The list order reflects only the order in which I thought of them.)

  1. “Natural” feedback-- e.g. Picture Perfect, where the picture hits the hook or the floor.
  2. Answer marked as right or wrong.
  3. Compare answers with other students. (Via showing other students’ answers, or aggregating.)
  4. Written feedback from teacher via the dashboard. (Synchronous and Asynchronous)
  5. Automated text feedback via CL – for example, “Remember that ‘percent’ means out of 100.” (A sketch of this follows the list.)
  6. Safety net questions following a more open ended question.
    6a) Getting the student to use their answer (strategy) to solve another problem. Possibly one where a wrong approach will give a more glaringly wrong answer.
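To make #5 a little more concrete, here is one possible shape for that kind of automated text feedback in CL. It is only a sketch: the math input “pct”, the note “hint”, and the assumption that the question asks for 35% written as a decimal are all hypothetical.

```
# CL on a note "hint" (hypothetical), reacting to a math input "pct" (hypothetical),
# where the intended answer is 35% written as a decimal.
content:
  when not(pct.submitted) ""
  when pct.numericValue = 0.35 "Nice! 0.35 is 35% written as a decimal."
  when pct.numericValue = 35 "Remember that 'percent' means out of 100."
  otherwise "Have another look at how you converted the percent."
```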
1 Like