Some thoughts on self-checking / auto-grading

Super grateful for this entire discussion. I think a lot of folks are hitting the nail on the head.

We’re certainly not against automated feedback (in fact, almost all of our best activities have some of it built in). We’re just aware that the design choices we make now will have cascading and sometimes unintended consequences and want to get these right.

To @PanettaMath’s & @Mike_Gleeson’s point that we need to at least meet folks’ expectations or they’ll go elsewhere: heard, and agreed! We’ll move as quickly as we feel we safely can.

Adding on to @Jeff_Holcomb’s taxonomy: I just love (1) [natural, interpretive feedback], (4) [feedback from teacher], and (3) [comparisons with other students – especially if they can provide feedback to each other, per @PanettaMath]. We’re going to be working to make all three of these easier and easier over time.

To me, one of the big risks with (2) and (5) – in addition to the pedagogical design challenges – is false grading. It’s so easy, especially with string comparisons, to mark a correct answer as incorrect (and vice versa, though that’s both less common and less worrisome). We’ll also keep working to make it easier to write correctness conditions (e.g. using `xyLine` instead of comparing raw LaTeX).
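To make the false-grading risk concrete, here’s a minimal sketch – in Python rather than Computation Layer, with made-up helper names – of why raw string comparison marks equivalent answers wrong, while a structural/numeric check accepts them. (In CL itself, `xyLine` plays the role of the structural check; the code below only illustrates the principle.)

```python
def strings_match(student, target):
    # Naive approach: exact string match after stripping spaces.
    # "1 + 2*x" and "2*x + 1" are the same line, but this says no.
    return student.replace(" ", "") == target.replace(" ", "")

def values_match(student, target, xs=(-2.0, -0.5, 0.0, 1.0, 3.0)):
    # Safer approach: evaluate both expressions at sample points
    # and compare numerically. eval() here is just a stand-in for
    # a real expression parser like the one behind xyLine.
    env = {"__builtins__": {}}
    try:
        return all(
            abs(eval(student, env, {"x": x}) - eval(target, env, {"x": x})) < 1e-9
            for x in xs
        )
    except Exception:
        return False  # unparseable input is simply not accepted

# A mathematically correct answer, written in a different order:
print(strings_match("1 + 2*x", "2*x + 1"))  # False -> falsely marked wrong
print(values_match("1 + 2*x", "2*x + 1"))   # True  -> correctly accepted
```

The takeaway: any time a correctness condition compares strings (or raw LaTeX), every harmless rewriting of the answer becomes a false “incorrect.”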

In the meantime, we’ve kind of already opened Pandora’s box: everyone now has access to Computation Layer… so the challenge is to help folks – especially new ones – use the power of feedback as constructively as possible.

Again, love the discussion here. Grateful to all and let’s keep it open as we continue to change (and hopefully upgrade) Computation Layer.