Pre(r)amble

This is part of a series of posts documenting some of the process of (building|cat herding an AI agent to build) Clipy, an easily hosted Python teaching tool built with just front-end JS and a WASM port of MicroPython.

You can find the GitHub repo for the project here

I keep a relatively up-to-date version here if you want to try it

What we had before

Last week was mostly about fixing up issues with how things had been implemented.

Read about it in the post from last time

What’s happened since

This past week has been very similar in theme to the previous one: fixing a bunch of mistakes, chopping out dead code, and just generally tidying up the project. I’ve thought a couple of times along the way that this was mostly “done” in terms of features, and since nothing really new has been added in the last fortnight, maybe this time it’s actually true?

Record-replay still doesn’t feel fully baked, but after poking at it from both ends (C and JS) without much improvement, we might be starting to hit the wall.

Pruning

Here are a couple of the things that got taken out this week:

  • The old Python instrumentation record-replay code
  • The in-memory mirror of the workspace. Now we’re just dealing with IndexedDB directly and it seems fine, with far fewer bugs caused by sync issues (So. Many. Bugs.) There’s a rough sketch of the direct approach just below.
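
For anyone curious what “just dealing with IDB” looks like in practice, here’s a minimal sketch of writing a workspace file straight to IndexedDB, with no in-memory copy to keep in sync. The database and store names are made up for the example; the real Clipy code is organised differently.

```js
// Illustrative sketch only: hypothetical names, not Clipy's actual storage layer.
function openWorkspaceDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("workspace", 1);
    req.onupgradeneeded = () => req.result.createObjectStore("files");
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function saveFile(path, contents) {
  const db = await openWorkspaceDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction("files", "readwrite");
    // The file contents are the value; the path is the key.
    tx.objectStore("files").put(contents, path);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```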

Playgrounds and Problem Lists

No matter which set of problems (or individual problem) has been loaded, a persistent "Playground" problem is now loaded alongside it. This gives students a scratch space for trying out code, and it follows them around between problems.
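
As a rough sketch of the idea (the names here are illustrative, not Clipy’s actual internals), the playground is just a local, test-less problem that gets prepended to whatever was loaded:

```js
// Illustrative sketch only; the real data model may differ.
const PLAYGROUND = {
  id: "__playground__",
  title: "Playground",
  source: "local", // bundled with the app, never fetched from a remote list
  tests: [],       // no tests, so it can never be "passed"
};

// Whatever set of problems (or single problem) was loaded,
// the playground tags along with it.
function problemsForSession(loadedProblems) {
  return [PLAYGROUND, ...loadedProblems];
}
```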

This also required some changes to the way that problem lists were handled, since the playground is local to the app, whilst problem lists can be loaded from remote servers like GitHub. Originally I was treating all the problems in a list as living on the same remote server, but mixing local and remote problems meant those lists needed to be handled a bit differently.
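
One way to handle that mix, sketched here under the assumption that each list entry records where it came from (again, hypothetical names rather than the real implementation):

```js
// Illustrative sketch: work out where a problem's files live. Local entries
// (like the playground) ship with the app, while remote entries are resolved
// against the list's own base URL (e.g. a raw GitHub URL).
function resolveProblemUrl(entry, listBaseUrl) {
  if (entry.source === "local") {
    return null; // nothing to fetch, it's already in the app
  }
  return new URL(entry.path, listBaseUrl).href;
}
```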

One of the other features that I fixed up this week was success indicators for lists, not just individual problems. Since the playground doesn’t have tests, it can’t be "passed", so the success indicator needed to take into account that a problem might not have tests. Problems in lists now have a completion indicator in the drop-down menu for choosing problems, and that indicator can show that a problem has been passed, not yet passed, or can’t be passed because there are no tests. In the problem list context, the entire problem list can also show as passed.
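
A minimal sketch of that tri-state logic, assuming a problem object carries its tests and a results lookup says whether they all passed (illustrative names only):

```js
// Illustrative sketch of the indicator states described above.
function problemStatus(problem, result) {
  if (!problem.tests || problem.tests.length === 0) {
    return "no-tests"; // e.g. the playground: not passable
  }
  return result && result.allPassed ? "passed" : "not-passed";
}

// A list shows as passed once every testable problem in it has passed.
function listStatus(problems, resultsById) {
  const testable = problems.filter(p => p.tests && p.tests.length > 0);
  if (testable.length === 0) return "no-tests";
  return testable.every(p => problemStatus(p, resultsById[p.id]) === "passed")
    ? "passed"
    : "not-passed";
}
```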

Testing

I’ve started putting together lists of problems that can be used to test features, and which can also serve as a bit of a demo.

You can check out the beginnings of this problem list here

You can also see the config files that this loads from here