If I made a mistake and the second plate still has a lid assigned, then this code would execute steps 1 and 2, and then fail on step 3.
Now, I might have valuable liquid in my tips, and would need to manually restore the state.
How do you prevent something like this? Could we use a “check errors at compile time” function?
I used the ChatterBoxBackend, but it does not catch all errors that could be caught (wrong formats of firmware strings, for example).
Is it possible to simply assert before you run this function?
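For the lid example, a pre-run check could look something like this (a rough sketch; `dest_plate` is a made-up name and the `lid` attribute usage is my assumption about how the state is tracked):

```python
def preflight_checks(dest_plate):
    # Hypothetical pre-flight check, run before any liquid is picked up.
    # Step 3 would fail if the destination plate still has a lid assigned,
    # so raise here instead of after aspirating.
    assert dest_plate.lid is None, "destination plate still has a lid assigned"

# usage: call preflight_checks(dest_plate) right before running the transfer steps
```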
Compile-time checks would be nice, but we don’t currently have a way to queue operations. (It would be slightly non-trivial to add, but definitely doable in a context manager using the state trackers.)
Another less general approach, but one that is doable now, would be to figure out the maximum values your function can take and add validation at the beginning.
e.g. if you had an example function that did 11 aspirations of 100 ml in a 1000 ml tip, then you could put in a validator that checks number_of_aspirations * aspiration_amount < tip_volume
You would put in conservative, tested values and expand them as you tested things.
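A rough sketch of such a validator (names are made up; only the arithmetic matters):

```python
def validate_transfer(number_of_aspirations: int, aspiration_amount: float,
                      tip_volume: float) -> None:
    # Conservative up-front check: the total aspirated volume must fit in the tip.
    total = number_of_aspirations * aspiration_amount
    assert total < tip_volume, (
        f"{number_of_aspirations} x {aspiration_amount} = {total} "
        f"does not fit in a {tip_volume} tip"
    )

# e.g. validate_transfer(11, 100, 1000) raises immediately, before any hardware step runs.
```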
This would require duplicating all the asserts that PyLabRobot does. This is mostly an issue while writing new protocols, where I don’t know in advance what will fail. The most recent example was the length of a kwarg to dispense being too long. That error was only caught when I did a liquid test, and only after a number of other steps had executed successfully.
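For illustration, roughly the kind of call that triggered it (a reconstruction, not my actual protocol; the exact dispense signature may differ):

```python
# Hypothetical reconstruction: one kwarg is longer than the number of targeted
# wells, and nothing flags it until the backend executes the step.
async def transfer_step(lh, plate):
    wells = plate["A1:H1"]                   # 8 wells
    await lh.dispense(wells, vols=[50] * 9)  # 9 volumes for 8 wells -> fails late
```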
I see. Most errors of this type, where you don’t know what will go wrong, will obviously be hard to catch in a ChatterBox / simulated backend.
As a workaround, I find it extremely useful to develop stuff in Jupyter notebooks so I can run things interactively and also very easily continue from the current state. When running from Python files, you have to set up the state yourself, which is a pain.
what I would do is introduce a global queueing parameter.
if this parameter is True, queue operations in the state trackers (similar to how the state is currently being queued), and make sure to not call a backend method in the LiquidHandler methods.
then when the user calls queue.execute(), call all the backend methods in the correct order.
this will not solve errors originating in the backend, which seem impossible to jit-simulate, but it does raise the PLR-tracking errors, which we can hopefully make very good.
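very roughly, the shape I have in mind (a sketch only; `OperationQueue` and everything in it is hypothetical, not existing PLR API):

```python
class OperationQueue:
    """Hypothetical queue that defers backend calls until execute()."""

    def __init__(self, lh):
        self.lh = lh
        self.ops = []  # (backend_method_name, args, kwargs)

    def enqueue(self, method_name, *args, **kwargs):
        # At enqueue time the LiquidHandler would still run its checks and
        # update the state trackers, but NOT call the backend method yet.
        self.ops.append((method_name, args, kwargs))

    async def execute(self):
        # only now are the backend methods called, in the original order
        for method_name, args, kwargs in self.ops:
            await getattr(self.lh.backend, method_name)(*args, **kwargs)
        self.ops.clear()
```

with the queueing parameter set, the LiquidHandler methods would enqueue instead of calling the backend directly, so tracker errors surface immediately and backend calls only happen on execute().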
I do also think we can expand the number of asserts that ChatterBox is doing; it should have the same ones as the LiquidHandler. I’m imagining a STARChatterBox that is identical to STAR except that the serial connection prints to stdout. My error was ultimately caught by an assert deep in STAR.py, so this is not an issue with the actual liquid handler.
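As a sketch of that idea (entirely hypothetical: the import path, the overridden method name, and skipping setup are guesses at STAR’s internals and may not match the real code):

```python
from pylabrobot.liquid_handling.backends import STAR  # import path assumed

class STARChatterBox(STAR):
    """Hypothetical: reuse STAR's command building and asserts, but print the
    firmware strings instead of sending them to the device."""

    async def setup(self):
        # skip connecting to the USB device; nothing to talk to
        pass

    def write(self, data, **kwargs):  # method name/signature assumed
        print(f"[STARChatterBox] {data}")
```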