Writing a new backend / Agnosticity

Yes I noticed :heart: but I was not sure if the TubeRack I had in mind would have ended up looking like your original, and I think it didn’t in the end.

From my point of view, tube racks are like tip racks, and well plates are tube racks with unmovable tubes. So, to me, that is a slightly more “unifying” abstraction (i.e. rack → spots → containers).

This is why I hacked together the “spotted” TubeRack here: pylabrobot/pylabrobot/resources/pipettin/tube_racks.py at pipetting · naikymen/pylabrobot · GitHub

You’ll notice that it duplicates much of the code in PLR’s TipRack. This I would have liked to avoid, but chose not to spend time on it for now, since I don’t know the codebase (nor python) nearly as much as you.

I did have to intercept __getitem__ so I could call lh.aspirate(tube_rack["A1:A3"], vols=[100.0, 50.0, 200.0]) directly, because LH doesn't know about TubeSpots.
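For the curious, a minimal sketch of what that __getitem__ interception could look like. All class names here (Tube, TubeSpot, SpottedTubeRack) are simplified stand-ins for the ones in my fork, and the range parsing only handles single-row "A1:A3"-style queries:

```python
class Tube:
    def __init__(self, name):
        self.name = name

class TubeSpot:
    """A location on the rack that may or may not hold a Tube."""
    def __init__(self, name, tube=None):
        self.name = name
        self.tube = tube

class SpottedTubeRack:
    def __init__(self, spots):
        # spots: dict mapping identifiers like "A1" to TubeSpot objects.
        self.spots = spots

    def __getitem__(self, query):
        # Expand "A1:A3" into ["A1", "A2", "A3"] (single-row ranges only).
        if ":" in query:
            start, end = query.split(":")
            row = start[0]
            names = [f"{row}{i}" for i in range(int(start[1:]), int(end[1:]) + 1)]
        else:
            names = [query]
        # Return the tubes, not the spots, so LiquidHandler (which knows
        # nothing about TubeSpot) receives containers it can aspirate from.
        return [self.spots[n].tube for n in names]

rack = SpottedTubeRack(
    {f"A{i}": TubeSpot(f"A{i}", Tube(f"tube_A{i}")) for i in range(1, 13)}
)
tubes = rack["A1:A3"]
```

The key trick is just that indexing resolves spots to their tubes before LH ever sees them.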


Awesome :stuck_out_tongue: I’ll get back to that and the other comments later. I’ll sleep now.

Thanks for everything as always!


Done!

I checked my Deck and it didn’t seem to have those. Where did you look?

Interesting, why is that? Validation of protocols (e.g. correct initial volumes) seems important.

The only thing I'm struggling with is that PLR does not seem to use the idea of pipette "tools". There is the idea of channels, but this is not quite the same.

Imagine I have 3 pipettes on the robot: two 1-channel pipettes, and one 8-channel pipette (all actuated at once in this last one).

I know the OT2 backend “guesses”, but this is not optimal. For example, the two single-channels might need to be used in different situations to avoid contamination, or they might have different hardware suitable to different applications.

How do you imagine a user selecting the “right” pipette for each transfer, in general?


plr:main on github

I suspect it's because there is some manual handling of liquids in most settings where people use PLR, and this is slightly annoying with the tracker.

Is this something that can be done using backend_kwargs?

Of course, it would be as simple as telling people to add tool="p20" to lh.pick_up_tips, but that would break the dream of hardware agnosticity I am hoping for: a single frontend (i.e. protocol syntax) for all LH robots.

A note: I chose tool instead of pipette there on purpose, because not all Liquid Handler tools will be pipettes (e.g. syringes or futuristic ultrasonic dispensers), but they will always be tools.

I would actually prefer to overload the more agnostic channels argument, instead of succumbing to the religious backend_kwargs (examples of this idea below).

I think we may be close to this scenario.

From my perspective as a lab person and hardware developer, PLR shows some traces of Hamilton-specific hardware in its architecture (which could be expected).

A few of these traces have made their way into the code’s abstractions, and may require some amount of rethinking/refactoring to remove.

For example this sentence in LiquidHandler.pick_up_tips:

use_channels: List of channels to use. Index from front to back.

This, I promise, is not obvious at all unless you’ve seen pictures of a Hamilton robot, which has its channels arranged “front to back”.

I see one compromising path to approach this issue.

Overloading channels

Bringing back my previous example, if there were 3 pipettes and 10 channels in total, the user could “select” the tool implicitly, by recalling the integer IDs of the channels in each tool (presumably defined when instantiating the backend) and passing them to the atomic actions in LH.

For example, let there be a custom backend with two pipette tools:

# Single-channel
gil_p20 = Pipette(channels=[0],
                  model="gilson_p20_adapter.v1")
# Multi-channel
ot2_gen2_8ch = Pipette(channels=[1, 2, 3, 4, 5, 6, 7, 8],
                       channel_layout="front_to_back",  # Maybe
                       model="ot2_multichannel_8.v2")  # A recognizable ID?

# Example using "tools" instead of "num_channels".
back = toolchanging_backend(tools=[gil_p20, ot2_gen2_8ch])

Note that because Hamilton robots have fixed hardware, their setup and protocols would remain exactly as they were.

The advantage is that no user-facing changes are needed, and protocols can be kept hardware-agnostic. The disadvantage is that the channels syntax is opaque, unless we did something about it.

Front-end example:

# The backend may decide if it wants to guess the tool or throw an error.
await lh.pick_up_tips(tiprack["A1"])

# Or one might just tell it which pipette to use by the channel ID.
await lh.pick_up_tips(tiprack["A1"], channels=[0])

# Or by using the tool/channels object, which is less opaque.
await lh.pick_up_tips(tiprack["A1:A8"], channels=ot2_gen2_8ch[:])
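To make the last variant concrete: a sketch of how the hypothetical Pipette tool could support slicing, so that ot2_gen2_8ch[:] resolves to a plain list of integer channel IDs that LH already understands. None of these names exist in PLR today:

```python
class Pipette:
    """Hypothetical 'tool' wrapper around a group of channel IDs."""
    def __init__(self, channels, model, channel_layout=None):
        self.channels = list(channels)
        self.model = model
        self.channel_layout = channel_layout

    def __getitem__(self, key):
        # Slices return lists of channel IDs; an integer index returns a
        # one-element list, so the result can always be passed as channels=.
        if isinstance(key, slice):
            return self.channels[key]
        return [self.channels[key]]

gil_p20 = Pipette(channels=[0], model="gilson_p20_adapter.v1")
ot2_gen2_8ch = Pipette(channels=list(range(1, 9)),
                       channel_layout="front_to_back",
                       model="ot2_multichannel_8.v2")
```

This way the "tool" syntax is pure sugar: the front end still only ever sees integer channel IDs.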

What do you think? Is this something you’d want to implement?

This idea would also provide a way to remove the following limitation I came across in the OT backend:

I realise just now that the number of channels is already inferred in the way I would expect for this idea, but it's also strange given the above.

This would give length 9 for a 1+8 configuration?

Here it is :slight_smile: pylabrobot/pylabrobot/liquid_handling/backends/piper_backend.py at pipetting · naikymen/pylabrobot · GitHub

I have not decided if I want to use the coordinates that PLR calculates or keep using the ones from my “controller” module, so for now I did the latter.

This means that the coordinate calculations by PLR are unused right now (it's my Deck's fault), and that the backend only works with that Deck, populated with data from our UI.

This mostly achieves one path I was interested in: [ui] -> [PLR] -> [piper], which after more than a year of sparse efforts, is something!

(realized I wrote a reply but didn’t hit send :man_facepalming: )

Great points about robots that have a non-linear configuration of pipetting channels. As you said, this is a relic from PLR starting with Hamilton STAR and needs some rethinking. I’m happy to be doing this now following a use-case and with simultaneous implementation.

Overloading/extending use_channels is a neat solution because it fits in well with existing code. Plus having both a backend_kwarg and use_channels is very confusing. We can make it so having use_channels be None means the backend chooses the channels to use and is hardware agnostic from the LiquidHandler perspective. Users maintain the ability to have more fine-grained control over how their robot executes their commands, by specifying use_channels explicitly, which will always come at the cost of reduced agnosticity.
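The "use_channels=None means the backend chooses" rule could be sketched like this, assuming a backend that keeps a channel-to-tool mapping (all names here are hypothetical, not PLR API):

```python
def resolve_channels(num_ops, use_channels=None, channel_to_tool=None):
    """Pick channels for num_ops operations.

    If use_channels is None, the backend chooses (here: simply the first
    available channels); otherwise the user's explicit, fine-grained choice
    is validated and used as-is.
    """
    available = sorted(channel_to_tool)
    if use_channels is None:
        # Hardware-agnostic path: the backend picks channels itself.
        return available[:num_ops]
    # Fine-grained path: user-specified channels must actually exist.
    unknown = [c for c in use_channels if c not in channel_to_tool]
    if unknown:
        raise ValueError(f"Unknown channels: {unknown}")
    return list(use_channels)

# Example mapping: channel 0 belongs to a "p20" tool, 1-2 to an "8ch" tool.
mapping = {0: "p20", 1: "8ch", 2: "8ch"}
```

A real backend would pick channels smarter than "first available", but the None-vs-explicit split is the point.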

makes a lot of sense. The way I see it is we have the PLR coordinate space (x left right, y front back, z up down), which is used by users and received by backends, and then backends send commands to the machine in whatever way that machine expects. For example, Tecan backends do a PLR space (LiquidHandler) → Tecan space (EVO) translation. From our user’s perspective, everything is 1 space, for LiquidHandler and in the resource model. I would highly recommend adopting this pattern in your backend.

For the UI, this is a consideration for you. On the one hand, you can have it be in your native coordinate space, and then translate to PLR, and then translate back to native in the backend. This will work, but if you want your UI to be usable across machines (I am biased and a big proponent of that :)) it may be better to have the UI in PLR coordinate space. This may require some rewriting of the UI, but the final code can be simpler & it will make the GUI hardware agnostic.

Hi thanks for the replies!

Fantastic! I think we agree and will be glad to work on it too.

I want to agree with that, because PLR and CNCs use that coordinate system, and really there is no clear advantage to what the UI has now (other than making frontend people happy).

However refactoring the UI in that way would take significantly more effort than transforming coordinates here and there.

For now, PLR is working just as the UI does: it sends atomic actions and references to resources by name, and the "backend" works out coordinates by itself. This is not agnostic at all of course, but it let me skip some steps so I could focus on getting PLR to work first.

After the dust settles, it will be easy to write a couple coordinate transforms (only two actually).
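Those two transforms would be one function into PLR space (x left-right, y front-back, z up-down, as Rick described above) and its inverse back out. A sketch under an assumed difference (my UI's y axis flipped relative to PLR, plus a deck-depth offset; the real transforms may differ):

```python
DECK_DEPTH_MM = 350.0  # Assumed deck depth; placeholder value.

def ui_to_plr(x, y, z, deck_depth_mm=DECK_DEPTH_MM):
    # Flip y so that it grows front-to-back, as PLR expects (assumed
    # convention for the UI's native space).
    return (x, deck_depth_mm - y, z)

def plr_to_ui(x, y, z, deck_depth_mm=DECK_DEPTH_MM):
    # Exact inverse of ui_to_plr (a y-flip is its own inverse).
    return (x, deck_depth_mm - y, z)
```

Since the pair is self-inverse, round-tripping UI → PLR → UI returns the original point, which makes the transforms easy to test.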

Given the above, and though not elegant, this seems to be the simplest option to me so far.

I’m willing to change my mind with time/arguments though.

Best!


Bonus

I borrowed the slot coordinates from the OT2 Deck and gave them a go in the UI, adding them by hand.

I am kind of happy with it. The top slots (I call them anchors) are out of the area because I haven’t written the transforms yet.


Also, how do you propose we get started?


all makes sense - not really my call at all, but I agree that focussing on the programmatic PLR interface seems most important and most valuable near term and the hardware agnostic UI, though super valuable, can wait for a bit.

With the use_channels overloading, I think it is best to start with removing the STAR-inspired channel usage from LiquidHandler (e.g. _make_sure_channels_exist, the typing) and moving it to the STAR, Tecan, and OT backends. There will be some duplication between STAR and Tecan, but "a little duplication is cheaper than the wrong abstraction". This work is all irrespective of your backend. I can probably work on this tomorrow, but if you want to open a PR in the meantime feel free to. Please allow edit from maintainers so I can contribute in the PR. No pressure, I'm happy to do it, but might take 1-2 days as I'm wrapping up something else.

After that, it will be natural to use use_channels in your backend.
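The shape of that refactor could look roughly like this: channel validation moves out of LiquidHandler and into each backend, which knows its own hardware. Class and method names here are illustrative, not actual PLR API:

```python
class Backend:
    """Base class: each backend validates channels its own way."""
    def check_channels(self, use_channels):
        raise NotImplementedError

class LinearChannelBackend(Backend):
    """STAR/Tecan-style backends: channels are 0..n-1, front to back."""
    def __init__(self, num_channels):
        self.num_channels = num_channels

    def check_channels(self, use_channels):
        # The backend, not LiquidHandler, knows which channels exist.
        bad = [c for c in use_channels if not 0 <= c < self.num_channels]
        if bad:
            raise ValueError(f"Channels {bad} do not exist on this machine")

star = LinearChannelBackend(num_channels=8)
```

A tool-changing backend would subclass Backend differently, validating against its channel-to-tool mapping instead of a simple range.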

Great thanks a lot for working on this Rick!

Here, the basics are up:

This channel-to-tool mapping feels nice. I’m not hitting any bumps for now.

The backend iterates over zip(stuff, use_channels) and handles one channel at a time. I think this is what Hamilton does to some extent; their tool seems to be an array of independent pipettes that can work in parallel too, with some smart motion planning.
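That per-channel iteration pattern, in sketch form (ops and send_command are placeholders, not names from my actual backend):

```python
def execute_serially(ops, use_channels, send_command):
    """Walk (operation, channel) pairs and execute them one at a time."""
    results = []
    for op, channel in zip(ops, use_channels):
        # One command per channel; a parallel-capable backend could batch
        # these instead of sending them serially.
        results.append(send_command(channel=channel, op=op))
    return results

# Tiny usage example with a logging stand-in for the real command sender.
log = []
results = execute_serially(
    ["asp A1", "asp A2"], [0, 1],
    lambda channel, op: log.append((channel, op)) or (channel, op),
)
```

The nice property is that the frontend call looks identical whether the backend executes serially or in parallel.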

Also, when I get to integrating the multi-channel tool (1 yr from now prolly) I’ll try to figure out how non-independent multi-channel pipettes should work in this scenario. Do you have any forethoughts on this? Maybe it is just each backend’s business.

The only real itch I have is around the “portability” of use_channels between backends. What will someone reusing a PLR protocol on a different robot need to figure out? Can/should it be made simpler for them somehow?

Best!


Yes. The OT hardware supports this, but we don’t support this in PLR yet. I was thinking that we can have the backend raise an error if the requested aspiration locations are not in the 9mm equal-space configuration that is supported by the hardware.
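That spacing check could be sketched as follows: raise if the requested locations are not a single column of points 9 mm apart. Coordinates are (x, y) in mm; the tolerance value is my own assumption:

```python
def check_9mm_spacing(locations, tol=0.01):
    """Raise ValueError unless locations form one column spaced 9 mm apart."""
    xs = {round(x, 2) for x, _ in locations}
    if len(xs) != 1:
        raise ValueError("Multi-channel pickup requires a single column (one x)")
    ys = sorted(y for _, y in locations)
    gaps = [b - a for a, b in zip(ys, ys[1:])]
    if any(abs(g - 9.0) > tol for g in gaps):
        raise ValueError(f"Locations are not 9 mm apart: gaps {gaps}")

# Valid: one column, consecutive points 9 mm apart in y.
check_9mm_spacing([(10.0, 0.0), (10.0, 9.0), (10.0, 18.0)])
```

9 mm matches SBS 96-well pitch, so standard plates and tip racks pass this check naturally.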

They would need to figure out which channels to use. I think in the vast majority of cases use_channels is something that should not be specified by the user, so that each backend may automatically infer what the optimal pipette for an operation is.

If the user is specifying channels, which to be clear is totally legit, they are likely optimizing a protocol for a specific (group of) robots. In that case automatic translation is not doable, or at least not easily.

From PLR's side, it would be nice if similar-functioning robots like STAR and EVO supported selecting channels in the same way, to maximize interoperability. For example, both should specify them as integers with 0 being the backmost channel.

So users choose: inferred default option with high interoperability or high control at the cost of interoperability. Seems optimal to me, but happy to discuss as always.


Will do the same then.

Yep good enough, thanks!
