Writing a new backend / Agnosticity

Yes I noticed :heart: but I was not sure if the TubeRack I had in mind would have ended up looking like your original, and I think it didn’t in the end.

From my point of view, tube racks are like tip racks, and well plates are tube racks with unmovable tubes. So, to me, that is a slightly more “unifying” abstraction (i.e. rack → spots → containers).

This is why I hacked together the “spotted” TubeRack here: pylabrobot/pylabrobot/resources/pipettin/tube_racks.py at pipetting · naikymen/pylabrobot · GitHub

You’ll notice that it duplicates much of the code in PLR’s TipRack. This I would have liked to avoid, but chose not to spend time on it for now, since I don’t know the codebase (nor python) nearly as much as you.

I did have to intercept __getitem__ to be able to do this directly: lh.aspirate(tube_rack["A1:A3"], vols=[100.0, 50.0, 200.0]), because LH doesn’t know about TubeSpots.
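For illustration, here is a minimal sketch of that __getitem__ interception. The TubeSpot and SpottedTubeRack names are stand-ins for the classes in the linked tube_racks.py, and only same-row ranges like "A1:A3" are handled:

```python
from typing import List, Optional


class TubeSpot:
    """Hypothetical spot that may hold a tube (mirrors PLR's TipSpot idea)."""
    def __init__(self, name: str, tube: Optional[object] = None):
        self.name = name
        self.tube = tube


class SpottedTubeRack:
    """Minimal sketch of a rack whose __getitem__ resolves 'A1:A3'-style
    range queries into the tubes sitting in the spots, so that
    lh.aspirate(rack["A1:A3"], ...) receives containers, not spots."""
    def __init__(self, spots: List[TubeSpot]):
        self.spots = {s.name: s for s in spots}

    def _expand(self, query: str) -> List[str]:
        # Expand "A1:A3" into ["A1", "A2", "A3"]; single names pass through.
        if ":" not in query:
            return [query]
        start, end = query.split(":")
        row = start[0]
        lo, hi = int(start[1:]), int(end[1:])
        return [f"{row}{i}" for i in range(lo, hi + 1)]

    def __getitem__(self, query: str) -> List[object]:
        # Return the tubes, not the spots, so LiquidHandler sees containers.
        return [self.spots[name].tube for name in self._expand(query)]
```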


Awesome :stuck_out_tongue: I’ll get back to that and the other comments later. I’ll sleep now.

Thanks for everything as always!


Done!

I checked my Deck and it didn’t seem to have those. Where did you look?

Interesting, why is that? Validation of protocols (e.g. correct initial volumes) seems important.

The only thing I’m struggling with is that PLR does not seem to use the idea of pipette “tools”. There is the idea of channels, but this is not quite the same.

Imagine I have 3 pipettes on the robot: two 1-channel pipettes, and one 8-channel pipette (all actuated at once in this last one).

I know the OT2 backend “guesses”, but this is not optimal. For example, the two single-channels might need to be used in different situations to avoid contamination, or they might have different hardware suitable to different applications.

How do you imagine a user selecting the “right” pipette for each transfer, in general?


plr:main on github

I suspect it’s because there is some manual handling of liquids in most settings where people use PLR, and this is slightly annoying with the tracker.

is this something that can be done using backend_kwargs?

Of course, it would be as simple as telling people to add tool="p20" to lh.pick_up_tips, but that would break the dream of hardware agnosticity I am hoping for: a single frontend (i.e. protocol syntax) for all LH robots.

A note: I chose tool instead of pipette there on purpose, because not all Liquid Handler tools will be pipettes (e.g. syringes or futuristic ultrasonic dispensers), but they will always be tools.

I would actually prefer to overload the more agnostic channels argument, instead of succumbing to the religious backend_kwargs (examples of this idea below).

I think we may be close to this scenario.

From my perspective as a lab person and hardware developer, PLR shows some traces of Hamilton-specific hardware in its architecture (which could be expected).

A few of these traces have made their way into the code’s abstractions, and may require some amount of rethinking/refactoring to remove.

For example this sentence in LiquidHandler.pick_up_tips:

use_channels: List of channels to use. Index from front to back.

This, I promise, is not obvious at all unless you’ve seen pictures of a Hamilton robot, which has its channels arranged “front to back”.

I see one possible compromise for approaching this issue.

Overloading channels

Bringing back my previous example, if there were 3 pipettes and 10 channels in total, the user could “select” the tool implicitly, by recalling the integer IDs of the channels in each tool (presumably defined when instantiating the backend) and passing them to the atomic actions in LH.

For example, let there be a custom backend with two pipette tools:

# Single-channel
gil_p20 = Pipette(channels=[0],
                  model="gilson_p20_adapter.v1")
# Multi-channel
ot2_gen2_8ch = Pipette(channels=[1,2,3,4,5,6,7,8],
                       channel_layout="front_to_back", # Maybe
                       model="ot2_multichannel_8.v2") # A recognizable ID?

# Example using "tools" instead of "num_channels".
back = toolchanging_backend(tools=[gil_p20, ot2_gen2_8ch])

Note that because Hamilton robots have fixed hardware, their setup and protocols would remain exactly as they were.

The advantage is that no user-facing changes are needed, and protocols can be kept hardware-agnostic. The disadvantage is that the channels syntax is opaque, unless we did something about it.

Front-end example:

# The backend may decide if it wants to guess the tool or throw an error.
await lh.pick_up_tips(tiprack["A1"])

# Or one might just tell it which pipette to use by the channel ID.
await lh.pick_up_tips(tiprack["A1"], channels=[0])

# Or by using the tool/channels object, which is less opaque.
await lh.pick_up_tips(tiprack["A1:A8"], channels=ot2_gen2_8ch[:])

What do you think? Is this something you’d want to implement?

This idea would also provide a way to remove the following limitation I came across in the OT backend:

I realise just now that the number of channels is already inferred in the way I would expect for this idea, but it’s also strange given the above.

This would give length 9 for a 1+8 configuration?

Here it is :slight_smile: pylabrobot/pylabrobot/liquid_handling/backends/piper_backend.py at pipetting · naikymen/pylabrobot · GitHub

I have not decided if I want to use the coordinates that PLR calculates or keep using the ones from my “controller” module, so for now I did the latter.

This means that coordinate calculations by PLR are useless right now (it’s my Deck’s fault), and that the backend works only with that Deck, populated with data from our UI.

This mostly achieves one path I was interested in: [ui] -> [PLR] -> [piper], which after more than a year of sparse efforts, is something!

(realized I wrote a reply but didn’t hit send :man_facepalming: )

Great points about robots that have a non-linear configuration of pipetting channels. As you said, this is a relic from PLR starting with Hamilton STAR and needs some rethinking. I’m happy to be doing this now following a use-case and with simultaneous implementation.

Overloading/extending use_channels is a neat solution because it fits in well with existing code. Plus having both a backend_kwarg and use_channels is very confusing. We can make it so having use_channels be None means the backend chooses the channels to use and is hardware agnostic from the LiquidHandler perspective. Users maintain the ability to have more fine-grained control over how their robot executes their commands, by specifying use_channels explicitly, which will always come at the cost of reduced agnosticity.

makes a lot of sense. The way I see it is we have the PLR coordinate space (x left right, y front back, z up down), which is used by users and received by backends, and then backends send commands to the machine in whatever way that machine expects. For example, Tecan backends do a PLR space (LiquidHandler) → Tecan space (EVO) translation. From our user’s perspective, everything is 1 space, for LiquidHandler and in the resource model. I would highly recommend adopting this pattern in your backend.
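As a sketch of that translation pattern: a backend translates once, in one place, right before sending a command to the machine. The example below is hypothetical (a machine whose x axis is mirrored relative to PLR space; DECK_WIDTH_MM is a made-up machine constant), not any real backend’s code:

```python
from dataclasses import dataclass


@dataclass
class Coordinate:
    """PLR-style coordinate: x left-right, y front-back, z up-down."""
    x: float
    y: float
    z: float


# Assumed machine-specific constant for this hypothetical machine.
DECK_WIDTH_MM = 500.0


def plr_to_machine(c: Coordinate) -> Coordinate:
    # Hypothetical machine origin at PLR's right edge, x axis mirrored.
    return Coordinate(x=DECK_WIDTH_MM - c.x, y=c.y, z=c.z)


def machine_to_plr(c: Coordinate) -> Coordinate:
    # A mirror is its own inverse, so the reverse transform is identical.
    return Coordinate(x=DECK_WIDTH_MM - c.x, y=c.y, z=c.z)
```

Users and LiquidHandler only ever see the PLR space; the translation stays contained in the backend.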

For the UI, this is a consideration for you. On the one hand, you can have it be in your native coordinate space, and then translate to PLR, and then translate back to native in the backend. This will work, but if you want your UI to be usable across machines (I am biased and a big proponent of that :)) it may be better to have the UI in PLR coordinate space. This may require some rewriting of the UI, but the final code can be simpler & it will make the GUI hardware agnostic.

Hi thanks for the replies!

Fantastic! I think we agree and will be glad to work on it too.

I want to agree with that, because PLR and CNCs use that coordinate system, and really there is no clear advantage to what the UI has now (other than making frontend people happy).

However, refactoring the UI in that way would take significantly more effort than transforming coordinates here and there.

For now, PLR is working just as the UI does: it sends atomic actions and references to resources by name, and the “backend” works out coordinates by itself. This is not agnostic at all of course, but it lets me skip some steps so I can focus on getting PLR to work first.

After the dust settles, it will be easy to write a couple coordinate transforms (only two actually).

Given the above, and though not elegant, this seems to be the simplest option to me so far.

I’m willing to change my mind with time/arguments though.

Best!


Bonus

I’ve borrowed the slot coordinates from the OT2 Deck and given them a go in the UI, adding them by hand.

I am kind of happy with it. The top slots (I call them anchors) are out of the area because I haven’t written the transforms yet.


Also, how do you propose we get started?


all makes sense - not really my call at all, but I agree that focussing on the programmatic PLR interface seems most important and most valuable near term and the hardware agnostic UI, though super valuable, can wait for a bit.

With the use_channels overloading, I think it is best to start with removing the STAR-inspired channel usage from LiquidHandler (e.g. _make_sure_channels_exist, the typing) and moving it to the STAR, Tecan, and OT backends. There will be some duplication between STAR and Tecan, but “a little duplication is cheaper than the wrong abstraction”. This work is all irrespective of your backend. I can probably work on this tomorrow, but if you want to open a PR in the meantime, feel free to. Please allow edits from maintainers so I can contribute in the PR. No pressure, I’m happy to do it, but might take 1-2 days as I’m wrapping up something else.

After that, it will be natural to use use_channels in your backend.

Great, thanks a lot for working on this, Rick!

Here, the basics are up:

This channel-to-tool mapping feels nice. I’m not hitting any bumps for now.

The backend iterates over zip(stuff, use_channels) and handles one channel at a time. I think this is what Hamilton does to some extent; their tool seems to be an array of independent pipettes that can work in parallel too, with some smart motion planning.
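A minimal sketch of that iteration pattern follows; all names are hypothetical, and _move_and_aspirate stands in for the real machine command:

```python
import asyncio
from typing import List, Sequence


async def _move_and_aspirate(channel: int, well: str, volume: float) -> str:
    # Placeholder for the real machine command (assumed API).
    await asyncio.sleep(0)
    return f"channel {channel}: aspirated {volume} uL from {well}"


async def aspirate(wells: Sequence[str], volumes: Sequence[float],
                   use_channels: Sequence[int]) -> List[str]:
    """Handle one channel at a time by zipping operations with channels."""
    results = []
    for well, volume, channel in zip(wells, volumes, use_channels):
        results.append(await _move_and_aspirate(channel, well, volume))
    return results
```

A backend with smart motion planning could instead gather the per-channel coroutines and run them concurrently, which is roughly the "independent pipettes working in parallel" behavior described above.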

Also, when I get to integrating the multi-channel tool (1 yr from now prolly) I’ll try to figure out how non-independent multi-channel pipettes should work in this scenario. Do you have any forethoughts on this? Maybe it is just each backend’s business.

The only real itch I have is around the “portability” of use_channels between backends. What will someone reusing a PLR protocol on a different robot need to figure out? Can/should it be made simpler for them somehow?

Best!


Yes. The OT hardware supports this, but we don’t support this in PLR yet. I was thinking that we can have the backend raise an error if the requested aspiration locations are not in the 9mm equal-space configuration that is supported by the hardware.
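That check could look something like this sketch. The 9 mm pitch comes from the hardware description above; the function name and tolerance value are assumptions:

```python
from typing import Sequence

CHANNEL_PITCH_MM = 9.0
TOLERANCE_MM = 0.1  # assumed acceptable deviation


def check_equal_spacing(y_positions: Sequence[float]) -> None:
    """Raise if the requested aspiration locations are not on the fixed
    9 mm pitch that a non-independent multi-channel head supports."""
    ys = sorted(y_positions)
    for a, b in zip(ys, ys[1:]):
        if abs((b - a) - CHANNEL_PITCH_MM) > TOLERANCE_MM:
            raise ValueError(
                f"locations spaced {b - a:.2f} mm apart; hardware requires "
                f"a fixed {CHANNEL_PITCH_MM} mm pitch"
            )
```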

They would need to figure out which channels to use. I think in the vast majority of cases use_channels is something that should not be specified by the user, so that each backend may automatically infer what the optimal pipette for an operation is.

If the user is specifying channels, which to be clear is totally legit, they are likely optimizing a protocol for a specific (group of) robots. In that case, automatic translation is not doable, or at least not easy.

From PLR, it would be nice if similar-functioning robots like STAR and EVO supported selecting channels in the same way for maximizing interoperability. For example, both should specify them as integers with 0 being the backmost channel.

So users choose: inferred default option with high interoperability or high control at the cost of interoperability. Seems optimal to me, but happy to discuss as always.


Will do the same then.

Yep good enough, thanks!


Hi again! I bring minor updates.

I’m working to properly bridge the UI’s definition of resources to PLR. Coordinates are now not just placeholders, making the UI’s decks potentially useful to robots not controlled by our wonderful software stack.

Most changes are boring, so here is a really cute ASCII summary of one of my workspaces, for the summary method:


[screenshot: ASCII summary of the deck]

Next up is some roadmapping around a PLR → Pipettin bridge. Atomic actions have already been bridged but deck objects have not. I think this means that the next step is a PLR → Pipettin labware translator.

In other news, we’re close to shipping the very first pair of hardware kits for the pipettin-bot, our open source liquid handler. One will end up in the Acceleration Consortium. :slight_smile:

Anyone interested in getting or making one can raise their hand.

It’s crazy that this thread will turn two by the end of the year. :birthday:


Exciting to see! Let me know where I can help

Well after some work, I got somewhere interesting: the deck can now be imported from the UI/backend’s format into PLR, the deck can be used, and its serialisation can be converted back to a UI-friendly format.

:partying_face:

The bad news is that some rough spots became apparent, which I’d like to discuss or ask about. These are mostly due to differences in patterns and data schemas, within and between the two projects.

  1. Question: are the current liquid volumes in well/tube/… serialised with the deck? I kind of hacked my way through this one, by adding the tracker to some serialize methods. It seems important that the whole state is in the serialisation, but I could not find the volumes in it.
  2. Request: Everything in PLR seems to be a resource, except for Tips. Would it be possible to refactor the Tip class into a resource?
  3. Question: PLR tip racks, well-plates, and tube racks have “spots” for resources, but only TipRacks define them. I have hacked together a TubeSpot class for my purposes, and for fun. Would you consider adding “spots” to the other main classes like Plate, TubeRack, and maybe others?
  4. Request: I would love to have the parameters passed to create_ordered_items_2d (et al) show up in the serialization of an ItemizedResource. I’m not sure how this should be implemented, if at all. The issue is that my alternative is to hack the parameters out of hardcoded locations of resources. I use this data to define and display grid-like labware.

I’d like to note that points 2 and 3 relate to differing patterns across similar situations: tips and tubes are similar, and tip racks, tube racks, and well-plates are very similar too. Perhaps unifying them further would simplify the code (and writing code for it :slight_smile:) and help normalize PLR’s data.

If this is interesting, here is my attempt at classes for Tubes, TubeSpots, and TubeRacks, as a hopefully illustrative example: pylabrobot/resources/pipettin/tube_racks.py · d745c840c0401f670fffb1db923524d9dde9e32c · Open Lab Automata / Forks / pylabrobot · GitLab

Best!
nico


Great news! It sounds like you are using two data formats, one from PLR and one for the UI? My goal is to make PLR’s serialization so good that you don’t need a separate format.

These are actually state of the deck, and serialized separately by Resource.serialize_state (just one resource) and Resource.serialize_all_state (for all children as well). The reason behind this is that often you preserve the deck between experiments, but the state differs from moment to moment. Having them in the same file would make source control and state management harder.
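The definition/state split can be illustrated with a toy container. This is not PLR’s actual implementation, just the pattern: the static definition goes in one serialization (safe to keep under source control and reuse across experiments), while the mutable state is serialized separately:

```python
class Container:
    """Toy illustration of splitting definition from state."""
    def __init__(self, name: str, max_volume: float):
        # Static definition: does not change during an experiment.
        self.name = name
        self.max_volume = max_volume
        # Mutable state: changes from moment to moment.
        self.current_volume = 0.0

    def serialize(self) -> dict:
        # Deck definition only; reusable between experiments.
        return {"name": self.name, "max_volume": self.max_volume}

    def serialize_state(self) -> dict:
        # Per-experiment state, saved to a separate file.
        return {"current_volume": self.current_volume}
```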

They are technically a “resource” you use in experiments. In PLR, Resource is anything with a location on the deck and part of the resource tree. Since tips are moved around all the time and discarded, it did not make sense to me to make them Resources. We have TipSpot instead, as children of TipRack, because they are actually fixed on the deck. This is needed when specifying a drop location for tips: lh.drop_tips(my_tiprack["A1"]) would not work if my_tiprack["A1"] is empty. We do this with CarrierSite on Carriers as well. It’s a bit of a hack, but the best I was able to come up with.
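The spot pattern itself is simple to sketch: the spot is a fixed member of the resource tree, and its occupant comes and goes, which is why a drop location exists even when the spot is empty. Toy code, not PLR’s implementation:

```python
from typing import Optional


class Spot:
    """A fixed location on the deck whose occupant may come and go."""
    def __init__(self, name: str):
        self.name = name
        self.occupant: Optional[object] = None

    def place(self, item: object) -> None:
        # Refuse to stack two items in one spot.
        if self.occupant is not None:
            raise ValueError(f"{self.name} is already occupied")
        self.occupant = item

    def remove(self) -> object:
        # The spot persists after its occupant is taken away.
        item, self.occupant = self.occupant, None
        return item
```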

What is the reason you want this? (often new use-cases reveal better insights for program design!)

For TubeRack, probably, because tubes are sometimes moved around. Do you want to PR this?

For Plate, I think it’s fine to stick with Well. (There is no moving around wells)

This location is already encoded by children (in their location). This is the most flexible approach. Storing the information would only work for regular labware, introducing two cases of 1) regular and 2) irregular labware. Simpler to treat everything as irregular imo. I see where you’re coming from though.

What you could do is implement a convenience feature get_dx_from_itemized_resource. It can even go in itemized_resource.py imo. Raises an error when resource is irregular. By computing this when requested, we can keep storing this information in only one place.
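A sketch of such a helper, inferring the x pitch from child locations and raising for irregular labware. The name and signature are hypothetical, chosen to match the convenience feature suggested above:

```python
from typing import Sequence


def get_dx_from_item_locations(x_positions: Sequence[float],
                               tolerance: float = 1e-6) -> float:
    """Recover the x pitch of a regular grid from its children's x
    locations. Raises ValueError when the labware is irregular, so the
    pitch is only ever stored implicitly, in the child locations."""
    xs = sorted(set(x_positions))
    if len(xs) < 2:
        raise ValueError("need at least two distinct columns to infer a pitch")
    dx = xs[1] - xs[0]
    for a, b in zip(xs, xs[1:]):
        if abs((b - a) - dx) > tolerance:
            raise ValueError("irregular labware: item spacing is not constant")
    return dx
```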

I tried to do that with ItemizedResource. Anything that can be moved from Plate&TipRack&TubeRack into there, should be moved.


Hi thanks for the quick reply!

I’ll start with a comment and then come back with actually productive content.

I don’t think that this will happen, but not because of how I judge its fairness. That kind of thing only happens if a lot of people use it, something that can happen independently of how good a “standard” is.

I imagine that the criteria you took when making decisions on the data structure (Tips are not resources, plates have no spots, and so on…) came from looking at the labware, and looking at people using it.

This makes sense from a user/lab-person point of view, and is important for designing the front-end: things should have the properties that are evident from looking at stuff.

However, from a developer’s point of view, this feels undesirable. The back-end stuff, which is what I’m aiming at, should be designed for development, not for users.

Having the same data structure across all objects makes more sense: fewer edge cases, less room for error, less code to write, etc.