Version 6.3.1, Pulling my hair out

The other day I upgraded from Opentrons version 6.2.1 to version 6.3.1, and I was surprised to see a few “features” implemented.

All of these issues have to do with how calibrations are handled. When loading a protocol on the protocols tab, even if you’ve loaded the same protocol before, labware offsets are not saved.

Example: you load a media exchange protocol and calibrate the labware offsets.
Then you load the exact same protocol again: no offsets are present.
If you run a similar protocol (not the same one) with labware in the same positions: no offsets are present.

The only way to actually apply labware offsets from a previous run is to go to the robot tab, scroll down, click on a protocol that has offset data, and press “re-run”.

Funnily enough, downgrading to 6.2.1 fixes this.

Previously, in version 6.2.1, when loading a protocol with a labware in the same slot, no matter the protocol, labware offsets would be applied, which makes sense. If the same labware is in the same slot, why would the offset change between protocols? To make this more mind-boggling: only the previous 20 runs are saved!


Source: How Labware Offsets work on the OT-2
In my opinion: They don’t

So let’s pretend that a scientist completes 20 runs (or messes up and cancels a few of those, because canceled runs are included). Their labware offset data for run number 21 is gone!

Annoyingly, Opentrons doesn’t mention this in their patch notes for 6.3.1.

Opentrons seems to constantly be messing around with how they calibrate, and it’s driving me insane. Why can’t we just have a simple dictionary that looks like Robot[“Pipette S/N”][“Labware”][“Slot”] = x y z offsets? I would love to have access to such a dictionary, since, after all, the spirit of Opentrons is open source.
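To make the shape concrete, here is a minimal sketch of the dictionary I mean. Every name in it is hypothetical; this is not an actual Opentrons API, just the data structure I wish existed:

```python
# Hypothetical offsets store: robot -> pipette serial -> labware -> slot
# -> (x, y, z) offset in mm. None of these names are real Opentrons APIs.
offsets = {}

def set_offset(robot, pipette_sn, labware, slot, x, y, z):
    """Record a calibration offset for a labware/slot combo."""
    offsets.setdefault(robot, {}) \
           .setdefault(pipette_sn, {}) \
           .setdefault(labware, {})[slot] = (x, y, z)

def get_offset(robot, pipette_sn, labware, slot):
    """Look up an offset; fall back to (0, 0, 0) if never calibrated."""
    return (offsets.get(robot, {})
                   .get(pipette_sn, {})
                   .get(labware, {})
                   .get(slot, (0.0, 0.0, 0.0)))

set_offset("OT2-A", "P300SN123", "corning_96_wellplate", "5", 0.1, -0.2, 0.3)
```

Nothing more than that: one lookup per pipette/labware/slot combination, persisting until someone recalibrates.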

In conclusion: either there’s some kind of bug Opentrons needs to address here, or Opentrons is taking a step backwards to the dark days of OT-2 v5.0. I’d seriously love a response from the team (maybe @ethan_jones?) about why calibrations can’t just be a simple dictionary like I mention above, and about the thought process behind changing how calibrations work all the time.

PS: Personally, I found how it was in v4.7 to be the best, but that version of the app lacks some other nice features that have since been implemented. (We even have 3 machines still on that version.)


Interesting. They changed the way calibrations are performed for the Flex; I wonder if this update caters more to the needs of the Flex while inadvertently breaking some of the OT-2’s functionality.

Hey Joe,

Thanks for tagging me, and I apologize for the frustration caused by our calibration stack. I need to look further into the issue of labware offsets not showing up when rerunning protocols. It looks like you started a ticket with us yesterday, so I will let Alex take the lead on that. We can then circle back in this thread once we pinpoint the issue, to prevent others from running into the same problem. You might have uncovered a genuine bug.

The robot should store offset data based on the labware model and slot combination. So, for those 20 runs, you should be able to apply the labware offsets if they use similar labware and slot combos. Labware offset values will not be saved if all protocols have different combos. Some of the reasoning is based on memory concerns (for the OT-2 specifically), and some is based on research we conducted to understand the number of protocols people run. Your feedback is valuable, though. As someone on the support team who uses the OT-2 regularly, I understand how these significant changes can interrupt your workflows.

While 4.7 was a simpler system, the foundation it was built on was much more challenging to develop on. As Luis mentions, we are introducing more complex machines that think about labware positions more dynamically. The changes we are making allow us to maintain calibration systems for both machines.

Your request to be able to grab labware offset data more dynamically in a run makes sense to me. Currently, you can add labware offset data to specific protocols for specific labware, which is covered in our API docs. All of this has to be done before the protocol is imported/uploaded. Let me get back to you to see if there are other options.

Hey there @ethan_jones, thanks for your response and for looking into the bug in 6.3.1. Simply being able to increase the stored runs from 20, along with resolving this bug, should help us a lot. That being said, I wanted to share some thoughts:

  1. Currently, you can add labware offset data to specific protocols for specific labware, which is covered in our API docs (Advanced Control — Opentrons Python API V2 Documentation)

Yes, I see it in the docs, but this requires running code in Jupyter. From the article:

All positions relative to labware are adjusted automatically based on labware offset data. When you’re running your code in Jupyter Notebook or with opentrons_execute, you need to set your own offsets because you can’t perform run setup and Labware Position Check in the Opentrons App or on the Flex touchscreen. For these applications, do the following to calculate and apply labware offsets:

I don’t see scientists who don’t usually work with code setting offsets in Jupyter. It takes too much time, and they would be turned off from using the robot for their research. A lot of the charm of an OT-2 is the simplicity it offers scientists, and this feels like a step back.

  2. Opentrons support mentioned the following in an email to me, in line with what you say about memory concerns:

In terms of why it’s only 20, that’s due to storage space on the OT-2. The Raspberry Pi doesn’t have a lot of storage space in addition to our operating system. The Opentrons Flex can take a lot more protocols (to the point where this would be a non-issue).

Is this space concern because of the offsets dictionary? I would think those are quite small files. If space is an issue, is it an option to store this app-side (on the computer the app lives on)? I still feel like a dictionary solution wouldn’t take much storage space; I’m eager to hear more on the specifics of this!
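As a back-of-the-envelope check on the “quite small files” claim (the field names below are illustrative, not the real LPC schema), 100 offset records serialized as JSON only come to a few kilobytes:

```python
import json

# Build 100 fake offset records shaped roughly like labware offset data:
# a labware identifier, a slot, and an x/y/z vector. Field names are
# illustrative only, not the actual Opentrons storage format.
records = [
    {"definitionUri": f"opentrons/labware_{i}/1",
     "location": {"slotName": str(i % 11 + 1)},  # OT-2 has slots 1-11
     "vector": {"x": 0.1, "y": -0.2, "z": 0.3}}
    for i in range(100)
]

size_bytes = len(json.dumps(records).encode("utf-8"))
print(f"~{size_bytes / 1024:.1f} KiB for 100 offsets")
```

On my math that lands in the low tens of kilobytes, which seems negligible even on a Raspberry Pi.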

  3. And some is based on research we conducted to understand the number of protocols people run.

Good to hear about the research! At least in our case (and maybe that is different for other users), 20 runs isn’t enough, especially if cancelled runs are included in these 20. We go through that many runs in a couple of days on each of our robots.
What happens regularly is that a scientist fills something out wrong in the Python file, they cancel the protocol, and the protocol now counts as one of those 20. Somebody can run something on Monday, try to run it again on Friday, and have their offset accidentally deleted because 20 other protocols have been run or cancelled in between. This makes running the robot annoying and error-prone each time, and makes results irreproducible across OT-2 runs of the same protocol.

Labware offset values will not be saved if all protocols have different combos

As I mentioned, this is indeed how I understand it is supposed to work, but this is not how it currently works in 6.3.1. In 6.2.1 it does function as you described. Thank you for looking into the bug.

  4. While 4.7 was a simpler system, the foundation it was built on was much more challenging to develop off of. As Luis mentions, we are introducing more complex machines that think about labware positions more dynamically. These changes we make are allowing us to have calibration systems for both machines.

I am glad to hear that the API is supposed to work similarly across both the Flex and the OT-2. But it is very frustrating to do a software update and find former features working differently without any mention in the release notes. I can understand you need to focus on the Flex, but please don’t forget about all the users that already have OT-2s.
Is there a plan to deploy the new dynamic calibration system soon, and also for the OT-2R systems? Otherwise, we feel forced to implement our own solution and downgrade the server version, which isn’t ideal since it means we lose the nice features in anything above 4.7. We also lose out on future Opentrons updates.
Looking forward to seeing how 6.3 could still work for us and to continuing to identify and resolve bugs. I wanted to share some feedback that will hopefully help more Opentrons users.


Hey Joe,

  1. LPC is the recommended method for calibration when running protocols via the app. We advise against using the set_offset function in the app, as it could create a situation where the customer applies the LPC data in addition to the values in the set_offset function. I agree with this advice: performing LPC in the app causes less confusion and reduces the risk of movement error. If you were to skip LPC, I do not see how the set_offset function in the protocol would cause a movement issue, though this is not something I have had the chance to test myself.

  2. Yes, memory is a concern for us, unfortunately. The concerns go beyond LPC data; we have seen limited ability for Python modules installation and our logging system. If you are not using different labware and slots in those 20 protocols, you should not have offset data deleted. I would be annoyed if I were running into situations where LPC data is deleted every few days, and I am required to redo the calibrations. It is not an ideal experience for anyone and is an area we need to improve. I will make sure the product managers see your comment, and we develop a better solution.

  3. The calibration system for the OT-2R will not become as dynamic as the Flex’s, as there is a significant hardware difference. The dynamic part will be the ability to make deck-state modifications during a protocol. On the Flex, this will be handled by the gripper, but on the OT-2R it will have to be done manually. We are not forgetting about OT-2 users; we plan to develop and support the platform for the foreseeable future.

I appreciate all your feedback!

Hi Ethan, thanks for your continued responses.

  1. Yes, memory is a concern for us, unfortunately. The concerns go beyond LPC data; we have seen limited ability for Python modules installation and our logging system.

We have had various Python modules installed on our Opentrons units for the last year and a half and have actually encountered no issues, so that’s good news.

If you are not using different labware and slots in those 20 protocols, you should not have offset data deleted.

Indeed, I do understand this. However, we do a lot of development, testing, and various protocol runs. Labware flies on and off our deck like a hummingbird at a soda fountain.

I would be annoyed if I were running into situations where LPC data is deleted every few days, and I am required to redo the calibrations. It is not an ideal experience for anyone and is an area we need to improve.

This is not only an annoying issue; it is an issue that leads to irreproducible data across runs. Scientists must manually calibrate each machine by eye each time an offset is removed, leading to inconsistent pipetting in specific steps that require precision.

Right now we’re still leaning towards downgrading, but it would be fantastic if we ourselves could just change these 20 saved calibrations to, say, 100, and then the memory issues are our problem and not yours. I think we have the space on the OT-2.

Is there any way to store calibration data externally and then reload it when necessary if the standard storage location is getting wiped?

The Opentrons software is completely open, so it should be quite possible to access this data. I’ve messed around a lot with the interface code, so I might be able to point you in the right direction. If this is such a production-critical area, then it could be worth learning some of the codebase.


@Shinedalgarno

I will make sure the 20 protocol limit is revisited. In an ideal world, there is no limit to the number of protocols run with saved LPC offset data. For now, though, my suggestion would be:

  1. Delete the canceled runs if a mistake happens. The deleted runs should not count against the 20 protocol LPC data limit. Hopefully, that reduces the number of times you must redo the labware position check.
  2. Test alternative ways to run protocols outside the app. Without knowing how your protocol creation software works or how all the different labware/protocols work on your OT-2, our HTTP API might be a good option.

I also now advise against my earlier recommendation of using the set_offset() function in a Python protocol loaded via the app. After discussing this further internally, it would not be a long-term solution. I apologize for the confusion.

@Stefan

The HTTP API is a good starting point and has some pages on labware calibration management. You should be able to pass external offset data to the labware files!
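A rough sketch of what that could look like, assuming the run endpoints accept a labwareOffsets list when a run is created (please check the HTTP API docs for the exact request schema; the robot address below is a placeholder):

```python
import json
import urllib.request

ROBOT = "http://OT2-IP-HERE:31950"    # placeholder: your robot's address
HEADERS = {"Opentrons-Version": "*",  # "*" asks for the latest API version
           "Content-Type": "application/json"}

def extract_offsets(run_payload):
    """Pull the labwareOffsets list out of a GET /runs/{id} response body."""
    return run_payload.get("data", {}).get("labwareOffsets", [])

def archive_offsets(run_id, path):
    """Fetch a run and archive its offsets to a local JSON file."""
    req = urllib.request.Request(f"{ROBOT}/runs/{run_id}", headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    with open(path, "w") as f:
        json.dump(extract_offsets(payload), f, indent=2)

def create_run_with_offsets(protocol_id, path):
    """Create a new run, reapplying the archived offsets (schema assumed)."""
    with open(path) as f:
        offsets = json.load(f)
    body = json.dumps({"data": {"protocolId": protocol_id,
                                "labwareOffsets": offsets}}).encode()
    req = urllib.request.Request(f"{ROBOT}/runs", data=body,
                                 headers=HEADERS, method="POST")
    return urllib.request.urlopen(req)
```

Archiving the offsets app-side like this would also sidestep the 20-run storage limit entirely.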


@Shinedalgarno

Thank you for your post.
Everything you’ve said resonates perfectly with us as well. We’ve found that v4.7.0 works best for us even though it is missing newer features, so most of our OT-2s are on that version. Every software iteration since then has either had a few more bugs than it should, had features removed, or changed methodology.
Some important issues we have faced so far were the removal of pipette pairing and of calibrating from the bottom. However, the most important issue we had, and still have, is the 20-run limit you’ve mentioned, which frankly is not acceptable.
We run multiple different protocols throughout the week, so the only solution was to add plate.set_offset(x=0, y=0, z=0), which has also been deprecated for iterations succeeding apiLevel ‘2.13’.

@Stefan
That is an interesting idea; hopefully it gets implemented.


Hi @ethan_jones

Just posting here so the internet, and our AI chatbot overlords, can search the answer too.

Is there a location where these calibration runs are stored on the Opentrons? Do you have the filepath? We were thinking of making a script to simply clear the cancelled runs.

Cheers!


Hey @Shinedalgarno, thanks for your patience. I was OOO for the holidays.

I want to make sure I understand your request: you want to know where and how the LPC data associates itself with a run, so that the data for a canceled run can be deleted without impacting the 20-protocol run limit?

Yeah, we were thinking of just going into the software itself and deleting them there; then we could write a script to do it programmatically instead of using the autoclicker we developed. Or even just downloading and archiving the data ourselves so recalibration is easier.

I talked to my software team, and they suggest utilizing the robot’s HTTP API, specifically the run management endpoints. You should be able to use GET /runs to list all the runs with "status": "stopped" (or any other statuses you want), and then use DELETE /runs/{run_id} on each of those.
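A minimal sketch of that cleanup (the robot address is a placeholder, and the response shape here is assumed from the description above; double-check it against the HTTP API docs):

```python
import json
import urllib.request

ROBOT = "http://OT2-IP-HERE:31950"    # placeholder: your robot's address
HEADERS = {"Opentrons-Version": "*"}  # "*" asks for the latest API version

def stopped_run_ids(runs_payload):
    """Return the IDs of runs whose status is 'stopped' (i.e. canceled)."""
    return [run["id"] for run in runs_payload.get("data", [])
            if run.get("status") == "stopped"]

def delete_stopped_runs():
    """List all runs via GET /runs, then DELETE each canceled one."""
    req = urllib.request.Request(f"{ROBOT}/runs", headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    for run_id in stopped_run_ids(payload):
        del_req = urllib.request.Request(f"{ROBOT}/runs/{run_id}",
                                         headers=HEADERS, method="DELETE")
        urllib.request.urlopen(del_req)

if __name__ == "__main__":
    delete_stopped_runs()
```

Run on a schedule (or right after a cancel), this would keep canceled runs from eating into the 20-run LPC limit.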


Nice, Ethan! We’ll give it a shot. Thanks for responding!
