Peer review of shared liquid handler protocols

A LinkedIn post from Kelcy Newell got me thinking about the mechanisms for publishing liquid handling protocols. There aren't many options. Most journals are focused on novel biology and methods, but utility and practicality are good reasons to share that don't always align with novelty. And most journals publish papers as linear text documents, which aren't a good fit for code and processes that span many files of different types.

One option that my team is exploring is GitHub. Some folks at Calico are using GitHub as an open publishing platform that lets them host all of the ancillary files & data, and generate all the figures, movies, links to other info, etc. that they want. They also get to decide for themselves the thresholds of novelty and/or utility that warrant publishing.

Maybe scientific publishing and sharing protocols are two fundamentally different things, with different motivations, criteria, and mechanisms - and I shouldn't try to conflate the two. But one thing they both share is an intent to demonstrate quality and competence. Peer review is how publishing does that - and Kelcy rightly pointed out that it can be lacking when someone simply drops a bunch of files into a GitHub repo and makes it public.

To that end (and sorry for the long preamble), I was wondering if any VWorks & Bravo experts would be interested in reviewing and commenting on Calico's RNAseq protocols, which we'd like to share via a public GitHub repo. I can set up a VM instance that has everything installed and configured. It wouldn't be connected to any hardware, but you could run the protocols in simulation and dig into all of the method files & embedded logic. I've also been thinking a bit about what criteria are appropriate for evaluating the quality of a liquid handling protocol, and anyone's thoughts on that would be welcome.

Send me a message or comment here if you’re interested. Thanks a lot.

7 Likes

Full disclosure: I review papers, and I'm surprised by the number of papers these days that include some kind of code, whether advanced or super basic. It's been amazing to see the slow but steady progression. And yet the reality is that not all of that code is tested.

Furthermore, take something that's a bit more niche like lab automation and you'll find fewer reviewers who can spin up VMs with the exact versions of software and the exact hardware drivers.

I recently hosted an AI Lab Automation Hackathon in San Francisco as part of Deep Tech Week, and one thing that stood out to me is that the amount of context we miss out on with standard scientific protocols is staggering. But seeing an LLM try to reason through an experiment on an Opentrons Flex, I felt a bit of hope. Maybe LLMs are our best chance at standardizing language, behavior, intent, and context, because it's almost impossible to capture all of the above in a reliable manner. It's even tougher to replicate with efficiency and trust. And it's damn near impossible to standardize how we program our systems and build our labs.

3 Likes

And I'd argue that's less than half the problem. A lot of the details are in the liquid class definitions, pipetting techniques, and properly defined labware. Give me those but not the protocol structure and we'll probably end up closer to the same result than vice versa. But beyond attaching a backup file of the automated system to a paper, how will we get there?

Everybody is always talking about AI being able to write protocols from scratch to replace humans, but until AI can pause a method during an aspirate step and jiggle that plate to make sure you really are pipetting 0.5 mm from the bottom of the plate, I'm not going to be too worried.

2 Likes

Do you currently pause methods and jiggle them manually?

Why couldn't a tilt module or a gripper finger accomplish something similar without the need to pause the system?

When developing a new protocol that we haven’t run before? Absolutely. Especially if the labware is also new. I don’t mean running a protocol that is already established.

Yeah I guess it depends on your code but I can for sure tell AI to generate protocols with pauses and to Slack me when it hits that pause so I can visually inspect it.

I think anyone who builds their workflows in a programmatic way can do that today.
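On an Opentrons, for example, it's basically just a pause call plus a webhook ping. Rough sketch below - the labware, volumes, and webhook URL are placeholders, and it assumes the robot can reach Slack and has the requests package available:

```python
import requests  # assumes requests is available on the robot/controller
from opentrons import protocol_api

metadata = {"apiLevel": "2.15"}

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder URL


def notify(msg):
    # Best-effort Slack ping; swallow network errors so the run isn't blocked.
    try:
        requests.post(SLACK_WEBHOOK, json={"text": msg}, timeout=5)
    except Exception:
        pass


def run(protocol: protocol_api.ProtocolContext):
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", "1")
    plate = protocol.load_labware("corning_96_wellplate_360ul_flat", "2")
    p300 = protocol.load_instrument("p300_single_gen2", "right", tip_racks=[tips])

    p300.pick_up_tip()
    p300.aspirate(50, plate["A1"].bottom(0.5))  # 0.5 mm above the well bottom

    notify("Paused at the aspirate step - come check the tip height.")
    protocol.pause("Visually inspect tip height, then resume from the app.")

    p300.dispense(50, plate["B1"])
    p300.drop_tip()
```

In simulation the pause just gets logged, but on the robot it waits for you to hit resume in the app.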

I agree that the physical configuration details matter a lot. I'm trying to compile all of the files that matter for these VWorks protocols (.pro, .dev, .reg, .vzp, etc.). It won't cover any hardware or real-world quirks, but it seems like the best one can do.

Regarding testing - that was why I thought a VM would be a good way to let others test it. I'll install everything on a VM, make sure it works, then take a snapshot. I'll give anyone who wants to review the protocols access to the VM with that snapshot cleanly loaded. Obviously this only covers the logic and quality of the scripts, but we've also got internal validation data that I'll share to show the protocols do actually work for generating libraries.

Are those file formats just alternative XML or ZIPs?

No, the point is that a human is still involved. If you believe the hype, humans will be taken out of the equation completely by AI, even for the nitty-gritty of lab automation.

I don't disagree - humans will remain in the loop, especially in regulated environments.

With that said, there are groups working on some level of autonomy today (for example, SDLs for Material Discovery or Digital Chemistry), and those challenges & victories are well documented.

Interestingly, we had a group at the hackathon that was able to build a self-correcting closed loop.

They recorded a conversation on an iPhone, had the audio transcribed to text, and then had an AI summarize it. Then, using an Opentrons Flex and the Opentrons AI protocol generator, they asked the AI to generate a protocol for the experiment. They ran it on the Flex, took the plate to an imaging device for analysis, and had the AI analyze those images; when the AI pointed out what went wrong, they asked it to rewrite the protocol to compensate for the error.

The AI tool was able to point out that something was off with their serial dilution, flag the areas where things may have gone wrong, and then rewrite their Python code to try to fix that error.
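In pseudocode, the loop looked roughly like this - every function below is a throwaway stand-in (the transcription, LLM, protocol generator, robot, and imager calls would be whatever tools you wire in), just to show the shape of it:

```python
# Skeleton of the closed loop from the hackathon demo, with canned stubs so it
# actually runs. None of these function names are real APIs.

def transcribe(audio):            # e.g. speech-to-text on the iPhone recording
    return "dilute sample 1:10 across the plate, then measure absorbance"

def summarize(notes):             # LLM summary of the intended experiment
    return {"goal": "serial dilution", "notes": notes}

def generate_protocol(plan, feedback=None):   # natural language -> Python protocol
    return f"# protocol for {plan['goal']} (feedback applied: {feedback})"

def run_on_flex(protocol):        # execute on the Opentrons Flex
    print("running:", protocol)

def image_plate():                # move plate to the imager, capture results
    return ["plate_image.png"]

def analyze_images(images, plan): # LLM critique, e.g. flags a bad dilution step
    return {"looks_good": False, "issues": "row A dilution off by ~2x"}

def closed_loop(audio, max_iterations=3):
    plan = summarize(transcribe(audio))
    protocol = generate_protocol(plan)
    for _ in range(max_iterations):
        run_on_flex(protocol)
        review = analyze_images(image_plate(), plan)
        if review["looks_good"]:
            break
        # Feed the critique back in and let the LLM rewrite the protocol.
        protocol = generate_protocol(plan, feedback=review["issues"])
    return protocol

closed_loop("recording.m4a")
```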

Hype or not, that’s pretty damn cool.

1 Like

Have you seen this?

I wonder if a tool like this could be leveraged for protocol translation across multiple vendor formats.

Are those file formats just alternative XML or ZIPs?

The .pro's and .dev's are just renamed XML. That's actually pretty useful for direct editing, or for scripting edits.
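For anyone who wants to poke at them, here's a rough sketch of what a scripted edit can look like. The tag and attribute names below are placeholders - the actual schema depends on your VWorks version, so dump your own file's structure first and work on a copy:

```python
# Minimal sketch for inspecting/editing a renamed-XML VWorks file (e.g. a .pro).
# "Volume" is a placeholder attribute name - check what's really in your file.
import xml.etree.ElementTree as ET

tree = ET.parse("MyMethod.pro")   # .pro is plain XML under the hood
root = tree.getroot()

# Dump the structure so you can see what's actually in there.
for elem in root.iter():
    print(elem.tag, elem.attrib)

# Example bulk edit: bump every attribute literally named "Volume" from 50 to 55.
for elem in root.iter():
    if elem.attrib.get("Volume") == "50":
        elem.set("Volume", "55")

tree.write("MyMethod_edited.pro", xml_declaration=True, encoding="utf-8")
```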

Unfortunately, VWorks also keeps a bunch of stuff in the registry (teachpoints, labware, liquid classes). You've got to be careful importing those, as it's easy to overwrite your local profiles with a registry import. The .vzp is essentially a ZIP of everything, and VWorks has an import function that helps you avoid conflicts.

If I were trying out an unknown protocol or reviewing an externally generated one - I’d go the VM route to avoid messing with our working configurations.

I wonder if a tool like this could be leveraged for protocol translation across multiple vendor formats.

Briefly does look useful - I've chatted with them a few times. I think it would be quite useful for iterating on a natural-language protocol until you've got enough info for an AI-generated one.

1 Like

Good conversation, but to steer back to the original post:

Does anyone want to test run and provide feedback on RNAseq library prep protocols, developed on VWorks & Bravo?

The end goal would be to publish the protocols (and all other necessary files) in a public GitHub repo. The feedback and experimental validation data would also be included so that others would have some confidence that the protocols do work as intended.

a phenomenal idea!

+1 for a GitHub repo. There is massive value in having repositories of common protocols, and I predict that a number of big winner repos will eventually emerge. Peer review would happen by people just running the protocols and making pull requests (similar to PLR) - more of a continuous process of improvement than a final 'this is the completed protocol' (as with traditional publishing).

If the protocols in a repo are structured to form building blocks for bigger protocols (like a miniprep), in addition to running standalone, that will increase the utility of the repo and the chance of it becoming the de facto implementation people point at.
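Something like this is what I have in mind - plain Python just to show the shape, with the handler class as a dummy stand-in for whatever API actually drives the deck (none of these calls, volumes, or ratios are real):

```python
# Sketch of "building block" protocols that compose into bigger ones.
# LiquidHandler is a dummy stand-in for whichever API you actually drive
# (PyLabRobot, VWorks, Opentrons, ...) - no real calls here.

class LiquidHandler:
    def transfer(self, volume_ul, source, dest):
        print(f"transfer {volume_ul} uL: {source} -> {dest}")

    def incubate(self, minutes, location):
        print(f"incubate {minutes} min at {location}")

# Reusable building blocks - each runs standalone...
def bead_cleanup(lh, sample_plate, ratio=0.8):
    lh.transfer(50 * ratio, "bead_reservoir", sample_plate)
    lh.incubate(5, "magnet")
    lh.transfer(40, sample_plate, "waste")

def elute(lh, sample_plate, volume_ul=20):
    lh.transfer(volume_ul, "elution_buffer", sample_plate)
    lh.incubate(2, "deck")

# ...and composes into a bigger protocol.
def rnaseq_cleanup_and_elute(lh, sample_plate):
    bead_cleanup(lh, sample_plate, ratio=0.9)
    elute(lh, sample_plate, volume_ul=22)

if __name__ == "__main__":
    rnaseq_cleanup_and_elute(LiquidHandler(), "sample_plate_1")
```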

3 Likes

so exciting!

(I don't have a Bravo, unfortunately)

1 Like

We have a bunch of Bravos but have no need for RNAseq. I can validate the protocol with water if you want?

We only do 96-well, but have both 96LT and 96ST heads. We also have complete standalone Bravos as well as the “Genomics Bravo” that has a BenchCel and Minihub integrated. Would be good to understand what sort of auxiliary devices will be needed for your protocol. Feel free to include those and we can go from there.

1 Like

Thanks a lot. Can you message me your email address and I’ll share the draft document that outlines all the requirements and methods?

I’m setting up a VM so that you can run it in simulation too. If you’re willing to do a dry or water run, that would be awesome. It uses a Bravo & BenchCel. We also have a magnet and chiller, but you could ignore those for a test run.

I'm working on some review criteria too; hopefully those will help structure any comments & feedback and make things a little easier. I'll post them soon.

Thanks again, hopefully this external review attempt adds some value and credence to the release. I am having some folks internally review everything too. Once that is done I’ll make the repo public.

2 Likes

I’ll PM you my address.

We also have a magnet and chiller, so no need to ignore those. One thing I've struggled with in the past is importing a method and getting it to run with very few changes, so let's try to work toward an export file that would let anyone with the correct hardware run the method.

It would be ideal if we could share these files publicly - do you think that will be possible? As in, do you have somewhere you can share them publicly? :slight_smile: