Each user has different needs.
Operators and many scientists without a programming background still need a GUI to program simple tasks (e.g. a serial dilution).
Software and automation engineers will prefer more granular control in Python, C#, or another language. Others in between will be comfortable with an old-school drag-and-drop IDE like VENUS, Biomek, or Evoware, plus some minor scripting for data handling.
The AI can definitely save time writing methods these days.
But you are still dealing with the many variables and the physics of liquid handling to fine-tune the results, and the human operator still needs to feed the observed results back to the AI after every test (e.g. a small droplet on the well's wall, a not fully resuspended solution, the tip slightly touching the bottom cell layer, etc.).
Like others mentioned, liquid handling vendors don't have the luxury of accessing their install base's data to train an AI, so community sharing via GitHub or this forum will be key to improving AI method generation in the coming years.
But the number of options on those moves and the number of ways those moves can be made is nearly infinite. It’s one thing to say you’re just going to move a liquid from here to there in the abstract. In reality you need to deal with the liquid classes, how fast will you draw up? How fast to dispense? Will you touch off? Will all the liquid end up in the well or is there any chance that a drop stays on the tip? Do you need to think about the viscosity? How will the conductivity of the liquid affect the liquid level detection? What about timing issues? Will this reaction go too far while we are waiting on the robot to set up the next step? I could go on and on.
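To make that combinatorial explosion concrete, here is a minimal sketch of the knobs a single transfer can expose. All names and fields are hypothetical, not any vendor's liquid-class schema:

```python
from dataclasses import dataclass

# Illustrative only: this is a made-up liquid-class record, not a real
# vendor format. Each field is one of the variables mentioned above.
@dataclass
class LiquidClass:
    name: str
    aspirate_speed_ul_s: float   # how fast to draw up
    dispense_speed_ul_s: float   # how fast to dispense
    air_gap_ul: float            # transport air gap to keep drops off the tip
    blowout_ul: float            # extra volume pushed out on dispense
    touch_off: bool              # touch the tip to the well wall after dispense
    viscosity_cp: float          # drives slower speeds and longer delays
    settle_delay_s: float        # wait after aspirating a viscous liquid

glycerol_50 = LiquidClass(
    name="glycerol_50pct",
    aspirate_speed_ul_s=20.0,    # slow for a viscous liquid
    dispense_speed_ul_s=30.0,
    air_gap_ul=5.0,
    blowout_ul=10.0,
    touch_off=True,
    viscosity_cp=6.0,
    settle_delay_s=1.5,
)
print(glycerol_50.name)
```

Even this toy record has eight interacting parameters per liquid per volume range, before timing, conductivity, and labware geometry enter the picture.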
If I wanted to be equally reductive about your specialty, I would just say: what is so hard about programming? You ask the LLM and it gives you the code; you need to know nothing. But to set up a protocol you need to know the science behind it. It is at least as involved as doing the same protocol on the bench by hand, and I don't know many programmers who could just jump into a lab situation and do that.
There’s a lot that you don’t realize that you don’t know.
I guess I just mean to say that there is a lot more to setting up a robotics protocol than just planning the pipetting steps. Anyone who has ever done that in real life knows that. Even when you have the nice graphical interfaces making it easy to plan the pipetting steps, there is still a LOT more to developing a method for a robot than just that.
This is a topic I’m quite interested in. We’re thinking about using AI agents or related techniques to evaluate liquid classes and build a pipetting parameter database that works across most applications.
Curious if anyone here has experience with this or any tips to share.
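One way to sketch such a parameter database: one row per (liquid, volume band), storing tuned parameters plus a measured CV so an agent can pick the best-validated entry for a given transfer. The schema and values below are entirely hypothetical:

```python
import sqlite3

# Hypothetical pipetting-parameter database: rows keyed by liquid and
# volume band, ranked by the CV measured during validation runs.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE liquid_params (
        liquid TEXT, vol_min_ul REAL, vol_max_ul REAL,
        aspirate_speed REAL, dispense_speed REAL, air_gap_ul REAL,
        measured_cv_pct REAL
    )
""")
conn.execute("INSERT INTO liquid_params VALUES ('water', 1, 50, 100, 120, 5, 1.2)")
conn.execute("INSERT INTO liquid_params VALUES ('water', 50, 300, 150, 180, 10, 0.8)")

def best_params(liquid, volume_ul):
    """Return the lowest-CV parameter set covering this liquid and volume."""
    return conn.execute(
        "SELECT aspirate_speed, dispense_speed, air_gap_ul FROM liquid_params "
        "WHERE liquid=? AND vol_min_ul<=? AND vol_max_ul>=? "
        "ORDER BY measured_cv_pct LIMIT 1",
        (liquid, volume_ul, volume_ul),
    ).fetchone()

print(best_params("water", 100))
```

The interesting part is not the lookup but how the `measured_cv_pct` column gets populated, which is where agent-driven evaluation could plug in.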
Someone at one of my hackathons built a database of sorts. It didn't win, but it received a special shout-out and was a very interesting approach to the problem.
There are also some interesting papers that have come out recently on the subject, and quite a few camera-driven solutions as well (which won't scale). I've also seen a few groups like Festo advertise products in this domain that look to be available to demo at SLAS.
If you're working with Hamiltons, then TADM (Total Aspiration and Dispense Monitoring) can be very helpful: TADM curves can show you not only whether an aspiration worked correctly but, if it failed, the specific way it failed (this requires some interpretation).
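As a toy illustration of reading failure modes off a pressure curve, here is a made-up threshold heuristic. This is not Hamilton's actual TADM algorithm, just the general shape of the idea (compare a trace against a known-good reference and flag deviations):

```python
# Toy heuristic, not Hamilton's TADM algorithm: classify an aspiration
# pressure trace by comparing it against a reference curve with bounds.
def classify_aspiration(trace, reference, tolerance=0.15):
    """trace/reference: equal-length lists of relative pressure samples."""
    deviations = [t - r for t, r in zip(trace, reference)]
    if min(deviations) < -tolerance:
        return "possible clog (pressure far below reference)"
    if max(deviations) > tolerance:
        return "possible air aspiration (pressure above reference)"
    return "ok"

reference = [0.0, -0.2, -0.4, -0.5, -0.5, -0.3, 0.0]
good      = [0.0, -0.21, -0.41, -0.52, -0.49, -0.31, 0.0]
clogged   = [0.0, -0.3, -0.7, -0.9, -0.9, -0.6, -0.2]

print(classify_aspiration(good, reference))     # "ok"
print(classify_aspiration(clogged, reference))  # flags a possible clog
```

Real TADM interpretation is richer than two thresholds, but even a crude classifier like this turns a raw curve into feedback an agent could act on.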
This is very helpful information, thank you!
We are using our own pipettes and test platform, so it may take some time to set up the testing environment, but it is definitely worth trying. I will share feedback once we have results.
We are using our own pipettes, and we plan to use the balance data as the main feedback. The pressure curve can of course be used as complementary information as well. Thank you for the suggestion.
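A balance-driven feedback loop of that sort can be sketched in a few lines. Everything here is simulated: `simulated_dispense` is a stand-in for real hardware that under-delivers by a fixed 5%, and the loop learns a correction factor from the measured mass:

```python
# Sketch of a gravimetric feedback loop. The fake pipette under-delivers
# by 5%; the loop scales the commanded volume until measured ~= target.
WATER_DENSITY_UG_PER_UL = 1000.0  # ~1 g/mL at room temperature

def simulated_dispense(commanded_ul):
    """Stand-in for hardware: returns dispensed mass in micrograms."""
    return commanded_ul * 0.95 * WATER_DENSITY_UG_PER_UL

def calibrate(target_ul, cycles=5):
    """Iteratively scale the commanded volume from balance readings."""
    correction = 1.0
    for _ in range(cycles):
        measured_ul = simulated_dispense(target_ul * correction) / WATER_DENSITY_UG_PER_UL
        correction *= target_ul / measured_ul  # proportional update
    return correction

factor = calibrate(100.0)
print(round(factor, 4))  # converges to 1/0.95, i.e. 1.0526
```

A real loop would average replicate weighings, account for evaporation, and correct per volume band rather than with a single scalar, but the structure is the same.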
Hey all. I'm loving the conversation here. We've been working on exactly this problem at Cornucopia Biosciences. The dream, of course, is to be able to feed the AI a protocol and have it do all of the method development instantly.
However, as pointed out, there are numerous hurdles to overcome before it is reasonable to start thinking about taking the human out of the loop. Part of that involves building custom RAG pipelines to build up the full context needed for any particular customer, or even a particular device.
The way we are thinking of it at the beginning is to use the agents to dramatically accelerate the development process. This allows one engineer to do more, more quickly. One example is, of course, de novo code generation. But I'm also talking with the agent as I'm doing the liquid transfer validations: "We are pinning on this plate; adjust all offsets for the current pipette up by 2 mm", and the agent runs through the entire method and makes the update.
The agents also enable non-coders to make small tweaks to methods without having to bring in an engineer, or requiring the engineer to code in a whole bunch of runtime parameters. Examples: "We need to move the reaction plate over by two carriers." "Reduce the sample input volume by half." It's all coming together remarkably quickly.
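The kind of bulk edit an agent would apply for the "adjust all offsets" request above can be sketched against a toy method representation. The step dicts and field names here are invented for illustration, not any real method format:

```python
# Hypothetical method representation: a list of step dicts. The agent's
# job reduces to a filtered bulk update over the steps.
method = [
    {"step": "aspirate", "pipette": "1000uL", "z_offset_mm": -1.0},
    {"step": "dispense", "pipette": "1000uL", "z_offset_mm": 0.5},
    {"step": "dispense", "pipette": "50uL",   "z_offset_mm": 0.2},
]

def adjust_offsets(steps, pipette, delta_mm):
    """Shift z_offset_mm by delta_mm for every step using this pipette."""
    for s in steps:
        if s["pipette"] == pipette:
            s["z_offset_mm"] += delta_mm
    return steps

adjust_offsets(method, "1000uL", 2.0)  # "adjust all offsets ... up by 2 mm"
print([s["z_offset_mm"] for s in method])  # [1.0, 2.5, 0.2]
```

The hard part the agent adds is mapping the natural-language request onto the right filter and field, not the edit itself.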
I've been doing exactly this for a while, especially using Claude Code. Working in the Hamilton VENUS ecosystem, having an AI-assisted workflow has been a big win for building reusable HSL libraries quickly. It's not "method development" in the traditional sense, but it is a really effective way to prototype features, optimize, and integrate things that don't exist yet out of the box.
I also built a small VS Code extension for HSL as a personal project (link below). Even just having syntax support plus fast navigation makes it much easier to treat HSL like a real software project (libraries, style consistency, code review, etc.).
One workflow that’s been surprisingly effective for QA is that every method you build in the VENUS Method Editor generates a parallel HSL representation. Feeding that generated HSL into an LLM is a great way to:
• sanity check intent vs. implementation
• spot edge cases and out of bounds conditions
• identify repeated patterns that should be factored into submethods or libraries
• improve readability and traceability
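The third check above (spotting repeated patterns worth factoring out) can be done with plain text tools before any LLM is involved. A minimal sketch, using a fake HSL-shaped snippet rather than real generated output:

```python
from collections import Counter

# Language-agnostic repeated-block finder. The "HSL" below is fake,
# just shaped like repeated pipetting steps for demonstration.
hsl_text = """\
Aspirate(ML_STAR, 100, "PlateA");
Dispense(ML_STAR, 100, "PlateB");
TipEject(ML_STAR);
Aspirate(ML_STAR, 100, "PlateA");
Dispense(ML_STAR, 100, "PlateB");
TipEject(ML_STAR);
Comment("done");
"""

def repeated_blocks(text, window=3, min_count=2):
    """Count every `window`-line sequence; return those seen min_count+ times."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    counts = Counter(tuple(lines[i:i + window]) for i in range(len(lines) - window + 1))
    return {block: n for block, n in counts.items() if n >= min_count}

for block, n in repeated_blocks(hsl_text).items():
    print(f"{n}x repeated block starting: {block[0]}")
```

Anything this flags is a candidate submethod; the LLM is then useful for naming it and deciding which literals should become parameters.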
Big caveat I’ve hit: the linked/generated HSL files contain method “block markers” (the encoded alphanumeric sections between steps). These are documented, but not particularly well, and I have not had enough time yet to fully decode and recapitulate them in a robust way. For now, the QA benefits of this workflow have been more than sufficient for most of our needs.
Right now, I can safely edit only the user-inserted HSL portions without the Method Editor complaining about mismatches. I have noticed that even small edits can make the editor unhappy, and sometimes legacy values seem to stick around. That said, for fresh builds and proofs of concept, this approach has been really useful.
If anyone has insight into:
• what those encoded blocks represent conceptually
• whether there’s a supported way to regenerate or reconcile them
• best practices for AI assisted editing while keeping VENUS happy
I'd love to learn from people's experience.
Also, if this is interesting to anyone, I would genuinely encourage forking the repo and contributing. Even small improvements help, and having more eyes on the extension (and eventually the block marker decoding) would benefit the whole community.
Practical warning: if you do this workflow (especially with an editor plus AI in the loop), put your method directory under version control and back it up. Even minor auto formatting or “helpful edits” can desync things.
The utility of AI in lab automation depends entirely on the context. The bulk of what I see seems intended to act as a replacement for the programmer or engineer, by making things simple enough for non-automation folks to understand.
That approach definitely shares lineage with GUI-based programming. It's great for a tech demo where a presenter magically spins up a protocol from a prompt, but when it comes to troubleshooting that protocol, you either need to be able to figure out what went wrong yourself, or hope that you don't end up staring at an AI response saying "You're completely right, I shouldn't have thrown those irreplaceable primary cells into the tip waste".
Not trying to be derogatory, but the root problem a lot of these tools are trying to address is not wanting to understand what's actually happening, so they try to simplify everything. Unfortunately, by doing this, everything gets abstracted away into neat little processes, which then get re-complicated in more obscure and messy ways when the simplification fails to achieve the desired outcome.
I think a more valuable application is not writing entire workflows, but finding problem spaces that AI is already good at and applying it there.
For example, we use a Python library for building our transfer sequences, and then feed data from that script into Venus to handle the liquid manipulation. This lets us separate complex logic and math into tools better suited for those tasks, while still using Venus's hardware-specific tools where they're needed. Copilot or Claude can help figure out clever ways to sort sequences, divvy up reagents, communicate with LIMS, or do the dozens of other things automated systems need besides moving liquid, but we still have access to OEM tools for handling the physical properties that others in this thread have mentioned (although the PyLabRobot folks are making great strides on that problem as well, so we'll probably move everything over to a fully Python-based system at some point).
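A minimal sketch of that split: Python builds the transfer sequence, and a flat file carries it into the vendor software, which handles the actual liquid movement. The column names and the `serial_dilution` helper are made up for illustration, not an actual Venus worklist format:

```python
import csv
import io

# Illustrative only: build a transfer sequence in Python, export it as a
# flat worklist for the liquid handler software to execute.
def serial_dilution(stock_well, dilution_wells, transfer_ul):
    """Chain transfers: stock -> well 1 -> well 2 -> ..., one row each."""
    sources = [stock_well] + dilution_wells[:-1]
    return [
        {"SourceWell": src, "DestWell": dst, "VolumeUl": transfer_ul}
        for src, dst in zip(sources, dilution_wells)
    ]

rows = serial_dilution("A1", ["B1", "C1", "D1"], 50)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["SourceWell", "DestWell", "VolumeUl"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The sequencing logic lives where it is easy to test and version, while liquid classes and pressure monitoring stay with the OEM tooling.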
AI has the potential to raise the skill floor via coding assistance, putting together functional, if simplistic, scripts. So it's great for a lab getting started with automation or an R&D team that wants bespoke scripts for their current experiments.
However, as mentioned earlier in the thread, lab automation isn't just creating a list of steps; it's working through the physics of liquid handling, coding around edge cases, working around specific instrument software idiosyncrasies, integrating with other instruments, designing and performing validation tests, etc. I'm unsure whether the training data is there yet to handle all of the above.
It’d be neat to see whether AI can tackle the pipetting aspect of development, maybe via LLS and/or an onboard platereader to iterate on a pipetting protocol.
From my internship at Hamilton, where I worked on similar problems: it's very good at that if you have examples of what, say, a bead cleanup looks like. Once you have well-defined unit ops that you're just composing together, you can go very fast. Copilot can even predict the next step in a protocol quite accurately.
I think using it to wholesale-generate production-ready scripts is not the best use case; there are a lot of other ways to generate value. For instance, imagine if you could jog a pipette teaching tool by voice commands alone: this is fully possible and would be pretty useful. Making the small stuff really fast helps us make the big stuff work better. It's more about increasing testing velocity than anything else, in my opinion.