Here’s a post I made about AI in lab automation. I’ve been thinking a lot about this because of the panel talk I gave at SLAS and the recent discussion on here. I hope people aren’t getting tired of the topic, but I find it very interesting, and it will take a lot of care to navigate these new tools productively. This forum is a great place to have these conversations.
The main point I want to convey to the forum is that using AI to write wholesale production scripts that work perfectly out of the box is not a great use case, and that there are many more useful ways to apply AI than that. My day-to-day job is very similar to a typical automation engineer’s, but the infrastructure I’ve built up in my lab for doing that job is very different. It took me a while to develop a good set of practices, so I’d love to try conveying them to the community.
A very valuable and growing area in our field is coupling data to execution, which takes many different forms. The processes our robots run are often tied to a much larger data infrastructure that provides information about sample quantity, type, and so on. I’ve found it very helpful to build simulated testing infrastructure that represents the entire stack: how data flows to the robot methods and back. An agentic tool like Claude Code or Codex can run entire in-silico tests, catch errors throughout the pipeline, and then patch those errors itself with minimal human intervention. None of this takes design control out of the hands of live engineers, where it still belongs, but it’s enormously helpful for managing the complexity of data-responsive pipelines.
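To make the idea concrete, here’s a minimal sketch of what I mean by an in-silico test of the whole stack. This is purely illustrative, not my actual lab code: `FakeSampleDB` stands in for the production data infrastructure, `run_normalization` plays the role of a robot method, and all the names and units are hypothetical.

```python
# Hypothetical sketch: an in-memory fake of the sample database feeds a
# simulated robot method, so the full data-to-method-and-back loop can be
# exercised headlessly. An agentic tool can run this test suite, see the
# failures, and patch the pipeline code without touching real hardware.
from dataclasses import dataclass


@dataclass
class Sample:
    sample_id: str
    volume_ul: float
    sample_type: str


class FakeSampleDB:
    """In-memory stand-in for the production sample database."""

    def __init__(self, samples):
        self._samples = {s.sample_id: s for s in samples}

    def fetch(self, sample_id):
        return self._samples[sample_id]

    def record_result(self, sample_id, new_volume_ul):
        # Write the post-run state back, mirroring the real round trip.
        self._samples[sample_id].volume_ul = new_volume_ul


def run_normalization(db, sample_id, target_volume_ul):
    """Simulated robot method: top a sample up to a target volume.

    Returns the diluent volume to add, and raises on inputs the real
    deck couldn't handle, so those errors surface in silico first.
    """
    sample = db.fetch(sample_id)
    if sample.volume_ul > target_volume_ul:
        raise ValueError(
            f"{sample_id}: {sample.volume_ul} uL already exceeds "
            f"target {target_volume_ul} uL"
        )
    diluent_ul = target_volume_ul - sample.volume_ul
    db.record_result(sample_id, target_volume_ul)
    return diluent_ul


# A whole-stack check: data in, method logic, data back out.
db = FakeSampleDB([Sample("S1", 40.0, "plasma")])
added = run_normalization(db, "S1", 100.0)
assert added == 60.0
assert db.fetch("S1").volume_ul == 100.0
```

The point isn’t the chemistry, it’s that the fake database and the method share the same interface as production, so a test like this exercises the same error paths the robot would hit live.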