I really wanna see an inspo board with the most effective/efficient/optimal deck layouts for a particular process, so we can learn from existing deck design strategies built over months of iteration
That’s not a bad idea, especially with larger work cells, which can come in many more form factors.
Also, interestingly, once you see enough configurations, the more creative elements begin to pop out, which leads to more systems adopting those specific elements… forever propelling our creativity forward.
Thanks @luisvillaautomata
We were only supplied with the 60mm nests, so we cannot access the bottom positions of the plate stacks and WP hotels. We only have the AirFCA and RGA arms. On the deck we have a heating/cooling Orbishake, a SciRobotics petriplater and colony picker, and a Tecan Infinite M Plex WP reader. Our waste management is through the deck, as the entire system is inside a BSL-2 cabinet.
If anyone has ideas for a more efficient layout I’d love to hear it.
I’m personally curious about how much of it you actually use because that’s a lot of nest positions.
Without knowing anything else about your process, I would consider swapping the 61mm thru deck waste chute for an FCA thru deck because it opens up a 61mm nest position (functional and active!) and minimizes the waste position to a 2-3 grip spot (less functional, passive). You could even slot it into grid locations 7-8.
I can only visually overlay my workflows over each other for now but if I were to construct a tool…
Query the worktable at the start, during (major movements), and end of runs (alternative: comment to AuditTrail)
Query the labware when it’s used and for how long (alternative: comment to AuditTrail)
Generate a ton of run data
Map the positional run data using something like Seaborn
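For the mapping step, a minimal Seaborn sketch could look like the following, assuming the positional run data has already been aggregated into per-position visit counts. The grid/site numbers and counts are made-up placeholders, not real worktable data:

```python
# Minimal sketch, assuming positional run data has already been aggregated
# into (grid, site, visit_count) rows; the column names and deck coordinates
# below are placeholders, not Tecan-specific fields.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical aggregated usage counts per deck position
usage = pd.DataFrame({
    "grid":   [7, 7, 8, 12, 12, 20],
    "site":   [0, 1, 0, 0, 2, 1],
    "visits": [42, 3, 17, 88, 5, 61],
})

# Pivot to a grid-by-site matrix so each cell is one nest position
matrix = usage.pivot(index="site", columns="grid", values="visits").fillna(0)

ax = sns.heatmap(matrix, annot=True, fmt=".0f", cmap="viridis")
ax.set_title("Deck position usage (visit counts per run set)")
ax.invert_yaxis()  # put site 0 at the bottom, like the physical deck
plt.tight_layout()
plt.show()
```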
First, if you’re not commenting actions to the AuditTrail, you absolutely should.
Second, abstracting your workflows so they’re programmatic also allows you to query in advance where things are and will be. If you’re dragging and dropping your commands into scripts, it’s tougher and your best bet is what I stated above.
Final boss challenge: parse the Logviewer Logs for positional data but eeesh.
This seems like too much for me. I’ll see what data I can extract from the ZEIA file and then see if I can create a worktable grid file that I can use to plot the data in a heat map. Then later I could add a network graph of spatial connections to see which locations are used most.
Query the worktable from the .zeia file
Create a worktable file that contains labware and nest positions (as the worktable is)
Locate each transfer of plates/labware with deck position
Create an RGA timeline
Locate each Aspirate/Dispense with labware/deck position
Create an FCA timeline
Create heat maps for FCA and RGA
Create a network graph of spatial connections for the FCA
Create a network graph of spatial connections for the RGA (see the sketch after this list)
Could take some time, but I’ll go on holiday first and then I might have some time
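For the spatial-connection graphs, a rough sketch with networkx might look like this, assuming the RGA timeline has been reduced to (from, to) plate moves; the position labels and moves below are placeholders, not real run data:

```python
# Minimal sketch of the spatial-connection graph, assuming the RGA timeline
# has been reduced to (from_position, to_position) plate moves. Position
# labels and the move list are placeholders, not real run data.
import networkx as nx
import matplotlib.pyplot as plt
from collections import Counter

moves = [
    ("Hotel_1", "Grid12_Site1"),
    ("Grid12_Site1", "Reader"),
    ("Reader", "Grid12_Site1"),
    ("Grid12_Site1", "Hotel_1"),
    ("Hotel_1", "Grid12_Site1"),
]

# Edge weight = how often that transfer occurs
weights = Counter(moves)
G = nx.DiGraph()
for (src, dst), n in weights.items():
    G.add_edge(src, dst, weight=n)

pos = nx.spring_layout(G, seed=1)
nx.draw_networkx(G, pos, node_color="lightsteelblue", node_size=1800, font_size=8)
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "weight"))
plt.title("RGA plate-transfer connections (edge label = move count)")
plt.axis("off")
plt.show()
```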
The following is for Google searchers of the future,
The benefit of building robust audit trail commenting into your scripts from the start is that you can query that information the way you’d parse a CSV, toss it into a data lake for analysis, and/or analyze it in the future with new language models.
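For example, if you adopt your own delimited comment convention, pulling the comments back out really is just CSV-style parsing. The field order and the example lines below are an assumed convention, not a FluentControl format:

```python
# Tiny sketch, assuming you comment actions to the AuditTrail using your own
# delimited convention, e.g. "POS|Grid12_Site1|Aspirate|2024-05-01T10:02:31".
# The field order and the exported lines below are assumptions for the example.
import csv
import io

exported_comments = """POS|Grid12_Site1|Aspirate|2024-05-01T10:02:31
POS|Grid12_Site1|Dispense|2024-05-01T10:02:45
POS|Hotel_1|Move|2024-05-01T10:03:02"""

reader = csv.reader(io.StringIO(exported_comments), delimiter="|")
for tag, position, action, timestamp in reader:
    print(position, action, timestamp)
```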
Furthermore, the benefit of a programmatic interface with the software is that I can send this data directly to a DB and get the runtime data without touching my scripts. I can abstract the logging so I don’t have to go back in and modify the core script functionality.
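A rough illustration of that abstraction, assuming you drive steps from Python and are happy with SQLite as a stand-in for the real DB; the table layout and step names are invented for the example:

```python
# Rough illustration of abstracting run logging away from the core script,
# assuming steps are driven from Python. Table layout and step names are
# invented for the example; swap sqlite3 for your real DB as needed.
import sqlite3
import time
from functools import wraps

conn = sqlite3.connect("run_log.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS step_log ("
    "  run_id TEXT, step TEXT, position TEXT,"
    "  started REAL, finished REAL)"
)

def logged(run_id):
    """Decorator that records each step's deck position and timing."""
    def wrap(fn):
        @wraps(fn)
        def inner(position, *args, **kwargs):
            t0 = time.time()
            result = fn(position, *args, **kwargs)
            conn.execute(
                "INSERT INTO step_log VALUES (?, ?, ?, ?, ?)",
                (run_id, fn.__name__, position, t0, time.time()),
            )
            conn.commit()
            return result
        return inner
    return wrap

@logged(run_id="demo_run_001")
def move_plate(position):
    # placeholder for the real transfer command
    print(f"moving plate via {position}")

move_plate("Grid12_Site1")
```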
Tecan export files are ZEIAs, which are basically just zip files composed of XML files that are linked only by GUIDs. Your .xscr file is maybe 3000-5000 lines long and your .xwsp file is probably 20000-35000 lines long, which isn’t too bad if you have a couple of workflows, but the complete information for any given workflow is spread across a web of files you have to cross-reference to extract the data.
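As a starting point, and without assuming anything about the XML schema itself, you can open a .zeia as a zip and index which GUIDs each member file mentions, which is enough to start mapping those cross-references. The path and file-extension filter below are assumptions for the sketch:

```python
# Small sketch of cracking open a .zeia export, treating it purely as the
# zip of GUID-linked XML files described above. No Tecan schema is assumed:
# we just list the members and collect GUID-looking strings per file.
import re
import zipfile

GUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

def guid_index(zeia_path):
    """Map each XML-like member of the export to the set of GUIDs it mentions."""
    index = {}
    with zipfile.ZipFile(zeia_path) as z:
        for name in z.namelist():
            if not name.lower().endswith((".xml", ".xscr", ".xwsp")):
                continue
            text = z.read(name).decode("utf-8", errors="replace")
            index[name] = set(GUID_RE.findall(text))
    return index

# Example: see which files reference the most GUIDs (placeholder path)
idx = guid_index("export.zeia")
for name, guids in idx.items():
    print(name, len(guids), "GUIDs")
```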
I know; about 6 months ago I already wrote an R script to parse these files and extract all the lines, functions and variables that are used inside one script.
It will need some tweaking to adjust it for this purpose and get it right, but that might take a few nights.
I’ll put it on my ‘some time in the future to-do list’ and then see what I can do.
That’s awesome. You’ll want to consider how to incorporate runtime data as well. You can easily get snapshot data from the ZEIA, but the value of the heatmap is not only positional but also temporal.
1: I just put up a wall to prevent the FCA from going that way, so it doesn’t end up dripping into one of the wrong columns
2: I think those are just placeholders for plates
3: I don’t actually know, but I will check what it is
Hi @luisvillaautomata
In terms of volume, we use the nest positions for culture selection and subcloning. Sometimes we have 10 source and destination plates.
We unfortunately cannot change the 61mm thru deck as it is part of the BSL-2 setup to drop plates.
Heatmaps are a new idea for me. I’d have to review our audit trail and make sure that all the scripts are logged (we have two other people creating scripts).
Furthermore, this is getting into hard coding, something that is still on my “To Do” list (especially R). Another factor I’d need to track is tip use per script runtime.