I studied biomedical engineering at Hopkins. Before I started there, research was the promised land: I dreamt of spending my time thinking about how to solve critical problems and testing solutions.<p>What I saw instead was people spending the vast majority of their time pipetting, all the way up the ladder, up to and including postdocs. I sometimes thought our PI had it worse, since she had to spend most of her time applying for grants.<p>The AWSification of synbio research would be a game changer. Some labs at Hopkins have tried to build robots, but with limited success. Given how cheap labor is at research institutions, competing on price will be incredibly difficult.
Hi! This is the Yelling Math Fairy and THE WORD EXPONENTIAL DOES NOT MEAN MORE. IT IS A MATH WORD. IT MEANS e^kx. IT MEANS THE SOLUTION OF y' = ky. DOES EACH NEW ASSAY DOUBLE THE TOTAL AMOUNT OF WORK? NO IT DOES NOT. EACH NEW ASSAY ADDS A CONSTANT OVERHEAD, WHICH IS LINEAR GROWTH. NOT EXPONENTIAL GROWTH. I KNOW IT SEEMS LIKE A LOT TO YOU BUT THAT DOES NOT MAKE IT EXPONENTIAL. EXPONENTIAL IS NOT A SYNONYM FOR BIG AND EXPONENTIAL GROWTH IS NOT A SYNONYM FOR FAST. THANK YOU.
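To spell out the Fairy's distinction in symbols (a minimal restatement of the comment's own point, nothing beyond it):

```latex
% Total work W after n assays.
% Constant overhead c per assay gives LINEAR growth:
%   W(n) = W_0 + c n           (discrete analogue of W' = c)
% Each assay scaling the total by a factor (e.g. doubling it)
% would give EXPONENTIAL growth, the solution of W' = kW:
%   W(n) = W_0 e^{k n}
\[
  W(n) = W_0 + c\,n \;\;\text{(linear)}
  \qquad \text{vs.} \qquad
  W(n) = W_0\, e^{k n} \;\;\text{(exponential, solves } W' = kW\text{)}
\]
```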
I believe this startup is as close as it gets (for now!) to what you're describing, <a href="https://www.transcriptic.com/" rel="nofollow">https://www.transcriptic.com/</a>.<p>(I don't work there or anything)
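To make the "AWS for wet labs" idea concrete, submitting work to such a service might look roughly like the sketch below. This is purely illustrative: the endpoint, payload shape, and field names are assumptions for the example, not Transcriptic's actual API.

```python
import json
import urllib.request

# Hypothetical cloud-lab job: describe liquid-handling steps as data and
# POST them to the service. All field names and the URL are invented for
# illustration; this is not a real API.
protocol = {
    "containers": {"plate_1": {"type": "96-flat"}},
    "instructions": [
        {"op": "pipette", "volume_ul": 50,
         "from": "reagent_A", "to": "plate_1/A1"},
        {"op": "incubate", "where": "plate_1",
         "temp_c": 37, "duration_min": 30},
        {"op": "absorbance", "where": "plate_1/A1",
         "wavelength_nm": 600},
    ],
}

req = urllib.request.Request(
    "https://lab.example.com/v1/runs",  # placeholder URL
    data=json.dumps(protocol).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print("submitted run:", json.load(resp).get("id"))
```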
The problem with this fantasy is that easily automated and distributed tasks are not the rate-limiting steps in most biomedical research. The hard parts (in addition to designing the right experiments and analyzing data...) are in constructing and validating relevant model systems and doing the specific experiments to address questions of interest.<p>These are extremely dependent on the question being studied, often are not amenable to automation, and may require very rare, expensive, and difficult-to-handle samples. For example, my collaborators work with transgenic mice that are a model for a particular disease, and these mice have to be bred and then aged to 12 weeks, until they exhibit the phenotype, before we can even start doing an experiment. In another model, they have to do brain surgery on each mouse and then wait several weeks for the phenotype.<p>The 'easy' parts, such as DNA synthesis and sequencing, are already highly standardized and automated, and there is fierce competition to improve the technology and bring costs down.
A big problem is that scientists are traditionally very secretive. This would increase the possibility of leaks, and there'd need to be some way of verifying that the experiment was conducted correctly. Good idea, though.
Most worthwhile research is about the mundane. One of the first research projects I did required painstakingly adjusting and modifying conditions to the point that I could actually start collecting data. That process took weeks, but the day it worked was insanely satisfying. In the process I became a master at making small incremental changes, recording them, and learning exactly what didn't work. Years later, working as a computational scientist, I found the process much the same, except that there were no pipettes and beakers involved.<p>Any worthwhile work I have ever done has mostly been about grunt work. Along the way there have been cool things (after all, Leno once made fun of our research [1]) and insanely fun times. I may not be in research now, but every day I apply the lessons learned from patiently repeating and iterating.<p>1. <a href="http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=723226&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel4%2F5858%2F15610%2F00723226.pdf%3Farnumber%3D723226" rel="nofollow">http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=723226...</a>
I've taken a <i>very tiny</i> first step towards something like this for computer graphics and vision: trying to make user studies as easy as possible: <a href="http://www.imcompadre.com" rel="nofollow">http://www.imcompadre.com</a> It's not a finished product by any means, but two paper submissions have been made with it so far.<p>It's a difficult problem to solve, because these pesky researchers are always trying out new things you didn't anticipate (who would've thought!). But for the mundane things that can be automated, something like this is definitely the way to go. Of course, as others here point out, figuring out what to actually test is always the hardest part.
In our lab today we consistently face the opposite problem: the experiments themselves are easy in comparison with the design and analysis.
The Center for Open Science positions itself as something similar to what you describe. <a href="http://centerforopenscience.org/" rel="nofollow">http://centerforopenscience.org/</a><p>I believe they describe themselves as more of a GitHub of Science for scientific collaboration. Hooks to 'push' the tasks and 'checkout' the findings could perhaps be built as extensions on their platform, along the lines of the sketch below.
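A minimal sketch of what such push/checkout hooks might look like, modeled on git's content-addressed storage; every name and path here is hypothetical, invented only to illustrate the idea (this is not the Center for Open Science's actual interface):

```python
import hashlib
import json
import pathlib

# Hypothetical git-like store for experiment tasks and findings: tasks are
# 'pushed' as content-addressed JSON blobs, findings are 'checked out' by
# the task's hash once the lab that ran the task has written them.
STORE = pathlib.Path("osf_sketch")

def push_task(task: dict) -> str:
    """'push' an experiment task to the shared store; returns a task id."""
    blob = json.dumps(task, sort_keys=True).encode()
    task_id = hashlib.sha1(blob).hexdigest()
    (STORE / "tasks").mkdir(parents=True, exist_ok=True)
    (STORE / "tasks" / task_id).write_bytes(blob)
    return task_id  # share this id with whoever runs the experiment

def checkout_findings(task_id: str) -> dict:
    """'checkout' the findings, written under the same id by the runner."""
    return json.loads((STORE / "findings" / task_id).read_text())

task_id = push_task({"assay": "ELISA", "samples": 96, "replicates": 3})
print("pushed task", task_id)
```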
Experimenting with a simulation is a great time saver as far as it goes. However, all models are just models, subject to the assumptions that went into them.<p>There is a great deal we do not know about cellular biology, so any simulation would be a fairly gross approximation. Indeed, the point of many experiments is precisely to improve our model of cellular mechanics.
In my experience (having done a similar project as an undergrad), the first problem is convincing people to switch and take the risk/time to use your new workflow, even if your workflow allows them to continue to use their existing infrastructure and machines.
DIYbio has already started on this:
Experimental Robot for $4k: <a href="http://www.opentrons.com/" rel="nofollow">http://www.opentrons.com/</a>
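For a sense of what programming such a robot looks like, here is a minimal protocol in the style of Opentrons' Python API. Note this targets their current API, which postdates the $4k machine linked above; the labware and pipette names are standard definitions chosen just for illustration.

```python
from opentrons import protocol_api

metadata = {"apiLevel": "2.13"}

def run(protocol: protocol_api.ProtocolContext):
    # load a 96-well plate and a tip rack onto numbered deck slots
    plate = protocol.load_labware("corning_96_wellplate_360ul_flat", "1")
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", "2")
    pipette = protocol.load_instrument(
        "p300_single_gen2", mount="left", tip_racks=[tips]
    )
    # transfer 100 uL from well A1 into each well of row B
    pipette.transfer(100, plate["A1"], plate.rows_by_name()["B"])
```

The point is that a day of pipetting collapses into a short, reviewable script.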