> Rodriques: Many people assume we’re focused on wet lab automation. There are certainly opportunities there and we are exploring them, but the biggest opportunities are actually on the cognitive side.

Wet lab automation is very difficult and capital intensive. And once you build your lab, you are constraining yourself to answering questions within a certain domain for which you have the relevant sample prep and characterization equipment. Your equipment in essence defines your design space, and thus your potential solution space, which places a bound on your TAM.

So of course automating the thinking part of science is more approachable with current AI - but is that what people want? It’s certainly an attractive proposition for management: automate away the highly paid SMEs and turn R&D into more of a factory environment with replaceable lab techs... but actually implementing this depends on where the power lies in an org. My theory is that in many orgs the “cognitive” folks hold the true power (i.e., the unwritten expertise about what works and what doesn’t, when to work around your existing setup, how much to trust each number an instrument produces). They’ll resist this change to their last breath.

You may gain some short-term efficiency by accelerating the experiments of today, but in the long run you lose the expertise to break out of local minima imposed by your equipment and training data.

Or to think about this another way: imagine a PhD student who was never allowed to talk to other people, attend conferences, etc., and could only read papers and try things in the lab. But they can read papers extremely fast. Would they be successful?

I will never stop being amazed at AI folks' childish views of animal cognition:

> A lot of your tools reference crows. What’s up with that?

> White: When I got started in this space around October 2022, I was red-teaming with GPT4. Around the same time, a paper called “Language Models are Stochastic Parrots” was circulating, and people were debating whether these models were just regurgitating their training data or truly reasoning. The analogy is appealing, and parrots are definitely known for mimicking speech. But what we saw was that pairing these language models with external tools made them much more accurate — a bit like crows, which can use tools to solve puzzles.

> In the work that led to ChemCrow, for instance, we found that giving the large language model access to calculators or chemistry software made its answers much better. So we kind of retconned a little bit to make “Crows” be agents that can interact with tools using natural language.

This is incredibly insulting to crows, who can spontaneously create tools and use bizarre man-made tools with no training. And when crows use tools for problem solving in the lab, the tools are not "solve the problem for me" like a calculator; they require much more creative thinking. What White really means - whether he knows it or not - is that crows are known for being intelligent and he wants to use this for marketing purposes.

I don't think anyone alive today will live to see an AI as smart as a crow, in no small part because AI researchers and investors refuse to take animal intelligence seriously.

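For readers wondering what "interact with tools" actually means here: the model emits a request for an external function, a harness runs it, and the result gets pasted back into the context. A minimal sketch of that loop, with hypothetical names and not ChemCrow's actual implementation:

    # Minimal sketch (hypothetical, not ChemCrow's code) of an LLM paired with
    # a "calculator" tool: the model asks for a calculation, the harness runs
    # it, and the numeric result goes back into the model's context.
    import ast
    import operator as op

    _OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
            ast.Div: op.truediv, ast.Pow: op.pow}

    def calculator(expr: str) -> float:
        """Safely evaluate a basic arithmetic expression like '2*(3+4)'."""
        def ev(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp):
                return _OPS[type(node.op)](ev(node.left), ev(node.right))
            raise ValueError("unsupported expression")
        return ev(ast.parse(expr, mode="eval").body)

    TOOLS = {"calculator": calculator}

    def run_agent(llm, question, max_steps=5):
        # llm(context) returns either {"tool": name, "input": arg}
        # or {"answer": text}; how it decides is the model's job.
        context = question
        for _ in range(max_steps):
            step = llm(context)
            if "answer" in step:
                return step["answer"]
            result = TOOLS[step["tool"]](step["input"])
            context += f"\n[{step['tool']}({step['input']}) = {result}]"
        return "no answer after max_steps"

The point being: the "crow" is not doing anything creative with the tool; it is delegating the part it is worst at.
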
>> I’m optimistic that an AI scientist will help with reproducibility overall. Did you do the experiment that you said you did? Did you record all the variables in a way that you can report it in the way you did it?

Interesting to think of the potential long-term impact for science. Reminds me of the reform in the early 20th century that focused on ensuring the contents of canned goods matched their labeled ingredients.

I'm so incredibly tired of all of the BS claims. (I'm an AI/ML researcher.)

> has enabled open-source LLMs “to exceed human-level performance on two more of the lab-bench tasks: doing scientific literature research and reasoning about DNA constructs” with only “modest compute budgets.”

No. They did not. They just ran a crappy experiment and came up with an absurd result.

As a community we need to invest much more effort into benchmarking as a science. Our space is full of garbage claims like this, and it isn't doing us any favors.

Eventually the hype will die down and people will realize that a lot of the claims were obvious falsehoods. Then we'll all get collectively punished for it.

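To make "benchmarking as a science" concrete: at a bare minimum, a claim like "exceeds human-level performance" should come with uncertainty estimates over the test items. A minimal sketch with made-up 0/1 per-question scores (not the paper's data):

    # Percentile bootstrap CIs for mean benchmark accuracy (illustrative only;
    # the scores below are fabricated, not from the paper being discussed).
    import random

    def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
        rng = random.Random(seed)
        n = len(scores)
        means = sorted(sum(rng.choices(scores, k=n)) / n for _ in range(n_resamples))
        return (means[int(alpha / 2 * n_resamples)],
                means[int((1 - alpha / 2) * n_resamples) - 1])

    model_scores = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]   # hypothetical model results
    human_scores = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]   # hypothetical human baseline

    print("model 95% CI:", bootstrap_ci(model_scores))
    print("human 95% CI:", bootstrap_ci(human_scores))
    # Heavily overlapping intervals at this sample size mean the
    # "exceeds human-level performance" headline isn't supported.

And that's before you even get to contamination checks on the test set or how the human baseline was collected.
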
There are massive opportunities for accelerating science through AI-human collaboration. However, realizing them requires new management and a new set of standards. If AI can write a paper that would have been publishable a year ago, what does that mean for science?

It’s really unclear.

I’m particularly interested in AI-assisted education research. I think we need to keep an eye on empirical methods for developing smarter humans.

I think more reliable in silico experimentation will yield much better results in the long run, but it's probably akin to a SpaceX or Tesla type of investment, and 1-2 orders of magnitude more compute intensive.

How is this different, from a technical standpoint, from the AI agents used for marketing use cases?

Is this really the out-of-the-blue use case this article makes it out to be?