I consider this a glimpse into how neural networks and "AI"-like technologies will be implemented in the future: lots of engineering, lots of clever manipulations of known techniques, woven together around a powerful, well-trained model at the center.

Right now I think something like ChatGPT is only at the first step of building that foundational model that can generalize and process data. There isn't much work going into processing the inputs into something the model can best understand (not at the tokenizer level, but even before that). We have a nascent field for this, i.e. prompt engineering, but nothing as sophisticated as AlphaFold exists for natural language or images yet.

People are stacking LLMs together and adding system prompts to assist this input processing. Maybe once more complex systems are in place, we may see something resembling a real AGI.
This is an awesome write-up that really helped me understand what's going on under the hood. I didn't know, for example, that for the limited number of PTMs AF3 can handle, it has to treat every single atom, including those of the main and side chains, as an individual token (presumably because PTMs are so underrepresented in the PDB?).

Thank you for translating the paper into something this structural biologist can grasp.
It's so, so complex! I confess I had a sense of this, but no idea of the extent. We aren't even told which MSA algorithm is used to align the protein sequences.
I have no prior knowledge of protein folding, but I nevertheless enjoyed (attempting) to read through this. It's interesting to see the complexity of the techniques used compared to many other ML projects today.