I've been looking into the field of continuous learning recently, and was interested in the various ways people solve problems in it. For example, continually fine-tuning an NLP model on fresh data to pick up new knowledge and avoid data drift.

Those of you who incorporate continuous learning into your ML workflows, what are some common problems you run into?
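To make the question concrete, here is a minimal sketch of the kind of continual fine-tuning loop I mean: a live model gets small update steps on newly arrived labeled data. This is an illustrative assumption on my part, not anyone's production setup; the tiny stand-in model, `fine_tune_on_fresh_data`, and the placeholder daily batch are all hypothetical.

```python
# Minimal sketch of continual fine-tuning in PyTorch (illustrative only).
import torch
from torch import nn, optim

# Stand-in for a real pretrained NLP model (e.g. a transformer encoder + head).
model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2))
optimizer = optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_on_fresh_data(fresh_batches, epochs=1):
    """One continual-learning step: update the live model on newly arrived data."""
    model.train()
    for _ in range(epochs):
        for features, labels in fresh_batches:
            optimizer.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()
            optimizer.step()

# Example: each day, pull the latest labeled data and take a small update step.
daily_batch = [(torch.randn(32, 768), torch.randint(0, 2, (32,)))]  # placeholder data
fine_tune_on_fresh_data(daily_batch)
```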
I'm not a professional in the biz, but I always figured a poisoned pot could be catastrophic. What I mean is, once something has been accepted into the model and is later discovered to be wrong or even malicious, it would be hard to get rid of if a lot of other "learning" has occurred before the problematic bits were noticed.

You could investigate, figure out where the poisoning occurred, and then start anew from there, but the longer it takes to catch, the more you lose.
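One way to make "start anew from there" cheaper is to checkpoint before each update and keep a provenance log of which batch produced which checkpoint. The sketch below is just one assumed way to do that with PyTorch; the checkpoint directory, the `batch_id` scheme, and the JSONL provenance file are all hypothetical choices.

```python
# Sketch: checkpoint + provenance so a poisoned batch can be rolled back (illustrative only).
import json
import pathlib
import torch

CKPT_DIR = pathlib.Path("checkpoints")
CKPT_DIR.mkdir(exist_ok=True)

def checkpoint_before_update(model, optimizer, batch_id):
    """Save model/optimizer state plus a record of which batch is about to be applied."""
    path = CKPT_DIR / f"before_{batch_id}.pt"
    torch.save({"model": model.state_dict(), "optimizer": optimizer.state_dict()}, path)
    with open(CKPT_DIR / "provenance.jsonl", "a") as f:
        f.write(json.dumps({"batch_id": batch_id, "checkpoint": str(path)}) + "\n")

def rollback_to(model, optimizer, batch_id):
    """Restore the state from just before a batch later found to be poisoned."""
    state = torch.load(CKPT_DIR / f"before_{batch_id}.pt")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    # From here you would re-run fine-tuning on only the clean batches that
    # arrived after `batch_id`, which is exactly where the time cost grows the
    # longer the poisoning goes unnoticed.
```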