> I noticed something that seemed almost too obvious. While our sophisticated models were still processing frame sequences and temporal features, the viewers in the comments section had already identified the crisis.

> Comments like "don't do it" or "it's not worth it" were appearing consistently. While we were pouring resources into optimizing frame embeddings and acoustic models, the clearest signals were hiding in plain sight.

First, I call bullshit. There's no way you were the first person in the room to think "let's check for keywords in the chat". I can believe that being able to tell these kinds of bullshit stories is what gets someone promoted at the big companies, but I don't think this one is even particularly good. Wouldn't any interviewer be skeptical? Feels like a Feynman story. Then again, maybe life is stranger than fiction sometimes. Or maybe the real contribution at the time was suggesting a feasible mechanism for incorporating the comment data (something like the toy sketch in the P.S. below).

Secondly, I hope that whatever model you came up with extended to livestreams without viewers, or livestreams where the viewers were egging the streamer on. Also, "don't do it" seems like a pretty weak signal when you consider the entire variety of dumb shit people do on livestreams, e.g. the cinnamon challenge, the ice bucket challenge, whatever.

Also, this is Facebook we're talking about. Shouldn't they already know whether a user is a suicide risk in general from all the data mining shit they do? Shouldn't there just be a report button on the stream so users can report such things?

Sincerely,
guy who went from new grad to laid off in 3 years
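
P.S. To be clear, by "check for keywords in the chat" I mean something embarrassingly simple like the sketch below. The phrase list, threshold, and function name are all made up for illustration, and this obviously isn't whatever Facebook actually shipped; the point is just that the baseline is obvious, and that it tells you nothing about streams with no viewers or with viewers egging the streamer on.

```python
# Toy sketch only: flag a livestream when crisis-like phrases pile up in chat.
# Phrase list and threshold are hypothetical, not anyone's production values.
from collections import Counter

CRISIS_PHRASES = ["don't do it", "it's not worth it", "please stop"]

def flag_stream(chat_messages, min_hits=3):
    """Return True if enough chat messages contain a crisis-like phrase.

    chat_messages: raw comment strings from the stream.
    min_hits: arbitrary threshold; tuning it (and handling sarcasm, dares,
    and empty chats) is exactly the part I'm skeptical was the hard insight.
    """
    hits = Counter()
    for msg in chat_messages:
        text = msg.lower()
        for phrase in CRISIS_PHRASES:
            if phrase in text:
                hits[phrase] += 1
    return sum(hits.values()) >= min_hits

# A chat full of "don't do it" trips the flag...
print(flag_stream(["don't do it", "DON'T DO IT", "it's not worth it bro"]))  # True
# ...but an empty chat, or one egging the streamer on, gives you nothing.
print(flag_stream(["do a flip", "lol"]))  # False
```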