Validating headlines may not be as good a model as it seems.<p>First, <i>what problem are you trying to solve</i>? In this case, it's "How can I find good articles despite bad headlines?" The approach addresses headlines, but the real interest is in the content. So I'm not sure the proposed solution solves the perceived problem.<p>Second, <i>what are the current solutions/workarounds to the problem</i>? In my case, at least, the solution is blanket rejection of certain sites. I assume certain sites are so full of clickbait nonsense and/or partisan propaganda that I won't read them at all. That probably works better than some software that will consistently rate The Economist as good and anything from Infowars as nonsense (or worse, decide the nonsense headline and the nonsense content are simpatico, so it's fine).<p>Third, <i>what is the root of the problem</i>? The root is largely that people <i>like</i> their nonsense. People consistently read bad headlines and bad stories, often preferring them over respectable mainstream news.<p>And finally, <i>how do you implement this</i>? You clearly don't want something that can be gamed by crowdsourced campaigns, or it <i>will</i> be gamed. So you're either relying on deep-learning automation or on human editorial effort. The former is unreliable; the latter is expensive, and itself prone to both bias and rejection (consider how many people consider Snopes untrustworthy).<p>I dunno. Maybe there's a great business or social idea here. But it's going to take some deeper thinking.