Hi, I have been reading marketing and business books recently and found that many of them are filled with meaningless corporate jargon. These books could often be a third of their length and much more straightforward. I wrote a tiny library for myself that measures the amount of meaningless jargon in a text, and later open-sourced it in case someone else needs it.
Presumably the identified phrase list could be used to fine-tune a BERT model or similar as a binary classifier that catches more cases. You would also need examples of actual, semantically meaningful text as the negative class, but that would be straightforward to collect too. Someone has probably already done it.

The advantage would be that you could get probability scores on a much broader set of text. Good training data is the key thing, though.
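A minimal sketch of that fine-tuning setup, assuming the phrase list has been exported to a plain-text file and paired with hand-collected plain-language sentences (the file names and everything else here are illustrative assumptions, not the project's code):

    # Sketch: fine-tune BERT as a jargon/not-jargon binary classifier.
    # jargon_phrases.txt (e.g. dumped from phrases.ts) and plain_text.txt
    # (hand-collected non-jargon sentences) are hypothetical inputs.
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)
    from datasets import Dataset

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    jargon = [l.strip() for l in open("jargon_phrases.txt") if l.strip()]
    plain = [l.strip() for l in open("plain_text.txt") if l.strip()]
    ds = Dataset.from_dict({
        "text": jargon + plain,
        "label": [1] * len(jargon) + [0] * len(plain),  # 1 = jargon
    })
    ds = ds.map(lambda b: tokenizer(b["text"], truncation=True,
                                    padding="max_length", max_length=32),
                batched=True)

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="jargon-bert", num_train_epochs=3),
        train_dataset=ds.shuffle(seed=42),
    ).train()

The model then outputs logits over the two classes, which softmax into the probability scores mentioned above.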
The fact that OP felt the need to add a disclaimer suggests that they expect people who write such abominations to search for detectors :)

Who knows, though. Maybe there is a marketing dude who once thought "maybe that's too much?". Naah.
Hm. I wonder how well this works versus a large LLM. This seems like something a very strong LLM should handle well with the right prompting.

If you can handle it with just a phrase list, though, that would save a lot of time and money.
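For comparison, a rough sketch of the prompting approach using the OpenAI Python client (the model name, prompt wording, and 0-to-1 scale are all assumptions):

    # Sketch: ask an LLM to score jargon density, for comparison with
    # the phrase-list approach. Model and prompt are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def llm_jargon_score(text: str) -> float:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Rate how much meaningless corporate jargon the "
                            "user's text contains, from 0.0 (none) to 1.0 "
                            "(pure jargon). Reply with only the number."},
                {"role": "user", "content": text},
            ],
            temperature=0,
        )
        return float(resp.choices[0].message.content.strip())

    print(llm_jargon_score("We leverage synergies to empower stakeholders."))

Per-call cost and latency are exactly the overhead the phrase-list approach avoids.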
It counts occurrences of these phrases that the author doesn't like:

https://github.com/pilotpirxie/bullshit-detector/blob/main/src/phrases.ts

By including this file, the project should therefore correctly give itself a very high bullshit score. It's performance art, really.
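A minimal sketch of what that counting approach looks like (not the library's actual API; the phrases below are typical corporate stand-ins, not necessarily entries from phrases.ts):

    # Sketch of the phrase-counting approach; not the library's actual API.
    # PHRASES is a tiny illustrative list standing in for phrases.ts.
    import re

    PHRASES = ["synergy", "move the needle", "circle back",
               "low-hanging fruit"]

    def bullshit_score(text: str) -> float:
        """Return matched-phrase hits per 100 words."""
        words = len(text.split()) or 1
        hits = sum(len(re.findall(re.escape(p), text, re.IGNORECASE))
                   for p in PHRASES)
        return 100.0 * hits / words

    print(bullshit_score("Let's circle back on the synergy roadmap."))

Run on its own phrase file, nearly every line is a hit, hence the very high self-score.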