Kinda chisels away at their official "We're not a keylogger!" argument:<p>> Is Grammarly a keylogger?<p>> No. A keylogger records every keystroke, <i>sends data to a third party for the benefit of that party</i>, and does so without the user’s knowledge. Grammarly’s product doesn’t fit any of these descriptions.
[0].<p>Certainly the data being sent must be primarily for their own benefit. If it were for the benefit of the user, they would have no need to make turning it off a premium feature. Maybe Grammarly will build an AI that makes their product better, but they're still using user data to build that product in the first place.<p>[0]. <a href="https://support.grammarly.com/hc/en-us/articles/360003816032-Is-Grammarly-a-keylogger-" rel="nofollow noreferrer">https://support.grammarly.com/hc/en-us/articles/360003816032...</a>
Grammarly is an extremely obvious intelligence cutout masquerading as a company. Their entire business model is getting employees insecure about their English skills to dump sensitive government and business documents into their service without their employer's knowledge. They were founded in and keep multinational offices in intelligence laundering hotspots so the Americans can claim to have received tips from the Germans and the Germans from the Ukrainians and so on.
I kinda hate this, but it also seems like the only way to keep Grammarly in business. I've paid them for a personal account since 2016, but since ChatGPT's launch my usage has fallen precipitously. I can write utter stream-of-consciousness garbage, paste it into GPT-4, and get out a professional-looking piece of text.<p>I'd definitely pay for a private AI-assisted writing experience - the biggest blocker I have with Grammarly is my inability to use it at work.
I’m on the security team at Grammarly, and our CISO addressed this here on Mastodon: <a href="https://infosec.exchange/@suha/110860810624160582" rel="nofollow noreferrer">https://infosec.exchange/@suha/110860810624160582</a>. Copying his response below for visibility:<p>When it comes to our genAI features, we use Microsoft Azure as our LLM provider and don’t allow Azure, or any third party, to use our customers’ data to train their models—this is contractually mandated. For text analyzed by Grammarly to provide revision suggestions (like adjusting tone or making text more concise), we may retain randomly sampled, anonymized, and de-identified data to improve the product. This data is disassociated from user accounts and ONLY used in aggregate.<p>We’ve devoted a ton of time and resources to developing methods that ensure the training data is anonymized and de-identified. And any Grammarly user (Free, Premium, Business) can view the data associated with their account by requesting a personal data report from us.<p>Re: opt-out: When we go through a security review with a business, that business can, if requested, completely opt out of Grammarly training on their de-identified and anonymized data—opt-out is not limited to a 500+ license size.<p>We don’t skimp on security or responsible data practices at Grammarly. We have strict enterprise-grade controls to protect customer data—restricted access, encryption, audit logging, and more. These are backed by industry-standard certifications like SOC 2 (Type 2), HIPAA, and ISO and are verified and audited by industry-leading third parties.<p>More on what we do is at <a href="https://grammarly.com/trust" rel="nofollow noreferrer">https://grammarly.com/trust</a>.
Currently going through this around Grammarly and Zoom at my employer, a small team that really doesn't seem to trust my experience in security and privacy. Definitely a losing battle, and I'll probably need to exit soon due to differences in direction.<p>Anyone hiring in the security field these days? Off to check the last Who's Hiring post, I guess.