I propose a litmus test: "Would it be okay if this LLM were actually a client-side tool, trained and hosted on the user's machine?"

This framing highlights that (A) the data used to train and configure the model is not reliably secret, and (B) a determined user can manipulate its output to fit their own goals.