I just had an interesting experience this morning with my oldest child. He wanted to write a letter to the Splatoon development team to show his appreciation for the game. He printed it out and brought it to me with a proud expression on his face. It took about five seconds to catch the AI smell.<p>We then proceeded to have a conversation about how some people might feel if an appreciation letter was given to them that was clearly written by AI. I had to explain how it feels sort of cold and impersonal, which undermines the effect he's hoping to have.<p>To be honest, though, it really got me thinking about what the future is going to look like. What do I know? Maybe people will just stop caring about the human touch. It seems like a massive loss to me, but I'm also getting older.<p>I let him send the letter anyway. Maybe in an ironic twist an AI will respond to it.
Last week I got an email from a manager about the number of free beverage taps in our office being reduced.<p>They'd clearly dumped the memo they got about the reduction into some AI with a prompt to write a "fun" announcement. The result was a mess of weird AI positivity and a fun "fact" which was so ludicrous that the manager can't have read it before sending.<p>I don't mind reading stuff that has been written with assistance from AI, but I definitely think it's concerning that people are apparently willing to send purely AI generated copy without at least reviewing for correctness and tone first.
> That’s when it dawned on me: we don’t have a vocabulary for this.<p>I'd like to highlight the words "counterfeiting" and "debasement", as vocabulary that could apply to the underlying cause of these interactions. To recycle an old comment [0]:<p>> Yeah, one of their most "effective" uses [of LLMs] is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight."<p>> Oh, sure, qualitatively speaking it's not new, people could have used form-letters, hired a ghostwriter, or simply sank time and effort into a good lie... but the quantitative change of "Bot, write something that appears heartfelt and clever" is huge.<p>> In some cases that's devastating--like trying to avert botting/sockpuppet operations online--and in others we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."<p>[0] <a href="https://news.ycombinator.com/item?id=41675602">https://news.ycombinator.com/item?id=41675602</a>
Ironically (and perhaps proving the author’s point), while reading this I couldn’t help feeling like it was at least AI-assisted writing, if not pasted directly.
This is a big problem, but we all know the solution— cease taking anyone’s writing seriously, unless they develop a reputation for natural writing (not using AI in their writing).<p>This is what we will all do. We all are spam filters now.
I wonder if it's because my writing has always been so imperfect because of dyslexia/etc., but I basically couldn't care less; for me, the less message-hunting I have to do, the better. Also, if it means someone doesn't sit and hem and haw all day over sending a four-sentence email to their boss because they're having a bad imposter syndrome day, who cares?<p>ALSO... what do we do about the fact that you can't unread the AI? Also... the proof is in the pudding.<p>Oh, and I emailed dang the other day. I wanted to make a bunch of intersecting fine points, so the email got kind of contrived. I gave it to ChatGPT, but instead of replacing my email, I sent my own email and included the ChatGPT share link for him. He thanked me for leaving it in my own voice.
It is interesting. It is interesting in several different ways too:<p>- The timing is interesting as Altman opened US branches of his 'prove humanness' project that hides the biometrics gathering effort
- The problem is interesting, because on HN alone, the weight of traffic from various AI bots seems to have become a known ( and reported ) issue
- The fact that some still need to be convinced of this need is interesting ( it is a real need, but as the first point notes, there are clear winners among some of the proposals out there ), and it results in articles like these
- Interesting second and third order impact on society mentioned ( and not mentioned ) in the article<p>Like I said. Interesting.
Even before the advent of AI, I already realized that most of what's written by people isn't worth reading. This is why I don't read most of the e-mails that I receive.<p>Before AI, it was hard for many people to write literate text. I was OK with that, if the text was worth reading. I don't need to be entertained, just informed.<p>The thing that gets me about AI is not that what it generates is un-original, but if it's trained on the bulk of human text, then what it generates is <i>not worth reading</i>.
I "prompt pong", i.e. pass all the AI emails that I receive through an AI and ask it to write a response. At what point do we get the two AIs to email each other directly, without us bio-agents having to pretend that we wrote/read them?
Sorry, but any trick you think you have for detecting AI generated text is defeated by high temperature sampling (which works now with good samplers like min_p/top-n sigma) and the anti-slop sampler: <a href="https://github.com/sam-paech/antislop-sampler">https://github.com/sam-paech/antislop-sampler</a><p>You'll be able to detect someone running base ChatGPT or something - but even base ChatGPT with a temperature of 2 has a very, very different response style - and that's before you simply get creative with the system prompt.<p>And yes, it's trivial to get the model to not use em dashes, or the wrong kind of quotes, or any other tell you think you have against it.
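For readers unfamiliar with the min-p sampler mentioned above, here is a minimal sketch of the idea (parameter names and defaults are illustrative, not the antislop sampler's actual API): tokens whose probability falls below a fraction of the top token's probability are discarded, which lets high temperatures vary the wording without producing incoherent output.

```python
import numpy as np

def min_p_sample(logits, temperature=2.0, min_p=0.1, rng=None):
    """Sample a token id using min-p filtering at high temperature.

    Tokens with probability below min_p * max_prob are dropped, so
    even at temperature 2 the output stays coherent while the word
    choice varies far more than low-temperature decoding.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    # softmax over the temperature-scaled logits
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    # keep only tokens within min_p of the top token's probability
    keep = probs >= min_p * probs.max()
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

With a strongly peaked distribution the filter keeps only the dominant token, while near-uniform logits leave several candidates in play, which is why simple "AI tells" based on pet phrases stop working.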
In the era of AI every time I have to read a report I ask myself: if the author probably didn’t spend his time writing this, why should I waste mine reading it?
Hmm.<p><pre><code> Almost makes sense to write emails
with deliberate newlines--not counting
the widths, but advancing the line as
it were (ourselves), like how we'd do
with a typewriter.
Because (in Outlook) I didn't like
seeing the message stretch out past
where my eyes could scroll too far
right; instead, to Enter when each
line sort of looked alright, some
length among their sentence homes.</code></pre>
It's becoming a big problem. But also, it's pretty easy now to pick out the authentic people. In terms of AI in personal and business relationships, the maximum I'll do is short auto-complete. Superhuman email, which I use, just came out with AI-written replies, and it is too much.<p>I simply don't want to live in a world where all I am doing is talking back and forth with AIs that are pretending to be people. What is the point in that? I am working on <a href="https://humancrm.io" rel="nofollow">https://humancrm.io</a> and I am explicitly not putting any AI into it. Hopefully there are more than a few people who feel like I do.
I DM’d someone a specific question recently and got back some generic, long-winded, impeccably formatted bullet points when previously this person had barely used any punctuation. I said something like, wow I like your formatting! Is that an extension or something? I always have a hard time with Slack formatting, etc. They replied that they were “trying something called structured communication.”<p>It probably made me angrier than it should have. Now I’m wondering if I’m the “old man yelling at cloud”.
> It’s the sense of authorship. The idea that you can read something and know who wrote it—and whether they meant it<p>This quote from the short prose at the beginning of the article expresses one of my major concerns with LLM-generated prose.<p>And specifically, why I don't want LLM-generated text as a substitute for web search results.<p>Since LLM-generated prose is a statistical aggregation of web crawling, there is no ability to assess the author's position or opinions, and how they might affect the author's writing. This removes a major tool for evaluating any potential biases in the writing.<p>This article itself may have been written by an LLM. But the statement that LLM-generated prose eliminates the ability to assess a human's position on a topic is still valid.<p>Just another step in losing touch with objective reality. Contrary to popular opinion, perception is NOT reality... It's only an idea inside a person's head, while the overwhelming majority of reality is outside of our liter of jello...