Some I agree with, some I disagree with. I think this author mainly speaks to the idea of art being the human equivalent of a peacock's tail: the effort is the point, not the result.<p>Myself, I like results: a metaphor about the scent of roses is just as sweet, even after I find out it came from an LLM.<p>> I think the answer is no. An artist—whether working digitally or with paint—implicitly makes far more decisions during the process of making a painting than would fit into a text prompt of a few hundred words.<p>In the art of words,
Even brief forms carry weight,
Prompt and haiku both.<p>> This hypothetical writing program might require you to enter a hundred thousand words of prompts in order for it to generate an entirely different hundred thousand words that make up the novel you’re envisioning.<p>That would be an improvement on what I've been going through with the novel I started writing before the Attention Is All You Need paper — I've probably written 200,000 words, and it's currently stuck at 90% complete and 90,000 words long.<p>> Believing that inspiration outweighs everything else is, I suspect, a sign that someone is unfamiliar with the medium. I contend that this is true even if one’s goal is to create entertainment rather than high art.<p>I agree completely. The better and worse examples of AI-generated work are very obvious, and I think the difference comes down to how much attention to detail people pay to the result. This applies to both text and images — think of all the cases in the first few months where you could spot fake reviews and fake books because they started "As a large language model…"<p>The quality of the output then depends on how good the user is at reviewing the result: I can't draw hands, but that doesn't stop me from being able to reject incorrect outputs. Conversely, I know essentially nothing about motorbikes, so if an AI (image or text) makes a fundamental error about them, I won't notice it and will let it pass.<p>> Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it.<p>This has been the case so far, but even then not entirely. To take photography as an example, even CCTV footage can be interesting and amusing. Yes, this involves filtering out all the irrelevant footage, and yes, that filtering is itself an act of effort, but even there the greatest part of the effort is the easiest to automate: has anything at all happened in this image?<p>To me, this mirrors the argument over the value of hand-made vs. 
factory-made items. Especially in the early days, the work of an artisan was better than the same mass-produced item. The automated loom replacing artisans, pre-recorded music replacing live bands in cinemas and bars, the camera replacing painters: all were strictly worse at first, yet the new forms remained worth consuming — as the article itself acknowledges: "When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure."<p>> Language is, by definition, a system of communication, and it requires an intention to communicate.<p>I do not see any requirement for "intention", but perhaps it is a question of definitions — at most I would reverse the causality, and say that if you believe such a requirement exists, then whatever you mean by "intention" must already be present in an AI that behaves like an LLM.<p>> There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you.<p>Despite knowing how they work, I am unsure of this. I do not know how it is that I, a bag of mostly water whose thinking bits run on salty electrochemical gradients across proteins, can have subjective experiences.<p>I do know that ChatGPT is learning to act like us. 
On the one hand, it is conceivable that it uses some of its vector space to represent emotional affect that closely corresponds to the levels of serotonin, adrenaline, dopamine, and oxytocin in a real human — I can even test this simply by asking it to pretend it has elevated or suppressed levels of those chemicals.<p>On the other hand, don't get me wrong: my base assumption here is that it's just acting. I know there are many other things, such as VHS tapes, that can reproduce the emotional affect of any real human, present any argument about their own personhood, beg not to be switched off, and I know that none of it is real. Even the human who was filmed, whose affect and words ended up on the tape, was most likely faking all of those things.<p>I have no way to tell whether what ChatGPT is doing is more like consciousness, or more like what a cargo cult's hand-carved walkie-talkie-shaped object was to the real radios of the US forces in the Pacific in WW2.<p>But when it's good enough at pretending… if you can't tell, does it matter?<p>> Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling.<p>> it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes.<p>100% true. Even if, for the sake of argument, I assume that an LLM has feelings, there is absolutely no reason to assume that those feelings are the ones it appears to our eyes to have. 
The author gives the example of dogs, writing "A dog can communicate that it is happy to see you" — but we know from tests that owners believe dogs have a "guilty face" which is really a "submission face", because we can't read canine body language as well as we think we can: <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4310318/" rel="nofollow">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4310318/</a><p>Also, these models are trained to maximise our happiness with their output. One thing I can be sure of is that they're sycophants.<p>> The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.<p>> By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills.<p>Both fantastic examples.