These days I increasingly find myself asking AI instead of turning to man pages, docs, or even specialized books on these topics. As an author, how would you compete against that?
A colleague asked me for advice on being a manager. Amongst other advice, I suggested he read High Output Management.<p>He came back a few days later and said that ChatGPT knew what was in the book - was it okay if he just read the summary?<p>I wasn’t sure what to say. It’s probably true that ChatGPT can summarize all the main points from the book. But it has always been easy to find the key points from books online. The hard part of being a manager is figuring out how to take the obvious instructions and act on them consistently.<p>Maybe some people can do that just by reading the summary. For me though, reading the whole book is important. I find myself thinking back to the examples used to illustrate the points. And I find that encountering ideas repeated in different ways as I read the book helps make them part of my mental framework. I read lots of interesting ideas in quick articles, but they rarely stick with me unless there is a specific translation to action.<p>I ended up telling my colleague that it was up to him to decide how to learn best. If it were me, I’d need the book. But he needs to know his own learning system.
Books exist for people who want in-depth information with the full context, in an organized manner.<p>Short forms have always been available, be it blog posts, Wikipedia articles, CliffsNotes, or other such things. Books survive because source material is needed to generate all of those other things, and those short-form versions don’t cut it for everyone. I don’t see LLMs as any different.<p>A book can tell you something you didn’t know. With an LLM, you need to know enough to ask.
LLMs do not generate new content; they just shuffle old content together in new ways. So no, they do not kill an industry of people creating new original content. Authors only need to worry if they were not adding anything new to the world to begin with, and were instead relying on marketing to sell repackaged existing content.
AI slop produces AI slop content. It is the new spam. Sure, it may be able to summarise things, but it gets things very wrong.<p>You will have shallower knowledge than the person who reads good source content.<p>Do you want broad but very shallow knowledge?<p>I’d much rather cast a deep net that AI slop can’t touch.<p>I say this as someone who has used most LLM tools. They are tools, not replacements. And they are remarkably shallow but great at appearing “magical”.
I think you might be confusing different activities that look similar.<p>Search, research, exploratory reading, browsing, fact checking, cross referencing, debunking, genealogy, and making etymological and epistemological connections are all different things. As an author and researcher, I produce and consume many more types of connections and paths than a simple neural net that can make fast associations on past training material can offer. YMMV.
Generative AI is just the new "bottom" in terms of quality. All that you have to do to compete against it is to be a little better than it. The real question to me is whether the quality of this new "bottom" is adequate. Sometimes it is, for some people and some applications, and sometimes it is not.<p>I do not use it myself because I am a researcher and I often ask questions that don't have a lot of "training data" yet. And even if an area is well covered in terms of "training data", there is often a lot of "know-how" that isn't written down in an easily digestible form. It is passed on verbally or through examples in person. So the idea that the "training data" is complete is also not true in general.<p>Many other people in this thread have already covered that books are much more structured and organized than any answer generative AI gives you. Let me discuss another reason why books still matter. Books can give you a wider view than the "consensus" that something like ChatGPT gives you. I know a lot of books in my field that derive results in different ways, and I often find value in these different approaches. Moreover, suppose that only one book answers the question you want answered while the others gloss over that subject. Generative AI likely will not know precisely what one random book said on the subject, but if you were searching through multiple books on the subject yourself, you likely would pick up on this difference.<p>Relevant Paul Graham quote [1]:<p>> We can't all use AI. Someone has to generate the training data.<p>[1] <a href="https://x.com/paulg/status/1635672262903750662" rel="nofollow">https://x.com/paulg/status/1635672262903750662</a>
A question like this can only come from someone who doesn't know how specialized books can get.<p>'Ten things to know about being a manager' and similar aren't specialist books.
Authors compete by being competent, doing research, and outputting factual information. Or just, you know, being original. In a world where LLMs can’t even differentiate between a recipe and an old Reddit joke and tell you to put glue on pizza, it is absurd to think they “killed the book industry”.<p>What’s with this bloody obsession with killing other products and industries? Every time someone farts in tech, everyone starts shouting that it just killed something else. Calm down. Relax a little bit and get some perspective. You’re drowning yourself in the Kool-Aid.<p>LLMs did not kill the book industry, just like Bitcoin did not kill the world’s financial system.
The smell of paper, ink and binding glue.<p>The feel of quality paper.<p>The way the spine cracks when you first open a book.<p>The way the spine creases after you've read a book a few times.