Am I the only one who thinks this is a really bad idea?<p>By offloading your cognitive tasks to an AI, you may look smarter now, but you're becoming dumber in the long run, because you're never really challenging and exercising your intellect. This book[1] goes into a lot of detail about how rote memorization and recall are essential to critical thinking (you have a limited working memory, and the way you're able to think critically about complex subjects is by chunking, which only works with concepts you've previously memorized). If you stop exercising your recall and critical thinking, they'll get weaker and weaker.<p>I already feel that with ChatGPT. Before, whenever I needed to learn some programming concept, I'd have to search vast amounts of resources to learn it. By being exposed to many different points of view, I always felt that what I learned stuck with me for much longer. If I just ask ChatGPT, I get the answer faster, but I also forget faster. It's not learning.<p>Learning, with a capital L, is not supposed to be easy. It's supposed to be hard. Education is about making what is hard a worthwhile pursuit. The people who get lured into thinking they'll be smarter if they plug themselves into the matrix will be shooting themselves in the foot.<p>For me, relying on OpenAI to function cognitively is like relying on Google to turn my lightbulb on. It looks cool, but it doesn't make any sense.<p>[1] <a href="https://www.goodreads.com/book/show/4959061-why-don-t-students-like-school?from_search=true&from_srp=true&qid=KzeavOftBB&rank=1" rel="nofollow">https://www.goodreads.com/book/show/4959061-why-don-t-studen...</a>
"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
- Socrates, from Plato's dialogue Phaedrus<p>Yes, this is going to make us smarter. Just like the personal computer was the 20th century's "bicycle for the mind", large language models will be the 21st century's "copilot for the mind". The only scary part is that it feels like we're handing over some of the reins.
All week I've seen basically two dominant takes on how AI is going to impact creative work in the future (three if you count "generative AI is a fad with no value," which I do not): (1) It's going to take people's jobs (rip); or (2) It's going to help people do their jobs better.<p>I'm a writer and have been thinking a lot about use case (2)—since I'm not emotionally ready for my job to be taken by AI, I'm trying to figure out how to use AI to do it better. So far, I've been exclusively considering the use of AI in the sense of the "photographers using Photoshop" analogy. That is, using GPT-4 to quickly draft or edit things based on prompts, while I, the human, am still "doing the work" creatively speaking. Obviously this is feasible today and going to become normalized soon, to some extent.<p>However, this article makes an interesting case for (2) in a way I hadn't considered before: GPT-4 opens up <i>entirely new ways for humans to work</i>, period. ChatGPT already improves on Google for high-level research—ask it for a summary of some well-established topic or field and chances are you'll get a coherent set of pretty accurate facts cobbled together from its training data (much of the Web, Wikipedia, scientific papers). But when tools become available that let ChatGPT provide this kind of summary <i>from my own works, notes, and prior research</i>? That is going to totally change the game.<p>In the last couple of years I've already seen easily a 2-3x boost in my writing productivity thanks to Obsidian, a research tool that—at least the way I use it—is entirely "manual" (i.e. not automated or "smart"). If I could get the benefits of Obsidian for making connections between information and ideas, powered by an intelligent assistant that "knows" how I think and what I think about... it's cliché to say the possibilities are endless, but that's really what I'm looking at here.<p>Anyway, I want to inject some optimism into this hot topic.
It may end up that in 10 years we're all unemployed. But I still need to do my job today. I see a lot of reasons to be excited, rather than defeatist, for the applications and value of GPT-4 in this regard.
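For anyone curious what's under the hood of those "chat with my notes" tools: they mostly retrieve the relevant notes first and paste them into the model's prompt. Here's a toy sketch of that retrieve-then-prompt shape. A real tool would use an embedding model and an actual LLM call; the word-count "embedding" and the example notes below are stand-ins for illustration.

```python
from collections import Counter
import math

def embed(text):
    # Crude stand-in for an embedding model: a word-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(notes, query, k=2):
    # Rank the user's notes by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(embed(n), q), reverse=True)
    return ranked[:k]

def build_prompt(notes, query):
    # The retrieved notes get pasted into the model's context window;
    # the LLM call itself is omitted here.
    context = "\n".join(f"- {n}" for n in retrieve(notes, query))
    return f"Using only these notes:\n{context}\nAnswer: {query}"

notes = [
    "Obsidian links notes bidirectionally to surface connections",
    "Chunking lets working memory handle complex subjects",
    "GPT-4 can summarize a topic from its training data",
]
print(build_prompt(notes, "how does chunking help working memory"))
```

The point is that the model never "knows" your whole vault; something like this search step decides which slices of your notes it sees per question.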
I've just watched the Microsoft 365 Copilot presentation. They never mentioned hallucinations, and talked about errors maybe once?<p>This thing will definitely make stupid errors and will make things up when summarizing, building presentations, etc. - unless it achieves near-human-level intelligence, of course - but in that case everyone will lose their job.<p>I'm really curious what will eventually happen. How are we going to live with it - strange presentation points, wrong numbers in reports, enormous amounts of auto-generated business-speak? Will knowledge be corrupted more and more?
Productivity junkies need to remind themselves that their goals aren't necessarily useful or practical goals.<p>The mind is working exactly how it should: there is no practical need to remember everything you've read or seen (it has been this way ever since Google Search existed, and even more so since 2008, when you could google the entire world from your pocket). The mind does a great job of filtering all the noise already and ensures you only remember the most important stuff.
"Over the next year or two, I expect GPT-4 and its successors to become a copilot for the mind: a digital research assistant that will bring to bear the sum total of everything you’ve read, everything you’ve thought, and everything you’ve forgotten every time you touch a keyboard."<p>I'm not sure everybody is comfortable with handing all that over to Microsoft or other tech giants. Would you be comfortable letting Facebook have video cameras in all the rooms of your house? If not, then why let Microsoft have all your emails, browsing history, and everything else on your computer?<p>Letting some AI controlled by some megacorp know and control everything about your life isn't necessarily the best idea.
> In Jorge Luis Borges’ short story “The Library of Babel,” he creates an infinite library that contains all possible books … a book that predicts your future accurately, a book that unifies quantum mechanics with general relativity … But again, this library contains every possible book. So it also contains a lot of gibberish. Most of the books, in fact, are complete gibberish.<p>Google is converging on this "Library of Babel" now, and if it isn't there already, LLMs will ensure that is its final state.<p>This idea of LLMs + curated authoritative sources is an interesting and potentially really powerful antidote.
So, note-taking? I already use Google Keep as an extension of my long-term memory, for storing everything from book recommendations to birthdays. For short-term "volatile" memory, a pen and paper or a .txt on my computer works great.<p>In addition, I have hundreds of pages of typed notes in per-topic gdocs that I can refer to quite handily using Ctrl+F.<p>I actually wouldn't want GPT to summarize or reduce these notes for me because I would always doubt the accuracy, and reading the exact words that I've typed invokes much better contextual recall than reading a rephrasing. Also, usually when I can't find something immediately using Ctrl+F, it means it's time to reorganize and that grows the body of knowledge.<p>Not to mention, much of the good that comes from note-taking comes from the act of writing notes. Sending off large chunks of other people's text for an AI to summarize defeats this benefit.<p>GPT is impressive in some ways but such "pockets" of the GPT ecosystem are very reminiscent of web3/blockchain - forcing inferior solutions on already well-solved problems.
For me this would work only in a "work" setting. Like, if I'm being paid to perform some tasks, then I want to be able to use tech like "a copilot for my mind". But outside of it, I would rather not. If you come to my party using such a copilot and start quoting nice books always at the right time, and telling really good jokes and all, well, what can I say, you won't surprise me at all. I actually wouldn't want you at my party.
> It will bring back the ideas, quotes, and memories you need, when you need them most, with no organizing, tagging, or linking required. It will work as a personalized extension of your intelligence available 24/7 at the touch of a button.<p>How can people come to this conclusion after using ChatGPT?<p>(1) Its responses are often good, but they are routinely wrong as well.<p>(2) Search engines already do these things, without the same loss of fidelity.
I would like an AI assistant that can leverage personal/work emails (via API), texts, phone calls, photos (friends' social media posts with me tagged), movies/music I like, Spotify likes, Netflix likes, browser history, health data, YouTube history, etc. - to train on, so I could ask it personal questions about myself. When was the last time I played softball? Use it to determine my interests and hobbies (a ranked list).<p>This may be a security and privacy nightmare. However, I do believe that Google, Microsoft and others have already begun this process. So why shouldn't we have detailed access to all our combined accounts?
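The plumbing for "when was the last time I played softball?" is mostly an aggregation problem: pull events from each source into one timeline, then query it. Here's a minimal sketch of that shape; the source functions and event format are made up for illustration (real sources would be API calls).

```python
from datetime import date

def mock_calendar():
    # Stand-in for a calendar API: (date, description) events.
    return [(date(2023, 6, 4), "softball game at the park"),
            (date(2023, 1, 15), "dentist appointment")]

def mock_photos():
    # Stand-in for a photo library with captions/tags.
    return [(date(2022, 8, 20), "softball team photo"),
            (date(2023, 3, 2), "hiking trip")]

def build_timeline(*sources):
    # Merge all sources into one chronologically sorted timeline.
    events = [e for src in sources for e in src()]
    return sorted(events)

def last_time(timeline, keyword):
    # Answer "when did I last ...?" with a simple keyword match.
    matches = [d for d, desc in timeline if keyword in desc.lower()]
    return max(matches) if matches else None

timeline = build_timeline(mock_calendar, mock_photos)
print(last_time(timeline, "softball"))  # most recent softball event
```

An LLM layer on top would mainly translate the natural-language question into this kind of lookup; the hard (and privacy-sensitive) part is granting it the source access in the first place.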
I am extremely bullish on this. I use ChatGPT every day. I am learning at a rate that I haven't in years and I love it, the simple fact that I can ask follow-up questions or ask it to shorten things. This is the core problem of learning: "I already know this, I am bored" or "I can't understand this right now". For the first: hey ChatGPT, sum this up for me; for the second: hey ChatGPT, give me more examples, relate this to that. I am so excited for the future.
> If I just ask ChatGPT, I get the answer faster, but I also forget faster. It's not learning.<p>I agree. It's necessary to put in the effort to internalize knowledge, as well as to recall it once in a while. It does seem important to internalize _some_ knowledge.