
I used GPT to build a search tool for my second brain note-taking system

291 points · by abbabon · over 2 years ago

18 comments

michaericalribo · over 2 years ago

These "augmented intelligence" applications are so exciting to me. I'm not as interested in autonomous artificial intelligence. Computers are tools to make my life easier, not meant to lead their own lives!

There's a big up-front cost to building a notes database for this application, but it illustrates the point nicely: encode a bunch of data ("memories"), and use an AI like GPT to retrieve information ("remembering"). It's not a fundamentally different process from what we do already, but it removes the need for me to spend time on an automatable task.

I'm excited to see what humans spend our time doing once we've offloaded the boring dirty work to AIs.
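The encode-then-retrieve loop this comment describes can be sketched in a few lines. This toy uses bag-of-words cosine similarity in place of GPT embeddings; all names and the example notes are invented for illustration, not taken from the author's tool.

```python
# "Encode memories, then remember": store notes as vectors, retrieve by similarity.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """'Encode' a memory as a bag-of-words vector (a stand-in for a real embedding)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def remember(query: str, memories: list[str]) -> str:
    """'Remembering': return the stored note most similar to the query."""
    q = embed(query)
    return max(memories, key=lambda m: cosine(q, embed(m)))

notes = ["bought new running shoes in March",
         "meeting notes about the database migration",
         "recipe for lentil soup"]
print(remember("buy running shoes", notes))
```

Swapping `embed` for a real embedding model is what turns this from lexical matching into the semantic retrieval the article is about.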
110 · over 2 years ago

In case folks are interested in trying it out, I just released the Obsidian plugin[1] for Khoj (https://github.com/debanjum/khoj#readme) last week.

It creates a natural language search assistant for your second brain. Search is incremental and fast. Your notes stay local to your machine.

There's also a (beta) chat API that allows you to chat with your notes[2]. But that uses GPT, so notes are shared with OpenAI if you decide to try it.

It is not ready for prime time yet, but it may be something to check out for folks who are willing to be beta testers. See the announcement on Reddit for more details[3].

Edit: Forgot to add that Khoj works with Emacs and Org-mode as well[4].

[1]: https://obsidian.md/plugins?id=khoj
[2]: https://github.com/debanjum/khoj#chat-with-notes
[3]: https://www.reddit.com/r/ObsidianMD/comments/10thrpl/khoj_an_ai_search_assistant_for_your_second_brain/?utm_source=share&utm_medium=web2x&context=3
[4]: https://github.com/debanjum/khoj/tree/master/src/interface/emacs#readme
rolenthedeep · over 2 years ago

One of my biggest dreams is a self-hosted AI that always listens through my phone and automatically takes notes, puts events in my calendar, sets reminders, and templates journal entries. A true personal assistant to keep my increasingly complex life in order.

I'd love a system where I can just point a search engine at my brain. I tried really hard for a while, but I just didn't have the discipline or memory to exhaustively document everything.

An AI that can do this kind of thing in the background would be an absolute godsend for ADHD and ASD people.
leobg · over 2 years ago

Slight overkill to use GPT, though it works for the author, and I can see that it's the low-hanging fruit, being available as an API. But this can also be done locally, using SBERT, or even (faster, though less powerful) fastText.

Also, it's helpful not to cut paragraphs into separate pieces, but rather to use a sliding-window approach, where each paragraph retains the context of what came before, and/or the breadcrumbs of its parent headlines.
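The chunking idea in the second paragraph can be sketched as follows: each chunk carries the breadcrumbs of its parent headings plus a window of preceding paragraphs. The function name and the heading-parsing details are illustrative assumptions, not anyone's actual pipeline.

```python
# Sliding-window chunking with heading breadcrumbs for a Markdown note.
def chunk_with_breadcrumbs(markdown: str, window: int = 1) -> list[str]:
    """Split a Markdown note into chunks, each prefixed with heading context."""
    breadcrumbs: list[str] = []  # current heading path, e.g. ["Projects", "Khoj"]
    paragraphs: list[tuple[tuple[str, ...], str]] = []

    for block in markdown.split("\n\n"):
        block = block.strip()
        if not block:
            continue
        if block.startswith("#"):
            level = len(block) - len(block.lstrip("#"))
            title = block.lstrip("#").strip()
            breadcrumbs = breadcrumbs[: level - 1] + [title]
        else:
            paragraphs.append((tuple(breadcrumbs), block))

    chunks = []
    for i, (crumbs, text) in enumerate(paragraphs):
        # sliding window: carry up to `window` preceding paragraphs as context
        context = [p for _, p in paragraphs[max(0, i - window): i]]
        prefix = " > ".join(crumbs)
        chunks.append("\n".join(filter(None, [prefix, *context, text])))
    return chunks


note = """# Projects

## Khoj

First paragraph about setup.

Second paragraph about search."""

for c in chunk_with_breadcrumbs(note):
    print(c)
    print("---")
```

Embedding these enriched chunks, instead of bare paragraphs, is what keeps a match like "Second paragraph about search" anchored to its "Projects > Khoj" context.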
sowbug · over 2 years ago

I wonder whether your individually trained chatbot will be allowed to assert the Fifth Amendment right against self-incrimination to stop it from talking when the police interview it. And if it's allowed, do you or it decide whether to assert it? What if the two of you disagree?

Similar questions for civil trials, divorce proceedings, child custody....
PaulHoule · over 2 years ago

Would be nice to see some indication of how well it works in his case.

I worked on a 'Semantic Search' product almost 10 years ago that used a neural network for dimensionality reduction. The scoring function took inputs from both the 'gist vector' and the residual word vector, which was possible to calculate in that case because the gist vector was derived from the word vector and the transform was reversible.

I've seen papers in the literature that come to the same conclusion about what it takes to get good similarity results with older models, since a significant amount of the meaning in text is in pointy words that might not be captured in the gist vector. Maybe you do better with an LLM, since the vocabulary is huge.
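The two-signal scoring described here can be illustrated with a toy hybrid: blend a dense "gist" similarity with a lexical-overlap score so rare, "pointy" words still affect the ranking. The mixing weight and all vectors below are invented for the example; the original product's scoring function is not public here.

```python
# Hybrid scoring: dense (gist) cosine similarity blended with lexical overlap.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def jaccard(q: str, d: str) -> float:
    """Lexical signal: word-set overlap between query and document."""
    qs, ds = set(q.lower().split()), set(d.lower().split())
    return len(qs & ds) / len(qs | ds) if qs | ds else 0.0

def hybrid_score(q_vec, d_vec, q_text, d_text, alpha=0.5):
    # alpha blends the dense "gist" score with the exact-word score
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * jaccard(q_text, d_text)
```

A document that matches only on gist, or only on exact words, gets a partial score; one that matches on both ranks highest.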
trane_project · over 2 years ago

I've been thinking of using GPT or similar LLMs to extract flashcards to use with my spaced repetition project (https://github.com/trane-project/trane/). As in, you give it a book and it creates the flashcards for you, along with the dependencies between the lessons.

I played around with ChatGPT and it worked pretty well. I have a lot of other things on my plate to get through first (including starting a math curriculum), but it's definitely an exciting direction.

I think LLMs and AI are not anywhere near actual intelligence (ChatGPT can spout a lot of good-sounding nonsense ATM), but the semantic analysis they can do is by itself very useful.
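One way the flashcard-extraction idea might look in practice is a prompt that asks the model for structured JSON per chapter. The prompt wording and schema below are hypothetical, not part of the Trane project.

```python
# Build a hypothetical flashcard-extraction prompt for an LLM.
import json

def flashcard_prompt(chapter_text: str) -> str:
    # Illustrative schema: cards plus prerequisite lesson titles.
    schema = {"cards": [{"front": "...", "back": "..."}],
              "depends_on": ["earlier lesson titles"]}
    return (
        "Extract spaced-repetition flashcards from the text below.\n"
        f"Respond with JSON matching this shape: {json.dumps(schema)}\n\n"
        f"Text:\n{chapter_text}"
    )
```

Asking for a fixed JSON shape makes the model's output parseable, so the cards and lesson dependencies can be loaded into the scheduler mechanically.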
tra3 · over 2 years ago

This is fascinating.

Can I train it on 5 years of stream-of-consciousness morning brain dumps and then say "write blah as me"?

Before I do that, I'd love to know if training data becomes part of the global knowledge base available to everyone.
asdff · over 2 years ago

Did the author show how this system output results? I see an example of a lexical search and the technical implementation, but no example of semantic output showing how it's relevant to the lexical search string without containing that string. The author used the literal search string "failure mode" as their example. I was wondering if ChatGPT would bring up results relevant to the layperson's interpretation of failure mode, a technical interpretation, or something in between.
danwee · over 2 years ago

Umm, the only thing that stops me from doing this is uploading my notes to OpenAI's servers.
FiberBundle · over 2 years ago

Does anybody know how search engines apply semantic search with embeddings? To my knowledge, no practical algorithms exist that find exact nearest neighbors in high-dimensional spaces (such as those that word/sentence/document vectors are embedded in), so those wouldn't give you any benefit compared to an iterative similarity search as applied here, which is obviously totally impractical for real search engines. There are approximate nearest-neighbor algorithms such as locality-sensitive hashing, but even they seem impractical for real-world usage at the scale of the indexes that search engines use. So how can Google, for example, make this work?
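The approximate-nearest-neighbor family mentioned here can be sketched with random-hyperplane LSH for cosine similarity: each vector gets a short bit signature, similar vectors tend to share signatures, and a query only compares against its own bucket instead of the whole corpus. Sizes below are toy values for illustration; production systems use far larger dimensions and multiple hash tables.

```python
# Minimal random-hyperplane LSH sketch for cosine similarity.
import random

random.seed(0)
DIM, N_PLANES = 8, 6  # toy sizes

# Random hyperplanes through the origin define the hash.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_PLANES)]

def signature(vec: list[float]) -> str:
    """One bit per hyperplane: which side of the plane the vector lies on."""
    return "".join("1" if sum(p * v for p, v in zip(plane, vec)) >= 0 else "0"
                   for plane in planes)

# Bucket a small corpus by signature, then look up candidates via the hash only.
corpus = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(20)]
buckets: dict[str, list[int]] = {}
for i, vec in enumerate(corpus):
    buckets.setdefault(signature(vec), []).append(i)

query = corpus[3]  # an identical vector always lands in its own bucket
print(buckets[signature(query)])
```

Real engines layer tricks on top of this idea (multiple tables, graph-based indexes like HNSW, quantization), which is how they avoid the brute-force scan the comment worries about.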
whatever1 · over 2 years ago

So if you keep adding notes furiously every day for years, do you asymptotically get your consciousness on a chip?
abrkn · over 2 years ago

I'd love to have a ChatGPT that was also trained on all of the pages from my "second brain," Roam Research.

Imagine: I could ask it questions about myself, my friends, and my business. In many ways it would know me better than I know myself, having read all my journal entries.

How many years are we away from something like this?
qwerty456127 · over 2 years ago

I don't imply any judgment (like good/bad), but I tend to suspect that the major (not necessarily intentional) reason all these "second brains" (which I use too) exist, in the grand scheme of things, is to be high-quality input for AIs to learn from.
articsputnik · over 2 years ago

Wow, this was super interesting as someone using a second brain daily. Thank you so much for digging into it, putting in the work, and sharing it with us all! Much appreciated. I will follow you for more.

I am excited to do more with my second brain, but one concern, as you point out, with using ChatGPT or similar is that we'd need to upload all our private and sometimes sensitive notes, which is a no-go for me. So happy that you do everything locally. I wonder what the equivalent would be to train the model to search and answer questions based on our second brain (plus the already-trained information). That's also where Obsidian will win in the long run, as other tools do not keep the data locally. Obviously, for notes already in the cloud, they could train on them, but training on customer-sensitive data would be a big problem. Something I will follow closely.
college_physics · over 2 years ago

Desktop search feels like it has stagnated for at least a decade. Yet it's an obvious way to enhance privacy, improve relevance, and even open up entirely new capabilities.
totetsu · over 2 years ago

I think letting a language model make an outline of a topic you want to take notes on, and then writing in the details yourself, might not be such a bad thing.
lukemtx · over 2 years ago
I wanted to do this! :D