

Ask HN: Knowledge-base site using VSCodium/FOAM/MkDocs

2 points | by dv35z | about 1 year ago
Hello HN -

I'm using VSCodium (open-source VSCode) with the FOAM plugin (similar to Obsidian / Roam) to create a cross-linked notes/knowledge base. If you haven't tried it out - it's easy to set up, and works great. It's like Obsidian... but open-source.

The next step is publishing those notes as an HTML site, and MkDocs-Material is perfect for this. The difficulty with using MkDocs and Markdown files containing [[MyTopic]] links is that MkDocs doesn't know how to automatically link them to other pages. Fortunately, there is an MkDocs plugin called RoamLinks, which automatically converts the [[MyTopic]] links into HTML links to other pages - no matter where they are on the filesystem (very cool!!). That "unblocked" me: one struggle I was having while writing was that renaming and re-organizing files broke tons of Markdown links - a huge chore that made writing un-fun. Now I can just "bracket bracket" any term that I want to reference or write about later, and it's good to go. Basically, it's working great. (This approximates the experience of Obsidian Publish.)

The problem I'm having now: when I have NOT yet written a page for [[MyNewTopic]], MkDocs/RoamLinks can't link to the (unwritten) page. Instead, it outputs the text quite literally, as "[[MyNewTopic]]", brackets and all. Quite ugly. I'd love to have MkDocs/RoamLinks NOT generate links for those pages AND hide/suppress the "[[ ]]". I did a quick search on this (including ChatGPT) and the suggestions were: (1) write a custom Markdown renderer that runs before MkDocs processes the files, (2) create a custom MkDocs plugin, or (3) post-process the HTML (stripping out the [[ ]]).

One of the benefits of the VSCodium/FOAM/MkDocs/RoamLinks stack is that it basically "just works": I write, run `mkdocs build`, get a website, and can `aws s3 sync` it to an AWS bucket, and the site is live. The only "glitch" right now is these dannnng brackets.

Can anyone suggest a better way to suppress the [[ ]] for links which don't exist yet?

I'd also love to hear others' thoughts on the "open-source, cross-linked notes, published as a searchable website" stack.

Thanks!

// Links to the tools

- https://vscodium.com/
- https://foambubble.github.io/foam/
- https://squidfunk.github.io/mkdocs-material/
- https://github.com/Jackiexiao/mkdocs-roamlinks-plugin
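For reference, option (1) could look something like the sketch below: a small pre-processing pass over the Markdown sources before MkDocs runs. The function name and behavior here are hypothetical, not part of FOAM, MkDocs, or RoamLinks - it simply drops the brackets from any `[[Topic]]` whose `Topic.md` does not exist anywhere under the docs tree, leaving real wiki-links untouched for RoamLinks to resolve.

```python
import os
import re

def suppress_missing_links(markdown_text, docs_dir):
    """Replace [[Topic]] with plain 'Topic' when no Topic.md exists under docs_dir."""
    # Collect the stems of every Markdown file in the docs tree
    existing = set()
    for root, _dirs, files in os.walk(docs_dir):
        for name in files:
            if name.endswith(".md"):
                existing.add(os.path.splitext(name)[0])

    def replace(match):
        topic = match.group(1)
        # Keep the wiki-link untouched if the page exists (RoamLinks resolves it)
        if topic in existing:
            return match.group(0)
        # Otherwise drop the brackets so the rendered output reads cleanly
        return topic

    return re.sub(r"\[\[([^\]]+)\]\]", replace, markdown_text)
```

Running this over each `.md` file into a staging directory before `mkdocs build` would keep the rest of the "just works" pipeline intact.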

1 comment

dv35z | about 1 year ago
Option #3 (HTML post-processing) seemed to work. This Python script takes a directory of HTML files and replaces the [[ and ]] with <span class="link-placeholder"> and </span>:

```python
import os

from bs4 import BeautifulSoup

# Process every HTML file under a directory tree
def process_html_files(directory):
    for root, dirs, files in os.walk(directory):
        for file in files:
            if file.endswith('.html'):
                filepath = os.path.join(root, file)
                process_html_file(filepath)

# Process an individual HTML file
def process_html_file(filepath):
    with open(filepath, 'r') as f:
        html_content = f.read()

    # Parse HTML
    soup = BeautifulSoup(html_content, 'html.parser')

    # Iterate through all text nodes in the HTML
    for text_node in soup.find_all(text=True):
        if '[[' in text_node and ']]' in text_node:
            # Replace [[ and ]] within the text node with <span> tags
            replaced_text = text_node.replace(
                '[[', '<span class="link-placeholder">'
            ).replace(']]', '</span>')
            text_node.replace_with(BeautifulSoup(replaced_text, 'html.parser'))

    # Write back the modified HTML
    with open(filepath, 'w') as f:
        f.write(str(soup))

# Replace 'site' with the directory containing your HTML files
process_html_files('site')
```