A couple of notes on the blog post: the example

    echo "Please explain this code: $(cat some_class.py)" | mark

needs a dash at the end to work correctly. Also, it doesn't output pandoc-flavored markdown (blank lines before headings and code chunks) unless I specifically ask it to, as in:

    echo "Please explain this code, using pandoc-flavored markdown, leaving a blank line before headings and code chunks: $(cat some_class.py)" | mark -
This looks very good; I was just reading the code on GitHub.

I mostly use local models. I might modify 'mark' myself, or wait a while and see if anyone does a pull request.

A little off topic, but I run ollama at the command line using:

    echo "what is 1 + 3?" | ollama run llama3:latest
I wonder if the CLI could have a "watch mode" where it watches a file or directory and automatically appends the response as you edit and save a Markdown file. Not sure how well it would work in practice, but it seems like it could be an interesting alternative to the "chat" format.
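Just to sketch what that might look like on Linux (purely hypothetical, not a feature of mark today; this assumes inotify-tools is installed and that running `mark notes.md` appends the model's reply to that file, with `notes.md` being just an example filename):

    # Re-run mark every time the file is saved.
    # inotifywait exits after one close_write event, so this loops once per save.
    while inotifywait -e close_write notes.md; do
        mark notes.md   # assumed to append the reply to the same file
    done

In practice the fiddly parts would be debouncing rapid saves and making sure the tool's own append doesn't re-trigger the watcher.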
Fabric and this!!! This is promising to build on.

danielmiessler.com/p/fabric-origin-story

Together with Obsidian, this is the setup I'm trying to build now. I'm using Obsidian to plan the vector and metadata to pull and reference with the assistants, and building function tools to query them.
A similar tool for llama.cpp: https://tildegit.org/unworriedsafari/mill.py
See also: https://news.ycombinator.com/item?id=40866228

I think Ryan Elston's blog post is more effective at explaining the advantages of markdown for LLM interaction.