Beautiful article.

Off-topic: the Nominate app is exactly what’s missing from today’s UIs. Most user interfaces would benefit from “sparkles of AI helpfulness” rather than requiring a separate “AI app.” For example, macOS file renaming should offer AI-powered suggestions when renaming a file that doesn’t have a useful name. Another example: when creating a GitHub issue, the UI could use AI to predict which labels are most likely relevant and bring them to the top for selection.

It seems many “AI products” try to replace entire workflows instead of enhancing existing ones. GitHub puts a lot of effort into Copilot features like Copilot Code Reviews, but doesn’t seem interested in using AI to make existing code reviews more powerful and useful.
Curious about RAG. The article made it look like RAG is just some extra context you pass to the LLM along with the prompt; somehow I was under the impression it required training.

All I want is an LLM front-end to a local Wikipedia dump.
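For what it’s worth, that impression is worth correcting: classic RAG needs no training at all, just retrieval plus prompt construction. Below is a minimal sketch in Swift, not the article’s code: it assumes a local Ollama server on the default port (11434), some already-pulled model (`llama3.2` is purely an example name), top-level code in a main.swift with a Swift 5.7+ toolchain, and a hard-coded corpus standing in for an indexed Wikipedia dump.

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking  // URLSession lives here on Linux
#endif

// Toy corpus standing in for an indexed local Wikipedia dump.
let corpus = [
    "Ollama is a tool for running large language models locally.",
    "Retrieval-augmented generation (RAG) prepends retrieved documents to the prompt.",
    "Swift runs on Linux as well as macOS."
]

// Naive retrieval: rank documents by keyword overlap with the question.
// A real system would use embedding similarity, but nothing here requires training.
func retrieve(_ question: String, from docs: [String], topK: Int = 2) -> [String] {
    func words(_ s: String) -> Set<String> {
        Set(s.lowercased().split(separator: " ").map(String.init))
    }
    let query = words(question)
    return docs
        .map { ($0, query.intersection(words($0)).count) }  // (document, overlap score)
        .sorted { $0.1 > $1.1 }
        .prefix(topK)
        .map { $0.0 }
}

// The whole "augmentation" step: stuff the retrieved context into the prompt.
func buildPrompt(question: String, context: [String]) -> String {
    """
    Answer the question using only the context below.

    Context:
    \(context.joined(separator: "\n"))

    Question: \(question)
    """
}

let question = "Do I need to train a model to use RAG?"
let prompt = buildPrompt(question: question, context: retrieve(question, from: corpus))

// Send the augmented prompt to a local Ollama server (model name is just an example).
var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try JSONSerialization.data(withJSONObject: [
    "model": "llama3.2",
    "prompt": prompt,
    "stream": false
])

let (data, _) = try await URLSession.shared.data(for: request)
if let json = try JSONSerialization.jsonObject(with: data) as? [String: Any] {
    print(json["response"] as? String ?? "")
}
```

A real setup would swap the keyword overlap for embedding search over the dump, but the shape stays the same: retrieve, stuff the prompt, generate.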
“Can we take a moment to appreciate the name? Apple Intelligence. AI. That’s some S-tier semantic appropriation. On the level of jumping on “podcast” before anyone knew what else to call that.”

Apple didn’t invent the term “podcast,” though!
This looks great! I’ve been using OllamaKit for prototyping, but it’s not Linux compatible. While not explicitly documented as Linux compatible, this package looks like it is.

It’s an unfortunate aspect of the Swift-on-the-server ecosystem that most Swift developers don’t really know what it takes to be Linux compatible (very little, in most cases), so many packages end up accidentally not Linux compatible.
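For anyone wondering what “very little” means in practice: for a pure-Swift package it usually comes down to avoiding Apple-only frameworks, adding a Linux job to CI, and remembering that on Linux URLSession lives in FoundationNetworking rather than Foundation. A sketch of that last, most common fix (the `fetch` helper is just illustrative):

```swift
import Foundation
#if canImport(FoundationNetworking)
// On Linux, URLSession/URLRequest ship in FoundationNetworking, not Foundation.
// This conditional import is often the only source change a package needs.
import FoundationNetworking
#endif

// Builds unchanged on macOS and Linux; without the import above it fails to
// compile on Linux because URLSession isn't found.
func fetch(_ url: URL, completion: @escaping (Data?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        completion(data)
    }.resume()
}
```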
“The simplest way to use a language model is to generate text from a prompt” (followed by Python code)
For the initiated, this is not the simplest way: you can just `ollama run` a model in your terminal and get a chat REPL.