For fun (and learning) I let an AI build FixThisBug.de, an AI-powered code-fixing tool that differs from typical AI coding assistants: it uses a local Ollama instance running the qwen2.5 model.<p>Key Features:<p>- AI-built: built entirely with AI, over several iterations via v0 and Cursor.<p>- Local Processing: uses the qwen2.5 model through Ollama for code analysis and fixes<p>- Dual Language: full support for English and German interfaces<p>- "Free Tier": limited daily fixes for casual users<p>- "Pro Access": unlimited fixes for authenticated users (just for fun, it's free :) )<p>- Privacy Compliant: GDPR-compliant and hosted in Germany<p>Tech Stack:<p>- Frontend: Next.js 15.0, React 19<p>- Backend: Supabase + Next.js API Routes<p>- AI: local Ollama instance running qwen2.5<p>- Auth: Supabase Auth with hCaptcha<p>- Infrastructure: German servers for EU compliance<p>To add a little <i>extra</i> to the project: unlike GitHub Copilot or similar tools, FixThisBug.de processes everything locally on the server. No external AI service is used.<p>I'm particularly interested in feedback on:<p>1. The local LLM integration approach<p>2. Performance optimizations for Ollama<p>3. Your user experience with the service<p>4. Privacy considerations<p>Would love to hear your thoughts and suggestions!
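For anyone curious what the "local LLM integration" part might look like, here is a minimal sketch of a Next.js route handler talking to a local Ollama instance. The endpoint (`http://localhost:11434/api/generate`) and response field (`response`) are Ollama's documented defaults; the route shape, prompt wrapper, and helper name `buildFixRequest` are my own illustrative assumptions, not the actual FixThisBug.de code.

```typescript
// Shape of a non-streaming request to Ollama's /api/generate endpoint.
type OllamaRequest = {
  model: string;
  prompt: string;
  stream: boolean;
};

// Build the JSON body asking qwen2.5 to fix a snippet of code.
// (Prompt wording is a placeholder, not the service's real prompt.)
export function buildFixRequest(code: string, language: string): OllamaRequest {
  return {
    model: "qwen2.5",
    prompt: `Fix the bugs in the following ${language} code and briefly explain the changes:\n\n${code}`,
    stream: false,
  };
}

// Hypothetical Next.js (app router) API route: forwards the user's snippet
// to the local Ollama server and returns the model's answer.
export async function POST(req: Request): Promise<Response> {
  const { code, language } = await req.json();

  // Ollama listens on localhost:11434 by default; nothing leaves the server.
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildFixRequest(code, language)),
  });

  // With stream: false, the full completion arrives in the `response` field.
  const data = await res.json();
  return Response.json({ fix: data.response });
}
```

Keeping the Ollama call inside a server-side route like this is what makes the privacy claim work: the browser only ever talks to the app's own backend, never to a third-party AI API.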