TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Ask HN: Need Advice: RTK Query with Redux vs. React Query with Zustand

2 points by uniquepj, 12 months ago
Hello Everyone,

I'm currently building a chatbot using Next.js, and I've been using Redux with Redux Toolkit (RTK) for state management. So far, I've only implemented slices and haven't included any caching mechanism.

As my project grows, I'm planning to implement RTK Query for caching and infinite scrolling. However, I've also come across React Query with Zustand as a potential alternative.

I'm particularly concerned about future scalability and the ability to handle API requests efficiently. Given these considerations, I'm looking for advice on the following:

1. How does RTK Query compare to React Query in terms of performance and scalability?
2. Are there any significant advantages of using Zustand over Redux for state management in this context?
3. Any potential pitfalls or challenges I should be aware of with either approach?
4. What are some best practices for implementing caching and infinite scrolling with these libraries?

I'd appreciate any insights, experiences, or recommendations you can share. Thank you!
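(Editor's note: not part of the original post.) The infinite-scroll part of the question boils down to the same core logic in both libraries: fetch cursor-based pages and merge them into one cached list, deduplicating on refetch. Here is a framework-agnostic TypeScript sketch of that merge step; all types and names are illustrative, not the actual API of RTK Query's `merge` option or React Query's `useInfiniteQuery`:

```typescript
// Minimal cursor-based page merging — the core logic behind infinite
// scrolling with either RTK Query or React Query. Illustrative only.

interface Message {
  id: number;
  text: string;
}

interface Page {
  items: Message[];
  nextCursor: number | null; // null means no more pages
}

// Pretend server: 5 messages, served 2 per page.
const ALL: Message[] = [1, 2, 3, 4, 5].map((id) => ({ id, text: `msg ${id}` }));

function fetchPage(cursor: number): Page {
  const items = ALL.slice(cursor, cursor + 2);
  const next = cursor + 2;
  return { items, nextCursor: next < ALL.length ? next : null };
}

// Merge a new page into the cached list, deduplicating by id so that
// a refetched page doesn't produce duplicate entries.
function mergePage(cached: Message[], page: Page): Message[] {
  const seen = new Set(cached.map((m) => m.id));
  return [...cached, ...page.items.filter((m) => !seen.has(m.id))];
}

// Simulated infinite scroll: keep fetching until nextCursor is null.
let cache: Message[] = [];
let cursor: number | null = 0;
while (cursor !== null) {
  const page: Page = fetchPage(cursor);
  cache = mergePage(cache, page);
  cursor = page.nextCursor;
}
console.log(cache.map((m) => m.id)); // [1, 2, 3, 4, 5]
```

Both libraries let you supply exactly this kind of merge function (RTK Query via endpoint options, React Query via page params), so the choice between them is mostly about ergonomics rather than capability here.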

1 comment

solardev, 12 months ago
(Not an expert, just a fellow JS dev)

What exactly are you wanting to infinite scroll/cache? You said chatbot, but your questions make it sound like you have a lot more than that going on in the frontend.

If it's the actual chatbot prompts and responses you want to cache (like the actual messages in a ChatGPT-like app), I think the challenge there would be more about reasoning through the server/client sync and how to paginate an infinitely long document (each individual chat) so that the client can fetch and cache chunks at a time. Or maybe you don't even have to worry about that at first, since the chats are probably going to be mostly natural language and code, which should compress easily over the wire anyway. It's something you can optimize and paginate later.

What I'm getting at is that I think the challenge is going to be more about figuring out that server/client logic than about clientside state in and of itself. Unless your chat engine or long-term storage lives in the frontend itself (and at that point, why not just deploy as a desktop app altogether), the chat app is really just a thin client to the backend operations, where the reads/writes and scaling would actually happen.

For the client-server stuff, RTK Query in particular is really good. It's designed to easily sync client state with API responses and handle caching effortlessly, with great developer ergonomics. I hated RTK on its own, but add RTK Query and it becomes wonderful, like a more powerful/mature version of useSWR. But I don't think that's really the crux of your problem, if I'm understanding your use case right (which I might not be)...

You're not planning on implementing the backend with Next.js too, are you? Next.js is great for write-rarely, read-often monoliths (blogs, web apps that rarely need to save state to the server, etc.). It's primarily a Node server backed by great CDN integration, with some serverless helpers. But if you try to implement real-time chat on top of that, with multiple users constantly writing and reading in real time, that's not the sort of thing that's going to be easy to scale on that architecture. Vercel does offer various hosted/integrated storage solutions (Postgres, KV, etc.), but if you go down that route, you basically end up paying Vercel lots of money to handle scaling for you (which is fine if that's what you want). In that case they'll take care of the backend scaling for your dollars, and the clientside state lib matters less (because each chat's authoritative state really lives on the server, and the client just displays read-only fragments of it at any given time). If you build it all on serverless/managed storage, the bottleneck presumably wouldn't be from the Vercel network to each user, but from the Vercel network to your chatbot server. Where's that part actually going to live? (Or are you just using someone else's LLM API?)

If you just want to plug in someone else's API, I also wonder if you might be reinventing the wheel... on the frontend there are already open-source kits like https://github.com/lobehub/lobe-chat that take an API key and give you a functional chat interface. Might be worth looking into if you haven't already?
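(Editor's note: not part of the original comment.) To make the caching point from the thread concrete, here is a toy stale-while-revalidate cache in TypeScript. It shows roughly the behavior that both RTK Query and React Query manage for you (serve fresh cached data, refetch when stale); everything here is a simplified illustration and not either library's real API:

```typescript
// Toy stale-while-revalidate cache: return cached data if it is still
// fresh, otherwise fetch and store. Simplified illustration of what
// RTK Query / React Query handle internally; not their actual API.

type Fetcher<T> = () => T;

interface Entry<T> {
  data: T;
  fetchedAt: number;
}

class QueryCache {
  private entries = new Map<string, Entry<unknown>>();
  private now: () => number;

  // Injectable clock so staleness can be tested deterministically.
  constructor(now: () => number = Date.now) {
    this.now = now;
  }

  // Return cached data if younger than staleMs, else call the fetcher.
  get<T>(key: string, fetcher: Fetcher<T>, staleMs: number): T {
    const hit = this.entries.get(key) as Entry<T> | undefined;
    if (hit && this.now() - hit.fetchedAt < staleMs) {
      return hit.data; // cache hit: no fetch needed
    }
    const data = fetcher();
    this.entries.set(key, { data, fetchedAt: this.now() });
    return data;
  }
}

// Demo with a fake clock.
let clock = 0;
const queryCache = new QueryCache(() => clock);

let fetchCount = 0;
const getUser = () => {
  fetchCount++;
  return { name: "ada" };
};

queryCache.get("user:1", getUser, 1000); // miss -> fetch (fetchCount = 1)
queryCache.get("user:1", getUser, 1000); // hit, still fresh (fetchCount = 1)
clock = 2000;
queryCache.get("user:1", getUser, 1000); // stale -> refetch (fetchCount = 2)
console.log(fetchCount); // 2
```

The real libraries add request deduplication, background refetching, and cache invalidation on top of this core idea, which is why hand-rolling it in plain Redux slices or Zustand stores tends to be more work than it first appears.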