Long-time kdb/q enthusiast, absolutely NO enterprise deployment experience whatsoever.

This feels like a "pick your poison" situation. You've been told already that you won't be allowed to dump kdb; it's probably embedded in your infra in a bunch of ways, and ripping it out is a no-go.

OK, so you have data in kdb. What you're doing right now, it sounds like, is using it as literally just a raw data store. That's the worst way to use it; a lot of work went into making it very fast to run summarization/grouping/sorting/etc. right on the kdb servers (see the sketch at the end). Note that this is very unlike how a typical Apache data project works.

Unfortunately, you wrote a Rust library that probably doesn't really distinguish your kdb storage from, say, JSON files, so you're at a crossroads.

Option 1: Get some good data cloning set up, clone the data over to your preferred generalized data-lake tech, and run Rust against that.

Option 2: Go through your Rust code with a fine-tooth comb and figure out exactly where it's doing things that cannot be expressed in q/k. Start slimming down your Rust lib, or more precisely, rework what queries it's sending and what shape of data it expects back.

Option 3: Dump your Rust library and rewrite it in q or k.

Of these, I'd be willing to bet that for an "ideal" developer, meaning a 160+ IQ dev skilled in Rust, vs. a 160+ IQ dev skilled in kdb, vs. a 160+ IQ dev skilled in, say, Java + Spark, Option 3 is going to be by far the least resource-intensive in terms of deployed hardware, and the fastest / lowest latency.

That said, given where you're at, a principled Rustacean who's coming to grips with kdb realtime, I'd recommend you think hard about Option 2. By the end of it, you'll probably be thinking "yeah, this could be all k, or nearly all," but you're likely going to have some learning to do.

Think of it this way: when you're done, you'll be on the other side of the cabal, and can double your base rate for your next gig. :)
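
To make the "run it on the kdb servers" point concrete, here's a minimal q sketch. The trade table, its columns, and the numbers are invented for illustration; your actual schema will differ:

    / toy trade table, hypothetical schema
    trade:([] time:09:30 09:31 09:31 09:32; sym:`AAPL`MSFT`AAPL`MSFT;
      price:189.2 402.1 189.5 402.3; size:100 200 150 50)

    / naive client: pull raw rows, aggregate them in Rust
    select from trade where sym=`AAPL

    / Option 2 mindset: push the aggregation into the query,
    / so only one row per sym crosses the wire
    select vwap:size wavg price, volume:sum size by sym from trade

The second query is the kind of thing kdb was built for: it runs where the data lives and ships back a few bytes per sym instead of every tick.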