I’m interested in playing with vector databases to detect interesting anomalies in a large volume of logs, on the order of 1 TB/day.

Is it reasonable to attempt to generate embeddings for every log event that hits the system? At 1 TB/day that’s roughly 1B log events per day, or over 10k per second.

Or would I have to sample some tiny percentage of log events to generate embeddings for?

The volume feels too high, but I’m curious whether others do this successfully. I’d want it to be reasonably cheap, say less than 1 cent per million log events (about $10/day at 1B events).

Twitter seems to do something like this for all tweets at much higher volume. But I don’t want to spend too much money :)
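For concreteness, here’s a rough sketch of the sampling approach I’m imagining: hash-sample a small fraction of events and embed only those locally. The sentence-transformers library, the "all-MiniLM-L6-v2" model, and the 1% rate are all placeholders I picked for illustration, not a real design.

```python
# Minimal sketch (assumptions: sentence-transformers installed,
# "all-MiniLM-L6-v2" as a small CPU-friendly model, 1% sample rate).
import hashlib

from sentence_transformers import SentenceTransformer

SAMPLE_RATE = 0.01  # keep ~1% of events; tune to budget/throughput
model = SentenceTransformer("all-MiniLM-L6-v2")


def keep(line: str) -> bool:
    # Deterministic hash-based sampling: identical events always land
    # on the same side of the cut.
    h = int(hashlib.md5(line.encode("utf-8")).hexdigest(), 16)
    return (h % 10_000) < SAMPLE_RATE * 10_000


def embed_batch(lines: list[str]):
    sampled = [l for l in lines if keep(l)]
    if not sampled:
        return sampled, []
    # encode() batches internally and returns one vector per input string.
    vectors = model.encode(sampled, batch_size=256, show_progress_bar=False)
    return sampled, vectors


if __name__ == "__main__":
    batch = [
        f"2024-01-01T00:00:0{i} svc=auth msg=login_failed user=u{i}"
        for i in range(10)
    ]
    kept, vecs = embed_batch(batch)
    print(f"kept {len(kept)} of {len(batch)} events")
```

One caveat with deterministic hashing: rare event types that hash below the cut are always dropped, which may be exactly what you don’t want for anomaly detection. Plain random sampling, or sampling keyed on something other than the raw line, might be a better fit.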
Maybe have a look at what netdata does. It may not be 1:1 applicable to your use case, but I’ve used netdata to monitor my own servers, where it ingests thousands of datapoints per second, and its anomaly detection seems to work.

https://learn.netdata.cloud/docs/ml-and-troubleshooting/machine-learning-ml-powered-anomaly-detection