I work at a company that has 75,000 customers, and we keep a fairly large set of personal data about them. The data is kept in Postgres, but for reports we assumed we could dump it out in a denormalized form to Elasticsearch. We would not dump all the data, of course; we only take the 18 fields that are considered most important. We had never done much analysis of who our customers were or what their level of engagement was. Then a new person came in, focused on business intelligence, and they were desperate to get some data about our customers. So I wrote a short Python script that pulled the data we wanted out of Postgres and stored it in Elasticsearch. I then made it available to the team via Kibana. I assumed everyone would be fascinated to look at the data and perhaps see various trends.

But that didn't work. Kibana was unusable. With 75,000 records it never loaded, not in anyone's browser. So I cut that in half, to 36,000 records. Still, it never loaded. So I kept cutting the amount, and eventually I got down to 10,000 records. Then it loaded, but it was so slow no one could use it. Finally I cut it down to 7,000 records. Now it loaded, and it was fast enough that we could use it.

To do the real analysis, I ended up writing another script that dumped the 75,000 records out as a CSV file, which I then uploaded to a spreadsheet on Google Docs. This worked fine.

I am curious why a Google spreadsheet can render 75,000 records but Kibana cannot. I am also curious what the real use case for Kibana is. If it can't handle large datasets, then its ability to make pretty charts seems useless -- we could never get the data in there to make the chart in the first place. I assume other people will do what I did and use a spreadsheet instead.
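For context, the scripts were nothing fancy. Here is a minimal sketch of the shape they took; the field names, the `customers` index name, and the three-column field list are illustrative stand-ins (the real export had 18 fields), and the Postgres/Elasticsearch wiring at the bottom is shown only in comments since it needs live servers:

```python
import csv
import io

# Illustrative subset of the exported columns -- the real script used 18.
FIELDS = ["customer_id", "signup_date", "plan"]

def to_doc(row):
    """Turn one database row (a tuple in FIELDS order) into a flat dict,
    which is the denormalized form both Elasticsearch and CSV want."""
    return dict(zip(FIELDS, row))

def bulk_actions(rows, index="customers"):
    """Wrap each document in the action envelope that the official
    elasticsearch-py helpers.bulk() accepts."""
    for row in rows:
        doc = to_doc(row)
        yield {"_index": index, "_id": doc["customer_id"], "_source": doc}

def rows_to_csv(rows):
    """The fallback path: serialize the same rows to CSV text
    (header line plus one line per record) for a spreadsheet upload."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(to_doc(r) for r in rows)
    return buf.getvalue()

# Wiring it up (hypothetical; requires psycopg2, elasticsearch, live servers):
#   cur.execute("SELECT customer_id, signup_date, plan FROM customers")
#   helpers.bulk(es_client, bulk_actions(cur))
```

Either path iterates the same rows; only the serialization at the end differs, which is why switching from Elasticsearch to a CSV dump was a small change.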