
Ask HN: How do you automate your data analytics report?

2 points by mohon almost 3 years ago
I'm currently working on how to speed up our analytics report development workflow.

So imagine you have a table called A with this structure:

+----------+-------------+--------+---------+
| location | order_count | gmv    | net_gmv |
+----------+-------------+--------+---------+
| TX       | 1000        | 9000.0 | 8000.0  |
| FL       | 1000        | 9000.0 | 8000.0  |
+----------+-------------+--------+---------+

Then you want another table called B with this structure:

+-------+-------------+--------+---------+
| age   | order_count | gmv    | net_gmv |
+-------+-------------+--------+---------+
| 20-30 | 1000        | 9000.0 | 8000.0  |
| 30-40 | 1000        | 9000.0 | 8000.0  |
| 40-50 | 1000        | 9000.0 | 8000.0  |
+-------+-------------+--------+---------+

Location and age are the dimensions needed for the report, and eventually we'll need other dimensions as well. What we do now is develop a separate Spark-SQL job for each table, but we don't think this will scale: every time we want to add a new dimension, we have to develop another Spark-SQL job with the same logic but a different GROUP BY dimension.

So I'm wondering whether there's a better way to do this. Has anyone dealt with this kind of problem before? Any pointers on how to do it efficiently? (I'm thinking someone could just specify the dimension they need, and a script would automatically generate the new table based on the specified dimension.)

Thanks
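
(For illustration, a minimal sketch of the generator idea from the question. PySpark, the source-table name orders, and the derived age-bucket expression are assumptions; the measure columns follow the example above.)

    # Sketch: build the same aggregation for any requested dimension,
    # instead of hand-writing one Spark-SQL job per table.
    # Assumes a source table `orders` with the columns from the example.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rollups").getOrCreate()

    def rollup_sql(dimension_expr: str, alias: str) -> str:
        # dimension_expr may be a plain column ("location") or a
        # derived expression such as an age bucket.
        return f"""
            SELECT {dimension_expr} AS {alias},
                   COUNT(*)     AS order_count,
                   SUM(gmv)     AS gmv,
                   SUM(net_gmv) AS net_gmv
            FROM orders
            GROUP BY {dimension_expr}
        """

    # Table A: group by a plain column.
    table_a = spark.sql(rollup_sql("location", "location"))

    # Table B: group by a derived bucket ("age in buckets of 10").
    age_bucket = (
        "CONCAT(CAST(FLOOR(age / 10) * 10 AS STRING), '-', "
        "CAST(FLOOR(age / 10) * 10 + 10 AS STRING))"
    )
    table_b = spark.sql(rollup_sql(age_bucket, "age"))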

4 comments

mattewong almost 3 years ago
There is a better way to do this.

An example of output that fits the paradigm you describe -- but to a much further degree -- would be the dozens of tables shown in the securities offering at https://www.sec.gov/Archives/edgar/data/0001561167/000114420412065490/v782900_424b5.htm -- search for "Stated Principal Balances of the Mortgage Loans as of the Cut-off Date" (on page A-3).

How do you generate reports like this in a manner that is flexible for end users, without requiring IT in the middle? You start with the bare input, which is data + report logic:

1. Specify the *common* columns that you want in your output tables (i.e. the columns other than the first). In your example, that would be order_count, gmv, net_gmv.

2. Separately, specify the *tables* that you want to generate, where each table spec consists of:

   - the bucket logic (in your example, a formula representing "age broken out in buckets of 10")
   - optional other characteristics, such as whether the data should be filtered before going into the table, which column the table should be sorted on, whether to limit the table output to the "top [10]" rows, etc.

3. Run your data, plus the above spec, through some software that will generate your report for you.

As for part 3, my company has recently launched a free platform for doing all of the above in a collaborative and secure manner. Please reach out if you'd like more info. Of course, you can do it yourself or have your IT do it -- but be aware it is not as easy as it sounds once you start dealing with real-world practicalities like schema variability and scalability. And anyway, why bother if you can now do it all for free?
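
(A rough sketch of steps 1-3 in Python/pandas. The spec shape and every name in it -- TableSpec, the measure map, the bucket lambda -- are invented for illustration, not a description of any particular product; order_id is an assumed column.)

    # Sketch of the data + report-logic split described above.
    from dataclasses import dataclass
    from typing import Callable, Optional
    import pandas as pd

    # 1. The *common* measure columns shared by every output table.
    MEASURES = {
        "order_count": ("order_id", "count"),  # assumes an order_id column
        "gmv": ("gmv", "sum"),
        "net_gmv": ("net_gmv", "sum"),
    }

    # 2. One spec per output table.
    @dataclass
    class TableSpec:
        name: str
        bucket: Callable[[pd.DataFrame], pd.Series]  # the bucket logic
        where: Optional[Callable[[pd.DataFrame], pd.Series]] = None  # optional pre-filter
        sort_by: str = "order_count"
        top: Optional[int] = None  # keep only the top N rows, if set

    # 3. Run data + spec through one generic function.
    def run_spec(df: pd.DataFrame, spec: TableSpec) -> pd.DataFrame:
        if spec.where is not None:
            df = df[spec.where(df)]
        out = (df.assign(_bucket=spec.bucket(df))
                 .groupby("_bucket")
                 .agg(**MEASURES)
                 .sort_values(spec.sort_by, ascending=False))
        return out.head(spec.top) if spec.top is not None else out

    # "age broken out in buckets of 10", as a spec:
    by_age = TableSpec(
        name="by_age",
        bucket=lambda d: ((d["age"] // 10 * 10).astype(str) + "-"
                          + (d["age"] // 10 * 10 + 10).astype(str)),
    )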
hodgesrm almost 3 years ago
Hi,

You don't mention how much data you have, what the arrival rate is, or how long you would keep it.

You *did* mention you are familiar with ClickHouse. For datasets under a few billion rows you don't need special materialized views for specific dimensions. Put everything in a single table, with columns for all variants and an identifying column for the type of data. Just make sure you make good choices on datatypes, compression, and primary key/sort order. You can then simply apply more compute to get good results.

ClickHouse can handle 1K columns without much trouble.

edit: clarified the use of a single table.
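
(A sketch of that single-wide-table layout, using the clickhouse-driver package. The table name, the entity_type discriminator column, and the datatype choices are all assumptions.)

    # Sketch: one wide table with an identifying column, instead of a
    # materialized view per dimension. Names and types are assumptions.
    from clickhouse_driver import Client

    client = Client("localhost")

    client.execute("""
        CREATE TABLE IF NOT EXISTS orders_wide (
            event_date  Date,
            entity_type LowCardinality(String),  -- identifies the type of data
            location    LowCardinality(String),
            age         UInt8,
            gmv         Decimal(18, 2),
            net_gmv     Decimal(18, 2)
        )
        ENGINE = MergeTree
        ORDER BY (entity_type, event_date)  -- primary key / sort order
    """)

    # Any new dimension is then just another GROUP BY over the same table:
    rows = client.execute("""
        SELECT intDiv(age, 10) * 10 AS age_bucket,
               count()      AS order_count,
               sum(gmv)     AS gmv,
               sum(net_gmv) AS net_gmv
        FROM orders_wide
        GROUP BY age_bucket
    """)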
blakeburch almost 3 years ago
If I'm understanding your question correctly, it sounds like you're just generating cache tables for each variant of the root table with a different set of GROUP BYs (although I'm curious why you wouldn't want live views instead). I would recommend a Python script that accepts a list of GROUP BY parameters. When executed, it loops through the list of parameters, generates and executes the DML with those parameters, then runs a generated query to store the results in the new table.

Feel free to message me separately if you want to dive into specifics.
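
(A compressed sketch of that loop, assuming PySpark, the orders table from the question, and a purely illustrative dimension list.)

    # Sketch: one cache table per GROUP BY parameter.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("cache_tables").getOrCreate()

    # Hypothetical parameter list; assumes each entry is an existing
    # column (a derived expression would also work here).
    DIMENSIONS = ["location", "age_bucket"]

    for dim in DIMENSIONS:
        # Generate and execute the statement for this dimension,
        # storing the results in a new table.
        spark.sql(f"""
            CREATE TABLE IF NOT EXISTS report_by_{dim} AS
            SELECT {dim},
                   COUNT(*)     AS order_count,
                   SUM(gmv)     AS gmv,
                   SUM(net_gmv) AS net_gmv
            FROM orders
            GROUP BY {dim}
        """)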
codevark almost 3 years ago
First World Problem. Help me find my webserver. I know it's in my basement.