
For Want of a JOIN

268 points by subset over 2 years ago

36 comments

dagss over 2 years ago
It is one thing when a junior does this because they haven't learned better.

It's quite another when experienced seniors ban the use of SQL features because it's not "modern" or there is an architectural principle to ban "business logic" in SQL.

In our team we use SQL quite heavily: process millions of input events, sum them together, produce some output events, repeat -- perfect cases for pushing compute to where the data is, instead of writing a loop in a backend that fetches events and updates projections.

Almost every time we interact with other programmers or architects it's an uphill battle to explain this -- "why can't you just put your millions of events into a service bus and write some backend to react to them to update your aggregate". Yes we CAN do that, but why, when it's 15 lines of SQL and 5 seconds of compute -- instead of a new microservice or whatever and some minutes of compute.

People bend over backwards and basically re-implement what the database does for you in their service mesh.

And with events and business logic in SQL we can do simulations, debugging, inspect state at every point with very low effort and without relying on getting logging right in our services (because you know -- doing a JOIN in SQL is not modern, but pushing the data to your service logs and joining those to do some debugging is just fine...)

I think a lot of the blame is with the database vendors. They only targeted some domains and not others, so writing SQL is something of an acquired taste. I wish there was a modern language that compiled to SQL (like PRQL, but with data mutation).
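The "push compute to where the data is" aggregation described above can be sketched in a few lines. This is a minimal sketch with sqlite3 standing in for the real database; the event table and columns are invented, not dagss's actual schema:

```python
import sqlite3

# Hypothetical event table; the comment doesn't show the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE input_events (account_id INTEGER, amount INTEGER);
INSERT INTO input_events VALUES (1, 10), (1, 20), (2, 5), (2, 7), (2, 8);
""")

# Summing in the database: one statement, no fetch-and-loop in the backend.
rows = conn.execute("""
    SELECT account_id, SUM(amount) AS total
    FROM input_events
    GROUP BY account_id
    ORDER BY account_id
""").fetchall()
# rows -> [(1, 30), (2, 20)]
```

The equivalent backend service would fetch every event and maintain running totals in application code; at millions of events, the difference is that here the raw data never leaves the database.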
Jupe over 2 years ago
Firstly, I'd suggest the author look at this differently; perhaps "For Want of a Code Review". Especially for code from a relatively recent graduate, on a piece of code for which the engineer in question has little experience.

With that said, the JOIN is a very powerful concept which, unfortunately, has been given a terrible reputation by the NoSQL community. Moving such logic out of the database and into the DB's client is just a waste of IO and computing bandwidth.

SQL has been the ONLY technology/language that has stuck with me for > 25 years. The fact that it is (apparently) not being taught by institutions of higher learning is just a shame.
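A minimal illustration of the point, with sqlite3 as a stand-in and an invented two-table schema: the JOIN lets the database pair the rows in one round trip instead of the client doing it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total INTEGER);
INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
INSERT INTO orders VALUES (10, 1, 99), (11, 1, 25), (12, 2, 40);
""")

# One round trip; the database does the matching.
joined = conn.execute("""
    SELECT u.name, o.total
    FROM users u
    JOIN orders o ON o.user_id = u.id
    ORDER BY o.id
""").fetchall()
# joined -> [('ada', 99), ('ada', 25), ('bob', 40)]
```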
phamilton over 2 years ago
I've learned that "a month in the lab saves an hour in the library" usually can be distilled to "a shallow understanding produces complex solutions; a deeper understanding is usually required to create simple solutions."

While the original example of not understanding JOIN might just be a lack of general knowledge, the later steps are great examples of this, especially if someone else comes along and is told to fix the error.

Making something execute slow code in parallel is pretty easy to do generically. It doesn't require understanding much about the slow code. It's fairly low risk: you probably won't have to tweak tests, and there won't be additional side effects. The major risks will be around error handling, and it's easy to turn a blind eye to partial success/failure and leave that as a problem for a future team. You can confidently build the parallel for loop, call the task done and move on.

Striving for a deeper understanding requires a lot more effort and a lot more risk. Re-writing the slow code is a lot more risk. All side effects must be accounted for. Tests might have to be re-written. The new implementation might be slower. The new index might confuse the query planner and make unrelated queries slower somehow. It's not just a matter of investing time; it's investing energy/focus and taking on risk. But the result will have comparatively fewer failure modes, it'll be cheaper to operate and less likely to have security implications.

I've been in both spots, and while I wish I could say we always went with the deeper understanding, that wouldn't be an honest statement. But the framing has been really helpful, especially as I work with other execs in the company to prioritize our limited resources.
yyyk over 2 years ago
The worst part is not the missing JOIN. This happens, especially with juniors.

It's the 'all signup errors warrant paging the on-call, even at 4am' bureaucratic decision, followed by being unable to apply any fix quickly. No surprise the author did not stay.
rubyist5eva over 2 years ago
This article speaks to me. So many times I have needed to go back and fix queries that were naively written this way, like it was some kind of "optimization". There is no difference in effort between writing a join and doing the ORM double round trip in the vast majority of cases. People are so afraid of doing joins that I see people putting the ids in a subselect because "joins are slow". The worst is usually some kind of pseudo-join and then an aggregate or filtering in the application code. It drives me up the wall when I see it in code review, usually because I get into some argument about "joins are slow" (with no evidence) and then I have to go and rewrite the query, and maybe add an index, to show that, yes, an aggregate that takes seconds and a ton of memory in application code can in fact take milliseconds in the database.

The NoSQL people have really done a lot of brain damage to this industry.

It's so pervasive that I've started using this kind of question in our technical interviews; doing a double round trip ends the interview for anyone higher than a junior.
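A sketch of the double-round-trip anti-pattern next to the single-statement version, on an invented schema. Both return the same answer, but the second does the aggregation where the data lives:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE posts   (id INTEGER PRIMARY KEY, author_id INTEGER);
INSERT INTO authors VALUES (1, 'ada'), (2, 'bob');
INSERT INTO posts   VALUES (1, 1), (2, 1), (3, 2), (4, 1);
""")

# Anti-pattern: fetch ids, then ship them back one query at a time,
# aggregating in the application.
ids = [r[0] for r in conn.execute("SELECT id FROM authors")]
counts_app = {}
for i in ids:  # one round trip per author
    n = conn.execute("SELECT COUNT(*) FROM posts WHERE author_id = ?",
                     (i,)).fetchone()[0]
    counts_app[i] = n

# The same answer in one statement, aggregated in the database.
counts_db = dict(conn.execute(
    "SELECT author_id, COUNT(*) FROM posts GROUP BY author_id").fetchall())
# counts_db -> {1: 3, 2: 1}
```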
acdha over 2 years ago
Back in the 90s, I remember getting a project from a local F500 company. Our design team had been doing some work for them and they'd been happy with the results, so when they had problems on a backend project which was over a year behind schedule they asked if we could help, and I was pulled in. The project was a fairly straightforward product selector for industrial equipment, but the team from a large consulting firm which had been working on it was struggling with performance and hadn't completed most of the features. The client was saying it was unacceptable that pages would take 5 or more minutes to load, and they weren't going to drop $500K on bigger servers like the developers were swearing were necessary to run the site.

I knew something was off performance-wise since the entire product catalog was only on the order of tens of thousands of records. As soon as I looked at the source code, the mystery was explained: they'd had 3 allegedly experienced developers working on it, but none of them knew about SQL WHERE constraints! Instead, they were doing nested for loops to repeatedly retrieve every row of every table and doing the equality checks in VBScript. Finishing the rest of the project backlog took me a couple of days, and the customer was quite happy that the slowest pages were now measured in hundreds of milliseconds rather than tens of minutes.

I was proud of how quickly we were able to turn that project around, but the PM and I were discussing how even our rush rate wasn't enough to get us anywhere close to the amount of money the previous contractors had charged.
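In spirit, the consultants' approach looks like the first query below (Python standing in for their VBScript, with a made-up catalog table); the WHERE version lets the database return only the matching rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, voltage INTEGER, family TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                 [(i, 110 if i % 2 else 220, "pump") for i in range(10000)])

# What they did, in spirit: retrieve every row, do equality checks client-side.
slow = [r for r in conn.execute("SELECT * FROM products") if r[1] == 220]

# What a WHERE constraint does: only matching rows ever leave the database.
fast = conn.execute("SELECT * FROM products WHERE voltage = 220").fetchall()
```

Same result either way; the difference is how many rows cross the wire, which is exactly why the fix took the pages from minutes to milliseconds.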
funstuff007 over 2 years ago
> With the exception of NPM modules, most tools are designed to solve problems, possibly the ones you have

Upvoted just because of the chuckle this gave me.
btown over 2 years ago
As someone who's primarily worked with monoliths, I often wonder how often this exact problem happens, but where A and B are [micro]services owned by two different teams, one is required by company policy to use their APIs and not their raw databases, and escalation of each of these issues, e.g. query size/rate limiting, runs the risk of burning political capital on top of everything else.

How does one JOIN across not just tables but opaque services, in the general case? Or does every team doing microservices silently expect that one day a data team will start querying for a massive number of records-by-ID from every service, and the veterans on each team plan for this load pattern accordingly?
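In the general case this usually ends up as a client-side hash join: one batched records-by-ID call per service, then matching in application code. The service functions below are hypothetical stand-ins for what would really be paginated, rate-limited HTTP calls:

```python
# Stand-ins for two services owned by different teams.
def users_service_list():
    return [{"id": 1, "name": "ada"}, {"id": 2, "name": "bob"}]

def orders_service_by_user_ids(user_ids):
    all_orders = [{"user_id": 1, "total": 99}, {"user_id": 2, "total": 40},
                  {"user_id": 1, "total": 25}]
    wanted = set(user_ids)
    return [o for o in all_orders if o["user_id"] in wanted]

def join_users_orders():
    users = {u["id"]: u for u in users_service_list()}   # build side
    orders = orders_service_by_user_ids(list(users))     # one batched call
    # probe side: match each order to its user in memory
    return [(users[o["user_id"]]["name"], o["total"]) for o in orders]
```

This is exactly the hash join a database would do for free, minus the optimizer, the batching negotiation, and the cross-team politics.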
Ayesh over 2 years ago
That was a fun read, and I loved that little joke about NPM packages.

I consider SQL, regular expressions, DNS, client-side caching, CORS, TLS, and a few other things a MUST when hiring, because most of the over-engineered crap can be avoided with a little bit of expertise in these. I spend most of my semi-leisure time with some good regex books and golfing too.

Modern databases are amazing. Every few months, I take pleasure in, and don't shy away from, refactoring some complex and frequent queries into SQL views, carefully moving data logic (but not business logic) into stored procedures, and replacing certain batch scripts with one-off queries.
icedchai over 2 years ago
I've seen systems where people are doing manual JOINs with CSV, JSON, and the results of DynamoDB scans on relatively tiny datasets (<10 megs). Everything could fit in sqlite on a single machine. Instead, they build a Rube Goldberg contraption that uses "modern cloud architecture."
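At that scale, the whole contraption can collapse into sqlite on one machine: load each extract into a table, then JOIN. A toy version, with stand-in data in place of the real CSV/JSON/Dynamo exports:

```python
import json
import sqlite3

# Tiny stand-ins for the heterogeneous extracts mentioned above.
users_json = '[{"id": 1, "name": "ada"}, {"id": 2, "name": "bob"}]'
events = [(1, "login"), (2, "login"), (1, "purchase")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE events (user_id INTEGER, kind TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(u["id"], u["name"]) for u in json.loads(users_json)])
conn.executemany("INSERT INTO events VALUES (?, ?)", events)

# The "manual JOIN" disappears into one query.
purchasers = conn.execute("""
    SELECT u.name FROM users u JOIN events e ON e.user_id = u.id
    WHERE e.kind = 'purchase'
""").fetchall()
# purchasers -> [('ada',)]
```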
pier25 over 2 years ago
The aversion to SQL by younger devs is pretty amazing. Yes, it has a weird cognitive model and a learning curve, but it's a cornerstone of web dev. Instead they resort to convoluted and technically inferior solutions like Prisma just because of a superficial DX advantage.

I'm certain Mongo only became popular because of this, even though for many years it was crap.

That said, I do think we need a better SQL. It's still not there, but EdgeDB looks very promising.
Thaxll over 2 years ago
The other problem with the double round trip is that it's not an atomic operation, so between the two SELECTs the data could have changed.
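A small sqlite3 sketch of that race, with the "concurrent" write simulated on the same connection between the two steps: the two-SELECT version happily returns rows referencing data that was deleted in between, while the JOIN evaluates as a single statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
INSERT INTO users  VALUES (1, 'ada');
INSERT INTO orders VALUES (10, 1);
""")

# Step 1 of the double round trip: collect the user ids.
user_ids = [r[0] for r in conn.execute("SELECT id FROM users")]

conn.execute("DELETE FROM users WHERE id = 1")  # simulated concurrent write

# Step 2 still uses the stale ids and returns an orphaned order.
placeholders = ",".join("?" * len(user_ids))
stale = conn.execute(
    "SELECT id FROM orders WHERE user_id IN (%s)" % placeholders,
    user_ids).fetchall()

# The JOIN sees one consistent state: the deleted user matches nothing.
joined = conn.execute(
    "SELECT o.id FROM orders o JOIN users u ON u.id = o.user_id").fetchall()
```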
nightpool over 2 years ago
I'm not sure I understand how a JOIN would have fixed this problem. That is, if each chunk is fetching 1k rows, and you're doing 50 simultaneous chunks, then you're doing a 50,000-row query, and that's ALSO going to be extremely slow, in terms of exclusive database contention (less of an issue with BigQuery) and result set memory usage (definitely still a huge issue for Python). In fact, one of my most frequent pieces of feedback to junior engineers who are working on a larger backend for the first time is "this query tries to fetch too much data at once; it will take too long and use way more memory than it needs to. Please use find_each to automatically batch the query so that we balance memory usage and database contention." Indeed, Rails by default will use the exact same batching strategy the junior engineer chose in this case: fetch 1k items, process those items, and move on to the next 1k. I understand that the author rankles at things not being done the "right way" with JOIN, but I question whether their focus on "best practices" is preventing them from seeing the optimization forest (split things up into parallel background tasks, don't try to keep the entire dataset in memory at once) for the "doing things right" trees (use JOIN).
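The find_each-style batching referred to above can be sketched with keyset pagination; the table, column names, and batch size here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, "x") for i in range(1, 2501)])

def find_each(conn, batch_size=1000):
    """Yield every row in id order, fetching batch_size rows at a time.

    Keyset pagination (WHERE id > last_id) keeps memory bounded and
    avoids the re-scanning cost of OFFSET.
    """
    last_id = 0
    while True:
        batch = conn.execute(
            "SELECT id, payload FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size)).fetchall()
        if not batch:
            return
        yield from batch
        last_id = batch[-1][0]

seen = sum(1 for _ in find_each(conn))
# seen -> 2500, fetched in batches of 1000, 1000, 500
```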
darepublic over 2 years ago
Yes, I remember inheriting a project where, in a similar fashion, people were allergic to JOIN. So we got JS code selecting an entire table, looping over the rows, and then doing inner loops with further selects. I eventually had to switch everything around to using joins. There seemed to be a huge disdain for SQL... like if you ever endeavoured to try some raw SQL you were playing with matches. Sure, OK, but code that handles DB operations that inefficiently is 100x worse.
Nihilartikel over 2 years ago
I sling a lot of SQL and, mirroring a lot of people's sentiment here, wish it had better syntax and composability.

DuckDB and Apache Spark expose nice APIs that almost completely remove the need to faff around with textual strings. Each projection returns a view that can be treated like another table, so composition and reuse is simple. It would be nice if such a thing were more standard and available on the other DBMSes that I have to work with.

I feel like, in the continuum of abstraction, SQL is like OpenGL 3: high level and a bit inflexible. Taking the analogy further, an ORM would be like the game engine on top of OpenGL. What doesn't exist, as far as I know, is the Vulkan equivalent: a low-level API that exposes the relational algebra and exactly how to execute it. There are cases where I would have saved a lot of effort if I could just write the damned physical plan for a query execution myself, rather than rearranging table join orders and sending hints that the query optimizer is just going to passive-aggressively ignore anyway.
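Plain SQL gets partway to that composability with views, where each projection becomes another table you can build on; a sqlite3 toy with a made-up sales table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES ('eu', 10), ('eu', 20), ('us', 5);

-- Each view is a reusable projection, composable like the dataframe APIs.
CREATE VIEW eu_sales  AS SELECT * FROM sales WHERE region = 'eu';
CREATE VIEW eu_totals AS SELECT SUM(amount) AS total FROM eu_sales;
""")

total = conn.execute("SELECT total FROM eu_totals").fetchone()[0]
# total -> 30
```

Views compose declaratively, though; they still leave the physical plan to the optimizer, which is exactly the gap the comment is pointing at.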
bayesian_horse over 2 years ago
An SQL query goes into a bar, walks up to two tables and asks: May I join you?
LudwigNagasena over 2 years ago
I would call it “for want of reasonable hiring and onboarding processes”. How does someone get into a data engineering job without any knowledge of SQL and doesn’t even get basic onsite training?
jeffreygoesto over 2 years ago
"If you encounter an unusually round system limit, you're probably using the system in a way its designers never imagined."

Haha, so true. We triggered a static code analyzer error "Cyclomatic complexity bigger than 1.000.000.000!". The vendor was very interested in that code snippet (generated classifier code) and we shared a good laugh.
Ensorceled over 2 years ago
We hired a data engineering consulting company, and none of their team of SQL experts had heard of upsert or merge. I find it weird that people don't spend a bit of time searching for a better way of doing stuff before just jumping into a long, hard way of doing things.
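For reference, the upsert mentioned above is a single statement in most modern databases. Shown here in sqlite's ON CONFLICT form (Postgres is near-identical; SQL Server and Oracle spell it MERGE), on an invented counters table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")

def record_hit(name):
    # One statement instead of SELECT-then-INSERT-or-UPDATE.
    conn.execute("""
        INSERT INTO counters (name, hits) VALUES (?, 1)
        ON CONFLICT(name) DO UPDATE SET hits = hits + 1
    """, (name,))

for page in ["home", "home", "about"]:
    record_hit(page)

hits = dict(conn.execute("SELECT name, hits FROM counters"))
# hits -> {'home': 2, 'about': 1}
```

Besides being shorter, the single statement avoids the check-then-act race the two-step version has under concurrency.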
johnthuss over 2 years ago
"Don't let junior SWEs get 2000 lines into a change before submitting a pull request."

This is good advice. Share your code early and often so you can get feedback before you're fully committed to one approach.
data-ottawa over 2 years ago
In the spirit of HN I should confess I've done exactly this: not the using Python to feed data back to SQL, but writing terrible hacks to get around resource limits on deadlines.

On a recent project I needed to process a couple of years of data for a hard deadline of Monday, and it was Friday. Our DB had a query timeout and a memory limit which blocked doing the full analysis without building new data models, and shipping those would take days. The deadline couldn't be moved, so hacks were needed.

The solution: write some Python code to generate one query per week of data going back two years (over 100 queries), save the results to individual scratch tables, and then use a second query to union all the results together in our BI tool.

Of course, the first time I ran it serially it was too slow, so I parallelized it. That was too many queries, so I added a limit. Then one query failure broke the whole thing, so I added retries... by the end of the day it looked exactly like this article.

It worked though! I got all the data we needed processed for Monday, I presented it to our execs, and our project was approved. We only needed to manually run that script once more before I built the real solution and deleted the script.
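The generate-one-query-per-week step above can be sketched like so; the events table, ts column, and scratch-table naming scheme are all made up:

```python
from datetime import date, timedelta

def weekly_queries(start, end):
    """Emit one scratch-table query per week in [start, end)."""
    queries = []
    week = start
    while week < end:
        nxt = week + timedelta(days=7)
        tag = week.isoformat().replace("-", "_")
        queries.append(
            "CREATE TABLE scratch_%s AS SELECT * FROM events "
            "WHERE ts >= '%s' AND ts < '%s'" % (tag, week, nxt))
        week = nxt
    return queries

# Two years of Mondays: 104 weekly queries, matching the "over 100" above.
queries = weekly_queries(date(2021, 1, 4), date(2023, 1, 2))
```

Each query then lands in its own scratch table, and a final UNION ALL stitches them back together, which is also where the retry and concurrency-limit hacks attach.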
im3w1l over 2 years ago
I'm gonna be the contrarian and say this is mostly fine. We can research the proper way to do things, or use code review to teach the proper ways. But this can lead to code-shaming and a fearful environment where people second-guess themselves and spend a lot of time chasing a perfection that doesn't move the business metrics.

In this case, doing the join manually isn't a huge deal, chunking isn't a huge deal, parallel requests aren't a huge deal. But "concurrent limit reached" is the point in this story where Bob should have put on the thinking cap and reasoned that "this shouldn't be hard; other people do things like this with bigger datasets all the time. I wonder how." Before that point it's literally just a matter of changing a couple of lines to solve the issue. So what? After that point, however, it starts to affect the overall design around it in harmful ways, and turns the issue into a bigger one.
outsidetheparty over 2 years ago
I got a chuckle out of "With the exception of NPM modules, most tools are designed to solve problems, possibly the ones you have," but have to agree with some other commenters that a better title for this would have been "For Want of a Mature Code Review Process".
gsvclass over 2 years ago
This is more common than you would believe. Other issues I've seen are no `limit` on the query, fetching all the results and then sorting in your own app code, and using the wrong joins. Many of these happen while using ORMs as well. SQL is a context switch for most devs; very few understand it, and even those who do might not be familiar with the capabilities of your startup's DB choice.

Shameless plug, but this was my motivation behind building GraphJin, a GraphQL-to-SQL compiler, and it's my single goto force multiplier for most projects: https://github.com/dosco/graphjin
Arwill over 2 years ago
The principle is that you should use the underlying API if something is already solved at a lower level, and not replicate the functionality at a higher level, because it will perform poorly. There was a reason the lower-level API existed in the first place.

This applies very well to graphics programming: it goes without question that you wouldn't write your own pixel rasterizer instead of using DX, OpenGL or Vulkan, for example.

The big recognition is that when doing business apps, SQL database functionality is the underlying API, and you should prefer using that.
ivanhoe over 2 years ago
Back in the days of MySQL with the MyISAM engine, it was sometimes way faster on big datasets and underpowered DB servers to do the query exactly this way. Even with all the correct indices in place, the JOINs (especially if more than one table was in play) would often just freeze the server for 15-20 minutes, while joining data at the app level in a for loop, with lookup tables for the ids, would typically take only a few seconds. Obviously this has been an obsolete hack for a long time now...
LAC-Tech over 2 years ago
My key takeaway here is that not spending an hour reviewing code probably cost man-days worth of work.

The technical capabilities are all there on the team, from the description. What was probably missing is someone both technical and assertive, who could politely say to the deadline setters "This is fucking stupid and it's not going to work".
tmp60beb0ed over 2 years ago
> Don't let junior SWEs get 2000 lines into a change before submitting a pull request.

Why junior SWEs and not all SWEs?
jmull over 2 years ago
This doesn't really add up to me.

The article explains how the original bad code got checked in, which seems plausible enough.

But that doesn't explain why the first fix wasn't just to start using a JOIN. Or the second fix.

I guess it's a made-up story, to make a point? Anyway, I found the plot holes distracting.
tantaman over 2 years ago
This is an incredibly common thing. The worst I've seen it is when people drop their SQL DB for a NoSQL thing (for no good reason) and then end up implementing all the joins they lost in the application :(
phendrenad2 over 2 years ago
Ragequitting a company and calling it a "trainwreck" because one developer didn't know about JOIN seems... extreme.
brightball over 2 years ago
That's an excellent description of contagion caused by tech debt.

How much does this problem grow and spread the longer it goes unfixed?
tomerbd over 2 years ago
Wait until he hears about LEFT JOIN
jmartrican over 2 years ago
We've all been Bob at least once in our life. By 'we' I really mean me.
bjornsing over 2 years ago
Shipping it the first time may have made sense. Second time? Not so much.
mgaunard over 2 years ago
How is CSV not "wrap the fields in double quotes and join them with commas and newlines"?
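Because fields can themselves contain quotes and commas: RFC 4180 requires doubling embedded quotes, which naive wrap-and-join skips. A quick comparison using Python's csv module:

```python
import csv
import io

row = ['plain', 'has "quotes"', 'has, comma']

# Naive "wrap in quotes and join": embedded quotes corrupt the record.
naive = ",".join('"%s"' % field for field in row)

# The csv module applies RFC 4180 quoting (embedded quotes are doubled).
buf = io.StringIO()
csv.writer(buf, lineterminator="\n").writerow(row)
proper = buf.getvalue().strip()

# Round trip: the properly encoded record parses back to the original.
parsed = next(csv.reader(io.StringIO(proper)))
```

The naive string does not survive the round trip; any parser sees the inner `"` as a field terminator.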