
Ask HN: Is SQL a primary tool of choice for doing ETL pipelines in 2019?

33 points by sosilkj over 5 years ago
Is SQL considered a primary tool of choice for ETL pipelines in 2019? Benefits/drawbacks?

11 comments

specialist over 5 years ago
I've used and created numerous ETL stacks, so learn from my mistakes.

First, move the code, not the data. Batch processing is for mainframes. I know, I know - this has been impossible to realize anywhere I've ever worked.

Second, less is more. If command-line tools work, use 'em.

Avoid IDLs, maps, schemas, visual programming, workflow engines, event sourcing, blah blah blah. It's all useless abstraction, measured by the number of indirections and stack-trace depth. It's all wicked hard to debug. It's all abandoned, unmaintained obfuscation layers.

Data processing (ETL) is just cutting and pasting strings. Input, processing, output. Sometimes with sanity checks. Sometimes with transformations, like munging date fields or mapping terms ("yes" to "true"). Very rarely with accumulators (aggregators), where you need some local persistent state.

Third, and this is pretty rare, use better APIs for data extraction. It's all just scraping; don't overthink it. I wish I could show the world the APIs I created for HL7 (healthcare) data. For 2.x, I created "fluent" (method-chaining) data wrappers (like a DOM) which could not blow up (they used Null Objects to prevent null pointer exceptions). For 3.x, I used a path-query approach to drill down into those stupid XML files. This was CODE, not mappings, so it was practically a REPL, meaning fast to code and fast to debug.

Fourth, you control the execution. Be more like Postfix/qmail, where each task has its own executable. Be less like J2EE or BizTalk, where you ask the runtime to control the lifecycle of your code.

Good luck.
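
A minimal Python sketch of the null-safe fluent wrapper idea described above. The class names and the sample message are illustrative, not the commenter's actual HL7 code:

    class NullNode:
        """Null Object: any lookup returns another NullNode; value() is a default."""
        def __getitem__(self, key):
            return self
        def value(self, default=""):
            return default

    class Node:
        """Fluent wrapper around nested dict/list message data; never raises."""
        def __init__(self, data):
            self._data = data
        def __getitem__(self, key):
            try:
                return Node(self._data[key])
            except (KeyError, IndexError, TypeError):
                return NullNode()
        def value(self, default=""):
            return self._data if isinstance(self._data, str) else default

    # Deep access never blows up, even on missing segments.
    msg = Node({"PID": {"5": ["Doe", "Jane"]}})
    print(msg["PID"]["5"][1].value())       # -> "Jane"
    print(msg["ZZZ"]["1"][0].value("N/A"))  # -> "N/A"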
davismwfl over 5 years ago
There is no one-size-fits-all tool for good ETL processes, IMO.

We just built a pretty involved ETL process over the past couple of months that had two key components: a true batch ETL process and a real-time synchronization between two Postgres databases on different servers. In our case, "real-time" is basically less than 2 seconds for this application, so nothing insane; we routinely see it working in about 300-500ms.

We could've solved it all using just SQL, to be fair, but we found it was better, even for the batch ETL process, to use SQL only for the data CRUD and leave the business logic to the application code we wrote (it is in Node.js). This let us take advantage of some caching, which reduced the database server load dramatically during processing; id lookups and other known values were the core area we reduced with the cache. It also let us keep our business rules in code, which is where we have them for data validation etc., so we could reuse existing code sections to maintain consistency and not try to replicate some pretty sophisticated validations and transforms in SQL.

I have also worked on projects in the past where the entire process was done in SQL. It is doable, but I'd say it was less than ideal and can put significant, usually unnecessary, overhead on the DB servers.

I had one project where the entire ETL process was in MSSQL with stored procs and SSIS, spread across multiple servers. That system worked really well overall (integrity was super good, and it was reasonably maintainable for SQL people), but again it strained the DB servers a lot and was fairly slow. We moved most of that ETL process to C# at the time, given it was an MS house, and the difference was dramatic. Load on the DB servers dropped, and ETL that would take hours to run on SSIS with stored procs took minutes with C# and memcached. In fact, on that one we even wrote extensions for SQL Server using .NET and had SSIS call our external code, which used the caching engine to reduce DB lookups, among a bunch of other things. A pretty damn cool solution, but definitely not simple; lots of smart people worked on that project to make it super reliable and fast.
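
A rough Python illustration of the cached id-lookup idea from this comment (the commenter's system is in Node.js; the driver, table, and connection details here are made up):

    from functools import lru_cache

    import psycopg2  # assumed driver; any DB-API connection works the same way

    conn = psycopg2.connect("dbname=warehouse")

    @lru_cache(maxsize=None)
    def customer_id(external_ref: str) -> int:
        """Resolve a surrogate key once per batch; repeats hit the cache, not the DB."""
        with conn.cursor() as cur:
            cur.execute("SELECT id FROM customers WHERE external_ref = %s", (external_ref,))
            row = cur.fetchone()
        return row[0] if row else insert_customer(external_ref)

    def insert_customer(external_ref: str) -> int:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO customers (external_ref) VALUES (%s) RETURNING id",
                (external_ref,),
            )
            return cur.fetchone()[0]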
dalailambda over 5 years ago
SQL has definitely become the de facto tool for a lot of data processing. This model of working is generally referred to as ELT, as opposed to ETL.

For small/medium-scale environments, Fivetran/Stitch with Snowflake/BigQuery, using getdbt.com for modelling, is an insanely productive way to build an analytics stack. I consider this the default way of building a new data stack unless there's a very good reason not to.

For larger scales, Facebook has Presto, Google has Dremel/Procella/others, and a lot of data processing is done using SQL as opposed to writing code.

The only real downside is that it tends to be fairly focused on batch pipelines (which are fine for 95% of workloads). But even that is becoming less of an issue with Beam/Spark, so you can use SQL for both batch and streaming.

Source: Solution Architect at an analytics consultancy.
jstrebel over 5 years ago
Coming from a Big Data (Hadoop / AWS) background, I would say that SQL is a good choice for analyzing data once it is in the data warehouse, but it is not needed to get the data there. Most of the ETL code in this environment is based on Apache Spark 2.x, with either Scala or Python as the programming language. On a Hortonworks HDP cluster, you would use PySpark directly; on AWS, you would use AWS Glue, which lets you set up and schedule the PySpark code. Although PySpark does not stop you from using SQL, there is a preference to write the transformations in terms of native Spark functions. So, to answer the question: no, SQL is not a primary tool of choice for implementing ETL pipelines in 2019 - PySpark is.
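
For readers unfamiliar with the two styles mentioned here, a small PySpark sketch showing the same aggregation written with spark.sql() and with native DataFrame functions (paths and column names are invented for the example):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl-example").getOrCreate()
    orders = spark.read.parquet("s3://bucket/raw/orders/")
    orders.createOrReplaceTempView("orders")

    # SQL flavour
    daily_sql = spark.sql("""
        SELECT order_date, SUM(amount) AS total
        FROM orders
        WHERE status = 'shipped'
        GROUP BY order_date
    """)

    # Native-function flavour, same result
    daily_df = (
        orders
        .filter(F.col("status") == "shipped")
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total"))
    )

    daily_df.write.mode("overwrite").parquet("s3://bucket/curated/daily_totals/")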
IpV8 over 5 years ago
Generally yes. You can use GUI tools like Matillion or Alteryx to orchestrate, or you can just run SQL as commands or view materializations. With the advent of natively scalable SQL-based databases such as Snowflake, SQL is rapidly clawing back territory from the "Big Data" databases. It is way more flexible to throw a ton of hardware at SQL than to pre-conform your data for specific use cases with non-relational databases. Obviously, if you're talking about real-time or sensor data, SQL may not be your best strategy, but your question is really broad.
fjp over 5 years ago
I've been doing ETL-heavy work as (async) Python microservices.

The most important thing is to separate your infrastructure layer from your business logic, and have the infrastructure layer satisfy interfaces that are defined in the business logic layer.

That way, when my datastore switches from FTP to S3, or from MySQL to Postgres, I can swap out the datastore client, make sure the new one satisfies the interface, and the rest of the application works just the same.

The microservice's database for tracking and scheduling runs is in Postgres, with SELECT FOR UPDATE providing an easy way to treat a Postgres table as a work queue.
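
A minimal sketch of the "business logic defines the interface, infrastructure satisfies it" pattern described above, using typing.Protocol. The class and method names are illustrative, not the commenter's actual code:

    from typing import Protocol

    class Datastore(Protocol):              # interface owned by the business-logic layer
        def read(self, key: str) -> bytes: ...
        def write(self, key: str, data: bytes) -> None: ...

    def transform(raw: bytes) -> bytes:     # stand-in for the real business logic
        return raw.upper()

    def run_pipeline(store: Datastore) -> None:
        raw = store.read("input/latest.csv")
        store.write("output/latest.csv", transform(raw))

    class S3Datastore:                      # one infrastructure implementation
        def __init__(self, bucket: str):
            import boto3                    # assumed dependency
            self._bucket = boto3.resource("s3").Bucket(bucket)
        def read(self, key: str) -> bytes:
            return self._bucket.Object(key).get()["Body"].read()
        def write(self, key: str, data: bytes) -> None:
            self._bucket.Object(key).put(Body=data)

    # Swapping FTP for S3 (or MySQL for Postgres) only means writing another class
    # that satisfies Datastore; run_pipeline never changes.
    run_pipeline(S3Datastore("my-etl-bucket"))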
thenanyu over 5 years ago
I work at Mode, an analytics tool company. We get questions about data engineering from our customers all the time, and the solution we recommend 90% of the time is DBT: https://www.getdbt.com/

If you're used to software development, DBT gives you a very similar workflow for managing the T and L portions of the job.
eb0la over 5 years ago
If you are doing a serious ETL job, you'll need to handle rejects - I mean, a way to identify which rows of data cannot be handled gracefully by your ETL.

Doing this in plain SQL or Spark is complicated.

This is usually a job for commercial ETL tools like Informatica, Talend, DataStage...
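
A small, hand-rolled Python sketch of the reject handling this comment describes: rows that fail validation are routed to a rejects file with a reason instead of aborting the load. The column names and rules are made up:

    import csv
    from typing import Optional

    def validate(row: dict) -> Optional[str]:
        """Return a reject reason, or None if the row is fine."""
        if not row.get("customer_id"):
            return "missing customer_id"
        if not row.get("amount", "").replace(".", "", 1).isdigit():
            return "non-numeric amount"
        return None

    def load_rows(src_path: str, ok_path: str, rejects_path: str) -> None:
        with open(src_path) as src, \
             open(ok_path, "w", newline="") as ok, \
             open(rejects_path, "w", newline="") as bad:
            reader = csv.DictReader(src)
            ok_writer = csv.DictWriter(ok, fieldnames=reader.fieldnames)
            bad_writer = csv.DictWriter(bad, fieldnames=reader.fieldnames + ["reject_reason"])
            ok_writer.writeheader()
            bad_writer.writeheader()
            for row in reader:
                reason = validate(row)
                if reason is None:
                    ok_writer.writerow(row)
                else:
                    bad_writer.writerow({**row, "reject_reason": reason})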
tracker1 over 5 years ago
Personally, I'm a fan of scripts for this type of work... Node, Python, Perl, etc. That's just me, though. I know a lot of people who live and breathe SSIS packages and similar.

It really depends on where you want the data and how you want to use it.
vivan over 5 years ago
As with most questions about stack choice: it depends.
llampx over 5 years ago
It depends on your sources and whether you want to do real-time or not.