
Ask HN: Is PySpark a Dead-End?

9 points by passer_byer over 3 years ago
I am contracted by a major financial services firm to refactor an analytical model used for revenue forecasting into PySpark executing on an AWS EMR cluster. The project's current status is documented[0].

The client's team responsible for operationalization successfully refactored another analytical model into Python/pandas. The current model's execution time for a 5-year scenario is ~17 hours. Most of that time is spent executing poorly crafted Oracle SQL queries that draw millions of rows into the analytical run-time for sorting, aggregation, discarding, merging, and splitting tasks.

To constrain this execution time, the final input is a sample of ~1.8M rows from a loan portfolio of ~81M records.

The client is concerned about performance and believes PySpark is the preferred target language.

I have been on this project for just one month, but I previously contracted at the same firm for six months to refactor another model into Python/pandas. That project was successful, mainly due to the team leader's rigor in meeting milestones and ability to remove blockers for the team.

I recently discussed these projects with Travis Oliphant, who had some interesting ideas on Python-based frameworks for processing out-of-core dataframes. We discussed Dask[1], Coiled's commercial Dask support[2], Pandas on Ray[3], and Modin[4]. Others discussed were Databricks[5], bodo.ai[6], Voltron Data[7], and AtScale[8]. On Reddit, the commentary on Snowflake was very positive[9].

Easing the maintenance burden of keeping the model in production and devising new scenarios (e.g. Covid-19 effects on forbearance requests) are requirements. Its shelf-life is years, making maintainability a major consideration.

What have others experienced when scaling out feature engineering for teams familiar with Python/pandas?

Is PySpark a dead-end library in the Python ecosystem?

[0] https://www.pythonforsasusers.com/project_summary/current_project_status.html
[1] https://dask.org/
[2] https://coiled.io/
[3] https://docs.ray.io/en/ray-0.4.0/pandas_on_ray.html
[4] https://modin.readthedocs.io/en/stable/
[5] https://docs.databricks.com/languages/pandas-spark.html (which points to Apache's Pandas API on Spark)
[6] https://bodo.ai/
[7] https://wesmckinney.com/blog/from-ursa-to-voltrondata/
[8] https://www.atscale.com/autonomous-data-engineering/
[9] https://www.reddit.com/r/dataengineering/comments/r893rw/why_is_snowflake_so_popular/
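For readers weighing the migration path, here is a minimal sketch of what a pandas-familiar aggregation looks like under the Pandas API on Spark referenced in [5]. It assumes Spark 3.2+, and the dataset path and column names are hypothetical, not from the project.

    # Minimal sketch, assuming Spark >= 3.2 with the built-in Pandas API on Spark.
    # Path and column names below are illustrative only.
    import pyspark.pandas as ps

    # Reads lazily into a distributed frame, but with a pandas-style API
    loans = ps.read_parquet("s3://example-bucket/loan_portfolio/")

    summary = (
        loans[loans["balance"] > 0]
        .groupby("loan_status")["balance"]
        .sum()
        .sort_index()
    )
    print(summary.head(10))

Dask's dask.dataframe offers a similar pandas-like interface for out-of-core work, which is why it keeps coming up for teams in this situation.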

7 comments

dagw over 3 years ago
> Most of that time is spent executing poorly crafted Oracle SQL queries

Start by looking here. As much as we love to bag on Oracle, it is at its core a really fast and capable database. I don't know what you are doing, but doing anything with only 81M records shouldn't take 17 hours. Profile your SQL, rewrite it, and if necessary bring in an Oracle SQL expert; I'm pretty sure you will find some easy wins just there, perhaps even enough to solve your performance problems. If you're doing relational-database-type work, it's hard to beat a relational database.
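A minimal sketch of the "push the work into the database" idea, assuming a SQLAlchemy connection to Oracle; the connection string, table, and columns are hypothetical.

    # Sketch only: let Oracle do the sort/aggregate and return just the summary,
    # instead of pulling millions of raw rows into pandas. Names are illustrative.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("oracle+oracledb://user:password@db-host:1521/?service_name=ORCL")

    summary = pd.read_sql(
        """
        SELECT portfolio_id,
               loan_status,
               COUNT(*)     AS n_loans,
               SUM(balance) AS total_balance
        FROM   loan_portfolio
        GROUP  BY portfolio_id, loan_status
        ORDER  BY portfolio_id, loan_status
        """,
        engine,
    )

Running EXPLAIN PLAN on the existing queries is the cheapest way to check whether full table scans or missing indexes are behind the 17 hours.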
shoo over 3 years ago
It's pretty hard to give helpful advice without clearly understanding the existing situation and what the actual bottlenecks are.

E.g. maybe 15 of the 17 hours of running time is because the database is doing sequential scans of some tables since essential indices haven't been defined. Or maybe the indices are defined but the queries need to be rewritten to take advantage of them. Or maybe the queries are blazing fast, but the Python scripts are taking it upon themselves to perform outer joins in very slow pure-Python code rather than just getting the database engine to do it. Or maybe all the queries are happening implicitly through the SQLAlchemy ORM, the entire analysis is a fractal mess of lazy n+1-select antipattern OO nonsense, and most of the running time is actually network latency between the machine where the Python sits and the machine where the database lives. Maybe 4 of the 17 hours is due to compute-heavy hot loops in pure Python that could be sped up 1000x if someone is willing to roll up their sleeves and spend a week rewriting them as C / C++ / Cython code that lets the CPU loose to crunch numbers in arrays without allocating or hashing or reference counting or waiting for the GIL. Or maybe the entire thing is relatively well engineered, given the physics of the computations involved, and 17 hours is pretty reasonable!

If no one knows yet what the bottlenecks are, maybe spend a few days profiling and comparing the results to theoretical estimates of the throughput the hardware is capable of under optimal use, and try to figure it out. It'd be a bit unfortunate to migrate everything to PySpark without understanding the bottlenecks and end up with something that runs slower than the original version.
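A minimal profiling sketch along these lines, assuming the existing pipeline can be invoked from a single entry point; run_model below is a hypothetical placeholder for that entry point.

    # Sketch only: profile one full run and see where the cumulative time goes
    # (database waits, pandas operations, or pure-Python hot loops).
    import cProfile
    import pstats

    def run_model():
        """Hypothetical stand-in for the existing pandas pipeline."""
        pass

    profiler = cProfile.Profile()
    profiler.enable()
    run_model()
    profiler.disable()

    stats = pstats.Stats(profiler).sort_stats("cumulative")
    stats.print_stats(25)

Timing each SQL query separately (even just logging wall-clock time around the read_sql calls) separates database time from Python time before any framework decision is made.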
NumberCruncher over 3 years ago
> Most of that time is spent executing poorly crafted Oracle SQL queries drawing millions of rows into the analytical run-time for sorting, aggregation, discarding, merging, and splitting tasks.

I always try to follow the rule of thumb of "if it can be done in the analytical DB, it should be done in the analytical DB". In my experience Oracle is pretty well suited for all of the "sorting, aggregation, discarding, merging, and splitting tasks". With proper indexing/partitioning, processing 81M records shouldn't take 17 hours. Pulling all the data into Python and then fighting the lack of (out-of-the-box) multi-threaded data processing capabilities seems to be more a part of the problem than of the solution.

In my current job, if I have to do some analytical heavy lifting I just write the data to AWS S3 (Parquet) and read the query results back into Python through AWS Athena (Presto).
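A minimal sketch of that S3/Athena round trip, assuming the awswrangler package (AWS SDK for pandas) and a Glue catalog database; the bucket, database, and table names are hypothetical.

    # Sketch only: write the working set to S3 as Parquet, then let Athena do
    # the heavy aggregation and pull back only the (small) result.
    import awswrangler as wr
    import pandas as pd

    df = pd.DataFrame(
        {"loan_status": ["current", "late", "current"], "balance": [1000.0, 250.0, 730.0]}
    )

    wr.s3.to_parquet(
        df=df,
        path="s3://example-analytics-bucket/loans/",
        dataset=True,
        database="analytics",   # Glue catalog database (hypothetical)
        table="loans",
    )

    result = wr.athena.read_sql_query(
        "SELECT loan_status, SUM(balance) AS total_balance FROM loans GROUP BY loan_status",
        database="analytics",
    )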
vanusa over 3 years ago
> Most of that time is spent executing poorly crafted Oracle SQL queries drawing millions of rows into the analytical run-time for sorting, aggregation, discarding, merging, and splitting tasks.

Depending on what goes on between the lines of all that "sorting, aggregation, discarding, merging, and splitting", the core guts of what you're doing might quite easily be done within Postgres.

And 81 million rows? That will fit on your laptop, easy (especially if many are discarded in the early stages of processing).

Or it perhaps might not fit so easily. But the basic point I'm trying to make here is: don't be afraid of simplicity. All other unknowns being equal, it's as good a starting point as any.
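A minimal sketch of the Postgres route, assuming the ~1.8M-row sample can be exported and loaded into a local instance; the connection string, file, and column names are hypothetical.

    # Sketch only: load the sample into a local Postgres database and do the
    # sort/merge/aggregate work in SQL rather than in pandas.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/loans")

    sample = pd.read_parquet("loan_sample.parquet")  # the ~1.8M-row working sample
    sample.to_sql("loan_sample", engine, if_exists="replace", index=False, chunksize=50_000)

    summary = pd.read_sql(
        "SELECT loan_status, COUNT(*) AS n_loans, SUM(balance) AS total_balance "
        "FROM loan_sample GROUP BY loan_status",
        engine,
    )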
passer_byer over 3 years ago
The observations posted here are very useful; thank you for such a detailed response.
kalu over 3 years ago
Spark is not dead. Not even close.
apohn over 3 years ago
So first things first: PySpark is not dead, dying, or a dead-end. When Databricks and Spark die, then we'll see the end of PySpark. Adoption of Spark and Databricks is growing. I actually see Arrow errors in PySpark jobs in Databricks, so Arrow/Ursa/VoltronData is already being used in the guts of Databricks/Spark.

We actually see a lot of what you are describing at my current company. All our data is stored in a database that was once very popular, but is old now and not cloud-based.

We have a couple of challenges:

1) The data engineering team typically loads data (often hundreds of millions of rows) into the database before anybody really decides how it will be used. Typically we (data scientists) only get access to views. The views and underlying tables are not indexed or partitioned as we need, so almost any query takes forever to run, since it almost always results in a full table scan of the raw source table.

2) The database team is caught in a budgeting trap. There is a long-term migration to another, cloud-based database, so both people and financial resources are focused on that. The end result is that no further scaling of the current database is possible, which means that every complex query we make creates more load, which makes every other query run slower. This database has a lot of users, tables, and queries, so a lot of the available people spend their time just making sure it maintains some basic standard of performance.

3) Since any complex query (e.g. even a basic date-based aggregation) increases the load on the server, user queries should only fetch and filter data. Aggregations and anything complex (e.g. a string operation) should be done downstream.

Based on these constraints, we have basically adopted the following:

1) Once the users (e.g. us) have defined a use case for the data, the data engineers/data platform team index and partition the data properly. This typically results in a huge performance increase. Recently we had one view go from 5+ hours for a basic query to less than 5 minutes.

2) All aggregations and data transformations are done in Databricks/PySpark.

3) Typically, after 2), if we convert the Spark DataFrame to a pandas DataFrame, the dataset is small enough to run anywhere (see the sketch after this comment).

One of the things to keep in mind is the people supporting this after your contract is over. I think Databricks/PySpark and Dask are fairly common and well known in the data community. Snowflake can probably help speed up the SQL queries once the data is moved into it, but I don't think it can cover some of the analytical things you can do in Spark.

Arrow/VoltronData doesn't seem like a fit for the use case you are describing unless you have a bunch of developers trying to build their own data engine with Arrow behind it.
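A minimal sketch of steps 2) and 3) above, assuming a standard Spark session; the dataset path and column names are hypothetical.

    # Sketch only: do the heavy filtering/aggregation in PySpark, then hand the
    # now-small result to pandas. Paths and column names are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("loan-aggregation").getOrCreate()

    loans = spark.read.parquet("s3://example-bucket/loan_portfolio/")

    agg = (
        loans
        .filter(F.col("as_of_date") >= "2021-01-01")
        .groupBy("portfolio_id", "loan_status")
        .agg(F.sum("balance").alias("total_balance"),
             F.count("*").alias("n_loans"))
    )

    # Arrow-backed conversion keeps this step fast once the aggregate is small
    spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
    pdf = agg.toPandas()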