Hey HN,

I've been working in the data science and machine learning domain for the past 8 years or so, but I haven't been exposed to tools such as PySpark, which frequently come up in job descriptions.
What resources or certifications can I use to get up to par on PySpark?

Thanks!
I've used Spark for the past 8 years or so, and it's definitely a solid foundation for data engineering. I use it most for generating reports, but sometimes we have large projects to get data into different staging databases. I use it a lot with Elasticsearch or Parquet. Basically, it helps you write large joins and flatten the result into a store that can perform aggregations on that flattened result much more quickly, like Elasticsearch or a columnar database.
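To give a flavor of that join-and-flatten pattern, here's a rough sketch. The table names, columns, and paths are all made up, just to show the shape of it:

    # Minimal sketch of the join-and-flatten pattern; everything here is hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("flatten-for-reporting").getOrCreate()

    orders = spark.read.parquet("s3://warehouse/orders")        # hypothetical path
    customers = spark.read.parquet("s3://warehouse/customers")  # hypothetical path

    # Large join, then flatten into one wide table that a columnar store
    # (or Elasticsearch) can aggregate over quickly.
    flat = (
        orders.join(customers, on="customer_id", how="left")
              .select(
                  "order_id",
                  "customer_id",
                  F.col("name").alias("customer_name"),
                  "order_total",
                  "order_date",
              )
    )

    flat.write.mode("overwrite").parquet("s3://warehouse/orders_flat")

The point is that Spark does the expensive join once, and the downstream store only ever sees the pre-flattened table.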
If you have experience with any data frame library (like Pandas) and with SQL, you can pick up PySpark pretty easily... with the one caveat that writing good data pipelines in any language gets much harder once you start processing actually big data (~20+ TB). Modern SQL engines are so good, though.
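To illustrate how directly the Pandas and SQL knowledge carries over, here's a toy groupby done three ways (column names are made up):

    # If you know Pandas or SQL, the PySpark DataFrame API maps over almost directly.
    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Pandas version
    pdf = pd.DataFrame({"city": ["NYC", "NYC", "SF"], "sales": [10, 20, 30]})
    pandas_result = pdf.groupby("city", as_index=False)["sales"].sum()

    # PySpark version: same shape of operation, lazily evaluated and distributed
    spark = SparkSession.builder.appName("pandas-to-pyspark").getOrCreate()
    sdf = spark.createDataFrame(pdf)
    sdf.groupBy("city").agg(F.sum("sales").alias("sales")).show()

    # Or just lean on the SQL you already know:
    sdf.createOrReplaceTempView("sales")
    spark.sql("SELECT city, SUM(sales) AS sales FROM sales GROUP BY city").show()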
I thought the _Learning Spark_ book was a pretty good introduction. Databricks offers it for free here [0]

[0] https://pages.databricks.com/rs/094-YMS-629/images/LearningSpark2.0.pdf
I'm a data engineer who uses Spark daily. I guess the only cert that matters would come from Databricks, but I think it would be more worth your while to read the book mentioned here and do a little project ingesting and transforming data.
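For a first project, something like this is plenty: ingest a raw CSV, clean it up, and write partitioned Parquet. Paths and columns below are placeholders:

    # A possible shape for a first practice project; inputs are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingest-practice").getOrCreate()

    raw = (
        spark.read
             .option("header", True)
             .option("inferSchema", True)
             .csv("data/raw_events.csv")  # placeholder input file
    )

    cleaned = (
        raw.dropDuplicates(["event_id"])
           .withColumn("event_date", F.to_date("event_timestamp"))
           .filter(F.col("event_date").isNotNull())
    )

    # Partitioning by date is a common layout for downstream queries.
    cleaned.write.mode("overwrite").partitionBy("event_date").parquet("data/events")

Once you've done that end to end a couple of times, most real pipelines are just bigger versions of the same read/transform/write loop.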
Just get a job since you are already senior. You can learn it on the job. Find a few tutorials if you must, but people should be able to pick it up in a few weeks for basic work.