Data Engineering is a very wide field with an even wider variety of tools. What are some good resources that shed light on how to successfully build and run a modern Data Engineering operation at scale?

I'm specifically interested in practical techniques and, preferably, open-source software that can be composed into a solution covering the following (a rough sketch of the kind of composition I mean is at the end of this post):
* Data Collection / Ingestion of streaming and batch snapshot data
* Structuring a Data Lake
* Creating and operating data pipelines
* ETL or ELT?
* Do you share source data between the warehouse tables and machine learning / streaming pipelines?
* Data Warehouse
* Machine Learning / Data Science / Spark
* Data Visualization
* ML Model Serving (and streaming updates)
* Data Lineage
* Am I forgetting anything?

I realise this is a very broad question. I know that there are good solutions (open source and commercial) for all of these if I simply Google, but integrating them well seems to be an art.

If you know of any books or resources or experts to follow, please share.
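To make that a bit more concrete, here is the kind of composition I have in mind. It's only a rough, hypothetical sketch of one ELT-style option (PySpark writing Parquet into a zoned lake); the bucket paths, table, and column names (my-lake, orders, order_id, order_ts, amount) are all made up, and I'm not suggesting this is the "right" way:

    # Hypothetical batch job: land a raw snapshot in the lake, then do an
    # ELT-style transform into a curated table. Paths/columns are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_snapshot_ingest").getOrCreate()

    # 1. Ingest: land the raw batch snapshot unchanged in the "raw" zone.
    raw = spark.read.option("header", True).csv(
        "s3a://my-lake/landing/orders/2024-01-01/"
    )
    raw.write.mode("overwrite").parquet(
        "s3a://my-lake/raw/orders/ingest_date=2024-01-01/"
    )

    # 2. Transform (the "T" after the "L"): derive a typed, de-duplicated
    #    table that the warehouse and ML pipelines could both read.
    curated = (
        spark.read.parquet("s3a://my-lake/raw/orders/")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("amount", F.col("amount").cast("double"))
        .dropDuplicates(["order_id"])
    )
    curated.write.mode("overwrite").partitionBy("ingest_date").parquet(
        "s3a://my-lake/curated/orders/"
    )

    spark.stop()

Even in something this small, questions about lineage, schema enforcement, orchestration, and sharing the curated table between the warehouse and ML/streaming pipelines show up immediately, which is exactly why I'm asking how people compose and operate these pieces at scale.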