Speaking as a PyTorch user, many of the steps in your README example resemble the usual setup, except for the pipeline.run() handoff, which is replaced by eager evaluation in PyTorch.

Are you considering something like an eager mode for your library, or perhaps a PyTorch plugin that might use your APIs?
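To make the contrast concrete, here is a minimal sketch of the two execution styles. The `Pipeline` class below is a hypothetical stand-in for illustration only, not ZenML's actual API:

```python
class Pipeline:
    """Records steps but defers execution until run() is called."""

    def __init__(self):
        self._steps = []

    def step(self, fn):
        self._steps.append(fn)  # record the step; nothing executes yet
        return fn

    def run(self, value):
        # Execution happens only at the run() handoff.
        for fn in self._steps:
            value = fn(value)
        return value


# Eager style (PyTorch-like): each operation executes immediately.
eager_result = (3 + 1) * 2  # evaluated as the expression is written

# Deferred style: steps are collected, then executed together.
pipe = Pipeline()
pipe.step(lambda x: x + 1)
pipe.step(lambda x: x * 2)
deferred_result = pipe.run(3)
```

Both produce the same value; the difference is purely when the work happens, which is what an "eager mode" would change.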
The API looks fairly brittle; for example, the loss and activation are defined manually inside the pipeline rather than in the model itself. Have you or your customers used this in a large production environment?
Is the team behind ZenML funded by VC investment? I ask because the project seems to have launched fully formed, without a long development process in the open (or so it appears; I could be wrong).