> A forward() function gets called when the Graph is run.

Isn't that almost exactly the same in TensorFlow? You'd run your model to generate an output, and/or run your optimization operation to optimize the model (sketched below).

> Based on some reviews, PyTorch also shows a better performance on a lot of models compared to TensorFlow.

Citation needed. How well are the examples optimized? What does "performance" mean: precision, or learning iterations per second?

If it's the latter, in which environment? CPU/GPU/distributed computing?
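A rough sketch of the parallel I mean, assuming a toy linear model and TF 1.x-style graph execution; names like TinyNet, train_op, and the shapes are illustrative, not from the article:

    import numpy as np
    import torch
    import torch.nn as nn
    import tensorflow.compat.v1 as tf

    tf.disable_v2_behavior()
    data = np.random.rand(4, 3).astype(np.float32)

    # PyTorch: forward() runs each time the module is called.
    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(3, 1)

        def forward(self, x):
            return self.linear(x)

    model = TinyNet()
    out_pt = model(torch.from_numpy(data))        # forward() executes here

    # TensorFlow 1.x style: build the graph once, then run it.
    x = tf.placeholder(tf.float32, [None, 3])
    w = tf.Variable(tf.zeros([3, 1]))
    y = tf.matmul(x, w)                           # graph node, nothing computed yet
    loss = tf.reduce_mean(tf.square(y))
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        out_tf = sess.run(y, feed_dict={x: data})   # run the model for an output
        sess.run(train_op, feed_dict={x: data})     # run the optimization op

In both cases you ask the framework to execute the model: calling the module triggers forward() in PyTorch, and sess.run() executes whichever graph nodes (the output and/or the training op) you request in TensorFlow.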