I generally agree with the point made in this article, although I’ll point out that it’s only been true for the last couple of years. Until TensorFlow completely revamped its API in v2.0, scrapping the previous graph-based execution model for PyTorch-like eager execution, writing code in TF was much more time-consuming than in PyTorch, since you had to define the entire computational graph before you could execute it as a single unit. This made iterative debugging extremely painful, since you couldn’t interactively execute individual steps within the graph (sketched below).

These days, thankfully, the choice of framework comes down mostly to (a) minor syntactic preferences and (b) specific functionality available in one framework but not the other. For example, although I generally prefer PyTorch’s syntax since it’s closer to numpy’s, TF (via TensorFlow Probability) supports far more probability distributions (and operations on those distributions) than PyTorch. When working on a model in PyTorch, if I discover that I need that additional functionality, it’s easy enough to convert all my code to TF.
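
To make the contrast concrete, here’s roughly what the old define-then-run workflow looked like. This is a sketch using the tf.compat.v1 shim so it runs under TF 2.x; real 1.x code differed only in the module paths:

```python
import tensorflow as tf

# Graph mode (the pre-2.0 workflow, still reachable via tf.compat.v1):
# each line below only adds a node to the graph; nothing is computed yet.
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))
y = tf.reduce_sum(x ** 2)

# The graph runs as a single unit inside a session, so you can't poke at
# intermediate values interactively while you're building it.
with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # 14.0
```

With eager execution (the TF 2.x default, and what PyTorch has always done), the same computation runs immediately, line by line:

```python
import tensorflow as tf
import torch

x_tf = tf.constant([[1.0, 2.0, 3.0]])
print(tf.reduce_sum(x_tf ** 2).numpy())   # 14.0, computed as soon as it's written

x_pt = torch.tensor([[1.0, 2.0, 3.0]])
print(torch.sum(x_pt ** 2).item())        # 14.0, same idea in PyTorch
```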
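
On the distributions point, the extra coverage lives in TensorFlow Probability rather than core TF. A minimal sketch, assuming tensorflow_probability is installed, and using Skellam as an example of a distribution that, as far as I know, has no torch.distributions counterpart:

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

# Skellam (difference of two Poissons), one of many distributions in TFP.
d = tfd.Skellam(rate1=3.0, rate2=1.5)

print(d.mean().numpy())          # 1.5
print(d.log_prob(2.0).numpy())   # log-probability of observing a difference of 2
samples = d.sample(5)            # five draws, computed eagerly
```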