That was an explanation from the perspective of someone acquainted with modern physics. As such, it will make sense to physicists, but to almost nobody else, including mathematicians who don’t know modern physics.<p>For example, at the start the author describes tensors as things that behave according to the tensor transformation formula. This is already a very physicist way of thinking: it assumes there is some object out there, and we’re trying to understand what it is by how it behaves. It also uses summation notation, which is rather foreign to non-physicist mathematicians. Then, when the article finally reaches the point where it all gets related to tensors in the TensorFlow sense, there is no reference to the transformation formula at all, purportedly so crucial to understanding tensors. How come?<p>The resolution is quite simple: what the author (and physicists) call tensors is not what TensorFlow (and mathematicians) call tensors. Instead, the author describes what mathematicians call a tensor <i>bundle</i>: a correspondence that assigns to each point of space its own <i>tensor</i>. That’s where the transformation rule comes from: if we describe this mapping in terms of some coordinate system (as physicists universally do), the transformation rule tells you how the description changes under a change of coordinates. This setup has little to do with TensorFlow, because there is no space that its tensors are attached to; they are just standalone entities.<p>So what are the mathematician’s (and TensorFlow’s) tensors? They’re basically what the author says, once you drop the very confusing and irrelevant introduction about changes of coordinates of the underlying space. It is irrelevant because TensorFlow tensors are not attached as a bundle to some space (a manifold) as they are in physics, so no change of space coordinates ever happens. Roughly, tensors are a kind of universal object representing multilinear maps: bilinear maps V x W -> R correspond canonically, one-to-one, to ordinary linear maps V (x) W -> R, where V (x) W is a vector space called the tensor product of V and W, and tensors are simply vectors in this tensor product space.<p>Basically, the idea is to replace weird multilinear objects with ordinary linear objects (vectors) that we know how to deal with, using matrix multiplication and stuff. That’s all there is to it.
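<p>To make that last point concrete, here is a minimal numpy sketch (my own illustration, not the article's): the coefficient array of a bilinear map is the tensor, and evaluating the bilinear map is nothing but an ordinary dot product once you flatten everything into the tensor product space.

    import numpy as np

    rng = np.random.default_rng(0)

    # Coefficients of a bilinear map B: R^3 x R^4 -> R.  This array of
    # 12 numbers is the "tensor": a vector in the 12-dimensional tensor
    # product space R^3 (x) R^4.
    T = rng.standard_normal((3, 4))
    v = rng.standard_normal(3)
    w = rng.standard_normal(4)

    # The bilinear map, evaluated coefficient by coefficient:
    bilinear = sum(T[i, j] * v[i] * w[j]
                   for i in range(3) for j in range(4))

    # The same value as an ordinary linear map: flatten the outer
    # product v (x) w into a 12-vector and dot it with T's 12 entries.
    linear = T.ravel() @ np.outer(v, w).ravel()

    assert np.isclose(bilinear, linear)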
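<p>And for contrast, a sketch of the physicist's transformation rule from the bundle picture (again my own illustration, taking a rank-2 covariant tensor as the example): the components change under a change of basis precisely so that the number the tensor computes does not.

    import numpy as np

    rng = np.random.default_rng(1)

    # Components of a rank-2 covariant tensor (a bilinear form) and of
    # two vectors, all written out in some starting basis.
    T = rng.standard_normal((3, 3))
    v = rng.standard_normal(3)
    w = rng.standard_normal(3)

    # A change of basis: columns of P are the new basis vectors (a
    # random matrix is invertible with probability 1, good enough here).
    P = rng.standard_normal((3, 3))

    # The transformation formula: vector components pick up P^-1, the
    # covariant tensor's components pick up a factor of P on each index.
    v_new = np.linalg.solve(P, v)
    w_new = np.linalg.solve(P, w)
    T_new = P.T @ T @ P

    # The content of the rule: the scalar the tensor computes is the
    # same in every coordinate system.
    assert np.isclose(v @ T @ w, v_new @ T_new @ w_new)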