tl;dr

1. Here I present an elementary proof of a classical result in random matrix theory, one that applies to any random matrix sampled from a continuous distribution. Among its many important consequences: almost all linear models with square Jacobian matrices are invertible.

2. This is also relevant to scientists who want stable internal models for deep neural networks, since a deep network is an exponentially large ensemble of linear models with compact support.
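As a quick empirical illustration of the result (not a proof), here is a minimal sketch using NumPy; the choice of standard Gaussian entries, the matrix size, and the trial count are mine and only illustrative:

```python
# Sanity check: square matrices with i.i.d. entries from a continuous
# distribution (here: standard Gaussian) are invertible with probability 1,
# so every sampled matrix should have full rank.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 8, 10_000
singular = sum(
    np.linalg.matrix_rank(rng.standard_normal((n, n))) < n
    for _ in range(trials)
)
print(f"singular matrices out of {trials}: {singular}")  # expected: 0
```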