Agreed. The tooling around deep learning is not as mature as the tooling around software development. There is a fair amount of engineering and grunt work needed just to get started, let alone to build on others' research. A few problems off the top of my head:

- Setup: Installing DL frameworks, Nvidia drivers, and CUDA is an exercise in dependency hell. Getting someone else's project to run when its dependencies differ from what you have installed is hard to get right. Docker images [1] and nvidia-docker make this simple, but they are still not the norm.

- Reproducibility: This is big, as Denny mentions. Folks still use GitHub for sharing code, but DL pipelines need versioning of more than just code: code, environment, parameters, data, and results all matter (see the sketch at the end of this comment).

- Sharing and collaboration: I've noticed that most collaboration on deep learning research, unlike software, happens only when the folks are co-located (e.g. part of the same school or company). This likely links back to reproducibility, but IMHO there are not many good tools for effective collaboration right now.

[1] https://github.com/floydhub/dl-docker (Disclaimer: I created this)
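
To make the reproducibility point concrete, here is a minimal Python sketch of what a per-run snapshot could record alongside the results. This is just one way to do it, not any existing tool's API; the snapshot.json and train.csv names, the hyperparameters, and the run_experiment entry point are hypothetical, and the git/pip calls assume the run is launched from a git checkout with a pip-managed environment.

    # Minimal sketch: capture code, environment, parameters, data, and
    # results for one run. All file names and params are placeholders.
    import hashlib
    import json
    import subprocess
    import sys
    import time

    def sha256_of_file(path, chunk_size=1 << 20):
        # Hash the dataset so the exact input to this run is identifiable.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def snapshot(params, data_path, results):
        # Record everything needed to reproduce the run, not just the code.
        return {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            # Code: the exact commit the run was launched from.
            "git_commit": subprocess.check_output(
                ["git", "rev-parse", "HEAD"]).decode().strip(),
            # Environment: interpreter version and installed packages.
            "python": sys.version,
            "packages": subprocess.check_output(
                [sys.executable, "-m", "pip", "freeze"]).decode().splitlines(),
            # Parameters: the hyperparameters for this run.
            "params": params,
            # Data: digest of the exact dataset file used.
            "data_sha256": sha256_of_file(data_path),
            # Results: final metrics, stored next to everything above.
            "results": results,
        }

    if __name__ == "__main__":
        params = {"lr": 0.001, "batch_size": 64, "epochs": 10}
        # results = run_experiment(params)  # hypothetical training entry point
        results = {"val_accuracy": 0.92}
        with open("snapshot.json", "w") as f:
            json.dump(snapshot(params, "train.csv", results), f, indent=2)

Checking a snapshot.json like this into the repo (or attaching it to shared results) gets you most of the way to a reproducible run without any special infrastructure.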