The dependency hell involved in setting up a good deep learning machine is one of the reasons why using Docker or a VM is not a bad idea. Even if you follow the instructions in the OP to the letter, you can still run into issues where a) an unexpected interaction with permissions or other package versions causes the build to fail, and b) building all the packages can take an hour or more even on a good computer.<p>The Neural Doodle tool (<a href="https://github.com/alexjc/neural-doodle" rel="nofollow">https://github.com/alexjc/neural-doodle</a>), which appeared on HN a couple of months ago (<a href="https://news.ycombinator.com/item?id=11257566" rel="nofollow">https://news.ycombinator.com/item?id=11257566</a>), is very difficult to set up without errors. Meanwhile, the included Docker container (for the CPU implementation) gets things running immediately after a 311MB download, even on Windows, which otherwise gets fussy with machine learning libraries. (I haven't played with the GPU container yet, though.)<p>Nvidia also has an interesting wrapper around Docker which allows containers to use the GPU on the host: <a href="https://github.com/NVIDIA/nvidia-docker" rel="nofollow">https://github.com/NVIDIA/nvidia-docker</a>
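<p>For anyone curious, once the host has the Nvidia driver and nvidia-docker installed, giving a container GPU access is roughly a one-liner. A sketch based on the nvidia-docker README (image tags may differ on your system, and this obviously needs an Nvidia GPU on the host to actually run):

```shell
# Sanity check: run nvidia-smi inside Nvidia's CUDA base image.
# If the host GPU is visible to the container, this prints the
# usual driver/GPU status table.
nvidia-docker run --rm nvidia/cuda nvidia-smi
```

The nice part is that the CUDA toolkit lives in the image, so the host only needs the driver, which sidesteps most of the version-matching pain.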