The dependency hell involved in setting up a good deep learning machine is one of the reasons why using Docker or a VM is not a bad idea. Even if you follow the instructions in the OP to the letter, you can still run into issues where a) an unexpected interaction with permissions or other package versions causes the build to fail, and b) building all the packages can take an hour or more even on a fast computer.

The Neural Doodle tool (https://github.com/alexjc/neural-doodle), which appeared on HN a couple of months ago (https://news.ycombinator.com/item?id=11257566), is very difficult to set up without errors. Meanwhile, the included Docker container (for the CPU implementation) gets things running immediately after a 311MB download, even on Windows, which is otherwise fussy with machine learning libraries. (I haven't played with the GPU container yet, though.)

Nvidia also has an interesting Docker wrapper which allows containers to use the GPU on the host: https://github.com/NVIDIA/nvidia-docker
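If you want to try nvidia-docker, the smoke test from the project's README is a one-liner (nvidia/cuda is Nvidia's official base image); the second command is just an illustration with a made-up image name:

    # verify the container can see the host GPU
    nvidia-docker run --rm nvidia/cuda nvidia-smi

    # run a framework image the same way (image name here is hypothetical)
    nvidia-docker run -it --rm mylab/theano-gpu bash

The trick is that the wrapper mounts the host's driver and GPU device files into the container, so images only need the CUDA toolkit, not a matching driver install.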
Disappointed. Misread it -- I thought he was going to do deep learning *with* https://scratch.mit.edu/, not *from* scratch.
Commoditizing deep learning is mandatory. After repeated in-production installs at various companies, hooking into their existing pipelines, I've convinced some of them to sponsor a commoditized open source deep learning server.

Code is here: https://github.com/beniz/deepdetect

There are separate CPU and GPU Docker images, and as mentioned elsewhere in this thread, they are the easiest way to set up even a production system without a critical performance hit, thanks to nvidia-docker. They seem to be more popular than the AMI within our little community.
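If you want to kick the tires, it's roughly this (image names and port from memory of the repo's README, so double-check there):

    # CPU-only server
    docker run -d -p 8080:8080 beniz/deepdetect_cpu

    # GPU server, via nvidia-docker
    nvidia-docker run -d -p 8080:8080 beniz/deepdetect_gpu

    # sanity check against the REST API
    curl http://localhost:8080/info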
I'm sorry if this is only tangentially on topic: I was reading the article and got to the part about installing CUDA drivers.

I'm currently in the market for a laptop for self-learning purposes, and I'm interested in trying GPU-based ML solutions.

In my search for the most cost-effective machine, some of the laptops I came across are equipped with AMD GPUs, and support for them seems not as good as for their Nvidia counterparts: so far I know of Theano and Caffe supporting OpenCL, and support might come to TensorFlow in the future [1]. There are also OpenCL solutions for Torch [2], although they seem to be developed by single individuals.

I was wondering if someone with experience in ML could give me some advice: is the AMD route viable?

[1] https://github.com/tensorflow/tensorflow/issues/22

[2] https://github.com/torch/torch7/wiki/Cheatsheet#opencl
I posted something similar on my blog (http://zacharyfmarion.io/machine-learning-with-amazon-ec2/) not too long ago. It would be nice if there were a tool that set all of this up for you!
I work on Deeplearning4j, and I'm told that the install process is not too hellish. Feedback welcome there:

http://deeplearning4j.org/quickstart

http://deeplearning4j.org/gettingstarted

Someone in the community also Dockerized Spark + Hadoop + OpenBLAS:

https://github.com/crockpotveggies/docker-spark

The GPU release is coming out Monday.
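For the curious, the quickstart boils down to roughly this (repo name from memory; the quickstart page linked above has the canonical steps):

    # prerequisites: Java 7+, Maven, git
    git clone https://github.com/deeplearning4j/dl4j-examples.git
    cd dl4j-examples
    mvn clean install
    # then import the project into your IDE and run any example class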
The steps are pretty neat. Also agree on the driver and tools installation: just painful and long.

Looks like there are separate Torch and Caffe AMIs for Amazon as well. Going to try them later.

https://aws.amazon.com/marketplace/pp/B01B52CMSO

https://aws.amazon.com/marketplace/pp/B01B4ZSX5S
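If you'd rather skip the web console, launching one of these from the AWS CLI looks roughly like this (the AMI ID is a placeholder; grab the real one from the marketplace listing after subscribing):

    # launch a GPU instance from a marketplace AMI
    aws ec2 run-instances \
        --image-id ami-xxxxxxxx \
        --instance-type g2.2xlarge \
        --key-name my-keypair \
        --security-groups my-ssh-only-group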
I've used this DIGITS AMI on AWS in the past for Caffe and Torch.

https://aws.amazon.com/marketplace/pp/B01DJ93C7Q
Step 1: make sure that your machine has enough free PCIe slots for the GPU cards, and that you have sufficient physical space inside the case.

Seriously... why can't there be a better way of adding coprocessors to a machine? Like stacking some boxes, interconnected by a parallel ribbon cable, or something like that?
If someone creates a Juju Charm (https://jujucharms.com) for this, then you can use the pre-configured service on any of the major public clouds.
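For anyone unfamiliar with Juju, the workflow would look something like this (the charm name below is hypothetical, since no such charm exists yet):

    # stand up a controller on AWS, then deploy the hypothetical charm
    juju bootstrap aws
    juju deploy cs:~someone/deep-learning-box
    juju status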
I don't understand the fascination with these "make a list" style setup instructions; they're almost immediately outdated and seldom updated.

We have AMIs, we have Docker, we have (gasp) shell scripts. It's 2016; why am I cutting and pasting between a web page and a console?

To my knowledge the only thing that does something like this well is oh-my-zsh. And look at the success they've had! So either do it right, or don't do it at all.
No, you don't need to restart your machine after you install CUDA.

You also might not need to restart after you install the drivers; this is not Windows. (But some rmmod/modprobe may be needed.)

> If your deep learning machine is not your primary work desktop, it helps to be able to access it remotely

Yes, use ssh.
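For reference, reloading the driver without a reboot looks roughly like this (assumes nothing is holding the module, e.g. X is stopped; details vary by distro):

    # unload the old nvidia kernel module and load the new one
    sudo rmmod nvidia
    sudo modprobe nvidia

    # confirm the driver and GPU are visible
    nvidia-smi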