Setting Up a Deep Learning Machine from Scratch

324 points by IamFermat about 9 years ago

13 comments

minimaxir about 9 years ago
The dependency hell required to run a good deep learning machine is one of the reasons why using Docker/VMs is not a bad idea. Even if you follow the instructions in the OP to the letter, you can still run into issues where a) an unexpected interaction with permissions or other package versions causes the build to fail and b) building all the packages can take an hour or more even on a good computer.

The Neural Doodle tool (https://github.com/alexjc/neural-doodle), which appeared on HN a couple of months ago (https://news.ycombinator.com/item?id=11257566), is very difficult to set up without errors. Meanwhile, the included Docker container (for the CPU implementation) gets things running immediately after a 311MB download, even on Windows, which is otherwise fussy about machine learning libraries. (I haven't played with the GPU container yet, though.)

Nvidia also has an interesting Docker integration that allows containers to use the GPU on the host: https://github.com/NVIDIA/nvidia-docker
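One nice property of the container route is that GPU pass-through can be verified from inside the container itself. A minimal sketch, assuming TensorFlow is installed in the image (any framework's device query would do; nothing here is taken from the article):

    # Run inside a container started via nvidia-docker to confirm that the
    # host GPU is visible to the framework, not just to the driver.
    from tensorflow.python.client import device_lib

    devices = device_lib.list_local_devices()
    for d in devices:
        print(d.device_type, d.name)  # expect a "GPU" entry alongside the CPU

    if not any(d.device_type == "GPU" for d in devices):
        print("No GPU visible; check that the container was launched with nvidia-docker.")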
JackFr about 9 years ago
Disappointed. Misread it -- I thought he was going to do deep learning *with* https://scratch.mit.edu/, not *from* scratch.
pilooch about 9 years ago
Commoditizing deep learning is a must. After repeated in-production installs at various corporations, wiring them into existing pipelines, I've convinced some of them to sponsor a commoditized open-source deep learning server.

Code is here: https://github.com/beniz/deepdetect

There are separate CPU and GPU Docker versions, and, as mentioned elsewhere in this thread, they are the easiest way to set up even a production system without a critical impact on performance, thanks to nvidia-docker. They seem to be more popular than the AMIs within our little community.
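To make the "deep learning as a server" idea concrete, here is a minimal sketch of a prediction request against such a service over HTTP. The host, port, /predict route, service name, and payload fields are illustrative assumptions, not a verified copy of DeepDetect's API:

    import requests

    # Hypothetical prediction request against a deep learning server.
    # The endpoint, service name, and payload shape below are assumptions.
    payload = {
        "service": "imageserv",                   # a previously created service
        "parameters": {"output": {"best": 3}},    # ask for the top-3 classes
        "data": ["http://example.com/cat.jpg"],   # input to classify
    }

    resp = requests.post("http://localhost:8080/predict", json=payload)
    resp.raise_for_status()
    print(resp.json())  # JSON body with predicted classes and probabilities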
mastazi about 9 years ago
I'm sorry if this is only tangentially on topic:

I was reading the article and got to the part about installing the CUDA drivers.

I am currently in the market for a laptop that will be used for self-learning purposes, and I am interested in trying GPU-based ML solutions.

In my search for the most cost-effective machine, some of the laptops I came across are equipped with AMD GPUs, and it seems that support for them is not as good as for their Nvidia counterparts: so far I know of Theano and Caffe supporting OpenCL, and support might come to TensorFlow in the future [1]. In addition, I saw that there are solutions for Torch [2], although they seem to be developed by single individuals.

I was wondering if someone with experience in ML could give me some advice: is the AMD route viable?

[1] https://github.com/tensorflow/tensorflow/issues/22

[2] https://github.com/torch/torch7/wiki/Cheatsheet#opencl
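For anyone weighing the AMD route, a first sanity check is whether the GPU is visible to an OpenCL runtime at all. A minimal sketch, assuming pyopencl and a vendor OpenCL driver are installed:

    import pyopencl as cl

    # List every OpenCL platform (AMD, Intel, Nvidia, ...) and its devices.
    for platform in cl.get_platforms():
        print("Platform:", platform.name)
        for device in platform.get_devices():
            print("  Device:", device.name,
                  "| global memory:", device.global_mem_size // (1024 ** 2), "MB")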
zacharyfmarion about 9 years ago
I posted something similar on my blog (http://zacharyfmarion.io/machine-learning-with-amazon-ec2/) not too long ago. It would be nice if there were a tool that set all of this up for you!
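In that spirit, launching a GPU instance from a prebuilt image can itself be scripted. A minimal sketch using boto3, where the AMI ID, key pair, and security group are placeholders to substitute with your own:

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Launch a single GPU instance from a prebuilt deep learning AMI.
    # "ami-xxxxxxxx", "my-key", and "my-sg" are placeholders, not real values.
    instances = ec2.create_instances(
        ImageId="ami-xxxxxxxx",        # e.g. a Caffe/Torch AMI from the AWS Marketplace
        InstanceType="g2.2xlarge",     # a GPU instance type of that era
        KeyName="my-key",
        SecurityGroups=["my-sg"],
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", instances[0].id)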
vonnik about 9 years ago
I work on Deeplearning4j, and I'm told that the install process is not too hellish. Feedback is welcome there:

http://deeplearning4j.org/quickstart

http://deeplearning4j.org/gettingstarted

Someone in the community also Dockerized Spark + Hadoop + OpenBLAS:

https://github.com/crockpotveggies/docker-spark

The GPU release is coming out Monday.
profen about 9 years ago
The steps are pretty neat. I also agree about the driver and tools installation: just painful and long.

It looks like there are separate Torch and Caffe AMIs for Amazon as well. Going to try them later.

https://aws.amazon.com/marketplace/pp/B01B52CMSO

https://aws.amazon.com/marketplace/pp/B01B4ZSX5S
visarga about 9 years ago
Is there a host offering GPU systems preconfigured with ML frameworks and models, for playing around? Something simple to use like Digital Ocean.
profen about 9 years ago
I have used this DIGITS AMI on AWS in the past for Caffe and Torch.

https://aws.amazon.com/marketplace/pp/B01DJ93C7Q/ref=srh_res_product_title?ie=UTF8&sr=0-6&qid=1463261339052
amelius about 9 years ago
Step 1: make sure that your machine has sufficient free PCIe slots for the GPU cards, and that you have sufficient physical space inside the machine.

Seriously... why can't there be a better way of adding coprocessors to a machine? Like stacking some boxes, interconnected by a parallel ribbon cable, or something like that?
tzz about 9 years ago
If someone creates a Juju Charm (https://jujucharms.com) for this, then you can use the pre-configured service on any of the major public clouds.
tacos about 9 years ago
I don't understand the fascination with these "make a list" style setup instructions, as they're almost immediately outdated and seldom updated.

We have AMIs, we have Docker, we have (gasp) shell scripts. It's 2016; why am I cutting and pasting between a web page and a console?

To my knowledge, the only thing that does something like this well is oh-my-zsh. And look at the success they've had! So either do it right, or don't do it at all.
raverbashing about 9 years ago
No, you don't need to restart your machine after you install CUDA.

You also might not need to restart after you install the drivers; this is not Windows. (But some rmmod/modprobe may be needed.)

> If your deep learning machine is not your primary work desktop, it helps to be able to access it remotely

Yes, use ssh.
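A quick way to confirm that the driver and toolkit are usable without a reboot is simply to shell out to nvidia-smi and nvcc. A minimal sketch in Python, assuming both are on the PATH:

    import subprocess

    def check(cmd):
        """Run a command and report whether it succeeded."""
        try:
            out = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
            print("OK:", " ".join(cmd))
            print(" ", out.decode().splitlines()[0])  # first line as a summary
        except (OSError, subprocess.CalledProcessError) as exc:
            print("FAILED:", " ".join(cmd), "->", exc)

    check(["nvidia-smi"])         # is the driver loaded and the GPU visible?
    check(["nvcc", "--version"])  # is the CUDA toolkit installed?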