
MXNet – Deep Learning Framework of Choice at AWS

99 points by werner, over 8 years ago

14 comments

cs702, over 8 years ago
Translation from corporatespeak: "We don't have an internally developed framework that can compete with TensorFlow, which is controlled by Google, so we are throwing our weight behind MXNet."

As others have commented here, there is no evidence that MXNet is *that* much better (or worse) than the other frameworks.
fpgaminer, over 8 years ago
It seems more prevalent now than it used to be that frameworks/libraries are being used as weapons in a sort of mindshare war between the world's megacorps. Or perhaps I'm misremembering history. And I don't mean just AI; just look at Angular (Google) vs. React (Facebook).

It's a bit of a double-edged sword. As developers, this war gives us free access to well-funded and heavily developed tools. The world has been fundamentally changed by their availability. But at the same time we need to understand that the primary reason they exist is to lock developers into a particular vendor. It's most transparent with Google's TensorFlow, where they were obvious about their intentions to offer TensorFlow services on their cloud platform.

This article more than most exemplifies their desperate attempts. For now it seems to remain mostly that, desperate attempts, with the tools remaining more-or-less platform agnostic. But I foresee a grim future where our best libraries and tools are tied inextricably to a commercial ecosystem.
oneshot908, over 8 years ago
Using 3-year-old GPUs on a much deeper network than the other guys(tm) to demonstrate awesome scaling efficiency == Intel-level FUD. Note also the absence of overall batch size.

Wonder what would happen to that scaling efficiency if those GPUs were P40s?

See also the absence of equivalent AlexNet numbers to further obscure attempts at comparing this to the other guys(tm).

Can't wait for Intel's response to this.
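
For context on the metric being criticized: scaling efficiency is conventionally measured as throughput on N GPUs relative to N times single-GPU throughput, and a deeper network does more computation per communication round, which flatters that number. A toy illustration, with all figures invented:

```python
# Hypothetical illustration of multi-GPU scaling efficiency (all numbers
# invented): the fraction of ideal linear speedup actually achieved.
def scaling_efficiency(single_gpu_throughput, n_gpu_throughput, n_gpus):
    return n_gpu_throughput / (n_gpus * single_gpu_throughput)

# More compute per sample (e.g. a deeper net) hides communication cost,
# so the same cluster reports a higher efficiency:
print(scaling_efficiency(100.0, 10000.0, 128))  # shallow net: ~0.78
print(scaling_efficiency(20.0, 2300.0, 128))    # deeper net:  ~0.90
```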
deepnotderp, over 8 years ago
Okay, with all due respect, this is BS. I love MXNet and think it's underappreciated as well. But pretty much its best feature is the memory mirror (see oneshot908's comment).
imh, over 8 years ago
This reads weirdly. He talks about how MXNet is the best choice without comparing it to other frameworks. That's the whole point of choosing between things. I'm sure they did the legwork to make this decision, and some insight into that choice might help others follow. Without that, my distrust radar is blinking.
AlexCoventry, over 8 years ago
From the OP:

> a Deep Learning AMI, which comes pre-installed with the popular open source
> deep learning frameworks mentioned earlier; GPU-acceleration through CUDA
> drivers which are already installed, pre-configured, and ready to rock

You might want to clarify that the negative reviews [0] are from earlier versions which did not include the CUDA drivers. I recently considered this AMI and rejected it for a class [1] because of these reviews.

[0] https://aws.amazon.com/marketplace/reviews/product-reviews?asin=B01M0AXXQB

[1] https://www.meetup.com/Cambridge-Artificial-Intelligence-Meetup/events/235496478/
eva1984, over 8 years ago
> we have concluded that MXNet is the most scalable framework

Without being backed by any benchmarks? This claim is lazy.
bsfjgngdnxy, over 8 years ago
> MXNet can consume as little as 4 GB of memory when serving *deep networks with as many as 1000 layers*.

So perhaps I'm not well versed enough in deep learning, but does this mean that they solved the vanishing gradient problem? How are they managing to do this?
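
The memory half of this, at least, would not require solving vanishing gradients: it is plausibly the memory mirroring deepnotderp mentions above, which recomputes intermediate activations during the backward pass instead of keeping them all resident (cf. Chen et al., "Training Deep Nets with Sublinear Memory Cost"). A minimal sketch, assuming MXNet's documented MXNET_BACKWARD_DO_MIRROR switch:

```python
# A minimal sketch, assuming MXNet's documented MXNET_BACKWARD_DO_MIRROR
# switch: trade extra forward computation for memory by recomputing
# selected activations in the backward pass instead of storing them all.
import os
os.environ['MXNET_BACKWARD_DO_MIRROR'] = '1'  # must be set before import

import mxnet as mx

# A deliberately deep toy MLP standing in for the "1000 layers" claim.
net = mx.sym.Variable('data')
for i in range(1000):
    net = mx.sym.FullyConnected(net, num_hidden=64, name='fc%d' % i)
    net = mx.sym.Activation(net, act_type='relu', name='relu%d' % i)
net = mx.sym.SoftmaxOutput(net, name='softmax')

# Binding for training plans memory with mirroring taken into account,
# so activation memory can grow sublinearly with depth.
executor = net.simple_bind(mx.cpu(), data=(32, 64))
```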
mrdrozdov, over 8 years ago
Did not realize you could use MXNet declaratively (like Tensorflow/Theano) and imperatively (like Torch/Chainer). Can anyone speak more of their imperative usage of MXNet?
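
For anyone curious how the two styles look side by side, here is a minimal sketch using MXNet's NDArray (imperative) and Symbol (declarative) APIs; the shapes and values are arbitrary:

```python
import mxnet as mx

# Imperative style (like Torch/Chainer): NDArray ops execute eagerly.
a = mx.nd.ones((2, 3))
b = a * 2 + 1                      # computed immediately
print(b.asnumpy())

# Declarative style (like TensorFlow/Theano): Symbol ops only build a
# graph, which runs after binding concrete arrays to its inputs.
x = mx.sym.Variable('x')
y = x * 2 + 1                      # graph node; nothing computed yet
executor = y.bind(mx.cpu(), {'x': mx.nd.ones((2, 3))})
print(executor.forward()[0].asnumpy())
```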
turingbook, over 8 years ago
Li Mu, the core developer behind MXNet, recently joined Amazon.
partycoder, over 8 years ago
[offtopic] I think presentations with ascending bar charts are something of a cliché.
egeozcan, over 8 years ago
> Machine learning (...) is being employed in a range of computing tasks where programming explicit algorithms is infeasible.

I found this comment interesting. Is this really the summary of what machine learning is about?
blahi, over 8 years ago
MXNet is the only deep learning framework that has proper support for R. That's why I use it and it is pretty nice IMO.
gnipgnip, over 8 years ago
Can someone please spell out for us muggles what sets these frameworks (Theano, Tensorflow, Torch, CNTK, MXNet) apart? They all seem to be essentially doing the same thing underneath.