
Show HN: Applying the Unix philosophy to neural networks

251 points by cloudkj about 6 years ago

14 comments

HALtheWise about 6 years ago

I don't see any claims about performance, but I would be very surprised if it was anything better than abysmal. In a modern neural network pipeline, just sending data to CPU memory is treated as a ridiculously expensive operation, let alone serializing to a delimited text string.

Come to think of it, this is also a problem with the Unix philosophy in general: it trades performance (user productivity) for flexibility (developer productivity), and that trade-off isn't always worth it. I would love to see a low-overhead version of this that can keep data as packed arrays on a GPU during intermediate steps, but I'm not sure it's possible with the Unix interfaces available today.

Maybe there's a use case with very small networks and CPU evaluation, but so much of the power of modern neural networks comes from scale and performance that I'm skeptical that niche is very large.
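To make the serialization cost concrete, here is a minimal timing sketch (Python with NumPy; the sizes are arbitrary, and the actual `layer` tool is Chicken Scheme, so this only illustrates the trade-off, not the project itself). It compares one dense layer on packed in-memory arrays against the same layer with a float-to-text round trip in between, which is what a delimited pipe between filters forces:

```python
# Illustrative sketch, not from the project: measures the cost of the
# float -> text -> float round trip a delimited pipe imposes per layer.
import time
import numpy as np

x = np.random.rand(1, 4096).astype(np.float32)
w = np.random.rand(4096, 4096).astype(np.float32)

# In-memory: one matmul on packed arrays.
t0 = time.perf_counter()
y = x @ w
t_mem = time.perf_counter() - t0

# Via text: serialize activations to a delimited string, parse them back,
# then do the same matmul.
t0 = time.perf_counter()
text = " ".join(map(str, x.ravel()))                      # what a filter writes
x2 = np.array(text.split(), dtype=np.float32).reshape(1, 4096)
y2 = x2 @ w
t_text = time.perf_counter() - t0

print(f"in-memory: {t_mem * 1e3:.2f} ms  via text: {t_text * 1e3:.2f} ms")
```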
peter_d_sherman about 6 years ago

Excerpt: "layer is a program for doing neural network inference the Unix way. Many modern neural network operations can be represented as sequential, unidirectional streams of data processed by pipelines of filters. The computations at each layer in these neural networks are equivalent to an invocation of the layer program, and multiple invocations can be chained together to represent the entirety of such networks."

Another poster commented that performance might not be that great, but I don't care about performance; I care about the essence of the idea, and the essence of this idea is brilliant, absolutely brilliant!

That said, I have one minor question: how would backpropagation apply to this apparently one-way model?

I'm sure there's a way to do it... maybe there should be a higher-level command which can run each layer in turn, and then backpropagate to the previous layer, if/when there is a need to do so.

But, all in all, a brilliant, brilliant idea!
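The model the quoted README describes is easiest to see in code. The following is a sketch in Python, not the project's actual Chicken Scheme implementation or its real command-line interface: one invocation is one layer, reading whitespace-delimited activation vectors on stdin and writing the transformed vectors on stdout.

```python
#!/usr/bin/env python3
# dense_relu.py (hypothetical name): one layer as a Unix filter, in the
# spirit of the quoted README. Reads one whitespace-delimited vector per
# line from stdin, applies dense + ReLU, writes the result to stdout.
import sys
import numpy as np

def main() -> None:
    weights = np.load(sys.argv[1])    # (n_in, n_out), path given by caller
    bias = np.load(sys.argv[2])       # (n_out,)
    for line in sys.stdin:
        if not line.strip():
            continue
        x = np.array(line.split(), dtype=np.float32)
        y = np.maximum(x @ weights + bias, 0.0)    # dense layer + ReLU
        print(" ".join(f"{v:.6g}" for v in y))

if __name__ == "__main__":
    main()
```

Chained in a shell, each process is one layer (file names hypothetical): `cat input.txt | python dense_relu.py w1.npy b1.npy | python dense_relu.py w2.npy b2.npy`. The forward-only shape of that pipeline is exactly what raises the backpropagation question.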
mempko about 6 years ago

What's wonderful about this concept (and the Unix concept in general) is the flexibility it gives you. You can, for example, pipe it over the network and distribute the inference across machines. You can tee the output and save each layer's output to a file. The possibilities are endless here.
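In shell terms, the two tricks mempko mentions are just splicing `tee(1)` or `nc(1)` between layer processes, since each filter only sees a byte stream. For clarity, here is a minimal Python equivalent of the tee step (the file name and argument handling are illustrative):

```python
# tee_layer.py (hypothetical): pass-through filter that saves a copy of
# one layer's activations to a file while forwarding them unchanged, so
# it can be spliced between any two layers in the pipeline.
import sys

def main() -> None:
    log_path = sys.argv[1]            # e.g. layer1_out.txt (illustrative)
    with open(log_path, "w") as log:
        for line in sys.stdin:
            log.write(line)           # keep a copy of this layer's output
            sys.stdout.write(line)    # forward it to the next layer
            sys.stdout.flush()        # don't let buffering stall the pipe

if __name__ == "__main__":
    main()
```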
skvj about 6 years ago
Great concept. Would like to see more of this idea applied to neural network processing and configuration in general (which in my experience can sometimes be a tedious, hard-coded affair).
craftinator about 6 years ago

I've been thinking about something like this for a long time, but could never quite wrap my head around a good way to do it (especially since I kept getting stuck on making it full-featured, i.e. more than inference), so thank you for putting it together! I love the concept, and I'll be playing with this all day!
xrd about 6 years ago
This might not be a great way to build neural networks (as other commenters have said regarding performance). But, it could be a great way to learn about neural networks. I always find the command line a great way to understand a pipeline of information.
luminati about 6 years ago

Great idea, but with an equally great caveat: it's just for (forward) inference. Unix pipelines are fundamentally one-way, and this approach won't work for back propagation.
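The one-way limitation, and the "higher-level command" peter_d_sherman floats above, can be made concrete. Backpropagation needs two things plain pipes don't provide: each layer's saved input, and a reverse channel for gradients. A minimal sketch follows (Python/NumPy; the dense+ReLU layers and MSE loss are assumptions for illustration, none of this is from the project):

```python
# Sketch (not from the project): what a higher-level driver would need
# beyond forward pipes. Each "layer" keeps its input so a second,
# reverse pass can carry gradients back -- the channel plain pipes lack.
import numpy as np

class DenseRelu:
    def __init__(self, w):                  # w: (n_in, n_out)
        self.w = w

    def forward(self, x):                   # this part maps onto a pipe filter
        self.x = x                          # saved state backprop requires
        self.z = x @ self.w
        return np.maximum(self.z, 0.0)

    def backward(self, grad_out, lr=0.01):  # the missing reverse direction
        grad_z = grad_out * (self.z > 0)            # through the ReLU
        grad_x = grad_z @ self.w.T                  # gradient for previous layer
        self.w -= lr * np.outer(self.x, grad_z)     # weight update
        return grad_x

layers = [DenseRelu(np.random.randn(4, 4) * 0.1) for _ in range(3)]
x, target = np.random.randn(4), np.random.randn(4)

h = x
for layer in layers:                        # forward pass: pipeline-shaped
    h = layer.forward(h)

grad = 2 * (h - target)                     # d(MSE)/d(output)
for layer in reversed(layers):              # backward pass: needs a return channel
    grad = layer.backward(grad)
```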
Rerarom about 6 years ago
Sounds like the kind of thing John Carmack would enjoy hacking on.
Donald about 6 years ago

See also Trevor Darrell's group's work on neural module networks: https://bair.berkeley.edu/blog/2017/06/20/learning-to-reason-with-neural-module-networks/
mark_l_watson about 6 years ago

Wonderful idea, and the Chicken Scheme implementation looks nice too.

I wrote some Racket Scheme code that reads Keras-trained models and does inferencing, but this is much better: I used Racket's native array/linear algebra support, while this implementation uses BLAS, which should be a lot faster.
dekhn about 6 years ago

https://www.jwz.org/blog/2019/01/we-are-now-closer-to-the-y2038-bug-than-the-y2k-bug/#comment-194745
cr0sh about 6 years ago

Well - I will say I like the general concept. I just wish it wasn't implemented in Scheme (only because I am not familiar with the language; looking at the source, though, I'm not sure I want to go there - it looks like a mashup of Pascal, Lisp, and RPN).

It seems that today - and maybe I am wrong - data science and deep learning in general have pretty much "blessed" Python and C++ as the languages for such tasks. Had this been implemented in either, it might receive a wider audience.

But maybe the concept itself is more important than the implementation? I can see that as possibly being the case.

Great job in creating it; the end tool by itself looks fun and promising!
andbberger about 6 years ago

Reading the title, I can't help but think of 'The Unix-Haters Handbook' and groan. Why would you want to apply the Unix philosophy to nets??
nihil75 about 6 years ago
I love you