The Unreasonable Effectiveness of Deep Feature Extraction

324 points, posted by hiphipjorge over 6 years ago

13 comments

asavinov over 6 years ago
Deep feature extraction is important not only for image analysis but also in other areas where specialized tools might be useful, such as those listed below:

- https://github.com/Featuretools/featuretools - automated feature engineering, with a main focus on relational structures and deep feature synthesis

- https://github.com/blue-yonder/tsfresh - automatic extraction of relevant features from time series

- https://github.com/machinalis/featureforge - creating and testing machine learning features, with a scikit-learn compatible API

- https://github.com/asavinov/lambdo - feature engineering and machine learning, together at last! The workflow engine allows feature training and data wrangling tasks to be integrated with conventional ML

- https://github.com/xiaoganghan/awesome-feature-engineering - other resources related to feature engineering (video, audio, text)

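To give a flavor of the tools listed above, here is a minimal sketch with tsfresh on a made-up long-format time series; the column names and data are illustrative assumptions, not anything from the comment:

```python
import pandas as pd
from tsfresh import extract_features

# Made-up long-format data: two series ("a", "b"), each a column of values over time.
df = pd.DataFrame({
    "id":    ["a"] * 10 + ["b"] * 10,
    "time":  list(range(10)) * 2,
    "value": [i * 0.5 for i in range(10)] + [i ** 2 for i in range(10)],
})

# tsfresh computes a large battery of statistical features per series id
# (mean, autocorrelation, spectral coefficients, ...), one row per id.
features = extract_features(df, column_id="id", column_sort="time")
print(features.shape)
```
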
kieckerjan over 6 years ago
As the author acknowledges, we might be living in a window of opportunity where big data firms are giving away for free something that may yet turn out to be a big part of their future IP. Grab it while you can.

On a tangent, I really like the tone of voice in this article: wide-eyed, optimistic, and forward-looking, while at the same time knowledgeable and practical. (Thanks!)

bobosha over 6 years ago
This is very interesting and timely for my work. I had been struggling to train a MobileNet CNN for classification of human emotions ("in the wild") and couldn't get the model to converge. I tried reducing the multiclass problem to binary models, e.g. angry|not_angry, but couldn't get past the 60-70% accuracy range.

I switched to extracting features from an ImageNet-pretrained network, trained an XGBoost binary classifier, and boom... right out of the box I'm seeing ~88% accuracy.

The author's points about speed of training and flexibility are also a major plus for my work. Hope this helps others.

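A minimal sketch of the workflow bobosha describes, assuming Keras's bundled ImageNet-pretrained MobileNetV2 and the xgboost package; the random arrays are hypothetical stand-ins for a real emotion dataset:

```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from xgboost import XGBClassifier

# ImageNet-pretrained backbone used purely as a fixed feature extractor:
# include_top=False drops the classification head, pooling="avg" yields one
# feature vector per image.
backbone = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images), verbose=0)

# Hypothetical stand-in data: replace with real angry / not_angry images and labels.
train_images = np.random.rand(32, 224, 224, 3) * 255.0
train_labels = np.random.randint(0, 2, size=32)

train_features = extract_features(train_images)

# Gradient-boosted binary classifier trained on the frozen deep features.
clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(train_features, train_labels)
print(clf.predict(train_features[:5]))
```
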
fouc over 6 years ago
> But in the future, I think ML will look more like a tower of transfer learning. You'll have a sequence of models, each of which specializes the previous model, which was trained on a more general task with more data available.

He's almost describing a future where we might buy/license pre-trained models from Google/Facebook/etc. that are trained on huge datasets, and then extend them with more specific training from other sources of data in order to end up with a model suited to the problem being solved.

It also sounds like we can feed the model's learnings back into new models with new architectures as well, as we discover better approaches later.

stared over 6 years ago
A few caveats here:

- It works (that well) only for vision (for language it sort-of-works only at the word level: http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html)

- "Do Better ImageNet Models Transfer Better?" https://arxiv.org/abs/1805.08974

And if you want to play with transfer learning, here is a tutorial with a working notebook: https://deepsense.ai/keras-vs-pytorch-avp-transfer-learning/

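To make the "only at the word level" caveat concrete, here is a tiny sketch of word-level transfer with gensim; the embedding file path is a hypothetical placeholder for any pretrained word2vec-format vectors:

```python
from gensim.models import KeyedVectors

# Hypothetical path: any pretrained word2vec-format embedding file works here.
vectors = KeyedVectors.load_word2vec_format("pretrained_embeddings.bin", binary=True)

# Pretrained word vectors act as transferable word-level features: the classic
# king - man + woman ~ queen analogy discussed in the linked post.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```
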
mlucy over 6 years ago
Hi everyone! Author here. Let me know if you have any questions; this is one of my favorite subjects in the world to talk about.

jfries over 6 years ago
Very interesting article! It answered some questions I've had for a long time.

I'm curious how this works in practice. Is it always good enough to take the outputs of the next-to-last layer as features? When doing quick iterations, I assume the images in the data set have been run through the big net as a preparation step? And the inputs to the net you're training are the features? Does the new net always need only one layer?

What are some examples of where this worked well (besides the flowers mentioned in the article)?

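One common pattern that matches jfries's reading (run the images through the big net once as a preparation step, cache the penultimate-layer activations, then fit a single linear classifier on them) might look like the sketch below; ResNet50 and logistic regression are illustrative choices, not the article's exact setup:

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.linear_model import LogisticRegression

# One-time preparation step: the big pretrained net turns each image into a
# fixed-length feature vector (its pooled penultimate-layer activations).
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

# Hypothetical stand-in data; in practice these are your real images and labels.
images = np.random.rand(16, 224, 224, 3) * 255.0
labels = np.random.randint(0, 5, size=16)

features = extractor.predict(preprocess_input(images), verbose=0)  # shape (16, 2048)
np.save("train_features.npy", features)  # cache so quick iterations skip the big net

# The "new net" is just one layer: a plain linear (logistic regression) classifier.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.score(features, labels))
```
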
mikekchar over 6 years ago
It's hard to ask my question without sounding a bit naive :-) Back in the early nineties I did some work with convoluted neural nets, except that at that time we didn't call them "convoluted". They were just the neural nets that were not provably uninteresting :-) My biggest problem was that I didn't have enough hardware, so I put that kind of stuff on a shelf waiting for hardware to improve (which it did, but I never got back to that shelf).

What I find a bit strange is the excitement that's going on. I find a lot of these results pretty expected. Or at least this is what *I* and anybody I talked to at the time seemed to think would happen. Of course, the thing about science is that sometimes you have to do the boring work of seeing if it does, indeed, work like that. So while I've been glancing sidelong at the ML work going on, it's been mostly a checklist of "Oh cool. So it *does* work. I'm glad."

The excitement has really been catching me off guard, though. It's as if nobody else expected it to work like this. This in turn makes me wonder if I'm being stupidly naive. Normally I find that when somebody thinks "Oh, it was obvious," it's because they had an oversimplified view of it that just happened to superficially match reality. I suspect that's the case with me :-)

For those doing research in the area (and I know there are some people here), what have been the biggest discoveries/hurdles that we've overcome in the last 20 or 30 years? In retrospect, what were the biggest worries you had in terms of wondering whether it would work the way you thought it might? Going forward, what are the most obvious hurdles that, if they don't work out, might slow down or halt our progression?

al2o3cr over 6 years ago
Contrast this with a similar writeup on some interesting observations about solving ImageNet with a network that only sees small patches (the largest is 33px on a side):

https://medium.com/bethgelab/neural-networks-seem-to-follow-a-puzzlingly-simple-strategy-to-classify-images-f4229317261f

purplezooey over 6 years ago
The question to me is: can you do this with, e.g., a Random Forest too, or is it specific to NNs?

gdubs over 6 years ago
This is probably naive, but I'm imagining something like the US Library of Congress providing these models in the future. E.g., some federally funded program to procure / create enormous data sets / train.

CMCDragonkai over 6 years ago
I'm wondering how this compares to transfer learning applied to the same model. That is, compare deep feature extraction plus a linear model at the end vs. just transferring the weights to the same model and retraining it on your specific dataset.

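A rough sketch of the two setups being compared, in Keras terms; the backbone choice, head size, and learning rates are illustrative assumptions rather than anything from the article:

```python
from tensorflow.keras import Model, layers, optimizers
from tensorflow.keras.applications import MobileNetV2

def build(num_classes, trainable_backbone, lr):
    # Same ImageNet-pretrained backbone in both setups; only what is allowed
    # to keep training differs.
    base = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")
    base.trainable = trainable_backbone
    head = layers.Dense(num_classes, activation="softmax")(base.output)
    model = Model(base.input, head)
    model.compile(optimizer=optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Setup 1: deep feature extraction -- frozen backbone, only the linear head trains.
feature_extractor = build(num_classes=10, trainable_backbone=False, lr=1e-3)

# Setup 2: fine-tuning -- same pretrained weights, but the whole network keeps
# training on the new dataset, usually with a much smaller learning rate.
fine_tuned = build(num_classes=10, trainable_backbone=True, lr=1e-5)
```
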
zackmorris over 6 years ago
From the article:

> Where are things headed?
>
> There's a growing consensus that deep learning is going to be a centralizing technology rather than a decentralizing one. We seem to be headed toward a world where the only people with enough data and compute to train truly state-of-the-art networks are a handful of large tech companies.

This is terrifying, but it's the same conclusion that I've come to.

I'm starting to feel more and more dread that this isn't how the future was supposed to be. I used to be so passionate about technology, especially about AI as the last solution in computer science.

But these days, the most likely scenario I see for myself is moving out into the desert like Obi-Wan Kenobi. I'm just so weary. So unbelievably weary, day by day, in ever increasing ways.