
A better way to build ML: Why you should be using Active Learning

136 points · by razcle · over 4 years ago

13 comments

natch · over 4 years ago

It's hard to find articles like this that give a glimpse into what is used by larger shops doing ML. I take this one with a grain of salt since the source is a vendor, but it is still generous with detail and even mentions some alternative solutions for the cases they might fit, which is really appreciated.

The pros working in big shops who write these tend to overlook the tiny use cases, such as apps that recognize a cat coming through a cat door (as opposed to a raccoon), which can get by with minuscule training.

There's a lot of discussion of "big data", but small data is amazingly powerful too. I wish there were more bridging of these two worlds: tools that deal with the needs of small data, without the assumption that training a model takes days or months, and, on the other side, a big data world that shares more insights about how it manages its data for the big cases. There is a ton of info out there, but what I find lacking is info about how labeling and tagging are managed at large scale (I'm interested in big, small, and medium). Maybe I'm just missing something. This article gave some good clues, thanks!
realradicalwash · over 4 years ago

Nice to see some active learning around here. To add a data point from a less successful story:

In one of our research projects, we used AL to improve part-of-speech prediction, inspired by work by Rehbein and Ruppenhofer, e.g. https://www.aclweb.org/anthology/P17-1107/

Our database was a corpus of Scientific English from the 17th century to the present, and for our data and situation we found that choosing the right tool/model and having the right training data were the most important things. Once that was in place, active learning did not, unfortunately, add that much. For different tools/settings, we got about +/-0.2% in accuracy for checking 200k tokens and only correcting 400 of them.

Maybe one problem was that AL was only triggered when a majority vote was inconclusive. Also, we used it on top of individualised gold-standard (gs) training data. I guess things can look different if you don't have a gs to start with. And if you have better computational resources: our oracles spent quite some time waiting, which is why we even reorganised the original design to process batches of corrections.

As so often, those null results were hard to publish :|

Either way, I thought I'd share our experiences. Your work sounds really cool, best of luck!
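A minimal sketch of the vote-based trigger described above, for readers unfamiliar with the setup: a committee of taggers votes on each token, and only tokens with an inconclusive majority vote go to a human oracle. The committee, tags, and threshold here are hypothetical, not the project's actual pipeline.

```python
# Hypothetical sketch: route a token to the oracle only when the
# tagger committee's majority vote is inconclusive.
from collections import Counter

def inconclusive(votes, min_margin=2):
    """A vote is inconclusive when the top tag doesn't beat the
    runner-up by at least `min_margin` votes."""
    counts = Counter(votes).most_common(2)
    if len(counts) < 2:
        return False  # unanimous committee
    return counts[0][1] - counts[1][1] < min_margin

# Each row: tags assigned to one token by a 5-tagger committee.
committee_votes = [
    ["NN", "NN", "NN", "NN", "NN"],  # unanimous -> skip
    ["NN", "VB", "NN", "VB", "JJ"],  # split -> send to oracle
    ["VB", "VB", "VB", "NN", "VB"],  # clear majority -> skip
]

to_oracle = [i for i, v in enumerate(committee_votes) if inconclusive(v)]
print(to_oracle)  # [1]
```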
porphyra · over 4 years ago

A more detailed and technical writeup on the benefits of active learning: You should try active learning - https://medium.com/aquarium-learning/you-should-try-active-learning-37a86aab1afb

Also, Aquarium Learning is just awesome. Super slick.
rocauc · over 4 years ago

Nice read.

Can you shed some light on what you think are the most valuable methods for identifying high-entropy examples for the model to learn faster? I'm familiar with Pool-Based Sampling, Stream-Based Selective Sampling, and Membership Query Synthesis [1], but less certain which techniques are most useful in NLP.

[1] https://blog.roboflow.com/what-is-active-learning/
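As a concrete illustration of the first of those, here is a minimal, hypothetical sketch of pool-based sampling by predictive entropy; the model probabilities are stand-ins, not anything from the article:

```python
# Hypothetical sketch of pool-based uncertainty sampling: score every
# unlabeled example by the entropy of the model's predicted class
# distribution and pick the top-k most uncertain for labeling.
import numpy as np

def predictive_entropy(probs):
    """Entropy of each row of class probabilities, in nats."""
    probs = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -(probs * np.log(probs)).sum(axis=1)

def select_most_uncertain(probs, k):
    """Indices of the k highest-entropy examples in the pool."""
    return np.argsort(-predictive_entropy(probs))[:k]

# Fake predictions over a pool of 4 examples, 3 classes.
pool_probs = np.array([
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [0.34, 0.33, 0.33],  # maximally unsure -> high entropy
    [0.60, 0.30, 0.10],
    [0.50, 0.49, 0.01],
])
print(select_most_uncertain(pool_probs, k=2))  # [1 2]
```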
anonymouse008 · over 4 years ago

Ha! This is amazing -- we did a similar process for an EEG research project, and it was stellar (working memory and learning curves)! Until now, I didn't have the right words to articulate what we did, so thank you for the incantation!
nailer · over 4 years ago

Mike from Humanloop here - if you're interested in active learning, we'll be around on this thread. Also, we're looking for fullstack SW engineers and ML engineers - https://news.ycombinator.com/item?id=25992607
andy99 · over 4 years ago

I have a suggestion about the first plot you show in the writeup. From what I can see, it is based on a finite pool of data, and so it undersells active learning: performance shoots up as AL finds the interesting points, but then the curve flattens and becomes less steep than the random curve as the "boring" points get added. It would be nice to see the same curve for a bigger training pool, where AL could reach a target accuracy without running out of valuable training points. I suspect that would make the difference between the two curves much more stark. As it is, it just looks like AL does better at very low data volumes, but to get to high accuracy you need to use the whole dataset anyway, so it's a wash between AL and random.
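A toy simulation of the effect being described (entirely synthetic data, not the article's plot) can make this concrete: on a finite pool, uncertainty sampling races ahead early, then both strategies converge once only uninformative points remain.

```python
# Synthetic AL-vs-random learning curves on a finite pool, illustrating
# why the two curves converge: once AL has consumed the informative
# points near the decision boundary, only "boring" points remain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=2000) > 0).astype(int)
X_pool, y_pool = X[:1000], y[:1000]
X_test, y_test = X[1000:], y[1000:]

def learning_curve(strategy, seed_size=20, step=20, rounds=20):
    labeled = list(range(seed_size))  # seed set, assumed to cover both classes
    accs = []
    for _ in range(rounds):
        clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
        accs.append(clf.score(X_test, y_test))
        labeled_set = set(labeled)
        unlabeled = [i for i in range(len(X_pool)) if i not in labeled_set]
        if strategy == "random":
            picks = rng.choice(unlabeled, size=step, replace=False)
        else:  # uncertainty sampling: probabilities closest to 0.5
            p = clf.predict_proba(X_pool[unlabeled])[:, 1]
            picks = np.array(unlabeled)[np.argsort(np.abs(p - 0.5))[:step]]
        labeled.extend(int(i) for i in picks)
    return accs

print("AL:    ", learning_curve("uncertainty"))
print("random:", learning_curve("random"))
```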
e2e4 · over 4 years ago

Startups especially benefit from Active Learning: https://www.slideshare.net/nrubens/1-of-40-recommender-systems-and-active-learning-for-startups

A slightly deeper intro: https://www.slideshare.net/nrubens/active-learning-in-recommender-systems

p.s. I am the author of the above presentations; great to see Active Learning (AL) finally getting proper attention (I've been working in the AL area for 10+ years).
woeirua · over 4 years ago

I'm not sure I really understand the advantage of AL in this context. Sure, you get better performance earlier, but if you want the *best* performance you still appear to have to train with the same amount of data. Given that the training -> example identification -> annotation -> training loop is going to be much slower than just continuing to annotate data and then running all the data at once (for a variety of reasons), I think an honest comparison of total time and total monetary cost would probably come out with AL being more expensive overall... Am I missing something here?
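An outline of the loop being questioned, to make its cost structure explicit: each round pays for a retrain plus a serial human-annotation pass. This is a hypothetical sketch (the callback names and list-based pool are assumptions, not anyone's actual system).

```python
# Hypothetical AL loop skeleton. `pool` is a list of unlabeled examples;
# train/select/annotate are caller-supplied callbacks:
#   train(labeled) -> model
#   select(model, pool, k) -> k examples (e.g. uncertainty sampling)
#   annotate(examples) -> list of (x, y) pairs (the human step)
def active_learning_loop(pool, budget, batch_size, train, select, annotate):
    # Seed round: no model exists yet, so the first batch is the pool head.
    seed = pool[:batch_size]
    del pool[:batch_size]
    labeled = annotate(seed)
    budget -= len(seed)
    model = train(labeled)
    while budget > 0 and pool:
        batch = select(model, pool, batch_size)
        for x in batch:
            pool.remove(x)
        labeled += annotate(batch)  # annotation cost, paid every round
        budget -= len(batch)
        model = train(labeled)      # retraining cost, paid every round
    return model
```

The serial dependency is visible: `select` needs the latest `model`, and `annotate` blocks each round, which is exactly why the wall-clock comparison against labeling everything up front is not obviously in AL's favor.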
dexter89_kp3 · over 4 years ago

What are your thoughts on synthetic data vs. active learning?

For some domains, with privacy concerns or rarity of objects, getting labelled data for deep learning is challenging.

There is decent research on sim2real, i.e. transferring models trained on synthetic data to real-world applications: https://arxiv.org/pdf/1703.06907.pdf
nicoburns · over 4 years ago
Active learning sounds like a step closer to how humans and other animals learn: iteratively with continuous feedback.
andrewmutz · over 4 years ago

A spam filter is an interesting choice of motivating example, since usually it is your users labeling the data, rather than something that happens during the R&D process. You *could* try to use active learning, but I'm not sure the users would like that product experience.
lwhsiao · over 4 years ago

Hi Mike,

Can you talk about the tradeoffs or relationship between active learning and weak supervision from your point of view?