Dead Code Should Be Buried – Why I Didn't Contribute to NLTK

140 points | by Smerity | over 9 years ago

14 comments

stevenbird | over 9 years ago
NLTK has an active and growing developer community. We're grateful to Matthew Honnibal for permission to port his averaged perceptron tagger, and it's now included in NLTK 3.1.

Note that NLTK includes reference implementations for a range of NLP algorithms, supporting reproducibility and helping a diverse community to get into NLP. We provide interfaces for standard NLP tasks, and an easy way to switch from using pure Python implementations to using wrappers for external implementations such as the Stanford CoreNLP tools. We're adding "scaling up" sections to the NLTK book to show how this is done.

https://github.com/nltk/nltk | https://pypi.python.org/pypi/nltk | http://www.nltk.org/book_2ed/ch05.html#scaling-up
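The interface-plus-swappable-backend design this comment describes can be sketched in plain Python. The class and method names below are illustrative stand-ins, not NLTK's actual API:

```python
# Sketch of the pattern: caller code depends only on a common tagger
# interface, so a pure Python reference implementation can be swapped
# for a wrapper around an external tool without other changes.
class Tagger:
    def tag(self, tokens):
        """Return a list of (token, part-of-speech) pairs."""
        raise NotImplementedError

class PurePythonTagger(Tagger):
    """Toy reference implementation: alphabetic tokens are nouns."""
    def tag(self, tokens):
        return [(t, "NN" if t.isalpha() else "SYM") for t in tokens]

class ExternalToolTagger(Tagger):
    """Stand-in for a wrapper that would call out to an external tagger."""
    def tag(self, tokens):
        # A real wrapper would serialize the tokens, invoke the external
        # tool, and parse its output; here we just fake a uniform answer.
        return [(t, "NN") for t in tokens]

def pos_counts(tokens, tagger):
    """Caller code: written once against the Tagger interface."""
    tags = [tag for _, tag in tagger.tag(tokens)]
    return {t: tags.count(t) for t in set(tags)}

print(pos_counts(["dead", "code", "!"], PurePythonTagger()))
```

Switching backends then means changing only the `Tagger` instance passed in, which is the "scaling up" path the comment refers to.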
ben336 | over 9 years ago
I hate this genre of post that basically follows the line: "I went to <established project> and attempted to educate them. When they didn't listen I went and built something better. Now it's clear they should have listened to me, and you should all abandon their software."

Almost always the scope of the new project is much smaller, different, or much less mature than the project being bashed. Open source projects are not required to make changes to please any arbitrary user who wants to make changes, even if it's to bring technical improvements.

In NLTK's case, they have a whole book written around their project. Presumably significant changes to project structure and function would mean heavy documentation/writing work, and might not fit the goals of their project. Bashing them as a result just shows a complete lack of understanding of how/why people write and maintain software.
Radim | over 9 years ago
Very relatable post. Isn't NLTK primarily a teaching / demonstration tool though?

I just checked their website, and the claim that "NLTK is a leading platform for building Python programs to work with human language data... a suite of libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning" does sound a little odd. But I think everyone in the industry knows NLTK's place and purpose -- you practically cannot avoid finding out quickly. NLTK's scope is clearly too broad to be meaningfully cutting edge at any one thing.

New libraries and implementations will always have an advantage. It's easier to tout "simplicity and leanness" when you don't have to carry over all the baggage and backward compatibility accumulated over the years.

For that reason, an occasional "complexity reset" is expected, and if a library will not or cannot do it, another library will. Will spaCy's fate be different, 10 years down the road?
Osiris | over 9 years ago
Regarding dead (as in unused) code: I keep noticing the guys on my UI development team commenting out code and then committing it to Git. I remind them periodically that they can just delete the code; if they ever need it, they can use Git to pull up historical versions of the file for reference.
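The recover-from-history workflow described above can be sketched with a scratch repository (the file name, contents, and commit messages are all illustrative):

```shell
# Delete dead code outright, then pull it back out of git history
# only when it is actually needed again.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

echo 'function oldWidget() {}' > widget.js
git add widget.js
git -c user.name=demo -c user.email=demo@example.com commit -qm "add widget"

git rm -q widget.js
git -c user.name=demo -c user.email=demo@example.com commit -qm "delete dead code"

# The "deleted" code is still one command away:
git show HEAD~1:widget.js            # print the old contents
git checkout HEAD~1 -- widget.js     # restore it into the working tree
```

Unlike a commented-out block, the deleted version carries its own commit message and date, which documents why the code left the tree.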
desilinguist | over 9 years ago
As someone who did contribute to NLTK quite a bit: it was quite useful back in the day, especially when I had to teach NLP/CL to linguistics (non-CS) graduate students. I agree with Radim that NLTK has a purpose -- and it's not to implement the latest and greatest NLP algorithms. I'm glad NLTK exists, and although it is not what I use today, I'm pretty sure whatever I do use today (CoreNLP, gensim, etc.) will all be superseded by the next best thing a decade from now.
stevenbird | over 9 years ago
I've updated the NLTK issue tracker with information about how the model for NLTK's built-in POS tagger was trained: https://github.com/nltk/nltk/issues/1063#issuecomment-138005116

The second edition of the book will include a "scaling up" section in most chapters, which shows how to transition from NLTK's pure Python implementations to NLTK's wrappers for the Stanford tools.
z92 | over 9 years ago
I put all dead code in a file called "deadcode.c" and get done with it. If I need it again, I can always copy from there. Easier than searching through git history.
skrebbel | over 9 years ago
I like the gist of this post, but it feels somewhat incomplete: NLTK is Apache licensed and spaCy is a dual-licensed (AGPL or money) commercial product. It's a good idea and an honest business, and I hope he succeeds, but I think it would've been more honest if the article had reflected that.
elliptic | over 9 years ago
Can someone explain the following comments, for someone with some knowledge of ML but none of NLP? "First, it's really much better to use Averaged Perceptron, or some other method which can be trained in an error-driven way. You don't want to do batch learning. Batch learning makes it difficult to train from negative examples effectively, and this makes a very big difference to accuracy."

I thought that it was typical for suitably regularized batch methods to modestly outperform or at least match (in terms of accuracy) online methods, whose main advantage is their speed.
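One way to see what "error-driven" means in the quoted remark is a minimal averaged-perceptron sketch. The features, labels, and training data below are toy illustrations, not spaCy's or NLTK's actual tagger:

```python
# Averaged perceptron: weights change only on mistakes (error-driven),
# and the final model averages each weight over all update steps, which
# damps the noise of purely online learning.
from collections import defaultdict

class AveragedPerceptron:
    def __init__(self):
        self.weights = defaultdict(float)   # (feature, label) -> weight
        self.totals = defaultdict(float)    # running sums for averaging
        self.timestamps = defaultdict(int)  # step when a weight last changed
        self.t = 0                          # global step counter

    def score(self, features, label):
        return sum(self.weights[(f, label)] for f in features)

    def predict(self, features, labels):
        return max(labels, key=lambda y: self.score(features, y))

    def update(self, features, gold, guess):
        self.t += 1
        if guess == gold:          # error-driven: no change when correct
            return
        for f in features:
            for label, delta in ((gold, 1.0), (guess, -1.0)):
                key = (f, label)
                # Credit the old value for the steps it stayed constant.
                self.totals[key] += (self.t - self.timestamps[key]) * self.weights[key]
                self.timestamps[key] = self.t
                self.weights[key] += delta

    def average(self):
        # Replace each weight with its mean value over all steps.
        for key, w in self.weights.items():
            self.totals[key] += (self.t - self.timestamps[key]) * w
            self.weights[key] = self.totals[key] / max(self.t, 1)

labels = ["NOUN", "VERB"]
data = [({"word=code", "prev=dead"}, "NOUN"),
        ({"word=buried", "prev=be"}, "VERB")]
model = AveragedPerceptron()
for _ in range(5):
    for feats, gold in data:
        model.update(feats, gold, model.predict(feats, labels))
model.average()
print(model.predict({"word=buried", "prev=be"}, labels))
```

The averaging step is what answers the regularization half of the question: a plain perceptron is noisy, but the averaged variant behaves much like a regularized model while keeping the per-example, mistake-only updates.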
firebones | over 9 years ago
The theoretically "best" algorithm may not necessarily be the one that best fits a particular task or set of constraints. It is presumptuous of the author to assume he knows what's best for every user of the toolkit.

I suggest that the author, being so wise in the ways of NLP science, channel this outrage and write "NLTK: The Good Parts" to save the rest of the world from stumbling blindly in the dark wilderness of ignorance.
analognoise | over 9 years ago
So rather than jump in and start adding documentation you blast the developers, who are offering this stuff free and without warranty or implied fitness for any purpose?

You can contribute by adding documentation where you see it lacking, especially if you have domain specific knowledge that would help others.

Or you can blast the entire project, not help, and go write your own. The thing that bothers me is that if you know enough, and it's mostly a teaching tool (my understanding from other comments), you could greatly improve the situation for the next guy by providing your enlightened input on the subject in the form of documentation. So the whole damn community loses out on your hard-earned understanding.

Meanwhile, 10 years from now, your project will be replaced, and if NLTK is really a teaching tool, you won't even be a footnote (because teaching tools don't die unless a whole field dies).

This smacks of the kind of "bubble" Silicon Valley entitlement that I can't quite wrap my head around (I know, author isn't in SV, I just see this kind of crap coming from there).
snoitavla | over 9 years ago
NLTK will soon include a state-of-the-art, openly and "nicely" licensed implementation: https://github.com/nltk/nltk/issues/1110
retreatguru | over 9 years ago
New to NLP, we tried NLTK first for a toy project and it was very slow and inaccurate. Luckily we found spaCy, switched to it, and sped things up 10x with better accuracy; it was easier to use, too. Based on this experience I tend to agree with the author.
latenightcoding | over 9 years ago
NLTK = education

OpenNLP = production

I thought that was a known fact.