
Word2Vec received 'strong reject' four times at ICLR 2013

369 points | by georgehill | over 1 year ago

22 comments

magnio | over 1 year ago

There are more details in the recent FB post of Tomas Mikolov (author of word2vec): https://www.facebook.com/share/p/kXYaYaRvRCr5K2Ze

A hilarious and poignant point I see is how experts make mistakes too. Quote:

> I also received a lot of comments on the word analogies - from "I knew that too but forgot to publish it!" (Geoff Hinton, I believe you :) happens to everyone, and anyways I think everybody knows what the origin of Distributed Representations is) to "it's a total hack and I'm sure it doesn't work!" (random guys who didn't bother to read the papers and try it out themselves - including Ian Goodfellow raging about it on Twitter).
nybsjytm | over 1 year ago

I think the reviewers did a good job; the reviews are pretty reasonable. Reviews are supposed to be about the quality of a paper, not how influential it might be in the future! And not all influential papers are actually very good.
picometer | over 1 year ago

In hindsight, reviewer f5bf's comment is fascinating:

> It would be interesting if the authors could say something about how these models deal with intransitive semantic similarities, e.g., with the similarities between 'river', 'bank', and 'bailout'. People like Tversky have advocated against the use of semantic-space models like NLMs because they cannot appropriately model intransitive similarities.

What I've noticed in the latest models (GPT, image diffusion models, etc.) is an ability to play with words when there's a double meaning. This struck me as something that used to be very human, but is now in the toolbox of generative models. (Most of which, I assume, use something akin to word2vec for deriving embedding vectors from prompts.)

Is the word2vec ambiguity contributing to the wordplay ability? I don't know, but it points to a "feature vs. bug" situation where such an ambiguity is a feature for creative purposes, but a bug if you want to model semantic space as a strict vector space.

My interpretation here is that the word/prompt embeddings in current models are so huge that they're overloaded with redundant dimensions, such that they wouldn't satisfy any mathematical formalism (e.g., of well-behaved vector spaces) at all.
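The intransitivity objection can be made concrete with a toy sketch (hypothetical 50-d random vectors, not real embeddings): if each word gets exactly one static vector, angular distance between vectors is a metric, so a word that sits close to two others forces those two to be at least somewhat close to each other — the model cannot represent "bank ~ river" and "bank ~ bailout" while keeping "river" and "bailout" arbitrarily far apart.

```python
import numpy as np

def angle(u, v):
    """Angular distance between two vectors, in radians."""
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

rng = np.random.default_rng(0)
bank = rng.standard_normal(50)
# Construct "river" and "bailout" so each is similar to "bank",
# with independent noise pushing them apart from each other.
river = 0.9 * bank + 0.5 * rng.standard_normal(50)
bailout = 0.9 * bank + 0.5 * rng.standard_normal(50)

# Angular distance obeys the triangle inequality, so similarity
# leaks "transitively": d(river, bailout) <= d(river, bank) + d(bank, bailout).
bound = angle(river, bank) + angle(bank, bailout)
print(angle(river, bailout) <= bound)  # True, by the triangle inequality
```

No noise scale or dimension can break this bound; that is the geometric heart of Tversky's objection to one-vector-per-word models.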
imjonse | over 1 year ago

It seems they rejected initial versions of the paper, since there were later updates and clarifications based on the reviews. So it seems this was beneficial in the end, and how the review process should work? Especially since this was groundbreaking work, it makes sense to put more effort into explaining why it works instead of relying too much on good benchmark results.
cs702 | over 1 year ago

Surely those seemingly smart anonymous reviewers now feel pretty dumb in hindsight.

Peer review does *not* work for new ideas, because *no one ever* has the time or bandwidth to spend hours upon hours upon hours trying to understand new things.
wzdd | over 1 year ago

There are indeed four entries saying "strong reject", but they all appear to be from the same reviewer, at the same time, and saying the same thing. Isn't this just the one rejection?

Also, why is only that reviewer's score visible?
pmags | over 1 year ago

I'm curious how many commenters here who are making strong statements about the worth (or not) of peer review have actually participated in it both as author AND reviewer? Or even as an editor who is faced with the challenge of integrating and synthesizing multiple reviews into a recommendation?

There are many venues available to share your research or ideas absent formal peer review, arXiv/bioRxiv being among the most popular. If you reject the idea of peer review itself, it seems like there are plenty of alternatives.
funnystories | over 1 year ago

When I was in college, I wrote a simple system for a class that made corrections to text based on some heuristics.

Then the teacher of the class suggested I write a paper describing the system, with some results etc., for a local conference during the summer.

I wrote it with his support, but it got rejected right away because of poor grammar or something similar. The conference was in Brazil, but required the papers to be in English. I was just a student and thought that indeed my English was pretty bad. The teacher told me to at least send an email to the reviewers to get some feedback, and maybe resubmit with corrections.

I asked specifically which paragraphs were confusing. They sent me some snippets of phrases that were obviously wrong. Yes, they were the "before" examples of the "before/after" pairs my system applied corrections to. I tried to explain that the grammar was supposed to be wrong, but they just replied with "please fix your English mistakes and resubmit".

I tried 2 or 3 more times but just gave up.
mxwsn | over 1 year ago

Flagged for misleading title: the four strong rejects are from a single reviewer. It's listed four times for some unknown reason, likely an OpenReview quirk. The actual status described by the page is: 2 unknown (with accompanying long text), 1 weak reject, and 1 strong reject.
Hayvok | over 1 year ago

The review thread (start at the bottom and work your way up) reads like a Show HN thread that went negative.

The paper initially received some questions/negative feedback, so the authors updated and tweaked the reviewers a bit:

> "We welcome discussion... The main contribution (that seems to have been missed by some of the reviews) is that we can use very shallow models to compute good vector representation of words."

The response to the authors' update:

> Review: The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form.
> Quality rating: Strong reject
> Confidence: Reviewer is knowledgeable
tbruckner | over 1 year ago

This will keep happening because peer review itself, ironically, has no real feedback mechanism.
zaptheimpaler | over 1 year ago

We already have a better mechanism for publishing and peer review... it's called the internet. Literally the comments section of Reddit would work better. Reviews would be tied to a pseudonymous account instead of being anonymous, allowing people to judge the quality of reviewers as well. Hacker News would work just as well. It's also nearly free to set up a forum, and frictionless to use compared to paying academic journals $100k for them to sell your own labour back to you. Cost and ease of use also mean it's more broadly accessible, and hence more widely reviewed.
jongjong | over 1 year ago

I've found that PhD-level academics are usually wrong about practical matters. It's almost as if having a PhD is itself proof that they are not good at identifying optimal solutions to reach practical goals. It may also show that they are overly concerned with superficial status markers instead of results.

I think it's also the secret to why they can go so deep on certain abstract topics. Their minds lack a garbage collector: they can go in literally any direction, accumulate any amount of knowledge, and memorize it all easily even if it has zero utility value.
Der_Einzige | over 1 year ago

Makes me not feel bad about my own rejections when I see stuff like this, or Yann LeCun reacting poorly on Twitter to his own papers being rejected.
PaulHoule | over 1 year ago

I'd still reject it (speaking as someone who has developed products based on word vectors, document vectors, dimensional reduction, etc. before y'all thought it was cool...)

I quit a job because they were insisting on using Word2Vec in an application where it would have doomed the project to failure. The basic problem is that in a real-life application many of the most important words are *not in the dictionary*, and if you throw out words that are not in the dictionary you *choose* to fail.

Let a junk paper like that through and the real danger is that you will get thousands of other junk papers following it up.

For instance, take a look at the illustrations on this page

https://nlp.stanford.edu/projects/glove/

particularly under "2. Linear Substructures". They make it look like a miracle that they project down from a 50-dimensional space to 2 and get a nice pattern of cities and zip codes, for instance. The thing is, you could have a random set of 20 points in a 50-d space and, assuming there are no degeneracies, map them to any 20 points you want in the 2-d space with an appropriately chosen projection matrix. Show me a graph like that with 200 points and I might be impressed. (I'd say those graphs on that server damage the Stanford brand for me about as much as SBF and Marc Tessier-Lavigne.)

(It's a constant theme in the dimensional reduction literature that people forget that random matrices often work pretty well, and fail to consider how much gain they are getting over the random matrix...)

BERT, FastText and the like were revolutionary for a few reasons, but I saw the use of subword tokens as absolutely critical because, for once, you could capture a medical note and not *erase the patient's name!*

The various conventions of computer science literature prevented explorations that would have put Word2Vec in its place. For instance, it's an obvious idea that you should be able to make a classifier that, given a word vector, can predict "is this a color word?" or "is this a verb?", but if you actually try it, it fails in a particularly maddening way. With a tiny training/eval set (say 10 words) you might convince yourself it is working, but the more data you train on, the more you realize the words are scattered mostly randomly, and even though those "linear structures" exist in a statistical sense, they aren't well defined and not particularly useful. It's the kind of thing that is so weird, inconclusive, and fuzzy that I'm not aware of anyone writing a paper about it... because you're not going to draw any conclusions out of it except that you found a Jupiter-sized hairball.

For all the excitement people had over Word2Vec, you didn't see an explosion of interest in vector search engines because... Word2Vec sucked; applying it to documents didn't improve the search engine very much. Part of it is that adding sensitivity to synonyms can hurt performance, because many possible synonyms turn out to be red herrings. BERT, on the other hand, is context sensitive and to some extent knows the difference between "my pet jaguar" and "the jaguar dealership in your town", and that really does help find the relevant documents and hide the irrelevant ones.
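The projection-matrix claim above is easy to verify numerically (a sketch with synthetic random data, not actual GloVe vectors): 20 generic points in 50 dimensions are linearly independent, so the system X W = Y is underdetermined and a pseudoinverse solve finds a 50-by-2 matrix sending them *exactly* onto any 2-d layout you pick.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50
X = rng.standard_normal((n, d))  # 20 random points in 50-d space
Y = rng.standard_normal((n, 2))  # an arbitrary target layout in 2-d

# With n < d and X of full row rank, X @ W = Y has exact solutions;
# the Moore-Penrose pseudoinverse picks the minimum-norm one.
W = np.linalg.pinv(X) @ Y

residual = np.abs(X @ W - Y).max()
print(residual < 1e-10)  # True: the "projection" hits every target exactly
```

This is why a pretty 2-d scatter of ~20 words proves nothing by itself: with 200 points the system becomes overdetermined and an exact fit is no longer guaranteed, which is the author's point about demanding larger plots.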
nsagent | over 1 year ago

Previous discussion: https://news.ycombinator.com/item?id=38654038
rahmaniacc | over 1 year ago

This was hilarious!

> Many very broad and general statements are made without any citations to back them up.

- Please be more specific.

> The number of self-citations seems somewhat excessive.

- We added more citations.
ashvardanian | over 1 year ago

That's the most inspiring thing I've learned this year.
raverbashing | over 1 year ago

And this is why the biggest evolution of AI has happened in companies, not in academic circles.

Because there's too much nitpicking and grasping at straws among people who can't see novelty even when it's dancing in front of them.
tinyhouse | over 1 year ago

I agree that GloVe was a fraud.
m3kw9 | over 1 year ago

That didn't age well.
lupusreal | over 1 year ago

Boiled down to its core essence, science is about testing ideas to see if they work. Peer review is not part of this process: reviewers rarely if ever attempt replication during peer review, and so they inevitably end up rejecting new ideas without even trying them. This isn't science.