
AllenNLP – An open-source NLP research library, built on PyTorch

194 points by varunagrawal, over 7 years ago

6 comments

TekMol, over 7 years ago
Wow, is this really state of the art?

    Joe did not buy a car today.
    He was in buying mood.
    But all cars were too expensive.
    Why didn't Joe buy a car?
    Answer: buying mood

I think I have seen similar systems for decades now. I thought we would be further along by now.

I have tried for 10 or 20 minutes now, but I can't find any evidence that it has much sense of syntax:

    Paul gives a coin to Joe.
    Who received a coin?
    Answer: Paul

All it seems to do is extract candidates for "who", "what", "where", etc. So it seems to figure out correctly that "Paul" is a potential answer for "Who".

No matter how I rephrase the "Who" question, I always get "Paul" as the answer. "Who? Paul!", "Who is a martian? Paul!", "Who won the summer olympics? Paul!", "Who got a coin from the other guy? Paul!"

Same for "what" questions:

    Gold can not be carried in a bag.
    Silver can.
    What can be carried in a bag?
    Answer: Gold
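The failure mode described above — returning a type-matched candidate regardless of the question's syntax — can be illustrated with a toy baseline. This is a sketch of the commenter's hypothesis, not AllenNLP's actual model: it answers any "who" question with the first capitalized token in the passage, so every rephrasing of the question yields the same name.

```python
# Toy sketch of the hypothesized behavior: a "who" question is answered
# with the first name-like (capitalized) token in the passage, with no
# regard for the sentence's syntax or the question's meaning.

def naive_answer(passage: str, question: str):
    words = passage.replace(".", " ").split()
    if question.lower().startswith("who"):
        names = [w for w in words if w[0].isupper()]
        return names[0] if names else None
    return None

passage = "Paul gives a coin to Joe."
print(naive_answer(passage, "Who received a coin?"))  # Paul
print(naive_answer(passage, "Who is a martian?"))     # Paul
```

A baseline this crude reproduces the "Who? Paul!" behavior exactly, which is why probing with syntax-sensitive questions (giver vs. receiver) is a good test of whether a model does more than candidate extraction.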
mamp, over 7 years ago
This is very brittle: it works really well on the pre-canned examples, but the vocabulary seems very tightly linked. It doesn't handle something as simple as:

    'the patient had no pain but did have nausea'

It doesn't yield anything helpful on semantic role labeling and didn't even parse on machine comprehension. If I vary it to ask 'did the patient have pain?' the answer is 'nausea'.

CoreNLP provides a much more useful analysis of the phrase structure and dependencies.
sanxiyn, over 7 years ago
In "Adversarial Examples for Evaluating Reading Comprehension Systems" (https://arxiv.org/abs/1707.07328), it was found that adding a single distracting sentence can lower the F1 score of BiDAF (which is used in the demo here) from 75.5% to 34.3% on SQuAD. In comparison, human performance goes from 92.6% to 89.2%.
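For context, the F1 score quoted here is SQuAD's token-overlap measure between the predicted answer span and the gold answer. A minimal sketch (omitting the official script's punctuation and article normalization; the example spans are hypothetical) shows how a model that latches onto a distractor span scores zero:

```python
from collections import Counter

def squad_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted span and a gold answer,
    as in SQuAD-style evaluation (simplified: no normalization)."""
    pred_toks = prediction.lower().split()
    gold_toks = gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(squad_f1("Denver Broncos", "Denver Broncos"))  # 1.0: exact match
print(squad_f1("Jeff Dean", "Denver Broncos"))       # 0.0: distracted answer
```

Because F1 is averaged over all questions, a model that is fooled by the distractor on most examples sees its aggregate score collapse, while humans, who mostly ignore the distractor, lose only a few points.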
vbuwivbiu, over 7 years ago
"the squid was walked by the woman"

"what is the fifth word in that sentence?"

Answer: squid
strin, over 7 years ago
We need more demos of AI models: there is what people claim their model does, and there is what the model actually does.
wyldfire, over 7 years ago
How does this compare with spaCy?