AllenNLP – An open-source NLP research library, built on PyTorch

194 points by varunagrawal over 7 years ago

6 comments

TekMol over 7 years ago
Wow, is this really state of the art?

    Joe did not buy a car today. He was in buying mood.
    But all cars were too expensive.
    Why didn't Joe buy a car?
    Answer: buying mood

I think I have seen similar systems for decades now. I thought we would be further along by now.

I have tried for 10 or 20 minutes now, but I can't find any evidence that it has much sense of syntax:

    Paul gives a coin to Joe.
    Who received a coin?
    Answer: Paul

All it seems to do is extract candidates for "who", "what", "where", etc. So it seems to figure out correctly that "Paul" is a potential answer for "Who".

No matter how I rephrase the "Who" question, I always get "Paul" as the answer. "Who? Paul!", "Who is a martian? Paul!", "Who won the summer olympics? Paul!", "Who got a coin from the other guy? Paul!"

Same for "what" questions:

    Gold can not be carried in a bag. Silver can.
    What can be carried in a bag?
    Answer: Gold
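[Editor's note: for anyone who wants to reproduce these probes outside the web demo, here is a minimal sketch using AllenNLP's Predictor API. The model archive URL and the output key are assumptions based on the published BiDAF demo model, not something stated in this thread.]

    from allennlp.predictors.predictor import Predictor

    # Assumed archive URL for the demo's BiDAF model; substitute the current one if it has moved.
    predictor = Predictor.from_path(
        "https://allennlp.s3.amazonaws.com/models/bidaf-model-2017.09.15-charpad.tar.gz"
    )

    result = predictor.predict(
        passage="Paul gives a coin to Joe.",
        question="Who received a coin?",
    )
    print(result["best_span_str"])  # the extracted answer span, e.g. "Paul"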
mamp over 7 years ago
This is very brittle: it works really well on the pre-canned examples, but the vocabulary seems very tightly linked. It doesn't handle something as simple as:

'the patient had no pain but did have nausea'

This doesn't yield anything helpful for semantic role labeling and didn't even parse for machine comprehension. If I vary it to, say, ask 'did the patient have pain?', the answer is 'nausea'.

CoreNLP provides much more useful analysis of the phrase structure and dependencies.
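[Editor's note: the semantic role labeling demo can also be driven programmatically, which makes probes like the one above easy to repeat. A rough sketch follows; the model URL and the structure of the returned dictionary are assumptions about AllenNLP's pretrained SRL model rather than anything confirmed in the thread.]

    from allennlp.predictors.predictor import Predictor

    # Assumed archive URL for the pretrained SRL model.
    srl = Predictor.from_path(
        "https://allennlp.s3.amazonaws.com/models/srl-model-2018.05.25.tar.gz"
    )

    output = srl.predict(sentence="the patient had no pain but did have nausea")
    for verb in output["verbs"]:
        # Each entry carries the predicate and a BIO-tagged description of its arguments.
        print(verb["verb"], "->", verb["description"])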
sanxiyn over 7 years ago
In "Adversarial Examples for Evaluating Reading Comprehension Systems" (https://arxiv.org/abs/1707.07328), it was found that adding a single distracting sentence can lower the F1 score of BiDAF (the model used in the demo here) from 75.5% to 34.3% on SQuAD. In comparison, human performance goes from 92.6% to 89.2%.
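[Editor's note: for readers unfamiliar with the metric being quoted, SQuAD-style F1 is a token-overlap F1 between the predicted and gold answer spans. The sketch below is a simplified version of that computation; it skips the official script's normalization of articles and punctuation.]

    from collections import Counter

    def squad_f1(prediction: str, gold: str) -> float:
        """Token-overlap F1 between a predicted and a gold answer span."""
        pred_tokens = prediction.lower().split()
        gold_tokens = gold.lower().split()
        overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    # Partial credit for overlapping tokens, even when the span is not exact.
    print(squad_f1("buying mood", "he was in a buying mood"))  # 0.5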
vbuwivbiu over 7 years ago
"the squid was walked by the woman"

"what is the fifth word in that sentence?"

Answer: squid
strin over 7 years ago
We need more demos of AI models: there is what people claim their model does, and there is what the model actually does.
wyldfire over 7 years ago
How does this compare with spacy?