
NLP Architect by Intel AI Lab

77 points by tsaprailis, over 6 years ago

3 comments

jph · over 6 years ago
This AI toolkit works on popular Intel CPUs, and is a big step forward for the new Intel Nervana Neural Network Processor (NNP-I), a hardware chip akin to a GPU.

The Intel AI Lab has an introduction to NLP (https://ai.intel.com/deep-learning-foundations-to-enable-natural-language-processing-solutions) and optimized TensorFlow (https://ai.intel.com/tensorflow/).

One surprising research result in this area is that a simple convolutional architecture often outperforms canonical recurrent networks. See the CMU lab's Sequence Modeling Benchmarks and Temporal Convolutional Networks (TCN): https://github.com/locuslab/TCN

If you're interested in Nervana, here are some specifics: the chip provides hardware neural-network acceleration for inference workloads. Notable features include fixed-point math, Ice Lake cores, 10-nanometer fabrication, on-chip memory managed directly by software, and hardware-optimized inter-chip parallelism.

I've worked for Intel, and I'm stoked to see the AI NLP progress.
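The building block behind the TCN result mentioned above is the causal dilated 1-D convolution: each output depends only on current and past inputs, and dilation widens the receptive field without extra parameters. A minimal numpy sketch of that operation, for illustration only (this is not the locuslab/TCN implementation, and the function name is mine):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1-D convolution.

    y[t] = sum_j w[j] * x[t - j*dilation], so the output at time t
    never sees inputs after t -- the defining property of a TCN layer.
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    k = len(w)
    # Left-pad with zeros so the output has the same length as the input.
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# With w = [0, 1] the layer is a one-step delay: the output is the
# input shifted right by one, with a zero filled in at the start.
y = causal_dilated_conv1d([1.0, 2.0, 3.0, 4.0], [0.0, 1.0])
```

A real TCN stacks several of these layers with exponentially growing dilations (1, 2, 4, ...) plus residual connections, which is what lets it match or beat RNNs on long-sequence benchmarks.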
continuations · over 6 years ago
How does this compare to word2vec or fasttext?
___cs____ · over 6 years ago
Yet another interface on top of PyTorch/TF/Gensim.