This AI toolkit runs on popular Intel CPUs, and it is a big step forward for the new Intel Nervana Neural Network Processor (NNP-I), a dedicated accelerator chip akin to a GPU.<p>The Intel AI Lab has an introduction to NLP (<a href="https://ai.intel.com/deep-learning-foundations-to-enable-natural-language-processing-solutions" rel="nofollow">https://ai.intel.com/deep-learning-foundations-to-enable-nat...</a>) and optimized TensorFlow builds (<a href="https://ai.intel.com/tensorflow/" rel="nofollow">https://ai.intel.com/tensorflow/</a>).<p>One surprising research result in NLP is that a simple convolutional architecture often outperforms canonical recurrent networks. See the CMU lab's sequence-modeling benchmarks and Temporal Convolutional Networks (TCN): <a href="https://github.com/locuslab/TCN" rel="nofollow">https://github.com/locuslab/TCN</a><p>If you're interested in Nervana, here are some specifics: the chip accelerates neural networks in hardware, targeting inference workloads. Notable features include fixed-point math, Ice Lake cores, a 10-nanometer process, software-managed on-chip memory, and hardware-optimized inter-chip parallelism.<p>I've worked for Intel, and I'm stoked to see the AI NLP progress.
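<p>For intuition on how a convolutional stack can model a sequence at all, here's a minimal pure-Python sketch of the causal dilated 1-D convolution that forms the core of a TCN layer (the function name and toy weights are my own illustration, not code from the TCN repo):

```python
def causal_dilated_conv1d(x, weights, dilation=1):
    """Causal dilated 1-D convolution, the core op of a TCN layer.

    The output at time t depends only on inputs at t, t-d, t-2d, ...
    (no future leakage), which is what lets a feed-forward conv stack
    stand in for a recurrent network on sequence tasks.
    """
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            j = t - i * dilation  # look back i * dilation steps
            if j >= 0:            # positions before the sequence start are zero-padded
                acc += w * x[j]
        out.append(acc)
    return out

# Kernel [1, 1] just sums the current input with one past input:
print(causal_dilated_conv1d([1, 2, 3, 4], [1.0, 1.0], dilation=1))  # [1.0, 3.0, 5.0, 7.0]
print(causal_dilated_conv1d([1, 2, 3, 4], [1.0, 1.0], dilation=2))  # [1.0, 2.0, 4.0, 6.0]
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is how TCNs capture long-range context without recurrence.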