A large number of deep learning libraries have been released recently.

TensorFlow is getting a lot of buzz. Theano and Torch are often spoken about. Berkeley has Caffe and Microsoft has released CNTK. Nvidia has cuDNN, and Keras looks very nice. What have you used? What do you like about the library? What are your pain points?
These libraries are not mutually exclusive; they operate at different levels of abstraction. For example, you can use Keras with either Theano or TensorFlow. cuDNN is an interface to the hardware and is used internally by most libraries.

Breaking it down, high-level frameworks:

- Caffe is very high-level and used almost exclusively for convolutional neural networks. It doesn't have good support for RNNs or much else. It has a very good collection of pre-trained models (the model zoo).

- Keras is a "wrapper" around TensorFlow or Theano and includes many higher-level abstractions like various types of layers, optimizers, etc. It's typically what I recommend to anyone who wants to get something up and running quickly and doesn't necessarily want to develop novel models (see the sketch at the end of this comment).

On the next level down are Theano and TensorFlow. They compete with each other pretty directly and share a very similar computational model (computational graphs; a sketch of that is below as well). People and companies seem to be moving toward TensorFlow, so that's what I'd recommend using at this point. TensorFlow recently added several higher-level abstractions (like TF Learn and the contrib modules) that are quite similar to those in Keras.

cuDNN is a library for GPU acceleration. It's *used* by most of these libraries under the hood to speed up computation. You certainly can use cuDNN directly, but unless you're doing low-level research it's probably not necessary.
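To make the Keras point concrete, here's a minimal sketch of its high-level API (the layer and compile calls are real Keras 1.x identifiers; the layer sizes and the commented-out training call are made up for illustration):

    # Minimal Keras sketch: stack layers, pick a loss/optimizer by name.
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(64, input_dim=100, activation='relu'))  # hidden layer
    model.add(Dense(10, activation='softmax'))              # output layer

    # Keras builds the Theano or TensorFlow graph underneath.
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    # model.fit(X_train, y_train, nb_epoch=10, batch_size=32)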
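And for contrast, the level Theano and TensorFlow operate at: you declare a computational graph of ops first, then execute it in a session. A minimal sketch using TensorFlow's pre-1.0 API (the placeholder and variable shapes are arbitrary):

    import tensorflow as tf

    # Build the graph; nothing is computed yet, we only declare ops.
    x = tf.placeholder(tf.float32, shape=[None, 100])
    W = tf.Variable(tf.zeros([100, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)

    # Execution happens inside a session that runs the graph.
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())  # pre-1.0 initializer name
        # output = sess.run(y, feed_dict={x: some_batch})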
I used to use Keras on top of Theano, and ran performance tests of Keras on TensorFlow in parallel with every TensorFlow release.

The last release (0.8) of TensorFlow is much faster for my use case (model compilation is twice as fast, execution about 10% faster), so I made the switch to TensorFlow across the whole cluster about a week ago. It has been performing well, and the transition was pretty seamless thanks to Keras.
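For anyone wondering why the transition was seamless: Keras picks its backend from a small config file (or the KERAS_BACKEND environment variable), so moving a model from Theano to TensorFlow is essentially a one-word change. A sketch of ~/.keras/keras.json, assuming the stock Keras 1.x keys:

    {
        "backend": "tensorflow",
        "floatx": "float32",
        "epsilon": 1e-07
    }

Or per run, without touching the file: KERAS_BACKEND=tensorflow python train.py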