The history of machine learning startups is littered with companies that thought a hosted web service was a good idea. The problem with this model is that big data, by definition, is costly to move. So unless a managed service is already generating and storing the data you need to process with machine learning or deep learning (as AWS conceivably is), you probably don't want to move your data to someone else's algorithms or models. All you'll get are small-data users. The models and algos need to go to the data. That's the most efficient approach, and it means you have to go on prem... Fwiw, that's what we're trying to do with Skymind and Deeplearning4j.

https://skymind.io/
http://deeplearning4j.org/
As an algorithm developer and manager, I have thought about business ideas similar to what Algorithmia is pursuing. There are a few reasons why I think “algorithms as a service” will not work so well.

First, in most products and services that rely on non-trivial algorithms, the core algorithms are the “secret sauce” of the business. They are what give you your edge over the competition, and you need to fully understand and control them: you need to know where they work well and where they don't. With an outsourced service, your core algorithms are a black box outside your control.

Another problem: for most real-world problems, it is rare to be able to take an off-the-shelf algorithm and have it “just work” well enough. There is usually parameter tuning and domain-specific knowledge that must be incorporated to get the best results (this is how people like myself get a lot of consulting work). And if a generic algorithm does work well for your problem, your competitors probably already know about it, so you have no real edge over them.

A third problem, and really the main one: one of the main benefits of developing an advanced algorithm is that once you have it, you “own” it and can deploy it as you see fit. You pay the development cost upfront and then reuse it over and over at no extra cost. With a service like Algorithmia, you never get that leverage: the more you use the algorithms, the more you pay. And once you're paying a lot to use an algorithm, at some point it becomes cheaper to develop your own implementation and stop paying someone else for the service.
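To make that build-vs-buy trade-off concrete, here is a rough break-even sketch. All of the prices and costs below are made-up numbers, purely for illustration; the point is only the shape of the calculation.

```python
# Rough break-even sketch for "pay per call" vs. "build it yourself".
# All figures are illustrative assumptions, not real pricing.
price_per_call = 0.001      # hosted-service fee per API call ($)
dev_cost = 150_000.0        # one-time in-house development cost ($)
infra_per_call = 0.0001     # marginal cost per call once you own it ($)

# Break-even volume N solves: N * price_per_call == dev_cost + N * infra_per_call
break_even_calls = dev_cost / (price_per_call - infra_per_call)
print(f"Owning the algorithm pays off after ~{break_even_calls:,.0f} calls")
```

Below that volume the service is cheaper; above it, the upfront development cost amortizes to essentially nothing per call, which is the leverage the comment describes.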
> "Using GPUs inside of containers is a challenge. There are driver issues, system dependencies, and configuration challenges. It’s a new space that’s not well-explored, yet. There’s not a lot of people out there trying to run multiple GPU jobs inside a Docker container.”<p>Er, Nvidia itself has an official Docker application which allows containers to interface with the host GPU, optimized for the deep learning use case: <a href="https://github.com/NVIDIA/nvidia-docker" rel="nofollow">https://github.com/NVIDIA/nvidia-docker</a><p>Training models is one thing that can commoditized, like with this API, but <i>building</i> models and selecting features without breaking the rules of statistics is another story and is the true bottleneck for deep learning. That can't be automated as easily.
I'm not a big fan of taking the openness in machine learning and turning it into a web-based product. For me, the "whoa" moment with the new, approachable machine learning frameworks is that I can train a TensorFlow network on my own computer and then embed it in an Android/iOS app that works offline.

Also, a much more minor grievance, but I really dislike websites that don't work on my 15" laptop. What's going on here? http://i.imgur.com/q13lCLK.png
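For reference, a minimal sketch of that train-locally, run-offline workflow, assuming a small Keras model and the TensorFlow Lite converter; the model architecture and training data here are placeholders, not anything from the original comment.

```python
import numpy as np
import tensorflow as tf

# Minimal sketch: train a tiny Keras model locally, then convert it to a
# TensorFlow Lite flatbuffer that can be bundled into an Android/iOS app
# and run entirely offline. Model and data are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.rand(256, 4).astype("float32")   # stand-in training data
y = np.random.randint(0, 3, size=(256,))
model.fit(x, y, epochs=3, verbose=0)

# Export for on-device inference; the .tflite file ships inside the app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```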
Does anybody know if these "free APIs" are actually used to get "free training" for the API owner's models? I mean, is it free as in free beer, or free as in Facebook?