Sorry for repeating myself, but since this is about machine learning and OCaml, it's worth mentioning Owl [1], a library for numeric and scientific computing, including ML.<p>[1] <a href="https://github.com/owlbarn/owl" rel="nofollow">https://github.com/owlbarn/owl</a>
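For anyone curious what Owl looks like in practice, here's a minimal sketch (assuming Owl is installed via <i>opam install owl</i>; the matrix sizes are arbitrary):

```ocaml
(* Minimal Owl sketch: multiply two random matrices.
   Assumes Owl is installed; the sizes here are arbitrary. *)
let () =
  let x = Owl.Mat.uniform 3 4 in   (* 3x4 matrix, entries drawn uniformly *)
  let y = Owl.Mat.uniform 4 2 in   (* 4x2 matrix *)
  let z = Owl.Mat.dot x y in       (* matrix product, shape 3x2 *)
  Owl.Mat.print z
```

The same `Mat` module also covers the usual element-wise ops, reductions, and slicing, so the NumPy-ish workflow translates fairly directly.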
This is great. Functional languages have such an elegant representation of so many mathematical concepts. It's a bit of a shame that they don't have more widespread use in scientific computing.
So much bashing of static typing in deep learning :) Can anyone from Google explain the benefit, since you guys are working on Swift for TensorFlow?<p><a href="https://medium.com/tensorflow/introducing-swift-for-tensorflow-b75722c58df0" rel="nofollow">https://medium.com/tensorflow/introducing-swift-for-tensorfl...</a>
So I was lost at the VGG19 example code, but probably because I have (a) no OCaml experience; and, (b) no ML/NN experience.<p>Still seems interesting, though. If anyone has any suggestions on basic sources for getting a background on the concepts here I'd definitely give them a read.
I had a very unpleasant interview regarding deep learning with Jane Street. I spoke to a member of their HR team to try to get firm assurances that the interview would actually focus on deep learning rather than puzzles or brain teasers, and that the job would really involve deep learning for their actual business, rather than being a proxy for general smarts followed by work on whatever in-house models already exist. The HR employee reassured me emphatically on both points.<p>Then the interview was nothing but deck-of-cards puzzles and random riddles where you have to articulate a careful model of some physical quantity like speed or frequency to solve the puzzle. I hate that junk, never found that it correlates with a way of thinking that matters in quant finance (which I previously did for a living), and duly failed the interview. Worse, I would have been happy to decline the interview and tell them I knew I wasn't their guy, if only the HR staff had depicted the interview and job to me accurately.<p>Ok, enough grumbling. From this actual blog post,<p>> “Type-safety helps you ensure that your training script is not going to fail after a couple hours because of some simple type error.”<p>I really think this way of thinking about static typing is a very bad thing.
This is not at all an actual benefit, because in any sane situation you will have unit and integration tests that execute extremely quickly on small test data and exercise your end-to-end model training code.<p>What I currently do for this on my team is to require that model training programs are deployed inside containers that capture not just the state of the code, but can also be configured to mount the training data volume and accept ENV settings that govern what the training job really is.<p>So then Jenkins or whatever will build the container for any PR that seeks to implement or modify training, attach fixture data and fixture ENV settings, and give you quick feedback on the whole end-to-end training run, even inclusive of GPU settings (we have a slight manual step to make Jenkins run on a GPU server, but that's a vestige of some of our infra headaches).<p>The point is that adding all sorts of extra code to embody type annotations, and cutting people off from powerful dynamic typing features, is a silly thing to do if you're worried about type errors ruining a long-running job. That should be handled by fast integration tests.<p>Now, there are perfectly valid other reasons to like static typing. I just always hear this one, especially in regards to Python, and it's really the wrong way to look at it.<p>The extra code and constraints of static typing are liabilities that should have to offer offsetting value to justify choosing them. You already need integration and unit tests to reliably make changes and maintain the training code. If you can get the same benefit of overall job safety (or even 99% of it) from the tests, without paying the extra costs of static typing, then don't!<p>Turning it around to act like static typing is <i>de facto</i> always a benefit is a very one-sided way to look at it.
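To make the point concrete, here's a toy illustration (not the commenter's actual setup, and deliberately dependency-free): the same training entry point is run on a tiny fixture dataset for a handful of epochs, so a type or shape error surfaces in milliseconds instead of hours:

```ocaml
(* Toy smoke test: run the full training path on tiny fixture data.
   `train_step`/`train` are hypothetical stand-ins for a real training
   entry point; the idea is that the fixture run exercises the same
   code the long job would, so errors surface immediately. *)
let train_step w (x, y) lr =
  let pred = w *. x in
  let grad = 2.0 *. (pred -. y) *. x in   (* d/dw of squared error *)
  w -. lr *. grad

let train data epochs lr =
  let epoch w = List.fold_left (fun w ex -> train_step w ex lr) w data in
  let rec loop w n = if n = 0 then w else loop (epoch w) (n - 1) in
  loop 0.0 epochs

let () =
  (* fixture: a few points on y = 3x, a handful of epochs *)
  let fixture = [ (1.0, 3.0); (2.0, 6.0); (3.0, 9.0) ] in
  let w = train fixture 50 0.02 in
  assert (abs_float (w -. 3.0) < 0.1);
  print_endline "smoke test passed"
```

In a real setup the "fixture" is a small sample of the production data mounted into the container, and the assertion is on loss going down rather than on a known weight, but the shape of the check is the same.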
I'm not convinced that functional programming will grow in terms of devs using it daily, but it has been very useful for myself in certain contexts (especially when I wrote math based libraries using permutations, heavy recursion, etc). The results of this seminar are awesome!
Very nice. I have spent many evenings playing with the Haskell bindings for TensorFlow that don’t have the coverage these OCaml bindings have (e.g., character seq models).<p>I have thought of learning some OCaml, maybe this will give me the kick in the butt to do it.
Am I the only one who gets confused by references to ML (ML-derived typed FP vs. Machine Learning)? The threads on this page represent a strange junction where I really have to think about what people mean, because they genuinely could mean either!
<i>Type-safety helps you ensure that your training script is not going to fail after a couple hours because of some simple type error.</i><p>This isn't a failure mode that ever happens in DL... Two hours into the job you will only be dealing with floats anyway, no matter what language you are using. If you're going to fail on anything type-related, it will probably be in the first 20 seconds, basically the instant you start your first epoch.