This is a fun idea. With these kinds of coding tasks you won't get any advantage from using the differentiable programming paradigm, but it is a nice reminder of how syntactically bad TensorFlow is. Code for a differentiable program should look identical to code for a non-differentiable one. Maybe a small annotation à la TorchScript [0] can be tolerated, but not reimplementing everything via function calls with overly descriptive names.<p>Btw, the link to the GitHub repo is broken. Copy&pasting the URL works.<p>[0] <a href="https://pytorch.org/docs/stable/jit_language_reference.html#language-reference" rel="nofollow">https://pytorch.org/docs/stable/jit_language_reference.html#...</a>
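To illustrate the "small annotation" point: with TorchScript, a single decorator is the only visible difference from ordinary Python. A minimal sketch (the function name and logic are my own, just for illustration):

```python
import torch

# Plain Python control flow; @torch.jit.script is the only thing
# distinguishing this from ordinary non-differentiable code.
@torch.jit.script
def positive_or_zero(x: torch.Tensor) -> torch.Tensor:
    if x.sum() > 0:  # a bare `>` and a bare `if`, no special wrappers
        return x
    return torch.zeros_like(x)
```

The compiled function still reads like the eager version, which is the contrast being drawn with TensorFlow's `tf.greater`-style API.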
Ah yes, the enthusiasm of Day 1, "let's write my own stack DSL and do it on there!"<p>Day 8 "FML!" <i>checks python version installed...</i>
Lovely effort. Looks like the approach to the first one is just programmatic, procedural updates to a variable.<p>I was hoping to see some training of a model to produce outputs. Good effort nonetheless!
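The "procedural updates to a variable" style might look something like this sketch (the values and variable name are hypothetical, not from the linked solution): a `tf.Variable` mutated in a plain Python loop, with no model or gradients involved.

```python
import tensorflow as tf

# Hypothetical sketch: accumulate measurements into a mutable
# tf.Variable, purely procedurally -- nothing is learned here.
counter = tf.Variable(0, dtype=tf.int32)
for measurement in [1, 3, 2, 5]:
    counter.assign_add(measurement)

print(counter.numpy())  # 11
```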
That's pretty funny. AoC is rule-based, so I don't think there will be much "deep" learning going on, but I hope I'll be surprised!
You wrote this...<p><pre><code> All the comparisons like > are better written using their TensorFlow equivalent (e.g tf.greater). Autograph can convert them (you could write >), but it’s less idiomatic and I recommend to do not relying upon the automatic conversion, for having full control.
</code></pre>
...but I'm not sure you realized that the for loop and the if statement in your code are being transparently compiled to dataset.map() and tf.cond() for you by Autograph :)
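The irony being pointed out can be seen in a small sketch (my own example, not the article's code): inside a `tf.function`, Autograph rewrites the very constructs the article advises against.

```python
import tensorflow as tf

# Autograph converts the plain Python `if` and the bare `>` below
# into tf.cond / tf.greater when the function is traced.
@tf.function
def clamp_to_zero(x):
    if tf.reduce_sum(x) > 0:  # traced as tf.cond(tf.greater(...), ...)
        return x
    return tf.zeros_like(x)

print(clamp_to_zero(tf.constant([1.0, 2.0])))
```

So even if you write `tf.greater` by hand elsewhere, any Python `if` or `for` over tensors in the same `tf.function` is being converted for you anyway.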
Good reading! It would be interesting to have other similar challenges, such as Project Euler, solved in idiomatic TensorFlow and PyTorch. Also some more complicated classics, such as sorting/graph/tree algorithms, reimplemented in these frameworks.<p>It would be a great introduction to these frameworks for people who have never touched anything ML-related, leaving the neural network content for later in the learning process.<p>Learning how to create differentiable algorithms and neural networks would be easier once the way those frameworks work is understood (ingesting data, iterating datasets, running, debugging, profiling, etc.).<p>If you are starting with neural networks or differentiable programming, learning both the maths and the frameworks at the same time can be quite overwhelming.