For my master's thesis, I implemented a new and fancy algorithm. The code seemed fine and dandy for the usual, simple test cases, but once I tried more elaborate test cases, I found ones that didn't work well.<p>After contacting the author, who indicated he didn't have such problems, I literally spent months trying to debug the code. When I finally gave up, rewrote my implementation basically from scratch, and hit the same problems, I contacted the author again. He then indicated that there was indeed a problem with the method for these cases, that he understood it, and that he had found a way to fix it. In hindsight, the problem was not hard to understand (but still, the claims in the paper were unwarranted IMO).<p>Conclusion? I wish I were a math prodigy; then I would have spotted the problem instantly. Also, be wary of claims made in papers.
I know it's a bit of a tangent, but proactive, large-scale logging of models like this (such as those used in machine learning) may become desirable to meet the requirements of GDPR. If you have to be able to explain how an algorithm made a decision, you need to be able to pull up data like this somehow.
A while back I was working on a system doing fairly complex engineering calculations, and I implemented detailed logging of both the values used and the actual calculations performed.<p>This allowed me to generate a spreadsheet (with the values and calculations in place) that could show a non-developer exactly how the outputs had been calculated (you could use Excel's built-in features to visually annotate precedents and dependencies).<p>I was pretty pleased with that approach.
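Just to make the idea concrete, here's a minimal sketch of what that might look like in Python with openpyxl. The calculation and all the names here are made up for illustration (this isn't the actual system): the point is that the raw inputs are written as values and the derived quantities as live Excel formulas, so a reviewer can use Excel's own precedent/dependent tracing on them.

    from openpyxl import Workbook

    def export_calculation_trace(width, height, load, path="calc_trace.xlsx"):
        wb = Workbook()
        ws = wb.active

        # Rows 1-3: the raw input values that went into the calculation.
        ws.append(["width", width])
        ws.append(["height", height])
        ws.append(["load", load])

        # Rows 4-5: derived quantities written as live Excel formulas rather
        # than computed numbers, so a non-developer can click "Trace
        # Precedents" / "Trace Dependents" and see exactly how each output
        # was obtained from the inputs above.
        ws.append(["area", "=B1*B2"])
        ws.append(["stress (load/area)", "=B3/B4"])

        wb.save(path)

    # Hypothetical inputs, purely for illustration.
    export_calculation_trace(2.0, 3.0, 1200.0)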
If you want IMMENSELY powerful logging, take a look at how trace logging in Racket's Medic debugger works, an absolutely ingenious solution:<p><a href="https://docs.racket-lang.org/medic/index.html" rel="nofollow">https://docs.racket-lang.org/medic/index.html</a><p>Interestingly, albeit a bit off topic, the authors of the paper that Medic originated from recently took this technique and cranked it to 11:<p><a href="https://conf.researchr.org/event/sle-2017/sle-2017-papers-debugging-with-domain-specific-events" rel="nofollow">https://conf.researchr.org/event/sle-2017/sle-2017-papers-de...</a><p>…which won them a distinguished paper award!
This looks to me like a standard logging toolchain where you just have programmatic access to the logs.<p>It's like claiming you save and load a JSON object (rather than its serialization) in some hashmap/DB for fast lookup.<p>Am I missing something?
That's a great approach to logging/debugging complex models on large datasets.<p>I'm pretty sure this can be applied outside of math as well, e.g. to systems with complex business rules over large datasets.
I implemented a library that does exactly this -- <a href="https://github.com/IGITUGraz/SimRecorder" rel="nofollow">https://github.com/IGITUGraz/SimRecorder</a> (in case anyone finds it useful). It supports storing data in both HDF5 and Redis (although I wouldn't recommend Redis for storing large numpy arrays).
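For anyone curious what the HDF5 side of something like this can look like, here's a generic h5py sketch (not SimRecorder's actual API): one HDF5 file per run, one group per recorded variable, one dataset per step, so individual snapshots can be pulled up later without loading everything.

    import h5py
    import numpy as np

    # One HDF5 file per run; one group per variable; one dataset per step.
    with h5py.File("run_0001.h5", "w") as f:
        weights = f.create_group("weights")
        for step in range(3):
            w = np.random.rand(128, 64)  # stand-in for whatever array you record
            weights.create_dataset(f"step_{step:06d}", data=w, compression="gzip")

    # Later: read back a single snapshot without touching the other steps.
    with h5py.File("run_0001.h5", "r") as f:
        w0 = f["weights/step_000000"][...]
        print(w0.shape)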
What’s the advantage of the file system/repo/bespoke diag database over storing the numpy arrays in the existing database infrastructure?<p>Doesn’t implementing this system with HDF5 cause headaches for concurrency in either direction?