Nice post, a couple of bits of feedback:

When you talk about "fit" it sounds like you mean fit to the training data, which would obviously be a bad thing to optimise hyperparameters for. From the GitHub repo it sounds like you are using a held-out validation set, but it may be worth being explicit about this (e.g. call it something like "predictive performance on the validation set").

Once you've optimised over hyperparameters using a validation set, you also need to hold out a further test set and report the results of your chosen hyperparameter settings on that test set, rather than just reporting the best metric achieved on the validation set. Is that what you did here? Maybe worth a mention.

A question about SigOpt: how do you compare to open-source tools like hyperopt, spearmint and so on? Do you have proprietary algorithms? Are there classes of problems where you do better or worse? Or is it more about convenience?
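
For concreteness, here's a minimal sketch of the train/validation/test protocol I mean, using scikit-learn and a toy random-forest grid purely as an illustration (names like candidate_params are mine, not from the post):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, random_state=0)

    # Hold out a test set first; it is never touched during tuning.
    X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    # Split the remainder into training and validation sets.
    X_train, X_val, y_train, y_val = train_test_split(X_dev, y_dev, test_size=0.25, random_state=0)

    candidate_params = [{"n_estimators": n, "max_depth": d}
                        for n in (50, 200) for d in (3, 10)]

    # Choose the hyperparameters that score best on the validation set...
    best_params = max(
        candidate_params,
        key=lambda p: RandomForestClassifier(random_state=0, **p)
                      .fit(X_train, y_train).score(X_val, y_val),
    )

    # ...then refit on train+validation and report the number from the untouched test set.
    final_model = RandomForestClassifier(random_state=0, **best_params).fit(X_dev, y_dev)
    print("test accuracy:", final_model.score(X_test, y_test))

The point is just that the test score is only looked at once, after the hyperparameters have been picked on validation data.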
I'm one of the founders of SigOpt and I'm happy to answer any questions about this post, our methods, or anything else about SigOpt. I'll be in this thread all day.