科技回声 (Tech Echo)
A tech news platform built with Next.js, offering global tech news and discussion.

Resources: HackerNews API · Original Hacker News · Next.js

© 2025 科技回声. All rights reserved.

Tuning machine learning models

22 points · by Zephyr314 · about 10 years ago

2 comments

mjw · about 10 years ago
Nice post, couple of bits of feedback:

When you talk about "fit" it sounds like you mean fit to the training data, which would obviously be a bad thing to optimise hyperparameters for. From the GitHub repo it sounds like you are using a held-out validation set, but maybe worth being clear about this (e.g. call it something like "predictive performance on validation set").

When you've optimised over hyper-parameters using a validation set, you need to hold out a further test set and report results of your optimised hyperparameter settings on that test set, rather than just report the best achieved metric on the validation set. Is that what you did here? Maybe worth a mention.

A question about SigOpt: how do you compare to open-source tools like hyperopt, spearmint and so on? Do you have proprietary algorithms? Are there classes of problems which you do better or worse on? Or is it more about the convenience?
Comment #9104731 not loaded
Comment #9105396 not loaded
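The protocol mjw describes can be sketched in a few lines: tune a hyperparameter on a held-out validation set, then report the chosen setting's score on a further untouched test set. This is a minimal illustration using scikit-learn and synthetic data; the split ratios, the model, and the grid of `C` values are all illustrative assumptions, not SigOpt's actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# 60% train / 20% validation / 20% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=0)

# Optimise the hyperparameter against the validation set only.
best_C, best_val_score = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:  # hypothetical hyperparameter grid
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    score = model.score(X_val, y_val)  # predictive performance on validation set
    if score > best_val_score:
        best_C, best_val_score = C, score

# Refit with the chosen setting and report on the untouched test set,
# rather than quoting the best validation-set metric.
final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print(f"best C={best_C}, test accuracy={final.score(X_test, y_test):.3f}")
```

The point of the final two lines is exactly mjw's second paragraph: the validation score is biased upward by the search itself, so only the test-set number is an honest estimate.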
Zephyr314 · about 10 years ago
I'm one of the founders of SigOpt and I am happy to answer any questions about this post, our methods, or anything about SigOpt. I'll be in this thread all day.
Comment #9102708 not loaded