Introduction to Thompson Sampling: The Bernoulli Bandit (2017)

57 points, by pncnmnp, over 1 year ago

6 comments

rphln, over 1 year ago
My favorite resource on Thompson Sampling is <https://everyday-data-science.tigyog.app/a-b-testing>.

After learning about it, I went on to replace the UCT formula in MCTS with it and the results were... not much better, actually. But it made me understand both a little better.
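For readers unfamiliar with the swap rphln describes, here is a minimal sketch of the idea: during MCTS selection, instead of ranking children by the UCT score, draw a win-rate sample from each child's Beta posterior and pick the argmax. This assumes Bernoulli (win/loss) rollout rewards; the `Node` structure and field names are hypothetical, not taken from any particular MCTS library.

```python
import random

class Node:
    """Hypothetical MCTS tree node tracking Bernoulli win/loss counts."""
    def __init__(self):
        self.children = []  # child Nodes
        self.wins = 0       # simulated wins that passed through this node
        self.losses = 0     # simulated losses that passed through this node

def select_child_thompson(node):
    """Thompson Sampling selection: sample each child's win rate from its
    Beta posterior (uniform Beta(1, 1) prior) and take the best draw."""
    def draw(child):
        return random.betavariate(1 + child.wins, 1 + child.losses)
    return max(node.children, key=draw)
```

The backpropagation step stays the same as in plain UCT-based MCTS; only the selection rule changes.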
zX41ZdbW, over 1 year ago
Thompson Sampling, a.k.a. Bayesian Bandits, is a powerful method for runtime performance optimization. We use it in ClickHouse to optimize compression and to choose between different instruction sets: https://clickhouse.com/blog/lz4-compression-in-clickhouse
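A minimal Beta-Bernoulli sketch of that general pattern: treat each candidate (say, a codec or an instruction-set variant) as an arm, and count a "success" when a timed run beats some target. The arm names, the threshold rule, and the class below are illustrative assumptions, not ClickHouse's actual scheme (which is described in the linked post).

```python
import random

class BernoulliThompsonChooser:
    """Choose among named alternatives with Beta-Bernoulli Thompson Sampling."""
    def __init__(self, arms):
        # One Beta(1, 1) prior per arm, stored as [successes, failures].
        self.stats = {arm: [0, 0] for arm in arms}

    def choose(self):
        # Sample a plausible success rate per arm; pick the best draw.
        draws = {arm: random.betavariate(1 + s, 1 + f)
                 for arm, (s, f) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, arm, success):
        # Record whether the chosen arm "won" this round.
        self.stats[arm][0 if success else 1] += 1

# Illustrative usage with made-up arm names and a synthetic timing.
chooser = BernoulliThompsonChooser(["lz4", "zstd"])
for _ in range(100):
    arm = chooser.choose()
    elapsed = random.uniform(0.8, 1.2)       # stand-in for a measured runtime
    chooser.update(arm, success=elapsed < 1.0)
```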
plants, over 1 year ago
This is great. I remember finding another really good resource on the Bernoulli bandit that was interactive. Putting feelers out there to see if anyone knows what I’m talking about off the top of their heads.
orasis, over 1 year ago
I built a contextual bandit combining XGBoost with Thompson Sampling; you can check it out at https://improve.ai
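The comment doesn't say how the two pieces are wired together. One common way to approximate Thompson Sampling with a tree-based reward model is a bootstrap ensemble: train several XGBoost regressors on resampled logs, then at decision time pick one member at random and act greedily on its predictions. The sketch below follows that assumption; the class, method names, and parameters are hypothetical and are not improve.ai's API.

```python
import numpy as np
from xgboost import XGBRegressor  # assumes the xgboost package is installed

class BootstrappedContextualTS:
    """Approximate contextual Thompson Sampling with a bootstrap ensemble
    of XGBoost reward models (a sketch, not improve.ai's actual design)."""
    def __init__(self, n_models=10):
        self.n_models = n_models
        self.models = []
        self.X, self.A, self.R = [], [], []  # logged contexts, actions, rewards

    def record(self, context, action, reward):
        self.X.append(context); self.A.append(action); self.R.append(reward)

    def fit(self):
        # Train each ensemble member on a bootstrap resample of the log.
        feats = np.hstack([np.asarray(self.X), np.asarray(self.A).reshape(-1, 1)])
        rewards = np.asarray(self.R)
        self.models = []
        for _ in range(self.n_models):
            idx = np.random.randint(0, len(rewards), size=len(rewards))
            m = XGBRegressor(n_estimators=50, max_depth=3)
            m.fit(feats[idx], rewards[idx])
            self.models.append(m)

    def choose(self, context, actions):
        # "Posterior sampling" here means picking one ensemble member at
        # random, then acting greedily on its reward predictions.
        if not self.models:
            return int(np.random.choice(actions))
        m = self.models[np.random.randint(self.n_models)]
        candidates = np.array([np.append(context, a) for a in actions])
        return int(actions[int(np.argmax(m.predict(candidates)))])
```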
eggie5, over 1 year ago
If you have an NN that is probabilistic, how do you update the prior after sampling from the posterior?
clbrmbr, over 1 year ago
Beautifully composed article. Looking forward to trying this out.