
Research as a Stochastic Decision Process

76 points by benwr, about 6 years ago

5 comments

ArtWomb, about 6 years ago
Absolutely this: research is a creative exploration of the search space. It's why artists and scientists have such a kinship ;)

The image of a solitary Principal Investigator is fading fast. Cloud-based datasets, Jupyter notebooks, open review archives, even public discussion and distribution of results via Twitter attest to the collaborative nature of science in the modern era.

Consider the recent initiatives in brain understanding, which could yield profound implications beyond neuroscience and AI, reaching into public policy and our most fundamental beliefs. And it's not just discoveries about intelligence. Completing entire transcriptomes of cell diversity in mouse and nematode brains creates a cell atlas at a level of detail that other researchers can then use in their own investigations, such as exploring the robustness of genetic diversity in transcription error rates!
tgbugs, about 6 years ago
This is a fantastic perspective on managing ignorance. It is told from a personal perspective, but there are fascinating applications of this at an organizational scale.

My initial thought was that the primary challenge here (which the author addresses in part) is estimating the time it takes to complete a task. If you already 'know' how much time something is going to take, then is there really uncertainty? How much uncertainty? The author seems to be going in the right direction, but there seems to be something more here about sources of ignorance that touches on the number of special cases in the problem space, or something of that nature. I wonder if there is any work trying to infer the number of special cases, or 'practical realities', of a problem space (maybe a kind of roughness, inhomogeneity, or irregularity?) that will ultimately be the major time cost.

Another thought is that 'bad' negative results don't have the provenance required to rule anything out, but if you know exactly what was done then you have much stronger evidence about where the problems might lie.

Finally, this is deeply connected to another issue: sometimes we don't have the resources to devote to solving the really big problems, so we never even try. The economics of cutting-edge research only drives us further from the hardest questions, because we don't have anywhere to start, because we haven't the faintest idea why we fail.

Somehow this reminds me of my first play-through of Dark Souls -- the only thing that kept me going was the belief that it could be completed. My repeated failures were required for me to slowly gather enough information to see where I was going wrong. Funding basic research that is truly at the edge of the unknown is like only funding noobs who have never played before, or maybe more like funding good players from one game who come to another: they'll get there eventually, but they have to be able to fail, and if you make them play Ironman mode then we might as well give up -- the game is too hard.
btrettel, about 6 years ago
I haven't read this page too closely, but the better strategy is similar to an approach I've seen called the "ranking theorem", mentioned here: http://www.prioritysystem.com/math.html

> One result is the ranking theorem: If independent projects are ranked based on the ratio of benefit-to-cost, and selected from the top down until the budget is exhausted, the resulting project portfolio will create the greatest possible value (ignoring the error introduced if the portfolio doesn't consume the entire budget).

This speaks more generally of a "budget", which could be time, money, or something else. It's an approximation because when the resource is nearly spent it becomes a more complicated optimization problem: selecting projects that fit, which doesn't necessarily pick the projects with the highest ratio.

Obviously this becomes more complicated if the projects are not independent, e.g., one is a prerequisite of others.
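The greedy selection the ranking theorem describes can be sketched in a few lines of Python. The project data below is invented for illustration; only the ranking-by-ratio rule comes from the quoted source:

```python
# Rank independent projects by benefit-to-cost ratio, then fund them
# top-down until the budget is exhausted (the "ranking theorem" greedy).
# Projects whose cost would overshoot the remaining budget are skipped,
# which is exactly the end-of-budget approximation the comment mentions.

def select_by_ratio(projects, budget):
    """projects: list of (name, benefit, cost) tuples; returns (chosen, spent)."""
    ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)
    chosen, spent = [], 0.0
    for name, benefit, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

projects = [
    ("A", 10.0, 2.0),  # ratio 5.0
    ("B", 9.0, 3.0),   # ratio 3.0
    ("C", 4.0, 1.0),   # ratio 4.0
    ("D", 6.0, 4.0),   # ratio 1.5
]
chosen, spent = select_by_ratio(projects, budget=6.0)
print(chosen, spent)  # ['A', 'C', 'B'] 6.0
```

With dependent projects (one a prerequisite of another), this greedy rule breaks down and the selection becomes a knapsack-style optimization, as the comment notes.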
roboy, about 6 years ago
Very cool. We have developed a framework based on the same ideas for early product development of physical products (https://www.taf.expert) and successfully use it in a course at the Technical University of Munich (https://www.thinkmakestart.com), as well as at a number of large corporates.
crucialfelix, about 6 years ago
Another interesting way I look at projects is when the objective itself is unknown: I'm searching for things that can only be discovered, not imagined in advance.

In that case I want to choose the search path with the largest variability in output, and limit the downside time cost by just giving up after some time.

This is the antifragile approach: seek out sources of volatility. Explore; don't chase predetermined goals.

This is also useful for software development.
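One way to read this heuristic as code: among candidate search paths, pick the one whose past outcomes vary the most, then explore it under a hard time cap so the downside is bounded. This sketch is my own illustration of the idea, not from the comment; the path names, payoffs, and `sample` function are all invented:

```python
import random

def variability(outcomes):
    """Population variance of a list of observed outcomes."""
    mean = sum(outcomes) / len(outcomes)
    return sum((x - mean) ** 2 for x in outcomes) / len(outcomes)

def explore(paths, time_budget, sample):
    """Pick the highest-variance path, then probe it until the time cap.

    paths: dict mapping path name -> list of previously observed outcomes.
    sample: callable producing one new outcome for a given path name.
    """
    name = max(paths, key=lambda n: variability(paths[n]))  # seek volatility
    best, spent = None, 0
    while spent < time_budget:  # hard cap bounds the downside time cost
        outcome = sample(name)
        best = outcome if best is None else max(best, outcome)
        spent += 1
    return name, best

rng = random.Random(0)
paths = {"safe": [1.0, 1.1, 0.9], "wild": [0.0, 5.0, -2.0]}
name, best = explore(
    paths,
    time_budget=10,
    sample=lambda n: rng.gauss(0, 3) if n == "wild" else 1.0,
)
print(name)  # "wild": its past outcomes vary far more than "safe"'s
```

The upside is unbounded (we keep the best outcome seen), while the loss is capped at `time_budget` probes, which is the convex payoff shape the antifragile framing asks for.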