The basic idea of integrating predictions into essays is good. On this subject, I've been thinking recently about using metadata for predictions. This could make analysis easier; e.g., imagine running an analysis script that pulled prediction data from multiple websites in a standardized form (a sketch of what that might look like is below). The easiest thing to do would probably be to add something to the proposed Schema.org Claim [0]. When I get the time I'll propose this to the right people [1].

> 2) The main way the forecasts failed to be useful was that the questions themselves weren't capturing anything interesting.

I agree, having used PredictionBook [2] in the past, though the essay doesn't address what I think is a better solution. Predictions that aren't involved in a decision aren't worth anything from a decision analysis perspective, so that's one heuristic I keep in mind when trying to make predictions. Why should I care that, say, Angry Birds AIs are getting better? If the information isn't a factor in any decision, its value of information [3] is zero (toy calculation below).

Perhaps I haven't paid close enough attention to this, but in AI safety I never got a sense for what people would do with the forecasts.

[0] http://schema.org/Claim

[1] https://news.ycombinator.com/item?id=22127537

[2] https://predictionbook.com/

[3] https://en.wikipedia.org/wiki/Value_of_information
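To make the metadata idea concrete: Claim is a real schema.org type, but it has no prediction-specific properties today, so the forecastProbability and resolutionDate keys below are hypothetical extensions of the sort I'd propose. Once something like this is embedded in pages as JSON-LD, a consumer script can be very simple:

    import json

    # JSON-LD as it might appear in a <script type="application/ld+json"> tag.
    # "Claim" is a real schema.org type; forecastProbability and resolutionDate
    # are hypothetical extensions, not part of schema.org today.
    page_metadata = """
    {
      "@context": "https://schema.org",
      "@type": "Claim",
      "text": "An AI beats the best human Angry Birds player by 2025",
      "forecastProbability": 0.3,
      "resolutionDate": "2025-12-31"
    }
    """

    claim = json.loads(page_metadata)
    if claim.get("@type") == "Claim":
        # A cross-site analysis script would collect these records from many
        # pages and score them once the resolution date passes.
        print(claim["text"], claim["forecastProbability"])

With that in place, aggregating across sites reduces to fetching pages, extracting the JSON-LD blocks, and filtering on @type.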
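And to make the value-of-information point concrete, here's a toy calculation with made-up payoffs. The value of perfect information is the gain from learning the state before acting; whenever the same action is optimal in every state, the forecast can't change the decision and the VOI comes out to zero:

    # Toy value-of-perfect-information calculation; all numbers are made up.
    # Two actions, two states; p is the forecast probability of state 1.
    p = 0.7
    payoff = {  # payoff[action][state]
        "invest": {0: -10, 1: 20},
        "hold":   {0: 0,   1: 0},
    }

    def ev(action, prob):
        return (1 - prob) * payoff[action][0] + prob * payoff[action][1]

    # Best expected value acting on the forecast alone.
    ev_prior = max(ev(a, p) for a in payoff)

    # Expected value with perfect information: learn the state, then act.
    ev_perfect = (1 - p) * max(payoff[a][0] for a in payoff) \
                 + p * max(payoff[a][1] for a in payoff)

    voi = ev_perfect - ev_prior
    print(voi)  # 3.0 here; 0 whenever no decision hinges on the forecast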
Seems to be an ad for their platform for creating notebooks that mix plain text with specific forecasting questions.

Looks cool, but what I'd be interested in is a rigorous method of combining the outputs from multiple questions.
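The essay doesn't say how this would work, and I don't know what they have in mind, but one simple interpretation, sketched with illustrative numbers, is chaining conditional questions through the law of total probability:

    # If a notebook elicits P(milestone) and P(outcome | milestone) as
    # separate questions, they compose into one forecast for the outcome.
    # The numbers here are illustrative, not from the essay.
    p_milestone = 0.4            # P(M)
    p_outcome_given_m = 0.8      # P(O | M)
    p_outcome_given_not_m = 0.1  # P(O | not M)

    p_outcome = p_milestone * p_outcome_given_m \
                + (1 - p_milestone) * p_outcome_given_not_m
    print(p_outcome)  # 0.38

Anything richer (correlated questions, more than one layer of conditioning) starts to look like a Bayes net over the questions, which is presumably the hard part.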