I'm not sure if it would be classed as a "neuropsychological hazard", but one form of information hazard I've encountered in recent years is true information that requires a finessed understanding of both risk and psychology in order not to act rashly on it.

For example, say you get a brain scan purely out of curiosity. A small aneurysm is detected with 95% confidence, and said aneurysm, if real, has an estimated annual rupture risk of 0.5%. The operation to clip it, you are told, has a mortality rate of 10%. Do you have the op?

In the books I've read covering such medical-ethics topics, a disproportionate number of patients *do/would* have the op, because the knowledge of the aneurysm will "play on their mind" even though the odds are hugely in favor of leaving it be. For this reason, amongst others, unnecessary/preventative testing is discouraged by many medical professionals. (A similar dilemma faces folks in affected families who have to choose whether or not to have genetic testing for fatal familial insomnia risk – would *you* want to know if you're likely to face this usually inherited condition?)
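
To make the arithmetic concrete, here's a back-of-the-envelope sketch in Python. The figures are the ones quoted above; the assumption that every rupture is fatal is mine, and pessimistic, so it overstates the case for operating:

    # Cumulative rupture risk over time vs. one-off surgical mortality,
    # using the figures quoted above. Assumes (pessimistically) that
    # every rupture is fatal, which overstates the case for surgery.
    p_real    = 0.95    # confidence the aneurysm actually exists
    p_rupture = 0.005   # annual rupture risk, if it exists
    p_surgery = 0.10    # mortality rate of the clipping operation

    annual_risk = p_real * p_rupture   # ~0.475% per year

    for years in (5, 10, 20, 30):
        cumulative = 1 - (1 - annual_risk) ** years
        print(f"{years:>2} yrs: rupture {cumulative:.1%} vs. surgery {p_surgery:.0%}")

Even on that worst-case reading, the cumulative rupture risk doesn't reach the operation's 10% mortality until somewhere past the 20-year mark – which is exactly why "it will play on their mind" is doing all the work in the decision.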
I'm gonna read the whole paper, I swear, but I do hope they get around to treating the converse: hiding true information also causes harm, because the resulting loss of trust reduces the chances that future true information will be believed.

Most fascinatingly, since an excessively cautious attitude toward dangerous information can itself cause harm, this paper is a potential example of dangerous information. I hope the authors considered that before publication!
Just going to say it: the table on page 6 desperately needed one extra blank line to push the row break onto the page break, preventing that hideous mid-row split with the row title divided between pages. All for the lack of a single newline.

Other than that, it's kind of interesting to see the terminology being constructed for a type of “higher order analysis” I've been thinking about lately.
Searched for "noble lie" and found... nothing?

> *Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.*

I'm getting the sense there is an underlying loop in the reasoning supporting this concept of hazard, where the direction between truth and information seems confused. There's also the question of the durability of a given state of truth.

What is the difference between information and fear (e.g. data about potential information)? When and how is information realized into truth, and in turn, consequences? Given we know truth is neither a necessary nor a sufficient condition for consequences, what are the qualities of information that does have them? If there were no truth – only narratives and struggles for power – is an information hazard just something that hinders your will to power?

Admittedly I find Bostrom's writing mostly impenetrable, because he seems to publish things he's still sounding out and I don't quite detect the hand of an editor. But while this idea of a hazard seems interesting, I don't see that he's testing for or discovering its existence so much as coining and supporting a meme. Perhaps I've just missed it.
Ignorance is bliss for the sheeple,

the Bostromian creed wails.

To feel empowered, wagging their sheepdog tails.

To please their masters, by protecting the herd,

from the ever-lurking, black sheep nerd.

Fuck *OFF* ye lousy boffins!

edit: /me howls *Total transparency, or BUST!*