
BlackRock shelves unexplainable AI liquidity models

129 points by gsanghera over 6 years ago

18 comments

michaelbuckbee over 6 years ago
I learned a new term in the context of AI recently: "Specification Gaming".

There's a big list here: https://t.co/OqoYN8MvMN

But it's stuff like:

- Evolved algorithm for landing aircraft exploited overflow errors in the physics simulator by creating large forces that were estimated to be zero, resulting in a perfect score

- A cooperative GAN architecture for converting images from one genre to another (e.g. horses <-> zebras) has a loss function that rewards accurate reconstruction of images from its transformed version; CycleGAN turns out to partially solve the task by, in addition to the cross-domain analogies it learns, steganographically hiding autoencoder-style data about the original image invisibly inside the transformed image to assist the reconstruction of details.

- Simulated pancake-making robot learned to throw the pancake as high in the air as possible in order to maximize time away from the ground

- Robot hand pretended to grasp an object by moving between the camera and the object

- Self-driving car rewarded for speed learned to spin in circles

All of which leads me to think that if you can't at some level explain how/what/why it's reaching a certain conclusion, it may be reaching a radically different end than you're anticipating.
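A toy sketch of the pancake example above (illustrative numbers only, not from the linked list): the proxy reward "time away from the ground" is maximized by launching harder, not by flipping better, so even a naive hill-climber games the specification.

    # Toy model: one policy parameter (launch speed), reward = airtime.
    def airtime(launch_speed: float, g: float = 9.81) -> float:
        """Seconds a pancake launched straight up spends off the pan."""
        return 2 * launch_speed / g

    def proxy_reward(launch_speed: float) -> float:
        # Designer's intent: "pancake off the pan" ~ a clean flip.
        # What the reward actually measures: airtime, unbounded in speed.
        return airtime(launch_speed)

    speed = 1.0
    for _ in range(50):  # naive hill-climbing "agent"
        if proxy_reward(speed + 0.5) > proxy_reward(speed):
            speed += 0.5

    print(f"learned launch speed: {speed:.1f} m/s")
    # Climbs to the edge of the search budget: the optimum under the
    # proxy is "throw it as hard as possible" -- specification gamed.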
agentofoblivion over 6 years ago
I hear this a lot. In my opinion, people overestimate their ability to "understand" non-neural-net models.

For instance, take the go-to classification model: Logistic Regression. Many people think they can draw insight by looking at the coefficients on the variables. If it's 2.0 for variable A and 1.0 for variable B, then A must move the needle twice as much.

But not so fast. B, for instance, might be correlated with A. In this case, the coefficients are also correlated and interpretability becomes much more nuanced. And this isn't the exception, it's the rule. If you have a lot of features, chances are many of them are correlated.

In addition, your variables likely operate at different scales, so you'll have needed to normalize and scale everything, which adds another layer of abstraction between you and interpretation. This becomes even more complicated when you consider encoded categorical variables. Are you trying to interpret each category independently, or assess their importance as a group? It's not obvious how to make these aggregations. The story only gets more complicated for e.g. Random Forests.

I think it's best to accept that you can't interpret these models very well in general. At least some models (like neural nets) approximate a Bayesian posterior, which has some nice properties.
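A minimal sketch of the correlated-coefficients point (synthetic data; the feature names A and B are placeholders): the signal lives entirely in A, yet once a near-duplicate B is added, the fitted weight gets shared between them.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    a = rng.normal(size=n)
    b = a + 0.01 * rng.normal(size=n)             # B is nearly a copy of A
    y = (a + rng.normal(size=n) > 0).astype(int)  # label depends on A alone

    for X, names in [(np.c_[a], ["A"]), (np.c_[a, b], ["A", "B"])]:
        model = LogisticRegression().fit(X, y)
        print(dict(zip(names, model.coef_[0].round(2))))

    # Fit with A alone, A carries the full weight; fit with both, the
    # (L2-regularized) weight is split, even though B adds no information.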
d--b over 6 years ago
I applaud this decision.

If you can't explain the model, it means you don't know the assumptions that went into the model's output, which means you won't see it coming when the model stops working. And you don't want to look like a moron saying "oh, but the model said..." (or get sued for mismanaging investors' money).

Honestly, it's probably the investors asking questions that led them to this decision, but nonetheless, this is reason talking.
resters over 6 years ago
Here's the scenario that makes it sensible to shelve the superior AI models:

Premise 1: a financial crisis hits, requiring some firms to accept immediate loans (or off-books loans, aka QE) to maintain solvency (the classic 2008 scenario).

Premise 2: firms will not have equivalent exposure, so some firms fail worse than others, but as the risk is viewed as "systemic", all get the bailout.

If some firms have AI that finds risks hidden in investments that traditional (explainable) models ignore, then those firms will sit out of markets that will in the meantime be profitable for the firms that are unaware of the actual risk. Metaphorically: why ruin the 70s with an accurate HIV test.

If the same models could be used to identify and securitize (and make a market in) the invisible risk, it's possible that the market price of the risk would similarly lead many firms to sit out of otherwise profitable markets, as the yields of many of the traditional investments would (after the cost of hedging) be poor.

All this would result in a shrinking of the pie without an analytical explanation. "What do you mean the pie is smaller than we thought it was and we have to grow at a slower rate than we thought?", the CEO might ask.

In most scenarios where quantitative approaches give better insight into the future, the first firm to develop the approach makes a fortune until others catch up.

But what we have today is a financial system where keeping the overall system running hot is government policy, and so all participants have the incentive to ignore information that would lead to rational reallocation of investments.

Once the system's *normal* is leveraged/hot enough, the system becomes resistant to certain kinds of true information.
lawlessone over 6 years ago
They're right.

Why would we put a NN in charge of anything important if we can't explain how a particular model works?

Would you want your car, or an aircraft you're on, piloted by a neural net whose actions can't be explained?

What if it encounters an unforeseen event that causes a flash crash, or worse, an actual crash that kills people?

Do you want to trust something built from incomplete data and simulated annealing with your life and livelihood?
georgeek over 6 years ago
David Freedman has the following dialogue in his Statistical Models: Theory and Practice book:

Philosophers' stones in the early twenty-first century: correlation, partial correlation, cross-lagged correlation, principal components, factor analysis, OLS, GLS, PLS, IISLS, IIISLS, IVLS, LIML, SEM, HLM, HMM, GMM, ANOVA, MANOVA, meta-analysis, logits, probits, ridits, tobits, RESET, DFITS, AIC, BIC, MAXNET, MDL, VAR, AR, ARIMA, ARFIMA, ARCH, GARCH, LISREL [...]

The modeler's response: We know all this. Nothing is perfect. Linearity has to be a good first approximation. Log-linearity has to be a good second approximation. The assumptions are reasonable. The assumptions don't matter. The assumptions are conservative. You can't prove the assumptions are wrong. The biases will cancel. We can model the biases. We're only doing what everybody else does. Now we use more sophisticated techniques. If we don't do it, someone else will. What would you do? The decision-maker has to be better off with us than without us. We all have mental models. Not using a model is still a model. The models aren't totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where's the harm?
chatmasta over 6 years ago
Are the non-AI models any more "explainable"? Models built on multivariate statistics, processing terabytes of data a day, spitting out numbers might be "understandable" in the sense that there is some discrete representation of how their inputs map to outputs. But can anyone really look at those algorithms and explain *why* they work? What's really the difference between a NN and advanced statistical regression, beyond differing levels of familiarity/comfort?
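One way to make the "difference of familiarity" point concrete: logistic regression is literally a neural network with zero hidden layers, so the boundary between "interpretable statistics" and "black-box NN" is one of depth, not of kind. A minimal sketch:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def logistic_regression(x, w, b):
        return sigmoid(w @ x + b)   # the classical statistical model

    def zero_hidden_layer_nn(x, w, b):
        return sigmoid(w @ x + b)   # a "neural net" with no hidden layer

    x, w, b = np.array([0.5, -1.2]), np.array([2.0, 1.0]), 0.1
    assert logistic_regression(x, w, b) == zero_hidden_layer_nn(x, w, b)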
parallel_item over 6 years ago
I think a key factor in this decision may be the perceived risk of putting huge capital behind a single black-box model. I would assume this differs from more ML-heavy quant firms like Two Sigma, because BlackRock's products generally perform at a huge scale with some central idea behind them. Two Sigma can probably spread the same amount of assets across many different black-box models, diversifying and reducing risk through those means. In this case, perhaps one model dictating such a huge chunk of capital was just too much uncertainty?

I have no evidence of the scale and diversification of either firm, so evidence would be helpful in refuting the above!
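A back-of-the-envelope sketch of the diversification intuition above (made-up return numbers, nothing to do with either firm's actual books): blending N independent models of equal volatility cuts the blended volatility by roughly 1/sqrt(N).

    import numpy as np

    rng = np.random.default_rng(1)
    # One black-box model: 5% mean return, 20% volatility.
    single = rng.normal(0.05, 0.20, size=100_000)
    # Equal-weight blend of 25 independent such models.
    blended = rng.normal(0.05, 0.20, size=(100_000, 25)).mean(axis=1)

    print(f"1 model vol:   {single.std():.3f}")   # ~0.200
    print(f"25 models vol: {blended.std():.3f}")  # ~0.040 = 0.20 / sqrt(25)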
reallymental over 6 years ago
Who's to blame when the model sees 'red'? Management needs a head, a model isn't one yet.
rq1 over 6 years ago
Quite natural. AI in market finance is a fraud for the moment.

AI models totally fail to do what classical (and parsimonious, explainable, cheap...) methods/algos/models achieve quite easily (BS, Hawkes, RFSV, uncertainty zones, Almgren-Chriss/Cartea-Jaimungal... etc.). Actually, I'm tempted to say that AIs don't work at all.

What I've seen so far is funds leveraging "big data" with AI (e.g. real-time processing of satellite imagery, cameras, (more) news...) to get more and better information than the others, which they then use to calibrate these (parsimonious) models; nothing (interesting) else.

Do not get fooled. Lots of banks announced that they use AI to surf on the hype, because today if you don't do AI you're not in, because today everyone is a Data Scientist. That's all.
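For contrast, a sketch of one of the parsimonious models named above, the Black-Scholes ("BS") call price, where every input and every term has a meaning you can state in a sentence (illustrative parameters only):

    from math import exp, log, sqrt
    from statistics import NormalDist

    def bs_call(spot, strike, rate, vol, t):
        """Black-Scholes price of a European call option."""
        d1 = (log(spot / strike) + (rate + vol**2 / 2) * t) / (vol * sqrt(t))
        d2 = d1 - vol * sqrt(t)
        N = NormalDist().cdf  # standard normal CDF
        return spot * N(d1) - strike * exp(-rate * t) * N(d2)

    print(f"{bs_call(spot=100, strike=105, rate=0.02, vol=0.25, t=1.0):.2f}")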
Invictus0 over 6 years ago
Anyone have a mirror link?
fipple over 6 years ago
Corporations exist in a world with governments and politics. It's entirely reasonable for senior management to require a methodology that they can defend in a televised Senate hearing, even at the expense of some predictive power.
yters over 6 years ago
Maybe just shelve ML and go back to traditional statistics, which focuses a lot more on being explainable.
ZeroCool2u over 6 years ago
This seems like a management problem, not a model issue.
zzzeek over 6 years ago
Content not available without a paid subscription?
fiveFeet over 6 years ago
Is there a free version of the article available? The link requires a paid subscription.
bluetwo over 6 years ago
Isn't it their choice to make?
m3kw9 over 6 years ago
The manager probably saw the model as a threat to his job security, looked for a way out, and there it is: the ever-present problem of AI models.