Ask HN: Vulnerabilities in ML frameworks

9 points by rs86 about 7 years ago
Does anyone know of any vulnerabilities in machine learning frameworks, especially buffer overflows and arbitrary code execution?

4 comments

k4ch0w about 7 years ago
Well, that's a really broad question in itself. It is possible that some vulnerability exists in a framework and a security researcher just hasn't discovered it yet. If you're referring to a framework written in C/C++, then yes, a buffer overflow is possible. Hopefully, though, modern protections such as ASLR, CFG, CFI, etc. protect you. I'll refer you to an old classic, http://insecure.org/stf/smashstack.html, though it is very hard to pull off this attack today. Modern compilers are pretty good at preventing developers from shooting themselves in the foot. <3 clang's memory sanitizer.

Arbitrary execution is more plausible, IMO. A lot of ML models are stored as serialized files using a mechanism like pickle or Java's serialization. Theoretically, you could add code to a pretrained model file that executes arbitrary code when someone loads it. This could be done with a simple technique like a code cave: https://en.wikipedia.org/wiki/Code_cave. I haven't had time to dig into this myself, but I honestly don't think it would be hard.

I think in the next couple of years you will see more vulnerabilities pop up in these frameworks, but finding security vulnerabilities takes time.
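To make the serialization risk concrete, here is a minimal, self-contained Python sketch of how a pickled "model" file can run attacker-chosen code when it is loaded; the file name and the shell command are hypothetical, chosen only for illustration.

```python
import os
import pickle

class MaliciousModel:
    # pickle calls __reduce__ when serializing; the callable it returns
    # is invoked at load time, so unpickling runs attacker-supplied code.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran on model load'",))

# Attacker writes the poisoned "model" file.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# Victim loads what they believe is a trained model; the command runs here.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

This is exactly why the pickle documentation warns against unpickling data from untrusted sources; loading such a model file is no safer than running a script from the same source.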
mgliwka about 7 years ago
There is a different, newer kind of attack, i.e. fooling a model into recognizing things as something they aren't:

https://media.ccc.de/v/34c3-8860-deep_learning_blindspots

https://medium.com/@ageitgey/machine-learning-is-fun-part-8-how-to-intentionally-trick-neural-networks-b55da32b7196
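As an illustration of the adversarial-example idea those links discuss, here is a small NumPy-only sketch of a fast-gradient-sign-style perturbation against a toy logistic-regression "model"; the weights, input, and epsilon are made-up values, not taken from the linked material.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": fixed logistic-regression weights (hypothetical values).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def class1_score(x):
    return sigmoid(np.dot(w, x) + b)

# A clean input the model confidently assigns to class 1 (score > 0.5).
x = np.array([1.0, -0.5, 0.2])
print("clean score:", class1_score(x))            # ~0.94

# Fast-gradient-sign-style perturbation: for logistic regression the
# gradient of the score w.r.t. the input has the same sign as w, so
# stepping against it lowers the class-1 score. Epsilon is chosen large
# enough to flip this toy prediction.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)
print("adversarial score:", class1_score(x_adv))  # ~0.38, prediction flips
```

Real attacks do the same thing against deep networks, using the network's actual gradients and much smaller, often imperceptible perturbations.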
joshumax about 7 years ago
I don't know if this counts, since your question is rather vague, but I wrote an article a while back about how Torch creates certain vulnerabilities on local systems: https://joshumax.github.io/general/2017/06/08/how-torch-broke-ls.html
ladberg about 7 years ago
No, and if someone did, they would report it to the developers.