That's a risk every time you use a CDN. We used a CDN that botched JS versions, breaking sites, and had downtime, breaking sites again... whenever you do use a CDN, be aware of everything that could go wrong, which is a lot. On the other hand, if you add integrity hashes to the script references you load, you are a lot more secure. Check out <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity" rel="nofollow">https://developer.mozilla.org/en-US/docs/Web/Security/Subres...</a> if you use a CDN and would like to do so securely.
> <i>The Tealium iQ Tag Management System service is used by many companies to organize tags on their websites.</i><p>Tag managers are the worst. They shouldn’t even exist. When a website uses a tag manager, it means the web devs have been forced to give marketing and every other department a backdoor to insert whatever vile abominations they want.
> Each time a new table was generated, a new PHP file was created on the server. Using a hole in filtering of the input parameters for creating the PHP file, I was able to reproduce an RCE attack: a malicious request injected arbitrary PHP code into the generated file.<p>So this has nothing to do with the third-party JS library itself, but with how the website's backend stored the data generated by the frontend script. The developer could probably reproduce the hack with Postman and doesn't need the CDN-hosted library at all.
A few ways out of these are:<p>- don't eval on the server side (this is a bad idea most of the time anyway);<p>- serve js bundles from your own domain and set an appropriate content security policy;<p>These hacks won't work then.
While I think SRI is a good tool to counter CDN risks (with the right deployment strategy, human-supervised, semi-automated SRI generation should become trivial), there is a more fundamental problem with "compiled", i.e. obfuscated/minified, JavaScript code: how do you, as an author, even know that it doesn't contain malicious code in the first place? That's the fundamental problem of using software written by other people: unless you can afford expensive code audits, you never know. I expect any security-critical company (like a bank) to do these source code audits. But I doubt they do.
Minimize JS use, serve the JS you use only from your own domains, run high security apps on dedicated domains with less JS and other external shit than your public marketing site, and use CSP.<p>I'd probably trust a single CDN (like cloudflare) with my own copies of all things I include more than I'd want to serve directly but use code from lots of different sources, but for something incredibly high security, I'd want end users to be talking directly to a secure server (maybe with tcp/etc. layer proxies for ddos resistance and flow-level monitoring, but without decrypting).
In the first example, this guy goes and pops a (web)shell on Datatables.net. There's no security policy, no bounty program, etc. for this site or its owner[2]. Generally I don't believe it's a good idea to go pwning businesses' servers that don't give you some sort of permission to test. That's some seriously dangerous business.<p>[2] <a href="https://sprymedia.co.uk/" rel="nofollow">https://sprymedia.co.uk/</a>
The fact that this was possible is a testament to how little concept of due diligence web devs have. Sad.<p>Imagine if running a native app on your computer meant loading random DLLs from remote servers. It boggles the mind.
SRI is such a cool idea (in theory), but adoption fails in practice. Very few sites maintain a solid Content Security Policy (CSP) either. What's the point of all these controls/tools when nobody uses them?
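One low-friction way to start maintaining a CSP is report-only mode: the browser reports violations instead of blocking, so you can tighten the policy without breaking the site. A sketch (the endpoint path is made up; the report shape follows the classic report-uri JSON format):

```javascript
// Deploy this header first; browsers report violations but don't enforce.
const REPORT_ONLY_CSP = "default-src 'self'; script-src 'self'; report-uri /csp-report";

// Summarize a violation report POSTed by the browser to /csp-report.
function summarizeReport(json) {
  const r = JSON.parse(json)['csp-report'] || {};
  return `${r['violated-directive']} blocked ${r['blocked-uri']} on ${r['document-uri']}`;
}

console.log(summarizeReport(JSON.stringify({
  'csp-report': {
    'document-uri': 'https://shop.example/checkout',
    'violated-directive': 'script-src',
    'blocked-uri': 'https://evil.example/skimmer.js',
  },
})));
// script-src blocked https://evil.example/skimmer.js on https://shop.example/checkout
```

Once the report stream is quiet, switch the header to enforcing `Content-Security-Policy`.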
I’ve come to the conclusion that the way to secure your website from third party JavaScript is to monitor everything happening on your site:
<a href="https://enchantedsecurity.com/" rel="nofollow">https://enchantedsecurity.com/</a><p>These third party libraries are a necessary part of modern websites. It’s worth trusting but verifying their security.
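As a sketch of what that in-page monitoring can look like (allowlist and hostnames are made up): the host check is a plain function, and in the browser you'd wire it into a MutationObserver to flag script tags injected at runtime.

```javascript
// Hosts we expect to serve script on this (hypothetical) site.
const ALLOWED_SCRIPT_HOSTS = new Set(['www.example.com', 'cdn.example.com']);

function isAllowedScript(src, baseHref) {
  // Resolve relative URLs against the page before checking the host.
  return ALLOWED_SCRIPT_HOSTS.has(new URL(src, baseHref).host);
}

// In the browser, flag scripts added after page load:
// new MutationObserver(muts => {
//   for (const m of muts) for (const n of m.addedNodes) {
//     if (n.tagName === 'SCRIPT' && n.src && !isAllowedScript(n.src, location.href)) {
//       console.warn('Unexpected script:', n.src);
//     }
//   }
// }).observe(document.documentElement, { childList: true, subtree: true });

console.log(isAllowedScript('/app.js', 'https://www.example.com/'));              // true
console.log(isAllowedScript('https://evil.example/x.js', 'https://www.example.com/')); // false
```

It won't stop a compromised script that's already on the allowlist, but it does surface the "marketing added another tag" class of surprise.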