We use this sort of identification for any joint services in public sector digitalisation, which covers a lot, because much of our foundation is shared.<p>Reading this, I'm fairly happy we do it mainly in C# at my place. All the configuration required in Java is simply crazy to me: why the hell would you want unsafe settings that aren't disabled by default? You can turn the checks off in C# too, opening yourself to the same vulnerabilities, but it's an active and very obvious choice to do so. Though it's likely been fixed in the 7 years that have passed since this article.<p>I do wish we had a better system to identify IT systems. When you operate more than 1,000 connections, maintaining certificates that expire every 4 years becomes really fucking tedious. We've automated most of it, but some of it still requires hands-on work, and we're not perfect: I've seen project managers e-mail private keys around when someone bought a system without going through IT...
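To make that concrete, here's a rough sketch (mine, not from the article) of the kind of Java boilerplate that gets pasted into codebases to "make TLS work": a trust manager and host-name verifier that accept everything. None of it is required by the JDK's defaults, which do validate the chain and the host name for HttpsURLConnection; the point is how little it takes to silently throw all of that away.

    // Anti-pattern sketch: disable all TLS validation for HttpsURLConnection.
    // The JDK defaults are safe; this code deliberately removes them.
    import javax.net.ssl.*;
    import java.security.cert.X509Certificate;

    public class TrustEverything {
        public static void disableAllTlsChecks() throws Exception {
            TrustManager[] trustAll = new TrustManager[] {
                new X509TrustManager() {
                    public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                    public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                    public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                }
            };
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, trustAll, new java.security.SecureRandom());

            // From here on, every HttpsURLConnection accepts any certificate
            // from any host: a MitM attacker only has to present *a* cert.
            HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
            HttpsURLConnection.setDefaultHostnameVerifier((hostname, session) -> true);
        }
    }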
Oh god, this is one of my biggest peeves.<p>SSL/TLS in anything that's not a browser is a total shit-show, and it's ignored by security professionals, ISVs, developers, and network architects alike.<p>As an example, take a look at something like a Citrix NetScaler, a popular network load balancer and <i>security</i> appliance (similar to an F5 BIG-IP LTM):<p>Until recently, it was flat-out unable to validate host names, because, like all network devices, it assumes that "IP address == the host".<p>Some dingbat put the "host name" part of the SSL validation into the SSL Profile, so you now have to make a separate profile for each and every host name, making the feature practically unusable.<p>By <i>default</i> it'll accept <i>any</i> certificate for a back end, signed by any CA. Or self-signed. Or whatever. 512 bits? No worries! It's a cert! It's good! We're SSL now!<p>Recently "server authentication" was added so you can actually validate the cert chain of a back-end service. Except for one minor flaw: it lets you pick exactly one signing certificate to validate against. So even if you know ahead of time that a back-end server is about to have its intermediate CA change, you're facing at least a temporary outage while you quickly switch out this parameter on the NetScaler.<p>For some inexplicable reason, the back-end and front-end SSL capabilities are wildly different. You read the manual and think: yay, there's TLS 1.3 support now! Nope... front-end only.<p>The stupid things <i>still</i> generate 512-bit keys by default, and this can't be overridden in some scenarios, making them so insecure out of the box that Chrome refuses to talk to one.<p>Validating CRLs or OCSP is so difficult that I've never seen it set up on a NetScaler. I tried once and gave up.<p>Sure, you're keen. You want to validate CRLs and use OCSP like a good boy. Bzzt... chances are that some Security Troll has blocked outbound port 80 from the NetScaler, because everybody knows it's an "insecure protocol". So now you're facing a multi-month argument with a whole team of people convinced that you're trying to undermine their precious firewall rules.<p>There's no supported way of renewing a certificate automatically on one of these things, so of course certificate expiry is the #1 cause of outages in any NetScaler deployment.<p>Etc... it just goes on and on.<p>A lot of SSL/TLS design for network appliances was very obviously hacked in to support <i>one scenario only</i>, and anything else is going to be dangerously insecure. The NetScaler was originally designed to do front-end SSL offload for HTTP-only servers in the same broadcast domain on a physically secured network. For any topology or scenario more complex than that, it just falls apart and provides essentially zero protection against a MitM attack or anything similar.
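For contrast, here's a rough plain-Java sketch (not NetScaler config; the file name, password, and back-end URL are made up) of what proper back-end validation looks like: anchor trust at the root CA you expect the back end to chain to, so an intermediate rotation doesn't cause an outage, and leave host-name verification on instead of pinning a single signing certificate.

    // Sketch: validate a back-end service against a root CA trust store.
    import javax.net.ssl.*;
    import java.io.FileInputStream;
    import java.net.URL;
    import java.security.KeyStore;

    public class BackendTls {
        public static void main(String[] args) throws Exception {
            // Trust store holding only the root CA the back end should chain to.
            // Intermediates can rotate freely without breaking validation.
            KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
            try (FileInputStream in = new FileInputStream("backend-root-ca.jks")) {
                trustStore.load(in, "changeit".toCharArray());
            }

            TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);

            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);

            HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://backend.internal.example/health").openConnection();
            conn.setSSLSocketFactory(ctx.getSocketFactory());
            // Host-name verification is NOT disabled: the default verifier checks
            // the certificate's SAN against "backend.internal.example".
            // Revocation checking (CRL/OCSP) is still opt-in in the JDK, e.g. via
            // the com.sun.net.ssl.checkRevocation system property.
            System.out.println("Back end responded: " + conn.getResponseCode());
        }
    }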