I suspect most of HN has never worked in a compliance-driven, regulatory environment. They've never had to deal with the possibility of audits by government, insurance, or financial regulators.<p>This is the world I live in - building pipelines for financial and health care giants. I agree with this article 100%. Not only <i>can</i> continuous delivery provide consistent regulatory compliance; the more traditional human-bureaucracy approaches <i>cannot</i> provide it. There are too many potential leaky spots, and far too much potential for human error - in particular, people signing off on things they don't understand. And that <i>includes</i> compliance auditors!
>> What are the goals of Regulatory Compliance?
All of the regulatory regimes that I have seen are, in essence, focussed on two things: 1) Trying to encourage a professional, high-quality, safe approach to making change. 2) Providing an audit-trail to allow for some degree of oversight and problem-finding after a failure.<p>No and no. The goal of regulatory compliance is to avoid liability. You maintain professional standards because that helps keep everyone safe, but you obey the rules to avoid the punishment associated with not obeying them.<p>This really matters when regulatory compliance conflicts with good judgement. Sometimes you do the bad thing because the bad thing is mandated in the rules. If a regulation says that you have to have "antivirus software" on the machine, then you have antivirus software on the machine... even if no antivirus software exists for that machine. You shoehorn something in because the rules say you need it. You don't do this to increase security. You do it because the lawyers tell you that not doing it will get you sued.
In general I agree -- but it means your pipelines will have to support every customer you deliver to and be version-aware.<p>Part 11 of the FDA regulations that oversee software compliance enforces expensive re-validation processes for every major release of software. It's not something your customers are going to want to do very often. So set up a CD pipeline for each customer on version X.y.z, where X is static, and make sure that you don't accidentally ship a backwards-incompatible major-version change down that pipeline.<p>It's an interesting challenge for operations-focused teams, but I agree that CD is a valuable tool to have.
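To make the idea concrete, here's a minimal Python sketch of that per-customer version gate (the function name and version scheme are illustrative, not any real FDA-validated tooling):

```python
# Hypothetical gate for a customer pinned to major version X: refuse to
# promote any build whose major version differs, since a major bump would
# trigger re-validation under Part 11.

def may_deploy(pinned_major: int, candidate_version: str) -> bool:
    """Return True only if the candidate stays on the pinned major version."""
    major = int(candidate_version.split(".")[0])
    return major == pinned_major

# A customer validated on 3.y.z keeps receiving 3.x releases:
assert may_deploy(3, "3.7.2")      # patch/minor updates flow through
assert not may_deploy(3, "4.0.0")  # major bump requires re-validation first
```

A real pipeline would enforce this as a pre-deploy check per customer, so a mis-tagged release fails loudly instead of shipping.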
In my experience, putting CD into place makes a significant improvement to the SDLC only when automated unit tests are a part of that life cycle. I have never seen the benefit of implementing a system like Jenkins if the only automated testing portion is 'does it compile without error? Yes? Then, good to go!'. Without the automated unit tests it just doesn't seem like a worthwhile endeavor to me.
“In fact it is quite hard to imagine a Pipeline that doesn’t give you access to this information”<p>I have seen multiple pipelines that don’t tell you who manually tested a change - or at least, don’t <i>always</i> tell you. The author is assuming a lot of practices are already in place, which makes his case for CD as the magic ingredient look a lot weaker.
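For what it's worth, capturing that information is cheap if you make it a pipeline step. A hypothetical sketch (field names and function are mine, not from any particular CD tool):

```python
import json
from datetime import datetime, timezone

# Hypothetical audit entry a pipeline stage could emit whenever a manual
# test gate is passed, so "who tested this change?" always has an answer.

def manual_test_record(change_id: str, tester: str, outcome: str) -> str:
    """Serialize a manual-test sign-off as a JSON audit-log line."""
    entry = {
        "change": change_id,
        "tested_by": tester,
        "outcome": outcome,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)
```

The point is that the pipeline has to be built to demand this record; it doesn't appear by magic.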
We have had a lot of success using InSpec, and for a while Chef Compliance, in validating that infrastructure elements remain compliant on an ongoing basis.<p>For instance, you can check that a folder remains encrypted or that certain ports remain closed in dev and test before promoting your config management to production.<p>Quite involved to set up, but a big tick for auditors.
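InSpec expresses these checks as declarative controls; the same idea in a plain Python sketch, for the ports case (the host and port values are placeholders):

```python
import socket

# Minimal "port stays closed" check, the kind of assertion an InSpec
# control makes declaratively. Host/port below are illustrative.

def port_is_closed(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if nothing accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # something answered, so the port is open
    except OSError:
        return True       # refused or timed out, treated as closed

# e.g., gate promotion to production on checks like:
# assert port_is_closed("app-test.internal", 23), "telnet must stay off"
```

Running a suite of such checks on every promotion is what turns "we believe it's compliant" into evidence an auditor can see.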
Your link is not working:<p><i>Resource Limit Is Reached
The website is temporarily unable to service your request as it exceeded resource limit. Please try again later.</i>