I recently did an extensive competitive analysis of SAST tools for a client. Anyone thinking about buying one of these tools should pay attention to which versions of each language they support. Also try to get old release notes to determine when they first supported a particular language version.<p>Many vendors take a good long time to support new versions of languages, even mainstream ones like Java and the .NET family. None of them are particularly helpful in getting this information to you. They have their marketing checklists, and information deeper than that can be hard to come by from the salespeople. A few were scared of letting us have this information at all once they knew it was for a competitive analysis. That's a sign that they take a long time to support new language versions.<p>In many companies, adoption of new language versions happens organically at the developer level, often within days of a new version being released. Even if the sysadmins try to pump the brakes a bit on deploying the new version to production servers, the pressure is there for it to happen. SAST vendors typically are not going to be able to keep pace, which will make your developers unhappy or even give them an excuse for not using the expensive tool you purchased.
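To make the lag concrete, here is a minimal illustration (mine, not the commenter's), assuming Python: the `match` statement added in Python 3.10 changed the grammar, so a scanner still shipping a 3.9-era parser rejects the whole file and silently skips the injectable sink inside it.

```python
# Illustration: Python 3.10 structural pattern matching changed the
# language grammar. A SAST engine with a 3.9-era parser cannot parse
# this file at all, so the command-injection sink below goes entirely
# unscanned instead of being flagged.
import subprocess

def handle(command: dict) -> None:
    match command:
        case {"action": "ping", "host": str(host)}:
            # Untrusted 'host' flowing into a shell is exactly the kind
            # of sink a taint rule should catch, if the tool can parse it.
            subprocess.run(f"ping -c 1 {host}", shell=True, check=False)
        case _:
            raise ValueError("unsupported command")
```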
I wish we could all just reject SOC2; it's such a grift. Auditors come in to read some docs (that they don't understand), look at screenshots (that they don't understand), take up hundreds of thousands of dollars in cash and time, and then write a document (that they don't understand) that no one will read or care about (except to fuel their own SOC2).<p>The harm is so significant. Tons of these due-diligence terms are driven not by any security engineer but by a legal or compliance team; I've even had members of the security team outright apologize for having to send it over.
I've done more than my fair share of vendor due diligences (and audits, action plans, contract reviews, ...)<p>To me this is a non-issue, because customers almost always ask for types of security checks, not for specific tooling (i.e., asking for source code analysis vs. asking for Veracode specifically).
As a rule, compliance/government folks will be concerned about the types of security measures you have in place, not about the specific implementation. Commercial source code analysis tools have varying support depending on the language (as others have mentioned, some languages are harder than others). A very valid alternative is to use a linter with security checks (and potentially custom rules). The advantage is that checking goes much faster, so you can do it more often (on every PR instead of nightly, for example). Many security-conscious companies have something like this in place.<p>In general, when you're answering security due diligence, it's your job to convince the customer you're going to keep their data safe. They will ask about certain things you don't have, and it's your job to explain how you're still solving the underlying problem. Typical example: customers asking for antivirus on all systems when you're using (immutable) Docker containers.<p>By the way, the interesting thing here is not the answers to the questions, but how you organise your company to quickly and effectively (as in: no follow-up meetings or, worse, action plans) answer them. My pet approach here is "customer-guided security": you start from what you think you need (a baseline) and you add the security measures that take the longest to explain why you don't have them. That way, you're skating through most of the due diligences and sales velocity goes up, which will make your bosses very happy.
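As a sketch of what that per-PR check can look like, assuming Bandit (a real Python security linter) and a `src/` layout; the same shape works for any linter that reports findings through its exit code:

```python
#!/usr/bin/env python3
"""Minimal per-PR security gate: run a security linter over the tree and
fail the build on findings. Assumes Bandit is installed (pip install
bandit); swap in whatever security linter fits your stack."""
import subprocess
import sys

def main() -> int:
    # -r: recurse into the source tree; -ll: only report medium severity
    # and up, which keeps the signal high enough to run on every PR.
    result = subprocess.run(["bandit", "-r", "src", "-ll"])
    # Bandit exits non-zero when it finds issues; propagate that to CI
    # so the pull request is blocked until the finding is addressed.
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```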
Meh, if you want your niche tool to break into big enterprise, you have to jump through compliance hoops. That's just the cost of doing business with big enterprise, and certainly not a new thing.<p>If anything, in the long term that's probably a benefit to fancy functional programming languages with complex type systems: they provide much more information for static analysis tools to work with (static analysis is closely related to type theory).
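A hedged sketch of that point, in Python rather than a fancy FP language: encode a security invariant in a type via `typing.NewType`, and a static checker such as mypy enforces it before the code ever runs; richer type systems give analyzers more of this for free. The names here are illustrative, not from any real codebase.

```python
# Sketch of "richer types give static analysis more to work with":
# treat "already-escaped SQL fragment" as a distinct static type so a
# checker such as mypy rejects raw user input at the query boundary.
from typing import NewType

SafeSql = NewType("SafeSql", str)

def escape(raw: str) -> SafeSql:
    # Stand-in for a real escaping routine.
    return SafeSql(raw.replace("'", "''"))

def run_query(fragment: SafeSql) -> None:
    print(f"SELECT * FROM users WHERE name = '{fragment}'")

user_input = "Robert'); DROP TABLE users;--"
run_query(escape(user_input))  # fine: the fragment went through escape()
run_query(user_input)          # flagged by mypy: "str" is not "SafeSql"
```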
Static analysis tooling, but also software component analysis tooling, is incredibly helpful, though, and should contribute to releasing stable products as well. It's not just there to satisfy your customer's management types; it's there to actually make sure your tool doesn't have 5 RCEs active at any point in time.<p>I for one am happy companies ask about this type of stuff. It's basic hygiene for keeping control over your product's security, and the tooling makes it a lot easier.
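For illustration, a bare-bones version of the component-analysis check, using the public OSV vulnerability database; the function name is mine, and real SCA tools (pip-audit, osv-scanner, ...) run this kind of lookup across a whole lockfile:

```python
# Bare-bones software component analysis: ask the public OSV database
# (https://api.osv.dev) whether a pinned dependency has known advisories.
import json
import urllib.request

def advisory_ids(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    payload = json.dumps(
        {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    ).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # OSV returns {"vulns": [...]} when advisories exist.
        return [v["id"] for v in json.load(resp).get("vulns", [])]

# An old requests release with published advisories.
print(advisory_ids("requests", "2.19.1"))
```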
I just spent some time with someone trying to recruit me back to writing medical software. The entire interview was dominated by HIPAA-related questions, which were mostly the interviewer justifying why the software sucked. And the software in question sucked in every way it could: bad UX, terrible limits on integration, data that could not be exported without copy-paste magic, etc. Some of these issues really are dangerous, because clinicians have to spend so much time fighting to make things work or doing immense amounts of double and triple data entry. Oh, and every time, the justification was HIPAA requirements or infosec.<p>It made me realize that we do not know how to make software well enough to regulate it safely, and no, I do not want to go work in a sector where priority one is complying with some privacy regulation when the top priority should be accuracy of diagnosis, reliability of the system, or eliminating operator error.
>There are new forces at play which will calcify current software stacks and make it extremely hard for existing or new entrants to see similar success without a massive coordinated push backed by big enterprise companies [...] enterprises no longer trust their developers and SREs to take care of security, and so protocols are being implemented top down.<p>The security community got exactly what it asked for.<p>Security people were selling fear of insecurity, with limited actionable advice for building security into products/systems bottom-up, so the business has to solve it with process. Breaking into computers is fun and all, but throw around words like "risk analysis" to sound like hot shit for too long and you end up with a comprehensive risk analysis process that spans beyond the bits of tech you want to play with.<p>I work in a highly regulated domain, so software security is just another type of risk analysis we do. So <i>shrug</i> <i>whatevs</i>, this doesn't calcify us more than we are already calcified. I just think it's cute that infosec people thought they were hackers but didn't realize they're another flavor of boring business analyst telling the kids to turn down their music and develop software to their requirements.
> In the wake of so many data leaks and hacking events enterprises no longer trust their developers and SREs to take care of security, and so protocols are being implemented top down.<p>As if enterprises were not responsible for failing to properly budget for security concerns in their engineering teams. I guess it's easier to just buy a tool that will force a process overhaul, rather than doing a much more thorough process overhaul in the first place. The problem is that tools like vulnerability scanners address only one part of the problem; admittedly, it is the low-hanging fruit.
I've got to say, I don't agree with this. The premise of the argument appears to be that the team managing the VA/SAST platform will be able to block adoption of new technologies if their tooling doesn't support them.<p>I don't think I've ever seen a company where the security tooling team had that kind of authority or pull. I've seen plenty where the first time the security team hears about a new technology is after product development has started.<p>You only have to look at the rise of containerization in the enterprise to see this in action. When it started, tooling was way behind, and it's only catching up now, but that didn't seem to stop anyone.
Yeah, I'm confused by this article. Why would this push functional programming into a small niche? Is it just because the scanners only support a small range of languages like C# or JavaScript? It seems like functional programming makes for BETTER security scanning all around. If anything, I actually see this potentially giving FP a boost, unless the scanners are just surface-level and adopted as a matter of faith, all well and good until one of them gets cracked. Compared to compiled languages, dynamic languages are like Swiss cheese.
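One small, hypothetical example of the static-vs-dynamic gap the commenter is gesturing at: with type annotations, a checker like mypy catches the `None` flow before it ships; without them, the same bug is only a runtime surprise.

```python
# With annotations, a static checker has an actual data flow to verify.
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    users = {1: "alice"}
    return users.get(user_id)  # may be None

def greet(user_id: int) -> str:
    name = find_user(user_id)
    # mypy: Item "None" of "Optional[str]" has no attribute "upper"
    return "hello " + name.upper()

try:
    print(greet(2))
except AttributeError as exc:
    print(f"runtime failure the static check would have prevented: {exc}")
```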
Static analysis is coming to FOSS as well. There is no mystery tech behind it. Compilers already do quite a bit of analysis: see Rust. Of course, FOSS tools will probably have rougher edges, but the entire field will commoditize in the coming years. There are already a number of competing tools, as mentioned by the article.<p>The usual functional-vs-imperative argument is not really relevant here: LLVM's intermediate representation is essentially functional (its SSA form corresponds closely to a functional program).
Well, first of all, a smaller number of tools is a good thing. Most software tools suck ass. By focusing on a few of them, hopefully their quality will increase (depending on who is making them and what their incentives are).<p>Second, security scanning is just part of an overall strategy for increased software quality, which helps the product made out of the software, which is the <i>entire point of writing software</i>. Who cares if your stack calcifies if the user has a better experience because your app crashes less, needs to be emergency-patched less often, and doesn't leak personal data like a fire hose?<p>I am an SRE, and not a little bit of a security nerd, and I wouldn't trust <i>myself</i> to get security right.
The best way to stop hackers is to just start handing out life sentences. That will put a stop to it. Or, even better, death penalties. Like, seriously, just get a life. Society doesn't tolerate thieves IRL, so why would we tolerate them on the internet? Backdoors and exploits exist everywhere. No computer or building is 100% secure. We know this. So stop acting like you are doing everyone a favor by exposing them.
I'm on the side of sending the big list of vendor due diligence questions to a potential vendor. I know it's a pain to be asked about GDPR, how you store/delete data, how you off-board employees, whether SSO is everywhere, whether you have 2FA, and how you access your servers.<p>We try to keep a high standard of tooling to protect our customers and company data. It's really not about bogging you down. I know it sucks. It sucks hard. It's about ensuring that when we upload data to your SaaS, we know it's in good hands that have been vetted.<p>The good news is, if you make it through it once, other big companies start flocking to you as well, and it becomes much easier to deal with them since you've been through the intensive process before.<p>I deal with a lot of deals, and if you're building a startup, it's a lot easier to think about security at the start than to retrofit and fix it all at the end.
I work in AppSec for a Very Large Company. I've worked in large companies before. These are not new trends. We have programmers who do F# and other functional programming. I would think the bigger inhibitor to functional programming is that most of the existing apps are Java or .NET, so unless you are building a brand new team, you reuse the skills and technology you already have working for you.<p>Our devs use plenty of small open source projects. We [Security] like to recommend software that we are comfortable with, but any determination of "stacks" we leave to the actual software engineering teams. If something is pretty bad, not updated, constantly having problems, etc., we might ban it... but what's your case for using poorly engineered software given the alternatives?<p>Not sure if "mom and pop" is supposed to mean commercial but not OSS? OSS we can patch and modify if necessary. We can even PR patches back based on what our "scanners" and manual testing find.<p>Generally, we don't care about language; most issues are in the implementation, not the language, and less so in more modern languages where the creators have heard about security.<p>The basic type of code scanning needed for PCI and other compliance is a commodity offering and a manageable cost compared to the marketing and relationship management costs needed to pursue big clients.<p>I am not sure the OP understands what a SOC2 report says/does. It talks about pretty high-level controls and practices. You certify an app/service, not a stack. If you scan and fix your bugs and have proactive security training, it doesn't care about how you do it. There is no golden stack that will help you pass a SOC2. You may be able to make your life easier with certain services/SaaS, but the issues come up in your practices and in the actual code implemented. If you have bugs in procedural or functional programming, it's the same problem from this perspective.<p>Vendor due diligence? Some companies have their own questions; there are also agreed standards for these that some companies opt into. I am not sure why a big company should risk its bottom line on something unproven or not ready for prime time. It's like getting an inspection when you buy a house. In the same way, your org can make improvements. This is no different than adding in features some customer wants in order to win their business.<p>I don't understand the ultimate point. People who build functional apps shouldn't have to care about security? It's just another non-functional requirement that helps you win a broad audience. It's the same argument that says government shouldn't regulate this or that, or that financial advisers shouldn't need to act as fiduciaries. It's the cost of doing business.<p>Maybe the OP has some weird experiences where auditors jumped on functional programming as an issue to justify not doing more work or to make their lives easier, but I don't think this is a commonly held belief across audit and security (if people even know what functional programming is).