I’m a security engineer and have worked at startups, and I don’t love this list. Some of it is valid, but overall it seems off target in how process-focused it is, with ideas like startups mapping data flows, running vulnerability management, and getting application-level logging done.<p>You’ll get much more bang for your buck on the product’s security by working the enterprise and infra side: deploying EDR (somehow not on this list?), tightening up GSuite sharing and email settings, Okta (I see SSO on the list), anti-phishing capability, GuardDuty and Security Hub, a password manager, getting on IaC early to support both ops and security goals, and a lightweight SlackOps setup for security alerts. Almost none of this is mentioned.<p>The emphasis on the application seems misguided because (a) pragmatically, it takes much longer to get appsec controls and logs in place than infra and enterprise controls/logs, and (b) the vector in is usually Bob and Jodie in recruiting and HR getting phished, not an appsec breach.<p>It also seems to break the golden rule of security enabling revenue with acceptable risk trade-offs and pragmatic controls. All the process in here doesn’t seem pragmatic. The controls themselves seem useful overall, but they’re not where I’d go first to secure a fast-moving startup.
The name is too cute, and actively misleads. You can't call this a "minimum" anything; it's an opinionated list of controls that several of the most savvy security teams I know don't uniformly implement.<p>Just off the top of my head, things that aren't even universally seen by practitioners as good things, let alone things everyone does as a "minimal" baseline:<p>* On request, enable your customers or their delegates to test the security of your application<p>* Implement role-specific security training for your personnel that is relevant to their business function<p>* Comply with all industry security standards relevant to your business such as PCI DSS, HITRUST, ISO27001, and SSAE 18 --- LOL to this whole line item.<p>* Maintain an up-to-date diagram indicating how sensitive data reaches your systems and where it ends up being stored<p>There is then a longer list of controls that I think most practitioners would say are good things, but that aren't always P1-prioritized (for good reason, to make way for more important things). CSP headers, SLSA level 1 builds, media sanitization policies; these things are situationally important.<p>I think an opinionated checklist is a fine thing, but when you call something the "minimal viable" standard, you set yourself up to explain how lots of well-run companies are viable without these things.
Not a great list. Very opinionated, and much as I'm security-conscious, I disagree with a number of "minimum" recommendations there.<p>This list looks like it was written by people who are not paying for anything, so they don't understand the tradeoffs, costs and compromises. I can assure you things look quite different when you are a solo founder running the business and you have to pay for everything. Technical stuff doesn't change, so recommendations like "Do not limit the permitted characters that can be used in passwords" still hold, but "Contract a security vendor to perform annual, comprehensive penetration tests on your systems"?<p>Also, "Publish the point of contact for security reports on your website" and "Respond to security reports within a reasonable time frame" — I'm already spending too much time responding to "security researchers" trying to shake me by sending (rather silly and generic) "vulnerability reports". Some of them follow up by asking if they can "publish this on their social media channels" if I don't respond. I really don't need more "security reports" for my website.
I think a lot of comments here might be missing the point of this standard - it’s not specifically designed just for startups or to cover general organisation security (like EDR). It’s a product-feature-oriented checklist that should help short-circuit some aspects of supplier security due diligence and establish a set of product security features that any B2B product with ambitions to sell to enterprise customers would benefit from aspiring to, and anyone buying can leverage. That means large multi-product businesses looking to ship an MVP for a new service as well as startups with a bright idea but a possibly limited understanding of enterprise IT compliance needs.<p>I’ve got some experience as Head of Cyber/InfoSec at a couple of startups/scaleups and I see this as potentially useful in a bunch of ways if broadly adopted. It establishes a baseline both for our own products and our suppliers, attests to stuff that actually affects how we integrate with and operate a third-party product (ISO 27001 gives me no clue whether you support SSO, have viable logs that I can actually ship to the SIEM, etc.), and hopefully simplifies both the due diligence we do on our suppliers and that done on us by our customers, by allowing us to reduce a good chunk of the product feature questions to “does your product meet the MVSP standard”.<p>There was a pretty useful discussion of it on the Google Cloud Security podcast a while ago: <a href="https://cloud.withgoogle.com/cloudsecurity/podcast/ep114-minimal-viable-secure-product-mvsp-is-that-a-thing/" rel="nofollow noreferrer">https://cloud.withgoogle.com/cloudsecurity/podcast/ep114-min...</a>
> A minimum security baseline for enterprise-ready products and services<p>Sure, although <i>minimum viable</i> and <i>enterprise-ready</i> seems like an oxymoron to me.<p>Step one: define MVP.<p>Step two: add <i>minimum enterprise security, minimum enterprise scalability, minimum enterprise legal compliance, minimum enterprise cost controls, minimum social responsibility ...</i><p>Step three: why the hell is MVP 28 months late?
A list of security recommendations about MV(S)Ps created by a consortium of pretty big non-startup companies. The joke tells itself.<p>But seriously, I think the problem is in the name. This is not a set of best practices for any kind of startup but rather a public checklist that anyone looking to provide some kind of service or product to any of the contributors (i.e. Google, Salesforce, Okta, or even other companies subscribing to this extreme SecOps cargo cult) must comply with.
Minimum Viable Secure Product, Minimum Viable Marketable Product, Minimum Viable Sellable Product.<p>I don't get it. Why do we need all these additions? Perhaps it is me who does not understand the meaning of "Viable". MVP is not a restricted, well-defined end-goal. "Viable" means viable for your case. You can fill in whatever you want. It does not mean "Viable" without security or "Viable" without "Marketability".
I was hoping for something more like:<p>* Check you're storing secrets safely and not in the code<p>* Run an automated security scanner to check for the OWASP Top 10<p>* Confirm your endpoints have correct auth/permission checks, and that all debug flags are off<p>* Ensure databases are protected (e.g. not exposed to the internet, or access heavily restricted)<p>* Enable 2FA for everyone in the company and use a password manager<p>* Check the data protection laws and disclosure SLAs of your country
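To make the first item above concrete, a minimal sketch of keeping secrets out of the code by reading them from the environment (or a secret manager) at startup. The function and variable names here are illustrative, not from any particular framework:

```python
import os


def get_secret(name: str) -> str:
    # Secrets come from the environment (populated by a secret
    # manager, CI, or a .env file excluded from version control),
    # never from hardcoded strings in source.
    value = os.environ.get(name)
    if value is None:
        # Fail fast at startup rather than at first use.
        raise RuntimeError(f"missing required secret: {name}")
    return value
```

Failing fast on a missing variable means a misconfigured deployment dies immediately instead of limping along until the secret is first needed.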
I like the idea, but I'd be skeptical that it's practical for very small companies.<p>That doesn't let them off the hook, but all that process overhead can be a killer.<p>I worked for a company that was PCI DSS, and it often made it impossible to get any work done (to be fair, it had more to do with how they implemented it, than the standard, itself).<p>But I agree that security needs to be Job One for everyone, regardless of size.
While this may appear to be an altruistic consortium of security-minded companies trying to help startups do the right thing, my skeptical take is that it's primarily a way to drum up business, which explains why the contributors list largely consists of vendors that help you check these items off your list.<p>Google (one of the main sponsors) and other cloud-hosting vendors can essentially say, "Sure, you can cobble all of this together yourself -- or you can buy our services with much of this already baked in."
Interesting, but I'm having trouble thinking of a startup that was killed or even harmed by a security issue (outside cryptocurrency stuff). Anecdata-wise, startup graveyard stories don't seem to have being-hacked as one of their failure causes either, unless I'm missing some big ones.<p>An old product attempt of mine was a threat model wizard that generated a simple deck with some very clear viz of the model and threat exposure, addressed to all concerned parties - and then backlog issues and themes and epics for implementing or verifying the controls it implied. It reduced the checklist/risk assessment process to about an hour from the weeks of spreadsheet work that was a lot of people's jobs, and it put security into the product/project dev process. Pretty much aha.io (the product management tool) but for security.<p>What I learned from it is a) self-assessments weren't valuable to large org customers (who need to be able to say they were told by a 3rd party, so it's not like product management that way), b) the need of small orgs / startups was a standard compliance checklist to make standard assertions to potential customers as part of vendor onboarding, and a faster/better model didn't get them that standard, and c) most security people trade on their insights, and codifying their ontology even just to speed it up undermined their leverage in their orgs.<p>This checklist or something like it might actually meet the real needs of some startups who must make templated assertions to customers for vendor onboarding, but most startups would be lucky and probably pretty chuffed if they actually had something someone wanted to steal.
It was nice to see this kind of topic, though reading the list, it felt a little higher-level than I anticipated, even for a startup targeting enterprise sales.<p>I would love to hear what others here consider important to make a minimally viable and secure product when starting out as a startup.<p>How much of this list is hard needs, soft needs, beneficial, or nice-to-have preferences/interpretations?<p>Many startups wouldn't make it to the start line fulfilling this list. If I could pick a solution architect's brain looking at this, I'd be curious what ways there are to satisfy these items through architecture, design considerations, or using particular parts of particular platforms.
If this is the "minimum", I'd love to see what they left off the list.<p>My minimum for a public-facing MVP:<p>- All services use HTTPS to talk to users and each other.<p>- High standard of password hashing (or prefer a managed service like Cognito or Firebase).<p>- Plan for GDPR compliance (not exactly security related, but in the wheelhouse. The GDPR grifters come out of the woodwork quickly, so you need all the popups and account deletion stuff from day 1 if you are releasing to the EU).<p>- QA specifically for security: users can't access each other's files, authentication controls work, etc.<p>- Don't store or handle credit cards - use a vendor like Stripe.<p>- Ensure all dev tools enforce 2FA where possible (GitHub, AWS, etc.).<p>- A basic backup system.<p>Then post-MVP, start working on the following:<p>- centralised logging.<p>- dependency patching plan.<p>- etc.
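To put the password-hashing item above in concrete terms, a minimal stdlib sketch using scrypt with a random per-user salt. The cost parameters are illustrative; in practice a managed identity service, or a maintained library like bcrypt or argon2-cffi, is usually the better choice:

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt defeats precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    # scrypt is deliberately memory-hard; n/r/p tune the work factor.
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, digest)
```

Store the salt and digest per user; never store or log the plaintext password.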
Totally arbitrary. Some of the advice is actually bad. The only critically relevant pieces of information are the password guidelines and HTTPS-related bullet points, for which there are actual authorities to read, not this waste of time and effort.
Isn't this just a ploy to get decision makers to think they need to use these services/platforms to achieve MVSP (and to think MVSP is a thing)? Man the internet really gentrified in a gross, banal way.
> Cross-site request forgery. Example: Accepting requests with an Origin header from a different domain<p>English is not my first language and I’m not a security expert, but this description seems a bit misleading.<p>The “Origin header” part should be left out. You don’t check where it’s coming from (you can’t know). You either send a unique token back and forth, or you defer the issue to the browser (cookies with SameSite strict/lax).<p>And it’s not “requests”, it’s “unsafe requests”, i.e. mutations (POST, PUT, PATCH, DELETE). It’s not just a nitpick: if your GETs are not safe, you might cause deeper issues.
The intent of the "minimum" in MVP is to produce something quickly in order to learn what resonates with people who might actually pay you money for something. It would be useful to have a "detect my disastrous security vulnerabilities" scanner with various language plugins for Go, Dart, Swift, whatevs, that I can run on the source of my MVP so that I don't have to waste time reading a checklist. Does that exist? Dunno.
Not sure I can get behind the idea. This is like an oxymoron. I get the heightened need for security, but this is not the way. Security is a journey, not a checklist.
> Comply with all industry security standards relevant to your business such as PCI DSS, HITRUST, ISO27001, and SSAE 18
> Comply with local laws and regulations in jurisdictions applicable to your company and your customers, such as GDPR, Binding Corporate Rules, and Standard Contractual Clauses
> Ensure data localization requirements are implemented in line with local regulations and contractual obligations<p>Whoever wrote this must be so irrationally out of touch with the startup space (or thinking of older billion-dollar unicorn startups) to think that an MVP needs to do any of the above. I wouldn't care about GDPR until I had a somewhat strong EU userbase. Try to respect the spirit, sure, but it's not like it will be enforced on a small business within its first few years of existence.<p>Localization for an MVP is even more out of touch. Make an English, US-centric version first (I'm not even American), put it out there, and work on localization once you've had some success.