> <i>a P0 (Blocker) bug is a ship-stopper which has to be fixed before the next release can happen. P1s (Critical) are important but not something we’d stop a release for, while P2s (Medium, Low) represent pretty much anything that will only be fixed when our development team has cleared all P0 and P1 bugs</i><p>> <i>It’s important to note that a bug’s priority shouldn’t be confused with its severity, which exists as an entirely different dimension... ‘Severity’ to represent a (somewhat) objective assessment of a bug’s impact to the functionality of the system. From Critical representing major crashes or hangs, to Minor functional issues or purely Cosmetic blemishes. While a QA engineer will classify a bug’s Severity at creation time, it’s the PM who assigns the Priority at the time of triage based on their knowledge of the client’s business and product requirements</i><p>I don't see the benefit in having severity and priority as 2 distinct dimensions. Given a specific priority, what additional benefit do you get from having a severity label as well? Based on the author's description, I can't think of any.<p>If the goal is to better communicate what "type" of bug it is, using an enum classification (e.g., cosmetic vs crash vs data-corruption) would be more appropriate than a numeric scale.
I've never really liked the priority or severity fields on bugs.<p>"Priority" feels to me like a band-aid on a UI deficiency in many electronic issue tracking systems: No way to manually sort the bug queue. If I can manually sort, it's easy for me to quickly find a spot where it's directly between something I'd rather have fixed sooner, and something I'd rather have fixed later. Which is ultimately what prioritizing really is - anything else is just beating around the bush.<p>And I don't believe that "severity", conceptualized as a linear scale, is meaningful. Whenever I see "severity X", my very next question is "why?" The severity score itself is, at best, not particularly actionable, and, at worst, something other stakeholders will learn to use as a vehicle for squeaky wheel prioritization. Or an agent for semantic rot, as someone decides to try and make the severity scale meaningful by imposing a strict hierarchy on all the possible kinds of things that might go wrong. (Or, rather, all the ones they thought of at the time.) I'd rather see tags like "data loss" or "known workaround exists". They paint a clearer picture, and they're harder to game.
It sounds like they're using:<p>- "severity" as the level of importance the tester assigned when they found the bug, and
- "priority" as the level of importance a developer assigned to it after triage.<p>That first piece of information is important when you need to determine which bugs to look at first for triage, but how important is it after that point? Can it not just be replaced by the developer's judgement?<p>In other words, could you not have a single priority field? The tester uses a heuristic to assign an initial priority (e.g., crashes are P0, cosmetic bugs are P4). The dev uses this to prioritize which bugs to triage first, and once they've determined a new priority based on customer experience combined with app behaviour, they replace the old one.<p>If you really need to go back and check what the tester assigned, then I assume you can just use the "history" or "revision" feature in your bug tracking app.<p>Additionally, as suggested in a different comment, you can add a label for the bug's type if you feel that's important (crashing, lagging, cosmetic, etc.).<p>Perhaps the message here is that the app's behaviour in a vacuum is not the sole determinant of its priority. But then that should be the message, rather than claiming there is another metric which needs to be separately tracked when evaluating bugs.
I notice this sentence:<p>> Effective bug triage is an essential hygiene and success factor for anyone managing a software release.<p>I could not agree more. To some extent, it applies to all change requests. That said, there is no need to be too specific in your change request specification.<p>Where I work, we have set up only a few parameters to help us triage several hundred CRs:<p>- Priority: Low, Medium, High<p>- Type: Add (new feature), Enhancement (small change to an existing feature) and Bug<p>- Env: Production, Acceptance, Development, ...<p>In 99% of cases, this is completely sufficient. As said by others, adding severity does not add useful information to a bug triage (it may provide information about the bug itself). If there is data loss involved but the client is fine with it, that would never lead to a high-priority bug. If the client cares about their data, then be sure the priority will be high. It means that bug triage really is priority assignment (no matter the severity).<p>In our workflow, only high-priority bugs (usually hitting production) are subject to hot-fixes. All other tickets go into carefully crafted milestones (a package to deliver). As all CRs belonging to the same milestone are delivered together, they actually all share the same priority, which becomes the priority of the milestone compared to other milestones.<p>(edited for formatting)
Google has this concept. P0 S0 is the most urgent. P2 S2 is “standard”. P3/4 S3/4 is pretty much code for “this will never happen”. 99% of the time Pn = Sn. When it doesn’t it’s almost always off by 1. So practically I never saw the point in this distinction.
There is certainly a difference between the two, but to say that they are completely different things is going too far, as can be seen from some of the examples used to justify that claim - e.g.:<p><i>Something that is Critical severity doesn’t necessarily mean its a P0. That crashing bug in your bio screen? Face it, nobody really cares to read it and few will ever click the button, so its not something that should be a P0 ship-stopper.</i><p>In what way is that an issue of 'critical severity'? The quoted text contains an explanation of why it is not.<p>The real difference is that severity is a measure of the harm done, while priority is a plan for responding to it. Clearly they are different, but clearly they will likely be correlated, at least when the severity is high.
The > and < comparisons have no meaning for complex numbers. If you don't have a way to determine which (P + iS) is bigger, then you can't sort your bugs.<p>In other words, by adding a severity dimension, you lost your ability to order your bugs in any ascending or descending order. That defeats the whole point of having a priority field in the bug in the first place. Now you need to have a person decide if your [P1 S2] bug should be solved before your [P2 S1] bug by some arbitrary mechanism. And once you've made that decision, you'll communicate it with "Do this first" - which is essentially a priority field that now exists outside your system of [P S] classification.<p>Get rid of severity - or treat it as an enum that helps inform the priority of each bug and brings sanity to your system.
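The sorting problem can be made concrete. A single priority field gives a total order for free; a (priority, severity) pair only sorts once you pick some collapsing rule, and any such rule is itself a priority decision. A sketch (the bug titles and numbers are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Bug:
    title: str
    priority: int  # P0 is most urgent
    severity: int  # S0 is most severe

bugs = [
    Bug("crash in bio screen", priority=2, severity=1),
    Bug("typo in checkout flow", priority=1, severity=2),
]

# With priority alone, sorting is unambiguous:
by_priority = sorted(bugs, key=lambda b: b.priority)

# With two dimensions, *some* collapsing rule has to be chosen.
# Lexicographic order (priority first, severity as tiebreak) is one
# arbitrary choice among many -- which is exactly the point above:
by_both = sorted(bugs, key=lambda b: (b.priority, b.severity))
```

Whether the [P1 S2] typo really should beat the [P2 S1] crash is a judgement call the tuple can't make for you.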
This is confusing. It amounts to saying ‘bugs have an inherent property called ‘severity’. We explicitly track it in our system. It has no effect on how we prioritize work’<p>If there are people on the team who insist on objectively classifying bugs with no reference to customer impact, you don’t need to indulge them by giving them a meaningless drop down to play with in Jira, you need to get them up to speed with understanding your product and release priorities.
> <i>we use the JIRA field called ‘Severity’ to represent a (somewhat) objective assessment of a bug’s impact to the functionality of the system</i><p>> <i>crashing bug in your bio screen</i><p>The examples make it clear that they use "severity" to map <i>types of bug</i> to <i>severity level</i> in a completely opaque way. Regardless of the impact of the bug, "crash" -> "high" and "typo" -> "low". A crash is high-severity even if it has low impact in every category ("functionality of the system" included), but a typo is low-severity even if it is high-impact in an important category? Why?<p>> <i>a misplaced comma might seem like nothing worthy of emergent attention, but what if it completely re-purposes your brand statement</i><p>This just sounds like a high-severity bug: it may be low "functionality of the system" severity, but it is high "brand identity/loyalty" severity.<p>I think it becomes obvious that there is not much benefit to a special field for "functionality of the system" severity <i>or</i> even the underlying bug "type" that it seems to be representing. Severity is used here only as a triage heuristic anyway. Great. Now you can freely forget about it and find a better triage heuristic for your QA team so that they can assign the priority more accurately.
Unless I'm missing something, no one has mentioned the effort to fix a problem. I've seen cosmetic issues (using the wrong masculine / feminine form in French place names, for example) which would have taken a huge amount of time to fix correctly, and internal server errors which were the result of someone accidentally including a comma after a value in Python, turning it into a tuple where one was not expected. At some point that should be factored into the order of priority.
I work on an embedded system. We had found a major bug and labeled it as "stop ship" because we didn't want anything more getting out. It is better for the factory to build nothing (paying all the assembly line workers to stand around if required - though more likely build a different product) than to have one more customer see this product and encounter this issue. I don't know the details on this issue, but there have been software issues that could kill people in the worst case.<p>Anyway, the developer got this stop ship the day before Thanksgiving, so he worked all weekend and Monday morning turned in a fix. At which time the factory manager told him "thanks, but one of the parts to make this machine is on back order for 4 months so we weren't planning to make that product anyway".
I think the issue here is that "Severity", as used here, isn't a good label. It sounds like they're mapping the kind of effect the bug has on the system to "Severity" values. E.g., crashes and hangs map to Severity "critical" and typos map to "minor".<p>The problem is that "Severity" is being used for something that doesn't really match its intuitive meaning.<p>Maybe call the field "kind of impact" (or something similar) and have enumerated values or tags like "crash", "hang", "typo", etc. Now the field is intuitive and you don't need to retrain (or write blog posts) to get people to understand your new meaning for the word "severity".
Severity is how the reporter sees the problem: if it interferes with their particular use case, it's of the highest severity. Almost all bugs end up being of the highest severity.<p>Priority is how the developer (or, more likely, the project management team) sees the bug. Highest priority bugs are the ones in the project manager's spreadsheet and ended up there because of the most vocal complainers (paying customers, an inconvenienced CEO, etc). The rest will never get fixed.<p>It's impractical to use the severity of the bug to rank the priority, because they're not strongly correlated.
I've preferred Urgency and Impact as my terminology, rather than Priority and Severity, respectively. But yes, I've been frustrated that these concepts are often combined when they shouldn't be.
This may be a controversial opinion but I strongly disagree with using bug priority/severity/urgency/impact etc to do work prioritization.<p>I think there are two separate things about tickets: one is about the technical merit of a ticket: documentation, evidence, reproducer, quick info about context. Basically, a ticket as a tool to gather and sort "technical discussion". This is fine and encouraged.<p>Then there is the "priority" - I don't think it is a feature of a ticket. It's a property of context - do users complain _now_? Is the fix risky? Is the boss shouting? Do we lose money? Does the PM want this work done for personal reasons? How much time will it take for me to do a context switch to start the work?<p>I don't think it's possible to add this "priority" (as defined by social interactions) to a ticket. I always found I prioritize tickets that I understand the social context of. A colleague nagging me by walking to my desk is 100X the priority for me than "P1 critical" written by some PM on the other side of the planet.<p>The point: I try not to fight with the social aspect. If you want me to do a ticket, good. Come over. Do a hangout and explain the context. Setting a mindless value in some field on some random badly described ticket is not going to make me de-prioritize other things.
My take is not to split priority and severity, but priority and tags. For example, P0 typo,UI and P2 crash,UI ... the crash might be P2 if it's not severe for the product itself, as in the example where it's on the bio page or whatever.<p>Tags/labels have been working quite well for me. Priority can be replaced with milestones, but you have to create them for each release to ensure P0s are handled quickly enough.
Google's internal bug tracker has both priority and severity, and it's kind of a running joke that severity is meaningless (because it isn't meaningfully different from priority). On my team for example we leave it at its default value and ignore it completely.
Severity and Priority are dependent variables. You can try to treat them as independent but eventually it will catch up with you.<p>Where you see companies get into trouble is when an unlikely high severity bug actually happens (or becomes likely) and it comes out that they've known about it for a while. It's also the sort of thing that's a source for employee burnout.<p>My old standby for Severity is a 5 point scale based on data loss. It's a little fuzzy and everyone has to agree to definitions, but as a first proposal I'd say 5 is unrecoverable loss of pre-existing data. 4 is losing data entered now, 3 is loss that the user can work around (eg, enter the data via this other workflow and it's not lost), and honestly what constitutes 1 and 2 is nitpicky, because most places can't even keep up with all of their 3's, so arguing that your scenario is a 2 instead of a 1 is a bad expenditure of energy.<p>Save that energy for the fact that the business is going to rate 90% of all work as priority 1 or priority 5, sometimes re-ranking them after the fact. Especially for perf or security issues - always lowest priority until we don't have it, and then blame the engineer in the grand tradition of, "Why did you let me eat this whole box of chocolates?"
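That 5-point data-loss scale maps naturally onto an ordered enum. A sketch of the proposal (the member names are made up, and the fuzzy bottom of the scale is deliberately left fuzzy, per the comment):

```python
from enum import IntEnum

class DataLossSeverity(IntEnum):
    """5-point severity scale anchored on data loss."""
    NITPICK_MINOR = 1       # deliberately fuzzy; arguing 1 vs 2
    NITPICK_MAJOR = 2       # is a bad expenditure of energy
    RECOVERABLE_LOSS = 3    # user can work around (enter data via another workflow)
    NEW_DATA_LOST = 4       # data being entered right now is lost
    UNRECOVERABLE_LOSS = 5  # pre-existing data destroyed

# IntEnum gives the ordering for free, so "is this worse than that?"
# is a plain comparison rather than a triage debate:
assert DataLossSeverity.UNRECOVERABLE_LOSS > DataLossSeverity.RECOVERABLE_LOSS
```

As the comment notes, everyone still has to agree on the definitions behind the names; the enum just keeps the agreed scale from drifting.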
If people's first impression of what you want your UI to mean is different often enough to cause problems (for example see the amount of debate here), that is the bug, not the people misinterpreting it. The use of these terms in Jira is one of my pet peeves, since different teams often use them very differently and can be real fun when sharing Jira boards with multiple opinionated companies.<p>The colloquial usage of 'severity' and 'priority' just has too much overlap. Something like pairing 'likelihood' and 'severity' assessments, as is standard in safety, would still be general and make it immediately clear why something could be severe yet low priority, especially since people commonly mean both likely and severe when ranking a problem as severe. Keeping with the author's definition of severe, renaming severity as 'System Impact' at least immediately narrows down what is severe, but still carries the possible implication of 'frequent'.<p>I think fighting for a given interpretation here without using different terms is akin to insisting that people simply understand that a 'significant difference' reported in science means 'statistically different' as opposed to an 'important difference'.
My answer, originally posted to StackOverflow:<p>High priority, low severity: Your program at one point displays an uninitialized value, which consistently shows "BadDon". You are conducting a demo tomorrow that is make-or-break for your startup. You're pitching to a guy named "Don".<p>Low priority, high severity: Your ICBM will experience an uncommanded launch when Feb 29 does not exist in a year divisible by four.
Severity is subjective: those users who are not affected by a bug find it to have low severity (possibly zero). Those who are affected by it, and have no workaround find it a showstopper (high severity).<p>The consequences add to the severity. Not being able to save a document versus having your identity stolen versus an aircraft crashing are different severities. Severity fields that have four or five values can be difficult to use due to the orders of magnitude differences in consequences.<p>Since severity is experienced differently by different users, if only a single severity field is attached to a bug, it is somewhat tricky to use. A guiding principle should be that the bug reporter should imagine the circumstances of the "worst-case user": the user who is most likely impacted by the bug, with the worst consequences, and set the severity accordingly. It can feel wrong to do this, though, when it seems that such a user is purely imaginary; there is no actual user who is affected in that way or to that extent.<p>Priority is nearly the same thing as severity, but from the perspective of the user being the project itself. A bug has high priority if the project is severely affected. Priority usually drives what gets fixed now, versus later.<p>The project cares about the users, so if a large number of users experience a high severity, that bug will likely be treated with priority. The project may also care about the largest users who bring in the most business, and other things like pride in its good reputation. It might give more priority to a bug that is more embarrassing. Or it might give priority to a bug that is more widely publicized, such that there is attention on the project's handling of it. A bug that creates a blocking situation in the project's own development will typically be treated as high priority, even if it hasn't been released to any users.
We use three fields to triage a bug. When the bug is created, the PO/support team/dev tries, to the best of their knowledge, to assess:<p>1. Severity - roughly defined as in the article<p>2. Occurrence - How probable is it that this bug will happen in the field? How many users would be impacted? How much support will be needed for this issue?
Low - Only some very special workflows or mainly dev work impacted.
Medium - Some regular users.
High - Many regular users.<p>3. Reproducibility - How often will the bug occur when we follow the steps.
Low - Single occurrence or difficult to reproduce
Medium - Erratic behavior, thread problems
High - It will almost always happen.<p>Then a PO can set a priority and give the ticket the status "will not fix" or "to be analyzed". The dev team then takes the bug and does a time-boxed (1 day max) first analysis and comes back with an estimate of how big an effort the fix is. Then a PO can reprioritize and set the status to "to be fixed" or not.
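One illustrative way such a three-field scheme could feed a first-pass ordering is to multiply the three assessments; this is purely a sketch (the scoring rule is my assumption, not from the comment), and the actual priority and will-not-fix calls stay with the PO, as described above:

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def triage_score(severity: Level, occurrence: Level, reproducibility: Level) -> int:
    # Multiplying means a bug has to rate on every axis to float up:
    # a severe bug almost nobody hits, or a common bug that barely
    # reproduces, both score lower than one that is bad on all three.
    return int(severity) * int(occurrence) * int(reproducibility)

# A severe, widely-hit, always-reproducible bug tops the queue:
worst = triage_score(Level.HIGH, Level.HIGH, Level.HIGH)   # 27
mild = triage_score(Level.LOW, Level.MEDIUM, Level.LOW)    # 2
```

The score is only a queue order for the time-boxed analysis; the effort estimate that comes back is what the PO reprioritizes on.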
The post doesn't answer the biggest question, which is "What would I do with this additional information"? In concrete terms, what actions would your team take for a P2S0 that would be different from what they would do for a P2S3?<p>If anyone has a workflow where "severity" is important for your decision-making, I'm curious to learn about it.
I wish I could get my organization to explore this a little more. We tend to use a single priority value. But it doesn't feel like <i>prioritization</i> to me. If you have 100 issues that all have the same priority level, that means you don't care what order they're dealt with in - which is never really the case. There's invariably another level of "prioritization" that has to happen <i>within</i> each level. But instead of being codified, it gets left to casual discussion, or worse, it never gets discussed.<p>This has led me to think that a hierarchical priority structure would be useful. So we could say that these 100 Level 3 issues are generally of the same priority, but there's a ranking within that. But even the highest among them is still less than the lowest Level 2 issue.<p>That's not to say that it shouldn't also be 2-dimensional though.
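One lightweight way to codify that hierarchy is a (level, rank-within-level) pair compared lexicographically; a sketch with invented issues:

```python
issues = [
    ("typo in settings page", (3, 1)),  # Level 3, top rank within its level
    ("crash on export",       (2, 5)),  # Level 2, bottom rank within its level
    ("data loss on save",     (1, 1)),  # Level 1
]

# Python compares tuples lexicographically: the level dominates and the
# within-level rank only breaks ties, so even the highest-ranked Level 3
# issue still sorts after the lowest-ranked Level 2 issue.
work_order = [title for title, key in sorted(issues, key=lambda i: i[1])]
```

This codifies the "prioritization within each level" that otherwise lives in casual discussion, while preserving the coarse levels people already agree on.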
In my area of software, we provide customised solutions to large enterprises. We have an app that takes in a lot of dimensions about each issue report and spits out an "objective" severity. This app has been honed over 20 years of reviewing issues. That's the only severity that's allowed into JIRA. By default, the priority is set the same as the severity. When we review the issue list with the customers, however, they are free to adjust the priority according to their unique business needs, and developers work on issues in priority order only, not severity. It's a good system, and it ensures that we're working on what the customer needs and not our isolated view of severity. Granted, we will consult with the customer during triage and give them a steer based upon our experience, but they have the final say.
I am very fond of <a href="https://lostgarden.home.blog/2008/05/20/improving-bug-triage-with-user-pain/" rel="nofollow">https://lostgarden.home.blog/2008/05/20/improving-bug-triage...</a> which established a pain metric that isn’t priority or severity.<p>Likewise, I am fond of PEF/REV <a href="https://www.fincher.org/tips/General/SoftwareDevelopment/BugTracking.shtml" rel="nofollow">https://www.fincher.org/tips/General/SoftwareDevelopment/Bug...</a> where the user side and developer side of a bug can be measured separately to determine the prioritization.
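For context, the "user pain" approach boils down to multiplying a handful of per-bug axis scores and normalizing against the maximum possible product. A sketch of that shape, with assumed axes and ranges (not the exact published scales; see the links for the real method):

```python
def pain_score(axis_scores, axis_maxima):
    # Pain = product of the axis scores, normalized to a percentage of
    # the maximum possible product. The result is a single triage number
    # that is deliberately neither a "priority" nor a "severity" label.
    product, maximum = 1, 1
    for score, top in zip(axis_scores, axis_maxima):
        product *= score
        maximum *= top
    return 100.0 * product / maximum

# E.g. with three assumed axes (type, likelihood, who-is-affected),
# each scored 1-5:
worst = pain_score((5, 5, 5), (5, 5, 5))  # 100.0
mild = pain_score((1, 2, 1), (5, 5, 5))   # 1.6
```

Which axes to use, and their ranges, is the part each team tailors; the normalization is what makes scores comparable across bugs.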
From a support engineer perspective, I found severity levels very helpful. PagerDuty describes it well here: <a href="https://response.pagerduty.com/before/severity_levels/" rel="nofollow">https://response.pagerduty.com/before/severity_levels/</a><p>From a developer perspective, I've had a hard time whenever I see the two labels Priority and Severity used together. I find I need to apply some sort of mental kung-fu every time to remind myself what this article is stating. (e.g. Low Priority, High Severity) The reality is, it IS confusing. Kudos to Blue Label Labs for effectively managing it well.<p>(edited: include an example)
I have worked in a development team that had both priority and severity. The main difficulty I encountered was that people would silently factor one into the other.<p>Some would factor severity into priority (<i>This is a crashing-type bug, so we should prioritize it</i>), or priority into severity (<i>This bug only causes minor harm, so it's not a high priority for us</i>). But this was done silently and unconsciously, and different people had different directions for how they would do it. Due to this experience, I think one metric is best.
Given a list of items one can purchase (here the currency is engineering resources), given each one's cost and benefit (assuming each item is mostly independent), the optimal strategy is approximately to sort them in decreasing order of benefit/cost and work from the top of the list.<p>In the terminology I use:<p>Severity = total impact of the bug = (impact to each customer that experiences the bug) * (estimated rate at which customers experience the bug)<p>Priority level = severity / cost, rounded to fit in one of n buckets
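Under those definitions, the sorting rule can be written down directly; a sketch in which the bucket thresholds and all the numbers are invented for illustration:

```python
def severity(impact_per_customer, hit_rate):
    # Total impact = per-customer impact x estimated rate at which
    # customers experience the bug, as defined above.
    return impact_per_customer * hit_rate

def priority_level(sev, cost, thresholds=(10.0, 3.0, 1.0)):
    # Benefit/cost ratio, rounded into one of n+1 buckets (P0 best).
    # The cutoff values are arbitrary; only their ordering matters.
    ratio = sev / cost
    for level, cutoff in enumerate(thresholds):
        if ratio >= cutoff:
            return f"P{level}"
    return f"P{len(thresholds)}"

# A moderately severe but very cheap fix can outrank a severe, costly one:
cheap = priority_level(severity(impact_per_customer=4, hit_rate=0.5), cost=0.1)    # ratio 20
costly = priority_level(severity(impact_per_customer=9, hit_rate=0.9), cost=10.0)  # ratio 0.81
```

Dividing by cost is what distinguishes this from a pure severity ranking: it encodes the "work from the top of the benefit/cost list" strategy directly.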
Sometimes I would almost advocate a little confusion, because who is to say you've correctly identified the priority? Who's to say the bug is correctly triaged?
They provide some distinction between the impact and how soon something is to be dealt with. But in many cases, they tend to go together in a proportional manner, at least for the top values of severity and priority. And in all the cases I've seen, people understand them as directly linked together and never try to assign different values (that seem to crisscross if you put them in a two column table).
I disagree. If a lot of people know about the bug, and actually take time to reach out and tell you, it must be more severe than you thought.
People these days are used to bugs. And bugs that aren't as severe often go completely unreported; it's so common we shrug it off. People tend to report bugs when they see no other option.
On some level it’s sort of weird that priority is a field at all, when you also presumably have an overlapping concept of “order we will do these things in”. Priority literally means the degree to which something needs to happen <i>prior</i> to another thing.
We switched to P1, P2, P3. Why? A company that bought us used P1, P2, and P3, and they just couldn't comprehend that most of our "P1" bugs were really P2 bugs under their system.
The end point is "how does it change my work?" It's not clear from the article why we need Severity at all if it doesn't affect my work.
Misunderstanding of priority and severity fields has been going on for decades and suggests that they're not really useful terms.<p>Priority in particular is quite subjective and also changes depending on what else is going on (other tickets perhaps?)<p>Having said that, the bug tracker we use provides the fields by default and we try to use them intuitively. We also try to discourage the "everything is P0" behaviour that happens when people use an arbitrary numerical scale. (What even is P0? I see it pop up when people discover that all their current tickets are P1.)<p>Anyway, here's ours. They're kind of ordered but mostly try to be descriptive and sympathetic to the idea that the judgement is subjective:<p>Priorities<p>* "If time permits"<p>* "Next"<p>* "Do not defer"<p>* "Blocks dependants"<p>* "Release blocker"<p>* "Don't go home"<p>The default is "If time permits".<p>Note how all of them are pretty "important" sounding except the "lowest" which instead admits that sometimes things just can't be done because of other factors rather than because they're not important.<p>"Next" means "do this next" or "in the next phase of work / sprint / release" and things can climb up to this priority as we take decisions.<p>"Do not defer" means that it <i>must</i> be done in the milestone it's allocated to. It can't be deferred to another milestone. i.e. we've made decisions that mean it can't be deferred to the next phase. This is for things that we'll need for the next phase or will become blockers but aren't yet. It's not for things where we have generally decided that Now Is The Time. There must be a specific reason it can't be deferred. We use the milestone features to actually decide what's in or out of a scope of work.<p>"Blocks dependants" is pretty high up and as such needs to be explicit about what it's used for. If your bug blocks something else or has something that depends on a fix then the schedule is going to get messy if it's not done. 
It's hard for someone with authority to wedge something in here because they feel like it.<p>"Release blocker" is also pretty high up. This is where people get to jump up and down and explain why they think their bug is so important (and why it doesn't fit into "do not defer"). This is for stuff that means we can't actually ship the release rather than things we'd just like to be in it. For example "we can't build the software until this is fixed". Hopefully reasonably rare stuff. You need to explain why this blocks a specific release rather than the release otherwise being enough better than the previous one to deliver value to users.<p>"Don't go home". Sometimes things really are that bad and need to be fixed. At least the person assigning this knows what they're asking, what the implications are and how bad it looks if everything always ends up in this bucket.<p>Severity works a bit differently. These things are hints from the reporter about the impact of this bug. They really are very subjective.<p>* Don't care<p>* Embarrassing<p>* Minor peril<p>* Major shitstorm<p>* QA blockers<p>* Showstopper<p>"Don't care" is the default. We want to capture everything we know about our software and encourage people to not feel too "whiny" about filing the notes. This is often for things that end up as "enhancements" or "tasks" rather than "defects".<p>"Embarrassing". This is something that a user would notice and be left with an impression that we have less attention to detail than the best in the industry.<p>"Minor peril". This is a bug that's going to cause friction or trouble and give the user an uneasy sense or stop them from building confidence in the software.<p>"Major shitstorm". I'm never profane in my software but someone suggested this and lots of people liked it and had a good idea of how it should be used. This is for stuff that causes problems when the feature is used. 
Stuff that leaves a persistent mess that needs to be cleaned up by hand or corrupts some state or something.<p>"QA blockers". Without fixing these bugs we're unable to test our software properly. This means we might attract regressions and we probably shouldn't be releasing stuff to users. For example, if you can't save a record that contains the new features then you can't test the corresponding load functionality.<p>"Showstoppers". These are bugs that cause the whole software to break. Crash bugs, things that destroy user data, etc.<p>Most of our open bugs sit at "don't care", "embarrassing", "if time permits", "next" and occasionally "do not defer". If stuff starts creeping up the ladder then we know we have a problem coming. If stuff starts to appear straight at the top of the ladder then we get those things fixed ASAP: they don't hang around as "open" for very long!<p>A lot of the "higher" values are things that help us deal with interactions. Problems on their own are often tractable. When things start affecting other things and causing knock-on effects then trouble occurs. We try to head that off by knowing about it when it comes in and fixing those things quickly.