
How to manage oncall as an engineering manager?

68 points | by frugal10 | 8 months ago
As a relatively new engineering manager, I oversee a team handling a moderate volume of on-call issues (typically 4-5 per week). In addition to managing production incidents, our on-call responsibilities extend to monitoring application and infrastructure alerts.

The challenge I'm currently facing is ensuring that our on-call engineers have sufficient time to focus on system improvements, particularly enhancing operational experience (Opex). Often, the on-call engineers are pulled into working on production features or long-term fixes from previous issues, leaving little bandwidth for proactive system improvements.

I am looking for a framework that will allow me to:

- Clearly define on-call priorities, balancing immediate production needs with Opex improvements.
- Manage long-term fixes related to past on-call issues without overwhelming current on-call engineers.
- Create a structured approach that ensures ongoing focus on improving operational experience over time.

34 comments

cbanek | 8 months ago
I've been on a lot of oncall lists... 4-5 per week seems extremely high to me. Have you gathered up and classified what the issues were? Are there any patterns or areas of the code that seem to be problematic? Are you actually fixing and getting to the root cause of issues, or are they getting worse? It sounds like you don't know the answer because you don't really understand the problem.

If you don't have enough time to both run the system and do new feature work, one has to give way to the other, or you have to hire additional people (but this rarely solves the problem; if anything, it tends to make it worse for a while until the new person gets their bearings).

One way that is very simple but not easy is to let the on-call engineer not do feature work and only work on on-call issues and investigating/fixing on-call issues for the period of time they are on-call, and if there isn't anything on fire, let them improve the system. This helps with things like comp time ("worked all night on the issue, now I have to show up all day tomorrow too???") and lets people actually fix issues rather than just restart services. It also gives agency to the on-call person to help fix the problems, rather than just deal with them.
gobins | 8 months ago
A few things that worked for us:

1. The roster is set weekly. You need at least 4-5 engineers so that you get rostered no more than once per month. Any more often than that and your engineers will burn out. (A quick check of this arithmetic is sketched below.)

2. There is always a primary and a secondary. The secondary gets called in cases where the primary cannot be reached.

3. You are expected to triage the issues that come up during your on-call roster but not expected to work on long-term fixes. That is something you bring to the team discussion and allocate. No one wants to do too much maintenance work.

4. Your top priorities to work on should be issues that come up repeatedly and burn your productivity. This could take up to a year. Once things settle down, your engineers should be free enough to work on things that they are interested in.

5. For any cross-team collaboration that takes more than a day, the manager should be the point of contact so that your engineers don't get shoulder-tapped and pulled away from the things they are working on.

Hope this helps.
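A minimal Python sketch of the roster arithmetic in point 1, assuming weekly primary shifts (the team sizes tried here are hypothetical):

    # Rough roster math: with weekly primary shifts and N engineers in the
    # rotation, each engineer is primary roughly once every N weeks.
    def weeks_between_shifts(team_size: int, shift_length_weeks: int = 1) -> int:
        """How many weeks pass between one engineer's primary shifts."""
        return team_size * shift_length_weeks

    for n in (3, 4, 5, 6):
        gap = weeks_between_shifts(n)
        print(f"{n} engineers -> primary about every {gap} weeks "
              f"(~{52 // gap} shifts per year each)")
    # With 4-5 engineers the gap is roughly a month, matching the guidance above.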
seniortaco | 8 months ago
4-5 issues per week can be a lot or a little, depending on the severity of those issues. Likely most of them are recurring issues your team sees a few times a month whose root cause hasn't been addressed and needs to be.

Driving down oncall load is all about working smarter, not necessarily harder. 30% of the issues likely need to be fixed by another team. This needs to be identified ASAP and the issues handed off so that they can parallelize the work while your team focuses on the issues you "own".

Set up a weekly rotation for issue triage and mitigation. The engineer on call should respond to issues, prioritize based on severity, mitigate impact, and create and track root-cause issues to fix the underlying problem. These should go into an operational backlog. This is one full-time headcount on your team (but rotated).

To address the operational backlog, you need to build role expectations with your entire team. It helps if leadership is involved. Everyone needs to understand that in terms of career progression and performance evaluation, operational excellence is one of several role requirements. With these expectations clearly set, review progress with your directs in recurring 1:1s to ensure they are picking up and addressing operational-excellence work, driving down the backlog.
ipnon | 8 months ago
The simplest solution is to compensate the on-call engineer, either by paying them 2 times their hourly rate per hour on-call, or by accruing them an hour of vacation time per hour on-call. This works because it incentivizes all parties to minimize the amount of time spent in on-call alert.

Management is incentivized to minimize time spent in alert because it is now cheaper to fix the root-cause issues instead of having engineers play firefighter on weekends. Long term, which is always the only relevant timeline, this saves money by reducing engineer burnout and churn.

Engineers are also incentivized to self-organize. Those who have more free time or are seeking more compensation can volunteer for more on-call. Those who have stricter obligations outside of work can thus spend less time on alert, or ideally none at all. In this scenario, even if the root cause is never addressed, usually the local "hero" quickly becomes so inundated with money and vacation time that everyone is happy anyway.

It doesn't completely eliminate the need for on-call or the headaches that alerts inevitably induce, but it helps align seemingly opposing parties in a constructive manner. Thanks to Will Larson for suggesting this solution in his book "An Elegant Puzzle."
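A minimal sketch of the compensation idea above; the hourly rate and hours-on-call figures are purely hypothetical:

    # Two ways to compensate an on-call hour, per the comment above: pay at 2x
    # the hourly rate, or accrue one hour of vacation per hour spent on-call.
    def oncall_pay(hours_on_call: float, hourly_rate: float, multiplier: float = 2.0) -> float:
        """Extra pay owed for an on-call period."""
        return hours_on_call * hourly_rate * multiplier

    def oncall_vacation_hours(hours_on_call: float) -> float:
        """Vacation hours accrued instead of pay (1:1)."""
        return hours_on_call

    # Hypothetical example: 40 off-hours spent on-call in one rotation.
    print(oncall_pay(40, hourly_rate=75.0))   # 6000.0 in extra pay, or
    print(oncall_vacation_hours(40))          # 40.0 hours of comp time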
tthflssy | 8 months ago
Without knowing your context, it is hard to give advice that is ready to be applied. As a manager, you will need to collect and produce data about what is really happening and what the root cause is.

First, clear up the charter of your team: what should be in your team's ownership? Do you have to do everything you are doing today? Can you say no to production feature development for some time? Who do you need to convince: your team, your manager, or the whole company?

Figure out how to measure / assign value to opex improvements. E.g., you will have only 1-2 on-call issues per week instead of 4-5, and that is savings in engineering time, measurable in reliability (SLA/SLO as mentioned in another comment). Then you will understand how much time it is worth spending on those fixes and which opex ideas are worth pursuing.

Improve the efficiency of your team: are they making the right decisions and taking the right initiatives / tickets?

Argue for headcount and you will have more bandwidth after some time. Or split two people off to work only on opex improvements, and administratively give priority to these initiatives (if the rest of the team can handle on-call).
matt_s | 8 months ago
Think of on-call like medical triage. On-call should triage outage (partial/full) scenarios and respond to alerts, take immediate actions to remedy the situation (restart services, scale up, etc.), and then create follow-on tickets to address root causes, which go into the pool of work the entire team works from. Like an ER team stabilizing a patient and identifying next steps, or sending the patient off to a different team to take time solving their longer-term issue.

The team needs to collectively work project work _and_ opex work coming from on-call. On-call should be a rotation through the team. Runbooks should be created on how to deal with scenarios and iterated on to keep them updated.

Project work and opex work are related; if you have a separate team dealing with on-call from project work, then there isn't a sense of ownership of the product, since it's like throwing things over a wall to another team to clean up the mess.
windows2020 | 8 months ago
1) Identify on-call issues that aren't engineering issues or for which there's a workaround. Maybe institutional knowledge needs to be aggregated and shared.

2) Automate application monitoring by alerting at thresholds. Tweak alerts until they're correct and resolve items that trigger false positives. (A small sketch of this follows below.)

3) If issues are coming from a system designed by someone who is still there, they should handle those calls.

4) You mention long-term fixes for on-call issues. First focus on short-term fixes.

5) Set a new expectation that on-call issues are unexpected exceptions. If they occur, the root cause should be resolved. But see point 4.

6) On-call issues become so rare that there's an ordered list of people to call in the event of an issue. The team informally ensures someone is always available. But if something happens, everyone else who's available is happy to jump on a call to help understand what's going on and, if conditions permit, permanently resolve it the next business day.
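To make point 2 concrete, here is a minimal, hypothetical Python sketch of threshold alerting that only fires after several consecutive breaches, one simple way to tune out false positives (the metric, threshold, and sample values are invented for illustration):

    from collections import deque

    class ThresholdAlert:
        """Fire only after `required_breaches` consecutive samples exceed the threshold."""
        def __init__(self, threshold: float, required_breaches: int = 3):
            self.threshold = threshold
            self.required_breaches = required_breaches
            self.recent = deque(maxlen=required_breaches)

        def observe(self, value: float) -> bool:
            self.recent.append(value > self.threshold)
            return len(self.recent) == self.required_breaches and all(self.recent)

    # Hypothetical: page if p99 latency stays above 800 ms for 3 samples in a row.
    alert = ThresholdAlert(threshold=800, required_breaches=3)
    for sample in [650, 900, 820, 840, 900]:   # one blip, then a sustained breach
        if alert.observe(sample):
            print(f"page on-call: p99 latency {sample} ms sustained above threshold")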
__s | 8 months ago
Without knowing the scale of the company you're at, it's hard to give advice.

At Microsoft I headed Incident Count Reduction on my team, where opex could be top priority and the rotating on-call would have a common thread between shifts through me (i.e., I would know which issues were related or not, what fixes were in the pipe, etc.).

I'm guessing the above isn't an option for you, but you can try to drive an understanding that while someone is on call there is no expectation for them to work on anything else. That means subtracting the on-call headcount during project planning.
AdieuToLogic | 8 months ago
> ... I oversee a team handling a moderate volume of on-call issues (typically 4-5 per week). In addition to managing production incidents, our on-call responsibilities extend to monitoring application and infrastructure alerts.

Being on-call and also responsible for asynchronous alert response is its own, distinct, job. Especially when considering:

> Often, the on-call engineers are pulled into working on production features or long-term fixes from previous issues, leaving little bandwidth for proactive system improvements.

The framework you seek could be:

- hire and train enough support personnel to perform requisite monitoring
- take your development engineers out of the on-call rotation
- treat operations concerns the same as production features, prioritizing accordingly

The last point is key. *Any* system change, be it functional enhancements, operations related, or otherwise, can be approached with the same vigor and professionalism.

It is just a matter of commitment.
cjcenizal | 8 months ago
According to The Phoenix Project [0], if you can form a model of how work flows in, through, and out of your team, then you can identify its problems, prioritize them in order of criticality, and form plans for addressing them. The story's premise sounds eerily similar to what you're facing.

At the very least it's a fun read!

[0] https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592
shoo | 8 months ago
If this is just a workload vs. capacity thing, where the workload exceeds capacity, is there a way to add some back-pressure to reduce the frequency of on-call issues that your team is faced with?

Are you / your team empowered to push back and decline being responsible for certain services that haven't cleared some minimum bar of stability? E.g., "if you want to put it into prod right away, we won't block you from deploying it, but you'll be carrying the pager for it."
sholladay | 8 months ago
I would first ask the question, "Do you really need high uptime at night?" I've seen too many small startups do on-call whose product is about as critical as serving cat pictures and whose customers are mostly in a nearby time zone. That's unreasonable unless, maybe, your pay for such a role is equally ridiculous (high) and clear at the time of hiring. Don't talk existing engineers into it; show them the terms and have them volunteer.

As for the schedule, I would recommend each engineer have a 3-night shift and then a break for a couple of weeks. Ideally, they will self-assign to certain slots. Early in the week/month might be better/worse for different people.

I strongly suggest that engineers not work on ops engineering or past on-call issues while they themselves are on-call; otherwise there is a very strong incentive for them to reduce alerts, raise thresholds, and generally make the system more opaque. All such work should be done between on-call shifts, or better yet, by engineers who are never on-call.

One way that on-call engineers can contribute when there is no current incident ongoing is to write documentation. Work on runbooks. What to do when certain types of errors occur. What to do for disaster recovery.
maerF0x0 | 8 months ago
It entirely depends on what those 4-5 oncalls per week are.

4-5 PagerDuty pages means either 1) bad software or 2) mistuned alerts.

4-5 cross-team requests plus customer-service escalations, with <= 1 page per week, is not that bad, and can likely be handled by 1-week rotations, with a cooperative team covering 3-4 two-hour "breaks" where the person can work out, be with their kids/spouse, or forest bathe. That would be a decent target.

For me, the best experience across >15 years was at a company that did 2-week sprints. For 1 week you'd be primary, for 1 week you'd be secondary, and then for 4 weeks you'd be off rotation. The primary spent 100% of their time being the interrupt handler: fixing bugs, cross-team requests, customer escalations, and pages. If they ran out of work, they focused on tuning alerts or improving stability even further. So you lose one member of your team permanently to KTLO. IMO you gain more than you lose by letting the other 5-7-ish engineers be fully focused on feature work.

> Often, the on-call engineers are pulled into working on production features or long-term fixes from previous issues, leaving little bandwidth for proactive system improvements.

Have a backbone; tell someone above you "no".
jmmv | 8 months ago
A couple of things I'd suggest:

* Clearly delineate what is on-call work and how many people pay attention to it, and protect the rest of the team from such work. Otherwise, it's too easy for the team at large to fall prey to the on-call toil. That time goes unaccounted for, everybody ends up distracted by recurrent issues, siloing increases, and stress builds up. I wrote about this at length here: https://jmmv.dev/2023/08/costs-exposed-on-call-ticket-handling.html

* Set up a fair on-call schedule that minimizes the chances of people having to perform swaps later on, while ensuring that everybody is on-call roughly the same amount of time. Having to ask for swaps is stressful, particularly for new / junior folks. E.g., PagerDuty will let you create a round-robin rotation but lacks these "smarter" abilities. I wrote about how this could work here: https://jmmv.dev/2022/01/oncall-scheduling.html
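A minimal sketch of the kind of "fair" scheduling described in the second point; this is not the algorithm from the linked post, just an illustration that assigns each week to whoever has the fewest shifts so far among the people available that week (all names and dates are hypothetical):

    from typing import Dict, List, Set

    def fair_schedule(engineers: List[str],
                      weeks: List[str],
                      unavailable: Dict[str, Set[str]]) -> Dict[str, str]:
        """Greedy scheduler: each week goes to an available engineer with the
        fewest assigned shifts, reducing the need for later swaps.
        Assumes at least one engineer is available every week."""
        shifts = {e: 0 for e in engineers}
        schedule = {}
        for week in weeks:
            candidates = [e for e in engineers if week not in unavailable.get(e, set())]
            pick = min(candidates, key=lambda e: shifts[e])
            schedule[week] = pick
            shifts[pick] += 1
        return schedule

    # Hypothetical example: four engineers, six weeks, one planned absence.
    print(fair_schedule(
        engineers=["ana", "bo", "chen", "dee"],
        weeks=[f"2025-W{n:02d}" for n in range(1, 7)],
        unavailable={"bo": {"2025-W02"}},
    ))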
rozenmd | 8 months ago
I've written a few guides on this. Some quick pointers:

- You build it, you run it

If your team wrote the code, your team ensures the code keeps running.

- Continuously improve your on-call experience

Your on-call staff shouldn't be on feature work during their shift. Their job is to improve the on-call experience while not responding to alerts.

- Good processes make a good on-call experience

In short, keep and maintain runbooks / standard operating procedures.

- Have a primary on-call and a secondary on-call

If your team is big enough, having a secondary on-call (essentially, someone responding to alerts only during business hours) can help train up newbies and improve the on-call experience even faster.

- Hand over between your on-call engineers

A regular mid-week meeting to pass the baton to the next team member ensures ongoing investigations continue and that nothing falls through the cracks.

- Pay your staff

On-call is additional work; pay your staff for it (in some jurisdictions, you are legally required to).

More: https://onlineornot.com/incident-management/on-call/improving-your-teams-on-call-experience
coderintherye | 8 months ago
The exec-level framework is DORA: https://www.pentalog.com/blog/strategy/dora-metrics-maturity-models/

For your level: your team and org size is large enough that you should be able to commit someone half- or full-time to focusing on opex improvements as their sole or primary responsibility. Ask your team; there's likely someone who would actually enjoy focusing on that. If not, advocate for headcount for it.

Edit: Also ensure you have created playbooks for on-call engineers to follow, along with a documentation culture that documents the resolutions to the most common issues, so that as those issues arise again they can be easily dealt with by following the playbook.

Note: This is unpopular advice here because most people here don't want to spend their lives bug-fixing, but in reality it's a method that works when you have the right person who wants to do it.
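For reference, DORA covers four metrics: deployment frequency, lead time for changes, change failure rate, and time to restore service. A minimal sketch of how a team might compute them, using hypothetical deploy and incident records:

    from statistics import median

    # Hypothetical records: each deploy has a commit-to-deploy lead time and a
    # flag for whether it caused a failure; each incident has a time-to-restore.
    deploys = [
        {"lead_time_hours": 6, "caused_failure": False},
        {"lead_time_hours": 30, "caused_failure": True},
        {"lead_time_hours": 12, "caused_failure": False},
    ]
    incident_restore_hours = [2.5]
    period_days = 7

    deploy_frequency = len(deploys) / period_days                      # deploys per day
    lead_time = median(d["lead_time_hours"] for d in deploys)          # hours
    change_failure_rate = sum(d["caused_failure"] for d in deploys) / len(deploys)
    time_to_restore = median(incident_restore_hours)                   # hours

    print(deploy_frequency, lead_time, change_failure_rate, time_to_restore)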
jaygreco | 8 months ago
I don't know the size or structure of your team, but one thing that has worked for me, in addition to other strategies mentioned on this thread (specifically, that oncall is oncall, nothing else), is to appoint one engineer - typically someone who has a more strategic mindset - as the "OE Czar". They are NOT on call, and ideally not even in the rotation, but rather there for two reasons. One is to support oncalls when they need longer-term support, like burning down a longer-running task/investigation, or keeping continuity between shifts. The other is identifying and planning (or executing on) processes and systems for fixing issues that continually crop up. Our mandate was 20% of this person's time spent doing Czar tasks vs. scheduled work.
Joel_Mckay | 8 months ago
In general, most IT departments operate on a multi-tier service model to keep users from directly annoying your engineers.

1. Call center support desk with documented support issues and the most recent successful resolutions.

2. Junior-level technology folks dispatched for basic troubleshooting, documented repair procedures, and testing upper-support-level solutions.

3. Specialists that understand the core systems, process tier 2 bug reports, and feed repairs/features back into the chain.

4. Bipedal lab critters involved in research projects... if you are very quiet, you may see them scurry behind the rack servers back into the shadows.

Managers tend to fail when asking talent to triple/quadruple-wield roles at a firm.

No app is going to fix how inexperienced coordinators burn out staff. =3
kyrra | 8 months ago
May I recommend this chapter from the Google SRE book: https://sre.google/sre-book/being-on-call/

As well as these two from the management section: https://sre.google/sre-book/dealing-with-interrupts/ and https://sre.google/sre-book/operational-overload/
dpifke | 8 months ago
I recently wrote about how NOT to do this: https://pifke.org/posts/middle-manager-rotation/
jlund-molfese | 8 months ago
I don't think you'll find a single framework that addresses everything you're looking for in your last paragraph.

That being said, some advice:

> Clearly define on-call priorities

Sit down with your team and, if necessary, one or two stakeholders. Create a document and start listing priorities and SLAs during a meeting. The goal isn't actually the doc itself, but when you go through this exercise and solicit feedback, people should raise areas where they disagree and point out things you haven't thought of. The ordering is up to what matters to your team, but most people will tie things to revenue in some way. You can't work on everything, and the groups that complain most loudly aren't necessarily the ones who deserve the most support.

> balancing immediate production needs with Opex improvements

Well, first, are your 'immediate production needs' really immediate? If your entire product is unusable, that might be the case, but certain issues, while qualifying as production support, don't need to be prioritized immediately, and can be deferred until enough of them exist at the same time to be worked on together. Otherwise you can start by committing to certain roadmap items and then do as much production support as you have time for. Or vice versa. A lot of this depends on the stage of your company; more mature companies will naturally prioritize support over a sprint to viability.

> Manage long-term fixes related to past on-call issues without overwhelming current on-call engineers. Create a structured approach that ensures ongoing focus on improving operational experience over time.

Whenever a support task or on-call issue is completed, you should keep track of it by assigning labels or simply listing it in some tracking software. To start off, you might have really broad categories like "customer-facing" and "internal-facing" or something like that. If you find that you're spending 90% of your support time on a particular service or process, that's a good sign that investment in that area could be valuable. Over time, especially as you get a better handle on support, you should make the categories more granular so you can focus more specifically. But not so granular that only one issue per month falls into them or anything like that.
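A minimal sketch of the tracking idea in the last paragraph, using hypothetical labels and hours, to show where support time is actually going:

    from collections import Counter

    # Hypothetical on-call log: (label, hours spent) per resolved issue.
    resolved_issues = [
        ("billing-service", 3.0),
        ("billing-service", 5.5),
        ("internal-dashboard", 1.0),
        ("billing-service", 4.0),
        ("customer-facing-api", 2.0),
    ]

    hours_by_label = Counter()
    for label, hours in resolved_issues:
        hours_by_label[label] += hours

    total = sum(hours_by_label.values())
    for label, hours in hours_by_label.most_common():
        print(f"{label}: {hours:.1f}h ({hours / total:.0%} of support time)")
    # If one label dominates (say 80-90%), that is the area worth investing in.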
brudgers | 8 months ago
The best way to manage on-call is to not have on-call. On-call means the organization is understaffed. Hiring new positions to handle off hours will solve the problem. Good luck.
ojbyrne | 8 months ago
The best framework has been available for a while:

https://sre.google/sre-book/being-on-call/

https://sre.google/workbook/on-call/
matrix87 | 8 months ago
> Often, the on-call engineers are pulled into working on production features or long-term fixes from previous issues, leaving little bandwidth for proactive system improvements.

The way my company does it, on-call rotates around the team. The designated oncall person isn't expected to work on anything else.
Fire-Dragon-DoL | 8 months ago
4-5 times a week is A LOT; it's not moderate. Once a month is low. Twice a month is moderate.
nick3443 | 8 months ago
This is the root of your problem right here; unless this is part of your team's R&R, you need to prevent it:

> "Often, the on-call engineers are pulled into working on production features or long-term fixes from previous issues"
theideaofcoffee | 8 months ago
Alert fatigue. Alert fatigue. Alert fatigue. It's the single biggest quality-of-life thing that you can address to help with the annoyance that is on-call. If you know you're in store for the same alert again and again, or perhaps even know that you're going to get paged, it's hard to think about anything else. It then becomes a game of normalizing deviance and burnout: "oh, we just ignored that one last time". OK, why are they alerts then, if they can be ignored? It's just going to murder people's spirit after a while.

Someone gets called in the middle of the night? Let them take the morning to recover, no questions asked; better yet, the entire day if it was a particularly hairy issue. This is the time where your mettle as a manager is really tested against your higher-ups. If your people are putting in unscheduled time, you better be ready to cough up something in return.

Figure out what's commonly coming up and root-cause those issues so they can finally be put to bed (and your on-call can go back to bed, hah).

Everyone that touches a system gets put on call for that same system. That creates an incentive to make it resilient so they don't have to be roused, and there's less us-vs-them and throwing issues over the wall.

Beyond that, if someone is on call, that's all they should be doing. No deep feature work; they really should be focusing on alerts, what's causing them, how to minimize them, triaging and then retro-ing so they're always being pared down.

Lean on your alerting system to tell you the big things: when, why, how often, all that. The idea is you should understand exactly what is happening and why; you can't do much to fix anything if you don't know the why.

Look at your documentation. Can someone that is perhaps less than familiar with a given system easily start to debug things, or do they need to learn the entire thing before they can start fixing? Make sure your documentation is up to date, write runbooks for common issues (better yet, do some automation work to fix those; computers are good at logic like that!), and give enough context that being bleary-eyed at 3:30am isn't that much of a hindrance. Minimize the chances of having to call in a system's expert to help debug. Everyone should be contributing there (see my fourth point above).

Make sure you are keeping an eye on workload too. You may need to think about increasing the number of people on your team if actual feature work isn't getting done because you're busy fighting fires.
aaomidi | 8 months ago
Get your company to pay for on-call.

This is extremely important IMO. It sets a positive culture and makes people want to do oncall rather than hate and dread it.
ivanstojic | 8 months ago
TL;DR: on-call manages acute issues, documents steps taken, and possibly farms out immediate work to subject matter experts. Rate on-call based on the traces they leave behind. A separate on-call rotation with the same population but a longer rotation window handles fixes. Rate this rotation based on root-cause recurrence and general ticket stats trendlines.

Longer reply:

I have on-call experience for major services (DynamoDB front door, CosmosDB storage, OCI LoadBalancer) and have seen a lot of different philosophies. My take:

1. On-call should document their work step by step in tickets and make changes to operational docs as they go: a ticket that just has "manual intervention, resolved" after 3 hours is useless; documenting what's happening is actually your main job; if needed, work to analyze/resolve acute issues can be farmed out.

2. On-call is the bus driver and shouldn't be tasked with handling long-term fixes (or any other tasks beyond being on-call).

3. Handover between on-calls is very important; it prevents accidentally dropping the ball on resolving longer-time-horizon issues. Hold handover meetings.

Probably the most controversial one: a separate rotation (with a longer window, e.g. 2 weeks) should handle tasks that are RCA-related or that drive fixes to prevent recurrence.

Managers should not be first tier on any pager rotation; if you wouldn't approve pull requests, you shouldn't be on the rotation (other than as a second-tier escalation). The reverse should also hold: if you have the privilege to bless PRs, you should take your turn in the hot seat.
mise_en_place | 8 months ago
Check out the Google SRE Handbook. Still highly relevant today.
parasense | 8 months ago
This sounds like a cliché, stereotypical IT problem. And firstly, that's not a bad thing, because it's new to you. Luckily there are mountains of best practices for addressing this issue. Picking one feather from the big pile, I'd say your situation screams of Problem Management.

https://wiki.en.it-processmaps.com/index.php/Problem_Management

Your on-call folks need a way to be free of the broader problem analysis and focus on putting out the fires. The folks in problem management will take the steps to prevent problems from ever manifesting.

Once upon a time I was into Problem Management, and one issue that kept coming up was server OS patching where the Linux systems crashed upon reboot, after having applied a new kernel, etc. The customers were blaming us, and we were blaming the customer, and round and round it went. Anyhow, the new procedure was something like this: any time there was routine maintenance that would result in the machine rebooting (e.g. kernel updates), the whole system had to be brought down first to prove it was viable for upgrades. Lo and behold, machines belonging to a certain customer had a tendency to not recover after the pre-reboot. This would stop the upgrade window in its tracks, and I would be given a ticket for the next day to investigate why the machine was unreliable. Hint: a typical problem was Oracle admins playing god with /etc/fstab, and many other shenanigans. We eventually got that company to a place where the tier-2 on-call folks could have a nice life outside of work.

But I digress...

> Opex ...

Usually that term means "Operational Expenditure", as opposed to "Capex" or Capital Expenditure. It's your terminology, so it's fine, but I'd NOT say those kinds of things to anybody publicly. You might get strange looks.

I'd say let one or two of the on-call folks be given a block of a few hours each week to think of ways to kill recurring issues. Let them take turns, and give them concrete incentives to achieve results, something like a $200 bonus per resolved problem. That leads us into the next issue, which is monitoring and logging of the issues. Because if you hired consultants to come in tomorrow and you don't even have stats... there's nothing anybody could do.

Good luck
uaas | 8 months ago
Have you looked into SLO/SLA/SLIs?
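For context: an SLI is a measured indicator (e.g. the fraction of successful requests), an SLO is the internal target for it, and an SLA is the contractual version. A minimal sketch with hypothetical numbers, including the error-budget view that helps decide how much time the opex work deserves:

    # Hypothetical month of traffic for one service.
    total_requests = 10_000_000
    failed_requests = 4_200

    sli = 1 - failed_requests / total_requests     # measured availability
    slo = 0.999                                    # target: 99.9% availability
    error_budget = (1 - slo) * total_requests      # failures we can "afford"
    budget_spent = failed_requests / error_budget

    print(f"SLI: {sli:.4%}, SLO: {slo:.1%}, error budget used: {budget_spent:.0%}")
    # Burning the budget early in the period is a signal to prioritize
    # reliability/opex work over feature work until the SLI recovers.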
crdrost | 8 months ago
So you're gonna get a bunch of comments about just about everything other than the organizing framework! Which brings up:

Tip 1: Everyone has opinions about on-call. Try a bunch, see what works.

Frameworks for this stuff are usually either sprint-themed or SLO-flavored. Both of those are popular because they fit into goal-setting frameworks. You can say "okay, this sprint, what's our ticket closure rate" or you can say "okay, how are we doing with our SLOs." This also helps to scope oncall: are you just restoring service, are you identifying underlying causes, are you fixing them? But those frameworks don't directly organize. Still, it's worth learning these two points from them:

Tip 2: You want to be able to phrase something positive to leadership even if the pagers didn't ring for a little bit. That's what these both address.

Tip 3: There is more overhead if you don't just root-cause and fix the problems that you see. However, if you *do* root-cause-and-fix, then you may find that sprint planning for the oncall is "you have no other duties, you are oncall, if you get anything else done that's a nice-to-have."

Now, turning to organization... you are lucky in that you have a specific category of thing you want to improve: opex. You are unlucky that your oncall engineers are being pulled into either carryover issues or features.

I would recommend an idea that I've called "Hot Potato Agile" for this sort of circumstance. It is somewhat untested but should give a good basic starting spot. The basic setup is:

• Sprint is, say, 2 weeks, and intended oncall is 1 week secondary, then 1 week primary. That means a sprint contains 3 oncall engineers: Alice is current primary, Bob is current secondary and next primary, Carol is next secondary. (This rotation is sketched below.)

• At sprint planning everybody else has some individual priorities or whatever; Alice and Carol budget for half their output, and Bob assumes all his time will be taken by as-yet-unknown tasks.

• But those 3 must decide on an opex improvement (or tech debt, really any cleanup task) that could be completed by ~1 person in ~1 sprint. This task is the "hot potato." Ideally the three of them would come up with a ticket with a hastily scribbled checklist of 20-ish subtasks that might each look like it takes an hour or so.

Now, stealing from Goldratt, there is a rough priority categorization at any overwhelmed workplace: everything is either Hot, Red Hot, or Drop Everything and DO IT NOW. Oncall is taking on DIN and some RH, the Red Hots that specifically are embarrassing if we're not working on them over the rest. The hot potato is clearly a task from H; it doesn't have the same urgency as other tasks, yet we are treating it with that urgency. In programming terms it is a sentinel value, a null byte. This is to leverage some more of those lean manufacturing principles... create slack in the system, etc.

• The primary oncall has the responsibility of emergency response, including triage, and the authority to delegate their high-priority tasks to anyone else on the team as their highest priority. The hot potato makes this process less destructive by giving (a) a designated ready pair of hands at any time, and (b) a backup who is able to more gently wind down from whatever else they are doing before they have to join the fire brigade.

• The person with the hot potato works on its subtasks in a way that is unlike most other work you're used to. First, they have to know who their backup is (volunteer/volunteer); second, they have to know how stressed out the fire brigade is; communicating these things takes some intentional effort. They have to make it easy for their backup to pick up where they left off on the hot potato, so ideally the backup is reviewing all of their code. Lots of small commits; they are intentionally interruptible at any time. This is why we took something from maintenance/cleanup and elevated it to a sprint goal: so that people aren't super attached to it, since it isn't actually as urgent as we're making it seem.

Hope that helps as a framework for organizing the work. The big hint is that the goals need to be owned by the team, not by the individuals on the team.
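A minimal sketch of that rotation (names illustrative): with weekly hand-offs where this week's secondary becomes next week's primary, any 2-week sprint touches exactly three on-call engineers, as described above.

    from typing import List, Tuple

    def weekly_roles(team: List[str], week: int) -> Tuple[str, str]:
        """(primary, secondary) for a given week; the secondary is next week's primary."""
        return team[week % len(team)], team[(week + 1) % len(team)]

    team = ["Alice", "Bob", "Carol", "Dave", "Erin"]
    for sprint in range(3):                      # 2-week sprints
        weeks = (2 * sprint, 2 * sprint + 1)
        involved = {name for w in weeks for name in weekly_roles(team, w)}
        for w in weeks:
            primary, secondary = weekly_roles(team, w)
            print(f"sprint {sprint + 1}, week {w + 1}: primary={primary}, secondary={secondary}")
        print(f"  -> three on-call engineers this sprint: {sorted(involved)}")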