TE
TechEcho
Home24h TopNewestBestAskShowJobs
GitHubTwitter
Home

TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

Leveraging AI for efficient incident response

110 pointsby Amaresh9 months ago

16 comments

LASR9 months ago
We&#x27;ve shifted our oncall incident response over to mostly AI at this point. And it works quite well.<p>One of the main reasons why this works well is because we feed the models our incident playbooks and response knowledge bases.<p>These playbooks are very carefully written and maintained by people. The current generation of models are pretty much post-human in following them, performing reasoning and suggesting mitigations.<p>We tried indexing just a bunch of incident slack channels and result was not great. But with explicit documentation, it works well.<p>Kind of proves what we already know, garbage in, garbage out. But also, other functions, eg: PM, Design have tried automating their own workflows, but doesn&#x27;t work as well.
评论 #41327207 未加载
评论 #41326951 未加载
评论 #41330347 未加载
评论 #41329585 未加载
评论 #41329239 未加载
评论 #41328849 未加载
评论 #41327931 未加载
donavanm9 months ago
Im really interested in the implied restriction&#x2F;focus on “code changes.”<p>IME a very very large number of impacting incidents arent strictly tied to “a” code change, if any at all. It _feels_ like theres an implied solution to tying running version back to deployment rev, to deployment artifacts, and vcs.<p>Boundary conditions and state changes in the distributed system were the biggest bug bear I ran in to at AWS. Then below that were all of the “infra” style failures like network faults, latency, API quota exhaustion, etc. And for all the cloudformation&#x2F;cdk&#x2F;terraform in the world its non trivial to really discover those effects and tie them to a “code change.” Totally ignoring older tools that may be managed via CLI or the ol’ point and click.
评论 #41326450 未加载
评论 #41327182 未加载
评论 #41326414 未加载
pants29 months ago
&gt; The biggest lever to achieving 42% accuracy was fine-tuning a Llama 2 (7B) model<p>42% accuracy on a tiny, outdated model - surely it would improve significantly by fine-tuning Llama 3.1 405B!
评论 #41345915 未加载
nyellin9 months ago
We&#x27;ve open sourced something with similar goals that you can use today: <a href="https:&#x2F;&#x2F;github.com&#x2F;robusta-dev&#x2F;holmesgpt&#x2F;">https:&#x2F;&#x2F;github.com&#x2F;robusta-dev&#x2F;holmesgpt&#x2F;</a><p>We&#x27;re taking a slightly different angle than what Facebook published, in that we&#x27;re primarily using tool calling and observability data to run investigations.<p>What we&#x27;ve released really shines at surfacing up relevant observability data automatically, and we&#x27;re soon planning to add the change-tracking elements mentioned in the Facebook post.<p>If anyone is curious, I did a webinar with PagerDuty on this recently.
评论 #41327434 未加载
评论 #41327409 未加载
mafribe9 months ago
The paper goes out of its way <i>not</i> to compare the 42% figure with anything. Is <i>&quot;42% within the top 5 suggestions&quot;</i> good or bad?<p>How would an experienced engineer score on the same task?
TheBengaluruGuy9 months ago
Interesting. Just a few weeks back, I was reading about their previous work <a href="https:&#x2F;&#x2F;atscaleconference.com&#x2F;the-evolution-of-aiops-at-meta-beyond-the-buzz&#x2F;" rel="nofollow">https:&#x2F;&#x2F;atscaleconference.com&#x2F;the-evolution-of-aiops-at-meta...</a> -- didn&#x27;t realise there&#x27;s more work!<p>Also, some more researches in the similar space by other enterprises:<p>Microsoft: <a href="https:&#x2F;&#x2F;yinfangchen.github.io&#x2F;assets&#x2F;pdf&#x2F;rcacopilot_paper.pdf" rel="nofollow">https:&#x2F;&#x2F;yinfangchen.github.io&#x2F;assets&#x2F;pdf&#x2F;rcacopilot_paper.pd...</a><p>Salesforce: <a href="https:&#x2F;&#x2F;blog.salesforceairesearch.com&#x2F;pyrca&#x2F;" rel="nofollow">https:&#x2F;&#x2F;blog.salesforceairesearch.com&#x2F;pyrca&#x2F;</a><p>Personal plug: I&#x27;m building a self-service AIOps platform for engineering teams (somewhat similar to this work by Meta). If you&#x27;re looking to read more about it, visit -- <a href="https:&#x2F;&#x2F;docs.drdroid.io&#x2F;docs&#x2F;doctor-droid-aiops-platform">https:&#x2F;&#x2F;docs.drdroid.io&#x2F;docs&#x2F;doctor-droid-aiops-platform</a>
MOARDONGZPLZ9 months ago
I would love if they leveraged AI to detect AI on the regular Facebook feed. I visit occasionally and it’s just a wasteland of unbelievable AI content with tens of thousands of bot (I assume…) likes. Makes me sick to my stomach and I can’t even browse.
aray079 months ago
I do think AI will automate a lot of the grunt work involved with incidents and make the life of on-call engineers better.<p>We are currently working on this at: <a href="https:&#x2F;&#x2F;github.com&#x2F;opslane&#x2F;opslane">https:&#x2F;&#x2F;github.com&#x2F;opslane&#x2F;opslane</a><p>We are starting by tackling adding enrichment to your alerts.
benreesman9 months ago
Way back in the day on FB Ads we trained a GBDT on a bunch of features extracted from the diff that had been (post-hoc) identified as the cause of a SEV.<p>Unlike a modern LLM (or most any non-trivial NN), a GBDT’s feature importance is defensively rigorous.<p>After floating the results to a few folks up the chain we burned it and forget where.
BurningFrog9 months ago
PSA:<p>9 times out of 10, you can and should write &quot;using&quot; instead of &quot;leveraging&quot;.
评论 #41329298 未加载
AeZ1E9 months ago
nice to see meta investing in AI investigation tools! but 42% accuracy doesn&#x27;t sound too impressive to me... maybe there&#x27;s still some fine-tuning needed for better results? glad to hear about the progress though!
评论 #41327197 未加载
ketzo9 months ago
This is really cool. My optimistic take on GenAI, at least with regard to software engineering, is that it seems like we&#x27;re gonna have a lot of the boring &#x2F; tedious parts of our jobs get a lot easier!
评论 #41326442 未加载
coding1239 months ago
AI 1: This user is suspicious, lock account<p>User: Ahh, got locked out, contact support and wait<p>AI 2: The user is not suspicious, unlock account<p>User: Great, thank you<p>AI 1: This account is suspicious, lock account
评论 #41328776 未加载
_pdp_9 months ago
I will be more interested to understand how they deal with injection attacks. Any alert where the attacker controls some parts of the text that ends up in the model could be used to either evade it worse use it to hack it. Slack had an issue like that recently.
devneelpatel9 months ago
This is exactly what we do at OneUptime.com. Show you AI generated possible Incident remediation based on your data + telemetry + code. All of this is 100% open-source.
minkles9 months ago
I&#x27;m going to point out the obvious problem here: 42% RC identification is shit.<p>That means the first person on the call doing the triage has a 58% chance of being fed misinformation and bias which they have to distinguish from reality.<p>Of course you can&#x27;t say anything about an ML model being bad that you are promoting for your business.
评论 #41326765 未加载