Just paste in a chunk of systemd (or whatever) logs and start asking questions. Often just pasting in the logs and pressing enter results in it identifying potential problems and suggesting solutions. It has helped me troubleshoot a huge number of issues on Linux desktops and servers that would have taken me a lot longer with Google - even if it doesn't always give the right solution, 99% of the time it at least points to the source of the error and gives me searchable keywords.<p>Note it works much better with GPT-4 - GPT-3.5 tends to hallucinate a bit too often.
I used it to transform cryptic credit card statement items into company names, which then allowed me to query my Gmail archives for receipts and invoices from these vendors, automating a manual process of accounting backup discovery that is the bane of my very existence. I even got GPT-4 to assess whether an email likely relates to an invoice or payment so that I could limit the amount of noise extracted from my email archives.<p>I highly recommend considering GPT-4 every time you encounter a painful manual process. In nearly every case where I have applied GPT-4, it has solved the problem in one shot or a few shots.
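A cheap keyword prefilter in front of the GPT-4 call is one way to limit the noise (and the API bill). A minimal sketch, assuming a keyword list of my own choosing - not necessarily what the parent used:

```python
# Hypothetical prefilter: only emails that pass this cheap screen get sent
# to GPT-4 for the real "does this relate to an invoice/payment?" judgement.
INVOICE_HINTS = ("invoice", "receipt", "payment", "billing", "statement")

def likely_invoice(subject: str, body: str) -> bool:
    """Crude keyword screen; GPT-4 does the fine-grained assessment afterwards."""
    text = f"{subject} {body}".lower()
    return any(hint in text for hint in INVOICE_HINTS)
```

Anything that passes the screen goes to the model; anything that fails is skipped, which cuts the volume dramatically at the cost of a few false negatives.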
How does that work with logs? Logs are often... huge? How many lines of logs can you paste? Because if I first need to narrow down the log to the problematic part, I kinda already have my problem right there, no?<p>Or do you mean I do something like grab the lines with "error" in the log, hoping there aren't too many, then ask ChatGPT what it thinks about this:<p><pre><code>  [ 0.135036] kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20210730/psobject-220)</code></pre>
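Pretty much that, but you can automate the narrowing-down so it's a dumb mechanical step rather than real diagnosis. A minimal sketch (the keyword list is an assumption - tune it for whatever you're debugging):

```python
import re

# Keywords that usually mark the interesting lines in kernel/systemd logs.
# This list is a guess; adjust it for your own logs.
PATTERNS = re.compile(r"error|fail|panic|segfault|warn", re.IGNORECASE)

def extract_suspicious(log_text: str, context: int = 1) -> list[str]:
    """Return matching lines plus `context` lines around each match."""
    lines = log_text.splitlines()
    keep: set[int] = set()
    for i, line in enumerate(lines):
        if PATTERNS.search(line):
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return [lines[i] for i in sorted(keep)]
```

That usually turns thousands of lines into a few dozen - small enough to paste wholesale, without you having to understand any of them first.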
I wish people would focus on these exceptional strengths of the model rather than blabbing about AGI or whatever.<p>Similarly to you, I have been able to find issues with logs, formatting, asking it quick query questions in [whatever flavor of query language XYZ service likes to use], etc., and it's really, really good.<p>The alternative is to muscle through it, using a lot of energy, writing my own parser or something dumb, or to use Google - which basically isn't usable anymore!<p>But you have people who are like "GPT CODED MY ENTIRE WEBSITE" and "GPT TAUGHT ME QUANTUM PHYSICS" and I'm like... uh... big doubt my man...
I let GPT-4 walk me through troubleshooting steps with my extremely slow read times on my SSD-backed KVM virtual machine. It told me the things it needed, I pasted relevant logs and other output, and finally I solved my issue. I was highly impressed! It parsed atop, top, and various other content, explaining exactly what everything meant.<p>Another benefit was that it was able to present a much more readable version of some of what I pasted. I may have to start using it for cleaning up hard-to-read output (looking at you, atop!) in the future; it really excels at that!<p>Also, the issue ended up being that I was reading from what turned out to be an NFS mount. Doh!
Can any of the existing open source models do the same?<p>ChatGPT is great but I don't want all of my queries going to OpenAI.<p>I'd rather shell out a considerable sum to buy the equipment to run my own.
This is a great feature and I have used it a few times to test it out. However, be aware of potentially leaking your company's private or sensitive information when doing this.
Yeah it's pretty awesome! I used GPT-4 last week to fix my corrupted SSD. Granted I already narrowed down the kernel logs to a few suspicious lines, but I just pasted those 10 lines in and asked for a fix. Pasting in GPT's arcane `fsck` incantations and boom -- fixed SSD. Saved me an hour or two of hassle reading man pages and stack overflow posts.
Awesome! It is also fantastic for understanding chunks of Bash script, Perl, AWK etc where if you don't know something then it's next to impossible to search for it using a non-AI search engine.<p>Bonus: you can also use it to understand what the various flags in a command do.
systemd is a very common log.<p>I'm curious whether you think this would work on logs for custom software that by necessity didn't have either its logs or writing about its logs in the training set.
I recently used it to navigate a quite complex Bash script. This is using Codex.<p>I'd go just below the line that I don't understand and type<p><pre><code>  # Explain the line above in detail: << gpt explanation >>
</code></pre>
And it'd write a very decent explanation that makes sense most of the time. Basically decrypting bash code, which I suck at.<p>However, there was one instance where it almost freaked me out as the output was quite human like:<p><pre><code> # Explain the line above: <<I don't understand it>>.</code></pre>
Great use case. I have successfully used it in a similar way with SRT files (recording transcripts in SubRip Subtitle format) as well as CSV data from surveys that often has many columns with long text labels (the survey questions) and free text answers.<p>Ex. a prompt I had used for a developer skills survey .csv was:<p>> The CSV data below is the results of a skills survey sent to a group of software engineers. The first row is a header row. Please summarize this data in the areas that the People are Strong, Weak, Most Similar, and Unique:<p>Then, because of things I saw in the response, I asked a few follow-up questions:<p>> How much Azure experience is there in the group?<p>> Can you provide more explanation around your assessment that "Engineers generally have little to no experience with Dockers and Kubernetes."<p>> What other skills and experience do you see in the results that you haven't already mentioned?<p>To address my risk tolerance vis-a-vis the ChatGPT warnings (and previous UI leak of responses), I replaced the email addresses in the .csv file with "PersonA", "PersonB", ...
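The anonymization step can be done mechanically before anything leaves your machine. A sketch, assuming the email address sits in the first column (the parent doesn't say which column it was):

```python
import csv
import io

def anonymize_emails(csv_text: str, email_col: int = 0) -> str:
    """Replace each distinct email with PersonA, PersonB, ... (header row kept)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    mapping: dict[str, str] = {}
    for row in rows[1:]:  # skip header
        addr = row[email_col]
        if addr not in mapping:
            # PersonA, PersonB, ... (a real tool would handle >26 people; fine for a sketch)
            mapping[addr] = f"Person{chr(ord('A') + len(mapping))}"
        row[email_col] = mapping[addr]
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue()
```

Keeping the `mapping` dict around also lets you translate the model's answer ("PersonC is strongest in Azure") back to a real person afterwards.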
As another side note, using ChatGPT as a search engine is not so great.<p>Recently, I was trying to find golang libraries for managing authorization. The first one was a well known library (which, incidentally, I was trying to avoid).<p>But the other 4 were complete fantasy projects. ChatGPT simply invented entire projects with descriptions and GitHub URLs.<p>Interestingly, looking at the URLs, it seemed to actually be a composition of authors working on the subject and similar projects in other languages.<p>I also observed the same behavior when I tried to find community resale/recycling centers (ressourcerie in French) in my neighborhood, and sure enough it generated a bunch of fake addresses.<p>It's logical in hindsight, ChatGPT is a generative AI after all. But these results left me scratching my head at first.
Even the free one managed to decipher my 10-year-old Perl code, and I was a bit surprised by it.<p>But in another instance I pasted the same function with a parameter name changed (event -> events) and it just produced lies.
Anomaly detection is my current favorite use case for GPTs. Other problems that aren't currently thought of as anomaly detection can potentially be recast as such. Regulation, for instance.
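For log-style data you can even get a cheap first pass without a model at all, then hand only the flagged lines to GPT. A frequency-based sketch (masking out numbers so repeated message "shapes" collapse together; the threshold is an arbitrary assumption):

```python
import re
from collections import Counter

def rare_lines(lines: list[str], max_count: int = 1) -> list[str]:
    """Flag lines whose 'shape' (digits and hex values masked) appears rarely."""
    def shape(line: str) -> str:
        # Mask numeric noise so "conn from 10.0.0.1" and "conn from 10.0.0.2"
        # count as the same message shape.
        return re.sub(r"\b(0x[0-9a-f]+|\d+)\b", "#", line.lower())
    counts = Counter(shape(l) for l in lines)
    return [l for l in lines if counts[shape(l)] <= max_count]
```

Common chatter gets filtered out; the one-off oddities (the anomalies) are what you paste into the model for an explanation.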
There's a company that does this log analysis in real time for you called Zebrium<p><a href="https://www.zebrium.com/" rel="nofollow">https://www.zebrium.com/</a>
Also for fixing bugs in a chunk of code. At least, that has worked for me a couple of times. It can be frustrating when it's struggling, though; it feels like you're stuck in a loop and it's hard to break out.