Why AI is Just Another Tool in Our Blue Team Toolbox


You can’t scroll through LinkedIn, attend a security conference, or open a vendor whitepaper these days without hearing that AI is about to replace the SOC. Some companies claim AI can triage alerts, write detections, respond to incidents, and make coffee while you’re still getting through your inbox. Let me be blunt: That’s not happening. Not yet. Maybe not ever.

But here’s what is happening — and what’s a lot more interesting to people like us working in detection, response, and threat hunting: AI is becoming a genuinely useful tool in the blue team’s toolbox. Not a silver bullet. Not a SOC-in-a-box. Just another tool. And like any other tool, it has strengths and weaknesses, and it needs a skilled operator to really get value out of it.

Where AI Is Actually Useful for Blue Teams

We’ve been experimenting with AI — mostly LLMs like GPT — in our day-to-day work. And I’ll be honest, it’s been more helpful than I expected. Not game-changing. But definitely time-saving. Here’s where it’s made the most impact:

Translating Logs into Human Language

One of the low-effort, high-reward use cases: taking noisy, machine-generated logs and making them readable. For example, I’ll take a Windows Event ID 4688 entry like this:

New Process Created:
New Process Name: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Command Line: powershell -enc JABXAGMAdgA...
Creator Process Name: C:\Program Files\Microsoft Office\root\Office16\WINWORD.EXE

Drop it into GPT with a simple prompt: “Explain what this log is showing in plain English.”

And get something like: “This log shows PowerShell being executed via an encoded command, and the parent process is Microsoft Word — likely indicating a macro was used to launch a script. This could be a sign of a phishing document with embedded code.”

Does it always nail it? No. But it’s a fast way to get a summary that helps with triage, especially when you’re juggling dozens of alerts.
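
If you want to do this at scale instead of pasting logs by hand, a few lines of Python get you there. This is a minimal sketch, assuming the OpenAI Python SDK and a placeholder model name; swap in whatever client and model your team actually uses:

import os
from openai import OpenAI  # pip install openai; any LLM client works here

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def explain_log(raw_log: str) -> str:
    """Ask the model for a plain-English summary of a raw log entry."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whatever model you actually have access to
        messages=[
            {"role": "system", "content": "You are a SOC analyst. Explain logs in plain English, briefly."},
            {"role": "user", "content": f"Explain what this log is showing in plain English:\n\n{raw_log}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    log_entry = open("suspicious_4688.txt").read()  # hypothetical file holding the 4688 event above
    print(explain_log(log_entry))

Same caveat as the manual version: treat the output as a triage hint, not a verdict.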

Summarizing Threat Intel on the Fly

We all know the pain of getting a new CTI report an hour before a meeting and trying to make sense of whether it applies to our environment. I’ve been using AI to:

  • Summarize long-form threat reports into a few bullet points
  • Extract IOCs (IPs, domains, hashes, TTPs)
  • Map behaviors to MITRE ATT&CK techniques (and even validate them)

For instance, pasting in a report from CISA or a vendor blog and asking: “Summarize this in 5 key points. Include associated ATT&CK techniques and known IOCs.” The output helps us fast-track relevance checks and prep for threat hunting or detection review.
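
For the IOC-extraction piece specifically, I don't rely on the model alone; a dumb regex pass over the same report is a useful cross-check. Here's a rough sketch (the patterns are illustrative, and they won't catch defanged indicators like hxxp or [.]):

import re

# Quick-and-dirty IOC extraction to cross-check what the LLM pulls out of a report.
# Patterns are deliberately simple; expect noise (e.g., version strings that look like IPs).
IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "domain": re.compile(r"\b[a-z0-9][a-z0-9-]{1,62}(?:\.[a-z0-9-]{2,63})+\b", re.IGNORECASE),
    "sha256": re.compile(r"\b[a-f0-9]{64}\b", re.IGNORECASE),
    "md5":    re.compile(r"\b[a-f0-9]{32}\b", re.IGNORECASE),
}

def extract_iocs(report_text: str) -> dict:
    """Return every unique match per indicator type found in the report."""
    return {kind: sorted(set(pattern.findall(report_text))) for kind, pattern in IOC_PATTERNS.items()}

if __name__ == "__main__":
    text = open("cti_report.txt").read()  # hypothetical: the pasted CISA/vendor report
    for kind, values in extract_iocs(text).items():
        print(kind, values)

If the LLM summary and the regex pass disagree on the indicators, that's a prompt to go read the report properly.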

Kickstarting Detection Rules

I’ve started using GPT to help brainstorm detection logic. For example: “Write an Elastic KQL query to detect credential dumping via LSASS access.”

It’ll output something like:

process.name : ("procdump.exe" or "rundll32.exe" or "mimikatz.exe") and process.args : ("lsass.exe" or "lsass")

Or, if you’re using EQL:

process where process.name in ("procdump.exe", "rundll32.exe", "mimikatz.exe") and process.args : "lsass.exe"

Is it production-ready? Definitely not. But it’s a useful starting point. I still go in and validate the logic, adjust for our data schema (especially if using Elastic Common Schema), test against known benign/false positive scenarios, and tune for our environment. But when you’re building a library of detections or pivoting off a new technique, having a “first draft” saves time and brain cycles.
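
The "test against known benign scenarios" step is worth automating too. Here's a toy harness we use as a sanity check before a draft goes anywhere near production: the predicate below is a hand-translated Python stand-in for the KQL above (not the query Elastic actually runs), fired against a few sample events, some malicious and some benign.

# Toy harness for sanity-checking drafted detection logic against sample events.
# The predicate is a hand-written approximation of the KQL/EQL above, not real Elastic execution.

SUSPECT_PROCESSES = {"procdump.exe", "rundll32.exe", "mimikatz.exe"}

def lsass_access_rule(event: dict) -> bool:
    """Return True if the event matches the drafted LSASS credential-dumping logic."""
    name = event.get("process.name", "").lower()
    args = event.get("process.args", "").lower()
    return name in SUSPECT_PROCESSES and "lsass" in args

SAMPLE_EVENTS = [
    # (event, expected_match): classic procdump of LSASS, should fire
    ({"process.name": "procdump.exe", "process.args": "-ma lsass.exe out.dmp"}, True),
    # benign admin use of rundll32 with no lsass reference, should not fire
    ({"process.name": "rundll32.exe", "process.args": "shell32.dll,Control_RunDLL"}, False),
    # hypothetical backup tool touching an lsass-named path: a known false-positive candidate
    ({"process.name": "backup.exe", "process.args": "C:\\Windows\\System32\\lsass.exe"}, False),
]

for event, expected in SAMPLE_EVENTS:
    result = lsass_access_rule(event)
    status = "OK " if result == expected else "FAIL"
    print(status, event["process.name"], "->", result)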

Where AI Falls Short (And Why We’re Okay With That)

AI is helpful, but it’s not magical. It’s like a junior analyst who never gets tired — but also doesn’t know your environment, can’t ask clarifying questions, and confidently makes mistakes.

It Doesn’t Know Your Normal

Your network, users, applications, and alerting thresholds are unique. AI has no idea what’s normal for you. So when you ask it to spot anomalies or suggest thresholds, it’s guessing. That makes it risky for anything where behavior baselines are key.

It Struggles With Precision

Security often lives in the details. One missed field match in a detection rule can result in missing critical activity. One flawed summary of an alert can send an analyst down the wrong path. I’ve seen GPT output things like: “Look for PowerShell commands that include ‘-NoProfile -ExecutionPolicy Bypass -EncodedCommand’” but then fail to account for encoding variations or how that might be obfuscated in real attacks. Helpful, yes. Precise, no.

It’s Easy to Overtrust

LLMs sound confident — even when they’re wrong. That’s dangerous in security. If you take their suggestions at face value without testing or context, you’re introducing risk. AI doesn’t understand nuance. It doesn’t get the implications of false positives or alert fatigue. We’ve adopted a policy internally: AI can help, but everything gets reviewed, tested, and version-controlled.

How We Actually Use AI: Like Any Other Tool

Here’s the mindset shift that’s helped my team the most: We don’t treat AI like a revolution. We treat it like a utility — no different than Sigma, YARA, or a Python helper script.

That means:

  • We test AI outputs in isolated environments before deploying anything live
  • We build human-in-the-loop review into any workflow using AI-generated logic
  • We capture prompt+output history for reproducibility and auditing (a minimal sketch of this follows the list)
  • We document where it works well, and where it breaks down
  • We hold vendors accountable when their marketing doesn’t match reality
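
The prompt+output capture doesn't need special tooling; ours is essentially an append-only JSONL file that lives next to the detections it influenced. A minimal sketch (the file path and field names are just our convention, nothing standard):

import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_usage_log.jsonl"  # hypothetical location; ours sits in the detection repo

def record_ai_interaction(prompt: str, output: str, model: str, author: str) -> None:
    """Append one prompt/output pair to an audit log so AI-assisted work stays reviewable."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "author": author,
        "prompt": prompt,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")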

Used this way, AI makes us faster, sharper, and more productive — and honestly, just makes some parts of the job a little less tedious.

Final Thoughts

AI isn’t a threat to the blue team — it’s a tool for the blue team. It helps us move faster, reduce burnout, and get a head start on problems that used to eat up hours. But it doesn’t replace judgment. It doesn’t replace experience. And it certainly doesn’t replace the ability to think critically under pressure.

Smart people, well-documented processes, and strong collaboration are still the foundation of good security operations. AI’s just a useful assistant — if you know how to keep it in check.

So the next time someone says AI will replace SOCs, threat hunters, or detection engineers, I’ll just shake my head. You can’t replace what you don’t fully understand. And AI still doesn’t understand what it means to defend.

