Product Updates

We Taught an AI to Review Our Security Alerts (So You Don't Have To)

Admin User · Mar 02, 2026

There's a paradox at the heart of security scanning: the better your scanner gets at finding threats, the more false positives it generates. Cast a wide net, catch more fish - but also a lot of old boots and shopping trolleys.

WebMon's content scanner checks your pages for phishing indicators, malware signatures, scareware patterns, and other suspicious content. It's deliberately aggressive, because missing a real threat is far worse than flagging a false positive. But that means if you run a cybersecurity blog, or your site mentions "your account has been limited" in a perfectly innocent context, you're going to get flagged.

You've always been able to report false positives when a detection doesn't look right. The problem was what happened next - someone on our team had to manually review each report, visit the page, dig through the HTML, and decide whether the detection was legitimate. With a growing user base, that review queue was starting to pile up.

So we did what any reasonable developer would do in 2026. We asked an AI.

What's Changed for You

When you report a false positive now, your report is automatically reviewed by AI within seconds. It fetches the page, examines the code around the pattern match, and provides our team with a recommendation - is this a real threat or a false alarm?

The result? Your reports get processed faster. Instead of waiting for a human to manually investigate every single report from scratch, our team gets an informed starting point. Most of the time the AI's assessment is spot on, which means quicker resolutions for you.

How It Works Behind the Scenes

We're using Claude (Anthropic's AI) to analyse each report. Here's the general process:

  1. Fetch the page - We grab the current HTML of the reported URL
  2. Find the pattern - We locate the exact pattern match in the raw source code and extract the surrounding context
  3. Analyse the context - The AI examines the code around the match, the pattern category, and the severity level
  4. Provide a recommendation - It returns a verdict (likely false positive, likely real threat, or inconclusive) with an explanation

The key thing is that the AI sees the actual HTML context around the match. So when a cybersecurity blog triggers the "your account.*limited" phishing pattern because they're writing about common phishing tactics, the AI can see that the match is inside an article surrounded by educational content, not inside a fake login form. Context is everything.
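
To see why the raw pattern alone isn't enough, here's a toy demonstration (the two HTML snippets are made up for illustration). The same regex fires on both an educational article and a fake login form - only the surrounding markup tells them apart:

```python
import re

# A phishing indicator pattern of the kind the content scanner uses
pattern = re.compile(r"your account.*limited", re.IGNORECASE)

educational = (
    "<article><h1>Common phishing tactics</h1>"
    "<p>Scammers often claim your account has been limited.</p></article>"
)
fake_login = (
    "<form action='verify.php'>"
    "<p>Your account has been limited. Log in to restore access.</p>"
    "<input type='password'></form>"
)

# The pattern matches both pages equally - context is what distinguishes them
print(bool(pattern.search(educational)))  # True
print(bool(pattern.search(fake_login)))   # True
```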

Here's a real example. A site was flagged for the pattern "your account.*limited" with a severity of "critical phishing indicator." Scary, right? The AI looked at the surrounding HTML and responded:

"The detected pattern appears within what looks like a booking/calendar system configuration string. This is legitimate website functionality - a scheduling or reservation system - not a phishing attempt. The pattern match is coincidental."

Verdict: likely false positive. And it was right.

AI Assists, Humans Decide

To be clear - the AI doesn't automatically approve or reject anything. It provides a recommendation that our team reviews before making a decision. Think of it as a very fast colleague who's already looked at the report and left a note with their opinion before anyone else gets to it.

The AI is particularly good at:

  • Recognising educational content - Security blogs, antivirus documentation, and IT training materials that legitimately discuss threats
  • Identifying legitimate scripts - Analytics tools, booking widgets, and third-party integrations that happen to match suspicious patterns
  • Spotting coincidental matches - Phrases that match phishing keywords but appear in completely benign contexts

It's less good at heavily obfuscated JavaScript (though to be fair, humans struggle with that too) and brand-new attack techniques it hasn't seen before. In those cases it flags the report as inconclusive and our team takes a closer look manually.

Why This Matters

If you've ever reported a false positive and waited a while to hear back, this is directly aimed at improving that experience. The AI review means our team spends less time on the obvious cases and more time on the ones that actually need human judgement.

It also means we can keep the content scanner aggressive without drowning in review work. We'd rather catch 100 things and have the AI help sort out which 5 are real than dial back the sensitivity and risk missing something.

What's Next

This is our first integration of AI into WebMon, and it won't be the last. We're exploring a few ideas:

  • Smarter alert summaries - Plain-English explanations of what went wrong when a monitor goes down, instead of raw status codes and error strings
  • Anomaly detection - Using response time and content trends to detect issues before they become full outages
  • Pattern tuning - Using false positive data to automatically refine detection patterns over time

We're being deliberate about where we add AI though. It's easy to slap "AI-powered" on everything and call it innovation. We'd rather add it where it genuinely solves a problem - and speeding up false positive reviews was a clear win.

If your cybersecurity blog keeps getting flagged for phishing patterns - sorry about that. At least now the AI will back you up.

Monitor Your Website Today

Free uptime monitoring with instant alerts. No credit card required.

Get Started Free