Here's a scenario I see a lot: someone sets up uptime monitoring, gets a reassuring 99.9% uptime badge, and assumes everything is fine. Meanwhile their site takes 8 seconds to respond during peak hours and their bounce rate is through the roof.
Uptime tells you whether your site is reachable. Response time tells you whether it's usable. They're different problems and you need to watch both.
What Response Time Actually Measures
When we talk about response time in monitoring, we mean Time to First Byte (TTFB) - how long it takes for your server to start sending a response after receiving the request. This doesn't include page rendering, image loading, or JavaScript execution. It's purely "how fast did your server react?"
A healthy response time is under 500ms. Between 500ms and 2 seconds is sluggish but functional. Over 2 seconds and you're actively losing visitors. And since Google uses page speed as a ranking signal, sustained slowness hurts your search visibility too.
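If you want to sanity-check what your monitoring tool reports, TTFB is easy to approximate yourself. Here's a minimal Python sketch (the URL is a placeholder, and the timing lumps in DNS, TCP, and TLS setup, which is roughly what most monitoring probes report anyway):

```python
import time
import urllib.request

def measure_ttfb(url: str) -> float:
    """Seconds from sending the request until the response starts arriving."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        # urlopen returns once the status line and headers have arrived;
        # reading one byte waits for the body to start - a close proxy for TTFB
        resp.read(1)
        return time.perf_counter() - start

ttfb_ms = measure_ttfb("https://example.com") * 1000  # placeholder URL
if ttfb_ms < 500:
    print(f"{ttfb_ms:.0f}ms - healthy")
elif ttfb_ms < 2000:
    print(f"{ttfb_ms:.0f}ms - sluggish but functional")
else:
    print(f"{ttfb_ms:.0f}ms - actively losing visitors")
```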
Why Uptime Alone Isn't Enough
Your server can return a 200 OK status in 12 seconds. Technically that's "up". Your uptime monitoring is happy. Your visitors are not.
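To make that concrete, here's a toy check in Python (an illustration, not how any particular monitor is implemented). The status-only logic is perfectly satisfied by a 12-second response; recording the elapsed time is what exposes the problem:

```python
import time
import urllib.request

def check(url: str) -> None:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        is_up = resp.status == 200
    elapsed = time.perf_counter() - start
    # A status-only monitor stops at is_up and reports all-clear.
    # elapsed is the number your visitors actually experience.
    print(f"up={is_up}, response_time={elapsed:.1f}s")
```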
Common causes of slow responses that don't trigger downtime alerts:
- Database queries gone wild - A missing index on a table that grew from 1,000 to 100,000 rows. Everything worked fine until it didn't (there's a sketch of this one after the list).
- Shared hosting neighbours - Someone on the same server is running a heavy cron job. Your site slows to a crawl during peak hours.
- Plugin bloat - WordPress sites are notorious for this. Each plugin adds a few milliseconds until the cumulative effect is noticeable.
- CDN misconfiguration - Your static assets are being served from the origin instead of the edge. The site works, just slowly for users far from your server.
- Memory limits - PHP running close to its memory limit triggers garbage collection more frequently. The site doesn't crash, it just gets progressively slower.
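That first one is easy to reproduce. Here's a minimal sketch using Python's built-in sqlite3 (the table and column names are invented): the same lookup against 100,000 rows, timed before and after adding an index. The gap is typically an order of magnitude or more.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_email TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    ((i, f"user{i}@example.com") for i in range(100_000)),
)

def timed_lookup() -> float:
    """Time a single lookup by email, in milliseconds."""
    start = time.perf_counter()
    conn.execute(
        "SELECT id FROM orders WHERE customer_email = ?",
        ("user99999@example.com",),
    ).fetchall()
    return (time.perf_counter() - start) * 1000

print(f"without index: {timed_lookup():.2f}ms")  # full table scan
conn.execute("CREATE INDEX idx_orders_email ON orders (customer_email)")
print(f"with index:    {timed_lookup():.2f}ms")  # direct lookup
```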
The Slow Creep Problem
The worst thing about response time degradation is that it happens gradually. Your site doesn't suddenly jump from 200ms to 5 seconds. It creeps up - 200ms to 400ms over a month, then 400ms to 800ms, then one day it crosses a threshold and someone notices. By then the underlying issue has been compounding for weeks.
This is exactly why monitoring response times over time matters more than checking them once. A single snapshot tells you very little. A trend line tells you everything.
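If you can export your response time history, spotting the creep takes a few lines of statistics. A sketch with made-up data (statistics.linear_regression needs Python 3.10+, and the 5ms/day alert cutoff is an arbitrary choice):

```python
from statistics import linear_regression  # Python 3.10+

# Made-up daily average response times (ms): the slow creep in action.
days = list(range(42))
avg_ms = [200 + 12 * day for day in days]  # drifting up ~12ms/day

slope, _intercept = linear_regression(days, avg_ms)
print(f"trend: {slope:+.1f}ms/day")
if slope > 5:  # arbitrary cutoff for "sustained degradation"
    print("Response time is creeping up - investigate before users notice.")
```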
Setting Sensible Thresholds
WebMon's slow response alerts default to 5000ms (5 seconds), which is deliberately conservative. If you're hitting that, something is definitely wrong. But I'd recommend adjusting it based on your baseline:
- If your site normally responds in 200ms, set the threshold to 1000ms
- If you're on shared hosting and 800ms is your normal, 3000ms is a reasonable alert
- If you're running a heavy application with 2 second responses, 5000ms is fine
The goal isn't to alert on every fluctuation - it's to catch sustained degradation before your users do.
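One way to turn those suggestions into a rule of thumb (my heuristic, not a WebMon feature): take your baseline's 95th percentile, multiply by about four, and clamp between a 1-second floor and the 5-second default. It lands on roughly the same numbers as the examples above.

```python
from statistics import quantiles

def suggested_threshold_ms(baseline_ms: list[float]) -> float:
    """~4x the baseline p95, floored at 1s, capped at the 5s default."""
    p95 = quantiles(baseline_ms, n=20)[-1]  # 95th percentile
    return min(5000.0, max(1000.0, 4 * p95))

# A site that normally answers around 200ms lands on the 1000ms floor:
print(suggested_threshold_ms([180, 195, 210, 200, 220, 190, 205, 215]))
```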
Using Response Time Data
Your monitor's detail page shows response time graphs over time. A few things to look for:
Regular spikes at the same time - Usually a cron job, backup, or scheduled task competing for resources. Easy fix once you identify it.
Gradual upward trend - Growing database, increasing traffic, or resource limits being approached. Time to optimise or upgrade.
Sudden permanent increase - Something changed. A deployment, a plugin update, a server configuration change. Check what happened around the timestamp where the jump occurred.
Random spikes with no pattern - Often network-related rather than server-related. Could be the monitoring node's route to your server, not your server itself.
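The first pattern is the easiest to confirm programmatically if you can export raw samples: group them by hour of day and look for hours that sit well above the overall average. A sketch with synthetic data (the 2x cutoff is arbitrary):

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Synthetic samples: one reading per hour for four weeks,
# with a nightly backup dragging down the 03:00 slot.
samples = [(datetime(2024, 5, day, hour), 200 + (900 if hour == 3 else 0))
           for day in range(1, 29) for hour in range(24)]

by_hour: dict[int, list[int]] = defaultdict(list)
for ts, ms in samples:
    by_hour[ts.hour].append(ms)

overall = mean(ms for _, ms in samples)
for hour, readings in sorted(by_hour.items()):
    if mean(readings) > 2 * overall:  # arbitrary "spike" cutoff
        print(f"{hour:02d}:00 averages {mean(readings):.0f}ms "
              f"(overall {overall:.0f}ms) - look for cron jobs or backups")
```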
You can configure slow response thresholds per monitor from your alert settings. The daily digest mode is great if you're monitoring a lot of sites and don't want individual slow alerts clogging your inbox.