Alerting and Notifications

Pulse alerts notify your team the moment a critical interaction degrades — before users report the issue and before it compounds into significant revenue loss.

How Alerts Work

Alerts are triggered by Pulse's anomaly detection engine. When an interaction's Apdex, error rate, or latency deviates significantly from its baseline, Pulse creates an anomaly and sends notifications to configured channels.

Unlike static threshold alerts that generate noise, Pulse uses dynamic baselines that account for natural variation in your metrics. This means you get notified when something genuinely changes — not every time traffic dips on a Sunday.
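The idea behind a dynamic baseline can be illustrated with a rolling z-score. This is a conceptual sketch only, not Pulse's actual detection engine; the window size, warm-up length, and threshold are assumptions:

```python
from collections import deque

def make_detector(window=50, z_threshold=3.0):
    """Return a closure that flags values deviating from a rolling baseline."""
    history = deque(maxlen=window)

    def observe(value):
        # Require a minimum number of samples before judging anomalies,
        # so early noise does not produce false alerts.
        if len(history) >= 10:
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = var ** 0.5 or 1e-9  # avoid division by zero on flat metrics
            anomalous = abs(value - mean) / std > z_threshold
        else:
            anomalous = False
        history.append(value)
        return anomalous

    return observe
```

Because the baseline is recomputed over a sliding window, gradual shifts (a slow Sunday) raise no alert, while a sudden drop in Apdex stands several standard deviations outside the window and is flagged.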

Setting Up Alerts

Configure Notification Channels

Navigate to Settings → Notifications to connect your channels:

  • Email — Notifications sent to individual emails or distribution lists
  • Slack — Messages posted to a Slack channel via webhook
  • PagerDuty — Incidents created for on-call rotation
  • Webhook — Custom HTTP POST to any endpoint
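For the Webhook channel, your endpoint receives an HTTP POST with a JSON body. The field names below are illustrative assumptions, not Pulse's documented schema; a receiver might validate and normalise the payload like this:

```python
import json

def parse_alert(body: bytes) -> dict:
    """Validate and normalise an incoming webhook payload.

    Field names ("interaction", "metric", "severity", "affected_users")
    are assumed for illustration; check your channel configuration for
    the actual schema.
    """
    payload = json.loads(body)
    required = ("interaction", "metric", "severity")
    missing = [field for field in required if field not in payload]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return {
        "interaction": payload["interaction"],
        "metric": payload["metric"],
        "severity": payload["severity"].lower(),
        "affected_users": int(payload.get("affected_users", 0)),
    }
```

Rejecting malformed payloads early keeps downstream routing (paging, ticket creation) from acting on incomplete data.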

Alert Rules

Alert rules can be configured globally or per interaction. Each rule lets you set:

  • Severity threshold — Minimum anomaly severity to trigger an alert (low, medium, high, critical)
  • Affected user threshold — Only alert when more than N users are affected
  • Notification channels — Which channels receive which severity levels
  • Mute windows — Suppress alerts during planned maintenance or deployments
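Taken together, these options form a filter that an anomaly must pass before any notification is sent. A minimal sketch of that evaluation, with a hypothetical rule shape mirroring the options above:

```python
from datetime import datetime

SEVERITIES = ["low", "medium", "high", "critical"]

def should_notify(anomaly: dict, rule: dict) -> bool:
    """Apply a rule (severity floor, user floor, mute windows) to an anomaly.

    The dict shapes here are illustrative, not Pulse's internal format.
    """
    # Severity threshold: drop anomalies below the configured floor.
    if SEVERITIES.index(anomaly["severity"]) < SEVERITIES.index(rule["min_severity"]):
        return False
    # Affected-user threshold: only alert when more than N users are hit.
    if anomaly["affected_users"] <= rule["min_affected_users"]:
        return False
    # Mute windows: suppress alerts during planned maintenance windows,
    # given as (start, end) datetime pairs.
    when = anomaly["detected_at"]
    for start, end in rule.get("mute_windows", []):
        if start <= when <= end:
            return False
    return True
```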

Alert Anatomy

Each alert includes:

  • Interaction name — Which user journey is degraded
  • Metric and deviation — What changed and by how much (e.g., "Apdex dropped from 0.85 to 0.62")
  • Affected segment — Which device, OS, region, or app version is impacted
  • Affected user count — How many users are experiencing the issue
  • Estimated revenue impact — Projected loss based on conversion-rate delta
  • Link to investigation — Direct link to the anomaly detail page in the dashboard
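The revenue-impact figure can be understood as lost conversions times average order value. Pulse's exact model is not described here; the function below is a simplified sketch of that arithmetic, with all inputs assumed:

```python
def estimate_revenue_impact(affected_users: int,
                            baseline_conv: float,
                            current_conv: float,
                            avg_order_value: float) -> float:
    """Rough projection: (conversion-rate delta x affected users) x order value.

    Clamps at zero so an improved conversion rate never reports a loss.
    """
    lost_conversions = affected_users * max(baseline_conv - current_conv, 0.0)
    return lost_conversions * avg_order_value
```

For example, 10,000 affected users, a conversion rate that fell from 5% to 3%, and a $40 average order value project roughly $8,000 of impact.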

Managing Alerts

  • Acknowledge — Mark an alert as being investigated to prevent duplicate work
  • Resolve — Close an alert once the issue is fixed
  • Snooze — Temporarily suppress notifications for a known issue
  • History — View all past alerts with their resolution status and timeline
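These actions form a simple lifecycle: an open alert can be acknowledged, snoozed, or resolved, and only unresolved, un-snoozed alerts keep notifying. A minimal state sketch of that lifecycle (not Pulse's API, which is managed through the dashboard):

```python
from datetime import datetime, timedelta

class Alert:
    """Illustrative alert lifecycle mirroring the actions above."""

    def __init__(self, name: str):
        self.name = name
        self.status = "open"
        self.snoozed_until = None

    def acknowledge(self):
        # Mark as being investigated to prevent duplicate work.
        if self.status == "open":
            self.status = "acknowledged"

    def resolve(self):
        # Close once the underlying issue is fixed.
        self.status = "resolved"

    def snooze(self, hours: int):
        # Temporarily suppress notifications for a known issue.
        self.snoozed_until = datetime.utcnow() + timedelta(hours=hours)

    def should_notify(self) -> bool:
        if self.status == "resolved":
            return False
        if self.snoozed_until and datetime.utcnow() < self.snoozed_until:
            return False
        return True
```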

Best Practices

  • Route by severity — Send critical alerts to PagerDuty, medium to Slack, low to email digests
  • Set an affected-user threshold — Prevents alerting on regressions that affect a handful of sessions
  • Review alert history weekly — Use patterns to improve thresholds and identify recurring issues
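Severity-based routing amounts to a small lookup from severity level to channels. The channel names below are placeholders for whatever integrations you configured under Settings → Notifications:

```python
# Placeholder channel names; substitute your configured integrations.
ROUTES = {
    "critical": ["pagerduty", "slack"],
    "high": ["slack"],
    "medium": ["slack"],
    "low": ["email-digest"],
}

def channels_for(severity: str) -> list:
    """Return notification channels for a severity, defaulting to a digest."""
    return ROUTES.get(severity, ["email-digest"])
```

Falling back to a low-noise digest for unrecognised severities ensures nothing is silently dropped while keeping on-call pages reserved for critical alerts.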

Next Steps