# Validate Slack Output

## Overview
After connecting Slack, verify that investigation results are delivered correctly to your configured channel.
## Validation Steps

### 1. Trigger a Test Investigation
In the OpsWorker portal, go to your cluster settings and click Test Integration. This sends a synthetic alert through the complete investigation pipeline.
### 2. Check Your Slack Channel
Within 2 minutes, a message should appear in your configured Slack channel. The message includes:
- Alert summary — The alert that triggered the investigation
- Root cause analysis — What the AI identified as the underlying issue
- Affected resources — Kubernetes resources involved (pods, services, deployments)
- Recommended actions — Specific remediation steps with kubectl commands
- Feedback buttons — Three buttons to rate investigation quality
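The recommended actions often involve resource changes like the CPU-limit increase shown later in the example message. A patch of that kind could look like the following sketch (the container name and values are illustrative, not taken from your cluster):

```yaml
# Illustrative patch fragment raising a container's CPU limit.
# Container name and values are hypothetical; adjust to your deployment.
spec:
  template:
    spec:
      containers:
        - name: api-gateway
          resources:
            requests:
              cpu: "500m"
            limits:
              cpu: "1000m"
```

Saved as `patch.yaml`, a fragment like this could be applied with `kubectl patch deployment api-gateway -n production --patch-file patch.yaml`.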
### 3. Test Feedback Buttons
Click one of the feedback buttons on the Slack message:
| Button | Meaning |
|---|---|
| Accurate | The root cause and recommendations were correct |
| Partially Accurate | Some findings were useful, but incomplete or partially wrong |
| Needs Improvement | The investigation missed the mark |
Feedback helps OpsWorker improve future investigation quality.
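When a button is clicked, Slack delivers a `block_actions` interaction payload to the app's interactivity endpoint. A minimal sketch of mapping such a payload to one of the three ratings (the `action_id` values are hypothetical, not OpsWorker's actual identifiers):

```python
# Sketch: map a Slack block_actions payload to a feedback rating.
# The action_id values are hypothetical; OpsWorker's real IDs may differ.

RATINGS = {
    "feedback_accurate": "Accurate",
    "feedback_partial": "Partially Accurate",
    "feedback_needs_improvement": "Needs Improvement",
}

def extract_feedback(payload: dict):
    """Return the rating for the first recognized action, or None."""
    if payload.get("type") != "block_actions":
        return None
    for action in payload.get("actions", []):
        rating = RATINGS.get(action.get("action_id"))
        if rating:
            return rating
    return None

# Example payload shaped like Slack's block_actions interaction event
sample = {
    "type": "block_actions",
    "user": {"id": "U123"},
    "actions": [{"action_id": "feedback_partial", "value": "inv-42"}],
}
print(extract_feedback(sample))  # Partially Accurate
```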
## Troubleshooting
If no message appears in Slack:
- Check Slack integration status — In the portal, verify the Slack integration shows as connected under Integrations
- Check notification routing — Verify a Slack channel is configured for this cluster's notifications
- Check channel permissions — Ensure the OpsWorker bot has been invited to the target channel (some workspaces require this)
- Check investigation status — Verify the test investigation completed successfully in the portal under Investigations
- Re-authorize — If the integration shows errors, disconnect and reconnect Slack
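The checklist above can be walked in order; a small triage sketch that names the first failing check, given the states you observe in the portal and Slack (the field names are illustrative, not an OpsWorker API):

```python
# Illustrative triage helper mirroring the troubleshooting checklist above.
# The dataclass fields are hypothetical observations, not an OpsWorker API.
from dataclasses import dataclass

@dataclass
class IntegrationState:
    slack_connected: bool      # Integrations page shows Slack as connected
    channel_configured: bool   # a Slack channel is routed for this cluster
    bot_in_channel: bool       # the OpsWorker bot was invited to the channel
    investigation_ok: bool     # the test investigation completed successfully

def first_failing_check(state: IntegrationState) -> str:
    if not state.slack_connected:
        return "Re-authorize: disconnect and reconnect Slack"
    if not state.channel_configured:
        return "Configure a Slack channel for this cluster's notifications"
    if not state.bot_in_channel:
        return "Invite the OpsWorker bot to the target channel"
    if not state.investigation_ok:
        return "Check the test investigation's status under Investigations"
    return "All checks pass; the message should be delivered"

print(first_failing_check(IntegrationState(True, True, False, True)))
```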
## Expected Message Format
A typical Slack investigation message looks like:
```
🔍 Investigation Complete: HighPodCPUUsage

Root Cause: Pod api-gateway in namespace production is experiencing
CPU throttling due to resource limits set below actual usage patterns.

Affected Resources:
• Pod: api-gateway-7d8f9b6c4-x2k9p (production)
• Deployment: api-gateway (production)

Recommended Actions:
1. Increase CPU limit: kubectl patch deployment api-gateway -n production ...
2. Review HPA configuration for auto-scaling

[Accurate] [Partially Accurate] [Needs Improvement]
```
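A message like this corresponds roughly to a Slack Block Kit payload: section blocks for the findings and an `actions` block holding the three buttons. A minimal sketch of building such a payload (the action IDs and wording are illustrative, not OpsWorker's actual message format):

```python
# Sketch of a Block Kit payload resembling the message above.
# Block structure follows Slack's Block Kit; IDs and wording are illustrative.

def feedback_button(label: str, action_id: str) -> dict:
    return {
        "type": "button",
        "text": {"type": "plain_text", "text": label},
        "action_id": action_id,
    }

def investigation_blocks(alert, root_cause, actions):
    numbered = "\n".join(f"{i}. {a}" for i, a in enumerate(actions, 1))
    return [
        {"type": "header",
         "text": {"type": "plain_text",
                  "text": f"🔍 Investigation Complete: {alert}"}},
        {"type": "section",
         "text": {"type": "mrkdwn", "text": f"*Root Cause:* {root_cause}"}},
        {"type": "section",
         "text": {"type": "mrkdwn", "text": f"*Recommended Actions:*\n{numbered}"}},
        {"type": "actions",
         "elements": [
             feedback_button("Accurate", "feedback_accurate"),
             feedback_button("Partially Accurate", "feedback_partial"),
             feedback_button("Needs Improvement", "feedback_needs_improvement"),
         ]},
    ]

blocks = investigation_blocks(
    "HighPodCPUUsage",
    "CPU throttling due to resource limits below actual usage",
    ["Increase the CPU limit", "Review HPA configuration"],
)
print(len(blocks))  # 4
```

A payload like this would be sent as the `blocks` field of Slack's `chat.postMessage` call.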
## Next Steps
- Run Your First Investigation — Run a full investigation with a real alert
- Notification Routing — Route notifications to different channels