# AWS Bedrock

## Overview
AWS Bedrock is the default LLM provider for OpsWorker's SaaS deployment. OpsWorker uses a multi-model strategy through Bedrock to optimize investigation quality and speed.
## How It's Used
OpsWorker uses different AI models for different investigation stages:
| Stage | Model Type | Purpose |
|---|---|---|
| Field extraction | Fast model (Amazon Nova) | Quick parsing of alert fields |
| Deep analysis | Reasoning model (Claude) | Root cause analysis and recommendation generation |
| Chat responses | Reasoning model (Claude) | Interactive Q&A with context |
This multi-model approach balances speed and depth — fast models handle routine extraction, while more capable models handle complex analysis.
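The stage-to-model routing above can be sketched as a simple lookup. This is an illustrative sketch only — the model IDs and stage names are assumptions for the example, not OpsWorker's internal configuration:

```python
# Hypothetical stage-based model routing for AWS Bedrock.
# The model IDs below are illustrative Bedrock foundation-model IDs;
# OpsWorker's actual routing and model choices are internal.

STAGE_MODELS = {
    "field_extraction": "amazon.nova-lite-v1:0",                    # fast parsing
    "deep_analysis": "anthropic.claude-3-5-sonnet-20240620-v1:0",   # reasoning
    "chat": "anthropic.claude-3-5-sonnet-20240620-v1:0",            # interactive Q&A
}

def model_for_stage(stage: str) -> str:
    """Return the Bedrock model ID to use for a given investigation stage."""
    try:
        return STAGE_MODELS[stage]
    except KeyError:
        raise ValueError(f"unknown investigation stage: {stage}")

# Example: route a quick field-extraction call to the fast model
print(model_for_stage("field_extraction"))
```

The key design point is that routing happens per stage, not per investigation: a single investigation may call the fast model many times for extraction before handing context to the reasoning model.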
## Configuration

### SaaS Deployment
No configuration needed. OpsWorker's SaaS platform uses AWS Bedrock by default. All AI processing is handled automatically.
### Dedicated Deployment
For dedicated deployments in your own AWS account, you can use your own Bedrock access:
- Models are deployed within the same AWS region as your OpsWorker instance
- You control model access and usage through your AWS account
- Contact the OpsWorker team for setup
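For reference, granting an OpsWorker instance access to Bedrock models in your account typically involves an IAM policy along these lines. This is a minimal sketch — the region, resource wildcard, and exact actions you need depend on your setup, so confirm specifics with the OpsWorker team:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/*"
    }
  ]
}
```

Scoping the `Resource` to specific foundation-model ARNs, rather than the wildcard shown here, is how you control which models the instance can use.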
## Data Processing
- Investigation data is sent to AWS Bedrock for AI analysis
- Processing occurs within AWS infrastructure
- Data is not used for model training
- Data retention follows OpsWorker's data handling policies
## Next Steps
- How Investigations Work — See the AI in action
- Architecture Overview — System design details