Bring Your Own LLM
Overview
For organizations with specific AI governance requirements, OpsWorker supports using your own LLM provider. This option is available for dedicated and private cloud deployments.
Availability
| Deployment Type | Available |
|---|---|
| SaaS | No (uses AWS Bedrock) |
| Dedicated AWS | Yes |
| Private Cloud | Yes |
Supported Providers
OpsWorker supports any LLM provider with an OpenAI-compatible API, including:
- Self-hosted models (vLLM, TGI, Ollama)
- Third-party LLM providers with OpenAI-compatible endpoints
- Custom model deployments
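"OpenAI-compatible" means the server accepts the standard chat completions request shape at a `/v1/chat/completions` path. The sketch below shows that shape using only the standard library; the endpoint URL and model name are placeholders, not real OpsWorker values.

```python
import json

# Hypothetical values -- substitute your own self-hosted endpoint and model.
BASE_URL = "http://llm.internal.example.com/v1"
MODEL = "meta-llama/Llama-3.1-70B-Instruct"


def build_chat_request(prompt: str) -> dict:
    """Build the OpenAI-style chat completion body that a compatible
    server (vLLM, TGI, Ollama) accepts at POST {BASE_URL}/chat/completions."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


# The JSON body you would POST to BASE_URL + "/chat/completions".
body = json.dumps(build_chat_request("Summarize the last deploy's errors."))
```

If a provider can serve this request and return the standard `choices[].message` response, it should satisfy the compatibility requirement.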
Configuration
BYO LLM is configured during deployment setup:
| Field | Description |
|---|---|
| API endpoint | Base URL of your OpenAI-compatible LLM API |
| API key | Credential used to authenticate requests to your LLM API |
| Model name | Identifier of the model OpsWorker should invoke |
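For illustration, the three fields map to values like the following. The variable names and values here are hypothetical; the actual configuration is collected by the OpsWorker team during deployment setup, not set by you directly.

```shell
# Hypothetical examples of the three BYO LLM fields.
LLM_API_ENDPOINT="https://llm.internal.example.com/v1"    # API endpoint
LLM_API_KEY="sk-..."                                      # API key
LLM_MODEL_NAME="meta-llama/Llama-3.1-70B-Instruct"        # Model name
```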
Considerations
- Performance varies: Investigation quality depends on the model's reasoning capabilities. Models with strong instruction-following and reasoning abilities work best.
- Compatibility: The model must support function calling / tool use for full investigation capabilities.
- Testing: The OpsWorker team will verify compatibility with your model before deployment.
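One quick way to check the function-calling requirement yourself before that verification is to send a request that declares a single tool and confirm the model responds with a `tool_calls` entry rather than plain text. The sketch below builds such a probe request using the standard OpenAI tool schema; the model and tool names are made up for the example.

```python
def build_tool_call_probe() -> dict:
    """Build a minimal chat request declaring one tool. A model with
    working tool use should answer with a `tool_calls` entry in the
    response message instead of free-form text."""
    return {
        "model": "your-model-name",  # hypothetical model identifier
        "messages": [
            {"role": "user", "content": "What is the status of host web-01?"}
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool defined only for this probe.
                    "name": "get_host_status",
                    "description": "Return the health status of a host",
                    "parameters": {
                        "type": "object",
                        "properties": {"host": {"type": "string"}},
                        "required": ["host"],
                    },
                },
            }
        ],
        "tool_choice": "auto",
    }
```

A model that returns text instead of a tool call for a prompt like this may still work for basic Q&A, but not for full investigations.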
Use Cases
- Organizations with strict data sovereignty requirements
- Environments that cannot send data to third-party AI providers
- Teams that want to use specific fine-tuned models
- Compliance scenarios requiring AI processing within a specific jurisdiction
Getting Started
Contact the OpsWorker team to discuss your LLM requirements and verify compatibility. This is a deployment-time configuration, not a self-service setup.
Next Steps
- Dedicated AWS Deployment — Single-tenant deployment
- Private Cloud Deployment — Deploy in your infrastructure