Bring Your Own LLM

Overview

For organizations with specific AI governance requirements, OpsWorker supports using your own LLM provider. This option is available for dedicated and private cloud deployments.

Availability

| Deployment Type | Available             |
| --------------- | --------------------- |
| SaaS            | No (uses AWS Bedrock) |
| Dedicated AWS   | Yes                   |
| Private Cloud   | Yes                   |

Supported Providers

OpsWorker supports any LLM provider with an OpenAI-compatible API, including:

  • Self-hosted models (vLLM, TGI, Ollama)
  • Third-party LLM providers with OpenAI-compatible endpoints
  • Custom model deployments
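
Because any OpenAI-compatible endpoint works, requests follow the standard `/v1/chat/completions` shape regardless of provider. The sketch below builds (but does not send) such a request; the endpoint URL, API key, and model name are placeholders, not values OpsWorker prescribes.

```python
import json
import urllib.request

def build_chat_request(endpoint: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-compatible chat completion request (not sent here)."""
    url = endpoint.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(url, data=json.dumps(body).encode(), headers=headers)

# Example: a self-hosted vLLM server (placeholder URL, key, and model name)
req = build_chat_request(
    "http://vllm.internal:8000", "sk-local", "llama-3-70b-instruct", "ping"
)
```

Any server that accepts this request shape (vLLM, TGI, Ollama's OpenAI-compatible mode, or a hosted provider) is a candidate backend.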

Configuration

BYO LLM is configured during deployment setup:

| Field        | Description                |
| ------------ | -------------------------- |
| API endpoint | URL of your LLM API        |
| API key      | Authentication credentials |
| Model name   | Model identifier to use    |
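
As a rough illustration of how these three fields fit together, a configuration might look like the following. The key names and values here are illustrative only; the actual settings are supplied to the OpsWorker team during deployment setup rather than edited by hand.

```yaml
# Illustrative sketch, not a literal OpsWorker config file.
llm:
  api_endpoint: https://llm.internal.example.com/v1   # URL of your LLM API
  api_key: ${LLM_API_KEY}                             # authentication credential
  model_name: my-finetuned-model                      # model identifier to use
```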

Considerations

  • Performance varies: Investigation quality depends on the model's reasoning capabilities. Models with strong instruction-following and reasoning abilities work best.
  • Compatibility: The model must support function calling / tool use for full investigation capabilities.
  • Testing: The OpsWorker team will verify compatibility with your model before deployment.
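
The function-calling requirement can be sanity-checked before engaging the OpsWorker team: a compatible model should accept a request carrying an OpenAI-style `tools` array and respond with a tool call. The sketch below shows that payload shape; the tool itself (`get_pod_logs`) and the model name are hypothetical examples, not part of OpsWorker's API.

```python
# OpenAI-style tool definition. A model with tool-use support should be able
# to return a tool_call when sent a payload like this; a model without it
# will typically ignore or reject the "tools" field.
tool = {
    "type": "function",
    "function": {
        "name": "get_pod_logs",  # hypothetical example tool
        "description": "Fetch recent logs for a Kubernetes pod",
        "parameters": {
            "type": "object",
            "properties": {
                "pod": {"type": "string", "description": "Pod name"},
                "lines": {"type": "integer", "description": "Log lines to return"},
            },
            "required": ["pod"],
        },
    },
}

payload = {
    "model": "your-model-name",  # placeholder
    "messages": [{"role": "user", "content": "Why is this pod crash-looping?"}],
    "tools": [tool],
}
```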

Use Cases

  • Organizations with strict data sovereignty requirements
  • Environments that cannot send data to third-party AI providers
  • Teams that want to use specific fine-tuned models
  • Compliance scenarios requiring AI processing within a specific jurisdiction

Getting Started

Contact the OpsWorker team to discuss your LLM requirements and verify compatibility. This is a deployment-time configuration, not a self-service setup.