BYOK
Bring your own API Keys
OpenRoute supports both OpenRoute credits and the option to bring your own provider keys (BYOK).
When you use OpenRoute credits, your rate limits for each provider are managed by OpenRoute.
Using provider keys enables direct control over rate limits and costs via your provider account.
Your provider keys are securely encrypted and used for all requests routed through the specified provider.
Manage keys in your account settings.
Using custom provider keys on OpenRoute incurs a small percentage fee relative to the normal cost for the same model/provider on OpenRoute, deducted from your OpenRoute credits. This fee is waived for an initial monthly allotment of BYOK requests.
Key Priority and Fallback
OpenRoute always prioritizes using your provider keys when available. By default, if your key encounters a rate limit or failure, OpenRoute will fall back to using shared OpenRoute credits.
You can configure individual keys with "Always use this key" to prevent any fallback to OpenRoute credits. When this option is enabled, OpenRoute will only use your key for requests to that provider, which may result in rate limit errors if your key is exhausted, but ensures all requests go through your account.
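The priority-and-fallback behavior described above can be pictured as a simple try-then-fallback flow. The sketch below is purely illustrative; every name in it (`send_with_byok_key`, `send_with_credits`, `RateLimitError`) is a hypothetical stand-in, not part of any real OpenRoute SDK:

```python
# Illustrative sketch of the key-priority logic described above.
# All names here are hypothetical stand-ins, not OpenRoute's real code.

class RateLimitError(Exception):
    """Raised when the provider key is rate limited or exhausted."""

def send_with_byok_key(request, key):
    # Stand-in for a call made with your own provider key.
    if key == "exhausted-key":
        raise RateLimitError("provider key rate limited")
    return f"sent {request} via BYOK"

def send_with_credits(request):
    # Stand-in for a call paid with shared OpenRoute credits.
    return f"sent {request} via credits"

def route_request(request, byok_key=None, always_use_key=False):
    if byok_key is not None:
        try:
            return send_with_byok_key(request, byok_key)
        except RateLimitError:
            if always_use_key:
                raise  # "Always use this key": surface the error, never fall back
    # Default behavior: fall back to shared OpenRoute credits.
    return send_with_credits(request)
```

With the default settings a rate-limited key silently falls back to credits; with "Always use this key" enabled, the error is surfaced to you instead.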
Azure API Keys
To use Azure AI Services with OpenRoute, you'll need to provide your Azure API key configuration in JSON format. Each key configuration requires the following fields:
{
  "model_slug": "the-openroute-model-slug",
  "endpoint_url": "https://<resource>.services.ai.azure.com/deployments/<model-id>/chat/completions?api-version=<api-version>",
  "api_key": "your-azure-api-key",
  "model_id": "the-azure-model-id"
}
You can find these values in your Azure AI Services resource:
- endpoint_url: Navigate to your Azure AI Services resource in the Azure portal. In the "Overview" section, you'll find your endpoint URL. Make sure to append /chat/completions to the base URL. You can read more in the Azure Foundry documentation.
- api_key: In the same "Overview" section of your Azure AI Services resource, you can find your API key under "Keys and Endpoint".
- model_id: This is the name of your model deployment in Azure AI Services.
- model_slug: This is the OpenRoute model identifier you want to use this key for.
Since Azure supports multiple model deployments, you can provide an array of configurations for different models:
[
  {
    "model_slug": "mistralai/mistral-large",
    "endpoint_url": "https://example-project.openai.azure.com/openai/deployments/mistral-large/chat/completions?api-version=2024-08-01-preview",
    "api_key": "your-azure-api-key",
    "model_id": "mistral-large"
  },
  {
    "model_slug": "openai/gpt-4o",
    "endpoint_url": "https://example-project.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview",
    "api_key": "your-azure-api-key",
    "model_id": "gpt-4o"
  }
]
Make sure to replace the example URLs with your own project URL. The URL must end with /chat/completions and include the api-version query parameter you want to use.
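As a quick sanity check before saving a configuration, a small script can verify that each entry has the required fields and that the endpoint URL is shaped correctly. This is a hedged sketch using only the rules stated above, not an official validator:

```python
from urllib.parse import urlparse, parse_qs

# The four fields each Azure key configuration requires (per the docs above).
REQUIRED_FIELDS = {"model_slug", "endpoint_url", "api_key", "model_id"}

def check_azure_config(config: dict) -> list[str]:
    """Return a list of problems found in one Azure key configuration."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - config.keys()]
    url = urlparse(config.get("endpoint_url", ""))
    if not url.path.endswith("/chat/completions"):
        problems.append("endpoint_url must end with /chat/completions")
    if "api-version" not in parse_qs(url.query):
        problems.append("endpoint_url must include an api-version parameter")
    return problems
```

An empty list means the configuration passes these basic checks; anything else describes what to fix.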
AWS Bedrock API Keys
To use Amazon Bedrock with OpenRoute, you can authenticate using either Bedrock API keys or traditional AWS credentials.
Option 1: Bedrock API Keys (Recommended)
Amazon Bedrock API keys provide a simpler authentication method. Simply provide your Bedrock API key as a string:
your-bedrock-api-key-here
Note: Bedrock API keys are tied to a specific AWS region and cannot be used to change regions. If you need to use models in different regions, use the AWS credentials option below.
You can generate Bedrock API keys in the AWS Management Console. Learn more in the Amazon Bedrock API keys documentation.
Option 2: AWS Credentials
Alternatively, you can use traditional AWS credentials in JSON format. This option allows you to specify the region and provides more flexibility:
{
  "accessKeyId": "your-aws-access-key-id",
  "secretAccessKey": "your-aws-secret-access-key",
  "region": "your-aws-region"
}
You can find these values in your AWS account:
- accessKeyId: This is your AWS Access Key ID. You can create or find your access keys in the AWS Management Console under "Security Credentials" in your AWS account.
- secretAccessKey: This is your AWS Secret Access Key, which is provided when you create an access key.
- region: The AWS region where your Amazon Bedrock models are deployed (e.g., "us-east-1", "us-west-2").
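Before pasting the credentials in, it can help to confirm the JSON has the expected shape. The sketch below checks only the structure described above; the region pattern is a loose heuristic for names like "us-east-1", not an authoritative list of AWS regions:

```python
import json
import re

# The three keys the AWS-credentials JSON needs (per the docs above).
REQUIRED_KEYS = ("accessKeyId", "secretAccessKey", "region")
# Loose heuristic: AWS regions look like "us-east-1", "eu-west-2", etc.
REGION_PATTERN = re.compile(r"^[a-z]{2}(-[a-z]+)+-\d+$")

def check_aws_credentials(raw: str) -> list[str]:
    """Return a list of problems found in an AWS-credentials JSON string."""
    try:
        creds = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = [f"missing key: {k}" for k in REQUIRED_KEYS if k not in creds]
    region = creds.get("region", "")
    if region and not REGION_PATTERN.match(region):
        problems.append(f"region does not look like an AWS region: {region!r}")
    return problems
```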
Make sure your AWS IAM user or role has the necessary permissions to access Amazon Bedrock services. At minimum, you'll need permissions for:
- bedrock:InvokeModel
- bedrock:InvokeModelWithResponseStream (for streaming responses)
Example IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
For enhanced security, we recommend creating dedicated IAM users with limited permissions specifically for use with OpenRoute.
Learn more in the AWS Bedrock Getting Started with the API documentation, IAM Permissions Setup guide, or the AWS Bedrock API Reference.
Google Vertex API Keys
To use Google Vertex AI with OpenRoute, you'll need to provide your Google Cloud service account key in JSON format. The service account key should include all standard Google Cloud service account fields, with an optional region field for specifying the deployment region.
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "your-private-key-id",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "your-service-account@your-project.iam.gserviceaccount.com",
  "client_id": "your-client-id",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-service-account@your-project.iam.gserviceaccount.com",
  "universe_domain": "googleapis.com",
  "region": "global"
}
You can find these values in your Google Cloud Console:
- Service Account Key: Navigate to the Google Cloud Console, go to "IAM & Admin" > "Service Accounts", select your service account, and create/download a JSON key.
- region (optional): Specify the region for your Vertex AI deployment. Use "global" to allow requests to run in any available region, or specify a specific region like "us-central1" or "europe-west1".
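A downloaded service account key can be sanity-checked before uploading it. The sketch below verifies only the fields shown in the example above plus two basic invariants (the "type" value and the PEM header of the private key); it is a hedged illustration, not an official validator:

```python
import json

# Standard fields of a Google Cloud service account key, as shown in
# the example above ("region" is optional, so it is not required here).
REQUIRED_FIELDS = (
    "type", "project_id", "private_key_id", "private_key",
    "client_email", "client_id", "auth_uri", "token_uri",
)

def check_vertex_key(raw: str) -> list[str]:
    """Return a list of problems found in a service-account key JSON string."""
    try:
        key = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in key]
    if key.get("type") != "service_account":
        problems.append('"type" must be "service_account"')
    if not key.get("private_key", "").startswith("-----BEGIN PRIVATE KEY-----"):
        problems.append("private_key does not look like a PEM private key")
    return problems
```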
Make sure your service account has the necessary permissions to access Vertex AI services:
- aiplatform.endpoints.predict
- aiplatform.endpoints.streamingPredict (for streaming responses)
Example IAM policy:
{
  "bindings": [
    {
      "role": "roles/aiplatform.user",
      "members": [
        "serviceAccount:your-service-account@your-project.iam.gserviceaccount.com"
      ]
    }
  ]
}
Learn more in the Google Cloud Vertex AI documentation and Service Account setup guide.