Required Parameters by Provider
Understand provider-specific required parameters and how to properly handle them in your API requests.
Different AI providers have varying requirements for request parameters. This guide documents provider-specific mandatory fields to ensure your API requests succeed without errors.
Anthropic Models
max_tokens (Required)
Anthropic models enforce max_tokens as a mandatory parameter in all chat completion requests. This parameter specifies the maximum number of tokens the model can generate in its response.
Key Points:
- Requirement: Always required
- Type: Integer
- Minimum: 1
- Maximum: Depends on the model's context window
- Impact: Requests without this parameter will receive a 400 Bad Request error
Common Error
If you omit max_tokens when using Anthropic models, you'll receive this error:
```json
{
  "error": {
    "message": "Error from provider: 400 Bad Request {\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"max_tokens: Field required\"},\"request_id\":\"req_011CXXRUrxrnxYTDpXkvkmmK\"}",
    "type": "gateway_error",
    "code": "gateway_error",
    "usedProvider": "anthropic",
    "requestedModel": "claude-opus-4-5-20251101"
  }
}
```
Solution
Always include max_tokens in your request payload:
cURL

```bash
curl -X POST https://api.osmapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer osm_YOUR_API_KEY" \
  -d '{
    "model": "claude-opus-4-5-20251101",
    "max_tokens": 1024,
    "messages": [
      { "role": "user", "content": "Hello, how are you?" }
    ]
  }'
```

Python

```python
import anthropic

client = anthropic.Anthropic(
    api_key="osm_YOUR_API_KEY",
    base_url="https://api.osmapi.com"
)

message = client.messages.create(
    model="claude-opus-4-5-20251101",
    max_tokens=1024,  # Required for Anthropic models
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)
```

TypeScript

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: "osm_YOUR_API_KEY",
  baseURL: "https://api.osmapi.com",
});

const message = await client.messages.create({
  model: "claude-opus-4-5-20251101",
  max_tokens: 1024, // Required for Anthropic models
  messages: [
    { role: "user", content: "Hello, how are you?" }
  ],
});
```

Recommended Values
Choose max_tokens based on your use case:
| Use Case | Recommended Value | Purpose |
|---|---|---|
| Short responses | 256-512 | Quick answers, classifications |
| Standard responses | 1024-2048 | General conversations |
| Long-form content | 2048-4096 | Articles, detailed explanations |
| Maximum capacity | 8000-16000 | Complex reasoning, code generation |
Note: The maximum value depends on the specific Claude model. Check the
model's context window to ensure your max_tokens value doesn't exceed
available limits.
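The tiers above can be encoded as a small request helper. This is only an illustrative sketch: the use-case names, default values, and the `pick_max_tokens` function are made up here to mirror the table, not part of any osmAPI SDK.

```python
# Illustrative only: the tiers below mirror the "Recommended Values" table.
MAX_TOKENS_BY_USE_CASE = {
    "short": 512,       # quick answers, classifications
    "standard": 2048,   # general conversations
    "long_form": 4096,  # articles, detailed explanations
    "maximum": 16000,   # complex reasoning, code generation
}

def pick_max_tokens(use_case: str, model_limit: int = 8192) -> int:
    """Return a max_tokens value for the use case, capped at the model's limit."""
    value = MAX_TOKENS_BY_USE_CASE.get(use_case, 1024)  # fall back to a safe default
    return min(value, model_limit)
```

Capping at a per-model limit keeps the value from exceeding what the model actually supports.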
Other Providers
Providers like OpenAI, Google, and others may have different parameter requirements:
- OpenAI: max_tokens is optional (defaults to the model's context window)
- Google: max_output_tokens is the equivalent parameter
- Meta (Llama): max_tokens is optional
When routing through osmAPI, check the specific provider's documentation whenever you're unsure whether a parameter is required.
Always use Anthropic's parameter names when routing to Anthropic models through osmAPI. The gateway handles format translation for other providers automatically.
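Conceptually, that translation might look like the sketch below. The `translate_params` function and its mapping are hypothetical, written only to show the idea; they are not osmAPI's actual gateway logic.

```python
# Hypothetical sketch of provider parameter translation; osmAPI's real
# gateway implementation is not documented in this guide.
def translate_params(provider: str, params: dict) -> dict:
    translated = dict(params)
    if provider == "google" and "max_tokens" in translated:
        # Google's equivalent parameter is max_output_tokens
        translated["max_output_tokens"] = translated.pop("max_tokens")
    if provider == "anthropic" and "max_tokens" not in translated:
        # Anthropic requires max_tokens; omitting it yields a 400 Bad Request
        raise ValueError("max_tokens is required for Anthropic models")
    return translated
```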
Best Practices
- Always include max_tokens for Anthropic models - This prevents 400 Bad Request errors
- Match the parameter to your use case - Don't use unnecessarily large values
- Monitor token usage - Check the response's usage field to understand consumption patterns
- Test with different values - Find the optimal balance between response quality and cost
Example response showing token usage:
```json
{
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 29,
    "total_tokens": 42,
    "cost_usd_total": 0.073865,
    "cost_usd_input": 0.0060775000000000004,
    "cost_usd_output": 0.0677875
  }
}
```
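The usage block above can be checked programmatically. This sketch hard-codes the example values from the response; in practice you would read them from the parsed JSON body.

```python
# Usage values copied from the example response above.
usage = {
    "prompt_tokens": 13,
    "completion_tokens": 29,
    "total_tokens": 42,
    "cost_usd_total": 0.073865,
    "cost_usd_input": 0.0060775000000000004,
    "cost_usd_output": 0.0677875,
}

# Sanity checks: token counts and cost components should sum to the totals.
assert usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]
assert abs(usage["cost_usd_input"] + usage["cost_usd_output"] - usage["cost_usd_total"]) < 1e-9

# Per-token output cost is a useful metric when tuning max_tokens.
cost_per_output_token = usage["cost_usd_output"] / usage["completion_tokens"]
```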