
Required Parameters by Provider

Understand provider-specific required parameters and how to properly handle them in your API requests.

Different AI providers have varying requirements for request parameters. This guide documents provider-specific mandatory fields to ensure your API requests succeed without errors.


Anthropic Models

max_tokens (Required)

Anthropic models enforce max_tokens as a mandatory parameter in all chat completion requests. This parameter specifies the maximum number of tokens the model can generate in its response.

Key Points:

  • Requirement: Always required
  • Type: Integer
  • Minimum: 1
  • Maximum: Depends on the model's context window
  • Impact: Requests without this parameter will receive a 400 Bad Request error

Common Error

If you omit max_tokens when using Anthropic models, you'll receive this error:

{
	"error": {
		"message": "Error from provider: 400 Bad Request {\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"max_tokens: Field required\"},\"request_id\":\"req_011CXXRUrxrnxYTDpXkvkmmK\"}",
		"type": "gateway_error",
		"code": "gateway_error",
		"usedProvider": "anthropic",
		"requestedModel": "claude-opus-4-5-20251101"
	}
}

Solution

Always include max_tokens in your request payload:

cURL:

curl -X POST https://api.osmapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer osm_YOUR_API_KEY" \
  -d '{
    "model": "claude-opus-4-5-20251101",
    "max_tokens": 1024,
    "messages": [
      { "role": "user", "content": "Hello, how are you?" }
    ]
  }'

Python:

import anthropic

client = anthropic.Anthropic(
    api_key="osm_YOUR_API_KEY",
    base_url="https://api.osmapi.com"
)

message = client.messages.create(
    model="claude-opus-4-5-20251101",
    max_tokens=1024,  # Required for Anthropic models
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

TypeScript:

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: "osm_YOUR_API_KEY",
  baseURL: "https://api.osmapi.com",
});

const message = await client.messages.create({
  model: "claude-opus-4-5-20251101",
  max_tokens: 1024, // Required for Anthropic models
  messages: [
    { role: "user", content: "Hello, how are you?" }
  ],
});
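Beyond setting the parameter explicitly, you can guard request payloads before they leave your application. The sketch below is illustrative, not part of the osmAPI SDK: `ensure_max_tokens` and its default value are assumptions, and the `claude` prefix check is a simplification of real model routing.

```python
# Hypothetical guard: fill in max_tokens for Anthropic-bound requests
# so they never trigger the 400 Bad Request shown earlier.
DEFAULT_MAX_TOKENS = 1024  # assumed default; tune for your use case

def ensure_max_tokens(payload: dict) -> dict:
    """Return a copy of the payload with max_tokens set for Claude models.

    Anthropic models reject requests that omit max_tokens, so we supply
    a default when the caller leaves it out. Other models pass through
    unchanged.
    """
    fixed = dict(payload)
    if payload.get("model", "").startswith("claude") and "max_tokens" not in payload:
        fixed["max_tokens"] = DEFAULT_MAX_TOKENS
    return fixed

request = {
    "model": "claude-opus-4-5-20251101",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
}
print(ensure_max_tokens(request)["max_tokens"])  # 1024
```

A guard like this is most useful at the boundary where many call sites share one request-building path, so a single omission can't slip through.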

Choose max_tokens based on your use case:

Use Case           | Recommended Value | Purpose
Short responses    | 256-512           | Quick answers, classifications
Standard responses | 1024-2048         | General conversations
Long-form content  | 2048-4096         | Articles, detailed explanations
Maximum capacity   | 8000-16000        | Complex reasoning, code generation

Note: The maximum value depends on the specific Claude model. Check the model's context window to ensure your max_tokens value doesn't exceed available limits.
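The recommendations above can be encoded as a small lookup so every call site picks a consistent value. The tier names, the chosen values (the upper end of each range), and the fallback are assumptions for illustration, not osmAPI features.

```python
# Sketch: map use-case tiers to max_tokens values, following the
# recommendations above. Names and values are illustrative.
RECOMMENDED_MAX_TOKENS = {
    "short": 512,       # quick answers, classifications
    "standard": 2048,   # general conversations
    "long_form": 4096,  # articles, detailed explanations
    "maximum": 16000,   # complex reasoning, code generation
}

def max_tokens_for(use_case: str) -> int:
    """Return a recommended max_tokens, defaulting to a standard-ish 1024."""
    return RECOMMENDED_MAX_TOKENS.get(use_case, 1024)

print(max_tokens_for("long_form"))  # 4096
```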


Other Providers

Providers like OpenAI, Google, and others may have different parameter requirements:

  • OpenAI: max_tokens is optional (when omitted, the model can generate up to the remaining context window)
  • Google: max_output_tokens is the equivalent parameter
  • Meta (Llama): max_tokens is optional

When routing through osmAPI, always check the specific provider's documentation if you're not certain about a parameter's requirement status.

Always use Anthropic's parameter names when routing to Anthropic models through osmAPI. The gateway handles format translation for other providers automatically.
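To make the translation concrete, here is a simplified sketch of the kind of parameter renaming described above. This is not the gateway's actual implementation; the provider keys and the single-parameter mapping are assumptions for illustration.

```python
# Illustrative mapping from the unified max_tokens parameter to each
# provider's equivalent name, per the list above.
PARAM_NAME = {
    "anthropic": "max_tokens",
    "openai": "max_tokens",
    "google": "max_output_tokens",
    "meta": "max_tokens",
}

def translate(provider: str, payload: dict) -> dict:
    """Rename max_tokens to the provider's parameter name, if present."""
    out = dict(payload)
    if "max_tokens" in out:
        out[PARAM_NAME[provider]] = out.pop("max_tokens")
    return out

print(translate("google", {"max_tokens": 1024}))  # {'max_output_tokens': 1024}
```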


Best Practices

  1. Always include max_tokens for Anthropic models - This prevents 400 Bad Request errors
  2. Match the parameter to your use case - Don't use unnecessarily large values
  3. Monitor token usage - Check the response's usage field to understand consumption patterns
  4. Test with different values - Find the optimal balance between response quality and cost

Example response showing token usage:

{
	"usage": {
		"prompt_tokens": 13,
		"completion_tokens": 29,
		"total_tokens": 42,
		"cost_usd_total": 0.073865,
		"cost_usd_input": 0.0060775,
		"cost_usd_output": 0.0677875
	}
}
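To monitor consumption as suggested in the best practices, you can summarize the usage field from each response. The helper below is a sketch; the field names match the example response above, but the function itself is hypothetical.

```python
# Sketch: condense a response's usage field into one log-friendly line.
def summarize_usage(response: dict) -> str:
    u = response["usage"]
    return (f"{u['total_tokens']} tokens "
            f"({u['prompt_tokens']} in / {u['completion_tokens']} out), "
            f"${u['cost_usd_total']:.6f}")

resp = {"usage": {"prompt_tokens": 13, "completion_tokens": 29,
                  "total_tokens": 42, "cost_usd_total": 0.073865,
                  "cost_usd_input": 0.0060775, "cost_usd_output": 0.0677875}}
print(summarize_usage(resp))  # 42 tokens (13 in / 29 out), $0.073865
```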
