
Launch Guide

Get started with osmAPI in minutes — integrate with any SDK or language.

⚡ Get Started in Minutes

Welcome to osmAPI — the unified AI gateway that connects you to every major LLM provider through a single API. Drop-in compatible with your existing code.

Key Takeaway — Point your requests to https://api.osmapi.com/v1/…, authenticate with your OSM_API_KEY, and you're ready to go.


1 · Provision Your Credentials

  1. Sign in to the osmAPI management console.
  2. Create a workspace and copy your Project Key.
  3. Store it securely in your environment:
export OSM_API_KEY="osm_XXXXXXXXXXXXXXXX"
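Before wiring the key into an SDK, it helps to fail fast when it is missing. A minimal illustrative Python helper (the helper name and error message are ours, not part of any SDK):

```python
import os

def require_api_key() -> str:
    """Read OSM_API_KEY from the environment, failing fast with a clear message."""
    key = os.environ.get("OSM_API_KEY")
    if not key:
        raise RuntimeError('OSM_API_KEY is not set. Run: export OSM_API_KEY="osm_..."')
    return key

# Demo with a placeholder value; in real use, `export` sets this before the process starts.
os.environ.setdefault("OSM_API_KEY", "osm_XXXXXXXXXXXXXXXX")
print(require_api_key())
```

Calling this once at startup gives a clearer error than a 401 from the gateway later.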

2 · SDK Integrations

osmAPI integrates smoothly with your favorite SDKs and libraries:

openai-sdk.ts
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OSM_API_KEY,
  baseURL: "https://api.osmapi.com/v1",
});

const completion = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Curate a list of 5 essential tools for AI developers." }],
});

console.log(completion.choices[0].message.content);
vercel-ai-sdk.ts
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const gateway = createOpenAI({
	baseURL: "https://api.osmapi.com/v1",
	apiKey: process.env.OSM_API_KEY!,
});

const { text } = await generateText({
	model: gateway("gpt-4o"),
	prompt: "Analyze the benefits of unified APIs.",
});

console.log(text);
openai-sdk-direct.ts
import OpenAI from "openai";

const client = new OpenAI({
	baseURL: "https://api.osmapi.com/v1",
	apiKey: process.env.OSM_API_KEY,
});

const completion = await client.chat.completions.create({
	model: "gpt-4o",
	messages: [
		{ role: "user", content: "Tell me a story about a helpful AI bridge." },
	],
});

console.log(completion.choices[0].message.content);

3 · Make Your First Request

curl
curl -X POST https://api.osmapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OSM_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Briefly explain the benefit of an AI gateway."}
    ]
  }'
fetch.js
const callAI = async () => {
  const result = await fetch('https://api.osmapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OSM_API_KEY}`
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'What is the future of AI orchestration?' }]
    })
  });

  if (!result.ok) throw new Error(`Gateway Error: ${result.status}`);

  const payload = await result.json();
  console.log('AI Response:', payload.choices[0].message.content);
};
IntelligentChat.jsx
// Note: In production, call your backend API instead of exposing the key client-side
import { useState } from 'react';

function IntelligentChat() {
  const [data, setData] = useState('');
  const [isProcessing, setIsProcessing] = useState(false);

  const fetchInsights = async () => {
    setIsProcessing(true);
    try {
      const apiResponse = await fetch('https://api.osmapi.com/v1/chat/completions', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${process.env.OSM_API_KEY}`
        },
        body: JSON.stringify({
          model: 'gpt-4o',
          messages: [{ role: 'user', content: 'Design a modern UI pattern for AI assistants.' }]
        })
      });
      const json = await apiResponse.json();
      setData(json.choices[0].message.content);
    } catch (err) {
      console.error('Request Failed:', err);
    } finally {
      setIsProcessing(false);
    }
  };

  return (
    <div className="p-4 border rounded-lg shadow-sm">
      <button
        onClick={fetchInsights}
        disabled={isProcessing}
        className="px-4 py-2 bg-blue-600 text-white rounded hover:bg-blue-700 transition"
      >
        {isProcessing ? "Processing Request..." : "Generate Insights"}
      </button>
      {data && <div className="mt-4 p-3 bg-gray-50 rounded italic">"{data}"</div>}
    </div>
  );
}

export default IntelligentChat;
// app/api/generate/route.ts
import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
  const { query } = await req.json();

  const apiConnection = await fetch('https://api.osmapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OSM_API_KEY}`
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: query }]
    })
  });

  if (!apiConnection.ok) {
    return NextResponse.json({ message: 'Upstream Provider Error' }, { status: 502 });
  }

  const resultBody = await apiConnection.json();
  return NextResponse.json({ content: resultBody.choices[0].message.content });
}
orchestrate.py
import os

import requests

def orchestrate_ai(prompt):
    endpoint = 'https://api.osmapi.com/v1/chat/completions'
    auth_header = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {os.environ.get("OSM_API_KEY")}'
    }
    payload = {
        'model': 'gpt-4o',
        'messages': [{'role': 'user', 'content': prompt}]
    }
    response = requests.post(endpoint, headers=auth_header, json=payload)
    response.raise_for_status()
    return response.json()['choices'][0]['message']['content']

print(orchestrate_ai("Describe the synergy between Python and AI."))
AIGatewayClient.java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AIGatewayClient {
    public static void main(String[] args) throws Exception {
        String key = System.getenv("OSM_API_KEY");
        String jsonPayload = """
            {"model": "gpt-4o",
             "messages": [{"role": "user", "content": "How does Java fit into modern AI stacks?"}]}
            """;

        HttpRequest req = HttpRequest.newBuilder()
            .uri(URI.create("https://api.osmapi.com/v1/chat/completions"))
            .header("Content-Type", "application/json")
            .header("Authorization", "Bearer " + key)
            .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
            .build();

        HttpResponse<String> res = HttpClient.newHttpClient()
            .send(req, HttpResponse.BodyHandlers.ofString());

        System.out.println("Gateway Response: " + res.body());
    }
}
main.rs
use reqwest::Client;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let gateway_client = Client::new();
    let key = std::env::var("OSM_API_KEY")?;

    let res = gateway_client
        .post("https://api.osmapi.com/v1/chat/completions")
        .header("Authorization", format!("Bearer {}", key))
        .json(&json!({
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": "Explain Rust's safety in the context of AI."}]
        }))
        .send()
        .await?;

    let output: serde_json::Value = res.json().await?;
    println!("AI insights: {}", output["choices"][0]["message"]["content"]);
    Ok(())
}
main.go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	apiSecret := os.Getenv("OSM_API_KEY")
	payload := map[string]interface{}{
		"model": "gpt-4o",
		"messages": []map[string]string{
			{"role": "user", "content": "How does Go handle concurrency for AI requests?"},
		},
	}

	body, err := json.Marshal(payload)
	if err != nil {
		log.Fatal(err)
	}

	request, err := http.NewRequest("POST", "https://api.osmapi.com/v1/chat/completions", bytes.NewBuffer(body))
	if err != nil {
		log.Fatal(err)
	}
	request.Header.Set("Content-Type", "application/json")
	request.Header.Set("Authorization", "Bearer "+apiSecret)

	resp, err := http.DefaultClient.Do(request)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print the raw JSON response from the gateway.
	responseBody, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(responseBody))
}
gateway.php
<?php
$osmKey = getenv('OSM_API_KEY');
$requestData = [
    'model' => 'gpt-4o',
    'messages' => [
        ['role' => 'user', 'content' => 'Why use a gateway for PHP-based AI apps?']
    ]
];

$ch = curl_init('https://api.osmapi.com/v1/chat/completions');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($requestData));
curl_setopt($ch, CURLOPT_HTTPHEADER, [
    'Content-Type: application/json',
    'Authorization: Bearer ' . $osmKey
]);

$response = curl_exec($ch);
curl_close($ch);

$result = json_decode($response, true);
echo $result['choices'][0]['message']['content'];
?>
gateway.rb
require 'net/http'
require 'json'

def fetch_ai_completion(prompt)
  uri = URI('https://api.osmapi.com/v1/chat/completions')
  req = Net::HTTP::Post.new(uri, {
    'Content-Type' => 'application/json',
    'Authorization' => "Bearer #{ENV['OSM_API_KEY']}"
  })
  req.body = {
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }]
  }.to_json

  res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(req) }
  JSON.parse(res.body)['choices'][0]['message']['content']
end

puts fetch_ai_completion("Explain Ruby's elegance in AI scripting.")

4 · Embeddings

Generate text embeddings for search, similarity, and RAG using the same API key:

curl -X POST https://api.osmapi.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OSM_API_KEY" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "The quick brown fox jumps over the lazy dog"
  }'
embeddings.py
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OSM_API_KEY"],
    base_url="https://api.osmapi.com/v1"
)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog"
)
print(response.data[0].embedding[:5])
embeddings.ts
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OSM_API_KEY,
  baseURL: "https://api.osmapi.com/v1",
});

const response = await client.embeddings.create({
  model: "text-embedding-3-small",
  input: "The quick brown fox jumps over the lazy dog",
});

console.log(response.data[0].embedding.slice(0, 5));

Available models: text-embedding-3-small, text-embedding-3-large (OpenAI), gemini-embedding-001, gemini-embedding-2-preview (Google). See the full Embeddings guide for details.
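Once you have vectors back, similarity search is just vector math. A dependency-free sketch of cosine similarity; the short vectors below are toy stand-ins for real `response.data[0].embedding` values:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings from /v1/embeddings:
doc_vec = [0.1, 0.3, 0.5]
query_vec = [0.2, 0.3, 0.4]
print(cosine_similarity(doc_vec, query_vec))
```

For a RAG retrieval step, you would compute this score between the query embedding and each stored document embedding, then rank by score.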


5 · Audio (Text-to-Speech & Speech-to-Text)

Generate spoken audio from text or transcribe audio files — same API key, same base URL:

curl -X POST https://api.osmapi.com/v1/audio/speech \
  -H "Authorization: Bearer $OSM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "tts-1", "input": "Hello from osmAPI!", "voice": "alloy"}' \
  --output speech.mp3
curl -X POST https://api.osmapi.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OSM_API_KEY" \
  -F file=@audio.mp3 \
  -F model=whisper-1
audio.py
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OSM_API_KEY"],
    base_url="https://api.osmapi.com/v1"
)

# Text-to-Speech
audio = client.audio.speech.create(
    model="tts-1", input="Hello!", voice="alloy"
)
audio.stream_to_file("output.mp3")

# Speech-to-Text
with open("output.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=f
    )
print(transcript.text)

TTS models: tts-1, tts-1-hd, gpt-4o-mini-tts. STT models: whisper-1, gpt-4o-transcribe, gpt-4o-mini-transcribe (OpenAI) + groq/whisper-large-v3, groq/whisper-large-v3-turbo (Groq). See the full Audio guide for details.
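The model lists above mix bare IDs (whisper-1) with provider-prefixed ones (groq/whisper-large-v3). A small illustrative helper for splitting such IDs in your own routing code; the rule that bare IDs default to OpenAI is our assumption here, not documented gateway behavior:

```python
def split_model_id(model_id: str, default_provider: str = "openai"):
    """Split a 'provider/model' ID; bare IDs fall back to default_provider."""
    provider, sep, name = model_id.partition("/")
    if not sep:
        # No slash present: treat the whole string as the model name.
        return default_provider, model_id
    return provider, name

print(split_model_id("groq/whisper-large-v3"))
print(split_model_id("whisper-1"))
```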


6 · Frequently Asked Questions


7 · Advanced Features

  • Streaming: set stream: true on any chat completion request to receive tokens as they are generated, with low latency through the proxy layer.
  • Observability: the dashboard logs every request, including per-call cost and provider latency.
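Assuming osmAPI relays OpenAI-style Server-Sent Events (each `data:` line carrying a JSON chunk with a `delta`), collecting the streamed text looks like this. The canned `sample` below is illustrative; real code would iterate over the HTTP response line by line:

```python
import json

def extract_stream_text(sse_lines):
    """Collect assistant text from OpenAI-style streaming chunks."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        data = line[len("data: "):]
        if data == "[DONE]":  # end-of-stream sentinel
            break
        delta = json.loads(data)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Canned example of what a streamed response body can look like:
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world!"}}]}',
    "data: [DONE]",
]
print(extract_stream_text(sample))  # → Hello, world!
```

With the official SDKs you rarely parse SSE yourself; iterating over the stream object yields these chunk deltas directly.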

8 · Next Steps

  • Explore our documentation for advanced features and integrations.

Elevate your AI strategy with osmAPI. ✨
