
Providers

observe wraps an LLM client and returns a traced version. The returned client keeps the same API as the original, so it works as a drop-in replacement.

const traced = observe(client, Provider.OpenAI);
const traced = observe(client, Provider.Anthropic);
const traced = observe(client, Provider.OpenRouter);

What gets patched:

  • For OpenAI and OpenRouter, client.chat.completions.create is patched.
  • For Anthropic, client.messages.create is patched.

All other client methods remain untouched.
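As a rough mental model, observe replaces the patched method with a wrapper that records the call and then delegates to the original. Below is a minimal sketch of that idea for OpenAI; recordTrace is a hypothetical helper used only for illustration, and the real SDK additionally waits for streaming responses to finish and captures errors.

import OpenAI from "openai";

// Hypothetical trace sink, for illustration only (not part of the SDK's public API).
function recordTrace(trace: { params: unknown; response: unknown; durationMs: number }) {
  console.log("trace", trace);
}

function patchOpenAI(client: OpenAI): OpenAI {
  const original = client.chat.completions.create.bind(client.chat.completions);
  // Shadow the method with a wrapper that records the call, then delegates to the original.
  client.chat.completions.create = (async (params: any, options?: any) => {
    const started = Date.now();
    const response = await original(params, options);
    recordTrace({ params, response, durationMs: Date.now() - started });
    return response;
  }) as typeof client.chat.completions.create;
  return client;
}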

import OpenAI from "openai";
import { observe, Provider } from "@pulse/sdk";

const client = observe(
  new OpenAI({ apiKey: "sk-..." }),
  Provider.OpenAI
);

// Non-streaming call
const res = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

// Streaming call
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
});

Both streaming and non-streaming calls are traced. For streams, the trace is recorded once the stream completes.
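For example, once the stream from the call above has been fully consumed, the trace for that call is recorded:

// Consume the stream; the trace is recorded when the stream finishes.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}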

import Anthropic from "@anthropic-ai/sdk";
import { observe, Provider } from "@pulse/sdk";

const client = observe(
  new Anthropic({ apiKey: "sk-ant-..." }),
  Provider.Anthropic
);

const res = await client.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
});

Anthropic stop reasons are normalized: end_turn -> stop, max_tokens -> length, stop_sequence -> stop, tool_use -> tool_calls.
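In code, the normalization amounts to a small lookup table. The sketch below illustrates the mapping; the SDK's actual implementation may differ, and passing unknown reasons through unchanged is an assumption.

const STOP_REASON_MAP: Record<string, string> = {
  end_turn: "stop",
  stop_sequence: "stop",
  max_tokens: "length",
  tool_use: "tool_calls",
};

function normalizeStopReason(reason: string | null): string | null {
  if (reason === null) return null;
  return STOP_REASON_MAP[reason] ?? reason; // unknown reasons pass through unchanged (assumption)
}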

OpenRouter uses the OpenAI client library. Pass Provider.OpenRouter so Pulse records the correct provider and captures OpenRouter's cost fields.

import OpenAI from "openai";
import { observe, Provider } from "@pulse/sdk";

const client = observe(
  new OpenAI({
    apiKey: "sk-or-...",
    baseURL: "https://openrouter.ai/api/v1",
  }),
  Provider.OpenRouter
);

When OpenRouter includes a cost field in the response, Pulse uses that value directly.

The SDK calculates cost automatically for known models.

Model                       Input          Output
gpt-4o                      $2.50 / 1M     $10.00 / 1M
gpt-4o-mini                 $0.15 / 1M     $0.60 / 1M
gpt-4-turbo                 $10.00 / 1M    $30.00 / 1M
gpt-3.5-turbo               $0.50 / 1M     $1.50 / 1M
claude-3-5-sonnet-20241022  $3.00 / 1M     $15.00 / 1M
claude-3-5-haiku-20241022   $0.80 / 1M     $4.00 / 1M
claude-3-opus-20240229      $15.00 / 1M    $75.00 / 1M

Model aliases such as gpt-4o-2024-11-20 resolve to base model pricing. Unknown models report null cost.
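As a rough illustration of how a cost comes out of the table above, here is a sketch of the calculation. It is not the SDK's internal code: the PRICING map and the prefix-based alias matching are assumptions based on the behavior described above.

interface Pricing { input: number; output: number } // USD per 1M tokens

const PRICING: Record<string, Pricing> = {
  "gpt-4o": { input: 2.5, output: 10 },
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
  "claude-3-5-sonnet-20241022": { input: 3, output: 15 },
  // ...remaining models from the table above
};

function estimateCost(model: string, inputTokens: number, outputTokens: number): number | null {
  // Prefer the longest matching base model so "gpt-4o-mini-..." resolves to gpt-4o-mini, not gpt-4o.
  const key = Object.keys(PRICING)
    .sort((a, b) => b.length - a.length)
    .find((base) => model === base || model.startsWith(base + "-"));
  if (!key) return null; // unknown models report null cost
  const p = PRICING[key];
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;
}

// estimateCost("gpt-4o-2024-11-20", 1000, 500) -> 0.0075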

If an LLM call throws, the SDK captures an error trace and re-throws the original error.
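Because the original error is re-thrown, existing error handling continues to work unchanged:

try {
  await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (err) {
  // An error trace has already been captured; err is the provider's original error.
  console.error("LLM call failed:", err);
}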

If trace sending fails (network error or service unavailable), the SDK logs a warning and continues. Tracing never breaks application behavior.