# Quickstart
## Run Pulse server
Start the trace service before instrumenting your app. For local development, single mode (SQLite) is the quickest path.
Install the binary:
```sh
curl -fsSL https://raw.githubusercontent.com/EK-LABS-LLC/trace-service/main/scripts/install.sh | bash -s -- pulse
```

Set required environment variables:
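Both `BETTER_AUTH_SECRET` and `ENCRYPTION_KEY` (set below) need random values of at least 32 characters. One way to generate a suitable value with the Python standard library (any 32+ character random string works):

```python
import secrets

# 32 random bytes rendered as 64 hex characters,
# comfortably over the 32-character minimum.
print(secrets.token_hex(32))
```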
```sh
export BETTER_AUTH_SECRET='replace-with-32+char-secret'
export ENCRYPTION_KEY='replace-with-32+char-secret'
export BETTER_AUTH_URL='http://localhost:3000'
```

Start the service:
```sh
pulse
```

## Install SDK
```sh
bun add @pulse/sdk
```

```sh
pip install pulse-sdk
```

Works with Bun, Node, or any JavaScript/Python runtime.
## CLI integrations (Claude Code, Opencode, OpenClaw)
If you want Pulse to capture coding-agent events in addition to SDK traces:
```sh
# Install CLI
curl -fsSL https://raw.githubusercontent.com/EK-LABS-LLC/trace-cli/main/install.sh | sh

# Configure trace service connection
pulse init

# Install integrations for detected agents
pulse connect

# Verify config + connectivity + integration status
pulse status
```

See CLI Reference for full command and config details.
## Initialize
Call `initPulse()` once at application startup. You need an API key from the Pulse dashboard.
```ts
import { initPulse } from "@pulse/sdk";

initPulse({
  apiKey: "pulse_sk_...",
});
```

```python
from pulse_sdk import init_pulse

init_pulse({
    "api_key": "pulse_sk_...",
})
```

This starts background trace batching and registers shutdown handlers to flush remaining traces on exit.
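The flush-on-exit behavior can be pictured with a minimal sketch. This is a conceptual illustration only, not the SDK's actual internals: traces accumulate in an in-memory queue, full batches are sent in the background, and an exit handler flushes whatever remains.

```python
import atexit

class TraceBatcher:
    """Toy model of background trace batching; not the real @pulse/sdk."""

    def __init__(self, batch_size=10):
        self.batch_size = batch_size
        self.pending = []
        self.sent_batches = []
        # Mirrors the SDK's shutdown handler: flush leftovers at exit.
        atexit.register(self.flush)

    def record(self, trace):
        self.pending.append(trace)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            # A real implementation would POST the batch to the trace service.
            self.sent_batches.append(list(self.pending))
            self.pending.clear()
```

The exit handler is what makes the final `flush()` unnecessary in application code: short-lived scripts still get their last partial batch delivered.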
## Wrap your client
Use `observe()` to wrap your LLM client. The returned client behaves identically, and tracing is captured as a side effect.
```ts
import { initPulse, observe, Provider } from "@pulse/sdk";
import OpenAI from "openai";

initPulse({ apiKey: "pulse_sk_..." });

const client = observe(
  new OpenAI({ apiKey: "sk-..." }),
  Provider.OpenAI
);

const res = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
```

```python
from pulse_sdk import init_pulse, observe, Provider
from openai import OpenAI

init_pulse({ "api_key": "pulse_sk_..." })

client = observe(
    OpenAI(api_key="sk-..."),
    Provider.OPENAI,
)

res = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

## What gets captured
| Field | Description |
|---|---|
| Request and response bodies | Full prompt and completion |
| Token counts | Input and output tokens |
| Latency | End-to-end request duration in milliseconds |
| Cost | Provider-reported or SDK-estimated when model pricing is available |
| Model | Requested and actual model used |
| Status | `success` or `error` |
| Provider | `openai`, `anthropic`, or `openrouter` |
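To illustrate the SDK-estimated path for the Cost field: when the provider does not report cost, it can be derived from the captured token counts and a per-model price table. The function and the rates below are placeholders for illustration, not the SDK's API or authoritative pricing:

```python
# Hypothetical per-million-token prices in USD (placeholders, not real rates).
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Return estimated USD cost, or None when model pricing is unavailable."""
    rates = PRICING.get(model)
    if rates is None:
        # Matches the table: cost is only estimated "when model pricing is available".
        return None
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

print(estimate_cost("gpt-4o", 1_000, 500))  # 0.0075
```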
## Supported providers
| Provider | Client | Enum |
|---|---|---|
| OpenAI | `openai` | `Provider.OpenAI` |
| Anthropic | `@anthropic-ai/sdk` | `Provider.Anthropic` |
| OpenRouter | `openai` | `Provider.OpenRouter` |
See Providers for detailed usage.