
Quickstart

Start the trace service before instrumenting your app. For local development, single mode (SQLite) is the quickest path.

Install the binary:

curl -fsSL https://raw.githubusercontent.com/EK-LABS-LLC/trace-service/main/scripts/install.sh | bash -s -- pulse

Set required environment variables:

export BETTER_AUTH_SECRET='replace-with-32+char-secret'
export ENCRYPTION_KEY='replace-with-32+char-secret'
export BETTER_AUTH_URL='http://localhost:3000'
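
Any random string of at least 32 characters works for the two secrets. One common way to generate one (an example, not a requirement of the service):

openssl rand -hex 32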

Start the service:

pulse

Install the SDK in your application:

bun add @pulse/sdk

Works with Bun, Node, or any JavaScript/Python runtime.
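
If you're on Node with npm rather than Bun, the equivalent install (assuming the package is published under the same name) is:

npm install @pulse/sdk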

CLI integrations (Claude Code, Opencode, OpenClaw)


If you want Pulse to capture coding-agent events in addition to SDK traces:

# Install CLI
curl -fsSL https://raw.githubusercontent.com/EK-LABS-LLC/trace-cli/main/install.sh | sh
# Configure trace service connection
pulse init
# Install integrations for detected agents
pulse connect
# Verify config + connectivity + integration status
pulse status

See CLI Reference for full command and config details.

Initialize the SDK

Call initPulse() once at application startup. You need an API key from the Pulse dashboard.

import { initPulse } from "@pulse/sdk";

initPulse({
  apiKey: "pulse_sk_...",
});

This starts background trace batching and registers shutdown handlers that flush any remaining traces on exit.
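
In practice you'll likely read the key from the environment rather than hardcoding it. A minimal sketch; PULSE_API_KEY is an arbitrary variable name chosen for this example, not one the SDK mandates:

import { initPulse } from "@pulse/sdk";

// PULSE_API_KEY is an assumed env var name, not an SDK convention.
const apiKey = process.env.PULSE_API_KEY;
if (!apiKey) throw new Error("PULSE_API_KEY is not set");

initPulse({ apiKey });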

Wrap your LLM client

Use observe() to wrap your LLM client. The returned client behaves identically; tracing is captured as a side effect.

import { initPulse, observe, Provider } from "@pulse/sdk";
import OpenAI from "openai";

initPulse({ apiKey: "pulse_sk_..." });

const client = observe(
  new OpenAI({ apiKey: "sk-..." }),
  Provider.OpenAI
);

const res = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

Each traced request records:

| Field | Description |
| --- | --- |
| Request and response bodies | Full prompt and completion |
| Token counts | Input and output tokens |
| Latency | End-to-end request duration in milliseconds |
| Cost | Provider-reported, or SDK-estimated when model pricing is available |
| Model | Requested and actual model used |
| Status | success or error |
| Provider | openai, anthropic, or openrouter |

Supported providers and the clients they wrap:

| Provider | Client package | Enum value |
| --- | --- | --- |
| OpenAI | openai | Provider.OpenAI |
| Anthropic | @anthropic-ai/sdk | Provider.Anthropic |
| OpenRouter | openai | Provider.OpenRouter |
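
Based on the table above, the other providers should follow the same observe() pattern as the OpenAI example. A sketch under those assumptions (the OpenRouter base URL is OpenRouter's public API endpoint, not something stated here):

import { initPulse, observe, Provider } from "@pulse/sdk";
import Anthropic from "@anthropic-ai/sdk";
import OpenAI from "openai";

initPulse({ apiKey: "pulse_sk_..." });

// Anthropic uses its own client package, per the table above.
const anthropic = observe(
  new Anthropic({ apiKey: "sk-ant-..." }),
  Provider.Anthropic
);

// OpenRouter reuses the openai client, per the table; pointing it at
// OpenRouter's API base is an assumption of this sketch.
const openrouter = observe(
  new OpenAI({ baseURL: "https://openrouter.ai/api/v1", apiKey: "sk-or-..." }),
  Provider.OpenRouter
);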

See Providers for detailed usage.