Observability for every LLM call your app makes

Trace prompts, completions, latency, tokens, and cost across OpenAI, Anthropic, and OpenRouter with a single SDK for TypeScript and Python.

Pulse dashboard overview screenshot
Quick start

Install the Pulse server first

Run this script to install the Pulse server binary, then start instrumenting with the SDK.

Server install script
curl -fsSL https://raw.githubusercontent.com/EK-LABS-LLC/trace-service/main/scripts/install.sh | bash -s -- pulse
Integrations

Works with the agents your team already uses

Native integrations for agent runtimes and CLIs so traces, runs, and tool activity land in one system.

Dashboard

See exactly what your team will use day to day

Fast visibility for tracing, run replay, triage, and spend decisions.

Command Center dashboard screenshot

Command Center

Overview of traces, spend, and live service health across your projects.

Agent Run Playback dashboard screenshot

Agent Run Playback

Replay each agent run in order, with its tool calls, timing, and failure points.

Cost Intelligence dashboard screenshot

Cost Intelligence

Track token usage and cost trends to catch spikes before they become problems.

Features

Everything you need to understand your LLM stack

From individual traces to full session timelines, Pulse gives your team the visibility to ship with confidence.

Tracing

End-to-end trace capture

Every prompt, completion, token count, latency, and cost stored with full request and response bodies.
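
As an illustration, a captured trace could be modeled roughly like the record below. The field names are assumptions made for clarity, not Pulse's actual storage schema.

// Illustrative trace record. Field names are assumptions, not
// Pulse's actual schema.
interface TraceRecord {
  id: string;
  provider: "openai" | "anthropic" | "openrouter";
  model: string;
  requestBody: unknown;    // full request payload
  responseBody: unknown;   // full response payload
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
  costUsd: number;
  startedAt: string;       // ISO 8601 timestamp
}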

Sessions

Session timelines

Group related calls into sessions. See full conversations or agent runs as a timeline.
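
The exact session API isn't shown on this page; as a sketch, grouping might look like passing a session identifier when wrapping the client. The options argument here is an assumption, not the documented observe() signature.

import OpenAI from "openai";
import { observe, Provider } from "@pulse/sdk";

// Hypothetical sketch: the { sessionId } option is an assumption,
// not the documented observe() signature.
const client = observe(
  new OpenAI({ apiKey: "sk-..." }),
  Provider.OpenAI,
  { sessionId: "support-agent-run-42" }
);
// Calls made through this client would then appear together on one
// session timeline.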

Analytics

Cost and usage tracking

Track spend per model, per project, per session. Catch cost spikes before they hit your bill.
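
As a rough illustration of how per-call spend rolls up, cost is token usage times a per-model rate. The rates below are placeholders, not real provider pricing.

// Illustrative per-call cost math. Rates are placeholders, not
// actual provider pricing.
const promptTokens = 1_200;
const completionTokens = 300;
const inputRatePerMTok = 0.15;   // $ per 1M input tokens (example)
const outputRatePerMTok = 0.60;  // $ per 1M output tokens (example)

const costUsd =
  (promptTokens / 1_000_000) * inputRatePerMTok +
  (completionTokens / 1_000_000) * outputRatePerMTok;
// ≈ $0.00036 for this call; Pulse aggregates these per model,
// project, and session.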

Routing

Provider agnostic

OpenAI, Anthropic, OpenRouter. One SDK, consistent metrics across all providers.
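
For instance, wrapping an Anthropic client should mirror the OpenAI example in the quick start. Provider.Anthropic here is an assumption inferred by symmetry from the Provider.OpenAI value shown below, not a confirmed enum member.

import Anthropic from "@anthropic-ai/sdk";
import { observe, Provider } from "@pulse/sdk";

// Same observe() wrapper, different provider. Provider.Anthropic is
// assumed by symmetry with Provider.OpenAI.
const anthropic = observe(
  new Anthropic({ apiKey: "sk-ant-..." }),
  Provider.Anthropic
);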

Debugging

Fast root-cause workflows

When incidents happen, pivot quickly from aggregate analytics to the individual traces and spans behind them.

Deployment

Single-node to scale mode

Run as a single binary with built-in SQLite, or switch to Postgres + partitioned listeners for higher throughput.

How it works

Instrument in minutes, not days

Three steps to full observability across your LLM stack.

Step 01

Install the SDK

Add the SDK to your project. Works with JS/TS and Python runtimes.

bun add @pulse/sdk
pip install pulse-sdk
Step 02

Wrap your LLM client

Call observe() on your existing client. No API rewrite.

import OpenAI from "openai";
import { observe, Provider } from "@pulse/sdk";

// Wrap once; every call through the client is traced.
const client = observe(
  new OpenAI({ apiKey: "sk-..." }),
  Provider.OpenAI
);
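
Once wrapped, the client behaves exactly as before. For example, a standard chat completion call needs no changes while Pulse records the trace (the model name here is just an example):

// Standard OpenAI SDK call through the wrapped client; Pulse
// captures the prompt, response, tokens, latency, and cost.
const res = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});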
Step 03

Open the dashboard

Traces flow in automatically. Search, filter, and analyze quickly.

142 traces captured
$0.47 total cost
avg latency 1.2s
Documentation

Everything you need to track your agent usage

Guides, API reference, and examples to get from zero to production.

Start tracking your agents

Free and open source. Self-host anywhere.