TrackingKoala


LLM Setup Guide: OpenAI Template and Custom Events

A copy-paste friendly guide covering two integration scenarios, written for both humans and LLM agents implementing TrackingKoala setup quickly and consistently.

Use This When

OpenAI Template: You have OpenAI token usage and want TrackingKoala to calculate cost and charge from the managed template package.

Custom Events: You already compute usage cost in your own logic and want to push fully custom tracked events.

Scenario 1

Setup installation with OpenAI template

This path uses template execution. TrackingKoala evaluates pricing rules based on the model and token fields, then stores the tracked event when `mode: "track"` is used.

  1. Install the SDK and create an API key in your TrackingKoala dashboard.
  2. Activate the OpenAI package in Dashboard -> Integrations -> OpenAI.
  3. Call `rateTemplate()` with `templateId="openai"` and token inputs.
  4. Use `mode="preview"` for dry runs and `mode="track"` to persist events.

Install

```bash
pnpm add @trackingkoala/sdk
```

SDK Example

```typescript
import { TrackingKoala } from "@trackingkoala/sdk";

const koala = new TrackingKoala(process.env.KOALA_API_KEY!);

const result = await koala.rateTemplate({
  templateId: "openai",
  mode: "track",
  inputs: {
    model: "gpt-5.1",
    input_tokens: 1200,
    cached_input_tokens: 300,
    output_tokens: 450
  },
  event: {
    source: "template",
    provider: "openai",
    action: "chat.completions",
    environment: "production",
    userId: "usr_8f9c",
    tags: ["assistant", "paid-plan"],
    metadata: {
      feature: "support-chat",
      requestId: "req_01HXYZ"
    }
  }
});

console.log(result.cost, result.charge, result.eventId);
```

cURL Example

```bash
curl --location 'https://api.trackingkoala.com/v1/templates/execute' \
--header 'X-Koala-Key: YOUR_API_KEY' \
--header 'Content-Type: application/json' \
--data-raw '{
  "templateId": "openai",
  "mode": "track",
  "inputs": {
    "model": "gpt-5.1",
    "input_tokens": 1200,
    "cached_input_tokens": 300,
    "output_tokens": 450
  },
  "event": {
    "source": "template",
    "provider": "openai",
    "action": "chat.completions",
    "environment": "production",
    "userId": "usr_8f9c"
  }
}'
```
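Conceptually, the template's pricing evaluation resembles the sketch below. The per-million-token rates are hypothetical placeholders, not TrackingKoala's or OpenAI's actual pricing; the real rules live in the managed template package.

```typescript
// Hypothetical per-million-token rates, keyed by model.
// These numbers are illustrative only.
const RATES: Record<string, { input: number; cachedInput: number; output: number }> = {
  "gpt-5.1": { input: 2.0, cachedInput: 0.5, output: 8.0 },
};

interface TokenInputs {
  model: string;
  input_tokens: number;
  cached_input_tokens?: number; // assumed to be a subset of input_tokens
  output_tokens: number;
}

// Compute a cost in USD from token counts, the way a pricing
// template might: each token class is billed at its own rate.
function estimateCost(inputs: TokenInputs): number {
  const rate = RATES[inputs.model];
  if (!rate) throw new Error(`No rate card for model ${inputs.model}`);
  const cached = inputs.cached_input_tokens ?? 0;
  const billedInput = inputs.input_tokens - cached; // cached tokens bill cheaper
  const total =
    billedInput * rate.input +
    cached * rate.cachedInput +
    inputs.output_tokens * rate.output;
  return total / 1_000_000;
}
```

With the example payload above (1200 input, 300 cached, 450 output tokens), this sketch yields 0.00555 USD under the placeholder rates.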

Scenario 2

Setup installation with custom events

This path sends your own computed values directly to event tracking. Use it when your product logic defines cost, charge, or custom pricing dimensions outside template rules.

  1. Install the SDK and keep your API key on the server side.
  2. Compute cost/charge in your service before sending the event.
  3. Call `track()` with cost and any useful metadata for reporting.
  4. Use traces/tags for debugging, segmenting, and dashboard filters.

Install

```bash
pnpm add @trackingkoala/sdk
```

SDK Example

```typescript
import { TrackingKoala } from "@trackingkoala/sdk";

const koala = new TrackingKoala(process.env.KOALA_API_KEY!);

await koala.track({
  cost: 142,
  charge: 249,
  currency: "USD",
  environment: "production",
  source: "api",
  provider: "anthropic",
  action: "messages.create",
  userId: "usr_8f9c",
  traces: ["trace_2039"],
  tags: ["premium", "chat"],
  metadata: {
    model: "claude-sonnet-4",
    inputTokens: 1380,
    outputTokens: 420,
    latencyMs: 684
  }
});
```

cURL Example

```bash
curl --location 'https://api.trackingkoala.com/v1/events/track' \
--header 'X-Koala-Key: YOUR_API_KEY' \
--header 'Content-Type: application/json' \
--data-raw '{
  "cost": 142,
  "charge": 249,
  "currency": "USD",
  "environment": "production",
  "source": "api",
  "provider": "anthropic",
  "action": "messages.create",
  "userId": "usr_8f9c",
  "traces": ["trace_2039"],
  "tags": ["premium", "chat"],
  "metadata": {
    "model": "claude-sonnet-4",
    "inputTokens": 1380,
    "outputTokens": 420,
    "latencyMs": 684
  }
}'
```

LLM Readable

Machine-readable context block

Use this directly in your internal assistant prompts if you want strict, deterministic scenario selection.

LLM Context

```text
SCENARIO_SELECTION:
- Use template path when provider pricing can be computed from known inputs (for OpenAI: model + token counts).
- Use custom event path when cost/charge is already computed in your system or comes from non-template logic.

OPENAI_TEMPLATE_PATH:
- Endpoint: POST /v1/templates/execute
- SDK method: koala.rateTemplate(...)
- Required fields:
  - templateId: "openai"
  - mode: "preview" or "track"
  - inputs.model
  - inputs.input_tokens
  - inputs.output_tokens
- Optional field:
  - inputs.cached_input_tokens

CUSTOM_EVENT_PATH:
- Endpoint: POST /v1/events/track
- SDK method: koala.track(...)
- Required field:
  - cost (> 0)
- Common optional fields:
  - charge, currency, environment, provider, action, userId, traces, tags, metadata
```
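The SCENARIO_SELECTION rules above can also be encoded as a small routing helper for deterministic path choice in application code. The function name and context shape are illustrative, not part of the SDK:

```typescript
type IntegrationPath = "template" | "custom-event";

interface UsageContext {
  provider: string;
  model?: string;
  inputTokens?: number;
  outputTokens?: number;
  precomputedCost?: number; // set when your own logic already priced the usage
}

// Mirror of SCENARIO_SELECTION: prefer the template path when provider
// pricing can be computed from known inputs (for OpenAI: model + token
// counts), otherwise fall back to custom events.
function selectPath(ctx: UsageContext): IntegrationPath {
  if (ctx.precomputedCost !== undefined) return "custom-event";
  const templateReady =
    ctx.provider === "openai" &&
    ctx.model !== undefined &&
    ctx.inputTokens !== undefined &&
    ctx.outputTokens !== undefined;
  return templateReady ? "template" : "custom-event";
}
```

For example, an OpenAI call with known token counts routes to the template path, while an Anthropic call with a precomputed cost routes to custom events.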