Setup Installation Docs
A copy-paste-friendly guide to two integration scenarios. Use this page when a human or an LLM agent needs to implement TrackingKoala setup quickly and consistently.
Use This When
You have OpenAI token usage and want TrackingKoala to calculate cost and charge from the managed template package.
Use This When
You already compute usage cost in your own logic and want to push fully custom tracked events.
Scenario 1
This path uses template execution. TrackingKoala evaluates pricing rules based on the model and token fields, then stores the tracked event when `mode: "track"` is used (use `mode: "preview"` to compute cost and charge without storing an event).
Install

```bash
pnpm add @trackingkoala/sdk
```

SDK Example

```typescript
import { TrackingKoala } from "@trackingkoala/sdk";

const koala = new TrackingKoala(process.env.KOALA_API_KEY!);

const result = await koala.rateTemplate({
  templateId: "openai",
  mode: "track",
  inputs: {
    model: "gpt-5.1",
    input_tokens: 1200,
    cached_input_tokens: 300,
    output_tokens: 450
  },
  event: {
    source: "template",
    provider: "openai",
    action: "chat.completions",
    environment: "production",
    userId: "usr_8f9c",
    tags: ["assistant", "paid-plan"],
    metadata: {
      feature: "support-chat",
      requestId: "req_01HXYZ"
    }
  }
});

console.log(result.cost, result.charge, result.eventId);
```

cURL Example
```bash
curl --location 'https://api.trackingkoala.com/v1/templates/execute' \
  --header 'X-Koala-Key: YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "templateId": "openai",
    "mode": "track",
    "inputs": {
      "model": "gpt-5.1",
      "input_tokens": 1200,
      "cached_input_tokens": 300,
      "output_tokens": 450
    },
    "event": {
      "source": "template",
      "provider": "openai",
      "action": "chat.completions",
      "environment": "production",
      "userId": "usr_8f9c"
    }
  }'
```

Scenario 2
This path sends your own computed values directly to event tracking. Use it when your product logic defines cost, charge, or custom pricing dimensions outside template rules.
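This scenario assumes your product logic has already computed `cost` and `charge` before the track call. As a minimal sketch of what that logic might look like — the per-token rates, the margin, and the integer-cents unit are all illustrative assumptions, not TrackingKoala or provider values:

```typescript
// Hypothetical pricing logic: the rates and margin below are invented
// for illustration, not real provider prices.
const INPUT_CENTS_PER_1K = 50;   // assumed cost per 1K input tokens, in cents
const OUTPUT_CENTS_PER_1K = 150; // assumed cost per 1K output tokens, in cents
const MARGIN = 1.75;             // assumed markup applied on top of cost

function computeCostAndCharge(inputTokens: number, outputTokens: number) {
  // Round to integer minor units (cents assumed), matching the integer
  // cost/charge values shown in the track payload below.
  const cost = Math.round(
    (inputTokens / 1000) * INPUT_CENTS_PER_1K +
    (outputTokens / 1000) * OUTPUT_CENTS_PER_1K
  );
  const charge = Math.round(cost * MARGIN);
  return { cost, charge };
}
```

Whatever shape your own calculation takes, the resulting numbers go straight into the `cost` and `charge` fields of the track call shown next.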
Install

```bash
pnpm add @trackingkoala/sdk
```

SDK Example

```typescript
import { TrackingKoala } from "@trackingkoala/sdk";

const koala = new TrackingKoala(process.env.KOALA_API_KEY!);

await koala.track({
  cost: 142,
  charge: 249,
  currency: "USD",
  environment: "production",
  source: "api",
  provider: "anthropic",
  action: "messages.create",
  userId: "usr_8f9c",
  traces: ["trace_2039"],
  tags: ["premium", "chat"],
  metadata: {
    model: "claude-sonnet-4",
    inputTokens: 1380,
    outputTokens: 420,
    latencyMs: 684
  }
});
```

cURL Example
```bash
curl --location 'https://api.trackingkoala.com/v1/events/track' \
  --header 'X-Koala-Key: YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "cost": 142,
    "charge": 249,
    "currency": "USD",
    "environment": "production",
    "source": "api",
    "provider": "anthropic",
    "action": "messages.create",
    "userId": "usr_8f9c",
    "traces": ["trace_2039"],
    "tags": ["premium", "chat"],
    "metadata": {
      "model": "claude-sonnet-4",
      "inputTokens": 1380,
      "outputTokens": 420,
      "latencyMs": 684
    }
  }'
```

LLM Readable
Use this directly in your internal assistant prompts if you want strict, deterministic scenario selection.
LLM Context
```text
SCENARIO_SELECTION:
- Use template path when provider pricing can be computed from known inputs (for OpenAI: model + token counts).
- Use custom event path when cost/charge is already computed in your system or comes from non-template logic.

OPENAI_TEMPLATE_PATH:
- Endpoint: POST /v1/templates/execute
- SDK method: koala.rateTemplate(...)
- Required fields:
  - templateId: "openai"
  - mode: "preview" or "track"
  - inputs.model
  - inputs.input_tokens
  - inputs.output_tokens
- Optional field:
  - inputs.cached_input_tokens

CUSTOM_EVENT_PATH:
- Endpoint: POST /v1/events/track
- SDK method: koala.track(...)
- Required field:
  - cost (> 0)
- Common optional fields:
  - charge, currency, environment, provider, action, userId, traces, tags, metadata
```
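If you want the SCENARIO_SELECTION rules as code rather than prompt text, here is a hypothetical helper that applies them deterministically. The `Usage` field names and the OpenAI-only template check are assumptions drawn from the context block above, not part of the TrackingKoala SDK:

```typescript
// Hypothetical usage record; field names are illustrative, not SDK types.
type Usage = {
  provider?: string;
  model?: string;
  inputTokens?: number;
  outputTokens?: number;
  cost?: number; // present only if your own logic already computed it
};

// Applies SCENARIO_SELECTION: prefer the template path when provider
// pricing is computable from known inputs (OpenAI: model + token counts);
// otherwise fall back to the custom event path when cost is precomputed.
function selectPath(u: Usage): "template" | "custom" {
  const templateComputable =
    u.provider === "openai" &&
    typeof u.model === "string" &&
    typeof u.inputTokens === "number" &&
    typeof u.outputTokens === "number";
  if (templateComputable) return "template";
  if (typeof u.cost === "number" && u.cost > 0) return "custom";
  throw new Error("usage record matches neither path");
}
```

Routing through a single function like this keeps scenario selection strict and auditable instead of leaving it to per-call judgment.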