Provider-agnostic LLM analytics. Cost, latency, quality, usage — powered by the Countly platform. Self-hosted. GDPR compliant.
Wrap your OpenAI client with one line. Token usage, cost, latency, tool calls, streaming — all captured automatically. Zero config.
npm install @countly-ai/openai
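The one-line wrap might look like the sketch below. The `wrapOpenAI` export name and its signature are assumptions made for illustration, not the package's documented API; check the `@countly-ai/openai` docs for the real entry point.

```typescript
// Hypothetical wiring sketch: `wrapOpenAI` is an assumed export name
// from @countly-ai/openai, shown only to illustrate the wrap-once pattern.
import OpenAI from "openai";
import { wrapOpenAI } from "@countly-ai/openai";

// One line: wrap the client, then use it exactly as before.
const client = wrapOpenAI(new OpenAI());

const res = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});
// Token usage, cost, and latency for this call are captured automatically.
```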
Full observability for Anthropic's Claude. Cache tokens, tool use blocks, streaming — every metric captured with zero friction.
npm install @countly-ai/anthropic
Drop in a SpanProcessor. Every Vercel AI SDK call traced via OpenTelemetry. No code changes to your generateText or streamText calls.
npm install @countly-ai/ai-sdk
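The SpanProcessor wiring could be sketched as follows, assuming a `CountlySpanProcessor` export; that class name and its options are illustrative, not taken from the package docs.

```typescript
// Hypothetical: `CountlySpanProcessor` is an assumed export from @countly-ai/ai-sdk.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { CountlySpanProcessor } from "@countly-ai/ai-sdk";

// Register the processor once at startup; it receives every span the
// Vercel AI SDK emits through OpenTelemetry.
const sdk = new NodeSDK({
  spanProcessors: [new CountlySpanProcessor()],
});
sdk.start();
```

Existing `generateText` and `streamText` calls stay as they are; they only need the AI SDK's OpenTelemetry output switched on per call via `experimental_telemetry: { isEnabled: true }`.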
Wrap Google's GenAI client. Every generateContent and stream call tracked with token counts, costs, and latency — no configuration.
npm install @countly-ai/google-genai
Drop in a callback handler. LLM calls, tool executions, chain completions — all lifecycle events captured automatically.
npm install @countly-ai/langchain
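A minimal sketch of the handler wiring, assuming a `CountlyCallbackHandler` export (the class name is hypothetical; only the LangChain `callbacks` mechanism itself is standard):

```typescript
// Hypothetical: `CountlyCallbackHandler` is an assumed export from @countly-ai/langchain.
import { ChatOpenAI } from "@langchain/openai";
import { CountlyCallbackHandler } from "@countly-ai/langchain";

const handler = new CountlyCallbackHandler();

// Attach once on the model, or per call via { callbacks: [handler] }.
const model = new ChatOpenAI({ model: "gpt-4o-mini", callbacks: [handler] });
const reply = await model.invoke("Summarize our Q3 metrics.");
```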
Export traces from Mastra's telemetry pipeline. Spans grouped by trace, aggregated into LLM metrics, shipped to Countly.
npm install @countly-ai/mastra
Observe both v1 and v2 chat APIs. Token counts, tool calls, finish reasons — all normalized into Countly's unified data model.
npm install @countly-ai/cohere
Attach an event handler to LlamaIndex. LLM and tool lifecycles captured, with support for both camelCase and snake_case event formats.
npm install @countly-ai/llamaindex
Your LLM data stays on your infrastructure. SOC 2, HIPAA, GDPR compliant. No vendor lock-in.
npm install @countly-ai/openai @countly-ai/anthropic @countly-ai/google-genai ...