AI SDK Integration
evlog/ai gives you full AI observability by wrapping your model with middleware: token usage, tool calls, streaming performance, cache hits, and reasoning tokens are all captured into the wide event automatically.
Add AI observability to my app with evlog.
- Install the AI SDK: pnpm add ai
- Import createAILogger from 'evlog/ai'
- Create an AI logger with createAILogger(log) where log is your request logger
- Wrap your model with ai.wrap('anthropic/claude-sonnet-4.6') and pass it to generateText, streamText, etc.
- Token usage, tool calls, streaming metrics, and errors are captured automatically into the wide event
- For embedding calls, use ai.captureEmbed({ usage }) after embed() or embedMany()
- Works with all frameworks: Nuxt, Express, Hono, Fastify, NestJS, Elysia, standalone
Docs: https://www.evlog.dev/core-concepts/ai-sdk
Adapters: https://www.evlog.dev/adapters
Install
Add the AI SDK as a dependency:
```bash
# npm
npm install ai

# bun
bun add ai

# pnpm
pnpm add ai
```
Quick Start
Two lines to add, one param to change:
Before:

```ts
import { streamText } from 'ai'

export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event)

  const result = streamText({
    model: 'anthropic/claude-sonnet-4.6',
    messages,
  })

  return result.toTextStreamResponse()
})
```
After:

```ts
import { streamText } from 'ai'
import { createAILogger } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)
  const { messages } = await readBody(event)

  const result = streamText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    messages,
  })

  return result.toTextStreamResponse()
})
```
Your wide event now includes:
```json
{
  "method": "POST",
  "path": "/api/chat",
  "status": 200,
  "duration": "4.5s",
  "ai": {
    "calls": 1,
    "model": "claude-sonnet-4.6",
    "provider": "anthropic",
    "inputTokens": 3312,
    "outputTokens": 814,
    "totalTokens": 4126,
    "reasoningTokens": 225,
    "finishReason": "stop",
    "msToFirstChunk": 234,
    "msToFinish": 4500,
    "tokensPerSecond": 180
  }
}
```
How It Works
`createAILogger(log)` returns an `AILogger` with two methods:
| Method | Description |
|---|---|
| `wrap(model)` | Wraps a language model with middleware. Accepts a model string (e.g. `'anthropic/claude-sonnet-4.6'`) or a `LanguageModelV3` object. Works with `generateText`, `streamText`, `generateObject`, `streamObject`, and `ToolLoopAgent`. |
| `captureEmbed(result)` | Manually captures token usage from `embed()` or `embedMany()` results (embedding models use a different type). |
The middleware intercepts calls at the provider level. It does not touch your callbacks, prompts, or responses. Captured data flows through the normal evlog pipeline (sampling, enrichers, drains) and ends up in Axiom, Better Stack, or wherever you drain to.
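Conceptually, `wrap()` builds on the AI SDK's model middleware mechanism. Here is a minimal sketch of the idea, not evlog's actual implementation; the hook names follow the AI SDK's `wrapLanguageModel` API:

```ts
import { wrapLanguageModel } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'

// Sketch only: intercept each generate call, record usage onto the
// request's wide event, and return the result untouched.
const instrumented = wrapLanguageModel({
  model: anthropic('claude-sonnet-4.6'),
  middleware: {
    wrapGenerate: async ({ doGenerate }) => {
      const result = await doGenerate()
      // e.g. accumulate result.usage and result.finishReason
      // onto the request logger here
      return result
    },
  },
})
```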
Usage Patterns
streamText
The most common pattern: streaming chat with full observability.
```ts
import { streamText } from 'ai'
import { createAILogger } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)
  const { messages } = await readBody(event)

  log.set({ action: 'chat', messagesCount: messages.length })

  const result = streamText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    messages,
    onFinish: ({ text }) => {
      // Your code, no conflict with evlog
      saveConversation(text)
    },
  })

  return result.toTextStreamResponse()
})
```
generateText
Non-streaming generation; the middleware captures the result automatically:
```ts
import { generateText } from 'ai'
import { createAILogger } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)

  const result = await generateText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    prompt: 'Summarize this document',
  })

  return { text: result.text }
})
```
Multi-step agents
The middleware fires for each step automatically. Steps, tool calls, and tokens are all accumulated across the agent loop:
```ts
import { ToolLoopAgent, createAgentUIStreamResponse, stepCountIs } from 'ai'
import { createAILogger } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)
  const { messages } = await readBody(event)

  const agent = new ToolLoopAgent({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    tools: { searchWeb, queryDatabase }, // your tool definitions
    stopWhen: stepCountIs(5),
  })

  return createAgentUIStreamResponse({
    agent,
    uiMessages: messages,
  })
})
```
Wide event after a 3-step agent run:
```json
{
  "ai": {
    "calls": 3,
    "steps": 3,
    "model": "claude-sonnet-4.6",
    "provider": "anthropic",
    "inputTokens": 4500,
    "outputTokens": 1200,
    "totalTokens": 5700,
    "finishReason": "stop",
    "toolCalls": ["searchWeb", "queryDatabase", "searchWeb"],
    "msToFirstChunk": 312,
    "msToFinish": 8200,
    "tokensPerSecond": 146
  }
}
```
RAG (embed + generate)
Use captureEmbed for embedding calls, since embedding models use a different model type that cannot be wrapped with middleware:
```ts
import { embed, generateText } from 'ai'
import { openai } from '@ai-sdk/openai'
import { createAILogger } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)
  const { query } = await readBody(event)

  const { embedding, usage } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: query,
  })
  ai.captureEmbed({ usage })

  const docs = await findSimilar(embedding) // your vector search
  const result = await generateText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    prompt: buildPrompt(docs), // your prompt assembly
  })

  return { text: result.text }
})
```
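captureEmbed accepts embedMany() results the same way. A minimal sketch, where `chunks` is a hypothetical array of strings to embed:

```ts
import { embedMany } from 'ai'
import { openai } from '@ai-sdk/openai'

// `chunks` is hypothetical: the strings you want to embed
const { embeddings, usage } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: chunks,
})
ai.captureEmbed({ usage })
```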
Multiple models
Wrap each model separately; they share the same accumulator. When multiple models are used, the wide event includes both model (the last model used) and models (all unique models):
```ts
const ai = createAILogger(log)

const fast = ai.wrap('anthropic/claude-haiku-4.5')
const smart = ai.wrap('anthropic/claude-sonnet-4.6')

const classification = await generateText({ model: fast, prompt: classifyPrompt })
const response = await generateText({ model: smart, prompt: detailedPrompt })
```
The resulting wide event:

```json
{
  "ai": {
    "calls": 2,
    "model": "claude-sonnet-4.6",
    "models": ["claude-haiku-4.5", "claude-sonnet-4.6"],
    "provider": "anthropic",
    "inputTokens": 450,
    "outputTokens": 300,
    "totalTokens": 750
  }
}
```
Model object support
wrap() also accepts model objects from provider SDKs if you prefer explicit imports:
```ts
import { anthropic } from '@ai-sdk/anthropic'

const model = ai.wrap(anthropic('claude-sonnet-4.6'))
```
Captured Data
| Wide event field | Source | Description |
|---|---|---|
| `ai.calls` | Call count | Number of AI calls in this request |
| `ai.model` | `response.modelId` | Model that served the response |
| `ai.models` | All model IDs | Array of all models used (only when > 1) |
| `ai.provider` | `model.provider` | Provider (`anthropic`, `openai`, `google`, etc.) |
| `ai.inputTokens` | `usage.inputTokens.total` | Total input tokens across all calls |
| `ai.outputTokens` | `usage.outputTokens.total` | Total output tokens across all calls |
| `ai.totalTokens` | Computed | `inputTokens + outputTokens` |
| `ai.cacheReadTokens` | `usage.inputTokens.cacheRead` | Tokens served from prompt cache |
| `ai.cacheWriteTokens` | `usage.inputTokens.cacheWrite` | Tokens written to prompt cache |
| `ai.reasoningTokens` | `usage.outputTokens.reasoning` | Reasoning tokens (extended thinking) |
| `ai.finishReason` | `finishReason.unified` | Why generation ended (`stop`, `tool-calls`, etc.) |
| `ai.toolCalls` | Content / stream chunks | List of tool names called |
| `ai.steps` | Step count | Number of LLM calls (only when > 1) |
| `ai.msToFirstChunk` | Stream timing | Time to first text chunk (streaming only) |
| `ai.msToFinish` | Stream timing | Total stream duration (streaming only) |
| `ai.tokensPerSecond` | Computed | Output tokens per second (streaming only) |
| `ai.error` | Error capture | Error message if a model call fails |
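The computed fields follow simple definitions. For example, `ai.tokensPerSecond` matches output tokens divided by the stream duration in seconds; the rounding mode below is an assumption, but it is consistent with the example events above (814 / 4.5 ≈ 180 and 1200 / 8.2 ≈ 146):

```ts
// Assumed derivation; consistent with the example events above.
const outputTokens = 814
const msToFinish = 4500
const tokensPerSecond = Math.floor(outputTokens / (msToFinish / 1000)) // 180
```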
Error Handling
If a model call fails, the middleware captures the error into the wide event before re-throwing:
```json
{
  "ai": {
    "calls": 1,
    "model": "claude-sonnet-4.6",
    "provider": "anthropic",
    "finishReason": "error",
    "error": "API rate limit exceeded"
  }
}
```
Stream errors (e.g. content filter) are also captured from the stream's error chunks.
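Because the error is re-thrown, your normal error handling still runs. A minimal sketch (the error mapping here is illustrative, using h3/Nitro's createError):

```ts
import { generateText } from 'ai'
import { createAILogger } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)
  const { prompt } = await readBody(event)

  try {
    const result = await generateText({
      model: ai.wrap('anthropic/claude-sonnet-4.6'),
      prompt,
    })
    return { text: result.text }
  } catch (err) {
    // The middleware has already set ai.error and ai.finishReason = "error"
    // on the wide event; map the failure to a response as you normally would.
    throw createError({ statusCode: 502, statusMessage: 'AI provider error' })
  }
})
```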
Works With All Frameworks
evlog/ai works with any framework that evlog supports:
Nuxt

```ts
const log = useLogger(event)
const ai = createAILogger(log)
```

Express

```ts
app.post('/api/chat', (req, res) => {
  const ai = createAILogger(req.log)
  // ...
})
```

Hono

```ts
app.post('/api/chat', (c) => {
  const ai = createAILogger(c.get('log'))
  // ...
})
```

Fastify

```ts
app.post('/api/chat', async (request) => {
  const ai = createAILogger(request.log)
  // ...
})
```

NestJS

```ts
import { useLogger } from 'evlog/nestjs'

const log = useLogger()
const ai = createAILogger(log)
```

Standalone

```ts
import { createLogger } from 'evlog'

const log = createLogger()
const ai = createAILogger(log)
// ...
log.emit()
```