These are the docs for the beta version of Evalite. Install with:

```bash
pnpm add evalite@beta
```

reportTrace()
Manually report a trace for custom LLM calls or processing steps.
Signature
```ts
reportTrace(trace: {
  input: unknown;
  output: unknown;
  usage?: {
    inputTokens: number;
    outputTokens: number;
    totalTokens: number;
  };
  start?: number;
  end?: number;
}): void
```

Parameters
- `input` - The input to the operation (e.g., prompt, messages, raw data)
- `output` - The output from the operation (e.g., LLM response, processed result)
- `usage` (optional) - Token usage statistics
  - `inputTokens` - Number of input tokens consumed
  - `outputTokens` - Number of output tokens generated
  - `totalTokens` - Total tokens (input + output)
- `start` (optional) - Start timestamp in milliseconds (from `performance.now()`). Defaults to the current time.
- `end` (optional) - End timestamp in milliseconds (from `performance.now()`). Defaults to the current time.
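Putting the parameters together, a fully specified call looks like this (the values are illustrative):

```ts
import { reportTrace } from "evalite/traces";

const start = performance.now();
// ...run the operation you want to trace...
const end = performance.now();

reportTrace({
  input: { prompt: "Summarize this document" },
  output: { text: "A short summary." },
  usage: {
    inputTokens: 120,
    outputTokens: 24,
    totalTokens: 144, // inputTokens + outputTokens
  },
  start,
  end,
});
```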
Import
```ts
import { reportTrace } from "evalite/traces";
```

Basic Usage
```ts
import { evalite } from "evalite";
import { reportTrace } from "evalite/traces";

evalite("Multi-Step Analysis", {
  data: [{ input: "Analyze this text" }],
  task: async (input) => {
    // First LLM call
    reportTrace({
      input: { prompt: "Summarize: " + input },
      output: { text: "Summary of the text" },
      usage: {
        inputTokens: 50,
        outputTokens: 20,
        totalTokens: 70,
      },
    });

    // Second LLM call
    reportTrace({
      input: { prompt: "Translate to Spanish: Summary of the text" },
      output: { text: "Resumen del texto" },
      usage: {
        inputTokens: 30,
        outputTokens: 15,
        totalTokens: 45,
      },
    });

    return "Final result";
  },
  scorers: [],
});
```

With Timestamps
Capture timing information for performance analysis:
```ts
const start = performance.now();
const result = await callLLM(input);
const end = performance.now();

reportTrace({
  input,
  output: result,
  start,
  end,
});
```
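If you trace many calls this way, the timing boilerplate can be factored into a small helper. This is a minimal sketch, not part of Evalite; `withTrace` is a hypothetical name, and it must still be called from inside a `task` function:

```ts
import { reportTrace } from "evalite/traces";

// Hypothetical helper: times an async operation and reports it as a trace.
async function withTrace<TInput, TOutput>(
  input: TInput,
  fn: (input: TInput) => Promise<TOutput>
): Promise<TOutput> {
  const start = performance.now();
  const output = await fn(input);
  reportTrace({ input, output, start, end: performance.now() });
  return output;
}

// Usage (inside a task): const result = await withTrace(input, callLLM);
```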
Viewing Traces

Traces appear in the Evalite UI under each test case:
1. Navigate to an eval result
2. Click on a specific test case
3. View the “Traces” section to see all reported traces
4. Inspect input, output, timing, and token usage for each trace
When to Use
Use `reportTrace()` for:
- Custom LLM integrations - Track calls to LLM providers not supported by AI SDK
- Non-LLM processing steps - Track parsing, validation, data transformation (see the sketch after this list)
- Manual instrumentation - Full control over what gets traced
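For example, a parsing-and-validation step with no LLM involved can be traced like any other operation. A sketch with hypothetical data:

```ts
import { evalite } from "evalite";
import { reportTrace } from "evalite/traces";

evalite("Order Parsing", {
  data: [{ input: '{"item":"book","quantity":2}' }],
  task: async (input) => {
    const start = performance.now();
    // Non-LLM step: parse and validate raw JSON.
    const parsed = JSON.parse(input) as { item: string; quantity: number };
    const valid = typeof parsed.item === "string" && parsed.quantity > 0;
    reportTrace({
      input: { raw: input },
      output: { parsed, valid },
      start,
      end: performance.now(),
    });
    return parsed.item;
  },
  scorers: [],
});
```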
For Vercel AI SDK models, use `wrapAISDKModel()` instead for automatic tracing.
Complete Example
Combining automatic AI SDK tracing with manual traces:
```ts
import { evalite } from "evalite";
import { reportTrace } from "evalite/traces";
import { wrapAISDKModel } from "evalite/ai-sdk";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const model = wrapAISDKModel(openai("gpt-4"));

evalite("Research Agent", {
  data: [
    {
      input: "What is the capital of France?",
      expected: "Paris",
    },
  ],
  task: async (input) => {
    // Step 1: Extract intent (manually traced)
    const intent = await extractIntent(input);
    reportTrace({
      input: { query: input },
      output: { intent },
    });

    // Step 2: Generate response (automatically traced via AI SDK)
    const result = await generateText({
      model,
      prompt: `Answer this question: ${input}`,
    });

    // Step 3: Format result (manually traced)
    const formatted = formatResponse(result.text);
    reportTrace({
      input: { raw: result.text },
      output: { formatted },
    });

    return formatted;
  },
  scorers: [
    {
      name: "Exact Match",
      scorer: ({ output, expected }) => {
        return output === expected ? 1 : 0;
      },
    },
  ],
});

async function extractIntent(query: string) {
  // Custom intent extraction logic
  return "question";
}

function formatResponse(text: string) {
  // Custom formatting logic
  return text.trim();
}
```

Best Practices
- Include usage data - Track token consumption to monitor costs
- Use timestamps - Capture timing information for performance analysis
- Keep trace data relevant - Don’t trace every small operation
- Combine with AI SDK - Use `wrapAISDKModel()` for AI SDK calls, `reportTrace()` for custom steps
- Structured data - Use objects for input/output to make traces more readable in the UI (see the sketch below)
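To illustrate the last point, compare a structured trace with a stringly-typed one (the values are illustrative):

```ts
import { reportTrace } from "evalite/traces";

const systemPrompt = "You are a helpful assistant.";
const userMessage = "Summarize this text.";
const responseText = "Here is the summary.";

// Structured: each field shows up as a separate, readable value in the UI.
reportTrace({
  input: { system: systemPrompt, user: userMessage },
  output: { text: responseText },
});

// Unstructured: everything collapses into one opaque string.
reportTrace({
  input: `${systemPrompt}\n${userMessage}`,
  output: responseText,
});
```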
Behavior in Production
`reportTrace()` is a no-op when called outside an Evalite context, so you can safely leave it in your code with no meaningful performance overhead.
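This makes it practical to instrument a shared helper once and use it from both evals and production code. A sketch under that assumption; `callSummarizer` is a hypothetical LLM call:

```ts
import { reportTrace } from "evalite/traces";

// Hypothetical LLM call; stands in for your provider client.
declare function callSummarizer(text: string): Promise<string>;

// Traced when run inside an Evalite eval; a silent no-op in production.
export async function summarize(text: string) {
  const start = performance.now();
  const summary = await callSummarizer(text);
  reportTrace({
    input: { text },
    output: { summary },
    start,
    end: performance.now(),
  });
  return summary;
}
```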
Troubleshooting
Traces not appearing
Make sure you’re calling `reportTrace()` inside a task function:
```ts
// ✅ Correct
evalite("My Eval", {
  data: [{ input: "test" }],
  task: async (input) => {
    reportTrace({ input, output: "result" }); // Works
    return "result";
  },
});

// ❌ Wrong
reportTrace({ input: "test", output: "result" }); // Outside eval
```

Error: “reportTrace must be called inside an evalite eval”
This error occurs when `reportTrace()` is called outside the task function context. Make sure all `reportTrace()` calls are within the `task` function.
See Also
- `wrapAISDKModel()` Reference - Automatic tracing for Vercel AI SDK models
- Vercel AI SDK Guide - Complete integration guide
- `evalite()` - Main evaluation function