```typescript
const result = await tracia.prompts.run(slug, variables?, options?);
```
Execute a prompt with variable substitution and get the generated response. Tracia handles template rendering, LLM API calls, and automatically logs a span.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `slug` | `string` | Yes | The prompt slug |
| `variables` | `Record<string, string>` | No | Template variables |
| `options.model` | `string` | No | Override the default model |
| `options.version` | `number` | No | Run a specific prompt version (uses the latest if omitted) |
| `options.tags` | `string[]` | No | Tags for filtering spans |
| `options.userId` | `string` | No | End-user identifier |
| `options.sessionId` | `string` | No | Session identifier |
| `options.traceId` | `string` | No | Group related spans into one trace |
| `options.parentSpanId` | `string` | No | Link to a parent span |
| `options.messages` | `LocalPromptMessage[]` | No | Full conversation messages for multi-turn (skips template rendering) |
Response
```typescript
interface RunResult {
  text: string;               // The generated text
  spanId: string;             // Unique span identifier
  traceId: string;            // Trace ID for session grouping
  promptVersion: number;      // Version of the prompt used
  latencyMs: number;          // Request latency in milliseconds
  usage: {
    inputTokens: number;
    outputTokens: number;
    totalTokens: number;
  };
  cost: number;               // Cost in USD
  finishReason?: FinishReason;                // Why the model stopped ("stop", "max_tokens", "tool_calls")
  toolCalls?: ToolCall[];                     // Tool calls made by the model
  structuredOutput?: Record<string, unknown>; // Parsed JSON when the prompt has an output schema
  messages?: LocalPromptMessage[];            // Full conversation for multi-turn continuation
}
```
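As a sketch of consuming this shape, the helper below prefers `structuredOutput` when the prompt has an output schema and falls back to parsing `text` as JSON. The `RunResultLike` type is a local stand-in for the fields of `RunResult` used here, not the SDK's own type.

```typescript
// Local stand-in for the RunResult fields used here (not the SDK type).
interface RunResultLike {
  text: string;
  structuredOutput?: Record<string, unknown>;
}

// Prefer the pre-parsed structuredOutput; otherwise try to parse text
// as JSON. Returns null when neither yields an object.
function extractJson(result: RunResultLike): Record<string, unknown> | null {
  if (result.structuredOutput) return result.structuredOutput;
  try {
    const parsed = JSON.parse(result.text);
    return typeof parsed === "object" && parsed !== null
      ? (parsed as Record<string, unknown>)
      : null;
  } catch {
    return null;
  }
}
```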
Examples
Basic Usage
```typescript
const result = await tracia.prompts.run('welcome-email', {
  name: 'Alice',
  product: 'Tracia'
});

console.log(result.text);
// "Dear Alice, Welcome to Tracia!..."
```
With Options
```typescript
const result = await tracia.prompts.run(
  'welcome-email',
  { name: 'Alice', product: 'Tracia' },
  {
    model: 'gpt-4',
    tags: ['onboarding', 'email'],
    userId: 'user_123',
    sessionId: 'session_abc'
  }
);
```
Pinning a Version
Run a specific prompt version instead of the latest. This is useful in production, where you want to lock a known-good version while continuing to iterate on the prompt.
```typescript
const result = await tracia.prompts.run(
  'welcome-email',
  { name: 'Alice', product: 'Tracia' },
  { version: 3 }
);

console.log(result.promptVersion); // 3
```
Inspecting Run Metadata
```typescript
const result = await tracia.prompts.run('welcome-email', { name: 'Alice' });

console.log(`Latency: ${result.latencyMs}ms`);
console.log(`Tokens: ${result.usage.totalTokens}`);
console.log(`Cost: $${result.cost.toFixed(4)}`);
console.log(`Span ID: ${result.spanId}`);
console.log(`Prompt Version: ${result.promptVersion}`);
```
Tool Calls
When a prompt has tools configured, the model may return tool calls instead of text. Use the `messages` option to continue the conversation with the tool results.
```typescript
// 1. First call - the model returns tool calls
const result = await tracia.prompts.run('assistant', { question: 'Weather in Tokyo?' });

if (result.finishReason === 'tool_calls' && result.toolCalls) {
  // 2. Execute the tools locally
  const weatherData = await getWeather(result.toolCalls[0].arguments.location);

  // 3. Continue with the tool results
  const result2 = await tracia.prompts.run('assistant', undefined, {
    messages: [
      ...result.messages!,
      { role: 'tool', toolCallId: result.toolCalls[0].id, toolName: 'get_weather', content: JSON.stringify(weatherData) }
    ],
    traceId: result.traceId,
    parentSpanId: result.spanId,
  });

  console.log(result2.text);
  // "The weather in Tokyo is 22°C and sunny."
}
```
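When a prompt exposes several tools, a name-to-handler map keeps the dispatch readable. The sketch below shows one way to do this; the `ToolCallLike` shape and the handler names are illustrative assumptions, not SDK types.

```typescript
// Illustrative local shape for a tool call (the SDK's ToolCall may differ).
interface ToolCallLike {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}

type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

type ToolMessage = { role: "tool"; toolCallId: string; toolName: string; content: string };

// Execute each tool call via the registry and return tool-role messages
// ready to append to the conversation.
async function runTools(
  calls: ToolCallLike[],
  handlers: Record<string, ToolHandler>,
): Promise<ToolMessage[]> {
  const messages: ToolMessage[] = [];
  for (const call of calls) {
    const handler = handlers[call.name];
    if (!handler) throw new Error(`No handler for tool: ${call.name}`);
    const output = await handler(call.arguments);
    messages.push({
      role: "tool",
      toolCallId: call.id,
      toolName: call.name,
      content: JSON.stringify(output),
    });
  }
  return messages;
}
```

The returned messages can then be spread into the `messages` option alongside `result.messages`, as in the example above.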
Trace Grouping
Use traceId and parentSpanId to group related runs in the dashboard.
```typescript
const result1 = await tracia.prompts.run('step-1', { input: 'data' });

const result2 = await tracia.prompts.run('step-2', { input: result1.text }, {
  traceId: result1.traceId,
  parentSpanId: result1.spanId,
});
```
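The pattern generalizes to longer pipelines: each step passes the previous step's `traceId` and `spanId` forward. Below is a hedged sketch with the run call injected as a function so the threading logic stands on its own; `StepResult` and `RunFn` mirror only the fields needed here and are not SDK types.

```typescript
// Minimal local shapes for the fields this helper needs.
interface StepResult { text: string; spanId: string; traceId: string }

type RunFn = (
  slug: string,
  variables: Record<string, string>,
  options?: { traceId?: string; parentSpanId?: string },
) => Promise<StepResult>;

// Run each slug in order, feeding the previous output in as `input`
// and linking every span into a single trace.
async function runChain(run: RunFn, slugs: string[], input: string): Promise<StepResult> {
  let prev: StepResult | undefined;
  for (const slug of slugs) {
    prev = await run(
      slug,
      { input: prev ? prev.text : input },
      prev ? { traceId: prev.traceId, parentSpanId: prev.spanId } : undefined,
    );
  }
  if (!prev) throw new Error("runChain requires at least one slug");
  return prev;
}
```

With the SDK, `run` would be `(slug, vars, opts) => tracia.prompts.run(slug, vars, opts)`.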
Error Handling
```typescript
import { TraciaError, TraciaErrorCode } from 'tracia';

try {
  const result = await tracia.prompts.run('welcome-email', { name: 'Alice' });
} catch (error) {
  if (error instanceof TraciaError) {
    switch (error.code) {
      case TraciaErrorCode.NOT_FOUND:
        console.error('Prompt does not exist');
        break;
      case TraciaErrorCode.MISSING_VARIABLES:
        console.error('Missing required variables');
        break;
      case TraciaErrorCode.MISSING_PROVIDER_KEY:
        console.error('No LLM provider configured');
        break;
      case TraciaErrorCode.PROVIDER_ERROR:
        console.error('LLM provider error:', error.message);
        break;
    }
  }
}
```
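Transient provider failures are often worth retrying with backoff. The sketch below keeps the retry logic generic by taking a `shouldRetry` predicate; in real code that predicate would check `error instanceof TraciaError && error.code === TraciaErrorCode.PROVIDER_ERROR`. The helper itself is an illustration, not part of the SDK.

```typescript
// Retry fn up to maxAttempts times with exponential backoff, but only
// for errors the predicate marks as retryable.
async function withRetry<T>(
  fn: () => Promise<T>,
  shouldRetry: (error: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxAttempts || !shouldRetry(error)) throw error;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Used with the SDK, a call might look like `withRetry(() => tracia.prompts.run('welcome-email', { name: 'Alice' }), isProviderError)`, where `isProviderError` is a predicate you define as described above.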