const result = await tracia.prompts.run(slug, variables?, options?);
Execute a prompt with variable substitution and get the generated response. Tracia handles template rendering and the LLM API call, and automatically logs a span for each run.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| slug | string | Yes | The prompt slug |
| variables | Record<string, string> | No | Template variables |
| options.model | string | No | Override the default model |
| options.tags | string[] | No | Tags for filtering spans |
| options.userId | string | No | End user identifier |
| options.sessionId | string | No | Session identifier |
| options.traceId | string | No | Group related spans together (session) |
| options.parentSpanId | string | No | Link to parent span |
| options.messages | LocalPromptMessage[] | No | Full conversation messages for multi-turn (skips template rendering) |

Response

interface RunResult {
  text: string;                        // The generated text
  spanId: string;                      // Unique span identifier
  traceId: string;                     // Trace ID for session grouping
  promptVersion: number;               // Version of the prompt used
  latencyMs: number;                   // Request latency in milliseconds
  usage: {
    inputTokens: number;
    outputTokens: number;
    totalTokens: number;
  };
  cost: number;                        // Cost in USD
  finishReason?: FinishReason;         // Why the model stopped ("stop", "max_tokens", "tool_calls")
  toolCalls?: ToolCall[];              // Tool calls made by the model
  structuredOutput?: Record<string, unknown>;  // Parsed JSON when prompt has an output schema
  messages?: LocalPromptMessage[];     // Full conversation for multi-turn continuation
}
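The finishReason and structuredOutput fields can be branched on before using the result — a minimal sketch, with local stand-ins for the SDK types (in real code these come from 'tracia'):

```typescript
// Illustrative local copies of the response types documented above.
type FinishReason = 'stop' | 'max_tokens' | 'tool_calls';

interface RunResult {
  text: string;
  finishReason?: FinishReason;
  structuredOutput?: Record<string, unknown>;
}

// Decide how to consume a run result based on how the model finished.
function handleResult(result: RunResult): string {
  if (result.finishReason === 'tool_calls') {
    // The model wants tools executed; see "Multi-Turn Tool Calling" below.
    return 'model requested tool calls';
  }
  if (result.structuredOutput) {
    // The prompt has an output schema: prefer the parsed JSON over raw text.
    return JSON.stringify(result.structuredOutput);
  }
  return result.text;
}

handleResult({ text: 'Hello', finishReason: 'stop' }); // → "Hello"
```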

Examples

Basic Usage

const result = await tracia.prompts.run('welcome-email', {
  name: 'Alice',
  product: 'Tracia'
});

console.log(result.text);
// "Dear Alice, Welcome to Tracia!..."

With Options

const result = await tracia.prompts.run(
  'welcome-email',
  { name: 'Alice', product: 'Tracia' },
  {
    model: 'gpt-4',
    tags: ['onboarding', 'email'],
    userId: 'user_123',
    sessionId: 'session_abc'
  }
);

Accessing Metadata

const result = await tracia.prompts.run('welcome-email', { name: 'Alice' });

console.log(`Latency: ${result.latencyMs}ms`);
console.log(`Tokens: ${result.usage.totalTokens}`);
console.log(`Cost: $${result.cost.toFixed(4)}`);
console.log(`Span ID: ${result.spanId}`);
console.log(`Prompt Version: ${result.promptVersion}`);

Multi-Turn Tool Calling

When a prompt has tools configured, the model may return tool_calls instead of text. Use messages to continue the conversation with tool results.

// 1. First call — model returns tool calls
const result = await tracia.prompts.run('assistant', { question: 'Weather in Tokyo?' });

if (result.finishReason === 'tool_calls' && result.toolCalls) {
  // 2. Execute tools locally
  const weatherData = await getWeather(result.toolCalls[0].arguments.location);

  // 3. Continue with tool results
  const result2 = await tracia.prompts.run('assistant', undefined, {
    messages: [
      ...result.messages!,
      { role: 'tool', toolCallId: result.toolCalls[0].id, toolName: 'get_weather', content: JSON.stringify(weatherData) }
    ],
    traceId: result.traceId,
    parentSpanId: result.spanId,
  });

  console.log(result2.text);
  // "The weather in Tokyo is 22°C and sunny."
}
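The example above handles a single tool round. If the model may chain several rounds of tool calls, the same pattern generalizes to a loop — a hedged sketch, where runPrompt stands in for tracia.prompts.run, executeTool for your local tool dispatch, and the type shapes are illustrative copies, not SDK exports:

```typescript
// Illustrative shapes mirroring the documented response fields.
interface ToolCall { id: string; name: string; arguments: Record<string, unknown>; }
interface Message { role: string; content: string; toolCallId?: string; toolName?: string; }
interface Result {
  text: string;
  finishReason?: string;
  toolCalls?: ToolCall[];
  messages?: Message[];
  spanId: string;
  traceId: string;
}

// Keep running tools and continuing the conversation until the model
// produces a final text answer (or we hit a safety cap on rounds).
async function runToolLoop(
  runPrompt: (messages?: Message[]) => Promise<Result>,
  executeTool: (call: ToolCall) => Promise<unknown>,
  maxRounds = 5,
): Promise<string> {
  let result = await runPrompt();
  for (let round = 0; round < maxRounds; round++) {
    if (result.finishReason !== 'tool_calls' || !result.toolCalls) {
      return result.text;
    }
    // One tool message per tool call, appended to the prior conversation.
    const toolMessages: Message[] = [];
    for (const call of result.toolCalls) {
      toolMessages.push({
        role: 'tool',
        toolCallId: call.id,
        toolName: call.name,
        content: JSON.stringify(await executeTool(call)),
      });
    }
    result = await runPrompt([...(result.messages ?? []), ...toolMessages]);
  }
  throw new Error('tool loop did not terminate');
}
```

The cap on rounds guards against a model that keeps requesting tools indefinitely; pick a limit that fits your use case.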

Trace Grouping

Use traceId and parentSpanId to group related runs in the dashboard.

const result1 = await tracia.prompts.run('step-1', { input: 'data' });

const result2 = await tracia.prompts.run('step-2', { input: result1.text }, {
  traceId: result1.traceId,
  parentSpanId: result1.spanId,
});

Error Handling

import { TraciaError, TraciaErrorCode } from 'tracia';

try {
  const result = await tracia.prompts.run('welcome-email', { name: 'Alice' });
} catch (error) {
  if (error instanceof TraciaError) {
    switch (error.code) {
      case TraciaErrorCode.NOT_FOUND:
        console.error('Prompt does not exist');
        break;
      case TraciaErrorCode.MISSING_VARIABLES:
        console.error('Missing required variables');
        break;
      case TraciaErrorCode.MISSING_PROVIDER_KEY:
        console.error('No LLM provider configured');
        break;
      case TraciaErrorCode.PROVIDER_ERROR:
        console.error('LLM provider error:', error.message);
        break;
    }
  }
}
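Since PROVIDER_ERROR is typically transient while the other codes are not, a retry wrapper can distinguish the two — a minimal sketch, where runWithRetry and the local TraciaError stub are illustrative (in real code, import TraciaError and TraciaErrorCode from 'tracia'):

```typescript
// Local stand-ins for the SDK's error types, for illustration only.
enum TraciaErrorCode {
  NOT_FOUND = 'NOT_FOUND',
  PROVIDER_ERROR = 'PROVIDER_ERROR',
}
class TraciaError extends Error {
  constructor(public code: TraciaErrorCode, message: string) {
    super(message);
  }
}

// Retry only transient provider failures; rethrow everything else
// (missing prompts or variables will not succeed on retry).
async function runWithRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (!(error instanceof TraciaError) || error.code !== TraciaErrorCode.PROVIDER_ERROR) {
        throw error;
      }
    }
  }
  throw lastError;
}
```

In practice you may also want a backoff delay between attempts; that is omitted here for brevity.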