const result = await tracia.runLocal(input); // input: RunLocalInput

Required Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `messages` | `LocalPromptMessage[]` | Array of messages to send to the LLM |
| `model` | `string` | Model identifier (e.g., `gpt-4o`, `claude-sonnet-4-20250514`) |

LocalPromptMessage

interface LocalPromptMessage {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string | ContentPart[];
  toolCallId?: string;  // Required for 'tool' role
  toolName?: string;    // Required for 'tool' role
}

// Content parts for assistant messages with tool calls
type ContentPart = TextPart | ToolCallPart;

interface TextPart {
  type: 'text';
  text: string;
}

interface ToolCallPart {
  type: 'tool_call';
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}
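These shapes compose into a tool-call round trip: an assistant message whose content mixes a `TextPart` and a `ToolCallPart`, followed by a `tool` message that echoes the call's `id` and `name`. The `get_weather` tool and its arguments here are illustrative:

```typescript
// A tool call as plain data, following the interfaces above.
const toolCall = {
  type: 'tool_call' as const,
  id: 'call_abc123',
  name: 'get_weather',
  arguments: { location: 'Tokyo' },
};

const messages = [
  { role: 'user' as const, content: 'What is the weather in Tokyo?' },
  {
    role: 'assistant' as const,
    // Assistant content can mix text and tool calls.
    content: [
      { type: 'text' as const, text: 'Let me check.' },
      toolCall,
    ],
  },
  {
    role: 'tool' as const,
    toolCallId: toolCall.id, // must echo the ToolCallPart's id
    toolName: toolCall.name, // and its name
    content: JSON.stringify({ temperature: 22, condition: 'sunny' }),
  },
];
```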

Streaming

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `stream` | `boolean` | `false` | When `true`, returns a `LocalStream` instead of `Promise<RunLocalResult>` |
| `signal` | `AbortSignal` | `undefined` | `AbortSignal` to cancel the request (streaming only) |

Streaming Example

const stream = tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a story.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}

const result = await stream.result;
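To cancel an in-flight stream, pass an `AbortSignal` via the `signal` parameter. A sketch that aborts after a timeout; handling of the abort error is elided:

```typescript
// Cancel a streaming run if it takes longer than 5 seconds.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5000);

const stream = tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a story.' }],
  stream: true,
  signal: controller.signal,
});

try {
  for await (const chunk of stream) {
    process.stdout.write(chunk);
  }
} finally {
  clearTimeout(timer); // don't abort a stream that already finished
}
```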
See Streaming for more details.

LLM Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `temperature` | `number` | Provider default | Controls randomness (0-2 for OpenAI/Google, 0-1 for Anthropic) |
| `maxOutputTokens` | `number` | Provider default | Maximum tokens to generate |
| `topP` | `number` | Provider default | Nucleus sampling threshold |
| `stopSequences` | `string[]` | `undefined` | Stop generation when these sequences appear |
| `timeoutMs` | `number` | `undefined` | Timeout in milliseconds for the LLM call |

Example

const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a poem.' }],
  temperature: 0.9,
  maxOutputTokens: 500,
  topP: 0.95,
  stopSequences: ['---', 'THE END'],
  timeoutMs: 30000
});

Tool Calling

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `tools` | `ToolDefinition[]` | `undefined` | Available tools/functions the model can call |
| `toolChoice` | `ToolChoice` | `undefined` | Controls which tools the model can use |

ToolDefinition

interface ToolDefinition {
  name: string;
  description: string;
  parameters: JSONSchema;  // JSON Schema for the tool's parameters
}

ToolChoice

type ToolChoice =
  | 'auto'      // Model decides whether to use tools
  | 'none'      // Model cannot use tools
  | 'required'  // Model must use a tool
  | { tool: string }  // Model must use the specified tool

Tool Calling Example

// Step 1: Initial request with tools
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools: [{
    name: 'get_weather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name' }
      },
      required: ['location']
    }
  }],
  toolChoice: 'auto'
});

// Step 2: Handle tool calls
if (result.finishReason === 'tool_calls') {
  const toolCall = result.toolCalls[0];

  // Execute your tool
  const weatherData = await getWeather(toolCall.arguments.location);

  // Step 3: Continue with tool result
  const followUp = await tracia.runLocal({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'What is the weather in Tokyo?' },
      result.message,  // Assistant's message (includes tool calls)
      {
        role: 'tool',
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        content: JSON.stringify(weatherData)
      }
    ],
    tools: [/* same tools */]
  });

  console.log(followUp.text); // "The weather in Tokyo is 22°C and sunny."
}
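To force a specific tool instead of letting the model decide, pass `{ tool: name }` as `toolChoice`. A sketch reusing the `get_weather` definition from the example above:

```typescript
// Force the model to call get_weather rather than answering directly.
const forced = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools: [/* same get_weather definition as above */],
  toolChoice: { tool: 'get_weather' },
});
// With a forced choice, expect finishReason to be 'tool_calls'
// and result.toolCalls to contain the named tool.
```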

Provider Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `provider` | `'openai' \| 'anthropic' \| 'google'` | Auto-detected | Override provider detection for custom models |
| `providerApiKey` | `string` | Environment variable | Override the default API key |
| `customOptions` | `Partial<Record<LLMProvider, Record<string, unknown>>>` | `undefined` | Provider-specific options passed to the AI SDK, namespaced by provider |

Provider Override

Set `provider` when the model is not in the built-in list:
const result = await tracia.runLocal({
  model: 'my-fine-tuned-gpt4',
  provider: 'openai', // Required for custom models
  messages: [{ role: 'user', content: 'Hello!' }]
});

Custom API Key

Override the API key read from the environment for a single request:
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  providerApiKey: 'sk-different-key-for-this-request'
});

Custom Options

Pass provider-specific options using a provider-namespaced object. Each key is the provider name, and the value is an object of options passed to the AI SDK:
// OpenAI: enable strict JSON schema for tool calls
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Return a JSON object with name and age.' }],
  customOptions: {
    openai: { strictJsonSchema: true }
  }
});

// Anthropic: custom metadata
const result = await tracia.runLocal({
  model: 'claude-sonnet-4-20250514',
  messages: [{ role: 'user', content: 'Hello!' }],
  customOptions: {
    anthropic: { metadata: { user_id: 'user-123' } }
  }
});

// Google: safety settings
const result = await tracia.runLocal({
  model: 'gemini-2.0-flash',
  messages: [{ role: 'user', content: 'Hello!' }],
  customOptions: {
    google: {
      safetySettings: [
        { category: 'HARM_CATEGORY_HATE_SPEECH', threshold: 'BLOCK_LOW_AND_ABOVE' }
      ]
    }
  }
});
By default, Tracia sets `openai.strictJsonSchema` to `false`, matching the OpenAI API default. The AI SDK's OpenAI provider defaults to `true`, which can reject tool schemas containing open-ended objects; Tracia overrides this to `false` unless you set it explicitly.

Variable Interpolation

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `variables` | `Record<string, string>` | `undefined` | Variables for `{{placeholder}}` interpolation |
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You help with {{topic}}.' },
    { role: 'user', content: 'Explain {{concept}} to a {{audience}}.' }
  ],
  variables: {
    topic: 'programming',
    concept: 'recursion',
    audience: 'beginner'
  }
});
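Conceptually, interpolation replaces each `{{name}}` token with the matching `variables` entry before the messages are sent. This sketch illustrates the behavior; it is not the SDK's actual implementation, and how unmatched placeholders are handled is an assumption here:

```typescript
// Illustration of {{placeholder}} interpolation: substitute each token
// with its variables entry, leaving unknown placeholders untouched.
function interpolate(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in variables ? variables[name] : match
  );
}
```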
See Variables for more details.

Span Options

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `tags` | `string[]` | `undefined` | Tags for filtering spans in the dashboard |
| `userId` | `string` | `undefined` | End-user identifier |
| `sessionId` | `string` | `undefined` | Session identifier for grouping spans |
| `sendTrace` | `boolean` | `true` | Whether to send the span to Tracia |
| `spanId` | `string` | Auto-generated | Custom span ID (must match `sp_` + 16 hex chars) |
| `traceId` | `string` | `undefined` | Groups related spans together (use as a session ID for multi-turn conversations) |
| `parentSpanId` | `string` | `undefined` | Links to a parent span (creates a chain) |

Example

const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  tags: ['production', 'chat'],
  userId: 'user_123',
  sessionId: 'session_abc',
  spanId: 'sp_1234567890abcdef'
});
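If you generate span IDs yourself, they must match the `sp_` + 16 hex chars format noted above. A minimal validation sketch; the regex (including the lowercase-only assumption) is inferred from that description, not taken from the SDK:

```typescript
// Validate a custom span ID: 'sp_' followed by exactly 16 hex characters.
// Lowercase-only is an assumption; the docs say only "16 hex chars".
const SPAN_ID_RE = /^sp_[0-9a-f]{16}$/;

function isValidSpanId(id: string): boolean {
  return SPAN_ID_RE.test(id);
}
```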

Span Linking for Multi-Turn Conversations

// First call
const result1 = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Second call - linked to first
const result2 = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [...],
  traceId: result1.spanId,      // Group all spans in this session
  parentSpanId: result1.spanId, // Chain to parent span
});
See Sessions for automatic span linking.

Disabling Tracing

const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  sendTrace: false
});

console.log(result.spanId); // Empty string
See Tracing for more details.

Complete Example

const result = await tracia.runLocal({
  // Required
  model: 'claude-sonnet-4-20250514',
  messages: [
    { role: 'system', content: 'You are a {{role}}.' },
    { role: 'user', content: '{{question}}' }
  ],

  // LLM configuration
  temperature: 0.7,
  maxOutputTokens: 1000,
  topP: 0.9,
  stopSequences: ['---'],
  timeoutMs: 60000,

  // Provider configuration
  providerApiKey: process.env.ANTHROPIC_API_KEY_PROD,

  // Variables
  variables: {
    role: 'helpful assistant',
    question: 'What is the meaning of life?'
  },

  // Tracing
  tags: ['production', 'philosophy'],
  userId: 'user_123',
  sessionId: 'session_abc'
});