```typescript
const result = await tracia.runLocal(input); // input: RunLocalInput, resolves to RunLocalResult
```
Required Parameters
| Parameter | Type | Description |
|---|---|---|
| `messages` | `LocalPromptMessage[]` | Array of messages to send to the LLM |
| `model` | `string` | Model identifier (e.g., `gpt-4o`, `claude-sonnet-4-20250514`) |
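A minimal call needs only these two fields. A quick sketch, assuming a `tracia` client has already been constructed:

```typescript
// Only `model` and `messages` are required; all other parameters are optional.
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(result.text); // The model's response text
```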
LocalPromptMessage
```typescript
interface LocalPromptMessage {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string | ContentPart[];
  toolCallId?: string; // Required for 'tool' role
  toolName?: string;   // Required for 'tool' role
}
```
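For illustration, a typical multi-turn conversation expressed with this type might look like the following sketch:

```typescript
const messages: LocalPromptMessage[] = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'What is TypeScript?' },
  { role: 'assistant', content: 'A typed superset of JavaScript.' },
  { role: 'user', content: 'Who maintains it?' },
];
```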
```typescript
// Content parts for assistant messages with tool calls
type ContentPart = TextPart | ToolCallPart;

interface TextPart {
  type: 'text';
  text: string;
}

interface ToolCallPart {
  type: 'tool_call';
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}
```
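To illustrate the part types, here is a hand-built assistant turn that mixes text with a tool call, plus the matching `tool` message. A sketch; the IDs and values are made up:

```typescript
const assistantTurn: LocalPromptMessage = {
  role: 'assistant',
  content: [
    { type: 'text', text: 'Let me check the weather.' },
    {
      type: 'tool_call',
      id: 'call_123', // hypothetical; real IDs come from the model
      name: 'get_weather',
      arguments: { location: 'Tokyo' },
    },
  ],
};

// The tool result refers back to the call via toolCallId/toolName.
const toolTurn: LocalPromptMessage = {
  role: 'tool',
  toolCallId: 'call_123',
  toolName: 'get_weather',
  content: JSON.stringify({ tempC: 22, condition: 'sunny' }),
};
```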
Streaming
| Parameter | Type | Default | Description |
|---|---|---|---|
| `stream` | `boolean` | `false` | When `true`, returns `LocalStream` instead of `Promise<RunLocalResult>` |
| `signal` | `AbortSignal` | `undefined` | AbortSignal to cancel the request (streaming only) |
Streaming Example
```typescript
const stream = tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a story.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}

const result = await stream.result;
```
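The `signal` option lets you cancel an in-flight stream with a standard `AbortController`. A minimal sketch (the 5-second cutoff is arbitrary):

```typescript
const controller = new AbortController();

const stream = tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a long story.' }],
  stream: true,
  signal: controller.signal,
});

// Abort the request if it is still running after 5 seconds.
const timer = setTimeout(() => controller.abort(), 5000);

try {
  for await (const chunk of stream) {
    process.stdout.write(chunk);
  }
} finally {
  clearTimeout(timer);
}
```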
See Streaming for more details.
LLM Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `temperature` | `number` | Provider default | Controls randomness (0-2 for OpenAI/Google, 0-1 for Anthropic) |
| `maxOutputTokens` | `number` | Provider default | Maximum tokens to generate |
| `topP` | `number` | Provider default | Nucleus sampling threshold |
| `stopSequences` | `string[]` | `undefined` | Stop generation when these sequences appear |
| `timeoutMs` | `number` | `undefined` | Timeout in milliseconds for the LLM call |
Example
```typescript
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a poem.' }],
  temperature: 0.9,
  maxOutputTokens: 500,
  topP: 0.95,
  stopSequences: ['---', 'THE END'],
  timeoutMs: 30000
});
```
Tools
| Parameter | Type | Default | Description |
|---|---|---|---|
| `tools` | `ToolDefinition[]` | `undefined` | Available tools/functions the model can call |
| `toolChoice` | `ToolChoice` | `undefined` | Control which tools the model can use |
```typescript
interface ToolDefinition {
  name: string;
  description: string;
  parameters: JSONSchema; // JSON Schema for the tool's parameters
}

type ToolChoice =
  | 'auto'            // Model decides whether to use tools
  | 'none'            // Model cannot use tools
  | 'required'        // Model must use a tool
  | { tool: string }; // Model must use the specified tool
```
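For example, to force a specific tool, pass its name in the object form. A sketch, where `weatherTool` stands in for a `ToolDefinition` like the one in the example below:

```typescript
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools: [weatherTool],                // hypothetical ToolDefinition defined elsewhere
  toolChoice: { tool: 'get_weather' }, // must match the tool's `name`
});
```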
```typescript
// Step 1: Initial request with tools
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools: [{
    name: 'get_weather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name' }
      },
      required: ['location']
    }
  }],
  toolChoice: 'auto'
});

// Step 2: Handle tool calls
if (result.finishReason === 'tool_calls') {
  const toolCall = result.toolCalls[0];

  // Execute your tool
  const weatherData = await getWeather(toolCall.arguments.location);

  // Step 3: Continue with tool result
  const followUp = await tracia.runLocal({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'What is the weather in Tokyo?' },
      result.message, // Assistant's message (includes tool calls)
      {
        role: 'tool',
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        content: JSON.stringify(weatherData)
      }
    ],
    tools: [/* same tools */]
  });

  console.log(followUp.text); // "The weather in Tokyo is 22°C and sunny."
}
```
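Steps 2 and 3 generalize to a loop that keeps executing tools until the model stops requesting them. A sketch, assuming the same `tools` array as above and a hypothetical `executeTool(name, args)` dispatcher of your own:

```typescript
async function runWithTools(userText: string): Promise<string> {
  const messages: LocalPromptMessage[] = [{ role: 'user', content: userText }];

  for (let step = 0; step < 5; step++) { // cap iterations to avoid infinite loops
    const result = await tracia.runLocal({ model: 'gpt-4o', messages, tools });

    if (result.finishReason !== 'tool_calls') {
      return result.text; // model answered directly
    }

    messages.push(result.message); // assistant turn with tool calls

    for (const call of result.toolCalls) {
      const output = await executeTool(call.name, call.arguments); // your own dispatcher
      messages.push({
        role: 'tool',
        toolCallId: call.id,
        toolName: call.name,
        content: JSON.stringify(output),
      });
    }
  }

  throw new Error('Tool loop did not finish within 5 steps');
}
```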
Provider Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `provider` | `'openai' \| 'anthropic' \| 'google'` | Auto-detected | Override provider detection for custom models |
| `providerApiKey` | `string` | Environment variable | Override the default API key |
| `customOptions` | `Partial<Record<LLMProvider, Record<string, unknown>>>` | `undefined` | Provider-specific options passed to the AI SDK, namespaced by provider |
Provider Override
Use `provider` when using a model not in the built-in list:
```typescript
const result = await tracia.runLocal({
  model: 'my-fine-tuned-gpt4',
  provider: 'openai', // Required for custom models
  messages: [{ role: 'user', content: 'Hello!' }]
});
```
Custom API Key
Override the API key from the environment variable for a single request:
```typescript
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  providerApiKey: 'sk-different-key-for-this-request'
});
```
Custom Options
Pass provider-specific options using a provider-namespaced object. Each key is the provider name, and the value is an object of options passed to the AI SDK:
```typescript
// OpenAI: enable strict JSON schema for tool calls
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Return a JSON object with name and age.' }],
  customOptions: {
    openai: { strictJsonSchema: true }
  }
});
```

```typescript
// Anthropic: custom metadata
const result = await tracia.runLocal({
  model: 'claude-sonnet-4-20250514',
  messages: [{ role: 'user', content: 'Hello!' }],
  customOptions: {
    anthropic: { metadata: { user_id: 'user-123' } }
  }
});
```

```typescript
// Google: safety settings
const result = await tracia.runLocal({
  model: 'gemini-2.0-flash',
  messages: [{ role: 'user', content: 'Hello!' }],
  customOptions: {
    google: {
      safetySettings: [
        { category: 'HARM_CATEGORY_HATE_SPEECH', threshold: 'BLOCK_LOW_AND_ABOVE' }
      ]
    }
  }
});
```
By default, `openai.strictJsonSchema` is set to `false`, which matches the OpenAI API default. The AI SDK's OpenAI provider defaults to `true`, which can reject tool schemas with open-ended objects. Tracia overrides this to `false` unless you explicitly set it.
Variable Interpolation
| Parameter | Type | Default | Description |
|---|---|---|---|
| `variables` | `Record<string, string>` | `undefined` | Variables for `{{placeholder}}` interpolation |
```typescript
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You help with {{topic}}.' },
    { role: 'user', content: 'Explain {{concept}} to a {{audience}}.' }
  ],
  variables: {
    topic: 'programming',
    concept: 'recursion',
    audience: 'beginner'
  }
});
```
See Variables for more details.
Span Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| `tags` | `string[]` | `undefined` | Tags for filtering spans in the dashboard |
| `userId` | `string` | `undefined` | End user identifier |
| `sessionId` | `string` | `undefined` | Session identifier for grouping spans |
| `sendTrace` | `boolean` | `true` | Whether to send the span to Tracia |
| `spanId` | `string` | Auto-generated | Custom span ID (must match `sp_` + 16 hex chars) |
| `traceId` | `string` | `undefined` | Group related spans together (session ID for multi-turn conversations) |
| `parentSpanId` | `string` | `undefined` | Link to parent span (creates a chain) |
Example
```typescript
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  tags: ['production', 'chat'],
  userId: 'user_123',
  sessionId: 'session_abc',
  spanId: 'sp_1234567890abcdef'
});
```
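Custom span IDs must match `sp_` plus 16 hex characters. One way to generate a conforming ID, a sketch using Node's built-in `crypto` module:

```typescript
import { randomBytes } from 'node:crypto';

// 8 random bytes -> 16 hex characters, matching the sp_ + 16 hex format.
function makeSpanId(): string {
  return `sp_${randomBytes(8).toString('hex')}`;
}

const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  spanId: makeSpanId(),
});
```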
Span Linking for Multi-Turn Conversations
```typescript
// First call
const result1 = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Second call - linked to first
const result2 = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [/* ... */],
  traceId: result1.spanId,      // Group all spans in this session
  parentSpanId: result1.spanId, // Chain to parent span
});
```
See Sessions for automatic span linking.
Disabling Tracing
```typescript
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  sendTrace: false
});

console.log(result.spanId); // Empty string
```
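A common pattern is to enable tracing only in certain environments, for example (a sketch):

```typescript
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  sendTrace: process.env.NODE_ENV === 'production', // trace only in production
});
```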
See Tracing for more details.
Complete Example
```typescript
const result = await tracia.runLocal({
  // Required
  model: 'claude-sonnet-4-20250514',
  messages: [
    { role: 'system', content: 'You are a {{role}}.' },
    { role: 'user', content: '{{question}}' }
  ],

  // LLM configuration
  temperature: 0.7,
  maxOutputTokens: 1000,
  topP: 0.9,
  stopSequences: ['---'],
  timeoutMs: 60000,

  // Provider configuration
  providerApiKey: process.env.ANTHROPIC_API_KEY_PROD,

  // Variables
  variables: {
    role: 'helpful assistant',
    question: 'What is the meaning of life?'
  },

  // Tracing
  tags: ['production', 'philosophy'],
  userId: 'user_123',
  sessionId: 'session_abc'
});
```