Documentation Index
Fetch the complete documentation index at: https://docs.tracia.io/llms.txt
Use this file to discover all available pages before exploring further.
Tracia uses the Vercel AI SDK under the hood. Install only the providers you need.
Installation
Install the core AI SDK package plus your provider(s):
```shell
# Core package (required)
npm install ai

# Provider packages (install the ones you need)
npm install @ai-sdk/openai          # For OpenAI models
npm install @ai-sdk/anthropic       # For Anthropic models
npm install @ai-sdk/google          # For Google models
npm install @ai-sdk/amazon-bedrock  # For Amazon Bedrock models
```
OpenAI
Environment Variable
```shell
OPENAI_API_KEY=sk-your-openai-key
```
Usage
```typescript
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ],
  temperature: 0.7,
  maxOutputTokens: 500
});
```
Streaming
```typescript
const stream = tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a poem.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```
Anthropic
Environment Variable
```shell
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
```
Usage
```typescript
const result = await tracia.runLocal({
  model: 'claude-sonnet-4-20250514',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing.' }
  ],
  temperature: 0.7,
  maxOutputTokens: 1000
});
```
Streaming
```typescript
const stream = tracia.runLocal({
  model: 'claude-sonnet-4-20250514',
  messages: [{ role: 'user', content: 'Write a story.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```
Google
Environment Variable
```shell
GOOGLE_API_KEY=your-google-api-key
```
Usage
```typescript
const result = await tracia.runLocal({
  model: 'gemini-2.0-flash',
  messages: [
    { role: 'user', content: 'What are the benefits of TypeScript?' }
  ],
  temperature: 0.5,
  maxOutputTokens: 800
});
```
Streaming
```typescript
const stream = tracia.runLocal({
  model: 'gemini-2.0-flash',
  messages: [{ role: 'user', content: 'Explain machine learning.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```
Provider Override
The SDK automatically detects the provider based on the model name. For custom or fine-tuned models not in the built-in list, specify the provider explicitly:
```typescript
// Fine-tuned OpenAI model
const result = await tracia.runLocal({
  model: 'ft:gpt-4o:my-org:custom-model:abc123',
  provider: 'openai',
  messages: [{ role: 'user', content: 'Hello!' }]
});
```
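Detection of this kind typically keys off well-known model-name prefixes. The helper below is an illustrative sketch only, not Tracia's actual detection logic; the prefix table is an assumption for demonstration.

```typescript
// Illustrative sketch of prefix-based provider detection -- NOT Tracia's
// actual implementation. The prefix table is an assumption.
type Provider = 'openai' | 'anthropic' | 'google' | 'amazon-bedrock';

const PREFIXES: Array<[string, Provider]> = [
  ['gpt-', 'openai'],
  ['o1', 'openai'],
  ['claude-', 'anthropic'],
  ['gemini-', 'google'],
];

function detectProvider(model: string): Provider | undefined {
  const match = PREFIXES.find(([prefix]) => model.startsWith(prefix));
  return match?.[1]; // undefined => caller must pass `provider` explicitly
}

console.log(detectProvider('gpt-4o'));           // 'openai'
console.log(detectProvider('gemini-2.0-flash')); // 'google'
console.log(detectProvider('ft:gpt-4o:my-org:custom-model:abc123')); // undefined
```

The `undefined` case is why the `provider` field exists: a fine-tuned model ID like `ft:gpt-4o:...` does not start with a recognized prefix.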
API Key Override
Override the environment variable for a specific request:
```typescript
// Use a different API key
const result = await tracia.runLocal({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  providerApiKey: process.env.OPENAI_API_KEY_SECONDARY
});
```
This is useful for:
- Multi-tenant applications with per-customer API keys
- Testing with different API keys
- Using organization-specific keys
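For the multi-tenant case, a thin lookup layer can resolve the right key before each request. A minimal sketch, assuming keys live in an in-memory map (in practice they would come from a secrets store); the tenant IDs and key values are made up, and the `runLocal` call is shown as a comment.

```typescript
// Per-tenant key resolution sketch. The map contents are placeholder
// assumptions; real keys belong in a secrets store, not source code.
const tenantKeys = new Map<string, string>([
  ['acme', 'sk-acme-key'],
  ['globex', 'sk-globex-key'],
]);

function resolveApiKey(tenantId: string): string {
  const key = tenantKeys.get(tenantId);
  if (!key) throw new Error(`No API key configured for tenant "${tenantId}"`);
  return key;
}

// Then pass the resolved key per request:
// const result = await tracia.runLocal({
//   model: 'gpt-4o',
//   messages: [{ role: 'user', content: 'Hello!' }],
//   providerApiKey: resolveApiKey('acme'),
// });

console.log(resolveApiKey('acme')); // 'sk-acme-key'
```

Failing fast on a missing key keeps a misconfigured tenant from silently falling back to the shared environment-variable key.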
Error Handling by Provider
```typescript
import { Tracia, TraciaError, TraciaErrorCode } from 'tracia';

// Assumes `tracia` is an initialized Tracia client instance.
try {
  const result = await tracia.runLocal({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
} catch (error) {
  if (error instanceof TraciaError) {
    switch (error.code) {
      case TraciaErrorCode.MISSING_PROVIDER_SDK:
        // The `ai` package or a provider package is not installed
        console.error('Install required packages: npm install ai @ai-sdk/openai');
        break;
      case TraciaErrorCode.MISSING_PROVIDER_API_KEY:
        console.error('Set OPENAI_API_KEY environment variable');
        break;
      case TraciaErrorCode.PROVIDER_ERROR:
        console.error('Provider API error:', error.message);
        break;
    }
  }
}
```
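Transient provider failures (rate limits, 5xx responses) are often worth retrying. The generic helper below is a sketch of one way to wrap a call with exponential backoff; it is not part of the Tracia SDK, and the `shouldRetry` predicate is where a caller would check for `TraciaErrorCode.PROVIDER_ERROR`.

```typescript
// Generic retry wrapper with exponential backoff -- a sketch, not a Tracia
// API. `shouldRetry` decides which errors are transient enough to retry.
async function withRetry<T>(
  fn: () => Promise<T>,
  shouldRetry: (error: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxAttempts || !shouldRetry(error)) throw error;
      // Backoff doubles each attempt: 200ms, 400ms, 800ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Example with a stand-in flaky call: fails twice, then succeeds.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error('transient');
  return 'ok';
};

withRetry(flaky, () => true).then((result) => {
  console.log(result, calls); // 'ok' 3
});
```

With a real request, `fn` would be `() => tracia.runLocal({ ... })` and `shouldRetry` would return `true` only for `error instanceof TraciaError && error.code === TraciaErrorCode.PROVIDER_ERROR`.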
Provider Comparison
| Feature | OpenAI | Anthropic | Google | Amazon Bedrock |
|---|---|---|---|---|
| System messages | Native support | Native support | Converted internally | Vendor-dependent |
| Temperature range | 0-2 | 0-1 | 0-2 | Vendor-dependent |
| Max output tokens | Model-dependent | Model-dependent | Model-dependent | Model-dependent |
| Stop sequences | Supported | Supported | Supported | Supported |
| Streaming | Supported | Supported | Supported | Supported |
| Tool calling | Supported | Supported | Supported | Model-dependent |
Amazon Bedrock hosts models from multiple vendors (Amazon, Anthropic, Meta, Mistral, etc.). Feature support depends on the specific model. See the Bedrock page for details.
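Because temperature ranges differ by provider (0-2 for OpenAI and Google, 0-1 for Anthropic), portable code may want to clamp the requested value before sending it. A sketch using the ranges from the comparison table; Bedrock is left out because its range is vendor-dependent, and the conservative fallback of 1 for unknown providers is an assumption.

```typescript
// Clamp a requested temperature to the provider's supported range, per the
// comparison table. The fallback max of 1 for unknown providers is an
// assumption, chosen to be safe for the most restrictive known range.
const MAX_TEMPERATURE: Record<string, number> = {
  openai: 2,
  anthropic: 1,
  google: 2,
};

function clampTemperature(provider: string, temperature: number): number {
  const max = MAX_TEMPERATURE[provider] ?? 1;
  return Math.min(Math.max(temperature, 0), max);
}

console.log(clampTemperature('anthropic', 1.5)); // 1
console.log(clampTemperature('openai', 1.5));    // 1.5
```

Clamping rather than throwing keeps a single call site working across providers, at the cost of silently lowering an out-of-range value.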