The Python SDK uses LiteLLM under the hood for unified provider access. LiteLLM is included as a dependency — no additional provider packages are needed.

Installation

pip install tracia
LiteLLM is installed automatically with Tracia and handles all provider communication.
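To confirm both packages are present, you can query the installed versions with the standard library (a quick sanity check, nothing more):

from importlib.metadata import version

print(version("tracia"))   # the SDK itself
print(version("litellm"))  # pulled in automatically as a dependency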

OpenAI

Environment Variable

OPENAI_API_KEY=sk-your-openai-key

Usage

# `client` is an initialized tracia.Tracia instance (setup omitted here).
result = client.run_local(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    temperature=0.7,
    max_output_tokens=500,
)

Streaming

stream = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a poem."}],
    stream=True,
)

for chunk in stream:
    print(chunk, end="")
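Each chunk is a text fragment, so if you don't need incremental output you can join the stream into a single string. This assumes chunks are plain strings, as the print loop above implies:

stream = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a poem."}],
    stream=True,
)

full_text = "".join(stream)  # consumes the generator; chunks assumed to be str
print(full_text)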

Anthropic

Environment Variable

ANTHROPIC_API_KEY=sk-ant-your-anthropic-key

Usage

result = client.run_local(
    model="claude-sonnet-4-20250514",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing."},
    ],
    temperature=0.7,
    max_output_tokens=1000,
)

Streaming

stream = client.run_local(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Write a story."}],
    stream=True,
)

for chunk in stream:
    print(chunk, end="")

Google

Environment Variable

GOOGLE_API_KEY=your-google-api-key

Usage

result = client.run_local(
    model="gemini-2.0-flash",
    messages=[
        {"role": "user", "content": "What are the benefits of Python?"},
    ],
    temperature=0.5,
    max_output_tokens=800,
)
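Gemini models do not accept system messages natively; as the provider comparison table below notes, the SDK converts them internally, so the same message shape used for OpenAI and Anthropic works unchanged:

result = client.run_local(
    model="gemini-2.0-flash",
    messages=[
        # Converted internally for Google models (see Provider Comparison).
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What are the benefits of Python?"},
    ],
)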

Streaming

stream = client.run_local(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Explain machine learning."}],
    stream=True,
)

for chunk in stream:
    print(chunk, end="")

Provider Override

The SDK automatically detects the provider based on the model name. For custom or fine-tuned models not in the built-in list, specify the provider explicitly:
# Fine-tuned OpenAI model
result = client.run_local(
    model="ft:gpt-4o:my-org:custom-model:abc123",
    provider="openai",
    messages=[{"role": "user", "content": "Hello!"}],
)

API Key Override

Override the environment variable for a specific request:
import os

result = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    provider_api_key=os.environ["OPENAI_API_KEY_SECONDARY"],
)
This is useful for:
  • Multi-tenant applications with per-customer API keys (see the sketch after this list)
  • Testing with different API keys
  • Using organization-specific keys
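A minimal sketch of the multi-tenant case, assuming a hypothetical TENANT_KEYS store that you maintain yourself (the SDK has no built-in tenant registry):

# Hypothetical per-tenant key store; in practice this would live in a
# secrets manager or database, not an in-memory dict.
TENANT_KEYS = {
    "acme": "sk-acme-openai-key",
    "globex": "sk-globex-openai-key",
}

def run_for_tenant(tenant_id: str, user_message: str):
    return client.run_local(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_message}],
        provider_api_key=TENANT_KEYS[tenant_id],  # overrides OPENAI_API_KEY
    )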

Error Handling by Provider

from tracia import Tracia, TraciaError, TraciaErrorCode

try:
    result = client.run_local(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
except TraciaError as error:
    if error.code == TraciaErrorCode.MISSING_PROVIDER_API_KEY:
        print("Set OPENAI_API_KEY environment variable")
    elif error.code == TraciaErrorCode.PROVIDER_ERROR:
        print(f"Provider API error: {error.message}")
    elif error.code == TraciaErrorCode.UNSUPPORTED_MODEL:
        print("Specify provider explicitly for custom models")
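Building on the handler above, here is a sketch of retrying PROVIDER_ERROR with exponential backoff. It assumes PROVIDER_ERROR covers transient failures such as rate limits; check your error semantics before retrying unconditionally.

import time

from tracia import TraciaError, TraciaErrorCode

def run_with_retry(max_attempts: int = 3, **kwargs):
    for attempt in range(max_attempts):
        try:
            return client.run_local(**kwargs)
        except TraciaError as error:
            # Retry only provider-side failures; re-raise everything else.
            if error.code != TraciaErrorCode.PROVIDER_ERROR:
                raise
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...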

Provider Comparison

| Feature           | OpenAI          | Anthropic       | Google               |
|-------------------|-----------------|-----------------|----------------------|
| System messages   | Native support  | Native support  | Converted internally |
| Temperature range | 0-2             | 0-1             | 0-2                  |
| Max output tokens | Model-dependent | Model-dependent | Model-dependent      |
| Stop sequences    | Supported       | Supported       | Supported            |
| Streaming         | Supported       | Supported       | Supported            |
| Tool calling      | Supported       | Supported       | Supported            |
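One practical consequence of the differing temperature ranges: a shared default above 1.0 may be rejected by Anthropic models. A hypothetical helper (the names here are ours, not the SDK's) can clamp it per provider:

# Per-provider maximums taken from the table above.
MAX_TEMPERATURE = {"openai": 2.0, "anthropic": 1.0, "google": 2.0}

def clamp_temperature(provider: str, temperature: float) -> float:
    # Clamp a shared default to the provider's supported range.
    return min(temperature, MAX_TEMPERATURE[provider])

print(clamp_temperature("anthropic", 1.5))  # -> 1.0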