The runEmbedding() method generates text embeddings using OpenAI, Google, or Amazon Bedrock embedding models. Like runLocal(), embedding requests go directly to the provider, and traces are sent to Tracia asynchronously in the background.
How It Works
When you call runEmbedding(), the SDK:
- Calls the provider directly - Your embedding request goes straight to OpenAI, Google, or Amazon Bedrock. Tracia is not in the request path.
- Sends the trace asynchronously - After the provider responds, trace data is sent to Tracia in the background. This is non-blocking and adds zero latency to your application.
Embedding traces are recorded with the EMBEDDING span kind, so you can track embedding usage, latency, and costs alongside your LLM completions.
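The two steps above can be sketched as a fire-and-forget pattern: await the provider call, then start the trace upload without awaiting it. The helpers below (callProvider, sendTrace, runEmbeddingSketch) are illustrative stand-ins, not Tracia APIs, and the trace shape is an assumption.

```typescript
// Illustrative sketch of the flow described above. None of these names
// are real Tracia APIs; they model the pattern only.

type Trace = { model: string; latencyMs: number; spanKind: "EMBEDDING" };

async function callProvider(input: string): Promise<number[]> {
  // Stand-in for a direct OpenAI / Google / Bedrock embedding request.
  return Array.from({ length: 4 }, (_, i) => input.length * (i + 1));
}

async function sendTrace(trace: Trace): Promise<void> {
  // Stand-in for the background upload to Tracia.
}

async function runEmbeddingSketch(input: string): Promise<number[]> {
  const start = Date.now();
  // Step 1: the provider call is the only thing in the request path.
  const embedding = await callProvider(input);
  // Step 2: the trace upload is started but NOT awaited, so it adds
  // no latency to the caller.
  void sendTrace({
    model: "text-embedding-3-small",
    latencyMs: Date.now() - start,
    spanKind: "EMBEDDING",
  });
  return embedding;
}
```

Because sendTrace is not awaited, a slow or failed trace upload never delays or breaks the embedding call itself.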
Quick Examples
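As a quick sketch of what calls might look like: the client below is a local stub standing in for the Tracia SDK, and the option names (model, input, dimensions, sessionId) are assumptions based on the pages listed below, not confirmed API.

```typescript
// Hypothetical usage sketch. `tracia` is a stub, not the real SDK client.

interface EmbedOptions {
  model: string;
  input: string | string[]; // a single text or a batch
  dimensions?: number;      // optional reduced output dimensionality (assumed)
  sessionId?: string;       // group related calls into one session (assumed)
}

const tracia = {
  // Stub: returns one placeholder vector per input, sized to `dimensions`.
  async runEmbedding(opts: EmbedOptions): Promise<number[][]> {
    const inputs = Array.isArray(opts.input) ? opts.input : [opts.input];
    return inputs.map(() => new Array(opts.dimensions ?? 1536).fill(0));
  },
};

async function main() {
  // Single embedding
  const single = await tracia.runEmbedding({
    model: "text-embedding-3-small",
    input: "What is vector search?",
  });

  // Batch embedding with reduced dimensions, tied to a session
  const batch = await tracia.runEmbedding({
    model: "text-embedding-3-small",
    input: ["doc one", "doc two"],
    dimensions: 256,
    sessionId: "session-123",
  });

  console.log(single.length, batch.length); // 1 2
}

main();
```

See the pages below for the actual call signatures and supported options.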
Available Pages
Basic Usage
Single and batch embeddings, dimensions, sessions
Supported Models
OpenAI, Google, and Amazon Bedrock embedding models

