{ "text": "Dear Alice,\n\nWelcome to Tracia! We're thrilled to have you join our community...", "traceId": "tr_abc123xyz", "promptVersion": 3, "latencyMs": 1250, "usage": { "inputTokens": 45, "outputTokens": 120, "totalTokens": 165 }, "cost": 0.0049}
Run a prompt with variable substitution and get the generated response. This endpoint handles template rendering, LLM API calls, and automatically logs a trace.
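A minimal sketch of calling this endpoint from Python. The base URL, path, auth header, and request fields (`prompt`, `variables`) are assumptions for illustration and are not specified in this section; only the response fields shown above (`text`, `traceId`, `promptVersion`, `latencyMs`, `usage`, `cost`) come from the example response.

```python
import requests

# Assumed endpoint URL, API key header, and request shape -- adjust to the
# actual Tracia API; they are placeholders, not documented values.
TRACIA_RUN_URL = "https://api.tracia.example/v1/prompts/run"  # hypothetical
API_KEY = "YOUR_API_KEY"

payload = {
    "prompt": "welcome-email",        # hypothetical: identifies the prompt template
    "variables": {"name": "Alice"},   # hypothetical: values substituted into the template
}

resp = requests.post(
    TRACIA_RUN_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

# Fields documented in the example response above.
print(data["text"])                   # the generated text
print(data["traceId"])                # id of the automatically logged trace
print(data["usage"]["totalTokens"], data["cost"])
```

Because the endpoint logs a trace on every call, the returned `traceId` can be kept alongside your own records to correlate application events with the logged trace, and `usage`/`cost` can feed straight into token and spend tracking.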