Observability & Tracing
Station includes built-in OpenTelemetry (OTEL) support for complete execution observability. Every agent execution, LLM call, and tool invocation is automatically traced.
What Gets Traced
| Component | Details Captured |
|---|---|
| Agent Executions | Complete timeline from start to finish |
| LLM Calls | Every OpenAI/Anthropic/Gemini API call with latency |
| MCP Tool Usage | Individual tool calls to AWS, databases, etc. |
| Database Operations | Query performance and data access patterns |
| GenKit Spans | Dotprompt execution, generation flow, model interactions |
Quick Start with Jaeger
The fastest way to get tracing running locally is to run Jaeger and point Station at its OTLP endpoint, http://localhost:4318.
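Assuming Station reads the standard OTEL_EXPORTER_OTLP_ENDPOINT variable, a minimal local setup might look like this (the Jaeger image name and ports are Jaeger's published defaults):

```shell
# Start Jaeger all-in-one with OTLP ingestion enabled (ports: 16686 UI, 4318 OTLP/HTTP)
docker run -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest

# Point Station's exporter at Jaeger's OTLP/HTTP port (standard OTEL variable)
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```

The Jaeger UI is then available at http://localhost:16686.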
Example Trace
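As an illustration, an agent run appears as a nested timeline of spans; the names and timings below are hypothetical:

```
agent.execution (security-scanner)            2.4s
├── llm.call (gpt-4o)                         1.2s
├── tool.call (aws_s3_list_buckets)           0.3s
└── llm.call (gpt-4o)                         0.8s
```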
Configuration
Environment Variable (Recommended)
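Assuming Station honors the standard OpenTelemetry exporter variables, a sketch:

```shell
# Standard OpenTelemetry exporter variables (assumed to be read by Station at startup)
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=station
```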
Config File
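Station's exact config schema is not reproduced here; as a sketch, a telemetry section might look like the following (all key names are hypothetical):

```yaml
# Hypothetical config fragment; key names are illustrative, not Station's documented schema
telemetry:
  enabled: true
  otlp_endpoint: http://localhost:4318
  service_name: station
```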
MCP Client Configuration
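One common pattern is to hand the endpoint to the MCP client through its environment block; the JSON below follows the widespread mcpServers convention and is an assumption, not Station's documented format (the stn command name is likewise hypothetical):

```json
{
  "mcpServers": {
    "station": {
      "command": "stn",
      "args": ["stdio"],
      "env": {
        "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4318"
      }
    }
  }
}
```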
When connecting MCP clients, include the OTEL endpoint in the client configuration.
Tracing Backends
Station works with any OpenTelemetry-compatible backend.
Jaeger (Local Development)
Grafana Tempo
Datadog APM
Honeycomb
AWS X-Ray
New Relic
Azure Monitor
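All of the backends above accept OTLP, so switching is mostly a matter of endpoint and, for hosted services, an auth header. The endpoints below are the vendors' published OTLP ingest URLs; keys are placeholders:

```shell
# Jaeger or Grafana Tempo (local/self-hosted): plain OTLP, no auth header needed
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

# Honeycomb: OTLP over HTTPS, authenticated with a team key header
# export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
# export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_API_KEY"

# New Relic: OTLP ingest endpoint, authenticated with a license key header
# export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4318
# export OTEL_EXPORTER_OTLP_HEADERS="api-key=YOUR_LICENSE_KEY"

# Datadog, AWS X-Ray, Azure Monitor: run the vendor's agent or an OpenTelemetry
# Collector locally and point Station at its OTLP port (e.g. http://localhost:4318)
```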
Span Details
Station captures rich span information:
Agent Execution Span
LLM Call Span
Tool Call Span
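As an illustration (attribute names are hypothetical, not Station's documented schema), the three span types might carry attributes like:

```
agent.execution:  agent.name=security-scanner   duration_ms=2400  status=ok
llm.call:         llm.model=gpt-4o              latency_ms=1240   tokens.total=913
tool.call:        tool.name=aws_s3_list_buckets duration_ms=320   mcp.server=aws
```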
Viewing Traces
Jaeger UI
- Open http://localhost:16686
- Select “station” from the Service dropdown
- Click “Find Traces”
- Click on a trace to see the full execution timeline
Filtering Traces
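Jaeger's Tags search field accepts space-separated key=value pairs; the attribute names below are illustrative:

```
error=true
agent.name=security-scanner
llm.model=gpt-4o
```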
In Jaeger, use the Tags search field to filter traces by span attributes.
Production Setup
High-Volume Environments
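Head sampling through the standard OpenTelemetry SDK variables is the usual lever; whether Station reads these is an assumption here, and the 10% ratio is illustrative:

```shell
# Keep roughly 10% of traces, chosen deterministically by trace ID
export OTEL_TRACES_SAMPLER=traceidratio
export OTEL_TRACES_SAMPLER_ARG=0.1
```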
For production, use sampling to reduce trace volume.
Secure Endpoints
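For a collector behind TLS and authentication, the standard OTLP variables can carry the endpoint and an auth header (hostname and key are placeholders):

```shell
# HTTPS collector endpoint with an API-key header (values are placeholders)
export OTEL_EXPORTER_OTLP_ENDPOINT=https://otel-collector.example.com:4318
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=YOUR_KEY"
```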
Docker Deployment
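When Station itself runs in a container, remember that localhost refers to the container, not the host; the image name below is hypothetical:

```shell
# Reach a collector running on the Docker host via the host gateway (image name hypothetical)
docker run -d \
  --add-host=host.docker.internal:host-gateway \
  -e OTEL_EXPORTER_OTLP_ENDPOINT=http://host.docker.internal:4318 \
  ghcr.io/cloudshipai/station:latest
```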
Troubleshooting
No Traces Appearing
- Check endpoint connectivity to the tracing backend
- Verify the OTEL exporter environment variable is set
- Check Station logs for exporter errors
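The checks above can be run as a quick script; it assumes the standard OTEL environment variable and a local OTLP/HTTP collector:

```shell
# 1) Endpoint connectivity: an OTLP/HTTP collector answers on /v1/traces
status=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:4318/v1/traces || echo unreachable)
echo "collector status: $status"

# 2) Exporter variable: must be set in Station's environment
echo "endpoint var: ${OTEL_EXPORTER_OTLP_ENDPOINT:-<unset>}"

# 3) Station logs: look for exporter errors (log location varies by install)
```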
Traces Missing Tool Calls
Ensure MCP servers are configured with tracing enabled.
High Latency in Traces
If traces show high latency:
- Check network connectivity to the tracing backend
- Consider async export: traces are sent asynchronously by default
- For high-volume environments, use sampling (see Production Setup)
Next Steps
- Deployment Monitoring - Metrics and alerting
- Scheduling - Automated agent runs
- CloudShip Integration - Centralized observability

