
# Configuration Files

Station uses a hierarchical configuration system:

| File | Purpose |
|------|---------|
| `~/.config/station/config.yaml` | Global settings |
| `environments/{env}/template.json` | MCP server configurations |
| `environments/{env}/variables.yml` | Environment-specific variables |
| `environments/{env}/agents/*.prompt` | Agent definitions |
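
On disk, this typically looks like the sketch below (assuming a single environment named `default` and that `environments/` lives under the Station config root; your layout may differ):

```
~/.config/station/
├── config.yaml
├── station.db
└── environments/
    └── default/
        ├── template.json
        ├── variables.yml
        └── agents/
            └── code-reviewer.prompt
```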

## Global Configuration

The main `config.yaml` file controls Station’s behavior:

```yaml
# AI Provider Configuration
ai_provider: openai           # openai, anthropic, gemini, custom
ai_model: gpt-4o              # default model for agents
ai_api_key: ""                # or use STN_AI_API_KEY env var
ai_base_url: ""               # for custom providers

# Anthropic OAuth (for Claude Max/Pro subscribers)
ai_auth_type: oauth           # oauth or api_key
ai_oauth_token: ""            # set via `stn auth anthropic login`
ai_oauth_refresh_token: ""    # auto-managed
ai_oauth_expires_at: 0        # auto-managed

# Server Configuration
api_port: 8585                # REST API and Web UI
mcp_port: 8586                # MCP Server for AI editors
ssh_port: 2222                # SSH Admin Interface
admin_username: admin         # SSH username

# Database
database_url: /home/user/.config/station/station.db

# Operating Mode
local_mode: true              # true for standalone, false for CloudShip
debug: false                  # enable debug logging

# Telemetry
telemetry_enabled: false
otel_endpoint: http://localhost:4318

# CloudShip Integration
cloudship:
  enabled: false
  registration_key: ""
  endpoint: lighthouse.cloudship.ai:443
```
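
A minimal standalone config can be much shorter; the sketch below assumes unset keys fall back to the defaults shown above:

```yaml
ai_provider: openai
ai_model: gpt-4o
ai_api_key: sk-...
local_mode: true
```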

## Environment Variables

All settings can be overridden via environment variables:

| Variable | Config Key | Description |
|----------|------------|-------------|
| `STN_AI_PROVIDER` | `ai_provider` | AI provider name |
| `STN_AI_MODEL` | `ai_model` | Default model |
| `STN_AI_API_KEY` | `ai_api_key` | API key |
| `STN_AI_BASE_URL` | `ai_base_url` | Custom API endpoint |
| `STN_API_PORT` | `api_port` | API server port |
| `STN_MCP_PORT` | `mcp_port` | MCP server port |
| `STN_SSH_PORT` | `ssh_port` | SSH server port |
| `STATION_LOCAL_MODE` | `local_mode` | Local/CloudShip mode |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | `otel_endpoint` | Telemetry endpoint |
Environment variables take precedence over config file values.
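
For example, to point a single run at a different model without touching `config.yaml` (any combination of the variables above works the same way):

```bash
STN_AI_PROVIDER=openai STN_AI_MODEL=gpt-4o-mini stn serve
```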

## AI Provider Authentication

Station supports multiple AI providers with different authentication methods.

### OpenAI / Gemini (API Key)

```yaml
ai_provider: openai
ai_model: gpt-4o
ai_api_key: sk-...  # or set STN_AI_API_KEY
```

### Anthropic (API Key)

```yaml
ai_provider: anthropic
ai_model: claude-sonnet-4-20250514
ai_api_key: sk-ant-api03-...
```

### Anthropic OAuth (Claude Max/Pro)

Claude Max and Pro subscribers can use OAuth instead of an API key. This uses your existing subscription—no separate API billing.
```bash
# Login via browser
stn auth anthropic login

# Check token status
stn auth anthropic status
# Output: Token expires: 2025-12-30T16:59:42-06:00 (5h remaining)

# Logout
stn auth anthropic logout
```

After login, your config is automatically updated:
```yaml
ai_provider: anthropic
ai_model: claude-sonnet-4-20250514
ai_auth_type: oauth
ai_oauth_token: sk-ant-oat01-...      # auto-set
ai_oauth_refresh_token: sk-ant-ort01-... # auto-set
ai_oauth_expires_at: 1767135582033    # auto-set
```

OAuth tokens expire after ~4 hours but are automatically refreshed before expiry. The refresh token is stored securely in your config.
**Supported Claude Models:**

- `claude-opus-4-5-20251101` (Claude 4.5 Opus)
- `claude-sonnet-4-20250514` (Claude 4 Sonnet)
- `claude-sonnet-4-5-20250929` (Claude 4.5 Sonnet)
- `claude-haiku-4-5-20251001` (Claude 4.5 Haiku)
- `claude-3-5-sonnet-20241022` (Claude 3.5 Sonnet)
- `claude-3-5-haiku-20241022` (Claude 3.5 Haiku)
- `claude-3-opus-20240229` (Claude 3 Opus)
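
For example, to switch the default model to Claude 4.5 Sonnet with the `stn config set` command documented under Config Commands below:

```bash
stn config set ai_model claude-sonnet-4-5-20250929
```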

## MCP Server Configuration

Configure MCP servers in `template.json`:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
      "env": {}
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "{{GITHUB_TOKEN}}"
      }
    }
  }
}
```

## Template Variables

Use `{{VARIABLE_NAME}}` syntax for dynamic values. Define them in `variables.yml`:

```yaml
GITHUB_TOKEN: ghp_your_token_here
DATABASE_URL: postgresql://localhost/mydb
```

Or set them as environment variables; Station will prompt for missing values during `stn sync`.
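
To illustrate, with the `GITHUB_TOKEN` value above, the `github` entry from the earlier `template.json` would resolve to something like this (illustrative rendering; the substitution happens internally during `stn sync`):

```json
{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {
      "GITHUB_TOKEN": "ghp_your_token_here"
    }
  }
}
```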

## Agent Configuration

Define agents in `.prompt` files using dotprompt format:

````
---
name: code-reviewer
description: Reviews code for best practices and security issues
model: openai/gpt-4o
tools:
  - filesystem_read
  - github_pr_comments
config:
  temperature: 0.7
  maxOutputTokens: 4096
input:
  schema:
    type: object
    properties:
      code:
        type: string
        description: The code to review
      language:
        type: string
        description: Programming language
output:
  schema:
    type: object
    properties:
      issues:
        type: array
        items:
          type: object
          properties:
            severity: { type: string }
            message: { type: string }
            line: { type: number }
---

You are an expert code reviewer. Analyze the provided {{language}} code and identify:

1. Security vulnerabilities
2. Performance issues
3. Best practice violations
4. Code style improvements

Code to review:
```{{language}}
{{code}}
```

Provide detailed feedback with line numbers where applicable.
````
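
For a quick smoke test, an input payload matching the `input.schema` above could look like this (hypothetical values):

```json
{
  "code": "def add(a, b):\n    return a + b",
  "language": "python"
}
```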

## Port Configuration

Default ports and their purposes:

| Port | Service | Description |
|------|---------|-------------|
| 8585 | API/UI | REST API and Web management UI |
| 8586 | MCP | Model Context Protocol server |
| 8587 | Agent MCP | Dynamic Agent MCP (OAuth protected) |
| 2222 | SSH | SSH Admin Interface |

To change ports:

```yaml
# config.yaml
api_port: 9000
mcp_port: 9001
ssh_port: 2223
```

Or via environment variables:

```bash
STN_API_PORT=9000 STN_MCP_PORT=9001 stn serve
```

## Telemetry Configuration

Enable OpenTelemetry for distributed tracing:

```yaml
# config.yaml
telemetry_enabled: true
otel_endpoint: http://localhost:4318
```

Or start Jaeger and enable telemetry on the fly:

```bash
# Start Jaeger
stn jaeger up

# Start Station with telemetry
stn serve --enable-telemetry
```
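
To confirm the collector is reachable, you can probe the OTLP/HTTP endpoint (this assumes the default Jaeger all-in-one ports; a 405 response still proves the receiver is listening, since it only accepts POST):

```bash
# 405 = OTLP receiver is up; connection refused = nothing on the port
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:4318/v1/traces
```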

## Workflow and Embedded NATS

Station includes an embedded NATS server for workflow orchestration and OpenCode communication. This eliminates the need for a separate NATS installation in most deployments.

### How Embedded NATS Works

When you start Station with workflows enabled (`stn serve`), it automatically:

1. Starts an embedded NATS server on port 4222
2. Configures the workflow engine to use it
3. Allows OpenCode containers to connect via `host.docker.internal:4222`

```
┌─────────────────────────────────────────────────────────────┐
│  Station Process                                            │
│                                                             │
│  ┌─────────────────┐      ┌─────────────────────────────┐  │
│  │  Workflow Engine │─────▶│  Embedded NATS (port 4222)  │  │
│  └─────────────────┘      └─────────────────────────────┘  │
│                                    │                        │
└────────────────────────────────────│────────────────────────┘
                                     │
                                     ▼
                          ┌─────────────────────┐
                          │  OpenCode Container │
                          │  NATS_URL=nats://   │
                          │  host.docker.internal│
                          │  :4222              │
                          └─────────────────────┘
```

### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `WORKFLOW_NATS_URL` | `nats://localhost:4222` | NATS server URL; if set to a non-default value, embedded NATS is disabled |
| `WORKFLOW_NATS_PORT` | `4222` | Port for the embedded NATS server |
| `WORKFLOW_NATS_EMBEDDED` | Auto-detected | Force embedded NATS on (`true`) or off (`false`) |

### Auto-Detection Logic

Station automatically determines whether to use embedded NATS:

| Condition | Result |
|-----------|--------|
| `WORKFLOW_NATS_URL` not set or `nats://localhost:4222` | Embedded NATS enabled |
| `WORKFLOW_NATS_URL` set to an external server | Embedded NATS disabled |
| `WORKFLOW_NATS_EMBEDDED=true` | Force embedded (overrides URL detection) |
| `WORKFLOW_NATS_EMBEDDED=false` | Force external (URL must be set) |
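
For example, to use an external NATS cluster (embedded NATS is then skipped per the rules above), or to force the embedded server onto another port (`nats.internal` is a placeholder hostname):

```bash
# External NATS: embedded server is disabled automatically
WORKFLOW_NATS_URL=nats://nats.internal:4222 stn serve

# Force embedded NATS on a non-default port
WORKFLOW_NATS_EMBEDDED=true WORKFLOW_NATS_PORT=4223 stn serve
```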

### Configuration Examples

No configuration is needed; embedded NATS starts automatically:

```bash
# Just start Station
stn serve

# OpenCode container connects to:
# NATS_URL=nats://host.docker.internal:4222
```

### Docker Compose with Embedded NATS

When running Station on the host and OpenCode in a container:

```yaml
version: '3.8'

services:
  opencode:
    image: ghcr.io/sst/opencode:latest
    environment:
      - NATS_URL=nats://host.docker.internal:4222
    extra_hosts:
      - "host.docker.internal:host-gateway"  # Required for Linux
    volumes:
      - ./workspaces:/workspaces
```

The `extra_hosts` entry is only needed on Linux; Docker Desktop on macOS and Windows provides `host.docker.internal` automatically.

### Verifying NATS Connectivity

```bash
# Check if embedded NATS is running
netstat -tlnp | grep 4222

# From OpenCode container, test connection
docker exec opencode-container nats-ping nats://host.docker.internal:4222

# View NATS logs (if using external)
nats server ping
```

## CloudShip Integration

To connect to the CloudShip platform:

```yaml
# config.yaml
local_mode: false
cloudship:
  enabled: true
  registration_key: "your-key-from-cloudship-ui"
  endpoint: lighthouse.cloudship.ai:443
```

Get your registration key from the CloudShip Dashboard.

## Notify Tool

Enable agent notifications via webhooks (ntfy, Slack, etc.):

```yaml
# config.yaml
notify:
  webhook_url: https://ntfy.sh/station
  api_key: tk_your_ntfy_api_key  # optional
  timeout_seconds: 10
```

Once configured, agents with `notify: true` in their frontmatter can send notifications:

```
---
model: openai/gpt-4o
notify: true
---

You can use the notify tool to alert users about task completion or errors.
```

### Environment Variables

| Variable | Description |
|----------|-------------|
| `STN_NOTIFY_WEBHOOK_URL` | Webhook URL for notifications |
| `STN_NOTIFY_API_KEY` | API key/token for webhook auth |
| `STN_NOTIFY_TIMEOUT` | Request timeout in seconds |

### Supported Webhook Formats

| Service | URL Format | Notes |
|---------|------------|-------|
| ntfy.sh | `https://ntfy.sh/your-topic` | Supports title, priority, tags |
| Self-hosted ntfy | `https://ntfy.example.com/topic` | Same as ntfy.sh |
| Generic webhook | Any URL | Receives a JSON payload |
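
You can sanity-check the webhook target independently of Station; for ntfy, publishing is a plain POST to the topic URL:

```bash
# Publishes a test message to the topic (no Station involvement)
curl -d "Station notify test" https://ntfy.sh/station
```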

## OpenCode Integration

Enable AI coding capabilities with OpenCode:

```yaml
# config.yaml
coding:
  opencode:
    url: http://localhost:4096
```

### Starting OpenCode

```bash
# Default (localhost only)
opencode

# For container mode (stn up) - REQUIRED
opencode --hostname 0.0.0.0

# Custom port
opencode --port 4000
```

When using `stn up` container mode, OpenCode must be started with `--hostname 0.0.0.0` to allow connections from the container. Station automatically rewrites `localhost` to `host.docker.internal` in the container config.

### Environment Variables

| Variable | Description |
|----------|-------------|
| `STN_OPENCODE_URL` | Override OpenCode server URL |
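
For example, if OpenCode is listening on a non-default port (matching the `opencode --port 4000` example above):

```bash
STN_OPENCODE_URL=http://localhost:4000 stn serve
```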

### OpenCode Backend

See the OpenCode Backend page for the full integration guide, including coding tools and workflows.

## Config Commands

Manage configuration via the CLI:

```bash
# Show current config
stn config show

# Set a value
stn config set ai_model gpt-4o-mini

# Get a value
stn config get ai_provider
```
