Quick Start with stn up

The simplest way to run Station in Docker:
stn up
This single command handles everything: building images, creating volumes, starting services, and configuring MCP endpoints.

Primary Use Case: Running Bundles

stn up is designed to make running agent bundles effortless, whether the bundle comes from CloudShip, a URL, or a local file:
# Run a bundle from your CloudShip account
stn up --bundle finops-cost-analyzer

# Run a bundle from any URL
stn up --bundle https://github.com/myorg/agents/releases/download/v1.0/bundle.tar.gz

# Run a local bundle file you're developing
stn up --bundle ./my-agents.tar.gz
The bundle is automatically installed into the container’s default environment and all agents become available immediately.

Secondary Use Case: Testing Local Configurations

For developers building custom agents, stn up provides an isolated container environment that mirrors production:
# Mount your local workspace into the container
stn up --workspace ~/my-agent-project

# Test a specific environment configuration
stn up --environment staging

# Enable development mode with Genkit reflection UI
stn up --develop
This ensures your agents work correctly in the same containerized environment they’ll run in production, without affecting your local system.

Manual Docker Deployment

Using Docker Compose

Station provides a docker-compose.yml for production deployments:
version: '3.8'

services:
  jaeger:
    image: jaegertracing/all-in-one:1.53
    ports:
      - "16686:16686"  # Jaeger UI
      - "4317:4317"    # OTLP gRPC
      - "4318:4318"    # OTLP HTTP
    environment:
      - COLLECTOR_OTLP_ENABLED=true

  station:
    image: ghcr.io/cloudshipai/station:latest
    ports:
      - "8585:8585"  # API/UI
      - "8586:8586"  # MCP
      - "8587:8587"  # Agent MCP
    volumes:
      - station_data:/home/station/.config/station
      - ./workspace:/workspace
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318
    depends_on:
      - jaeger

volumes:
  station_data:
Start with:
docker-compose up -d

Building Custom Image

Build from source:
cd station/
docker build -t my-station:latest .
Or build with the UI embedded:
make build-with-ui
docker build -t my-station:latest .

Zero-Config Deployment

Station supports automatic bundle installation from mounted directories:
services:
  station:
    image: ghcr.io/cloudshipai/station:latest
    volumes:
      - ./bundles:/bundles:ro  # Auto-install bundles from here
      - station_data:/home/station/.config/station
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
Drop .tar.gz bundle files in ./bundles/ and they install automatically on startup.
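As a sketch of that workflow (the paths here are illustrative), you would stage bundles before starting the container:

```shell
# Create the directory that docker-compose mounts read-only at /bundles.
mkdir -p ./bundles

# Copy any bundle archives you want auto-installed (illustrative path):
# cp ./my-agents.tar.gz ./bundles/

# Everything in ./bundles/ is installed when the container starts.
ls ./bundles
```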

Environment Variables

Configure Station via environment variables:
environment:
  # AI Provider
  - OPENAI_API_KEY=${OPENAI_API_KEY}
  - STN_AI_PROVIDER=openai
  - STN_AI_MODEL=gpt-4o

  # Ports
  - STN_API_PORT=8585
  - STN_MCP_PORT=8586

  # Mode
  - STATION_LOCAL_MODE=true

  # Telemetry
  - OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318

  # CloudShip (optional)
  - STN_CLOUDSHIP_KEY=your-key
  - STN_CLOUDSHIP_ENDPOINT=lighthouse.cloudship.ai:443
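Values written as ${OPENAI_API_KEY} are substituted by Docker Compose from your shell environment or from a .env file next to docker-compose.yml. A minimal sketch with illustrative values:

```shell
# .env — loaded automatically by Docker Compose (illustrative values)
OPENAI_API_KEY=sk-your-key-here
STN_AI_PROVIDER=openai
STN_AI_MODEL=gpt-4o
```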

Volume Mounts

| Mount | Purpose |
| --- | --- |
| /home/station/.config/station | Configuration and database |
| /workspace | Workspace for agents |
| /bundles | Auto-install bundles |
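Put together, a compose service using all three mounts might look like this (host-side paths are illustrative):

```yaml
services:
  station:
    image: ghcr.io/cloudshipai/station:latest
    volumes:
      - station_data:/home/station/.config/station  # config + database
      - ./workspace:/workspace                      # agent workspace
      - ./bundles:/bundles:ro                       # auto-install bundles

volumes:
  station_data:
```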

Networking

Internal Docker Network

For container-to-container communication, use Docker’s internal DNS:
environment:
  - OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318  # jaeger is service name

Host Network Access

To reach services running on the host from inside the container:
environment:
  # Use host.docker.internal on Mac/Windows
  - DATABASE_URL=postgresql://host.docker.internal:5432/mydb
On Linux, add:
extra_hosts:
  - "host.docker.internal:host-gateway"
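Combining both settings, a Linux-compatible service fragment might look like this (the database URL is illustrative):

```yaml
services:
  station:
    image: ghcr.io/cloudshipai/station:latest
    environment:
      - DATABASE_URL=postgresql://host.docker.internal:5432/mydb
    extra_hosts:
      - "host.docker.internal:host-gateway"  # resolves to the host on Linux
```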

Health Checks

Add health checks for orchestration:
services:
  station:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8585/api/v1/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

Kubernetes Deployment

Example Kubernetes deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: station
spec:
  replicas: 1
  selector:
    matchLabels:
      app: station
  template:
    metadata:
      labels:
        app: station
    spec:
      containers:
      - name: station
        image: ghcr.io/cloudshipai/station:latest
        ports:
        - containerPort: 8585
        - containerPort: 8586
        env:
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: station-secrets
              key: openai-api-key
        volumeMounts:
        - name: station-data
          mountPath: /home/station/.config/station
      volumes:
      - name: station-data
        persistentVolumeClaim:
          claimName: station-pvc
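The Deployment above references a PVC named station-pvc and needs a Service to expose its ports. Hypothetical companion manifests, as a sketch (storage size and access mode are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: station-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: station
spec:
  selector:
    app: station
  ports:
  - name: api       # API/UI
    port: 8585
    targetPort: 8585
  - name: mcp       # MCP
    port: 8586
    targetPort: 8586
```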

Troubleshooting

Container Won’t Start

Check logs:
docker logs station-server

Permission Issues

Station runs as user station (UID 1000). Ensure mounted volumes are accessible:
chmod -R 755 ./workspace
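If chmod alone is not enough (for example when agents need write access to the mount), matching the container user's UID/GID also works. A sketch, with ./workspace as an illustrative path:

```shell
# Create the directory that will be mounted at /workspace (illustrative).
mkdir -p ./workspace

# Make it readable and traversable by the container user:
chmod -R 755 ./workspace

# To grant write access, match the container's UID/GID (requires root):
#   sudo chown -R 1000:1000 ./workspace
```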

Port Conflicts

Check if ports are in use:
lsof -i :8585
lsof -i :8586
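If a port is already taken, remap only the host side of the published port in docker-compose.yml; 9585 here is an arbitrary free port chosen for illustration:

```yaml
services:
  station:
    ports:
      - "9585:8585"  # host port 9585 -> container port 8585 (API/UI)
```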