Overview
stn up starts Station as an isolated Docker container, while stn down gracefully stops it. Data persists across restarts unless explicitly deleted.
How stn up Works
When you run stn up, here’s what happens:
YOU (Developer)
|
| $ stn up --bundle <bundle-id> --workspace ~/code
v
+------------------------------------------------------------------+
| stn up |
+------------------------------------------------------------------+
|
| 1. CHECK DOCKER
| - Is Docker daemon running?
| - Is station-server container already running?
|
| 2. PREPARE VOLUMES
| - Create station-config volume (first run)
| - Create station-cache volume (build cache)
| - Import host ~/.config/station/config.yaml if exists
|
| 3. BUILD/PULL IMAGE
| - Try: docker pull ghcr.io/cloudshipai/station:latest
| - Fallback: docker build (if Dockerfile exists)
|
| 4. INSTALL BUNDLE (if --bundle flag)
| - Download from CloudShip API (if UUID)
| - Download from URL (if http://)
| - Use local file path
|
| 5. START CONTAINER
v
+------------------------------------------------------------------+
| Docker Container: station-server |
| |
| Volumes Mounted: |
| - station-config:/home/station/.config/station (data) |
| - station-cache:/home/station/.cache (build cache) |
| - ~/code:/workspace (your workspace - read/write) |
| - /var/run/docker.sock (Docker-in-Docker for Dagger) |
| |
| Command: stn serve --database ... --mcp-port 8586 |
+------------------------------------------------------------------+
|
v
+------------------------------------------------------------------+
| stn serve (inside container) |
| |
| STARTUP SEQUENCE: |
| 1. Load config.yaml |
| 2. Initialize SQLite database |
| 3. Run database migrations |
| 4. Create default environment if none exists |
| 5. DeclarativeSync: Sync files to database |
| - Scan environments/default/mcp-configs/*.json |
| - Connect to each MCP server, discover tools |
| - Scan environments/default/agents/*.prompt |
| - Parse prompts, create agent records |
| 6. Initialize Genkit (AI provider: OpenAI/Gemini) |
| 7. Initialize Lighthouse client (CloudShip connection) |
| 8. Start scheduler service (cron jobs) |
| 9. Start all servers |
+------------------------------------------------------------------+
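To confirm the container actually came up with the mounts and command shown above, standard Docker tooling works against the station-server container:
# Verify the container is running, then check its mounts and startup command
docker ps --filter name=station-server
docker inspect station-server --format '{{json .Mounts}}'
docker inspect station-server --format '{{.Config.Cmd}}'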
Running Services
After startup, Station exposes the following services:
Port   Service               Description
8585   API/UI Server         Web interface for settings, agent management (dev mode)
8586   MCP Server            Main MCP endpoint - tools, agents, data ingestion
8587   Dynamic Agent MCP     Agent execution - run_agent, list_agents
4000   Genkit Developer UI   Only when --develop flag is used
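To sanity-check that the services are reachable from the host (assuming the default ports above; the exact HTTP status code varies by endpoint, but any response means the server is listening):
stn status
# Probe the main MCP endpoint; a connection failure prints 000
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8586/mcp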
MCP Configuration
stn up automatically updates .mcp.json in your workspace:
{
  "mcpServers": {
    "station": {
      "type": "http",
      "url": "http://localhost:8586/mcp"
    }
  }
}
This allows Claude Desktop, Cursor, and other MCP clients to discover Station’s tools and agents.
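To see exactly what stn up wrote, or to copy the entry into another client's config by hand, the file is plain JSON, so jq works well; for example:
# Show the Station entry added to your workspace's .mcp.json
jq '.mcpServers.station' .mcp.json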
How stn down Works
$ stn down [--remove-volume] [--clean-mcp]
|
v
+------------------------------------------------------------------+
| stn down |
| |
| 1. docker stop station-server (graceful SIGTERM, 3s timeout) |
| 2. docker rm station-server (remove container) |
| |
| Optional flags: |
| --remove-volume: docker volume rm station-config |
| WARNING: DELETES ALL agents, configs, database |
| |
| --clean-mcp: Remove "station" from .mcp.json |
| --remove-image: docker rmi station-server:latest |
| --force: SIGKILL if graceful stop fails |
+------------------------------------------------------------------+
Data Preservation
By default, stn down preserves all your data:
Data                    Preserved?   How to Delete
station-config volume   Yes          stn down --remove-volume
station-cache volume    Yes          docker volume rm station-cache
Workspace files         Always       (Your files, not managed by Station)
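To see what is actually persisted, list the named volumes directly; the names match those created in step 2 of stn up:
# List Station's volumes and see where the data lives on the host
docker volume ls --filter name=station
docker volume inspect station-config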
Bundle Development Workflow
Here’s how to develop and test bundles with Station:
Step 1: Create Bundle Files Locally
~/.config/station/environments/my-bundle/
├── agents/
│ ├── code-reviewer.prompt # Agent definition with tools
│ ├── deploy-helper.prompt # Another agent
│ └── ...
│
├── mcp-configs/
│ ├── github.json # GitHub MCP server config
│ ├── slack.json # Slack MCP server config
│ └── custom-tool.json # Your custom MCP server
│
└── variables.yml # Environment variables template
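For reference, each file under mcp-configs/ is a small JSON document telling Station how to launch or reach an MCP server. Example mcp-configs/github.json (an illustrative sketch only: it assumes a stdio-launched GitHub server and a Go-template-style placeholder filled from variables.yml; check a config generated by your own Station version for the exact schema):
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "{{ .GITHUB_TOKEN }}"
      }
    }
  }
}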
Example variables.yml:
variables:
  - name: GITHUB_TOKEN
    description: "GitHub access token"
    required: true
  - name: SLACK_BOT_TOKEN
    description: "Slack bot token"
    required: true
Step 2: Test Locally with stn serve
stn serve --environment my-bundle
This runs Station directly (no Docker), reading your files:
DeclarativeSync scans environments/my-bundle/
Connects to MCP servers defined in mcp-configs/*.json
Loads agents from agents/*.prompt
Exposes everything via MCP on ports 8586/8587
Edit your files and restart stn serve; the changes take effect immediately.
Step 3: Package as a Bundle
stn bundle create my-bundle -o my-bundle.tar.gz
Creates a tarball containing:
agents/*.prompt
mcp-configs/*.json
variables.yml
manifest.json (metadata)
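Before testing, it is worth listing the archive to confirm everything you expect made it in:
# Inspect the packaged bundle without extracting it
tar -tzf my-bundle.tar.gz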
Step 4: Test the Bundle with stn up
# Start fresh (removes previous data)
stn down --remove-volume
# Install and run your bundle in a container
stn up --bundle ./my-bundle.tar.gz
This simulates exactly how CloudShip users will run your bundle:
Creates isolated Docker container
Installs bundle into container’s default environment
Runs DeclarativeSync to load everything
Starts MCP servers and agents
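Once the container is up, you can confirm the sync step loaded your agents and tools by checking status and skimming the startup logs (the exact log wording may differ between Station versions):
stn status
# Look for DeclarativeSync output in the startup logs
stn logs --tail 200 | grep -i sync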
Step 5: Publish to CloudShip
stn bundle push my-bundle.tar.gz
Users can then install with:
stn up --bundle <bundle-id>
Command Reference
stn up Flags
Flag              Description
--workspace, -w   Workspace directory to mount (default: current directory)
--bundle          CloudShip bundle ID, URL, or local file path to install
--provider        AI provider: openai, gemini, anthropic, custom
--model           AI model to use (e.g., gpt-4o-mini, gemini-2.0-flash-exp)
--api-key         API key for AI provider
--base-url        Custom base URL for OpenAI-compatible endpoints
--develop         Enable Genkit Developer UI mode (port 4000)
--environment     Station environment to use in develop mode
--upgrade         Rebuild container image before starting
--env             Additional environment variables to pass through
--detach, -d      Run container in background (default: true)
--yes, -y         Use defaults without interactive prompts
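As a concrete example, a typical invocation combining several of these flags (values are placeholders):
# Mount ~/code, use OpenAI with a specific model, and enable the Genkit Developer UI
stn up --workspace ~/code --provider openai --model gpt-4o-mini --api-key "$OPENAI_API_KEY" --develop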
stn down Flags
Flag             Description
--remove-volume  Delete ALL Station data (environments, agents, bundles, config)
--clean-mcp      Remove Station from .mcp.json
--remove-image   Remove Docker image after stopping
--force          Force stop (kill) if graceful stop fails
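Flags can be combined; for example, a full teardown that also removes the image and the .mcp.json entry (this deletes all Station data, so use with care):
# Full cleanup: container, image, MCP client entry, and ALL persisted data
stn down --remove-volume --remove-image --clean-mcp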
Common Workflows
Fresh start with new bundle
# Remove all previous data
stn down --remove-volume
# Start with new bundle
stn up --bundle <new-bundle-id>
Restart or upgrade
# Restart to pick up config changes
stn restart
# Or rebuild with the latest image
stn down
stn up --upgrade
Develop and test a bundle
# Test the bundle locally first (no Docker)
stn serve --environment my-bundle
# When ready, test in a container
stn bundle create my-bundle -o my-bundle.tar.gz
stn down --remove-volume
stn up --bundle ./my-bundle.tar.gz
Check status and logs
# Check status
stn status
# Follow logs
stn logs -f
# Show the last 500 lines
stn logs --tail 500
Environment Variables Passed to Container
stn up automatically passes through these environment variables:
Category      Variables
AI Providers  OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, GOOGLE_API_KEY
AWS           AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION
CloudShip     STN_CLOUDSHIP_KEY, STN_CLOUDSHIP_ENDPOINT
Tools         GITHUB_TOKEN, SLACK_BOT_TOKEN
Telemetry     OTEL_EXPORTER_OTLP_ENDPOINT
Pass additional variables with --env:
stn up --env CUSTOM_VAR=value --env ANOTHER_VAR=value
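Because the variables in the table are forwarded automatically, exporting them in your shell before running stn up is usually enough; --env is only needed for anything outside that list (MY_SERVICE_URL below is a made-up example):
# Forwarded automatically (from the pass-through list above)
export OPENAI_API_KEY=sk-...
export GITHUB_TOKEN=ghp_...
# Forward an extra variable that is not on the pass-through list
stn up --env MY_SERVICE_URL=https://example.internal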