The OpenCode Container provides a pre-configured Docker image for running OpenCode as a coding backend for Station agents. It enables AI-powered code generation in isolated, reproducible environments.
Why Use the Container?
Isolated Environment: Run OpenCode in a container without affecting your local system. Perfect for CI/CD and testing.
Pre-configured: Comes with the Bun runtime, Git, and the Station plugin pre-installed.
OAuth Support: Use a Claude Max subscription via OAuth instead of API keys.
NATS Integration: Built-in Station plugin for NATS-based task dispatch.
Quick Start with Station CLI
The easiest way to run OpenCode in a container is using the Station CLI. This manages Docker Compose for you and uses your existing Anthropic OAuth credentials.
Start the Sandbox
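stn opencode up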
This starts OpenCode on port 4099 (to avoid conflicts with native OpenCode on 4096).
Configure Station
# ~/.config/station/config.yaml
coding:
  backend: opencode
  opencode:
    url: http://localhost:4099
Run Your Agent
stn agent run coder "Create a hello.py file"
Management Commands
stn opencode up # Start the sandbox
stn opencode down # Stop (preserves workspace data)
stn opencode status # Check if running + health
stn opencode logs -f # Follow container logs
stn opencode clean # Remove all data and start fresh
Custom Port
# Use a different port
OPENCODE_PORT=4100 stn opencode up
With NATS (CloudShip Orchestration)
# Start observability stack (includes NATS)
stn jaeger up
# Start OpenCode connected to NATS
NATS_URL=nats://localhost:4222 stn opencode up
The Station CLI method uses ghcr.io/cloudshipai/opencode-station:latest which includes the Station NATS plugin pre-installed.
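As a quick sanity check, the same health endpoint used later in the manual setup is available on the sandbox port:
curl http://localhost:4099/global/health
# Expected: {"healthy":true}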
Manual Docker Setup
If you prefer manual Docker control, use the steps below.
1. Pull the Image
docker pull ghcr.io/sst/opencode:latest
2. Run with API Key
docker run -d \
-p 4096:4096 \
-e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
ghcr.io/sst/opencode:latest \
opencode serve --hostname 0.0.0.0 --port 4096
3. Test the Connection
curl http://localhost:4096/global/health
# {"healthy":true}
Authentication Options
Option 1: API Keys
Pass API keys as environment variables:
docker run -d \
-p 4096:4096 \
-e ANTHROPIC_API_KEY=sk-ant-xxx \
-e OPENAI_API_KEY=sk-xxx \
ghcr.io/sst/opencode:latest \
opencode serve --hostname 0.0.0.0
Option 2: OAuth (Claude Max)
For Claude Max subscribers, use OAuth authentication instead of API keys:
Login on Host
Run opencode auth login on your host machine to generate OAuth credentials:
opencode auth login
# Opens browser for Anthropic OAuth
# Saves credentials to ~/.local/share/opencode/auth.json
Mount Credentials
Mount only the auth.json file (read-only) into the container:
docker run -d \
-p 4096:4096 \
-v ~/.local/share/opencode/auth.json:/root/.local/share/opencode/auth.json:ro \
ghcr.io/sst/opencode:latest \
opencode serve --hostname 0.0.0.0
Mount only auth.json, not the entire ~/.local/share/opencode directory. OpenCode needs to write to its storage subdirectory.
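To confirm the credentials are visible inside the container, read the mounted file back (replace <container> with your container name or ID):
docker exec <container> cat /root/.local/share/opencode/auth.json
# Should show the OAuth token structure: {"type":"oauth","token":"...","expiresAt":...}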
Docker Compose
Basic Setup
version: '3.8'
services:
  opencode:
    image: ghcr.io/sst/opencode:latest
    command: ["opencode", "serve", "--hostname", "0.0.0.0", "--port", "4096"]
    ports:
      - "4096:4096"
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    volumes:
      - opencode-workspaces:/workspaces
    working_dir: /workspaces
volumes:
  opencode-workspaces:
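Bring the stack up and confirm it is healthy:
docker compose up -d
curl http://localhost:4096/global/health
# {"healthy":true}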
With OAuth (Claude Max)
version: '3.8'
services:
  opencode:
    image: ghcr.io/sst/opencode:latest
    command: ["opencode", "serve", "--hostname", "0.0.0.0", "--port", "4096"]
    ports:
      - "4096:4096"
    volumes:
      - opencode-workspaces:/workspaces
      - ${HOME}/.local/share/opencode/auth.json:/root/.local/share/opencode/auth.json:ro
    working_dir: /workspaces
volumes:
  opencode-workspaces:
Full Stack with NATS
For Station integration with NATS-based task dispatch:
version: '3.8'
services:
  nats:
    image: nats:2.10-alpine
    command: ["--jetstream", "--store_dir=/data"]
    ports:
      - "4222:4222"
      - "8222:8222"
    volumes:
      - nats-data:/data
  opencode:
    build:
      context: ./opencode-plugin
      dockerfile: Dockerfile.opencode
    environment:
      - NATS_URL=nats://nats:4222
      - OPENCODE_WORKSPACE_DIR=/workspaces
    volumes:
      - opencode-workspaces:/workspaces
      - ${HOME}/.local/share/opencode/auth.json:/root/.local/share/opencode/auth.json:ro
    depends_on:
      - nats
    ports:
      - "4096:4096"
    working_dir: /workspaces
volumes:
  nats-data:
  opencode-workspaces:
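Start both services and check each endpoint (the NATS health check is the same one used in Troubleshooting below):
docker compose up -d
# NATS monitoring endpoint
curl http://localhost:8222/healthz
# OpenCode health endpoint
curl http://localhost:4096/global/health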
Configuration
OpenCode Config File
Create a custom opencode.json for the container:
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-20250514"
}
Mount it into the container:
docker run -d \
-p 4096:4096 \
-v ./opencode.json:/root/.config/opencode/opencode.json:ro \
-v ~/.local/share/opencode/auth.json:/root/.local/share/opencode/auth.json:ro \
ghcr.io/sst/opencode:latest \
opencode serve --hostname 0.0.0.0
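To verify the container picked up the mounted config, read it back (replace <container> with your container name):
docker exec <container> cat /root/.config/opencode/opencode.json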
Available Models
| Model | Provider | ID |
|-------|----------|----|
| Claude Opus 4.5 | Anthropic | anthropic/claude-opus-4-5-20251101 |
| Claude Sonnet 4 | Anthropic | anthropic/claude-sonnet-4-20250514 |
| GPT-4o | OpenAI | openai/gpt-4o |
| GPT-4o Mini | OpenAI | openai/gpt-4o-mini |
Environment Variables
| Variable | Description |
|----------|-------------|
| ANTHROPIC_API_KEY | Anthropic API key (if not using OAuth) |
| OPENAI_API_KEY | OpenAI API key |
| GEMINI_API_KEY | Google Gemini API key |
| NATS_URL | NATS server URL for Station plugin |
| OPENCODE_WORKSPACE_DIR | Default workspace directory |
Building Custom Image
For Station integration with the NATS plugin:
Dockerfile
FROM ghcr.io/sst/opencode:latest
# Install Bun runtime for plugins and git for workspace operations
RUN apk add --no-cache curl unzip bash git \
&& curl -fsSL https://bun.sh/install | bash \
&& ln -s /root/.bun/bin/bun /usr/local/bin/bun \
&& ln -s /root/.bun/bin/bunx /usr/local/bin/bunx
# Create plugin and config directories
RUN mkdir -p /root/.config/opencode/plugin \
&& mkdir -p /root/.opencode
# Copy the Station plugin
COPY dist/index.js /root/.config/opencode/plugin/station-plugin.js
# Copy OpenCode config
COPY opencode.json /root/.config/opencode/opencode.json
# Copy entrypoint
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
WORKDIR /workspaces
ENTRYPOINT [ "/entrypoint.sh" ]
Entrypoint Script
#!/bin/bash
set -e
echo "=== OpenCode Container ==="
echo "NATS_URL: ${ NATS_URL :- not set }"
echo "OPENCODE_WORKSPACE_DIR: ${ OPENCODE_WORKSPACE_DIR :-/ workspaces }"
echo "=== Starting OpenCode server ==="
exec opencode serve --port 4096 --hostname 0.0.0.0 --print-logs
Build and Run
# Build the image
docker build -t my-opencode:latest .
# Run with OAuth
docker run -d \
-p 4096:4096 \
-v ~/.local/share/opencode/auth.json:/root/.local/share/opencode/auth.json:ro \
my-opencode:latest
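The entrypoint echoes a startup banner, so the container logs show which settings were picked up (output below assumes no NATS_URL was set):
docker logs <container>
# === OpenCode Container ===
# NATS_URL: not set
# OPENCODE_WORKSPACE_DIR: /workspaces
# === Starting OpenCode server ===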
Using with Station
Station can connect to the OpenCode container via two backends:
Option 1: HTTP Backend (Direct)
Station calls OpenCode’s HTTP API directly. Simple setup, good for local development.
# ~/.config/station/config.yaml
coding:
  backend: opencode
  opencode:
    url: http://localhost:4096
Option 2: NATS Backend (Plugin-based)
Station publishes tasks to NATS; the Station plugin in the container receives them and executes them via OpenCode. Better for distributed setups.
# ~/.config/station/config.yaml
coding:
  backend: opencode-nats
  nats:
    url: nats://localhost:4222
Station includes an embedded NATS server that starts automatically. No separate NATS installation needed. How it works:
1. Station starts embedded NATS on port 4222
2. OpenCode container connects via host.docker.internal
3. Tasks flow: Station → NATS → Station Plugin → OpenCode
# docker-compose.yaml
services:
  opencode:
    image: ghcr.io/sst/opencode:latest
    environment:
      - NATS_URL=nats://host.docker.internal:4222
    extra_hosts:
      - "host.docker.internal:host-gateway" # Required for Linux
    volumes:
      - ./workspaces:/workspaces
The extra_hosts configuration is only needed on Linux. macOS and Windows Docker Desktop include host.docker.internal by default.
Verify connectivity:
# Check embedded NATS is listening on the host
netstat -tlnp | grep 4222
# Test reachability from the container (NATS is not an HTTP server, so look for
# a successful TCP connect in curl's verbose output rather than an HTTP response)
docker exec opencode curl -sv http://host.docker.internal:4222 2>&1 | grep -i connected
Embedded NATS environment variables (on the Station host):
| Variable | Default | Description |
|----------|---------|-------------|
| WORKFLOW_NATS_PORT | 4222 | Change embedded NATS port |
| WORKFLOW_NATS_EMBEDDED | Auto | Force true or false |
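For example, to move the embedded NATS off the default port (4223 here is just an illustrative value), set the variable before starting Station and point the container at the same port:
# On the Station host
WORKFLOW_NATS_PORT=4223 stn serve
# In the container's environment (docker-compose.yaml), match it:
# NATS_URL=nats://host.docker.internal:4223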
See Configuration for full details.
Use a separate NATS container for production deployments or when you need JetStream persistence:
# docker-compose.yaml
version: '3.8'
services:
  nats:
    image: nats:2.10-alpine
    command: ["--jetstream", "--store_dir=/data"]
    ports:
      - "4222:4222"
      - "8222:8222" # Monitoring
    volumes:
      - nats-data:/data
  opencode:
    image: ghcr.io/sst/opencode:latest
    environment:
      - NATS_URL=nats://nats:4222
    depends_on:
      - nats
    volumes:
      - ./workspaces:/workspaces
volumes:
  nats-data:
Configure Station to use external NATS:
# This disables embedded NATS automatically
WORKFLOW_NATS_URL=nats://localhost:4222 stn serve
Or in config.yaml:
coding:
  backend: opencode-nats
  nats:
    url: nats://localhost:4222
Create a Coding Agent
The agent definition is the same regardless of backend; just enable coding:
---
metadata:
  name: "Code Assistant"
  description: "AI coding with OpenCode container"
model: openai/gpt-4o
coding:
  enabled: true
---
You are a coding assistant. Use your coding tools to:
- Read and understand existing code
- Write new files and functions
- Refactor and improve code quality
When given a task, open a coding session, complete the work, then close the session.
Notice the agent doesn’t specify backend: opencode or backend: opencode-nats. The backend is determined by Station’s config, making agents portable across different deployments.
Run the Agent
stn agent run "Code Assistant" "Create a Python function that calculates fibonacci numbers"
Testing
Health Check
curl http://localhost:4096/global/health
Create Session
curl -X POST http://localhost:4096/session \
-H "Content-Type: application/json" \
-d '{"path":"/workspaces/test"}'
Send Message
SESSION_ID="ses_xxx" # from create session response
curl -X POST "http://localhost:4096/session/${SESSION_ID}/message" \
-H "Content-Type: application/json" \
-d '{"parts":[{"type":"text","text":"Create a hello.py file"}]}'
Troubleshooting
OAuth Token Not Working
Verify the token is mounted correctly:
# Check if auth.json exists in the container
docker exec <container> cat /root/.local/share/opencode/auth.json
# Should show OAuth token structure:
# {"type":"oauth","token":"...","expiresAt":...}
Re-run opencode auth login on your host to refresh the token, then restart the container.
Read-Only Filesystem Error
If you see EROFS: read-only file system errors:
Error: EROFS: read-only file system, open '/root/.local/share/opencode/storage/...'
Cause : You mounted the entire ~/.local/share/opencode directory as read-only.
Fix : Mount only auth.json:
# Wrong
-v ~/.local/share/opencode:/root/.local/share/opencode:ro
# Correct
-v ~/.local/share/opencode/auth.json:/root/.local/share/opencode/auth.json:ro
Empty Response from API
If API calls return 200 OK but with empty body:
1. Check container logs: docker logs <container>
2. Verify model is configured in opencode.json
3. Ensure API key or OAuth credentials are valid
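A quick way to work through that checklist (replace <container> with your container name):
# Recent logs
docker logs --tail 50 <container>
# Confirm a model is configured
docker exec <container> cat /root/.config/opencode/opencode.json
# Confirm credentials are present (API key variables or the mounted auth.json)
docker exec <container> printenv | grep API_KEY
docker exec <container> ls /root/.local/share/opencode/auth.json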
Container Can’t Reach NATS
# Verify NATS is running
curl http://localhost:8222/healthz
# Check container can resolve NATS hostname
docker exec <container> ping nats
# Verify NATS_URL environment variable
docker exec <container> printenv | grep NATS
Next Steps