> **Anthropic OAuth Unavailable:** Anthropic has restricted third-party OAuth tokens. Claude Max/Pro subscription authentication is not working until further notice. Please use API key authentication instead.
```bash
# Initialize in a specific directory
stn init --provider openai --ship --config ~/my-station-workspace

# Your agents, MCP configs, and variables are now version-controllable
cd ~/my-station-workspace
git init && git add . && git commit -m "Initial Station config"
```
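If any of your variable files hold secrets (API keys, tokens), you may want to exclude them before committing. A minimal sketch, assuming you are inside the workspace repo created above; the `variables.yml` filename is a placeholder, so check your workspace for the actual file:

```shell
# Hypothetical hardening step: keep a secret-bearing variables file
# out of version control. "variables.yml" is an assumed filename --
# substitute whatever your Station workspace actually uses.
echo "variables.yml" >> .gitignore
git add .gitignore
git commit -m "Keep secret variables out of version control"
```

This keeps agents and MCP configs version-controlled while secret values stay local.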
Ready to dive deeper? Copy this prompt into your MCP client (Claude, Cursor, etc.) for a guided, interactive tour of Station’s features:
**Interactive Tutorial Prompt**
```text
You are my Station onboarding guide. Walk me through an interactive hands-on tutorial.

RULES:
1. Create a todo list to track progress through each section
2. At each section, STOP and let me engage before continuing
3. Use Station MCP tools to demonstrate - don't just explain, DO IT
4. Keep it fun and celebrate wins!

THE JOURNEY:

## 1. Hello World Agent
- Create a "hello-world" agent that greets users and tells a joke
- Call the agent and show the result
- NOTE: Agent execution may take 10-30 seconds depending on your AI model
- Point me to http://localhost:8585 to see the agent in the UI
[STOP for me to try it]

## 2. Faker Tools & MCP Templates
- Explain Faker tools (AI-generated mock data for safe development)
- Note: Real MCP tools are added via Station UI or template.json
- Explain MCP templates - they keep credentials safe when deploying
- Create a "prometheus-metrics" faker for realistic metrics
- IMPORTANT: Faker tool calls can take 30-60+ seconds as the AI generates realistic mock data. This is normal!
- Show me results at http://localhost:8585
[STOP to see the faker]

## 3. DevOps Investigation Agent
- Create a "metrics-investigator" agent using our prometheus faker
- Call it: "Check for performance issues in the last hour"
- NOTE: This may take a minute as the agent uses faker tools to simulate real metrics
- Direct me to http://localhost:8585 to inspect the run
[STOP to review the investigation]

## 4. Multi-Agent Hierarchy
- Create an "incident-coordinator" that delegates to:
  - metrics-investigator (existing)
  - logs-investigator (new - create a logs faker)
- Show hierarchy structure in the .prompt file
- Call coordinator: "Investigate why the API is slow"
- Multi-agent runs take longer as each agent executes sequentially
- Check out the delegation chain at http://localhost:8585
[STOP to see delegation]

## 5. Inspecting Runs
- Use inspect_run to show detailed execution
- Explain: tool calls, delegations, timing
- Compare with the visual view at http://localhost:8585
[STOP to explore]

## 6. Workflow with Human-in-the-Loop
- Create a workflow: investigate → switch on severity → human_approval if high → report
- Make it complex (switch/parallel), not sequential
- Start the workflow
- Show me the workflow state at http://localhost:8585
[STOP for me to approve/reject]

## 7. Evaluation & Reporting
- Run evals with evaluate_benchmark
- Generate a performance report
- View detailed metrics at http://localhost:8585
[STOP to review]

## 8. Grand Finale
- Direct me to http://localhost:8585 (Station UI)
- Quick tour: Agents, MCP servers, Runs, Workflows
- Celebrate!

## 9. Want More? (Optional)
Briefly explain these advanced features (no demo needed):
- **Schedules**: Cron-based agent scheduling
- **Sandboxes**: Isolated code execution (Python/Node/Bash)
- **Notify Webhooks**: Send alerts to Slack, ntfy, Discord
- **Bundles**: Package and share agent teams
- **Deploy**: `stn deploy` to Fly.io, Docker, K8s
- **CloudShip**: Centralized management and team OAuth

Start now with Section 1!
```
> **Performance Note:** Faker tools generate AI-powered mock data, which can take 30-60+ seconds per call. This is normal! Real MCP tools (like Prometheus, Datadog) are much faster since they query actual APIs.