For routing rules, see Routing & Protocols. For available model IDs, see List Models.

OpenCode supports custom providers through opencode.json. The cleanest PipeLLM setup uses PipeLLM's OpenAI converter route, so a single provider configuration can reach Claude, GPT, and Gemini models:

https://api.pipellm.ai/openai/v1

Use https://api.pipellm.ai/v1 only if you need native OpenAI-compatible routing and do not need cross-protocol access.
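Before wiring up OpenCode, the converter route can be exercised directly with curl as a sanity check. This is a sketch, assuming the standard Chat Completions request shape and a model ID from this page; it requires a valid PIPELLM_API_KEY in the environment:

```shell
# Smoke-test the converter route (requires PIPELLM_API_KEY to be set).
curl -s https://api.pipellm.ai/openai/v1/chat/completions \
  -H "Authorization: Bearer $PIPELLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

A JSON response in the Chat Completions format indicates the route and key are working.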

Config Locations

OpenCode loads configuration from:
  • ~/.config/opencode/opencode.json for global user config
  • opencode.json in the project root for project-specific config

Quick Setup

  1. Export your PipeLLM key:
export PIPELLM_API_KEY="your-pipellm-api-key"
  2. Add a custom provider to opencode.json:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "pipellm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "PipeLLM",
      "options": {
        "baseURL": "https://api.pipellm.ai/openai/v1",
        "apiKey": "{env:PIPELLM_API_KEY}"
      },
      "models": {
        "claude-sonnet-4-6": {
          "name": "Claude Sonnet 4.6"
        },
        "gpt-4o-mini": {
          "name": "GPT-4o Mini"
        },
        "gemini-2.5-pro": {
          "name": "Gemini 2.5 Pro"
        }
      }
    }
  },
  "model": "pipellm/claude-sonnet-4-6"
}
  3. Start OpenCode:
opencode
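The {env:PIPELLM_API_KEY} placeholder in the config above tells OpenCode to read the key from the environment rather than hard-coding it in the file. A minimal sketch of that substitution, assuming the {env:VAR} syntax shown in the config (the helper name is hypothetical, not part of OpenCode):

```python
import os
import re

def expand_env_placeholders(value: str) -> str:
    """Replace {env:VAR} placeholders with the value of the
    corresponding environment variable (empty string if unset)."""
    return re.sub(
        r"\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["PIPELLM_API_KEY"] = "sk-example"
print(expand_env_placeholders("{env:PIPELLM_API_KEY}"))  # → sk-example
```

Keeping the key in the environment means opencode.json can be committed to version control without leaking credentials.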

Notes

  • OpenCode custom providers accept any provider id. pipellm is just an example.
  • @ai-sdk/openai-compatible is the right package when you are targeting Chat Completions-style endpoints.
  • If a model or workflow specifically needs the OpenAI Responses API, that is a separate integration path and should not reuse this Chat Completions setup.
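Any model listed under the provider can be made the session default by setting the top-level model field to provider-id/model-id. For example, to default to GPT-4o Mini from the config above:

{
  "model": "pipellm/gpt-4o-mini"
}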

Related

  • Developer Tools — Compare supported coding assistants
  • OpenAI Converter — Keep OpenAI-compatible format across providers
  • OpenCode Providers Docs — Official OpenCode custom provider examples