OpenClaw is an open-source, self-hosted AI assistant that connects to messaging apps (Telegram, WhatsApp, Discord, Slack, etc.) and performs autonomous tasks. Connect it to IoTeX AI Gateway to use models like Gemini 2.5 Flash Lite for chat and Whisper for voice message transcription.

Quick Setup

Run the setup script to add IoTeX AI Gateway as an LLM provider and configure voice message transcription. Get your API key from the Gateway Console first. Non-interactive — uses defaults (Gemini 2.5 Flash Lite + Whisper Large V3 Turbo):
curl -fsSL https://raw.githubusercontent.com/iotexproject/ai-gateway-docs/main/scripts/openclaw-setup-iotex-ai.sh | bash -s -- YOUR_API_KEY
Interactive — prompts you to pick LLM and audio models:
curl -fsSL https://raw.githubusercontent.com/iotexproject/ai-gateway-docs/main/scripts/openclaw-setup-iotex-ai.sh | bash
Full control — specify model, audio model, and set as default:
curl -fsSL https://raw.githubusercontent.com/iotexproject/ai-gateway-docs/main/scripts/openclaw-setup-iotex-ai.sh | bash -s -- YOUR_API_KEY gemini-2.5-flash openai/whisper-large-v3-turbo --default
The script configures:
  • LLM provider: IoTeX with your chosen model (e.g. iotex/gemini-2.5-flash-lite)
  • Audio transcription: Whisper model for automatic voice message transcription
  • Auth profile: API key stored securely for both LLM and audio
  • Model alias: Short name for easy switching in chat (e.g. /model gemini-lite)

Manual Setup

1. Add IoTeX as a Provider

Run openclaw config edit or edit ~/.openclaw/openclaw.json directly. Add the IoTeX provider under models.providers:
{
  "models": {
    "providers": {
      "iotex": {
        "baseUrl": "https://gateway.iotex.ai/v1",
        "apiKey": "sk-xxxxxxxxxx",
        "api": "openai-completions",
        "models": [
          {
            "id": "gemini-2.5-flash-lite",
            "name": "Gemini 2.5 Flash Lite (via IoTeX)",
            "reasoning": false,
            "input": ["text"],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 200000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
Replace sk-xxxxxxxxxx with your API key from the Gateway Console.
You can also use openclaw config set to set individual values without editing the full file:
openclaw config set models.providers.iotex.baseUrl "https://gateway.iotex.ai/v1"
openclaw config set models.providers.iotex.apiKey "sk-xxxxxxxxxx"
openclaw config set models.providers.iotex.api "openai-completions"
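Before restarting, you can sanity-check the key and base URL with a direct request. A sketch against the gateway's OpenAI-compatible chat endpoint (assumes the gateway exposes /v1/chat/completions, which the "openai-completions" api setting implies; replace sk-xxxxxxxxxx with your real key):

```shell
curl -s https://gateway.iotex.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{"model": "gemini-2.5-flash-lite", "messages": [{"role": "user", "content": "ping"}]}'
```

A JSON response with a choices array means the provider config values are correct.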

2. Set as Default or Fallback Model

To use IoTeX as your primary model:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "iotex/gemini-2.5-flash-lite"
      }
    }
  }
}
Or as a fallback alongside another provider:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-5",
        "fallbacks": ["iotex/gemini-2.5-flash-lite"]
      }
    }
  }
}

3. Add a Model Alias (Optional)

Give the model a short name for easy switching in chat:
{
  "agents": {
    "defaults": {
      "models": {
        "iotex/gemini-2.5-flash-lite": {
          "alias": "gemini-lite"
        }
      }
    }
  }
}
Now you can switch models in Telegram/WhatsApp with /model gemini-lite.

4. Verify

openclaw gateway restart
openclaw gateway health

Audio Transcription

OpenClaw can automatically transcribe voice messages (from Telegram, WhatsApp, etc.) using IoTeX-hosted Whisper models.
OpenClaw’s media understanding system only recognizes built-in provider names (openai, groq, google, anthropic, minimax, deepgram) for audio transcription. Since the IoTeX gateway is OpenAI-compatible, you must configure it to route through the openai provider with the IoTeX base URL.

Step 1: Set Up the Auth Profile

OpenClaw needs an auth profile for IoTeX. Add this to ~/.openclaw/openclaw.json:
{
  "auth": {
    "profiles": {
      "iotex:default": {
        "provider": "iotex",
        "mode": "api_key"
      }
    }
  }
}
Then add the actual API key to ~/.openclaw/agents/main/agent/auth-profiles.json:
{
  "version": 1,
  "profiles": {
    "iotex:default": {
      "type": "api_key",
      "provider": "iotex",
      "key": "sk-xxxxxxxxxx"
    }
  }
}
The field must be "key", not "apiKey". Using the wrong field name will silently fail to authenticate.
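A quick way to catch this mistake is a plain-shell check of the file. This sketch runs against a throwaway copy; point f at your real ~/.openclaw/agents/main/agent/auth-profiles.json instead:

```shell
# Create a throwaway auth-profiles.json to demonstrate the check.
f=$(mktemp)
cat > "$f" <<'EOF'
{"version":1,"profiles":{"iotex:default":{"type":"api_key","provider":"iotex","key":"sk-xxxxxxxxxx"}}}
EOF
# The profile entry must contain "key" and must not contain "apiKey".
if grep -q '"key"' "$f" && ! grep -q '"apiKey"' "$f"; then
  result="field name OK"
else
  result='field name wrong: must be "key", not "apiKey"'
fi
echo "$result"
rm -f "$f"
```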

Step 2: Configure Audio Transcription

Add the audio model config to ~/.openclaw/openclaw.json:
{
  "tools": {
    "media": {
      "audio": {
        "enabled": true,
        "models": [
          {
            "provider": "openai",
            "model": "openai/whisper-large-v3-turbo",
            "baseUrl": "https://gateway.iotex.ai/v1",
            "profile": "iotex:default",
            "type": "provider"
          }
        ]
      }
    }
  }
}
Key fields explained:

| Field | Value | Why |
| --- | --- | --- |
| provider | "openai" | Routes through OpenClaw’s built-in OpenAI-compatible transcription handler. Do not use "iotex" here. |
| model | "openai/whisper-large-v3-turbo" | The Whisper model ID on the IoTeX gateway. |
| baseUrl | "https://gateway.iotex.ai/v1" | Overrides the default OpenAI URL to point to IoTeX. |
| profile | "iotex:default" | Uses the IoTeX auth profile for the API key. |

Step 3: Restart and Test

openclaw gateway restart
Send a voice message to your OpenClaw bot on Telegram. The bot will automatically transcribe the audio and respond to its content.
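If transcription does not kick in, you can test the Whisper model directly. A sketch, assuming the gateway exposes the OpenAI-compatible /v1/audio/transcriptions route and you have a local audio file named voice.ogg:

```shell
curl -s https://gateway.iotex.ai/v1/audio/transcriptions \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -F model="openai/whisper-large-v3-turbo" \
  -F file=@voice.ogg
```

If this returns a transcript but the bot still ignores voice messages, the problem is in the OpenClaw config (see Troubleshooting below), not the gateway.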

Available Whisper Models

| Model | Speed | Price/min |
| --- | --- | --- |
| openai/whisper-large-v3-turbo | Fast | $0.0015 |
| openai/whisper-large-v3 | Standard | $0.0030 |
| whisper-1 | Legacy | $0.0060 |

Available Models

The setup script configures these Gemini models via IoTeX AI Gateway:
| Model | Best For | Price (input / output per 1M tokens) |
| --- | --- | --- |
| gemini-2.5-flash-lite | Low-cost general chat, fast responses | $0.10 / $0.40 |
| gemini-2.5-flash | Balanced quality and speed | $0.30 / $2.50 |
See the full list of models available through IoTeX AI Gateway on the Supported Models page.
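As a back-of-envelope check with the per-token prices above, a month of moderate chat usage on gemini-2.5-flash-lite costs well under a dollar. The token counts here are illustrative, not measured:

```shell
# Estimate monthly spend at $0.10 input / $0.40 output per 1M tokens.
cost=$(awk 'BEGIN {
  in_tok  = 5000000   # illustrative input tokens per month
  out_tok = 1000000   # illustrative output tokens per month
  printf "$%.2f", in_tok/1e6 * 0.10 + out_tok/1e6 * 0.40
}')
echo "$cost"
# → $0.90
```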

Troubleshooting

Voice messages not transcribed

The most common cause is using "provider": "iotex" in the audio model config. OpenClaw’s media understanding only recognizes openai, groq, google, anthropic, minimax, and deepgram. Use "provider": "openai" with baseUrl pointing to IoTeX instead. Also check:
  • The auth profile uses "key" (not "apiKey") in auth-profiles.json
  • tools.media.audio.enabled is true
  • Run openclaw gateway restart after config changes
Model not found

Verify that the model name matches the supported models list. Model names are case-sensitive.
# Check available models
curl https://gateway.iotex.ai/v1/models \
  -H "Authorization: Bearer sk-xxxxxxxxxx"
Config changes not applied

OpenClaw watches openclaw.json for changes, but some changes (such as auth profiles or the audio config) require a full restart:
openclaw gateway restart
openclaw gateway health
Authentication errors

Check that the key is set in the correct location. OpenClaw resolves API keys in this order:
  1. Auth profile specified by profile field
  2. Auth profiles in ~/.openclaw/agents/main/agent/auth-profiles.json
  3. Environment variable (e.g., IOTEX_API_KEY)
  4. apiKey in models.providers.iotex config
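For example, resolution step 3 means you can supply the key through the environment instead of storing it in a config file (variable name as listed above):

```shell
export IOTEX_API_KEY="sk-xxxxxxxxxx"
echo "IOTEX_API_KEY is ${IOTEX_API_KEY:+set}"
# → IOTEX_API_KEY is set
```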
Verify with:
openclaw gateway health

Resources