Quick Setup
Run the setup script to add IoTeX AI Gateway as an LLM provider and configure voice message transcription. Get your API key from the Gateway Console first. The script is non-interactive and uses defaults (Gemini 2.5 Flash Lite + Whisper Large V3 Turbo):

- LLM provider: IoTeX with your chosen model (e.g. `iotex/gemini-2.5-flash-lite`)
- Audio transcription: Whisper model for automatic voice message transcription
- Auth profile: API key stored securely for both LLM and audio
- Model alias: short name for easy switching in chat (e.g. `/model gemini-lite`)
Manual Setup
1. Add IoTeX as a Provider
Run `openclaw config edit` or edit `~/.openclaw/openclaw.json` directly. Add the IoTeX provider under `models.providers`, replacing `sk-xxxxxxxxxx` with your API key from the Gateway Console.
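As a rough sketch, the provider entry could look like the following. The `baseUrl` and `apiKey` field names are borrowed from elsewhere in this guide; the exact provider schema and the `models` list are assumptions, so check your own `openclaw.json` for the precise shape:

```json
{
  "models": {
    "providers": {
      "iotex": {
        "baseUrl": "https://gateway.iotex.ai/v1",
        "apiKey": "sk-xxxxxxxxxx",
        "models": ["gemini-2.5-flash-lite", "gemini-2.5-flash"]
      }
    }
  }
}
```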
2. Set as Default or Fallback Model
To use IoTeX as your primary model, set it as the default model in your config.

3. Add a Model Alias (Optional)

Give the model a short name for easy switching in chat, e.g. `/model gemini-lite`.
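Combining steps 2 and 3, the relevant `openclaw.json` fragment might look like this. It is a sketch only: the `model` and `aliases` key names are assumptions not confirmed by this guide.

```json
{
  "model": "iotex/gemini-2.5-flash-lite",
  "aliases": {
    "gemini-lite": "iotex/gemini-2.5-flash-lite"
  }
}
```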
4. Verify
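One quick check, using only the chat command this guide configures: switch to the alias from step 3 and send a test message.

```
/model gemini-lite
```

If the alias resolves and the model responds, the provider is wired up correctly.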
Audio Transcription
OpenClaw can automatically transcribe voice messages (from Telegram, WhatsApp, etc.) using IoTeX-hosted Whisper models.

OpenClaw’s media understanding system only recognizes built-in provider names (`openai`, `groq`, `google`, `anthropic`, `minimax`, `deepgram`) for audio transcription. Since the IoTeX gateway is OpenAI-compatible, you must configure it to route through the `openai` provider with the IoTeX base URL.

Step 1: Set Up the Auth Profile
OpenClaw needs an auth profile for IoTeX. Add it to `~/.openclaw/openclaw.json`, or directly to `~/.openclaw/agents/main/agent/auth-profiles.json`:
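For illustration, the profile entry might look like this. The `iotex:default` profile name and the `key` field name come from later in this guide; the surrounding `profiles` wrapper is an assumption:

```json
{
  "profiles": {
    "iotex:default": {
      "provider": "iotex",
      "key": "sk-xxxxxxxxxx"
    }
  }
}
```

Note the field is `key`, not `apiKey`, in `auth-profiles.json`.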
Step 2: Configure Audio Transcription
Add the audio model config to `~/.openclaw/openclaw.json`:
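A sketch of the fragment, using the four fields described in the table below. Nesting it under `tools.media.audio` with `enabled: true` follows the troubleshooting notes in this guide; the `models` array name is an assumption:

```json
{
  "tools": {
    "media": {
      "audio": {
        "enabled": true,
        "models": [
          {
            "provider": "openai",
            "model": "openai/whisper-large-v3-turbo",
            "baseUrl": "https://gateway.iotex.ai/v1",
            "profile": "iotex:default"
          }
        ]
      }
    }
  }
}
```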
| Field | Value | Why |
|---|---|---|
| `provider` | `"openai"` | Routes through OpenClaw’s built-in OpenAI-compatible transcription handler. Do not use `"iotex"` here. |
| `model` | `"openai/whisper-large-v3-turbo"` | The Whisper model ID on the IoTeX gateway. |
| `baseUrl` | `"https://gateway.iotex.ai/v1"` | Overrides the default OpenAI URL to point to IoTeX. |
| `profile` | `"iotex:default"` | Uses the IoTeX auth profile for the API key. |
Step 3: Restart and Test
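Restart the gateway so the new auth profile and audio config are loaded, using the command referenced in this guide:

```shell
# Pick up the new auth profile and audio config
openclaw gateway restart
```

Then send a voice message from a connected channel (e.g. Telegram) and check that the reply includes a transcription.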
Available Whisper Models
| Model | Speed | Price/min |
|---|---|---|
| `openai/whisper-large-v3-turbo` | Fast | $0.0015 |
| `openai/whisper-large-v3` | Standard | $0.0030 |
| `whisper-1` | Legacy | $0.0060 |
Available Models
The setup script configures these Gemini models via IoTeX AI Gateway:

| Model | Best For | Price (input/output per 1M tokens) |
|---|---|---|
| `gemini-2.5-flash-lite` | Low-cost general chat, fast responses | 0.40 |
| `gemini-2.5-flash` | Balanced quality and speed | 2.50 |
Troubleshooting
Audio transcription not working
The most common cause is using `"provider": "iotex"` in the audio model config. OpenClaw’s media understanding only recognizes `openai`, `groq`, `google`, `anthropic`, `minimax`, `deepgram`. Use `"provider": "openai"` with `baseUrl` pointing to IoTeX instead.

Also check:

- The auth profile uses `"key"` (not `"apiKey"`) in `auth-profiles.json`
- `tools.media.audio.enabled` is `true`
- Run `openclaw gateway restart` after config changes
Model not found
Verify the model name matches the supported models list. Model names are case-sensitive.
Gateway not picking up config changes
OpenClaw watches `openclaw.json` for changes, but some changes (like auth profiles or audio config) require a full restart with `openclaw gateway restart`.

API key not working
Check that the key is set in the correct location. OpenClaw resolves API keys in this order:
1. The auth profile specified by the `profile` field
2. Auth profiles in `~/.openclaw/agents/main/agent/auth-profiles.json`
3. An environment variable (e.g., `IOTEX_API_KEY`)
4. `apiKey` in the `models.providers.iotex` config
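For option 3, the environment variable can be set in your shell before starting the gateway. The variable name comes from the list above; `sk-xxxxxxxxxx` is the usual placeholder for your Gateway Console key:

```shell
# Supply the IoTeX key via the environment instead of config files
export IOTEX_API_KEY="sk-xxxxxxxxxx"
```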