Add a custom AI provider

REQUIREMENTS

To follow this guide, you need to know how to add PHP snippets to your website.

You can find a guide here: WP Beginner

Better Messages ships with three built-in AI providers — OpenAI, Anthropic and Google Gemini — but you can register additional providers yourself using three PHP filter hooks. This guide walks through adding a provider that talks to any OpenAI-compatible Chat Completions endpoint.

That single provider works with self-hosted servers like Ollama, LM Studio, llama.cpp (--api), vLLM, LocalAI and AnythingLLM, as well as hosted services like Groq, Together AI, OpenRouter, DeepInfra and Perplexity.

The same mechanism also lets you build your own wrappers around OpenAI, Claude or Gemini — useful for Azure OpenAI, Amazon Bedrock, Google Vertex AI, LiteLLM proxies, or any time you need custom auth, logging, or routing in front of an API that Better Messages already supports. See Wrapping the built-in providers below.

Requirements
  • Better Messages with the AI Chat Bots feature enabled
  • PHP 8.1 or newer (required by the AI addon)
  • Your LLM server must expose an OpenAI-compatible /v1/chat/completions endpoint with streaming support

Starter plugin

A ready-to-install version of the snippet from this guide is available as a WordPress plugin:

Download bm-custom-ai-provider.zip

Unzip into wp-content/plugins/, edit the four constants at the top of bm-custom-ai-provider.php (or override them from wp-config.php), activate the plugin, and the new provider will appear in the AI Chat Bots editor.
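Because every constant in the plugin is wrapped in an `if ( ! defined( … ) )` guard, you can leave the plugin file untouched and configure it from wp-config.php instead. As an illustrative sketch (the values below are examples for Groq; substitute your own service and key):

```php
// In wp-config.php, above the "That's all, stop editing!" line.
// These constant names match the ones used by the plugin in this guide;
// the values are examples — point them at your own server and key.
define( 'BM_CUSTOM_AI_ID',       'groq' );
define( 'BM_CUSTOM_AI_NAME',     'Groq' );
define( 'BM_CUSTOM_AI_BASE_URL', 'https://api.groq.com/openai/v1/' );
define( 'BM_CUSTOM_AI_KEY',      'gsk_your_real_key_here' );
```

Defining a constant in wp-config.php wins because WordPress loads that file before any plugin runs, so the plugin's guarded `define()` calls become no-ops.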

How it works

Three filter hooks give you everything you need to plug a new provider into the AI Chat Bots admin UI:

Filter                                      Purpose
better_messages_ai_providers_info           Adds your provider to the Provider dropdown in the bot editor
better_messages_ai_provider_create          Returns your provider class instance when Better Messages needs to generate a response
better_messages_ai_provider_global_key      Returns the global API key for your provider (useful for hosted services)
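Wired together, the three hooks follow the skeleton below. The id `my_provider`, the key, and the class name are placeholders; the full, working provider class appears in the complete example later in this guide.

```php
// Skeleton of the three registration hooks for a custom provider.
add_filter( 'better_messages_ai_providers_info', function ( $providers ) {
    $providers[] = array(
        'id'           => 'my_provider', // internal id, must be unique
        'name'         => 'My Provider', // label shown in the admin dropdown
        'features'     => array( 'temperature', 'maxOutputTokens' ),
        'hasGlobalKey' => true,
    );
    return $providers;
} );

add_filter( 'better_messages_ai_provider_global_key', function ( $key, $provider_id ) {
    return $provider_id === 'my_provider' ? 'my-api-key' : $key;
}, 10, 2 );

add_filter( 'better_messages_ai_provider_create', function ( $provider, $provider_id ) {
    if ( $provider_id === 'my_provider' && class_exists( 'Better_Messages_AI_Provider' ) ) {
        return new My_Provider_Class(); // your subclass of Better_Messages_AI_Provider
    }
    return $provider;
}, 10, 2 );
```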

Your provider class extends Better_Messages_AI_Provider and must implement a few abstract methods. The important one is getResponseGenerator(), which returns a PHP Generator that yields:

  • string — a text delta, streamed to the user as the bot types
  • ['finish', $meta] — when the response is complete
  • ['error', $message] — if something went wrong
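As a toy illustration of that contract only (not wired to any API), a generator that streamed the word "Hello" and then finished could look like this:

```php
<?php
/**
 * Minimal illustration of the generator contract: yield string deltas,
 * then a ['finish', $meta] tuple (or ['error', $message] on failure).
 * A real provider yields deltas as they arrive from the HTTP stream.
 */
function bm_demo_response_generator(): Generator {
    yield 'Hel'; // text delta — streamed to the user as the bot types
    yield 'lo';  // another delta
    yield array( 'finish', array( 'provider' => 'demo', 'model' => 'demo-model' ) );
}

$text = '';
foreach ( bm_demo_response_generator() as $chunk ) {
    if ( is_string( $chunk ) ) {
        $text .= $chunk; // accumulate streamed deltas
    } elseif ( $chunk[0] === 'finish' ) {
        // Response complete; $chunk[1] carries provider/model metadata.
    }
}
// $text is now "Hello"
```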

Complete example

Paste the snippet below into a mu-plugin, a custom plugin, or functions.php in your theme. Edit the four constants at the top to point at your LLM server.

<?php
/**
 * Better Messages — custom AI provider for OpenAI-compatible endpoints.
 *
 * Works with Ollama, LM Studio, vLLM, llama.cpp, LocalAI, AnythingLLM,
 * Groq, Together AI, OpenRouter and any server that speaks the
 * OpenAI Chat Completions API.
 */

// ---- EDIT THESE (or override any of them from wp-config.php) ----
if ( ! defined( 'BM_CUSTOM_AI_ID' ) )       define( 'BM_CUSTOM_AI_ID', 'ollama' );                           // internal provider id
if ( ! defined( 'BM_CUSTOM_AI_NAME' ) )     define( 'BM_CUSTOM_AI_NAME', 'Ollama (self-hosted)' );           // label shown in the admin dropdown
if ( ! defined( 'BM_CUSTOM_AI_BASE_URL' ) ) define( 'BM_CUSTOM_AI_BASE_URL', 'http://127.0.0.1:11434/v1/' ); // OpenAI-compatible base URL, must end with /
if ( ! defined( 'BM_CUSTOM_AI_KEY' ) )      define( 'BM_CUSTOM_AI_KEY', 'ollama' );                          // API key — Ollama ignores it, hosted services need a real key
// -----------------------------------------------------------------

add_filter( 'better_messages_ai_providers_info', function ( $providers ) {
    $providers[] = array(
        'id'           => BM_CUSTOM_AI_ID,
        'name'         => BM_CUSTOM_AI_NAME,
        'features'     => array( 'temperature', 'maxOutputTokens' ),
        'hasGlobalKey' => true,
    );
    return $providers;
} );

add_filter( 'better_messages_ai_provider_global_key', function ( $key, $provider_id ) {
    if ( $provider_id === BM_CUSTOM_AI_ID ) {
        return BM_CUSTOM_AI_KEY;
    }
    return $key;
}, 10, 2 );

add_filter( 'better_messages_ai_provider_create', function ( $provider, $provider_id ) {
    if ( $provider_id !== BM_CUSTOM_AI_ID ) {
        return $provider;
    }

    if ( ! class_exists( 'Better_Messages_AI_Provider' ) ) {
        return null; // AI addon is not loaded
    }

    if ( ! class_exists( 'BM_Custom_AI_Provider' ) ) {

        class BM_Custom_AI_Provider extends Better_Messages_AI_Provider {

            public function get_provider_id() {
                return BM_CUSTOM_AI_ID;
            }

            public function get_provider_name() {
                return BM_CUSTOM_AI_NAME;
            }

            public function get_supported_features() {
                return array( 'temperature', 'maxOutputTokens' );
            }

            public function check_api_key() {
                // Called when plugin settings are saved. No-op for self-hosted servers.
            }

            public function get_models() {
                try {
                    $response = $this->get_client()->request( 'GET', 'models' );
                    $data     = json_decode( $response->getBody()->getContents(), true );
                    $models   = array();

                    if ( isset( $data['data'] ) && is_array( $data['data'] ) ) {
                        foreach ( $data['data'] as $row ) {
                            if ( isset( $row['id'] ) ) {
                                $models[] = $row['id'];
                            }
                        }
                    }

                    sort( $models );
                    return $models;
                } catch ( \Throwable $e ) {
                    return new \WP_Error( 'custom_ai_models', $e->getMessage() );
                }
            }

            public function getResponseGenerator( $bot_id, $bot_user, $message, $ai_message_id, $stream = true ) {
                global $wpdb;

                $settings    = Better_Messages()->ai->get_bot_settings( $bot_id );
                $bot_user_id = absint( $bot_user->id ) * -1;

                // Load conversation history up to the triggering message.
                $rows = $wpdb->get_results( $wpdb->prepare(
                    "SELECT id, sender_id, message
                     FROM `" . bm_get_table( 'messages' ) . "`
                     WHERE thread_id = %d AND created_at <= %d
                     ORDER BY created_at ASC",
                    $message->thread_id, $message->created_at
                ) );

                // Build OpenAI-style messages array.
                $request_messages = array();

                if ( ! empty( $settings['instruction'] ) ) {
                    $request_messages[] = array(
                        'role'    => 'system',
                        'content' => $settings['instruction'],
                    );
                }

                foreach ( $rows as $row ) {
                    // Skip messages that are themselves failed AI responses.
                    if ( Better_Messages()->functions->get_message_meta( $row->id, 'ai_response_error' ) ) {
                        continue;
                    }

                    // Strip HTML comments (e.g. the <!-- BM-AI --> marker) and tags.
                    $text = preg_replace( '/<!--(.|\s)*?-->/', '', $row->message );
                    $text = wp_strip_all_tags( html_entity_decode( $text ) );

                    $request_messages[] = array(
                        'role'    => (int) $row->sender_id === $bot_user_id ? 'assistant' : 'user',
                        'content' => $text,
                    );
                }

                $params = array(
                    'model'    => $settings['model'],
                    'messages' => $request_messages,
                    'stream'   => true,
                );

                if ( $settings['temperature'] !== '' ) {
                    $params['temperature'] = (float) $settings['temperature'];
                }

                if ( $settings['maxOutputTokens'] !== '' ) {
                    $params['max_tokens'] = (int) $settings['maxOutputTokens'];
                }

                try {
                    $response = $this->get_client()->post( 'chat/completions', array(
                        'json'   => $params,
                        'stream' => true,
                    ) );

                    $body   = $response->getBody();
                    $buffer = '';
                    $model  = '';

                    while ( ! $body->eof() ) {
                        $chunk = $body->read( 1024 );
                        if ( $chunk === '' ) {
                            continue;
                        }

                        $buffer .= $chunk;

                        while ( ( $pos = strpos( $buffer, "\n" ) ) !== false ) {
                            $line   = trim( substr( $buffer, 0, $pos ) );
                            $buffer = substr( $buffer, $pos + 1 );

                            if ( $line === '' || strpos( $line, 'data: ' ) !== 0 ) {
                                continue;
                            }

                            $json = substr( $line, 6 );

                            if ( $json === '[DONE]' ) {
                                yield array( 'finish', array(
                                    'provider' => $this->get_provider_id(),
                                    'model'    => $model,
                                ) );
                                return;
                            }

                            $data = json_decode( $json, true );

                            if ( isset( $data['model'] ) ) {
                                $model = $data['model'];
                            }

                            if ( isset( $data['choices'][0]['delta']['content'] ) ) {
                                yield $data['choices'][0]['delta']['content'];
                            }
                        }
                    }

                    // Some servers don't emit a final [DONE]; finish cleanly anyway.
                    yield array( 'finish', array(
                        'provider' => $this->get_provider_id(),
                        'model'    => $model,
                    ) );
                } catch ( \Throwable $e ) {
                    yield array( 'error', $e->getMessage() );
                }
            }

            protected function get_client() {
                return new \BetterMessages\GuzzleHttp\Client( array(
                    'base_uri' => BM_CUSTOM_AI_BASE_URL,
                    'headers'  => array(
                        'Authorization' => 'Bearer ' . $this->get_api_key(),
                        'Content-Type'  => 'application/json',
                    ),
                    'timeout'         => 120,
                    'connect_timeout' => 10,
                ) );
            }
        }
    }

    return new BM_Custom_AI_Provider();
}, 10, 2 );

Using your custom provider

  1. Save the snippet (edit the four constants at the top to match your setup).
  2. Go to WP Admin → Better Messages → AI Chat Bots.
  3. Create or edit a bot.
  4. In the AI Provider section, pick your new provider from the Provider dropdown.
  5. Pick a model from the Model & Pricing selector — the list is fetched live from your server via GET /v1/models.
  6. Save the bot, add it to a conversation, and start chatting.

Per-server notes

  • Ollama — Base URL http://127.0.0.1:11434/v1/. The API key is ignored; any string works. Install a model first with ollama pull llama3.2.
  • LM Studio — Start the local server from the Developer tab. Base URL defaults to http://127.0.0.1:1234/v1/.
  • vLLM / llama.cpp — Start the server with its OpenAI-compatible API and use base URL http://127.0.0.1:8000/v1/ (or whichever port you configured).
  • Hosted services (Groq, Together AI, OpenRouter, DeepInfra, Perplexity) — Use the service's base URL and a real API key.
TIP

If your WordPress site and LLM server run on different machines, make sure the LLM server's port is reachable from WordPress. For self-hosted servers that listen on 127.0.0.1, bind them to 0.0.0.0 or put them behind a reverse proxy with TLS.
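To confirm reachability from the WordPress side, you can call the models endpoint with the WordPress HTTP API, for example via WP-CLI (`wp eval-file check.php`) or a temporary snippet. The URL and dummy key below are assumptions for a default Ollama install; substitute your own base URL and key:

```php
// Quick connectivity check from WordPress to the LLM server.
// Adjust the URL to your own base URL; 'ollama' is a dummy key.
$response = wp_remote_get( 'http://127.0.0.1:11434/v1/models', array(
    'timeout' => 10,
    'headers' => array( 'Authorization' => 'Bearer ollama' ),
) );

if ( is_wp_error( $response ) ) {
    error_log( 'LLM server unreachable: ' . $response->get_error_message() );
} else {
    error_log( 'LLM server responded: HTTP ' . wp_remote_retrieve_response_code( $response ) );
}
```

A `cURL error 7` (connection refused) here usually means the server is bound to a loopback address on another machine.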

Adding more features

The example above supports temperature and max output tokens. If your backend supports more, add the matching feature IDs to both get_supported_features() and the features array in the better_messages_ai_providers_info filter — Better Messages will automatically show the corresponding fields in the bot editor.

Available feature IDs:

Feature ID          What it enables in the bot editor
temperature         Temperature slider
maxOutputTokens     Max output tokens field
images              Image input toggle (vision models)
files               File input toggle (PDFs)
webSearch           Web search tool
fileSearch          File search / vector store tool
imagesGeneration    Image generation tool
reasoningEffort     Reasoning effort selector
extendedThinking    Extended thinking / thinking budget
serviceTier         Service tier selector
audio               Audio input/output
moderation          Pre-send moderation
transcription       Voice message transcription

To support image inputs, add 'images' to the feature list and extend the message-building loop to include base64-encoded image attachments. The built-in OpenAI provider at addons/ai/api/open-ai.php is a good reference.
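As a rough sketch of that change inside the message-building loop (the helper bm_get_image_attachments() is hypothetical — load attachments however your install stores them), an OpenAI-style multimodal message replaces the plain 'content' string with an array of parts:

```php
// Sketch: build a multimodal message in the OpenAI Chat Completions
// format. bm_get_image_attachments() is a hypothetical helper — replace
// it with your own attachment lookup for the message.
$parts = array(
    array( 'type' => 'text', 'text' => $text ),
);

foreach ( bm_get_image_attachments( $row->id ) as $path ) {
    $parts[] = array(
        'type'      => 'image_url',
        'image_url' => array(
            // Inline the image as a base64 data URL.
            'url' => 'data:image/png;base64,' . base64_encode( file_get_contents( $path ) ),
        ),
    );
}

$request_messages[] = array(
    'role'    => (int) $row->sender_id === $bot_user_id ? 'assistant' : 'user',
    'content' => $parts, // array of parts instead of a plain string
);
```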

Wrapping the built-in providers

The same filter hooks let you register wrappers around OpenAI, Anthropic (Claude) or Google Gemini. This is useful when you need to reach one of those APIs through a non-default route:

  • Azure OpenAI Service — OpenAI's API surface, but on Azure with a different base URL, a different auth header (api-key instead of Authorization), and a model identifier that's actually your deployment name.
  • Amazon Bedrock / Google Vertex AI — Claude and Gemini models delivered through AWS or GCP credentials instead of the vendor's direct API.
  • LiteLLM, OpenRouter, or a self-hosted proxy — a single endpoint in front of several model providers, often with cost/quota controls bolted on.
  • A transparent logging / audit layer — send every prompt and response through your own service before forwarding to the real API.
  • Multiple keys or tenants — one "OpenAI (marketing)" provider and one "OpenAI (support)" provider, each with its own key, rate limit, and model allowlist.

Two ways to build a wrapper

1. Reuse the built-in provider class. OpenAI, Anthropic and Gemini all live in classes you can extend or compose. For example, to create an Azure OpenAI provider that keeps all of the streaming, moderation, transcription and cost-tracking logic of the built-in OpenAI provider, subclass it and only override the HTTP client:

add_filter( 'better_messages_ai_provider_create', function ( $provider, $provider_id ) {
    if ( $provider_id !== 'azure_openai' ) {
        return $provider;
    }

    if ( ! class_exists( 'Better_Messages_OpenAI_API' ) ) {
        return null;
    }

    if ( ! class_exists( 'BM_Azure_OpenAI_Provider' ) ) {

        class BM_Azure_OpenAI_Provider extends Better_Messages_OpenAI_API {

            public function get_provider_id() { return 'azure_openai'; }
            public function get_provider_name() { return 'Azure OpenAI'; }

            public function get_client() {
                return new \BetterMessages\GuzzleHttp\Client( array(
                    'base_uri' => 'https://YOUR-RESOURCE.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT/',
                    'headers'  => array(
                        'api-key'      => $this->get_api_key(),
                        'Content-Type' => 'application/json',
                    ),
                    'query' => array( 'api-version' => '2024-10-21' ),
                ) );
            }
        }
    }

    return new BM_Azure_OpenAI_Provider();
}, 10, 2 );

Register it with better_messages_ai_providers_info and better_messages_ai_provider_global_key the same way as the OpenAI-compatible example above.
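For completeness, that registration could look like the sketch below. The label and key are placeholders, and the features list mirrors the minimal example in this guide; the built-in OpenAI provider may support additional feature IDs you can advertise here.

```php
// Register the Azure wrapper in the admin UI and supply its key.
add_filter( 'better_messages_ai_providers_info', function ( $providers ) {
    $providers[] = array(
        'id'           => 'azure_openai',
        'name'         => 'Azure OpenAI',
        'features'     => array( 'temperature', 'maxOutputTokens' ),
        'hasGlobalKey' => true,
    );
    return $providers;
} );

add_filter( 'better_messages_ai_provider_global_key', function ( $key, $provider_id ) {
    return $provider_id === 'azure_openai' ? 'YOUR-AZURE-API-KEY' : $key;
}, 10, 2 );
```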

2. Write a brand-new provider class that speaks a different API shape internally but advertises itself with whatever id and name you like. Use this when you want to wrap Claude via Amazon Bedrock or Gemini via Vertex AI, where the request/response format is different enough from the vendor's direct API that subclassing isn't practical. Start from the OpenAI-compatible snippet above and replace the HTTP call and streaming parser with the Bedrock / Vertex equivalents.

TIP

Inside a wrapper you can still call the parent class's helper methods — get_thread_messages(), resolve_sender_names(), get_group_context_instruction(), strip_mention_html(), convert_mention_placeholders(), enrich_with_reply_context() — so you don't have to reimplement context building, group handling, or mentions.