# Custom Providers
External packages can create providers without modifying the core library. This is the recommended approach for adding new LLM backends.
## Creating a Provider

### Step 1: Implement the Provider Interface
```go
// In your external package (e.g., github.com/yourname/omnillm-myprovider)
package myprovider

import (
	"context"

	"github.com/agentplexus/omnillm/provider"
)

// Client is your HTTP client for the backend API.
type Client struct {
	apiKey string
	// your HTTP client implementation
}

func New(apiKey string) *Client {
	return &Client{apiKey: apiKey}
}

// Close releases any resources held by the client
// (idle connections, background goroutines, etc.).
func (c *Client) Close() error {
	return nil
}

// Provider adapts Client to the provider.Provider interface.
type Provider struct {
	client *Client
}

func NewProvider(apiKey string) provider.Provider {
	return &Provider{client: New(apiKey)}
}

func (p *Provider) Name() string { return "myprovider" }

func (p *Provider) Close() error { return p.client.Close() }

func (p *Provider) CreateChatCompletion(ctx context.Context, req *provider.ChatCompletionRequest) (*provider.ChatCompletionResponse, error) {
	// Convert provider.ChatCompletionRequest to your API format,
	// make the HTTP call via p.client, then convert the response
	// back to provider.ChatCompletionResponse.
	return nil, nil // placeholder
}

func (p *Provider) CreateChatCompletionStream(ctx context.Context, req *provider.ChatCompletionRequest) (provider.ChatCompletionStream, error) {
	// Your streaming implementation.
	return nil, nil // placeholder
}
```
### Step 2: Use Your Provider
```go
package main

import (
	"context"
	"log"

	"github.com/agentplexus/omnillm"
	"github.com/yourname/omnillm-myprovider"
)

func main() {
	customProvider := myprovider.NewProvider("your-api-key")

	client, err := omnillm.NewClient(omnillm.ClientConfig{
		Providers: []omnillm.ProviderConfig{
			{CustomProvider: customProvider},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Use the same omnillm API.
	ctx := context.Background()
	response, err := client.CreateChatCompletion(ctx, &omnillm.ChatCompletionRequest{
		Model:    "my-model",
		Messages: []omnillm.Message{{Role: omnillm.RoleUser, Content: "Hello!"}},
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = response
}
```
## Provider Interface
```go
type Provider interface {
	// Name returns the provider name (e.g., "openai", "anthropic")
	Name() string

	// CreateChatCompletion performs a synchronous chat completion
	CreateChatCompletion(ctx context.Context, req *ChatCompletionRequest) (*ChatCompletionResponse, error)

	// CreateChatCompletionStream performs a streaming chat completion
	CreateChatCompletionStream(ctx context.Context, req *ChatCompletionRequest) (ChatCompletionStream, error)

	// Close releases any resources
	Close() error
}
```
## Reference Implementations
Look at any built-in provider as a reference:
- `providers/openai/` - OpenAI implementation
- `providers/anthropic/` - Anthropic implementation
- omnillm-bedrock - External provider example
## Benefits
- **No Core Changes**: External providers don't require modifying the core library
- **Clean Injection**: Use `ProviderConfig.CustomProvider` to inject your provider
- **Same Interface**: Both internal and external providers use the same `provider.Provider` interface
- **Easy Testing**: Mock the provider interface for unit tests