This module adds AI model management capabilities to Finsteps, with support for multiple providers including local Ollama models.
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start the Ollama server
ollama serve

# Pull the qwen3-coder:30b model
ollama pull qwen3-coder:30b
```
```typescript
import { modelManager, type ModelChatRequest } from 'finsteps';

// Check available models
const models = modelManager.getAvailableModels();
console.log(models);

// Chat with the local qwen3-coder-30b model
const request: ModelChatRequest = {
  model: 'qwen3-coder-30b',
  messages: [
    {
      role: 'user',
      content: 'Write a simple React component for a button'
    }
  ]
};

const response = await modelManager.chat(request);
console.log(response.choices[0].message.content);
```
```typescript
// Add a new Ollama model
modelManager.addModel({
  id: 'my-custom-model',
  name: 'My Custom Model',
  provider: 'ollama',
  model: 'llama3:8b',
  baseUrl: 'http://localhost:11434',
  maxTokens: 2048,
  temperature: 0.7,
  enabled: true
});
```
```typescript
// OpenAI (requires API key)
modelManager.addModel({
  id: 'gpt-4',
  name: 'GPT-4',
  provider: 'openai',
  model: 'gpt-4',
  apiKey: 'your-api-key-here',
  maxTokens: 4096,
  temperature: 0.7,
  enabled: true
});
```
```typescript
// Anthropic (requires API key)
modelManager.addModel({
  id: 'claude-3-sonnet',
  name: 'Claude 3 Sonnet',
  provider: 'anthropic',
  model: 'claude-3-sonnet-20240229',
  apiKey: 'your-api-key-here',
  maxTokens: 4096,
  temperature: 0.7,
  enabled: true
});
```
The system comes pre-configured with:

- `gpt-4`
- `claude-3-sonnet`
- `qwen3-coder-30b` (local model)

Run the validation script to test your setup:
```bash
npm run test:ollama
```

This validates your local Ollama setup. See `examples/ai-usage.ts` for a complete working example.
All models are configured with sensible defaults:

- `temperature`: 0.7
- `maxTokens`: 2048 for local Ollama models, 4096 for cloud providers
- `baseUrl`: `http://localhost:11434` (Ollama)

You can customize these values when adding models or in individual chat requests.
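The precedence is the usual one: a value supplied in an individual request wins over the model's configured default. The sketch below illustrates that merge with a hypothetical `resolveParams` helper; the helper and its field names are illustrative, not part of the finsteps API.

```typescript
// Hypothetical sketch of request-level options overriding model defaults.
// resolveParams is illustrative only, not a finsteps export.
interface ModelDefaults {
  maxTokens: number;
  temperature: number;
}

interface ChatOverrides {
  maxTokens?: number;
  temperature?: number;
}

function resolveParams(defaults: ModelDefaults, overrides: ChatOverrides): ModelDefaults {
  return {
    // Nullish coalescing keeps a default only when no override is given
    maxTokens: overrides.maxTokens ?? defaults.maxTokens,
    temperature: overrides.temperature ?? defaults.temperature,
  };
}

// A request-level temperature wins; the untouched maxTokens default survives
const resolved = resolveParams(
  { maxTokens: 2048, temperature: 0.7 },
  { temperature: 0.2 }
);
console.log(resolved); // { maxTokens: 2048, temperature: 0.2 }
```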
**Methods**

- `getAvailableModels()`: Get all enabled models
- `getModel(id)`: Get a specific model by ID
- `addModel(model)`: Add a new model
- `removeModel(id)`: Remove a model
- `setDefaultModel(id)`: Set the default model
- `chat(request)`: Send a chat request
- `detectOllamaModels()`: Detect available Ollama models
- `pullOllamaModel(model)`: Pull a model from Ollama

**Types**

- `ModelConfig`: Configuration for a model
- `ModelChatRequest`: Chat request structure
- `ModelChatResponse`: Chat response structure
- `ModelProvider`: Supported providers (`'openai'`, `'anthropic'`, `'ollama'`)
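The registry behavior behind these methods can be pictured with a minimal in-memory sketch. This is an illustrative stand-in, not the finsteps implementation; the key point it shows is that `getAvailableModels()` filters on the `enabled` flag.

```typescript
// Illustrative in-memory sketch of the model registry; not the actual
// finsteps implementation.
type ModelProvider = 'openai' | 'anthropic' | 'ollama';

interface ModelConfig {
  id: string;
  name: string;
  provider: ModelProvider;
  model: string;
  enabled: boolean;
}

class ModelRegistry {
  private models = new Map<string, ModelConfig>();
  private defaultId: string | null = null;

  addModel(model: ModelConfig): void {
    this.models.set(model.id, model);
  }

  removeModel(id: string): void {
    this.models.delete(id);
    if (this.defaultId === id) this.defaultId = null;
  }

  getModel(id: string): ModelConfig | undefined {
    return this.models.get(id);
  }

  getAvailableModels(): ModelConfig[] {
    // Only enabled models count as available
    return [...this.models.values()].filter((m) => m.enabled);
  }

  setDefaultModel(id: string): void {
    if (!this.models.has(id)) throw new Error(`Unknown model: ${id}`);
    this.defaultId = id;
  }
}

const registry = new ModelRegistry();
registry.addModel({
  id: 'qwen3-coder-30b',
  name: 'Qwen3 Coder 30B',
  provider: 'ollama',
  model: 'qwen3-coder:30b',
  enabled: true,
});
registry.addModel({
  id: 'gpt-4',
  name: 'GPT-4',
  provider: 'openai',
  model: 'gpt-4',
  enabled: false, // disabled models are filtered out
});
console.log(registry.getAvailableModels().map((m) => m.id)); // ['qwen3-coder-30b']
```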