finsteps

AI Model Configuration

This module adds AI model management capabilities to Finsteps, with support for multiple providers including local Ollama models.

Features

Quick Start

1. Install and Set Up Ollama

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama server
ollama serve

# Pull the qwen3-coder:30b model
ollama pull qwen3-coder:30b

2. Basic Usage

import { modelManager, type ModelChatRequest } from 'finsteps';

// Check available models
const models = modelManager.getAvailableModels();
console.log(models);

// Chat with the pre-configured qwen3-coder-30b model
// (served by Ollama under the tag qwen3-coder:30b pulled above)
const request: ModelChatRequest = {
  model: 'qwen3-coder-30b',
  messages: [
    {
      role: 'user',
      content: 'Write a simple React component for a button'
    }
  ]
};

const response = await modelManager.chat(request);
console.log(response.choices[0].message.content);
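The response above follows the OpenAI-style chat-completions shape (`choices[0].message.content`). In application code it is worth guarding against empty or malformed responses before indexing into it; the `ChatResponse` type and `extractReply` helper below are an illustrative sketch, not part of the finsteps API:

```typescript
// Minimal response shape mirroring the OpenAI-style structure used above.
// These names are illustrative; check the actual finsteps type definitions.
interface ChatResponse {
  choices?: { message?: { content?: string } }[];
}

// Return the assistant's reply, or a fallback if the response is malformed.
function extractReply(response: ChatResponse, fallback = ''): string {
  return response.choices?.[0]?.message?.content ?? fallback;
}

// Example with a hand-built response object:
const ok: ChatResponse = { choices: [{ message: { content: 'Hello!' } }] };
console.log(extractReply(ok));               // "Hello!"
console.log(extractReply({}, '(no reply)')); // "(no reply)"
```

Optional chaining plus a nullish-coalescing fallback keeps the caller from crashing when a provider returns an empty choice list.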

3. Adding Custom Models

// Add a new Ollama model
modelManager.addModel({
  id: 'my-custom-model',
  name: 'My Custom Model',
  provider: 'ollama',
  model: 'llama3:8b',
  baseUrl: 'http://localhost:11434',
  maxTokens: 2048,
  temperature: 0.7,
  enabled: true
});
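Conceptually, a manager like this only needs a map from model id to configuration: `addModel` upserts by id, and `getAvailableModels` filters on the `enabled` flag. The following is a minimal sketch of that idea (not the actual finsteps implementation); the `ModelConfig` fields mirror the options shown above:

```typescript
// Illustrative config shape, mirroring the fields used in the examples above.
interface ModelConfig {
  id: string;
  name: string;
  provider: 'ollama' | 'openai' | 'anthropic';
  model: string;
  baseUrl?: string;
  apiKey?: string;
  maxTokens: number;
  temperature: number;
  enabled: boolean;
}

// A minimal registry: addModel upserts by id,
// getAvailableModels returns only the enabled entries.
class ModelRegistry {
  private models = new Map<string, ModelConfig>();

  addModel(config: ModelConfig): void {
    this.models.set(config.id, config);
  }

  getAvailableModels(): ModelConfig[] {
    return [...this.models.values()].filter((m) => m.enabled);
  }
}

const registry = new ModelRegistry();
registry.addModel({
  id: 'my-custom-model',
  name: 'My Custom Model',
  provider: 'ollama',
  model: 'llama3:8b',
  baseUrl: 'http://localhost:11434',
  maxTokens: 2048,
  temperature: 0.7,
  enabled: true,
});
registry.addModel({
  id: 'parked-model',
  name: 'Parked Model',
  provider: 'ollama',
  model: 'llama3:8b',
  baseUrl: 'http://localhost:11434',
  maxTokens: 2048,
  temperature: 0.7,
  enabled: false, // disabled entries are hidden from getAvailableModels
});
console.log(registry.getAvailableModels().length); // 1
```

Keying on `id` (rather than the provider's model tag) is what lets two entries point at the same underlying model with different settings.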

4. Using Other Providers

// OpenAI (requires API key)
modelManager.addModel({
  id: 'gpt-4',
  name: 'GPT-4',
  provider: 'openai',
  model: 'gpt-4',
  apiKey: 'your-api-key-here',
  maxTokens: 4096,
  temperature: 0.7,
  enabled: true
});

// Anthropic (requires API key)
modelManager.addModel({
  id: 'claude-3-sonnet',
  name: 'Claude 3 Sonnet',
  provider: 'anthropic',
  model: 'claude-3-sonnet-20240229',
  apiKey: 'your-api-key-here',
  maxTokens: 4096,
  temperature: 0.7,
  enabled: true
});
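The practical difference between the providers above is authentication: the hosted providers require an `apiKey`, while Ollama only needs a reachable `baseUrl`. A hypothetical pre-flight check along those lines (illustrative only, not part of the finsteps API) might look like:

```typescript
// Illustrative: reject configs that are missing provider-specific fields.
type Provider = 'ollama' | 'openai' | 'anthropic';

interface ProviderConfig {
  provider: Provider;
  apiKey?: string;
  baseUrl?: string;
}

function validateConfig(config: ProviderConfig): string[] {
  const errors: string[] = [];
  if (config.provider === 'ollama') {
    // Local models: no key needed, but the server address must be set.
    if (!config.baseUrl) errors.push('ollama models need a baseUrl');
  } else if (!config.apiKey) {
    // Hosted providers: a key is mandatory.
    errors.push(`${config.provider} models need an apiKey`);
  }
  return errors;
}

console.log(validateConfig({ provider: 'openai' }));
// ["openai models need an apiKey"]
console.log(validateConfig({ provider: 'ollama', baseUrl: 'http://localhost:11434' }));
// []
```

Collecting errors into an array (rather than throwing on the first one) makes it easy to report all configuration problems at once.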

Available Models

The system comes pre-configured with the qwen3-coder-30b Ollama model used in the Quick Start above.

Testing

Run the validation script to test your setup:

npm run test:ollama

This validates your local Ollama setup.

Example Usage

See examples/ai-usage.ts for a complete working example.

Configuration

All models are configured with sensible defaults for maxTokens and temperature, as shown in the examples above.

You can customize these values when adding models or in individual chat requests.
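Customization follows the usual precedence: values supplied on an individual request win over the model's stored defaults. A sketch of that merge, with hypothetical names:

```typescript
// Illustrative: per-request overrides take precedence over model defaults.
interface GenerationDefaults {
  maxTokens: number;
  temperature: number;
}

function resolveOptions(
  modelDefaults: GenerationDefaults,
  requestOverrides: Partial<GenerationDefaults> = {}
): GenerationDefaults {
  // Spread order matters: later properties override earlier ones.
  return { ...modelDefaults, ...requestOverrides };
}

const defaults: GenerationDefaults = { maxTokens: 2048, temperature: 0.7 };
console.log(resolveOptions(defaults, { temperature: 0.2 }));
// { maxTokens: 2048, temperature: 0.2 }
```

Using `Partial<GenerationDefaults>` for the overrides means a request can set just one knob and inherit the rest.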

API Reference

ModelManager

Types