Settings

Ollama Configuration
Configure your local Ollama server connection and default model

Ollama Server URL: the base URL of your Ollama server (without /api)

Default Model: the model to use for conversations (e.g., llama2, codellama, mistral)
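
For example, a default local install listens on http://localhost:11434 (Ollama's standard port), so that is typically the base URL; the app appends the /api paths itself. As a minimal sketch, assuming that address and the llama2 model, you can test both settings from a terminal:

# send one non-streaming chat message to the configured model
curl http://localhost:11434/api/chat -d '{
  "model": "llama2",
  "messages": [{ "role": "user", "content": "Hello" }],
  "stream": false
}'

If the settings are correct, the response is a JSON object whose message.content field holds the model's reply.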

Getting Started
1. Install Ollama:

Download and install Ollama from ollama.ai

2. Pull a model:

ollama pull llama2

3. Start the server (see the quick check after this list):

ollama serve

4. Popular models:

llama2, codellama, mistral, phi, gemma
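
Once the server is running, a quick sanity check (assuming the default address http://localhost:11434) is to list the models it has available:

# list locally pulled models
curl http://localhost:11434/api/tags

The response is a JSON object listing the pulled models; any model shown there can be entered as the default model in the settings above.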
