# AI Providers
Ariv supports four AI providers, each with different strengths. This guide helps you understand the trade-offs so you can pick the right one for your workflow. You can switch providers at any time — just update the settings and your existing tags, searches, and Ask Brain history continue to work.
## Google Gemini

Recommended default for most users.
### Models

| Model | Speed | Quality | Notes |
|---|---|---|---|
| Gemini 2.0 Flash | Fast | Good | Default. Best balance of speed and cost |
| Gemini 2.5 Flash | Moderate | Better | Improved reasoning over 2.0 Flash |
| Gemini 2.5 Pro | Slower | Best | Most capable Gemini model |
- Go to aistudio.google.com.
- Sign in with your Google account.
- Create an API key.
- In Ariv: set `ai.provider` to Google Gemini and paste your key into `ai.gemini.apiKey`.
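Once the key is saved, you can sanity-check it outside Ariv. A minimal sketch using Gemini's public REST endpoint for listing models; `YOUR_KEY` is a placeholder for the key you created in AI Studio, and the check assumes network access:

```shell
# Sanity-check a Gemini API key by listing available models.
# YOUR_KEY is a placeholder; requires network access.
GEMINI_API_KEY="YOUR_KEY"
if curl -sf "https://generativelanguage.googleapis.com/v1beta/models?key=${GEMINI_API_KEY}" > /dev/null; then
  echo "Gemini key accepted"
else
  echo "Gemini key rejected (or no network)"
fi
```

If the key is rejected here, it will also fail inside Ariv, so this isolates key problems from app configuration problems.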
### Why Choose Gemini

- Free tier available. Google offers a generous free tier for the Gemini API, making it the easiest way to try Ariv’s AI features without committing to a paid account.
- Fast responses. Gemini 2.0 Flash is one of the quickest models available, which matters for a snappy auto-tagging and Ask Brain experience.
- Good all-rounder. Handles tagging, summarization, and Q&A well across a range of note types.
## OpenAI

### Models

| Model | Speed | Quality | Notes |
|---|---|---|---|
| GPT-4o | Moderate | Best | Most capable, handles complex questions |
| GPT-4o Mini | Fast | Good | Lower cost, good for routine tasks |
| o3-mini | Slower | Best | Strong reasoning, good for analytical queries |
- Go to platform.openai.com.
- Sign in and navigate to the API keys section.
- Create a new secret key.
- In Ariv: set `ai.provider` to OpenAI and paste your key into `ai.openai.apiKey`.
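As with the other providers, you can verify the key works before using it in Ariv. A hedged sketch against OpenAI's model-listing endpoint; `sk-YOUR_KEY` is a placeholder, and the check assumes network access:

```shell
# Sanity-check an OpenAI API key by listing available models.
# sk-YOUR_KEY is a placeholder; requires network access.
OPENAI_API_KEY="sk-YOUR_KEY"
if curl -sf https://api.openai.com/v1/models -H "Authorization: Bearer ${OPENAI_API_KEY}" > /dev/null; then
  echo "OpenAI key accepted"
else
  echo "OpenAI key rejected (or no network)"
fi
```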
### Why Choose OpenAI

- Mature ecosystem. If you already have an OpenAI API account with billing set up, connecting it to Ariv takes seconds.
- Strong at structured tasks. GPT-4o is particularly good at extracting structured information from messy notes — useful for auto-tagging and action item detection.
- Reasoning models. o3-mini excels at analytical questions where you need the AI to think through a problem rather than just retrieve information.
## Anthropic (Claude)

### Models

| Model | Speed | Quality | Notes |
|---|---|---|---|
| Claude Sonnet 4.5 | Moderate | Best | Balanced quality and speed |
| Claude Haiku 4.5 | Fast | Good | Quick responses, lower cost |
- Go to console.anthropic.com.
- Sign in and create an API key from the dashboard.
- In Ariv: set `ai.provider` to Anthropic and paste your key into `ai.anthropic.apiKey`.
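You can verify the key before configuring Ariv. A hedged sketch using Anthropic's model-listing endpoint, which requires both the key header and an API version header; `YOUR_KEY` is a placeholder, and the check assumes network access:

```shell
# Sanity-check an Anthropic API key by listing available models.
# YOUR_KEY is a placeholder; requires network access.
ANTHROPIC_API_KEY="YOUR_KEY"
if curl -sf https://api.anthropic.com/v1/models \
     -H "x-api-key: ${ANTHROPIC_API_KEY}" \
     -H "anthropic-version: 2023-06-01" > /dev/null; then
  echo "Anthropic key accepted"
else
  echo "Anthropic key rejected (or no network)"
fi
```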
### Why Choose Anthropic

- Excellent writing quality. Claude tends to produce well-structured, nuanced responses — especially noticeable in Ask Brain answers and note summarization.
- Good at following instructions. Claude is strong at adhering to the specific formatting and tagging conventions Ariv uses, leading to clean auto-tag suggestions.
- Strong with long context. Claude handles long notes well, which matters when Ariv sends note content for AI tagging or Ask Brain context.
## Ollama (Local)

For privacy-focused users and offline work.
Ollama runs AI models entirely on your machine. No data ever leaves your computer.
- Install Ollama from ollama.ai.
- Pull a model: open a terminal and run
ollama pull llama3.1:8b(or whichever model you prefer). - Make sure Ollama is running (it starts a local server automatically).
- In Ariv: set
ai.providerto Ollama, then configure:ai.ollama.url— the server address (default:http://localhost:11434)ai.ollama.model— the model name (e.g.,llama3.1:8b)
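Before pointing Ariv at Ollama, you can confirm the server is actually answering at the address from `ai.ollama.url`. A minimal check against Ollama's `/api/tags` endpoint, which lists the installed models:

```shell
# Check that the local Ollama server answers at the default address.
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama server reachable"
else
  echo "Ollama server not reachable - start it with: ollama serve"
fi
```

If you changed the port or run Ollama on another machine, substitute that address here and in `ai.ollama.url`.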
### Popular Models

| Model | Size | Quality | Notes |
|---|---|---|---|
| llama3.1:8b | ~4.7GB | Good | Best balance of quality and resource usage |
| mistral | ~4.1GB | Good | Fast, solid general-purpose model |
| phi3 | ~2.3GB | Moderate | Lightweight, runs on modest hardware |
### Why Choose Ollama

- Total privacy. Your notes never leave your computer. Zero network requests for AI features.
- No API costs. Once you download a model, usage is completely free forever.
- Works offline. Use AI features on a plane, in a cafe with no Wi-Fi, or anywhere else without internet.
### Ollama Troubleshooting

- “Connection refused” error: Make sure Ollama is running. Open a terminal and run `ollama serve` if it’s not started.
- Slow responses: Try a smaller model like `phi3`, or ensure no other heavy processes are consuming system resources.
- Model not found: Run `ollama list` in a terminal to see which models are installed, and `ollama pull <model>` to download new ones.
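A quick way to run through the CLI-side checks above in one step (assumes the `ollama` CLI is on your PATH after installation):

```shell
# List installed models if the ollama CLI is available.
if command -v ollama > /dev/null; then
  echo "ollama CLI found; installed models:"
  ollama list
else
  echo "ollama CLI not found - install it from ollama.ai"
fi
```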
## Custom Models

All four providers regularly release new models. If you want to use a model that isn’t listed in Ariv’s dropdown, you can specify it directly:

- Set your provider and API key as usual.
- Enter the model’s ID in `ai.customModel`. This overrides the dropdown selection.
This is useful for:
- Accessing newer models that haven’t been added to Ariv’s dropdown yet.
- Using fine-tuned models if your provider supports them.
- Specifying model variants (e.g., specific version tags or quantization levels for Ollama).
## Switching Providers

You can change providers at any time without losing anything. Tags, Ask Brain history, and search indexes are stored locally and are independent of which AI provider generated them. Simply update `ai.provider` and the corresponding API key in Settings, and Ariv will start using the new provider for all future AI requests.
Related: AI Setup — step-by-step configuration guide | Auto-Tagging — how AI refines tag suggestions | Semantic Search — local AI-powered meaning search