
AI Providers

Ariv supports four AI providers, each with different strengths. This guide helps you understand the trade-offs so you can pick the right one for your workflow. You can switch providers at any time — just update the settings and your existing tags, searches, and Ask Brain history continue to work.

Google Gemini

Recommended default for most users.

Model | Speed | Quality | Notes
Gemini 2.0 Flash | Fast | Good | Default. Best balance of speed and cost
Gemini 2.5 Flash | Moderate | Better | Improved reasoning over 2.0 Flash
Gemini 2.5 Pro | Slower | Best | Most capable Gemini model
  1. Go to aistudio.google.com.
  2. Sign in with your Google account.
  3. Create an API key.
  4. In Ariv: set ai.provider to Google Gemini and paste your key into ai.gemini.apiKey.
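Assuming Ariv stores these settings in a JSON-style file (the exact format your Settings UI shows may differ), the steps above would produce a configuration roughly like this, with a placeholder where your real key goes:

```json
{
  "ai.provider": "Google Gemini",
  "ai.gemini.apiKey": "YOUR_GEMINI_API_KEY"
}
```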
Why choose Gemini:

  • Free tier available. Google offers a generous free tier for the Gemini API, making it the easiest way to try Ariv’s AI features without committing to a paid account.
  • Fast responses. Gemini 2.0 Flash is one of the quickest models available, which matters for a snappy auto-tagging and Ask Brain experience.
  • Good all-rounder. Handles tagging, summarization, and Q&A well across a range of note types.

OpenAI

Model | Speed | Quality | Notes
GPT-4o | Moderate | Best | Most capable, handles complex questions
GPT-4o Mini | Fast | Good | Lower cost, good for routine tasks
o3-mini | Slower | Best | Strong reasoning, good for analytical queries
  1. Go to platform.openai.com.
  2. Sign in and navigate to the API keys section.
  3. Create a new secret key.
  4. In Ariv: set ai.provider to OpenAI and paste your key into ai.openai.apiKey.
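Under the same assumption of a JSON-style settings file (the actual format may differ), the OpenAI setup above would look roughly like:

```json
{
  "ai.provider": "OpenAI",
  "ai.openai.apiKey": "YOUR_OPENAI_SECRET_KEY"
}
```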
Why choose OpenAI:

  • Mature ecosystem. If you already have an OpenAI API account with billing set up, connecting it to Ariv takes seconds.
  • Strong at structured tasks. GPT-4o is particularly good at extracting structured information from messy notes — useful for auto-tagging and action item detection.
  • Reasoning models. o3-mini excels at analytical questions where you need the AI to think through a problem rather than just retrieve information.

Anthropic (Claude)

Model | Speed | Quality | Notes
Claude Sonnet 4.5 | Moderate | Best | Balanced quality and speed
Claude Haiku 4.5 | Fast | Good | Quick responses, lower cost
  1. Go to console.anthropic.com.
  2. Sign in and create an API key from the dashboard.
  3. In Ariv: set ai.provider to Anthropic and paste your key into ai.anthropic.apiKey.
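Again assuming a JSON-style settings file (the Settings UI may present this differently), the Anthropic setup above would look roughly like:

```json
{
  "ai.provider": "Anthropic",
  "ai.anthropic.apiKey": "YOUR_ANTHROPIC_API_KEY"
}
```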
Why choose Claude:

  • Excellent writing quality. Claude tends to produce well-structured, nuanced responses — especially noticeable in Ask Brain answers and note summarization.
  • Good at following instructions. Claude is strong at adhering to the specific formatting and tagging conventions Ariv uses, leading to clean auto-tag suggestions.
  • Strong with long context. Claude handles long notes well, which matters when Ariv sends note content for AI tagging or Ask Brain context.

Ollama

For privacy-focused users and offline work.

Ollama runs AI models entirely on your machine. No data ever leaves your computer.

  1. Install Ollama from ollama.ai.
  2. Pull a model: open a terminal and run ollama pull llama3.1:8b (or whichever model you prefer).
  3. Make sure Ollama is running (it starts a local server automatically).
  4. In Ariv: set ai.provider to Ollama, then configure:
    • ai.ollama.url — the server address (default: http://localhost:11434)
    • ai.ollama.model — the model name (e.g., llama3.1:8b)
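Assuming the same JSON-style settings file as above (the actual format may differ), a complete Ollama configuration from these steps would look roughly like:

```json
{
  "ai.provider": "Ollama",
  "ai.ollama.url": "http://localhost:11434",
  "ai.ollama.model": "llama3.1:8b"
}
```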
Model | Size | Quality | Notes
llama3.1:8b | ~4.7 GB | Good | Best balance of quality and resource usage
mistral | ~4.1 GB | Good | Fast, solid general-purpose model
phi3 | ~2.3 GB | Moderate | Lightweight, runs on modest hardware
Why choose Ollama:

  • Total privacy. Your notes never leave your computer. Zero network requests for AI features.
  • No API costs. Once you download a model, usage is completely free forever.
  • Works offline. Use AI features on a plane, in a cafe with no Wi-Fi, or anywhere else without internet.
Troubleshooting:

  • “Connection refused” error: Make sure Ollama is running. Open a terminal and run ollama serve if it’s not started.
  • Slow responses: Try a smaller model like phi3 or ensure no other heavy processes are consuming system resources.
  • Model not found: Run ollama list in a terminal to see which models are installed, and ollama pull <model> to download new ones.
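The troubleshooting checks above can also be done programmatically. This is a minimal sketch (not part of Ariv) that asks the local Ollama server for its installed models via the standard /api/tags endpoint, assuming the default address; if the server isn’t running, it reports the same “connection refused” situation described above:

```python
import json
import urllib.error
import urllib.request


def list_ollama_models(base_url="http://localhost:11434"):
    """Return installed Ollama model names, or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        # The /api/tags response has the shape {"models": [{"name": ...}, ...]}.
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        # Connection refused usually means Ollama isn't running;
        # start it with `ollama serve`.
        return None


models = list_ollama_models()
if models is None:
    print("Ollama server not reachable -- run `ollama serve` first.")
else:
    print("Installed models:", models)
```

If this prints an empty list while the server is up, no models have been pulled yet; run ollama pull &lt;model&gt; as described above.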

Using a custom model

All four providers regularly release new models. If you want to use a model that isn’t listed in Ariv’s dropdown, you can specify it directly:

  1. Set your provider and API key as usual.
  2. Enter the model’s ID in ai.customModel.
  3. This overrides the dropdown selection.

This is useful for:

  • Accessing newer models that haven’t been added to Ariv’s dropdown yet.
  • Using fine-tuned models if your provider supports them.
  • Specifying model variants (e.g., specific version tags or quantization levels for Ollama).
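As a sketch, overriding the dropdown with a specific model ID (assuming the same JSON-style settings file as in the earlier examples, and a hypothetical model ID) might look like:

```json
{
  "ai.provider": "OpenAI",
  "ai.openai.apiKey": "YOUR_OPENAI_SECRET_KEY",
  "ai.customModel": "gpt-4.1-mini"
}
```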

Switching providers

You can change providers at any time without losing anything. Tags, Ask Brain history, and search indexes are stored locally and are independent of which AI provider generated them. Simply update ai.provider and the corresponding API key in Settings, and Ariv will start using the new provider for all future AI requests.


Related: AI Setup — step-by-step configuration guide | Auto-Tagging — how AI refines tag suggestions | Semantic Search — local AI-powered meaning search