Local Model

An AI model running on your own server (e.g., via Ollama), keeping data private and costs near zero.

Why it matters

Local models solve two problems: privacy (data never leaves your infrastructure) and cost (after the initial hardware outlay, inference costs nothing beyond electricity).

In practice

Ollama running llama3.2 on our Hetzner server handles 80-95% of AI tasks for $0. Data stays on European servers.
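As a concrete sketch of what "handling a task" looks like, the snippet below queries a local Ollama server over its HTTP API (port 11434 by default). The server URL and the assumption that llama3.2 is already pulled are illustrative; adapt them to your setup.

```python
import json
import urllib.request

# Ollama listens on localhost:11434 by default; adjust if your
# server (e.g. a Hetzner box) is reachable at another address.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object instead of a token stream
    }).encode("utf-8")


def ask(model: str, prompt: str) -> str:
    """Send the prompt to the local server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Assumes "llama3.2" has been pulled on the server beforehand.
    print(ask("llama3.2", "Summarize GDPR in one sentence."))
```

No API key and no per-request fee: the only cost is the machine the model runs on, which is the point of the entry above.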
