Self-hostable AI models

Open Source Models

8 companies in this category

Overview

Open Source Models provide self-hostable AI capabilities for developers and organizations that need data sovereignty, customization, or cost control. Meta's Llama family leads adoption for self-hosted agent deployments.

DeepSeek's MIT-licensed MoE models integrate reasoning into tool-use with efficient 7B/14B variants. Qwen offers Apache 2.0 models with native MCP support running on consumer GPUs.

Hermes by Nous Research provides agent-first models with persistent memory and skill learning. The open-source tier matters enormously for agentic commerce: organizations processing sensitive financial transactions or operating in regulated markets often cannot send data to third-party model providers, making self-hosted inference a requirement rather than a preference.
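Most of the models above expose their tool-calling ability through an OpenAI-compatible chat-completions API when served locally (e.g., via vLLM or Ollama). As a minimal sketch of what a self-hosted tool-calling request looks like, the snippet below builds such a payload; the endpoint URL, model name, and `check_inventory` tool are hypothetical placeholders, not part of any specific product.

```python
import json

# Hypothetical local inference endpoint (vLLM- or Ollama-style server).
BASE_URL = "http://localhost:8000/v1/chat/completions"

def build_tool_call_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible chat payload offering the model one tool."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical commerce tool for illustration only.
                    "name": "check_inventory",
                    "description": "Look up stock for a product SKU.",
                    "parameters": {
                        "type": "object",
                        "properties": {"sku": {"type": "string"}},
                        "required": ["sku"],
                    },
                },
            }
        ],
        # Let the model decide whether to call the tool.
        "tool_choice": "auto",
    }

payload = build_tool_call_request(
    "llama-3.1-8b-instruct", "Is SKU A-100 in stock?"
)
print(json.dumps(payload, indent=2))
```

Because the request schema is shared across servers, the same payload can be pointed at any of the self-hostable models in this category simply by changing the `model` field, which keeps transaction data on infrastructure the organization controls.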

All Companies (8)

Llama

Meta's open-weight model family with zero-shot tool calling and Llama Stack's standardized agent APIs; the most widely adopted open model for self-hosted agent deployments.

DeepSeek

MIT-licensed MoE model that integrates reasoning directly into tool-use; distilled 7B/14B variants run locally while full V3 activates only 37B of 671B parameters per token.

Qwen

Apache 2.0 models with native MCP support and function-calling; efficient MoE variants (30B-A3B) run on consumer GPUs while leading open-source agent benchmarks.

Mistral

Apache 2.0 frontier MoE models from 3B to 675B with built-in tool-calling; edge-deployable Ministral variants for private commerce agent deployment with European data sovereignty.

Hermes

Nous Research's agent-first open-source models with persistent memory, skill learning from experience, and RL-optimized tool-calling that improves across sessions.

Gemma

Google's open models, including FunctionGemma (270M), an ultra-lightweight function-calling specialist for mobile and edge commerce agents running on phones and IoT devices.

Phi

Microsoft's small models with native function calling purpose-built for edge-device agents; Phi-4-mini runs on phones and consumer hardware with full tool-use capabilities.

GPT-OSS

OpenAI's open-weight models (20B and 120B parameters) released for self-hosting, bringing GPT-level reasoning and tool-calling to private deployments for agent workflows.
