Vendor selection: the AI tools we actually use
A snapshot of our current AI stack — what we deploy for clients, why, and what we avoid.
We get asked at least once a week which AI tools we recommend for clients. Here's our current stack, refreshed quarterly.
Models: Claude Sonnet for most agentic work, GPT-4o for tool-heavy chains, smaller open-weight models for cost-sensitive routing.
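The cost-sensitive routing we describe is just a dispatch rule in front of the model APIs. A minimal sketch, with placeholder model names and thresholds (our real routing criteria are client-specific, not shown here):

```python
# Illustrative cost-aware model router.
# Model names and thresholds are placeholder assumptions,
# not our production configuration.
from dataclasses import dataclass

@dataclass
class Request:
    task: str        # "agentic", "tool_chain", or "classify"
    est_tokens: int  # rough size estimate for the call

def pick_model(req: Request) -> str:
    # Cheap, high-volume work goes to a small open-weight model.
    if req.task == "classify" or req.est_tokens < 200:
        return "small-open-weight"
    # Long chains of tool calls go to GPT-4o.
    if req.task == "tool_chain":
        return "gpt-4o"
    # Default: agentic work on Claude Sonnet.
    return "claude-sonnet"

print(pick_model(Request(task="classify", est_tokens=50)))
print(pick_model(Request(task="tool_chain", est_tokens=4000)))
```

The point is that routing stays boring and auditable: a pure function you can unit-test, not a model deciding which model to call.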
Automation layer: n8n self-hosted for 80% of clients; Make for clients already using it; custom TypeScript when reliability or scale demands it.
Observability: Langfuse for LLM tracing. Standard Datadog/Sentry for everything else.
Vector store: pgvector for most projects, Qdrant when scale or filtering complexity warrants it.
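What pgvector buys you is nearest-neighbor search as plain SQL inside the database you already run. A sketch of the kind of query involved, using pgvector's real `<=>` cosine-distance operator but a hypothetical table and column layout:

```python
# Builds a parameterized nearest-neighbor query for pgvector.
# "<=>" is pgvector's cosine-distance operator; the table and
# column names here are hypothetical examples.
def knn_sql(table: str, k: int = 5) -> str:
    # The query embedding is passed as a bind parameter
    # (e.g. %(query_vec)s with psycopg), never interpolated.
    return (
        f"SELECT id, content, embedding <=> %(query_vec)s AS distance "
        f"FROM {table} "
        f"ORDER BY embedding <=> %(query_vec)s "
        f"LIMIT {k}"
    )

print(knn_sql("documents", k=3))
```

For most clients this, plus an index (`ivfflat` or `hnsw`), is all the "vector database" they need; Qdrant enters the picture when payload filtering or collection size outgrows Postgres.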
What we avoid: anything proprietary we can't self-host or migrate away from easily, and anything that requires moving customer data into a vendor's warehouse without strict contractual guarantees.