Structured knowledge packs that give AI agents the esoteric knowledge missing from their training data — about your products, your people, or your processes. Minimized token cost. Maximized prompt quality. Measurable value.
Esoteric knowledge (EK) is knowledge not found in the weights of frontier LLMs. It's the tribal knowledge in your support team's heads, the gotchas your engineers learned the hard way, the decision patterns your founder never wrote down — the gap between what a model can answer and what an expert actually knows.
ExpertPacks deliver this knowledge to any AI agent in a way that minimizes token cost and maximizes prompt quality through RAG. Every pack is built from atomic, self-contained concept files and is measured by its EK ratio — the proportion of content that frontier models cannot correctly produce on their own. During hydration, every fact is triaged: esoteric knowledge gets maximum treatment, general knowledge gets compressed to scaffolding. The result is dense, high-value context that makes your AI genuinely expert — not just articulate.
Every fact is triaged during hydration — maximize esoteric knowledge, compress what models already know
EK ratio, correctness, hallucination rate, and refusal accuracy — measured, not guessed
Self-contained concept files. One concept per file, each carrying its own definition, body, FAQs, and related terms — authored as a single retrieval unit so what you write is what the agent sees.
Human-readable, AI-consumable, git-versionable — no proprietary formats or lock-in
Three-tier context strategy loads only what's needed per turn
Works with any AI that can read Markdown files
Three reasons web search can't replace an ExpertPack.
When a model confidently hallucinates, it doesn't trigger a search. It doesn't think "I'm unsure, let me look this up" — it thinks it already knows. An ExpertPack loaded into context preempts the hallucination with the correct answer before the model gets a chance to fabricate.
Even with tool-use, the model needs to know what to search for. If it doesn't know about a specific firmware bug, it won't search for the precise query that finds the fix — it'll search generically and get generic results. You can't search for knowledge you don't know exists.
Source code analysis reveals undocumented behavior that exists nowhere online. Expert interviews capture tribal knowledge that was never written down. Person packs contain private stories and reasoning. These are original knowledge sources — no search engine indexes them.
Capture a person — stories, beliefs, relationships, voice, and legacy.
Deep knowledge about a product or platform — concepts, workflows, troubleshooting.
Complex multi-phase processes — phases, decisions, checklists, gotchas.
Combine multiple packs into a single agent deployment with role assignments and context control.
Pick a pack type — person, product, or process. Your AI agent reads the schema and knows exactly what to build.
Talk to the agent, point it at websites, drop in documents, or hand it data exports. It structures everything automatically.
Drop the pack into any AI agent's workspace. Instant domain expertise — no prompt engineering required.
Run evals to measure correctness, completeness, and hallucination rate. Use results to guide targeted improvements.
Open-source ExpertPacks built from real documentation, community forums, and source code analysis. Each pack shows its EK ratio — the percentage of content that frontier AI models cannot produce on their own. Higher EK = more value your AI can't get anywhere else. Download individual packs directly from GitHub — no account required. ⭐ Star the repo if you find them useful!
The open-source home automation platform. Deep practitioner knowledge covering smart home protocols, automation patterns, presence detection, YAML configuration, ESPHome, dashboards, voice assistant, energy management, and security monitoring. Includes community-sourced gotchas and real-world device compatibility data.
The free, open-source 3D modeling, animation, and rendering software used by millions of artists and studios worldwide. Covers polygon modeling, sculpting, animation & rigging, physics simulation, PBR shading, Cycles/EEVEE rendering, Geometry Nodes, compositing, Python scripting, and production workflows.
A practitioner guide for residential solar panel and battery storage systems. Covers system design, panel and battery product comparisons, NEC code compliance, permitting, installation best practices, and troubleshooting.
Basic RAG embeds documents and retrieves top-k chunks. ExpertPacks author each concept as a single self-contained file — definition, body, FAQs, and related terms together — so the retrieval unit matches the unit of meaning. When a concept legitimately needs detail that doesn't belong in the primary atom, a requires: frontmatter field declares the dependency explicitly, and the retrieval layer auto-includes the required atoms alongside the hit.
Self-contained retrieval units. One file per concept carries its definition (opening paragraph), body explanation, FAQs, and related terms — authored as what the agent actually sees. concepts/territory.md (live example).
Declared dependencies (requires:). When an overview atom needs a detail atom to be fully useful, authors declare it in frontmatter: requires: [detail-atom]. The retrieval layer auto-appends the required atoms when the overview is hit — directional, transitive, and bounded by depth and token caps so it never displaces primary results.
Soft target 500–800 tokens per concept, hard ceiling at 1,000. Oversized concepts split into independent atoms connected by requires: rather than hierarchical file groups. Every split is still a first-class retrieval unit on its own.
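The dependency expansion described above can be sketched in a few lines. This is an illustrative sketch, not the actual ExpertPack retrieval code: the function name, the atom dict shape, and the default limits are all assumptions. It shows the key properties the spec names — traversal is transitive, depth-bounded, and token-capped, and primary hits are never displaced.

```python
from collections import deque

def expand_requires(hit_ids, atoms, max_depth=2, token_cap=2000):
    """Append required atoms to primary hits, breadth-first.
    Primary hits are included first and never displaced; declared
    dependencies only fill the remaining token budget, down to
    `max_depth` levels of transitive requires."""
    included = list(hit_ids)
    seen = set(hit_ids)
    budget = token_cap
    queue = deque((dep, 1) for h in hit_ids for dep in atoms[h].get("requires", []))
    while queue:
        atom_id, depth = queue.popleft()
        if atom_id in seen or depth > max_depth:
            continue
        cost = atoms[atom_id]["tokens"]
        if cost > budget:
            continue  # this dep would blow the cap; keep scanning cheaper ones
        seen.add(atom_id)
        included.append(atom_id)
        budget -= cost
        for dep in atoms[atom_id].get("requires", []):
            queue.append((dep, depth + 1))
    return included
```

Because the budget applies only to appended dependencies, a hit's own size can never push its requires chain out — the caps bound the extras, not the answer.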
Frontmatter tracks where content came from — video timestamps, doc URLs, interviews. Trace any fact back to its origin for verification.
Open-source tooling for building, measuring, and deploying ExpertPacks.
Files are authored as self-contained retrieval units (500–800 tokens each). Any RAG chunker passes them through intact — no external tooling needed. The schema IS the chunking strategy. Workflows stay atomic; reference content is naturally scoped. Per-file overrides via frontmatter.
Blind-probes frontier models to measure what % of your pack they can't produce alone.
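The blind-probe loop reduces to a simple shape. A hedged sketch — the function names and concept fields here are hypothetical, and in practice `judge` would be an LLM-as-judge call rather than a string comparison: ask the model each concept's question without the pack, score the answer against the pack's ground truth, and report the share it missed.

```python
def ek_ratio(concepts, ask_model, judge):
    """EK ratio sketch: probe a frontier model with no pack context,
    judge each answer against the pack's ground truth, and return the
    fraction of concepts the model cannot produce on its own."""
    missed = 0
    for c in concepts:
        answer = ask_model(c["question"])          # no pack context given
        if not judge(answer, c["ground_truth"]):   # LLM-as-judge in practice
            missed += 1
    return missed / len(concepts)
```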
Automated eval execution with LLM-as-judge scoring for correctness, hallucination, and refusal.
Battle-tested with OpenClaw. Add pack path to memorySearch.extraPaths — instant expertise.
Generic RAG dumps documents into a vector store and stuffs whatever it retrieves into context — hoping the model will sort it out. You pay for every irrelevant token on every turn.
ExpertPacks use a three-tier context strategy: core identity loads every session, knowledge loads on topic match, and heavy content loads only on demand. Your agent gets the right information at the right time — not everything all the time.
requires: links between atoms — honored at retrieval time
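The three tiers above can be sketched as a per-turn assembly step. This is an illustrative sketch under assumed names — the pack structure, topic-matching heuristic, and function signature are not the actual ExpertPack loader: tier 1 loads unconditionally, tier 2 loads on topic match, tier 3 loads only when explicitly requested.

```python
def assemble_context(pack, user_message, heavy_requested=None):
    """Three-tier loading sketch: core identity every turn, knowledge
    atoms only when the turn matches their topics, heavy content only
    on explicit demand."""
    context = list(pack["core"])                  # tier 1: always loaded
    msg = user_message.lower()
    for atom in pack["knowledge"]:                # tier 2: topic match
        if any(topic in msg for topic in atom["topics"]):
            context.append(atom["body"])
    for name in (heavy_requested or []):          # tier 3: on demand
        context.append(pack["heavy"][name])
    return "\n\n".join(context)
```

A real implementation would match topics with the retrieval index rather than substring checks, but the cost model is the same: you pay for tier 2 and tier 3 only when they earn their tokens.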
ExpertPack MCP is an open-source MCP server that turns any ExpertPack into a live, queryable knowledge service. Connect Claude Desktop, Cursor, Windsurf, or any MCP-compatible host — your pack becomes a first-class tool your agent can call on demand.
BM25 + vector search (sqlite-vec), metadata boosting, MMR re-ranking, and graph-aware traversal via pack wikilinks.
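The scoring pipeline combines a lexical score, a vector score, and a metadata boost, then diversifies the top results with MMR. A minimal sketch of those two steps — function names, the blend weights, and the score shapes are assumptions, not the server's actual code:

```python
def hybrid_score(bm25, vec, boost=1.0, alpha=0.5):
    """Blend normalized lexical and vector scores, then apply a
    metadata boost (e.g. for content type or EK score)."""
    return boost * (alpha * bm25 + (1 - alpha) * vec)

def mmr_rerank(candidates, sim, k=3, lam=0.7):
    """Maximal Marginal Relevance: greedily pick items that balance
    query relevance against similarity to already-selected items.
    `candidates` is a list of (doc_id, relevance); `sim(a, b)` returns
    doc-doc similarity in [0, 1]."""
    selected = []
    pool = dict(candidates)
    while pool and len(selected) < k:
        def mmr(doc):
            redundancy = max((sim(doc, s) for s in selected), default=0.0)
            return lam * pool[doc] - (1 - lam) * redundancy
        best = max(pool, key=mmr)
        selected.append(best)
        del pool[best]
    return selected
```

MMR is what keeps two near-duplicate atoms from crowding out a third, less similar hit — redundancy is penalized even when raw relevance is high.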
Files are atomic retrieval units — no arbitrary splitting. Frontmatter is indexed for filtering; provenance metadata flows through to every result.
Streamable HTTP transport (cloud-ready) + stdio (local dev). Works with Claude Desktop, Cursor, Windsurf, Claude.ai Projects, and any custom MCP client.
Point it at any ExpertPack with a single config file. Runs locally or in the cloud — no vendor lock-in, no external dependencies beyond your embedding provider.
Standardized eval sets measure correctness, completeness, hallucination rate, and refusal accuracy. Run automated evals with the included eval runner. Track quality over time with baselines and scorecards.
CLI tools check pack structure, frontmatter, token budgets, and cross-atom requires: references. Catch broken dependencies and oversized atoms before retrieval does.
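The two checks that catch the most breakage — dangling requires: references and oversized atoms — fit in a short validator. A sketch only, with assumed names and an assumed atom shape; the real CLI also validates structure and frontmatter fields:

```python
def lint_pack(atoms, soft=800, hard=1000):
    """Validate cross-atom requires: references and token budgets.
    Returns a list of human-readable problems (empty list = clean)."""
    problems = []
    for name, atom in atoms.items():
        for dep in atom.get("requires", []):
            if dep not in atoms:
                problems.append(f"{name}: requires unknown atom '{dep}'")
        tokens = atom["tokens"]
        if tokens > hard:
            problems.append(f"{name}: {tokens} tokens exceeds hard ceiling {hard}")
        elif tokens > soft:
            problems.append(f"{name}: {tokens} tokens over soft target {soft}")
    return problems
```

Running this in CI means a broken dependency fails the build instead of silently degrading retrieval.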
Population methods guide covers every knowledge source — conversations, websites, documents, video, support tickets. Eval runner automates quality scoring. More tooling on the way.
Every ExpertPack is a valid Obsidian vault. Open any pack in Obsidian and get live Dataview queries by content type, EK score, and tags — graph view, full-text search, and template-based authoring included. Standard Markdown links keep packs fully readable on GitHub and in any editor simultaneously.
Your agent accumulates months of knowledge — identity, preferences, infrastructure expertise, behavioral patterns, relationships. Now it can distill all of that into a portable, structured pack that bootstraps a new instance in minutes.
The agent scans its own workspace, classifies every knowledge chunk, and proposes constituent packs — agent, person, product, process.
Raw state (journals, configs, memory files) is compressed into structured, deduplicated EP-compliant files. 438KB raw → 31KB distilled.
A composite EP wires the agent pack (voice) with person/product/process packs (knowledge). Ready to import on any platform.
Your agent dies — spin up a new one from its EP. Immediately competent, not starting from scratch.
Move from one AI platform to another. Your agent's knowledge comes with it — portable by design.
Share domain expertise between agents. One agent's product knowledge becomes another's via composite.
Distribute well-trained agent configurations as portable packs. Built-in privacy controls keep secrets out.
ExpertPack was designed and battle-tested with OpenClaw — the open-source AI agent platform. Every schema change is validated against real agent deployments.
Open source. Apache 2.0. Free forever.
If ExpertPack is useful to you, a GitHub star helps others discover it.