Turn any AI agent into a
domain expert

Structured knowledge packs that give AI agents the esoteric knowledge missing from their training data — about your products, your people, or your processes. Minimized token cost. Maximized prompt quality. Measurable value.

Give your AI the knowledge it's missing

Esoteric knowledge (EK) is knowledge not found in the weights of frontier LLMs. It's the tribal knowledge in your support team's heads, the gotchas your engineers learned the hard way, the decision patterns your founder never wrote down — the gap between what a model can answer and what an expert actually knows.

ExpertPacks deliver this knowledge to any AI agent in a way that minimizes token cost and maximizes prompt quality through RAG. Every pack is built from atomic-conceptual concept files and is measured by its EK ratio — the proportion of content that frontier models cannot correctly produce on their own. During hydration, every fact is triaged: esoteric knowledge gets maximum treatment, general knowledge gets compressed to scaffolding. The result is dense, high-value context that makes your AI genuinely expert — not just articulate.

🧠

EK-Optimized

Every fact is triaged during hydration — maximize esoteric knowledge, compress what models already know

📊

Measurable Quality

EK ratio, correctness, hallucination rate, and refusal accuracy — measured, not guessed

🎯

Atomic-Conceptual Design

Self-contained concept files. One concept per file, each carrying its own definition, body, FAQs, and related terms — authored as a single retrieval unit so what you write is what the agent sees.

📝

Markdown-First

Human-readable, AI-consumable, git-versionable — no proprietary formats or lock-in

⚡

Token-Efficient

Three-tier context strategy loads only what's needed per turn

🔌

Agent-Agnostic

Works with any AI that can read Markdown files

"Why can't my AI just search for this?"

Three reasons web search can't replace an ExpertPack.

🤥

Models don't know what they don't know

When a model confidently hallucinates, it doesn't trigger a search. It doesn't think "I'm unsure, let me look this up" — it thinks it already knows. An ExpertPack loaded into context preempts the hallucination with the correct answer before the model gets a chance to fabricate.

🔍

Search requires the right question

Even with tool-use, the model needs to know what to search for. If it doesn't know about a specific firmware bug, it won't search for the precise query that finds the fix — it'll search generically and get generic results. You can't search for knowledge you don't know exists.

🔒

Not all knowledge is on the internet

Source code analysis reveals undocumented behavior that exists nowhere online. Expert interviews capture tribal knowledge that was never written down. Person packs contain private stories and reasoning. These are original knowledge sources — no search engine indexes them.

Three pack types, infinite use cases

🧑

Person Packs

Capture a person — stories, beliefs, relationships, voice, and legacy.

Use cases: Personal AI assistant, family archive, memorial AI, digital legacy, founder knowledge capture
📦

Product Packs

Deep knowledge about a product or platform — concepts, workflows, troubleshooting.

Use cases: AI support agent, sales assistant, training tool, onboarding guide, product documentation
🔄

Process Packs

Complex multi-phase processes — phases, decisions, checklists, gotchas.

Use cases: Home building guide, business formation, project management, certification processes
🔗

Composites

Combine multiple packs into a single agent deployment with role assignments and context control.

Use cases: CEO AI assistant, multi-product support bot, company knowledge base, personal legacy AI

How it works

1

Point your AI at the schema

Pick a pack type — person, product, or process. Your AI agent reads the schema and knows exactly what to build.

2

Feed it knowledge

Talk to the agent, point it at websites, drop in documents, or hand it data exports. It structures everything automatically.

3

Deploy the pack

Drop the pack into any AI agent's workspace. Instant domain expertise — no prompt engineering required.

4

Measure & improve

Run evals to measure correctness, completeness, and hallucination rate. Use results to guide targeted improvements.

Free community packs

Open-source ExpertPacks built from real documentation, community forums, and source code analysis. Each pack shows its EK ratio — the percentage of content that frontier AI models cannot produce on their own. Higher EK = more value your AI can't get anywhere else. Download individual packs directly from GitHub — no account required. ⭐ Star the repo if you find them useful!

🏠

Home Assistant

Composite Pack EK 54%

The open-source home automation platform. Deep practitioner knowledge covering smart home protocols, automation patterns, presence detection, YAML configuration, ESPHome, dashboards, voice assistant, energy management, and security monitoring. Includes community-sourced gotchas and real-world device compatibility data.

📄 61 files 📏 684 KB 📝 10,400+ lines
Zigbee / Z-Wave / Matter Automations Presence Detection ESPHome Dashboards Voice Assistant Energy
🎨

Blender 3D

Product Pack EK 42%

The free, open-source 3D modeling, animation, and rendering software used by millions of artists and studios worldwide. Covers polygon modeling, sculpting, animation & rigging, physics simulation, PBR shading, Cycles/EEVEE rendering, Geometry Nodes, compositing, Python scripting, and production workflows.

📄 35 files 📏 520 KB 📝 7,200+ lines
Modeling & Topology Animation & Rigging Sculpting Shading & PBR Cycles / EEVEE Geometry Nodes Physics & Simulation Compositing Python Scripting Game Export Production Workflows
☀️

Solar & Battery DIY

Composite Pack EK 52%

A practitioner guide for residential solar panel and battery storage systems. Covers system design, panel and battery product comparisons, NEC code compliance, permitting, installation best practices, and troubleshooting.

📄 46 files 📏 428 KB 📝 3,800+ lines
System Design Panel Selection Battery Storage NEC Code Permitting Troubleshooting

Atomic-conceptual retrieval

Basic RAG embeds documents and retrieves top-k chunks. ExpertPacks author each concept as a single self-contained file — definition, body, FAQs, and related terms together — so the retrieval unit matches the unit of meaning. When a concept legitimately needs detail that doesn't belong in the primary atom, a requires: frontmatter field declares the dependency explicitly, and the retrieval layer auto-includes the required atoms alongside the hit.

📦

Atomic-Conceptual Concept Files

Self-contained retrieval units. One file per concept carries its definition (opening paragraph), body explanation, FAQs, and related terms — authored as what the agent actually sees. concepts/territory.md (live example).

🔗

Declared Dependencies (requires:)

When an overview atom needs a detail atom to be fully useful, authors declare it in frontmatter: requires: [detail-atom]. The retrieval layer auto-appends the required atoms when the overview is hit — directional, transitive, and bounded by depth and token caps so it never displaces primary results.
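As a concrete illustration, a hypothetical overview atom's frontmatter might declare its dependency like this (field names other than `requires:` are assumptions for illustration, not the published schema):

```yaml
# concepts/inverter-sizing.md -- hypothetical overview atom
---
title: Inverter Sizing
type: concept
requires: [inverter-clipping-ratios]   # detail atom auto-appended on retrieval
tags: [system-design]
---
```

When retrieval hits `inverter-sizing`, the layer appends `inverter-clipping-ratios` alongside it, bounded by the configured depth and token caps.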

📐

Right-Sized Atoms

Soft target 500–800 tokens per concept, hard ceiling at 1,000. Oversized concepts split into independent atoms connected by requires: rather than hierarchical file groups. Every split is still a first-class retrieval unit on its own.

🔍

Source Provenance

Frontmatter tracks where content came from — video timestamps, doc URLs, interviews. Trace any fact back to its origin for verification.

Tools & Integrations

Open-source tooling for building, measuring, and deploying ExpertPacks.

📐

Retrieval-Ready by Design

Files are authored as self-contained retrieval units (500–800 tokens each). Any RAG chunker passes them through intact — no external tooling needed. The schema IS the chunking strategy. Workflows stay atomic; reference content is naturally scoped. Per-file overrides via frontmatter.
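Because the file boundary is the chunk boundary, a "chunker" reduces to reading files. A minimal sketch of that idea (the `concepts/*.md` layout and dict shape are assumptions, not the ExpertPacks tooling):

```python
# Hedged sketch: each concept file is already one retrieval unit,
# so chunking a pack is just reading its files intact.
from pathlib import Path

def chunk_pack(pack_dir: str) -> list[dict]:
    """Return one chunk per concept file -- no splitting, no overlap."""
    chunks = []
    for path in sorted(Path(pack_dir).glob("concepts/*.md")):
        chunks.append({"id": path.stem, "text": path.read_text(encoding="utf-8")})
    return chunks
```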

📊

EK Ratio Measurement

Blind-probes frontier models to measure what % of your pack they can't produce alone.
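The metric itself is simple; a sketch of the idea follows, where `probe` and `judge` stand in for the LLM calls the real harness makes (both names and the fact shape are assumptions, not the measurement tool's API):

```python
# Illustrative EK-ratio sketch: probe a frontier model *without* the pack
# and count the facts it cannot reproduce. probe(question) returns the
# model's blind answer; judge(fact, answer) returns True if it matches.
def ek_ratio(facts, probe, judge):
    if not facts:
        return 0.0
    missed = sum(1 for f in facts if not judge(f, probe(f["question"])))
    return missed / len(facts)
```

A pack with EK 54% is one where the probed models missed 54% of its facts.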

🧪

Eval Runner

Automated eval execution with LLM-as-judge scoring for correctness, hallucination, and refusal.

OpenClaw Integration

Battle-tested with OpenClaw. Add your pack's path to memorySearch.extraPaths — instant expertise.
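A minimal sketch of the wiring, assuming a JSON config file (the surrounding structure and pack path are illustrative; only the `memorySearch.extraPaths` key comes from the integration itself):

```json
{
  "memorySearch": {
    "extraPaths": ["./packs/solar-battery-diy"]
  }
}
```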

Stop burning tokens on context bloat

Generic RAG dumps documents into a vector store and loads everything into context — hoping the model will sort it out. You pay for every irrelevant token on every turn.

ExpertPacks use a three-tier context strategy: core identity loads every session, knowledge loads on topic match, and heavy content loads only on demand. Your agent gets the right information at the right time — not everything all the time.
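The three-tier policy can be sketched as a selection function — a hedged illustration under assumed data shapes, not the actual loading mechanism:

```python
# Tier 1 (core identity) loads every session; tier 2 (knowledge) loads
# on topic match; tier 3 (heavy content) loads only when requested.
def select_context(pack, topic, requested=()):
    ctx = list(pack["core"])                              # tier 1: always
    ctx += [a["text"] for a in pack["knowledge"]          # tier 2: topic match
            if topic in a["tags"]]
    ctx += [pack["heavy"][name] for name in requested     # tier 3: on demand
            if name in pack["heavy"]]
    return ctx
```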

  • Token cost: Tiered loading — only pay for what this turn actually needs
  • Retrieval: Multi-layer — summaries, propositions, glossary, lead summaries
  • Structure: Schemas model real expertise — not just document chunks
  • Quality: Eval framework measures correctness and catches hallucinations
  • Provenance: Every fact traceable to its source — videos, docs, interviews
  • Dependencies: Declared requires: links between atoms — honored at retrieval time
  • Composition: Combine packs — person + product + process in one agent
  • Portability: Plain Markdown — works anywhere, version-controlled

Serve packs as an API — ExpertPack MCP

ExpertPack MCP is an open-source MCP server that turns any ExpertPack into a live, queryable knowledge service. Connect Claude Desktop, Cursor, Windsurf, or any MCP-compatible host — your pack becomes a first-class tool your agent can call on demand.

🔍

Hybrid Retrieval

BM25 + vector search (sqlite-vec), metadata boosting, MMR re-ranking, and graph-aware traversal via pack wikilinks.
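Of those stages, MMR re-ranking is the most compact to sketch: it trades relevance against redundancy so near-duplicate atoms don't crowd the results. The data shapes below are assumptions for illustration, not the server's API:

```python
# Maximal Marginal Relevance: pick the doc maximizing
# lam * relevance - (1 - lam) * (max similarity to already-selected docs).
def mmr(candidates, rel, sim, k=5, lam=0.5):
    """rel: doc -> relevance score; sim: (doc, doc) -> similarity."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        best = max(pool, key=lambda d: lam * rel[d]
                   - (1 - lam) * max((sim[(d, s)] for s in selected), default=0.0))
        selected.append(best)
        pool.remove(best)
    return selected
```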

📁

EP-Native Chunking

Files are atomic retrieval units — no arbitrary splitting. Frontmatter is indexed for filtering; provenance metadata flows through to every result.

🔌

Any MCP Host

Streamable HTTP transport (cloud-ready) + stdio (local dev). Works with Claude Desktop, Cursor, Windsurf, Claude.ai Projects, and any custom MCP client.

⚙️

Self-Hostable

Point it at any ExpertPack with a single config file. Runs locally or in the cloud — no vendor lock-in, no external dependencies beyond your embedding provider.
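For instance, a single-file config might look like this — every key name here is a hypothetical illustration, not the server's documented schema:

```yaml
pack: ./packs/home-assistant
embedding:
  provider: openai
  model: text-embedding-3-small
transport: http   # or stdio for local dev
```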

View on GitHub →

Built for serious knowledge engineering

📊

Evaluation Framework

Standardized eval sets measure correctness, completeness, hallucination rate, and refusal accuracy. Run automated evals with the included eval runner. Track quality over time with baselines and scorecards.

🏷️

Validator & Doctor

CLI tools check pack structure, frontmatter, token budgets, and cross-atom requires: references. Catch broken dependencies and oversized atoms before retrieval does.

📖

Guides & Tooling

Population methods guide covers every knowledge source — conversations, websites, documents, video, support tickets. Eval runner automates quality scoring. More tooling on the way.

💎

Obsidian Compatible

Every ExpertPack is a valid Obsidian vault. Open any pack in Obsidian and get live Dataview queries by content type, EK score, and tags — graph view, full-text search, and template-based authoring included. Standard Markdown links keep packs fully readable on GitHub and in any editor simultaneously.

NEW

Export your AI agent as an ExpertPack

Your agent accumulates months of knowledge — identity, preferences, infrastructure expertise, behavioral patterns, relationships. Now it can distill all of that into a portable, structured pack that bootstraps a new instance in minutes.

🔍

Auto-Discover

The agent scans its own workspace, classifies every knowledge chunk, and proposes constituent packs — agent, person, product, process.

⚗️

Distill

Raw state (journals, configs, memory files) is compressed into structured, deduplicated EP-compliant files. 438KB raw → 31KB distilled.

📦

Package

A composite EP wires the agent pack (voice) with person/product/process packs (knowledge). Ready to import on any platform.

💾

Backup & Restore

Your agent dies — spin up a new one from its EP. Immediately competent, not starting from scratch.

🚚

Platform Migration

Move from one AI platform to another. Your agent's knowledge comes with it — portable by design.

🤝

Agent Collaboration

Share domain expertise between agents. One agent's product knowledge becomes another's via composite.

🏪

Marketplace Ready

Distribute well-trained agent configurations as portable packs. Built-in privacy controls keep secrets out.

OpenClaw Tested

ExpertPack was designed and battle-tested with OpenClaw — the open-source AI agent platform. Every schema change is validated against real agent deployments.

Start building your ExpertPack

Open source. Apache 2.0. Free forever.

If ExpertPack is useful to you, a GitHub star helps others discover it.