
Guide vs Encyclopedia: Notes from an Internal Experiment

What we learned testing two approaches to documentation—and why it matters for humans and AI agents.

January 2025 · 7 min read · Product

At qeek.ai, we're building in public—and sometimes that means sharing results that are unfinished, imperfect, and still forming.

This post is one of those.

TL;DR

  • Guide-style docs help people form mental models and orientation
  • Encyclopedia-style docs excel at completeness and reference
  • In our internal eval, guide-style won for understanding and usability

During our current ALPHA phase, we ran an internal experiment to better understand a question that keeps coming up in our work:

What does "good" documentation actually mean—for humans, for new team members, and for AI agents?

We weren't trying to crown a winner or publish a benchmark. We were trying to learn.

What we found surprised us—and also clarified what we think we're really building.

01. Why we ran this experiment

Most technical documentation tools optimize for completeness:

  • More facts
  • More detail
  • More depth

That's valuable. But completeness and understanding are not the same thing. When docs optimize only for completeness, onboarding stays slow, rediscovery stays costly, and both humans and AI agents spend time reconstructing intent.

  • New engineers: overwhelmed on day one
  • Senior engineers: wasting time reconstructing intent
  • AI agents: getting lost in dense context

So we asked a different question:

Is documentation primarily a knowledge store, or is it a communication system?

  • Knowledge Store (Encyclopedia) → info density
  • Communication System (Guide) → orientation & flow

To explore this, we ran a small, internal evaluation comparing two different documentation approaches.

02. The setup (important caveats)

Before the results, a few things to be very clear about:

  • This was internal
  • This was iterative
  • The evaluation rubric evolved over several conversations
  • The goal was insight, not objectivity
  • The results are directional, not definitive

In other words: This is not a benchmark. It's a learning artifact.

03. The evaluation criteria

Instead of asking "which is more correct?", we evaluated across four lenses:

1. Accuracy: Is the information technically sound?

2. Senior Engineer Usability: Does it support fast diagnosis, deep dives, and edge cases?

3. Onboarding: Can a new human make sense of the system without prior context?

4. AI Context: Does the structure help an AI agent reason, navigate, and act?

What mattered more than the final score was how each system behaved across these dimensions.
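A rubric like this can be captured in a tiny data structure. The sketch below is illustrative only: the criterion names mirror the four lenses above, but the class, the 1-to-5 scale, and the example scores are hypothetical placeholders, not our actual evaluation data.

```python
from dataclasses import dataclass, field

# The four lenses from the post; the scoring scale is an assumption.
CRITERIA = ["accuracy", "senior_usability", "onboarding", "ai_context"]

@dataclass
class DocEvaluation:
    style: str                                   # e.g. "guide" or "encyclopedia"
    scores: dict = field(default_factory=dict)   # criterion -> score (1..5)

    def overall(self) -> float:
        """Unweighted mean across the four lenses."""
        return sum(self.scores[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical scores for illustration only.
guide = DocEvaluation("guide", {"accuracy": 4, "senior_usability": 4,
                                "onboarding": 5, "ai_context": 5})
print(guide.overall())  # → 4.5
```

An unweighted mean is the simplest possible aggregation; as the post notes, the per-lens behavior mattered more to us than any single overall number.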

04. What we observed

The two approaches diverged sharply in philosophy.

Encyclopedia

  • Deep
  • Comprehensive
  • Fact-dense
  • Excellent as a reference
  • Hard to enter, harder to traverse

Guide

  • Modular
  • Narrative
  • Flow-based
  • Designed to build mental models
  • Optimized for progression, not completeness

The surprising result wasn't that one "won." It was where each one excelled.

05. The key insight

For most real-world use cases—especially:

  • Humans new to a system
  • Engineers switching context
  • AI agents operating under constraints

Guide-style documentation consistently outperformed encyclopedia-style documentation.

Not because it had more information, but because it made information usable.

That led us to a simple but important conclusion:

Documentation is communication, not storage.

06. What this does not mean

This does not mean:

  • Depth doesn't matter
  • Detailed diagrams aren't valuable
  • Reference-style docs should disappear

In fact, the ideal system likely looks like a hybrid:

  • A guide-first structure
  • With deep, precise references available when needed

Humans (and agents) want orientation before detail.
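One way to picture the hybrid: a guide-first spine where each step links down into deep references only when needed. The sketch below is entirely hypothetical; the step names, goals, and file paths are invented for illustration, not taken from any real project.

```python
# A guide-first structure with progressive disclosure: the reader (or
# agent) walks the steps in order and descends into references only
# when the current step requires it. All names here are invented.
guide = [
    {"step": "Orientation",  "goal": "What the system is and why it exists",
     "refs": ["reference/architecture.md"]},
    {"step": "First change", "goal": "Make and ship a small edit",
     "refs": ["reference/build.md", "reference/testing.md"]},
    {"step": "Deep dives",   "goal": "Edge cases and internals, on demand",
     "refs": ["reference/internals.md"]},
]

for section in guide:
    print(f'{section["step"]}: {section["goal"]} -> {section["refs"]}')
```

The point of the shape, not the code: orientation comes first, and every deep reference stays reachable from the step where it becomes relevant.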

07. Why this matters for qeek.ai

This experiment reinforced a core belief we've been building toward:

The problem isn't that teams lack documentation.

The problem is that documentation rarely matches how understanding actually forms.

At qeek.ai, we're exploring:

  • Guide-first structures
  • Progressive disclosure
  • Agent-friendly context boundaries
  • Documentation that supports thinking, not just lookup

We don't think we've solved this. We think we're early.

08. See it in action

To make this discussion concrete, we've published an experimental, generated guide for a public open-source project.

It's not meant to replace existing docs—it's meant to be questioned.

Disclaimer: This page shows an experimental, automatically generated guide for a well-known open-source project. It is not official documentation, may contain gaps or errors, and exists solely to invite feedback on structure, flow, and usability.

This guide is generated directly from the codebase and can be regenerated as the code evolves.

09. We want your feedback (seriously)

This is ALPHA work, and we fully expect:

  • Counterexamples
  • Strong disagreements
  • "This would never work for X" feedback

That's exactly what we want.

Specifically, we'd love to know:

  • Was the flow intuitive?
  • Did you know where to start?
  • Did you feel lost at any point?
  • What would you remove first?
  • Where did you want more depth?
  • How does this compare to what you're used to?

If you've seen documentation styles that work better, situations where encyclopedia-first is clearly superior, or failure modes of guide-based systems—we'd love to hear from you.

Closing thought

We're not trying to build "better docs."

We're trying to build better understanding—for humans and machines.

If this experiment helps move that conversation forward, even a little, it's done its job.