Creating Skills

There are two ways to create a skill. The first — capturing one from a project’s Data Flow — is by far the easier path, and it’s how most skills should start. Writing from scratch in the editor is the alternative when you don’t have a working analysis yet.

From a Project’s Data Flow (the easy way)

Every Querri project has a Data Flow — the visual graph of steps the agent ran to produce your result. If you’ve done a piece of analysis once and it worked, you can turn those steps into a reusable skill in a few clicks. This is by far the fastest, highest-quality way to build a skill, because the example plan comes straight from steps that actually ran on real data.

To do it:

  1. Open the project that contains the analysis you want to capture.
  2. Switch to the Data Flow view.
  3. Click the steps you want to include — selected steps highlight in the graph.
  4. In the selection toolbar that appears, click Save as Skill.
  5. Querri pulls the selected steps into a new skill draft, including the tools used, the columns each step touched, and the dependencies between them.
  6. You’re dropped into the skill editor with the example plan pre-filled. Add a title, description, and any advanced instructions, then save.

You can select between 1 and 20 steps in a single Save-as-Skill action. If you need to capture more than 20 steps, that’s usually a sign the skill is doing too much — split it into two.

From Scratch in the Editor

Sometimes you know exactly what you want to teach the agent before you’ve built the analysis. In that case, use the + New Skill button at the top of the Skills page to open a blank editor. You’ll fill in the same fields as the Data Flow path, just without a pre-populated example plan.

This path works well when:

  • You’re translating an existing playbook (a Confluence doc, a runbook) into skill form.
  • You want to write an instruction-only skill — high-level guidance without specific steps.
  • You’re authoring a skill before you’ve run the analysis yourself.

In practice, even when you start from scratch it’s worth running through the analysis once in a project first, then capturing the result from the Data Flow. The example plan you get from a real run almost always beats the one you’d write by hand.

Skill Fields

Whichever path you pick, the editor exposes the same set of fields.

Title

A short, scannable name. Aim for something a teammate could read and immediately understand the scope of, like “Monthly recurring revenue rollup” or “Categorize support tickets by urgency.” This is one of the strongest signals the agent uses to decide when the skill applies.

Description

One or two sentences explaining what the skill does and when to reach for it. Don’t restate the title. Do mention the data it expects (“expects a Stripe invoices table”), the typical output (“produces a single row per customer”), and any prerequisites worth flagging.

Advanced Instructions

The free-form brief. This is where the bulk of your domain knowledge goes. Things to include:

  • Which data sources or tables to use — name them explicitly. “Use the customers table from the CRM connector, not the legacy users export.”
  • Definitions and edge cases — what counts as “active”? How do refunds get handled? Which test accounts to exclude?
  • Naming and formatting — column names, currency formatting, timezone conventions.
  • What NOT to do — explicit anti-patterns are surprisingly powerful. “Do not aggregate by signup_date — use first_paid_date instead.”

There’s no length penalty for being thorough, but be specific. Vague instructions (“be careful with dates”) give the agent nothing to act on. Concrete instructions (“dates in the events table are UTC; convert to local org timezone before grouping”) give it leverage.
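To make the difference concrete, here is what acting on that timezone instruction might look like in practice. This is an illustrative pandas sketch; the events table, column names, and org timezone are assumptions, not a real Querri schema.

```python
import pandas as pd

# Hypothetical events table with UTC timestamps (column names are
# illustrative, not a required schema).
events = pd.DataFrame({
    "event_id": [1, 2, 3],
    "created_at": pd.to_datetime(
        ["2024-01-31 23:30:00", "2024-02-01 00:15:00", "2024-02-15 12:00:00"],
        utc=True,
    ),
})

# Convert to the org's local timezone BEFORE grouping by month, so a
# late-night UTC event lands in the correct local period.
org_tz = "America/New_York"
events["created_local"] = events["created_at"].dt.tz_convert(org_tz)
monthly = events.groupby(events["created_local"].dt.to_period("M")).size()
```

Note how the 00:15 UTC event on Feb 1 counts toward January once converted to local time; grouping on the raw UTC column would have shifted it into the wrong month.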

Example Plan

An ordered list of analysis steps. Each step names a tool (filter, group, join, etc.) and any columns or dependencies it needs. The plan is a strong hint to the agent, not a hard script — the agent will adapt step ordering, add filters, and substitute columns based on the user’s actual question.

The plan is most useful when:

  • The structure of the analysis is deterministic and well-understood.
  • A particular step ordering matters (e.g., dedupe before aggregating).
  • A non-obvious join or transformation is required.

If the analysis is mostly natural-language reasoning (“classify these tickets”), advanced instructions alone may be enough.
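A plan like the one described above can be pictured as a small ordered structure. The shape below is purely illustrative — the field names are assumptions, not Querri’s actual skill schema — but it shows how each step names a tool, the columns it touches, and the steps it depends on.

```python
# Hypothetical representation of an example plan. Field names here
# are illustrative only, not Querri's real schema.
plan = [
    {"step": 1, "tool": "filter", "columns": ["status"], "depends_on": []},
    {"step": 2, "tool": "dedupe", "columns": ["customer_id"], "depends_on": [1]},
    {"step": 3, "tool": "group", "columns": ["customer_id", "amount"], "depends_on": [2]},
]

# The dependency edges are what make ordering constraints explicit:
# dedupe (step 2) must run before the aggregation (step 3) it feeds.
order = [s["tool"] for s in plan]
```

The `depends_on` edges are the part that encodes “dedupe before aggregating” — exactly the kind of ordering constraint the plan is meant to preserve.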

Code Example

For skills that involve computation that’s easier to express in code than in steps, you can paste a short code example showing the transformation. The agent uses this as a reference but doesn’t blindly execute it. Keep it focused — a couple of dozen lines max.
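For a sense of scale, a pasted example might look like the sketch below: a monthly recurring revenue rollup. The table and column names are assumptions for illustration, not a required schema.

```python
import pandas as pd

# Illustrative invoice data: one row per subscription line item.
invoices = pd.DataFrame({
    "customer_id": ["a", "a", "b"],
    "amount": [100.0, 50.0, 200.0],
    "interval": ["month", "month", "year"],
})

# Normalize annual amounts to a monthly equivalent before summing,
# so yearly plans don't inflate MRR twelvefold.
invoices["monthly_amount"] = invoices["amount"].where(
    invoices["interval"] == "month",
    invoices["amount"] / 12,
)

# One row per customer: the rollup the skill is meant to produce.
mrr = invoices.groupby("customer_id")["monthly_amount"].sum().reset_index()
```

A snippet at this size is enough for the agent to see the normalization rule and the grouping level without reading a full script.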

Best Practices

A few patterns we’ve seen work consistently well:

  • Write for a teammate, not for the AI. If a new analyst could read your skill and understand what it does, the agent can too.
  • Keep skills narrow. A skill that does one thing well beats a skill that tries to be a Swiss Army knife. If you find yourself writing “if the user asks about X, do Y; if they ask about Z, do W,” split it into two skills.
  • Iterate. The first version of a skill is rarely the best. Use it, watch where the agent diverges from your intent, and update the instructions to close the gap.
  • Name with actions. “Monthly recurring revenue rollup” is better than “MRR” because it describes what the skill does, not just the topic it covers.

Saving a new skill puts it in your My Skills section. Only you can see it. From there you can: