Playbooks are Ana running on autopilot. You write the prompt once, attach your data sources, set a schedule, and Ana executes the analysis every time without you in the loop. That changes what a good prompt looks like. In a chat, you can course-correct in real time. In a Playbook, there is no one watching. The prompt has to carry the full weight of the analysis on its own.

Why Playbook prompts need more than chat prompts

A chat prompt can be a fragment — “show me revenue by region last week” — and Ana will ask for clarification if she needs it. A Playbook prompt needs to be a complete specification. Before writing one, answer these questions:
  • What is the objective of this analysis, and who is the audience?
  • What data should Ana use, and from which tables?
  • What time period should she look at, and how should she handle “today” vs. “yesterday”?
  • What should the output look like — tables, charts, a written summary?
  • What edge cases should she handle gracefully?
A prompt that works perfectly in a chat session may produce inconsistent results on a schedule, because the interactive back-and-forth that filled in the gaps is no longer there.

Structure your prompt in four parts

Objective — one or two sentences on what this Playbook is for and who it is for. This helps Ana calibrate tone, detail level, and what counts as a meaningful finding.
“This is a weekly executive summary of product usage metrics for the leadership team. The goal is to surface the most important trends and anomalies from the past 7 days.”
Steps — the specific analyses Ana should run, in order. Be explicit about tables, filters, and calculations. Do not assume Ana will infer the right approach.
“1. Pull daily active users for the past 7 days from the user_activity table, excluding internal accounts. 2. Compare to the prior 7-day period and calculate the week-over-week change. 3. Break down DAU by plan tier. 4. Flag any day where DAU dropped more than 15% from the 7-day average.”
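The arithmetic behind steps 2 and 4 is simple enough to sketch. Here is a minimal pandas version, assuming the user_activity query has already been exported as a small DataFrame with illustrative values (the table name, column names, and numbers are assumptions, not Ana's actual output):

```python
import pandas as pd

# Hypothetical export of daily active users, one row per day.
# In a real Playbook, Ana queries the user_activity table directly.
dau = pd.DataFrame({
    "date": pd.date_range("2024-03-01", periods=14, freq="D"),
    "dau": [120, 118, 125, 130, 128, 90, 95,
            122, 119, 127, 131, 129, 92, 97],
})

current = dau.tail(7)          # the past 7 days
prior = dau.iloc[-14:-7]       # the 7 days before that

# Step 2: week-over-week change of total DAU.
wow_change = current["dau"].sum() / prior["dau"].sum() - 1

# Step 4: flag any day more than 15% below the current 7-day average.
avg = current["dau"].mean()
flagged = current[current["dau"] < 0.85 * avg]
```

Spelling the calculation out like this in the prompt (or even pasting a snippet like it) removes any ambiguity about what "week-over-week change" or "15% drop" means.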
Edge cases — what should Ana do when something unexpected happens?
“If any metric returns null or zero, note it explicitly rather than omitting it. If data for the current period is incomplete (e.g. today’s data has not fully loaded), use the most recent complete day and note the date.”
Output format — exactly what you want delivered, in order.
“Deliver: (1) a one-paragraph executive summary with the 2–3 most important takeaways, (2) a line chart of DAU by day for the past 14 days, (3) a table showing DAU by plan tier with week-over-week change.”

The 3-preview rule

Before activating a Playbook, run it manually at least three times and review the output each time.
1. Does Ana understand the prompt? Check that she is querying the right tables, applying the right filters, and producing the right structure.
2. Do the numbers look right? Check results against a source you trust. Look for nulls, unexpected zeros, or date ranges that are off.
3. Does it hold up when conditions change? Try running it on a day with unusual data (a holiday, a data gap, a spike). Does Ana handle it gracefully, or does the output break?
Most Playbook issues are caught in preview. A Playbook that has only been previewed once is a Playbook that will eventually surprise you.

Date and time handling

Date handling is the most common source of Playbook errors. Use relative dates, not absolute ones: “the past 7 days” will always be correct, while “March 1 through March 7” will be wrong by March 8. Be explicit about what “today” means. Data pipelines often lag: if your warehouse updates at 6am and your Playbook runs at 5am, yesterday’s data has not loaded yet, and the most recent complete day is actually two days old.
“Use the most recent complete day of data. If today’s data is not yet available, use yesterday.”
Specify the time zone. If your Playbook runs at 9am and your warehouse stores timestamps in UTC, “today” may mean different things depending on where your users are.
“All date calculations should use Eastern Time (ET).”
Avoid “this week” and “this month.” These are ambiguous at the start of a period. “The past 7 days” and “the past 30 days” are unambiguous.
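The “most recent complete day” rule can be made concrete with a small helper. This is a sketch under one assumption: that you can ask the warehouse for the latest fully loaded date (the latest_loaded parameter here is hypothetical):

```python
from datetime import date, datetime, timedelta
from typing import Optional
from zoneinfo import ZoneInfo

def reporting_day(latest_loaded: date,
                  today: Optional[date] = None,
                  tz: str = "America/New_York") -> date:
    """Return the most recent complete day of data.

    Normally yesterday in the given time zone, but an earlier date
    if the warehouse is lagging behind (latest_loaded is older).
    """
    if today is None:
        today = datetime.now(ZoneInfo(tz)).date()
    yesterday = today - timedelta(days=1)
    return min(yesterday, latest_loaded)
```

The same logic, written in prose, is what the prompt above asks Ana to apply: prefer yesterday, fall back to the newest complete day, and always resolve “yesterday” in a named time zone.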

Attaching datasets

If your Playbook relies on a file — a list of accounts, a target list, a mapping table — attach it explicitly rather than describing it in the prompt. Attached datasets are available to Ana as structured data she can query directly, which is more reliable than asking her to reconstruct a list from a description. Reference it by name in the prompt:
“Use the attached target_accounts.csv to filter results to accounts in the current quarter’s pipeline.”
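Under the hood, filtering against an attached file amounts to a join or membership check on structured data, roughly like this pandas sketch (the account_id column name and the inline stand-in for the CSV are assumptions):

```python
import io
import pandas as pd

# Stand-in for the attached target_accounts.csv. In a real Playbook,
# Ana reads the attached file directly; the column name is assumed.
targets = pd.read_csv(io.StringIO("account_id\n101\n104\n"))

# Results of the main analysis, keyed by the same account identifier.
results = pd.DataFrame({
    "account_id": [101, 102, 103, 104],
    "revenue": [5000, 1200, 800, 9100],
})

# Keep only accounts that appear in the attached target list.
filtered = results[results["account_id"].isin(targets["account_id"])]
```

A membership check like this is exact, whereas a list of account names typed into the prompt invites typos and partial matches.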

Locking in output format

For recurring reports, consistency matters. If the format changes from week to week, recipients will not know what to look for.
  • Name the charts you want and what they should show
  • Specify whether tables should include totals or subtotals
  • Tell Ana whether you want a written summary and how long it should be
  • If the report will be shared via Slack or email, say so — Ana will format accordingly
The most effective way to get consistent formatting: paste the Python code from a chat that produced exactly the right output into the Playbook prompt and tell Ana to use it as a template. See Build Playbooks Directly in Threads for how to do this.
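As an illustration, a chart cell copied from a chat might look like the matplotlib sketch below (the dau DataFrame, file name, and styling are assumptions, not a fixed Ana output). Pasting something like it into the prompt pins down exactly what “the DAU chart” means:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display, as a scheduled run would
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative data; in a Playbook this comes from the DAU query.
dau = pd.DataFrame({
    "date": pd.date_range("2024-03-01", periods=14, freq="D"),
    "dau": range(100, 114),
})

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(dau["date"], dau["dau"], marker="o")
ax.axhline(dau["dau"].mean(), linestyle="--", label="14-day average")
ax.set_title("Daily Active Users, Past 14 Days")
ax.set_ylabel("DAU")
ax.legend()
fig.savefig("dau_trend.png", dpi=150)
```

With the code as a template, the axis labels, reference line, and title come out the same every week.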

Prompt templates

Daily usage summary

Objective: Daily product usage summary for the operations team. Run every weekday morning.

Steps:
1. Pull the following metrics for yesterday (use the most recent complete day if yesterday's data is not yet available):
   - Total messages sent (from the messages table, exclude internal users)
   - Daily active users (from user_activity, exclude internal users)
   - New signups (from accounts, created_at = yesterday)
2. Compare each metric to the same day last week and calculate the percentage change.
3. Flag any metric that changed more than 20% in either direction.

Output:
- One-paragraph summary highlighting the most notable changes
- A table with: metric name, yesterday's value, same day last week, % change, flag (yes/no)
- Keep the tone factual and brief. This is an internal ops report, not an executive summary.

Date handling: Use Eastern Time. If yesterday's data is incomplete, use the most recent complete day and note the date at the top of the report.
Weekly executive summary

Objective: Weekly executive summary of key business metrics. Sent every Monday morning covering the prior week (Monday through Sunday).

Steps:
1. Pull the following metrics for the prior full week:
   - Revenue (from the orders table, sum of order_value where status = 'completed')
   - New customers (from accounts, created_at in the prior week)
   - Churn (from subscriptions, cancelled_at in the prior week)
   - Net revenue retention (current week revenue from accounts that existed 4 weeks ago / revenue from those same accounts 4 weeks ago)
2. Compare each metric to the prior week and to the same week last year.
3. Identify the top 3 revenue-generating customer segments for the week.

Output:
- Executive summary paragraph (3–5 sentences, written for a non-technical audience)
- KPI table: metric, this week, prior week, WoW change, same week last year, YoY change
- Bar chart: revenue by customer segment, top 10 segments, current week vs. prior week
- Close with one sentence on what to watch next week based on the trends.

Tone: Confident and direct. Assume the reader has 2 minutes and wants the headline first.
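The net revenue retention definition in the template above is the part most worth spelling out, since it restricts both numerator and denominator to the same cohort. A minimal pandas sketch, with hypothetical table and column names and made-up numbers:

```python
import pandas as pd

# Illustrative orders: week 4 = baseline (4 weeks ago), week 0 = current.
orders = pd.DataFrame({
    "account_id": [1, 1, 2, 2, 3, 4],
    "week":       [4, 0, 4, 0, 4, 0],
    "revenue":    [90, 100, 260, 200, 50, 500],
})

# Cohort: accounts that existed 4 weeks ago. Account 4 is new and is
# excluded from both sides; account 3 churned but stays in the baseline.
base_accounts = orders.loc[orders["week"] == 4, "account_id"].unique()
existing = orders[orders["account_id"].isin(base_accounts)]

base_rev = existing.loc[existing["week"] == 4, "revenue"].sum()
curr_rev = existing.loc[existing["week"] == 0, "revenue"].sum()
nrr = curr_rev / base_rev
```

Note that new accounts never inflate NRR here; they only show up in the new-customers metric.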
Anomaly detection

Objective: Detect and surface unusual patterns in product usage data. Run daily.

Steps:
1. For each of the following metrics, calculate the value for yesterday and compare it to the 14-day rolling average:
   - Daily active users
   - Messages sent per user
   - Error rate (from the errors table, errors / total requests)
   - Average session duration
2. Flag any metric where yesterday's value is more than 2 standard deviations from the 14-day mean.
3. For flagged metrics, pull the daily trend for the past 14 days so the reader can see the pattern.

Output:
- If no anomalies: one sentence confirming all metrics are within normal range.
- If anomalies exist: for each flagged metric, show (1) the current value, (2) the 14-day average, (3) the deviation, and (4) a line chart of the past 14 days.
- Order anomalies by severity (largest deviation first).

Edge cases: If a metric has fewer than 7 days of data, skip the anomaly check for that metric and note it. If a metric returns null, note it explicitly.
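The two-standard-deviation check in the last template reduces to a few lines of pandas. The values below are illustrative, with a deliberate spike on the final day:

```python
import pandas as pd

# Prior 14 days of a metric (the baseline window), then yesterday.
baseline = pd.Series([100, 102, 99, 101, 100, 98, 103,
                      100, 101, 99, 102, 100, 101, 102])
yesterday = 140

mean, std = baseline.mean(), baseline.std()
deviation = (yesterday - mean) / std

# Flag if yesterday is more than 2 standard deviations from the mean.
anomalous = abs(deviation) > 2
```

Putting the threshold in the prompt as a number (“2 standard deviations”, “fewer than 7 days of data”) is what lets the check run identically day after day.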