Ana gets things right most of the time. But “most of the time” is not good enough when you are putting a number in a board deck or making a resource decision. This page covers how to read Ana’s work critically, what to do when something looks off, and how to tell whether the problem is the prompt, the context, or the data.

How to read Ana’s work

Ana shows her work. Before you act on a result, take 60 seconds to review what she actually did.

Check the SQL she wrote. When Ana queries your database, she shows the Text-to-SQL query she ran. Scan it for:
  • The right table — is she querying the table you expected?
  • The right filters — is she excluding internal accounts, applying the right date range?
  • The right aggregation — is she summing, averaging, or counting the right column?
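These three checks can even be rough string scans rather than a careful review. A minimal sketch in Python, where the query, table, and column names (`orders`, `completed_at`, `is_internal`) are invented for illustration and are not from any particular schema:

```python
# A naive sanity scan of a generated SQL string (illustrative, not a parser).
# The table and column names below are hypothetical.
sql = """
SELECT SUM(total_amount)
FROM orders
WHERE completed_at BETWEEN '2026-03-01' AND '2026-03-31'
  AND is_internal = false
"""

checks = {
    "expected table queried": "FROM orders" in sql,
    "date range applied": "completed_at BETWEEN" in sql,
    "internal accounts excluded": "is_internal = false" in sql,
}

for name, passed in checks.items():
    print(f"{'OK  ' if passed else 'LOOK'} {name}")
```

String matching like this is brittle — it misses equivalent spellings of the same filter — but it mirrors what your eye should do on a quick read: table, filters, aggregation, in that order.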
You do not need to be a SQL expert to catch most issues. A quick read will tell you if the date filter looks wrong or if she is joining a table you did not expect.

Review chart assumptions. When Ana builds a visualization with Python, she makes choices about axes, groupings, and scales. Check:
  • Does the y-axis start at zero, or is it truncated in a way that exaggerates a trend?
  • Are the groupings what you intended? (e.g., “by region” when you wanted “by country”)
  • Is the time granularity right — daily vs. weekly vs. monthly?
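The truncation point matters more than it looks. A small arithmetic sketch (the numbers are invented) of how a cut baseline inflates a modest drop:

```python
# Illustration: how a truncated y-axis exaggerates a difference.
# Two monthly values that differ by 5% (invented numbers):
march, april = 100.0, 95.0

def visual_drop(baseline: float) -> float:
    """Fraction of bar height lost from march to april,
    as drawn against the given y-axis baseline."""
    return (march - april) / (march - baseline)

# Axis starting at zero: the drop looks like what it is, 5%.
print(round(visual_drop(0.0), 2))   # 0.05
# Axis truncated at 90: the same drop fills half the chart.
print(round(visual_drop(90.0), 2))  # 0.5
```

The data is identical in both cases; only the baseline changed. That is why a truncated axis is the first thing to check before reading a trend off a chart.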
Read the written summary critically. Ana’s summaries are generated from the data she retrieved. If the underlying query was off, the summary will be confidently wrong. Always cross-check a key number from the summary against the table or chart before sharing.

Look for what is missing. If you expected 12 rows and got 8, ask why. If a segment you know exists is not in the output, ask Ana to show you the distinct values in that column. Missing data is often more important than the data that is present.
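The missing-segment check is easy to script against an export. A sketch with invented region names, where `expected` comes from your own knowledge of the business, not from the query result:

```python
# Sketch: spotting what is missing from a result set.
# Segment names are illustrative; "expected" is what you know should exist.
expected = {"EMEA", "APAC", "NA", "LATAM"}
returned = {"EMEA", "NA", "APAC"}  # distinct values Ana's query came back with

missing = sorted(expected - returned)
print(missing)  # ['LATAM'] -- ask Ana why this segment is absent
```

The point is the direction of the comparison: you check the result against what you already know, not the other way around.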

What to do when Ana gets it wrong

Most errors fall into one of three categories: misunderstood question, wrong table or filter, or data issue. The fix is different for each.

Ana misunderstood the question

Rephrase with more specificity. Add the table name, the exact metric definition, or the filter you need. Do not just repeat the same question — change something about how you asked it.
Less effective: “Show me revenue last month”

More effective: “Show me total completed order value from the orders table where completed_at is between March 1 and March 31, 2026. Exclude refunded orders.”
Ana used the wrong table or filter

Tell her explicitly: “You used the orders table but I need orders_v2” or “You included internal accounts — please filter to is_internal = false.” Ana will re-run with the correction.

The data itself looks wrong

This is the hardest case, because the issue is not with Ana — it is with what is in your warehouse. Signs of a data issue: unexpected nulls, a metric that is zero when it should not be, a date range with missing days. Ask Ana to show you the raw data:
“Show me the underlying rows for March 15 so I can see what is there.”
Then take the issue to whoever owns the data pipeline.
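Before escalating, it helps to name the gap precisely. If you can pull the daily dates into Python, a quick scan (the dates here are made up) shows exactly which days are missing:

```python
from datetime import date, timedelta

# Sketch: finding gaps in a daily table. Dates are invented for illustration.
days_present = {date(2026, 3, 13), date(2026, 3, 14), date(2026, 3, 16)}

start, end = date(2026, 3, 13), date(2026, 3, 16)
expected = {start + timedelta(days=i) for i in range((end - start).days + 1)}

missing = sorted(expected - days_present)
print(missing)  # [datetime.date(2026, 3, 15)] -- take this to the pipeline owner
```

“March 15 is missing from the daily load” is a far more actionable bug report than “the numbers look low.”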

Diagnosing the root cause

When Ana’s answer is wrong, work through this sequence:

1. Does Ana understand what you are asking? Ask her to explain her approach before running the query: “Before you run anything, tell me how you plan to answer this.” If her plan is wrong, correct it before she executes.

2. Is she using the right data source? Check the SQL. If she is querying the wrong table, tell her which one to use. If you are not sure which table is right, ask: “What tables contain order data?” and let her show you the options.

3. Is the filter correct? The most common filter errors: wrong date range, missing exclusion of internal accounts, wrong status filter. Check the WHERE clause.

4. Does the result match a source you trust? If you have a dashboard, a spreadsheet, or a prior report with a number you trust, compare Ana’s result to it. If they differ, ask Ana to explain the difference:

“My dashboard shows 1,240 for this metric but you got 1,180. Can you walk me through how you calculated it?”

5. Is the issue reproducible? Ask the same question in a new thread. If you get a different answer, something earlier in the thread influenced the result. If you get the same wrong answer, the issue is more systematic.
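When comparing against a trusted source, decide up front how much difference counts as noise. A sketch using the 1,240 vs. 1,180 numbers from the example above (the 2% threshold is an arbitrary choice, not a recommendation):

```python
# Sketch: deciding whether two versions of a metric actually disagree.
dashboard_value = 1240  # the number you trust
ana_value = 1180        # what Ana returned

relative_diff = abs(dashboard_value - ana_value) / dashboard_value
print(f"{relative_diff:.1%}")  # 4.8%

# A small tolerance absorbs rounding and load-timing noise; beyond it,
# ask for the calculation. The 2% threshold here is illustrative only.
if relative_diff > 0.02:
    print("Ask Ana to walk through the calculation step by step.")
```

Small gaps are often timing (the dashboard refreshed at a different hour); larger gaps usually mean a different filter or definition, which the walk-through will surface.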

The escalation path

Most issues are resolved at the prompt level. If they are not, work through this path:
1. Rephrase the prompt

Add specificity: table name, metric definition, explicit filters. This resolves the majority of issues. See Writing Better Prompts.
2. Add or update context

If Ana is consistently making the same wrong assumption — wrong fiscal calendar, wrong metric definition, wrong exclusion rule — the fix is to add that rule to your context library. One prompt fix helps you; a context fix helps everyone.
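What such a rule looks like depends on how your context library is organized. As a rough illustration only — the names and layout below are invented, not a TextQL format:

```text
Metric: monthly_revenue  (illustrative entry, invented format)
  Definition: SUM(total_amount) on orders where status = 'completed',
              completed_at within the calendar month.
  Exclusions: accounts with is_internal = true; refunded orders.
  Caveat:     rows before 2024-01-01 are missing the region column.
```

The test for a good entry: a new analyst reading it cold could reproduce the number without asking you anything.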
3. Define it in ontology

If a metric needs to be calculated exactly the same way every time, consider defining it in ontology. This is most useful for metrics that require specific joins or multi-step calculations.
4. Contact support

If Ana is consistently wrong on a specific question and you have ruled out prompt, context, and ontology issues, the problem may be a bug or a connector configuration issue. Reach out to support@textql.com with the specific question, the expected answer, and what Ana actually returned.

How to make improvements stick

The most durable improvements come from fixing the root cause, not just the symptom. When you catch an error, ask yourself: “If I were onboarding a new analyst, what would I tell them before answering this question?” That answer is usually what belongs in context. A few patterns that consistently improve answer quality over time:
  • Document exclusions. Every time you add “exclude internal accounts” or “exclude test users” to a prompt, that filter belongs in context.
  • Define your most-used metrics. If you ask about the same 5–10 metrics regularly, write down how each is calculated and add it to context or ontology.
  • Note data caveats. If a table has a known issue, a date range limitation, or an unreliable column, document it. Ana will surface the caveat automatically.
  • Update stale context. When something changes — a new fiscal year, a deprecated table, a revised metric definition — update the context document. Stale context is one of the most common sources of persistent errors. The GitHub integration makes this easier to maintain and review over time.
The goal is to move corrections from the prompt level (you fix it every time) to the context level (it is fixed for everyone, automatically).