Ana’s answer quality is directly tied to the context she has access to. A well-configured context setup means Ana knows your fiscal calendar, your metric definitions, your business rules, and your data conventions — without you having to explain them in every prompt. A poorly configured setup means Ana makes reasonable guesses that may not match how your organization actually works. This guide is for admins and data champions who configure TextQL for their teams. If you are an end user looking to improve your own prompts, see Writing Better Prompts.

The Context Hierarchy

TextQL applies context in layers, from broadest to most specific. When Ana answers a question, she reads all context that applies to the current user and connector, with more specific context taking precedence over broader context.
| Scope | Who sees it | When it applies |
|---|---|---|
| Organization | Everyone | Always |
| Role | Users with that role | Always, for that role |
| Connector | Anyone querying that connector | When that connector is active |
| Role + Connector | Users with that role, querying that connector | When both conditions are met |
Start broad and add specificity as needed. Most organizations get significant value from a well-written organization-level context document alone. Role and connector scoping become important when different teams use the same data differently, or when a connector has conventions that only apply to a specific group.
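As a sketch, an organization-level context document might look like the following. The specific rules are illustrative (drawn from the examples later on this page), not a template every organization should copy:

```text
# Organization context (applies to all users, all connectors)

- Our fiscal year runs July 1 through June 30.
- All financial figures are in USD unless otherwise noted.
- In our data, "user" means a paying account, not an individual login.
- Internal test accounts have is_internal = true and should be
  excluded from all user metrics.
- The legacy_orders table only contains data through December 2023.
```

A document of roughly this size, covering the handful of rules Ana would otherwise have to guess, is usually a better starting point than an exhaustive policy manual.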

What to Put in Context vs. Ontology

Context and ontology serve different purposes. Choosing the right one for a given piece of information makes a meaningful difference in how reliably Ana uses it. Use context for:
  • Business rules and policies (“Revenue is recognized at time of shipment, not order placement”)
  • Fiscal calendar definitions (“Our fiscal year runs July 1 through June 30”)
  • Currency and unit conventions (“All financial figures are in USD unless otherwise noted”)
  • Terminology that differs from common usage (“In our data, ‘user’ means a paying account, not an individual login”)
  • Data quality caveats (“The legacy_orders table only contains data through December 2023”)
  • Exclusion rules (“Internal test accounts have is_internal = true and should be excluded from all user metrics”)
Use ontology for:
  • Metric definitions that require specific SQL logic (“Monthly Active Users = distinct user_ids with at least one session in the calendar month, excluding internal accounts”)
  • Dimensions and joins that are used frequently (“Customer segment is derived by joining accounts to the segment_mapping table on account_id”)
  • Calculations that must be consistent across all users and all questions
The practical distinction: context is prose that Ana reads and interprets. Ontology is structured SQL logic that Ana executes directly. If a definition is simple enough to explain in a sentence, context is usually sufficient. If it requires a specific join, a specific filter, or a calculation that must be exactly right every time, ontology is the better choice.
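For example, the Monthly Active Users definition above could be expressed as ontology-style SQL along these lines. This is a sketch: the table and column names (`sessions`, `user_id`, `session_start`, `is_internal`) are placeholders for your own schema, and the date function is Postgres-flavored:

```sql
-- Monthly Active Users: distinct user_ids with at least one session
-- in the calendar month, excluding internal accounts
SELECT
  DATE_TRUNC('month', session_start) AS activity_month,
  COUNT(DISTINCT user_id)            AS monthly_active_users
FROM sessions
WHERE is_internal = false
GROUP BY 1;
```

Because the join, filter, and grain are pinned down in SQL rather than prose, every user who asks about MAU gets exactly this calculation.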

High-Impact Context Examples

The following types of context have the highest return on investment. If your organization has not documented these, start here.

Fiscal calendar

If your fiscal year does not align with the calendar year, Ana will use the wrong periods for “this quarter,” “last year,” and similar relative date references. Document your fiscal year start date and quarter definitions explicitly.
“Our fiscal year runs from February 1 to January 31. Q1 is February through April, Q2 is May through July, Q3 is August through October, Q4 is November through January.”
Metric definitions

Any metric your team discusses regularly — revenue, DAU, churn, conversion rate — should be defined in context or ontology. Even a one-sentence definition prevents Ana from making a different assumption each time.
“Churn rate is calculated as the number of accounts that cancelled in a given month divided by the number of active accounts at the start of that month. Trial accounts are excluded.”
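Translated into SQL, that churn definition might look like this sketch. The `account_snapshots` table and its `snapshot_month`, `cancelled_in_month`, `active_at_month_start`, and `is_trial` columns are assumptions for illustration, and the `FILTER` clause is Postgres syntax — adapt both to your own warehouse:

```sql
-- Churn rate: accounts cancelled in the month divided by
-- accounts active at the start of that month; trials excluded
SELECT
  snapshot_month,
  COUNT(*) FILTER (WHERE cancelled_in_month) * 1.0
    / NULLIF(COUNT(*) FILTER (WHERE active_at_month_start), 0) AS churn_rate
FROM account_snapshots
WHERE NOT is_trial
GROUP BY snapshot_month;
```

If your definition requires this level of precision (a specific denominator, a specific exclusion), it is a good candidate for ontology rather than prose context.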
Exclusion rules

Internal accounts, test users, and bot traffic should be excluded from most analyses. Document the filter once so you do not have to repeat it in every prompt.
“Exclude all accounts where account_type = 'internal' or email LIKE '%@yourcompany.com' from user-facing metrics.”
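With that rule in context, Ana should append the filter to user-facing queries herself. A minimal sketch of what that looks like in a query (the `accounts` table and `yourcompany.com` domain are stand-ins for your own):

```sql
-- Standard exclusion filter for user-facing metrics
SELECT COUNT(DISTINCT account_id) AS active_accounts
FROM accounts
WHERE account_type <> 'internal'
  AND email NOT LIKE '%@yourcompany.com';
```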
Table ownership and caveats

If certain tables are deprecated, have known data quality issues, or should only be used for specific purposes, document that. Ana will use the most relevant table she can find — if you want her to use a specific one, say so.
“Use orders_v2 for all order analysis. The orders table is deprecated and contains data only through Q3 2023.”

Common Mistakes

Too much context (noise)

A context document that is hundreds of lines long and covers every edge case will slow Ana down and may cause her to miss the most important rules. Prioritize the 10–15 rules that matter most and keep the document focused. If you find yourself adding caveats to caveats, that is a sign the document needs editing, not expansion.

Too little context (Ana guesses)

The opposite problem: no context at all, or context that only covers obvious things. Ana will make reasonable assumptions, but “reasonable” may not match your conventions. The most common gaps are fiscal calendar, metric definitions, and exclusion rules.

Wrong scope level

Putting connector-specific rules in organization context means every user sees them, even when they are not relevant. Putting universal rules in connector context means they only apply when that connector is active. Match the scope to the actual applicability of the rule.

Stale context

Context that was accurate six months ago may no longer be. If your fiscal year changed, a table was deprecated, or a metric definition was updated, the context document needs to reflect that. Stale context is often worse than no context, because Ana will confidently apply the wrong rule.

How to Test Context Changes

Before rolling out a context change to your whole organization, test it with a targeted question.
  1. Note the question you want to test: something that should be directly affected by the context change.
  2. Ask Ana that question before making the change. Note the answer.
  3. Make the context change.
  4. Ask the same question again. Compare the answers.
If the answer changed in the way you expected, the context is working. If it did not change, or changed in an unexpected way, review the wording of the context document — Ana may be interpreting it differently than you intended. A useful test pattern: ask Ana to explain her assumptions. “What fiscal year definition are you using?” or “How are you defining active users?” will surface whether she is picking up the context correctly.

When to Use GitHub Integration for Version-Controlled Context

For organizations that want to treat context like code — with change history, review processes, and rollback capability — TextQL supports a GitHub integration that syncs context documents from a repository. This is most useful when:
  • Multiple people are responsible for maintaining context
  • You want a review process before context changes go live
  • You need an audit trail of what changed and when
  • Your context documents are complex enough that accidental edits could cause problems
For simpler setups, the built-in context editor is sufficient. See Using the Context Editor for how to create and manage context documents directly in TextQL.
