TextQL Product Overview

Overview

Everything powering Ana is driven by our SOTA Ontology system. Put simply, it takes the disparate tables, objects, and metadata from your various sources and creates a unified representation of the most important concepts in your business.
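
For a concrete flavor, a single concept in such a representation might look like the sketch below; every name, source, and field here is invented for illustration and is not TextQL's actual ontology format:

```python
# Hypothetical sketch of one unified concept in an ontology; all names,
# sources, and fields are invented for illustration.
customer_concept = {
    "concept": "Customer",
    "sources": [
        {"system": "Snowflake", "table": "raw.crm.accounts", "key": "account_id"},
        {"system": "Salesforce", "object": "Account", "key": "Id"},
    ],
    "attributes": ["name", "industry", "annual_revenue"],
    "relationships": [
        {"to": "Order", "via": "customer_id"},  # lets queries join Customers to Orders
    ],
}
```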

In-depth

Once our ontology model comprehensively represents the key concepts in your business, we use it to power our AI data analyst, Ana. Ana can perform complex queries, build models, and take actions on your behalf.

Ana is designed to meet your team where they are.

→ For technical users, Ana chats become powerful data notebooks that they can refine further for precise results.

→ For business users, Ana understands natural, high-level questions and provides clear, concise responses with relevant insights.

Ana is also designed to integrate seamlessly with your workflows, whether that’s chatting with Ana in Slack (or embedded in your internal tools) or automatically generating Jira tickets for requests Ana can’t answer confidently, complete with suggestions that Data Engineers can use to resolve them quickly.
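
As a rough sketch of that routing logic (the function names, threshold, and ticket format below are assumptions for illustration, not TextQL's actual implementation):

```python
# Illustrative confidence-based routing sketch; function names, the
# threshold, and the ticket format are assumptions, not TextQL's API.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for answering directly

def ana_answer(question: str) -> tuple[str, float]:
    # Stub standing in for the full Ana flow described later.
    return ("Possible missing join between orders and customers.", 0.4)

def create_jira_ticket(summary: str, description: str) -> str:
    # Stub standing in for a Jira API call; returns a ticket key.
    return "DATA-123"

def handle_slack_message(question: str) -> str:
    answer, confidence = ana_answer(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer  # reply directly in Slack (or the embedding tool)
    # Low confidence: file a Jira ticket, using Ana's partial findings
    # as suggestions for the Data Engineers who pick it up.
    ticket = create_jira_ticket(
        summary=f"Data request Ana couldn't answer: {question}",
        description=answer,
    )
    return f"I've filed {ticket} so the data team can follow up."

print(handle_slack_message("What was enterprise churn in Q3?"))
```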

How Chats Work

Overview

Any time you use TextQL, you’re talking to Ana. Ana is a powerful chat assistant powered by large language models. Using our Ontology system, she is uniquely capable of traversing all your data and finding exactly what you need. Ana can be accessed through a variety of interfaces, meeting you where you are.

In-depth

Ana receives chats from whichever interface you’re talking to her in, and the flow proceeds as follows (a minimal code sketch follows the list):

  1. GPT-4o is used to process the request and identify whether data fetching is required
  2. If data fetching is needed, GPT-4o constructs a plain-English query describing exactly the dataset it needs. This query is displayed to the user and passed to the Ontology; the user can edit it in the interface if they want a different dataset.
  3. The Ontology then uses GPT-4o to interpret the query in the context of the model we have of your data, finds the most relevant data, and performs the queries and joins needed to create the table.
  4. The Ontology then returns this data to Ana, which uses GPT-4o to generate relevant Python code based on the user request and dataset metadata. This code can perform modeling, analysis, or any other action needed on the data.
  5. Ana then triggers this code to be run in our secure Data Notebook against the data.
  6. Ana (via GPT-4o) then reads the code’s output, giving it the chance to extend the analysis, fix mistakes, or gain context for follow-up steps.
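
The sketch below is a minimal illustration of that loop. Every function in it is a stub with an invented name, standing in for the LLM calls and TextQL services described above; it is not TextQL's actual implementation:

```python
# Minimal sketch of the chat flow above; all names are invented stubs.
from dataclasses import dataclass

@dataclass
class OntologyResult:
    table: list[dict]   # rows assembled by the Ontology's joins
    metadata: dict      # e.g. column names and types

def needs_data(request: str) -> bool:
    # Step 1: an LLM call decides whether data fetching is required
    # (stubbed with a trivial heuristic here).
    return "how many" in request.lower()

def build_plain_english_query(request: str) -> str:
    # Step 2: the LLM restates the request as a plain-English dataset
    # query, which the user may edit before it reaches the Ontology.
    return f"Dataset needed to answer: {request}"

def ontology_fetch(query: str) -> OntologyResult:
    # Step 3: the Ontology maps the query onto its model of your data
    # and performs the joins (stubbed with a tiny in-memory table).
    return OntologyResult(
        table=[{"region": "EMEA", "orders": 42}],
        metadata={"columns": ["region", "orders"]},
    )

def generate_code(request: str, metadata: dict) -> str:
    # Step 4: the LLM writes analysis code from the request and metadata.
    return "result = sum(row['orders'] for row in table)"

def run_in_notebook(code: str, table: list[dict]) -> dict:
    # Step 5: the code runs against the data in the secure Data Notebook.
    scope = {"table": table}
    exec(code, scope)
    return {"result": scope["result"]}

def ana_respond(request: str) -> str:
    if not needs_data(request):                      # step 1
        return "Answered directly, no data fetch needed."
    query = build_plain_english_query(request)       # step 2 (user-editable)
    data = ontology_fetch(query)                     # step 3
    code = generate_code(request, data.metadata)     # step 4
    output = run_in_notebook(code, data.table)       # step 5
    # Step 6: the LLM reads the output and may iterate with follow-ups.
    return f"Analysis output: {output}"

print(ana_respond("How many orders per region?"))
```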

Technical Deep Dive

Overview

Our systems are architected to ensure your data is secure at every step, from managing RBAC data governance to maintaining encryption throughout.

*On-Prem deployments are available for enterprise customers; contact us for more info.

**Model providers can be tailored to customer requirements; contact us for more info.

In-Depth

Users interact through our various frontend interfaces, which we can integrate according to customer needs. From there, encrypted requests are made to the Ana backend service, hosted on AWS. This service authenticates the request and runs the Ana flow described previously.
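
For illustration, a client request in this flow might look like the sketch below; the endpoint, payload shape, and auth scheme are assumptions for this example, not TextQL's actual API:

```python
# Illustrative client-side sketch; endpoint, payload, and auth scheme
# are invented for this example, not TextQL's actual API.
import requests

ANA_ENDPOINT = "https://ana.example-customer.textql.com/api/chat"  # hypothetical

def send_chat(message: str, token: str) -> dict:
    # HTTPS provides encryption in transit; the bearer token is what
    # the backend would authenticate before running the Ana flow.
    response = requests.post(
        ANA_ENDPOINT,
        json={"message": message},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```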

We do not store customer data in our environment, only the metadata needed to perform Ontology data retrieval. Data is encrypted at rest and in transit. Any service that interacts with customer data runs in private subnets with no inbound access from outside TextQL, and outbound requests can be restricted to an IP whitelist.

On-Prem & Self-Hosted

Overview

For enterprises with especially sensitive data, we support both on-prem and self-hosted deployments.

[Diagram: sample self-hosted AWS deployment; custom layouts can be created based on requirements.]

In-Depth

We support multiple modes of deploying TextQL:

  • Managed Deployment
    • Managed Multi-Tenant
    • Managed Single-Tenant
  • Self-Hosted Deployment
    • Docker Compose
      • We’ll provide a Compose file for our application; the database can run within Docker or on a managed service like RDS (a minimal sketch follows this list)
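
The sketch below shows what such a Compose file could look like. The service names, images, and variables are invented for illustration; the actual file we provide will differ:

```yaml
# Hypothetical docker-compose.yml sketch; service names, images, and
# variables are illustrative, not the actual file TextQL provides.
services:
  ana-backend:
    image: textql/ana-backend:latest   # placeholder image name
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://textql:${DB_PASSWORD}@db:5432/textql
    depends_on:
      - db
  db:                                  # or point DATABASE_URL at RDS instead
    image: postgres:16
    environment:
      POSTGRES_USER: textql
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```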

The TextQL service relies on language models for multiple functions. We support the following model providers:

  • OpenAI
  • OpenAI hosted on Azure
  • Anthropic
  • Anthropic hosted on AWS Bedrock
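
As an illustration of how a deployment might select among these, the sketch below switches SDK clients on a hypothetical environment variable; the variable name and factory function are assumptions, not TextQL's actual configuration mechanism:

```python
# Illustrative provider-selection sketch; the LLM_PROVIDER variable and
# this factory are assumptions, not TextQL's actual configuration API.
import os

def make_llm_client(provider: str | None = None):
    provider = provider or os.environ.get("LLM_PROVIDER", "openai")
    if provider == "openai":
        from openai import OpenAI              # reads OPENAI_API_KEY
        return OpenAI()
    if provider == "azure-openai":
        from openai import AzureOpenAI         # reads AZURE_OPENAI_* env vars
        return AzureOpenAI(api_version="2024-02-01")
    if provider == "anthropic":
        from anthropic import Anthropic        # reads ANTHROPIC_API_KEY
        return Anthropic()
    if provider == "bedrock":
        import boto3                           # Anthropic models via Bedrock
        return boto3.client("bedrock-runtime")
    raise ValueError(f"Unknown LLM provider: {provider}")
```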