Effective: October 27, 2025

Consumption

Generally. The TextQL Virtual Sandcastle Service is a cloud data platform provided by TextQL (“TextQL”, “we”, “us”, “our”) to TextQL customers (each a “Customer”, “you”, “your”) as a service which consumes resources for distinct functions as set forth herein. The service provides agentic capabilities for data-intensive workloads through Ana, and is available in different service tiers and deployment options. Customer workloads are processed using virtual compute resources and AI inference, charged based on Agent Compute Units (ACUs).

Agent Compute Units (ACUs)

Compute (Virtual Sandcastle Service). TextQL bills for compute resources using purchasable Agent Compute Units, as described herein (“ACUs” or “Agent Compute Units”). Virtual compute instances consume ACUs at a rate of 500 ACUs per instance-hour. ACUs are consumed continuously while instances are active in your cluster, whether actively processing workloads or idle and available for immediate assignment.

Instance Cluster Management. Each organization is provisioned with a dedicated cluster of compute instances that automatically scales to meet concurrent workload demand, ensuring optimal performance while minimizing unnecessary resource consumption. When you initiate a new workload (such as starting a conversation with Ana or running a playbook), the system manages instance allocation as follows:
  • If all instances in your cluster are currently assigned to active workloads, a new instance is automatically provisioned to handle your request. This instance remains available in your cluster for up to 24 hours, ready to handle additional workloads without provisioning delays.
  • If idle instances are available in your cluster, one is immediately assigned to your workload, eliminating any cold-start delays. The 24-hour availability window for that instance resets, ensuring it remains ready for subsequent use.
Instances automatically return to an idle state 1 hour after completing their last workload activity, making them available for reassignment while still consuming ACUs at their respective hourly rates. Your cluster dynamically optimizes its size based on your actual usage patterns, automatically scaling down when sustained demand decreases to ensure you’re not paying for more capacity than you need.

AI Inference. TextQL offers AI inference capabilities powered by industry-leading large language models from Anthropic, OpenAI, and other providers. Inference charges are based on the number of input and output tokens processed during your interactions with Ana. Input tokens represent the context provided to the model (your questions, relevant data, and conversation history), while output tokens represent Ana’s generated responses. Token consumption is calculated precisely and converted to ACUs based on the rates in the AI Inference Pricing table below. Different models offer varying capabilities and price points, from cost-efficient models optimized for simple queries to advanced reasoning models for complex analytical tasks. Cache features (where available) provide significant cost savings by reusing previously processed context across related queries.
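As an illustration, the sketch below estimates the ACU consumption of a single session and converts it to a dollar charge at a chosen tier rate. The rates are taken from the tables in this document, but the session figures (instance-hours, token counts, model choice) are hypothetical example values rather than billing outputs from TextQL; actual charges depend on your cluster's scaling behavior and the models you use.

```python
# Hypothetical sketch of how ACU consumption could be estimated.
# Rates come from this consumption table; the session figures are made-up examples.

COMPUTE_ACUS_PER_INSTANCE_HOUR = 500      # Compute table: 1 instance-hour = 500 ACUs
TIER_USD_PER_1000_ACUS = {
    "Standard": 2.00,
    "Enterprise": 3.00,
    "Business Critical": 4.00,
    "White Label": 5.00,
}

def compute_acus(instance_hours: float) -> float:
    """ACUs consumed by virtual compute instances, whether active or idle."""
    return instance_hours * COMPUTE_ACUS_PER_INSTANCE_HOUR

def inference_acus(input_tokens: int, output_tokens: int,
                   input_rate: float, output_rate: float) -> float:
    """ACUs for inference, with rates expressed in ACUs per 1M tokens."""
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

def usd_cost(total_acus: float, tier: str) -> float:
    """Convert ACUs to dollars at the tier rate (per 1,000 ACUs)."""
    return total_acus / 1_000 * TIER_USD_PER_1000_ACUS[tier]

# Example session (hypothetical): one instance active for 2 hours, plus 400K input
# and 50K output tokens on MODEL_SONNET_4 (cumulative across requests, each ≤200K).
acus = compute_acus(2) + inference_acus(400_000, 50_000, input_rate=1_650, output_rate=8_250)
print(f"{acus:.1f} ACUs ≈ ${usd_cost(acus, 'Standard'):.2f} on the Standard tier")
```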

Standard

The base service tier providing full compute and AI inference capabilities at $2.00 per 1,000 ACUs.

Enterprise

Enhanced service tier with priority support and advanced features at $3.00 per 1,000 ACUs.

Business Critical

Dedicated cloud deployment with enhanced security and compliance features at $4.00 per 1,000 ACUs.

White Label

Fully customizable solution with white-label branding capabilities at $5.00 per 1,000 ACUs.

ACU Consumption Pricing

Service Tier Pricing

Service Tier | ACU Rate (per 1,000 ACUs)
Standard | $2.00
Enterprise | $3.00
Business Critical | $4.00
White Label | $5.00

Compute

Instance Consumption Rate
Instance-Hour | ACUs Consumed
1 hour | 500

Inference

AI Inference Pricing (ACUs per 1M Tokens). All inference pricing is denominated in ACUs per 1 million tokens processed.

Claude Models

Model | Token Range | Input | Cache Read | Ephemeral Cache | Persistent Cache | Output
MODEL_HAIKU_3 | N/A | 137.5 | 16.5 | 165 | 275 | 687.5
MODEL_HAIKU_3_5 | N/A | 440 | 44 | 550 | 880 | 2,200
MODEL_SONNET_3_5 | N/A | 1,650 | 165 | 2,062.5 | 3,300 | 8,250
MODEL_SONNET_3_7 | N/A | 1,650 | 165 | 2,062.5 | 3,300 | 8,250
MODEL_SONNET_4 | ≤200K | 1,650 | 165 | 2,062.5 | 3,300 | 8,250
MODEL_SONNET_4 | >200K | 3,300 | 330 | 4,125 | 6,600 | 12,375
MODEL_SONNET_4_5 | ≤200K | 1,650 | 165 | 2,062.5 | 3,300 | 8,250
MODEL_SONNET_4_5 | >200K | 3,300 | 330 | 4,125 | 6,600 | 12,375
MODEL_OPUS_4 | N/A | 8,250 | 825 | 10,312.5 | 16,500 | 41,250
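To show how cache reads affect inference charges, here is a minimal sketch, assuming a hypothetical follow-up query on MODEL_SONNET_4 within the ≤200K token range. The token counts are illustrative; the calculation simply applies the per-1M-token rates from the table above.

```python
# Illustrative comparison: ACU cost of a follow-up query on MODEL_SONNET_4 (≤200K range)
# with and without cache reads. Rates are ACUs per 1M tokens from the Claude Models table.
INPUT_RATE, CACHE_READ_RATE, OUTPUT_RATE = 1_650, 165, 8_250

def query_acus(fresh_input: int, cached_input: int, output: int) -> float:
    """ACUs for one query, splitting input into fresh vs. cache-read tokens."""
    return (fresh_input / 1e6) * INPUT_RATE \
         + (cached_input / 1e6) * CACHE_READ_RATE \
         + (output / 1e6) * OUTPUT_RATE

# Hypothetical query: 100K tokens of context reused from cache, 5K of new input, 2K of output.
with_cache = query_acus(fresh_input=5_000, cached_input=100_000, output=2_000)
without_cache = query_acus(fresh_input=105_000, cached_input=0, output=2_000)
print(f"with cache: {with_cache:.2f} ACUs, without cache: {without_cache:.2f} ACUs")
```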

OpenAI GPT Models

Model | Token Range | Input | Cache Read | Output
MODEL_GPT_4 | N/A | 16,500 | N/A | 33,000
MODEL_GPT_4_TURBO | N/A | 5,500 | N/A | 16,500
MODEL_GPT_4O | N/A | 1,375 | 687.5 | 5,500
MODEL_GPT_4O_MINI | N/A | 82.5 | 41.25 | 330
MODEL_GPT_4_1 | N/A | 1,100 | 275 | 4,400
MODEL_GPT_4_1_MINI | N/A | 220 | 55 | 880
MODEL_GPT_4_1_NANO | N/A | 55 | 13.75 | 220

OpenAI O-Series Models

Model | Token Range | Input | Cache Read | Output
MODEL_O_1 | N/A | 8,250 | 4,125 | 33,000
MODEL_O_1_MINI | N/A | 605 | 302.5 | 2,420
MODEL_O_3 | N/A | 1,100 | 275 | 4,400
MODEL_O_3_MINI | N/A | 605 | 302.5 | 2,420
MODEL_O_4_MINI | N/A | 605 | 151.25 | 2,420

Other Models

Model | Token Range | Input | Cache Read | Output
MODEL_KIMI_K2_INSTRUCT | N/A | 330 | N/A | 1,375
MODEL_QWEN3_CODER | N/A | 247.5 | N/A | 990
MODEL_QWEN3_CODER_SMALL | N/A | 82.5 | N/A | 330
MODEL_GPT_OSS | N/A | 82.5 | N/A | 330
MODEL_GPT_OSS_SMALL | N/A | 38.5 | N/A | 165

Additional Services

Included Services. TextQL includes several essential services, such as storage and data transfer, at no additional ACU cost to ensure seamless platform operations; other services, such as web search, consume ACUs at the rates listed below.
Service Category | Type | ACUs / Unit | Notes
Storage | Standard | 0 / GB / month | Persistent storage for your data and configurations
Connectivity | Ingress | 0 / GB | Data transfer into TextQL
Connectivity | Egress | 0 / GB | Data transfer out of TextQL
Tools | Web Search | 50 / search | Ana’s ability to search the web for real-time information
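For completeness, a small sketch of how the rates above could enter a bill estimate, assuming a hypothetical month of usage: storage and data transfer are zero-rated, while each web search adds a fixed 50 ACUs.

```python
# Hypothetical bill-estimate fragment using the Additional Services rates above.
WEB_SEARCH_ACUS = 50          # Tools / Web Search: 50 ACUs per search
STORAGE_ACUS_PER_GB_MONTH = 0  # Storage is included at no ACU cost
TRANSFER_ACUS_PER_GB = 0       # Ingress and egress are included at no ACU cost

def additional_services_acus(searches: int, storage_gb_months: float, transfer_gb: float) -> float:
    """ACUs from additional services; only web search is metered."""
    return (searches * WEB_SEARCH_ACUS
            + storage_gb_months * STORAGE_ACUS_PER_GB_MONTH
            + transfer_gb * TRANSFER_ACUS_PER_GB)

# Example month (hypothetical): 12 web searches, 50 GB-months of storage, 200 GB transferred.
print(additional_services_acus(12, 50, 200))  # -> 600 ACUs, all from web search
```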

This TextQL Virtual Sandcastle Service Consumption Table may be updated from time to time. Changes shall be effective on the date that TextQL announces they are effective. Any capitalized terms used but not defined herein shall have the meaning set forth in the Agreement or the Documentation, as applicable.