ThoughtSpot AI Principles
Our Commitment
ThoughtSpot is committed to developing and deploying agentic analytics that are accurate and trustworthy. General-purpose AI hallucinates; ThoughtSpot doesn't. We build trustworthy AI for analytics: AI that is grounded, transparent, secure, customer-governed, and customer-controlled. As AI becomes central to how organizations make decisions, we recognize our responsibility to build AI features that assist human intelligence and decision-making while maintaining the governance, security, and reliability that enterprise customers require.
These principles guide how we design, build, and operate AI features across the ThoughtSpot platform.
Core Principles
Accuracy & Reliability
AI-generated outputs must be trustworthy and verifiable.
We design our AI systems and agents to perform the tasks users expect and deliver accurate, consistent results that users can rely on when making business decisions. When uncertainty exists, we communicate it clearly.
Our approach includes:
- Insights grounded in your data: ThoughtSpot generates insights from your governed data, not from broad training data or external sources that could introduce inaccuracies. When external context is used to enrich answers, your underlying data remains the authoritative source.
- Source attribution: Users can trace AI-generated answers back to the underlying data, queries, and calculations.
- Task fidelity: Our AI features are purpose-built to perform tasks the way an analyst coworker would: reasoning through logical steps, checking their own work, and refining results, all while ensuring that your data is accessed and shared securely at all times.
- Rigorous and continuous testing and validation: We conduct extensive accuracy testing across diverse data scenarios to validate and improve AI features before each release.
Transparency & Explainability
Users should understand how AI reaches its conclusions.
We believe AI should augment human judgment, not replace it. That requires users to understand what the AI did and why.
Our approach includes:
- Explainable outputs: AI-generated outputs, such as visualizations, queries, insights, data models, and code, include explanations of the underlying logic.
- Query visibility: Users can inspect the SQL or analytical logic behind AI-generated answers.
- Reasoning transparency: We surface the AI's step-by-step reasoning process, allowing users to see how answers are generated.
- Clear AI disclosure: We clearly indicate when content is AI-generated versus user-created.
- Documentation: We provide comprehensive documentation on how our AI features work, their capabilities, and their limitations.
- Audit trails: AI interactions are logged to support governance and review requirements.
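An audit trail of this kind records, for each AI interaction, who asked what, what the AI produced, and which model answered. The sketch below is illustrative only; the field names are hypothetical and do not reflect ThoughtSpot's actual logging schema.

```python
import json
from datetime import datetime, timezone

def log_ai_interaction(user_id, prompt, generated_sql, model, sink):
    """Append one AI interaction as a structured, reviewable record.

    Field names are hypothetical -- they illustrate the kind of audit
    trail described above, not ThoughtSpot's actual schema.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,              # who asked
        "prompt": prompt,                # what they asked
        "generated_sql": generated_sql,  # what the AI produced
        "model": model,                  # which model answered
        "ai_generated": True,            # clear AI disclosure
    }
    sink.append(json.dumps(record))
    return record

# Usage: append one interaction to an in-memory log for later review.
audit_log = []
entry = log_ai_interaction(
    user_id="u-42",
    prompt="Total revenue by region last quarter",
    generated_sql="SELECT region, SUM(revenue) FROM sales GROUP BY region",
    model="example-llm",
    sink=audit_log,
)
```

Because each record is self-describing JSON, a reviewer can reconstruct who saw what without access to the live system.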
Data Privacy & Security
Customer data is protected.
Your data belongs to you. We implement strict controls to protect customer data and maintain clear boundaries and visibility around how it is used.
Our approach includes:
- Data isolation: Customer data is logically segregated and never commingled across tenants.
- No training on customer data: We do not allow third-party LLMs to be trained on customer data.
- Encryption: Data is encrypted in transit and at rest using industry-standard protocols.
- Access controls: Our AI features operate within your existing security framework, respecting role-based access controls and data permissions—ensuring users only receive insights from data they’re authorized to access.
- Third-party AI providers: When integrating with third-party AI services, we apply contractual and technical safeguards, including zero-retention commitments and data protection agreements.
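The access-control and minimization points above combine naturally: resolve the user's permissions first, then pass downstream (including to any LLM) only the data that user is already allowed to see. The following is a simplified sketch; the role-to-column mapping and function names are hypothetical, not ThoughtSpot's implementation.

```python
# Hypothetical permission model: each role maps to the columns it may see.
ROLE_COLUMNS = {
    "analyst": {"region", "revenue", "order_count"},
    "support": {"region", "order_count"},
}

def minimize_for_llm(rows, role):
    """Strip columns the user's role cannot access before any LLM call,
    so the model only ever sees data the user is authorized to see."""
    allowed = ROLE_COLUMNS.get(role, set())
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]

# Usage: a "support" user gets region and order_count, but never revenue.
rows = [{"region": "EMEA", "revenue": 120000, "order_count": 310}]
filtered = minimize_for_llm(rows, "support")
print(filtered)
```

Filtering before the model call, rather than after, is what makes the guarantee hold: data the user cannot access never leaves the governed boundary.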
Human Oversight & Control
Humans remain in control of decisions.
AI should empower users to make better decisions faster—not make decisions for them without appropriate oversight.
Our approach includes:
- AI you control: Administrators can enable, disable, or constrain AI features based on organizational policies.
- Human-in-the-loop design: AI-generated models, dashboards, and configurations support easy human review and approval before outputs are finalized.
- Override capabilities: Users can modify, reject, or refine AI-generated outputs.
- Feedback mechanisms: Users can flag inaccurate or problematic AI outputs, enabling continuous improvement.
Fairness & Non-Discrimination
AI systems should produce equitable outcomes and avoid reinforcing bias.
Analytics informs decisions that affect people. We recognize the importance of fairness in analytics and work only with third-party LLM providers that conduct bias assessments, red-teaming, and fairness evaluations as part of their model development processes.
Our approach includes:
- Inclusive design: AI interfaces are designed to be accessible to users of all abilities.
- Provider accountability: We actively monitor our providers’ published responsible AI practices and reassess partnerships if standards are not maintained.
- Ongoing due diligence: We periodically review LLM providers’ model cards, safety reports, and responsible AI disclosures to ensure fairness and bias mitigation practices remain current.
Accountability & Governance
Clear ownership and processes ensure responsible AI deployment.
Responsible AI requires organizational commitment, not just technical controls. We maintain governance structures to ensure accountability.
Our approach includes:
- Cross-functional governance: AI governance involves stakeholders from Product, Engineering, Legal, Security, and Privacy to ensure comprehensive oversight.
- Incident response: We maintain processes to investigate and remediate AI-related issues reported by customers or identified internally.
- Regulatory alignment: We monitor evolving AI regulations and update our practices to maintain compliance.
- Customer transparency: We communicate material changes to our AI practices, features, and third-party providers through release notes, documentation, and Trust Center updates.
- Regular review: We periodically review and update our principles and practices.
Principles In Action
Customer Controls
We provide customers with tools to govern AI use within their ThoughtSpot environment, including:
| Control | Description |
| --- | --- |
| Feature toggles | Enable or disable specific AI features at the organizational level. |
| Role-based permissions | Control which users can access AI features. |
| Data security integration | AI respects your existing data access policies and security configuration. |
| AI governance | AI features are opt-in with logging and a Conversations Liveboard for compliance and review. |
| Bring Your Own LLM | Connect your own LLM provider using a BYOLLM key. |
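The first two controls, an organization-level toggle gated further by per-feature and per-role checks, can be sketched as a single guard function. The settings structure and names below are purely illustrative, not ThoughtSpot's actual configuration API.

```python
# Hypothetical org-level AI settings -- illustrative only, not
# ThoughtSpot's actual configuration schema.
org_settings = {
    "ai_features_enabled": True,            # organization-wide toggle
    "enabled_features": {"nl_search", "ai_highlights"},
    "allowed_roles": {"analyst", "admin"},  # role-based permissions
    "llm_provider": "byollm",               # Bring Your Own LLM
}

def can_use_ai_feature(settings, feature, role):
    """Gate an AI feature on the org toggle, the per-feature toggle,
    and the requesting user's role -- all three must pass."""
    return (
        settings["ai_features_enabled"]
        and feature in settings["enabled_features"]
        and role in settings["allowed_roles"]
    )

print(can_use_ai_feature(org_settings, "nl_search", "analyst"))  # True
print(can_use_ai_feature(org_settings, "nl_search", "viewer"))   # False
```

Evaluating the organization-wide toggle first means administrators can switch off all AI features with one setting, regardless of per-feature or per-role configuration.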
Third-Party AI Services
ThoughtSpot integrates with third-party large language model (“LLM”) providers to power certain AI features. These providers are listed on the ThoughtSpot Subprocessors page. When we integrate with a third-party AI service:
- Provider standards: We select providers with enterprise-grade security and privacy practices and commitments.
- Contractual protections: We negotiate contractual protections, including data processing agreements and zero-retention policies.
- Data minimization: We minimize the data sent to third parties to what is necessary for the feature to function.
- Zero training policy: We ensure that third-party LLMs do not use customer data for training purposes.
- Feature disclosure: We disclose which features rely on third-party AI services in our product documentation.
Continuous Improvement
AI technology and best practices evolve rapidly. We commit to:
- Staying current: Monitoring developments in AI safety, ethics, and regulations.
- Engaging stakeholders: Incorporating feedback from customers, employees, and external experts.
- Updating practices: Revising our principles and implementation as we learn and as standards evolve.
- Transparency: Communicating material changes to our AI practices.