AI Governance & Security · 7 min read

Choosing AI Tools Safely: A Vendor Evaluation Framework

A practical framework for evaluating AI vendors on security, compliance, and data handling. The questions to ask before you sign.

Why safe AI selection matters

AI tools are appearing everywhere — from standalone SaaS products to features embedded in software you already use. The rush to adopt means many organisations are using AI tools without properly evaluating how they handle data.

This creates real risk. Your customer data, employee records, financial information, or proprietary knowledge could be stored in unknown locations, used for model training, or accessible to the vendor's staff.

Evaluation framework

For every AI tool your organisation considers, evaluate these five areas:

  1. Data residency: Where is data processed and stored? Is it in Australia?
  2. Data usage: Is your data used to train or improve the AI model? Can you opt out?
  3. Security controls: Encryption, access controls, audit logging?
  4. Compliance: SOC 2, ISO 27001, GDPR, or Australian-specific certifications?
  5. Data retention: How long is your data kept? Can you request deletion?
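
If you evaluate more than one tool, it helps to record these five areas as structured data so vendors can be compared side by side. Below is a minimal Python sketch of such a scorecard; the field names, the red-flag rules, and the example vendor are our own illustrative assumptions, not a description of any real product.

```python
from dataclasses import dataclass, field

@dataclass
class VendorAssessment:
    """One record per AI tool, covering the five evaluation areas."""
    name: str
    data_residency: str                 # e.g. "AWS ap-southeast-2 (Sydney)"
    trains_on_customer_data: bool
    training_opt_out: bool
    encryption_at_rest: bool
    encryption_in_transit: bool
    certifications: list[str] = field(default_factory=list)  # e.g. ["SOC 2 Type II"]
    retention_days: int | None = None   # None = vendor could not give a number

    def red_flags(self) -> list[str]:
        """Issues that should pause or block adoption."""
        flags = []
        if self.trains_on_customer_data and not self.training_opt_out:
            flags.append("trains on customer data with no opt-out")
        if not (self.encryption_at_rest and self.encryption_in_transit):
            flags.append("incomplete encryption story")
        if self.retention_days is None:
            flags.append("no stated retention period")
        residency = self.data_residency.lower()
        if "australia" not in residency and "sydney" not in residency:
            flags.append("data stored outside Australia: check your APP 8 obligations")
        return flags

# Hypothetical vendor, for illustration only.
tool = VendorAssessment(
    name="ExampleAI",
    data_residency="Various global data centres",
    trains_on_customer_data=True,
    training_opt_out=False,
    encryption_at_rest=True,
    encryption_in_transit=True,
    certifications=["SOC 2 Type II"],
)
print(tool.red_flags())
```

Even if you never run the code, writing the answers down in this shape forces every question a field represents to actually get asked.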

Security questions to ask

Before signing up for any AI tool that will handle business data:

  • Where are your servers located? Do you have Australian data centres?
  • Is data encrypted at rest and in transit? What encryption standards?
  • Who at your company can access our data? What access controls exist?
  • Do you have SOC 2 Type II certification? ISO 27001?
  • What happens to our data if we cancel? How long until deletion?
  • Do you have a data processing agreement (DPA) we can review?
  • What's your incident response process? Will you notify us of breaches?
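
Because you'll put these same questions to every vendor, it's worth keeping them as data rather than retyping them into each email. A minimal sketch; the output file name and JSON shape are assumptions, shown only to illustrate the idea:

```python
import json

# The questions above, kept in one place so every vendor answers
# exactly the same list and responses can be compared line by line.
DUE_DILIGENCE_QUESTIONS = [
    "Where are your servers located? Do you have Australian data centres?",
    "Is data encrypted at rest and in transit? What encryption standards?",
    "Who at your company can access our data? What access controls exist?",
    "Do you have SOC 2 Type II certification? ISO 27001?",
    "What happens to our data if we cancel? How long until deletion?",
    "Do you have a data processing agreement (DPA) we can review?",
    "What's your incident response process? Will you notify us of breaches?",
]

def blank_response_sheet(vendor: str) -> dict:
    """One sheet per vendor: each question paired with an answer and an evidence link."""
    return {
        "vendor": vendor,
        "responses": [
            {"question": q, "answer": "", "evidence": ""}
            for q in DUE_DILIGENCE_QUESTIONS
        ],
    }

# "ExampleAI" is a placeholder vendor name.
with open("exampleai_due_diligence.json", "w") as f:
    json.dump(blank_response_sheet("ExampleAI"), f, indent=2)
```

The "evidence" field matters: an answer without a link to a DPA, audit report, or documentation page is just marketing.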

Data handling checklist

For each question, compare the vendor's answer against the ideal and the red flag:

  • Is data used for model training? Ideal: "no", with opt-out as the default. Red flag: "By default, yes" with no opt-out.
  • Where is data stored? Ideal: a specific region (e.g., AWS Sydney). Red flag: "Various global data centres".
  • Can data be deleted? Ideal: yes, on request with confirmation. Red flag: "Data may be retained indefinitely".
  • Third-party access? Ideal: no sub-processors without disclosure. Red flag: unnamed third-party processing.
  • Encryption? Ideal: AES-256 at rest, TLS 1.2+ in transit (the in-transit half is one you can verify yourself; see the sketch below). Red flag: "Encryption where appropriate".
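
Transport encryption is the one row you can spot-check before any sales call. A minimal Python sketch using only the standard library; the hostname is a placeholder, not a real vendor endpoint:

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to an endpoint and report the TLS version it negotiates.

    This checks in-transit encryption only. Encryption at rest cannot
    be probed from outside; for that, rely on the vendor's DPA and
    audit reports (SOC 2, ISO 27001).
    """
    context = ssl.create_default_context()
    # Refuse anything older than TLS 1.2, matching the checklist's bar.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

# Placeholder hostname; substitute the vendor's actual API endpoint.
print(negotiated_tls_version("api.example-ai-vendor.com"))
```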

Red flags to watch for

  • Vague privacy policies: If they can't clearly explain where data goes and how it's used, assume the worst.
  • No opt-out for model training: Your business data should never be used to train someone else's model without explicit consent.
  • "Free" AI tools: If the product is free, your data is likely the product. Free tiers usually have the weakest privacy protections.
  • No data processing agreement: Any enterprise-grade AI vendor should provide a DPA.
  • Data leaves Australia without disclosure: Under APP 8, you generally remain accountable for how an overseas recipient handles personal information, so undisclosed offshore processing creates legal liability for you.

Our approach: Self-hosted or Bedrock-based AI means your data never leaves your AWS account. No vendor training, no third-party access, full audit control.
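
For context, here is roughly what a Bedrock-based call looks like. The request is served inside your chosen AWS region under your own IAM and logging controls, and AWS documents that Bedrock does not use customer inputs to train the underlying models (verify this against current AWS documentation for the models you use). The region, model ID, and prompt below are illustrative choices, not recommendations:

```python
import boto3

# Bedrock's Converse API; the region and model are yours to choose.
client = boto3.client("bedrock-runtime", region_name="ap-southeast-2")  # Sydney

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarise this clause in plain English: ..."}]}
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```

The evaluation framework above still applies, but most of the answers reduce to reviewing your own AWS configuration rather than a third party's promises.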

Key takeaways

  • Not all AI tools handle your data the same way — evaluate before you adopt.
  • Key areas: data storage location, training opt-out, encryption, access controls, and compliance certifications.
  • Ask explicit questions about where your data goes, who can access it, and how it's retained.
  • Free tiers often have the weakest data protections — pay for the plan that matches your risk profile.

Ready to discuss your project?

Tell us what you're working on. We'll come back with a practical recommendation and clear next steps.