"Where does my data go when I use ChatGPT?" It's the question every business owner should be asking before adopting AI tools. The answer varies significantly depending on which tools you use and how you configure them.
This isn't about avoiding AI—it's about using it intelligently. Understanding data privacy lets you capture the productivity benefits of LLMs while meeting your obligations to customers and staying on the right side of Australian law.
The Core Privacy Concern with AI
When you type something into ChatGPT or similar tools, you're sending data to external servers. Depending on the service and your agreement with them, that data might be:
- Stored temporarily for processing your request
- Logged for security or abuse prevention
- Used for training to improve future models
- Retained indefinitely in some form
For personal use, this might be acceptable. For business use—especially with customer data, financial information, or confidential strategies—it requires careful consideration.
Key question: If you paste customer information into an AI tool, who else might see it, where is it stored, and could it appear in outputs to other users?
Australian Privacy Context
Australian businesses are bound by the Privacy Act 1988, which includes the Australian Privacy Principles (APPs). These apply to most businesses with annual turnover above $3 million, as well as all health service providers and some other categories.
Relevant Privacy Principles
Several APPs are directly relevant to AI use:
- APP 6 (Use and disclosure): Personal information can only be used for the purpose it was collected, unless an exception applies
- APP 8 (Cross-border disclosure): If you share data with overseas entities (like US-based AI providers), you remain accountable for how they handle it
- APP 11 (Security): You must take reasonable steps to protect personal information from misuse, interference, and unauthorised access
What This Means Practically
If you're pasting customer data into AI tools:
- You need to understand what the provider does with that data
- The use should align with why you collected the data originally
- You may need to update your privacy policy
- You remain responsible for any privacy breaches, even if they occur at the AI provider's end
Note: Privacy law is evolving. The Australian Government is actively reviewing AI and privacy regulation. Stay informed about changes.
Types of AI Deployments and Privacy Implications
Not all AI implementations carry the same risk. Understanding the spectrum helps you make appropriate choices.
Consumer AI Tools (Highest Risk)
Free versions of ChatGPT, Claude, Gemini, and similar tools typically:
- Use your inputs to train future models unless you opt out (many now offer an opt-out)
- Store conversation history on provider servers
- Operate under terms of service designed for consumers, not businesses
- Offer limited guarantees about data handling
Appropriate for: Non-sensitive tasks, general research, personal productivity.
Enterprise API Access (Medium Risk)
Business-tier subscriptions and API access typically offer:
- Exclusion of your data from model training by default
- Data Processing Agreements (DPAs)
- Defined data retention periods
- Compliance certifications (SOC 2, ISO 27001)
- Region-specific data handling options
Appropriate for: Business operations with proper controls, non-critical customer data with consent.
Private Cloud Deployment (Lower Risk)
Running AI models in your own cloud environment (AWS, Azure, GCP):
- Data stays within your controlled infrastructure
- No data leaves your environment for model training
- You control retention and access policies
- Higher setup and operational costs
Appropriate for: Sensitive data processing, regulated industries, organisations with strict data sovereignty requirements.
On-Premises Deployment (Lowest Risk)
Running models entirely on your own hardware:
- Complete data control
- No external network traffic for AI processing
- Requires significant infrastructure investment
- Limited to open-source or licensed models
Appropriate for: Highly regulated industries, organisations with existing infrastructure, air-gapped environments.
Practical Risk Mitigation
1. Classify Your Data
Not all data carries equal risk. Create categories:
- Public: Information already publicly available—minimal restrictions
- Internal: Business information not meant to be public—use business-tier AI tools
- Confidential: Customer data, financial details, strategy—strict controls required
- Restricted: Regulated data (health, financial)—consider private deployment only
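A classification scheme like this is easiest to enforce when it's written down as a policy table your tooling can check. The sketch below is a minimal illustration in Python; the class names, tier names, and the specific mapping are assumptions you would adapt to your own policy, not a standard.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical policy table: which AI deployment tiers each data
# class may be sent to. Adjust to match your own risk appetite.
APPROVED_TIERS = {
    DataClass.PUBLIC: {"consumer", "enterprise", "private"},
    DataClass.INTERNAL: {"enterprise", "private"},
    DataClass.CONFIDENTIAL: {"private"},
    DataClass.RESTRICTED: {"private"},
}

def is_allowed(data_class: DataClass, tier: str) -> bool:
    """Return True if this data class may be sent to the given AI tier."""
    return tier in APPROVED_TIERS[data_class]

print(is_allowed(DataClass.INTERNAL, "consumer"))  # False
print(is_allowed(DataClass.PUBLIC, "consumer"))    # True
```

Even a simple check like this, called before a prompt leaves your systems, turns the classification from a document into an enforced control.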
2. Create Usage Policies
Your team needs clear guidance on:
- Which AI tools are approved for which data types
- What types of information should never be entered
- How to anonymise or redact sensitive data before using AI
- Who to contact with questions or concerns
3. Anonymise Before Processing
In many cases, you can get the AI assistance you need without using real data:
- Replace customer names with placeholders
- Use aggregated data instead of individual records
- Remove identifying details from documents
- Create synthetic examples that mirror real scenarios
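As a concrete sketch of the redaction step, the Python snippet below masks a few common identifier patterns before text is sent to an external tool. The patterns and customer list are illustrative placeholders; a regex pass like this catches obvious identifiers only and is a starting point, not a guarantee of de-identification.

```python
import re

# Hypothetical list of known customer names to mask.
KNOWN_CUSTOMERS = ["Jane Citizen", "John Smith"]

def redact(text: str) -> str:
    """Mask common identifiers before text is sent to an external AI tool."""
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Australian-style phone numbers (e.g. 0412 345 678) -- rough pattern only
    text = re.sub(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b", "[PHONE]", text)
    # Exact-match replacement of known customer names
    for name in KNOWN_CUSTOMERS:
        text = text.replace(name, "[CUSTOMER]")
    return text

msg = "Jane Citizen (jane@example.com, 0412 345 678) reported a billing issue."
print(redact(msg))
# [CUSTOMER] ([EMAIL], [PHONE]) reported a billing issue.
```

For higher-risk data, dedicated PII-detection tooling or human review is more appropriate than hand-rolled patterns, but the principle is the same: strip identifiers first, then ask the AI.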
4. Review Provider Agreements
Before committing to an AI tool for business use:
- Read the data processing terms
- Understand training data policies
- Check data retention periods
- Verify cross-border data handling
- Request a Data Processing Agreement if needed
5. Monitor and Audit
Ongoing vigilance matters:
- Log AI tool usage where possible
- Periodically review what data is being processed
- Stay updated on provider policy changes
- Include AI tools in security reviews
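One low-cost way to make the logging step concrete is a small wrapper that records metadata about each AI call. The sketch below is an assumption-laden illustration (field names and the `log_ai_call` helper are invented for this example); note it deliberately logs metadata only, since logging the prompt itself would duplicate the sensitive data.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-usage")

def log_ai_call(user: str, tool: str, data_class: str, prompt_chars: int) -> dict:
    """Record an AI tool invocation for later audit (metadata only)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "data_class": data_class,
        "prompt_chars": prompt_chars,
    }
    log.info(json.dumps(record))
    return record

entry = log_ai_call("alice", "enterprise-llm", "internal", 420)
```

Reviewing these records periodically answers the audit questions above: who is using which tools, with what classes of data, and how often.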
RAG Systems and Data Privacy
RAG (Retrieval-Augmented Generation) systems offer a privacy-conscious way to use AI with your proprietary data. Instead of uploading everything to an AI provider, RAG keeps your data in your own database and only sends relevant snippets during queries.
Privacy Advantages of RAG
- Data stays local: Your documents remain in your controlled systems
- Minimal exposure: Only query-relevant chunks are sent to the LLM
- Access control: You can enforce who sees what at the retrieval layer
- Auditability: You can log exactly what data was used for each response
Considerations
- The retrieved chunks are still sent to the LLM—provider terms still apply
- Chunking and embedding processes need to preserve confidentiality
- For maximum privacy, combine RAG with private model deployment
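The retrieval layer described above can be sketched in a few lines. This is a deliberately simplified illustration: real RAG systems score chunks with vector embeddings, whereas this example uses keyword overlap so it stays self-contained, and the documents, roles, and audit format are all invented for the sketch.

```python
# Document store with per-chunk access control lists (hypothetical data).
DOCS = [
    {"id": 1, "text": "Refund policy: refunds within 30 days.", "acl": {"support", "admin"}},
    {"id": 2, "text": "Board strategy notes for FY25.", "acl": {"admin"}},
]

def retrieve(query: str, role: str, top_k: int = 1) -> list[dict]:
    """Return the top-k chunks this role may see; only these leave your systems."""
    q = set(query.lower().split())
    visible = [d for d in DOCS if role in d["acl"]]  # access control layer
    scored = [(len(q & set(d["text"].lower().split())), d) for d in visible]
    scored.sort(key=lambda pair: -pair[0])
    hits = [d for score, d in scored[:top_k] if score > 0]
    for d in hits:  # audit trail: record exactly what was sent to the LLM
        print(f"AUDIT: role={role} sent chunk id={d['id']} to LLM")
    return hits

chunks = retrieve("what is the refund policy", role="support")
# Only the retrieved snippet(s) would be appended to the LLM prompt.
```

The key privacy property is visible in the code: the full document store never leaves your environment, the access-control filter runs before anything is sent, and each outbound chunk is logged.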
Common Business Scenarios
Customer Support Enhancement
Scenario: Using AI to help support staff draft responses to customer queries.
Privacy consideration: Customer details (name, account info, issue history) may be exposed to the AI.
Mitigation: Use business-tier AI with DPA, anonymise customer details where possible, train staff on what not to include.
Document Analysis
Scenario: Using AI to summarise contracts or legal documents.
Privacy consideration: Contracts contain confidential terms, party names, financial details.
Mitigation: Consider private deployment for legal work, or redact sensitive terms before analysis.
HR and Recruitment
Scenario: Using AI to screen resumes or draft job descriptions.
Privacy consideration: Candidate personal information is sensitive and subject to privacy laws.
Mitigation: Avoid uploading full resumes to consumer AI tools. Use AI for job descriptions and general advice rather than candidate-specific analysis.
Financial Analysis
Scenario: Using AI to analyse financial data or create reports.
Privacy consideration: Financial data may be confidential or subject to regulatory requirements.
Mitigation: Use aggregated data where possible. For detailed analysis, consider private deployment or enterprise-tier services with strong compliance.
AI Privacy Readiness Checklist
Before deploying AI tools in your business:
- Identify what data types will be processed by AI
- Review AI provider terms and data handling policies
- Obtain Data Processing Agreements where needed
- Create staff guidelines on AI tool usage
- Update privacy policy to reflect AI tool usage
- Establish data classification for AI use cases
- Plan how to anonymise or redact sensitive information
- Consider customer consent requirements
- Set up monitoring for AI tool usage
- Include AI in your security review process
Taking the Right Approach
Privacy concerns shouldn't stop you from using AI—they should inform how you use it. Start with a clear understanding of what data you're processing and what risks are acceptable for your business.
For most small businesses, using enterprise-tier AI tools with appropriate policies provides a good balance of capability and protection. As your AI usage matures or if you handle particularly sensitive data, private deployment options become worth exploring.
Consider starting with your AI Readiness Assessment to understand where your organisation stands, then make informed decisions about the right deployment model for your needs.
