Today, no security control can completely guarantee your data is protected when your third-party vendors use AI tools. There are, however, ways to reduce the risk. This piece provides a list of questions to ask vendors about their use of AI and shows how to use your own organization’s AI policy to set expectations and reduce the chance a vendor exposes your data without your consent.
AI Presents New Risks to Security
The AI boom is still so new that we don’t yet fully understand the risks it introduces. We know it raises the risk of data loss and data modification and, most worrisome, of third- and fourth-party exposure. Yet traditional vendor contracts, questionnaires and security audits rarely cover any of this.
Use Your AI Policy as a Guideline
Begin with your organization’s AI policy. Once it’s in place, make sure your third-party contractual language states that the vendor will protect your data to the same degree you protect it yourself. You can then hand over your organizational AI policy as a guideline for how third-party vendors need to protect your data.
When interviewing prospective vendors, ask about their data policies and the tools they use so you have a clearer understanding of how well they will protect your data. Questions and requests to put to vendors about AI include:
- List all organizations and business units with access to <your organization’s> data while that data is in <vendor’s> custody.
- Do you use any SaaS or external data processing/storage/disposal/analysis services?
- Does your data ever travel or move outside <vendor’s> perimeter?
- Does <vendor> or <vendor’s vendor> ever use generative AI, a large language model (LLM) or any other machine learning/AI system or application internally or externally?
- Provide <vendor’s> policy for AI and LLM usage.
- Provide data flow diagrams of both physical and logical movement and processing of <organization’s> data.
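One practical way to apply the questionnaire above consistently across many vendors is to encode it as structured data and flag unanswered items. The sketch below is purely illustrative: the item wording is condensed from the list above, and the field names and completeness rule are assumptions, not part of any standard.

```python
# Illustrative sketch: the vendor AI questionnaire as structured data,
# with a helper that flags items a vendor has not yet answered.
# Item names and the completeness check are hypothetical conventions.

AI_QUESTIONNAIRE = [
    "Organizations and business units with access to our data",
    "External SaaS / data processing, storage, disposal or analysis services",
    "Whether our data ever moves outside the vendor's perimeter",
    "Any internal or external use of generative AI, LLMs or other ML/AI systems",
    "Vendor's written policy for AI and LLM usage",
    "Data flow diagrams (physical and logical) for our data",
]

def outstanding_items(responses: dict) -> list:
    """Return questionnaire items the vendor has not answered yet."""
    return [q for q in AI_QUESTIONNAIRE if not responses.get(q, "").strip()]

# Example: a vendor has so far supplied only its AI policy.
responses = {
    "Vendor's written policy for AI and LLM usage": "Policy v2 attached",
}
missing = outstanding_items(responses)
print(f"{len(missing)} of {len(AI_QUESTIONNAIRE)} items outstanding")
```

Keeping responses in one structure like this also makes it easy to compare vendors side by side and to re-run the check at contract renewal.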
Providing your AI policy to prospective vendors and confirming they are willing to sign off on it and comply with it is a strong indicator of success. Remember: none of this means AI or machine learning systems are forbidden. Rather, the goal is to ensure vendors use proper caution and sound methodologies.
Ensure AI Documentation Is Valid and Authorized
Vendors’ stated technical controls around AI manipulation and processing of data are often questionable, because vendors sometimes claim capabilities that don’t exist in order to make a sale. To ensure current and future vendors aren’t feeding your data into generative AI or LLM prompts, make sure the appropriate weight of authority stands behind the documentation involved.
Make certain someone with a suitably senior title signs off on the compliance documents and policies you provide to your vendors. Sales engineers can claim ignorance all they want, but if the chief operating officer has signed off on the policy, the vendor is bound to keep to it.
Tips for Managing Vendor AI Usage
At their core, AI and machine learning systems are hard to govern. Without technical controls, industry standards and solid contractual language, it’s a free-for-all. To maintain control over your data and how vendors’ AI uses it:
- Build relationships: Make sure vendors know you’re happy to work with them as long as they’re willing to work with you. Set expectations and stick to them.
- Use existing contractual language to push your policy: Have your legal team develop strong boilerplate language to help push a consistent policy/process for vendors to comply with.
- Beware dishonest claims and verify what you are told: It’s human nature. People exaggerate or tell you what you want to hear to complete a deal. Get everything in writing and ensure the proper level of authority is behind all decisions.
Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in connection with such information, opinions, or advice.