Tips to Mitigate AI Business Risks

June 1, 2023 | By IANS Faculty

AI has both weaknesses and unintended consequences, and it is impossible for security teams to guard against all of them. This piece explains measures to help businesses minimize the risk of AI use, abuse and accidents that could potentially impact reputation or revenue.

AI Model Risks and Issues 

Humans are already getting ChatGPT and its ilk to produce malware, phishing emails (see Figure 1) and methods to cause various types of trouble. 

Figure titled Tricking Chatgpt to Write Phishing Emails

 

Another issue is oracles. The term comes from the smart contract world, where control over a data source (an oracle) can net a malicious actor an unethical or illegal edge over the competition. An example of this would be a bet over the temperature in an area for a specific day. If one party has the weather station for the area in their backyard, it’s relatively trivial to hold a flame near it to raise the temperature artificially.

Breaking an AI model is another way to destroy the veracity of an AI system. If you can change the weighting of a choice or the patterns an AI is looking for, then you can change all the answers it provides, changing how it buys stocks or measures the effectiveness of a medication. Such changes can include presenting biased training data, manually overriding the weighting of choices or tampering with the model in some other way.

One of the largest risks is understanding “why” the AI system selected a specific choice. AIs that are simply black boxes are useless to furthering actual solutions and may instead prolong problems. Additionally, they are wholly incapable of being understood or human-verified.

AI Risk Management Framework 

Luckily, NIST has recently released an AI Risk Management Framework. It’s well documented and includes many of the answers needed to be more confident in deploying and using an AI system.

The framework’s core functions include risk governance, context, identifying specific risks and understanding the impact of likely risks (see Figure 2).

Figure titled Four Core Functions of NISTs ai Risk Management Framework


The framework, along with the companion documents and videos, provides a solid foundation from which to build and deploy an AI risk management program.

AI Risk Mitigation and Monitoring Data 

Mitigating the human “simulation theory” risk, wherein a human writes a way for an AI to answer a question it normally wouldn’t, depends on good filtering of terms, phrases and having the AI check its own work. The AI system can be trained to see these kinds of requests as a poisoning of the system, i.e., a pattern. As such, it can be trained to inspect its own work and reject anything that shouldn’t be transmitted to the requestor.

Monitoring for and removing poisoned or biased data sources, poor training data or bad methodologies is exactly the same process used when a database is constructed. Data normalization, data cleaning, data testing, etc., is a very mature art and science. Performing commonsense testing of data sets to determine if the right answers are being given helps discover if anything has made it past the data cleansing systems.

Effectively, the same issues and prioritizations found within data-intensive systems today should be used with AI systems tomorrow.

The only real addition would be a comprehensive error-checking and error-notification process. If a user can click a single button to point out the answer is wrong, it will lead to understanding where the error came from much quicker than if it takes weeks to figure out there was an error in the first place.

The hardest thing to find is a bias in a data set. It may seem fine at first glance, and even when closely examined. But there are always further layers of statistical theory, and fallacies, to ponder on.

Tips to Mitigate AI Risk 

AI systems are being planned for just about any use case you can imagine. They are also being used and abused. To mitigate risks that are fundamental to the technology:

  • Rigorously check the inputs: Check training data carefully, and check for bias, errors, statistical anomalies and relevance to the questions likely to be asked.
  • Focus on errors and error reporting: Use the AI system to analyze requests and feed it (as training data) many rogue requests. Have a robust error reporting system and reward users who find errors (think of it as a bug bounty program for the data and model).
  • Understand the true costs of an AI system: It may help you reduce personnel/headcount, but it also means:
    • Losing institutional memory
    • Adding headcount to do data checking, model building, etc.
    • Speeding up the error cycle (humans can make errors; computers can make them at scale, and fast)
    • Measure the potential risk vs. the potential reward carefully: Many of the risks and rewards are not obvious at first glance.

Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in connection with such information, opinions, or advice.


Access time-saving tools and helpful guides from our Faculty.


IANS + Artico Search

Our 2024-2025 CISO Compensation and Budget Benchmark Survey is Live!

Subscribe to IANS Blog

Receive a wealth of trending cyber tips and how-tos delivered directly weekly to your inbox.

Please provide a business email.