AI Governance: Tech Problems Without Tech Solutions

February 15, 2024 | By Alex Sharpe, IANS Faculty

This piece is part of our 'Faculty Focus' series, an interview-style article in which a member of the IANS Faculty shares firsthand, practitioner-based insights on an infosec topic. In this feature, Alex Sharpe discusses common AI adoption challenges for organizations and offers guiding principles for crafting an AI governance strategy.

 

AI Governance with IANS Faculty Member Alex Sharpe

Alex Sharpe is a longtime cybersecurity and privacy expert (30+ years) with real-world operational experience. He has spent much of his career helping large corporations and government agencies reap the rewards afforded by advances in technology, such as digital transformation.

He began his career at the NSA before moving into the management consulting ranks, building practices at Booz Allen and KPMG. He subsequently co-founded two firms, both with successful exits. He has participated in over 20 M&A transactions and has delivered to clients in over 20 countries on six continents. Alex holds degrees in electrical engineering, management, and business from the New Jersey Institute of Technology (NJIT), Johns Hopkins University (JHU), and Columbia Business School. He is a published author, speaker, instructor, and advisor. He serves on industry forums and is a mentor at an incubator.

 

AI can be a powerful tool for organizations, but its use must be governed carefully to ensure it does not lead to unintended consequences.

AI as we know it today has been around for about 70 years. Alan Turing wrote his seminal paper, "Computing Machinery and Intelligence," in 1950. In the mid-1950s, Arthur Samuel developed a program to play checkers. In 1956, John McCarthy and his colleagues held a workshop at Dartmouth on "Artificial Intelligence," widely regarded as the first use of the term.

Why suddenly has AI become all the rage, and how do I, as a business leader, fold it into my strategy?

The answer is simple, but the execution takes time. AI is driven by data – large amounts of data. We have more data today than we ever had before. Current estimates show we produce about 2.5 quintillion bytes of data every day – that is 2.5 × 10^18 bytes. The cost to create, store, and process that data has decreased dramatically; the average person has more computing power in their watch than we had to land on the moon. To give a sense of how much data is required, ChatGPT, the most widely recognized product name in AI, is reported to have required the equivalent of 10,000 years of continuous conversations to train the model.

AI is a technology problem without a technology solution. Reaping the rewards of AI safely and securely requires the cooperation of technology, people, processes, and the organization. It requires a holistic approach up, down, and across the organizational chart. It also requires public/private partnerships, along with active engagement across industries.

AI is following a very familiar adoption path, much like other disruptive technologies before it, such as the internet when it was first introduced.

When it comes to managing risk, Dr. Arati Prabhakar, Director of the Office of Science and Technology Policy (OSTP) and former Director of the Defense Advanced Research Projects Agency (DARPA), said it best:

“When we look at what’s happening with AI, we see something very powerful, but we also see the technology that is still quite limited. The problem is that when it’s wrong, it’s wrong in ways that no human would ever be.”

With this in mind, we can list some guiding principles for crafting an AI governance strategy. There is not enough room to explore them fully here, but we can introduce them.

 

AI Adoption Guidance for Leadership

  • The Genie is out of the Bottle, Embrace It: As humans, we adopt new technology at different rates and at different points in the adoption cycle. There are two critical differences with AI. First, the widespread availability of AI and AI-powered tools means your staff can use AI whether or not you have sanctioned it. Second, recent studies show that 75% of respondents want to know your AI policy before doing business with you.

     

  • If it is not a good idea, it is a worse idea with AI: It sounds obvious. The news is full of incidents where an employee uploads sensitive information to an AI tool or relies on its output without checking the results. We have seen this before with other technologies, such as social media, when they first appeared. At some point, every organization will want an Acceptable Use Policy (AUP) for AI. That can take a while. In the near term, communicate the broad strokes to the staff; executive communication from senior leadership is the best way to go. In parallel, fold AI into your security training and awareness programs.

Download: AI Acceptable Use Policy Template

  • The same communication is a perfect opportunity to engage the staff. Encourage them to share ideas and suggestions for use cases, and ask for their suggestions on the safety and security of AI. Whichever AI policy path you choose, be sure to address AI:
    • Uses that are prohibited
    • Uses that are permitted with some restrictions; and
    • Uses that are generally permitted without any restrictions
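The three policy tiers above lend themselves to a simple classification check. The following is a minimal sketch; the tier names, example use cases, and default-deny behavior are illustrative assumptions, not IANS recommendations:

```python
from enum import Enum

class AIUsePolicy(Enum):
    PROHIBITED = "prohibited"
    RESTRICTED = "permitted with restrictions"
    PERMITTED = "generally permitted"

# Hypothetical mapping of use cases to policy tiers; every
# organization will populate its own version of this table.
POLICY = {
    "upload_customer_data": AIUsePolicy.PROHIBITED,
    "draft_marketing_copy": AIUsePolicy.RESTRICTED,   # e.g., requires human review
    "summarize_public_articles": AIUsePolicy.PERMITTED,
}

def check_use(use_case: str) -> AIUsePolicy:
    # Default to PROHIBITED for anything not explicitly classified,
    # a conservative stance while the AUP is still maturing.
    return POLICY.get(use_case, AIUsePolicy.PROHIBITED)
```

The conservative default matters: while the formal AUP is still being drafted, unclassified uses fall into the most restrictive tier rather than slipping through ungoverned.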
  • Create an AI Sandbox: Establish a safe place to explore, learn, and experiment without doing any damage. We are in the early stages of reaping the rewards afforded by AI. The idea is to foster creativity without worrying about making mistakes: try, fail, build, and destroy. Establishing room for creative muscles to develop and explore is essential to building problem-solving, cooperation, and ideation skills. While exploring ways to unlock value, be sure to explore ways to ensure safety and security across the dimensions of people, process, technology, and the organization.

     

  • AI Testing Strategy - Crawl, Walk, Run: To paraphrase Dr. Prabhakar, when AI fails, it fails in ways a human never would. The sandbox limits the damage by controlling the blast radius as you explore, experiment, and create. Be sure to think two steps ahead: as something you are working on starts to look viable, ask yourself how you would begin to put it into practice. Core to the strategy is keeping a human in the loop to catch those unexpected failures. As you build confidence over time, you can reduce the human's role. Be mindful that while AI is a technology, safety and security cannot be achieved solely through technical controls.
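The crawl/walk/run idea can be sketched as a review gate that routes AI output to a human at a rate tied to the maturity phase. This is a minimal illustration; the phase names match the bullet above, but the specific review rates are hypothetical assumptions:

```python
import random

# Hypothetical review rates per maturity phase: in "crawl" every AI
# output is checked by a human; the rate drops only as confidence builds.
REVIEW_RATES = {"crawl": 1.0, "walk": 0.5, "run": 0.1}

def needs_human_review(phase: str, rng=random.random) -> bool:
    """Return True when this AI output should be routed to a human reviewer.

    `rng` is injectable so the gate can be tested deterministically.
    """
    return rng() < REVIEW_RATES[phase]
```

In the "crawl" phase every output is reviewed; even in "run," a sampled fraction still gets human eyes, reflecting the point above that safety cannot rest on technical controls alone.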

     

  • Not All AI is Created Equal: Most people hear AI and think ChatGPT. ChatGPT is one example of a subcategory called large language models (LLMs). AI, as it exists today, is mainly single-purpose; AI applications rarely perform more than one task. When looking at the role of AI in your strategy, you have many potential touchpoints, and no single class of AI will get you where you want to go. Unfortunately, a single taxonomy does not exist across the industry.

 

Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in connection with such information, opinions, or advice.

