Navigating the Impacts of Generative AI on Compliance Programs

Whether it is predicting a consumer’s shopping preferences, guiding military decisions, or providing unique insights into financial crime, nearly every day new uses for artificial intelligence (AI) tools make headlines. Companies of all sizes are looking to expand and scale their AI capabilities to better serve clients and increase operational efficiencies.

While the adoption of AI tools can positively impact businesses, it is important that organizations understand the potential risks associated with implementing these technologies. These risks range from privacy and trade secret concerns, to hallucinations in which large language models generate false information, to large-scale, sophisticated cyberattacks and weaponized misinformation. AI tools can also be susceptible to bias or model distortion introduced by their training data, which can result in improper, incorrect, or invalid responses.

Given these challenges, strict compliance is central to any solution. Compliance is paramount to ensure that every efficiency gain specifically considers and addresses potential legal and regulatory exposure, particularly as it relates to data privacy and security. As Zoom recently discovered when trying to amend its terms of use to allow it to capture additional user data, even modest changes made to use data for AI purposes can be harshly judged and must be approached with caution.

Companies face the daunting challenge of implementing AI governance and compliance frameworks largely on their own, given the pace of innovation and the limited regulatory guidance that exists in the AI space. The European Commission has proposed the European Union (EU) AI Act, first-of-its-kind legislation on the use of AI. Although the United States does not have a unified regulatory framework, New York and other states are starting to develop rules that must be considered when, for example, AI tools are used in hiring processes.

Given that many compliance teams lack full visibility into the current use of AI technologies within their own organizations, the coming regulations are set to shake up compliance functions. With AI’s relentless progress, compliance must be a cornerstone, ensuring that ethical and legal AI policies anticipate, or at least keep up with, the whirlwind of change.

Compliance teams should keep five key considerations in mind when building an effective program to address their company’s use of AI.

1. Build the Right Compliance Team to Face the Growing Challenges

Compliance teams have always been expected to wear many hats, serving as experts across an organization. They must work to ensure that they are not siloed off from other departments and are actively integrated across the organization to maintain a comprehensive perspective, especially when developing and implementing holistic AI policies that apply to multiple corporate divisions. This also means that compliance teams should be diverse and composed of experts from both technical and non-technical backgrounds who can assess the potential issues with AI tools. Compliance teams must have the resources to ask the right questions about AI models and testing, and should not rely solely on the business for these efforts.

2. Maintain an Inventory of Tools

Compliance officers must maintain a clear view of the inventory of AI technologies being used across a company. This includes tools the company uses internally and those used for client-facing tasks. Compliance teams are better able to evaluate the benefits and risks of these technologies when they know which tools employees are using, which is another reason to integrate compliance officers across the company’s various lines of business. Where possible, compliance teams should also be included in the selection of the tools used across the organization. Certain tools may have been designed with privacy, data and other safeguards in mind, and compliance teams are best placed to assess the sufficiency of those safeguards.
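
For illustration only, such an inventory can be kept as a structured record per tool. The Python sketch below assumes a handful of fields (vendor, owning business unit, data categories handled, last compliance review) that a registry might track; the field names and sample tools are hypothetical, not a prescribed standard.

  from dataclasses import dataclass, field
  from datetime import date
  from typing import Optional

  # Hypothetical inventory record for a single AI tool; field names are illustrative.
  @dataclass
  class AIToolRecord:
      name: str                      # tool name as employees know it
      vendor: str                    # external provider, or "internal"
      business_unit: str             # line of business that owns the tool
      client_facing: bool            # internal-only use vs. client-facing use
      data_categories: list = field(default_factory=list)  # e.g. ["public", "confidential"]
      last_compliance_review: Optional[date] = None

  inventory = [
      AIToolRecord(
          name="marketing-copy-assistant",
          vendor="ExampleVendor",
          business_unit="Marketing",
          client_facing=False,
          data_categories=["public"],
          last_compliance_review=date(2023, 9, 1),
      ),
      AIToolRecord(
          name="contract-summarizer",
          vendor="internal",
          business_unit="Legal",
          client_facing=True,
          data_categories=["confidential"],
      ),
  ]

  # Flag tools that have never been reviewed by compliance.
  unreviewed = [t.name for t in inventory if t.last_compliance_review is None]
  print(unreviewed)  # ['contract-summarizer']

Even a simple registry like this makes it possible to answer the basic question of which tools have never been through a compliance review.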

3. Implement Strong, Clear Policies

Compliance teams should make sure they have developed uniform policies surrounding the use of AI within their company. Without these policies in place, enterprising employees may find that an AI tool makes their job more efficient and start using it, not realizing that they are exposing trade secrets or proprietary code to the AI’s underlying large language model. This happened to Samsung engineers who mistakenly provided confidential code to ChatGPT.

Unregulated use of AI in the workplace can have catastrophic effects for a company if there are no guardrails in place, including unwanted reputational consequences. Implementing acceptable use policies that specify which AI tools are permitted, what information may be shared with them, and how that information will be used and validated can help compliance teams mitigate potential AI-related risks.
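
As one hedged illustration of how part of such a policy could be operationalized, the Python sketch below checks a prompt against an approved-tool list and a few restricted-content markers before submission. The tool names and markers are made-up placeholders, not a recommended configuration.

  # Illustrative acceptable-use check run before a prompt is sent to an AI tool.
  # The approved tools and restricted-content markers are hypothetical examples.
  APPROVED_TOOLS = {"internal-chat-assistant", "code-review-helper"}
  RESTRICTED_MARKERS = ["confidential", "attorney-client", "account number", "ssn"]

  def is_submission_allowed(tool_name: str, prompt: str) -> tuple:
      """Return (allowed, reason) under this sketch of an acceptable-use policy."""
      if tool_name not in APPROVED_TOOLS:
          return False, f"'{tool_name}' is not on the approved tool list"
      lowered = prompt.lower()
      for marker in RESTRICTED_MARKERS:
          if marker in lowered:
              return False, f"prompt appears to contain restricted content ('{marker}')"
      return True, "allowed"

  print(is_submission_allowed("internal-chat-assistant",
                              "Summarize this public press release."))
  print(is_submission_allowed("some-unvetted-tool",
                              "Rewrite this confidential memo."))

A real policy would of course cover far more than keyword checks, but encoding even the basic rules makes them easier to communicate and audit.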

4. Validate and Test Tools

Any AI tools used across an organization must be validated and tested by a knowledgeable compliance team. This testing must include a review of key questions about any AI model (a simple way to record the answers is sketched after the list):

  • Where is the model being hosted and how is the data being protected?
  • Who is training the model and how is the training data selected?
  • What protections are being implemented to keep confidential and/or protected data from being input into the AI model?
  • How is access to the AI model controlled and by whom?
  • Is there or has there been any bias testing of the model?
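
One lightweight way to make these reviews auditable is to record an answer for each question per tool. The Python sketch below assumes the questions above map one-to-one to named review fields; the field names and sample model are hypothetical.

  # Hypothetical validation record capturing the review questions above for one AI model.
  REVIEW_QUESTIONS = [
      "hosting_and_data_protection",
      "training_owner_and_data_selection",
      "confidential_data_input_controls",
      "access_control_and_ownership",
      "bias_testing",
  ]

  def new_review(tool_name: str) -> dict:
      """Start an empty validation record; answers are documented during review."""
      return {"tool": tool_name, "answers": {q: None for q in REVIEW_QUESTIONS}}

  def open_items(review: dict) -> list:
      """List questions that still lack a documented answer."""
      return [q for q, answer in review["answers"].items() if answer is None]

  review = new_review("document-drafting-model")
  review["answers"]["bias_testing"] = "Third-party bias test completed June 2023"
  print(open_items(review))  # the four questions still awaiting documented answers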

5. Enforce Compliance Program Standards

Finally, companies should ensure that their compliance program enforces the rules: proper controls should be in place to keep restrictions from eroding and to provide visibility into who has access to the information being used. For example, many law firms are developing tools that use AI for blog posts and background research while prohibiting client data from being ingested or used by the tool. This is the type of situation that must be monitored carefully to avoid abuse.
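
As a minimal sketch of such a control, assuming a hypothetical internal client-identifier format, the Python example below blocks prompts that appear to reference client records before they reach a drafting tool. A real control would rely on the organization's own data classification and monitoring, not a single pattern.

  import re

  # Sketch of an enforcement control: block prompts that appear to contain client
  # identifiers before they reach a drafting tool. The ID format is a made-up example.
  CLIENT_ID_PATTERN = re.compile(r"\bCLIENT-\d{6}\b")

  def enforce_no_client_data(prompt: str) -> str:
      """Raise if the prompt appears to reference client records; otherwise pass it through."""
      if CLIENT_ID_PATTERN.search(prompt):
          raise ValueError("Prompt appears to contain client data and was blocked by policy.")
      return prompt

  try:
      enforce_no_client_data("Draft a blog post about the matter filed as CLIENT-123456.")
  except ValueError as err:
      print(err)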

Having an informed compliance team that is familiar with the technologies’ uses and functions, as well as the latest regulations, can help curb adverse consequences while maintaining compliance.

 

This article was written by Julie Myers Wood from Forbes and was legally licensed through the DiveMarketplace by Industry Dive. Please direct all licensing questions to legal@industrydive.com.
