New threats, same rules for finance generative AI

This article was written by Benjamin Cooper. It appeared first on the Bloomberg Terminal.

The potential for generative artificial intelligence—AI that responds to input by creating something new—to further automate customer service has been embraced by the financial sector. The use of AI in consumer financial customer service “chatbots” is already significant enough that the Consumer Financial Protection Bureau released a report in June on potential risks to consumers.

With new technology comes new risk. An article in MIT Technology Review described large language model (LLM) AI chatbots as “a security disaster.” This characterization is sensational, but generative AI does create novel cybersecurity risks and exacerbates existing risks. General counsel and compliance counsel need to be aware that there are new threats requiring modification of existing cyber-risk management strategies.

Insights for Quant and Data Professionals delivered to your inbox

Sign up for the newsletter

The majority of AI risk discussion has been about privacy and the inadvertent revelation of data, such as the recent FTC investigation of OpenAI. However, less discussed is what infrastructural cybersecurity risk management for these financial AIs—i.e., making sure that the AI doesn’t provide additional access into a system’s network—requires.

Current regulations impose general security requirements, so responsible counsel must ensure that new risks are included in policy and practice, even before guidance documents or opinion letters issue.

Prompt injections, data poisoning, and other risks

The most well-known vulnerability of generative AI is “prompt injection.” Prompt injection refers to any strategy creating input data that causes the generative AI to execute additional instructions beyond the analysis it was doing. Prompt injection is similar to “jailbreaking,” where a hacker tries to reason a chatbot into doing something it would not otherwise do, but prompt injection goes further as the tactic can potentially cause the AI to execute additional malicious code if the input isn’t properly secured.

Researchers at Germany’s Helmholz Center for Information Security discovered that prompt injection is not only a problem with the user input like a chat window or search bar, but it can also be done indirectly through malicious text or code in documents relied upon by the AI. For a simple example, a user-uploaded document could contain extraneous text invisible to a human reader (such as by being in white text on white background, or in a minuscule font size) but that could be processed by an unsecured AI reading the document.

There’s also the potential for the models themselves to be compromised during creation or during an update by “data poisoning.” A generative AI needs millions upon millions of documents to create its understanding of text or images or audio. Most current generative AIs are created by scraping the public internet, and may depend even more on completely public sources should current lawsuits prevent use of copyrighted material. The use of the internet allows potential cybercriminals to indirectly add incorrect data and conclusions to the model by putting up thousands of easily scraped pages with intentionally incorrect material. The AI will then have a skewed version of reality within its “brain,” leading to incorrect results.

In addition, malicious actors can use generative AI as an effort multiplier; the UK’s National Cyber Security Center, among others, warns of “risk that criminals might use LLMs to help with cyber attacks beyond their current capabilities.” Any vulnerability that exists is now potentially easier to exploit with generative AI, and the skill level of potential cyber criminals has effectively increased.

A final security issue is that generative AI increases the complexity of the system, and the more system complexity that exists beyond easy human cataloging, the more likely that there’s an unknown risk. Unknown risks can’t be mitigated, but humility about the level of security and vigilance about the possibility of breach will likely go far to providing best efforts.

Rules say to be secure, but not how

There isn’t yet specific state or federal guidance on the cybersecurity risks of generative AI. The National Institute of Standards and Technology, the primary issuer of specific cybersecurity bulletins, only recently announced a public working group on AI risk management. The standards in place are general in scope and not pointed to specific threats.

The Cyber Risk Institute’s profile for financial sector cyber risk assessment, which helpfully maps its standards to government regulations, including those from the Federal Financial Institutions Examination Council and the New York Department of Financial Securities, is a good example of regulations’ and standards’ generality. A cyber risk management program should be part of daily operations and “tailored to address enterprise-specific risks (both internal and external) and evaluate the organization’s cybersecurity policies, procedures, processes, and controls,” the profile says. Enterprise-specific risks are, therefore, left to the individual enterprise to anticipate and mitigate.

The most specific NYDFS regulation regarding technology is its requirement for multi-factor authentication, but even then the particular method is left to the institution. And as a 2021 enforcement action over a mortgage bank employee mindlessly clicking a smartphone authentication pop-up caused by a hacker using phishing-derived credentials shows, employee behavior is just as much regulated as technology.

Due to the lack of specific guidance, counsel in charge of compliance must make sure policy and practice keep up with current with cybersecurity threats.

For example, the Securities and Exchange Commission made identical statements in July 2022 administrative orders against TradeStation Securities Inc. and UBS Financial Services Inc. regarding violations of the Regulation S-ID identity theft protection program. Despite “significant changes in external cybersecurity risks related to identity theft,[] there were no material changes to the [p]rogram,” the SEC said.

NYDFS cybersecurity rules are explicit about the need for adjustment in the face of risk; they require a periodic cybersecurity risk assessment, which proposed amendments to the rule would require annually. Regardless of the nature of the violation, Federal Deposit Insurance Corporation enforcement orders, such as those issued against Maxwell State Bank this January, require a thorough review of an entire cybersecurity program.

Since most firms will not develop their own AI, but will license them or use them as software as a service (SaaS), the security of the licensor or SaaS contractor is also at issue.

In 2021, the SEC issued an order against Cetera Advisor Networks LLC for violating Regulation S-P‘s Safeguards Rule by failing to have adequate cybersecurity policies for their contractors, leading to a data breach affecting the broker-dealer’s customers. NYDFS cybersecurity requirements require explicit policies and procedures for third party service providers.

State and federal rules—and their history of enforcement—allow regulators to push knowledge for compliance to the institution. After an incident, it’s easy for regulators to say in hindsight that the institution knew or should have known of the security risk—whether or not the regulators were paying attention to the risk prior to the incident. Counsel for financial institutions should be prepared for regulators to require new security for these new risks.

Recommended for you

Request a Demo

Bloomberg quickly and accurately delivers business and financial information, news and insight around the world. Now, let us do that for you.