Almost all facets of securities operations are investigating, or even embracing, cutting-edge technologies, particularly branches of artificial intelligence (AI) such as machine learning, natural language processing (NLP), and deep learning. Many firms are also exploring how robotic process automation (RPA) may make their lives easier.
But, as these advanced technologies become more prevalent, regulators are wondering how they will be able to get a grip on them. Basically, who will mind the robots that are minding the firms?
Late last month, five federal financial regulatory agencies decided to officially gather “insight on financial institutions’ use of artificial intelligence (AI),” according to a combined press release from the Federal Reserve Board, the Consumer Financial Protection Bureau (CFPB), the Federal Deposit Insurance Corp. (FDIC), the National Credit Union Administration (NCUA) and the Office of the Comptroller of the Currency (OCC).
They jointly announced a request for information (RFI) “to gain input from financial institutions, trade associations, consumer groups, and other stakeholders on the growing use of AI by financial institutions.”
These agencies want to understand how AI is used in the provision of services to customers “and for other business or operational purposes; appropriate governance, risk management, and controls over AI; and any challenges in developing, adopting, and managing AI,” according to the summary of the 23-page RFI.
“The RFI also solicits respondents’ views on the use of AI in financial services to assist in determining whether any clarifications from the agencies would be helpful for financial institutions’ use of AI in a safe and sound manner and in compliance with applicable laws and regulations,” according to the RFI.
The RFI focuses on key areas of benefit such as:
- “Flagging unusual transactions. This involves employing AI to identify potentially suspicious, anomalous, or outlier transactions (e.g., fraud detection and financial crime monitoring). It involves using different forms of data (e.g., email text, audio data — both structured and unstructured), with the aim of identifying fraud or anomalous transactions with greater accuracy and timeliness. It also includes identifying transactions for Bank Secrecy Act/anti-money laundering investigations, monitoring employees for improper practices, and detecting data anomalies.”
And on risks:
- “It is important for financial institutions to have processes in place for identifying and managing potential risks associated with AI, as they do for any process, tool, or model employed. Many of the potential risks associated with using AI are not unique to AI. For instance, the use of AI could result in operational vulnerabilities, such as internal process or control breakdowns, cyber threats, information technology lapses, risks associated with the use of third parties, and model risk, all of which could affect a financial institution’s safety and soundness.”
The agencies want to hear from you and have provided multiple ways to submit feedback. Comments will be accepted until June 1, 2021.
The full RFI can be found via the Federal Register at https://bit.ly/3ahTWOt