Guy Sheppard, Head of APAC Financial Crime, Intelligence and Initiatives, Swift
The Regtech stage is a crowded place. Although AI/machine learning models and distributed ledger technology are the darlings of the moment, a more fundamental shift is under way within the Financial Crime Compliance (FCC) space: a growing focus on the gap between technical and effective compliance.
Regulators are now asking banks to attest to the effectiveness of their key AML systems, namely their screening filters. It is no longer enough to simply have some form of automated screening system generating alerts for names and payments. The questions that banks need to ask themselves are: Can you prove it works? Do you understand it? Do the settings (lists used, exceptions or private lists) conform to the stated policy? Is this consistent across the group? And lastly, can you show it is tuned for optimum performance? In short, is your system “fit” for purpose, and are you working to get better?
Sustained and Consistent Regulatory Pressure
The regulatory and industry build-up has been unerringly consistent in its desire for banks to assure their screening controls. Key milestones include:
• The New York Department of Financial Services (DFS) Part 504, January 2017
• The Hong Kong Monetary Authority (HKMA) thematic review, April 2018
• The Wolfsberg Group “Guidance on Sanctions Screening”, April 2019
• The OFAC “Framework for OFAC Compliance”, May 2019
Is it really that unreasonable for regulators to ask financial institutions to confirm that their fundamentals are in working order? This feels more like bringing the systems in-line with the growing accountability thrust onto senior managers as well as the compliance professionals themselves. Otherwise, we are asking regulatory bodies to believe in the theoretical as opposed to the demonstrable abilities of one of the core elements in the AML toolkit.
The HKMA guidance in particular is extremely prescriptive. It stipulates independent third-party testing, frequent testing (i.e. more than once a year), the use of dummy data, list scope validation, oversight of hit suppressions, consistency with risk policy and a more thorough understanding of the algorithms. This is done with a view to monitoring the numbers of false positives and false negatives as metrics of improvement.
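To make those two metrics concrete, the sketch below shows one common way of deriving them from labelled test results, where each test case records whether the filter *should* have alerted and whether it *did*. This is an illustrative calculation only, not Swift's or the HKMA's methodology; the function and field names are hypothetical.

```python
def screening_metrics(results):
    """Summarize filter performance from labelled test cases.

    results: iterable of (should_hit, did_hit) boolean pairs,
    one per test case.
    """
    tp = sum(1 for should, did in results if should and did)        # true matches caught
    fn = sum(1 for should, did in results if should and not did)    # true matches missed
    fp = sum(1 for should, did in results if not should and did)    # clean traffic alerted
    tn = sum(1 for should, did in results if not should and not did)

    # Effectiveness: share of genuine list matches the filter catches.
    effectiveness = tp / (tp + fn) if (tp + fn) else 0.0
    # Efficiency: share of clean traffic the filter lets through unalerted.
    efficiency = tn / (tn + fp) if (tn + fp) else 0.0

    return {"false_positives": fp, "false_negatives": fn,
            "effectiveness": effectiveness, "efficiency": efficiency}
```

A falling false-negative count shows effectiveness improving; a falling false-positive count shows efficiency improving, which is why the guidance treats the pair as complementary metrics.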
The irony is that Asia, the traditional double-digit growth driver for many firms, is now setting the standard for screening assurance and transparency, safeguarding that growth through such stringent regulation. Hong Kong’s importance as a financial epicentre has seen significant uptake of this “localized” legislation in Australia, South Korea, China, Japan, Singapore and India. Indeed, many financial institutions are bracing themselves for similar legislation from other regulatory bodies such as the Monetary Authority of Singapore (MAS), the Reserve Bank of India (RBI) or the People’s Bank of China (PBOC), as their largest banks have already had to comply with the HKMA guidance.
The Impact on Financial Institutions
It is a well-known fact that regulators drive the investment priorities of their communities through the focus of their reviews and legislation. There has been a step change amongst the Hong Kong compliance community in how “savvy” they have become in terms of understanding their sanctions filters, their vendor options and indeed their list providers in addition to the scope of testing required. For some, this has been a steep learning curve.
Key questions most banks are grappling with in terms of testing include:
• Are we over or under screening?
• How do we compare against the wider industry?
• What does good look like?
• Where does filter performance vary and is this in line with our appetite?
• How do I reduce false alert numbers without compromising on match quality?
• Is my list vendor supplying me with up-to-date and quality sanctions data?
SWIFT Sanctions Filter Testing
SWIFT, an industry-owned cooperative and global payments network owner, has been helping its members test their filters for effectiveness and efficiency for over a decade, a capability strengthened by the acquisition of Omnicision in 2014.
Although historically testing was the preserve of the global transaction banks, with 9 in 10 relying on SWIFT testing, the HKMA legislation has seen a dramatic uptake of SWIFT testing expertise in Asia:
• 25 banks in Hong Kong
• 2 of the 3 major Singaporean Banks
• 3 of the big 4 Australian and half the Japanese mega banks
• One of the largest Chinese banks as well as the largest Chinese Fintech payments provider
• All of the major South Korean banks
Although some boutique firms will claim to be the “world leader” in filter assurance, the financial industry has a habit of voting with its feet. SWIFT tests more than 150 filters provided by 50 different vendors in over 40 countries and is relied on for routine filter assurance and testing by 70 of the world’s globally systemically important financial institutions. Every year SWIFT generates over 200 million test cases for its financial community customers, which includes some of the largest insurance firms, payments service providers as well as banks.
The key to an effective filter assurance program is to use a recognized and transparent methodology. This allows testing in a controlled fashion, so the effects of changes to settings, rules and name derivations can be isolated and performance changes easily identified. Sanctions lists should be tested in their entirety: given the wide variance in the quality of names found on regulatory sanctions lists, any extrapolation from sample results is at best misleading and statistically incorrect, especially for effectiveness ratings. It is also crucial to apply an exhaustive series of derivations to names, across all major payment message formats, to understand the peaks and troughs of performance. Good-quality “dummy data” is essential to ensure that a representative sample of false positives can be generated, so the root causes can be mitigated through precise match analysis. Lastly, a key differentiator is often the quality of consultative support and analysis to help make sense of the output and assist in embedding the use of testing tools into the bank’s DNA.
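The "derivations" mentioned above can be pictured with a toy generator: given a listed name, produce systematic variants (reordering, concatenation, transpositions) and check whether the filter still alerts on each one. This is a deliberately minimal sketch of the idea; real assurance programmes apply far larger and more sophisticated rule sets, and the function below is a hypothetical illustration, not any vendor's actual derivation engine.

```python
def derive_names(name):
    """Return a set of simple derivations of a listed name for test-case generation."""
    tokens = name.split()
    variants = {name, name.upper(), name.lower()}

    # Token reordering, e.g. surname-first presentation.
    variants.add(" ".join(reversed(tokens)))
    # Concatenated and hyphenated forms.
    variants.add("".join(tokens))
    variants.add("-".join(tokens))
    # Dropped middle names.
    if len(tokens) > 2:
        variants.add(f"{tokens[0]} {tokens[-1]}")

    # Single-character transpositions within the first token (typo simulation).
    first, rest = tokens[0], tokens[1:]
    for i in range(len(first) - 1):
        swapped = first[:i] + first[i + 1] + first[i] + first[i + 2:]
        variants.add(" ".join([swapped] + rest))

    return variants
```

Running every list entry through such a generator, across every supported message format, is what reveals where a filter's matching starts to break down.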
Filter algorithms, despite some improvements, have not changed significantly in twenty years, and all ‘fail’ once enough derivative difference has been applied to the entry data. That in itself is not the issue. Institutions need to know where such weaknesses lie and to be comfortable that performance is consistent, in line with policy and effective enough.
There should be no mystique to testing filter effectiveness. It is a process of mathematical exhaustion that often underpins the more attention-grabbing AI programs by reducing the “known unknowns” systematically. Hit-reducing rules should be in place, and their effects well understood. Hit suppressions should similarly be well documented and the risk to effectiveness marginal. An effective system is transparent and enables the firm to train and improve over time while also offering a sense check of progress against the world’s premier screening athletes - the global transaction banks that set the pace.
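The "mathematical exhaustion" described above is essentially combinatorial: test-case volume grows multiplicatively with list size, derivation rules and message formats. The figures below are invented purely to illustrate the arithmetic; they are not Swift's actual parameters.

```python
# Illustrative only: every figure here is a made-up assumption.
names = 1500        # entries on a single sanctions list
derivations = 40    # derivation rules applied per name
formats = 12        # payment message formats exercised

# Exhaustive coverage multiplies the three dimensions together.
cases = names * derivations * formats
print(cases)  # 720000 test cases from one modest list alone
```

Scaled across many lists, larger rule sets and a full customer base, this is how test volumes reach the hundreds of millions.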