13 October 2022 | 13 minute read

The human factor in artificial intelligence

Joint Bank of England / FCA discussion paper on AI and machine learning

Financial regulation is forever running to catch up with evolving technology. There are many examples of this: the Second Markets in Financial Instruments Directive (MiFID II) sought to make up ground on the increased electronification of markets since the introduction of MiFID I; policymakers in both the EU and the UK are at this very moment defining the regulatory perimeter around cryptoassets, more than a decade after the initial launch of bitcoin; and regulators first took action against runaway algorithms long before restrictions on algorithmic trading made it into regulatory rulebooks. Continuing this trend, on 11 October 2022, the Bank of England (BoE) and the UK Financial Conduct Authority (FCA) launched a joint discussion paper on how the UK regulators should approach the “safe and responsible” adoption of AI in financial services (FCA DP22/4 and BoE DP5/22) (the AI Discussion Paper), which is now open for responses. This follows the UK Government’s Command Paper published in July 2022, announcing a “pro-innovation” approach to regulating AI (CP 728) across different sectors.

One strong theme that comes out of the AI Discussion Paper is that, notwithstanding the potential benefits of AI in fostering innovation and reducing costs in financial services, the human factor is key to ensuring that AI is governed and overseen responsibly and that potential negative impacts on clients and other stakeholders are mitigated appropriately. The fact that the regulators are consulting on bringing the oversight of AI expressly within the scope of the UK Senior Managers and Certification Regime (SM&CR) illustrates the importance of this human element, and that humans should continue to run the machines, rather than the other way around.

Risks and benefits

The fact that applying AI to financial services brings both risks and benefits has been well-rehearsed, including in the June 2021 report from the Alan Turing Institute that was commissioned by the FCA and in the final report of the UK’s AI Public-Private Forum (the AIPPF Final Report) published in February 2022. These risks and benefits stem from the very nature of AI and how it operates, compared to, say, a conventional algorithm with static parameters. Whilst the BoE and the FCA concede that there is no consensus on a single definition of AI, “it is generally accepted that AI is the simulation of human intelligence by machines, including the use of computer systems, which have the ability to perform tasks that demonstrate learning, decision-making, problem solving, and other tasks which previously required human intelligence.”1 This is, of course, a technologically neutral definition; there are arguments both for and against a clear technical definition, and the AI Discussion Paper raises the question of how AI should be defined (if at all) by regulators. For example, the proposed EU Regulation on Harmonised Rules on Artificial Intelligence (the AI Act) casts the net widely as to what technology might constitute an “AI system”,2 with the result that it is likely to capture many existing systems, not necessarily limited to those that feature more advanced learning capabilities typically associated with AI.

The BoE recognises that, when adopted responsibly, AI can potentially outperform human beings in terms of the speed, scale and accuracy of outputs. Whilst a conventional algorithm might continue to apply the same parameters, an algorithm augmented by AI might adjust those parameters in line with both traditional data sources and “unstructured and alternative data from new sources (such as image and text data).” As the regulators note, “[w]hereas traditional financial models are usually rules-based with explicit fixed parameterisation, AI models are able to learn the rules and alter model parameterisation iteratively.” This creates challenges for governance and operational oversight that are more pronounced for AI than for conventional systems, because there is greater scope for unpredictable outcomes. It is also potentially more difficult to interrogate the reasons for a given decision driven by AI than it would be with a person (the so-called “black box” problem). Autonomous decision-making by AI has the potential to “limit or even potentially eliminate human judgement and oversight from decisions.”3 For obvious reasons, eliminating oversight entirely is difficult to reconcile with existing governance and operational rules under both the FCA Handbook and the PRA Rulebook, which quite properly see an ongoing role for real, living people to play in ensuring the right outcomes for stakeholders.
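To make this contrast concrete, the simplified Python sketch below compares a conventional algorithm with an explicit fixed parameter against a toy model that re-estimates its parameter from incoming data. The names, figures and decision rule are hypothetical illustrations and are not drawn from the AI Discussion Paper or any regulatory source.

# Illustrative sketch only: names, rules and figures are hypothetical and are
# not taken from the AI Discussion Paper or any regulatory source.

from statistics import mean


def fixed_rule_decision(income: float, threshold: float = 30_000) -> bool:
    """Conventional algorithm: an explicit, static parameter chosen by a human."""
    return income >= threshold


class AdaptiveThresholdModel:
    """Toy 'learning' model: the parameter is re-estimated from observed data,
    so its behaviour can drift without any change to the code itself."""

    def __init__(self, initial_threshold: float = 30_000):
        self.threshold = initial_threshold
        self.observed_incomes: list[float] = []

    def update(self, new_incomes: list[float]) -> None:
        # Iteratively alter the parameterisation in line with incoming data.
        self.observed_incomes.extend(new_incomes)
        self.threshold = mean(self.observed_incomes)

    def decide(self, income: float) -> bool:
        return income >= self.threshold


if __name__ == "__main__":
    model = AdaptiveThresholdModel()
    print(fixed_rule_decision(28_000))      # False, and will always be False
    print(model.decide(28_000))             # False today
    model.update([18_000, 22_000, 26_000])  # new data shifts the learned threshold
    print(model.decide(28_000))             # True: the same input is now approved

The governance point is that the second system’s behaviour shifts with the data it sees, so even a model this simple illustrates why the regulators focus on oversight of how and when model parameters change.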

The human factor

The AI Discussion Paper highlights that the regulators’ expectation is that firms using AI must ensure a “sufficiently strong set of oversight and governance arrangements that make effective use of humans in the decision-making loop and review the accuracy of those arrangements.” This concept of the “human-in-the-loop” – the level of human involvement in the decision loop of any given AI system – is a key focus of the AI Discussion Paper, and is a common theme in guidance and nascent regulation from around the world.4

Regulators’ expectations around human involvement in AI may apply at a number of levels, including:

  • the design of the AI system, such as defining the inputs and outputs of the system and how they are used, including “identifying where an automated decision could be problematic”;5
  • the operation of the AI system, including in the interpretation of system outputs and avoiding ‘automation bias’, where staff “accept automated recommendations or may be unable to effectively interpret the outputs of complex systems and falsely reject an accurate output”;6
  • the overall oversight and governance of firms’ use of AI, where the regulators reiterate that firms deploying AI systems need “a sufficiently strong set of oversight and governance arrangements that make effective use of humans in the decision-making loop and review the accuracy of those arrangements” (see the illustrative sketch below).
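By way of a concrete, deliberately simplified illustration of a “human-in-the-loop” decision gate, the Python sketch below shows one way human review might be built into an automated decision flow. The names, confidence scores and escalation criteria are hypothetical and are not taken from the AI Discussion Paper or any regulatory rule.

# Illustrative sketch only: the workflow, names and thresholds are hypothetical
# and are not drawn from the AI Discussion Paper or any regulatory rule.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Recommendation:
    customer_id: str
    action: str               # e.g. "decline_credit"
    confidence: float         # the model's own confidence score, 0.0 to 1.0
    significant_effect: bool  # would the decision materially affect the customer?


def requires_human_review(rec: Recommendation, confidence_floor: float = 0.95) -> bool:
    # A human reviewer is pulled into the loop whenever the decision would have a
    # significant effect on the customer or the model is not highly confident.
    return rec.significant_effect or rec.confidence < confidence_floor


def decide(rec: Recommendation, human_approval: Optional[bool] = None) -> str:
    if requires_human_review(rec):
        if human_approval is None:
            return "escalated: awaiting meaningful human review"
        # The reviewer must be able to reject the model's output, not merely
        # rubber-stamp it; otherwise the oversight is not meaningful.
        return rec.action if human_approval else "overridden by human reviewer"
    return rec.action  # low-impact, high-confidence cases proceed automatically


if __name__ == "__main__":
    rec = Recommendation("C-001", "decline_credit", confidence=0.97, significant_effect=True)
    print(decide(rec))                        # escalated: awaiting meaningful human review
    print(decide(rec, human_approval=False))  # overridden by human reviewer

The design point is that escalation is driven by the significance of the decision as well as model confidence, and the reviewer is able to override the model’s output rather than simply confirm it, which is the kind of “meaningful” human involvement discussed below.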

A serious look at the governance of AI within any given firm may include consideration of where AI “owners” and “champions” sit within the organisation and whether (and when) they come together through central AI escalation points. Firms may also want to consider the triggers that would lead to particular projects being scrutinised by their ethics or other internal committees that control whether any given AI project is given the green light. The management information (MI) that the governing body and other senior stakeholders would use to monitor both new and existing AI initiatives also merits serious consideration, in particular to ensure that AI is behaving as intended and that unlawful bias does not creep into AI-driven decisions.

In certain cases, ensuring human involvement in particular decisions may be an express legal requirement, rather than merely a question of good governance. The AI Discussion Paper expressly acknowledges that, as things stand, Article 22 of the UK General Data Protection Regulation (UK GDPR) “restricts fully automated decisions which have legal or similarly significant effects on individuals to a more limited set of lawful bases and requires certain safeguards to be in place.”7 For this purpose, the UK Information Commissioner’s Office (ICO) has insisted that this human input needs to be “meaningful”, and that a decision does not fall outside Article 22 just because a human has ‘rubber-stamped’ it. As noted in the AI Discussion Paper, however, the UK Government is planning to reform this aspect of the UK GDPR through the Data Protection and Digital Information Bill 2022-23, which is currently at the second reading stage in the House of Commons. In a similar vein, with the Consumer Duty being a key focus of the FCA ahead of the Duty coming into force for new and open products on 31 July 2023, it is inevitable that the FCA will increasingly look at AI through the lens of whether its use results in a firm delivering “good outcomes for retail customers” in line with new Principle 12, as well as whether the use of AI achieves the consumer outcomes and complies with the cross-cutting rules in new PRIN 2A.

Looking beyond the UK, any firms with European operations will also need to consider the AI Act, which will regulate “high-risk AI systems”, including certain tools used in financial services, for example to establish creditworthiness. The AI Act is likely to impose a significant compliance burden on entities within its scope, central to which is human involvement in risk management. Firms will be keen to ensure that their risk management and compliance operations relating to AI can be aligned practically where they face overlapping regulation across different jurisdictions. Where there is an opportunity to comment on evolving law, regulation and regulatory enforcement practice – for example in response to the joint discussion paper – firms will no doubt wish to advocate interoperability of evolving requirements, even if full commonality of requirements is unlikely to be achievable.8

Interactions with the SM&CR

The regulators acknowledge that “[w]ithin the SM&CR there is at present no dedicated SMF for AI.”9 Whilst responsibility for technology systems currently sits with the Chief Operations function (SMF24), the Chief Risk function (SMF4) is expected to “ensure that the data used by the firm to assess its risks are fit for purpose in terms of quality, quantity and breadth.”10 In addition, neither the SMF4 nor the SMF24 is a required function for core or limited scope SM&CR firms, even though they are key members of the governing body for banks and insurers. Focusing on the SMF4 or SMF24 could therefore leave a potentially important gap in regulation, particularly for smaller firms that may wish to offer AI-based advisory services, for example online via a platform. In addition, in firms of all sizes, business-line-aligned SMFs will have direct responsibility for AI initiatives being developed within their particular business area but, depending upon the circumstances, may coordinate with other members of the governing body to a greater or lesser degree on their approach to AI within their perimeter of responsibility.

The AIPPF Final Report raised the question of whether responsibility for AI should be concentrated in a single individual or shared between several senior managers. The AI Discussion Paper floats the possibility of introducing a new dedicated SMF and/or a Prescribed Responsibility specifically for AI. Here, the regulators highlight the risk of a “technology knowledge gap” between those on the governing body – who will often not have direct experience of working on or overseeing AI projects – and those operating within firms’ businesses who do.11 This points to a particular challenge: finding individuals with the requisite knowledge and experience to oversee AI initiatives, particularly at the senior level. A range of skills is likely to be necessary to ensure effective oversight, including the data science and statistical skills needed to determine whether data curation is being carried out in accordance with law and policy and to detect unlawful bias, increasing demand for an already rare skill set. It is clearly, however, a challenge that the regulators expect firms to overcome, with the regulators emphasising that governing bodies need to have the diversity of experience and capacity to provide effective challenge across the full range of the firm’s business. Tentatively, the BoE and the FCA propose that “the most appropriate SMF(s) may depend on the organisational structure of the firm, its risk profile, and the areas or use cases where AI is deployed within the firm.”12 As ever, this is without prejudice to the collective responsibility of boards and the respective responsibilities of each of the three lines of defence.

The debate around adequacy of governance is not limited to the governing body itself. The regulators emphasise the importance of ensuring that staff responsible for developing and deploying algorithms are competent to do so. One possibility they suggest to ensure this is the creation of a new certification function for AI, similar to the FCA’s existing certification function for algorithmic trading. The algorithmic trading certification function extends to persons who: (i) approve the deployment of trading algorithms; (ii) approve amendments to trading algorithms; and (iii) have significant responsibility for the management of the monitoring of, or for deciding, whether or not trading algorithms are compliant with a firm’s obligations. In the interests of consistency, if nothing else, rationalising the regulators’ approach to the certification of staff with responsibility for AI with their approach to staff responsible for trading algorithms has a degree of logic to it.

International influences

It would be difficult to comment on any UK initiative on AI without comparing it to overseas initiatives, not least (on the EU side) the AI Act and its accompanying Directive on AI Liability (the AILD), both of which, as drafted, have wide extra-territorial effect. Neither the AI Act nor the AILD is specific to the financial services sector, though both will of course carry important considerations for financial services firms using AI, not least where their AI initiatives may be categorised as “high-risk AI systems” for the purposes of those pieces of legislation. It is clear, however, that both the BoE and the FCA are thinking globally in their approach to AI and take inspiration from AI initiatives beyond Europe’s borders. These include (amongst others) the Veritas Initiative from the Monetary Authority of Singapore (the MAS), which seeks to enable financial institutions to evaluate their AI-driven solutions against the principles of “fairness, ethics, accountability and transparency” (FEAT) and in which many European, UK and US organisations are participating, and the AI Principles developed by the Organisation for Economic Co-operation and Development (the OECD). Financial services activity is fundamentally global, and drawing on the best global ideas to produce a regulatory framework that is “best of breed” – and does not conflict with other global standards – is a sensible approach.

Responses to the Discussion Paper

The AI Discussion Paper is open for comments until 10 February 2023. Whilst a lot of good thinking will no doubt come out of the stakeholder engagement on the discussion paper, the overall direction of travel seems clear: the adoption of AI requires robust governance arrangements and human oversight within an organisation, with clear lines of responsibility, and any use of an AI system without a ‘human-in-the-loop’ is likely to fall below the regulators’ expectations. It is also clear that effective governance and oversight will require a new skill set, particularly in the second and third lines of defence, to close the knowledge gap between those using and deploying AI and those overseeing its use and deployment.

DLA Piper will be supporting the International Regulatory Strategy Group13 to prepare a response to the AI Discussion Paper (FCA DP22/4 and BoE DP5/22).


1Paragraph 2.10, AI Discussion Paper.
2Article 3, draft AI Act.
3Paragraph 2.16, AI Discussion Paper.
4Paragraphs 4.64 – 4.66, AI Discussion Paper.
5Paragraph 4.64, AI Discussion Paper.
6Paragraph 4.66, AI Discussion Paper.
7Paragraph 4.65, AI Discussion Paper.
8DLA Piper will be supporting the International Regulatory Strategy Group (IRSG) and its members to coordinate IRSG’s response to DP22/4 / DP 5/22.
9Paragraph 4.50, AI Discussion Paper.
10SYSC 21.2.1(e).
11Paragraph 4.47, AI Discussion Paper.
12Paragraph 4.55, AI Discussion Paper.
13www.irsg.co.uk
