
16 December 2025

Practical considerations for data protection and cybersecurity when deploying AI

The adoption of AI is accelerating across the UK financial services sector. As it does, institutions must navigate a complex landscape of regulatory expectations, technological risks and ethical responsibilities. The challenge is to remain agile while aligning with core compliance requirements.


REGULATORY LANDSCAPE

The Financial Conduct Authority (FCA) and the Information Commissioner’s Office (ICO) have jointly outlined a vision for responsible and innovation-friendly AI adoption that aligns with existing financial and data protection regulation. In the absence of AI-specific regulation, financial services firms should adopt a “first principles” approach, drawing on existing rules and frameworks to implement AI policies and use cases.

Regulators do not expect firms to reinvent the wheel. Instead, firms should apply the existing data protection principles that underpin the UK GDPR, such as integrity, transparency, fairness, accountability and accuracy, together with robust oversight and safeguarding processes, to manage the privacy risks associated with AI. Firms should likewise approach AI through the lens of existing frameworks such as the Consumer Duty and the Senior Managers and Certification Regime (SM&CR). Under the Consumer Duty, firms must ensure their use of AI is understandable and transparent to customers, delivers fair value and meets customers’ needs. The SM&CR establishes senior management accountability for operational resilience, data governance and customer outcomes in financial services firms, and the FCA has signalled that it provides the framework under which senior management can evaluate the safe deployment of AI.


PRACTICAL CONSIDERATIONS

AI supports a wide range of use cases, from fraud detection and market forecasting to customer profiling and support. Each presents distinct challenges for maintaining data privacy and security. Key considerations before deployment include:

  • Taking a risk-based approach: Assess the intended use of AI tools from the outset. For example, generative AI chatbots pose different risks from machine learning tools used for credit scoring. Risks should therefore be considered on a case-by-case basis and throughout the lifetime of the tool, to ensure the AI remains targeted and reliable for its use case. Where risks are identified, safeguards should be put in place to maintain trust.
  • Lawful basis for data processing: Identify a lawful basis for processing personal data at each stage of the AI lifecycle, from design and training to deployment. Conducting a Data Protection Impact Assessment (DPIA) helps to demonstrate accountability and mitigate privacy risks.
  • Data quality and bias: Ensure high-quality data is used to train AI tools. Implement safeguards to monitor for bias or hallucination, especially when AI influences customer decisions; a minimal illustration of such a monitoring check appears after this list. Train staff to use targeted prompts and to challenge AI outputs, and contractually require equivalent data quality standards of third-party AI products so that accountability can still be demonstrated.
  • Transparency: Clearly explain AI-driven decision-making to customers, including the logic and consequences of automated decisions, to ensure fair and open dealing with consumers in line with the Consumer Duty. The ICO is developing a statutory code of practice for AI and automated decision-making following recent changes to Article 22 of the UK GDPR made by the Data (Use and Access) Act 2025. Firms should consider its impact, particularly for AI tools that affect customer outcomes, eg credit checking.
  • Contestability and explainability: Hand in hand with transparency, firms should consider the explainability of AI outcomes; the second sketch after this list illustrates one simple approach. Ensure there is effective human oversight of AI tools and that routes of redress exist so that firms can be held accountable.
  • Third-party due diligence: The FCA has acknowledged that the increasing adoption of AI to perform critical financial services may lead to AI service providers falling within the scope of the UK Critical Third Parties (CTP) regime. Third parties will fall under the CTP designation where the services they provide are material to the essential operations of financial institutions and could threaten the stability of the UK financial system in the event of disruption. Deploying AI to carry out tasks fundamental to the financial sector may therefore attract increased regulatory scrutiny of the operational resilience of third-party AI service providers. Institutions must conduct rigorous due diligence on AI vendors, clarify contractual roles and responsibilities, and monitor the performance and compliance of tools to identify any points of failure. This helps to ensure that a contingency plan is in place, fallback procedures are tested, and operational risks are foreseen and mitigated.
  • Cybersecurity: The FCA’s 2025 AI Update highlights AI as a key cybersecurity consideration. AI deployment introduces significant cybersecurity risks, so regular testing of AI model resilience is essential. Threat actors increasingly use AI to mount sophisticated attacks, such as voice phishing and automated calls, or to bypass authentication controls. Financial institutions must proactively manage these risks by establishing robust governance structures to address the speed, scale and complexity of AI threats.
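
For illustration only, the following minimal Python sketch shows one way a firm might monitor a credit-scoring tool for bias by comparing approval rates across customer groups. The column names, sample data and 0.05 tolerance are hypothetical assumptions, not regulatory values.

import pandas as pd

# Hypothetical decision log: one row per automated credit decision.
decisions = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 0, 1, 1, 0, 1],
})

# Demographic parity gap: the largest difference in approval rate between groups.
rates = decisions.groupby("age_band")["approved"].mean()
gap = float(rates.max() - rates.min())

TOLERANCE = 0.05  # illustrative threshold; set according to the firm's risk appetite
if gap > TOLERANCE:
    print(f"Approval-rate gap {gap:.2f} exceeds tolerance; escalate for human review.")

A check of this kind could run on a schedule over live decision logs, feeding exceptions into the human oversight process described above.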
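
Similarly, as a sketch of explainability under stated assumptions (a simple linear credit model with hypothetical features and training data), the fragment below logs each feature’s contribution to a decision so that a reviewer can explain, and a customer can contest, the outcome.

import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "debt_ratio", "missed_payments"]  # hypothetical features

# Hypothetical historical decisions (income in £000s); 1 = approved, 0 = declined.
X_train = np.array([[45, 0.2, 0], [22, 0.6, 3], [60, 0.1, 0], [18, 0.8, 4]])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

def explain_decision(applicant):
    # For a linear model, coefficient * value is the feature's contribution
    # to the log-odds of approval, which a reviewer can read directly.
    decision = "approved" if model.predict(applicant.reshape(1, -1))[0] else "declined"
    print(f"Decision: {decision}")
    for name, value, contrib in zip(feature_names, applicant, model.coef_[0] * applicant):
        print(f"  {name}={value}: contribution {contrib:+.3f}")

explain_decision(np.array([30, 0.5, 2]))

More complex models would need dedicated explainability tooling, but the principle is the same: each automated decision should leave a record that a human can interrogate.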


CONCLUSION

AI adoption in UK financial services demands a balance between innovation and responsible use. By embracing a principles-based approach to regulation, investing in privacy and cybersecurity, and fostering transparency, firms can unlock AI’s transformative potential confidently and ethically.
