
6 November 2025
As AI reshapes the workforce, how mature is your business?
As part of our Algorithm to Advantage campaign, Jonathan Exten-Wright looks at how AI impacts workers, the risks this presents for organisations, and why AI maturity is crucial to managing them.
The rush among businesses to embrace AI is having a profound impact on workforces. With that impact comes risks for employers.
AI solutions offer potentially enormous benefits: they can enhance productivity, capability, capacity, quality and innovation. But failing to identify and mitigate the implications can have serious consequences.
Developing that understanding demands AI maturity throughout your organisation.
Senior decision-makers – right up to the boardroom – need to understand the impacts of their AI investments on the workforce, and the related process, legal, regulatory and ethical risks.
Workers, meanwhile, must be aware of the guardrails governing AI use in their workplace, so that they use it in compliant and ethically appropriate ways.
How AI is reconfiguring the workforce
Businesses are used to responding to change. But AI is different. It’s an iterative technology that learns as it goes along. And it moves fast.
The scale and pace of the transformation AI is unleashing puts businesses in danger of flying blind. It can be difficult to grasp the full extent of the evolution at hand – and by the time you do so, it’s moved on again.
AI is altering the size, shape and purpose of the workforce, fundamentally changing its relationship with the organisation. That presents employers with a host of challenges to address.
- Role evolution. AI changes the nature of many jobs. It can simplify administrative tasks while augmenting others – providing legal advice being a good example. That will drive the need for workforce reskilling, redeployment and replanning.
- Contract variation. Changing workers’ roles and responsibilities may mean altering their contracts – which in many jurisdictions is strictly governed by employment laws. Dismissing staff and re-engaging them on new terms is also tightly regulated and can prove costly or even impractical.
- Role replacement. AI performs some tasks better and faster than people. For example, AI tools can spot some faults on production lines quicker and more accurately than humans, potentially reducing the need for supervisory roles in factories. Companies considering workforce reorganisations or redundancies must ensure they follow the relevant legal processes.
- Collective consultation. In several European jurisdictions, redundancies and contractual changes trigger the need to consult and/or bargain with unions and works councils. Failure to do so could lead to fines, workforce disputes, or any redundancies and contract changes being ruled void – while the resulting disruption could jeopardise the AI implementation itself.
- Technological shortcomings. No technology is perfect – AI included. Models can make mistakes and hallucinate nonexistent information, while their algorithms may contain bias. This could lead to workplace discrimination at a systemic level. The issue of necessary human oversight is complex but essential to grasp, as is the deployment of the right guardrails before using AI.
- Ethical considerations. Workforce decisions when deploying AI carry ethical, sustainability and reputational implications. Are you using your employee data responsibly? How will redundancies affect your brand, especially in communities that rely on your employment? Ultimately, what are your ethical, not just legal, boundaries?
Exposure to these risks can bring about wide-ranging consequences. There can be significant financial penalties for falling foul of employment regulation, and the deployment of the AI itself could easily be imperilled. These are not issues to address only at execution; they are strategic concerns from the very outset, at the design and procurement stage.
Managing these issues across borders is especially challenging. Companies must carefully navigate a complex web of legal obligations.
There’s some commonality across Europe, but labour law and AI regulation vary greatly between jurisdictions – and even between states in the US. What’s permissible in one location won’t necessarily be acceptable in another. A one-size-fits-all approach to compliance is not realistically possible.
The steps to AI maturity
If AI implementation is not properly managed, the cost and damage could readily extend well beyond the price of the solutions themselves.
So how can a business develop the maturity to manage workforce change in the face of AI? How can you ensure your executives understand, and plan for, the impacts of the tools they’re deploying – and that employees do not take undue risks when using them?
Embedding AI maturity takes time and a business-wide effort. But following these simplified steps will take you at least some way towards a safe and effective rollout.
If you have not done so already, start by looking immediately at how your AI tools – present and, where possible, future – are impacting or will impact your workforce.
However far you are into your AI adoption journey, it’s not too late to start building a clear view of the implications and risks.
- Form a cross-functional team
Establishing AI maturity is not just a technology initiative; it’s a multidisciplinary effort. Involve your procurement, legal, risk, financial, HR and communications teams. And, crucially, support them from the outset with external legal advisers who can advise across the jurisdictions your business operates in.
- Map your AI rollouts and risks
Catalogue the company’s current AI use and planned deployments. Identify the impacts on the affected parts of the workforce, along with the risks attached.
- Assess the implications
Evaluate the risks discovered during the mapping exercise. Prioritise them according to the likelihood of each one arising, the potential exposure, and the organisation’s ethical position and legal risk appetite.
- Invest in governance
Define an AI strategy, then design and apply processes and controls to identify and reduce your AI risks as far as possible, starting with the highest-priority cases. This will involve a raft of actions at the strategic and operational levels: training, stakeholder engagement, risk assessments, workforce impact assessments, data privacy compliance, authorised usage policies, and escalation procedures – to name just a few examples.
How we can help
DLA Piper’s employment team has deep experience in AI and employment issues, advising many leading names in the field. The team can help you assess the implications of AI technology for your workforce, and evaluate the legal, regulatory and ethical risks. With more than 400 lawyers in over 40 countries, our global team spans labour law and regulation, governance, compliance, collective consultation and more. The right advice can enable you to use AI effectively and compliantly.

