16 February 2024 · 6 minute read

Labour law challenges and framework conditions for using AI in the financial sector

As in almost all areas of the economy, AI is on the rise in the financial sector. According to a 2023 study by McKinsey, the use of generative AI (GenAI) in the banking sector could create additional value of USD 200 to 340 billion annually. PwC sees "great potential" in the areas of increased efficiency, cost savings, personalisation (eg chatbots), compliance and the development of new business models. But before AI can be implemented in everyday working life, companies have to clarify some labour law issues. The legal debate is still in its early stages and the technical possibilities of AI are constantly changing, so this article only provides an overview of the current problem areas in employment law.



Pursuant to Section 613 sentence 1 of the German Civil Code (Bürgerliches Gesetzbuch – BGB), work must always be performed by the employee in person. According to the prevailing opinion, AI is (so far) merely an aid in providing services. As with the use of other aids, such as computers, tools or databases, the employee who uses them remains responsible for checking the work generated with them. So in principle, an employee using AI could be warned or even held liable in the event of errors. If, for example, a bank advisor relies on AI when selecting investment options for a customer and overlooks the fact that the customer's preferences have not been accurately mapped, this could even justify a warning to the employee if the error is significant.

However, things could be different if it’s not the employee themselves but the employer who specifies and possibly even demands the use of a certain AI system. In such cases, it could at least be discussed whether the employee can rely on the AI working correctly (eg accurately reflecting customer preferences). But it will depend on the degree of negligence in these cases too. Blind trust in the AI functioning properly will probably not be permissible in these cases either. This point already demonstrates the need for clear guidelines and regulations regarding the use of AI.



With the introduction and use of AI systems in the world of work, the work people do will also change to some extent. Processes, forecasts and solution concepts can be largely automated using appropriate AI applications and developed without extensive input from employees. Using AI, work results could be generated faster and with significantly less effort. In performance-based remuneration systems, bonuses and other variable remuneration would therefore be easier to achieve. This raises the question of whether bonus systems must or should be redesigned in future to adapt them to the potential uses of AI.

Companies should question which work should be "rewarded" with bonuses. If variable remuneration is linked to actual personal performance, it should be noted that the effort the employee puts in is significantly reduced by using AI systems. On the other hand, it could be considered that using AI could lead to a considerable increase in efficiency and the company as a whole could benefit from this approach. In any case, this should be considered before implementing AI applications.



If the decision has been made to integrate AI systems into everyday working life, the practical implementation raises the question of the involvement and rights of an existing works council. In June 2021, the German legislator included the term AI in three places in the Works Constitution Act (Betriebsverfassungsgesetz – BetrVG), enabling works councils to participate in the planning, introduction and use of AI systems.

Pursuant to Section 90 para. 1 no. 3 BetrVG, the works council has a right to information and consultation when planning the use of AI. This enables the works council to influence the decision regarding the introduction of AI through upstream consultation. If the employer does not comply with this information obligation, the works council can also actively demand it.

Section 80 para. 3 BetrVG gives the works council the option to consult an expert in this context. The works council no longer has to prove the necessity of such an expert, as this is now stipulated by law.

According to Section 95 para. 2a BetrVG, the legislator has also clarified that there is a right of co-determination with regard to establishing guidelines on personnel selection for recruitment, transfers, regrouping and dismissals even if an AI system is involved.

But there's currently no explicit right of co-determination with regard to using AI. The decision on the introduction and use of AI is fundamentally a matter for the company's freedom of decision. For the time being, a right of co-determination can only arise through interaction with other co-determination provisions, for example from Section 87 para. 1 no. 6 BetrVG, if an AI system is objectively suitable for monitoring the behaviour or performance of employees. However, where such a co-determination situation exists, it may also give rise to an enforceable right of initiative for the works council.

Employers can’t completely disregard the works council in the area of AI. Companies must specifically examine whether a co-determination situation may be triggered by an AI-related project.



If employees are allowed to use AI systems, companies should consider data protection beforehand. They should clarify where exactly the AI system's servers are located. Many AI models are hosted on servers in non-European countries that are subject to lower data protection standards than the EU, meaning that employee and customer data would fall under those weaker protection regimes. In addition, AI models continue to learn with each use, so data entered into them could theoretically become accessible to third parties or even to the general public.

So it’s important that employees inform their employer if AI systems are used for work performance to give the employer the opportunity to take data protection precautions. Depending on the extent and subject matter of the use, the obligation may already arise on the basis of a secondary obligation under the employment contract within the meaning of Section 241 para. 2 BGB. So it’s advisable to inform employees accordingly.

To protect sensitive customer and employee data, it’s also advisable to give employees clear guidelines on using AI. For example, employees shouldn’t enter any personal data or information classified as business secrets into the AI system. The data protection supervisory authorities also check AI systems for compliance with data protection regulations. The results of these checks should also be considered when approving AI systems.



Because of the predicted importance of AI systems in the financial sector, businesses should already be thinking about AI and its implementation. In the future – especially in view of the upcoming EU AI Regulation – there will be considerable movement in the discussion about legal requirements for the use of AI in the world of work.