
27 November 2023 | 11 minute read

Using AI responsibly as in-house counsel: Law Society of BC releases guidance on professional responsibilities

Artificial intelligence is becoming increasingly integrated into professional settings, including boardrooms, courtrooms, organizations and legal practices. The term "AI" broadly describes software that performs tasks typically requiring human intelligence, such as data interpretation and analysis, and many tools already in everyday use could be classified as AI. Generative AI, however, which produces new content such as images, text or videos, is the form now gaining traction among professionals and the general public.

In-house counsel must navigate a dual responsibility: providing legal advice to their organization on generative AI usage, and overseeing external counsel or service providers who employ these tools. In those dual roles, in-house teams are often asked to develop, or oversee the drafting of, policies governing AI use at the organization, policies that will also govern how they themselves use AI. That makes understanding the associated professional risks and responsibilities all the more important.

Recognizing the rising use of generative AI in legal practice, the Law Society of British Columbia has prepared a guide for lawyers who want to use generative AI tools in their practice. Though tailored to the specific duties of British Columbia lawyers, the guide offers a valuable framework for in-house counsel by identifying the risks and opportunities of AI use without endorsing specific products, and it provides practical advice for using generative AI ethically and responsibly. The guide primarily addresses large language models (LLMs), which generate text from extensive language data; in 2023, when most people talk about AI, they mean LLMs and the chat-based tools built on them. This is a sensible focus at this stage: the technology is evolving so quickly that regulators risk chasing the latest use cases, and LLMs are, for now, the generative AI tools most useful to lawyers.

LLMs can offer many benefits for in-house counsel, such as improving efficiency, accuracy, and innovation (for example, they can be tailored to excel at summarizing, correcting, reviewing, or coalescing large amounts of information), but they also pose challenges and risks that require careful and responsible use (for example, they remain prone to hallucinations, a by-product of how the models generate content rather than a defect that will simply be patched away). The guide's organization provides a useful structure for considering how to leverage AI with internal and external legal teams:

  • Competence: Lawyers have an obligation to perform their legal services to a high standard, and that means they need to be knowledgeable and skilled in using generative AI tools. They also need to consider the best options for their clients, communicate effectively with internal and external stakeholders, and keep up with changing professional requirements. As generative AI technologies improve and offer features that surpass human efficiency, it will be critical for in-house counsel to ensure that they and their external advisors remain trained, competent and effective in using these tools to deliver the best outcome for their clients.

  • Confidentiality: Lawyers must keep their clients' information private and secure, and that means they need to be careful about what information they provide to a generative AI tool. It also means they may need to know what their external lawyers intend to do with confidential information shared with them, and external counsel may need to consider obtaining client consent before using AI tools with client information. Consistent with the duty of honesty and candour owed to clients, external (and internal) counsel may need to be ready to explain how and why they plan to use generative AI tools in their work, answer questions, document their AI use, and inform clients of the potential risks.

  • Responsibility: Lawyers are responsible for the work they produce, whether it is done by themselves, their staff, their external counsel, or their technology. Thus, lawyers in the organizational advice chain need to supervise and review the work done by generative AI tools and ensure its accuracy and quality, just as they would for work prepared by junior lawyers, students, paralegals or external counsel. The B.C. Code of Professional Conduct already requires lawyers to review non-lawyer work ("A lawyer has complete professional responsibility for all business entrusted to him or her and must directly supervise staff and assistants to whom the lawyer delegates particular tasks and functions."), and while the Law Society notes that this rule was "intended to cover human-to-human supervision", it uses the rule as a reminder that the requirement can apply to technology-based work as well.

  • Information security: Lawyers must protect their records and information from loss, destruction, and unauthorized access, use, or disclosure, including by the providers of their technology tools. They need to be aware of how information is stored and secured by generative AI tools and their providers, and comply with the professional rules on records and security. Very few tools currently provide the level of protection clients (and regulators) would expect from counsel, and although the practice is becoming more common, it is still far from universal for in-house counsel to seek assurances from their external counsel about how tools and technologies are being responsibly deployed to secure their information.

  • Requirements of courts or other decision-makers: Lawyers must follow the rules and expectations of the courts, tribunals, or other bodies before which they practise. Some of these bodies may require lawyers to disclose when and how they used generative AI tools to prepare their submissions. Moreover, using AI to suggest plausible or potential interpretations of one or more clauses in an agreement, without completing a jurisdictionally specific review of accurate, non-generative case law, could lead to a host of issues.

  • Reasonable fees and disbursements: Lawyers have an obligation to charge and accept fair and reasonable fees and expenses for their work. They will need to think about how they will pay (and, in the case of external counsel, bill) for the use of generative AI tools as part of giving and receiving legal advice, and how that may affect current billing models. This could change as the possible solutions evolve; many have suggested that generative AI will ultimately be a stake through the heart of the billable hour (which is an entirely separate treatise). Whether or not that is true, law firms will need to figure out how to remain profitable while discharging their obligations on client billing and using these efficiency technologies, and in-house teams will need to make sure their legal advisors are doing what they can to remain efficient and fair in their billing practices. Indeed, we can foresee a world in which clients expect some level of AI use in order to keep fees down on more rote or time-consuming tasks.

  • Plagiarism and copyright: Just like any other business, lawyers must respect the intellectual property rights of others, and that means they need to be careful about using generative AI tools that may copy or reuse material from other sources. These issues may involve complex legal questions that are not yet fully settled, particularly as most intellectual property law arises from statute (meaning that those laws must be reinterpreted in light of the new technology). It remains to be seen how politicians and regulators will solve the problem of "rewarding" authors whose work is used in, or generated by, artificial intelligence, but the traditional trade-off between creator incentive and creativity may be breaking down. Lawyers (and their heavily precedent-driven business) already operate in a profession that embraces "borrowing", "copying" and "cutting-and-pasting", and it will be interesting to see how they adopt and adapt to generative AI tools.

  • Fraud and deep fakes: Lawyers must avoid being involved in any fraudulent or dishonest activities, and that means they need to be vigilant about the potential use of generative AI tools to create fake or misleading content, such as images or videos. They need to protect themselves and their organizations from cybercrimes and fraud risks that may arise from the use of AI tools—which can generate very convincing impersonations, even content that is indistinguishable from reality, that could be used to deceive or manipulate others. Lawyers need to ensure that they do not use or rely on such content, and that they verify and authenticate the content they receive or provide.

  • Bias: As part of discharging their professional responsibilities, lawyers must consider and promote fairness and justice in their work, and that means they need to be aware of, and address, any bias that may arise in content generated by generative AI tools. Bias may stem from the data the tool uses or learns from, and may affect the outcomes or decisions the tool suggests. This matters because generative AI tools may produce content that reflects or reinforces existing biases or inequalities in the underlying data or in society. Lawyers need to ensure that they do not perpetuate or exacerbate such biases, and that they challenge and correct any bias that may affect their clients' interests or rights.

Of note, the guide does not separately delve into the topic of legal privilege, other than to note that the "law of privilege in respect of generative AI tools is in an early stage, and is likely to emerge over time." At a basic level, the concept of privilege is likely no more affected by the use of AI tools than by other electronic or analytical tools. However, many of the topics above touch on privilege, and it should be considered when applying them.

In-house lawyers, who act as both legal advisors and business partners to their organizations, may face additional challenges and risks in ensuring that their use of generative AI tools does not compromise their privilege obligations. For example, they need to ensure they have consent (or a policy permitting them) to provide confidential information to a generative AI tool (a) for business purposes, or (b) for the purpose of conducting legal work and providing legal advice, and they need to ensure that the tool and its providers do not disclose or misuse that information. They may also need to verify that the content generated by the tool does not contain any sensitive or privileged information that could be inadvertently disclosed to third parties, such as regulators, auditors, or competitors. Moreover, they may need to disclose to their external counsel, if any, when and how they used generative AI tools to prepare their legal work, and whether the tool may have influenced their legal opinions. In-house lawyers should be aware of and comply with the applicable rules and expectations of the courts, tribunals, or other bodies where they practise, as well as the Law Society's rules on records and security, when using generative AI tools.

Generative AI has the potential to transform the legal profession by automating some of the tedious and repetitive tasks that lawyers have to do, such as drafting language for contracts, reviewing and comparing documents, or searching materials and conducting research. By using generative AI tools, lawyers can save time, reduce errors, and improve efficiency. Lawyers can oversee, verify, and refine the output of generative AI tools to provide strategic advice, legal analysis, and advocacy for their clients. Generative AI can help lawyers focus more on value-adding activities that require human skills, such as problem-solving, negotiation, communication, and persuasion. Using generative AI tools responsibly can unlock increasing value for both clients and lawyers: clients can benefit from faster, more efficient, less expensive, and more accurate legal services, while lawyers can enhance their productivity, profitability, and reputation. Generative AI can also enable lawyers to explore new areas of law, discover new insights, and generate new solutions. Generative AI is not a threat, but an opportunity, for the legal profession, as long as it is used with care, caution, and competence—something that regulators like the Law Society of BC are keen to reinforce.