G7 publishes guiding principles and code of conduct for artificial intelligence
The governments of the Group of Seven (G7) recently announced the launch of a set of International Guiding Principles and an International Code of Conduct designed to encourage international cooperation in the effective governance of artificial intelligence.
The launch is the latest in a flurry of government initiatives targeting safe and responsible AI, including several developments to the trialogue negotiations of the EU AI Act; the unveiling of a long-anticipated Executive Order from the White House in the US mandating steps towards safe, secure, and trustworthy AI; and the hosting of the AI Safety Summit in the UK.
International Guiding Principles for Advanced AI Systems
The International Guiding Principles (Principles) offer a non-exhaustive set of considerations for organizations and governments in promoting safe, secure, and trustworthy AI.
The Principles apply to all parties within the AI lifecycle and are intended to serve as non-binding measures guiding organizations and governments towards best practices that encourage the responsible and ethical use of AI. They are designed as a “living document” that builds on many existing international principles, including those of the Organisation for Economic Co-operation and Development (OECD), and are expected to adapt and develop as technology advances.
The initial list of Principles is as follows:
- Take appropriate measures throughout the development of advanced AI systems, including their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle
- Identify and mitigate vulnerabilities and, where appropriate, incidents and patterns of misuse after deployment, including placement on the market
- Publicly report advanced AI systems’ capabilities, limitations, and domains of appropriate and inappropriate use to support transparency, thereby increasing accountability
- Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems, including industry, governments, civil society, and academia
- Develop, implement, and disclose AI governance and risk management policies grounded in a risk-based approach – including privacy policies and mitigation measures, particularly for organizations developing advanced AI systems
- Invest in and implement robust security controls including physical security, cybersecurity, and insider threat safeguards across the AI lifecycle
- Develop and deploy reliable content authentication and provenance mechanisms where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content
- Prioritize research to mitigate societal, safety, and security risks and prioritize investment in effective mitigation measures
- Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health, and education
- Advance the development of and, where appropriate, adoption of international technical standards
- Implement appropriate data input measures and protections for personal data and intellectual property
International Code of Conduct for Advanced AI Systems
Alongside the Principles, the G7 released an international code of conduct for advanced AI (Code of Conduct). As with the Principles, the Code of Conduct aims to promote safe, secure, and trustworthy AI worldwide and offers voluntary guidance for organizations developing advanced AI systems, including complex generative AI systems.
The Code of Conduct maps onto the 11 Principles and provides practical steps that organizations may consider in their development and deployment of AI. These include measures directed at data quality, bias controls, technical safeguards, and provenance measures for AI-created content.
The Code of Conduct will be reviewed and updated periodically through regular multistakeholder consultations to ensure that the measures proposed remain fit for purpose and respond efficiently to rapid developments in technology.
The voluntary nature of the Code of Conduct means that different jurisdictions may approach the guidelines in their own way, allowing cultural and systemic differences to be accommodated during implementation.
DLA Piper is here to help
DLA Piper’s AI policy team in Washington, DC is led by the founding director of the Senate Artificial Intelligence Caucus. The firm has been shortlisted for the Financial Times 2023 Innovative Lawyers in Technology award for its AI and Data Analytics practice.
For further insight on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI Strategy through our newly released AI Chatroom Series.
For more information or if you have any questions, please contact any of the authors.