
2 October 2025
Lessons from the world of play: AI in the video games sector
Artificial intelligence, especially its generative form, is no longer a futuristic concept. It’s here, reshaping industries and redefining creativity. While predictive algorithms that suggest our next favorite show or autocomplete our texts are largely welcomed, discomfort arises when AI begins to emulate human creativity: writing stories, composing music, or generating art. This discomfort often escalates into controversy, particularly when legal frameworks lag behind technological capabilities.
The video game industry’s early AI adoption
The video game industry offers a unique lens through which to examine the evolution of AI. As a digitally native industry, gaming has long integrated AI-like systems, from early checkers programs in the 1950s to today’s complex procedural generation engines. These systems, while not always labeled “AI”, have shaped gameplay for decades: learning from player behavior, generating content, and enhancing immersion.
Yet even this tech-forward sector is now confronting the same legal and ethical dilemmas facing other creative industries. What distinguishes a benign algorithm from a controversial AI system? The answer lies in a convergence of three factors: the sophistication of the technology, its visibility to the public, and its potential to displace or exploit human creativity.
From invisible tools to visible disruption
Historically, AI in games has been a behind-the-scenes enabler. Procedural generation, for instance, allows developers to create vast, dynamic worlds—like the 18 quintillion planets in No Man’s Sky—that would be impossible to build manually. These tools are seen as augmenting human creativity, not replacing it.
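For readers curious about the mechanics, the sketch below illustrates the general idea behind seed-driven procedural generation: a world’s attributes are derived deterministically from a shared seed and its coordinates, so nothing has to be authored or stored per planet. This is a minimal, hypothetical Python example, not No Man’s Sky’s actual engine; the function name, biome list, and value ranges are our own illustrative choices.

```python
import hashlib
import random

BIOMES = ["barren", "lush", "frozen", "toxic", "oceanic"]

def planet_attributes(universe_seed: int, x: int, y: int, z: int) -> dict:
    # Hash the seed and coordinates so every planet gets a stable,
    # well-distributed starting point for its random number generator.
    digest = hashlib.sha256(f"{universe_seed}:{x}:{y}:{z}".encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    # Illustrative attributes only; a real engine would derive terrain,
    # flora, weather, and more from the same deterministic stream.
    return {
        "biome": rng.choice(BIOMES),
        "radius_km": rng.randint(1_000, 12_000),
        "gravity_g": round(rng.uniform(0.3, 2.5), 2),
        "has_rings": rng.random() < 0.15,
    }

# The same seed and coordinates always reproduce the same planet,
# so an effectively unbounded universe costs no storage at all.
assert planet_attributes(42, 7, -3, 99) == planet_attributes(42, 7, -3, 99)
```

Because generation is a pure function of the seed and coordinates, every player visiting the same location sees the same world without any server having to store it, which is what makes universes of this scale feasible.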
Similarly, AI-assisted programming tools have long helped developers debug and optimize code. While programming is a creative discipline, it doesn’t evoke the same emotional response as visual art or music. As a result, AI’s role in coding has largely escaped public scrutiny.
But when AI becomes visible, generating art, dialogue, or voices, it enters contentious territory. Ubisoft’s “Ghostwriter”, an internal tool for drafting ambient character dialogue, sparked backlash despite being positioned as a time-saver and having been developed in collaboration with Ubisoft’s own writers. Critics feared it could reduce entry-level writing jobs or signal a broader shift toward automation over artistry.
Legal flashpoints: ownership, consent, and accountability
These controversies are not just cultural but legal. As generative AI tools become more powerful and accessible, they raise urgent questions about intellectual property, consent, and liability.
Who owns AI-generated content? If a tool is trained on copyrighted material, is the output derivative? Can a voice actor’s performance be used to train a synthetic voice without explicit consent? What happens when an AI system produces harmful or biased content?
These questions are no longer hypothetical. In one landmark agreement, SAG-AFTRA negotiated terms allowing members to license their voices for AI use under specific conditions. Meanwhile, digital storefronts are beginning to require developers to disclose AI-generated assets, reflecting growing demand for transparency and attribution.
Lessons from the world of play
The gaming industry’s long history with AI offers valuable insights. It shows that controversy arises not from the technology itself, but from how it’s used and perceived. When AI is invisible, augmentative, and respectful of human roles, it’s often embraced. When it’s visible, autonomous, and potentially displacing, it invites scrutiny.
This pattern is now playing out across sectors, from publishing and film to customer service and software development. As AI becomes more sophisticated and more public, the need for clear legal and ethical frameworks becomes urgent.
To navigate this evolving landscape, organizations should:
- Monitor AI capabilities and regulations: stay informed about technological advances and legal developments. What’s acceptable today may be controversial tomorrow.
- Mitigate bias: invest in tools and practices to identify and reduce algorithmic bias in training data and outputs.
- Establish internal policies: create clear guidelines for AI use, emphasizing transparency, accountability, and ethical standards.
- Update contracts and IP strategies: ensure agreements with employees and vendors address AI-specific issues like data use, consent, and ownership.
- Engage stakeholders: foster open dialogue with employees, users, and communities to build trust and shape responsible AI practices.
The way forward
The games industry, often a first mover in tech adoption, has become a proving ground for the legal and ethical challenges of generative AI. Its experiences underscore a critical truth: AI controversy is not inherent to the technology; it emerges when sophistication, visibility, and displacement converge. As businesses across all sectors embrace AI, they must do so with foresight, diligence, and a commitment to responsible innovation. The legal landscape is still taking shape, but the lessons from gaming offer a valuable compass. With the right approach, AI can be a powerful tool for creativity, productivity, and progress, rather than a disruptive force.
Territories across the world, from Canada to China to the EU, are taking divergent approaches to the regulation of AI. Nonetheless, many principles are converging, and for companies in the video game space, compliance is essential regardless of jurisdiction. DLA Piper’s AI Laws of the World provides an overview of AI laws and proposed regulations across more than 40 countries (including Canada and the UK), covering key legislative developments, regulations, proposed bills, and guidelines issued by governmental bodies. If you have any queries regarding AI and the tech sector, please get in touch with us or your regular DLA Piper contact.