19 November 2025

Diritto intelligente – Issue N. 14

October 2025 marks a decisive moment in Italy’s regulatory path toward artificial intelligence. With the entry into force of Italian Law No. 132/2025 on artificial intelligence and the introduction of a new criminal offense for the dissemination of AI-generated deepfakes, Italy positions itself among the first European countries to give concrete shape to the principles of the EU AI Act through national legislation.

The new rules reveal both the opportunities and the vulnerabilities of the digital age. The phenomenon of deepfakes, AI-manipulated content that reproduces faces, voices, and actions so convincingly as to be indistinguishable from reality, has evolved from a social concern into a criminal matter. The new Article 612-quater of the Italian Criminal Code punishes the illicit distribution of falsified images, videos, or audio capable of deceiving others about their authenticity. It is a milestone in protecting digital identity, but it also opens a complex field of interpretation: how will courts assess "unjust damage" or determine whether content is "capable of misleading"? And what preventive obligations will fall on platforms under the Digital Services Act?

At the same time, the spread of AI in workplaces is transforming organizational and managerial models. Algorithmic systems now monitor productivity, assign tasks, and even evaluate performance. While such tools can enhance efficiency and safety, they also blur the boundaries between control and surveillance, raising questions about autonomy, transparency, and well-being. The European debate is increasingly focused not on whether to regulate, but on how to ensure that technological optimization does not erode human dignity.

Equally significant are the implications of the new Italian AI Law for professions. By reaffirming that professional services must be performed “primarily” by the professional and that AI may only play a supporting role, the law preserves the fiduciary nature of the professional–client relationship but also imposes new obligations of disclosure, responsibility, and competence. The challenge is to integrate AI into professional practice without compromising trust or accountability.

Together, these developments depict a regulatory landscape that is rapidly evolving from principles to practice. Criminal law, labor law, and professional ethics converge toward a single goal: ensuring that artificial intelligence serves humanity rather than replacing it. The task ahead is not merely to punish misuse but to design governance models capable of turning innovation into an instrument of integrity and progress.