The European Union is leading an ambitious initiative to set guidelines for the development of artificial intelligence through a new general-purpose AI Code of Practice. Controversial as it is, the code is a key component of the broader EU AI Act, which seeks to set new global benchmarks on transparency, copyright, and risk for general-purpose AI models.
The European AI Office is coordinating the project, which involves 947 participants drawn from academia, industry, and non-profit organizations worldwide. The collaboration aims to produce a sound policy framework that answers fundamental questions of AI accountability and governance.
Held via videoconference on September 30, the kick-off plenary session on the Code of Practice was the first step toward the April 2025 deadline, by which a final draft of the document is expected to be agreed upon.
Four working groups of experts from around the world will draft the Code of Practice, addressing transparency, risk assessment, technical risk mitigation, and internal governance.
The effort is chaired by several highly regarded specialists, including Nuria Oliver, a leading AI researcher, and Alexander Peukert, a German copyright law scholar. The working groups will meet from October 2024 to April 2025, consulting stakeholders continually as they finalize the code.
Europe Adopts AI Act for Comprehensive, Risk-Based Regulation
Passed in March 2024, the EU AI Act is the first comprehensive, risk-based legislative framework for AI technology. Under the legislation, AI systems are classified into risk tiers ranging from minimal to unacceptable, and the required compliance measures scale with the risk involved.
General-purpose models such as large language models (LLMs) pose particular challenges because they span a broad range of tasks and applications with far-reaching social implications. These models are typically considered more complex and therefore warrant the most stringent regulatory measures.
The forthcoming Code of Practice will be an essential building block in implementing the AI Act, helping to guarantee that these systems are transparent, operate under ethical standards of conduct, and are governed appropriately.
Not everyone, however, is happy with the EU’s proactive approach to regulating AI.
Major AI companies such as Meta, along with academics and human rights advocates, have slammed the current regulations as too restrictive. In response, the EU has opted for an open, multi-stakeholder approach to developing the Code of Practice, seeking to ensure that innovation is not stifled by overregulation.
More than 430 submissions from key stakeholders were compiled during the drafting process, a measure of the collective effort such a massive undertaking requires.
The race to regulate AI is not confined to Europe, and the EU’s current measures will be instrumental in shaping the policies other governments worldwide pursue for general-purpose AI models.
By convening a wide range of participants from across the AI industry and research community, the EU aims to set an example of how AI can be developed and deployed successfully without posing a threat to society.
The final Code of Practice, to be launched in April 2025 alongside its governance framework, will apply within the EU but can also serve as a reference for other regions worldwide. Given that AI is progressing at an unprecedented rate, Europe’s code may well shape international AI policy.
This is a significant step toward the controlled and responsible advancement of artificial intelligence technologies, including, but not limited to, those that affect society at large.