Artificial Intelligence Regulation by the EU Commission: A Balancing Act Between Scientific Innovation and Fundamental Rights
As digital technology becomes an ever more integral part of people's lives and of economic affairs, the need for people and societies to be able to trust it inevitably grows.
On this note, on 21 April 2021 the EU Commission adopted a proposal for a regulation on artificial intelligence (AI) systems. The effort has been described as the "first ever legal framework on Artificial Intelligence".
The Commission's proposal is part of the European Union's broader ambition to be a global leader in innovation, while avoiding a situation in which Member States adopt national initiatives that could endanger the integrity of the single market.
The proposed regulation's main concern is to address the new risks to user safety and fundamental rights, while at the same time establishing an adequate, well-functioning regulatory framework within the European Union that allows scientific innovation to benefit people's lives without infringing the fundamental rights on which we have all agreed.
The Artificial Intelligence Regulation defines AI as "software that is developed with one or more of [certain] techniques and approaches...and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."
The definition provided by the AI Regulation has already come under scrutiny, particularly from a technical perspective by professionals in the digital technology sector. The Commission's intention, however, is clearly to cast a wide net.
Providers offering or implementing AI systems in the European Union (EU), users located within the EU, and providers or users located outside the EU (where the output produced by the system is used within the EU) all fall within the broad scope of the proposed AI Regulation, which covers both the public and private sectors.
The EU Commission has proposed a risk-based approach, classifying AI uses into four categories: (i) prohibited AI systems, considered unacceptable because they contravene Union values; (ii) uses of AI that create a high risk; (iii) uses that create a limited risk; and (iv) uses that create a minimal risk. The regulatory requirements vary according to the risk classification, with specific obligations applying to "high-risk" uses both before and after a system is placed on the market.
The outcome of this effort to regulate a new area will depend, to a significant extent, on the key role of the Member States in its application and enforcement. In this respect, each Member State will need to designate a competent national authority to supervise the regulation and to represent it on the European Artificial Intelligence Board.
The seriousness of this effort is clearly reflected in the heavy administrative fines introduced in the Commission's proposal: penalties for non-compliance can reach up to 6% of global annual turnover or EUR 30 million, whichever is greater.
In conclusion, the broad definition and scope proposed by the EU Commission, together with the superstructure needed to supervise and enforce this initiative, signal a new regulatory era in which the AI Regulation will affect the economy across all sectors.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.