

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the Executive Vice-President for a Europe Fit for the Digital Age and Competition, added in a statement. “Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

On June 14, 2023, the European Parliament approved landmark rules for the regulation of Artificial Intelligence. This decision could set a benchmark for how other countries regulate AI. But was it really necessary?

Artificial Intelligence tools are becoming the norm across the world and are changing the way we live. Tools like ChatGPT have changed the way individuals learn, as knowledge is now available at our fingertips. AI has become a key part of many applications we use every day: maps and navigation, facial detection and recognition, text editors and autocorrect, search and recommendation algorithms, chatbots, digital assistants, social media, e-payments and more. As AI expands its presence in human life, some form of regulation may be necessary.

With AI performing so many tasks, the question of human relevance also arises; however, that is a topic for another day. What matters here is that no technology is risk-free. Does AI pose a risk?

The European Union has taken the lead in classifying the various risks posed by AI systems. This blog looks at AI, the European Union's AI regulation, and the next steps for the Act.


The Oxford Dictionary of Phrase and Fable (2nd ed.) defines artificial intelligence as: “The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

In simple terms, AI can be described as a computer performing statistical analysis on the data fed to it, which enables it to understand, analyse and learn from that data through specifically designed algorithms. Now let’s have a look at how the European Union plans to regulate the risks posed by AI.
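To make the idea of “learning from data through an algorithm” concrete, here is a toy sketch: fitting a straight line to a handful of data points by ordinary least squares, then using the learned parameters to predict an unseen case. The data (hours studied vs. test score) and function names are purely illustrative, not from the Act or any real system.

```python
# Toy illustration of statistical "learning": fit y = a*x + b by
# ordinary least squares, then predict an unseen input.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = sample covariance of (x, y) divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical "training data": hours studied vs. test score.
a, b = fit_line([1, 2, 3, 4], [52, 54, 56, 58])
print(a, b)        # learned parameters: 2.0 50.0
print(a * 5 + b)   # prediction for 5 hours: 60.0
```

The “intelligence” here is nothing more than statistics: the algorithm extracts a pattern from the data and generalises it, which is the same basic mechanism, at vastly larger scale, behind the AI tools the EU Act seeks to regulate.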


“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists.

The AI regulatory framework was first proposed by the European Commission in April 2021. Under it, AI systems used in different applications are classified according to the risks they pose.

The four risk levels for AI systems under the EU Act are as follows:

1. Unacceptable Risk

This is the highest risk level under the Act; AI systems in this category are considered a threat to people and will be banned. For example:

  • Tools that cognitively or behaviourally manipulate people or specific vulnerable groups, e.g. voice-activated toys that encourage dangerous behaviour in children

  • Tools that classify people based on personal characteristics, behaviour or socio-economic status (social scoring)

  • Biometric identification systems, both remote and real-time, such as facial recognition used in public spaces for law enforcement. An exception allows biometric identification after a significant delay to prosecute serious crimes, but only with court approval

2. High Risk

This is the second highest risk level; AI systems in this category pose a threat to safety or fundamental rights. Examples include:

  • Critical infrastructures (e.g., transport), risking the life and health of citizens

  • Systems utilizing biometric identification in non-public spaces

  • Educational or vocational training, determining the access to education and professional journey of someone’s life (e.g., scoring of exams);

  • Safety components of products (e.g., AI application in robot-assisted surgery);

  • Employment, management of employees and recruitment (e.g., CV-sorting software for recruitment procedures)

  • Essential private and public services (e.g., denying a loan based on credit score)

  • Law enforcement uses that may interfere with people’s fundamental rights (e.g., evaluation of the reliability of evidence)

  • Migration, asylum and border control management (e.g., verification of the reliability of travel documents)

  • Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts)

3. Limited Risk

AI systems in this category must comply with minimal transparency requirements: users should be made aware that they are interacting with an AI. Examples include chatbots and systems that generate or manipulate video and audio, such as deepfakes.

4. Minimal Risk

AI systems not classified under any of the above categories fall under minimal risk.

COMPLIANCE ISSUES

The Act proposes heavy fines for companies that fail to comply. Fines can reach up to 30 million euros, and organizations that submit false or misleading information to regulators can also be fined.
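The four tiers above amount to a lookup from a system’s intended use to a set of obligations. The sketch below encodes that idea as a simple mapping; the example systems and keyword matching are hypothetical illustrations of the Act’s categories, not an actual classification method (real classification depends on a legal assessment of the system’s intended use).

```python
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency requirements"
    MINIMAL = "no additional obligations"

# Hypothetical examples distilled from the Act's risk tiers.
TIER_EXAMPLES = {
    "social scoring": Risk.UNACCEPTABLE,
    "real-time facial recognition in public spaces": Risk.UNACCEPTABLE,
    "cv-sorting for recruitment": Risk.HIGH,
    "exam scoring": Risk.HIGH,
    "chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}

def tier_for(system: str) -> Risk:
    # Systems not matching any listed category default to minimal risk,
    # mirroring the Act's catch-all fourth tier.
    return TIER_EXAMPLES.get(system.lower(), Risk.MINIMAL)

print(tier_for("Chatbot").value)  # transparency requirements
```

The design choice worth noting is the default: anything not explicitly captured by the higher tiers falls through to minimal risk, which is exactly how the Act’s fourth category works.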


Not yet. The European Parliament, the Council and the European Commission will now engage in trilogue negotiations to reach a final text. The resulting provisional agreement must then be approved through each institution’s formal procedures.


Any great invention can be used for good or for ill, so it is critical to understand the associated risks before it is put to use. AI tools that can violate fundamental rights or trigger dangerous behaviour are a genuine cause for concern. A risk-based approach to evaluating AI tools may well be the best foot forward.

Gorisco has a wide range of experts who are experienced in defining and designing various solutions to help organizations mitigate their risks and resolve their problems.

At Gorisco, our motto is 'Embedding Resilience' and we are committed to making organizations and their workforces resilient. Reach out to us if you have any queries, clarifications, or need any support on your initiatives.

To read our other blogs, click here. More importantly, let us know if you liked them or not through your comments.
