News

The European Union’s regulation for the compliance of artificial intelligence applications will come into effect on Thursday, August 1, 2024. Applications will be ranked according to level of risk.

The law will apply in stages, through a series of staggered compliance deadlines covering different types of AI developers and applications.

The EU expects most provisions of the new law to be fully applicable by mid-2026. The first deadline will enforce a ban on a small number of prohibited AI uses in specific contexts, such as the use of biometrics by law enforcement in public places.

The EU considers most AI applications “low risk”

The European Union considers most AI applications to pose low or no risk within the scope of the regulation.

The EU’s Artificial Intelligence Q&A explains that the “majority of AI systems can be developed and used subject to the existing legislation without additional legal obligations.”

Nevertheless, the EU does regard certain AI applications as “high risk” under the regulation. The bloc believes that “there is [a] limited number of AI systems that can potentially pose danger for people’s safety and their fundamental rights.”

As per the EU, such AI systems include those that “assess whether some people can receive a specific type of medical treatment or get a certain job or a loan.”

Developers will now need to certify that the new EU AI regulation compliance rules are met

Developers of applications considered “high risk” by the European Union will now be required to certify that new compliance rules have been met.

They will be required to undergo risk and quality tests, such as pre-market conformity assessments, and will be subject to regulatory audits by authorities if requirements are not met.

These requirements apply to private AI system developers. In addition, high-risk AI systems used by public authorities will have to be registered in a European Union database.

Limited-risk AI systems are at the core of the EU’s compliance regulations

AI systems such as chatbots, or AI tools that can create deepfakes, are at the core of the European Union’s regulations.

EU lawmakers have stipulated that tools like OpenAI’s ChatGPT will now have to meet specific transparency requirements to ensure users are not misled.

On this subject, OpenAI has stated it will be working “closely with the EU AI Office and other relevant authorities as the new law [is] implemented in the coming months.”

Lastly, the European Union plans to hold AI developers and other tech corporations accountable by increasing the potential fines for non-compliance: developers who fail to meet the updated compliance rules now face penalties of up to 7 percent of global annual turnover.