
These are the risks that the EU wants to regulate

The European Union (EU) is finalizing the world's first law to regulate artificial intelligence (AI). On June 14, the European Parliament established its position and gave the green light to a pioneering measure whose mission is to control the deployment of this emerging technology, setting more or less strict standards depending on the level of risk its use entails. Although it was approved by a large majority, the text is not yet final. Its final version will depend on negotiations with the European Council, made up of the heads of state or government of the 27 member states, and a decision is expected before the end of this year.

The law began to take shape on April 21, 2021, when the Commission proposed regulating AI to ban systems, such as biometric identification or social scoring, that are likely to harm human rights. That concern has dominated much of the negotiations between legislators. The document sealed in June thus chooses to classify this technology based on the potential impact of its use, with four categories depending on the level of risk: unacceptable, high, limited, and low or minimal.

The regulation will veto all systems that pose an “unacceptable risk” and a “threat to people.” As proposed by the European Parliament, this is the case with predictive policing and facial recognition. The latter will be restricted when used in real time (contrary to the position of the European People’s Party), but it will continue to be allowed a posteriori in publicly accessible spaces. More than 155 human rights organizations, including Amnesty International, have demanded a complete ban on biometric identification.

Permitted uses

Other uses of AI will be permitted, although they will be subject to varying degrees of control. The strictest will apply to “high risk” systems, those that “negatively affect the safety or fundamental rights” of users. The label encompasses tools applied in areas such as education, employment or the management of critical infrastructure. To avoid harm, these systems must be evaluated before being put on the market, registered in an EU database and meet transparency criteria. The European Parliament includes content recommendation algorithms here, which would intensify scrutiny of the large internet platforms that use them, from Instagram and Google to TikTok and YouTube.

However, the text has also aroused controversy. A coalition of more than a hundred NGOs has denounced that Article 6 introduces a “legal loophole” that would allow AI companies to decide whether the systems they have developed should be regulated as “high risk.” “If this is approved, the law will be useless,” Caterina Rodelli, an analyst at Access Now, explained to EL PERIÓDICO. The NGOs also warn that the 27 members of the community club, represented by the European Council, are pushing in the negotiations for the use of AI in border control not to be subject to the same transparency rules.

Generative AI

What will happen with ChatGPT? According to the position taken by the European Parliament, generative AI tools like the chatbot developed by OpenAI fall into the “limited risk” category. They need only comply with transparency obligations and clearly identify artificially generated content. “Users must be aware that they are interacting with a machine to be able to make an informed decision about whether to continue or take a step back,” the text reads. The intention of this measure is to prevent the proliferation of fake content.

Pressure from OpenAI reportedly helped modify an initial draft of the law that classified “general purpose” AIs like ChatGPT within the “high risk” category, according to documents obtained by Time magazine. That allowed the start-up, backed by Microsoft, to avoid greater restrictions and requirements.

The law will also force the companies that created these tools to disclose whether they have been trained with copyright-protected material. At first, members of the legislative committee that drafted the proposal called for completely prohibiting this use without consent, but in the end the transparency requirement prevailed.

An open debate


Finally, on the lowest rung will be AI that poses “minimal risk” or no risk at all. This category includes the vast majority of systems deployed in the EU, from automatic spam filters to video games that use this technology.

“The law seeks to curb the negative externalities of AI, but regulating it too soon can also hold back beneficial uses,” says technology analyst Antonio Ortiz. “The EU is in the midst of a debate about how to regulate quickly while remaining a technological power, because its position in the development of AI systems is not very strong.”
