BRUSSELS, Apr 21 (EUROPA PRESS) –
The European Commission presented its proposal on Wednesday to regulate Artificial Intelligence in the European Union in a way that encourages its development and promotes investment and research, while setting high standards for safety and fundamental rights to prevent it from being used for repressive, manipulative or discriminatory purposes.
Brussels is clear that "social scoring" applications, such as the "social credit" system China uses to monitor its citizens, should be prohibited in any case, as should manipulation mechanisms, for example voice assistants installed in toys that incite minors to dangerous behavior.
These are systems that pose an "unacceptable risk" to people's safety or fundamental rights, as explained at a joint press conference by the Commission's Executive Vice-President responsible for Competition, Margrethe Vestager, and the Commissioner for Industry and the Internal Market, Thierry Breton.
The new rules have yet to be negotiated with the Twenty-seven and with the European Parliament, but the objective is that, once agreed, they will apply equally to public and private actors and to any European or foreign entity that operates in the internal market or whose systems affect EU citizens.
On the next rung of the risk scale into which the Commission divides artificial intelligence applications are systems that should be "prohibited in principle", although their use may be allowed in exceptional circumstances and under strict conditions.
In this block of high-risk systems, the EU services place, among others, indiscriminate facial recognition in public places, although the door is left open to its use in exceptional situations such as the search for a missing child or the tracking of terrorists and other dangerous criminals.
In addition, technologies used in key infrastructures such as transport networks, in safety components (robot-assisted surgery), in education or vocational training (exam scoring), in labor market management (CV-sorting software for recruitment) or in migration procedures (document verification) will also be considered high-risk and subject to obligations and human supervision.
For these cases, the Community Executive calls for risk assessment and control systems, high-quality data to feed the system in order to avoid discriminatory results, and activity logging to ensure traceability.
Detailed documentation on the system itself and its purpose will also be required so that the authorities can assess its compliance, and providers must give users clear information.
Another key element of the Commission's proposal is to guarantee the maximum possible transparency to prevent users from being deceived by applications such as 'deep fakes' (videos manipulated using artificial intelligence techniques) or 'bots' (computer programs that automatically carry out repetitive tasks over the Internet).
"It should be very clear to the user that he is talking to a machine," Vestager said, after warning of the risks to safety and fundamental rights posed by tools that do not allow users to clearly distinguish "reality from fiction."
Finally, Brussels proposes a level of "minimal or no risk", into which, it says, the vast majority of artificial intelligence systems fall and which can continue to be developed without specific security conditions. This is the case for tools such as email spam filters or those used in video games.