The EU’s incoming AI Act is giving off some real I, Robot vibes.

The European Parliament just voted to adopt an EU Commission proposal to make the application of artificial intelligence safer. The new proposal, amended by Members of European Parliament (MEPs), aims to “ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.”

The AI Act makes a lot of sense considering the state of the artificial intelligence discourse right now, though the first prohibited practice in the original proposal is giving off some real I, Robot vibes.

Among a long list of disallowed practices is “the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.”

Of course, there’s always some grey area as to what constitutes physical and psychological harm. But that’s an opinion piece for another day.

The European Parliament’s news service gives the lowdown on the amended “risk-based” AI Act, which passed into its draft negotiating stage with a whopping majority of 84 votes in favour. Just seven MEPs voted against, and twelve abstained.

Compared with the original draft, MEPs have made a few changes in the interest of avoiding “intrusive and discriminatory uses of AI systems”. Brando Benifei, an Italian member of the Socialists & Democrats group representing the Internal Market Committee, previously said he was “confident that, tomorrow, we will add real-time and post biometric identification in publicly accessible spaces in this list of forbidden practices.”

That has come to pass, and the amended list of prohibited artificial intelligence practices now stands as follows: 

  • “Real-time” remote biometric identification systems in publicly accessible spaces; 
  • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization; 
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation); 
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

There’s a big focus on transparency, too, within the act. Developers of “generative foundation models, like GPT” will not only have to make it explicitly clear that their content was generated by AI, they will also need to design their models to prevent them from “generating illegal content” and to publish “summaries of copyrighted data” used to train them.

Representing the Civil Liberties Committee on the file, Romanian MEP Dragos Tudorache spoke to reporters, assuring them that the act will address the copyright issues associated with AI development. According to his explanation, AI companies using copyrighted material “will have an obligation to be transparent about that, to document it and to be transparent, so that it opens up the possibility for the owners of the rights to then go and seek compensation”.

He also made the point that it’s important not to stifle innovation in the AI start-up space: “We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement.”

If the AI Act is passed, it will be the first of its kind in the world to outline safe and transparent practices for artificial intelligence development. It will need backing from the whole Parliament at the June 12–15 plenary session, with a majority accepting the changes made by the MEPs.