Public ban on biometric surveillance
EU data protection bodies are calling for the European Union to ban biometric surveillance in public spaces, arguing that such practices infringe on human rights. The use of biometric and behavioral signals in any context to monitor and identify citizens has been called into question.
In a joint report published today, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) called for the draft regulation to prohibit the use of artificial intelligence to identify individuals based on their faces, gait, fingerprints, DNA, voice and other biometric signals.
EU lawmakers have proposed corresponding legislation, but its wide-ranging exemptions have drawn scathing criticism. The European Data Protection Supervisor, Wojciech Wiewiorowski, has called for a “rethink” and has aligned with the EDPB’s position.
Both bodies advocate bringing the legislation into line with the bloc’s existing data protection framework, to avoid further risks to fundamental rights.
“The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the Proposal,” say the two.
“The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.”
Private ban on biometric surveillance
Both the EDPB and the EDPS also call for a complete ban on the use of AI systems to tag individuals as belonging to certain ethnicities, political affiliations or other narrow categories that could lead to discriminatory actions.
As TechCrunch notes, this is an “interesting concern,” given Google’s attempts to replace individual-level ad targeting with advertisements aimed at interest-based groups. Does this represent predatory advertising? And is the EU’s urgent call for an AI-oriented biometrics ban in part a response to it?
Notably, Google avoided preliminary testing of this ad targeting model in Europe. This is presumably due to stringent existing data protection laws and close monitoring of discriminatory corporate actions.
Use of AI to interpret emotions
In a further recommendation, the EDPB and the EDPS argue that leveraging surveillance data to interpret human emotions is “highly undesirable and should be prohibited”. The only exception would be “very specific cases, such as some health purposes, where the patient emotion recognition is important”.
Social credit system
“The use of AI for any type of social scoring should be prohibited,” they say. Lawmakers appear interested in preventing any social credit system, similar to that of the Chinese model, from emerging in the European Union.
Should a prohibition on biometric surveillance in public spaces fail to pass, experts worry that a social credit system of sorts could be established insidiously. For example, private groups could use public-space surveillance footage to track and profile people’s behavior, then use those profiles to decide whether to grant loans, provide insurance, penalize certain actions, and more.
“The proposed regulation should also prohibit any type of use of AI for social scoring, as it is against the EU fundamental values and can lead to discrimination,” state Andrea Jelinek, EDPB’s chair, and EDPS’ Wiewiorowski.
For more on this story, visit TechCrunch.