
Expert comment on amending human review of AI decisions

September 2021 by Andy Patel, researcher with F-Secure’s Artificial Intelligence Center of Excellence

Following the news that the government is proposing amendments to GDPR that would remove the right to human review of AI decisions, Andy Patel, researcher with F-Secure's Artificial Intelligence Center of Excellence, comments:

“Decisions made by automated systems, and especially by machine learning-based algorithms, can be prone to error and/or bias. Concrete examples of such errors have already been demonstrated in systems used in insurance, hiring, education, and law enforcement. It is impossible to include every possible real-world scenario and corner case in the data used to train and validate machine learning algorithms. Article 22 of GDPR safeguards individuals against this problem, and, as such, removing the provisions of Article 22 from UK law is not only dangerous, but also a step in the wrong direction.”