by Nicholas West, Activist Post:
I’ve recently been covering the widening use of predictive algorithms in modern-day police work, which has frequently been compared to the “pre-crime” of dystopian fiction. What is discussed far less often, however, is how faulty the underlying data still is.
All forms of biometrics, for example, use artificial intelligence to match identities against centralized databases. Yet in the UK we saw police roll out a test of facial recognition at a festival late last year that produced 35 false matches and only one accurate identification. Although that extreme inaccuracy is the worst case I’ve come across, many experts are concerned about the expansion of biometrics and artificial intelligence in police work, as various studies have concluded that these systems may not be reliable enough to serve as the basis of any system of justice.
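To put the festival trial in perspective, the standard way to measure a matching system is its precision: of all the matches it flags, how many are real? A minimal sketch, using only the two figures reported above (35 false matches, 1 accurate identification):

```python
# Precision of the UK festival facial-recognition trial,
# using the figures reported in the article.
true_positives = 1    # the one accurate identification
false_positives = 35  # the false matches

# Precision: share of flagged matches that were actually correct.
precision = true_positives / (true_positives + false_positives)
print(f"Precision: {precision:.1%}")  # roughly 2.8%
```

In other words, under those figures, roughly 97 out of every 100 people the system flagged were flagged wrongly.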
The type of data collected above is described as “physical biometrics” – however, there is a second category which is also gaining steam in police work that primarily centers on our communications; this is called “behavioral biometrics.”
The analysis of behavior patterns feeds predictive algorithms that claim to identify “hotspots” in the physical or virtual world where crime, social unrest, or any other deviation from the norm might emerge. The same mechanism is at the crux of what we are seeing online to flag terrorist narratives and the various other forms of speech deemed to “violate community guidelines,” and it is arguably what is driving the current social media purge of nonconformists. Yet, as one recent prominent example illustrates, the foundation for determining “hate speech” is shaky at best. Even so, people are losing their free speech, and even their livelihoods, based solely on the determinations of these algorithms.
The Anti-Defamation League (ADL) recently announced an artificial intelligence program being developed in partnership with Facebook, Google, Microsoft and Twitter to “stop cyberhate.” In their video, you can hear the ADL’s Director of the Center for Technology & Society admit to a “78-85% success rate” for their A.I. program in detecting hate speech online. I heard that as a 15-22% failure rate. And they are the ones defining the parameters. That is a disturbing margin of error, especially for a system that presumes to pin down a nebulous concept and to know exactly what it is looking for.
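The flip from a “success rate” to a failure rate is simple arithmetic, but the scale is worth spelling out. A quick sketch using the article’s 78-85% figure; the one-million-post volume is a purely hypothetical illustration, not a number from the source:

```python
# The ADL's claimed success rate, per the article.
success_low, success_high = 0.78, 0.85

# A success rate implies a complementary failure rate.
failure_high = 1 - success_low   # 22% of judgments wrong
failure_low = 1 - success_high   # 15% of judgments wrong

# Hypothetical volume, for illustration only.
posts = 1_000_000
print(f"Failure rate: {failure_low:.0%}-{failure_high:.0%}")
print(f"Misclassified per million posts: "
      f"{posts * failure_low:,.0f} to {posts * failure_high:,.0f}")
```

At that error rate, somewhere between 150,000 and 220,000 of every million posts reviewed would be judged incorrectly.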