by John W. Whitehead, Rutherford Institute:
“The government solution to a problem is usually as bad as the problem and very often makes the problem worse.”—Milton Friedman
You’ve been flagged as a threat.
Before long, every household in America will be similarly flagged and assigned a threat score.
Without having ever knowingly committed a crime or been convicted of one, you and your fellow citizens have likely been assessed for behaviors the government might consider devious, dangerous or concerning; assigned a threat score based on your associations, activities and viewpoints; and catalogued in a government database according to how you should be approached by police and other government agencies based on your particular threat level.
If you’re not unnerved over the ramifications of how such a program could be used and abused, keep reading.
It’s just a matter of time before you find yourself wrongly accused, investigated and confronted by police based on a data-driven algorithm or risk assessment cobbled together by a computer program run by artificial intelligence.
Consider the case of Michael Williams, who spent almost a year in jail for a crime he didn’t commit. Williams was behind the wheel when a passing car fired at his vehicle, killing his 25-year-old passenger Safarian Herring, who had hitched a ride.
Despite the fact that Williams had no motive, that there were no eyewitnesses to the shooting, that no gun was found in the car, and that Williams himself drove Herring to the hospital, police charged the 65-year-old man with first-degree murder based on ShotSpotter, a gunshot detection program that had picked up a loud bang on its network of surveillance microphones and triangulated the noise to correspond with a noiseless security video showing Williams’ car driving through an intersection. The case was eventually dismissed for lack of evidence.
Although gunshot detection programs like ShotSpotter are gaining popularity with law enforcement agencies, prosecutors and courts alike, they are riddled with flaws, mistaking “dumpsters, trucks, motorcycles, helicopters, fireworks, construction, trash pickup and church bells…for gunshots.”
As an Associated Press investigation found, “the system can miss live gunfire right under its microphones, or misclassify the sounds of fireworks or cars backfiring as gunshots.”
In one community, ShotSpotter worked less than 50% of the time.
Then there’s the human element of corruption which invariably gets added to the mix. In some cases, “employees have changed sounds detected by the system to say that they are gunshots.” Forensic reports prepared by ShotSpotter’s employees have also “been used in court to improperly claim that a defendant shot at police, or provide questionable counts of the number of shots allegedly fired by defendants.”
The same company that owns ShotSpotter also owns a predictive policing program that aims to use gunshot detection data to “predict” crime before it happens. Both Presidents Biden and Trump have pushed for greater use of these predictive programs to combat gun violence in communities, despite the fact that they have not been found to reduce gun violence or increase community safety.
The rationale behind this fusion of widespread surveillance, behavior prediction technologies, data mining, precognitive technology, and neighborhood and family snitch programs is purportedly to enable the government to take preemptive steps to combat crime (or whatever the government has chosen to outlaw at any given time).
This is precrime, straight out of the realm of dystopian science fiction movies such as Minority Report. It aims to prevent crimes before they happen, but in fact it’s just another means of getting the citizenry in the government’s crosshairs in order to lock down the nation.
Even Social Services is getting in on the action, with computer algorithms attempting to predict which households might be guilty of child abuse and neglect.
All it takes is an AI bot flagging a household for potential neglect for a family to be investigated, found guilty and the children placed in foster care.
Mind you, potential neglect can include everything from inadequate housing to poor hygiene, but is different from physical or sexual abuse.
According to an investigative report by the Associated Press, once incidents of potential neglect are reported to a child protection hotline, the reports are run through a screening process that pulls together “personal data collected from birth, Medicaid, substance abuse, mental health, jail and probation records, among other government data sets.” The algorithm then calculates the child’s potential risk and assigns a score of 1 to 20 to predict the risk that a child will be placed in foster care in the two years after they are investigated. “The higher the number, the greater the risk. Social workers then use their discretion to decide whether to investigate.”
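To make the mechanism concrete, the AP’s description amounts to a simple triage pipeline: pull records from linked government databases, collapse them into a single 1-to-20 score, and let that score steer a screener’s decision. Below is a minimal sketch of how such a pipeline could work; every field name, weight, and threshold in it is invented for illustration and does not come from any actual screening tool.

```python
# Hypothetical sketch of score-based screening triage.
# All field names, weights, and thresholds are invented for
# illustration; they do NOT reflect any real screening system.

RISK_MIN, RISK_MAX = 1, 20  # the article describes a 1-to-20 scale

# Invented weights standing in for a trained predictive model.
WEIGHTS = {
    "prior_hotline_reports": 5,
    "substance_abuse_record": 4,
    "jail_or_probation_record": 4,
    "mental_health_record": 3,
    "medicaid_enrollment": 2,
}

def risk_score(record: dict) -> int:
    """Collapse flags pulled from linked data sets into one number."""
    raw = sum(w for field, w in WEIGHTS.items() if record.get(field))
    return max(RISK_MIN, min(RISK_MAX, RISK_MIN + raw))

def flag_for_investigation(record: dict, threshold: int = 10) -> bool:
    # The screener sees only the score, not the inputs behind it --
    # the opacity the AP report describes.
    return risk_score(record) >= threshold

# Example: a household scored purely on the basis of linked records.
household = {"medicaid_enrollment": True, "jail_or_probation_record": True}
print(risk_score(household), flag_for_investigation(household))
```

Notice what falls out of even this toy version: the score is driven entirely by which databases a family happens to appear in, and the family never sees the number, the weights, or the threshold that determined whether a caseworker came to their door.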
Other predictive models being used across the country strive to “assess a child’s risk for death and severe injury, whether children should be placed in foster care and if so, where.”
Incredibly, there’s no way for a family to know if AI predictive technology was responsible for their being targeted, investigated and separated from their children. As the AP notes, “Families and their attorneys can never be sure of the algorithm’s role in their lives either because they aren’t allowed to know the scores.”
One thing we do know, however, is that the system disproportionately targets poor, black families for intervention, disruption and possibly displacement, because much of the data being used is gleaned from lower income and minority communities.
The technology is also far from infallible. In one county alone, a technical glitch presented social workers with the wrong scores, either underestimating or overestimating a child’s risk.
Yet fallible or not, AI predictive screening programs are being used widely across the country by government agencies to surveil and target families for investigation. The fallout of this oversurveillance, according to Aysha Schomburg, the associate commissioner of the U.S. Children’s Bureau, is “mass family separation.”
The impact of these kinds of AI predictive tools is being felt in almost every area of life.
Under the pretext of helping overwhelmed government agencies work more efficiently, AI predictive and surveillance technologies are being used to classify, segregate and flag the populace with little concern for privacy rights or due process.
All of this sorting, sifting and calculating is being done swiftly, secretly and incessantly with the help of AI technology and a surveillance state that monitors your every move.
Where this becomes particularly dangerous is when the government takes preemptive steps to combat crime or abuse, or whatever the government has chosen to outlaw at any given time.
In this way, government agents—with the help of automated eyes and ears, a growing arsenal of high-tech software, hardware and techniques, government propaganda urging Americans to turn into spies and snitches, as well as social media and behavior sensing software—are spinning a sticky spider-web of threat assessments, behavioral sensing warnings, flagged “words,” and “suspicious” activity reports aimed at snaring potential enemies of the state.
Are you a military veteran suffering from post-traumatic stress disorder? Have you expressed controversial, despondent or angry views on social media? Do you associate with people who have criminal records or subscribe to conspiracy theories? Were you seen looking angry at the grocery store? Is your appearance unkempt in public? Has your driving been erratic? Did the previous occupants of your home have any run-ins with police?
All of these details and more are being used by AI technology to create a profile of you that will impact your dealings with government.
It’s the American police state rolled up into one oppressive pre-crime and pre-thought crime package, and the end result is the death of due process.
In a nutshell, due process was intended as a bulwark against government abuses. Due process prohibits the government from depriving anyone of “Life, Liberty, and Property” without first ensuring that an individual’s rights have been recognized and respected and that they have been given the opportunity to know the charges against them and defend against those charges.
With the advent of government-funded AI predictive policing programs that surveil and flag someone as a potential threat to be investigated and treated as dangerous, there can be no assurance of due process: you have already been turned into a suspect.
To disentangle yourself from the fallout of such a threat assessment, you bear the burden of proving your innocence.
You see the problem?
It used to be that every person had the right to be assumed innocent until proven guilty, and the burden of proof rested with one’s accusers. That assumption of innocence has since been turned on its head by a surveillance state that renders us all suspects and by overcriminalization, which renders us all potentially guilty of some wrongdoing or other.