Public school districts across the United States, including those in New Jersey and Wisconsin, have recently implemented visitor management systems that use facial recognition AI. While proponents argue that these systems enhance school safety, they also represent a significant escalation of the all-seeing eye of state-sponsored surveillance in children's daily lives.
The Visitor Aware system, developed by Singlewire Software, collects a vast array of personal data from school visitors, including facial photos, and cross-references this information against various government watch lists.
As cities and airports across the nation pour ever-larger portions of their budgets into surveillance technology, a troubling paradox emerges. This digital panopticon, touted as a total solution for urban crime, is slowly being revealed as mere security theater – an elaborate ruse that fails to address the root causes of societal insecurity and criminal activity. The shift toward high-tech policing has created a feedback loop of distrust between communities and police, which further escalates reliance on impersonal surveillance methods.
This pre-crime political philosophy, however, lacks a crucial element: evidence of efficacy. Despite the ubiquitous nature of cameras and data collection in public spaces, there's little statistical proof that these measures reduce crime rates or enhance public safety. History has shown that embracing this technology leads to a disproportionate scrutiny of poorer neighborhoods, creating a divide in law enforcement that exacerbates existing societal inequities.
We must ask ourselves: is AI facial recognition technology building safer communities, or simply more watched ones?
The Slippery Slope of Surveillance
The evolution of facial recognition technology can be traced back to the 1960s, when Woodrow Wilson Bledsoe pioneered the first semi-automated facial recognition systems.
Initially developed for benign purposes such as photo organization and identification, these early systems laid the groundwork for the futuristic Orwellian visions of the political elite.
A watershed moment for the then-experimental tech came in 2001 during Super Bowl XXXV in Tampa, Florida. There, facial recognition was deployed on a massive scale to scan crowds for potential criminals, marking one of the first major public uses of this technology. The event sparked early debates about mass surveillance and the balance between public safety and individual privacy – back when the public actually cared about and feared the implications of the all-seeing eye.
This historical trajectory reminds us that what begins as a benign tool for a specific, limited purpose can evolve into a pervasive totalitarian system of technocracy, with disturbing, dystopian consequences for civil liberties.
Empty Promises and Bureaucratic Hurdles
A recent report from the National Academies of Sciences, Engineering, and Medicine calls for swift government action to address concerns raised by AI-driven facial recognition technology.
But the track record of government regulation in rapidly evolving tech sectors is less than stellar. By the time any meaningful legislation is passed, the technology will likely have advanced far beyond the scope of proposed regulations. This is by design.
The suggestion that the Departments of Justice and Homeland Security should establish working groups to develop standards for law enforcement use of facial recognition is laughable. These are the very agencies that have been pushing for expanded surveillance powers. Asking the technocratic elite to self-regulate is like asking the fox to guard the henhouse.
While the report focuses heavily on government use and regulation, it ignores the elephant in the room: private-sector development of facial recognition technology. Big Tech will continue to develop and deploy these systems, outpacing any potential regulation from the public sector.
A False Sense of Security
While the National Academies' report may appear to address concerns about facial recognition technology, a critical analysis reveals that its surface-level pseudo-recommendations are unlikely to result in meaningful change. The proposed actions will only provide a false sense of security while the real work of developing and deploying these invasive technologies continues, uninterrupted.
As some cities allocate up to 60% of their budgets to policing and surveillance, it's time to reconsider our approach to community safety and the rise of crime in America. Derrick Broze has also called out Houston’s problematic implementation of ShotSpotter, a controversial acoustic gunfire detection technology.
There are promising, underrated alternatives to total surveillance, such as violence-interruption programs. These approaches offer a more holistic, organic view of security – one that prioritizes well-funded community engagement over constant observation. In other words, everything Big Tech and government aren't interested in funding.
These methods recognize that true safety is more than the absence of crime; it includes social cohesion and addressing the psychology of fear at its root.
Sources
https://www.theverge.com/2023/7/1/23781040/the-tsa-will-use-facial-recognition-in-over-400-airports
https://www.wesleyan.edu/allbritton/cspl/scholarship/jennifertucker.pdf
https://www.wired.com/2010/01/0128tampa-super-bowl-facial-recognition/
I sure hope Putin will put a stop to this globalist tyranny /s
https://www.biometricupdate.com/202006/ntechlab-to-supply-biometric-facial-recognition-to-over-43000-russian-schools