Police Officers Use AI Chatbots To Write Crime Reports

The Oklahoma City Police Department is experimenting with an AI tool called Draft One, developed by Axon, to generate initial drafts of incident reports. The system works from the audio recorded by officers' body cameras during an incident. In one example, Sergeant Matt Gilmore's hour-long search with his K-9 dog was automatically transcribed and summarized into a report in just eight seconds, a task that would typically take him 30 to 45 minutes to write manually.

Draft One is built on the same underlying technology as the NSA-sponsored ChatGPT, powered by OpenAI's large language model, but with additional controls that turn the 'creativity knob' down and keep the output focused on factual reporting. It is designed to capture the relevant details from the audio of a crime scene, including radio chatter and environmental sounds, and turn them into a coherent narrative report. Officers can then review and edit the AI-generated draft before finalizing it.
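
To make that pipeline concrete, here is a minimal, hypothetical sketch of an audio-to-draft workflow in Python. This is not Axon's actual implementation: the file name, prompt, and model choices are assumptions for illustration, and temperature=0 stands in for the 'creativity knob' turned all the way down.

```python
# Hypothetical sketch of an audio-to-draft-report pipeline (not Axon's code).
# Assumes the official `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def draft_report_from_audio(audio_path: str) -> str:
    # Step 1: transcribe the body-camera audio (dialogue, radio chatter, ambient sound).
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )

    # Step 2: summarize the transcript into a narrative incident report.
    # temperature=0 keeps generation as deterministic and fact-bound as possible.
    completion = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft police incident reports. Use only facts present in the "
                    "transcript. Do not speculate or invent details."
                ),
            },
            {"role": "user", "content": transcript.text},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    # The officer still reviews and edits the draft before it is submitted.
    print(draft_report_from_audio("bodycam_audio.wav"))
```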

Naturally, critics have raised concerns about potential inaccuracies, the risk of "hallucinations" (AI-generated falsehoods), and the impact on the legal process, particularly in cases where officers might need to testify in court about the contents of their reports. Officers say they counter this risk by making sure the chatbot's draft is read and reviewed by a real human police officer before the final report is officially submitted.

There is also a more speculative question about how Draft One's accuracy might change as it is exposed to more police data, since the behavior and accuracy of LLMs tend to shift over time.
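
One way to watch for that kind of drift, sketched below, is a simple regression check that re-runs a fixed set of transcripts on a schedule and flags drafts that diverge too far from officer-approved reference reports. The file layout, threshold, and similarity measure are all illustrative assumptions, not part of any real deployment.

```python
# Hypothetical drift check: compare freshly generated drafts against
# previously approved reference reports and flag large divergences.
import difflib
import json
from datetime import date

def similarity(a: str, b: str) -> float:
    """Rough text similarity in [0, 1] via difflib's sequence matcher."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def check_drift(reference_path: str, generate_draft, threshold: float = 0.8) -> list:
    """Re-run stored transcripts and flag drafts that drift from approved reports.

    `reference_path` points to a JSON list of {"transcript": ..., "approved_report": ...}
    entries; `generate_draft` is any callable that turns a transcript into a draft report.
    Both are assumptions for illustration.
    """
    with open(reference_path) as f:
        cases = json.load(f)

    flagged = []
    for i, case in enumerate(cases):
        draft = generate_draft(case["transcript"])
        score = similarity(draft, case["approved_report"])
        if score < threshold:
            flagged.append({"case": i, "date": str(date.today()), "score": round(score, 3)})
    return flagged
```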

In "Punishment Without Crime", criminologist Alexandra Natapoff (Basic Books, 2018), has argued that even minor inaccuracies in police reports can have devastating consequences, especially for vulnerable populations. The introduction of AI-generated reports could exacerbate these issues, potentially leading to increased wrongful arrests and prosecutions.

AI In City Government

The rush to implement AI systems also reflects a broader trend of "technosolutionism" in government, a term popularized by Evgeny Morozov in his 2013 book "To Save Everything, Click Here". This ideology, which posits technology as the answer to every societal problem, is particularly dangerous when adopted by state actors and intelligence agencies with the power to monitor every aspect of citizens' lives.

Last year, New York City launched an LLM-powered chatbot called MyCity as part of Mayor Eric Adams' initiative to create a "one-stop shop" for small business owners navigating the city's bureaucratic processes. The chatbot, built on Microsoft's Azure AI services, was designed to provide quick answers to questions about local policies and regulations. However, as reported by tech news outlet The Markup, the system has been dispensing alarmingly inaccurate advice, including telling business owners to break the law.

MyCity tells a user they can terminate an employee for 'being a woman'.

Despite acknowledging that the chatbot's answers were "wrong in some areas," Mayor Adams defended the decision to keep the tool active on the city's official website. The chatbot continues to provide incorrect guidance on critical issues, such as falsely suggesting it's legal for employers to fire workers for complaining about sexual harassment or refusing to cut their dreadlocks.

The dismissive attitude of officials like Adams, who defended the flawed AI as part of the process of "ironing out kinks," is disturbing.

Government and Police Relying on AI

As AI systems become more integrated into governance, there's a risk of what science fiction author Isaac Asimov termed the "Frankenstein complex": a fear of our own creations surpassing and ultimately controlling us. This fear is vividly realized in films like Ex Machina, where an AI designed to serve humans ultimately seeks to replace them.

With technological addiction and brainrot taking over large portions of the populace, we are at risk of becoming so dependent on AI systems that we lose our ability to think critically and make decisions for ourselves. The convenience of having AI handle every aspect of civic life could lead to a populace that is disengaged and easily manipulated, unable to challenge or even comprehend the systems that govern them.

In response to systems like these, self-governance activists from the Law of Mankind have argued for keeping a camera in your car that can record and provide evidence in case of any police or AI manipulation of events. Since chatbots like Draft One work from audio and are not actively connected to the visual component of police body-cam footage, there will inevitably be reports that are out of sync with reality.

Resources

AP News: https://apnews.com/article/ai-writes-police-reports-axon-body-cameras-chatgpt-a24d1502b53faae4be0dac069243f418

"Punishment Without Crime" by Alexandra Natapoff (Basic Books, 2018)

"To Save Everything, Click Here" by Evgeny Morozov (2013)

The Markup: https://themarkup.org/news/2024/03/29/nycs-ai-chatbot-tells-businesses-to-break-the-law

"How Is ChatGPT's Behavior Changing over Time?": https://arxiv.org/pdf/2307.09009

Know Your Meme, "Brain Rot": https://knowyourmeme.com/memes/brain-rot-brainrot
