If we are to believe the peddlers of school security systems, K-12 schools will soon operate like something out of Minority Report, Person of Interest, and Robocop. Military-grade systems would slurp up student data so that officers could be dispatched before would-be perpetrators carry out their vile acts. In the unlikely event that someone evaded the predictive systems, they would be stopped by next-generation weapon-detection systems and biometrics that interpret a person's tone, warning authorities of impending danger. The final layer might be a robot dog, able to disarm, distract, or destroy the dangerous individual before any real damage is done. Invest in these systems, the pitch goes, and our children will be safe.

This is not our present and will never be our future.

In the past several years, a host of companies have sprouted up, all promising a variety of technological interventions that will curtail or even eliminate the risk of school shootings. The proposed solutions range from tools that use machine learning and human monitoring to predict violent behavior, to artificial intelligence paired with cameras that determine the intent of individuals, to microphones that identify the potential for violence based on tone of voice. Many of them use dead children to promote their technology: AnyVision, for instance, has used images from the Sandy Hook shooting in its presentations. After the Uvalde shooting last month, Axon announced plans for a taser-equipped drone as a means of dealing with school shooters; the plan was put on hold after members of the company's ethics board resigned in protest. Each company would have us believe that it alone can solve the problem.

The failure here lies not only in the systems themselves, but in the way people think about them. When a security system fails, the usual response is to call for more extensive monitoring: if a danger wasn't predicted, companies cite the need for more data to fill the gaps in their systems. After the recent subway shooting, the mayor of New York doubled down on the need for even more surveillance technology. Schools, meanwhile, are ignoring the ban on facial recognition technology. According to the New York Times, US schools spent over $3 billion on security products and services in a single year, and the recent gun legislation includes another $300 million for school security.

Prediction is, at its core, a claim of certainty, or at least a measure of it.

Yet what many of these systems promise is the prediction of situations about which there can be no certainty. Tech companies pitch the idea of complete data, and therefore perfect systems, as something just over the next ridge: an environment so thoroughly surveilled that violence can be predicted and prevented. But a comprehensive data set of ongoing human behavior is like the horizon: it can be conceptualized but never reached.

Companies currently engage in a variety of bizarre techniques to train their systems, few of which are good indicators of real life. It is conceivable that these companies would eventually train their systems on data from real-world shootings. Yet even if footage from real incidents became available, the models would still fail to predict the next tragedy based on previous ones: Uvalde was different from Sandy Hook, which was different from the shootings that came before it.

Predicting intent or motivation based on incomplete and contextless data is not a bet on which to stake the future. The assumption behind any machine-learning model is that there is a pattern to be identified; in this case, that there is some "normal" behavior that shooters exhibit at the scene of the crime. But finding such a pattern is unlikely, given the near-continual changes in the practices of teens. Young people are constantly shifting the way they dress, speak, and present themselves, often precisely in order to evade being watched. Developing a consistently accurate model of that behavior is nearly impossible.