Most of the time, US President Donald Trump is at odds with the country's big tech companies. He recently accused Google, for example, of biasing search results against conservative opinions, a reproach that cannot be empirically proven. Following the shootings in Dayton and El Paso, however, the president called on the operators of social networks to cooperate: together with the Department of Justice, they should develop "tools that detect gunmen before they strike". He gave no further details. Apparently the president has in mind some kind of algorithm that evaluates activity on social networks and thereby identifies potential perpetrators. The companies have not yet responded. Even now, though, it is clear that the proposal raises two major problems:
1. The technology is far from good enough
The basic idea is not brand new. A similar approach is already being used to identify users with suicidal thoughts, so that, ideally, help can be offered early. However, the algorithms are by no means perfect: the number of false positives is high. In suicide prevention this is tolerable, since it is better to offer help once too often than once too little. An algorithm for identifying mass shooters would have to work far more accurately. Even with an error rate of only one percent, the number of suspects could exceed one million.
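The base-rate arithmetic behind this claim can be sketched in a few lines. The population figure and error rate below are illustrative assumptions, not official statistics:

```python
# Illustrative base-rate calculation (the numbers are assumptions, not official data).
# Even a seemingly small error rate produces a huge number of false positives
# when applied to a very large population.

population = 330_000_000   # rough US population, assumed for illustration
error_rate = 0.01          # hypothetical 1% false-positive rate

false_positives = int(population * error_rate)
print(false_positives)     # 3,300,000 people wrongly flagged
```

Even if only a fraction of the population were actually scanned, the absolute number of innocent people flagged would remain enormous, which is the core of the base-rate problem.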
This would far exceed the capacity of the security authorities. Moreover, algorithms are by no means as neutral as they appear. In the US judiciary, for example, software has been used for some time to assess the likelihood that convicted persons will reoffend. Closer analysis has shown that these algorithms significantly discriminate against Black people. The same would have to be expected in the search for potential shooters. Here, too, capacities of the security forces would be tied up that could be put to better use elsewhere.
2. Many ethical issues are still unclear
So far, our legal system has largely been based on the principle that one can only be convicted of crimes actually committed, not for being a bad person as such. This raises the question: what happens when the algorithm raises an alarm about a specific user? Simply arresting that person and locking them away would massively violate fundamental rights and is therefore not an option. Permanent surveillance would be similarly problematic, and difficult to implement besides. Most likely, the flagged names would simply be collected in a database. But that alone does not prevent a terrorist attack.
In addition, there is another problem: the suspects have no chance to prove their innocence, because the accusation rests not on hard facts but purely on the assessment of an algorithm. Again, this is difficult to reconcile with today's values and legal system. These objections, however, apply only to software intended to detect shooters from behavioral patterns before they announce their actions. If, on the other hand, someone posts on Facebook that he is about to commit such an act, the security forces can intervene even today. It might actually make sense to use intelligent algorithms to detect suspicious posts and then submit them to a human for review.
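A flag-then-review pipeline of this kind could be sketched as follows. This is a minimal illustration only: the keyword set stands in for a real trained classifier, and all names and terms here are hypothetical.

```python
# Minimal sketch of flagging posts for human review.
# The keyword list is a placeholder for a real classifier; in practice a
# trained model would score each post, and only high-scoring posts would
# be queued for a human reviewer rather than triggering automatic action.

SUSPICIOUS_TERMS = {"attack", "shoot", "bomb"}  # illustrative placeholder terms

def flag_for_review(post: str) -> bool:
    """Return True if the post should be queued for a human reviewer."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & SUSPICIOUS_TERMS)

posts = [
    "Having a great day at the park!",
    "I am going to attack the mall tomorrow",
]

# Only flagged posts reach the human review queue; the algorithm never
# decides on its own, which is the point of the human-in-the-loop design.
review_queue = [p for p in posts if flag_for_review(p)]
print(len(review_queue))  # 1
```

The design choice here is that the algorithm only narrows the stream of posts; the consequential judgment remains with a person, which avoids the due-process problems described above.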
Conclusion: There are much better ideas
Trump's idea is not entirely far-fetched. At the moment, however, the necessary technology is simply too immature, so deploying it would not be practicable. Should that change one day, difficult ethical, legal, and moral questions will confront politics and the judiciary. By contrast, the probability of a mass shooting can be significantly reduced even today by other measures. Statistics show that the number of such acts correlates with the number of weapons in circulation. In other words, a stricter gun policy would massively increase public safety without unduly interfering with fundamental rights. Currently, however, there seems to be no majority for this in US politics.