Published on February 10th, 2024
London Underground Is Testing Real-Time AI Surveillance Tools to Spot Crime
In response to WIRED's Freedom of Information request, TfL says it used existing CCTV images, AI algorithms, and "numerous detection models" to detect patterns of behavior. "By providing station staff with insights and notifications on customer movement and behaviour they will hopefully be able to respond to any situations more quickly," the response says. It also says the trial has provided insight into fare evasion that will "assist us in our future approaches and interventions," and the data gathered is in line with its data policies.
In a statement sent after publication of this article, Mandy McGregor, TfL's head of policy and community safety, says the trial results are continuing to be analyzed and adds, "there was no evidence of bias" in the data collected from the trial. During the trial, McGregor says, there were no signs in place at the station that mentioned the tests of AI surveillance tools.
"We are currently considering the design and scope of a second phase of the trial. No other decisions have been taken about expanding the use of this technology, either to further stations or adding capability," McGregor says. "Any wider roll out of the technology beyond a pilot would be dependent on a full consultation with local communities and other relevant stakeholders, including experts in the field."
Computer vision systems, such as those used in the test, work by trying to detect objects and people in images and videos. During the London trial, algorithms trained to detect certain behaviors or movements were combined with images from the Underground station's 20-year-old CCTV cameras, analyzing imagery every tenth of a second. When the system detected one of 11 behaviors or events identified as problematic, it would issue an alert to station staff's iPads or a computer. TfL staff received 19,000 alerts to potentially act on and a further 25,000 kept for analytics purposes, the documents say.
The categories the system tried to identify were: crowd movement, unauthorized access, safeguarding, mobility assistance, crime and antisocial behavior, person on the tracks, injured or unwell people, hazards such as litter or wet floors, unattended items, stranded customers, and fare evasion. Each has multiple subcategories.
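The documents do not describe how TfL's system routes detections internally, but the split the article reports (alerts pushed to staff devices versus events kept only for analytics) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the `Detection` type, the `route` function, and the 0.8 confidence threshold are invented, and only the 11 category names come from the trial documents.

```python
# Illustrative sketch (not TfL's actual system) of routing detections:
# high-confidence events in a monitored category trigger a staff alert,
# while lower-confidence events are retained for analytics only.

from dataclasses import dataclass

# The 11 event categories named in the TfL trial documents.
CATEGORIES = {
    "crowd movement", "unauthorized access", "safeguarding",
    "mobility assistance", "crime and antisocial behavior",
    "person on the tracks", "injured or unwell people",
    "hazards", "unattended items", "stranded customers", "fare evasion",
}

@dataclass
class Detection:
    category: str
    confidence: float  # hypothetical model score in [0, 1]

def route(detection: Detection, alert_threshold: float = 0.8) -> str:
    """Return 'alert' to notify staff devices, 'analytics' to store only."""
    if detection.category not in CATEGORIES:
        return "discard"
    return "alert" if detection.confidence >= alert_threshold else "analytics"
```

Under this (assumed) scheme, the 19,000 alerts sent to staff would correspond to the "alert" path and the further 25,000 retained events to the "analytics" path.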
Daniel Leufer, a senior policy analyst at digital rights group Access Now, says whenever he sees any system doing this kind of monitoring, the first thing he looks for is whether it is attempting to pick out aggression or crime. "Cameras will do this by identifying the body language and behavior," he says. "What kind of a data set are you going to have to train something on that?"
The TfL report on the trial says it "wanted to include acts of aggression" but found it was "unable to successfully detect" them. It adds that there was a lack of training data; other reasons for not including acts of aggression were blacked out. Instead, the system issued an alert when someone raised their arms, described as a "common behaviour linked to acts of aggression" in the documents.
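The documents do not say how the raised-arms proxy was implemented, but a common approach in computer vision is to run a pose estimator and check joint positions. The sketch below is a hypothetical illustration of that idea only: the keypoint names and the wrist-above-shoulder rule are assumptions, not TfL's method. Note that in image coordinates the y axis grows downward, so "above" means a smaller y value.

```python
# Hypothetical illustration of a raised-arms check on pose keypoints
# (e.g., output from an off-the-shelf pose estimator). Not TfL's code.

def arms_raised(keypoints: dict) -> bool:
    """keypoints maps joint names to (x, y) pixel coordinates.

    Flags a person whose wrists are above their shoulders; y grows
    downward in image coordinates, so 'above' is a smaller y.
    """
    return (keypoints["left_wrist"][1] < keypoints["left_shoulder"][1]
            and keypoints["right_wrist"][1] < keypoints["right_shoulder"][1])
```

Even this trivial rule shows the gap Leufer points to below: raising one's arms is also how people stretch, wave to a friend, or hail a staff member, so the proxy inherits none of the context that distinguishes aggression from ordinary behavior.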
"The training data is always insufficient because these things are arguably too complex and nuanced to be captured properly in data sets with the necessary nuances," Leufer says, noting it is positive that TfL acknowledged it did not have enough training data. "I'm extremely skeptical about whether machine-learning systems can be used to reliably detect aggression in a way that isn't simply replicating existing societal biases about what type of behavior is acceptable in public spaces." There were a total of 66 alerts for aggressive behavior, including testing data, according to the documents WIRED received.