Fri, 21 June 2024


Test tube: The tube stations experimenting with AI

Illustration: Tracy Worrall

We don’t yet know how AI is going to impact us but experiments in two suburban tube stations are revealing the potential, both good and bad, reports James O’Malley. Illustrations by Tracy Worrall

We all know the AI revolution is going to change the world – but few might have expected Willesden Green tube station to be in the vanguard.

For all the hype about large language models like ChatGPT, the coming upheaval still feels more theoretical – even gimmicky – than actual. 

But very quietly, and perhaps a little nervously, Transport for London (TfL) has been trialling cameras enabled with artificial intelligence (AI) – and the results are startling.

To understand the significance of the experiments, one needs a basic grounding in the technology that allows a computer to ‘see’ images and video, and understand what it’s looking at. 

Just over a decade ago it would have been unthinkable, but today we all use this technology on our phones to identify our friends and family in photos, or to apply beautifying filters to our faces.

The technological leap that has made this possible is the realisation that objects in images can be recognised by ‘training’ a statistical model on thousands of photos of the same thing. So instead of trying to describe what an object looks like in code, probabilities are used – and if you train with enough images, the results are scarily accurate.
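The principle can be illustrated with a toy sketch – a hypothetical nearest-centroid ‘classifier’, not anything TfL actually uses. Rather than hand-coding rules for what an object looks like, it averages many labelled example feature vectors and classifies new inputs by which average they sit closest to; real systems use deep neural networks trained on millions of images, but the train-by-example idea is the same.

```python
# Toy sketch of training-by-example: average labelled feature vectors,
# then classify new inputs by nearest centroid. The class names and
# three-number 'features' here are invented for illustration.

def centroid(examples):
    """Mean of a list of equal-length feature vectors."""
    n = len(examples)
    return [sum(v[i] for v in examples) / n for i in range(len(examples[0]))]

def train(labelled_examples):
    """labelled_examples: {label: [feature_vector, ...]} -> {label: centroid}."""
    return {label: centroid(vecs) for label, vecs in labelled_examples.items()}

def classify(model, vec):
    """Return the label whose centroid is nearest to vec."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vec))

# 'Training' data: crude made-up features for two pretend object classes.
examples = {
    "coffee_cup": [[0.9, 0.8, 0.1], [0.8, 0.9, 0.2], [0.85, 0.8, 0.15]],
    "newspaper":  [[0.2, 0.1, 0.9], [0.1, 0.2, 0.8], [0.15, 0.1, 0.85]],
}
model = train(examples)
print(classify(model, [0.88, 0.82, 0.12]))  # a cup-like input
```

The more varied the training examples, the better the model generalises – which is why scale, not clever hand-written rules, is what made modern computer vision work.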

It all makes for an exciting technology – one that means that every camera in the world can be much smarter, and actually react to what it is seeing.

TfL, which has long been admirably good at harnessing new technology, has been experimenting with AI computer vision for some time. Freedom of Information requests have surfaced at least two trials – at stations at opposite ends of the capital. In late 2019, TfL quietly conducted an experiment by installing a special AI camera, made by a company called Xovis, at Blackhorse Road, a station near the northern end of the Victoria Line.

The camera was pointed down to look at the row of ticket barriers at the entrance to the station. The idea was that it would essentially act as an additional member of staff, tasked with monitoring whether more people were queuing to enter or to exit the station. If it found that queues were forming on either side of the ticket barriers, it could automatically switch over the direction on an additional barrier or suggest to staff on their iPads that they switch the direction manually, helping more people to get through the gates more quickly.
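The decision logic described above might look something like this – a hypothetical simplification, not Xovis's or TfL's actual code: compare camera-derived queue counts on each side of the gateline and recommend a direction for the reversible barrier.

```python
# Hypothetical sketch of the gateline logic: given queue counts on each
# side of the barriers, decide which way to point a reversible gate.
# A real deployment would smooth counts over time and apply hysteresis
# so the gate doesn't flap back and forth.

def recommend_direction(entering_queue, exiting_queue, threshold=5):
    """Return 'entry', 'exit', or 'no_change' for the reversible barrier.

    threshold: how many more people one side must have before switching
    is worthwhile (an invented tuning parameter).
    """
    if entering_queue - exiting_queue >= threshold:
        return "entry"
    if exiting_queue - entering_queue >= threshold:
        return "exit"
    return "no_change"

print(recommend_direction(entering_queue=12, exiting_queue=3))   # entry
print(recommend_direction(entering_queue=4, exiting_queue=11))   # exit
print(recommend_direction(entering_queue=6, exiting_queue=5))    # no_change
```

The recommendation could either drive the barrier automatically or simply appear on staff iPads, as in the trial.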

Though the live trial was short – just under two weeks – a later analysis found that if all of the barriers in the station were fully automated like this, passenger throughput could increase by as much as 30 per cent and queue times could fall by as much as 90 per cent – meaning more people getting through the Tube network that little bit quicker, without anything expensive like bulldozing and expanding station buildings.

Credit: Alamy

Though the ticket barrier trial didn’t end in the technology being rolled out any further (presumably the pandemic had something to do with that), TfL has continued to expand its experiments with AI.

For example, a couple of years ago an even more ambitious trial took place at Willesden Green station on the Jubilee Line. What TfL realised was that, instead of just using cameras to watch the gateline, it could instead plug all of the station’s existing CCTV cameras into some clever AI software, and transform the entire station into a ‘smart station’.

Essentially, the idea was that by having AI monitor the station’s cameras, all sorts of different station operations could be improved. This ranges from the trivial – the AI could spot discarded newspapers and coffee cups on the platform – to the life-threatening. The AI would also track passengers as they moved through the station, and it would send an alert to staff iPads if it looked as though someone was too close to the platform edge, or appeared to be planning to jump, so that staff could intervene before it was too late.

In total, the system was reportedly capable of spotting up to 77 different ‘use cases’, though only 11 were used in the trial. Others included spotting whether passengers had been waiting on the platform or in the ticket hall for more than 10 or 15 minutes, as this might suggest they were confused or required assistance. 

Similarly, it would spot wheelchair users and pushchairs entering the station, which doesn’t have step-free access to the trains, so that staff could provide support. And if the system spotted anyone having fallen, it would immediately raise the alarm too.
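The dwell-time use case above boils down to a simple check, sketched here hypothetically (the tracking IDs and threshold are invented; the real system would derive them from anonymous re-identification across camera frames): record when each tracked passenger was first seen, and flag anyone who has lingered past the threshold.

```python
# Hypothetical sketch of the dwell-time alert: flag tracked passengers
# who have been in the same area longer than a threshold, so staff can
# check whether they are confused or need assistance.

def find_long_waiters(first_seen, now, threshold_minutes=10):
    """first_seen: {track_id: minutes-since-midnight when first observed}.
    Returns the track_ids present for more than threshold_minutes."""
    return sorted(
        track_id for track_id, t in first_seen.items()
        if now - t > threshold_minutes
    )

first_seen = {"p1": 600, "p2": 612, "p3": 595}  # minutes since midnight
print(find_long_waiters(first_seen, now=614))   # p1 and p3 have waited >10 min
```

Raising the threshold to 15 minutes, as the trial also did, is just a parameter change – which is partly why one set of cameras could support so many different use cases.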

Perhaps the most interesting aspect of the trial was how the cameras could also help to reduce fare evasion and other crime. Cameras would spot if a passenger pushed through, jumped or even crawled under the barriers to avoid paying a fare (though the system later had to be adjusted to avoid flagging children as fare evaders) – and data on the people spotted would be sent back to TfL’s revenue enforcement team to help them build cases against evaders.

Cameras were also theoretically capable of spotting knives and guns – a British Transport Police officer even apparently brought firearms to the station to help ‘train’ the cameras on what guns looked like – though during the trial there were no real-life incidents in which to ‘check’ whether the AI was right or not.

How TfL solved the problem of spotting ‘aggressive’ behaviour was even more impressive. Unlike spotting defined objects like coffee cups, or identifying people to track their movements, interpreting how someone is behaving is much harder.

Instead, they found a clever workaround: the AI was trained to identify raised arms, as this is something people often do in violent situations (such as raising hands in a surrender motion).

Though this sometimes led to false positives – people occasionally raise their arms for entirely innocent reasons – it was good enough, and it meant that when danger was suspected, station staff could be alerted immediately.

And there was a clever added bonus too: because raising your arms would have the AI flag you to staff, TfL realised it could be a subtle alarm mechanism for calling for help. 

If there was a violent situation where staff were unable to radio for assistance, all they would have to do is raise their arms, and they could assume help was on the way.
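The workaround can be sketched as a simple check on pose keypoints – the skeleton-like joint positions that real pose-estimation models extract from video frames. This is a hypothetical illustration, not TfL's implementation; note that in image coordinates, y increases downwards, so ‘above’ means a smaller y value.

```python
# Hypothetical sketch of the raised-arms heuristic: flag a person when
# both wrists sit above their shoulders in image coordinates (y grows
# downwards in images, so 'above' means a smaller y value).

def arms_raised(keypoints):
    """keypoints: dict of joint name -> (x, y) in image coordinates."""
    return (keypoints["left_wrist"][1] < keypoints["left_shoulder"][1]
            and keypoints["right_wrist"][1] < keypoints["right_shoulder"][1])

surrendering = {
    "left_shoulder": (100, 200), "right_shoulder": (160, 200),
    "left_wrist": (90, 120), "right_wrist": (170, 125),   # wrists above
}
walking = {
    "left_shoulder": (100, 200), "right_shoulder": (160, 200),
    "left_wrist": (95, 310), "right_wrist": (165, 305),   # wrists below
}
print(arms_raised(surrendering))  # True  -> alert staff
print(arms_raised(walking))       # False
```

The same check serves both purposes the article describes: detecting possible violence, and giving staff a silent way to summon help.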

It’s easy to imagine why AI-powered ‘smart stations’ could be the future. An extra pair of all-seeing eyes could make stations safer, cleaner and more efficient – while improving service for passengers at the same time.

But it’s also possible to imagine why there may be some reticence. The challenge of AI-powered cameras is the same as their value: still images become something that can be mined for data. The camera technology is neutral – but what people do with the data collected is not.

For example, there’s no technological reason why these sorts of cameras couldn’t be used to identify individuals using facial recognition, or to spot certain political symbols. And given that, if you live in London, it is hard to avoid using the Tube, this poses serious civil-liberties questions about how to balance the protection of rights with convenience and efficiency.

That’s perhaps why a TfL spokesperson tells The House: “We are currently considering the design and scope of a second phase of the trial. No other decisions have been taken about expanding the use of this technology, either to further stations or adding capability.” They added: “Any wider roll-out of the technology beyond a pilot would be dependent on a full consultation with local communities and other relevant stakeholders, including experts in the field.”

But given how obviously useful the technology is, it seems inevitable that AI cameras like this will play some sort of role in the future. So, like the passengers tapping in at Blackhorse Road, or the people on the platform at Willesden Green, it’s definitely something to watch. 
