

Disney Cancels Star Wars: The Acolyte—What’s Next for the Galaxy Far, Far Away?

On 4 June 2024, the first episode of a brand-new Star Wars TV show hit Disney+. After years of fans crying out for something fresh, away from the mainline continuity of the Skywalker Saga, they finally got what they had been clamouring for: a show set 100 years before any other Star Wars media, with completely new characters, during the High Republic era—a period with little existing lore, meaning Disney could add whatever it wanted without stepping on any toes. It seemed as though fans got exactly what they had been asking for. Except the first episode of The Acolyte was not well received. In fact, the rest of the series fared no better, to the point that Disney recently announced that the show's second season had been cancelled.



The final episode of Star Wars: The Acolyte had a viewership of 335 million minutes streamed, according to Nielsen's streaming charts. This was notably low compared to other Star Wars series, making it one of the least-watched finales for a Star Wars show on Disney+. For context, this figure is just 27.5% of what The Mandalorian Season 3 finale achieved, and only 23.2% of the Season 2 finale that featured Luke Skywalker. With this steep decline in viewership, it's no surprise that Disney opted to cancel the show.


What does this mean for the future of Star Wars? Disney's latest attempt to carve out something fresh in the galaxy far, far away was their first real step away from the well-worn path of nostalgia bait. It's a move fans have been clamouring for, yet when it arrived, the show was met with widespread disdain online. But let's be honest—it's not as terrible as the internet would have you believe. It's just... okay. A middling effort, neither spectacular nor disastrous, but unmistakably padded—what could have been a tight three-hour story stretched thin over eight episodes, all in the name of keeping Disney+ subscribers engaged.


The real concern, however, lies in how Disney might interpret this outcome. Instead of concluding that they should avoid diluting small stories across bloated runtimes, they could very well decide that venturing into new territory is a mistake. The safer route, after all, is the proven one: stick to what sells. And unfortunately, that usually means more of the same—more nostalgia, more familiar faces, more recycled plots. Why? Because every time Disney has leaned into nostalgia, it's paid off handsomely.


Just look at The Force Awakens—a near copy of A New Hope that raked in billions. Or The Mandalorian, which has increasingly relied on nostalgia, even resurrecting a CGI Mark Hamill as young Luke Skywalker. The Ahsoka series? Another nostalgia-driven venture. All of these projects have been profitable, reinforcing the idea that sticking to the old formula is a surefire way to keep the cash flowing.


So, instead of pushing the boundaries of the Star Wars universe and exploring new, creative possibilities, Disney is likely to double down on what they know works. The result? A franchise that remains shackled to its past, replaying the same notes rather than composing something truly new.

Watching the Watchers: Is Live Facial Recognition Fit for Purpose?


8 May 2025

Paul Francis


In an age of rapid technological advancement, surveillance is no longer a passive act. Live Facial Recognition (LFR) technology has moved from science fiction into the heart of modern policing and commercial security systems. Able to scan faces in real time and match them to watchlists within seconds, it promises efficiency, safety, and even crime prevention. But with these promises come serious questions about legality, accuracy, ethics, and trust.



As this technology continues to spread across public streets and private retail spaces alike, we must ask: is LFR ready for widespread use, or is it running ahead of the safeguards designed to protect our rights?


What is Live Facial Recognition?

Live Facial Recognition (LFR) is a biometric surveillance tool that uses real-time video feeds to detect and identify faces. Unlike static facial recognition, which analyses images after an event has occurred, LFR operates live. Cameras scan crowds, extract facial features, and compare them to a database of preloaded images. If the system detects a potential match, it alerts a human operator to intervene or investigate.


LFR is being trialled and used by several police forces in the UK, including the Metropolitan Police and South Wales Police. Retailers, stadiums, and event organisers are also deploying the technology in an attempt to identify shoplifters or detect banned individuals before trouble starts.



How Does It Work? A Closer Look

LFR involves several distinct technical steps. At its core, it is powered by artificial intelligence and machine learning algorithms trained on vast datasets of facial images. The process typically unfolds as follows:


Face Detection

First, the system identifies a face within a video frame. This step uses computer vision models to detect facial structures such as the eyes, nose, and jawline. This is not identification yet; it is simply recognising that a face is present.


Alignment and Normalisation

Once detected, the system adjusts the face to account for differences in head tilt, lighting, or distance. This is known as normalisation. The aim is to ensure that all faces are processed in a similar format so that they can be compared reliably.


Feature Extraction

The system then uses a deep learning model, often a convolutional neural network, to extract features from the face. These are translated into a biometric template, a mathematical vector that represents the unique aspects of that person’s face.


Matching

This template is then compared against a watchlist. The system calculates a similarity score between the live face and each entry in the database. If the score passes a predefined threshold, the system flags it as a match. A human operator is usually involved at this stage to confirm or reject the result.

This entire process happens in seconds, enabling real-time surveillance across public or private spaces.
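The matching step described above boils down to a similarity comparison between the live biometric template and each watchlist entry. The sketch below illustrates this with cosine similarity on toy vectors; the function names, the 4-dimensional templates, and the 0.8 threshold are all assumptions for illustration, not any vendor's actual system (real templates have hundreds of dimensions and use learned distance metrics).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two biometric templates (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(live_template, watchlist, threshold=0.8):
    """Compare a live face template against every watchlist entry.

    Returns (best_id, best_score) if the best score clears the threshold,
    otherwise (None, best_score) - mirroring the "flag for a human
    operator to confirm or reject" step described above.
    """
    best_id, best_score = None, -1.0
    for person_id, stored_template in watchlist.items():
        score = cosine_similarity(live_template, stored_template)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= threshold:
        return best_id, best_score   # flagged: human operator reviews
    return None, best_score          # no alert raised

# Toy 4-dimensional templates; real systems use far larger vectors.
watchlist = {
    "person_A": np.array([0.9, 0.1, 0.3, 0.5]),
    "person_B": np.array([0.1, 0.8, 0.6, 0.2]),
}
live = np.array([0.88, 0.12, 0.31, 0.49])  # very close to person_A's template

flagged, score = match_against_watchlist(live, watchlist)
```

Note that the threshold is a policy choice as much as a technical one: lowering it catches more genuine matches but also flags more innocent passers-by, which is exactly the false-positive trade-off discussed later in this article.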


The Case For LFR

Proponents argue that LFR is a valuable tool for modern policing. It can identify wanted criminals, locate missing persons, and even prevent terrorist acts before they happen. In retail settings, it promises to reduce shoplifting and protect staff from repeat offenders. Unlike traditional methods, it allows for rapid identification without the need for physical interaction or delays.

The technology also allows for more efficient use of resources. Officers can be directed to individuals flagged by the system, rather than relying solely on observation or tip-offs. In theory, this reduces the burden on police and enhances public safety.

The Case Against LFR

Despite its promise, LFR is far from perfect. One of the main concerns is accuracy. Studies have shown that LFR systems are more likely to produce false positives for people with darker skin tones and for women. These errors are not trivial. A mistaken identity can result in an innocent person being stopped, searched, or even arrested.


There is also the issue of bias in training data. If an algorithm has been trained primarily on certain demographics, it will perform less effectively on others. In real-world conditions, such as low lighting or crowd movement, these problems can become even more pronounced.
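The accuracy concern can be made concrete with a little arithmetic. The sketch below uses invented similarity scores, not real benchmark data: it computes the false positive rate per demographic group at a single fixed threshold, showing how one system-wide threshold can produce very different error rates for different groups.

```python
# Illustrative only: invented similarity scores for comparisons where the
# live face is NOT the watchlisted person. A "false positive" is such a
# non-match whose score nonetheless clears the alert threshold.
THRESHOLD = 0.8

non_match_scores = {
    "group_1": [0.31, 0.42, 0.55, 0.60, 0.72],  # no scores clear the threshold
    "group_2": [0.45, 0.62, 0.78, 0.83, 0.91],  # two scores clear the threshold
}

def false_positive_rate(scores, threshold=THRESHOLD):
    """Fraction of non-matching comparisons wrongly flagged as matches."""
    return sum(s >= threshold for s in scores) / len(scores)

for group, scores in non_match_scores.items():
    print(group, false_positive_rate(scores))
```

With these invented numbers, group_1 sees no false alerts while group_2 sees a 40% false positive rate at the same threshold; the studies cited above suggest that real systems can exhibit exactly this kind of disparity when the training data under-represents some demographics.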


Beyond technical flaws, legal and ethical questions loom large. In the United Kingdom, there is currently no specific law governing the use of LFR. Its deployment relies on a complex mesh of data protection laws, human rights principles, and operational guidance. Critics argue that this legal uncertainty leaves too much room for misuse.


A 2020 Court of Appeal ruling found South Wales Police’s use of LFR to be unlawful, citing insufficient safeguards, inadequate impact assessments, and the risk of discriminatory practices. The ruling did not ban the technology outright but signalled that current uses are walking a legal tightrope.



Potential Misuse and the Chilling Effect

One of the most troubling aspects of LFR is its capacity for mass surveillance. By scanning every face in a crowd, it treats everyone as a potential suspect. This blanket approach has been described as disproportionate and invasive by privacy groups such as Big Brother Watch and Liberty.


There is also the risk of function creep. A system introduced to identify serious offenders could, over time, be expanded to monitor protests, track political activists, or even control access to public spaces based on social or behavioural metrics.


Furthermore, the use of LFR by private companies raises concerns about data ownership and accountability. Retailers may share watchlists across multiple sites or even with law enforcement, all without the consent or knowledge of the individuals being scanned. This could lead to people being unfairly banned, blacklisted, or targeted, based on secretive and unchallengeable criteria.


Is It Fit for Purpose?

At present, the evidence suggests that Live Facial Recognition technology is not ready for widespread deployment. While it offers considerable potential, its use is outpacing the development of ethical, legal, and technical safeguards. In its current state, LFR is more likely to erode public trust than to enhance security.


Without robust legislation, transparent oversight, and significant improvements in accuracy and fairness, LFR risks doing more harm than good. Surveillance should not come at the cost of civil liberties or human dignity. As with all powerful technologies, its benefits must be balanced against the risks, and right now, that balance appears off.



LFR is a powerful tool with a fragile foundation. Its strengths lie in speed and scale, but its weaknesses—bias, error, and lack of transparency—cast a long shadow. Until these flaws are addressed, caution must guide its use.


In the race to embrace smart surveillance, we must not forget the human rights and democratic values that underpin our society. Watching the watchers may be just as important as watching the streets.


Images provided by Leonardo AI
