
Elon Musk’s Controversial Salute and Trump’s Inauguration: A Polarising Start

Donald Trump’s inauguration as the 47th President of the United States was marked by sweeping executive actions and a controversial appearance by billionaire Elon Musk, whose gestures at the event have sparked widespread backlash.


A Contentious Start to Trump’s Presidency

Hours after being sworn in, President Trump announced a raft of executive orders aimed at undoing key policies of his predecessor, Joe Biden. Addressing supporters at an indoor parade event in Washington, D.C., Trump promised to reverse “80 destructive and radical executive actions” from the previous administration.


Among his first actions, Trump issued pardons to approximately 1,500 individuals charged in connection with the 6 January 2021 Capitol riot, and commuted the sentences of 14 members of far-right groups such as the Proud Boys and Oath Keepers, some of whom had been convicted of seditious conspiracy. Trump also declared illegal immigration at the US-Mexico border a national emergency, moved to end birthright citizenship for children of undocumented immigrants, and designated drug cartels as terrorist organisations.


On the international front, Trump announced the withdrawal of the US from the Paris Climate Agreement, citing concerns about the nation’s energy independence. He further ordered the repeal of a Biden-era memo barring oil drilling in the Arctic and began the process of withdrawing the US from the World Health Organisation, criticising the agency’s financial demands on the US compared to China.


AI image of Elon Musk and Donald Trump shaking hands.
Image generated by Leonardo AI

Elon Musk’s Controversial Salutes

The inauguration also drew headlines due to the actions of Elon Musk, the billionaire owner of Tesla, SpaceX, and the social media platform X. Musk, a prominent Trump supporter and donor, appeared onstage before Trump’s address and delivered remarks praising the audience for their contributions to the administration’s victory.


During his speech, Musk made a gesture that has been widely criticised. He placed his right hand over his chest before extending it outward in a motion many likened to a Nazi salute. “My heart goes out to you,” Musk told the crowd. “It is thanks to you that the future of civilisation is assured.” He repeated the gesture moments later, prompting a storm of reactions on social media.



Historians and advocacy groups were quick to condemn Musk’s actions. Ruth Ben-Ghiat, a historian of fascism, described the motion as a “Nazi salute” and “a very belligerent one too.” The Anti-Defamation League (ADL) issued a statement calling the gesture “awkward” and advising restraint, though critics, including Congresswoman Alexandria Ocasio-Cortez, accused the organisation of minimising the incident.


Musk responded on X, dismissing the controversy. “Frankly, they need better dirty tricks. The ‘everyone is Hitler’ attack is sooo tired,” he posted, adding a yawning emoji. He also reposted memes mocking the backlash, further fuelling the debate.


A Polarised Reaction

Supporters of Musk and Trump dismissed the outrage as overblown. “Can we please retire the calling people a Nazi thing?” one user wrote on X. Far-right groups, however, appeared to embrace Musk’s actions. Neo-Nazi leader Christopher Pohlhaus celebrated the gestures, stating, “I don’t care if this was a mistake. I’m going to enjoy the tears over it.”


Musk’s appearance added to the already divisive atmosphere surrounding Trump’s return to power. For many, it symbolised a normalisation of far-right rhetoric at the highest levels of influence, while others viewed it as a distraction from Trump’s ambitious policy agenda.



Trump’s inauguration has set the stage for a presidency marked by aggressive policy reversals and deeply polarising optics. Musk’s controversial gestures underscore the fraught political landscape, where symbolism and ideology often overshadow substantive debate. As the administration moves forward, the tension between unity and division will remain a central theme in American politics.

Watching the Watchers: Is Live Facial Recognition Fit for Purpose?


8 May 2025

Paul Francis


In an age of rapid technological advancement, surveillance is no longer a passive act. Live Facial Recognition (LFR) technology has moved from science fiction into the heart of modern policing and commercial security systems. Able to scan faces in real time and match them to watchlists within seconds, it promises efficiency, safety, and even crime prevention. But with these promises come serious questions about legality, accuracy, ethics, and trust.


Futuristic officer with glowing green eyes and circuit-patterned uniform in a neon-lit corridor, exuding a cool, technological vibe.

As this technology continues to spread across public streets and private retail spaces alike, we must ask: is LFR ready for widespread use, or is it running ahead of the safeguards designed to protect our rights?


What is Live Facial Recognition?

Live Facial Recognition (LFR) is a biometric surveillance tool that uses real-time video feeds to detect and identify faces. Unlike static facial recognition, which analyses images after an event has occurred, LFR operates live. Cameras scan crowds, extract facial features, and compare them to a database of preloaded images. If the system detects a potential match, it alerts a human operator to intervene or investigate.


LFR is being trialled and used by several police forces in the UK, including the Metropolitan Police and South Wales Police. Retailers, stadiums, and event organisers are also deploying the technology in an attempt to identify shoplifters or detect banned individuals before trouble starts.


A woman's face on a monitor with blue facial recognition lines, surrounded by software interface text, creates a tech-focused atmosphere.

How Does It Work? A Closer Look

LFR involves several distinct technical steps. At its core, it is powered by artificial intelligence and machine learning algorithms trained on vast datasets of facial images. The process typically unfolds as follows:


Face Detection

First, the system identifies a face within a video frame. This step uses computer vision models to detect facial structures such as the eyes, nose, and jawline. This is not identification yet; it is simply recognising that a face is present.
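
As a rough illustration, the sketch below uses OpenCV's bundled Haar cascade detector to locate face regions in a single frame. The choice of this classical detector is an assumption made for brevity; modern LFR systems typically rely on deep-learning detectors, but the output is the same kind of thing: bounding boxes marking where faces appear.

```python
import cv2

# Pre-trained Haar cascade face detector shipped with OpenCV.
# Production LFR systems generally use deep-learning detectors instead,
# but the principle -- locating face regions in each frame -- is the same.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame):
    """Return bounding boxes (x, y, w, h) for faces found in a video frame."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
```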


Alignment and Normalisation

Once detected, the system adjusts the face to account for differences in head tilt, lighting, or distance. This is known as normalisation. The aim is to ensure that all faces are processed in a similar format so that they can be compared reliably.
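
A minimal sketch of this step, assuming the face has already been cropped out of the frame and the eye positions have been found by a separate landmark detector (not shown), might rotate the face so the eyes sit level, then rescale it to a fixed size with standardised pixel values:

```python
import cv2
import numpy as np

def normalise_face(face_patch, left_eye, right_eye, size=112):
    """Align and normalise a cropped face patch.

    `face_patch` is the region returned by the detection step; the (x, y)
    eye coordinates are assumed to come from a separate landmark detector.
    The face is rotated so the eyes are level, then rescaled to a fixed
    size with standardised pixel values, so every face reaches the next
    stage in the same format."""
    # Angle of the line between the eyes, relative to horizontal.
    dy = float(right_eye[1] - left_eye[1])
    dx = float(right_eye[0] - left_eye[0])
    angle = np.degrees(np.arctan2(dy, dx))

    centre = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    rotation = cv2.getRotationMatrix2D(centre, angle, 1.0)
    h, w = face_patch.shape[:2]
    aligned = cv2.warpAffine(face_patch, rotation, (w, h))

    face = cv2.resize(aligned, (size, size)).astype(np.float32) / 255.0
    return (face - face.mean()) / (face.std() + 1e-6)  # zero mean, unit variance
```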


Feature Extraction

The system then uses a deep learning model, often a convolutional neural network, to extract features from the face. These are translated into a biometric template, a mathematical vector that represents the unique aspects of that person’s face.
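
The snippet below hints at this stage using the open-source face_recognition library, which wraps dlib's embedding model and returns a 128-dimensional vector per face. That particular library is an illustrative assumption; commercial LFR systems use their own proprietary models, but the idea of reducing a face to a fixed-length template is the same.

```python
import numpy as np
import face_recognition  # open-source wrapper around dlib's face-embedding model

def extract_template(rgb_face):
    """Turn a face image (an RGB uint8 numpy array) into a biometric template.

    Returns a 128-dimensional vector describing the face, or None if no
    face could be found in the image."""
    encodings = face_recognition.face_encodings(rgb_face)
    if not encodings:
        return None
    return np.asarray(encodings[0])
```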


Matching

This template is then compared against a watchlist. The system calculates a similarity score between the live face and each entry in the database. If the score passes a predefined threshold, the system flags it as a match. A human operator is usually involved at this stage to confirm or reject the result.
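
A hedged sketch of this step might compare the live template to each watchlist entry by cosine similarity and flag anything above a tuned threshold for human review. The threshold value and the watchlist structure here are illustrative assumptions, not a description of any particular deployed system.

```python
import numpy as np

def match_against_watchlist(live_template, watchlist, threshold=0.6):
    """Compare a live biometric template against every watchlist entry.

    `watchlist` maps an identity label to its stored template. Returns the
    best match above the threshold, or None. The threshold is illustrative;
    in practice operators tune it to trade false positives against misses."""
    best_name, best_score = None, 0.0
    for name, stored in watchlist.items():
        # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
        score = np.dot(live_template, stored) / (
            np.linalg.norm(live_template) * np.linalg.norm(stored)
        )
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score   # flagged for a human operator to review
    return None
```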

This entire process happens in seconds, enabling real-time surveillance across public or private spaces.


The Case For LFR

Proponents argue that LFR is a valuable tool for modern policing. It can identify wanted criminals, locate missing persons, and even prevent terrorist acts before they happen. In retail settings, it promises to reduce shoplifting and protect staff from repeat offenders. Unlike traditional methods, it allows for rapid identification without the need for physical interaction or delays.

The technology also allows for more efficient use of resources. Officers can be directed to individuals flagged by the system, rather than relying solely on observation or tip-offs. In theory, this reduces the burden on police and enhances public safety.

The Case Against LFR

Despite its promise, LFR is far from perfect. One of the main concerns is accuracy. Studies have shown that LFR systems are more likely to produce false positives for people with darker skin tones and for women. These errors are not trivial. A mistaken identity can result in an innocent person being stopped, searched, or even arrested.


There is also the issue of bias in training data. If an algorithm has been trained primarily on certain demographics, it will perform less effectively on others. In real-world conditions, such as low lighting or crowd movement, these problems can become even more pronounced.


Beyond technical flaws, legal and ethical questions loom large. In the United Kingdom, there is currently no specific law governing the use of LFR. Its deployment relies on a complex mesh of data protection laws, human rights principles, and operational guidance. Critics argue that this legal uncertainty leaves too much room for misuse.


A 2020 Court of Appeal ruling found South Wales Police’s use of LFR to be unlawful, citing insufficient safeguards, inadequate impact assessments, and the risk of discriminatory practices. The ruling did not ban the technology outright but signalled that current uses are walking a legal tightrope.


Profile of a woman with glowing blue cybernetic lines on her face, set against a blurred background. Futuristic and serene mood.

Potential Misuse and the Chilling Effect

One of the most troubling aspects of LFR is its capacity for mass surveillance. By scanning every face in a crowd, it treats everyone as a potential suspect. This blanket approach has been described as disproportionate and invasive by privacy groups such as Big Brother Watch and Liberty.


There is also the risk of function creep. A system introduced to identify serious offenders could, over time, be expanded to monitor protests, track political activists, or even control access to public spaces based on social or behavioural metrics.


Furthermore, the use of LFR by private companies raises concerns about data ownership and accountability. Retailers may share watchlists across multiple sites or even with law enforcement, all without the consent or knowledge of the individuals being scanned. This could lead to people being unfairly banned, blacklisted, or targeted, based on secretive and unchallengeable criteria.


Is It Fit for Purpose?

At present, the evidence suggests that Live Facial Recognition technology is not ready for widespread deployment. While it offers considerable potential, its use is outpacing the development of ethical, legal, and technical safeguards. In its current state, LFR is more likely to erode public trust than to enhance security.


Without robust legislation, transparent oversight, and significant improvements in accuracy and fairness, LFR risks doing more harm than good. Surveillance should not come at the cost of civil liberties or human dignity. As with all powerful technologies, its benefits must be balanced against the risks, and right now, that balance appears off.



LFR is a powerful tool with a fragile foundation. Its strengths lie in speed and scale, but its weaknesses—bias, error, and lack of transparency—cast a long shadow. Until these flaws are addressed, caution must guide its use.


In the race to embrace smart surveillance, we must not forget the human rights and democratic values that underpin our society. Watching the watchers may be just as important as watching the streets.


Images provided by Leonardo AI
