When AI Measures “Friendliness”: Who Decides What Good Service Sounds Like?

5 March 2026

Paul Francis


Artificial intelligence is moving steadily from assisting workers to assessing them.



Burger King has begun piloting an AI system in parts of the United States. The system, reportedly known as “Patty,” listens to staff interactions through headsets and analyses speech patterns to help managers track operational performance and, more controversially, to measure staff “friendliness.” It does this by detecting politeness cues, such as whether employees say “welcome,” “please,” or “thank you.”
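
How the system works has not been published, but the description above, detecting whether certain words occur in speech, maps onto simple keyword spotting over a speech-to-text transcript. The sketch below is purely illustrative: the cue list, the scoring and the threshold are invented assumptions, not details of “Patty.”

    # Illustrative sketch only: a naive keyword-based "friendliness" score.
    # The cue list, scoring and threshold are invented assumptions, not
    # details of any real system.
    POLITENESS_CUES = {"welcome", "please", "thank"}   # assumed cue list
    FRIENDLY_THRESHOLD = 0.5                           # assumed threshold

    def friendliness_score(transcript: str) -> float:
        """Share of transcript sentences that contain a politeness cue."""
        sentences = [s for s in transcript.lower().split(".") if s.strip()]
        if not sentences:
            return 0.0
        hits = sum(any(cue in s for cue in POLITENESS_CUES) for s in sentences)
        return hits / len(sentences)

    transcript = "Welcome in. What can I get you. Thank you, have a good one."
    score = friendliness_score(transcript)
    print(f"score={score:.2f} friendly={score >= FRIENDLY_THRESHOLD}")

Everything the rest of this piece questions is already visible in those few lines: someone chose the cues, someone chose the threshold, and the whole thing trusts the transcript.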


From a corporate perspective, the logic is clear. Fast food is built on consistency. Brand standards matter. Customer experience scores influence revenue. If AI can help managers see patterns across shifts and locations, it promises efficiency, insight and improved service quality. On paper, it sounds like innovation.


In practice, it raises deeper questions about surveillance, culture, authenticity and who gets to define what “friendly” actually means. Friendliness is not a checkbox. It is human.


The Promise Versus the Reality

The official line from companies testing this technology is that it is a coaching tool rather than a disciplinary one. It is presented as support for staff, helping identify trends rather than scoring individuals. It is framed as data-driven improvement rather than digital oversight, but the moment speech is analysed, quantified and turned into a metric, something changes.


Service work has always required emotional intelligence. It has also required emotional labour. Employees adjust tone, language and pace depending on the situation in front of them. A lunchtime rush feels different from a quiet mid-afternoon shift. A tired commuter is different from a group of teenagers. A frustrated parent is different from a regular who comes in every day.


Anyone who has worked in face-to-face customer service understands this instinctively. Your tone changes, your rhythm changes, your humour changes, and that is precisely where the friction with AI begins.


Culture Cannot Be Reduced to Keywords

One of the most immediate concerns is accent and cultural bias. Speech recognition systems are not neutral; they are trained on datasets. Those datasets may not equally represent every regional accent, dialect or speech pattern.



In a noisy fast food environment, with headsets, background clatter and rapid speech, even minor variations can affect recognition accuracy. If an AI system relies heavily on detecting specific words, then any difficulty interpreting accents could skew the data. That is not a theoretical concern. Studies have shown that automated speech systems often perform better on standardised forms of English and less well on regional or non-native accents. If politeness metrics depend on exact phrasing, workers with stronger regional accents or different speech rhythms could appear less compliant in the data, even when their service is perfectly warm and appropriate.
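
The arithmetic of that skew is easy to demonstrate. In the toy simulation below, two workers say exactly the same number of politeness cues; the only difference is an assumed, invented-for-illustration recognition accuracy for each accent:

    # Toy simulation: how recognition accuracy alone skews a keyword metric.
    # The per-accent accuracy figures are invented for illustration.
    import random

    random.seed(0)
    CUES_SPOKEN = 100  # both workers are equally polite: 100 cues each

    assumed_accuracy = {
        "standardised accent": 0.95,
        "regional accent": 0.75,
    }

    for accent, acc in assumed_accuracy.items():
        logged = sum(random.random() < acc for _ in range(CUES_SPOKEN))
        print(f"{accent}: spoke {CUES_SPOKEN} cues, system logged {logged}")

Identical behaviour, different dashboards. The gap is produced entirely by the transcription step, yet on a manager’s screen it would read as a difference in friendliness.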


Beyond pronunciation, there is the question of cultural expression. In some regions, friendliness is relaxed and informal. In others, it is brisk and efficient. In some communities, humour and banter are part of service culture. In others, restraint and professionalism are valued. AI systems do not instinctively understand these nuances. They detect patterns.

But hospitality is not a pattern. It is a relationship.


Who Sets the Definition of Friendly?

This leads to a more fundamental question. Who decides what counts as friendly?

These systems do not calibrate themselves. Someone defines the threshold. Someone selects the keywords. Someone decides how often “thank you” should be said and in what context. Those decisions are typically made at the corporate level, often by operations teams and technology partners working from brand guidelines and idealised customer journeys.
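
In practice, those decisions tend to live in something as mundane as a configuration file. The hypothetical example below, in which every key and value is an invented assumption, makes the point that each line is a human judgement made somewhere, by someone:

    # Hypothetical configuration for a politeness-scoring system.
    # Every value here is a choice made far from the counter.
    FRIENDLINESS_CONFIG = {
        "required_cues": ["welcome", "please", "thank you"],  # who chose these?
        "min_cues_per_order": 2,        # why two, and not one or three?
        "greeting_window_seconds": 5,   # greet within 5 seconds of the beep
        "flag_below_percentile": 25,    # bottom quartile gets "coaching"
    }

Once numbers like these ship inside software, changing them requires a release cycle rather than a conversation.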


There is nothing inherently wrong with brand standards, but there is often a distance between corporate design and frontline reality.



Many workplace policies are written by people who have not worked a drive-thru shift in years, if ever. They may be excellent strategists. They may understand customer data deeply. But that does not always translate into lived experience on a busy Saturday afternoon when the fryer breaks and the queue is out the door.


In those moments, efficiency may matter more than repetition of scripted politeness.

If an algorithm expects a perfectly phrased greeting under all conditions, it risks becoming disconnected from the environment it is meant to improve.


Once those expectations are embedded in software, they become harder to question. The algorithm becomes policy.


The Authenticity Problem

Having worked in face-to-face customer service myself, I know that the best interactions were rarely scripted. Regular customers would come in, and you would adjust instantly. You might joke with them. You might take the piss in a friendly way. You might shorten the greeting entirely because familiarity made it unnecessary. That rapport is built over time and on trust. Would an AI system recognise that as excellent service? Or would it mark down the interaction because the expected keywords were missing?


Hospitality is dynamic. It depends on reading the room, reading the person, and reading the moment. If workers begin focusing on hitting verbal benchmarks rather than engaging naturally, the interaction risks becoming mechanical. Customers can tell the difference between genuine warmth and box-ticking politeness. Ironically, quantifying friendliness may reduce the very authenticity companies are trying to protect.


Surveillance or Support?

This is where the tone of the debate shifts. Because even if the system is introduced as a supportive tool, the psychological reality of being monitored is not neutral.

Anyone who has worked in customer-facing roles knows that service environments are already performance spaces. You are representing the brand; you are expected to maintain composure and remain polite, even when customers are not. That emotional regulation is part of the job. Now imagine adding a layer where your tone and phrasing are being analysed in real time by software.



Even if managers insist it is not punitive, the awareness that your speech is being measured changes behaviour. You begin to think not just about the customer in front of you, but about whether the system has “heard” the right words. In high-pressure environments, that is another cognitive load. Another thing to get right. Over time, that kind of monitoring can subtly alter workplace culture. It can shift service from something relational to something performative in a more rigid way. Employees may begin speaking not to connect, but to comply, and when compliance becomes the goal, service risks losing its texture.


Supportive technology tends to feel like something that works with you. Surveillance, even when softly framed, feels like something that watches you. The distinction matters, particularly in lower-wage sectors where workers have limited influence over policy decisions.


The Broader Direction of Travel

What makes this story significant is that it does not exist in isolation. It is part of a wider pattern in which AI is moving steadily from automating tasks to evaluating behaviour.

First, algorithms helped optimise stock levels and predict demand. Then they began assisting with scheduling and logistics. Now they are increasingly assessing how people speak, how they respond and how closely they align with brand standards. Each step may seem incremental. Taken together, they represent a fundamental shift in how work is structured and supervised.


Historically, managers evaluated service quality through observation, feedback and experience. There was room for interpretation, for context, for understanding that a difficult shift or a complex interaction could influence tone. Human judgment allowed for nuance.

When evaluation becomes data-driven, nuance can be harder to capture. Metrics tend to favour what is measurable. Words are measurable. Frequency is measurable. Context is far less so. The risk is not that AI becomes tyrannical overnight. The risk is that over time, it narrows the definition of good service to what can be quantified. And what can be quantified is rarely the full story.
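
That blindness can be made concrete. In the hypothetical comparison below, a frequency metric gives two very different interactions the same score, because the words match even though the warmth does not:

    # Two invented transcripts with identical politeness-keyword counts.
    # A frequency metric cannot tell them apart; a customer could.
    warm = "Thanks for waiting, please grab a seat and I'll bring it over."
    curt = "Thanks. Please move along."

    def cue_count(text: str) -> int:
        return sum(text.lower().count(cue) for cue in ("thanks", "please"))

    print(cue_count(warm), cue_count(curt))  # prints: 2 2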


A Question Worth Asking

Technology reflects priorities. If a company invests in systems that measure friendliness, it is signalling that friendliness can be standardised, monitored and optimised like any other operational metric. But service is not an assembly line. It is an interaction.


It is shaped by region, by culture, by individual personality and by the particular chemistry between staff and customer in that moment. It shifts depending on who walks through the door. It changes across communities and demographics. It even evolves over the course of a day. When AI systems define behavioural benchmarks, someone has decided what the ideal interaction sounds like. That definition may come from brand research, from head office strategy sessions or from consultants analysing survey data. It may be carefully considered. It may be well-intentioned, but it is still a definition created at a distance from the frontline.


Many workplace standards across industries are designed by people who have not stood behind a till in years. That does not invalidate their expertise, but it does introduce a gap between theory and practice. When those standards are encoded into algorithms, that gap can become structural. The core issue is not whether AI can improve service. It is whether those deploying it are prepared to listen as carefully to staff experience as the system listens to staff voices. If friendliness becomes a metric, then it is fair to ask who sets the parameters, how flexible they are, and whether they reflect the messy, human reality of service work.


Because once the headset becomes the evaluator, the definition of “good” may no longer be negotiated on the shop floor, and that is a shift worth paying attention to.

UK Government Pressures Apple for Encrypted Data Access – Security Measure or Privacy Risk?

11 February 2025

Paul Francis

The UK government has taken a bold step in its ongoing efforts to strengthen national security, issuing a formal request to Apple demanding access to encrypted iCloud data. The demand, made under the Investigatory Powers Act 2016 (IPA)—often referred to as the "Snooper’s Charter"—could force Apple to create a backdoor in its encryption system, granting law enforcement access to user data that is currently inaccessible, even to Apple itself.



The UK argues that encryption prevents law enforcement from investigating serious crimes, including terrorism, child exploitation, and organized crime. Apple, however, has refused to comply, warning that such a move would undermine the privacy and security of users not just in the UK but globally.


The dispute has reignited the long-running debate over privacy versus security, raising serious concerns about the future of digital rights, government surveillance, and the potential consequences of setting a precedent that other countries may follow.


Why the UK Government Wants Access to Encrypted Data

The UK government insists that its demand is a matter of public safety and crime prevention. With technology evolving, criminals and terrorists have increasingly turned to encrypted services to communicate and store illicit material, making it difficult—if not impossible—for law enforcement to access vital evidence.


Government officials argue that:

  • Encrypted backups prevent police from gathering evidence – Many investigations, particularly those related to terrorism or child abuse, rely on digital evidence stored in cloud backups. Without access, law enforcement is effectively blind to potential criminal activity.

  • A controlled backdoor would not compromise regular users – The government claims that a well-regulated backdoor could provide law enforcement with access only in cases where it is legally justified, such as under a court order.

  • Other forms of surveillance are already permitted – The UK already has extensive data collection laws, including those that allow authorities to request communications metadata and access to unencrypted services. Extending this to encrypted iCloud backups is seen as a logical next step.


From this perspective, encryption is not just a tool for privacy—it can also shield criminals from justice, making it harder for authorities to investigate and prevent serious crimes.


Apple’s Resistance: The Security and Privacy Risks

Apple has made it clear that it will not comply with the UK’s request, arguing that creating a backdoor for government access would put all users at risk. The company’s Advanced Data Protection (ADP) feature provides end-to-end encryption for iCloud backups, meaning that even Apple cannot access a user’s data once encryption is enabled.

Apple—and many cybersecurity experts—warn that:


  • A backdoor for law enforcement is a backdoor for everyone – Any vulnerability introduced for one government could be exploited by hackers, cybercriminals, and foreign intelligence agencies (a point made concrete in the sketch below).

  • The UK is not the only country that would make this demand – If Apple complies, other governments—including those with weaker human rights protections—may demand the same access, potentially leading to mass surveillance.

  • It would weaken cybersecurity globally – Encryption protects not just individuals but also businesses, financial transactions, and even national security infrastructure. Weakening it could increase cybercrime, identity theft, and data breaches.

  • There is no guarantee of ‘controlled’ access – While the UK claims any backdoor would be used responsibly, history shows that government surveillance powers often expand beyond their original scope.


Apple’s stance reflects a broader industry position: once an encryption backdoor exists, it is impossible to ensure it remains in the right hands.
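
The structural problem with “controlled” access can be shown in a few lines. The sketch below uses Python’s cryptography package, with an invented escrow key standing in for a government backdoor; it is a simplified illustration, not a description of how iCloud or Advanced Data Protection actually works:

    # Sketch: an escrow "backdoor" key is a second master key, not a
    # narrowly scoped tool. Illustrative only; not how iCloud/ADP works.
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    user_key = Fernet.generate_key()    # held only on the user's devices
    escrow_key = Fernet.generate_key()  # the hypothetical backdoor key

    backup = b"private backup contents"

    # End-to-end model: one ciphertext, one key, one party who can decrypt.
    ciphertext = Fernet(user_key).encrypt(backup)
    assert Fernet(user_key).decrypt(ciphertext) == backup

    # Backdoor model: the same data is also encrypted under the escrow key,
    # creating a second decryption path that bypasses the user entirely.
    escrow_copy = Fernet(escrow_key).encrypt(backup)

    # Whoever obtains escrow_key (a court, a hacker, a hostile state)
    # reads everything, for every user, without ever touching user_key:
    print(Fernet(escrow_key).decrypt(escrow_copy))

Every user’s security now rests on the secrecy of a single key that, by design, must be handed to third parties.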


The Precedent: What Happens If Apple Complies?

The implications of this case go far beyond Apple. If the UK succeeds in forcing the company to weaken encryption, it could set a precedent for other technology firms, including:

  • Google (Android devices and Google Drive backups)

  • Microsoft (OneDrive and Windows security systems)

  • Meta (WhatsApp, Messenger, and Facebook backups)

  • Encrypted messaging services like Signal and Telegram


This could trigger a global wave of government demands for similar access, making it increasingly difficult for any company to maintain strong encryption protections for its users.


There’s also the risk that the UK’s demand won’t stay limited to cloud storage. If Apple is forced to weaken iCloud encryption, what’s stopping governments from demanding the same for iMessage, FaceTime, and local device encryption?


Could Apple Withdraw Security Features from the UK?

Apple has taken drastic action before in response to government pressures. In 2023, it threatened to pull iMessage and FaceTime from the UK market rather than comply with potential encryption-busting requirements. While those laws were later amended, the current dispute over iCloud encryption raises the question: Could Apple withdraw its security features from the UK entirely?


Some experts believe Apple may choose to disable end-to-end encryption for iCloud backups in the UK, ensuring compliance without weakening security globally. However, this would leave UK users at greater risk of cyberattacks, making them easier targets for hackers and surveillance programs.


Others suggest Apple could fight the order in court, delaying compliance for years while legal battles unfold. Given that the UK’s stance on encryption is stricter than that of many other Western nations, a legal challenge could pressure lawmakers to reconsider their approach.


A Dangerous Precedent in the Making

At its core, this debate is about where to draw the line between privacy and security. The UK government argues that its demand is necessary to protect citizens from crime, while Apple maintains that it would compromise global security by setting a dangerous precedent.


If the UK is successful, the world could see a dramatic shift in encryption policies, with other countries following suit. While government officials insist their intentions are to protect the public, critics warn that weakening encryption is a slippery slope, leading to widespread surveillance and reduced digital security for all.


As the standoff continues, the outcome will shape not just Apple’s encryption policies, but also the future of digital privacy, cybersecurity, and the balance of power between governments and technology companies worldwide.
