When AI Measures “Friendliness”: Who Decides What Good Service Sounds Like?

5 March 2026

Paul Francis


Artificial intelligence is moving steadily from assisting workers to assessing them.


[Image: Cashier with robotic eyes, wearing a headset in a fast-food setting, with neon screens in the background]

[Image: Burger King meal with wrapped burger, fries and branded drink cup on a table]

Burger King has begun piloting an AI system in parts of the United States that listens to staff interactions through headsets and analyses speech patterns. The system, reportedly known as “Patty,” is designed to help managers track operational performance and, more controversially, measure staff “friendliness.” It does this by detecting politeness cues such as whether employees say “welcome,” “please,” or “thank you.”
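Public reporting describes the system only in broad terms, so the snippet below is a purely illustrative sketch of what keyword-based “friendliness” scoring could look like. The cue list, scoring rule and function name are invented; nothing here reflects how Patty actually works.

```python
# Hypothetical sketch only: count politeness cues in a transcript.
# The cue list and scoring rule are invented for illustration and do not
# reflect the design of the system reported to be in use.

POLITENESS_CUES = ("welcome", "please", "thank you")

def friendliness_score(transcript: str) -> float:
    """Return the fraction of expected cues that appear in the transcript."""
    text = transcript.lower()
    hits = sum(cue in text for cue in POLITENESS_CUES)
    return hits / len(POLITENESS_CUES)

print(friendliness_score("Welcome! Please pull forward. Thank you!"))          # 1.0
print(friendliness_score("Hiya, what can I get you? Cheers, see you later!"))  # 0.0
```

Even at this toy scale the problem is visible: the second greeting, which most customers would read as perfectly friendly, scores zero because it never uses the expected words.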


From a corporate perspective, the logic is clear. Fast food is built on consistency. Brand standards matter. Customer experience scores influence revenue. If AI can help managers see patterns across shifts and locations, it promises efficiency, insight and improved service quality. On paper, it sounds like innovation.


In practice, it raises deeper questions about surveillance, culture, authenticity and who gets to define what “friendly” actually means. Because friendliness is not a checkbox. It is human.


The Promise Versus the Reality

The official line from companies testing this technology is that it is a coaching tool rather than a disciplinary one. It is presented as support for staff, helping identify trends rather than scoring individuals. It is framed as data-driven improvement rather than digital oversight, but the moment speech is analysed, quantified and turned into a metric, something changes.


Service work has always required emotional intelligence. It has also required emotional labour. Employees adjust tone, language and pace depending on the situation in front of them. A lunchtime rush feels different from a quiet mid-afternoon shift. A tired commuter is different from a group of teenagers. A frustrated parent is different from a regular parent who comes in every day.


Anyone who has worked in face-to-face customer service understands this instinctively. Your tone changes, your rhythm changes, your humour changes, and that is precisely where the friction with AI begins.


Culture Cannot Be Reduced to Keywords

One of the most immediate concerns is accent and cultural bias. Speech recognition systems are not neutral; they are trained on datasets. Those datasets may not equally represent every regional accent, dialect or speech pattern.


[Image: Hungry Jack's sign above a red canopy on a city street corner]

In a noisy fast food environment, with headsets, background clatter and rapid speech, even minor variations can affect recognition accuracy. If an AI system relies heavily on detecting specific words, then any difficulty interpreting accents could skew the data. That is not a theoretical concern. Studies have shown that automated speech systems often perform better on standardised forms of English and less well on regional or non-native accents. If politeness metrics depend on exact phrasing, workers with stronger regional accents or different speech rhythms could appear less compliant in the data, even when their service is perfectly warm and appropriate.
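To make that concrete, here is a hypothetical continuation of the earlier sketch: the same spoken greeting transcribed twice, once cleanly and once with the kind of mishearing that recognition systems are more prone to on under-represented accents. Both transcripts and the mishearings are invented for illustration.

```python
# Hypothetical illustration: the same spoken greeting, transcribed two ways.
# If the recogniser mishears cue words (more likely for accents that are
# under-represented in its training data), an exact-phrase metric records
# the worker as less polite even though the spoken words were identical.

POLITENESS_CUES = ("welcome", "please", "thank you")

def cue_hits(transcript: str) -> int:
    text = transcript.lower()
    return sum(cue in text for cue in POLITENESS_CUES)

clean_transcript = "welcome in, please pull up to the window, thank you"
noisy_transcript = "welcome in, peas pull up to the window, tank you"  # two cues misheard

print(cue_hits(clean_transcript))  # 3
print(cue_hits(noisy_transcript))  # 1
```

The worker said exactly the same thing in both cases; only the transcription changed, and with it the score.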


Beyond pronunciation, there is the question of cultural expression. In some regions, friendliness is relaxed and informal. In others, it is brisk and efficient. In some communities, humour and banter are part of service culture. In others, restraint and professionalism are valued. AI systems do not instinctively understand these nuances. They detect patterns.

But hospitality is not a pattern. It is a relationship.


Who Sets the Definition of Friendly?

This leads to a more fundamental question. Who decides what counts as friendly?

These systems do not calibrate themselves. Someone defines the threshold. Someone selects the keywords. Someone decides how often “thank you” should be said and in what context. Those decisions are typically made at the corporate level, often by operations teams and technology partners working from brand guidelines and idealised customer journeys.
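It is worth spelling out what that calibration might look like once it is written down. The configuration below is entirely invented; it simply shows how questions like how often “thank you” should be said, and how early a greeting must land, end up as fixed parameters.

```python
# Entirely hypothetical configuration, invented to illustrate how politeness
# expectations become hard-coded parameters rather than frontline judgement.
FRIENDLINESS_POLICY = {
    "required_phrases": ["welcome", "please", "thank you"],
    "min_cues_per_order": 2,         # how many cues make an order count as "friendly"
    "greeting_window_seconds": 5,    # the greeting must arrive within this window
    "flag_shift_below_score": 0.6,   # shifts averaging below this get flagged for review
}
```

Every value in that dictionary is a judgement call, not a fact about good service.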


There is nothing inherently wrong with brand standards, but there is often a distance between corporate design and frontline reality.


[Image: Business meeting at a wooden table, with laptops, coffee cups and a marketing plan]

Many workplace policies are written by people who have not worked a drive-thru shift in years, if ever. They may be excellent strategists. They may understand customer data deeply. But that does not always translate into lived experience on a busy Saturday afternoon when the fryer breaks and the queue is out the door.


In those moments, efficiency may matter more than repetition of scripted politeness.

If an algorithm expects a perfectly phrased greeting under all conditions, it risks becoming disconnected from the environment it is meant to improve.


Once those expectations are embedded in software, they become harder to question. The algorithm becomes policy.


The Authenticity Problem

Having worked in face-to-face customer service myself, I know that the best interactions were rarely scripted. Regular customers would come in, and you would adjust instantly. You might joke with them. You might take the piss in a friendly way. You might shorten the greeting entirely because familiarity made it unnecessary. That rapport is built over time and on trust. Would an AI system recognise that as excellent service? Or would it mark down the interaction because the expected keywords were missing?


Hospitality is dynamic. It depends on reading the room, reading the person, and reading the moment. If workers begin focusing on hitting verbal benchmarks rather than engaging naturally, the interaction risks becoming mechanical. Customers can tell the difference between genuine warmth and box-ticking politeness. Ironically, quantifying friendliness may reduce the very authenticity companies are trying to protect.


Surveillance or Support?

This is where the tone of the debate shifts. Because even if the system is introduced as a supportive tool, the psychological reality of being monitored is not neutral.

Anyone who has worked in customer-facing roles knows that service environments are already performance spaces. You are representing the brand; you are expected to maintain composure and remain polite, even when customers are not. That emotional regulation is part of the job. Now imagine adding a layer where your tone and phrasing are being analysed in real time by software.


[Image: Hand holding a cassette recorder, with blurred figures in business attire in the background]

Even if managers insist it is not punitive, the awareness that your speech is being measured changes behaviour. You begin to think not just about the customer in front of you, but about whether the system has “heard” the right words. In high-pressure environments, that is another cognitive load. Another thing to get right. Over time, that kind of monitoring can subtly alter workplace culture. It can shift service from something relational to something performative in a more rigid way. Employees may begin speaking not to connect, but to comply, and when compliance becomes the goal, service risks losing its texture.


Supportive technology tends to feel like something that works with you. Surveillance, even when softly framed, feels like something that watches you. The distinction matters, particularly in lower-wage sectors where workers have limited influence over policy decisions.


The Broader Direction of Travel

What makes this story significant is that it does not exist in isolation. It is part of a wider pattern in which AI is moving steadily from automating tasks to evaluating behaviour.

First, algorithms helped optimise stock levels and predict demand. Then they began assisting with scheduling and logistics. Now they are increasingly assessing how people speak, how they respond and how closely they align with brand standards. Each step may seem incremental. Taken together, they represent a fundamental shift in how work is structured and supervised.


Historically, managers evaluated service quality through observation, feedback and experience. There was room for interpretation, for context, for understanding that a difficult shift or a complex interaction could influence tone. Human judgment allowed for nuance.

When evaluation becomes data-driven, nuance can be harder to capture. Metrics tend to favour what is measurable. Words are measurable. Frequency is measurable. Context is far less so. The risk is not that AI becomes tyrannical overnight. The risk is that over time, it narrows the definition of good service to what can be quantified. And what can be quantified is rarely the full story.


A Question Worth Asking

Technology reflects priorities. If a company invests in systems that measure friendliness, it is signalling that friendliness can be standardised, monitored and optimised like any other operational metric. But service is not assembly. It is interaction.


It is shaped by region, by culture, by individual personality and by the particular chemistry between staff and customer in that moment. It shifts depending on who walks through the door. It changes across communities and demographics. It even evolves over the course of a day. When AI systems define behavioural benchmarks, someone has decided what the ideal interaction sounds like. That definition may come from brand research, from head office strategy sessions or from consultants analysing survey data. It may be carefully considered. It may be well-intentioned, but it is still a definition created at a distance from the frontline.


Many workplace standards across industries are designed by people who have not stood behind a till in years. That does not invalidate their expertise, but it does introduce a gap between theory and practice. When those standards are encoded into algorithms, that gap can become structural. The core issue is not whether AI can improve service. It is whether those deploying it are prepared to listen as carefully to staff experience as the system listens to staff voices. If friendliness becomes a metric, then it is fair to ask who sets the parameters, how flexible they are, and whether they reflect the messy, human reality of service work.


Because once the headset becomes the evaluator, the definition of “good” may no longer be negotiated on the shop floor, and that is a shift worth paying attention to.


Modern-Day Slavery: A Hidden Crisis in the West

3 October 2024

Connor Banks

[Image: Person's hands in chains on a black background]

Despite living in an era of increased awareness and legislation aimed at eradicating human trafficking and forced labour, modern slavery remains a pressing and hidden issue, even in developed nations. While most people believe that exploitation exists primarily in poorer, less-regulated parts of the world, the reality is far more troubling: many Western countries, including the UK, the US, and Italy, continue to see disturbing cases of forced labour and human trafficking across various industries. Recent revelations regarding modern slavery at a McDonald’s in Cambridgeshire and a bread factory supplying major UK supermarkets underscore just how pervasive this problem remains.


The McDonald’s and Bread Factory Case

In one of the most shocking cases of modern-day slavery in recent years, 16 individuals were trafficked from the Czech Republic and forced to work in deplorable conditions at a McDonald’s branch in Cambridgeshire and a bread factory that supplied major UK supermarkets such as Asda, Tesco, and Sainsbury’s. The workers, many of whom were vulnerable due to homelessness and addiction, had their wages stolen by a gang that controlled their movements through fear and confiscation of their passports.


Despite working excessive hours—some up to 100 hours a week—these victims saw very little of their earnings, as the bulk of their wages was funnelled into a single bank account controlled by their traffickers. The exploitation lasted for over four years, with warning signs, such as shared bank accounts and poor living conditions, repeatedly missed by employers and authorities.


This case highlights the ease with which modern slavery can slip through the cracks in even the most well-known and established companies. McDonald’s and large supermarket chains, brands synonymous with quality and global reach, failed to detect that their workers were victims of trafficking and labour exploitation for an extended period.


Luxury Brands in Italy: Slavery in the Shadows of Opulence

While the McDonald’s case sheds light on forced labour in industries like fast food and supermarket supply chains, the luxury sector is far from immune. In fact, high-end fashion brands—often associated with exclusivity and craftsmanship—have also been implicated in labour exploitation. In Italy, a country renowned for its luxury goods, brands such as Louis Vuitton, Salvatore Ferragamo, and Dior have come under fire for their involvement in modern slavery through subcontracted factories.


In 2023, investigations into Dior revealed that some of its subcontractors in Milan were employing migrant workers under highly exploitative conditions. These workers, many of whom were from China, were reportedly paid as little as €2 an hour while working long shifts in unsafe environments. Dior, a brand associated with opulence and craftsmanship, became the focus of a judicial investigation as authorities sought to address these abuses.


Additionally, Salvatore Ferragamo, another hallmark of Italian luxury, scored poorly in transparency rankings concerning labour practices. Many of these brands, despite their high prices, have been criticised for not ensuring that workers in their supply chains are paid fair wages or treated ethically. Reports indicate that subcontracted factories in Italy's fashion industry regularly exploit migrant labourers, forcing them to work long hours for very little pay under threats of deportation or violence.


The Price Tag Doesn’t Guarantee Ethics

The exploitation of workers in the supply chains of both global fast-food chains and luxury fashion brands reveals a disturbing truth: the price of a product does not equate to ethical practices. The allure of high-end fashion often masks the harsh reality of labour exploitation. Similarly, while McDonald’s and supermarket giants project images of corporate responsibility, their vast supply chains are susceptible to abuse.


What links these cases together is the way modern slavery remains hidden in plain sight. Consumers trust these brands, expecting that the price or reputation guarantees ethical production standards. Unfortunately, as these cases show, the image of luxury and corporate responsibility can sometimes come at the expense of human dignity.


The Need for Stronger Regulation and Awareness

Cases like the one involving McDonald’s in the UK and luxury brands in Italy highlight the urgent need for stronger regulation and more robust enforcement of labour rights. While many countries, including the UK, have passed legislation such as the Modern Slavery Act, enforcement remains inconsistent. Companies are often required to publish modern slavery statements, but compliance is frequently more about ticking boxes than effecting meaningful change.


More importantly, consumers play a critical role in demanding accountability. By supporting brands with transparent, ethical practices and pressuring others to improve, individuals can help combat modern slavery.


Modern-day slavery is not a relic of the past. It continues to thrive in industries as varied as fast food and luxury fashion, hiding behind the veneer of quality and success. The recent cases in the UK and Italy should serve as a wake-up call for consumers, governments, and corporations alike to address this hidden crisis with the seriousness and urgency it demands.
