When AI Measures “Friendliness”: Who Decides What Good Service Sounds Like?

5 March 2026

Paul Francis


Artificial intelligence is moving steadily from assisting workers to assessing them.



Burger King has begun piloting an AI system in parts of the United States that listens to staff interactions through headsets and analyses speech patterns. The system, reportedly known as “Patty,” is designed to help managers track operational performance and, more controversially, measure staff “friendliness.” It does this by detecting politeness cues such as whether employees say “welcome,” “please,” or “thank you.”
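The article gives no detail on how Patty works internally, but the reported approach, scanning speech for politeness cues, can be sketched in a few lines. The cue list and scoring rule below are illustrative assumptions, not the real system.

```python
# A minimal, hypothetical sketch of keyword-based "friendliness" scoring,
# assuming the cue-detection approach the article describes. The cue list
# and scoring rule are invented for illustration; they are not Patty's.

POLITENESS_CUES = {"welcome", "please", "thank"}

def friendliness_score(transcript: str) -> float:
    """Return the fraction of expected politeness cues found in a transcript."""
    words = {word.strip(".,!?").lower() for word in transcript.split()}
    return len(POLITENESS_CUES & words) / len(POLITENESS_CUES)

print(friendliness_score("Welcome in! Please pull forward. Thank you!"))  # 1.0
print(friendliness_score("Hiya love, usual large meal? Cheers!"))         # 0.0
```

The second transcript is arguably the warmer service, yet it scores zero. That is the whole problem in miniature.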


From a corporate perspective, the logic is clear. Fast food is built on consistency. Brand standards matter. Customer experience scores influence revenue. If AI can help managers see patterns across shifts and locations, it promises efficiency, insight and improved service quality. On paper, it sounds like innovation.


In practice, it raises deeper questions about surveillance, culture, authenticity and who gets to define what “friendly” actually means. Because friendliness is not a checkbox. It is human.


The Promise Versus the Reality

The official line from companies testing this technology is that it is a coaching tool rather than a disciplinary one. It is presented as support for staff, helping identify trends rather than scoring individuals. It is framed as data-driven improvement rather than digital oversight, but the moment speech is analysed, quantified and turned into a metric, something changes.


Service work has always required emotional intelligence. It has also required emotional labour. Employees adjust tone, language and pace depending on the situation in front of them. A lunchtime rush feels different from a quiet mid-afternoon shift. A tired commuter is different from a group of teenagers. A frustrated parent is different from a regular parent who comes in every day.


Anyone who has worked in face-to-face customer service understands this instinctively. Your tone changes, your rhythm changes, your humour changes, and that is precisely where the friction with AI begins.


Culture Cannot Be Reduced to Keywords

One of the most immediate concerns is accent and cultural bias. Speech recognition systems are not neutral; they are trained on datasets. Those datasets may not equally represent every regional accent, dialect or speech pattern.



In a noisy fast food environment, with headsets, background clatter and rapid speech, even minor variations can affect recognition accuracy. If an AI system relies heavily on detecting specific words, then any difficulty interpreting accents could skew the data. That is not a theoretical concern. Studies have shown that automated speech systems often perform better on standardised forms of English and less well on regional or non-native accents. If politeness metrics depend on exact phrasing, workers with stronger regional accents or different speech rhythms could appear less compliant in the data, even when their service is perfectly warm and appropriate.
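That skew is easy to demonstrate. The toy simulation below assumes, purely for illustration, that a recogniser drops words at a higher rate for an under-represented accent. The error rates are invented, but the mechanism is the one those studies describe.

```python
import random

# Illustrative simulation only: the word error rates (WER) are invented.
# The point is the mechanism: if speech recognition misrecognises more words
# for some accents, a keyword-based politeness score falls for identical speech.

POLITENESS_CUES = {"welcome", "please", "thank"}
UTTERANCE = "welcome in please pull forward thank you".split()

def transcribe(words, wer):
    """Mimic misrecognition by replacing a fraction of words at random."""
    return [w if random.random() > wer else "<unk>" for w in words]

def score(words):
    return len(POLITENESS_CUES & set(words)) / len(POLITENESS_CUES)

random.seed(42)
for label, wer in [("standardised accent", 0.05), ("regional accent", 0.30)]:
    avg = sum(score(transcribe(UTTERANCE, wer)) for _ in range(10_000)) / 10_000
    print(f"{label}: average friendliness score {avg:.2f}")
```

Same words spoken, same warmth intended, different number on the dashboard.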


Beyond pronunciation, there is the question of cultural expression. In some regions, friendliness is relaxed and informal. In others, it is brisk and efficient. In some communities, humour and banter are part of service culture. In others, restraint and professionalism are valued. AI systems do not instinctively understand these nuances. They detect patterns.

But hospitality is not a pattern. It is a relationship.


Who Sets the Definition of Friendly?

This leads to a more fundamental question. Who decides what counts as friendly?

These systems do not calibrate themselves. Someone defines the threshold. Someone selects the keywords. Someone decides how often “thank you” should be said and in what context. Those decisions are typically made at the corporate level, often by operations teams and technology partners working from brand guidelines and idealised customer journeys.
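To make that concrete, here is what such a calibration might look like as configuration. Every name and number below is hypothetical, but each one represents a human choice, made far from the counter, that the software then enforces as fact.

```python
# Hypothetical calibration block; all keys and values are invented, to show
# that "friendly" is a set of tunable parameters someone chose, not a
# property the system discovers.
FRIENDLINESS_POLICY = {
    "required_cues": ["welcome", "please", "thank you"],
    "min_cue_rate": 0.80,          # share of interactions that must contain each cue
    "greeting_window_seconds": 5,  # how quickly the greeting must be spoken
    "coaching_flag_below": 0.60,   # score that flags a shift for "coaching"
}
```

Change any one value and “friendly” means something different across thousands of shifts.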


There is nothing inherently wrong with brand standards, but there is often a distance between corporate design and frontline reality.



Many workplace policies are written by people who have not worked a drive-thru shift in years, if ever. They may be excellent strategists. They may understand customer data deeply. But that does not always translate into lived experience on a busy Saturday afternoon when the fryer breaks and the queue is out the door.


In those moments, efficiency may matter more than repetition of scripted politeness.

If an algorithm expects a perfectly phrased greeting under all conditions, it risks becoming disconnected from the environment it is meant to improve.


Once those expectations are embedded in software, they become harder to question. The algorithm becomes policy.


The Authenticity Problem

Having worked in face-to-face customer service myself, I know that the best interactions were rarely scripted. Regular customers would come in, and you would adjust instantly. You might joke with them. You might take the piss in a friendly way. You might shorten the greeting entirely because familiarity made it unnecessary. That rapport is built over time and trust. Would an AI system recognise that as excellent service? Or would it mark down the interaction because the expected keywords were missing?


Hospitality is dynamic. It depends on reading the room, reading the person, and reading the moment. If workers begin focusing on hitting verbal benchmarks rather than engaging naturally, the interaction risks becoming mechanical. Customers can tell the difference between genuine warmth and box-ticking politeness. Ironically, quantifying friendliness may reduce the very authenticity companies are trying to protect.


Surveillance or Support?

This is where the tone of the debate shifts. Because even if the system is introduced as a supportive tool, the psychological reality of being monitored is not neutral.

Anyone who has worked in customer-facing roles knows that service environments are already performance spaces. You are representing the brand; you are expected to maintain composure and remain polite, even when customers are not. That emotional regulation is part of the job. Now imagine adding a layer where your tone and phrasing are being analysed in real time by software.



Even if managers insist it is not punitive, the awareness that your speech is being measured changes behaviour. You begin to think not just about the customer in front of you, but about whether the system has “heard” the right words. In high-pressure environments, that is another cognitive load. Another thing to get right. Over time, that kind of monitoring can subtly alter workplace culture. It can shift service from something relational to something performative in a more rigid way. Employees may begin speaking not to connect, but to comply, and when compliance becomes the goal, service risks losing its texture.


Supportive technology tends to feel like something that works with you. Surveillance, even when softly framed, feels like something that watches you. The distinction matters, particularly in lower-wage sectors where workers have limited influence over policy decisions.


The Broader Direction of Travel

What makes this story significant is that it does not exist in isolation. It is part of a wider pattern in which AI is moving steadily from automating tasks to evaluating behaviour.

First, algorithms helped optimise stock levels and predict demand. Then they began assisting with scheduling and logistics. Now they are increasingly assessing how people speak, how they respond and how closely they align with brand standards. Each step may seem incremental. Taken together, they represent a fundamental shift in how work is structured and supervised.


Historically, managers evaluated service quality through observation, feedback and experience. There was room for interpretation, for context, for understanding that a difficult shift or a complex interaction could influence tone. Human judgment allowed for nuance.

When evaluation becomes data-driven, nuance can be harder to capture. Metrics tend to favour what is measurable. Words are measurable. Frequency is measurable. Context is far less so. The risk is not that AI becomes tyrannical overnight. The risk is that over time, it narrows the definition of good service to what can be quantified. And what can be quantified is rarely the full story.


A Question Worth Asking

Technology reflects priorities. If a company invests in systems that measure friendliness, it is signalling that friendliness can be standardised, monitored and optimised like any other operational metric, but service is not assembly. It is interaction.


It is shaped by region, by culture, by individual personality and by the particular chemistry between staff and customer in that moment. It shifts depending on who walks through the door. It changes across communities and demographics. It even evolves over the course of a day. When AI systems define behavioural benchmarks, someone has decided what the ideal interaction sounds like. That definition may come from brand research, from head office strategy sessions or from consultants analysing survey data. It may be carefully considered. It may be well-intentioned, but it is still a definition created at a distance from the frontline.


Many workplace standards across industries are designed by people who have not stood behind a till in years. That does not invalidate their expertise, but it does introduce a gap between theory and practice. When those standards are encoded into algorithms, that gap can become structural. The core issue is not whether AI can improve service. It is whether those deploying it are prepared to listen as carefully to staff experience as the system listens to staff voices. If friendliness becomes a metric, then it is fair to ask who sets the parameters, how flexible they are, and whether they reflect the messy, human reality of service work.


Because once the headset becomes the evaluator, the definition of “good” may no longer be negotiated on the shop floor, and that is a shift worth paying attention to.

Meta (Facebook) Under Fire Again: Why the Tech Giant Faces a New Wave of Privacy Lawsuits

6 August 2025

Paul Francis

Once again, Meta is in the spotlight, and not for the reasons it might hope. This time, it finds itself under renewed legal and regulatory scrutiny across both the UK and United States, as fresh allegations emerge about its continued tracking of user data without full consent. Despite previous fines and public outcry, the tech giant behind Facebook, Instagram, and WhatsApp is facing a storm that may be more difficult to weather.



New Legal Pressure in the United States

In the US, Meta is currently facing multiple class action lawsuits, many of which revolve around privacy breaches, exploitative platform design, and the targeting of young users.

One of the most prominent ongoing cases was filed in 2023 by dozens of US states. The lawsuit accuses Meta of deliberately designing features on Instagram and Facebook that exploit young users' psychological vulnerabilities, encouraging addictive use of the platforms. The company is alleged to have known about the harm these features could cause, particularly to teenage mental health, but did little to change the design.


Another class action is gaining traction over the unauthorised tracking of user behaviour on third-party websites. This includes the alleged misuse of tracking pixels to collect data even when users are not logged into Meta’s platforms. Users claim they were unaware that their health, financial, or browsing information was being collected in the background.


These lawsuits follow in the wake of a $725 million settlement Meta agreed to pay in 2022 over the Cambridge Analytica scandal. The current cases suggest that regulatory and legal appetite for holding Big Tech accountable is only increasing.


Investigations and Pressure in the UK



Across the Atlantic, Meta is facing scrutiny from the UK’s Information Commissioner's Office (ICO). Though details of ongoing investigations remain confidential, the ICO has expressed growing concern about the use of data tracking in digital advertising. The regulator is reportedly investigating whether Meta’s ad targeting systems and platform architecture violate UK privacy laws, especially in light of recent Online Safety Act provisions.


In addition, the Competition and Markets Authority (CMA) has launched probes into how Meta collects and uses consumer data, particularly around its growing integration with virtual and augmented reality services.


The UK’s appetite for action follows similar moves in the European Union. In 2023, Ireland’s Data Protection Commission issued Meta with a €1.2 billion fine for unlawful data transfers to the United States, the largest GDPR-related fine ever issued.


A Pattern of Privacy Failures

Meta’s defenders often argue that the company is simply evolving in a fast-moving tech environment. However, critics point to a repeated pattern of behaviour that undermines public trust.


From Cambridge Analytica to hidden tracking pixels and now algorithms that allegedly harm young people, Meta’s record on user data is far from spotless. Regulators and campaigners say this pattern suggests systemic issues rather than one-off mistakes.

James Steyer, CEO of Common Sense Media, said: “Tech giants like Meta have failed to put the wellbeing of users first. We have seen this time and again. Fines may not be enough to drive real change.”


What Could This Mean for Meta?

The potential financial impact of these actions could be considerable. Under GDPR, fines can reach up to 4 percent of global turnover. With Meta reporting revenue of around $135 billion in 2023, this could mean penalties in the multi-billion range.
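To put that in concrete figures: 4 percent of roughly $135 billion works out to about $5.4 billion from a single maximum GDPR penalty, before any US settlements are counted.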


Beyond the fines, Meta faces restrictions on how it can collect and use data. Some legal experts believe upcoming rulings may force the company to overhaul its ad systems entirely, particularly those built around inferred personal data and behavioural tracking.


There is also reputational risk. Consumers, particularly younger ones, are increasingly conscious of how their data is handled. With rival platforms emerging and concerns around AI-generated content on the rise, Meta’s grip on digital culture may be slipping.


Why It Matters to Everyone

These legal actions may feel distant from everyday life, but they reflect a deeper issue about how much control individuals have over their digital lives. Most users never read the fine print or understand the scope of the data being collected. Meta’s platforms remain free to use, but the cost is increasingly paid in privacy.


There is also a broader societal question at play. If companies continue to operate in a way that values data extraction over transparency, can regulation ever catch up? Or are we simply witnessing the beginning of a new kind of digital economy, one where personal information is the price of entry?


A Familiar Story with Higher Stakes

Whether this new wave of lawsuits and investigations leads to genuine change is yet to be seen. Meta has the resources to fight prolonged legal battles, and history has shown the company is rarely forced into long-term reform.


But there is a sense that the tide is turning. Public sentiment is shifting, and regulators appear more coordinated than ever. If Meta is once again under fire for failing to respect data boundaries, this time it may find the consequences harder to brush off.

