
When AI Measures “Friendliness”: Who Decides What Good Service Sounds Like?

5 March 2026

Paul Francis


Artificial intelligence is moving steadily from assisting workers to assessing them.



Burger King has begun piloting an AI system in parts of the United States that listens to staff interactions through headsets and analyses speech patterns. The system, reportedly known as “Patty,” is designed to help managers track operational performance and, more controversially, measure staff “friendliness.” It does this by detecting politeness cues such as whether employees say “welcome,” “please,” or “thank you.”
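
How the system works internally has not been published, but keyword spotting of this kind is easy to sketch. The snippet below is a hypothetical illustration, not Patty's actual logic: it scores a transcript purely on which expected politeness cues appear, because that is all a keyword-based metric can see.

```python
# Hypothetical sketch of keyword-based "friendliness" scoring.
# This illustrates the general technique, not the actual system.

POLITENESS_CUES = {"welcome", "please", "thank", "thanks"}

def friendliness_score(transcript: str) -> float:
    """Return the fraction of expected politeness cues found in a transcript."""
    words = {word.strip(".,!?").lower() for word in transcript.split()}
    return len(POLITENESS_CUES & words) / len(POLITENESS_CUES)

# A warm exchange with a regular can score zero; a flat, scripted one scores high.
print(friendliness_score("Hey Dave, the usual? Coming right up."))   # 0.0
print(friendliness_score("Welcome! Please wait there. Thank you."))  # 0.75
```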


From a corporate perspective, the logic is clear. Fast food is built on consistency. Brand standards matter. Customer experience scores influence revenue. If AI can help managers see patterns across shifts and locations, it promises efficiency, insight and improved service quality. On paper, it sounds like innovation.


In practice, it raises deeper questions about surveillance, culture, authenticity and who gets to define what “friendly” actually means. Because friendliness is not a checkbox. It is human.


The Promise Versus the Reality

The official line from companies testing this technology is that it is a coaching tool rather than a disciplinary one. It is presented as support for staff, helping identify trends rather than scoring individuals. It is framed as data-driven improvement rather than digital oversight. But the moment speech is analysed, quantified and turned into a metric, something changes.


Service work has always required emotional intelligence. It has also required emotional labour. Employees adjust tone, language and pace depending on the situation in front of them. A lunchtime rush feels different from a quiet mid-afternoon shift. A tired commuter is different from a group of teenagers. A frustrated parent is different from a regular parent who comes in every day.


Anyone who has worked in face-to-face customer service understands this instinctively. Your tone changes, your rhythm changes, your humour changes, and that is precisely where the friction with AI begins.


Culture Cannot Be Reduced to Keywords

One of the most immediate concerns is accent and cultural bias. Speech recognition systems are not neutral; they are trained on datasets. Those datasets may not equally represent every regional accent, dialect or speech pattern.


In a noisy fast-food environment, with headsets, background clatter and rapid speech, even minor variations can affect recognition accuracy. If an AI system relies heavily on detecting specific words, then any difficulty interpreting accents could skew the data. That is not a theoretical concern. Studies have shown that automated speech systems often perform better on standardised forms of English and less well on regional or non-native accents. If politeness metrics depend on exact phrasing, workers with stronger regional accents or different speech rhythms could appear less compliant in the data, even when their service is perfectly warm and appropriate.
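
A small, hypothetical simulation shows how quickly this skews the numbers. Assume two workers say the same three politeness cues per order, but the recogniser misses words at different rates for their accents; the miss rates below are invented purely for illustration.

```python
import random

# Hypothetical illustration: identical politeness, different recognition accuracy.
# The miss rates are invented; real accuracy gaps vary by system and accent.

def cues_registered(cues_spoken: int, miss_rate: float, trials: int = 10_000) -> float:
    """Average number of spoken politeness cues the recogniser actually logs."""
    total = sum(
        sum(random.random() > miss_rate for _ in range(cues_spoken))
        for _ in range(trials)
    )
    return total / trials

# Both workers say three cues per order; only transcription accuracy differs.
print(cues_registered(cues_spoken=3, miss_rate=0.05))  # roughly 2.85 cues logged
print(cues_registered(cues_spoken=3, miss_rate=0.30))  # roughly 2.10 cues logged
```

Same behaviour, different data, and the data is what the dashboard shows.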


Beyond pronunciation, there is the question of cultural expression. In some regions, friendliness is relaxed and informal. In others, it is brisk and efficient. In some communities, humour and banter are part of service culture. In others, restraint and professionalism are valued. AI systems do not instinctively understand these nuances. They detect patterns.

But hospitality is not a pattern. It is a relationship.


Who Sets the Definition of Friendly?

This leads to a more fundamental question. Who decides what counts as friendly?

These systems do not calibrate themselves. Someone defines the threshold. Someone selects the keywords. Someone decides how often “thank you” should be said and in what context. Those decisions are typically made at the corporate level, often by operations teams and technology partners working from brand guidelines and idealised customer journeys.
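
To see how many human judgments hide inside such a tool, imagine its configuration. Everything below is invented, but every value stands for a decision someone made, usually far from the counter, and each one could have been made differently.

```python
# Hypothetical configuration for a politeness-scoring system.
# Every value is an invented example of a judgment made at head office.

FRIENDLINESS_POLICY = {
    "required_greeting": "welcome to burger king",  # who chose the exact phrase?
    "politeness_cues": ["please", "thank you"],     # why these words and not others?
    "min_cues_per_order": 2,                        # why two? why per order?
    "greeting_window_seconds": 5,                   # what about the lunchtime rush?
    "flag_shift_below_score": 0.6,                  # the threshold that becomes policy
}
```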


There is nothing inherently wrong with brand standards, but there is often a distance between corporate design and frontline reality.


Many workplace policies are written by people who have not worked a drive-thru shift in years, if ever. They may be excellent strategists. They may understand customer data deeply. But that does not always translate into lived experience on a busy Saturday afternoon when the fryer breaks and the queue is out the door.


In those moments, efficiency may matter more than repetition of scripted politeness.

If an algorithm expects a perfectly phrased greeting under all conditions, it risks becoming disconnected from the environment it is meant to improve.


Once those expectations are embedded in software, they become harder to question. The algorithm becomes policy.


The Authenticity Problem

Having worked in face-to-face customer service myself, I know that the best interactions were rarely scripted. Regular customers would come in, and you would adjust instantly. You might joke with them. You might take the piss in a friendly way. You might shorten the greeting entirely because familiarity made it unnecessary. That rapport is built on time and trust. Would an AI system recognise that as excellent service? Or would it mark down the interaction because the expected keywords were missing?


Hospitality is dynamic. It depends on reading the room, reading the person, and reading the moment. If workers begin focusing on hitting verbal benchmarks rather than engaging naturally, the interaction risks becoming mechanical. Customers can tell the difference between genuine warmth and box-ticking politeness. Ironically, quantifying friendliness may reduce the very authenticity companies are trying to protect.


Surveillance or Support?

This is where the tone of the debate shifts. Because even if the system is introduced as a supportive tool, the psychological reality of being monitored is not neutral.

Anyone who has worked in customer-facing roles knows that service environments are already performance spaces. You are representing the brand; you are expected to maintain composure and remain polite, even when customers are not. That emotional regulation is part of the job. Now imagine adding a layer where your tone and phrasing are being analysed in real time by software.


Even if managers insist it is not punitive, the awareness that your speech is being measured changes behaviour. You begin to think not just about the customer in front of you, but about whether the system has “heard” the right words. In high-pressure environments, that is another cognitive load. Another thing to get right. Over time, that kind of monitoring can subtly alter workplace culture. It can shift service from something relational to something more rigidly performative. Employees may begin speaking not to connect, but to comply, and when compliance becomes the goal, service risks losing its texture.


Supportive technology tends to feel like something that works with you. Surveillance, even when softly framed, feels like something that watches you. The distinction matters, particularly in lower-wage sectors where workers have limited influence over policy decisions.


The Broader Direction of Travel

What makes this story significant is that it does not exist in isolation. It is part of a wider pattern in which AI is moving steadily from automating tasks to evaluating behaviour.

First, algorithms helped optimise stock levels and predict demand. Then they began assisting with scheduling and logistics. Now they are increasingly assessing how people speak, how they respond and how closely they align with brand standards. Each step may seem incremental. Taken together, they represent a fundamental shift in how work is structured and supervised.


Historically, managers evaluated service quality through observation, feedback and experience. There was room for interpretation, for context, for understanding that a difficult shift or a complex interaction could influence tone. Human judgment allowed for nuance.

When evaluation becomes data-driven, nuance can be harder to capture. Metrics tend to favour what is measurable. Words are measurable. Frequency is measurable. Context is far less so. The risk is not that AI becomes tyrannical overnight. The risk is that over time, it narrows the definition of good service to what can be quantified. And what can be quantified is rarely the full story.
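
A tiny, invented example makes the limitation visible. The two transcripts below produce identical cue counts, so a frequency metric literally cannot tell them apart.

```python
from collections import Counter

# Invented transcripts with identical cue frequency but opposite warmth.

def cue_counts(transcript: str) -> Counter:
    """Count politeness cues, which is all a frequency metric measures."""
    words = [word.strip(".,!?'").lower() for word in transcript.split()]
    return Counter(word for word in words if word in {"please", "thank", "thanks"})

print(cue_counts("Please wait. Thank you. Next."))                          # please: 1, thank: 1
print(cue_counts("No rush, please take your time. Thank you for coming!"))  # please: 1, thank: 1
```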


A Question Worth Asking

Technology reflects priorities. If a company invests in systems that measure friendliness, it is signalling that friendliness can be standardised, monitored and optimised like any other operational metric. But service is not an assembly line. It is an interaction.


It is shaped by region, by culture, by individual personality and by the particular chemistry between staff and customer in that moment. It shifts depending on who walks through the door. It changes across communities and demographics. It even evolves over the course of a day. When AI systems define behavioural benchmarks, someone has decided what the ideal interaction sounds like. That definition may come from brand research, from head office strategy sessions or from consultants analysing survey data. It may be carefully considered. It may be well-intentioned, but it is still a definition created at a distance from the frontline.


Many workplace standards across industries are designed by people who have not stood behind a till in years. That does not invalidate their expertise, but it does introduce a gap between theory and practice. When those standards are encoded into algorithms, that gap can become structural. The core issue is not whether AI can improve service. It is whether those deploying it are prepared to listen as carefully to staff experience as the system listens to staff voices. If friendliness becomes a metric, then it is fair to ask who sets the parameters, how flexible they are, and whether they reflect the messy, human reality of service work.


Because once the headset becomes the evaluator, the definition of “good” may no longer be negotiated on the shop floor, and that is a shift worth paying attention to.

TikTok Ban Looms: Millions of Users Could Be Affected

Jan 9, 2025

Paul Francis

On January 19, 2025, TikTok, one of the world’s most popular social media platforms, faces a potential ban in the United States. If enacted, the ban could impact over 170 million U.S. users who rely on the platform daily for entertainment, education, and business. This significant move stems from a 2024 law requiring ByteDance, TikTok’s Chinese parent company, to divest its U.S. operations. Failure to comply would result in TikTok being removed from app stores and blocked by internet service providers across the country.


TikTok: A Short History of Global Success

TikTok’s journey began in September 2016, when ByteDance launched the app as Douyin in China. Within a year, ByteDance released an international version, rebranding it as TikTok. The platform exploded in popularity after its 2018 merger with Musical.ly, a U.S.-based app that focused on lip-syncing videos. This move not only expanded TikTok's user base but also solidified its foothold in Western markets.


TikTok's algorithm, which curates personalized content for users based on their interests and interactions, became its defining feature. By 2024, TikTok had over 1.04 billion monthly active users worldwide, with U.S. users alone spending an average of 95 minutes per day on the app. This translates to nearly 48 hours a month of consistent engagement, with content spanning everything from viral dance challenges to educational tutorials.


The platform is not just a hub for creators; it has become an essential marketing tool for brands and a primary income source for influencers. Businesses of all sizes use TikTok to reach younger demographics, with Gen Z and millennials making up the majority of its user base.


The Court Case: Allegations of Spying and National Security Risks

The legal controversy surrounding TikTok stems from concerns that ByteDance could share U.S. user data with the Chinese government, an allegation TikTok and ByteDance have consistently denied. In April 2024, the U.S. Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA). This legislation required ByteDance to sell TikTok’s U.S. operations or face a nationwide ban by January 19, 2025.

The Department of Justice has emphasized that the app poses a significant national security risk. It argues that the Chinese government could exploit TikTok’s access to U.S. user data for espionage purposes, despite ByteDance’s assertions that U.S. data is stored on servers outside of China.


ByteDance has countered with legal challenges, claiming that the law infringes on First Amendment rights and suppresses free speech. As the deadline looms, the Supreme Court is set to make a critical decision, balancing concerns about national security with the constitutional rights of millions of users and creators.


Potential Fallout for the Tech Industry

A TikTok ban could send ripples across the tech industry, especially for foreign-owned applications operating in the U.S. If TikTok is banned due to its ownership structure, other non-U.S.-based platforms could face heightened scrutiny. This could result in stricter regulations, potential bans, or even demands for foreign companies to establish U.S. subsidiaries or sell assets.


The case raises broader questions about the future of the global tech landscape. Could governments worldwide follow suit, restricting access to apps based on their country of origin? Such actions could lead to a fragmented internet, where digital platforms are siloed based on national boundaries and geopolitical alliances.


Implications for Creators and Businesses

For creators and businesses, the stakes are high. TikTok has become an indispensable platform for reaching audiences, generating income, and driving brand awareness. A ban would force creators to migrate to other platforms, potentially disrupting their income streams and reducing their reach. Businesses reliant on TikTok advertising would need to pivot their strategies, potentially investing more heavily in alternative platforms like Instagram Reels, YouTube Shorts, or Snapchat.


The Future of TikTok

As the January 19 deadline approaches, millions of users, creators, and businesses are left in limbo. The Supreme Court’s ruling will not only determine TikTok’s fate in the U.S. but also set a precedent for how governments regulate foreign-owned technology in the future. Regardless of the outcome, this case underscores the complex intersection of technology, politics, and national security in an increasingly interconnected world.


TikTok’s potential ban serves as a wake-up call for businesses and creators to diversify their digital strategies and consider the broader implications of a globalized tech landscape shaped by geopolitical tensions. The next few weeks will be critical for the platform’s future—and for the millions who depend on it.
