When AI Measures “Friendliness”: Who Decides What Good Service Sounds Like?


5 March 2026

Paul Francis


Artificial intelligence is moving steadily from assisting workers to assessing them.


[Image: Cashier with robotic eyes, wearing a headset in a fast-food setting. Neon colors on screens in the background create a futuristic vibe.]


[Image: Burger King meal with wrapped burger, fries, and drink cup with logo on table. Bright, casual setting, with focus on branded items.]

Burger King has begun piloting an AI system in parts of the United States that listens to staff interactions through headsets and analyses speech patterns. The system, reportedly known as “Patty,” is designed to help managers track operational performance and, more controversially, measure staff “friendliness.” It does this by detecting politeness cues such as whether employees say “welcome,” “please,” or “thank you.”


From a corporate perspective, the logic is clear. Fast food is built on consistency. Brand standards matter. Customer experience scores influence revenue. If AI can help managers see patterns across shifts and locations, it promises efficiency, insight and improved service quality. On paper, it sounds like innovation.


In practice, it raises deeper questions about surveillance, culture, authenticity and who gets to define what “friendly” actually means. Because friendliness is not a checkbox. It is human.


The Promise Versus the Reality

The official line from companies testing this technology is that it is a coaching tool rather than a disciplinary one. It is presented as support for staff, helping identify trends rather than scoring individuals. It is framed as data-driven improvement rather than digital oversight, but the moment speech is analysed, quantified and turned into a metric, something changes.


Service work has always required emotional intelligence. It has also required emotional labour. Employees adjust tone, language and pace depending on the situation in front of them. A lunchtime rush feels different from a quiet mid-afternoon shift. A tired commuter is different from a group of teenagers. A frustrated parent is different from a regular parent who comes in every day.


Anyone who has worked in face-to-face customer service understands this instinctively. Your tone changes, your rhythm changes, your humour changes, and that is precisely where the friction with AI begins.


Culture Cannot Be Reduced to Keywords

One of the most immediate concerns is accent and cultural bias. Speech recognition systems are not neutral; they are trained on datasets. Those datasets may not equally represent every regional accent, dialect or speech pattern.


[Image: Hungry Jack's sign above a red canopy on a city street corner. Traffic light displays red pedestrian signal with trees and buildings in the background.]

In a noisy fast food environment, with headsets, background clatter and rapid speech, even minor variations can affect recognition accuracy. If an AI system relies heavily on detecting specific words, then any difficulty interpreting accents could skew the data. That is not a theoretical concern. Studies have shown that automated speech systems often perform better on standardised forms of English and less well on regional or non-native accents. If politeness metrics depend on exact phrasing, workers with stronger regional accents or different speech rhythms could appear less compliant in the data, even when their service is perfectly warm and appropriate.


Beyond pronunciation, there is the question of cultural expression. In some regions, friendliness is relaxed and informal. In others, it is brisk and efficient. In some communities, humour and banter are part of service culture. In others, restraint and professionalism are valued. AI systems do not instinctively understand these nuances. They detect patterns.

But hospitality is not a pattern. It is a relationship.


Who Sets the Definition of Friendly?

This leads to a more fundamental question. Who decides what counts as friendly?

These systems do not calibrate themselves. Someone defines the threshold. Someone selects the keywords. Someone decides how often “thank you” should be said and in what context. Those decisions are typically made at the corporate level, often by operations teams and technology partners working from brand guidelines and idealised customer journeys.


There is nothing inherently wrong with brand standards, but there is often a distance between corporate design and frontline reality.


[Image: Business meeting with people at a wooden table, one reading a marketing plan. Laptops, coffee cups, and documents on the table.]

Many workplace policies are written by people who have not worked a drive-thru shift in years, if ever. They may be excellent strategists. They may understand customer data deeply. But that does not always translate into lived experience on a busy Saturday afternoon when the fryer breaks and the queue is out the door.


In those moments, efficiency may matter more than repetition of scripted politeness.

If an algorithm expects a perfectly phrased greeting under all conditions, it risks becoming disconnected from the environment it is meant to improve.


Once those expectations are embedded in software, they become harder to question. The algorithm becomes policy.


The Authenticity Problem

Having worked in face-to-face customer service myself, I know that the best interactions were rarely scripted. Regular customers would come in, and you would adjust instantly. You might joke with them. You might take the piss in a friendly way. You might shorten the greeting entirely because familiarity made it unnecessary. That rapport is built over time and trust. Would an AI system recognise that as excellent service? Or would it mark down the interaction because the expected keywords were missing?


Hospitality is dynamic. It depends on reading the room, reading the person, and reading the moment. If workers begin focusing on hitting verbal benchmarks rather than engaging naturally, the interaction risks becoming mechanical. Customers can tell the difference between genuine warmth and box-ticking politeness. Ironically, quantifying friendliness may reduce the very authenticity companies are trying to protect.


Surveillance or Support?

This is where the tone of the debate shifts. Because even if the system is introduced as a supportive tool, the psychological reality of being monitored is not neutral.

Anyone who has worked in customer-facing roles knows that service environments are already performance spaces. You are representing the brand; you are expected to maintain composure and remain polite, even when customers are not. That emotional regulation is part of the job. Now imagine adding a layer where your tone and phrasing are being analysed in real time by software.


[Image: Hand holding a cassette recorder in focus, with blurred figures in business attire seated at a table in the background.]

Even if managers insist it is not punitive, the awareness that your speech is being measured changes behaviour. You begin to think not just about the customer in front of you, but about whether the system has “heard” the right words. In high-pressure environments, that is another cognitive load. Another thing to get right. Over time, that kind of monitoring can subtly alter workplace culture. It can shift service from something relational to something performative in a more rigid way. Employees may begin speaking not to connect, but to comply, and when compliance becomes the goal, service risks losing its texture.


Supportive technology tends to feel like something that works with you. Surveillance, even when softly framed, feels like something that watches you. The distinction matters, particularly in lower-wage sectors where workers have limited influence over policy decisions.


The Broader Direction of Travel

What makes this story significant is that it does not exist in isolation. It is part of a wider pattern in which AI is moving steadily from automating tasks to evaluating behaviour.

First, algorithms helped optimise stock levels and predict demand. Then they began assisting with scheduling and logistics. Now they are increasingly assessing how people speak, how they respond and how closely they align with brand standards. Each step may seem incremental. Taken together, they represent a fundamental shift in how work is structured and supervised.


Historically, managers evaluated service quality through observation, feedback and experience. There was room for interpretation, for context, for understanding that a difficult shift or a complex interaction could influence tone. Human judgment allowed for nuance.

When evaluation becomes data-driven, nuance can be harder to capture. Metrics tend to favour what is measurable. Words are measurable. Frequency is measurable. Context is far less so. The risk is not that AI becomes tyrannical overnight. The risk is that over time, it narrows the definition of good service to what can be quantified. And what can be quantified is rarely the full story.


A Question Worth Asking

Technology reflects priorities. If a company invests in systems that measure friendliness, it is signalling that friendliness can be standardised, monitored and optimised like any other operational metric, but service is not assembly. It is interaction.


It is shaped by region, by culture, by individual personality and by the particular chemistry between staff and customer in that moment. It shifts depending on who walks through the door. It changes across communities and demographics. It even evolves over the course of a day. When AI systems define behavioural benchmarks, someone has decided what the ideal interaction sounds like. That definition may come from brand research, from head office strategy sessions or from consultants analysing survey data. It may be carefully considered. It may be well-intentioned, but it is still a definition created at a distance from the frontline.


Many workplace standards across industries are designed by people who have not stood behind a till in years. That does not invalidate their expertise, but it does introduce a gap between theory and practice. When those standards are encoded into algorithms, that gap can become structural. The core issue is not whether AI can improve service. It is whether those deploying it are prepared to listen as carefully to staff experience as the system listens to staff voices. If friendliness becomes a metric, then it is fair to ask who sets the parameters, how flexible they are, and whether they reflect the messy, human reality of service work.


Because once the headset becomes the evaluator, the definition of “good” may no longer be negotiated on the shop floor, and that is a shift worth paying attention to.


A music illiterate reviews Eurovision Part 1

Writer: Connor Banks

May 15, 2024 · 7 min read

This past weekend saw millions tune in around the world to watch the 2024 Eurovision final, hosted in Malmö, Sweden. Known for its diverse musical genres, spectacular performances, and the unique opportunity to showcase national cultures, Eurovision captivates viewers worldwide. This year's entries are no exception, featuring everything from pop and rock to folk and opera, each aiming to capture the hearts of both the professional juries and the voting public. But no one has heard the opinion of someone who knows nothing about music, so clearly that is what's needed! Join us for A Music Idiot's Review of Eurovision!


37 countries competed in this year's Eurovision across the semi-finals and the final itself, which means we have 37 songs to review!



Iceland “Scared of Heights” by Hera Björk


Starting off with Iceland: the country has had a notable presence in the Eurovision Song Contest since its debut in 1986, though it has yet to win the competition and failed to make it out of the semi-finals this year. Was this perhaps a hidden gem among this year's songs?

“Scared of Heights” must be quite a fortunate title for Hera, as the song did not reach the dizzying heights of the semi-final scoreboard, receiving only 3 points, which is arguably 3 too many. This song certainly is one of the songs of all time; in fact, it is so striking in its blandness, lacking the uniqueness we have come to expect from Eurovision songs, that it unfortunately reminds me of “Embers”. Sorry Iceland, but I have to give you nul points for this one.



Azerbaijan "Özünlə apar" by FAHREE feat. Ilkin Dovlatov


Since debuting in 2008, Azerbaijan has quickly established itself as a powerhouse in the Eurovision Song Contest, highlighted by its win in 2011 with "Running Scared" by Ell & Nikki. With a reputation for high-quality performances and frequent top-10 finishes, Azerbaijan continues to be a formidable and dynamic competitor. This year they were represented by FAHREE and Ilkin Dovlatov with their song "Özünlə apar". Despite the country's strong Eurovision record, it netted them only 11 points in the semi-finals. But was the 14th-place finish justified? Honestly, I kind of like the song: it has a great sound, and the singing only adds to it, yet it's missing something. Whilst the song has good vocals and seemed a decent representation of Mugham music, it lacked a lot of the character we have come to expect from Azerbaijan's performances. Speaking of which, the staging required you to have seen the music video to “understand” it, and needing extracurricular viewing rarely makes for an entertaining experience. In the end, it's a decent song, just not the best the country has sent, and finishing 14th in the semi-final was probably fair.



Moldova “In The Middle” by Natalia Barbu


Since their debut in 2005, Moldova has made a significant impact on the Eurovision Song Contest with a mix of quirky and memorable performances. Highlighted by a 3rd place finish in 2017 with SunStroke Project's "Hey Mamma," Moldova continues to be a favourite for its unique and entertaining contributions to the contest. But what was this year's entry like?


I think this might be the first song where my opinion differs from the results, and I guess from the rest of Europe. Natalia Barbu's vocals are beautiful; her voice is the main focal point of the song, carrying the bridge into a chorus that switches from English into her native language, and the song even breaks into a violin solo. This is what many of us expect to hear when we think of Eurovision. The fact that it only barely finished above the previous two songs is criminal. It definitely deserved to finish higher, maybe even challenging for a spot in the final.



Poland “The Tower” by Luna


Debuting in 1994, Poland has brought a wide range of musical styles and culturally infused performances to the Eurovision Song Contest. Despite not securing a win, Poland has achieved notable successes, particularly Edyta Górniak's 2nd place in their debut year and other memorable entries like Donatan & Cleo's "My Słowianie" and Michał Szpak's "Color of Your Life." But enough about the past; what were this year's song and performance like? Well, considering it finished above Moldova, it has to be good, right?!?! WRONG. I like synth-pop, and a few of my favourite songs of all time are synth songs, but this is one of the dullest synth-pop songs I've heard. The vocal performance isn't anything special either: sure, Luna can sing, but she isn't blowing anyone away. The stage performance was interesting, with her performing on a chessboard flanked by two rook pieces; it was entertaining and definitely deserves praise from that perspective. However, I do think it finished around where it should have, and 12th in the semi-final is a good spot for it. It's getting a 4/10.



Australia “One Milkali (One Blood)” by Electric Fields


A much more recent addition to Eurovision, Australia first appeared in 2015 and has since made a significant impact with strong entries and memorable performances, highlighted by Dami Im's 2nd-place finish in 2016 with "Sound of Silence." Australia has consistently delivered high-quality performances that blend powerful vocals, creative staging, and contemporary pop appeal, and its participation adds a unique and diverse dimension that broadens Eurovision's global reach. However, the reach wasn't global enough this year, as they failed to make it out of the semi-final stage. Was this deserved? Well, the song wasn't the worst at Eurovision, and if there's anything Australia has proven, it's that they “get” Eurovision. I personally liked the song and how it incorporated Aboriginal lyrics and instruments. It had a fun, catchy house beat with a strong vocal performance to go along with it. Whilst it failed to make it through the semi-finals, I don't think Australia should be ashamed of this performance. A solid song; I'm giving it a 6.5/10 on my totally not arbitrary, made-up scoring system that's totally objective and not subjective.



Malta “Loop” by Sarah Bonnici


Malta has established itself as a formidable contender in the Eurovision Song Contest, achieving notable success with Ira Losco's "7th Wonder" in 2002 and Chiara's "Angel" in 2005, both of which secured 2nd-place finishes. Malta is celebrated for strong vocal performances and polished pop songs, and despite not yet securing a win, it remains a competitive and respected participant, consistently delivering engaging entries that blend contemporary music with captivating stage presentations. Despite that good history, though, I don't think this song was anything special. Whilst it had a fun live dance performance and Sarah has a great voice, I didn't think it was Eurovision enough to deserve to go beyond the semi-finals, and it turns out others agreed, as it finished bottom of its semi-final grouping. Not a bad song, just not very Eurovision or inspired. 4/10



Albania “Titan” by Besa


Albania has consistently participated in the Eurovision Song Contest, earning respect for its strong vocal performances and culturally rich entries, highlighted by Rona Nishliu's 5th-place finish in 2012 with "Suus." Albania has made an impact with its blend of powerful ballads, rock influences, and cultural authenticity. This year they were represented by Besa with the ballad “Titan”, which failed to make it past the semi-final. Was this what the song deserved? Yeah, probably. Whilst I am personally a sucker for ballads, this one wasn't the most inspiring. Ballads rely heavily on the vocal performance, and whilst Besa has a beautiful voice, this performance didn't reach the heights required for a ballad to do well at Eurovision. 5/10



Belgium “Before The Party Is Over” by Mustii


Belgium has made a significant impact on the Eurovision Song Contest with a variety of musical styles and memorable performances. This year they were represented by Mustii with the song “Before The Party Is Over”. The stage performance featured Mustii surrounded by a circle of microphones, which provided a memorable and unique visual. Mustii described the song as “pop with a dark edge”, and honestly I can sort of see that: it is much like other pop ballads, but Mustii's voice is the main focus and elevates it to the next level. The song builds and builds, slowly reaching a crescendo of truly epic scale. This is a hidden gem among the songs that failed to get past the semi-final, and it's actually a crime that it did not make the final. 7/10



Denmark “Sand” by Saba


Denmark has established itself as a formidable and respected presence in the Eurovision Song Contest, known for high-quality entries and diverse musical styles. The country has achieved three notable victories: in 1963 with Grethe & Jørgen Ingmann's melodic "Dansevise," in 2000 with the Olsen Brothers' catchy "Fly on the Wings of Love," and in 2013 with Emmelie de Forest's powerful "Only Teardrops." But does “Sand” by Saba live up to this legacy? Well, sort of. The song is a catchy vocal pop ballad that shows off Saba's talent and skill, but it didn't make it out of the semi-finals, and I can honestly understand why. That said, I don't think it should have finished above Belgium. 5.5/10



Czechia “Pedestal” by Aiko


Czechia, also known as the Czech Republic, has been making its mark on the Eurovision Song Contest with a series of notable performances and increasing success since its debut in 2007. After initial challenges, including non-qualification in its first three attempts, Czechia's perseverance paid off with Mikolas Josef's energetic "Lie to Me" in 2018, which finished in 6th place, marking the country’s best result to date.


This year they submitted the pop-punk song “Pedestal” by Aiko. The song seems to be Marmite for a lot of people: some think it deserved to make the final, others that it did well to almost qualify but just missed out. Personally I think it's a fine song, but I don't think the final was missing it. I'll give it a 5/10.
