After the Moon: What Happened to Progress in the World That Followed 1969?

16 April 2026

Paul Francis


When the Future Seemed to Arrive All at Once

In July 1969, humanity did something that felt definitive.


[Image: An astronaut in a white suit on the barren, shadowy lunar surface.]

For those watching, it was not just a technological achievement. It carried the sense that the future had arrived in full view. If humans could stand on the Moon, then the rest seemed inevitable. Space travel would expand, technology would accelerate, and the decades ahead would continue that same upward trajectory.


Now imagine you were among those watching at 75 years old.


You had already lived through the transformation from oil lamps to electricity, from horse-drawn streets to aircraft, from handwritten letters to television broadcasts. The Moon landing would have felt like the final, extraordinary confirmation that progress had no ceiling.


And yet, what followed was not quite what that moment seemed to promise.


The World Did Not Stop, But It Changed Direction

The years after 1969 were not a period of stagnation in any simple sense. In fact, they brought some of the most profound changes in human history. The difference is that progress became less visible, less unified, and in many ways less reassuring.


The late 20th century saw the Cold War come to an end, reshaping global politics. The Berlin Wall fell in 1989, and the Soviet Union dissolved shortly after, bringing an end to a geopolitical structure that had defined the post-war world. Europe reorganised itself through deeper cooperation, leading to the formation and expansion of the European Union.


At the same time, the global economy became more interconnected. Trade expanded, supply chains stretched across continents, and financial systems became increasingly complex. The world that emerged was more integrated than ever before, but also more dependent on fragile networks.


This was progress, but it was not the kind that could be captured in a single image like the Moon landing.


The Digital Revolution Rewrote Everyday Life

If the earlier era was defined by physical transformation, the decades after 1969 were defined by something less tangible but no less powerful.


[Image: A retro computer setup with a beige monitor showing the "Bomb Jack" game menu, a white keyboard, an orange joystick and floppy discs.]

The rise of personal computing, followed by the internet, altered the structure of daily life. By the early 21st century, communication, work, entertainment and even social relationships had begun to move into digital spaces. Smartphones then placed that connectivity into people’s pockets, creating a world that was permanently online.


This was a revolution of scale and speed. Information that once took days or weeks to travel could now move instantly. Entire industries were reshaped or replaced. New forms of work and culture emerged.


Yet for all its impact, the digital revolution lacks the visual clarity of earlier breakthroughs. A smartphone does not feel as dramatic as a rocket launch, even if its influence is arguably broader.


Why Progress Feels Different Now

This shift in perception is central to understanding why the post-1969 world can feel slower, even when it is not.


Between 1894 and 1969, progress was visible in everyday surroundings. Streets changed. Homes changed. Transport changed. The world became recognisably different within a single lifetime.


After 1969, much of the change moved beneath the surface. Networks, software and data became the drivers of transformation. These are harder to see, and therefore easier to overlook.


There is also the question of expectation. The Moon landing set a psychological benchmark. It suggested that the future would continue to deliver breakthroughs of similar scale and drama. When that did not happen in the same way, it created a sense of slowdown, even as other forms of progress accelerated.


The Role of Money and Incentives

This is where the question of money and greed becomes relevant, though not in a simplistic sense.


In the earlier part of the 20th century, many of the most significant developments were driven by governments, public investment or the demands of war. Electrification, infrastructure and the space race itself were not primarily profit-driven. They were strategic, national or collective efforts.


In the decades after 1969, innovation became increasingly shaped by markets. Private companies began to play a larger role in determining which technologies advanced and how quickly. This shift did not stop progress, but it changed its direction.


Technologies that offered clear commercial returns, particularly in the digital and consumer sectors, moved rapidly. Meanwhile, areas that required long-term investment with uncertain profit, such as large-scale infrastructure or energy transformation, often progressed more slowly.


The result is a world where innovation continues, but is unevenly distributed and often aligned with economic incentives rather than collective ambition.


A More Complex and Uneven World

The post-1969 era has also been marked by challenges that complicate any straightforward narrative of progress.


[Image: Factory chimneys releasing thick smoke against a moody, orange sky.]

The HIV/AIDS crisis reshaped public health and exposed global inequalities. Climate change emerged as a defining issue, forcing a reckoning with the environmental cost of industrial growth. The COVID-19 pandemic demonstrated both the strengths and vulnerabilities of a globally connected world.


These are not signs of stagnation, but reminders that progress is not linear or universally positive. The same systems that enable rapid advancement can also create new risks.


In the UK, as in many other countries, these shifts have been felt in everyday life. Economic pressures, housing challenges and debates over public services sit alongside technological advancement, creating a more complicated picture of what progress actually means.


From the Moon to the Age of AI

Today, in 2026, the world stands at another threshold.


[Image: A hand holding a glowing human brain against a dark background of digital icons.]

Artificial intelligence, once confined to research labs, is now entering daily use. Systems capable of generating text, images and analysis are beginning to reshape work and creativity. At the same time, space exploration has returned to the public eye through new missions, including renewed efforts to send humans beyond low Earth orbit.


And yet, the mood is different from 1969. There is less certainty that each breakthrough leads to a better world. Progress continues, but it is accompanied by questions about control, impact and long-term consequences.


A Different Kind of Future

The decades after the Moon landing did not deliver a simple continuation of the story that began before it. Instead, they introduced a more complex and less predictable phase of human development.


The world did not stop moving forward. It became faster, more connected and more technologically advanced. But it also became more fragmented, more unequal and more difficult to interpret.


For those who watched Apollo 11 at 75, the Moon landing may have felt like the culmination of a lifetime of progress. What followed would have been harder to define, not because less was happening, but because so much of it was happening in ways that were less visible, less shared and less certain.


The future did not disappear after 1969.


It simply became harder to recognise.

When AI Measures “Friendliness”: Who Decides What Good Service Sounds Like?

Mar 5

Paul Francis

Artificial intelligence is moving steadily from assisting workers to assessing them.


[Image: A cashier with robotic eyes, wearing a headset in a fast-food setting, with neon screens in the background.]


[Image: A Burger King meal with a wrapped burger, fries and a branded drink cup on a table.]

Burger King has begun piloting an AI system in parts of the United States that listens to staff interactions through headsets and analyses speech patterns. The system, reportedly known as “Patty,” is designed to help managers track operational performance and, more controversially, measure staff “friendliness.” It does this by detecting politeness cues such as whether employees say “welcome,” “please,” or “thank you.”
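

To make that limitation concrete, here is a minimal sketch of what a keyword-based friendliness score could look like. It is purely illustrative: the cue words, weights and threshold are assumptions made up for the example, not details of "Patty" or any real system.

# Illustrative sketch only: a naive keyword-based "friendliness" score.
# The cue words, weights and threshold are assumptions for this example,
# not details of any real system.

POLITENESS_CUES = {"welcome": 1.0, "please": 1.0, "thank": 1.0, "thanks": 1.0}
THRESHOLD = 2.0  # hypothetical score needed to count as a "friendly" interaction

def friendliness_score(transcript: str) -> float:
    """Count politeness cue words in a transcribed interaction."""
    words = transcript.lower().split()
    return sum(POLITENESS_CUES.get(word.strip(".,!?"), 0.0) for word in words)

def is_friendly(transcript: str) -> bool:
    return friendliness_score(transcript) >= THRESHOLD

# A warm, familiar exchange with a regular scores zero,
# while a flat, scripted one passes easily.
print(is_friendly("Morning Dave, usual bacon roll? Go grab your seat."))  # False
print(is_friendly("Welcome. Please wait. Thank you."))                    # True

That gap, between what the counter rewards and what actually feels friendly, is what the rest of this piece is about.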


From a corporate perspective, the logic is clear. Fast food is built on consistency. Brand standards matter. Customer experience scores influence revenue. If AI can help managers see patterns across shifts and locations, it promises efficiency, insight and improved service quality. On paper, it sounds like innovation.


In practice, it raises deeper questions about surveillance, culture, authenticity and who gets to define what "friendly" actually means. Because friendliness is not a checkbox. It is human.


The Promise Versus the Reality

The official line from companies testing this technology is that it is a coaching tool rather than a disciplinary one. It is presented as support for staff, helping identify trends rather than scoring individuals. It is framed as data-driven improvement rather than digital oversight, but the moment speech is analysed, quantified and turned into a metric, something changes.


Service work has always required emotional intelligence. It has also required emotional labour. Employees adjust tone, language and pace depending on the situation in front of them. A lunchtime rush feels different from a quiet mid-afternoon shift. A tired commuter is different from a group of teenagers. A frustrated parent is different from a regular who comes in every day.


Anyone who has worked in face-to-face customer service understands this instinctively. Your tone changes, your rhythm changes, your humour changes, and that is precisely where the friction with AI begins.


Culture Cannot Be Reduced to Keywords

One of the most immediate concerns is accent and cultural bias. Speech recognition systems are not neutral; they are trained on datasets. Those datasets may not equally represent every regional accent, dialect or speech pattern.


[Image: A Hungry Jack's sign above a red canopy on a city street corner.]

In a noisy fast food environment, with headsets, background clatter and rapid speech, even minor variations can affect recognition accuracy. If an AI system relies heavily on detecting specific words, then any difficulty interpreting accents could skew the data. That is not a theoretical concern. Studies have shown that automated speech systems often perform better on standardised forms of English and less well on regional or non-native accents. If politeness metrics depend on exact phrasing, workers with stronger regional accents or different speech rhythms could appear less compliant in the data, even when their service is perfectly warm and appropriate.


Beyond pronunciation, there is the question of cultural expression. In some regions, friendliness is relaxed and informal. In others, it is brisk and efficient. In some communities, humour and banter are part of service culture. In others, restraint and professionalism are valued. AI systems do not instinctively understand these nuances. They detect patterns.

But hospitality is not a pattern. It is a relationship.


Who Sets the Definition of Friendly?

This leads to a more fundamental question. Who decides what counts as friendly?

These systems do not calibrate themselves. Someone defines the threshold. Someone selects the keywords. Someone decides how often “thank you” should be said and in what context. Those decisions are typically made at the corporate level, often by operations teams and technology partners working from brand guidelines and idealised customer journeys.


There is nothing inherently wrong with brand standards, but there is often a distance between corporate design and frontline reality.


[Image: A business meeting around a wooden table, with one person reading a marketing plan.]

Many workplace policies are written by people who have not worked a drive-thru shift in years, if ever. They may be excellent strategists. They may understand customer data deeply. But that does not always translate into lived experience on a busy Saturday afternoon when the fryer breaks and the queue is out the door.


In those moments, efficiency may matter more than repetition of scripted politeness.

If an algorithm expects a perfectly phrased greeting under all conditions, it risks becoming disconnected from the environment it is meant to improve.


Once those expectations are embedded in software, they become harder to question. The algorithm becomes policy.


The Authenticity Problem

Having worked in face-to-face customer service myself, I know that the best interactions were rarely scripted. Regular customers would come in, and you would adjust instantly. You might joke with them. You might take the piss in a friendly way. You might shorten the greeting entirely because familiarity made it unnecessary. That rapport is built over time and trust. Would an AI system recognise that as excellent service? Or would it mark down the interaction because the expected keywords were missing?


Hospitality is dynamic. It depends on reading the room, reading the person, and reading the moment. If workers begin focusing on hitting verbal benchmarks rather than engaging naturally, the interaction risks becoming mechanical. Customers can tell the difference between genuine warmth and box-ticking politeness. Ironically, quantifying friendliness may reduce the very authenticity companies are trying to protect.


Surveillance or Support?

This is where the tone of the debate shifts. Because even if the system is introduced as a supportive tool, the psychological reality of being monitored is not neutral.

Anyone who has worked in customer-facing roles knows that service environments are already performance spaces. You are representing the brand; you are expected to maintain composure and remain polite, even when customers are not. That emotional regulation is part of the job. Now imagine adding a layer where your tone and phrasing are being analysed in real time by software.


[Image: A hand holding a cassette recorder, with blurred figures in business attire seated at a table in the background.]

Even if managers insist it is not punitive, the awareness that your speech is being measured changes behaviour. You begin to think not just about the customer in front of you, but about whether the system has “heard” the right words. In high-pressure environments, that is another cognitive load. Another thing to get right. Over time, that kind of monitoring can subtly alter workplace culture. It can shift service from something relational to something performative in a more rigid way. Employees may begin speaking not to connect, but to comply, and when compliance becomes the goal, service risks losing its texture.


Supportive technology tends to feel like something that works with you. Surveillance, even when softly framed, feels like something that watches you. The distinction matters, particularly in lower-wage sectors where workers have limited influence over policy decisions.


The Broader Direction of Travel

What makes this story significant is that it does not exist in isolation. It is part of a wider pattern in which AI is moving steadily from automating tasks to evaluating behaviour.

First, algorithms helped optimise stock levels and predict demand. Then they began assisting with scheduling and logistics. Now they are increasingly assessing how people speak, how they respond and how closely they align with brand standards. Each step may seem incremental. Taken together, they represent a fundamental shift in how work is structured and supervised.


Historically, managers evaluated service quality through observation, feedback and experience. There was room for interpretation, for context, for understanding that a difficult shift or a complex interaction could influence tone. Human judgment allowed for nuance.

When evaluation becomes data-driven, nuance can be harder to capture. Metrics tend to favour what is measurable. Words are measurable. Frequency is measurable. Context is far less so. The risk is not that AI becomes tyrannical overnight. The risk is that over time, it narrows the definition of good service to what can be quantified. And what can be quantified is rarely the full story.


A Question Worth Asking

Technology reflects priorities. If a company invests in systems that measure friendliness, it is signalling that friendliness can be standardised, monitored and optimised like any other operational metric. But service is not an assembly line. It is interaction.


It is shaped by region, by culture, by individual personality and by the particular chemistry between staff and customer in that moment. It shifts depending on who walks through the door. It changes across communities and demographics. It even evolves over the course of a day. When AI systems define behavioural benchmarks, someone has decided what the ideal interaction sounds like. That definition may come from brand research, from head office strategy sessions or from consultants analysing survey data. It may be carefully considered. It may be well-intentioned, but it is still a definition created at a distance from the frontline.


Many workplace standards across industries are designed by people who have not stood behind a till in years. That does not invalidate their expertise, but it does introduce a gap between theory and practice. When those standards are encoded into algorithms, that gap can become structural. The core issue is not whether AI can improve service. It is whether those deploying it are prepared to listen as carefully to staff experience as the system listens to staff voices. If friendliness becomes a metric, then it is fair to ask who sets the parameters, how flexible they are, and whether they reflect the messy, human reality of service work.


Because once the headset becomes the evaluator, the definition of "good" may no longer be negotiated on the shop floor, and that is a shift worth paying attention to.
