- The System Works, But Not for the People Living Next to It: What Wigan Tells Us About Modern Development
A Local Story That Feels Increasingly Familiar

What is happening in parts of Wigan may look, at first glance, like a local planning dispute. Large-scale warehouse developments rising close to residential areas, residents voicing concerns about noise, traffic, flooding and loss of privacy, and a council insisting that the proper processes have been followed. On paper, it is a story that fits neatly within the rules of modern development.

[Image: Orwellian Wigan, by Gary Rogers]

Yet speak to those living next to these sites, and a different picture begins to emerge. Homes overshadowed by vast industrial buildings, concerns about drainage and water flow, increased vehicle movement on roads never designed for that volume, and perhaps most unsettling of all, security infrastructure that now looks directly into spaces that were once considered private. These are not abstract planning concerns. They are changes that reshape everyday life.

The more closely you look, the clearer it becomes that Wigan is not an isolated case. It is a visible example of something that is happening across the UK, where the system functions as intended, but the outcome does not feel like a fair balance for the people most affected.

When Approval Does Not Mean Acceptance

There is no suggestion that these developments have been built without permission. They have moved through the planning system, been assessed, debated and ultimately approved. Councils are required to consider economic benefits, land use, infrastructure and environmental factors, and in many cases, warehouse developments tick the right boxes. They promise jobs, investment and long-term economic activity. They make use of land that may already be designated for industrial or mixed use. From a planning perspective, they can be justified.

But there is a gap between approval and acceptance, and it is in that gap where much of the frustration sits. Residents can object, sign petitions and attend consultations, yet still find that the outcome is largely unchanged. The process allows for participation, but not necessarily for influence. This is not a failure of procedure. It is a limitation of what the procedure is designed to achieve.

Living With the Consequences

What matters most is not the planning application itself, but what happens once the development becomes reality. In Wigan, residents have raised concerns that go beyond aesthetics. Flooding has been linked, rightly or wrongly, to changes in land use and drainage patterns. Increased traffic brings noise, congestion and safety worries. Infrastructure that once served a smaller population struggles to cope with the added demand.

Then there are the less obvious impacts. Security systems, including CCTV, are often installed as part of large industrial sites. While they serve a legitimate purpose, their placement can have unintended consequences for neighbouring homes, introducing a level of surveillance that feels intrusive in what were previously private spaces.

Individually, each of these issues might be manageable. Together, they represent a significant shift in how people experience their own neighbourhood.

The Rise of the Warehouse Economy

To understand why this is happening, it is necessary to look beyond Wigan. The growth of online retail, next-day delivery and global supply chains has created an enormous demand for logistics space. Warehouses are no longer remote facilities placed far from where people live. They are increasingly positioned close to major roads and population centres, where they can serve customers more efficiently.

[Image: Poundland Warehouse, South Lancs Industrial Estate, Bryn, by Gary Rogers]

Wigan, with its proximity to key motorway networks, is an ideal location from a logistics perspective. What makes sense for distribution networks, however, does not always align with the needs of residential communities. This tension is not unique to one town. It is a feature of a broader economic shift, where convenience and efficiency are prioritised, often at the expense of localised impact.

When Consultation Feels Like a Formality

A recurring theme in situations like this is the feeling that consultation exists, but does not meaningfully shape the outcome. Legally, councils are required to notify certain residents, publish plans and allow time for responses. In practice, that information can be difficult to access, easy to overlook or hard to interpret without specialist knowledge. By the time the scale of a development becomes fully understood, the process may already be too far advanced to change.

This creates a sense of decisions being made around people rather than with them. The framework allows for input, but the influence of that input can feel limited. It is here that trust begins to erode, not because rules have been broken, but because the experience of those rules does not feel equitable.

A System Designed for Balance, But Delivering Imbalance

Planning systems are built on the idea of balance. Economic growth must be weighed against environmental impact, infrastructure against demand, and development against community well-being. The difficulty is that these factors are not always equal in practice. Economic arguments are often clear, measurable and immediate. Community impacts, particularly those that affect quality of life, can be harder to quantify and easier to downplay.

Over time, this can lead to outcomes that consistently favour development, even when local resistance is strong. The system functions, but the balance it produces does not always feel fair to those who live with the results.

What Wigan Should Teach Us

If there is a lesson to be taken from Wigan, it is not that development should stop. Growth, investment and infrastructure are all necessary parts of a functioning economy. The lesson is that the current approach is leaving gaps that need to be addressed.

Communities need clearer, more accessible information at the earliest stages of planning. Consultation needs to feel meaningful rather than procedural. Infrastructure considerations, from drainage to transport, need to be treated as central, not secondary. And the lived experience of residents needs to carry more weight alongside economic arguments. Without these changes, situations like this will continue to repeat, not as isolated incidents, but as a pattern.

A Modern Norm That Deserves Scrutiny

What is happening in Wigan is not an anomaly. It is an example of how modern development is unfolding across the country. Large-scale projects are moving closer to where people live. Decisions are being made within systems that prioritise efficiency and growth. And communities are being asked, in effect, to adapt after the fact.

The system, in a technical sense, is working. Applications are processed, regulations are followed and developments are delivered. But for the people living next to them, the outcome can feel very different. And that is where the conversation needs to shift, from whether the system functions to whether it functions fairly.
- GDPR: Neither Use Nor Ornament, or Just Quietly Being Stretched?
A Law That Promised Control

It is difficult to forget the moment GDPR arrived. In 2018, inboxes filled overnight with privacy updates, consent requests and new terms. For a brief period, it felt as though something meaningful had shifted. Companies were being forced to explain themselves, and users were, at least in theory, being given control over how their data was used. The promise was simple enough. Clear consent, transparent data use and the ability to say no.

Fast forward to today, and that promise feels less certain. Not because GDPR has disappeared, but because everyday experience increasingly suggests that something is not quite working as intended. Settings are pre-enabled, choices are buried, and consent often feels like something you give by default rather than something you actively decide. That is where the question begins. Not whether GDPR still exists, but whether it still feels like it protects people in the way it was meant to.

The Reality People Are Experiencing

Spend a few minutes going through the settings of most modern apps or devices, and a pattern quickly emerges. Features that rely on data collection are often already switched on. Options to limit or disable them exist, but they are rarely presented in a way that invites easy understanding.

Consent, in many cases, has become something passive. It is tied to long terms and conditions, accepted in a single tap, and rarely revisited. The idea of being fully informed at the point of agreement feels increasingly distant from how these systems actually work. This creates a gap between expectation and reality. On paper, users have control. In practice, that control requires effort, awareness and persistence to exercise.

Not Broken, But Being Navigated

It would be easy to conclude from this that GDPR has failed, but that would not be entirely accurate. The law itself still sets out clear requirements around transparency, consent and data protection. It has led to real changes in how companies handle personal data, and it continues to provide a framework for enforcement and accountability.

The issue is not that the law is useless. It is that companies have learned how to operate within it in ways that minimise disruption to their business models. One of the most significant tools in this regard is the concept of “legitimate interest”. This allows organisations to process certain types of data without explicit consent, provided they can justify a valid reason for doing so. In theory, this is a practical necessity. In practice, it can be stretched to cover a wide range of activities that users might reasonably expect to opt into rather than opt out of. This is where GDPR begins to feel less like a shield and more like a framework that can be carefully worked around.

The Rise of Design Over Consent

Another factor shaping this experience is the way interfaces are designed. Consent is no longer just a legal concept. It has become part of user experience design, and not always in a way that favours the user. Options to accept are often prominent and easy, while options to decline or customise are less visible or require additional steps. These patterns are sometimes referred to as “dark patterns”, though they are not always labelled as such. They do not remove choice entirely, but they guide it in a particular direction.

The result is that many users end up agreeing to things not because they fully understand or support them, but because the process of declining is inconvenient. Over time, this shapes behaviour, turning consent into something that feels automatic.

Legal Compliance Versus Real Understanding

At the heart of the issue is a distinction that is easy to overlook. There is a difference between being legally compliant and being genuinely transparent. A company can meet the technical requirements of GDPR while still presenting information in a way that is difficult to interpret. Long privacy policies, complex language and layered settings may satisfy regulatory standards, but they do not necessarily lead to informed users.

This creates a situation where protection exists in principle, but feels distant in practice. Users are covered by rules they rarely engage with, and decisions about their data are often made in environments that prioritise speed and convenience over clarity.

Why It Feels Like It Is No Longer Working

The frustration many people feel does not come from a single failure, but from accumulation. Each small instance, a pre-ticked box, a hidden setting, a feature enabled by default, adds to the sense that control is slipping away. When that experience is repeated across multiple platforms and devices, it begins to shape perception. GDPR is still there, but it becomes harder to see its impact in everyday use.

That is how a regulation designed to empower users can start to feel as though it is neither use nor ornament. Not because it has no value, but because its presence is no longer obvious in the moments that matter.

The Gap Between Law and Experience

What this ultimately highlights is a gap between intention and implementation. GDPR was designed to give individuals meaningful control over their data. That intention remains valid. The challenge is that technology has evolved quickly, and companies have adapted just as quickly to ensure that their models continue to function within the boundaries of the law.

As a result, the letter of the regulation is often maintained, while the spirit becomes harder to recognise. Consent exists, but it is shaped by design. Transparency exists, but it is buried in complexity. This does not mean the law has failed. It means it is being tested in ways that were perhaps inevitable.

Where This Leaves the User

For the average user, the situation is both simple and frustrating. The protections are there, but accessing them requires time, knowledge and attention that most people do not have to spare. This creates a form of imbalance. Companies understand the systems they operate within. Users, more often than not, are reacting to them. Closing that gap would require more than just regulation. It would require a shift in how consent is presented, how choices are offered and how transparency is delivered.

A Regulation Still Worth Having

It is important not to lose sight of the fact that GDPR still matters. It has introduced standards that did not exist before and continues to provide a basis for holding organisations accountable. The problem is not that it is useless. It is that its effectiveness depends on how it is applied, and at the moment, that application often favours compliance over clarity.

That leaves users in an uncomfortable position. Protected, but not always informed. Covered, but not always in control. And that is why, for many, it can feel as though something that was meant to make a clear difference has become harder to see in everyday life.
- You Bought It, So Why Is It Changing Without You Knowing?
When Devices Start Making Decisions Without Asking

It started as one of those small discoveries that does not seem like much at first, until you realise what it actually represents. A setting, buried deep inside TikTok, already switched on, allowing artificial intelligence to remix content without any clear moment of consent. There was no prompt, no obvious notification, no point at which you were asked whether this was something you wanted. It had simply been enabled, quietly, as if the decision had already been made.

That moment might have been easy to ignore on its own. But it did not stop there. A television, already purchased and sitting in the living room, had begun to behave differently as well. Sound settings had been “upgraded” to AI-enhanced modes, new features had appeared in menus, and adverts had started to creep into spaces that had once been clean. Again, none of this was presented clearly at the point of use. It was only by going into the settings, digging through layers of options, that the extent of what had been switched on became visible.

Individually, these changes feel small. Taken together, they point to something much larger. The devices and platforms we use are no longer static, and more importantly, they are no longer waiting to be asked before they change.

The Shift Towards Default Consent

What sits behind this is a design choice that has become increasingly common across technology. New features, particularly those linked to artificial intelligence or personalisation, are not being introduced as clear choices. Instead, they arrive already active, operating on the assumption that most users will not notice, or will not take the time to switch them off.

In theory, nothing has been taken away. The option to disable these features still exists. In practice, that option is often buried in menus that require both time and technical confidence to navigate. The default setting does most of the work, and the burden shifts onto the user to undo a decision they never knowingly made.

This is what makes the shift feel uncomfortable. It is not that choice has disappeared entirely, but that it has been quietly repositioned. Consent is no longer something you give in a clear moment. It is something assumed unless you go looking for it.

When Ownership Starts to Feel Conditional

There is a deeper frustration running through all of this, and it has less to do with any single feature than with what it suggests about ownership itself. When you buy something, particularly something as tangible as a television, there is a basic expectation that it belongs to you in a meaningful sense. You decide how it works, what it displays and how it behaves in your home. That understanding has been part of consumer life for decades, and it is not an unreasonable one.

What has changed is that modern devices are no longer fixed objects. They are connected systems, capable of updating themselves, adapting their behaviour and introducing new functions long after they have been sold. The product you bought is no longer the product you necessarily continue to use. It evolves, often under the control of the company that made it rather than the person who paid for it.

This becomes particularly noticeable when advertising enters the equation. There is a clear difference between using a free service that relies on adverts and paying for a physical product that then begins to behave in a similar way. If a television is funded by advertising from the outset, that relationship is understood. When it appears after purchase, without clear agreement, it feels like something else entirely. It raises a simple but difficult question. If you have already paid for the product, why does it continue to behave as though it still needs to extract value from you?

The Language of “Enhancement”

Part of the reason these changes slip under the radar is the way they are presented. Features are rarely introduced in a way that invites scrutiny. Instead, they are framed as improvements, as upgrades, as enhancements designed to make the experience better. AI sound, smarter recommendations, more personalised content. On the surface, these sound like benefits, and in some cases they may well be.

But the language does more than describe the feature. It shapes how it is received. By positioning these changes as positive additions, the fact that they are enabled by default becomes less obvious. The emphasis is placed on what the feature does, rather than how it has been introduced. The result is a situation where the method of deployment is softened, even when it has meaningful implications for privacy and control.

Not a Rejection of Technology, but a Question of Transparency

It is worth being clear about what this is not. Most people are not resistant to new technology. Updates, improvements and new capabilities are part of what makes modern devices useful. The issue is not that features are being added, but how they are being introduced. There is a difference between being offered something and having it applied without a clear moment of agreement.

Transparency is not simply about making information available somewhere in a settings menu. It is about presenting that information in a way that allows people to make a genuine choice. When that clarity is missing, the relationship begins to feel uneven. The company decides what is enabled, and the user is left to discover it after the fact. That is not a partnership. It is a one-sided arrangement.

When Quiet Changes Become Normal

Perhaps the most subtle shift of all is how quickly this behaviour starts to feel normal. Devices update themselves regularly, platforms introduce new features without fanfare, and the experience changes in ways that are easy to overlook unless you are actively paying attention.

Over time, this creates a new baseline. What once might have raised questions becomes part of the background. The absence of clear consent stops feeling unusual, not because it has been resolved, but because it has been repeated often enough to seem expected. That is where the real concern lies. Not in any single feature, but in the gradual adjustment of expectations.

The Line That Should Still Exist

At its core, this is not a technical issue. It is a question about where control sits. Technology will continue to evolve, and devices will continue to improve. That is not in dispute. But there is still a line between offering something new and deciding on behalf of the user that it should already be in place. If a feature is genuinely valuable, it should not need to be hidden. It should be presented clearly, explained properly and chosen deliberately.

Because once that line begins to blur, ownership starts to feel less like something you have, and more like something you are temporarily allowed. And that is a very different relationship from the one most people thought they were buying into.
- Stop Killing Games: The Fight Over Who Really Owns What You Buy in the Digital Age
From Online Petition to Political Pressure

What began as frustration among gamers has now crossed into something far more serious. The Stop Killing Games movement, initially sparked by the shutdown of titles like The Crew, has moved beyond forums and social media into legal challenges and political debate. Consumer groups in Europe have backed legal action against publishers, arguing that players were misled into believing they owned products that could later be rendered unusable. At the same time, the campaign has reached the European Parliament, where discussions around digital ownership and consumer protection have begun to take shape. What was once dismissed as niche has become a test case for how digital goods are regulated.

The movement itself is led by creator Ross Scott, but it has grown well beyond any single figure. It now represents a broader unease about how modern products are sold, controlled and ultimately withdrawn. At its core, Stop Killing Games is not just about gaming. It is about a shift in how ownership works, and whether consumers have quietly lost more control than they realise.

What the Movement Is Actually Fighting For

Despite the name, the campaign is not demanding that every online game be supported indefinitely. Its central argument is more grounded than that. When a publisher decides to shut down a game, particularly one that requires constant server access, that decision often makes the entire product unplayable. Even single-player elements can disappear overnight. For players who paid for that experience, it raises a simple but uncomfortable question: what exactly was purchased?

The movement is calling for practical solutions rather than unrealistic guarantees. These include allowing offline modes when servers are closed, enabling private servers, or providing some form of end-of-life access that preserves functionality. The goal is not to prevent change, but to prevent total erasure. In many ways, it is a request to restore something that once felt obvious. If you buy something, you should be able to use it.

Ownership Versus Access in the Digital Economy

The deeper issue sits beneath the surface of gaming and extends into the structure of the digital economy itself. For decades, buying a product meant owning a physical object. A book, a film, a game cartridge or a disc. That ownership was simple and difficult to revoke. Once purchased, the item existed independently of the company that made it.

Digital products have altered that relationship. Today, many purchases are effectively licences rather than ownership. Access is granted under certain conditions, often tied to accounts, servers or ongoing support. When those conditions change, access can disappear. Gaming has become one of the clearest examples of this shift. Titles are increasingly designed as ongoing services, reliant on infrastructure controlled entirely by the publisher. The result is a situation where the consumer’s sense of ownership does not match the legal reality. Stop Killing Games has brought that contradiction into focus. It asks whether the language of buying still holds meaning in a system built on controlled access.

The Move From Products to Services

Part of the reason this issue has intensified is the way the gaming industry has evolved. Modern games are often no longer standalone products. They are platforms. They receive updates, expansions and live content over time. From a business perspective, this model offers clear advantages. It creates recurring revenue, extends engagement and allows companies to adapt their products continuously.

However, it also creates a dependency. The game is no longer something that exists on its own. It is something that functions only as long as the supporting systems remain active. When those systems are withdrawn, the product effectively ceases to exist.

This is not unique to gaming. Similar models are visible across software, media and even hardware. Subscription services, cloud-based tools and connected devices all rely on ongoing support to function. The difference is that games make the consequences of that model immediately visible. When a game is shut down, there is no ambiguity. It stops working.

Why This Moment Feels Different

The Stop Killing Games movement has gained traction now because it intersects with a broader shift in how people view digital ownership. There is a growing awareness that many of the things we “own” are conditional. Music libraries can disappear from platforms. Software can lose functionality. Devices can become limited when support ends. What once felt permanent now feels provisional.

This has created a sense that control is increasingly one-sided. Companies retain the ability to alter or remove products, while consumers have little recourse once a purchase has been made. The legal challenges emerging in Europe reflect that tension. They suggest that existing consumer protection frameworks may not fully account for the realities of digital goods. If those frameworks begin to change, the implications will extend well beyond gaming.

The Industry Perspective

Publishers and developers do not see the issue in the same way. Maintaining servers costs money. Supporting older titles can divert resources from new projects. In some cases, the technical structure of a game makes it difficult to separate offline and online components. There are also concerns about security, intellectual property and the potential for unauthorised modifications if private servers are allowed.

From this perspective, games are not static products but evolving services. Ending support is part of their lifecycle. The tension lies in the gap between that model and consumer expectations. Players are not always aware of the limitations attached to what they are buying, and when those limitations become visible, the sense of loss is immediate.

A Question That Goes Beyond Gaming

What makes Stop Killing Games significant is not just the issue it addresses, but the question it raises. If digital purchases can be altered or removed after the fact, what does ownership mean in the modern world? This question applies to far more than games. It touches on software, media and the increasing number of products that depend on connectivity and external control.

As more of life moves into digital systems, the balance between convenience and control becomes harder to ignore. The movement has gained attention because it makes that balance visible. It turns an abstract concern into a concrete example that people can understand.

Where This Could Lead

It is still unclear how this issue will be resolved. Legal cases are ongoing, and political discussions are in their early stages. The outcome could range from minor adjustments in how games are designed to more substantial changes in consumer protection law.

What is clear is that the conversation has shifted. The idea that digital products can simply disappear without consequence is being challenged in a way that feels more organised and more serious than before. For now, Stop Killing Games represents a growing pushback against a system that has quietly redefined ownership. Whether that pushback leads to lasting change will depend on how regulators, companies and consumers respond.

What began as a complaint about a single game has become something larger. It is now part of a broader debate about who controls the things we buy, and whether that control has already moved further away from the consumer than most people realised.
- Too Young for Gen X, Too Old for Millennials: The Generation That Grew Up Between Worlds
A Childhood That No Longer Exists, An Adulthood That Arrived Overnight

There is a particular kind of disorientation that comes with realising your life does not quite fit the categories you are given. For those born between the late 1970s and mid-1980s, that feeling is familiar. Officially, you are placed somewhere between Generation X and the Millennials, but in practice, neither label feels entirely accurate.

You might remember using a rotary phone as a child, waiting for the dial to spin back into place before trying again. You also now carry a smartphone that can do more in seconds than entire rooms of equipment once could. That contrast is not just technological. It defines an experience of growing up that sits between two distinct worlds. This is not simply a matter of nostalgia. It is a reflection of a generation that did not grow up in a stable cultural environment, but in the middle of a rapid and permanent transition.

Not Quite Gen X, Not Quite Millennial

Generational labels tend to assume continuity. They group people based on shared experiences, cultural references and social conditions that broadly align over time. The problem for those born roughly between 1976 and 1985 is that the ground shifted beneath them during their formative years.

Gen X, broadly speaking, grew up in an analogue world and entered adulthood before the internet reshaped everyday life. Millennials, by contrast, came of age alongside digital technology, with the internet already embedded in education, communication and culture. Those in between experienced something different. They had an analogue childhood, but a digital adolescence or early adulthood. They remember life before the internet not as a general historical idea, but as a lived reality. At the same time, they were young enough to adapt quickly when that world changed. The result is a group that overlaps with both generations but belongs fully to neither.

Growing Up Before Everything Changed

To understand this group, it helps to remember just how recently the digital world arrived. Childhood in the 1980s and early 1990s was still largely offline. Communication was slower and more deliberate. If you wanted to speak to someone, you called their house and hoped they were in. Plans were made in advance and rarely changed at short notice. Entertainment was physical and finite, whether it was tapes, television schedules or early video games that existed entirely within the home.

Information had weight to it. Encyclopedias sat on shelves, and finding an answer required time and effort. There was a natural limit to how much you could know and how quickly you could know it. For those who grew up in this environment, the world had boundaries that now feel almost unfamiliar.

Then the Shift Happened

The transition did not arrive gradually over centuries. It unfolded within a decade. By the mid to late 1990s, the internet began to enter homes. Email replaced letters, search engines replaced reference books, and communication started to accelerate. Mobile phones followed, initially basic and limited, before evolving into the always-connected devices we now take for granted.

For those in this in-between generation, this was not background noise. It was a visible and often confusing transformation. They were old enough to understand what was changing, but young enough to adapt without resistance. They learned digital systems rather than inheriting them. They remember the sound of dial-up connections, the uncertainty of early online spaces, and the novelty of being able to access information instantly. It was not simply the arrival of new tools. It was the rewriting of how life worked.

Living With Two Sets of Instincts

This dual experience has left a lasting mark. People in this bracket often carry what could be described as two sets of instincts. On one hand, there is a familiarity with independence, patience and offline thinking that aligns with Gen X. On the other hand, there is an ease with technology, communication and rapid adaptation that aligns more closely with Millennials.

This combination creates a perspective that is both flexible and, at times, sceptical. Technology is embraced, but not blindly. There is an awareness of what has been gained, but also of what has been lost. It also shapes how this group navigates modern life. They are comfortable using digital tools, but they are not entirely defined by them. They can remember a time when constant connectivity did not exist, and that memory acts as a quiet point of reference.

The Last to Remember, The First to Adapt

There is a simple way to describe this generation, and it captures the essence of the experience. They are the last people who clearly remember life before the internet, and the first who had to fully adapt to it.

That position carries a certain weight. It means they have seen the transition from limitation to abundance, from slower communication to instant access, from localised experience to global connection. It also means they understand that these changes were not inevitable. They happened, and they happened quickly.

Why This Generation Often Feels Overlooked

Despite this unique position, this group is rarely the focus of generational discussion. The narrative tends to favour broader, more easily defined categories. Gen X is associated with independence and scepticism. Millennials are linked to digital culture and social change. Those in between are often absorbed into one group or the other, depending on the context.

This lack of clear definition can create a sense of being overlooked, but it also reflects a deeper issue. The frameworks used to describe generations struggle when faced with periods of rapid transformation. They are designed for stability, not transition. As a result, the people who lived through that transition do not always fit neatly into the categories that follow.

A Bridge Between Two Eras

If there is a more accurate way to understand this generation, it is not as a misfit, but as a bridge. They connect two fundamentally different ways of living. They understand analogue systems because they grew up with them. They understand digital systems because they had to learn and use them as those systems emerged.

This makes them translators of a kind, able to move between perspectives that can sometimes feel disconnected. They can relate to those who find modern technology overwhelming, and to those who have never known anything else. In a world that continues to change at speed, that ability has value.

Looking Back, Looking Forward

The experience of growing up between worlds is not always easy to define, but it is increasingly relevant. As new technologies continue to reshape daily life, from artificial intelligence to further automation, the perspective of those who have already lived through one major transformation becomes more important. They understand that change is rarely smooth, that progress brings trade-offs, and that adaptation is as much about mindset as it is about tools.

To be too young for Gen X and too old for Millennials is, in many ways, to have had a front-row seat to one of the most significant cultural shifts in modern history. It may not come with a neat label, but it offers something else. A clear memory of what came before, and a grounded understanding of what came after.
- AI Is Taking Jobs Before It’s Ready, and That Should Concern Us All
A Shift That Feels Rushed, Not Earned The language around artificial intelligence has changed quickly. Only a few years ago, it was framed as a tool that would support workers, handle repetitive tasks and unlock new forms of productivity. In 2026, that tone has shifted. Companies are now cutting roles and openly pointing to AI as part of the justification, presenting it as an inevitable next step rather than a choice. What makes this moment uncomfortable is not simply that jobs are being lost. It is the sense that those decisions are being made ahead of the technology’s actual capability. There is a growing gap between what AI can reliably do and what businesses are claiming it can replace, and that gap is being filled not with caution, but with cost-cutting logic. We Have Seen Disruption Before, But This Feels Different There is a tendency to compare the current moment to earlier waves of automation. The Luddites are often brought up, sometimes dismissively, as a warning against resisting progress. It is true that machinery transformed industries, from textiles to farming, and reduced the need for large numbers of workers. Over time, new forms of employment emerged and economies adjusted. But that comparison only goes so far. Those earlier transitions were grounded in technologies that demonstrably outperformed what they replaced in clear, physical terms. A mechanical loom could produce more cloth, more consistently, than a human worker. A tractor could do the work of many labourers in the field with obvious, measurable gains. AI does not yet offer that same clarity. It produces convincing outputs, but not consistently reliable ones. It can assist, accelerate and sometimes impress, but it still requires oversight, correction and, in many cases, human judgment to prevent mistakes. The comparison with past automation begins to look strained when the replacement is not fully capable of doing the job on its own. 
The Technology Still Struggles With the Real World Away from carefully controlled demonstrations, the limitations of AI are not hard to find. Autonomous vehicles, long presented as just around the corner, continue to encounter problems when faced with the unpredictability of real roads. Edge cases, unusual conditions and split-second decisions still expose gaps that human drivers handle instinctively. Delivery robots, another widely promoted example of automation, have faced similar issues. Navigating complex urban environments, dealing with obstacles, weather and human behaviour has proven far more difficult than early projections suggested. In many cases, these systems still rely on remote monitoring or are restricted to limited areas. Even in digital spaces, where AI performs best, the cracks are visible. Generated content can be persuasive but inaccurate. Customer service systems can feel efficient from a company’s perspective while becoming frustrating and ineffective for the people using them. The technology works, but not in a way that consistently justifies removing the human layer entirely. So, Why Are Jobs Being Cut Now? If the technology is not fully ready, the question becomes unavoidable. Why are companies acting as if it is? The answer sits less in engineering and more in economics. Labour is one of the highest costs any business carries. Reducing that cost, even partially, has an immediate and measurable impact on profitability. AI does not need to be perfect to make that calculation appealing. It only needs to be cheaper than the alternative. This is where the conversation moves beyond innovation and into something more uncomfortable. The push towards AI adoption is not being driven solely by technological readiness. It is being accelerated by financial incentives, investor pressure and the constant demand to operate leaner and faster. 
To put it plainly, the decision to replace workers is often made because it makes financial sense in the short term, not because the technology has truly earned that level of trust. The Risk of Replacing Too Soon There is a cost to moving at this pace, and it is not always immediately visible on a balance sheet. When roles are removed and replaced with systems that still require supervision, the burden does not disappear. It shifts. Errors increase. Quality becomes inconsistent. Customers notice the difference, even if they cannot always articulate it. What appears efficient internally can translate into a poorer experience externally. Over time, that erosion matters. There is also a broader risk to the workforce itself. When entry-level and mid-level roles are reduced, the pipeline for developing future expertise narrows. If fewer people are trained, fewer people gain experience, and the long-term capacity of industries begins to weaken. These are not abstract concerns. They are the predictable consequences of adopting technology faster than it can reliably support the roles it is expected to fill. Progress Is Not the Same as Acceleration None of this is an argument against technological progress. AI will continue to develop, and in time, it may reach a level where it can genuinely replace certain types of work without compromise. That is the trajectory history suggests. The issue is timing. Progress becomes something else when it is forced, when it is pushed into place before it is ready, and when the primary driver is cost reduction rather than capability. There is a difference between innovation that expands what is possible and implementation that narrows what is acceptable. The current moment sits uncomfortably between the two. A Decision Disguised as Inevitability Perhaps the most concerning aspect of all is how these changes are being framed. 
The language used by companies often suggests that this is simply the direction of travel, an unavoidable step in the evolution of technology. It is not. These are decisions made by people, influenced by financial pressures and strategic priorities. Presenting them as inevitable removes accountability and shuts down the conversation that should be taking place about readiness, responsibility and long-term impact. The Question We Should Be Asking AI is already taking jobs. That part is no longer in doubt. The more important question is whether it deserves to. At the moment, the answer is far less certain than the headlines suggest. The technology shows promise, but it also shows clear limitations. Replacing large numbers of workers with systems that still struggle in real-world conditions is not a sign that progress is reaching its peak. It is a sign of decisions being made ahead of the evidence. If there is a lesson from history, it is not that disruption should be resisted, but that it should be grounded in reality. When the balance shifts too far towards short-term gain, the consequences tend to follow. And right now, there is a growing sense that the balance is shifting too quickly.
- After the Moon: What Happened to Progress in the World That Followed 1969?
When the Future Seemed to Arrive All at Once In July 1969, humanity did something that felt definitive. For those watching, it was not just a technological achievement. It carried the sense that the future had arrived in full view. If humans could stand on the Moon, then the rest seemed inevitable. Space travel would expand, technology would accelerate, and the decades ahead would continue that same upward trajectory. Now imagine you were among those watching at 75 years old. You had already lived through the transformation from oil lamps to electricity, from horse-drawn streets to aircraft, from handwritten letters to television broadcasts. The Moon landing would have felt like the final, extraordinary confirmation that progress had no ceiling. And yet, what followed was not quite what that moment seemed to promise. The World Did Not Stop, But It Changed Direction The years after 1969 were not a period of stagnation in any simple sense. In fact, they brought some of the most profound changes in human history. The difference is that progress became less visible, less unified, and in many ways less reassuring. The late 20th century saw the Cold War come to an end, reshaping global politics. The Berlin Wall fell in 1989, and the Soviet Union dissolved shortly after, bringing an end to a geopolitical structure that had defined the post-war world. Europe reorganised itself through deeper cooperation, leading to the formation and expansion of the European Union. At the same time, the global economy became more interconnected. Trade expanded, supply chains stretched across continents, and financial systems became increasingly complex. The world that emerged was more integrated than ever before, but also more dependent on fragile networks. This was progress, but it was not the kind that could be captured in a single image like the Moon landing. 
The Digital Revolution Rewrote Everyday Life If the earlier era was defined by physical transformation, the decades after 1969 were defined by something less tangible but no less powerful. The rise of personal computing, followed by the internet, altered the structure of daily life. By the early 21st century, communication, work, entertainment and even social relationships had begun to move into digital spaces. Smartphones then placed that connectivity into people’s pockets, creating a world that was permanently online. This was a revolution of scale and speed. Information that once took days or weeks to travel could now move instantly. Entire industries were reshaped or replaced. New forms of work and culture emerged. Yet for all its impact, the digital revolution lacks the visual clarity of earlier breakthroughs. A smartphone does not feel as dramatic as a rocket launch, even if its influence is arguably broader. Why Progress Feels Different Now This shift in perception is central to understanding why the post-1969 world can feel slower, even when it is not. Between 1894 and 1969, progress was visible in everyday surroundings. Streets changed. Homes changed. Transport changed. The world became recognisably different within a single lifetime. After 1969, much of the change moved beneath the surface. Networks, software and data became the drivers of transformation. These are harder to see, and therefore easier to overlook. There is also the question of expectation. The Moon landing set a psychological benchmark. It suggested that the future would continue to deliver breakthroughs of similar scale and drama. When that did not happen in the same way, it created a sense of slowdown, even as other forms of progress accelerated. The Role of Money and Incentives This is where the question of money and greed becomes relevant, though not in a simplistic sense. 
In the earlier part of the 20th century, many of the most significant developments were driven by governments, public investment or the demands of war. Electrification, infrastructure and the space race itself were not primarily profit-driven. They were strategic, national or collective efforts. In the decades after 1969, innovation became increasingly shaped by markets. Private companies began to play a larger role in determining which technologies advanced and how quickly. This shift did not stop progress, but it changed its direction. Technologies that offered clear commercial returns, particularly in the digital and consumer sectors, moved rapidly. Meanwhile, areas that required long-term investment with uncertain profit, such as large-scale infrastructure or energy transformation, often progressed more slowly. The result is a world where innovation continues, but is unevenly distributed and often aligned with economic incentives rather than collective ambition. A More Complex and Uneven World The post-1969 era has also been marked by challenges that complicate any straightforward narrative of progress. The HIV/AIDS crisis reshaped public health and exposed global inequalities. Climate change emerged as a defining issue, forcing a reckoning with the environmental cost of industrial growth. The COVID-19 pandemic demonstrated both the strengths and vulnerabilities of a globally connected world. These are not signs of stagnation, but reminders that progress is not linear or universally positive. The same systems that enable rapid advancement can also create new risks. In the UK, as in many other countries, these shifts have been felt in everyday life. Economic pressures, housing challenges and debates over public services sit alongside technological advancement, creating a more complicated picture of what progress actually means. From the Moon to the Age of AI Today, in 2026, the world stands at another threshold. 
Artificial intelligence, once confined to research labs, is now entering daily use. Systems capable of generating text, images and analysis are beginning to reshape work and creativity. At the same time, space exploration has returned to the public eye through new missions, including renewed efforts to send humans beyond low Earth orbit. And yet, the mood is different from 1969. There is less certainty that each breakthrough leads to a better world. Progress continues, but it is accompanied by questions about control, impact and long-term consequences. A Different Kind of Future The decades after the Moon landing did not deliver a simple continuation of the story that began before it. Instead, they introduced a more complex and less predictable phase of human development. The world did not stop moving forward. It became faster, more connected and more technologically advanced. But it also became more fragmented, more unequal and more difficult to interpret. For those who watched Apollo 11 at 75, the Moon landing may have felt like the culmination of a lifetime of progress. What followed would have been harder to define, not because less was happening, but because so much of it was happening in ways that were less visible, less shared and less certain. The future did not disappear after 1969. It simply became harder to recognise.
- How to Know When You're Ready to Start a Home Business Abroad
For new international home business owners, deciding to start a home business often comes down to timing versus uncertainty. The challenge is that a promising idea can look “ready” on paper, while everyday realities (permits, taxes, banking access, shipping limits, or housing rules) change the true cost and effort outside the United States. A simple home business opportunity evaluation helps separate enthusiasm from practical readiness by surfacing the non-US entrepreneurial considerations that commonly catch beginners off guard. With the right lens on global small business startup factors, the start decision becomes clearer.
Quick Readiness Checklist
- Evaluate profitability factors to confirm your home business can earn reliably abroad.
- Assess the home space to create a workable, distraction-limited office setup.
- Review your skills and experience to spot gaps you must fill before launching.
- Calculate startup capital requirements to cover costs and sustain early operations.
- Plan time management and local compliance steps to run smoothly and legally.
Understanding What “Ready” Really Means To make a home business abroad work, “ready” means your basics line up in real life, not just in your head. That includes simple profitability math, a workable home office setup, an honest skill check, enough startup capital, enough time in your week, and a clear view of local rules. This matters because most early mistakes are predictable and expensive. Many small businesses fail because of poor business planning and funding gaps, and moving countries can amplify both. When you assess readiness upfront, you protect your savings, reduce stress at home, and avoid compliance surprises. Think of it like packing for a long trip. Profitability is your ticket, capital is your emergency cash, time is your schedule buffer, and regulations are the border checks. Your entrepreneurial fit is your ability to adapt when the plan changes.
Build a Start-or-Wait Readiness Checklist
This checklist helps you decide whether to launch your home business abroad now, postpone until key gaps are fixed, or adjust your idea to fit reality. It keeps the decision practical by testing your market, capabilities, legal footing, cash, and weekly capacity.
- Review local economic conditions: Start by scanning basics that affect demand: typical prices, competitors, customer buying habits, and how people actually discover services (local directories, messaging apps, word-of-mouth). If you can, talk to 5 to 10 locals in your target audience and ask what they pay now, what they dislike, and what would make them switch.
- Rate your skills and operational readiness: List the top 8 to 12 tasks your business requires (selling, delivery, customer support, bookkeeping, language, tech setup) and score yourself 1 to 5 on each. Close the biggest two gaps with a simple fix: a short course, a template, a weekly practice block, or outsourcing one task so your launch does not stall.
- Confirm local requirements and friction points: Write down what you need to operate legally: visa or work permissions, registration steps, any local licenses, and whether you can run the business from your address. Add one “how will this work daily?” check, such as testing your customer contact flow, since a phone system that is hard to reach can quietly kill early sales.
- Map a starter budget and survival runway: Create a one-page budget with three columns: one-time setup costs, monthly operating costs, and personal living costs you must still cover. Then calculate a runway number: cash available divided by monthly burn, and decide your minimum target (often 3 to 6 months) before you commit to full speed.
- Apply time-management rules and make the decision: Block your week into fixed commitments first (job, family, admin), then schedule 5 to 10 focused hours for the business and protect them like appointments.
Plan for consistency: research suggests a new habit can take around 66 days to form, so your routine needs enough runway to stick. If you cannot hold the hours for four straight weeks, choose “later” or redesign the offer to require less ongoing time.
Common Questions Before Starting From Home Abroad
Q: How can I tell if I have enough time and energy to commit to a home-based venture?
A: You are ready when you can protect a small, repeatable work block most weeks without sacrificing sleep or key family duties. Track your energy for two weeks, then test a “minimum schedule” you can keep even during busy days. If that trial creates constant friction, simplify the offer or delay the launch.
Q: What space considerations should I keep in mind to maintain balance between my home life and new work activities?
A: Choose one dedicated zone with clear boundaries, even if it is a small desk and a storage bin. If you are American and you plan to claim any home-related deductions later, the IRS notes that the term home includes many living setups, so keep your work area and records distinct. Agree on quiet hours and a shutdown routine, so work does not spill into evenings.
Q: How can I prepare myself mentally and emotionally to manage the uncertainties of starting something new from home?
A: Expect mixed weeks and build a simple coping plan: a daily start ritual, one priority goal, and a fixed stop time. Research suggests the direct effect of working from home on well-being is not automatically positive or negative, so your routines and support matter. Consider a weekly check-in with a friend or peer group to reduce isolation.
Q: What steps can I take to stay organised and avoid feeling overwhelmed in my daily routine?
A: Use one task list, one calendar, and one “admin hour” each week for invoices, messages, and compliance notes. Create a simple filing routine with folders for income, expenses, tax, and legal documents, then save receipts the same day.
When forms pile up, combine related PDFs into a single labelled record per month so nothing gets lost.
Q: What if I need help managing the financial aspects of starting a home-based venture?
A: Start with a one-page cash flow: expected income, fixed costs, variable costs, and a buffer for tax and fees in your host country. If the rules feel unclear, get a short consultation with a qualified local accountant or tax adviser who understands cross-border situations. Keep a clean paper trail from day one to lower stress at filing time.
Commit to a Clear Start Date for Your Home Business Abroad
Starting a home business abroad can feel risky when markets, rules, and family demands keep shifting at once. The steady way forward is informed decision-making: weigh the key factors, choose simple assumptions, and plan around what you can verify. With that mindset, motivation becomes less about confidence and more about clarity and follow-through. Readiness is proven by one verified decision, not endless preparation. Choose one next move (validate demand, close one readiness gap, or set a realistic start date) before investing more time or money. That restraint builds stability and resilience as you grow across borders.
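The "starter budget and survival runway" step in the readiness checklist earlier in this article comes down to simple arithmetic: divide available cash by total monthly burn and compare the result with a minimum target of 3 to 6 months. A minimal Python sketch of that check (the figures and function names are illustrative, not from the article):

```python
# Sketch of the "starter budget and survival runway" check from the
# readiness checklist. All figures below are placeholder examples.

def runway_months(cash_available: float, monthly_burn: float) -> float:
    """Months of runway: cash available divided by total monthly spend."""
    if monthly_burn <= 0:
        raise ValueError("monthly burn must be positive")
    return cash_available / monthly_burn

def ready_to_commit(cash_available: float,
                    operating_costs: float,
                    living_costs: float,
                    minimum_months: float = 3.0) -> bool:
    """True if runway meets the minimum target (often 3 to 6 months)."""
    burn = operating_costs + living_costs  # total monthly burn
    return runway_months(cash_available, burn) >= minimum_months

# Example: 9,000 in savings, 800/month operating, 1,600/month living
# gives 9000 / 2400 = 3.75 months of runway.
print(runway_months(9000, 2400))          # 3.75
print(ready_to_commit(9000, 800, 1600))   # True at the 3-month minimum
```

The same numbers fail a stricter 6-month target (`ready_to_commit(9000, 800, 1600, minimum_months=6.0)` returns `False`), which is the "decide your minimum target before you commit" point made in the checklist.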
- From Oil Lamps to the Moon: The Lifetime That Witnessed the Modern World Being Built
The Moment That Redefined What Was Possible By the summer of 1969, humanity was no longer confined to Earth. As Apollo 11 touched down on the lunar surface, millions watched in real time as Neil Armstrong stepped onto the Moon. It was not simply a scientific achievement. It was a moment that redefined the limits of what human beings could do, collapsing centuries of imagination into a single, grainy broadcast. Now consider this. Imagine you were 75 years old as you watched it unfold. You would have been born in 1894, into a world that, in many ways, still belonged to the 19th century. What you witnessed over those seven and a half decades would not feel like gradual progress. It would feel like the entire world had been rebuilt around you. A Childhood Lit by Flame, Not Electricity In 1894, modern life had not yet taken hold in the way we understand it today. Electricity existed, but it was far from universal. Many homes across Britain and beyond still relied on gas lighting, oil lamps or candles. Streets were dim, nights were quieter, and daily life was bound more closely to natural light. Transport was slow and grounded. Horses dominated the roads, and while early motor cars had begun to appear, they were rare and unreliable. Travel over long distances was possible by train or ship, but it was not routine in the way it would later become. Communication was deliberate and patient. Letters carried news across towns and countries. The telegraph existed, but it was largely confined to business and official use. The idea of instant, voice-based communication between homes was still emerging. Medicine, too, was limited. There were no antibiotics. Infections that are now easily treated could prove fatal. Life expectancy was shorter, and the risks of illness were woven into everyday existence. This was the world into which a person born in 1894 would open their eyes. The Machine Age Begins to Take Hold As the new century unfolded, change began to accelerate. 
The early 1900s saw the rise of the motor car from novelty to necessity. Henry Ford’s introduction of assembly line production transformed manufacturing, making vehicles more affordable and gradually more common. Roads began to change. Cities began to expand. Electricity spread steadily, first through industry and public spaces, then into homes. It altered how people lived, worked and rested. Artificial light extended the day. New appliances began to reduce the physical burden of domestic life. At the same time, communication evolved. The telephone became more widely available, and radio emerged as a powerful new medium. For the first time, people could sit in their homes and hear voices from across the country, sharing news, music and major events in real time. The world was becoming faster, more connected and increasingly mechanised. War on an Industrial Scale For someone born in 1894, the First World War would arrive just as they reached adulthood. Beginning in 1914, it introduced a scale of conflict that had never been seen before. Industrial capacity was turned towards warfare, producing weapons, vehicles and technologies that transformed how wars were fought. Trench warfare, machine guns and chemical weapons created a brutal and prolonged stalemate across Europe. The war reshaped borders, economies and societies. It also left a lasting psychological mark on those who lived through it. The decades that followed brought both recovery and instability, culminating in the Second World War from 1939 to 1945. This conflict expanded across continents and accelerated technological development at an extraordinary pace. Radar, advanced aircraft and early computing all emerged or matured during this period. The war ended with the use of atomic weapons, introducing a new and deeply unsettling dimension to global power. For a single lifetime to contain two world wars is, in itself, a staggering reality. 
The Home Becomes Modern Between and after these wars, everyday life began to change in ways that were just as profound, if less dramatic. Electricity became a standard feature of homes. Appliances such as refrigerators, washing machines and vacuum cleaners began to transform domestic routines. Tasks that once took hours of physical effort could now be completed far more efficiently. Entertainment shifted as well. Cinema became a dominant cultural force, bringing stories and news to mass audiences. By the 1950s and 1960s, television entered the home, creating a shared national and, at times, global experience. It is difficult to overstate the significance of this shift. A person who grew up without electricity could now sit in their living room and watch events happening on the other side of the world as they unfolded. The Science That Changed Everything Alongside these visible changes, deeper scientific revolutions were taking place. The early 20th century saw breakthroughs in physics that redefined our understanding of reality. Einstein’s work on relativity and the development of quantum mechanics challenged long-held assumptions about space, time and matter. Medicine advanced rapidly. The discovery of penicillin in 1928 marked the beginning of the antibiotic era, transforming the treatment of infections and saving countless lives. Vaccination programmes expanded, and surgical techniques improved. Computing, in its earliest forms, began during the Second World War. These machines were large, complex and limited, but they laid the groundwork for the digital systems that would follow. These were not isolated developments. Together, they reshaped how humanity understood itself and the universe it inhabited. From Flight to Space Early in this lifetime, powered flight was still a new and uncertain achievement. The Wright brothers had made their first flight in 1903, nine years after this lifetime began, and aviation remained experimental.
By the mid-20th century, aircraft had become faster, more reliable and central to both war and travel. Commercial aviation began to take shape, shrinking the distances between countries and continents. Then, in the late 1950s and 1960s, attention turned upwards. The launch of Sputnik in 1957 marked the beginning of the space age. Yuri Gagarin’s flight in 1961 proved that humans could leave Earth. What followed was a rapid escalation of ambition, driven by Cold War rivalry and scientific curiosity. Less than twelve years after the first satellite entered orbit, humans were walking on the Moon. Watching the Moon Landing at 75 For someone born in 1894, watching the Moon landing in 1969 would not simply be impressive. It would be almost beyond comprehension. They would remember a childhood without electricity, a youth shaped by horse-drawn travel and handwritten letters. They would have lived through two world wars, witnessed the arrival of radio and television, and adapted to a world that became faster and more complex with each passing decade. And now, in their mid-seventies, they would be watching human beings stand on another world. It is the compression of these changes that makes the moment so powerful. Progress did not unfold over distant centuries. It happened within a single human lifetime. A World Remade Within One Generation The period from 1894 to 1969 represents one of the most concentrated bursts of transformation in history. In those 75 years, humanity moved from a largely local, mechanical existence to a global, electrified and technologically advanced society. The shift touched every aspect of life, from how people travelled and communicated to how they understood health, science and their place in the universe. The Moon landing stands as the most visible symbol of that transformation, but it is only the endpoint of a much larger story. 
To have lived through that era was to witness the modern world being built, piece by piece, until it no longer resembled the one you were born into. And as the images from 1969 flickered across television screens, for some viewers, it was not just history being made. It was the final confirmation of how far everything had come.
- Artemis II Returns From the Moon as Old Conspiracies Find New Life Online
A Mission in Motion, Not Preparation

Artemis II is no longer a promise or a plan. It is a live, unfolding mission. Having successfully travelled beyond low Earth orbit and looped around the Moon, the crew are now on their return journey to Earth. In doing so, they have already secured their place in history as the first humans in more than half a century to venture into deep space.

The mission itself has been widely followed, not just through official NASA channels but across social media, where images, clips and astronaut updates have circulated in near real time. Among the most striking moments so far have been the views of Earth from lunar distance. These are not abstract renderings or archival references. They are current, high-resolution visuals captured by a crew physically present in deep space. For many, it has been a powerful reminder of both scale and perspective, reinforcing the reality of human spaceflight beyond Earth orbit.

Yet as these images spread, something else has travelled with them.

The Return of a Familiar Narrative

Alongside the excitement and global attention, Flat Earth narratives have begun to reappear with renewed visibility. As with previous milestones in space exploration, the mission has acted as a catalyst rather than a cause. Footage from Artemis II, particularly anything showing Earth as a curved, distant sphere, has been picked apart across various platforms. Claims of digital manipulation, lens distortion and staged environments have resurfaced, often attached to short clips or isolated frames removed from their original context.

This is not evidence of a growing movement in terms of numbers. It is, however, a clear increase in visibility. The scale of Artemis II has pulled these conversations back into mainstream timelines, where they sit alongside genuine public interest and scientific engagement.

Real-Time Content, Real-Time Reaction

What distinguishes Artemis II from earlier missions is the immediacy of its coverage.
This is not a mission filtered through delayed broadcasts or carefully edited highlights. It is being experienced as it happens. That immediacy has a double edge. On one hand, it allows for unprecedented access and transparency. On the other, it provides a constant stream of material that can be reinterpreted, clipped and redistributed without context.

A reflection in a window, a momentary visual artefact in a video feed, or even the way lighting behaves inside the spacecraft can quickly be reframed as suspicious. Once those clips are detached from their technical explanations, they take on a life of their own within certain online communities. The speed at which this happens is key. Reaction no longer follows the event. It unfolds alongside it.

Scepticism in the Age of Algorithms

Flat Earth content does not exist in isolation. It is sustained by a broader culture of scepticism towards institutions, particularly those associated with government and large-scale scientific endeavour. NASA, as both a symbol of authority and a source of complex, hard-to-verify information, naturally becomes a focal point. Artemis II, with its deep space trajectory and high visibility, fits neatly into that framework.

Social media platforms then amplify the effect. Content that challenges, contradicts or provokes tends to perform well, regardless of its factual basis. As a result, posts questioning the mission often gain traction not because they are persuasive, but because they are engaging. This creates a distorted sense of scale. What is, in reality, a fringe viewpoint can appear far more prominent than it actually is.

The Broader Public Perspective

Outside of these pockets of scepticism, the response to Artemis II has been largely one of fascination and admiration. The mission has reignited interest in human spaceflight, particularly among audiences who have never experienced a live crewed journey beyond Earth orbit.
There is also a noticeable difference in tone compared to previous eras. The Apollo missions were moments of collective attention, where a single narrative dominated public consciousness. Artemis II exists in a far more fragmented environment, where multiple conversations unfold simultaneously. In that landscape, it is entirely possible for celebration, curiosity and conspiracy to coexist without directly intersecting.

A Reflection of the Modern Media Landscape

The re-emergence of Flat Earth narratives during Artemis II is not an anomaly. It is part of a broader pattern that defines how major events are now experienced. Every significant moment generates its own parallel discourse. One is grounded in reality, driven by science, engineering and exploration. The other is shaped by interpretation, scepticism and the mechanics of online engagement.

Artemis II, currently making its way back to Earth, sits at the centre of both. The mission itself is a clear demonstration of human capability and technological progress. The conversation around it, however, reveals something different. It highlights how information is processed, challenged and reshaped in real time. In that sense, Artemis II is not just a journey through space. It is a case study in how modern audiences navigate truth, trust and visibility in an increasingly complex digital world.
- Streamlining Small Business Operations for Maximum Efficiency
In 2026, owning and running a small business is more difficult than ever. With rising costs for electricity and materials, and new restrictions and regulations being introduced almost monthly, running a small business while still turning a profit can seem impossible. This is where efficiency through cost-saving tactics comes in, making a small business more competitive in a global market.

Small business owners often face unique challenges that make streamlining operations harder, such as limited budgets for skilled staff and time constraints that force owners to juggle multiple roles, leaving little time for strategic improvements. This is where streamlining operations comes in. Not only can streamlining reduce unwarranted spending on resources or people, but it also frees up time so that business owners can focus on the things that matter to them and grow their business. So, if you want to streamline operations for your small business, here is the route you should take to get control of them.

Assessing current operations

The first step to streamlining operations is to assess your current ones to see what is and isn't working. This is where you can identify duplicated tasks, outdated processes and any processes that are not working for the company. Although this may take a chunk of time, it can be hugely beneficial, as many companies lose time and money when they get stuck in their old ways. You can do this in several ways, such as process mapping, employee feedback, and performance metrics and KPIs. Taken together, the results should show where the business is being slowed down.

Automating repetitive tasks

Automation is the process of technology taking over tasks that humans would otherwise do, helping to save time while also removing human error.
Not only will this save money on paying a person to do tasks such as scheduling, but it also makes the work more accurate, so you lose less money on costly mistakes that could be avoided. Even a small mistake can be devastating for a small business, especially if it is expensive to put right. Small businesses can access tools, services and software that take necessary but time-consuming and costly tasks, such as phone answering, off their hands and replace them with solutions such as virtual receptionist services.

Improving communication and collaboration

Improving communication and collaboration is one way to cut costs and free up time in a small business. Many businesses suffer from poor communication, causing delays and unclear responsibilities. When tasks are delegated without clear communication, it can lead to task duplication and confusion, which eats into time and affects overall efficiency. This is where project management tools and messaging apps come in, as they help to set clear roles and expectations, standardise operating procedures and bring structure to the whole business.

Final thoughts

Running a small business can be stressful; however, with smart strategies applied to your operations, you can take control of your business and keep profitability high.
- Posts Are Down, But Scrolling Isn’t: Are We Watching More and Sharing Less on Social Media?
There was a time when social media felt like a conversation. People posted updates, shared opinions, uploaded photos and interacted openly with friends, colleagues and sometimes complete strangers. It was noisy, often chaotic, but undeniably active. You could scroll for a few minutes and feel like you had caught up with people’s lives.

That version of social media is starting to fade. Recent data suggests that while the vast majority of UK adults are still using social platforms regularly, far fewer are actually posting, commenting or engaging in visible ways. The number of people actively contributing has dropped, yet time spent on platforms remains high. In simple terms, the content is still being consumed, but fewer people are adding to it.

It raises a simple question. If fewer people are posting, what exactly are we all looking at?

Less Posting, Same Viewing

The most striking shift is not that people are leaving social media, but that they are becoming quieter on it. Usage remains high across the UK, with most adults still logging in daily, yet a growing number are choosing not to post publicly at all.

Instead, social media has become something closer to a viewing experience. People open apps, scroll through feeds, watch videos and read content, but they do so without interacting. The behaviour is less about participation and more about consumption. This change is subtle, but significant. Social media has not disappeared, it has simply become less social in the traditional sense.

So What Are We Actually Looking At?

If fewer people are sharing personal updates, the content filling our feeds has naturally shifted. A large portion now comes from:

- Brands and businesses posting regularly to maintain visibility
- Influencers and creators producing highly polished content
- Advertisements, often seamlessly integrated into feeds
- Suggested posts driven by algorithms rather than people you know

Alongside this, there has been a noticeable rise in group-based content.
Facebook groups, Reddit threads and niche communities have become more active, offering a space for discussion without the same level of public exposure. People are still interacting, but often in smaller, more contained environments. The result is a feed that feels less like a collection of personal updates and more like a stream of curated content.

The Rise of Passive Scrolling

This is where the idea of “doom scrolling” starts to make sense. Social media is increasingly being used in short, in-between moments. Sitting in a waiting room, standing in a queue, or filling a few spare minutes during the day, people instinctively reach for their phones and begin scrolling. There is no real intention to engage. It is simply a way to pass time.

The content itself is designed for this kind of behaviour. Short videos, quick headlines and endless feeds create a loop where it is easier to keep scrolling than to stop. You move from one piece of content to the next without needing to think too much about it. It is less about connection and more about distraction.

Why People Are Posting Less

There are a number of reasons behind the drop in public posting, and most of them come down to a shift in how people view social media itself. There is a growing awareness that anything shared publicly can be permanent, searchable and open to interpretation. What once felt like a casual update can now feel like a statement, something that might be judged, challenged or taken out of context.

At the same time, the tone of online interaction has changed. Public comment sections can be unpredictable, and many people simply do not want to invite that level of attention or debate into their day. As a result, people are becoming more selective. Instead of posting publicly, they are choosing to communicate privately, through direct messages or smaller group chats where the audience is known and the interaction feels more controlled.
Social Media Without the “Social”

This shift creates an interesting contradiction. People are still spending time on social media, often just as much as before, but the nature of that time has changed. The platforms are still active, but the interaction is quieter, more individual and less visible.

In many ways, social media is starting to resemble traditional media. It is something you consume rather than something you contribute to. You watch, you read, you scroll, but you do not necessarily take part. That does not mean people have stopped connecting. It just means those connections are happening in different, less public ways.

A Platform Built for Watching

The platforms themselves have also evolved to support this behaviour. Algorithms now prioritise content that keeps users engaged for longer periods, rather than content from people you necessarily know. This means feeds are increasingly filled with recommended videos, trending topics and sponsored posts, all designed to hold attention.

The result is a system that rewards viewing over sharing. You do not need to post anything to spend a significant amount of time on the platform. In fact, in many cases, you are encouraged not to.

The New Normal

What we are seeing is not a decline in social media, but a change in how it is used. People have not logged off. They have simply stepped back from the spotlight. They are still watching, still scrolling and still consuming content, but they are doing so more quietly, more selectively and often more privately than before.

Which brings us back to the original question. If posts are down but views remain high, are we still using social media… or are we just passing time on it?











