Why You Should Not Trust Your Car’s Automatic Systems Completely

12 February 2026

Paul Francis


Most modern drivers assume that if a feature is labelled “automatic”, it will take care of itself. Automatic lights. Automatic braking. Automatic lane correction. The car feels intelligent, almost watchful.


[Image: Car dashboard at night, speedometer glowing blue against blurred city lights]

But there is a quiet issue that many drivers are unaware of, and it begins with something as simple as headlights.


The automatic headlight problem

In fog, heavy rain or dull grey daylight, many cars will show illuminated front lights but leave the rear of the vehicle dark. From inside the car, everything appears normal. The dashboard is lit. The automatic light symbol is active. You can see light reflecting ahead.


However, what often happens is that the vehicle is running on daytime running lights rather than full dipped headlights. On many cars, daytime running lights only operate at the front. The rear lights remain off unless the dipped headlights are manually switched on.

The system relies on a light sensor that measures brightness, not visibility. Fog does not always make the environment dark enough to trigger full headlights. Heavy motorway spray can reduce visibility dramatically while still registering as daylight. The result is a vehicle that is difficult to see from behind, especially at speed.
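To make that failure mode concrete, here is a minimal sketch of the kind of decision an automatic headlight system makes. The threshold and names are illustrative assumptions, not any manufacturer's actual logic.

```python
# Illustrative sketch only: the threshold and names are assumptions,
# not any real vehicle's firmware.

LUX_THRESHOLD = 1000  # hypothetical ambient-light level that triggers dipped beams

def select_lighting(ambient_lux: float) -> str:
    """Pick a lighting mode from brightness alone, as a simple sensor does."""
    if ambient_lux < LUX_THRESHOLD:
        return "dipped headlights (front and rear lit)"
    return "daytime running lights (front only, rear dark)"

# Thick daytime fog: visibility may be terrible, yet the sky stays bright,
# so the sensor never sees "darkness" and the rear lights stay off.
print(select_lighting(ambient_lux=5000))  # daytime running lights
# At night the reading is low, so the system behaves as drivers expect.
print(select_lighting(ambient_lux=5))     # dipped headlights
```

The sensor answers "is it dark?", not "can other drivers see me?", and those are different questions in fog.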


Under the Highway Code, drivers must use headlights when visibility is seriously reduced, generally when you cannot see for more than 100 metres. Automatic systems do not override that responsibility. In poor weather, manual control is often the safer choice. It is a small action that can make a significant difference.


Automatic emergency braking is not foolproof

Automatic Emergency Braking, often referred to as AEB, is one of the most widely praised safety technologies in modern vehicles. It is designed to detect obstacles and apply the brakes if a collision appears imminent.


In controlled testing, it reduces certain types of crashes. But it is not infallible. Cameras and radar can struggle in heavy rain, low sun glare, fog, or when sensors are obstructed by dirt or ice. Some systems have difficulty detecting stationary vehicles at high speed. Others may not recognise pedestrians at certain angles.
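One commonly cited reason for the stationary-vehicle weakness, sketched below under purely illustrative assumptions, is that radar pipelines often discard returns that are not moving relative to the road, because signs, bridges and barriers would otherwise trigger constant false alarms. A car stopped dead in the lane can fall into the same bucket.

```python
# Illustrative sketch, not a real system: the cut-off value and the
# filtering rule are assumptions for the example.

def is_braking_candidate(target_ground_speed_ms: float) -> bool:
    """Keep only targets that are clearly moving relative to the road."""
    # Stationary returns look like street furniture to the filter,
    # so they are dropped rather than braked for.
    return abs(target_ground_speed_ms) > 1.0  # hypothetical threshold

print(is_braking_candidate(25.0))  # slower moving car ahead -> True
print(is_braking_candidate(0.0))   # stopped car in lane -> False, ignored
```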


It is a safety net, not a guarantee.


Lane assist is not autopilot

A lane keeping system gently steers the car back into its lane when it detects a drift. On clear motorways with bright road markings, these systems can work well.


On rural roads, in roadworks, or where markings are faded, they can disengage or behave unpredictably. Drivers may not even realise when the system has switched off. Over time, there is a risk that drivers become less attentive, assuming the vehicle will correct mistakes.

It will not.
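A minimal sketch of confidence-gated lane keeping, with hypothetical names, gain and threshold, shows how the hand-back can be silent:

```python
# Illustrative only: names, gain and threshold are assumptions.

CONFIDENCE_THRESHOLD = 0.6  # hypothetical minimum lane-marking confidence

def steering_correction(lane_offset_m: float, marking_confidence: float) -> float:
    """Return a small corrective steering input, or nothing at all."""
    if marking_confidence < CONFIDENCE_THRESHOLD:
        return 0.0  # markings too faint: assistance quietly stops
    return -0.5 * lane_offset_m  # gentle proportional nudge back to centre

print(steering_correction(0.3, 0.9))  # clear motorway markings -> -0.15
print(steering_correction(0.3, 0.2))  # faded rural markings -> 0.0, no help
```

Nothing in that logic is obliged to alert the driver that the correction has stopped arriving.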


[Image: Cars on a wet road at sunset, viewed through a windscreen]

Adaptive cruise control still requires full attention

Adaptive cruise control maintains speed and distance from the car ahead. It is comfortable on long motorway journeys.


However, it does not anticipate hazards the way a human driver does. It can brake sharply when another vehicle exits your lane. It may not react appropriately to a fast vehicle cutting in. Most importantly, it does not read the wider context of traffic conditions.
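At the core of most systems is a simple time-gap rule, sketched below with illustrative numbers. It optimises a single measurement, the gap to the vehicle directly ahead, which is why events outside that one number can catch it out.

```python
# Illustrative sketch: values and the control rule are assumptions,
# not a real controller.

SET_SPEED_MS = 31.0  # driver's chosen speed, roughly 70 mph
TIME_GAP_S = 1.8     # desired following gap in seconds

def acc_target_speed(gap_m: float, own_speed_ms: float) -> float:
    """Slow only as far as needed to restore the chosen time gap."""
    desired_gap_m = own_speed_ms * TIME_GAP_S
    if gap_m >= desired_gap_m:
        return SET_SPEED_MS  # enough space: hold or resume the set speed
    # Scale speed down in proportion to how badly the gap is violated;
    # a car cutting in close produces a sudden, sharp reduction.
    return SET_SPEED_MS * max(gap_m / desired_gap_m, 0.0)

print(round(acc_target_speed(gap_m=80.0, own_speed_ms=31.0), 1))  # 31.0
print(round(acc_target_speed(gap_m=20.0, own_speed_ms=31.0), 1))  # 11.1
```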


It reduces workload, but it does not remove responsibility.


Blind spot monitoring is not perfect

Blind spot indicators are helpful, especially in heavy traffic. They provide an extra warning when another vehicle is alongside you.


But motorcycles, fast-approaching cars, or vehicles at unusual angles can sometimes escape detection. Sensors can also be affected by weather or dirt. A physical shoulder check remains essential.


Cameras distort reality

Reversing cameras and parking sensors have reduced low-speed bumps and scrapes. They are undeniably useful.


Yet cameras distort depth perception, and small or low obstacles can be difficult to judge accurately. Relying entirely on the screen rather than physically checking your surroundings is a common cause of minor accidents.


The bigger risk is complacency

There is a growing concern among safety researchers about automation complacency. When systems work well most of the time, drivers begin to relax. Attention drifts. Reaction times lengthen.


Modern vehicles are safer than ever, but the technology is designed to support an attentive driver. It is not designed to replace one.


The word “assist” appears frequently in the naming of these systems for a reason. They assist. They do not assume control.


Automatic lights, braking, steering correction and cruise systems are impressive pieces of engineering. They reduce risk. They improve comfort. But they still require a human driver who understands their limits.


Trusting technology is reasonable. Trusting it completely is not.


When AI Crosses the Line: Why the Grok Controversy Has Triggered a Regulatory Reckoning

Jan 14

Paul Francis

Concerns about artificial intelligence crossed a new threshold this week after the BBC reported that Ofcom had made urgent contact with Elon Musk’s company xAI over the misuse of its AI tool, Grok. According to the broadcaster, the chatbot has been used on social media platform X to digitally undress women without their consent and, in some cases, generate sexualised imagery that regulators fear could edge toward illegal content involving children.


[Image: AI-generated image of a woman holding a bed sheet]

The BBC’s investigation uncovered numerous examples of users prompting Grok to alter real photographs of women, making them appear in bikinis or placing them in sexualised situations. Some of the images targeted high-profile individuals, including Catherine, Princess of Wales. For those affected, the harm was not abstract or theoretical. Journalists who found themselves targeted described feeling violated, dehumanised, and reduced to a sexual stereotype, even though the images were artificially generated.


Ofcom confirmed it was investigating whether the tool breaches the Online Safety Act, which makes it illegal in the UK to create or share intimate or sexually explicit images of a person without their consent, including AI-generated deepfakes. Under the same law, technology companies are required to take appropriate steps to reduce the risk of such content appearing on their platforms and to remove it swiftly when identified.


A problem extending far beyond one platform

While the BBC report brought the issue into sharp focus for UK audiences, it is far from an isolated case. Reuters, Sky News, Yahoo News UK, and Channel NewsAsia have all reported on similar concerns surrounding Grok in recent weeks. The European Commission has confirmed it is examining the matter under the EU’s Digital Services Act, with officials describing some of the reported outputs as appalling and unacceptable.


Authorities in France, India and Malaysia have also indicated they are assessing whether Grok’s image generation features violate local laws. The scale of the response reflects not just the seriousness of the content itself, but the speed at which it spread and the ease with which it was created.


Unlike earlier deepfake scandals, which often relied on specialist software and fringe forums, Grok is embedded directly into a mainstream social media platform. Any user can tag the chatbot in a post and request an image alteration in seconds. That accessibility has lowered the barrier to abuse and made moderation far more difficult.


Safeguards that existed, but failed

xAI’s own acceptable use policy explicitly prohibits depicting real people in a pornographic manner. Elon Musk has publicly warned that users who ask Grok to generate illegal content will face consequences equivalent to uploading such material themselves. Yet regulators tend to focus less on policy statements and more on outcomes.


The fact that these images were created at all suggests that safeguards were either insufficient, poorly implemented, or unable to keep pace with real-world misuse. From a regulatory perspective, intent matters less than impact. If a system can be misused at scale, responsibility increasingly falls on those who designed and deployed it.


The UK’s Internet Watch Foundation has said it has received reports relating to Grok-generated images, though it has not yet confirmed material that meets the legal threshold for child sexual abuse imagery. Even so, experts warn that tools capable of undressing adults without consent can often be adapted to target minors, making early intervention critical.


A familiar pattern in AI development

The Grok controversy is part of a broader pattern that has been unfolding across the AI sector. In early 2024, AI-generated sexual deepfakes of public figures circulated widely online, sparking political backlash and renewed calls for regulation. Since then, generative image tools have become more powerful, more realistic, and more widely available.


What has changed is not just the technology, but the pace. AI systems are being deployed to millions of users before lawmakers, regulators, or even developers fully understand how they will be used in the wild. Each controversy follows a similar arc. A tool is released with optimistic claims about creativity and freedom. Abuse emerges rapidly. Companies respond with statements and incremental fixes. Regulators step in after harm has already occurred.


The Grok case has brought that cycle into stark relief.


Why regulation can no longer wait

Governments are increasingly acknowledging that existing laws are struggling to keep up. The UK has announced plans to criminalise the supply of AI nudification tools, not just their use. Under proposed legislation, companies that provide such technology could face prison sentences and substantial fines.


In Europe, enforcement of the Digital Services Act is already tightening. X was fined more than €120 million last year for breaching platform safety rules, placing it under heightened scrutiny. Any further failures will be viewed in that context.


The underlying challenge is that AI is no longer experimental. It is embedded in everyday digital life. When tools can manipulate images, voices and identities with ease, the potential for harm scales faster than voluntary safeguards.


This moment feels like a turning point. The Grok controversy is not simply about one chatbot or one platform. It is about whether societies are willing to set clear boundaries around technologies that affect dignity, consent and safety, or whether they will continue reacting after damage is done.


AI is moving quickly. Public awareness is catching up. Regulation is still lagging behind. The gap between those three forces is where harm occurs. If the lesson from this episode is taken seriously, it may help shape rules that protect people before the next misuse goes viral rather than after.
