Please can I ask how old fire is on Earth? Not tamed by people, but since when have there been fire and flames on the planet?
Samuel, 5, London
You ask a very interesting question. For many years, scientists assumed that fire and humans were so connected that few of them gave any thought to what happened to fire before humans evolved.
Even now, after many years of research, you won’t find much information in books
about ancient fire. Indeed, I first started to become interested in this question of fire in the geological past more than 50 years ago, but my work was largely ignored until recently.
Curious Kids is a series by The Conversation that gives children the chance to have their questions about the world answered by experts. If you have a question you’d like an expert to answer, send it to curiouskids@theconversation.com and make sure you include the asker’s first name, age and town or city. We won’t be able to answer every question, but we’ll do our very best.
Your question is important today, as the Earth’s weather is changing quickly and we are seeing deadly wildfires around the world. Humans may have used fire for a long time, but they have never truly tamed it. The challenge for scientists at the moment is to work out which fires are caused by humans and which ones are natural. To do this, we first need to understand ancient fire.
A lot of our knowledge comes from studies of charcoal found in rocks more than 350 million years old, from a period geologists call the Carboniferous. As I say in my book Burning Planet: The Story of Fire Through Time, charcoal preserves the detail of the plant parts it is made from. If you visit a place where a recent fire burned a lot of plants, or collect some charcoal from the remains of a bonfire and look at it under a magnifying glass, you may be able to see some of this amazing detail.
Over many years I, together with my students at Royal Holloway University of London, have been collecting information on ancient charcoal to help us understand fires of the past.
The key to understanding when fire appeared on Earth comes from what we call the fire triangle and I have discussed this in my small book Fire: A Very Short Introduction.
The first side of the triangle is fuel. Fire needs plants to burn. So we would not expect to have fire on the Earth before plants evolved. Plants first lived in the sea and started to spread on to the land around 420 million years ago. So there couldn’t have been fire before then.
The second side is heat – we need heat or a spark to start the fire – and that in the ancient past would be lightning. There has always been lightning and we can see evidence of this from fused sand grains found in some ancient sediments.
Finally, we need oxygen to allow burning to happen, just as we need oxygen to breathe. We know this from the simple ways we put out a fire: you can smother the flames, or use sand, water or other materials to cut off the oxygen. Today the air we breathe is 21% oxygen, but experiments have shown that if you reduce the level to below 17%, fires will not spread.
And above 30%, it would be hard to put out a fire, as even wet plants can burn at that level of oxygen. That is also why no fire or smoking is allowed in hospitals, where oxygen is used for patients.
The level of oxygen in the Earth’s air has changed a lot over time. Scientists have shown that around 350-250 million years ago the atmosphere had high levels of oxygen, between 23% and 30%, and there was a lot of fire.
Evidence of the first fires, around 420 million years ago, comes from charcoal in sedimentary rocks. But plants were small and there weren’t many places on Earth where they could grow, which meant there weren’t many places fire could burn. It was not until around 350 million years ago that fires started burning in lots of places, including in some of the first forests to grow on Earth.
Another period of high fire was between 140 and 65 million years ago, when many of our famous dinosaurs, such as Triceratops and Tyrannosaurus, were living, and when flowers first appeared. Around 40 million years ago, oxygen levels in the atmosphere stabilised at modern levels. Proper tropical rainforest spread widely. This probably made fire rarer, as wet rainforests don’t catch alight easily.
But around 7 million years ago grasslands spread, and these were easily burned. The grass-fire cycle began. This is where regular fire kills the saplings of trees, stopping grasslands turning into forests.
It is into this fiery world that humans evolved around 1.5 million years ago.
This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
Andrew Scott has previously received funding from the Natural Environment Research Council and the Leverhulme Trust.
Source: The Conversation – UK – By Bamo Nouri, Honorary Research Fellow, Department of International Politics, City St George’s, University of London
Reports of a growing US naval presence in the Gulf have prompted speculation that the US could be preparing for another Middle East war, this time with Iran.
The US president, Donald Trump, has warned of “serious consequences” if Iran does not comply with his demands to permanently halt uranium enrichment, curb its ballistic missile programme and end support for regional proxy groups.
Yet, despite the familiar language of escalation, much of what is unfolding appears closer to brinkmanship than preparation for war.
The US president’s own political history offers an important starting point for understanding why this is. Trump’s electoral appeal, both in 2016 and again in 2024, has rested heavily on a promise to end America’s “forever wars” and to avoid costly overseas interventions.
And Iran represents the very definition of such a war. Any all-out conflict with Tehran would almost certainly be long and drag in other countries in the region.
It would also be hard to achieve a decisive victory. For a president whose political brand is built on restraint abroad and disruption at home, a war with Iran would contradict the central logic of his foreign policy narrative.
Meanwhile Iran’s strategic posture is rooted in decades of preparing for precisely this scenario. Since the 1979 revolution, Tehran’s military doctrine and foreign policy have been shaped by survival in the face of potential external attack.
Rather than building a conventional force able to defeat the US in open combat, Iran has invested in asymmetric capabilities: ballistic and cruise missiles, the use of regional proxies, cyber operations and anti-access strategies (including missiles, air defences, naval mines, fast attack craft, drones and electronic warfare capabilities). Anyone who attacks Iran would face prolonged and escalating costs.
This is why comparisons to Iraq in 2003 are misleading. Iran is larger, more populous, more internally cohesive and far more militarily prepared for a sustained confrontation.
An attack on Iranian territory would not represent the opening phase of regime collapse but the final layer of a defensive strategy that anticipates exactly such a scenario. Tehran would be prepared to absorb damage and is capable of inflicting it across multiple theatres – including in Iraq, the Gulf, Yemen and beyond.
With an annual defence budget approaching US$900 billion (£650 billion), there is no question that the US has the capacity to initiate a conflict with Iran. But the challenge for the US lies not in starting a war, but in sustaining one.
The wars in Iraq and Afghanistan offer a cautionary precedent. Together, they are estimated to have cost the US between US$6 trillion and US$8 trillion when long-term veterans’ care, interest payments and reconstruction are included.
These conflicts stretched over decades, repeatedly exceeded initial cost projections and contributed to ballooning public debt. A war with Iran – larger, more capable and more regionally embedded – would almost certainly follow a similar, if not more expensive, trajectory.
The opportunity cost of the conflicts in Iraq and Afghanistan was potentially greater still, absorbing vast financial and political capital at a moment when the global balance of power was beginning to shift.
As the US focused on counterinsurgency and stabilisation operations, other powers, notably China and India, were investing heavily in infrastructure, technology and long-term economic growth.
That dynamic is even more pronounced today. The international system is entering a far more intense phase of multipolar rivalry, characterised not only by military competition but by races in artificial intelligence, advanced manufacturing and strategic technologies.
Sustained military engagement in the Middle East would risk locking the US into resource-draining distractions just as competition with China accelerates and emerging powers seek greater influence.
Iran’s geographic position compounds this risk. Sitting astride key global energy routes, Tehran has the ability to disrupt shipping through the Strait of Hormuz.
Even limited disruption would drive oil prices sharply higher, feeding inflation globally. For the US, this would translate into higher consumer prices and reduced economic resilience at precisely the moment when strategic focus and economic stability are most needed.
There is also a danger that military pressure would backfire politically. Despite significant domestic dissatisfaction, the Iranian regime has repeatedly demonstrated its ability to mobilise nationalist sentiment in response to external threats. Military action could strengthen internal cohesion, reinforce the regime’s narrative of resistance and marginalise opposition movements.
Trump has repeatedly signalled his desire to be recognised as a peacemaker. He has framed his Middle East approach as deterrence without entanglement, citing the Abraham Accords and the absence of large-scale wars during his presidency. This sits uneasily alongside the prospect of war with Iran, particularly the week after the US president launched his “Board of Peace”.
The Abraham Accords depend on regional stability, economic cooperation and investment. A war with Iran would jeopardise all of these. Despite their own rivalry with Tehran, Gulf states such as Saudi Arabia, the UAE and Qatar have prioritised regional de-escalation.
Recent experience in Iraq and Syria shows why. The collapse of central authority created power vacuums quickly filled by terrorist groups, exporting instability rather than peace.
Some argue that Iran’s internal unrest presents a strategic opportunity for external pressure. While the Islamic Republic faces genuine domestic challenges, including economic hardship and social discontent, this should not be confused with imminent collapse. The regime retains powerful security institutions and loyal constituencies, particularly when it can frame itself as defending national sovereignty.
Taken together, these factors suggest that current US military movements and rhetoric are better understood as coercive signalling rather than preparation for invasion.
This is not 2003, and Iran is neither Iraq nor Venezuela. A war would not be swift, cheap or decisive. The greatest danger lies not in a deliberate decision to invade, but in miscalculation. Heightened rhetoric and military proximity can increase the risk of accidents and unintended escalation.
Avoiding that outcome will require restraint, diplomacy and a clear recognition that some wars – however loudly threatened – are simply too costly to fight.
Bamo Nouri does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Cancer and Alzheimer’s disease are two of the most feared diagnoses in medicine, but they rarely strike the same person. For years, epidemiologists have noticed that people with cancer seem less likely to develop Alzheimer’s, and those with Alzheimer’s are less likely to get cancer, but nobody could explain why.
Alzheimer’s is characterised by sticky deposits of a protein called amyloid beta that build up between nerve cells in the brain. These clumps, or plaques, interfere with communication between nerve cells and trigger inflammation and damage that slowly erodes memory and thinking.
In the new study, scientists implanted human lung, prostate and colon tumours under the skin of mice bred to develop Alzheimer‑like amyloid plaques. Left alone, these animals reliably develop dense clumps of amyloid beta in their brains as they age, mirroring a key feature of the human disease.
But when the mice carried tumours, their brains stopped accumulating the usual plaques. In some experiments, the animals’ memory also improved compared with Alzheimer‑model mice without tumours, suggesting that the change was not just visible under the microscope.
The team traced this effect to a protein called cystatin‑C that was being pumped out by the tumours into the bloodstream. The new study suggests that, at least in mice, cystatin‑C released by tumours can cross the blood–brain barrier – the usually tight border that shields the brain from many substances in the circulation.
Once inside the brain, cystatin‑C appears to latch on to small clusters of amyloid beta and mark them for destruction by the brain’s resident immune cells, called microglia. These cells act as the brain’s clean‑up crew, constantly patrolling for debris and misfolded proteins.
In Alzheimer’s, microglia seem to fall behind, allowing amyloid beta to accumulate and harden into plaques. In the tumour‑bearing mice, cystatin‑C activated a sensor on microglia known as Trem2, effectively switching them into a more aggressive, plaque‑clearing state.
Surprising trade-offs
At first glance, the idea that a cancer could “help” protect the brain from dementia sounds almost perverse. Yet biology often works through trade-offs, where a process that is harmful in one context can be beneficial in another.
In this case, the tumour’s secretion of cystatin‑C may be a side‑effect of its own biology that happens to have a useful consequence for the brain’s ability to handle misfolded proteins. It does not mean that having cancer is good, but it does reveal a pathway that scientists might be able to harness more safely.
The study slots into a growing body of research suggesting that the relationship between cancer and neurodegenerative diseases is more than a statistical quirk. Large population studies have reported that people with Alzheimer’s are significantly less likely to be diagnosed with cancer, and vice versa, even after accounting for age and other health factors.
This has led to the idea of a biological seesaw, where mechanisms that drive cells towards survival and growth, as in cancer, may push them away from the pathways that lead to brain degeneration. The cystatin‑C story adds a physical mechanism to that picture.
However, the research is in mice, not humans, and that distinction matters. Mouse models of Alzheimer’s capture some features of the disease, particularly amyloid plaques, but they do not fully reproduce the complexity of human dementia.
We also do not yet know whether human cancers in real patients produce enough cystatin‑C, or send it to the brain in the same way, to have meaningful effects on Alzheimer’s disease risk. Still, the discovery opens intriguing possibilities for future treatment strategies.
One idea is to develop drugs or therapies that mimic the beneficial actions of cystatin‑C without involving a tumour at all. That could mean engineered versions of the protein designed to bind amyloid beta more effectively, or molecules that activate the same pathway in microglia to boost their clean‑up capacity.
The research also highlights how interconnected diseases can be, even when they affect very different organs. A tumour growing in the lung or colon might seem far removed from the slow build-up of protein deposits in the brain, yet molecules released by that tumour can travel through the bloodstream, cross protective barriers and change the behaviour of brain cells.
For people living with cancer or caring for someone with Alzheimer’s today, this work will not change treatment immediately. But the study does offer a more hopeful message: by studying even grim diseases like cancer in depth, scientists can stumble on unexpected insights that point towards new ways to keep the brain healthy in later life.
Perhaps the most striking lesson is that the body’s defences and failures are rarely simple. A protein that contributes to disease in one organ may be used as a clean‑up tool in another, and by understanding these tricks, researchers may be able to use them safely to help protect the ageing human brain.
Justin Stebbing does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Pigeons are well-suited to urban living, and are outcompeting distinctive local species around the world. Wirestock Creators / shutterstock
The age of humans is increasingly an age of sameness. Across the planet, distinctive plants and animals are disappearing, replaced by species that are lucky enough to thrive alongside humans and travel with us easily. Some scientists have a word for this reshuffling of life: the Homogenocene.
Evidence for it is found in the world’s museums. Storerooms are full of animals that no longer walk among us, pickled in spirit-filled jars: coiled snakes, bloated fish, frogs, birds. Each extinct species marks the removal of a particular evolutionary path from a particular place – and these absences are increasingly being filled by the same hardy, adaptable species, again and again.
One such absence is embodied by a small bird kept in a glass jar in London’s Natural History Museum: the Fijian bar-winged rail, not seen in the wild since the 1970s. It seems to be sleeping, its eyes closed, its wings tucked in along its back, its beak resting against the glass.
A flightless bird, it was particularly vulnerable to predators introduced by humans, including mongooses brought to Fiji in the 1800s. Its disappearance was part of a broad pattern in which island species are vanishing and a narrower set of globally successful animals thrive in their place.
It’s a phenomenon that was given the name Homogenocene even before the now better-known term, the Anthropocene, was coined in 2000. If the Anthropocene describes a planet transformed by humans, the Homogenocene is one ecological consequence: fewer places with their own distinctive life.
It goes well beyond charismatic birds and mammals. Freshwater fish, for instance, are becoming more “samey”, as the natural barriers that once kept populations separate – waterfalls, river catchments, temperature limits – are effectively blurred or erased by human activity. Think of common carp deliberately stocked in lakes for anglers, or catfish released from home aquariums that now thrive in rivers thousands of miles from their native habitat.
Meanwhile, many thousands of mollusc species have disappeared over the past 500 years, with snails living on islands also severely affected: many are simply eaten by non-native predatory snails. Some invasive snails have become highly successful and widely distributed, such as the giant African snail that is now found from the Hawaiian Islands to the Americas, or South American golden apple snails rampant through east and south-east Asia since their introduction in the 1980s.
Homogeneity is just one facet of the changes wrought on the Earth’s tapestry of life by humans, a process that started in the last ice age when hunting was likely key to the disappearance of the mammoth, giant sloth and other large mammals. It continued over around 11,700 years of the recent Holocene epoch – the period following the last ice age – as forests were felled and savannahs cleared for agriculture and the growth of farms and cities.
Over the past seven decades changes to life on Earth have intensified dramatically. This is the focus of a major new volume published by the Royal Society of London: The Biosphere in the Anthropocene.
The Anthropocene has reached the ocean
Life in the oceans was relatively little changed between the last ice age and recent history, even as humans increasingly affected life on land. No longer: a feature of the Anthropocene is the rapid extension of human impacts through the oceans.
This is partly due to simple over-exploitation, as human technology post-second world war enabled more efficient and deeper trawling, and fish stocks became seriously depleted.
Lionfish from the Pacific have been introduced in the Caribbean, where they’re hoovering up native fish who don’t recognise them as predators. Drew McArthur / shutterstock
Partly this is also due to the increasing effects of fossil-fuelled heat and oxygen depletion spreading through the oceans. Most visibly, this is now devastating coral reefs.
Out of sight, many animals are being displaced northwards and southwards out of the tropics to escape the heat; these conditions are also affecting spawning in fish, creating “bottlenecks” where life cycle development is limited by increasing heat or a lack of oxygen. The effects are reaching through into the deep oceans, where proposals for deep sea mining of minerals threaten to damage marine life that is barely known to science.
And as on land and in rivers, these changes are not just reducing life in the oceans – they’re redistributing species and blurring long-standing biological boundaries.
Local biodiversity, global sameness
Not all the changes to life made by humans are calamitous. In some places, incoming non-native species have blended seamlessly into existing environments to actually enhance local biodiversity.
In other contexts, both historical and contemporary, humans have been decisive in fostering wildlife, increasing the diversity of animals and plants in ecosystems by cutting or burning back the dominant vegetation and thereby allowing a greater range of animals and plants to flourish.
In our near-future world there are opportunities to support wildlife, for instance by changing patterns of agriculture to use less land to grow more food. With such freeing-up of space for nature, coupled with changes to farming and fishing that actively protect biodiversity, there is still a chance that we can avoid the worst predictions of a future biodiversity crash.
But this is by no means certain. Avoiding yet more rows of pickled corpses in museum jars will require a concerted effort to protect nature, one that must aim to help future generations of humans live in a biodiverse world.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Simon Haslett, Pro Vice-Chancellor and Professor of Physical Geography, Bath Spa University; Swansea University
People living on the low-lying shores of the Bristol Channel and Severn estuary began their day like any other on January 30 1607. The weather was calm. The sky was bright.
Then, suddenly, the sea rose without warning. Water came racing inland, tearing across fields and villages, sweeping away the homes, livestock and people in its path.
By the end of the day, thousands of acres were underwater. As many as 2,000 people may have died. It was, quite possibly, the deadliest sudden natural disaster to hit Britain in 500 years.
More than four centuries later, the flood of 1607 still raises a troubling question. What, exactly, caused it?
Most early explanations blamed an exceptional storm. But when my colleague and I began examining the historical evidence more closely in 2002, we became less certain that this was the full picture. For one thing, eyewitness accounts tell a more unsettling story.
The flood struck on January 30 1607 – or January 20 1606, according to the old Julian calendar, which was still in use at that time. The flood affected coastal communities across south Wales, Somerset, Gloucestershire and Devon, inundating some areas several miles inland. People at the time were no strangers to storms or high tides – but this was different.
Churches were inundated. Entire villages vanished. Vast stretches of farmland were ruined by saltwater, leaving communities facing hunger as well as grief. Memorial plaques in local churches and parish documents still mark the scale of the catastrophe.
Much of what we know about how the event unfolded comes from chapbooks, which were cheaply printed pamphlets sold in the early 17th century. These accounts describe not just the damage, but the terrifying speed and character of the water itself.
One such pamphlet, God’s Warning to His People of England, describes a calm morning suddenly interrupted by what witnesses saw approaching from the sea:
Upon Tuesday 20 January 1606 there happened such an overflowing of waters … the like never in the memory of man hath been seen or heard of. For about nine of the morning, many of the inhabitants of these countreys … perceive afar off huge and mighty hilles of water tombling over one another, in such sort as if the greatest mountains in the world had overwhelmed the lowe villages or marshy grounds.
Our interest in the event arose from reading that account. It gives a specific time for the inundation – around nine in the morning – and emphasises the fair weather and sudden arrival of the floodwaters.
From a geographer’s perspective, this description is striking. Sudden onset, wave-like forms and an absence of storm conditions are not typical of storm surges. To us, the language was reminiscent of eyewitness accounts of tsunamis elsewhere in the world. This suggested a tsunami origin for the flood should be evaluated.
Until the early 2000s, few researchers seriously questioned the storm-surge explanation. But as we revisited the historical sources, we began to ask whether the physical landscape might also preserve clues to what happened in 1607. If an extreme marine inundation had struck the coast at that time, it may have left geological evidence behind.
In several locations around the estuary, we identified a suite of features with a chronological link to the early 17th century: the erosion of two spurs of land that previously jutted out into the estuary, the removal of almost all fringing salt marsh deposits, and the occurrence of sand layers in otherwise muddy deposits.
These features point to a high-energy event. The question was: what kind?
Testing the theory
To explore this further, we undertook a programme of fieldwork in 2004. We examined sand layers and noted signatures of tsunami impact such as coastal erosion, and analysed the movement of large boulders along the shoreline. Boulder transport is particularly useful, as it allows estimates of the wave heights needed to move them.
Some of the fieldwork was filmed for a BBC documentary broadcast in April 2005, which also featured other colleagues. It included an argument for a storm, but another contributor suggested it is not fanciful to consider that an offshore earthquake provided the trigger.
Our results were published in 2007, coincidentally the 400th anniversary of the flood. In parallel, colleagues published a compelling model supporting a storm surge. The scientific debate, rather than being resolved, intensified.
An update of the wave heights based on the boulder data, using a refined formula, was published in 2021. It suggested a minimum tsunami wave height of 4.2 metres is required to explain the coastal features, whereas, according to the calculations, storm waves of over 16 metres would be needed. Such waves are perhaps unlikely within the relatively sheltered Severn estuary.
The low-lying coasts around the Bristol Channel remain vulnerable to flooding. Storm surges occur regularly, though usually with more limited effects. Climate change is now increasing the risk through rising sea levels and more intense weather systems.
Tsunamis, by contrast, are rare. A report by the UK government’s Department for Environment, Food & Rural Affairs found it unlikely that the 1607 flood was caused by one. However, it also noted that offshore southwest Britain is among the more credible locations for a future tsunami, triggered by seismic activity or submarine landslides.
This distinction matters. Storm surges can usually be forecast. Tsunamis may arrive with little or no warning.
Scholarly and public interest in the flood has not waned. In November 2024, a Channel 5 documentary brought together several strands of recent research, concluding that the jury is still out on the flood’s cause.
That uncertainty should not be seen as a failure. Evaluating competing explanations is essential when trying to understand extreme events in the past – especially when those events have implications for present-day risk.
Whether the flood of 1607 was driven by storm winds, unusual tides or waves generated far offshore, its lesson is clear. Coastal societies ignore rare disasters at their peril.
The sea has come in before. And it will do so again.
Simon Haslett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
As winter sets in across the UK, the flags strung up during 2025’s controversial Operation Raise the Colours are becoming tatty and grey. Yet they continue to send an important message: despite increasingly digitally connected lives, neighbourhoods still matter when it comes to political views.
The strength of feeling among those putting up flags since summer 2025 and those who objected to them is proof that people filter big political issues through the places where they live and work. People measure their lives through local heritage, memories and a sense of home. So these areas are also battlegrounds for competing visions of what it means to belong.
Reform UK has clearly recognised this. It has worked hard to win council elections in England, appealing to concerns held across the political spectrum about the character and decline of neighbourhoods. But such tactics tend to push people’s buttons on sensitive issues such as immigration and encourage resentment.
Historically, local civic institutions – pubs, working men’s clubs, trade union halls, church halls – came into their own when communities faced hard times. They acted as emergency shelters and dining halls, information points and advice services, they gave emotional and practical support, as well as being spaces for enjoyment and celebration. Some such spaces still exist, but today, much of this social infrastructure has declined or been dismantled.
Into this vacuum step populist right and far-right parties. They generate support by offering some residents a renewed sense of community, security or hope. In Epping, a recent site of major anti-immigrant protests, some residents have established the Essex Spartans, a vigilante patrol group that aims to “protect women, children and the elderly”.
Offering help to vulnerable residents in a spirit of community and care is laudable, but these groups risk exaggerating local feelings of “stranger danger” towards migrants and minorities. And with alleged connections to both Reform UK and other rightwing groups, the Essex Spartans and initiatives like them could create pathways to more extreme perspectives.
Far-right groups such as Homeland are also actively seeking to enter the mainstream civic life of communities. This has included joining parish councils, church congregations and sports clubs, distributing food to homeless people, and establishing litter-picking groups.
Communities pushing back
But it is a common mistake to assume that the political winds are blowing only in the favour of the right and far right, and that working-class white communities are hotbeds of racism or xenophobia. The research I’ve conducted in two of Bristol’s poorest suburbs has revealed the huge efforts made by neighbourhood groups to show that communities targeted by far-right messaging can be inclusive, imaginative and progressive.
These communities fit the profile for an area at risk of far-right influence: working-class, peripheral, declining and predominantly white. Far-right and anti-immigrant sentiments are shared openly on local social media groups, as stickers and graffiti on walls and lampposts, and in conversations in the few pubs and cafes that remain.
So they are not unusual communities, but they are also home to impressive levels of hidden work being done by community activists who want to turn the tide.
In one community that abuts a major logistics zone, British-born and migrant job-seekers and low-waged workers are crammed into overcrowded and low-quality homes. They are drawn there by a promise of plentiful work which does not always materialise.
Instead of simply blaming immigration for negative side effects, several community groups are working together to support the residents, challenge the council and landlords to improve their conditions, and clean up the neighbourhood’s streets.
Monica, manager of the community hall, explains her approach: “Just work on the ground, and person by person.” This is how she helped a longstanding older people’s club and the migrant women learning English down the hallway to start sharing lunch together. Now this semi-regular lunch date has become an unthreatening way for these very different groups to mingle.
In a neighbourhood on the other side of Bristol, decades of neglect, disinvestment and stigma have left the area in decline. But rather than blaming immigration, networks of residents and organisations are leading the charge on neighbourhood renewal.
By pooling resources, skills and ingenuity, and finding workarounds to divert resources to where they are needed, they are rebuilding dignity and agency from below. This isn’t dramatic transformation but small changes that benefit everyone, such as reintroducing bins in the park.
Community groups are also safer spaces for difficult conversations about local identity and sense of place that acknowledge residents’ feelings of loss or injustice. Darren, a youth worker, explains that well-loved community spaces are “vital” for keeping conversations respectful.
Bristol’s identity – a vibrant and exciting city with a troubled colonial past – rarely fits residents’ own experience of growing up on its forgotten peripheries. Instead of becoming mired in these citywide “culture wars”, groups in both areas respond to this desire for pride and belonging by celebrating their neighbourhood’s unique heritage.
Looking to the future
Community activists nationwide are defying assumptions about working-class neighbourhoods as being “on benefits, uneducated, having loads of kids, racist”, as Trish, a tenants’ group member, told me.
With elections around the UK in 2026, the future of the country’s neighbourhoods is up for grabs. But trust in any politician is at rock bottom in these Bristolian communities and elsewhere. One resident told me, if any party set up a stall outside the local shops, “that table’s getting flipped”.
Reform UK doesn’t have a foothold here like Labour does, but its candidates could still be in contention if they can ride their national party’s wave. For now, the hard work of community activists appears to be having some effect.
This fight won’t just play out in the halls of power or the ballot box – it will unfold in streets, parks, and community halls.
Anthony Ince has received research funding from the British Academy and the Independent Social Research Foundation.
Source: The Conversation – UK – By Ignazio Cabras, Professor of Regional Economic Development, Northumbria University, Newcastle
English pubs will receive a 15% discount on their business rates from April this year. The government deal, which also applies to music venues, follows a backlash from landlords who were facing a steep increase in their tax bills.
Some industry campaigners have said the support package – worth around £1,600 per pub – will allow landlords to breathe a sigh of relief. Some opposition politicians think it doesn’t go far enough.
Either way, it’s been a tough few years. High energy costs, inflation and wage increases have contributed to the serious financial difficulties facing many pubs.
The sector was also among the worst affected (alongside retail and leisure) by the social distancing and lockdowns of COVID. The government responded at the time by giving pubs significant business rate discounts in a show of support.
Then, in November 2025, it was announced that those discounts would be reduced and then phased out completely. This move, combined with big increases in the rateable values of pub premises, left landlords with the prospect of much higher bills.
But pubs are far more than cash machines for the Treasury. To many, they represent a vital part of British traditions and heritage. They also play a pivotal role in building and maintaining social relationships among the people who live near them.
Whether that’s a family meeting up for Sunday lunch, university students at their society gathering, or some elderly fans of real ale, pubs have a clear and long-standing role in creating community cohesion.
Several scientific studies have measured their positive effects on people, economies and societies.
One, for example, confirms the strong link between pubs and local community events. It has also been shown that pubs are often more effective than other organisations at stimulating a wide range of social activities. This could include everything from sports teams and quiz nights to hosting book groups, as well as charitable and volunteering initiatives.
Pubs also frequently promote community initiatives – such as charity events and social clubs – more effectively than other venues, such as sports or village halls. Research has shown that in rural areas especially, pubs are very effective – more so than village shops, for example – at building community cohesion and local social networks.
Overall, opportunities for communal initiatives in some areas would be extremely reduced, if not nonexistent, without pubs. This is why the loss of a pub has a much broader impact than a mere business closure.
Yet despite all of this proven positive impact, the number of UK pubs has been declining steadily since the start of the century. According to the British Beer and Pub Association, there were 60,800 in 2000, compared with about 45,000 in 2024 – meaning roughly one in four closed its doors over the past 25 years.
Life for publicans has been extremely hard for a long time. This is why the changes proposed in the last budget prompted a significant pushback from the industry.
Last orders
But other businesses probably deserve a tax break too. High street shops can also help maintain higher levels of socialisation and community cohesion.
Particularly in remote and rural areas, which suffer from a general lack of local services and public transport options compared to urban areas, these businesses are important in terms of economic development and social activity.
They are also a vital part of their local economic structure, providing employment opportunities and training for local residents. This is why the Treasury should consider a rethink about business rates across the board.
Like pubs, local businesses have value beyond the revenue they generate. A tax system which recognises their positive social impact would be a better and fairer fiscal tool all round.
In the past, Ignazio Cabras’ research work has received financial support from multiple funding bodies, including the British Academy, the Society of Independent Brewers (SIBA), and the Vintners Federation of Ireland (VFI). He is a Fellow of the Academy of Social Sciences (FAcSS).
Time feels like the most basic feature of reality. Seconds tick, days pass and everything from planetary motion to human memory seems to unfold along a single, irreversible direction. We are born and we die, in exactly that order. We plan our lives around time, measure it obsessively and experience it as an unbroken flow from past to future. It feels so obvious that time moves forward that questioning it can seem almost pointless.
And yet, for more than a century, physics has struggled to say what time actually is. This struggle is not philosophical nitpicking. It sits at the heart of some of the deepest problems in science.
Modern physics relies on different, but equally important, frameworks. One is Albert Einstein’s theory of general relativity, which describes the gravity and motion of large objects such as planets. Another is quantum mechanics, which rules the microcosmos of atoms and particles. And on an even larger scale, the standard model of cosmology describes the birth and evolution of the universe as a whole. All rely on time, yet they treat it in incompatible ways.
When physicists try to combine these theories into a single framework, time often behaves in unexpected and troubling ways. Sometimes it stretches. Sometimes it slows. Sometimes it disappears entirely.
Einstein’s theory of relativity was, in fact, the first major blow to our everyday intuition about time. Time, Einstein showed, is not universal. It runs at different speeds depending on gravity and motion. Two observers moving relative to one another will disagree about which events happened at the same time. Time became something elastic, woven together with space into a four-dimensional fabric called spacetime.
Quantum mechanics made things even stranger. In quantum theory, time is not something the theory explains. It is simply assumed. The equations of quantum mechanics describe how systems evolve with respect to time, but time itself remains an external parameter, a background clock that sits outside the theory.
This mismatch becomes acute when physicists try to describe gravity at the quantum level, which is crucial for developing the much-coveted “theory of everything” that would link the main fundamental theories. But in many attempts to create such a theory, time vanishes as a parameter from the fundamental equations altogether. The universe appears frozen, described by equations that make no reference to change.
This puzzle is known as the problem of time, and it remains one of the most persistent obstacles to a unified theory of physics. Despite enormous progress in cosmology and particle physics, we still lack a clear explanation for why time flows at all.
Now a relatively new approach to physics, building on a mathematical framework called information theory, developed by Claude Shannon in the 1940s, has started coming up with surprising answers.
Entropy and the arrow of time
When physicists try to explain the direction of time, they often turn to a concept called entropy. The second law of thermodynamics states that disorder tends to increase. A glass can fall and shatter into a mess, but the shards never spontaneously leap back together. This asymmetry between past and future is often identified with the arrow of time.
This idea has been enormously influential. It explains why many processes are irreversible, including why we remember the past but not the future. If the universe started in a state of low entropy, and is getting messier as it evolves, that appears to explain why time moves forward. But entropy does not fully solve the problem of time.
For one thing, the fundamental quantum mechanical equations of physics do not distinguish between past and future. The arrow of time emerges only when we consider large numbers of particles and statistical behaviour. This also raises a deeper question: why did the universe start in such a low-entropy state to begin with? Statistically, there are more ways for a universe to have high entropy than low entropy, just as there are more ways for a room to be messy than tidy. So why would it start in a state that is so improbable?
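The counting argument behind “more ways to be messy than tidy” can be made concrete with a toy example (my own illustration, not from the article), using nothing more than basic combinatorics on ten coins:

```python
# Toy illustration of why high entropy is overwhelmingly more likely:
# count the arrangements (microstates) of 10 coins.
from math import comb

n = 10

# A "tidy" macrostate (all heads) corresponds to exactly one arrangement.
tidy_ways = comb(n, 0)        # 1

# A "messy" macrostate (five heads, five tails) has far more arrangements.
messy_ways = comb(n, n // 2)  # 252

print(tidy_ways, messy_ways)  # 1 252
```

Scaled up from ten coins to the roughly 10⁸⁰ particles of the visible universe, this imbalance becomes astronomical, which is why a low-entropy starting point looks so improbable.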
The information revolution
Over the past few decades, a quiet but far-reaching revolution has taken place in physics. Information, once treated as an abstract bookkeeping tool used to track states or probabilities, has increasingly been recognised as a physical quantity in its own right, just like matter or radiation. While entropy measures how many microscopic states are possible, information measures how physical interactions limit and record those possibilities.
This shift did not happen overnight. It emerged gradually, driven by puzzles at the intersection of thermodynamics, quantum mechanics and gravity, where treating information as merely mathematical began to produce contradictions.
One of the earliest cracks appeared in black hole physics. When Stephen Hawking showed that black holes emit thermal radiation, it raised a disturbing possibility: information about whatever falls into a black hole might be permanently lost as heat. That conclusion conflicted with quantum mechanics, which demands that the entirety of information be preserved.
Resolving this tension forced physicists to confront a deeper truth. Information is not optional. If we want a full description of the universe that includes quantum mechanics, information cannot simply disappear without undermining the foundations of physics. This realisation had profound consequences. It became clear that information has thermodynamic cost, that erasing it dissipates energy, and that storing it requires physical resources.
In parallel, surprising connections emerged between gravity and thermodynamics. It was shown that Einstein’s equations can be derived from thermodynamic principles that link spacetime geometry directly to entropy and information. In this view, gravity doesn’t behave exactly like a fundamental force.
Instead, gravity appears to be what physicists call “emergent” – a phenomenon describing something that’s greater than the sum of its parts, arising from more fundamental constituents. Take temperature. We can all feel it, but on a fundamental level, a single particle can’t have temperature. It’s not a fundamental feature. Instead it only emerges as a result of many molecules moving collectively.
Similarly, gravity can be described as an emergent phenomenon, arising from statistical processes. Some physicists have even suggested that gravity itself may emerge from information, reflecting how information is distributed, encoded and processed.
These ideas invite a radical shift in perspective. Instead of treating spacetime as primary, and information as something that lives inside it, information may be the more fundamental ingredient from which spacetime itself emerges. Building on this research, my colleagues and I have explored a framework in which spacetime itself acts as a storage medium for information – and it has important consequences for how we view time.
In this approach, spacetime is not perfectly smooth, as relativity suggests, but composed of discrete elements, each with a finite capacity to record quantum information from passing particles and fields. These elements are not bits in the digital sense, but physical carriers of quantum information, capable of retaining memory of past interactions.
A useful way to picture them is to think of spacetime like a material made of tiny, memory-bearing cells. Just as a crystal lattice can store defects that appeared earlier in time, these microscopic spacetime elements can retain traces of the interactions that have passed through them. They are not particles in the usual sense described by the standard model of particle physics, but a more fundamental layer of physical structure that particle physics operates on rather than explains.
This has an important implication. If spacetime records information, then its present state reflects not only what exists now, but everything that has happened before. Regions that have experienced more interactions carry a different imprint of information than regions that have experienced fewer. The universe, in this view, does not merely evolve according to timeless laws applied to changing states. It remembers.
A recording cosmos
This memory is not metaphorical. Every physical interaction leaves an informational trace. Although the basic equations of quantum mechanics can be run forwards or backwards in time, real interactions never happen in isolation. They inevitably involve surroundings, leak information outward and leave lasting records of what has occurred. Once this information has spread into the wider environment, recovering it would require undoing not just a single event, but every physical change it caused along the way. In practice, that is impossible.
This is why information cannot be erased and broken cups do not reassemble. But the implication runs deeper. Each interaction writes something permanent into the structure of the universe, whether at the scale of atoms colliding or galaxies forming.
Geometry and information turn out to be deeply connected in this view. In our work, we have shown that how spacetime curves depends not only on mass and energy, as Einstein taught us, but also on how quantum information, particularly entanglement, is distributed. Entanglement is a quantum process that mysteriously links particles in distant regions of space, so that they share correlations despite the distance. And these informational links contribute to the effective geometry experienced by matter and radiation.
From this perspective, spacetime geometry is not just a response to what exists at a given moment, but to what has happened. Regions that have recorded many interactions tend, on average, to behave as if they curve more strongly – that is, have stronger gravity – than regions that have recorded fewer.
This reframing subtly changes the role of spacetime. Instead of being a neutral arena in which events unfold, spacetime becomes an active participant. It stores information, constrains future dynamics and shapes how new interactions can occur. This naturally raises a deeper question. If spacetime records information, could time emerge from this recording process rather than being assumed from the start?
Time arising from information
Recently, we extended this informational perspective to time itself. Rather than treating time as a fundamental background parameter, we showed that temporal order emerges from irreversible information imprinting. In this view, time is not something added to physics by hand. It arises because information is written in physical processes and, under the known laws of thermodynamics and quantum physics, cannot be globally unwritten again. The idea is simple but far-reaching.
Every interaction, such as two particles crashing, writes information into the universe. These imprints accumulate. Because they cannot be erased, they define a natural ordering of events. Earlier states are those with fewer informational records. Later states are those with more.
Quantum equations do not prefer a direction of time, but the process of information spreading does. Once information has been spread out, there is no physical path back to a state in which it was localised. Temporal order is therefore anchored in this irreversibility, not in the equations themselves.
Time, in this view, is not something that exists independently of physical processes. It is the cumulative record of what has happened. Each interaction adds a new entry, and the arrow of time reflects the fact that this record only grows.
The future differs from the past because the universe contains more information about the past than it ever can about the future. This explains why time has a direction without relying on special, low-entropy initial conditions or purely statistical arguments. As long as interactions occur and information is irreversibly recorded, time advances.
Interestingly, this accumulated imprint of information may have observable consequences. At galactic scales, the residual information imprint behaves like an additional gravitational component, shaping how galaxies rotate without invoking new particles. Indeed, the unknown substance called dark matter was introduced to explain why galaxies and galaxy clusters rotate faster than their visible mass alone would allow.
In the informational picture, this extra gravitational pull does not come from invisible dark matter, but from the fact that spacetime itself has recorded a long history of interactions. Regions that have accumulated more informational imprints respond more strongly to motion and curvature, effectively boosting their gravity. Stars orbit faster not because more mass is present, but because the spacetime they move through carries a heavier informational memory of past interactions.
From this viewpoint, dark matter, dark energy and the arrow of time may all arise from a single underlying process: the irreversible accumulation of information.
Testing time
But could we ever test this theory? Ideas about time are often accused of being philosophical rather than scientific. Because time is so deeply woven into how we describe change, it is easy to assume that any attempt to rethink it must remain abstract. An informational approach, however, makes concrete predictions and connects directly to systems we can observe, model and in some cases experimentally probe.
Black holes provide a natural testing ground, as they seem to suggest information is erased. In the informational framework, this conflict is resolved by recognising that information is not destroyed but imprinted into spacetime before crossing the horizon. The black hole records it.
This has an important implication for time. As matter falls toward a black hole, interactions intensify and information imprinting accelerates. Time continues to advance locally because information continues to be written, even as classical notions of space and time break down near the horizon and appear to slow or freeze for distant observers.
As the black hole evaporates through Hawking radiation, the accumulated informational record does not vanish. Instead, it affects how radiation is emitted. The radiation should carry subtle signs that reflect the black hole’s history. In other words, the outgoing radiation is not perfectly random. Its structure is shaped by the information previously recorded in spacetime. Detecting such signs remains beyond current technology, but they provide a clear target for future theoretical and observational work.
The same principles can be explored in much smaller, controlled systems. In laboratory experiments with quantum computers, qubits (the quantum computer equivalent of bits) can be treated as finite-capacity information cells, just like the spacetime ones. Researchers have shown that even when the underlying quantum equations are reversible, the way information is written, spread and retrieved can generate an effective arrow of time in the lab. These experiments allow physicists to test how information storage limits affect reversibility, without needing cosmological or astrophysical systems.
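A minimal classical sketch can show the mechanism those experiments rely on. This is my own toy model, not the researchers’ actual setup: a perfectly reversible global interaction that nonetheless raises the entropy of a subsystem once its information spreads into the environment.

```python
# Toy classical model: reversible global dynamics, irreversible-looking
# local behaviour. One known "system" bit interacts with two random
# "environment" bits via XOR (a permutation of states, hence reversible).
from itertools import product
from math import log2

def shannon(probs):
    """Shannon entropy in bits of a probability distribution."""
    return sum(-p * log2(p) for p in probs if p > 0)

def system_marginal(joint):
    """Probability distribution of the system bit alone."""
    probs = [0.0, 0.0]
    for (s, _, _), p in joint.items():
        probs[s] += p
    return probs

# System bit known to be 0; two environment bits uniformly random.
joint = {(0, e1, e2): 0.25 for e1, e2 in product([0, 1], repeat=2)}

before = shannon(system_marginal(joint))  # 0.0 bits: system fully known

# Reversible interaction: XOR the system bit with each environment bit.
# Globally this permutes states, so the joint entropy cannot change.
after_joint = {}
for (s, e1, e2), p in joint.items():
    after_joint[(s ^ e1 ^ e2, e1, e2)] = p

after = shannon(system_marginal(after_joint))  # 1.0 bit: looks random

print(before, after)  # 0.0 1.0
print(shannon(joint.values()), shannon(after_joint.values()))  # 2.0 2.0
```

Locally the system bit now looks completely random; recovering its original value requires access to every environment bit it touched. That asymmetry between writing information outward and reading it back is the sense in which information spreading defines a direction of time, even though nothing irreversible happened globally.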
Extensions of the same framework suggest that informational imprinting is not limited to gravity. It may play a role across all fundamental forces of nature, including electromagnetism and the nuclear forces. If this is correct, then time’s arrow should ultimately be traceable to how all interactions record information, not just gravitational ones. Testing this would involve looking for limits on reversibility or information recovery across different physical processes.
Taken together, these examples show that informational time is not an abstract reinterpretation. It links black holes, quantum experiments and fundamental interactions through a shared physical mechanism, one that can be explored, constrained and potentially falsified as our experimental reach continues to grow.
What time really is
Ideas about information do not replace relativity or quantum mechanics. In everyday conditions, informational time closely tracks the time measured by clocks. For most practical purposes, the familiar picture of time works extremely well. The difference appears in regimes where conventional descriptions struggle.
Near black hole horizons or during the earliest moments of the universe, the usual notion of time as a smooth, external coordinate becomes ambiguous. Informational time, by contrast, remains well defined as long as interactions occur and information is irreversibly recorded.
All this may leave you wondering what time really is. This shift reframes the longstanding debate. The question is no longer whether time must be assumed as a fundamental ingredient of the universe, but whether it reflects a deeper underlying process.
In this view, the arrow of time can emerge naturally from physical interactions that record information and cannot be undone. Time, then, is not a mysterious background parameter standing apart from physics. It is something the universe generates internally through its own dynamics. It is not ultimately a fundamental part of reality, but emerges from more basic constituents such as information.
Whether this framework turns out to be a final answer or a stepping stone remains to be seen. Like many ideas in fundamental physics, it will stand or fall based on how well it connects theory to observation. But it already suggests a striking change in perspective.
The universe does not simply exist in time. Time is something the universe continuously writes into itself.
Florian Neukart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Government plans for the water sector in England and Wales, published earlier this month, were heralded as a “once-in-a-generation” opportunity to transform the system. However, despite the confidence of UK environment secretary Emma Reynolds, the long-awaited plans raise significant concerns. This is a reform agenda for water as a business – but not a vision for managing a vital public and environmental resource.
The fully privatised water system in England and Wales has been facing two (self-inflicted) crises in recent years. First, companies have failed to invest enough in infrastructure and have been pouring untreated sewage into rivers and seas.
Second, some companies (acting primarily in the interests of their shareholders) have hiked up debts while still paying out dividends. The largest company, Thames Water, has been teetering on the brink of financial collapse since 2023.
Occasional but serious interruptions of water supplies prove that all is not well in water service delivery, and there is growing recognition that the water system is unfit for purpose. We, as academics who helped set up a research body called the People’s Commission on the Water Sector, would have to agree.
But crucially, missing from the government’s white paper detailing the new policy is any reflection on the processes that led to this situation. It acknowledges that companies have behaved badly and that some water companies and their owners have prioritised short-term profits over long-term resilience and the environment.
This is a serious understatement. Underlying the outcomes of the past few years are the profit-seeking activities by private investors. Private companies cut back on costs and manipulated finances to benefit shareholders.
Water users and the environment suffer the consequences. And the regulators, Ofwat and the Environment Agency, were ill-prepared for the scale of the private sector’s extractive practices.
The bottom line is that the profit motive is incompatible with treating water in a way that is socially and environmentally equitable.
Now, the government is proposing a new single regulator with dedicated teams for each company, rather than the four institutions that have been in place until now. In addition, there are plans for better regulation and enforcement for pollution, and improvements to infrastructure (the white paper reveals how little is known about water company assets).
However, the language around regulation is confused and contradictory. On the one hand there is talk of being tough: Reynolds says there will be “nowhere to hide” for errant water companies. And there could be criminal proceedings against directors, who may also be deprived of bonus payments.
But on the other, the language is remarkably accommodating in its approach to the firms that have put the whole system in jeopardy.
Those that were behind the sewage crises and the perilous state of water company finances are to be helped to improve through a “performance-improvement regime”. Considerable attention is devoted to creating an attractive climate for investors, where returns will be stable and predictable. This, despite the fact that recent unpredictability was largely due to the activities of private companies.
Power and politics
If water in England and Wales remains in private hands, the unresolvable tension between the drive for profits alongside controls to protect consumers and the environment will persist. The demands of capital tend to prevail, with considerable government attention devoted to ensuring that the sector is attractive to investors.
As an example, the government claims that the next five years will see £104 billion of private investment. But this ultimately is funded by the planned 36% rise in bills (plus inflation). And a fifth of this (£22 billion) is set aside for the costs of capital, to cover interest payments and dividends.
The focus on regulatory and management measures obscures issues of power and politics in water governance. Water is supplied by companies whose shareholders have immense political power.
Private equity investor BlackRock, the biggest asset manager in the world, has stakes in three water companies – Severn Trent, United Utilities and South West Water (via Pennon). Keir Starmer, the prime minister, entertained BlackRock’s CEO in November 2024, at a meeting where an overhaul of regulation was reportedly promised.
And Hong Kong-based CKI, once a contender to take over Thames Water, is the majority owner of Northumbrian Water. It also has stakes in Britain’s gas, electricity and rail networks, as well as owning Superdrug and the discount store, Savers.
A similar story is told in other companies. These are global behemoths that have influence and huge resources, and as such may seek to shape regulation in their own interests.
The system in England and Wales is an outlier. No other country has copied this extreme privatised model. In fact, many have taken privatised water back into public hands. In Paris, the public water operator Eau de Paris is an award-winning example of transparency, accountability and integrity in public service.
It demonstrates that it is possible to create public services that are fair, sustainable and resilient. Key to this process has been the vision of water as a vital common good rather than a commodity.
The government’s plans will patch up the water system, particularly with the boost in revenue from bill payers. But the private sector has found unanticipated ways to maximise profits in the past and may well do so again. Rather than continually tweaking the failed private model, the only real route to operating water in the public interest is for it to be in public ownership.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
In November 2012, during my first year as a PhD student, a 23-year-old medical student knocked on my door. Earlier that day, we had been discussing our ages in our shared kitchen. At 30, I had stayed silent, feeling a sharp sting of embarrassment next to my 20-something housemates.
But this student was determined to get an answer from me. He shoved his passport in my face and demanded to see mine. When I admitted my age, he laughed and said: “Wow, you’re so old.”
In that moment, I felt a deep sense of shame and failure. But after a decade of research tracking more than 100 young people, I want to tell my younger self: you weren't failing. You simply hadn't inherited the same amount of time as your peers.
My work with students in China shows that social inequality isn’t just about money or status. It’s also about time inheritance.
I started my PhD at 30 only after spending five years working to clear my family’s debts and move my parents out of a house where sewage regularly flooded their floors. My housemate, whose father and grandfather were doctors and Cambridge alumni, had inherited “banked time” – a cushion of security that allowed him to glide straight to the academic starting line.
Banked time v borrowed time
To make sense of this, I distinguish between two kinds of time inheritance.
Some young people receive banked time. They start life with a “full tank”: parents who can afford to support them through unpaid internships, gap years, or an extra degree, and the freedom to change course or repeat a year without financial ruin. This creates a sense of temporal security that allows them to take measured risks, explore their interests, and wait for the best opportunities to arise. They have “slack” in the system that actually generates more time in the long run.
Others live on borrowed time. They start with an "empty tank", already owing years of labour to their families before they even begin. Because their education often relies on the extreme sacrifices of parents or the missed opportunities of siblings, these students carry a heavy debt-paying mentality.
A delay in earning feels dangerous because it isn't just a personal setback; it is a failure to repay a moral and economic debt to those who supported them. This pressure punishes in two ways.
Some make “self-sabotaging” choices by picking lower-tier degrees or precarious jobs just because they offer immediate income. Others find their education takes far longer as they are forced to pause their studies to work and save, trapped in a cycle of paying off “time interest” before they can finally begin their own lives.
Take Jiao, a brilliant student from a poor rural family in China. He scored high enough to enter one of the country’s top two universities: Peking or Tsinghua, the equivalent of Oxford or Cambridge. Yet he chose a second-tier university.
He felt he could not afford the “time cost” of the mandatory military training that was required at the elite universities at the time he was applying. This would have delayed his ability to earn money and support his parents. On paper, this looks like a self-sabotaging decision. In reality, it was a survival strategy shaped by time poverty: he simply did not have months to spare.
In contrast, Yi, born into a comfortable Beijing family, dropped out of university after just one year because she didn’t like the teaching. She didn’t see this as a failure, but as “cutting her losses”. With her parents’ backing, she quickly applied to an elite university in Australia. Yi had inherited banked time, which gave her the security to try again.
Both students were capable. What differed was how much time they could afford to lose.
Lost learning
Although my research focuses on China, these temporal mechanisms are not culturally unique. They show up in different forms in other countries.
We saw this during post-pandemic debates about “lost learning”. In the UK, for example, tutoring programmes and extra school hours were offered as fixes. But these only work if pupils have the spare time to use them.
For those already caring for siblings or parents, working part-time or commuting long distances, the extra provision can become another burden: deepening, rather than reducing, time debt.
In universities, the cost-of-living crisis has pushed more students into long hours of paid work during term. They get through their degrees, but at a price: less time to build networks, take internships or simply think about their next steps.
Rigid career “windows” also matter. Age-limited grants, early-career schemes that expire a few years after graduation and expectations of a seamless CV all act as a time tax on those who took longer to reach the starting line. They might have been caring for relatives, changing country, or working to stay afloat.
Making education fairer means being aware of this time disparity. This could mean designing catch-up and tutoring schemes around the actual schedules of working and caring students, not an idealised timetable.
Within academia, extending age and career-stage limits on scholarships, fellowships and early-career posts would mean that those who started “late” are not permanently penalised. And more recognition of the burden of unpaid care and emotional labour in both universities and workplaces would be a valuable step.
Ultimately, doing well in education is not just about how we spend our time. It is about who is allowed to have time in the first place, and who is quietly starting the race already in debt.
Dr Cora Lingling Xu receives funding from the Cambridge International Trust, the Sociological Review Foundation, the ESRC Social Science Festival, the British Academy and various grants from Queens' College Cambridge, Cambridge University, Keele University and Durham University.