Did a tsunami hit the Bristol Channel four centuries ago? Revisiting the great flood of 1607

Source: The Conversation – UK – By Simon Haslett, Pro Vice-Chancellor and Professor of Physical Geography, Bath Spa University; Swansea University

People living on the low-lying shores of the Bristol Channel and Severn estuary began their day like any other on January 30 1607. The weather was calm. The sky was bright.

Then, suddenly, the sea rose without warning. Water came racing inland, tearing across fields and villages, sweeping away the homes, livestock and people in its path.

By the end of the day, thousands of acres were underwater. As many as 2,000 people may have died. It was, quite possibly, the deadliest sudden natural disaster to hit Britain in 500 years.

More than four centuries later, the flood of 1607 still raises a troubling question. What, exactly, caused it?

Most early explanations blamed an exceptional storm. But when my colleague and I began examining the historical evidence more closely in 2002, we became less certain that this was the full picture. For one, eyewitness accounts tell a more unsettling story.

The flood struck on January 30 1607 – or January 20 1606, according to the old Julian calendar, which was still in use at that time. The flood affected coastal communities across south Wales, Somerset, Gloucestershire and Devon, inundating some areas several miles inland. People at the time were no strangers to storms or high tides – but this was different.

Churches were inundated. Entire villages vanished. Vast stretches of farmland were ruined by saltwater, leaving communities facing hunger as well as grief. Memorial plaques in local churches and parish documents still mark the scale of the catastrophe.

Much of what we know about how the event unfolded comes from chapbooks, which were cheaply printed pamphlets sold in the early 17th century. These accounts describe not just the damage, but the terrifying speed and character of the water itself.

One such pamphlet, God’s Warning to His People of England, describes a calm morning suddenly interrupted by what witnesses saw approaching from the sea:

Upon Tuesday 20 January 1606 there happened such an overflowing of waters … the like never in the memory of man hath been seen or heard of. For about nine of the morning, many of the inhabitants of these countreys … perceive afar off huge and mighty hilles of water tombling over one another, in such sort as if the greatest mountains in the world had overwhelmed the lowe villages or marshy grounds.

Our interest in the event arose from reading that account. It gives a specific time for the inundation – around nine in the morning – and emphasises the fair weather and sudden arrival of the floodwaters.

From a geographer’s perspective, this description is striking. Sudden onset, wave-like forms and an absence of storm conditions are not typical of storm surges. To us, the language was reminiscent of eyewitness accounts of tsunamis elsewhere in the world. This suggested that a tsunami origin for the flood should be evaluated.

Until the early 2000s, few researchers seriously questioned the storm-surge explanation. But as we revisited the historical sources, we began to ask whether the physical landscape might also preserve clues to what happened in 1607. If an extreme marine inundation had struck the coast at that time, it may have left geological evidence behind.

In several locations around the estuary, we identified a suite of features with a chronological link to the early 17th century: the erosion of two spurs of land that previously jutted out into the estuary, the removal of almost all fringing salt marsh deposits, and the occurrence of sand layers in otherwise muddy deposits.

These features point to a high-energy event. The question was: what kind?

Testing the theory

To explore this further, we undertook a programme of fieldwork in 2004. We examined sand layers, noted signatures of tsunami impact such as coastal erosion, and analysed the movement of large boulders along the shoreline. Boulder transport is particularly useful, as it allows estimates of the wave heights needed to move them.

Some fieldwork was filmed for a BBC documentary broadcast in April 2005, which featured other colleagues too. The programme included an argument for a storm, but also another suggesting it isn’t fanciful to consider an offshore earthquake as the trigger.

Our results were published in 2007, coincidentally the 400th anniversary of the flood. In parallel, colleagues published a compelling model supporting a storm surge. The scientific debate, rather than being resolved, intensified.

An updated estimate of wave heights, based on the boulder data and using refined formulas, was published in 2021. It suggests a minimum tsunami wave height of 4.2 metres would be required to explain the coastal features, whereas storm waves of over 16 metres would be needed – perhaps unlikely within the relatively sheltered Severn estuary.
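The refined formulas themselves are not reproduced here, but the general logic of boulder-based calculations can be sketched as a simple force balance: a flow moves a boulder when drag exceeds the friction from its submerged weight. The densities, dimensions and coefficients in this minimal sketch are hypothetical, chosen only to illustrate the method.

```python
import math

# Illustrative force balance for boulder transport (NOT the published 2021 formulas).
# A flow can slide a boulder once hydrodynamic drag exceeds the friction supplied by
# its submerged weight:
#     0.5 * rho_w * Cd * A * u**2  >=  mu * (rho_s - rho_w) * g * V
# Every value below is a hypothetical example, chosen only to show the calculation.

rho_w = 1025.0           # seawater density (kg/m^3)
rho_s = 2600.0           # assumed boulder density (kg/m^3)
g = 9.81                 # gravitational acceleration (m/s^2)
a, b, c = 2.0, 1.5, 1.0  # assumed boulder axes (m)
Cd = 1.5                 # assumed drag coefficient
mu = 0.7                 # assumed bed friction coefficient

V = a * b * c            # approximate boulder volume (m^3)
A = b * c                # flow-facing cross-sectional area (m^2)

# Minimum flow speed required to start moving the boulder
u_min = math.sqrt(2 * mu * (rho_s - rho_w) * g * V / (rho_w * Cd * A))
print(f"Minimum flow speed to move the boulder: {u_min:.1f} m/s")
```

Because a tsunami sustains a fast, deep flow for minutes while a storm wave of the same height delivers only a brief surge, the storm wave needed to produce an equivalent flow is several times higher – which is broadly the reasoning behind the contrast between the 4.2-metre and 16-metre figures.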

The low-lying coasts around the Bristol Channel remain vulnerable to flooding. Storm surges occur regularly, though usually with more limited effects. Climate change is now increasing the risk through rising sea levels and more intense weather systems.

Tsunamis, by contrast, are rare. A report by the UK government’s Department for Environment, Food & Rural Affairs found it unlikely that the 1607 flood was caused by one. However, it also noted that offshore southwest Britain is among the more credible locations for a future tsunami, triggered by seismic activity or submarine landslides.

This distinction matters. Storm surges can usually be forecast. Tsunamis may arrive with little or no warning.




Read more:
From Noah’s flood to Shakespeare’s storms, what literature reveals about our changing relationship with the weather


Scholarly and public interest in the flood has not waned. In November 2024, a Channel 5 documentary brought together several strands of recent research, concluding that the jury is still out on the flood’s cause.

That uncertainty should not be seen as a failure. Evaluating competing explanations is essential when trying to understand extreme events in the past – especially when those events have implications for present-day risk.

Whether the flood of 1607 was driven by storm winds, unusual tides or waves generated far offshore, its lesson is clear. Coastal societies ignore rare disasters at their peril.

The sea has come in before. And it will do so again.

The Conversation

Simon Haslett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Did a tsunami hit the Bristol Channel four centuries ago? Revisiting the great flood of 1607 – https://theconversation.com/did-a-tsunami-hit-the-bristol-channel-four-centuries-ago-revisiting-the-great-flood-of-1607-274135

Why it would be a big mistake for the US to go to war with Iran

Source: The Conversation – UK – By Bamo Nouri, Honorary Research Fellow, Department of International Politics, City St George’s, University of London

Reports of a growing US naval presence in the Gulf have prompted speculation that the US could be preparing for another Middle East war, this time with Iran.

The US president, Donald Trump, has warned of “serious consequences” if Iran does not comply with his demands to permanently halt uranium enrichment, curb its ballistic missile program and end support for regional proxy groups.

Yet, despite the familiar language of escalation, much of what is unfolding appears closer to brinkmanship than preparation for war.

The US president’s own political history offers an important starting point for understanding why this is. Trump’s electoral appeal, both in 2016 and again in 2024, has rested heavily on a promise to end America’s “forever wars” and to avoid costly overseas interventions.

And Iran represents the very definition of such a war. Any all-out conflict with Tehran would almost certainly be long and drag in other countries in the region.

It would also be hard to achieve a decisive victory. For a president whose political brand is built on restraint abroad and disruption at home, a war with Iran would contradict the central logic of his foreign policy narrative.

Meanwhile Iran’s strategic posture is rooted in decades of preparing for precisely this scenario. Since the 1979 revolution, Tehran’s military doctrine and foreign policy have been shaped by survival in the face of potential external attack.

Rather than building a conventional force able to defeat the US in open combat, Iran has invested in asymmetric capabilities: ballistic and cruise missiles, the use of regional proxies, cyber operations and anti-access strategies (including missiles, air defences, naval mines, fast attack craft, drones and electronic warfare capabilities). Anyone who attacks Iran would face prolonged and escalating costs.

This is why comparisons to Iraq in 2003 are misleading. Iran is larger, more populous, more internally cohesive and far more militarily prepared for a sustained confrontation.

An attack on Iranian territory would not represent the opening phase of regime collapse but the final layer of a defensive strategy that anticipates exactly such a scenario. Tehran would be prepared to absorb damage and is capable of inflicting it across multiple theatres – including in Iraq, the Gulf, Yemen and beyond.

With an annual defence budget approaching US$900 billion (£650 billion), there is no question that the US has the capacity to initiate a conflict with Iran. But the challenge for the US lies not in starting a war, but in sustaining one.

The wars in Iraq and Afghanistan offer a cautionary precedent. Together, they are estimated to have cost the US between US$6 trillion and US$8 trillion when long-term veterans’ care, interest payments and reconstruction are included.

These conflicts stretched over decades, repeatedly exceeded initial cost projections and contributed to ballooning public debt. A war with Iran – larger, more capable and more regionally embedded – would almost certainly follow a similar, if not more expensive, trajectory.

The opportunity cost of the conflicts in Iraq and Afghanistan was potentially greater, absorbing vast financial and political capital at a moment when the global balance of power was beginning to shift.

As the US focused on counterinsurgency and stabilisation operations, other powers, notably China and India, were investing heavily in infrastructure, technology and long-term economic growth.

That dynamic is even more pronounced today. The international system is entering a far more intense phase of multipolar rivalry, characterised not only by military competition but by races in artificial intelligence, advanced manufacturing and strategic technologies.

Sustained military engagement in the Middle East would risk locking the US into resource-draining distractions just as competition with China accelerates and emerging powers seek greater influence.

Iran’s geographic position compounds this risk. Sitting astride key global energy routes, Tehran has the ability to disrupt shipping through the Strait of Hormuz.

Even limited disruption would drive oil prices sharply higher, feeding inflation globally. For the US, this would translate into higher consumer prices and reduced economic resilience at precisely the moment when strategic focus and economic stability are most needed.

There is also a danger that military pressure would backfire politically. Despite significant domestic dissatisfaction, the Iranian regime has repeatedly demonstrated its ability to mobilise nationalist sentiment in response to external threats. Military action could strengthen internal cohesion, reinforce the regime’s narrative of resistance and marginalise opposition movements.

Previous US and Israeli strikes on Iranian infrastructure have not produced decisive strategic outcomes. Despite losses of facilities and senior personnel, Iran’s broader military posture and regional influence have proved adaptable.

Rhetoric and restraint

Trump has repeatedly signalled his desire to be recognised as a peacemaker. He has framed his Middle East approach as deterrence without entanglement, citing the Abraham Accords and the absence of large-scale wars during his presidency. This sits uneasily alongside the prospect of war with Iran, particularly the week after the US president launched his “Board of Peace”.

The Abraham Accords depend on regional stability, economic cooperation and investment. A war with Iran would jeopardise all of these. Despite their own rivalry with Tehran, Gulf states such as Saudi Arabia, the UAE and Qatar have prioritised regional de-escalation.

Recent experience in Iraq and Syria shows why. The collapse of central authority created power vacuums quickly filled by terrorist groups, exporting instability rather than peace.

Some argue that Iran’s internal unrest presents a strategic opportunity for external pressure. While the Islamic Republic faces genuine domestic challenges, including economic hardship and social discontent, this should not be confused with imminent collapse. The regime retains powerful security institutions and loyal constituencies, particularly when it can frame external pressure as an attack on national sovereignty.

Taken together, these factors suggest that current US military movements and rhetoric are better understood as coercive signalling rather than preparation for invasion.

This is not 2003, and Iran is neither Iraq nor Venezuela. A war would not be swift, cheap or decisive. The greatest danger lies not in a deliberate decision to invade, but in miscalculation. Heightened rhetoric and military proximity can increase the risk of accidents and unintended escalation.

Avoiding that outcome will require restraint, diplomacy and a clear recognition that some wars – however loudly threatened – are simply too costly to fight.

The Conversation

Bamo Nouri does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why it would be a big mistake for the US to go to war with Iran – https://theconversation.com/why-it-would-be-a-big-mistake-for-the-us-to-go-to-war-with-iran-274592

Artemis II: The first human mission to the moon in 54 years launches soon — with a Canadian on board

Source: The Conversation – Canada – By Gordon Osinski, Professor in Earth and Planetary Science, Western University

The crew of the new NASA moon rocket Artemis II at the Kennedy Space Center, including Jeremy Hansen of the Canadian Space Agency, on the far right. From left: Reid Wiseman, Victor Glover and Christina Koch. (NASA)

It’s been 54 years since the last Apollo mission, and since then, humans have not ventured beyond low-Earth orbit. But that’s all about to change with next week’s launch of the Artemis II mission from the Kennedy Space Center in Florida.

This is the first crewed flight of NASA’s Artemis program and the first time since 1972 that humans have ventured to the moon. Onboard is Canadian astronaut Jeremy Hansen, who will be the first non-American to fly to the moon and will make Canada only the second country in the world to send an astronaut into deep space.




Read more:
Canada’s space technology and innovations are a crucial contribution to the Artemis missions


I am a professor, an explorer and a planetary geologist. For the past 15 years, I have been helping to train Hansen and other astronauts in geology and planetary science. I am also a member of the Artemis III Science Team and the principal investigator for Canada’s first ever rover mission to the moon.

NASA’s Artemis II SLS rocket and Orion spacecraft secured to the mobile launcher at NASA’s Kennedy Space Center in Florida. (NASA)

What will the mission achieve?

NASA’s Artemis program, launched in 2017, has the ambitious goal to return humans to the moon and to establish a lunar base in preparation for sending humans to Mars. The first mission, Artemis I, launched in late 2022. Following some delays, Artemis II is scheduled for launch as early as a week from now.

Onboard will be Hansen, along with his three American crew-mates.

This is an incredibly exciting mission. Artemis II is the first time humans have launched on NASA’s huge SLS (Space Launch System) rocket, and the first time humans have flown in the Orion spacecraft.

SLS is the most powerful rocket NASA has ever built, with the capability to send more than 27 metric tonnes of payload — equipment, instruments, scientific experiments and cargo — to the moon. The Orion spacecraft sits at the very top and is the crew’s ride to the moon. The Artemis II crew named their Orion capsule Integrity, a word they say embodies trust, respect, candour and humility.

An infographic produced by NASA showing the different parts of the Orion spacecraft. (NASA)

What will the Artemis II crew do in space?

Following launch, the crew will carry out tests of Integrity’s essential life-support systems: the water dispenser, firefighting equipment, and, of course, the toilet. Did you know there was no toilet on the Apollo missions? Instead, the crews used “relief tubes.”

If everything looks good, the crew will ignite what’s known as the Interim Cryogenic Propulsion Stage — part of the SLS rocket still connected to Integrity — to elevate the spacecraft’s orbit. If things are still looking good, the Orion spacecraft and its four human travellers will spend 24 hours in a high-Earth orbit up to 70,000 kilometres away from the planet.

For comparison, the International Space Station orbits the Earth at a mere 400 kilometres.

Following a series of tests and checks, the crew will conduct one of the most critical stages of the mission: the Trans-Lunar Injection, or TLI. This is the crucial moment that changes a spacecraft from orbiting the Earth — where the option to quickly return home remains — to sending it on its way to the moon and into deep space.

The Artemis II mission’s 10-day ‘figure-eight’ trajectory. (NASA)

Once Integrity is on its way to the moon after TLI, there is no turning back — at least, not without going to the moon first. That’s because Artemis II — like the early Apollo missions — enters what’s called a “free-return trajectory”. What this means is that even if Integrity’s engines fail completely, the moon’s gravity will naturally loop the spacecraft around it and aim it towards Earth.

After the three-day journey to the moon, the crew will carry out perhaps the most exciting stage of the mission: lunar fly-by. Integrity will loop around the far side of the moon, passing anywhere from 6,000 to 10,000 kilometres above its surface — much farther than any Apollo mission.

To quote Star Trek, at that most distant point, the Artemis II crew will have boldly gone where no (hu)man has gone before. This will be, quite literally, the farthest from Earth that any human being has ever travelled.

International effort to explore the moon

That a Canadian astronaut is part of the crew of Artemis II is a testament to the collaborative international nature of the Artemis program.

While NASA created the program and is the driving force, there are now 60 countries that have signed the Artemis Accords.

On Jan. 26, 2026, Oman became the 61st nation to sign the Artemis Accords. (NASA)

The foundation for the Artemis Accords is the recognition that international co-operation in space is intended not only to bolster space exploration but to enhance peaceful relationships among nations. This is particularly necessary now — perhaps more than any other time since the Cold War.

I truly hope that as Integrity returns from the moon’s far side, people around the world will pause — at least for a few moments — and be united in thinking of a better future. As American astronaut Bill Anders, who flew the first crewed Apollo mission to the moon, once said:

“We came all this way to explore the moon, and the most important thing is that we discovered the Earth.”

The Conversation

Gordon Osinski founded the company Interplanetary Exploration Odyssey Inc. He receives funding from the Natural Sciences and Engineering Research Council of Canada and the Canadian Space Agency.

ref. Artemis II: The first human mission to the moon in 54 years launches soon — with a Canadian on board – https://theconversation.com/artemis-ii-the-first-human-mission-to-the-moon-in-54-years-launches-soon-with-a-canadian-on-board-273881

Submarine mountains and long-distance waves stir the deepest parts of the ocean

Source: The Conversation – Global Perspectives – By Jessica Kolbusz, Research Fellow, School of Biological Sciences, The University of Western Australia

NOAA Office of Ocean Exploration and Research, 2019 Southeastern U.S. Deep-sea Exploration

When most of us look out at the ocean, we see a mostly flat blue surface stretching to the horizon. It’s easy to imagine the sea beneath as calm and largely static – a massive, still abyss far removed from everyday experience.

But the ocean is layered, dynamic and constantly moving, from the surface down to the deepest seafloor. While waves, tides and currents near the coast are familiar and accessible, far less is known about what happens several kilometres below, where the ocean meets the seafloor.

Our new research, published in the journal Ocean Science, shows water near the seafloor is in constant motion, even in the abyssal plains of the Pacific Ocean. This has important consequences for climate, ecosystems and how we understand the ocean as an interconnected system.

Enter the abyss

The central and eastern Pacific Ocean include some of Earth’s largest abyssal regions (places where the sea is more than 3,000 metres deep). Here, most of the seafloor lies four to six kilometres below the surface. It is shaped by vast abyssal plains, fracture zones and seamounts.

It is cold and dark, and the water and ecosystems here are under immense pressure from the ocean above.

Just above the seafloor, no matter the depth, sits a region known as the bottom mixed layer. This part of the ocean is relatively uniform in temperature, salinity and density because it is stirred through contact with the seafloor.

Rather than a thin boundary, this layer can extend from tens to hundreds of metres above the seabed. It plays a crucial role in the movement of heat, nutrients and sediments between the pelagic ocean and the seabed, including the beginning of the slow return of water from the bottom of the ocean toward the surface as part of global ocean circulation.

Observations focused on the bottom mixed layer are rare, but this is beginning to change. Most ocean measurements focus on the upper few kilometres, and deep observations are scarce, expensive and often decades apart.

In the Pacific especially, scientists have long known that cold Antarctic waters flow northward, along topographic features such as the Tonga-Kermadec Ridge and the Izu-Ogasawara and Japan Trenches.

But the finer details of how these waters interact with seafloor features in ways that intermittently stir and reshape the bottom layer of the ocean have remained largely unknown.

Deep sea ecosystems, such as this bright pink soft coral on a seafloor mount, are under immense pressure from the ocean above. (NOAA Photo Library)

Investigating the abyss

To investigate the Pacific abyssal ocean, my colleagues and I combined new surface-to-seafloor measurements collected during a trans-Pacific expedition with high-quality repeat data about the physical features of the ocean gathered over the past two decades.

These observations allowed us to examine temperature and pressure all the way down to the seafloor over a wide range of latitudes and longitudes.

We then compared multiple scientific methods for identifying the bottom mixed layer and used machine learning techniques to understand what factors best explain the variations in its thickness.
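The study’s exact machine learning workflow is not reproduced here, but the general idea of ranking candidate drivers can be sketched with a standard tree-ensemble regression; the predictor names, synthetic data and model choice below are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative sketch only: rank candidate drivers of bottom mixed layer thickness.
# The predictors, the synthetic data and the model choice are assumptions for
# illustration, not the study's actual dataset or method.
rng = np.random.default_rng(0)
n = 500
depth = rng.uniform(3000, 6000, n)     # seafloor depth (m)
roughness = rng.uniform(0, 1, n)       # seabed roughness index (arbitrary units)
tidal_energy = rng.uniform(0, 1, n)    # internal-tide energy proxy (arbitrary units)
latitude = rng.uniform(-40, 40, n)     # latitude (degrees)

# Synthetic "thickness" response dominated by depth and a tide-roughness interaction
thickness = 0.05 * depth + 300 * roughness * tidal_energy + rng.normal(0, 30, n)

X = np.column_stack([depth, roughness, tidal_energy, latitude])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, thickness)

for name, importance in zip(["depth", "roughness", "tidal energy", "latitude"],
                            model.feature_importances_):
    print(f"{name:>12s}: {importance:.2f}")
```

Feature importances from a model like this indicate which inputs most reduce prediction error – one common way of asking which factors best explain variation in a quantity such as layer thickness.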

Rather than being a uniform layer, we found the bottom mixed layer in the abyssal Pacific varies dramatically. In some regions it was less than 100 metres thick; in others it exceeded 700 metres.

This variability is not random; it’s controlled by the seafloor depth and the interactions between waves generated by surface tides and rough landscapes on the seabed.

In other words, the deepest ocean is not quietly stagnant as is often imagined. It is continually stirred by remote forces, shaped by seafloor features, and dynamically connected to the rest of the ocean above.

Just as coastal waters are shaped by waves, currents and sediment movement, the abyssal ocean is shaped by its own set of drivers. However, it is operating over larger distances and longer timescales.

Topographic features of the seafloor intermittently stir and reshape the bottom layer of the ocean. (NOAA Photo Library)

Connected to the rest of the world

This matters for several reasons.

First, the bottom mixed layer influences how heat is stored and redistributed in the ocean, affecting long-term climate change. Some ocean and climate models still simplify seabed mixing, which can lead to errors in how future climate is projected.

Second, it plays a role in transporting sediment and seabed ecosystems. As interest grows in deep-sea mining and other activities on the high seas, understanding how the seafloor environment changes, and importantly how seafloor disturbances might spread, becomes increasingly important.

Our results highlight how little of the deep ocean we actually observe.

Large areas of the abyssal Pacific remain effectively unsampled, even as international agreements such as the new UN High Seas Treaty seek to manage and protect these regions.

The deep ocean is not a silent, static place. It is active, connected to the oceans above and changing. If we want to make informed decisions about the future of the high seas, we need to understand what’s happening at the very bottom in space and time.

The Conversation

Jessica Kolbusz receives funding from the marine research organisation Inkfish LLC. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article, or the decision to submit it for publication.

ref. Submarine mountains and long-distance waves stir the deepest parts of the ocean – https://theconversation.com/submarine-mountains-and-long-distance-waves-stir-the-deepest-parts-of-the-ocean-274124

What the ‘mother of all deals’ between India and the EU means for global trade

Source: The Conversation – Global Perspectives – By Peter Draper, Professor, and Executive Director: Institute for International Trade, and Director of the Jean Monnet Centre of Trade and Environment, Adelaide University

The “mother of all deals”: that’s how European Commission President Ursula von der Leyen described the new free trade agreement between the European Union and India, announced on Tuesday after about two decades of negotiations.

The deal will affect a combined population of 2 billion people across economies representing about a quarter of global GDP.

Speaking in New Delhi, von der Leyen characterised the agreement as a “tale of two giants” who “choose partnership, in a true win-win fashion”.

So, what have both sides agreed to – and why does it matter so much for global trade?

What has been agreed

Under this agreement, tariffs on 96.6% of EU goods exported to India will be eliminated or reduced. This will reportedly mean savings of approximately €4 billion (about A$6.8 billion) annually in customs duties on European products.

The automotive sector is the big winner. European carmakers – including Volkswagen, BMW, Mercedes-Benz and Renault – will see tariffs on their vehicles gradually reduced from the current punitive rate of 110% to as little as 10%.

The reduced tariffs will apply to an annual quota of 250,000 vehicles, which is six times larger than the quota the UK received in its deal with India.

To protect India’s domestic manufacturers, European cars priced below €15,000 (A$25,500) will face higher tariffs, while electric vehicles get a five-year grace period.

India will almost entirely eliminate tariffs on machinery (which previously faced rates up to 44%), chemicals (22%) and pharmaceuticals (11%).

Wine is particularly notable – tariffs are being slashed from 150% to between 20% and 30% for medium and premium varieties. Spirits face cuts from 150% to 40%.

In return, the EU is also opening up its market. It will reduce tariffs on 99.5% of goods imported from India. EU tariffs on Indian marine products (such as shrimp), leather goods, textiles, handicrafts, gems and jewellery, plastics and toys will be eliminated.

These are labour-intensive sectors where India has genuine competitive advantage. Indian exporters in marine products, textiles and gems have faced tough conditions in recent years, partly due to US tariff pressures. That makes this EU access particularly valuable.

What’s been left out

This deal, while ambitious by Indian standards, has limits. It explicitly excludes deeper policy harmonisation on several fronts. Perhaps most significantly, the deal doesn’t include comprehensive provisions on labour rights, environmental standards or climate commitments.

While there are references to carbon border adjustment mechanisms (by which the EU imposes its domestic carbon price on imports into its common market), these likely fall short of the enforceable environmental standards increasingly common in EU deals.

And the deal keeps protections for sensitive sectors in Europe: the EU maintains tariffs on beef, chicken, dairy, rice and sugar. Consumers in Delhi might enjoy cheaper European cars, while Europe’s farmers are protected from competition.

India’s seafood exporters stand to benefit from the deal. (Elke Scholiers/Getty)

Why now?

Three forces converged to make this deal happen. First, a growing need to diversify from traditional partners amid economic uncertainty.

Second, the Donald Trump factor. Both the EU and India currently face significant US tariffs: India faces a 50% tariff on goods, while the EU faces headline tariffs of 15% (and recently avoided more in Trump’s threats over Greenland). This deal provides an alternative market for both sides.

And third, there’s what economists call “trade diversion” – notably, when Chinese products are diverted to other markets after the US closes its doors to them.

Both the EU and India want to avoid becoming dumping grounds for products that would normally go to the American market.

A dealmaking spree

The EU has been on something of a dealmaking spree recently. Earlier this month, it signed an agreement with Mercosur, a South American trade bloc.

That deal, however, has hit complications. On January 21, the European Parliament voted to refer it to the EU Court of Justice for legal review, which could delay ratification.

This creates a cautionary tale for the India deal. The legal uncertainty around Mercosur shows how well-intentioned trade deals can face obstacles.

The EU also finalised negotiations with Indonesia in September; EU–Indonesia trade was valued at €27 billion in 2024 (about A$46 billion).

For India, this deal with the EU is considerably bigger than recent agreements with New Zealand, Oman and the UK. It positions India as a diversified trading nation pursuing multiple partnerships.

However, the EU–India trade deal should be understood not as a purely commercial breakthrough, but also as a strategic signal — aimed primarily at the US.

In effect, it communicates that even close allies will actively seek alternative economic partners when faced with the threat of economic coercion or politicised trade pressure.

This interpretation is reinforced by both the deal’s timing and how it was announced. The announcement came even though key details still need to be negotiated and there remains some distance to go before final ratification.

That suggests the immediate objective was to deliver a message: the EU has options, and it will use them.

What does this mean for Australia and India?

For Australians, this deal matters more than you might think. Australia already has the Australia-India Economic Cooperation and Trade Agreement, which came into force in late 2022.

Australia has eliminated tariffs on all Indian exports, while India has removed duties on 90% of Australian goods by value, rising from an original commitment of 85%.

This EU-India deal should provide impetus for Australia and India to finalise the broader Comprehensive Economic Cooperation Agreement, which has been under negotiation since 2023.

The 11th round of negotiations took place in August, covering goods, services, digital trade, rules of origin, and – importantly – labour and environmental standards.

The EU deal suggests India is willing to engage seriously on tariff liberalisation. However, it remains to be seen whether that appetite will transfer to the newer issues increasingly central to global trade, notably those Australia is now trying to secure with Indian negotiators.

Chasing an Australia-EU deal

Australia should take heart from the EU’s success in building alternative trading relationships.

This should encourage negotiators still pursuing an EU–Australia free trade agreement, negotiations for which were renewed last June after collapsing in 2023.

These deals signal something important about the global trading system: countries are adapting to American protectionism not by becoming protectionist themselves, but by deepening partnerships with each other.

The world’s democracies are saying they want to trade, invest, and cooperate on rules-based terms.

The Conversation

Nathan Howard Gray receives funding from Department of Foreign Affairs and Trade.

Mandar Oak and Peter Draper do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. What the ‘mother of all deals’ between India and the EU means for global trade – https://theconversation.com/what-the-mother-of-all-deals-between-india-and-the-eu-means-for-global-trade-274515

Monumental ambitions: the history behind Trump’s triumphal arch

Source: The Conversation – Global Perspectives – By Garritt C. Van Dyk, Senior Lecturer in History, University of Waikato

Getty Images

Donald Trump took time out this week from dramatic events at home and abroad to reveal three new design concepts for his proposed “Independence Arch” in Washington DC.

All three renderings resemble the famous Arc de Triomphe in Paris, although one features gilded livery not unlike Trump’s chosen adornments to the Oval Office in the White House.

Commissioned in preparation for the 250th anniversary of the signing of the Declaration of Independence on July 4, the triumphal arch draws on a long history of celebrating military conquest, from Roman emperors to Napoleon Bonaparte.

As such, it aligns seamlessly with Trump’s foreign policy and his stated mission for the United States to control the western hemisphere – as he has dubbed it, the “Donroe Doctrine”.

But as many have been asking, while the design is a copy of an iconic monument, is a personal tribute necessarily the best way to mark the anniversary of America’s break with absolute rule and the British monarchy?

The ‘Arc de Trump’

When Trump first displayed models of the proposed arch last October, a reporter asked him who it was for. Trump replied “Me. It’s going to be beautiful.”

In a December update, the president said the new arch “will be like the one in Paris, but to be honest with you, it blows it away. It blows it away in every way.”

There was one exception, he noted: “The only thing they have is history […] I always say [it’s] the one thing you can’t compete with, but eventually we’ll have that history too.”

The president clearly believes his arch will be part of creating that history. “It’s the only city in the world that’s of great importance that doesn’t have a triumphal arch,” he said of Washington DC.

Set to be located near Arlington National Cemetery and the Lincoln Memorial, the site would put the new structure in a visual conversation with many of the most famous landmarks in the national capital.

This also aligns with other projects that will leave Trump’s mark on the physical fabric of Washington: changes to the White House last year that included paving over the famous Rose Garden, decorating the Oval Office in rococo gold, and demolishing the East Wing for a US$400 million ballroom extension.

The “Arc de Trump” (as it has been branded) is now the “top priority” for Vince Haley, the director of the Domestic Policy Council for the White House.

Triumph and design

The Arc de Triomphe in Paris, located at the top of the Champs-Élysées, was commissioned by Napoleon Bonaparte in 1806 to honour the French imperial army following his victory at the Battle of Austerlitz. It was not finished until 1836, under the reign of King Louis Philippe I.

Architects for the project, Jean-François Thérèse Chalgrin and Jean-Arnaud Raymond, drew on classical arches for inspiration, with Rome’s Arch of Titus (circa 85 CE) as the main source. That arch was built by Emperor Domitian (51–96 CE), a cruel and ostentatious tyrant who was popular with the people but battled with the Senate and limited its power to make laws.

Domitian commissioned the arch to commemorate the deification of his brother Titus, and his military victory crushing the rebellion in Judea.

Given its inspiration, Trump’s proposed arch doesn’t reference any uniquely American design features. But the neoclassical style recalls earlier monuments that also reference antiquity.

The Washington Monument, for example, is built in the form of an Egyptian obelisk. A four-sided pillar, it tapers as it rises and is topped with a pyramid, a tribute to the sun god Ra.

But it also incorporated an element that was meant to symbolise American technological advancement and innovation – a pyramid cap made of aluminium.

When the obelisk was completed in 1884, aluminium was rare because the process for refining it had not been perfected. The top of the monument was the largest piece of cast aluminium on the planet at that time.

‘Truth and sanity’

Trump’s triumphal arch is likely destined to join a long debate about the merits of public monuments and what they represent.

During the Black Lives Matter movement, many statues of historical figures were removed from public display because they were seen as celebrations of racism and imperialism.

Trump has since restored at least one Confederate statue toppled during that time, and his desire to add a new monument to himself should come as little surprise.

During the Jim Crow era of racial segregation and throughout the civil rights movement, there was a sharp spike in the number of monuments erected to Confederate soldiers and generals.

Just as tearing down those statues was a statement, so is the creation of a new memorial to promote Trump’s positive interpretation of the nation’s past. It is also consistent with his administration’s declared mission of “restoring truth and sanity to American history”.

Maybe the more immediate question is whether the Independence Arch can even be built by Independence Day on July 4, a tall order even for this president. As for its reception, history will have to be the judge.

The Conversation

Garritt C. Van Dyk has received funding from the Getty Research Institute.

ref. Monumental ambitions: the history behind Trump’s triumphal arch – https://theconversation.com/monumental-ambitions-the-history-behind-trumps-triumphal-arch-273567

ICE not only looks and acts like a paramilitary force – it is one, and that makes it harder to curb

Source: The Conversation – USA – By Erica De Bruin, Associate Professor of Government, Hamilton College

As the operations of Immigration and Customs Enforcement have intensified over the past year, politicians and journalists alike have begun referring to ICE as a “paramilitary force.”

Rep. John Mannion, a New York Democrat, called ICE “a personal paramilitary unit of the president.” Journalist Radley Balko, who wrote a book about how American police forces have been militarized, has argued that President Donald Trump was using the force “the way an authoritarian uses a paramilitary force, to carry out his own personal grudges, to inflict pain and violence, and discomfort on people that he sees as his political enemies.” And New York Times columnist Jamelle Bouie characterized ICE as a “virtual secret police” and “paramilitary enforcer of despotic rule.”

All this raises a couple of questions: What are paramilitaries? And is ICE one?

Defining paramilitaries

As a government professor who studies policing and state security forces, I believe it’s clear that ICE meets many but not all of the most salient definitions. It’s worth exploring what those are and how the administration’s use of ICE compares with the ways paramilitaries have been deployed in other countries.

The term paramilitary is commonly used in two ways. The first refers to highly militarized police forces, which are an official part of a nation’s security forces. They typically have access to military-grade weaponry and equipment, are highly centralized with a hierarchical command structure, and deploy in large formed units to carry out domestic policing.

These “paramilitary police,” such as the French Gendarmerie, India’s Central Reserve Police Force or Russia’s Internal Troops, are modeled on regular military forces.

The second definition denotes less formal and often more partisan armed groups that operate outside of the state’s regular security sector. Sometimes these groups, as with the United Self-Defense Forces of Colombia, emerge out of community self-defense efforts; in other cases, they are established by the government or receive government support, even though they lack official status. Political scientists also call these groups “pro-government militias” in order to convey both their political orientation in support of the government and less formal status as an irregular force.

Indian paramilitary personnel conduct a house-to-house search in Kashmir. (AP Photo/Mukhtar Khan)

They typically receive less training than regular state forces, if any. How well equipped they are can vary a great deal. Leaders may turn to these informal or unofficial paramilitaries because they are less expensive than regular forces, or because they can help them evade accountability for violent repression.

Many informal paramilitaries are engaged in regime maintenance, meaning they preserve the power of current rulers through repression of political opponents and the broader public. They may share partisan affiliations or ethnic ties with prominent political leaders or the incumbent political party and work in tandem to carry out political goals.

In Haiti, President François “Papa Doc” Duvalier’s Tonton Macoutes provided a prime example of this second type of paramilitary. After Duvalier survived a coup attempt in 1958, he established the Tonton Macoutes as a paramilitary counterweight to the regular military. Initially a ragtag, undisciplined but highly loyal force, it became the central instrument through which the Duvalier regime carried out political repression, surveilling, harassing, detaining, torturing and killing ordinary Haitians.

Is ICE a paramilitary?

The recent references to ICE in the U.S. as a “paramilitary force” are using the term in both senses, viewing the agency as both a militarized police force and tool for repression.

There is no question that ICE fits the definition of a paramilitary police force. It is a police force under the control of the federal government, through the Department of Homeland Security, and it is heavily militarized, having adopted the weaponry, organization, operational patterns and cultural markers of the regular military. Some other federal forces, such as Customs and Border Protection, or CBP, also fit this definition.

The data I have collected on state security forces show that approximately 30% of countries have paramilitary police forces at the federal or national level, while more than 80% have smaller militarized units akin to SWAT teams within otherwise civilian police.

The United States is nearly alone among established democracies in creating a new paramilitary police force in recent decades. Indeed, the creation of ICE in the U.S. following the terrorist attacks of Sept. 11, 2001, is one of just four instances I’ve found since 1960 where a democratic country created a new paramilitary police force, the others being Honduras, Brazil and Nigeria.

ICE agents on patrol near the scene of the fatal shooting of Renee Good in Minneapolis. (AP Photo/John Locher)

ICE and CBP also have some, though not all, of the characteristics of a paramilitary in the second sense of the term, referring to forces as repressive political agents. These forces are not informal; they are official agents of the state. However, their officers are less professional, receive less oversight and are operating in more overtly political ways than is typical of both regular military forces and local police in the United States.

The lack of professionalism predates the current administration. In 2014, for instance, CBP’s head of internal affairs described the lowering of standards for post-9/11 expansion as leading to the recruitment of thousands of officers “potentially unfit to carry a badge and gun.”

This problem has only been exacerbated by the rapid expansion undertaken by the Trump administration. ICE has added approximately 12,000 new recruits – more than doubling its size in less than a year – while substantially cutting the length of the training they receive.

ICE and CBP are not subject to the same constitutional restrictions that apply to other law enforcement agencies, such as the Fourth Amendment’s prohibition on unreasonable search and seizure; both have gained exemptions from oversight intended to hold officers accountable for excessive force. CBP regulations, for instance, allow it to search and seize people’s property without a warrant or the “probable cause” requirement imposed on other forces within 100 miles, or about 161 kilometers, of the border.

In terms of partisan affiliations, Trump has cultivated immigration security forces as political allies, an effort that appears to have been successful. In 2016, the union that represents ICE officers endorsed Trump’s campaign with support from more than 95% of its voting members. Today, ICE recruitment efforts increasingly rely on far-right messaging to appeal to political supporters.

Both ICE and CBP have been deployed against political opponents in nonimmigration contexts, including Black Lives Matter protests in Washington, D.C., and Portland, Oregon, in 2020. They have also gathered data, according to political scientist Elizabeth F. Cohen, to “surveil citizens’ political beliefs and activities – including protest actions they have taken on issues as far afield as gun control – in addition to immigrants’ rights.”

In these ways, ICE and CBP do bear some resemblance to the informal paramilitaries used in many countries to carry out political repression along partisan and ethnic lines, even though they are official agents of the state.

Why this matters

An extensive body of research shows that more militarized forms of policing are associated with higher rates of police violence and rights violations, without reducing crime or improving officer safety.

Studies have also found that more militarized police forces are harder to reform than less-militarized law enforcement agencies. The use of such forces can also create tensions with both the regular military and civilian police, as currently appears to be happening with ICE in Minneapolis.

The ways in which federal immigration forces in the United States resemble informal paramilitaries in other countries – operating with less effective oversight, less competent recruits and increasingly entrenched partisan identity – make all these issues more intractable. Which is why, I believe, many commentators have surfaced the term paramilitary and are using it as a warning.

The Conversation

Erica De Bruin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. ICE not only looks and acts like a paramilitary force – it is one, and that makes it harder to curb – https://theconversation.com/ice-not-only-looks-and-acts-like-a-paramilitary-force-it-is-one-and-that-makes-it-harder-to-curb-274580

Is time a fundamental part of reality? A quiet revolution in physics suggests not

Source: The Conversation – UK – By Florian Neukart, Assistant professor of Physics, Leiden University

Pack-Shot/Shutterstock

Time feels like the most basic feature of reality. Seconds tick, days pass and everything from planetary motion to human memory seems to unfold along a single, irreversible direction. We are born and we die, in exactly that order. We plan our lives around time, measure it obsessively and experience it as an unbroken flow from past to future. It feels so obvious that time moves forward that questioning it can seem almost pointless.

And yet, for more than a century, physics has struggled to say what time actually is. This struggle is not philosophical nitpicking. It sits at the heart of some of the deepest problems in science.

Modern physics relies on different, but equally important, frameworks. One is Albert Einstein’s theory of general relativity, which describes the gravity and motion of large objects such as planets. Another is quantum mechanics, which rules the microcosmos of atoms and particles. And on an even larger scale, the standard model of cosmology describes the birth and evolution of the universe as a whole. All rely on time, yet they treat it in incompatible ways.

When physicists try to combine these theories into a single framework, time often behaves in unexpected and troubling ways. Sometimes it stretches. Sometimes it slows. Sometimes it disappears entirely.


Einstein’s theory of relativity was, in fact, the first major blow to our everyday intuition about time. Time, Einstein showed, is not universal. It runs at different speeds depending on gravity and motion. Two observers moving relative to one another will disagree about which events happened at the same time. Time became something elastic, woven together with space into a four-dimensional fabric called spacetime.

Quantum mechanics made things even stranger. In quantum theory, time is not something the theory explains. It is simply assumed. The equations of quantum mechanics describe how systems evolve with respect to time, but time itself remains an external parameter, a background clock that sits outside the theory.




Read more:
Quantum mechanics: how the future might influence the past


This mismatch becomes acute when physicists try to describe gravity at the quantum level, which is crucial for developing the much-coveted theory of everything, a framework that would link the main fundamental theories. But in many attempts to create such a theory, time vanishes as a parameter from the fundamental equations altogether. The universe appears frozen, described by equations that make no reference to change.

This puzzle is known as the problem of time, and it remains one of the most persistent obstacles to a unified theory of physics. Despite enormous progress in cosmology and particle physics, we still lack a clear explanation for why time flows at all.

Now a relatively new approach to physics, building on a mathematical framework called information theory, developed by Claude Shannon in the 1940s, has started coming up with surprising answers.

Entropy and the arrow of time

When physicists try to explain the direction of time, they often turn to a concept called entropy. The second law of thermodynamics states that disorder tends to increase. A glass can fall and shatter into a mess, but the shards never spontaneously leap back together. This asymmetry between past and future is often identified with the arrow of time.

This idea has been enormously influential. It explains why many processes are irreversible, including why we remember the past but not the future. If the universe started in a state of low entropy, and is getting messier as it evolves, that appears to explain why time moves forward. But entropy does not fully solve the problem of time.

It is hard to undo a mess. (klevo/Shutterstock)

For one thing, the fundamental quantum mechanical equations of physics do not distinguish between past and future. The arrow of time emerges only when we consider large numbers of particles and statistical behaviour. This also raises a deeper question: why did the universe start in such a low-entropy state to begin with? Statistically, there are more ways for a universe to have high entropy than low entropy, just as there are more ways for a room to be messy than tidy. So why would it start in a state that is so improbable?
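The counting argument behind this can be written down using Boltzmann’s entropy formula, where W is the number of microscopic arrangements consistent with what we observe:

S = k_B \ln W

Because messy arrangements vastly outnumber tidy ones, high-entropy states are overwhelmingly more probable – which is what makes the universe’s low-entropy start so puzzling.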

The information revolution

Over the past few decades, a quiet but far-reaching revolution has taken place in physics. Information, once treated as an abstract bookkeeping tool used to track states or probabilities, has increasingly been recognised as a physical quantity in its own right, just like matter or radiation. While entropy measures how many microscopic states are possible, information measures how physical interactions limit and record those possibilities.
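Shannon’s measure makes this quantitative. The average information, in bits, carried by an outcome drawn from probabilities p_i is

H = -\sum_i p_i \log_2 p_i

A fair coin toss carries one bit; an outcome that is already certain carries none.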

This shift did not happen overnight. It emerged gradually, driven by puzzles at the intersection of thermodynamics, quantum mechanics and gravity, where treating information as merely mathematical began to produce contradictions.

One of the earliest cracks appeared in black hole physics. When Stephen Hawking showed that black holes emit thermal radiation, it raised a disturbing possibility: information about whatever falls into a black hole might be permanently lost as heat. That conclusion conflicted with quantum mechanics, which demands that the entirety of information be preserved.
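A standard result in this area, the Bekenstein–Hawking entropy, already hints at the link between information and geometry: a black hole’s entropy is proportional to the area A of its event horizon rather than its volume,

S_{BH} = \frac{k_B c^3 A}{4 G \hbar}

That area scaling is one reason physicists began treating information as something physical that spacetime must account for.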

Resolving this tension forced physicists to confront a deeper truth. Information is not optional. If we want a full description of the universe that includes quantum mechanics, information cannot simply disappear without undermining the foundations of physics. This realisation had profound consequences. It became clear that information has thermodynamic cost, that erasing it dissipates energy, and that storing it requires physical resources.
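The thermodynamic cost of erasure mentioned here is usually expressed as Landauer’s bound: wiping out a single bit of information in an environment at temperature T dissipates at least an energy

E \geq k_B T \ln 2

It is a tiny amount at everyday temperatures, but it is never zero.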

In parallel, surprising connections emerged between gravity and thermodynamics. It was shown that Einstein’s equations can be derived from thermodynamic principles that link spacetime geometry directly to entropy and information. In this view, gravity doesn’t behave exactly like a fundamental force.

Instead, gravity appears to be what physicists call “emergent” – a phenomenon describing something that’s greater than the sum of its parts, arising from more fundamental constituents. Take temperature. We can all feel it, but on a fundamental level, a single particle can’t have temperature. It’s not a fundamental feature. Instead it only emerges as a result of many molecules moving collectively.

Similarly, gravity can be described as an emergent phenomenon, arising from statistical processes. Some physicists have even suggested that gravity itself may emerge from information, reflecting how information is distributed, encoded and processed.

These ideas invite a radical shift in perspective. Instead of treating spacetime as primary, and information as something that lives inside it, information may be the more fundamental ingredient from which spacetime itself emerges. Building on this research, my colleagues and I have explored a framework in which spacetime itself acts as a storage medium for information – and it has important consequences for how we view time.

In this approach, spacetime is not perfectly smooth, as relativity suggests, but composed of discrete elements, each with a finite capacity to record quantum information from passing particles and fields. These elements are not bits in the digital sense, but physical carriers of quantum information, capable of retaining memory of past interactions.

A useful way to picture them is to think of spacetime like a material made of tiny, memory-bearing cells. Just as a crystal lattice can store defects that appeared earlier in time, these microscopic spacetime elements can retain traces of the interactions that have passed through them. They are not particles in the usual sense described by the standard model of particle physics, but a more fundamental layer of physical structure that particle physics operates on rather than explains.

This has an important implication. If spacetime records information, then its present state reflects not only what exists now, but everything that has happened before. Regions that have experienced more interactions carry a different imprint of information than regions that have experienced fewer. The universe, in this view, does not merely evolve according to timeless laws applied to changing states. It remembers.

A recording cosmos

This memory is not metaphorical. Every physical interaction leaves an informational trace. Although the basic equations of quantum mechanics can be run forwards or backwards in time, real interactions never happen in isolation. They inevitably involve surroundings, leak information outward and leave lasting records of what has occurred. Once this information has spread into the wider environment, recovering it would require undoing not just a single event, but every physical change it caused along the way. In practice, that is impossible.

This is why information cannot be erased and broken cups do not reassemble. But the implication runs deeper. Each interaction writes something permanent into the structure of the universe, whether at the scale of atoms colliding or galaxies forming.

Geometry and information turn out to be deeply connected in this view. In our work, we have shown that how spacetime curves depends not only on mass and energy, as Einstein taught us, but also on how quantum information, particularly entanglement, is distributed. Entanglement is a quantum process that mysteriously links particles in distant regions of space – it enables them to share information despite the distance. And these informational links contribute to the effective geometry experienced by matter and radiation.

From this perspective, spacetime geometry is not just a response to what exists at a given moment, but to what has happened. Regions that have recorded many interactions tend, on average, to behave as if they curve more strongly (and so have stronger gravity) than regions that have recorded fewer.

This reframing subtly changes the role of spacetime. Instead of being a neutral arena in which events unfold, spacetime becomes an active participant. It stores information, constrains future dynamics and shapes how new interactions can occur. This naturally raises a deeper question. If spacetime records information, could time emerge from this recording process rather than being assumed from the start?

Time arising from information

Recently, we extended this informational perspective to time itself. Rather than treating time as a fundamental background parameter, we showed that temporal order emerges from irreversible information imprinting. In this view, time is not something added to physics by hand. It arises because information is written in physical processes and, under the known laws of thermodynamics and quantum physics, cannot be globally unwritten again. The idea is simple but far-reaching.

Every interaction, such as two particles colliding, writes information into the universe. These imprints accumulate. Because they cannot be erased, they define a natural ordering of events. Earlier states are those with fewer informational records. Later states are those with more.

Quantum equations do not prefer a direction of time, but the process of information spreading does. Once information has been spread out, there is no physical path back to a state in which it was localised. Temporal order is therefore anchored in this irreversibility, not in the equations themselves.

Time, in this view, is not something that exists independently of physical processes. It is the cumulative record of what has happened. Each interaction adds a new entry, and the arrow of time reflects the fact that this record only grows.
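
As a purely illustrative toy (my own sketch, not the framework's actual mathematics), the following Python snippet treats each state as nothing more than the bundle of records written so far. Interactions only ever append records, and the order of events can then be recovered from the record counts alone, with no external clock.

    import random

    rng = random.Random(1)

    # Each "state" is simply the tuple of records written so far.
    history = [()]                                   # initial state: no records yet
    for step in range(5):
        record = (f"interaction-{step}", rng.random())
        history.append(history[-1] + (record,))      # interactions only ever append

    # Shuffle the snapshots, then recover their temporal order purely from
    # how many records each one carries; no external clock is needed.
    snapshots = history[:]
    rng.shuffle(snapshots)
    recovered = sorted(snapshots, key=len)

    print([len(state) for state in recovered])       # [0, 1, 2, 3, 4, 5]
    print(recovered == history)                      # True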

The future differs from the past because the universe contains more information about the past than it ever can about the future. This explains why time has a direction without relying on special, low-entropy initial conditions or purely statistical arguments. As long as interactions occur and information is irreversibly recorded, time advances.

Interestingly, this accumulated imprint of information may have observable consequences. At galactic scales, the residual information imprint behaves like an additional gravitational component, shaping how galaxies rotate without invoking new particles. Indeed, the unknown substance called dark matter was introduced to explain why galaxies and galaxy clusters rotate faster than their visible mass alone would allow.

In the informational picture, this extra gravitational pull does not come from invisible dark matter, but from the fact that spacetime itself has recorded a long history of interactions. Regions that have accumulated more informational imprints respond more strongly to motion and curvature, effectively boosting their gravity. Stars orbit faster not because more mass is present, but because the spacetime they move through carries a heavier informational memory of past interactions.
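
The mismatch that dark matter was introduced to explain can be sketched in a few lines of NumPy. The figures below are illustrative placeholders rather than real survey data: a point-like 5 × 10^10 solar masses of visible matter and a flat observed speed of 220 km/s. The point is simply that Newtonian gravity from the visible mass alone predicts orbital speeds falling off with radius, while measured rotation curves stay roughly flat.

    import numpy as np

    G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
    M_visible = 1.0e41                  # ~5e10 solar masses of visible matter (illustrative)
    radii_kpc = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
    radii_m = radii_kpc * 3.086e19      # kiloparsecs to metres

    # Newtonian prediction if only the centrally concentrated visible mass pulls on the stars
    v_predicted = np.sqrt(G * M_visible / radii_m) / 1e3    # km/s

    # Observed rotation curves stay roughly flat at large radii (illustrative value)
    v_observed = np.full_like(v_predicted, 220.0)           # km/s

    for r, vp, vo in zip(radii_kpc, v_predicted, v_observed):
        print(f"r = {r:4.0f} kpc   predicted {vp:6.1f} km/s   observed ~{vo:3.0f} km/s")

In the standard picture the gap at large radii is attributed to unseen mass; in the informational picture described above, it would instead reflect the accumulated record of past interactions.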

Image of the Andromeda Galaxy.
Galaxies rotate faster than they should.
Wirestock Creators/Shutterstock

From this viewpoint, dark matter, dark energy and the arrow of time may all arise from a single underlying process: the irreversible accumulation of information.

Testing time

But could we ever test this theory? Ideas about time are often accused of being philosophical rather than scientific. Because time is so deeply woven into how we describe change, it is easy to assume that any attempt to rethink it must remain abstract. An informational approach, however, makes concrete predictions and connects directly to systems we can observe, model and in some cases experimentally probe.

Black holes provide a natural testing ground, as they seem to suggest that information is erased. In the informational framework, this conflict is resolved by recognising that information is not destroyed but imprinted into spacetime before crossing the horizon. The black hole records it.

This has an important implication for time. As matter falls toward a black hole, interactions intensify and information imprinting accelerates. Time continues to advance locally because information continues to be written, even as classical notions of space and time break down near the horizon and appear to slow or freeze for distant observers.

As the black hole evaporates through Hawking radiation, the accumulated informational record does not vanish. Instead, it affects how radiation is emitted. The radiation should carry subtle signs that reflect the black hole’s history. In other words, the outgoing radiation is not perfectly random. Its structure is shaped by the information previously recorded in spacetime. Detecting such signs remains beyond current technology, but they provide a clear target for future theoretical and observational work.

The same principles can be explored in much smaller, controlled systems. In laboratory experiments with quantum computers, qubits (the quantum computer equivalent of bits) can be treated as finite-capacity information cells, just like the spacetime ones. Researchers have shown that even when the underlying quantum equations are reversible, the way information is written, spread and retrieved can generate an effective arrow of time in the lab. These experiments allow physicists to test how information storage limits affect reversibility, without needing cosmological or astrophysical systems.
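
A classical simulation already captures the flavour of those laboratory results. The NumPy sketch below is my own illustration, not one of the cited experiments: it evolves six qubits with a random global unitary, so the overall dynamics is exactly reversible, yet the entanglement entropy of a single "system" qubit jumps from zero to nearly one bit and only returns to zero if every step is undone perfectly.

    import numpy as np

    def haar_unitary(dim, rng):
        """Random unitary matrix via QR decomposition of a complex Gaussian matrix."""
        z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
        q, r = np.linalg.qr(z)
        phases = np.diag(r) / np.abs(np.diag(r))
        return q * phases                      # scale each column by a unit phase

    def entropy_first_qubit(state, n_qubits):
        """Von Neumann entropy (in bits) of the first qubit's reduced density matrix."""
        psi = state.reshape(2, 2 ** (n_qubits - 1))   # split: system qubit vs environment
        rho = psi @ psi.conj().T                      # partial trace over the environment
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    rng = np.random.default_rng(0)
    n = 6                                       # 1 "system" qubit plus 5 "environment" qubits
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                              # product state, no entanglement yet
    print("entropy before:", entropy_first_qubit(state, n))        # ~0 bits

    U = haar_unitary(2 ** n, rng)               # a reversible (unitary) global evolution
    state = U @ state
    print("entropy after :", entropy_first_qubit(state, n))        # close to 1 bit

    # The global evolution is exactly reversible, but locally the system qubit looks
    # thermalised: its information has spread into the environment. Recovering it
    # requires undoing the entire evolution, here by applying the inverse unitary.
    state = U.conj().T @ state
    print("entropy undone:", entropy_first_qubit(state, n))        # ~0 bits again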

Extensions of the same framework suggest that informational imprinting is not limited to gravity. It may play a role across all fundamental forces of nature, including electromagnetism and the nuclear forces. If this is correct, then time’s arrow should ultimately be traceable to how all interactions record information, not just gravitational ones. Testing this would involve looking for limits on reversibility or information recovery across different physical processes.

Taken together, these examples show that informational time is not an abstract reinterpretation. It links black holes, quantum experiments and fundamental interactions through a shared physical mechanism, one that can be explored, constrained and potentially falsified as our experimental reach continues to grow.

What time really is

Ideas about information do not replace relativity or quantum mechanics. In everyday conditions, informational time closely tracks the time measured by clocks. For most practical purposes, the familiar picture of time works extremely well. The difference appears in regimes where conventional descriptions struggle.

Near black hole horizons or during the earliest moments of the universe, the usual notion of time as a smooth, external coordinate becomes ambiguous. Informational time, by contrast, remains well defined as long as interactions occur and information is irreversibly recorded.

All this may leave you wondering what time really is. This shift reframes the longstanding debate. The question is no longer whether time must be assumed as a fundamental ingredient of the universe, but whether it reflects a deeper underlying process.

In this view, the arrow of time can emerge naturally from physical interactions that record information and cannot be undone. Time, then, is not a mysterious background parameter standing apart from physics. It is something the universe generates internally through its own dynamics. It is not ultimately a fundamental part of reality, but emerges from more basic constituents such as information.

Whether this framework turns out to be a final answer or a stepping stone remains to be seen. Like many ideas in fundamental physics, it will stand or fall based on how well it connects theory to observation. But it already suggests a striking change in perspective.

The universe does not simply exist in time. Time is something the universe continuously writes into itself.



The Conversation

Florian Neukart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Is time a fundamental part of reality? A quiet revolution in physics suggests not – https://theconversation.com/is-time-a-fundamental-part-of-reality-a-quiet-revolution-in-physics-suggests-not-273841

The plan to clean up England and Wales’ water industry ignores the sector’s biggest problem

Source: The Conversation – UK – By Kate Bayliss, Research Associate, Department of Economics, SOAS, University of London

Gala Oleksenko/Shutterstock

Government plans for the water sector in England and Wales, published earlier this month, were heralded as a “once-in-a-generation” opportunity to transform the system. However, despite the confidence of UK environment secretary Emma Reynolds, the long-awaited plans raise significant concerns. This is a reform agenda for water as a business – but not a vision for managing a vital public and environmental resource.

The fully privatised water system in England and Wales has been facing two (self-inflicted) crises in recent years. First, companies have failed to invest enough in infrastructure and have been pouring untreated sewage into rivers and seas.

Second, some companies (acting primarily in the interests of their shareholders) have hiked up debts while still paying out dividends. The largest company, Thames Water, has been teetering on the brink of financial collapse since 2023.

Occasional but serious interruptions of water supplies prove that all is not well in water service delivery, and there is growing recognition that the water system is unfit for purpose. We, as academics who helped set up a research body called the People’s Commission on the Water Sector, would have to agree.

But crucially, missing from the government’s white paper detailing the new policy is any reflection on the processes that led to this situation. It acknowledges that companies have behaved badly and that some water companies and their owners have prioritised short-term profits over long-term resilience and the environment.

This is a serious understatement. Underlying the outcomes of the past few years are the profit-seeking activities of private investors. Private companies cut back on costs and manipulated finances to benefit shareholders.

Water users and the environment suffer the consequences. And the regulators, Ofwat and the Environment Agency, were ill-prepared for the scale of the private sector’s extractive practices.

The bottom line is that the profit motive is incompatible with treating water in a way that is socially and environmentally equitable.

Now, the government is proposing a new single regulator with dedicated teams for each company, rather than the four institutions that have been in place until now. In addition, there are plans for better regulation and enforcement for pollution, and improvements to infrastructure (the white paper reveals how little is known about water company assets).

However, the language around regulation is confused and contradictory. On the one hand there is talk of being tough: Reynolds says there will be “nowhere to hide” for errant water companies. And there could be criminal proceedings against directors, who may also be deprived of bonus payments.

But on the other, the language is remarkably accommodating in its approach to the firms that have put the whole system in jeopardy.

Those that were behind the sewage crises and the perilous state of water company finances are to be helped to improve through a “performance-improvement regime”. Considerable attention is devoted to creating an attractive climate for investors, where returns will be stable and predictable. This, despite the fact that recent unpredictability was largely due to the activities of private companies.

Power and politics

If water in England and Wales remains in private hands, the unresolvable tension between the drive for profits alongside controls to protect consumers and the environment will persist. The demands of capital tend to prevail, with considerable government attention devoted to ensuring that the sector is attractive to investors.

As an example, the government claims that the next five years will see £104 billion of private investment. But this ultimately is funded by the planned 36% rise in bills (plus inflation). And a fifth of this (£22 billion) is set aside for the costs of capital, to cover interest payments and dividends.

The focus on regulatory and management measures obscures issues of power and politics in water governance. Water is supplied by companies whose shareholders have immense political power.

Private equity investor BlackRock, the biggest asset manager in the world, has stakes in three water companies – Severn Trent, United Utilities and South West Water (via Pennon). Keir Starmer, the prime minister, entertained BlackRock’s CEO in November 2024 at a meeting where an overhaul of regulation was reportedly promised.

And Hong Kong-based CKI, once a contender to take over Thames Water, is the majority owner of Northumbrian Water. It also has stakes in Britain’s gas, electricity and rail networks, as well as owning Superdrug and the discount store, Savers.

A similar story is told in other companies. These are global behemoths that have influence and huge resources, and as such may seek to shape regulation in their own interests.

Parisians drinking from a water fountain bearing the branding of the city’s public water company, Eau de Paris.
Paris has got it right.
Oliverouge 3/Shutterstock

The system in England and Wales is an outlier. No other country has copied this extreme privatised model. In fact, many have taken privatised water back into public hands. In Paris, the public water operator Eau de Paris is an award-winning example of transparency, accountability and integrity in public service.

It demonstrates that it is possible to create public services that are fair, sustainable and resilient. Key to this process has been the vision of water as a vital common good rather than a commodity.

The government’s plans will patch up the water system, particularly with the boost in revenue from bill payers. But the private sector has found unanticipated ways to maximise profits in the past and may well do so again. Rather than continually tweaking the failed private model, the only real route to operating water in the public interest is for it to be in public ownership.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. The plan to clean up England and Wales’ water industry ignores the sector’s biggest problem – https://theconversation.com/the-plan-to-clean-up-england-and-wales-water-industry-ignores-the-sectors-biggest-problem-274382

Another kind of student debt is entrenching inequality

Source: The Conversation – UK – By Cora Lingling Xu, Associate Professor in Sociology of Education, Durham University

Friends Stock/Shutterstock

In November 2012, during my first year as a PhD student, a 23-year-old medical student knocked on my door. Earlier that day, we had been discussing our ages in our shared kitchen. At 30, I had stayed silent, feeling a sharp sting of embarrassment next to my 20-something housemates.

But this student was determined to get an answer from me. He shoved his passport in my face and demanded to see mine. When I admitted my age, he laughed and said: “Wow, you’re so old.”

In that moment, I felt a deep sense of shame and failure. But after a decade of research tracking more than 100 young people, I want to tell my younger self: you weren’t failing. You simply hadn’t inherited the same amount of time as your peer.

My work with students in China shows that social inequality isn’t just about money or status. It’s also about time inheritance.

I started my PhD at 30 only after spending five years working to clear my family’s debts and move my parents out of a house where sewage regularly flooded their floors. My housemate, whose father and grandfather were doctors and Cambridge alumni, had inherited “banked time” – a cushion of security that allowed him to glide straight to the academic starting line.

Banked time v borrowed time

To make sense of this, I distinguish between two kinds of time inheritance.

Some young people receive banked time. They start life with a “full tank”: parents who can afford to support them through unpaid internships, gap years, or an extra degree, and the freedom to change course or repeat a year without financial ruin. This creates a sense of temporal security that allows them to take measured risks, explore their interests, and wait for the best opportunities to arise. They have “slack” in the system that actually generates more time in the long run.

Others live on borrowed time. They start with an “empty tank,” already owing years of labour to their families before they even begin. Because their education often relies on the extreme sacrifices of parents or the missed opportunities of siblings, these students carry a heavy debt-paying mentality.

A delay in earning feels dangerous because it isn’t just a personal setback; it is a failure to repay a moral and economic debt to those who supported them. This pressure works in two punishing ways.

Some make “self-sabotaging” choices by picking lower-tier degrees or precarious jobs just because they offer immediate income. Others find their education takes far longer as they are forced to pause their studies to work and save, trapped in a cycle of paying off “time interest” before they can finally begin their own lives.

Take Jiao, a brilliant student from a poor rural family in China. He scored high enough to enter one of the country’s top two universities: Peking or Tsinghua, the equivalent of Oxford or Cambridge. Yet he chose a second-tier university.

He felt he could not afford the “time cost” of the mandatory military training that was required at the elite universities at the time he was applying. This would have delayed his ability to earn money and support his parents. On paper, this looks like a self-sabotaging decision. In reality, it was a survival strategy shaped by time poverty: he simply did not have months to spare.

In contrast, Yi, born into a comfortable Beijing family, dropped out of university after just one year because she didn’t like the teaching. She didn’t see this as a failure, but as “cutting her losses”. With her parents’ backing, she quickly applied to an elite university in Australia. Yi had inherited banked time, which gave her the security to try again.

Both students were capable. What differed was how much time they could afford to lose.

Lost learning

Although my research focuses on China, these temporal mechanisms are not culturally unique. They show up in different forms in other countries.

We saw this during post-pandemic debates about “lost learning”. In the UK, for example, tutoring programmes and extra school hours were offered as fixes. But these only work if pupils have the spare time to use them.

For those already caring for siblings or parents, working part-time or commuting long distances, the extra provision can become another burden: deepening, rather than reducing, time debt.

In universities, the cost-of-living crisis has pushed more students into long hours of paid work during term. They get through their degrees, but at a price: less time to build networks, take internships or simply think about their next steps.

Rigid career “windows” also matter. Age-limited grants, early-career schemes that expire a few years after graduation and expectations of a seamless CV all act as a time tax on those who took longer to reach the starting line. They might have been caring for relatives, changing country, or working to stay afloat.

Making education fairer means being aware of this time disparity. This could mean designing catch-up and tutoring schemes around the actual schedules of working and caring students, not an idealised timetable.

Within academia, extending age and career-stage limits on scholarships, fellowships and early-career posts would mean that those who started “late” are not permanently penalised. And more recognition of the burden of unpaid care and emotional labour in both universities and workplaces would be a valuable step.

Ultimately, doing well in education is not just about how we spend our time. It is about who is allowed to have time in the first place, and who is quietly starting the race already in debt.

The Conversation

Dr Cora Lingling Xu receives funding from the Cambridge International Trust, the Sociological Review Foundation, the ESRC Social Science Festival, the British Academy and various grants from Queens’ College Cambridge, Cambridge University, Keele University and Durham University.

ref. Another kind of student debt is entrenching inequality – https://theconversation.com/another-kind-of-student-debt-is-entrenching-inequality-274142