Source: The Conversation – UK – By Dan Baumgardt, Senior Lecturer, School of Psychology and Neuroscience, University of Bristol
The latest absurdist offering from Yorgos Lanthimos, director of The Favourite and Poor Things, hits cinemas this week, and Bugonia promises to be another strange and rollicking masterpiece of complete, unmissable chaos.
Lanthimos’s muse Emma Stone and Jesse Plemons reunite in this darkly comic tale about a pharmaceutical CEO (Stone) kidnapped by conspiracy theorists. Believing she is an extraterrestrial intent on destroying Earth, they imprison her in an effort to save humanity.
The film is a remake of Save the Green Planet!, the 2003 South Korean cult classic. Beneath its surreal surface lies a fascinating question: why do some people genuinely believe in aliens – not as fiction, but as fact?
In psychiatry, a delusion is defined as a fixed, false belief. It is false because it is factually incorrect, and fixed because it is unshakeable and resists all evidence to the contrary. However irrational it appears to others, it feels entirely true to the person experiencing it.
Delusions often coexist with hallucinations, in which people see figures, hear voices or sense a presence that is not really there.
In the modern era, alien delusions take many forms. Some believe their bodies are controlled by extraterrestrials or that aliens are manipulating their thoughts. Others develop persecutory beliefs, convinced that aliens are trying to harm them or have implanted tracking devices in their bodies.
Some even experience identity delusions, believing they are aliens themselves or have been chosen for a special mission. Grandiose delusions involve exaggerated beliefs in one’s status, importance or power.
Such symptoms are most often seen in psychotic disorders including schizophrenia, though they can also occur in bipolar disorder or as a result of substance misuse, particularly stimulants or hallucinogens such as cocaine, amphetamines or LSD.
A brief history of alien beliefs
Today, alien delusions draw on decades of popular culture, from The X-Files and Prometheus to District 9 and ET. But what about the times before flying saucers and abduction stories filled our screens?
As far back as the middle ages, people described experiences that might now be considered delusional. Religious belief dominated, so visions of angels and devils provided the language of control and persecution. During the witchcraft panics, people claimed to be tormented or possessed by witches and demons.
As science and technology advanced, so did the content of delusions. In the early 20th century, writers such as HG Wells helped popularise the idea of intelligent life beyond Earth through works like The War of the Worlds, a story about a Martian invasion that captured both public imagination and anxiety about the unknown.
With the rise of radio, psychiatrists began recording delusions involving radio waves, in which patients believed their thoughts were being transmitted or received through the air. As technology evolved, so did the fears: people began reporting delusions of technical or alien control, convinced that X-rays, lasers or even the internet were influencing their minds.
In July 1947, debris recovered from a ranch near Roswell, New Mexico, was initially claimed to be from a “flying disc” before being reidentified by the US military as a weather balloon. The contradictory reports ignited decades of speculation about government cover-ups and alien visitation, embedding UFO imagery deep in the popular imagination. After Roswell, UFOs became a cultural fixture – and soon, a clinical one.
Psychiatrists soon encountered patients whose delusions mirrored these stories of flying saucers and alien abductions. Over time, such beliefs evolved alongside new technologies and social anxieties, from government surveillance to nanotechnology and artificial intelligence. The motifs, however, remain strikingly consistent: possession, control, abduction. The vocabulary changes, but the psychology endures.
Part of the “normal” brain?
While delusions are fixed and distressing, other alien experiences are not necessarily pathological. Many people report seeing unexplained lights, shapes or figures, often during the hazy transitions between wakefulness and sleep. Others interpret these sensations within cultural, religious or recreational contexts as forms of cosmic contact. Such fleeting experiences are surprisingly common and usually harmless.
So why does the mind reach for alien imagery when constructing delusions? The brain may simply use the symbols at hand – stories, myths, films – to make sense of fear or confusion. In that way, delusion is not so much nonsense as meaning-making gone astray.
This brings us back to Bugonia.
The film’s title comes from the Greek word bougonia, meaning “ox birth”. It refers to an ancient Mediterranean myth in which dead animals were believed to give rise to swarms of bees – a metaphor for how life, or meaning, can emerge from decay.
Lanthimos takes that idea both literally and symbolically. In Bugonia, delusion and revelation, horror and comedy, all blur into one. Stone and Plemons deliver outstanding performances, with Stone in particular chasing a deserved third Oscar.
Beyond its absurdity, Bugonia leaves a quietly unsettling thought: that the distance between imagination and “madness” is far thinner than we’d like to believe – and that perhaps every delusion begins as the mind’s attempt to create order from chaos.
Dan Baumgardt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
China recently announced that it was putting new controls on the export of rare earth elements, sparking a new round in the country’s ongoing trade war with the US.
Donald Trump responded by threatening to ramp up tariffs on Chinese goods by a further 100%. This will all be under discussion when China’s president Xi Jinping and Trump meet on October 30 at the Asia-Pacific Economic Cooperation (APEC) summit in South Korea.
China mines 70% and refines 92% of these increasingly important metals, and manufactures 98% of the world’s rare earth magnets used in EVs, electronics, medical devices and other clean tech. In recent years, these essential minerals have become a crucial part of China’s economic agenda as it tries to focus on “high quality development” in advanced and green technology.
The recent announcement from Beijing has raised concerns about global access to these essential minerals. If the supply of rare earths available to the outside world diminishes, the cost of manufacturing green tech would rise and drive up prices worldwide. If there is anything that would stall the development of the green economy, this could be it.
In response to the announcement, Trump initially suggested he might cancel an upcoming meeting with Chinese president Xi. However, the meeting now looks set to go ahead, and access to rare earths is likely to be high on the agenda.
Trump had also announced that he was considering a ban on exporting to China any products made with US software, such as laptops, jet engines and industrial equipment. This might reduce Beijing’s ability to design essential components for AI chips, hampering its bid for dominance in clean tech.
Prior to Trump’s latest threats, electric vehicles coming from China had already been hit by a 100% US tariff, while import duties for solar cells and lithium batteries stood at 50% and 25% respectively.
But the result might have surprised Trump. As US-made goods are exempt from paying tariffs, Chinese firms have set up production sites in the US to circumvent them. Instead of helping domestic US companies, Trump’s policies have done the opposite.
For instance, the solar manufacturing capacity of Chinese firms based in the US has grown so large that it now accounts for 39% of the country’s solar panel output, versus only 24% from US firms.
But even if Chinese clean tech sales in the US were severely affected by the tariffs, most of China’s green tech is heading elsewhere.
Based on my estimations using data from the energy thinktank Ember, Chinese green tech exports globally in 2024 were valued at US$184.06 billion (£139 billion), while exports to the US stood at US$20.66 billion. The US market therefore accounted for only 11.2% of total Chinese green tech exports – a share that dipped to 7.8% between January and September 2025.

Compared to the EU (29.95%) and Asian markets (27.97%) in 2024, the US market appears relatively small. So higher tariffs would harm China’s economy, but the damage may not be as substantial as Trump might imagine. The EU’s plans to meet its climate targets, however, are heavily dependent on these Chinese exports.
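These market shares follow directly from the export values quoted above. A minimal sketch of the calculation, using only the dollar figures cited from the Ember data (the variable names are illustrative):

```python
# Rough check of the export-share figure quoted above
# (values in US$ billion, as cited from Ember data).
total_exports_2024 = 184.06   # Chinese green tech exports worldwide, 2024
exports_to_us_2024 = 20.66    # of which, exports to the US

us_share = exports_to_us_2024 / total_exports_2024 * 100
print(f"US share of Chinese green tech exports, 2024: {us_share:.1f}%")
# prints "US share of Chinese green tech exports, 2024: 11.2%"
```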
Problems for Beijing?
The US has already put restrictions on which technologies China can buy from the US. China can still manufacture electric vehicles, solar panels and wind turbines without US software. But without the most advanced technologies from the US, Chinese firms will have fewer options.
While there are indications that the tech gap between Washington and Beijing may be shrinking, the US still possesses some of the most advanced technologies that are crucial for green tech development. These include advanced semiconductors, which are needed to make AI chips.
Such components and machinery are essential to China’s claim to green leadership since they allow users to automate EVs, solar panels and wind turbines, while ensuring their efficiency and optimising energy use. Simply put, without the best semiconductors and the AI chips, China won’t be able to create world-leading clean tech.
China may have the metals, but without US chips and software its green economic momentum might stall – at least until China’s semiconductor and AI tech catches up with the US. Chinese economic progress and green leadership may depend on securing better trade deals, even if the country does still hold a massive advantage.
Chee Meng Tan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Plaid Cymru’s overwhelming victory in the recent Caerphilly Senedd byelection shattered over a century of political tradition. Lindsay Whittle took the seat with 15,691 votes. Labour, which had held the seat since it was created, came away with just 3,713 votes.
Reform came second to Plaid, with 12,113 votes. And while this was an impressive performance, the fact that it failed to win Caerphilly even after vast amounts of time and money spent on the campaign has led to speculation that tactical voting played a part in this byelection.
A big clue that tactical voting was at work in Caerphilly was the recorded turnout. Typically, byelections in Wales are low-key affairs: turnouts are low and incumbents generally win. Average turnout for a Senedd constituency vote has never topped 50%. In Caerphilly, turnout climbed from 44% in the 2021 election to 50.4% in this byelection.
And while local voters clearly backed Plaid Cymru for plenty of reasons, the extremely low vote count for other parties does suggest at least some lent their vote to Plaid to keep out Reform. The Conservative vote collapsed to fewer than 700 votes and the Lib Dems and Greens, so often the recipients of tactical votes themselves, each took just 1.5% of the votes in Caerphilly.
Anecdotes from the vote count support this. The BBC recounted “extraordinary stories” of habitual supporters of the Conservatives, a pro-union party, voting Plaid to block Reform.
The increased turnout and Plaid’s 27.4% swing both suggest a mobilisation, triggered by polling and a wider national narrative which persuasively contends that Reform is ahead of other parties. Does the result therefore imply that Reform can be beaten elsewhere if voters take the right approach to tactical voting?
Reform entered the Caerphilly race with no prior foothold in the constituency. The party mobilised heavily and, it had seemed, effectively. Nigel Farage and other senior Reform figures made multiple visits to the area to campaign for their candidate, Llŷr Powell. Pre-election polls, including one by Survation which had Reform leading Plaid by 42% to 38%, raised expectations of a breakthrough.
And it is true that Reform’s ultimate 36% vote share reflects its growing appeal among disaffected working-class voters. It did capitalise on the same anti-establishment sentiment that has seen the party top UK-wide polls for much of the past year.
Yet, the result also exposes Reform’s vulnerabilities. As with the Hamilton, Larkhall and Stonehouse byelection for the Scottish parliament earlier in the summer, Reform failed to convert intensive campaigning into victory.
The role and reach of tactical voting
Underneath the hype, Farage is unpopular. Polls suggest as many as 60% of voters are opposed to him being prime minister. That presents an opportunity for opponents to unite behind a more broadly acceptable candidate.
In this volatile political era, where voters show little loyalty to tradition, smaller parties like Plaid Cymru, the SNP, Greens and even Pro-Gaza independents could frame themselves as the “real alternative” to Reform. Depending on local dynamics, they could attempt to draw tactical support.
It should be noted, however, that tactical voting cuts both ways. While it denied Reform a victory in Caerphilly, the party could attract tactical support from Conservative voters eager to oust Labour governments.
In England, without equivalents to Plaid or the SNP to siphon anti-establishment sentiment, Reform may consolidate its grip on working-class disillusionment. This trend was evident in Labour’s collapse in the Runcorn and Helsby Westminster byelection in May 2025, which enabled Reform to take the seat.
In Caerphilly, Labour’s vote fell amid grievances including the slow pace of change to improve living standards, policy u-turns and a fatigue with Welsh Labour, which has been in power in the Senedd since its creation in 1999.
Such grievances are felt across the UK more broadly – with winter-fuel policy u-turns, and general dissatisfaction with how long Labour is taking to deliver on promises to improve living standards. Concern about immigration has also been used to punish Labour, both in regular voting-intention polls and at the ballot box in council byelections.
An anti-Reform majority does exist – and it has shown up in several contests, including in races Reform has ultimately won but on less than 50% of the vote. Harnessing this anti-Reform majority, however, requires a level of co-ordination rarely seen in the UK’s electoral history.
Unlike the 1997 anti-Conservative wave, there is no single opposition brand. Instead, the anti-Reform vote is split across Labour, Liberal Democrats, Greens, nationalists and independents – and, arguably, the Conservatives too.
In Caerphilly, we saw this fragmentation briefly turn into coalescence. This implies that a clear polling trigger, showing Reform ahead in a seat, can focus the minds of voters and drive tactical thinking. It also helped that these voters were offered a Plaid candidate with deep community roots and a strong, progressive message.
What may be harder to replicate in a general election is the media’s framing of a single local contest as extremely high stakes. Caerphilly drew unprecedented attention precisely because it was framed as a test case for Reform in Wales, which may explain the scale of the anti-Reform vote.
In a multi-polar UK, the anti-Reform majority is real – but not pro-any one party by default. Importantly, it is anti-populist, anti-incumbent and regionally variable. Nearly all of the mainstream parties on the centre ground and left wing of politics are claiming to be the real alternative to Reform.
Reform’s path to power lies in building a lead that is too large for tactical voting to overcome, or in electoral systems which reward vote share over seat efficiency. This is why it remains hopeful of success in May 2026 in Wales, where the election is being held under a proportional voting system.
As the UK heads towards the 2026 devolved elections and a likely 2029-30 general election, Caerphilly offers a blueprint for resistance to Reform’s national surge. It also offers a warning for the other parties: stopping Reform is not the same as winning.
Thomas Lockwood does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Seasonal affective disorder (SAD) is a condition that heightens depressive symptoms during the fall and winter months, while the “winter blues” refers to a milder, temporary dip in mood.
Lower light levels affect brain chemistry by reducing serotonin — a neurotransmitter that regulates mood, sleep and appetite — while keeping melatonin elevated during daylight hours, leading to sleepiness and fatigue.
The good news is that with intention and evidence-based practices, winter can become a season of meaning, connection and even joy. As a clinical social worker and mental health therapist, here are four approaches that research and my clinical practice suggest can make the winter months more liveable.
Research in behavioural psychology shows that structured activities, even simple ones, can boost motivation. Try scheduling weekly rituals like coffee with a friend, a library visit or a favourite TV show to function as anchors when energy dips.
Treat your own time with the same care you give others, and plan moments of quality time with yourself.
Another useful tool is “body doubling” — doing tasks in parallel or synchrony with someone else, either in person or virtually. This might mean watching the same movie from different locations, chatting on the phone while folding laundry or working together in a cafe. Shared routines foster accountability and connection.
When the temperature drops, it’s tempting to stay indoors. But even brief time outside in the cold offers real benefits.
Exposure to natural light, even on overcast days, helps regulate circadian rhythms, improves sleep and stabilizes mood. Aim to go outside for at least 10 minutes a day: a brisk walk, skating or simply standing outside can lift heaviness.
Try to reframe snow as an invitation rather than an obstacle. Activities can range from winter picnics, pine cone scavenger hunts or snow painting to more contemplative pursuits like birdwatching, photography or snow-shoeing. For adrenaline seekers, winter sports like snowboarding can also provide a thrill.
One way to cultivate joy is by finding activities that invite “flow” — a term researchers use to describe moments when we become fully immersed in an activity and everything else fades away.
Flow happens when challenge and skill are in perfect balance; when an activity is engaging but not so difficult that it overwhelms us. It trains the brain’s positive emotion circuits, strengthening pathways linked to attention, motivation and creativity. Activities that invite flow differ from person to person, and can range from puzzling or video games to cooking, crocheting, painting or poetry.
Joy is also collective. Shared laughter, body doubling or acts of hospitality remind us that joy grows stronger when practised in community. Even a potluck dinner, movie night or phone call can counter isolation, making joy a renewable resource generated with others.
Meditation is a technique for cultivating calm, such as deep breathing, while mindfulness is the broader act of staying present — for example, savouring the taste of your morning coffee. Both are proven to enhance focus, regulate emotions and reduce repetitive negative thoughts.
Anchoring these moments in familiar routines can help, such as by taking five deep breaths the moment your feet touch the floor in the morning, pausing after a workout or sitting quietly in your car before entering the house. Apps offering short meditation exercises, sleep stories and reminders can help build this habit as well.
For those living with others, brief daily check-ins, such as asking, “What were your highs and lows today?” encourage reflection and gratitude. Over time, these small rituals of breathing and reflection can help protect against emotional fatigue during the winter.
Winter as a season of practice
Rather than simply surviving winter, we can approach it as a season to learn, adapt and deepen resilience. Making time your ally, seeking wonder outdoors, cultivating joy as a skill and practising meditation and mindfulness in ways that feel personal are all ways to engage meaningfully with the season.
These strategies won’t erase the challenges of shorter days or colder weather, but research suggests they can help mitigate their impact on mood and well-being. By intentionally framing winter as a period of growth, we can change our mindsets to see winter as an opportunity for renewal.
The winter solstice offers a symbolic reminder of this potential: that darkness gives way to light. Celebrating the solstice by lighting candles, gathering in community or setting intentions for the months ahead can transform the darkest day of the year into one of connection, renewal and love for the season itself.
Gio Dolcecore does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This is the first in a two-part series.
For nearly four centuries, the world economy has been on a path of ever-greater integration that even two world wars could not totally derail. This long march of globalisation was powered by rapidly increasing levels of international trade and investment, coupled with vast movements of people across national borders and dramatic changes in transportation and communication technology.
According to economic historian J. Bradford DeLong, the value of the world economy (measured at fixed 1990 prices) rose from US$81.7 billion (£61.5 billion) in 1650, when this story begins, to US$70.3 trillion (£53 trillion) in 2020 – an 860-fold increase. The most intensive periods of growth corresponded to the two periods when global trade was rising fastest: first during the “long 19th century” between the end of the French revolution and start of the first world war, and then as trade liberalisation expanded after the second world war, from the 1950s up to the 2008 global financial crisis.
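DeLong’s two endpoint values imply the “860-fold” multiple directly. A one-line check, using only the dollar figures quoted above (both at fixed 1990 prices):

```python
# Check the growth multiple implied by DeLong's estimates (1990 prices).
world_economy_1650 = 81.7e9    # US$81.7 billion in 1650
world_economy_2020 = 70.3e12   # US$70.3 trillion in 2020

multiple = world_economy_2020 / world_economy_1650
print(f"Growth multiple, 1650-2020: {multiple:.0f}x")
# prints "Growth multiple, 1650-2020: 860x"
```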
Now, however, this grand project is on the retreat. Globalisation is not dead yet, but it is dying.
Is this a cause for celebration, or concern? And will the picture change again when Donald Trump and his tariffs of mass disruption leave the White House? As a longtime BBC economics correspondent who was based in Washington during the global financial crisis, I believe there are sound historical reasons to worry about our deglobalised future – even once Trump has left the building.
Trump’s tariffs have amplified the world’s economic problems, but he is not the root cause of them. Indeed, his approach reflects a truth that has been emerging for many decades but which previous US administrations – and other governments around the world – have been reluctant to admit: namely, the decline of the US as the world’s no.1 economic power and engine of world growth.
In each era of globalisation since the mid-17th century, a single country has sought to be the clear world leader – shaping the rules of the global economy for all. In each case, this hegemonic power had the military, political and financial power to enforce these rules – and to convince other countries that there was no preferable path to wealth and power.
But now, as the US under Trump slips into isolationism, there is no other power ready to take its place and carry the torch for the foreseeable future. Many people’s pick, China, faces too many economic challenges, including its lack of a truly international currency – and, as a one-party state, it does not possess the democratic mandate needed to gain acceptance as the world’s new dominant power.
While globalisation has always produced many losers as well as winners – from the slave trade of the 18th century to displaced factory workers in the American Midwest in the 20th century – history shows that a deglobalised world can be an even more dangerous and unstable place. The most recent example came during the interwar years, when the US refused to take up the mantle left by the decline of Britain as the 19th century’s hegemonic global power.
In the two decades from 1919, the world descended into economic and political chaos. Stock market crashes and global banking failures led to widespread unemployment and increasing political instability, creating the conditions for the rise of fascism. Global trade declined sharply as countries put up trade barriers and started self-defeating currency wars in the vain hope of boosting their exports. Instead, global growth ground to a halt.
A century on, our deglobalising world is vulnerable again. But to chart whether this means we are destined for a similarly chaotic and unstable future, we first need to explore the birth, growth and reasons behind the imminent demise of this extraordinary global project.
French model: mercantilism, money and war
By the mid-1600s, France had emerged as the strongest power in Europe – and it was the French who developed the first overarching theory of how the global economy could work in their favour. Nearly four centuries later, many aspects of “mercantilism” have been revived by Trump’s US playbook, which could be entitled How To Dominate the World Economy by Weakening Your Rivals.
France’s version of mercantilism was based on the idea that a country should put up trade barriers to limit how much other countries could sell to it, while boosting its own industries to ensure that more money (in the form of gold) came into the country than left it.
England and the Dutch Republic had already adopted some of these mercantilist policies, establishing colonies around the globe run by powerful monopolistic trading companies that aimed to challenge and weaken the Spanish empire, which had prospered on the gold and silver it seized in the Americas. In contrast to these “seaborne empires”, the much larger empires in the east such as China and India had the internal resources to generate their own revenue, meaning international trade – although widespread – was not critical to their prosperity.
But it was France which first systematically applied mercantilism across the whole of government policy – led by the powerful finance minister Jean-Baptiste Colbert (1661-1683), who had been granted unprecedented powers to strengthen the financial might of the French state by King Louis XIV. Colbert believed trade would boost the coffers of the state and strengthen France’s economy while weakening its rivals, stating:
It is simply, and solely, the absence or abundance of money within a state [which] makes the difference in its grandeur and power.
In Colbert’s view, trade was a zero-sum game. The more France could run a trade surplus with other countries, the more gold bullion it could accumulate for the government and the weaker its rivals would become if deprived of gold. Under Colbert, France pioneered protectionism, tripling its import tariffs to make foreign goods prohibitively expensive.
At the same time, he strengthened France’s domestic industries by providing subsidies and granting them monopolies. Colonies and government trading companies were established to ensure France could benefit from the highly lucrative trade in goods such as spices, sugar – and slaves.
Colbert oversaw the expansion of French industries into areas like lace and glass-making, importing skilled craftsmen from Italy and granting these new companies state monopolies. He invested heavily in infrastructure such as the Canal du Midi, and dramatically increased the size of France’s navy and merchant marine to challenge its British and Dutch rivals.
Global trade at this time was highly exploitative, involving the forced seizure of gold and other raw materials from newly discovered lands (as Spain had been doing with its conquests in the New World from the late 15th century). It also meant benefiting from the trade in humans, with huge profits as slaves were seized and sent to the Caribbean and other colonies to produce sugar and other crops.
In this era of mercantilism, trade wars often led to real wars, fought across the globe to control trade routes and seize colonies. Following Colbert’s reforms, France began a long struggle to challenge the overseas empires of its maritime rivals, while also engaging in wars of conquest in continental Europe.
France initially enjoyed success in the 17th century both on land and sea against the Dutch. But ultimately, its state-run French Indies company was no rival to the ruthless, commercially driven activities of the Dutch and British East India companies, which delivered enormous profits to their shareholders and revenues for their governments.
Indeed, the huge profits made by the Dutch from the Far Eastern spice trade explains why they had no hesitation in handing over their small North American colony of New Amsterdam, in return for expelling the British from a small toehold of one of their spice islands in what is now Indonesia. In 1664, that Dutch outpost was renamed New York.
After a century of conflict, Britain gradually gained ascendancy over France, conquering India and forcing its great rival to cede Canada in 1763 after the Seven Years war. France never succeeded in fully countering Britain’s naval strength. Resounding defeats by fleets led by Horatio Nelson in the early 19th century, coupled with Napoleon’s defeat at Waterloo by a coalition of European powers, marked the end of France’s time as Europe’s hegemonic power.
The battle of Trafalgar, off southwestern Spain in October 1805, was decisive in ending France’s era of dominance.
But while the French model of globalisation ultimately failed in its attempt to dominate the world economy, that has not prevented other countries – and now President Trump – from embracing its principles.
France found that tariffs alone could not sufficiently fund its wars nor boost its industries. Its broad version of mercantilism led to endless wars that spread around the globe, as countries retaliated both economically and militarily and tried to seize territories.
More than two centuries later, there is an uncomfortable parallel with what the results of Trump’s endless tariff wars might bring, both in terms of ongoing conflict and the organisation of rival trade blocs. It also shows that more protectionism, as proposed by Trump, will not be enough to revive the US’s domestic industries.
British model: free trade and empire
The ideology of free trade was first spelled out by British economists Adam Smith and David Ricardo, the founding fathers of classical economics. They argued trade was not a zero-sum game, as Colbert had suggested, but that all countries could mutually benefit from it. According to Smith’s classic text, The Wealth of Nations (1776):
If a foreign country can supply us with a commodity cheaper than we ourselves can make, better buy it off them with some part of the produce of our own industry, employed in such a way that we have some advantages.
As the world’s first industrial nation, by the 1840s Britain had created an economic powerhouse based on the new technologies of steam power, the factory system, and railroads.
Smith and Ricardo argued against the creation of state monopolies to control trade, proposing minimal state intervention in industry. Ever since, Britain’s belief in the benefits of free trade has proved stronger and more long-lasting than any other major industrial power – more deeply embedded in both its politics and popular imagination.
This ironclad commitment was born out of a bitter political struggle in the 1840s between manufacturers and landowners over the protectionist Corn Laws. The landowners who had traditionally dominated British politics backed high tariffs, which benefited them but resulted in higher prices for staples like bread. The repeal of the Corn Laws in 1846 upended British politics, signalling a shift of power to the manufacturing classes – and ultimately to their working-class allies once they gained the right to vote.
An Anti-Corn Law League meeting held in London’s Exeter Hall in 1846. Wikimedia
In time, Britain’s advocacy of free trade unleashed the power of its manufacturing to dominate global markets. Free trade was framed as the way to raise living standards for the poor (the exact opposite of President Trump’s claim that it harms workers) and had strong working-class support. When the Conservatives floated the idea of abandoning free trade in the 1906 general election, they suffered a devastating defeat – the party’s worst until 2024.
As well as trade, a central element in Britain’s role as the new global hegemonic power was the rise of the City of London as the world’s leading financial centre. The key was Britain’s embrace of the gold standard which put its currency, the pound, at the heart of the new global economic order by linking its value to a fixed amount of gold, ensuring its value would not fluctuate. Thus the pound became the worldwide medium of exchange.
This encouraged the development of a strong banking sector, underpinned by the Bank of England as a credible and trustworthy “lender of last resort” in a financial crisis. The result was a huge boom in international investment, opening access to overseas markets for British companies and individual investors.
In the late 19th century, the City of London dominated global finance, investing in everything from Argentinian railways and Malaysian rubber plantations to South African gold mines. The gold standard became a talisman of Britain’s power to dominate the world economy.
The pillars of Britain’s global economic dominance were a highly efficient manufacturing sector, a commitment to free trade to ensure its industry had access to global markets, and a highly developed financial sector which invested capital around the world and reaped the benefits of global economic development. But Britain also did not hesitate to use force to open up foreign markets – for example, during the Opium Wars of the 1840s, when China was compelled to open its markets to the lucrative trade in opium from British-owned India.
By the end of the 19th century, the British empire incorporated one quarter of the world’s population, providing a source of cheap labour and secure raw materials as well as a large market for Britain’s manufactured goods. But that was still not enough for its avaricious leaders: Britain also made sure that local industries did not threaten its interests – by undermining the Indian textile industry, for example, and manipulating the Indian currency.
In reality, globalisation in this era was about domination of the world economy by a few rich European powers, meaning that much global economic development was curtailed to protect their interests. Under British rule between 1750 and 1900, India’s share of world industrial output declined from 25% to 2%.
But for those at the centre of Britain’s global formal and informal empire, such as the middle-class residents of London, this was a halcyon time – as economist John Maynard Keynes would later recall:
For middle and upper classes … life offered, at a low cost and with the least trouble, conveniences, comforts and amenities beyond the compass of the richest and most powerful monarchs of other ages. The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole Earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep.
US model: protectionism to neoliberalism
While Britain enjoyed its century of global dominance, the United States, from its foundation in 1776, embraced protectionism for longer than any other major western economy.
The introduction of tariffs to protect and subsidise emerging US industries had first been articulated in 1791 by the fledgling nation’s first treasury secretary, Alexander Hamilton – Caribbean immigrant, founding father and future subject of a record-breaking musical. The Whig party under Henry Clay and its successor, the Republican Party, were both strong supporters of this policy for most of the 19th century. Even as US industry grew to overshadow all others, its government maintained some of the highest tariff barriers in the world.
Founding father Alexander Hamilton on the front of a US$10 note from 1934. Wikimedia
Tariff rates rose to 50% in the 1890s with the backing of future president William McKinley, both to help industrialists and pay for generous pensions for 2 million civil war veterans and their dependants – a key part of the Republican electorate. It is no accident that President Trump has festooned the White House with pictures of Hamilton, Clay and McKinley – all supporters of protectionism and high tariffs.
In part, the US’s enduring resistance to free trade reflected its access to a seemingly limitless internal supply of raw materials, while its rapidly growing population, swelled by immigration, provided domestic markets that drove its growth even as tariffs kept out foreign competition.
By the late 19th century, the US was the world’s biggest steel producer, with its largest railroad network, and was moving rapidly to exploit the new technologies of the second industrial revolution – based on electricity, petrol engines and chemicals. Yet it was only after the second world war that the US assumed the role of global superpower – in part because it was the only country on either side of the war that had not suffered severe damage to its economy and infrastructure.
In the wake of global destruction in Europe and Asia, the US’s dominance was political, military and cultural, as well as financial – but the US vision of a globalised world had some important differences from its British predecessor.
The US took a much more universalist and rules-based approach, focusing on the creation of global organisations that would establish binding regulations – and open up global markets to unfettered American trade and investment. It also aimed to dominate the international economic order by replacing the pound sterling with the US dollar as the global medium of exchange.
Within a week of its entry into the second world war, the US laid plans to establish its global financial hegemony. The US treasury secretary, Henry Morgenthau, began work on establishing an “inter-allied stabilisation fund” – a playbook for post-war monetary arrangements which would enshrine the US dollar at its heart.
This led to the creation of the International Monetary Fund (IMF) and World Bank at the Bretton Woods conference in New Hampshire in 1944 – institutions dominated by the US, which encouraged other countries to adopt the same economic model of free trade and free enterprise. Having suffered the devastating effects of the Great Depression and war, the Allied nations – who were simultaneously meeting to establish the United Nations in an effort to secure future world peace – welcomed the US’s commitment to shape a new, more stable economic order.
How the 1944 Bretton Woods deal ensured the US dollar would be the world’s dominant currency. Video: Bloomberg TV.
As the world’s biggest and strongest economy, the US (initially) faced little resistance to its plan for a new international economic order in its own image. The motive was as much political as economic: the US wanted to provide economic benefits to ensure the loyalty of its key allies and counter the perceived threat of a communist takeover – in complete contrast to Trump’s mercantilist view today that all other countries are out to “rip off” the US, and that its own military might means it has no real need for allies.
After the war finally ended, the US dollar, now linked to gold at a fixed rate of $35 per ounce to guarantee its stability, assumed the role as the free world’s principal currency. It was both used for global trade transactions and held by foreign central banks as their currency reserves – giving the US economy an “exorbitant privilege”. The stable value of the dollar also made it easier for the US government to sell Treasury bonds to foreign investors, enabling it to more easily borrow money and run up trade deficits with other countries.
The conditions were set for an era of US political, financial and cultural dominance, which saw the rise of globally admired brands such as McDonald’s and Coca Cola, as well as a powerful US marketing arm in the form of Hollywood. Perhaps even more significantly, the relaxed, well-funded campuses of California would prove a perfect petri dish for the development of new computer technologies – backed initially by cold war military investment – which, decades later, would lead to the birth of the big-tech companies that dominate the tech landscape today.
The US view of globalisation was broader and more interventionist than the British model of free trade and empire. Rather than having a formal empire, it wanted to open up access to the entire world economy, which would provide global markets for American products and services.
The US believed global economic institutions were needed to police these rules. But as in the British case, the benefits of globalisation were still unevenly shared. While countries that embraced export-led growth such as Japan, Korea and Germany prospered, other resource-rich but capital-poor countries such as Nigeria only fell further behind.
From dream to despair
Though the legend of the American dream grew and grew, by the 1970s the US economy was coming under increasing pressure – in particular from German and Japanese rivals, who by then had recovered from the war and modernised their industries.
Troubled by these perceived threats and a growing trade deficit, in 1971 President Richard Nixon stunned the world by announcing that the US was going off the gold standard – forcing other countries to bear the cost of adjustment for the US balance of payments crisis by making them revalue their currencies. This had a profound effect on the global financial system: within a decade, most major currencies had abandoned fixed exchange rates for a new system of floating rates, effectively ending the 1944 Bretton Woods settlement.
US president Richard Nixon announces the US is leaving the gold standard on August 15 1971.
The end of fixed exchange rates opened the door to the “financialisation” of the global economy, vastly expanding global investment and lending – much of it by US financial firms. This gave succour to the burgeoning neoliberal movement that sought to further rewrite the rules of the financial world order. In the 1980s and ’90s, these policy prescriptions became known as the Washington consensus: a set of rules – including opening markets to foreign investment, deregulation and privatisation – that was imposed on developing economies in crisis, in return for them receiving support from US-led organisations like the World Bank and IMF.
In the US, meanwhile, the increasing reliance on the finance and hi-tech sectors increased levels of inequality and fostered resentment in large parts of American society. Both Republicans and Democrats embraced this new world order, shaping US policy to favour their hi-tech and financial allies. Indeed, it was the Democrats who played a key role in deregulating the financial sector in the 1990s.
Meanwhile, the decline of US manufacturing industries accelerated, as did the gap between the incomes of those in the hinterland, where manufacturing was based, and residents of the large metropolitan cities.
By 2023, the bottom 50% of US citizens received just 13% of total personal income, while the top 10% received almost half (47%). The wealth gap was even greater: the bottom 50% held only 6% of total wealth, while more than a third (36%) was held by the top 1% alone. Real incomes of the bottom 50% have barely grown since 1980.
The bottom half of the US population was suffering from a surge in “deaths of despair” – a term coined by economists Anne Case and Nobel laureate Angus Deaton to describe high mortality rates from drug overdoses, alcohol abuse and suicide among working-class Americans. Rising costs of housing, medical care and university education all contributed to widespread indebtedness and growing financial insecurity. By 2019, a study found that two-thirds of people who filed for bankruptcy cited medical issues as a key reason.
The decline in US manufacturing accelerated after China was admitted to the World Trade Organization in 2001, swelling America’s already soaring trade and budget deficits. Political and business elites hoped the move would open up the huge Chinese market to US goods and investment, but China’s rapid modernisation made its industry more competitive than its American rivals in many fields.
Ultimately, this era of intensive financialisation of the world economy created a series of regional and then global financial crises, damaging the economies of many Latin American and Asian economies. This culminated in the 2008 global financial crisis, precipitated by reckless lending by US financial institutions. The world economy took more than a decade to recover as countries wrestled with slower growth, lower productivity and less trade than before the crisis.
For those who chose to read it, the writing was on the wall for America’s era of global domination decades ago. But it would take Trump’s victory in the 2016 presidential election – a profound shock to many in the US “liberal establishment” – to make clear that the US was now on a very different course that would shake up the world.
Making a bad situation more dangerous
In my view, Trump is the first modern-day US president to fully understand the powerful alienation felt by many working-class American voters, who believed they were left out of the US’s immense post-war economic growth that so benefited the largely urban American middle classes. His strongest supporters have always been lower-middle-class voters from rural areas who are not college-educated.
Yet Trump’s key policies will ultimately do little for them. High tariffs to protect US jobs, the expulsion of millions of illegal immigrants, the dismantling of protections for minorities by opposing DEI (diversity, equity and inclusion) programmes, and drastic cuts to the size of government will have increasingly negative economic consequences, and are very unlikely to restore the US economy to its previous dominant position.
US president Donald Trump unveils his global tariff ‘hit list’ on April 3 2025. BBC News.
Long before he first became president, Trump hated the eye-watering US trade deficit (he’s a businessman, after all) – and believed that tariffs would be a key weapon for ensuring US economic dominance could be maintained. Another key part of his “America First” ideology was to repudiate the international agreements that were at the heart of the US’s postwar approach to globalisation.
In his first term, however, Trump (having not expected to win) was ill-prepared for power. But second time around, conservative thinktanks had spent years outlining detailed policies and identifying key personnel who could implement the radical U-turn in US economic policy.
Under Trump 2.0, we have seen a return to the mercantilist point of view reminiscent of France in the 17th and 18th centuries. His assertion that countries which ran a trade surplus with the US “were ripping us off” echoed the mercantilist belief that trade was a zero-sum game – rather than the 20th-century view, pioneered by the US, that globalisation brings benefits to all, no matter the precise balance of that trade.
Trump’s tax-and-tariff plans, which extend tax breaks for the very rich while squeezing the poor through benefit cuts and tariff-driven inflation, will increase inequality in the US.
At the same time, the passing of the One Big Beautiful Bill is predicted to add some US$3.5 trillion to US government debt – even after the Elon Musk-led “Department of Government Efficiency” cuts imposed on many Washington departments. This adds pressure to the key US Treasury bond market at the centre of the world financial system, and raises the cost of financing the huge US deficit while weakening its credit rating. Continuing these policies could threaten a default by the US, which would have devastating consequences for the entire global financial system.
For all the macho grandstanding from Trump and his supporters, his economic policies are a demonstration of American weakness, not strength. While I believe his highlighting of some of the ills of the US economy was overdue, the president is rapidly squandering the economic credibility and goodwill that the US built up in the postwar years, as well as its cultural and political hegemony. For people living in America and elsewhere, he is making a bad situation more dangerous – including for many of his most ardent supporters.
That said, even without Trump’s economic and societal disruptions, the end of the US era of hegemonic dominance would still have happened. Globalisation is not dead, but it is dying. The troubling question we all face now is what happens next.
This is the first of a two-part Insights long read on the rise and fall of globalisation. Read part two here: why the next global financial meltdown could be much worse with the US on the sidelines.
Steve Schifferes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – in French – By Nick Fuller, Clinical Trials Director, Department of Endocrinology, RPA Hospital, University of Sydney
Parents are often told that fruit is “bad” because it contains sugar, leaving them wondering how much they should allow their child to eat.
This message has been fuelled by the “sugar-free” movement, which demonises sugar by claiming it causes weight gain and diabetes. The movement promotes arbitrary lists of foods to avoid, which often include children’s favourites such as bananas and berries.
But like many claims made by the diet industry, this one is not backed by evidence.
Natural sugars and added sugars
Sugar itself is not harmful, but the type of sugar children consume can be.
The good news is that whole fruit contains natural sugars that are good for health and give children energy. Whole fruit is packed with the vitamins and minerals needed for good health, including vitamins A, C and E, magnesium, zinc and folate. All fruit is healthy – bananas, berries, mandarins, apples and mangoes, to name just a few.
The insoluble fibre in fruit skin supports regular bowel movements in children, while the soluble fibre in the flesh helps keep their cholesterol at a healthy level by absorbing “bad” cholesterol, reducing their long-term risk of stroke and heart disease.
Added sugars, which contribute calories but no nutritional value to children’s diets, are the “bad” sugars to avoid. They are found in the processed and ultra-processed foods children love, such as sweets, chocolates, cakes and fizzy drinks.
Added sugars also appear in seemingly healthy packaged foods, such as cereal bars, and hide under more than 60 different names on ingredient lists, making them hard to spot.
Sugar, weight and diabetes risk
There is no scientific evidence that sugar directly causes diabetes.
Type 1 diabetes is an autoimmune disease that can be neither prevented nor cured and has no link to sugar consumption. Type 2 diabetes is generally caused by excess weight, which stops the body working efficiently – not by eating sugar.
However, a diet high in added sugars – found in many processed and ultra-processed foods, such as packaged sweet and savoury snacks – can lead children to consume excess calories and gain unnecessary weight, which may increase their risk of developing type 2 diabetes in adulthood.
On the other hand, research shows that children who eat more fruit have less abdominal fat.
Research also shows that fruit can reduce the risk of type 2 diabetes. One study found that children who ate 1.5 serves of fruit a day had a 36% lower risk of developing the disease.
Many processed foods have little or no nutritional value, which is why dietary guidelines recommend limiting them.
Children whose diets are built around these foods are less likely to eat vegetables, fruit, wholegrains and lean meats, leaving their diets low in fibre and the other nutrients essential for growth and development.
Yet these “discretionary foods” make up one third of Australian children’s daily energy intake.
My advice? Give your children plenty of fruit
There is no need to limit how much whole fruit children eat: it is nutritious and can protect their health. It will also fill them up, reducing their demands for processed, packaged foods that are low in nutrients and high in calories.
Just limit juice and dried fruit, because juicing strips out fruit’s key benefit (its fibre), and drying removes fruit’s water content, making it easy to overeat.
Some countries recommend only two serves of fruit a day for children aged nine and over, 1.5 serves for children aged 4 to 8, one serve for children aged 2 to 3, and half a serve for children aged 1 to 2. But these guidelines are outdated and need to change.
We do need to reduce children’s sugar intake. But that should come from cutting back on processed foods containing added sugars, not on fruit.
Added sugars are not always easy to spot, so we should focus on reducing children’s consumption of processed and packaged foods, and teach them to reach for fruit – “nature’s treats” – to cut unhealthy sugars from their diets.
Professor Nick Fuller works for the University of Sydney and RPA Hospital. He has received external funding for projects relating to the treatment of overweight and obesity. He is the author and founder of the Interval Weight Loss program and the author of Healthy Parents, Healthy Kids, published by Penguin Books.
In October 2024, one year ago today, a DANA storm struck the Comunidad Valenciana with devastating force. The tragedy left numerous fatalities and hundreds of people injured, but there was a less visible and equally devastating aspect: the psychological impact on those affected. Countless residents saw their homes and neighbourhoods flooded and covered in mud while witnessing their neighbours and relatives suffer, unable to do anything to help them. In many cases they could not return to their homes, nor contact their relatives or the emergency services.
That disconnection, powerlessness and helplessness deeply marked the experience of many victims, compounded by a sense of abandonment: there was no early warning of the extreme risk, and those affected perceived the immediate handling of the tragedy as slow and clearly insufficient.
Weeks later, we – researchers at the Universidad Pontificia Comillas and the Universidad de Zaragoza – carried out a study evaluating 72 victims and 69 volunteers. We analysed symptoms of anxiety, depression and post-traumatic stress, as well as satisfaction with different sources of support. Participants were also given the option of sharing their experiences.
Although the scientific paper has not yet been published, the responses have allowed us to put figures and words to something that usually remains hidden: the emotional imprint of natural disasters.
Victims: the weight of what was lost
According to the study, 82% of victims showed moderate or severe symptoms of post-traumatic stress. This is the psychological mark left by living through or witnessing an extremely shocking or life-threatening event. It is not just a matter of unpleasant memories: it involves mentally reliving the experience through flashbacks or nightmares, remaining constantly on alert, being startled by stimuli that recall the event, and feeling that the danger is still present.
Indeed, many victims confess they will never forget it. Some have nightmares and memories that, they report, replay in their heads beyond their control. Others recount the events in such detail that they seem to be reliving them – for example, vividly recalling the deafening roar of the water and the images of the tragedy, such as the thick mud or the sight of other people suffering. They also describe the fear they feel every time it rains again.
Added to this are high levels of anxiety and depression: between 40% and 46% of respondents showed these symptoms. The impact was most severe among those who suffered physical harm, whose homes were damaged or lost, or who witnessed others suffering. Emotional experiences also played a role – fear of harm to themselves or their families, fear of dying, and feelings of abandonment and helplessness – aggravating the psychological scars of the tragedy.
These results highlight the need for victims to receive adequate, sustained psychological care, and for their suffering to be recognised as an essential part of recovery after a catastrophe.
In the first days after the catastrophe, volunteers were essential: they rescued, assisted and accompanied many of those affected, without the resources or training to intervene in an emergency of such magnitude. For this reason, we also assessed how being direct witnesses to the disaster had affected them psychologically.
The results show that exposure to scenes of destruction and suffering, physical exertion and strain left their mark: 68% showed significant symptoms of post-traumatic stress. The factors most associated with distress included taking part in rescues, seeing the dead, witnessing looting, and having loved ones affected or missing.
The public’s show of solidarity was admirable, and victims remember it with deep gratitude, as our results reflect. However, when the first response to an emergency depends on civilians without training or psychological support, it is to be expected that their mental health will suffer. It is therefore essential to offer specialised care and support to those who, with the best of intentions, became the first to help.
The other flood: the institutional response
Another key finding, in line with victims’ many protests and complaints, was the low satisfaction with the institutional response: barely 1.7 on a scale of 1 to 5, compared with the high levels of support perceived from family, friends, neighbours and volunteers (between 4.2 and 4.7). Nor is the extreme dissatisfaction with the warning of the tragedy surprising (1.2 out of 5): it arrived when the water had already reached catastrophic levels.
Dissatisfaction with institutional support and the perceived slowness in implementing subsequent measures were linked to poorer mental health among victims. Feeling abandoned by institutions in the face of tragedy not only erodes trust in the authorities; it also left people feeling unprotected against future emergencies, putting their medium- and long-term psychological health at serious risk.
What we can learn
In eight out of ten victims, the DANA has left a clear emotional mark: fear, difficulty carrying on with daily life, anxiety and sadness. Left unattended, these symptoms can become chronic and seriously affect quality of life.
Although psychological support initiatives have been launched, the scale of the impact means they need to be strengthened, moving towards a system of mental health care that is accessible, free and sustained over time.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have declared no relevant affiliations beyond the academic appointment cited above.
Generative artificial intelligence, which produces original content by drawing on large existing datasets, has been hailed as a revolutionary tool for lawyers. From drafting contracts to summarising case law, generative AI tools such as ChatGPT and Lexis+ AI promise speed and efficiency.
But the English courts are now seeing a darker side of generative AI. This includes fabricated cases, invented quotations, and misleading citations entering court documents.
As someone who studies how technology and the law interact, I argue it is vital that lawyers are taught how, and how not, to use generative AI. They need to avoid not only the risk of sanctions for breaking the rules, but also the emergence of a legal system that decides questions of justice on the basis of fabricated case law.
On 6 June 2025, the high court handed down a landmark judgment on two separate cases: Frederick Ayinde v The London Borough of Haringey and Hamad Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC.
The court reprimanded a pupil barrister (a trainee) and a solicitor after their submissions contained fictitious and inaccurate case law. The judges were clear: “freely available generative artificial intelligence tools… are not capable of conducting reliable legal research”.
As such, the use of unverified AI output can no longer be excused as error or oversight. Lawyers, junior or senior, are fully responsible for what they put before the court.
Hallucinated case law
AI “hallucinations” – the confident generation of non-existent or misattributed information – are well documented. Legal cases are no exception. Research has recently found that hallucination rates range from 58% to 88% in response to specific legal queries, often on precisely the sorts of issues lawyers are asked to resolve.
These errors have now leapt off the screen and into real legal proceedings. In Ayinde, the trainee barrister cited a case that did not exist at all. The fabricated case had been given a genuine case number taken from a completely different matter.
In Al-Haroun, a solicitor listed 45 cases provided by his client. Of these, 18 were fictitious and many others irrelevant. The judicial assistant is quoted in the judgment as saying: “The vast majority of the authorities are made up or misunderstood”.
These incidents highlight a profession facing a perfect storm: overstretched practitioners, increasingly powerful but unreliable AI tools, and courts no longer willing to treat errors as mishaps. For the junior legal profession, the consequences are stark.
Many are experimenting with AI out of necessity or curiosity. Without the training to spot hallucinations, though, new lawyers risk reputational damage before their careers have fully begun.
The high court took a disciplinary approach, placing responsibility squarely on the individual and their supervisors. This raises a pressing question. Are junior lawyers being punished too harshly for what is, at least in part, a training and supervision gap?
Education as prevention
Law schools have long taught research methods, ethics, and citation practice. What is new is the need to frame those same skills around generative AI.
While many law schools and universities are either exploring AI within their modules or creating new modules that look at AI, there is a broader shift towards considering how AI is changing the legal sector as a whole.
Students must learn why AI produces hallucinations, how to design prompts responsibly, how to verify outputs against authoritative databases and when using such tools may be inappropriate.
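That verification step can be partly automated. The sketch below is a hypothetical illustration, not a tool used in either case: it extracts neutral citations from a draft with a regular expression and flags any that the author has not personally confirmed against an authoritative source such as an official law-reports database. The citation pattern, the example cases and the `verified` set are all invented for illustration; in real practice each authority must be checked against a recognised service.

```python
import re

# Simplified pattern for UK neutral citations, e.g. "[2019] UKSC 12".
# Real citation formats are more varied than this hypothetical sketch covers.
NEUTRAL_CITATION = re.compile(r"\[(\d{4})\]\s+(UKSC|UKHL|EWCA|EWHC)\s+(\d+)")

def flag_unverified(submission_text, verified_citations):
    """Return citations in the draft that are absent from the verified set."""
    found = {" ".join(m.groups()) for m in NEUTRAL_CITATION.finditer(submission_text)}
    return sorted(found - verified_citations)

# Invented example draft containing one confirmed and one unchecked citation.
draft = (
    "As held in Smith v Jones [2019] UKSC 12 and "
    "Doe v Roe [2021] EWHC 999, the duty is strict."
)
# Citations the author has personally confirmed exist (imagined data).
verified = {"2019 UKSC 12"}

print(flag_unverified(draft, verified))  # -> ['2021 EWHC 999']
```

Crucially, a script like this can only flag what has not been checked; it cannot confirm that a case is real or that a quotation within it is genuine. That judgment remains the lawyer's responsibility.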
The high court’s insistence on responsibility is justified. The integrity of justice depends on accurate citations and honest advocacy. But the solution cannot rest on sanction alone.
How to use AI – and how not to use it – should be part of legal training. Lee Charlie/Shutterstock
If AI is part of legal practice, then AI training and literacy must be part of legal training. Regulators, professional bodies and universities share a collective duty to ensure that junior lawyers are not left to learn through error or in the most unforgiving of environments, the courtroom.
Similar issues have arisen among people without legal training. In a Manchester civil case, a litigant in person admitted relying on ChatGPT to generate legal authorities in support of their argument. The individual returned to court with four citations: one entirely fabricated and three with genuine case names but fictitious quotations attributed to them.
While the submissions appeared legitimate, closer inspection by opposing counsel revealed the paragraphs did not exist. The judge accepted the litigant had been inadvertently misled by the AI tool and imposed no penalty. This shows both the risks of unverified AI-generated content entering proceedings and the challenges for unrepresented parties in navigating court processes.
The message from Ayinde and Al-Haroun is simple but profound: using generative AI does not reduce a lawyer’s professional duty, it heightens it. For junior lawyers, that duty will arrive on day one. The challenge for legal educators is to prepare students for this reality, embedding AI verification, transparency and ethical reasoning into the curriculum.
Craig Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Cathrine Jansson-Boyd, Professor of Consumer Psychology, Anglia Ruskin University
The Kardashians are back with a new season of their reality series on Disney Plus.
As a researcher of consumer psychology, I have written about consumer neuroscience and how brands and media shape behaviour and self-perception. Watching The Kardashians through that lens reveals more than entertainment. It exposes how luxury and aspiration are woven into identity and sold back to us as self-worth.
The first episode is a materialistic feast. There are close-ups of Dior and Chanel handbags and belts, diamond jewellery and a house sign that reads: “Need money for Birkin.” The Kardashians drive luxury cars, wear designer sunglasses indoors and chat about their Saint Laurent outfits.
Even the camera lingers on the glittering shop windows of Rodeo Drive in Beverly Hills, home to some of the world’s most exclusive designer stores, though no one is actually shopping. If you haven’t seen it, you probably get the idea. In the Kardashian universe, the unspoken motto seems to be: “To have is to be.”
In their world, material possessions are woven into identity and presented as something to aspire to. But is it really all that glamorous?
Overconsumption can lower our wellbeing. Young people, in particular, often turn to excessive consumption to fit in, boost confidence or gain prestige. Teenagers who idolise others for their wealth or possessions are more likely to struggle with their sense of identity later in life.
Research shows that children and adolescents who place strong importance on material possessions often struggle to develop a clear sense of identity. Without learning who they are beyond what they own, they may find it harder to build lasting self-worth and life satisfaction.
Rather than helping us define who we are, possessions can get in the way. They can obscure or distort our sense of self, leading us to equate value with visibility. On top of that, materialism is linked to depression, likely because people often fail to achieve the identity and happiness they hope consumption will bring.
The Kardashian-Jenners have a massive following. Sisters Kylie, Kim and Khloé each have more than 300 million Instagram followers, a clear sign of their influence.
When we admire someone, we naturally compare ourselves to them, a process known as social comparison. It helps us judge where we stand, whether we are better or worse off than others. In this context, owning the same bag, car or outfit becomes a way to measure worth, since possessions often symbolise status and make the buyer feel closer to the celebrity, as if buying into their world.
Social comparison is known to drive materialism. It can start to feel like a competition to “catch up” with those we look up to through conspicuous consumption.
When we fail to keep up with the Kardashians, we may feel inadequate, even if we know deep down we were never in the same race. The Kardashian brand cleverly capitalises on this very idea.
The original series title, Keeping Up With the Kardashians, puns on the human instinct to compare and compete. This dynamic fuels not only the show’s popularity but also its beauty, fashion and lifestyle empires, which invite fans to buy into the brand both literally and symbolically.
You might think the solution is simply to choose better role models, but it is not that straightforward. People often compare themselves to others without realising it, automatically relying on social comparison when processing information about other people. This tendency does not stop at television.
Social media platforms intensify the same dynamic, giving us endless opportunities to measure our worth against curated snapshots of other people’s lives. Research from 2024 shows that heavy exposure to idealised social-media content is associated with increased materialism, lower life satisfaction and greater stress.
Another study found that engagement with influencer content featuring luxury goods can trigger upward social comparison – the tendency to compare ourselves with people we see as better off – leading to feelings of envy and a stronger urge to buy similar products in order to close that gap.
From influencer “unboxings,” where people film themselves opening luxury purchases, to filtered “day in the life” videos, social media users are constantly exposed to lifestyles that appear effortlessly perfect. When we scroll through feeds full of luxury, beauty and success, we can become more materialistic without ever consciously deciding to.
Seeing the extreme wealth of people like the Kardashians surrounded by luxury can spark feelings of envy and relative deprivation, leading to dissatisfaction with our own lives. That dissatisfaction can then trigger compulsive shopping as we try to soothe those uncomfortable emotions and project wealth ourselves.
Unsurprisingly, compulsive buying is closely tied to materialism. If you value possessions and feel envy toward others, you are far more likely to buy impulsively in an attempt to catch up.
Watching glamorous lifestyles where people seem to have it all can be fun escapism, but it also blurs the line between aspiration and insecurity. Shows like The Kardashians offer a fantasy of perfection that few can match, yet they invite us to measure ourselves against it.
In the end, the pursuit of luxury may leave us feeling emptier, not richer. After all, when having becomes being, it is worth asking what is left of the self once the shopping stops.
Cathrine Jansson-Boyd does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The next time you go wild swimming, whether in a lake, river or sea, you are probably sharing the water with one of your tiniest, yet closest relatives.
This near-family member is a microscopic, single-celled organism called a choanoflagellate. Scientists are still puzzled by how animals evolved from such simple beginnings. But a new paper describes the discovery of an important new clue.
Choanoflagellates, like most single-celled organisms, can only survive in water, where they live much of their lives as a single cell, no more complex than an amoeba.
They are nevertheless more closely related to our own kingdom of life than any other kind of organism is – choanoflagellates are cousins of the animals. Scientists are finding evidence of how the very first animals evolved, engraved in the structure and function of choanoflagellate cells and written in their DNA code.
Before we get to the new clue, it is worth thinking about what makes animals unique. As I describe in my new book, the simple answer is that, compared to most of the rest of life, animals have large and complicated bodies built from many cells. Your own body contains tens of trillions of cells and even a tiny fruit fly has close to a million.
Our oldest animal ancestor must have evolved a way for its many cells to stick together and form a super-organism from a host of cooperating cells.
The first animals must also have invented ways to produce the many different kinds of cells we have today: muscle cells, nerve cells, egg cells and sperm, to name a few. The division of labour among different kinds of cells is one aspect of arguably the biggest invention of the first animals: embryogenesis.
This is the earliest period of an animal’s life when, starting from a single fertilised egg cell, cell division begins to create all the cells that will make up the animal. These cells then each take on their own specific task and finally the many cells get carefully organised to make a functioning organism.
Scientists are hoping that studying choanoflagellates will help them learn how these skills first evolved.
Choanoflagellates don’t have large, complex bodies and they don’t have embryogenesis. They do, however, have a few animal-like qualities. Their cells, for example, can adopt a handful of different shapes with different roles.
Just as our cells can take the form of a nerve or a muscle, theirs can switch from the standard funnel and flagellum form to become amorphous, shape-shifting blobs like an amoeba.
Choanoflagellates can also make tiny multi-celled colonies. In the presence of certain species of bacteria their cells stick together to make little groups of cells called rosettes. The rosettes seem to form because they are better at catching the bacteria (which the choanoflagellates eat) than single cells are.
The rosettes can grow a little, but when they reach a size of about ten cells the bonds between the cells stretch and snap, splitting the rosettes down the middle to form two smaller rosettes of cooperating choanoflagellates.
The resemblance of these rosettes to the earliest stages of an animal embryo seems like a coincidence, however. Unlike an animal embryo, they are not destined to develop into anything else. The new study is about the growth of these rosettes.
In many animals, from mice to flies, there is a small group of genes that work together to control how big different organs get – how many cells they contain. Named Hippo, Warts and Yorkie, these genes sound like a mob of gangsters.
They are known collectively as the Hippo pathway. When genes in the Hippo pathway mutate in a growing fly or mouse embryo, the result is flies with huge eyes or newborn mice with monstrous livers.
In adult humans, when Hippo genes mutate, they can produce cancerous growths of uncontrollably dividing cells.
Choanoflagellates have a host of genes in common with animals. Although they don’t have organs like eyes or livers (or embryos or cancer), they do have the genes of the Hippo gang. The new paper first describes how the researchers developed a new technique that lets scientists target any gene in a choanoflagellate so that it can be deliberately mutated.
When the Warts gene was mutated, the rosettes of cells grew twice as large so that they ended up containing 20 cells or more. This uncontrolled growth is strikingly similar to what had been seen in previous studies of flies, mice and some human cancers.
The details of just how the Hippo genes control rosette size are not yet known. But the new work adds to the picture we are building of the single-celled precursor of the animals. It is another animal-like characteristic of the choanoflagellates.
If we travelled back in time to meet this tiny beast, we would never mistake it for a member of the animal kingdom. But we would nevertheless find in its repertoire a handful of skills that were going to prove useful in the evolution of animals.
Evolution took the materials that were available – the ability to make different kinds of cells; to stick those cells together; to regulate the number of cells and so on – and tinkered with them. From these beginnings, natural selection would then do its thing, resulting, 600 million years later, in the amazing diversity of the animal kingdom from jellyfish to flies, tapeworms, starfish and you.
This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
Max Telford does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.