New research reveals that almost half of Canadians believe in the paranormal — ghosts and all

Source: The Conversation – Canada – By Tony Silva, Associate Professor of Sociology, University of British Columbia

What would you say if you were told that paranormal activity exists? Well, nearly half of Canadians would agree.

What is the paranormal, exactly? It refers to phenomena that science cannot explain and are not part of a major religion in a particular society. In contrast, religious phenomena are part of an established doctrine. For example, in Canada, psychic abilities and Bigfoot or Sasquatch are considered paranormal, while angels and demons are associated with religion.

In the summer of 2025, we launched a survey of Canadian attitudes regarding paranormal beliefs in which participation was confidential. And for the first time in decades, we have nationally representative data on paranormal beliefs and encounters in Canada.

Although news outlets regularly publish stories about paranormal beliefs on Halloween, the results they discuss are usually based on convenience samples. Ours is the first study in 20 years to use randomly selected people from the Canadian population to ask these questions — meaning the results are representative.

And it turns out that almost one in two Canadians believe in at least one paranormal phenomenon, and one-quarter report encounters with spirits.

We asked about ghostly hauntings, alien visitations, psychic abilities, telekinesis, astrology and other unexplained phenomena. We also asked about cryptids — animals or creatures whose existence has been suggested but not (yet) proven by science — specific to Canada. They include creatures with roots in First Nations folklore like the large serpentine sea monster, the Cadborosaurus, off the B.C. coast and the Ogopogo in Lake Okanagan.

The believers, the skeptics and the in-between

Canada is one of the world’s most secular societies. Here, religion has little impact on the way people act or view the world.

How Canadians think about the paranormal, however, has been mostly unknown. It’s expensive to gather representative data in Canada and few social scientists think it’s important to study belief in the paranormal. The combination of these two factors has meant Canadian paranormal beliefs have gone unexamined for decades.

What we found is that Canadians have embraced the paranormal — to a point.

Almost half — 44 per cent — believe in at least one paranormal phenomenon. About one-third did not report belief in any paranormal phenomenon but did indicate neutrality about at least one. For example, several respondents did not believe in ghosts, but were on the fence about extraterrestrial visitations.

A graph shows how many Canadians believe in paranormal activity.
Many non-probability samples of Canadians have been surveyed over the last few years, but unlike ours, those surveys tell us little because they did not use random sampling to recruit respondents. This graph shows how many Canadians believe, are neutral or don’t believe in the existence of paranormal activity.
(Sophia Dimitrakopoulos), CC BY-ND

Only about one-quarter said they did not believe in any of the 10 phenomena we asked about. The percentage of firm non-believers is similar to the 28 per cent figure in the United States and the United Kingdom. Belief varied by specific phenomenon. People were most likely to believe in ghostly hauntings.

A graph showing the types of paranormal activity that people are most likely to believe in.
Respondents answered the authors’ survey on a granular level, indicating whether they neither agree nor disagree, somewhat agree or strongly agree that each of the 10 paranormal phenomena exists.
(Sophia Dimitrakopoulos), CC BY-ND

Overall, it is more common for Canadians to believe in at least one paranormal phenomenon than to not believe in any.

Who is most likely to believe?

Patterns of belief vary somewhat by demographic group.

Women are more likely than men to believe in ghosts and psychics, reflecting women’s greater openness to phenomena with a spiritual dimension.

People with bachelor’s degrees or higher are less likely to believe in most paranormal phenomena. There are few racial or ethnic differences.

Interestingly, people aged 19-29 are less likely to believe in many paranormal phenomena than those aged 30-44 or 45-64. These findings suggest that young Canadians tend to opt out of any non-scientific belief system, whether religious or paranormal.

Few differences by region or language exist, though Francophones are less likely to believe in Sasquatch than Anglophones are.

Paranormal experiences in Canada

About one-quarter of Canadians claim to have heard, seen or felt a ghost or spirit. Some experiences were connected to religion, such as feeling the Christian Holy Spirit.

More often, experiences were associated with the death of a loved one and were personally meaningful. As one participant explained: “Soon after my mother’s death, I woke up suddenly and she was standing beside my bed. She smiled at me and faded away. I was comforted.”

Others reported spooky encounters associated with a place. A different participant wrote: “I was managing a motel and saw a ghostly man walking along the upper balcony. I asked the locals, and they said on the property that the motel was on, there was a house that burned down — and he lived in the house!”

Cryptid sightings are less common.

“I was operating a high-clearance sprayer, in a 1,300-acre field. I sat about 10 feet in the air in the cab on this machine,” one participant said. “I came around the corner of a bluff and saw a blurry, bipedal creature. It was furry, had a long snout and long arms, and in an instant turned into a moose. I have no idea to this day what that was.”

What our beliefs reveal

Our goal is not to prove or disprove any experience or belief, but to analyze what they mean for individuals and for Canada.

And to that end, our survey showed us that while many Canadians have replaced or supplemented religious belief with paranormal belief, most trust science. Belief in the paranormal or religion does not mean Canadians reject science, but rather that they believe some phenomena cannot yet be explained by science.

While the paranormal is fun — or creepy — to think about around Halloween, it is also part of the everyday belief system of many Canadians.

The Conversation

Tony Silva (as co-applicant) received funding from the Social Sciences and Humanities Research Council of Canada for the first survey wave of this project, which focused on attitudes about politics and decarbonisation. No grant or taxpayer funds were used for the second survey wave, which included questions about paranormal beliefs.

Emily Huddart received funding from the Social Sciences and Humanities Research Council of Canada to support an earlier wave of this project (with Tony Silva).

ref. New research reveals that almost half of Canadians believe in the paranormal — ghosts and all – https://theconversation.com/new-research-reveals-that-almost-half-of-canadians-believe-in-the-paranormal-ghosts-and-all-267912

New drugs: cheap, “legal” and one click away

Source: The Conversation – (in Spanish) – By Martalu D Pazos, Predoctoral researcher in neuropsychopharmacology of amphetamine derivatives and other new psychoactive substances, Department of Pharmacology, Toxicology and Therapeutic Chemistry, Universitat de Barcelona

Maksim Kabakou/Shutterstock

Can you imagine buying a drug online and having it delivered to your home by post? The option exists. And not just one drug, but hundreds. These are the so-called new psychoactive substances (NPS), and their presence keeps growing.

These substances are created with a very clear aim: to mimic the effects of better-known drugs such as cannabis, cocaine, ecstasy (MDMA) or LSD, while slightly modifying their chemical structure to dodge existing laws. Their “legal” status can give a false sense of security, but nothing could be further from the truth: they are poorly researched substances whose safety profile is largely unknown. The phenomenon is hard to control because it changes constantly: when one substance is banned, several new ones are already waiting to replace it.

Drug trafficking goes digital

Buying controlled drugs normally means knowing someone in the neighbourhood who has them and sells them – in other words, street dealing, with exposure to legal trouble as well as risky situations. There are ways to buy them online more anonymously and securely, but these require knowing how to access the deep web and how to handle cryptocurrencies.

With NPS, however, things change. Because they are not illegal, anyone can buy them online with ease. All it takes is a credit card and an address, as with any other product, and they are delivered to your door through postal services or international logistics networks. Labels carry the warning “not for human consumption” or something mundane such as “bath salts”.

Bath salts

Cocaine, amphetamine and MDMA now face competition from a group of substances that imitate them: the so-called synthetic cathinones, or “bath salts”. They are the most popular modern stimulants in Spain, and they are here to stay.

Within this family, the best known is mephedrone, which began to be sold online in 2007 as legal MDMA. Its effects – euphoria, heightened appreciation of music, empathy and mild sexual stimulation – made it popular on nights out. Its effects are also shorter-lived, which invites repeated dosing within the same session, increasing the risks. Since its prohibition, new cathinones have appeared at a rapid pace to fill its market niche.

Festival drugs

What happened a year ago at Primavera Sound in Barcelona? Music festivals flourish with the arrival of good weather, and it is not unusual for attendees to seek out a pill to “enhance the experience” alongside alcohol and tobacco. In these settings, the most common pills are sold as MDMA. Unfortunately, in an unregulated market such as that for illegal drugs, these pills lack the quality controls that legal drugs and medicines undergo. As a result, fraud in a drug’s quantity or composition is widespread in the drug trade, used to cut costs, boost effects or evade legal consequences.

In this context, the new drugs are very attractive: they are cheaper, easier to obtain and, in many cases, still “legal”. That makes them an ideal option for replacing traditional, high-demand substances such as MDMA. When a popular drug runs short, new drugs are used to pass off a substitute as the real thing.

This is what happened last year at Primavera Sound in Barcelona, one of Spain’s biggest festivals. A pink, square-shaped pill was being sold as MDMA. But when it was analysed at Energy Control’s on-site testing service, it was found to contain clephedrone, a synthetic cathinone that mimics MDMA. In other words, festivalgoers could have consumed a new drug without knowing it, which increases the health risks.

Beyond the taboo: talking about drugs to save lives

Stimulant use grows every year, and production grows with it. Cathinones have thus consolidated their place in the market to meet an increasingly established global demand.

Around 73 million people worldwide used amphetamines, cocaine or ecstasy in 2022. This shows that the debate over a world “with or without drugs” is obsolete: consumption is an entrenched reality.

When we talk about the dangers of drugs, we tend to think only of addiction. The reality is much broader. Even without becoming addicted, drug use can harm our physical and mental health, interfere with work or study, and damage personal relationships and finances, among other problems.

The safest way to avoid the risks of drug use is not to take drugs at all. But for those who decide to take them, it is crucial to know the risks and have information on how to reduce them, because nobody goes out partying expecting to end up in the emergency room. Information saves lives.

The prohibition paradox

Policies based solely on prohibition fail to reduce consumption, trafficking or the associated harms. In fact, they can have the opposite effect: they encourage the emergence of new substances designed to dodge the law, about which we know even less. This increases the health risks for those who use them.

That is why we need to break the taboo and talk about drugs. The point is not to encourage their use, but to acknowledge a reality and commit to strategies based on prevention, harm reduction and, in some cases, the regulation of certain substances for which solid scientific and medical knowledge already exists.

The Conversation

Martalu D. Pazos receives funding through a doctoral grant awarded by the Generalitat de Catalunya (AGAUR), 2023 FISDU 00182. Since 2022, she has been a volunteer with Energy Control, the recreational drug harm-reduction programme of the non-profit organisation ABD – Asociación Bienestar y Desarrollo.

David Pubill Sánchez does not receive a salary, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has declared no relevant affiliations beyond the academic appointment cited.

ref. New drugs: cheap, “legal” and one click away – https://theconversation.com/nuevas-drogas-baratas-legales-y-a-un-clic-de-distancia-267149

Fed struggles to assess state of US economy as government shutdown shuts off key data

Source: The Conversation – USA (2) – By Jason Reed, Associate Teaching Professor of Finance, University of Notre Dame

The shutdown has closed off some of the Fed’s key economic data taps. picture alliance/Getty Images

When it comes to setting monetary policy for the world’s largest economy, what data drives decision-making?

In ordinary times, Federal Reserve Chair Jerome Powell and voting members of the Federal Open Market Committee, which usually meets eight times a year, have a wealth of information at their disposal, including key statistics such as monthly employment and extensive inflation data.

But with the federal shutdown that began Oct. 1, 2025, grinding on, government offices that publish such information are shuttered and data has been curtailed. Now, Powell and his Fed colleagues might be considering the price of gas or changes in the cost of coffee as they meet on Oct. 29 to make a judgment on the strength of the U.S. economy and decide where to take interest rates.

The Federal Reserve’s mandate is to implement monetary policy that stabilizes prices and promotes full employment, but there is a delicate balance to strike. Not only do Powell and the Fed have to weigh domestic inflation, jobs and spending, but they must also respond to changes in President Donald Trump’s global tariff policy.

As an economist and finance professor at the University of Notre Dame, I know the Fed has a tough job of guiding the economy under even the most ideal circumstances. Now, imagine creating policy partially blindfolded, without access to key economic data.

But, fortunately, the Fed’s not flying blind – it still has a wide range of private, internal and public data to help it read the pulse of the U.S. economy.

Key data is MIA

The Fed is data-dependent, as Powell likes to remind markets. But the cancellation of reports on employment, job openings and turnover, retail sales and gross domestic product, along with a delay in the September consumer price information, will force the central bank to lean harder on private data to nail down the appropriate path for monetary policy.

Torsten Slok, chief economist for the Apollo asset management firm, recently released his set of “alternative data,” capturing information from a wide range of sources. This includes ISM PMI reports, which measure economic activity in the manufacturing and services sectors, and Bloomberg’s robust data on consumer spending habits.

“Generally, the private data, the alternative data that we look at is better used as a supplement for the underlying governmental data, which is the gold standard,” Powell said in mid-October. “It won’t be as effective as the main course as it would have been as a supplement.”

But at this crucial juncture, the Fed has also abruptly lost one important source of private data. Payroll processor ADP had previously shared private sector payroll information with the central bank, which considered it alongside government employment figures. Now, ADP has suspended the relationship, and Powell has reportedly asked the company to quickly reverse its decision.

Espresso falls from a coffee machine into a blue cup.
With some key data unavailable, the Fed may pay more attention to the price of a cup of coffee to help determine how to set interest rates.
AP Photo/Julio Cortez

Internal research

Fortunately for the Fed, it has its own sources for reliable information.

Even when government agencies are working and producing economic reports, the Federal Reserve utilizes internal research and its nationwide network of contacts to supplement data from the U.S. Census Bureau, the Bureau of Labor Statistics and the Bureau of Economic Analysis.

Since the Fed is self-funded, the government shutdown didn’t stop it from publishing its Beige Book, which comes out eight times a year and provides insight into how various aspects of the economy are performing.

Its Oct. 15 report found that consumer spending had inched down, with lower- and middle-income households facing “rising prices and elevated economic uncertainty.” Manufacturing was also hit by challenges linked to higher tariffs.

Leading indicators

And though no data is being released on the unemployment rate, historical data shows that consumer sentiment can act as a leading indicator for joblessness in the U.S.

According to the most recent consumer confidence reports, Americans are significantly more worried about their jobs over the next six months, as compared to this time last year, and expect fewer employment opportunities during that period. This suggests the Fed will likely see an uptick in the unemployment rate, once the data resumes publishing.

And if you did notice an increase in the price of your morning coffee, you’re not mistaken – both private and market-based data suggest inflation is a pressing concern, with expectations that price increases will remain above the 2% target set by the Fed.

It’s clear that there is no risk-free path for policy, and a wrong move by the Fed could stoke inflation or even send the U.S. economy spiraling into a recession.

Uncertain path ahead

At the Fed’s September monetary policy meeting, members voted to cut benchmark interest rates by 25 basis points, while one member advocated for a 50-point cut.

It was the first interest rate cut since December – one that Trump had been loudly demanding to help spur the U.S. economy and lower the cost of government debt. Markets expect the Fed to cut interest rates by another quarter of a percentage point at its Oct. 28-29 meeting and then again in December. That would lower rates to a range of 3.5% to 3.75%, from 4% to 4.25% currently, giving the labor market a much-needed boost.

After that, the near-certainty ends, as it’s anyone’s guess where interest rates will go from there. At quarterly meetings, members of the Federal Open Market Committee give projections of where they think the Fed’s benchmark interest rate will go over the next three years and beyond to provide forward guidance to financial markets and other observers.

The median projection from the September meeting suggests the benchmark rate will end 2026 a little lower than where it began, at 3.4%, and decline to 3.1% by the end of 2027. With inflation accelerating, Fed officials will continue to weigh the weakening labor market against the threat of inflation from tariffs, immigration reform and their own lower interest rates – not to mention the ongoing impact of the government shutdown.

Unfortunately, I believe these risks will be difficult to mitigate with just Fed intervention, even with perfect foresight into the economy, and will need help from government immigration, tax and spending policy to put the economy on the right path.

The Conversation

Jason Reed does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Fed struggles to assess state of US economy as government shutdown shuts off key data – https://theconversation.com/fed-struggles-to-assess-state-of-us-economy-as-government-shutdown-shuts-off-key-data-267204

Were Neanderthals capable of making art?

Source: The Conversation – UK – By Paul Pettitt, Professor in the Department of Archaeology, Durham University

Neanderthal handprints in a replica of Maltravieso Cave, Spain. WH Pics / Shutterstock

The ability to make art has often been considered a hallmark of our species. Over a century ago, prehistorians even had trouble believing that modern humans from the Upper Palaeolithic (between 45,000 and 12,000 years ago) were capable of artistic flair.

Discoveries of incontrovertibly old artworks from the caves and rockshelters of Europe soon dispelled their doubts. But what of the Neanderthals, an ancient, large-brained sister group to our own species? We now know that they were capable of making art too.

However, at present, all of the Neanderthal evidence is non-figurative – there are no depictions of animals or humans. This form of art was perhaps exclusive to Homo sapiens. Instead, the Neanderthal examples consist of hand stencils, made by blowing pigment over the hand; finger flutings, where the fingers were pressed into a soft surface; and geometric markings.

Neanderthals inhabited western Eurasia from about 400,000 years ago until their extinction about 40,000 years ago and have often been caricatured as the archetypal “cavemen”.

Questions about their cognitive and behavioural sophistication have never quite gone away, and whether they produced art is at the forefront of this issue.

Despite the fact that we know that Neanderthals were capable of producing jewellery and using coloured pigments, there has been much objection to the notion that they explored deep caves and left art on the walls.

But recent work has confirmed beyond doubt that they did. In three Spanish caves – La Pasiega in Cantabria, Maltravieso in Extremadura and Ardales in Malaga – Neanderthals created linear signs, geometric shapes, hand stencils and handprints using pigments. In La Roche Cotard, a cave in the Loire Valley, France, Neanderthals left a variety of lines and shapes in finger flutings (the lines that fingers leave on a soft surface).

And deep in the Bruniquel cave, southwest France, they broke off stalactites into sections of similar length and constructed a large oval wall of them, setting fires on top of it. This was not a shelter but something odder, and if it was constructed in a modern art gallery we’d no doubt assume it was installation art.

Now that we have well-established examples of Neanderthal art on cave walls in France and Spain, more discoveries are inevitable. However, the job is hard because of difficulties in establishing the age of Palaeolithic cave art. In fact, it is often the focus of intense debate among specialists.

Relative dating schemes based on the style and themes of cave art and comparisons of objects recovered from dated archaeological levels have proven useful, but they have their limits.

To produce real ages requires at least one of three conditions. The first is the presence of a charcoal pigment which can be dated using the radiocarbon method. This will establish exactly when the charcoal was created (when its wood died). However, black pigments are often from minerals (manganese) and therefore a large amount of black coloured cave art is simply not dateable.

A further problem is that the production of the charcoal may not be of the same age as the date it was used as a pigment. I could pick up some 30,000-year-old charcoal from a cave floor and write “Paul was here” on a cave wall. The radiocarbon date wouldn’t reflect when my graffito was actually made.

A second condition is the presence of calcite flowstones (stalactites and stalagmites) that have formed over the art. If they demonstrably grew on top of a piece of art, then they must be younger than it. A dating method based on the decay of uranium into an isotope – a particular form – of the element thorium can be used to establish exactly when flowstones formed, producing a minimum age for the art underneath.

I was part of a team who used this method to date flowstones overlying red pigment art in the three Spanish caves mentioned earlier, demonstrating that hand stencils, dots and colour washes must have been created over 64,000 years ago. This is a minimum age: the actual age of the images could be much older.
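
The arithmetic behind such a minimum age can be sketched with the standard ingrowth equation. This is an illustrative simplification only – it assumes the flowstone formed with uranium but no thorium, treats the uranium activity as constant over these timescales, and uses an approximate thorium-230 half-life:

```python
import math

# Approximate half-life of thorium-230, in years (published values are
# close to 75,700 years; treated here as an assumed round figure).
TH230_HALF_LIFE_YEARS = 75_700.0
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YEARS

def flowstone_age(activity_ratio: float) -> float:
    """Age in years from a measured (230Th / 238U) activity ratio.

    Simplified model: calcite crystallises with uranium but no thorium,
    and thorium-230 then grows in towards equilibrium:
        ratio(t) = 1 - exp(-lambda * t)
    Solving for t gives the age of the flowstone.
    """
    return -math.log(1.0 - activity_ratio) / LAMBDA_230

# A measured ratio of about 0.44 already implies an age of roughly
# 64,000 years - and the art beneath the flowstone must be older still.
print(round(flowstone_age(0.444)))
```

Real uranium-series dating also corrects for initial (detrital) thorium and measures the 234U/238U ratio; the sketch only shows why a flowstone that grew over a painting yields a minimum age for the art beneath it.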

But even at its youngest range, the images predate the earliest arrival of modern humans (Homo sapiens) in Iberia by at least 22,000 years. As Middle Palaeolithic archaeology – the calling cards of the Neanderthals – is common in all three caves, the simplest interpretation that fits the dating is that the authors of the images were Neanderthals.

Objections to our results ignored supporting information we’d published. Did the dated samples really overlie the art? They did. Can we trust the technique? We have for half a century.

The third condition has just provided further evidence of Neanderthal artistic activity. Meandering lines left by tracing fingers along the soft muds of the walls of the Roche Cotard cave reveal another form of interacting with this mysterious subterranean realm. These markings include wavy, parallel and curved lines in organised arrangements that show they were made deliberately.

The dating of sediments which formed over its entrance show that it was completely sealed no later than 54,000 years ago – probably earlier. As with our Spanish examples, this was long before Homo sapiens arrived in the region and the cave contains only tools made by Neanderthals. It adds another art form to the Neanderthal repertoire.

Even ardent sceptics must agree that these data unambiguously reveal artistic activities in deep caves which can only have been made by Neanderthals.

The art could represent Neanderthal individuals becoming more aware of their own agency in the world. It might constitute the first evidence of engagement with an imaginary realm. The coming years will no doubt reveal even more subjects for debate.

The Conversation

Paul Pettitt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Were Neanderthals capable of making art? – https://theconversation.com/were-neanderthals-capable-of-making-art-268239

The rise and fall of globalisation, part one: battle to be top dog

Source: The Conversation – Global Perspectives – By Steve Schifferes, Honorary Research Fellow, City Political Economy Research Centre, City St George’s, University of London

A world map showing the extent of the British Empire in 1886. Norman B. Leventhal Map & Education Center, Boston Public Library/Wikimedia Commons, CC BY

For nearly four centuries, the world economy has been on a path of ever-greater integration that even two world wars could not totally derail. This long march of globalisation was powered by rapidly increasing levels of international trade and investment, coupled with vast movements of people across national borders and dramatic changes in transportation and communication technology.

According to economic historian J. Bradford DeLong, the value of the world economy (measured at fixed 1990 prices) rose from US$81.7 billion (£61.5 billion) in 1650, when this story begins, to US$70.3 trillion (£53 trillion) in 2020 – an 860-fold increase. The most intensive periods of growth corresponded to the two periods when global trade was rising fastest: first during the “long 19th century” between the end of the French revolution and start of the first world war, and then as trade liberalisation expanded after the second world war, from the 1950s up to the 2008 global financial crisis.
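
Those headline numbers are easy to sanity-check. A quick calculation using only the figures quoted above confirms the 860-fold multiple and shows the modest average annual growth rate it implies:

```python
# Sanity-check of DeLong's figures as quoted in the text (fixed 1990 prices).
start_value = 81.7e9   # US$81.7 billion in 1650
end_value = 70.3e12    # US$70.3 trillion in 2020

multiple = end_value / start_value
years = 2020 - 1650

# Compound annual growth rate implied by the two endpoints.
annual_growth_pct = (multiple ** (1 / years) - 1) * 100

print(round(multiple))             # ~860-fold, matching the article
print(round(annual_growth_pct, 1)) # under 2% a year, averaged over 370 years
```

Averaged over nearly four centuries, even an 860-fold expansion works out to less than 2% growth per year – which is why the faster-growing periods the article highlights stand out.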

Now, however, this grand project is on the retreat. Globalisation is not dead yet, but it is dying.

Is this a cause for celebration, or concern? And will the picture change again when Donald Trump and his tariffs of mass disruption leave the White House? As a longtime BBC economics correspondent who was based in Washington during the global financial crisis, I believe there are sound historical reasons to worry about our deglobalised future – even once Trump has left the building.


Trump’s tariffs have amplified the world’s economic problems, but he is not the root cause of them. Indeed, his approach reflects a truth that has been emerging for many decades but which previous US administrations – and other governments around the world – have been reluctant to admit: namely, the decline of the US as the world’s no.1 economic power and engine of world growth.

In each era of globalisation since the mid-17th century, a single country has sought to be the clear world leader – shaping the rules of the global economy for all. In each case, this hegemonic power had the military, political and financial power to enforce these rules – and to convince other countries that there was no preferable path to wealth and power.

But now, as the US under Trump slips into isolationism, there is no other power ready to take its place and carry the torch for the foreseeable future. Many people’s pick, China, faces too many economic challenges, including its lack of a truly international currency – and, as a one-party state, it does not possess the democratic mandate needed to gain acceptance as the world’s new dominant power.

While globalisation has always produced many losers as well as winners – from the slave trade of the 18th century to displaced factory workers in the American Midwest in the 20th century – history shows that a deglobalised world can be an even more dangerous and unstable place. The most recent example came during the interwar years, when the US refused to take up the mantle left by the decline of Britain as the 19th century’s hegemonic global power.

In the two decades from 1919, the world descended into economic and political chaos. Stock market crashes and global banking failures led to widespread unemployment and increasing political instability, creating the conditions for the rise of fascism. Global trade declined sharply as countries put up trade barriers and started self-defeating currency wars in the vain hope of giving their countries’ exports a boost. On the contrary, global growth ground to a halt.

A century on, our deglobalising world is vulnerable again. But to chart whether this means we are destined for a similarly chaotic and unstable future, we first need to explore the birth, growth and reasons behind the imminent demise of this extraordinary global project.

French model: mercantilism, money and war

By the mid-1600s, France had emerged as the strongest power in Europe – and it was the French who developed the first overarching theory of how the global economy could work in their favour. Nearly four centuries later, many aspects of “mercantilism” have been revived by Trump’s US playbook, which could be entitled How To Dominate the World Economy by Weakening Your Rivals.

France’s version of mercantilism was based on the idea that a country should put up trade barriers to limit how much other countries could sell to it, while boosting its own industries to ensure that more money (in the form of gold) came into the country than left it.

England and the Dutch Republic had already adopted some of these mercantilist policies, establishing colonies around the globe run by powerful monopolistic trading companies. In contrast to these “seaborne empires”, the much larger empires in the east such as China and India had the internal resources to generate their own revenue, meaning international trade – although widespread – was not critical to their prosperity.

Portrait of French finance minister Jean-Baptiste Colbert
French finance minister Jean-Baptiste Colbert, architect of mercantilism.
Metropolitan Museum of Art/Wikimedia

But it was France which first systematically applied mercantilism across the whole of government policy – led by the powerful finance minister Jean-Baptiste Colbert (1661-1683), who had been granted unprecedented powers by King Louis XIV to strengthen the financial might of the French state. Colbert believed trade would boost the coffers of the state and strengthen France’s economy while weakening its rivals, stating:

It is simply, and solely, the absence or abundance of money within a state [which] makes the difference in its grandeur and power.

In Colbert’s view, trade was a zero-sum game. The more France could run a trade surplus with other countries, the more gold bullion it could accumulate for the government and the weaker its rivals would become if deprived of gold. Under Colbert, France pioneered protectionism, tripling its import tariffs to make foreign goods prohibitively expensive.

At the same time, he strengthened France’s domestic industries by providing subsidies and granting them monopolies. Colonies and government trading companies were established to ensure France could benefit from the highly lucrative trade in goods such as spices, sugar – and slaves.

Colbert oversaw the expansion of French industries into areas like lace and glass-making, importing skilled craftsmen from Italy and granting these new companies state monopolies. He invested heavily in infrastructure such as the Canal du Midi, and dramatically increased the size of France’s navy and merchant marine to challenge its British and Dutch rivals.

Global trade at this time was highly exploitative, involving the forced seizure of gold and other raw materials from newly discovered lands (as Spain had been doing with its conquests in the New World from the late 15th century). It also meant benefiting from the trade in humans, with huge profits as slaves were seized and sent to the Caribbean and other colonies to produce sugar and other crops.




Read more:
Why London’s new slavery memorial is so important: ‘The past that is not past reappears, always, to rupture the present’


In this era of mercantilism, trade wars often led to real wars, fought across the globe to control trade routes and seize colonies. Following Colbert’s reforms, France began a long struggle to challenge the overseas empires of its maritime rivals, while also engaging in wars of conquest in continental Europe.

France initially enjoyed success in the 17th century both on land and sea against the Dutch. But ultimately, its state-run French Indies company was no rival to the ruthless, commercially driven activities of the Dutch and British East India companies, which delivered enormous profits to their shareholders and revenues for their governments.

Indeed, the huge profits made by the Dutch from the Far Eastern spice trade explain why they had no hesitation in handing over their small North American colony of New Amsterdam, in return for expelling the British from a small toehold on one of their spice islands in what is now Indonesia. In 1664, that Dutch outpost was renamed New York.

After a century of conflict, Britain gradually gained ascendancy over France, conquering India and forcing its great rival to cede Canada in 1763 after the Seven Years’ War. France never succeeded in fully countering Britain’s naval strength. Resounding defeats by fleets led by Horatio Nelson in the early 19th century, coupled with Napoleon’s defeat at Waterloo by a coalition of European powers, marked the end of France’s time as Europe’s hegemonic power.

Painting of French ships under fire during the Battle of Trafalgar.
The battle of Trafalgar, off southwestern Spain in October 1805, was decisive in ending France’s era of dominance.
Yale Center for British Art/Wikimedia

But while the French model of globalisation ultimately failed in its attempt to dominate the world economy, that has not prevented other countries – and now President Trump – from embracing its principles.

France found that tariffs alone could neither sufficiently fund its wars nor boost its industries. Its broad version of mercantilism led to endless wars that spread around the globe, as countries retaliated both economically and militarily and tried to seize territories.

More than two centuries later, there is an uncomfortable parallel with what the results of Trump’s endless tariff wars might bring, both in terms of ongoing conflict and the organisation of rival trade blocs. It also shows that more protectionism, as proposed by Trump, will not be enough to revive the US’s domestic industries.

British model: free trade and empire

The ideology of free trade was first spelled out by British economists Adam Smith and David Ricardo, the founding fathers of classical economics. They argued trade was not a zero-sum game, as Colbert had suggested, but that all countries could mutually benefit from it. According to Smith’s classic text, The Wealth of Nations (1776):

If a foreign country can supply us with a commodity cheaper than we ourselves can make, better buy it off them with some part of the produce of our own industry, employed in such a way that we have some advantages.

As the world’s first industrial nation, by the 1840s Britain had created an economic powerhouse based on the new technologies of steam power, the factory system, and railroads.

Smith and Ricardo argued against the creation of state monopolies to control trade, proposing minimal state intervention in industry. Ever since, Britain’s belief in the benefits of free trade has proved stronger and longer-lasting than that of any other major industrial power – more deeply embedded in both its politics and popular imagination.

This ironclad commitment was born out of a bitter political struggle in the 1840s between manufacturers and landowners over the protectionist Corn Laws. The landowners who had traditionally dominated British politics backed high tariffs, which benefited them but resulted in higher prices for staples like bread. The repeal of the Corn Laws in 1846 upended British politics, signalling a shift of power to the manufacturing classes – and ultimately to their working-class allies once they gained the right to vote.

Illustration of an Anti-Corn Law League meeting.
An Anti-Corn Law League meeting held in London’s Exeter Hall in 1846.
Wikimedia

In time, Britain’s advocacy of free trade unleashed the power of its manufacturing to dominate global markets. Free trade was framed as the way to raise living standards for the poor (the exact opposite of President Trump’s claim that it harms workers) and had strong working-class support. When the Conservatives floated the idea of abandoning free trade in the 1906 general election, they suffered a devastating defeat – the party’s worst until 2024.

As well as trade, a central element of Britain’s role as the new global hegemonic power was the rise of the City of London as the world’s leading financial centre. The key was Britain’s embrace of the gold standard, which put its currency, the pound, at the heart of the new global economic order by linking its value to a fixed amount of gold so that it would not fluctuate. Thus the pound became the worldwide medium of exchange.

This encouraged the development of a strong banking sector, underpinned by the Bank of England as a credible and trustworthy “lender of last resort” in a financial crisis. The result was a huge boom in international investment, opening access to overseas markets for British companies and individual investors.

In the late 19th century, the City of London dominated global finance, investing in everything from Argentinian railways and Malaysian rubber plantations to South African gold mines. The gold standard became a talisman of Britain’s power to dominate the world economy.

The pillars of Britain’s global economic dominance were a highly efficient manufacturing sector, a commitment to free trade to ensure its industry had access to global markets, and a highly developed financial sector which invested capital around the world and reaped the benefits of global economic development. But Britain also did not hesitate to use force to open up foreign markets – for example, during the Opium Wars of the 1840s, when China was compelled to open its markets to the lucrative trade in opium from British-owned India.




Read more:
What the Opium Wars can tell us about China, the U.S. and fentanyl


By the end of the 19th century, the British empire incorporated one quarter of the world’s population, providing a source of cheap labour and secure raw materials as well as a large market for Britain’s manufactured goods. But that was still not enough for its avaricious leaders: Britain also made sure that local industries did not threaten its interests – by undermining the Indian textile industry, for example, and manipulating the Indian currency.

In reality, globalisation in this era was about domination of the world economy by a few rich European powers, meaning that much global economic development was curtailed to protect their interests. Under British rule between 1750 and 1900, India’s share of world industrial output declined from 25% to 2%.

But for those at the centre of Britain’s global formal and informal empire, such as the middle-class residents of London, this was a halcyon time – as economist John Maynard Keynes would later recall:

For middle and upper classes … life offered, at a low cost and with the least trouble, conveniences, comforts and amenities beyond the compass of the richest and most powerful monarchs of other ages. The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole Earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep.

US model: protectionism to neoliberalism

While Britain enjoyed its century of global dominance, the United States embraced protectionism after its foundation in 1776 for longer than any other major western economy.

The introduction of tariffs to protect and subsidise emerging US industries had first been articulated in 1791 by the fledgling nation’s first treasury secretary, Alexander Hamilton – Caribbean immigrant, founding father and future subject of a record-breaking musical. The Whig party under Henry Clay and its successor, the Republican Party, were both strong supporters of this policy for most of the 19th century. Even as US industry grew to overshadow all others, its government maintained some of the highest tariff barriers in the world.

Alexander Hamilton on the front of a US$10 note from 1934
Founding father Alexander Hamilton on the front of a US$10 note from 1934.
Wikimedia

Tariff rates rose to 50% in the 1890s with the backing of future president William McKinley, both to help industrialists and pay for generous pensions for 2 million civil war veterans and their dependants – a key part of the Republican electorate. It is no accident that President Trump has festooned the White House with pictures of Hamilton, Clay and McKinley – all supporters of protectionism and high tariffs.

In part, the US’s enduring resistance to free trade was because it had access to an internal supply of seemingly limitless raw materials, while its rapidly growing population, swelled by immigration, provided internal markets that fuelled its growth while keeping out foreign competition.

By the late 19th century, the US was the world’s biggest steel producer with the largest railroad system in the world and was moving rapidly to exploit the new technologies of the second industrial revolution – based on electricity, petrol engines and chemicals. Yet it was only after the second world war that the US assumed the role of global superpower – in part because it was the only country on either side of the war that had not suffered severe damage to its economy and infrastructure.

In the wake of global destruction in Europe and Asia, the US’s dominance was political, military and cultural, as well as financial – but the US vision of a globalised world had some important differences from its British predecessor.

The US took a much more universalist and rules-based approach, focusing on the creation of global organisations that would establish binding regulations – and open up global markets to unfettered American trade and investment. It also aimed to dominate the international economic order by replacing the pound sterling with the US dollar as the global medium of exchange.

Within a week of its entry into the second world war, plans were laid to establish US global financial hegemony. The US treasury secretary, Henry Morgenthau, began work on establishing an “inter-allied stabilisation fund” – a playbook for post-war monetary arrangements which would enshrine the US dollar at its heart.

This led to the creation of the International Monetary Fund (IMF) and World Bank at the Bretton Woods conference in New Hampshire in 1944 – institutions dominated by the US, which encouraged other countries to adopt the same economic model both in terms of free trade and free enterprise. The Allied nations, which were simultaneously meeting to establish the United Nations in an effort to ensure future world peace, had suffered the devastating effects of the Great Depression and the war, and welcomed the US’s commitment to shape a new, more stable economic order.

How the 1944 Bretton Woods deal ensured the US dollar would be the world’s dominant currency. Video: Bloomberg TV.

As the world’s biggest and strongest economy, there was (initially) little resistance to this US plan for a new international economic order in its own image. The motive was as much political as economic: the US wanted to provide economic benefits to ensure the loyalty of its key allies and counter the perceived threat of a communist takeover – in complete contrast to Trump’s mercantilist view today that all other countries are out to “rip off” the US, and that its own military might means it has no real need for allies.

After the war finally ended, the US dollar, now linked to gold at a fixed rate of $35 per ounce to guarantee its stability, assumed the role of the free world’s principal currency. It was both used for global trade transactions and held by foreign central banks as their currency reserves – giving the US economy an “exorbitant privilege”. The stable value of the dollar also made it easier for the US government to sell Treasury bonds to foreign investors, enabling it to more easily borrow money and run up trade deficits with other countries.

The conditions were set for an era of US political, financial and cultural dominance, which saw the rise of globally admired brands such as McDonald’s and Coca Cola, as well as a powerful US marketing arm in the form of Hollywood. Perhaps even more significantly, the relaxed, well-funded campuses of California would prove a perfect petri dish for the development of new computer technologies – backed initially by cold war military investment – which, decades later, would lead to the birth of the big-tech companies that dominate the industry today.

The US view of globalisation was broader and more interventionist than the British model of free trade and empire. Rather than having a formal empire, it wanted to open up access to the entire world economy, which would provide global markets for American products and services.

The US believed you needed global economic institutions to police these rules. But as in the British case, the benefits of globalisation were still unevenly shared. While countries that embraced export-led growth such as Japan, Korea and Germany prospered, other resource-rich but capital-poor countries such as Nigeria only fell further behind.

From dream to despair

Though the legend of the American dream grew and grew, by the 1970s the US economy was coming under increasing pressure – in particular from German and Japanese rivals, who by then had recovered from the war and modernised their industries.

Troubled by these perceived threats and a growing trade deficit, in 1971 President Richard Nixon stunned the world by announcing that the US was going off the gold standard – forcing other countries to bear the cost of adjustment for the US balance of payments crisis by making them revalue their currencies. This had a profound effect on the global financial system: within a decade, most major currencies had abandoned fixed exchange rates for a new system of floating rates, effectively ending the 1944 Bretton Woods settlement.

US president Richard Nixon announces the US is leaving the gold standard on August 15 1971.

The end of fixed exchange rates opened the door to the “financialisation” of the global economy, vastly expanding global investment and lending – much of it by US financial firms. This gave succour to the burgeoning neoliberal movement that sought to further rewrite the rules of the financial world order. In the 1980s and ’90s, these policy prescriptions became known as the Washington consensus: a set of rules – including opening markets to foreign investment, deregulation and privatisation – that was imposed on developing economies in crisis, in return for them receiving support from US-led organisations like the World Bank and IMF.

In the US, meanwhile, the growing reliance on the finance and hi-tech sectors increased inequality and fostered resentment in large parts of American society. Both Republicans and Democrats embraced this new world order, shaping US policy to favour their hi-tech and financial allies. Indeed, it was the Democrats who played a key role in deregulating the financial sector in the 1990s.

Meanwhile, the decline of US manufacturing industries accelerated, as did the gap between the incomes of those in the hinterland, where manufacturing was based, and residents of the large metropolitan cities.

By 2023, the lowest 50% of US citizens received just 13% of total personal income, while the top 10% received almost half (47%). The wealth gap was even greater, with the bottom 50% holding only 6% of total wealth, while over a third (36%) was held by the top 1% alone. And since 1980, real incomes of the bottom 50% have barely grown.

The bottom half of the US population was suffering from a surge in “deaths of despair” – a term coined by Anne Case and the Nobel-winning economist Angus Deaton to describe high mortality rates from drug abuse, suicide and alcohol-related disease among younger working-class Americans. Rising costs of housing, medical care and university education all contributed to widespread indebtedness and growing financial insecurity. By 2019, a study found that two-thirds of people who filed for bankruptcy cited medical issues as a key reason.




Read more:
International trade has cost Americans millions of jobs. Investing in communities might offset those losses


The decline in US manufacturing accelerated after China was admitted to the World Trade Organization in 2001, further widening America’s soaring trade and budget deficits. Political and business elites had hoped the move would open up the huge Chinese market to US goods and investment, but China’s rapid modernisation made its industry more competitive than its American rivals in many fields.

Ultimately, this era of intensive financialisation of the world economy created a series of regional and then global financial crises, damaging many Latin American and Asian economies. This culminated in the 2008 global financial crisis, precipitated by reckless lending by US financial institutions. The world economy took more than a decade to recover as countries wrestled with slower growth, lower productivity and less trade than before the crisis.

For those who chose to read it, the writing was on the wall for America’s era of global domination decades ago. But it would take Trump’s victory in the 2016 presidential election – a profound shock to many in the US “liberal establishment” – to make clear that the US was now on a very different course that would shake up the world.

Making a bad situation more dangerous

In my view, Trump is the first modern-day US president to fully understand the powerful alienation felt by many working-class American voters, who believed they were left out of the US’s immense post-war economic growth that so benefited the largely urban American middle classes. His strongest supporters have always been lower-middle-class voters from rural areas who are not college-educated.

Yet Trump’s key policies will ultimately do little for them. High tariffs to protect US jobs, expulsion of millions of illegal immigrants, dismantling protections for minorities by opposing DEI (diversity, equity and inclusion) programmes, and drastically cutting back the size of government will have increasingly negative economic consequences in the future, and are very unlikely to restore the US economy to its previous dominant position.

US president Donald Trump unveils his global tariff ‘hit list’ on April 3 2025. BBC News.

Long before he first became president, Trump hated the eye-watering US trade deficit (he’s a businessman, after all) – and believed that tariffs would be a key weapon for ensuring US economic dominance could be maintained. Another key part of his “America First” ideology was to repudiate the international agreements that were at the heart of the US’s postwar approach to globalisation.

In his first term, however, Trump (having not expected to win) was ill-prepared for power. But second time around, conservative thinktanks had spent years outlining detailed policies and identifying key personnel who could implement the radical U-turn in US economic policy.

Under Trump 2.0, we have seen a return to the mercantilist point of view reminiscent of France in the 17th and 18th centuries. His assertion that countries which ran a trade surplus with the US “were ripping us off” echoed the mercantilist belief that trade was a zero-sum game – rather than the 20th-century view, pioneered by the US, that globalisation brings benefits to all, no matter the precise balance of that trade.

Trump’s tax-and-tariff plans, which extend tax breaks for the very rich while squeezing the poor through benefit cuts and tariff-driven inflation, will increase inequality in the US.

At the same time, the passing of the One Big Beautiful Bill is predicted to add some US$3.5 trillion to US government debt – even after the Elon Musk-led “Department of Government Efficiency” cuts imposed on many Washington departments. This adds pressure to the key US Treasury bond market at the centre of the world financial system, and raises the cost of financing the huge US deficit while weakening its credit rating. Continuing these policies could threaten a default by the US, which would have devastating consequences for the entire global financial system.

For all the macho grandstanding from Trump and his supporters, his economic policies are a demonstration of American weakness, not strength. While I believe his highlighting of some of the ills of the US economy was overdue, the president is rapidly squandering the economic credibility and goodwill that the US built up in the postwar years, as well as its cultural and political hegemony. For people living in America and elsewhere, he is making a bad situation more dangerous – including for many of his most ardent supporters.

That said, even without Trump’s economic and societal disruptions, the end of the US era of hegemonic dominance would still have happened. Globalisation is not dead, but it is dying. The troubling question we all face now is what happens next.

This is the first of a two-part Insights long read on the rise and fall of globalisation. In part two: what comes next?



The Conversation

Steve Schifferes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The rise and fall of globalisation, part one: battle to be top dog – https://theconversation.com/the-rise-and-fall-of-globalisation-part-one-battle-to-be-top-dog-267910

How climate change can make people more likely to get into violent conflict

Source: The Conversation – UK – By Edward White, PhD Candidate in Psychology, Kingston University

Scharfsinn / Shutterstock

Climate change is reshaping weather patterns around the world, with monsoons, droughts, hurricanes and heatwaves all occurring with greater frequency and intensity. Aside from disturbing ecosystems, these environmental shifts risk triggering psychological reactions in people that can escalate into violent conflict.

The cognitive mechanisms that are triggered in people as a result of the effects of climate change share fundamental similarities with aggression in other settings. Consider two parallel scenarios. In the first, one person accidentally bumps into another person in a crowded bar. The bumped individual, already stressed from work woes, assumes malicious intent and retaliates violently.

In the second scenario, farmers in a water-scarce region notice their neighbours’ water well running normally while their own well runs dry. The farmers conclude that deliberate water theft has occurred rather than blaming geological differences in their ability to access the aquifer.

Both of these situations demonstrate what is known in psychology as “hostile attribution bias”, the tendency to interpret ambiguous actions as having been done with harmful intent. Forensic psychological research has recognised this bias as a factor in offending behaviour.


Wars and climate change are inextricably linked. Climate change can increase the likelihood of violent conflict by intensifying resource scarcity and displacement, while conflict itself accelerates environmental damage. This article is part of a series, War on climate, which explores the relationship between climate issues and global conflicts.


Environmental factors can intensify this cognitive tendency. When entire communities endure prolonged heatwaves and water shortages, this heightened strain can fuel perceptions that the survival strategies of other groups are deliberate acts of aggression. This can escalate tensions and increase the risk of conflict.

In Ethiopia and Kenya, even a modest drop in rainfall has been linked to more violent conflict breaking out between communities. This pattern has been repeated across sub-Saharan Africa and parts of Asia. Consecutive droughts in 1965 and 1966, for example, contributed to the widespread rural discontent that fuelled the Maoist insurgency in northern India. The insurgency remains active today.

This is not just a modern problem. In the 4th century, when much of Britain was ruled by the Romans, a series of droughts caused famine. Roman Britain descended into anarchy and, before long, Pict, Scotti and Saxon tribes were storming Hadrian’s Wall. The last Romans left Britain around 40 years later.

Mental fatigue

The prefrontal cortex is the part of the brain that is crucial for executive functions such as making decisions and controlling behaviour. But, like a muscle, it can become exhausted. Research shows a link between poor self-control and offending behaviour, with a mentally fatigued person more likely to engage in violence.

Climate stress creates chronic cognitive load, forcing people to think about too many things at the same time. Farmers calculating whether their crops will survive another heatwave, coastal communities planning evacuations, or city dwellers navigating heat-induced infrastructure failures are all operating with depleted mental resources. When a person’s mental energy is depleted, the psychological brakes that normally prevent offences begin to fail.

It is, of course, difficult to establish direct causal relationships between environmental factors and violence. Various other factors such as intelligence, socioeconomic status and demographics can also contribute.

However, some studies provide valuable insights by indicating possible trends and patterns. One study from South Korea found a correlation between rising temperatures from 1991 to 2020 and deaths by assault. The risk of assault deaths increased by 1.4% for every 1°C increase in ambient temperature.

Similarly, research in Finland from 2017 found that ambient temperature played a factor in violent crime rates, with a 1.7% increase per 1°C temperature rise. These findings together suggest that heat may erode the cognitive resources people need to regulate aggressive impulses, making hostile responses more likely when the capacity for self-control is compromised.

Indian people filling containers from a water tanker.
People in the Indian city of Beed filling containers from a municipal water tanker.
Manoej Paateel / Shutterstock

Recognising how the human mind works can guide people to make positive changes. Just as cognitive-behavioural therapy helps people spot and change harmful thinking patterns, we can collectively help communities become aware of the cognitive biases that may be worsened by climate change.

Taking measures that reduce the mental burden of climate stress can also help preserve the cognitive resources needed for peaceful resolution. This can be done through improved physical infrastructure, such as water storage and transfer facilities designed to distribute water to communities evenly.




Read more:
How water fuels conflict in Pakistan


It also includes implementing education programmes and financial mechanisms that can empower disadvantaged communities to negotiate resource access. In the Koshi River basin of eastern Nepal, for example, communities located downstream have funded activities that helped convince those upstream to share their water resources.

Recognising the role of upstream watershed areas in maintaining the quantity and quality of the water flow downstream, the municipal authority in the town of Dhankuta pays for the water services – treatment, storage and distribution – to the upstream watershed villages of Nibuwa and Tankhuwa. It has also pledged to invest in conserving the upstream ecosystem.

A key part of the intersection of climate change and conflict is how environmental stress systematically activates the psychological vulnerabilities that drive criminal behaviour. By understanding this connection, intervention can occur before rising temperatures trigger the cognitive cascade from stress to bias – and from there to violence.

The Conversation

Edward White does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How climate change can make people more likely to get into violent conflict – https://theconversation.com/how-climate-change-can-make-people-more-likely-to-get-into-violent-conflict-263078

12 months out from the US midterms, both sides struggle to gain electoral advantage

Source: The Conversation – UK – By Richard Hargy, Visiting Research Fellow in International Studies, Queen’s University Belfast

Donald Trump is clearly concerned about the midterm elections that loom next November, which look to be a referendum on his administration. All seats in the House of Representatives will be up for grabs as will one-third of the Senate. Losing control of the House would severely crimp the US president’s ability to govern the way he has for the first nine months of his second term.

Trump has already voiced some unease about the election. In a recent interview with the One America News (OAN) network he stated: “The one thing that I worry about is that… I don’t have the numbers, but the person that wins the presidency always seems to lose the midterms”.

There’s a clue to the president’s apprehension about the numbers in the 2024 general election results. Despite winning the popular vote in 2024, the Republicans saw their House vote fall by 0.2 percentage points; as a result, the GOP (the Republicans) lost two seats, leaving it with a majority of only five.

Trump knows from bitter experience what could happen if he loses control of the House. The Democrats made a net gain of 40 seats at the 2018 midterms after which the House impeached Trump twice.

So the president and his Maga coalition are well aware of how important it is to retain control of Congress.

The president is already taking steps that could tilt the midterms in his favour. Shortly after being sworn in as president in January 2025, he rescinded Joe Biden’s executive order that aimed to expand voting access and voter registration.

In April Trump ordered the Department of Justice to launch an investigation into the Democrats’ top fundraising platform ActBlue, after allegations it had allowed illegal campaign donations. The Democrats denounced the move as “Donald Trump’s latest front in his campaign to stamp out all political, electoral and ideological opposition”.

In August, Trump announced he wanted to ban mail-in voting for the midterms. Three in every ten ballots cast in 2024 were mail-ins, which are historically thought to favour the Democrats. But the US constitution mandates that the states control their elections. Congress has the power to pass legislation banning mail-ins for federal elections, but it is thought unlikely that such a measure would pass the Senate.

History has shown that the party occupying the White House usually performs poorly in the subsequent midterm elections. Three recent polls, Economist/YouGov, Morning Consult, and Emerson, show Democrats edging ahead in the generic congressional vote.

But precedent and political polling may count for little over the next year, as America’s democratic system is tested by extraordinary events and challenges.

Redistricting

There are already moves by mainly, though not exclusively, Republican-controlled states to redraw congressional district boundaries (a process referred to as redistricting) to carve out additional seats and bolster the party’s chances of retaining its majority in the House of Representatives. In three states – Texas, Missouri and North Carolina – Republican legislatures have redrawn constituency lines to the party’s electoral benefit, resulting in a notional seven new GOP-leaning congressional seats.

Changing electoral boundaries could affect the election result.
Alan Mazzocco/Shutterstock

After the Republican-controlled North Carolina legislature voted through a new congressional map that may provide the party an additional seat in next year’s midterms, Trump posted on Truth Social that this provided the potential for “A HUGE VICTORY for our America First Agenda.”

Democrats have responded to these events by launching their own redistricting plans, with Virginia becoming the latest blue state to announce proposals to redraw electoral boundaries that could give the party two or three additional seats.

It is, however, the largest state in the union – California – that serves as the base for the Democrats’ counterbalancing moves. California Governor Gavin Newsom is asking his state’s voters to decide on Proposition 50. If passed, this would authorise state lawmakers to draw new district boundaries that could favour Democrats. Academic analysis has estimated that the move could provide up to five additional Democratic seats in Congress.

This action has been endorsed by former US president Barack Obama, who said the Democrats’ strategy in California gives the national party a “chance… to create a level playing field” in next year’s elections.

Partisan gerrymandering is nothing new in US politics. But what is new, according to Benjamin Schneer, a Harvard-based expert in political representation, is the scale on which this is being done. He believes:

Gerrymandering can be done more effectively now because we have fine-grained data on the population and on how people are likely to vote, and computing techniques to design maps in clever ways. Put all that together with intense polarization and that creates a perfect storm where gerrymandering can flourish.

Voting rights

The 2026 midterms could also be affected in a seismic way by an impending Supreme Court decision relating to a central pillar of the 1965 Voting Rights Act (VRA). Section 2 of the act “prohibits voting practices or procedures that discriminate on the basis of race, colour, or membership in one of the language minority groups”. The court is now weighing whether Section 2 is constitutional.

People vote in Louisiana: changes to voting rights laws could affect the outcome in 2026.
Allen J.M. Smith/Shutterstock

The case stems from a lawsuit in Louisiana, which was required under the VRA to redraw its congressional map to ensure two majority-Black districts. That requirement is now being challenged in the Supreme Court. If the challenge succeeds, it could weaken the voting power of minorities and result in congressional districts being redrawn throughout the American south.

This would be a major blow for the Democrats. Analysis by the BBC projects that this could “flip more than a dozen seats from Democratic to Republican”. Findings from the Economist go further, suggesting “Republicans could eliminate as many as 19 Democrat-held districts in the House of Representatives, or 9% of the party’s current caucus.”

The 2026 midterms will be hugely consequential. They will decide what party controls the US Congress for Trump’s last two years in office and therefore the extent of his power until January 2029. They will also serve as the unofficial start of the 2028 presidential campaign and determine whether it is the Republicans or Democrats with the political momentum heading into this historic election.

The Conversation

Richard Hargy does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. 12 months out from the US midterms, both sides struggle to gain electoral advantage – https://theconversation.com/12-months-out-from-the-us-midterms-both-sides-struggle-to-gain-electoral-advantage-268126

UK-linked children whose parents have been deprived of their citizenship are trapped in camps in Syria

Source: The Conversation – UK – By Madeline-Sophie Abbas, Senior Lecturer in Sociology, Lancaster University

Prazis Images/Shutterstock

Thousands of women and children with perceived links to Isis have been detained in camps by the Kurdish-controlled Autonomous Administration of North and East Syria since the demise of the militant organisation in 2019. These include women and children who have a connection to the UK.

Since 2019, the UK has brought three women and 18 children from north-east Syria to the UK. Most of the repatriated children had been orphaned or arrived at camps without caregivers and were under ten years old.

There are also children in the camps who have never lived in the UK, but have a parent who is either a British citizen or former British citizen who has been deprived of their British citizenship. In total, an estimated 60 “UK-linked children” remain in these camps.

These children may not be formally recognised as British citizens due to a lack of documentation, or because they were born after their parents were deprived of their British citizenship. This can mean that the children sometimes don’t have citizenship of any country.

Under Section 40 of the British Nationality Act 1981, the home secretary has the power to remove a person’s British citizenship if they consider it “conducive to the public good”. These decisions generally apply to cases involving national security or counter-terrorism.

The aim is to prevent people perceived to pose a security threat from returning to the UK. From 2010 to 2023, there were 222 citizenship deprivation orders, including 104 in 2017 alone.

The UK government’s use of citizenship deprivation has meant UK-linked children cannot be repatriated with their mothers. These children largely remain trapped in detention camps in Syria with limited possibility for a healthy future.

Children as victims or threats?

To get UK-linked children out of the camps, the UK government needs to act. Key non-governmental organisations (NGOs) working on statelessness, repatriation and children’s rights argue that allowing them to come to the UK is the only way to protect these children from their precarious and dangerous existence.

But the UK government is often reluctant to do that. “Isis-associated” children are often viewed as security threats rather than victims of violent conflict.

Age and agency are used to judge threat levels. The longer children stay in camps, the greater the security threat to the UK they are considered to pose, and the less likely they are to be allowed to come to the UK.

In situations where the child’s mother has been deprived of her UK citizenship, the child risks being separated from her if the child is brought to the UK. Reprieve, a legal action NGO, reports that the Foreign, Commonwealth and Development Office told at least five British families in Syria that their children could only be brought to the UK if the mothers remained in Syria.

This meant mothers being separated from their children (affecting 12 children aged between two and 12). The mothers, many of whom have not been formally accused of terrorism offences despite being held in the camps, should be allowed to return with their children.

Mothers’ citizenship should be reinstated to make this possible and to uphold the right to family life, in accordance with Article 8 of the Human Rights Act 1998 and the international rights enshrined in the Universal Declaration of Human Rights.

Counter-terrorism measures

A number of counter-terrorism measures are available should the government choose to bring an Isis-linked child and their mother to the UK. For example, a temporary exclusion order (TEO) is a legal measure used by the UK government to disrupt terrorist risk by controlling the return of individuals associated with terrorism-related activities.

Using these orders would allow authorities to permit mothers to return to the UK with their children while assessing and mitigating possible security risks by imposing certain conditions. These might include deradicalisation programmes, police reporting requirements and monitoring.

However, TEOs are not available for people who have been deprived of their citizenship. This therefore obstructs one way in which a child could potentially come with their parent to the UK.

Questions also remain concerning whether adults who travelled to Isis-controlled territory when they were children should be treated in law as children. They may have been groomed or trafficked or regret their decision.

Recruitment and use of children in armed conflict violates international law. Children in these cases should be treated as victims deserving protection. Article 39 of the UN convention on the rights of the child (UNCRC) presses states to take measures to support the recovery and reintegration of child victims of conflict.

Not only that, the UK’s official independent reviewer of terrorism legislation, Jonathan Hall KC, argues that citizenship deprivation potentially undermines UK national security, because former citizens are no longer monitored by the UK government. The previous government nonetheless rejected Hall’s recommendation that TEOs be made available to those deprived of UK citizenship.

Citing the UK government’s returning families programme, Hall told me (in a research interview for as yet unpublished research) that repatriation for British families can be achieved through a combination of care and security measures.

The Tavistock Returning Families Unit, funded by the Home Office, supports local authorities working with British families and children returning from Syria. It coordinates support for the mental and emotional needs of the child and family following assessments, and offers an infrastructure for local authorities to help “Isis-associated” families reintegrate into UK society.

Children might be separated from family members on their return to the UK if the child or caregiver is then prosecuted in the UK: residing in Isis-linked territory can now be considered a terrorist offence. Nonetheless, this provides a potential route for children to be integrated into UK society with their mothers that citizenship deprivation denies.

Without repatriation to the UK, mothers are at risk of indefinite detention within deadly camps and children risk becoming orphans.

The Conversation

Madeline-Sophie Abbas receives funding from UKRI – Policy Support grant

ref. UK-linked children whose parents have been deprived of their citizenship are trapped in camps in Syria – https://theconversation.com/uk-linked-children-whose-parents-have-been-deprived-of-their-citizenship-are-trapped-in-camps-in-syria-253771

FOMO, or the fear of missing out: between the social brain and collective anxiety

Source: The Conversation – France (in French) – By Emmanuel Carré, Professor, director of Excelia Communication School, associate researcher at the CIMEOS laboratory (University of Burgundy) and CERIIM (Excelia), Excelia

A dream of ubiquity sustained by digital tools. Roman Odintsov/Pexels, CC BY

The “fear of missing out” (FOMO) was not born with Instagram. This fear of being excluded, of not being in the right place, or not being there in the right way, was theorised well before social media, and reveals the anxiety of not belonging to the group.


You have probably felt it before: that distinct sensation that your phone has just vibrated in your pocket. You pull it out hurriedly. No notification.

Another scenario: you go away for the weekend, determined to “disconnect”. The first few hours are pleasant. Then the anxiety rises. What is happening on your messaging apps? Which conversations are you missing? You are experiencing the “fear of missing out”, known by the acronym FOMO.

Where does this worry come from? From our brains, programmed to seek rewards? From social pressure? From our digital habits? The answer is probably a mix of all three, but not exactly in the way we are usually told.

What thinkers have taught us about social anxiety

In 1899, the economist Thorstein Veblen (1857-1929), one of the theorists often invoked in the luxury industry, described “conspicuous consumption”: the aristocracy does not consume to satisfy needs, but to signal its social status. This logic generates an anxiety: that of not measuring up, of finding oneself excluded from the circle of the privileged.

Around the same time, the German philosopher Georg Simmel (1858-1918) extended this analysis by studying fashion. He described a tension: we simultaneously want to stand out and to belong. Fashion temporarily resolves this contradiction, but at the cost of a perpetual race. As soon as a style spreads, it loses its value. This dynamic creates a system in which no one is spared: elites must constantly innovate while everyone else chases codes that keep slipping away.

In 1959, the sociologist Erving Goffman (1922-1982) theorised our interactions as theatrical performances. We constantly manage the impression we give to others, alternating between the stage (where we play our role) and backstage (where we drop the performance). His question resonates today: what happens when the backstage disappears? When every moment becomes potentially documentable, shareable?

Finally, more recently, the philosopher Zygmunt Bauman (1925-2017) developed the concept of “liquid modernity”: in a world of infinite options, anxiety is no longer tied to deprivation but to saturation. How do you choose when everything seems possible? How can you be sure you have made the right choice?

These four thinkers obviously did not anticipate social media, but they identified the deep springs of social anxiety: belonging to the right circle (Veblen), mastering the codes (Simmel), permanent performance (Goffman) and the anguish of choice (Bauman) – mechanisms that digital platforms now amplify systematically.

FOMO in the digital age

With the spread of smartphones, the term became popular in the early 2010s. One study defines it as “a pervasive apprehension that others might be having rewarding experiences from which one is absent”. This anxiety arises from the frustration of basic needs (autonomy, competence, relatedness) and drives compulsive use of social media.

What does digital technology change? First, scale: we compare our lives to hundreds of edited lives. Then, permanence: the anxiety is now continuous, accessible 24 hours a day. Finally, performativity: we no longer merely experience FOMO, we produce it. Each Instagram story can thus provoke in others the very anxiety we ourselves feel.

Phantom vibration syndrome illustrates how this anxiety becomes inscribed in the body. A study of medical interns found that 78% of them reported these phantom vibrations, a rate that climbed to 96% during periods of intense stress. These tactile hallucinations are not mere perceptual errors, but manifestations of heightened social anxiety.

Beyond dopamine: an anxiety of belonging

Many popular science books and articles have spread the idea that FOMO can be explained by the activation of the brain’s “reward circuit”.

This system runs on dopamine, a chemical messenger in the brain (a neurotransmitter) that triggers both anticipated pleasure and a strong urge to act so as not to miss anything. In Le Bug humain (The Human Bug, 2019), Sébastien Bohler develops the thesis that our brains are programmed to constantly seek more resources (food, social status, information).

From this perspective, social media platforms exploit these neural circuits by systematically triggering reward-system responses, notably through signals of social validation (likes, notifications), leading to forms of behavioural addiction.

Other work in neuroscience points to a complementary, perhaps more decisive dimension: the activation of brain areas involved in processing social information and the fear of exclusion. Research led by Naomi Eisenberger and her colleagues since the 2000s has revealed that experiences of social exclusion activate brain regions that partially overlap with those involved in processing physical pain.

This suggests that social rejection constitutes a form of suffering that is biologically inscribed. These two mechanisms – reward-seeking and the avoidance of exclusion – are not mutually exclusive, but could operate synergistically. Ultimately, it is not so much the absence of a like that worries us as the feeling of being on the margins, of not belonging to the social group.

This neurobiological inscription of the fear of exclusion confirms, in another way, what Veblen, Simmel, Goffman and Bauman had analysed: the anxiety of belonging is a fundamental driver of our social behaviour, one that digital platforms now amplify systematically.

Taking back control of our attention?

Comparative social anxiety did not, then, wait for Instagram to exist. But there is a difference of scale to acknowledge: our brains, shaped for groups of a few dozen individuals, are not equipped to process the incessant stream of alternative lives scrolling across our screens.

Faced with this saturation, disconnecting is not an escape but a reconquest. Choosing not to look, not to know, not to be permanently connected is not missing something: it is gaining the capacity to be fully present in one’s own life. This realisation has given rise to a mirror concept of FOMO: JOMO, or the “Joy of Missing Out”, the pleasure rediscovered in the conscious choice to disconnect and in reclaiming one’s time and attention.

The Conversation

Emmanuel Carré does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than his research organisation.

ref. FOMO, or the fear of missing out: between the social brain and collective anxiety – https://theconversation.com/le-fomo-ou-peur-de-rater-quelque-chose-entre-cerveau-social-et-anxiete-collective-267362

How walking football is helping older adults stay fit, connected and competitive

Source: The Conversation – UK – By Ian Varley, Associate professor, Nottingham Trent University

Walking football is an adaptation of regular football, played primarily by middle-aged and older adults, with rule changes to enhance accessibility. A_Lesik/Shutterstock

For many older adults, staying active often means doing it alone. Walking, jogging or heading to the gym solo have long been the go-to activities for keeping fit. While these are great for physical health, they can lack that spark of competition and teamwork that makes sport so enjoyable. Unlike youth sports, where camaraderie, friendly rivalry and shared goals create excitement, older adults often miss out on that team spirit.

That may be changing. The rise of walking football is offering older adults a new way to stay active through competition, connection and fun.

In October 2025, the Walking Football World Nations Cup will take centre stage in Spain, showcasing the very best of this fast-growing sport. More than 70 teams from over 30 countries will compete across men’s (50s, 60s, 70s) and women’s (40s, 50s, 60s) categories, proving that age is no barrier to international competition.

The inaugural FA Walking Football Cup in 2024 and the expanding network of local clubs across the UK and Europe are helping to cement walking football’s place as a recognised and respected sporting format.

Walking football is a slower and low-impact version of traditional football, designed to make the game safer and more accessible for people of all ages and abilities. The rules are simple: no running, minimal physical contact and the ball must stay below head height. This encourages players to focus on skill, control, and enjoyment rather than speed or stamina. It is particularly appealing to older adults and those with health conditions who want to stay active in a structured, social and enjoyable way.

The benefits go well beyond physical fitness. Players often talk about how the game helps them stay active, build friendships and feel part of a community. Research has also shown that it supports healthy ageing by improving wellbeing, balance and social connection. However, some people have been hesitant to join, worried about the risk of injury, especially if they already have health conditions.

In response, new resources such as Uefa’s walking football toolkit and Age UK’s programme have encouraged further research into safety and participation. This growing body of evidence is helping to reassure players and highlight walking football as an accessible, enjoyable and health-promoting way to stay active in later life.

Injury risk

A 2025 study examined injuries during the 2024 FA Walking Football Cup, which featured 84 teams competing across women’s and mixed-gender categories. Across more than 850 hours of play, only 42 injuries were recorded, and most (81%) were minor, allowing players to continue without missing future games.

Injuries were evenly split between contact and non-contact causes, with the single most common cause being tackles (45%). And 12% of all injuries came from running, which is technically against the rules.

A community-based study by the same researchers covered more than 6,300 hours of play and found similar results. Only around one-third of injuries led to missed training or matches. Importantly, both studies also looked at players with existing health conditions and found that only 7%-10% of injuries were related to underlying issues such as joint pain, cardiovascular conditions, or old musculoskeletal problems. This suggests that walking football is unlikely to worsen existing health concerns and can be considered a safe and low-impact way for older adults to enjoy team-based exercise.

Walking football may be redefining what it means to stay active in later life. It gives older adults the chance to experience teamwork, friendly competition, and community through a slower, safer version of the world’s favourite game. With its low injury risk and inclusivity for people with pre-existing health conditions, it offers a welcoming route to staying active and connected.

The upcoming Walking Football World Nations Cup in Spain will celebrate exactly that, showing how people of all ages can continue to enjoy the game, stay healthy and find friendship through sport.

The Conversation

Ian Varley receives funding from UEFA and the FA for projects related to injury and illness surveillance.

Philip Hennis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How walking football is helping older adults stay fit, connected and competitive – https://theconversation.com/how-walking-football-is-helping-older-adults-stay-fit-connected-and-competitive-268137