Why women land top jobs in struggling organisations – they may just be better in a crisis

Source: The Conversation – UK – By Rita Goyal, Assistant Professor, Centre for Resilient Business and Society, Coventry University

Women are increasingly occupying top leadership roles across organisations, political parties and even nations. This may seem unequivocally like a good thing. Yet, many of these roles are undertaken in precarious circumstances, with inherent risks that might make them unattractive to men.

High-profile examples illustrate this pattern. Sarah Mullally, the incoming Archbishop of Canterbury and first female leader of the Church of England, steps into a landscape marred by scandal. Sanae Takaichi has become Japan’s first female prime minister – albeit the fourth PM in five years. She inherits a stagnant economy, record inflation and a declining population.

Carly Fiorina became CEO of Hewlett-Packard during the bursting of the tech bubble. And Mary Barra took over as CEO of General Motors shortly before a major car recall. In the UK, politicians like Conservative leader Kemi Badenoch have also assumed high-profile roles during periods of heightened risk.

Two decades ago, this phenomenon was labelled the “glass cliff”. It highlighted a pattern where women are more likely than men to be placed in leadership positions during times of crisis.

But the perspectives of women leaders and those navigating organisations in precarious situations are rarely examined. Our study conducted in-depth interviews with 33 women in senior leadership positions in 2023 and 2024. Our goal was to explore the motivations behind appointing women to high-risk leadership roles and the strategies the women use to navigate challenges once they’re in post.

The study revealed that women are often selected because of their distinctive leadership style and ability to manage crises. In their early careers, women may be invited to lead organisations in distress (so-called “basket cases”). Yet, by focusing on collaboration and consensus, and by ditching egotism, they can often turn around precarious situations.

One woman who chaired boards told us: “Women are often given basket cases because they will often be more supportive, better listeners and more nurturing. They’re better able to cope in that environment.”

Key to this is a combination of intuition, humility and an ability to manage colleagues and associates. We found that in organisations facing scandals, inefficiency or financial mismanagement, women leaders often focus on human aspects rather than just operational factors.

Study participants consistently emphasised that people skills (such as empathy, communication and the ability to unify people) are critical for managing risk-laden environments. They felt that women often excel in these areas. For instance, Mullally has cited her background as a cancer nurse as providing a strong foundation for managing the challenges that the Church of England is facing.

Why go there?

Our study also explored why women accept these precarious roles. Early in their careers, the opportunity to lead a major organisation can be compelling, offering a sense of purpose and fulfilment – even if the organisation is in crisis.

But with experience, women become more discerning about accepting leadership positions. The research highlights that precarious appointments carry heightened reputational risks, as women are held to stricter standards (in the media, for example) than men.

One participant told us: “When a man fails or makes an error … it’s the individual man who failed; ‘he’ had no ethics. When a woman does it, it’s like, ‘Ah well, women’.”

The study also underscores the importance of networks, mentoring and alliances. Women leaders recommend having trusted advisers and mentors who can provide guidance, support and insight as they face challenges. Some emphasised that operational challenges are a normal aspect of leadership.

But women should think carefully about accepting a leadership role where problems of integrity or governance, for example, are more entrenched. As one participant in our study noted: “Don’t let challenges deter you if you believe you can lead effectively. But when structural or ethical challenges exist … leaders must assess them carefully.”

Step away from the contract: sometimes the failings at an organisation are too serious for a new leader to turn the ship around.
Fox_Ana/Shutterstock

A mixed blessing

The conventional belief is that women are offered precarious roles because they are seen as expendable. But beyond this, our study identifies alternative reasons.

Speaking generally, women’s capacity to manage chaos, practise ego-less leadership, and encourage collective decision-making often makes them attractive candidates. Viewing it through this lens shifts the conversation from victimhood to capability. It suggests that women are not merely filling high-risk roles but are chosen for their leadership strengths.

The findings also have implications for strategy and talent management within organisations, which should recognise the specific competencies women can bring to complex, high-risk leadership scenarios.

Organisations can benefit from ensuring that women in challenging leadership roles receive appropriate support and resources, and that expectations are realistic.

At the same time, women leaders must balance ambition with caution. While challenging roles offer opportunities for development and recognition, taking a role that is not aligned with a woman’s values, or where her due diligence raises concerns, can carry high professional risks.

The study’s participants recommend strong negotiation and careful assessment of the potential outcomes before accepting senior positions. This reflects our finding that women carry out due diligence, weigh the pros and cons and negotiate before accepting precarious leadership roles. When leaders align their expertise and values with the needs of the organisation, they can transform crises into opportunities for growth.

Women in leadership are increasingly seen at the helm during organisational turbulence. While these roles come with greater risk, they also offer opportunities to demonstrate capability, strengthen reputations and improve the culture of an organisation.

Rather than a poisoned chalice, these opportunities can be reframed as a mixed blessing. Challenges, if navigated well, highlight and make use of women’s distinctive leadership styles. Women can lead organisations through uncertainty and at the same time redefine perceptions of leadership and expand opportunities for women in the future.

The Conversation

Rita Goyal received funding from the British Academy/Leverhulme Trust.

Nada Kakabadse received funding from the Institute of Company Secretaries and Administrators (ICSA).

ref. Why women land top jobs in struggling organisations – they may just be better in a crisis – https://theconversation.com/why-women-land-top-jobs-in-struggling-organisations-they-may-just-be-better-in-a-crisis-268592

How countries can be held responsible for staying within new legal climate target of 1.5°C

Source: The Conversation – UK – By Amy Cano Prentice, Senior Research Officer, ODI Global


Global emissions need to peak this year to stay within 1.5°C of global temperature rise since pre-industrial levels. This means that starting now, countries need to emit less greenhouse gas. Emissions also need to be cut in half by 2030 to prevent the worst effects of climate change.

For many nations, 1.5°C is a benchmark for survival. At that temperature, small island states in particular risk becoming uninhabitable due to rising sea levels, ecosystem loss, water insecurity, infrastructure damage and livelihood collapse.

To safeguard their futures, Vanuatu and 17 other countries spent six years campaigning to get the highest court of the UN system, the International Court of Justice, to give its opinion on whether countries have specific legal obligations when it comes to climate change. This year, the court agreed that they do, and the obligations are stringent, meaning that states are required to use all available means to prevent significant harm to the climate system.

Because the court’s advisory opinion is an articulation of existing law and legal obligations (rather than a binding legal decision in itself), it has to be given legal effect through national legislation, climate-related litigation, international treaties and conventions. In other words, it has to be kept alive.

My research identifies several avenues for keeping the advisory opinion alive and holding countries to account for failing to protect the climate system.

Cop30, the UN climate summit taking place in Brazil this November, is the first opportunity to hold countries accountable for collectively failing to stay within the 1.5°C limit with their 2025 national pledges.

In my recent paper, I outline which countries are upholding their climate change obligations and which are not, and what can be done about it.

Time is running out but climate diplomacy can be slow. Under the Paris agreement, the legally binding international treaty on climate change agreed in 2015, countries agreed to limit global warming to well below 2°C and to pursue efforts to limit the temperature increase to 1.5°C.

Since then, many countries have pushed at every annual UN climate summit for the 1.5°C goal to be the maximum temperature increase. After years of negotiation, the International Court of Justice clarified that 1.5°C is unequivocally the legal target of the Paris agreement. This hinges on the fact that the Paris agreement uses a science-based approach, so decisions are made according to the best available science of the day. Currently, that science indicates that warming of 2°C would be catastrophic.

The Peace Palace, home to the International Court of Justice of the United Nations, in the Hague, the Netherlands.
olrat/Shutterstock

Nationally determined contributions (NDCs) are plans created by each country outlining how they will reduce their emissions (in order to collectively meet the Paris agreement’s temperature goal) and adapt to climate change. The court ruling made it clear that countries not only are obliged to submit NDCs, but these NDCs also need to represent a country’s highest possible ambition.

The court also clarified that all NDCs need to, by law, add up to enough emissions reductions globally to meet the 1.5°C limit. This can be used to lobby for more ambitious pledges among countries that claim to support the interests of the most vulnerable states.

What are nationally determined contributions? An expert explains.

Every country must update its NDC every five years. Each one needs to be more ambitious than the last. The previous round of NDCs was insufficient: even if fully implemented, they would only limit global warming to a 2.6°C increase. This year, even after the submission deadline was extended, only about 30% of countries submitted a new NDC – together covering less than one-third of global emissions.

I found that out of ten countries that are friends of small island states, only one – the UK – submitted a new NDC that is in line with 1.5°C. Four of these countries – Australia, Canada, Japan and New Zealand – submitted new NDCs which are not on track to meet the temperature goal. Three did not submit a new NDC at all – China, India and the EU – despite having made high-level political statements.




Read more:
Only 15 countries have met the latest Paris agreement deadline. Is any nation serious about tackling climate change?


Seven of these friends of small island states (and the EU) are required to provide climate finance to developing countries under the Paris agreement. All of these spend more public money on the fossil fuel industry than on climate mitigation and adaptation finance internationally.

According to the international court, fossil fuel subsidies may constitute an internationally wrongful act, in breach of the obligation to protect the climate system from significant harm. In 2022, the UK spent almost 14 times more on fossil fuel subsidies than on international climate finance.

Australia spent over six times as much. France and New Zealand spent over twice as much. Japan spent almost twice as much. Removing fossil fuel subsidies would free up much needed fiscal resources to target those most in need, especially given the urgency of the situation.

Other legal avenues

Beyond Cop30, other legal avenues exist. The first strategic decision is whether to bring a case before domestic or international courts. For example, in Canada, two houses of the Wet’suwet’en First Nation took the government to court for failing to meet its international commitments to reduce emissions, citing the International Court of Justice.

Internationally, a highly polluting country can be brought before international legal courts by another country. In 2019, the Gambia sued Myanmar for genocide due to the universal legal nature of the obligation to prevent genocide. Similarly, one country can sue another on climate-related legal grounds.

As the window to stay within 1.5°C closes, Cop30 and the courts must become twin areas of action, where creativity, strategy and the law converge to make climate justice enforceable, not aspirational.

Concrete diplomatic gains in Belém could include a suite of ambitious NDCs, operational guidance to launch the fund for responding to loss and damage, plus bold climate finance commitments, but the work cannot end in the negotiation halls. It must continue beyond Cop30 to turn pledges into action.


The Conversation

Amy Cano Prentice does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How countries can be held responsible for staying within new legal climate target of 1.5°C – https://theconversation.com/how-countries-can-be-held-responsible-for-staying-within-new-legal-climate-target-of-1-5-c-268160

Why hurricanes rarely kill in Cuba

Source: The Conversation – UK – By Gustav Cederlöf, Associate Professor of Environmental Social Science, University of Gothenburg

Hours before Hurricane Melissa roared towards Cuba’s second-largest city, Santiago de Cuba, the island’s president, Miguel Díaz-Canel, announced that 735,000 people had been evacuated – one in every 15 Cubans. The storm had already smashed into Jamaica, the most powerful to ever strike the island, causing landslides, power failures and deaths.

By the time Melissa hit Cuba, it had been downgraded from a category 5 to a still incredibly dangerous category 3 hurricane. The sea was surging up to 3.5m, torrents of rain were half a metre deep, and winds were screaming at 200km per hour.

Hurricane Melissa shows what academics studying disasters have long emphasised: disasters are shaped as much by social vulnerability and governance as they are by violent winds.

Of the 75 deaths attributed to Hurricane Melissa, 43 occurred in Haiti and 32 in Jamaica, where the storm was strongest. Cuba has reported no fatalities – a result that reflects a long history of preparation.

Jamaica was devastated by Hurricane Melissa.

Cuba has long stood out in regional comparisons for its ability to prevent deaths from hurricanes, often through mass evacuations. This has endured even through decades of US sanctions, and now an economic crisis featuring a prolonged recession, massive inflation and food shortages. Daily blackouts are making it more difficult for households and hospitals to prepare and recover from disaster.

Cuba’s focus on hurricane preparedness dates back to Hurricane Flora. Flora devastated the east of the island in 1963 – the same region now struck by Melissa. On the eve of its landfall, the government had introduced a sweeping land reform to nationalise all but the smallest farms. Party militants and soldiers had been dispatched across the island.

When Flora hit, people found these representatives of the revolution enduring the hurricane alongside them. Fidel Castro flew east to lead the rescue operations. Historian Mikael Wolfe argues that Flora transformed the rebel army from “a controversial force of expropriation” into “a nearly universally admired source of rescue”.

Disaster risk reduction has continued to be a priority for Cuban leaders. Each year, the local branches of civil society groups Committees for the Defence of the Revolution and the Federation of Cuban Women conduct vulnerability mapping, culminating in the nationwide drill Meteoro. These practices anticipate disaster in everyday life and guide mass evacuations when hurricanes strike.

And yet, mandatory evacuations remain controversial. Some argue they are a sign of collective welfare; critics say they are an infringement of individual rights. Either way, they demonstrate that disaster preparedness is as much about governance as it is about weather.

A revolutionary virtue

Preparedness is also rooted in culture. In the decades after Flora, literature, film and political speeches cast Cubans as protagonists in a national drama of struggle against nature. Just as they had repelled the US-backed invasion at the Bay of Pigs in 1961, citizens were called on to play their part and mobilise against hurricanes.

The government response to Hurricane Flora was portrayed in an iconic newsreel, Ciclón (1963) by Santiago Álvarez.

Cuban cultural life is full of images of former leader Fidel Castro wading through floodwaters. In these, he personifies an ethos framing disaster response as a revolutionary virtue: to be a revolutionary is to stand up to the storm. Or as Venezuelan statesman Simón Bolívar declared after the Caracas earthquake of 1812: “Well, if nature is against us, we will also fight against nature.”

This legacy still resonates. Appearing in olive green fatigues, favoured by Fidel Castro for decades, the current president addressed Cubans via Facebook as Melissa approached:

Dear compatriots of eastern Cuba, where #Fidel defied the dangerous hurricane #Flora and taught us forever what conduct to follow to protect life, which is the most important thing. I ask you to stay alert, be supportive, and never forget discipline in the face of threat. Venceremos (We will prevail).

Trust and mobilisation

Cuba’s historical success in saving lives is rooted in its ability to evacuate its population – and in its citizens’ willingness to participate in the system. Jamaica also has a well-established disaster governance system, where responsibility is spread across parish councils and community groups. However, participation in formal government-led processes has historically been much lower. Our research suggests this often stems from low trust in authorities and a lack of resources to support decentralisation.

We can see some of this in the response to Melissa. While the Jamaican government had ordered evacuations, many households stayed put, with a peak of around 25,000 people seeking refuge in emergency shelters. Conspiracy theories circulated saying Melissa was “manufactured” by humans, while Jamaican scientists called on the public to trust official information and ignore social media rumours. The Cuban and Jamaican cases jointly show the role of political culture in shaping how countries prepare for disasters and respond to them.

The challenge ahead

Melissa is a warning shot. Its sheer force was alarming, but so was how rapidly it became so strong. More intense storms with more precipitation are coming, and rising seas amplify the risks.

Caribbean nations need resources to rebuild and to protect themselves from future hurricanes. But disaster preparedness must also be about questions of politics and culture that mobilise action. In the decades ahead, culture and trust in authorities may prove as vital as levees and shelters in preparing for extreme weather.


The Conversation

For research in Cuba, Gustav Cederlöf has previously received funding from the ESRC, The Wenner-Gren Foundation, the Royal Geographical Society (with IBG), King’s College London Graduate School, and the Swedish Society for Anthropology and Geography.

Sophie Blackburn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why hurricanes rarely kill in Cuba – https://theconversation.com/why-hurricanes-rarely-kill-in-cuba-268840

Talk of new atomic tests by Trump and Putin should make UK rethink its role as a nuclear silo for the US

Source: The Conversation – UK – By Tom Vaughan, Lecturer in International Security, University of Leeds

The Russian president, Vladimir Putin, has said that Russia could carry out nuclear weapons tests for the first time since the cold war.

In what appears to be a response to a statement by Donald Trump on October 30 that he had ordered the US to restart nuclear tests “on an equal basis” with Russia and China, Putin said he’d been advised by his defence staff that it was “advisable to prepare for full-scale nuclear tests”.

At present there is no evidence that either Russia or China is conducting nuclear tests, which were discontinued by most nuclear states after the test ban treaties of the early 1990s.




Read more:
Nukes in space: a bad idea in the 1960s – an even worse one now


Trump may have been reacting to the news of two Russian weapons tests in late October. On October 21, Putin announced that Russia had tested the Burevestnik – the first of a new generation of nuclear-powered cruise missiles. Days later he revealed that Russia had also tested Poseidon, a nuclear-powered, nuclear-capable underwater drone which operates like a torpedo.

The US Department of Energy has rowed back on the president’s statement, assuring the world that Washington has no plans for nuclear test detonations. It appears that Trump’s order may have stemmed from confusion between Russia’s recent tests of nuclear-capable delivery vehicles such as Burevestnik and Poseidon, and the testing of actual nuclear warheads.

Nonetheless, the two leaders’ nuclear bluster is a sobering reminder of the dangers posed by nuclear brinkmanship between the US and Russia.

It is worth remembering that at the height of the cold war, the superpowers prepared to settle their confrontation in the territories of central Europe with little regard for the millions they would kill. US strategists hoped that a “tactical” nuclear conflict might contain the war to Europe, sparing the continental United States.

Independent deterrent?

This is the context for the UK public accounts committee releasing a report last week which detailed further “delays, cost inflation, and deep-rooted management failures” in the RAF’s procurement of F-35 stealth fighter aircraft.

The F-35 is increasingly coming to be viewed in some US defence circles as an expensive failure. This year, however, the UK’s Labour government committed to buying 15 additional F-35B aircraft (having already ordered 48), as well as 12 of the F-35A variant.

The F-35A is configured to carry the B61 nuclear gravity bomb. Although the British government trumpeted the return of “a nuclear role for the Royal Air Force” in the 2025 strategic defence review, the B61 is a US weapon which will be under US command and carried by a US-made platform. The B61 is a “tactical” but still immensely destructive nuclear weapon – which, as during the cold war, is intended for use on European battlefields in the hope of containing any conflict far from the US.

Additionally, the UK’s “independent nuclear deterrent” consists of British “Holbrook” warheads mounted on US Trident II missiles. While sole launch authority rests formally with the UK prime minister, the system is entirely reliant on US support and maintenance of the missiles for its continued operation. In the event of Scottish independence, Britain’s nuclear submarines might have to relocate to the continental US, because there are few suitable UK alternatives to the Faslane base, an hour north of Glasgow.

Elsewhere, in summer 2025, observers reported that US B61 bombs had returned to RAF Lakenheath in Suffolk, to be carried by US Air Force jets. They had been removed in 2008 amid easing tensions between Nato and Russia, but have returned amid more aggressive nuclear posturing by both Washington and Moscow.

The Nukewatch group said: “The new nuclear bombs … are entirely under the control of Donald Trump and could be used without the UK having any say at all in the matter. In fact, we wonder whether the UK government has even been notified by the USAF that the weapons are now stationed at Lakenheath.” The UK government remained silent on the matter.

This integration of UK and US nuclear forces has not been publicly deliberated. Jeremy Corbyn, the last political leader who tried to offer the electorate a meaningful choice on the matter, was forced to backtrack.

Incompatible with democracy

This is a clear demonstration that nuclear weapons and deterrence policies have always been incompatible with democracy. They require huge secrecy, and the speed involved means that launch decisions are out of the public’s hands. Instead, any decisions to use these incredibly destructive weapons – with all that this implies for the planet – are concentrated in the hands of individual leaders.

The logic of nuclear deterrence breaks down, however, once we remember that the UK’s control over its own nuclear weapons – not to mention the US weapons hosted on its soil – is very limited. The US could at any moment withdraw its assistance for the Trident programme, making questions of British willingness to fight a nuclear war irrelevant.

The F-35A purchase redoubles the UK’s commitment to serving as Donald Trump’s nuclear aircraft carrier. It makes the country a target in any nuclear war that might be started by two unpredictable and violent superpowers. Other US allies get the same treatment: Australian analysts lament that the Aukus submarine deal with the UK and US yokes the country’s future “to whoever is in the White House”.

Fortunately, the flipside of this reliance on the US is that it might be relatively easy for the UK to shut down its own nuclear programme. Aside from its role in the Nato nuclear mission, Trident has little strategic value when it comes to deterring the threats actually faced by the UK.

With so much of its nuclear weapons activity farmed out to the US, there may not be many domestic vested interests to oppose a change in UK policy if Washington does turn off the nuclear taps.

If the UK foreign secretary, Yvette Cooper, is serious about continuing Labour’s commitment to “progressive realism”, she should chart an independent path. Alternative, non-nuclear defence policies for the pursuit of internationally responsible “common security” could be implemented by a British government with the confidence to govern from London, not DC.

The Conversation

Tom Vaughan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Talk of new atomic tests by Trump and Putin should make UK rethink its role as a nuclear silo for the US – https://theconversation.com/talk-of-new-atomic-tests-by-trump-and-putin-should-make-uk-rethink-its-role-as-a-nuclear-silo-for-the-us-269040

Nigeria’s violent conflicts are about more than just religion – despite what Trump says

Source: The Conversation – UK – By Ezenwa E. Olumba, Leverhulme Early Career Research Fellow, Aston University

Nigerian police standing guard in Osun, south-western Nigeria. Tolu Owoeye / Shutterstock

The US president, Donald Trump, is threatening military action in Nigeria over what he sees as the persecution of Christians there. He has accused the Nigerian government of not doing enough to prevent radical Islamists from committing “mass slaughter” against Christians in the west African nation.

In a video posted on social media on November 5, Trump said: “Christianity is facing an existential threat in Nigeria. Thousands and thousands of Christians are being killed. Radical Islamists are responsible.” He warned that, if US forces were to attack, it would be “fast, vicious, and sweet, just like the terrorist thugs attack our CHERISHED Christians!”

Riley Moore, a Republican US congressman who has been asked by Trump to lead an investigation into violence against Christians in Nigeria, has previously called the country “the most dangerous place in the world to be a Christian”. He claims that more than 7,000 Christians have been killed in Nigeria in 2025 alone – an average of 35 per day. Hundreds more have been kidnapped, tortured or displaced, he says.

But, regardless of whether or not these figures are correct (assessing their accuracy is difficult), the US government’s framing of the violence in Nigeria as Islamists killing Christians oversimplifies a complex reality. Violence in Nigeria is driven by more than religion alone, with land disputes, politics, ethnicity, historical grievances and inequality all playing a part.

Violence in Nigeria has varying motives. In the north-western and north-eastern regions of the country, attacks are largely carried out by jihadist groups such as Boko Haram, Lakurawa and Islamic State West Africa Province. These groups seek to establish an Islamic caliphate in the region, and attack whoever opposes their ideology – Christian or otherwise.

Boko Haram has historically been the dominant Islamist militant group in Nigeria. The Brookings Institution estimates that Boko Haram has killed tens of thousands of people in Nigeria since 2009, and has displaced more than 2 million others. The regions of Nigeria in which jihadist groups operate are predominantly Muslim.

The motives for violence in other areas of Nigeria, including the fertile agricultural Middle Belt region, are different. The Middle Belt, which is mostly Christian, is badly affected by violent conflict between sedentary farmers and nomadic herders. The herders, who tend to be Fulani Muslims, move their livestock from one region to another.

I have studied this particular type of conflict in Nigeria since 2018. Violent conflicts between these groups of people are driven by poor governance, inequality, historical grievances and environmental injustice – and it would be inaccurate to suggest they are entirely motivated by religion.

They are best understood as eco-violence. The two groups clash over access to and control of water points and land, which leads to mass killings and the destruction of settlements. The Trump administration’s grouping of Nigeria’s violence together under the label of Islamist extremism is thus misleading.

Nigeria’s Middle Belt region:

A map showing Nigeria’s Middle Belt, a region stretching across the centre of the country.
Kambai Akau / Wikimedia Commons, CC BY-NC-SA

The Middle Belt has become one of Nigeria’s most violent regions. Armed herder groups have been accused of staging attacks on farming communities that have resulted in mass murders, the burning of homes and barns and the displacement of millions of people.

In some areas, attackers have taken over and resettled in captured communities. Farmers have destroyed herds of livestock worth thousands of US dollars in retaliation, and have killed herders.

According to Amnesty International, over 10,000 people have been killed in attacks across the Middle Belt since 2023, with Benue and Plateau states accounting for the vast majority of these deaths.

In June 2025, armed attackers stormed the farming village of Yelwata in Benue state, killing around 200 people in the space of a few hours. Then, in early November, 17 people were killed by armed men believed to be Fulani herders in the farming settlement of Kwi and the town of Damakasuwa. Both of these places are located near the border between Plateau and Kaduna states.

A few days later, after Trump had threatened military action in Nigeria, suspected armed herders killed at least seven people in an attack on Anwuel village in Benue state. Herders have also been accused of carrying out attacks in other regions of Nigeria, including the south-west and south-east.

Nigerian security operatives during a military operation ahead of an election in Benin City, southern Nigeria, in 2020.
Oluwafemi Dawodu / Shutterstock

Amnesty International has blamed the escalating violence in Nigeria on the “shocking failure” of the country’s authorities to protect lives and property from attacks by armed groups and bandits. Rural communities say that Nigerian security agencies often fail to protect them, even when they are warned of impending or ongoing attacks.

However, Nigerians do not need a foreign saviour. What the country needs is a new constitution. The current constitution was created by a military administration, not through a democratic process. Many people have argued that it lacks legitimacy and centralises excessive power at the federal level.

This concentration of power has fuelled corruption, nepotism and generally poor governance, resulting in the rampant insecurity seen in Nigeria today. Nigeria needs an inclusive, democratically drafted constitution that reflects the will of its people.

It is troubling that it has taken a threat by the US president to again remind Nigerian leaders of their duties to protect civilians. But hopefully this results in better security and support for victims of violence in rural communities across Nigeria.

The Conversation

Ezenwa E. Olumba does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Nigeria’s violent conflicts are about more than just religion – despite what Trump says – https://theconversation.com/nigerias-violent-conflicts-are-about-more-than-just-religion-despite-what-trump-says-268922

How organised crime took over areas of Rio de Janeiro – and why violent police raids won’t fix the problem

Source: The Conversation – UK – By Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation

At dawn on October 28, residents of Rio de Janeiro woke to the sound of gunfire. Battles continued throughout the day in the favelas of Alemão and Penha, as police mounted a huge operation targeting the Commando Vermelho, or the Red Command, one of Brazil’s largest organised criminal gangs.

In the days that followed, as graphic images showed lines of bodies on the streets, it emerged that at least 115 civilians and four police officers had been killed, making it the most violent police operation in Brazilian history.

A poll carried out two days after the raid indicated that 62% of Rio residents supported the raid – rising to 88% in the favelas. But there were also protests against alleged extrajudicial killings and condemnation by the UN and other human rights organisations.

The violent operation overshadowed the start of the Cop30 climate summit in Belém on the edge of the Amazon. At a press conference upon his arrival in Belém, Brazil’s President Luiz Inácio Lula da Silva, who was not aware of the operation beforehand, condemned the raid as “disastrous” and a “mass killing”.

In this episode of The Conversation Weekly podcast, we speak to Robert Muggah, co-founder of the Igarapé Institute and a research collaborator at the Brazil LAB at Princeton University, about how organised crime became so deeply embedded in Brazil – and whether there’s a better way to confront it.

The origins of the Red Command lie in Brazilian prisons during the years of Brazil’s military dictatorship in the 1970s. “The authorities at the time often would crowd common criminals together with left-wing political prisoners in the same jails,” explains Muggah. “You had this almost metastasis happening between these different inmates and … an alliance emerged from these two groups called the falange vermelha, which means the red phalanx.”

Incubated in the prison system, the gang moved out into the streets, shedding its left-wing ties as the dictatorship ended. “By the 1980s, you have a fairly well-organised group which is diversifying its income streams from what was typically bank robberies or targeted raids, to the cocaine economy,” Muggah says.

Today, the Red Command has expanded out of Rio and is present across Brazil and in neighbouring countries. “What you’ve seen over the past decade in particular is the penetration of organized crime, not into just new geographic areas, but entirely new sectors of the economy,” says Muggah.

Listen to the interview with Robert Muggah on The Conversation Weekly podcast, and read an article he wrote in Portuguese on the October 28 operation against the Red Command.

This episode of The Conversation Weekly was written and produced by Katie Flood, Mend Mariwany and Gemma Ware. Mixing by Eleanor Brezzi and theme music by Neeta Sarl.

Newsclips from AlJazeera English, Guardian News, DRM News, Itatiaia Patrulha, AFP Portuguese, Cross World News and NewsX World.

Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here. A transcript of this episode is available on Apple Podcasts or Spotify.

The Conversation

Robert Muggah is the co-founder of the Igarapé Institute, a think and do tank in Brazil, and a principal and co-founder of SecDev, a geopolitical and digital advisory group.

ref. How organised crime took over areas of Rio de Janeiro – and why violent police raids won’t fix the problem – https://theconversation.com/how-organised-crime-took-over-areas-of-rio-de-janeiro-and-why-violent-police-raids-wont-fix-the-problem-269117

Why Jim Henson should be recognised as one of the foremost creators of fairytales on screen

Source: The Conversation – UK – By Andrea Wright, Senior Lecturer in Teaching and Learning Development, Edge Hill University

In March 1955, an 18-year-old Jim Henson built a puppet from his mother’s old coat, a pair of blue jeans and some ping pong balls. The lizard-like creation first appeared on Afternoon, a television series on Washington D.C.’s WRC-TV, but became a regular on the five-minute Sam and Friends puppet sketch comedy show from May 1955. Over 70 years, the creature evolved into Kermit. The bright green frog is now a cultural icon.

To mark 70 years of The Jim Henson Company, the company has curated an auction of official memorabilia, including puppets, props, costumes and artwork. In a specially-recorded promotional video, Brian Henson, Jim’s son, provides a useful reminder that his father’s legacy is far greater than The Muppets.

Indeed, Henson made a significant contribution to the screen fairytale, a genre all too often dominated by Disney. To encourage fans and viewers to think beyond The Muppet Show and Disney, I offer a reappraisal of his career in my book The Fairy Tales of Jim Henson: Keeping the Best Place by the Fire.

By far the biggest section of the auction is made up of items created for the productions and publicity of The Dark Crystal (1982) and the Netflix revival series The Dark Crystal: Age of Resistance (2019). The original fantasy evolved from an idea Henson had to create a story around an anthropomorphised reptilian race, which eventually became the formidable Skeksis.

The trailer for The Dark Crystal.

His collaboration with the British artist Brian Froud led to the evolution of the intricate world of The Dark Crystal. The film follows Jen (voiced by Stephen Garlick), a delicate, fey-like creature from the nearly-extinct Gelfling race. Jen embarks on a quest to save the planet Thra by healing the Dark Crystal. He must complete his mission before the “great conjunction”, an event that would give the evil Skeksis power over the fragile world forever.

This ambitious endeavour was not the first time that Henson had used a fairytale-inspired story or aesthetic. As early as 1958, following a trip to Europe, he began to develop a version of Hansel and Gretel. Although it remained unfinished, fairytales became an established strand in Henson’s work.

This included two unaired pilots called The Tales of the Tinkerdee (1962) and The Land of Tinkerdee (1964), as well as the three television specials that make up Tales from Muppetland (1969-72). The latter are playful, gentle parodies and a Muppetisation of the well-known stories Cinderella, The Frog Prince and The Bremen Town Musicians.




Read more:
The Muppet Christmas Carol turns 30: how the film became a cult classic


Fairytales even inspired two of Henson’s mid-1960s commercials for The Compax Corporation’s Pak-Nit and Pak-Nit RX – preshrunk fabrics used to make leisurewear. The ads were titled Shrinkel and Stretchel and Rumple Wrinkle Shrinkel Stretchelstiltzkin. Fairytale themes also appeared from time to time in segments of Sesame Street (1969-present) and The Muppet Show (1976–81).

Henson’s film Labyrinth (1986) is a beguiling blend of well-known coming of age fairy stories, most overtly Alice’s Adventures in Wonderland (1865) and The Wonderful Wizard of Oz (1900). These references are combined with original and innovative puppetry and design, and, of course, David Bowie as the charismatic Goblin King.

The trailer for Labyrinth.

One of Henson’s final projects was the imaginative and technically inventive television series Jim Henson’s The Storyteller (1987-89). Inspired by her folklore studies at Harvard University, Lisa Henson encouraged her father to develop a show based on the rich European folk tale tradition – importantly, one that avoided the best-known tales in favour of the more unusual and challenging.

Fairytales are an important – and often overlooked – part of Henson’s legacy, from the final productions made during his lifetime to The Jim Henson Company’s later output (for example, Jim Henson’s Jack and the Beanstalk: The Real Story in 2001 and The Dark Crystal: Age of Resistance). Fans are also consistently teased with rumours of a Labyrinth sequel or reboot. Most recently, Robert Eggers is reported to be directing.

Henson should be considered one of the foremost creators of screen fairytales of the 20th century. As his fans celebrate the 70th anniversary of his creations, it’s time for the world to rediscover his magical body of work, beyond the much-beloved Muppets.


This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

The Conversation

Andrea Wright does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why Jim Henson should be recognised as one of the foremost creators of fairytales on screen – https://theconversation.com/why-jim-henson-should-be-recognised-as-one-of-the-foremost-creators-of-fairytales-on-screen-268927

Frankenstein: could an assembled body ever breathe, bleed or think? Anatomists explain

Source: The Conversation – UK – By Michelle Spear, Professor of Anatomy, University of Bristol

Frankenstein’s creature is coming back to life – again. As Guillermo del Toro’s new adaptation of Mary Shelley’s gothic masterpiece airs on Netflix, we provide an anatomist’s perspective on her tale of reanimation. Could an assembled body ever breathe, bleed or think?

When Shelley wrote Frankenstein in 1818, anatomy was a science on the edge of revelation and respectability. Public dissection theatres drew crowds, body snatchers supplied medical schools with illicit corpses and electricity promised new insights into the spark of life.

Shelley’s novel captured this moment perfectly. Victor Frankenstein’s creation was inspired by real debates: Luigi Galvani’s experiments on frog legs twitching under electric charge, and Giovanni Aldini’s demonstrations making executed criminals grimace with applied current. To early 19th-century audiences, life might indeed have seemed a matter of anatomy plus electricity.

The first problem for any modern Frankenstein is practical: how to build a body. In Shelley’s novel, Victor “collected bones from charnel houses” and “disturbed, with profane fingers, the tremendous secrets of the human frame”, selecting fragments of cadavers “with care” for their proportion and strength.

From an anatomical perspective, this is where the experiment fails before it begins. Once removed from the body, tissues rapidly deteriorate: muscle fibres lose tone, vessels collapse and cells deprived of oxygen enter necrosis within minutes. Even refrigeration cannot preserve viability for transplantation beyond a few hours.

To reattach limbs or organs would demand surgical anastomosis – precise reconnection of arteries, veins and nerves using microsutures finer than a human hair. The notion that one could sew together entire bodies with “instruments of life” and restore circulation across so many junctions defies both physiology and surgical practice.

Shelley’s description of construction is vague; we estimate that the limbs alone would require over 200 surgical connections. Each piece of tissue would have to be matched to avoid immune rejection, and everything would need to be kept sterile and supplied with blood to stop the tissue from dying.

The electrical illusion

Let’s assume the parts settle into place. Could electricity reanimate the body? Galvani’s twitching frogs misled many into believing so. Electricity stimulates nerve membranes, triggering existing cells to fire – a fleeting simulation of life, not its restoration.

Defibrillators work on this principle: a well-timed shock can reset a fibrillating heart because the organ is already alive, its tissues still capable of conducting signals. Once cells die, their membranes break down and the body’s internal chemistry collapses. No current, however strong, can restore that balance.

The thinking problem

Even if a monster could be made to move, could it think? The brain is our most hungry organ, demanding constant oxygen-rich blood and glucose for energy. A living brain’s vital functions only work under tightly-controlled body temperature and depend on the circulation of fluids – not just blood but cerebrospinal fluid (CSF), too, pumped under appropriate pressure, delivering oxygen and carrying away wastes.

Brain tissue can stay alive for only six to eight hours once it is removed from the body. To keep it going for that long, it has to be cooled on ice or placed in a special oxygen-rich solution. During this time, the brain cells can still work for a while – they can send signals and release chemicals.

Cooling the brain is already used in medicine, for example, after a stroke or in premature babies, to protect the brain and reduce damage. So, in theory, cooling a donor brain before a transplant could help it survive longer.

If we can transplant faces, hearts and kidneys, why not brains? In theory, a rapidly transplanted brain could have its vessels connected to a new body. But the severed spinal cord would leave the body paralysed, without sensation, requiring artificial ventilation.

With circulation restored, pulsing CSF flow and an intact brainstem, arousal and wakefulness might be possible. But without sensory input, could such a being have complete consciousness? The brain is the organ of every memory, thought and action we make, so a recipient of a donor brain would be left confused, programmed with another mind’s personality and legacy of memories. Could new memories form? Yes, but only those born from a body severely limited by the absence of movement or sensation.

Controversial surgeon Sergio Canavero has argued human head transplants may enable “extreme rejuvenation”. But beyond the ethical alarms, this would require reconnecting all peripheral nerves, not just joining the spinal cord – a feat far beyond current capability.

Life support, not resurrection

Modern medicine can replace, repair or sustain many parts once considered vital. We can transplant organs, circulate blood through machines and ventilate lungs indefinitely. But these are acts of maintenance, not creation.

In intensive care units, the boundaries between life and death are defined not by the beating heart, but by brain activity. Once that ceases irreversibly, even the most elaborate support systems can only preserve the appearance of life.

Shelley subtitled her novel The Modern Prometheus for a reason. It is not just a story about science’s ambition, but about its responsibility. Frankenstein’s failure lies not in his anatomical ignorance but in his moral blindness: he creates life without understanding what makes it human.

Two centuries later, we still wrestle with similar questions. Advances in regenerative medicine, neural organoids and synthetic biology push at the boundaries of what life means, but they also remind us that vitality cannot be reduced to mechanism alone. Anatomy shows us how the body works; it cannot tell us why life matters.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Frankenstein: could an assembled body ever breathe, bleed or think? Anatomists explain – https://theconversation.com/frankenstein-could-an-assembled-body-ever-breathe-bleed-or-think-anatomists-explain-269112

Making RE part of the national curriculum will promote tolerance – but only if it’s taught in the right way

Source: The Conversation – UK – By Daniel Moulin, Associate Professor in Philosophy and World Religions, University of Cambridge


An independent review of the national curriculum in England, commissioned by the government, has published its final report. One of the key recommendations is to work towards the addition of religious education (RE) to the curriculum. This would mean RE would have the same status as other humanities subjects for the first time.

The review recommends the creation of a “task and finish group” to devise a religious education curriculum. This would then potentially become part of the national curriculum.

In England, religious education is currently a “basic curriculum subject”. This means it is technically mandatory but not part of the national curriculum. This status has long been considered a source of problems. With no centrally determined curriculum, the quality of RE teaching is patchy. Many schools do not comply with the law in how they offer it.

But overall, the current “multi-faith” approach to RE teaching, enshrined in law in the 1988 Education Act, allows pupils to confront the big questions of life. They can develop an understanding of the diverse beliefs and practices of many different communities represented in Britain.

I am an academic expert who leads the training of teachers in how to deliver religious education. I believe any national curriculum content should embrace as fully as possible the principle of teaching religious education pluralistically. This means not adopting any one person’s, or any one particular, understanding of religion as the only approach to learning, or the only basis for determining the curriculum.

Freedom of belief is one of the foundational principles of democracy. It is precisely because, and for, this principle that pluralistic religious education is essential.

Religious education in England

When state-administered education systems were first universalised in the 19th century, most western nations supported the religious education offered by the prevailing church of a given jurisdiction. There were some exceptions: France and the US, for instance, instigated secular systems with no official religion. To no small degree, though, even these systems have arguably been culturally Christian.

The result is a map of religious education that strikingly resembles a map of the Christian reformation. For example, in Germany, students choose between secular ethics, Catholic, or Protestant instruction, or recently in some states, Islamic education.

The teacher is of that designated faith, trained and authorised by that religious authority. In predominantly Catholic countries, such as Poland, the Catholic church has a major stake in determining religious education in the state system.

In England, religious education has evolved differently. The state funds schools of a designated religion which can teach religious education to their own creeds. But most state-funded schools must teach about all the major religions represented in Great Britain.

There is very little data available on the impact of this form of religious education on individuals and society. But it is symbolic at least of a liberal, cosmopolitan and inclusive society that promotes tolerance.

A multi-faith approach to RE lets students confront big questions.
Rawpixel.com/Shutterstock

The educational and social issues arising from teaching religious education in a religiously diverse and secular context have engaged English religious educators for the past 50 years. In response, they have advanced a fascinating array of ideas and methods of teaching about religions. These have drawn inspiration from postmodern philosophy, anthropology and sociology.

The latest iteration of these approaches is the “religion and worldviews” approach, advanced by many religious educators. It is based on the assumption that regardless of whether somebody practises or identifies with a religion, they still live life based on a personal construction of the world. The idea is that this can be studied just like any formalised religious or philosophical system.

On the face of it, the study of worldviews suggests a way to teach religious education pluralistically. It assumes everyone has their own worldview, which is potentially different from another’s. However, many people believe there is just one reality and the foundations of morality are more or less obvious.

This is particularly true of most religious believers. They may not see their religion as a human construct, but rather a source of God’s revelation to humankind. It is also true of many non-religious people who believe in science as the objective foundation of human knowledge.

Teaching religious education pluralistically is more radical and exciting than setting one or other parameters on each other’s beliefs in order to approach them educationally. It has to be open to completely opposing accounts of reality and the possibility of our knowledge of it. This allows for something of much value in education – the development of minds that can hold and weigh up contradictory accounts at once.

However, it can only be achieved by a curriculum that assumes no overarching narrative itself. Instead, it must fairly represent and interrogate the deep differences that actually characterise religious diversity in the real world.

The Conversation

Daniel Moulin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Making RE part of the national curriculum will promote tolerance – but only if it’s taught in the right way – https://theconversation.com/making-re-part-of-the-national-curriculum-will-promote-tolerance-but-only-if-its-taught-in-the-right-way-266964

Could pain medication be causing your headaches?

Source: The Conversation – UK – By Dan Baumgardt, Senior Lecturer, School of Psychology and Neuroscience, University of Bristol


It seems contradictory: the pills you’re taking for headaches might actually be perpetuating them. Medication-overuse headache is a well-documented medical phenomenon, but the good news is it’s often reversible once identified.

Over 10 million people in the UK regularly get headaches, making up about one in every 25 visits to a GP. Most headaches are harmless and not a sign of a serious problem. Although many people worry they might have a brain tumour, less than 1% of those with headaches actually do.




Read more:
What Davina McCall’s colloid cyst removal can tell us about brain tumours


Because there are so many possible causes of headaches, GPs must play detective. A detailed medical history and examination are essential, sometimes followed by specialist referral.

The challenge lies in determining whether a headache signals a serious underlying cause, or is benign. Even benign headaches, however, can greatly affect a person’s daily life and still need proper care.

Treatment depends on the type of headache. For example, migraines may be treated with anti-sickness medicine or beta blockers, while headaches related to anxiety or depression might improve with mental health support. Lifestyle changes, such as dietary changes and exercise, can also help manage many types of long-term headache.

However, doctors often see another type of persistent headache that has a clear pattern. Patients report getting repeated headaches that started or got worse after taking painkillers regularly for three months or longer.

This can happen in people with migraines, tension headache, or other painful conditions like back or joint pain. Some may take several types of medication, often more and more frequently, and end up stuck in a frustrating cycle that doesn’t seem to make sense at first.

The probable diagnosis is medication-overuse headache. This condition is thought to affect about 1–2% of people and is three to four times more common in women.

The culprit is often the painkillers themselves. Opiates like codeine, used to treat moderate pain from injuries or after surgery, come with a long list of side-effects including constipation, drowsiness, nausea, hallucinations – and headaches.

It’s not just strong opiate-based medications that can cause headaches. Common painkillers like paracetamol and NSAIDs (non-steroidal anti-inflammatories, such as ibuprofen) can also play a role. Some medications even combine paracetamol with an opiate, such as co-codamol.

Paracetamol has a simpler side-effect profile compared with drugs like codeine. When taken within the recommended daily limits – which depend on age and weight – it is generally a safe and effective painkiller. This has contributed to its widespread use and easy availability.

However, taking more than the recommended dose or using it too often can be very dangerous. This can lead to serious – sometimes fatal – complications, such as liver failure.

Even though side-effects are less common, studies have shown that regular use of paracetamol alone can also trigger chronic headaches in some people.

Other drugs besides painkillers can also cause problems. Using triptans – medications taken to stop migraine attacks – too often can also lead to medication-overuse headaches.

The term “overuse” might make it sound like patients are taking more than the recommended daily dose, which can happen and brings its own serious risks. However, in many cases of medication-overuse headaches, patients are neither exceeding dose limits nor taking the medication every single day.

For paracetamol or NSAIDs, medication-overuse headaches may develop if they are taken on 15 or more days per month. With opiates, headaches can appear with even less frequent use – sometimes after just ten days a month.

The very drugs used to treat your headaches could be making them worse.
Eddie Jordan Photos/Shutterstock.com

That’s why it’s important to talk to a doctor if you need to use any painkiller, even over-the-counter ones, for a long time. Not everyone will develop medication-overuse headaches, and the risk seems to differ from person to person, meaning individual susceptibility plays a big role.

Treatment

Treating these headaches can be challenging. It’s often hard for patients to recognise on their own that their medication is causing the problem. The usual approach involves gradually stopping the medication under guidance, eventually stopping it completely.

This can seem unfathomable to patients, especially since they expect painkillers like paracetamol to relieve their headaches. Some worry their pain will get worse as they cut back. That’s why working closely with a doctor is essential – to confirm the diagnosis, monitor progress and plan the next steps in treatment.

If you’re having headaches on more than 15 days a month, it’s important to see your GP. Talking it through can help identify underlying causes and explain these often debilitating symptom patterns. Keeping a headache diary – noting symptoms and daily details – can also support the diagnosis.

Why some medicines, especially painkillers, can make headaches worse isn’t fully understood. However, it’s important to be aware of this now well-established link and seek medical advice.

Only when some patients stop taking certain medications altogether do they discover the uncomfortable truth: that their pain was being fuelled by the very drugs they depended on.

The Conversation

Dan Baumgardt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Could pain medication be causing your headaches? – https://theconversation.com/could-pain-medication-be-causing-your-headaches-266912