US hospitality and tourism professors don’t reflect the diversity of the industry they serve

Source: The Conversation – USA (2) – By Michael D. Caligiuri, Assistant Professor of Organizational Behavior, California State Polytechnic University, Pomona

Tourists are diverse. Are tourism professors? Grant Baldwin/Getty Images

White and male professors continue to dominate U.S. hospitality and tourism education programs, our new research has found, even as the industry is growing increasingly diverse. This imbalance raises questions about who shapes the future of hospitality and whose voices are left out of the conversation.

Our analysis of 862 faculty members across 57 of the top U.S. college hospitality programs found that nearly three-quarters of these professors were white, and more than half were male. White men alone represented 43.5% of all faculty, showing persistent overrepresentation.

By comparison, only 3.7% of faculty identified as Black, far below the 14.4% share of the U.S. population that identifies as Black. Asian faculty accounted for 22.5% – significantly more than the Asian share of the U.S. population, with slightly more Asian women than men represented.

Because publicly available data did not allow us to reliably identify faculty from Hispanic or Indigenous backgrounds, our analysis focuses on representation among Black and Asian professors.

Our findings are based on a review of online faculty directories for every U.S. hospitality and tourism program included in the Academic Ranking of World Universities for 2020. We coded each faculty member by gender, race and academic rank using publicly available information gathered through university websites, LinkedIn and other professional profiles.

While this approach cannot capture the full complexity of individual identity, it reflects how representation is typically perceived by students and prospective faculty. For example, when a student browses a university’s website or sits in a classroom, they notice who looks like them and who does not.

Our results point to a stark imbalance. The people teaching, researching and preparing the next generation of hospitality leaders do not mirror the demographics of either the workforce or the student population.

Despite growing institutional attention to fairness and belonging across higher education, the tourism and hospitality field has been slow to evolve.

Why it matters

Representation in higher education isn’t just a matter of fairness. It affects student outcomes and the long-term sustainability of the field. Researchers have found that when students see role models who share their racial or ethnic identity, they report stronger connections to their academic community, higher retention rates and greater academic confidence.

For hospitality programs, which emphasize service, empathy and cultural understanding, these effects are especially meaningful. The hospitality workforce is one of the most diverse in the United States, spanning global hotels, restaurants, events and tourism operations. Yet the lack of diversity among those teaching hospitality sends a conflicting message: diversity is valued in the workforce but remains scarce in the classrooms training future leaders.

Major employers such as Marriott, Hyatt and IHG have invested heavily in programs that promote access and belonging, creating leadership pipelines for underrepresented groups. Meanwhile, academic programs that prepare these future leaders have not made comparable progress.

The lack of representation in hospitality and tourism academia also shapes the kinds of research questions that get asked. When faculty from underrepresented backgrounds are missing, issues such as racialized guest experiences, workplace bias and equitable career advancement may be overlooked.

What still isn’t known

Our study provides a snapshot, rather than a complete picture, of faculty representation in U.S. hospitality and tourism programs. Because the sample focused on research-intensive universities, it excluded many historically Black universities and teaching-focused institutions, which may have more professors of color.

The research also relied on publicly available photographs and institutional profiles to identify race and gender. While this method mirrors how students visually perceive representation, it cannot fully capture multiethnic or intersectional identities.

We believe that future studies should track how faculty composition evolves over time and explore the lived experiences of educators from underrepresented backgrounds. Understanding the barriers that prevent these scholars from entering or staying in academia is essential for creating environments where all faculty can thrive.

The Research Brief is a short take on interesting academic work. Abigail Foster, admissions specialist at the University of the District of Columbia’s David A. Clarke School of Law, contributed to this article.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. US hospitality and tourism professors don’t reflect the diversity of the industry they serve – https://theconversation.com/us-hospitality-and-tourism-professors-dont-reflect-the-diversity-of-the-industry-they-serve-273345

Malaria researchers are getting closer to outsmarting the world’s deadliest parasite

Source: The Conversation – USA (3) – By Kwesi Akonu Adom Mensah Forson, PhD Candidate in Biology, University of Virginia

Malaria is transmitted to people by mosquitoes infected with a parasite from the Plasmodium family. Jim Gathany via CDC/Dr. William Collins

Every year, malaria kills more than 600,000 people worldwide. Most of them are children under 5 in sub-Saharan Africa. But the disease isn’t confined to poor, rural areas – it’s a global threat that travels with people across borders.

For decades, the fight against malaria has felt like running in place. Bed nets and drugs save lives, but the family of parasites that cause malaria, called Plasmodium, keeps evolving new ways to survive. These parasites are transmitted to humans through the bites of infected mosquitoes.

But something is shifting. As a malaria researcher working on my Ph.D., I study how the malaria parasite develops resistance to drugs. I know what malaria feels like. I’ve had it, and I’ve lost a family member to it. That experience drove me into this field.

When I started this work in 2023, few good options existed for protecting the youngest children – the group most likely to die from malaria. Now, for the first time in my career, I’m watching real breakthroughs happen simultaneously: new vaccines, powerful antibodies and genetic surveillance tools that can predict resistance before it spreads.

2 new vaccines for children

By 2023, the World Health Organization had approved two malaria vaccines for children: one called RTS,S/AS01, also known as Mosquirix, and another referred to as R21/Matrix-M. Given in four doses starting around 5 months of age, they’re the first vaccines ever shown to prevent severe malaria.

These vaccines don’t provide perfect protection. They reduce the incidence of clinical malaria cases in vaccinated children by about 75% in the first year after the primary dose, and the protection they offer fades over time. But when combined with bed nets and preventive drugs, they’re already preventing thousands of deaths. As of late 2025, about 20 countries – primarily in Africa, where the malaria burden is highest – have introduced these vaccines into childhood immunization programs.

A baby receiving a vaccine at a hospital.
In the past two years, two malaria vaccines have become available for babies starting at 5 months of age.
ER Productions Limited/DigitalVision via Getty Images

This matters enormously because children under 5 years old do not have fully developed immune systems and haven’t built up any natural resistance to malaria. A single infection can turn deadly within hours.

The vaccine is effective because it contains a molecule that mimics a key protein on the parasite’s surface, called circumsporozoite protein. This molecule trains the immune system to recognize the parasite upon infection after a mosquito bite, before the parasite can hide inside human cells.

Discovering a parasite’s hidden weak spot

In January 2025, researchers found something surprising about how the malaria parasite invades cells.

To invade liver cells, the parasite must shed a dense surface protein that acts as a protective shield. This briefly exposes specific hidden spots of proteins, called epitopes, that were previously invisible. That momentary unmasking could give the immune system a chance to recognize the parasite and stop the invasion.

Because this vulnerability is exposed only for a split second, most immune responses miss it. However, scientists discovered an antibody called MAD21-101 that is precise enough to catch it.

An antibody is essentially a microscopic security tag produced by the immune system that can stick to invaders. While standard antibodies fail to latch because of the parasite’s protein shield, MAD21-101 waits for the unmasking moment and locks directly onto the exposed spot.

In lab tests, this action blocked the parasite from entering liver cells, stopping the infection completely. Scientists envision turning this antibody into a treatment that prevents infections in high-risk infants, potentially to be used alongside existing vaccines to strengthen protection against malaria.

A laboratory technician examines samples in a research laboratory.
By exploiting vulnerabilities in the malaria parasite’s defense system, researchers hope to develop a treatment that blocks the parasite from entering cells.
wilpunt/E+ via Getty Images

Protecting and treating the youngest patients

Because of their undeveloped immune systems, infants have historically faced a double gap: limited ways to prevent malaria, and almost no safe treatments formulated for their tiny bodies when they inevitably got sick.

In 2022, the WHO began recommending a malaria prevention strategy called perennial malaria chemoprevention for babies starting at 2 months. Infants receive a full dose of a standard antimalarial medication, such as sulfadoxine-pyrimethamine, during their routine vaccination checkups. The treatment clears out parasites and provides temporary prevention, regardless of whether the child has a fever or other symptoms.

A new treatment has recently become available. Coartem Baby, approved by Swiss regulators in 2025, is the first malaria treatment designed specifically for infants weighing as little as 4.4 pounds. Unlike older drugs, this formula safely accounts for a baby’s immature metabolism. It contains one ingredient, artemether, which acts fast to reduce the parasite count immediately, and a second ingredient, lumefantrine, which stays in the blood longer to mop up any survivors.

Tracking parasite evolution around the globe

The malaria parasite has an uncanny ability to rewrite its genetic code under pressure, allowing it to adapt and withstand the very medicines designed to destroy it. This adaptability is now threatening the drug artemisinin, the backbone of global malaria treatment, which is starting to fail in parts of Africa and Southeast Asia. But researchers like me are getting a clearer picture of how resistance develops and how it might be interrupted.

One of the parasite’s tricks is to make extra copies of the genes that help it survive antimalarial drug treatment. In my research, I use a high-precision technique that counts the number of copies of these genes to estimate a sort of resistance score: a parasite with more copies is far better equipped to survive treatment than a parasite with only one.
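
As a rough illustration of how such a score can be computed – this is a simplified sketch of the widely used 2^-ΔΔCt method for quantitative PCR, with hypothetical names and numbers, not necessarily the exact pipeline used in this research – the copy number of a resistance gene can be estimated by comparing how early it amplifies relative to a single-copy control gene:

    # Minimal sketch of the standard 2^-ddCt calculation for estimating
    # relative gene copy number from qPCR cycle-threshold (Ct) values.
    # All names and numbers here are illustrative, not from the study itself.

    def relative_copy_number(ct_gene_sample, ct_control_sample,
                             ct_gene_reference, ct_control_reference):
        # Normalise the resistance gene's Ct against a single-copy control
        # gene in both the field sample and a known single-copy reference
        # strain, then convert the difference into a fold change in copies.
        delta_sample = ct_gene_sample - ct_control_sample
        delta_reference = ct_gene_reference - ct_control_reference
        return 2 ** -(delta_sample - delta_reference)

    # A resistance gene amplifying one cycle earlier than expected
    # suggests roughly two copies, and a higher resistance score.
    print(relative_copy_number(22.0, 24.0, 23.0, 24.0))  # ~2.0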

Scientists around the world are using molecular scanning tools to hunt for specific mutations – single-letter changes in the parasite’s DNA – that make the parasite more resistant to the drug. For example, researchers in my lab are working to pin down the parasite’s genetic code as it’s in the act of changing, in order to catch dangerous mutations while they’re still rare. That would give researchers time to deploy alternative treatments before children start dying from drug-resistant infections.

These tracking tools allow epidemiologists to create early warning systems that can identify where drug resistance is emerging and predict where it might spread next, as the pathogen hitchhikes across continents in travelers’ bloodstreams. Based on those warnings, health officials can switch treatment strategies before a drug fails completely. What’s more, knowing exactly which genes the parasite modifies may enable researchers to block those changes to prevent resistance from emerging.

Malaria research is entering a new era where, although the parasite adapts, scientists like me can now adapt faster. A malaria-free childhood isn’t guaranteed yet, but for the first time in my career, it feels like a realistic goal rather than a distant dream.

The Conversation

Kwesi Akonu Adom Mensah Forson receives no funding, compensation or financial support from any companies or organizations related to malaria vaccines, drugs or diagnostic technologies. His research on malaria parasite genetics is conducted as part of his PhD project at the University of Virginia, supported by university and academic research funds.

ref. Malaria researchers are getting closer to outsmarting the world’s deadliest parasite – https://theconversation.com/malaria-researchers-are-getting-closer-to-outsmarting-the-worlds-deadliest-parasite-268316

What we get wrong about forgiveness – a counseling professor unpacks the difference between letting go and making up

Source: The Conversation – USA (3) – By Richard Balkin, Distinguished Professor of Counselor Education, University of Mississippi

Take stock of your feelings, and the other person’s, before you decide what kind of forgiveness to offer. Jacob Wackerhausen/iStock via Getty Images Plus

Two in five Americans have fought with a family member about politics, according to a 2024 study by the American Psychiatric Association. One in five have become estranged over controversial issues, and the same percentage has “blocked a family member on social media or skipped a family event” due to disagreements.

Difficulty working through conflict with those close to us can cause irreparable harm to families and relationships. What’s more, an inability to heal these relationships can be detrimental to physical and emotional well-being, and even longevity.

Healing relationships often involves forgiveness – and sometimes we have the ability to truly reconcile. But as a professor and licensed professional counselor who researches forgiveness, I believe the process is often misunderstood.

In my 2021 book, “Practicing Forgiveness: A Path Toward Healing,” I talk about how we often feel pressure to forgive and that forgiveness can feel like a moral mandate. Consider 18th-century poet Alexander Pope’s famous phrase: “To err is human; to forgive, divine” – as though doing so makes us better people. The reality is that reconciling a relationship is not just difficult, but sometimes inadvisable or dangerous, especially in cases involving harm or trauma.

I often remind people that forgiveness does not have to mean reconciliation. At its core, forgiveness is internal: a way of laying down ill will and our emotional burden so we can heal. It should be seen as a process separate from reconciliation – the decision about whether to renegotiate a relationship.

But either form of forgiveness is difficult – and here are some insights as to why:

Forgiveness, karma and revenge

In 2025, I conducted a study with my colleagues Alex Hodges and Jason Vannest to explore emotions people may experience around forgiveness, and how those emotions differ from when they experience karma or revenge.

We defined forgiveness as relinquishing feelings of ill will toward someone who engaged in a harmful action or behavior toward you. “Karma” refers to a situation where someone who wronged you got what they deserved without any action from you. “Revenge,” on the other hand, happens when you retaliate.

First, we prompted participants to share memories of three events related to offering forgiveness, witnessing karma and taking revenge. After sharing each event, they completed a questionnaire indicating what emotions they experienced as they retold their story.

A hand holding a car key traces it along the side of a beige-colored car to leave a scratch.
Revenge can feel easier than forgiveness, which often brings sadness or anxiety.
nattul/iStock via Getty Images Plus

We found that most people say they aspire to forgive the person who hurt them. To be specific, participants were about 1.5 times more likely to desire forgiveness than karma or revenge.

Most admitted, though, that karma made them happier than offering forgiveness.

Working toward forgiveness tended to make people sad and anxious. In fact, participants were about 1.5 times more likely to experience sadness during forgiveness than during karma or revenge. Pursuing forgiveness was more stressful, and harder work, because it forces people to confront feelings that may often be perceived as negative, such as stress, anger or sadness.

Two different processes

Forgiveness is also confusing, thanks to the way it is typically conflated with reconciliation.

Forgiveness researchers tie reconciliation to “interpersonal forgiveness,” in which the relationship is renegotiated or even healed. However, at times, reconciliation should not occur – perhaps due to a toxic or unsafe relationship. Other times, it simply cannot occur, such as when the offender has died, or is a stranger.

But not all forgiveness depends on whether a broken relationship has been repaired. Even when reconciliation is impossible, we can still relinquish feelings of ill will toward an offender, engaging in “intrapersonal forgiveness.”

Not all forgiveness has to involve renegotiating a relationship with the person who hurt you.

I used to practice counseling in a hospital’s adolescent unit, in which all the teens I worked with were considered a danger to themselves or others. Many of them had suffered abuse. When I pictured what “success” could look like for them, I hoped that, in adulthood, my clients would not be focused on their past trauma – that they could experience safety, health, belonging and peace.

Most often, such an outcome was not dependent upon reconciling with the offender. In fact, reconciliation was often ill-advised, especially if offenders had not expressed remorse or commitment to any type of meaningful change. Even if they had, there are times when the victim chooses not to renegotiate the relationship, especially when working through trauma.

Still, working toward intrapersonal forgiveness could help some of these young people begin each day without the burden of trauma, anger and fear. In effect, the client could say, “What I wanted from this person I did not get, and I no longer expect it.” Removing expectations from people by identifying that we are not likely to get what we want can ease the burden of past transgressions. Eventually, you decide whether to continue to expend the emotional energy it takes to stay angry with someone.

Relinquishing feelings of ill will toward someone who has caused you harm can be difficult. It may require patience, time and hard work. When we recognize that we are not going to get what we wanted from someone – trust, safety, love – it can feel a lot like grief. Someone may pass through the same stages, including denial, anger, bargaining and depression, before they can accept and forgive within themselves, without the burden of reconciliation.

Taking stock

With this in mind, I offer four steps to evaluate where you are on your forgiveness journey. A simple tool I developed, the Forgiveness Reconciliation Inventory, looks at each of these steps in more depth.

  1. Talk to someone. You can talk to a friend, mentor, counselor, grandma – someone you trust. Talking makes the unmentionable mentionable. It can reduce pain and help you gain perspective on the person or event that left you hurt.

  2. Examine if reconciliation is beneficial. Sometimes there are benefits to reconciliation. Broken relationships can be healed, and even strengthened. This is especially likely when the offender expresses remorse and changes behavior – something the victim has no control over.

  3. In some cases, however, there are no benefits, or the benefits are outweighed by the offender’s lack of remorse and change. In this case, you might have to come to terms with processing an emotional – or even tangible – debt that will not be repaid.

  4. Consider your feelings toward the offender, the benefits and consequences of reconciliation, and whether they’ve shown any remorse and change. If you want to forgive them, determine whether it will be interpersonal – talking to them and trying to renegotiate the relationship – or intrapersonal, in which you reconcile your feelings and expectations within yourself.

Either way, forgiveness comes when we relinquish feelings of ill will toward another.

The Conversation

Richard Balkin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. What we get wrong about forgiveness – a counseling professor unpacks the difference between letting go and making up – https://theconversation.com/what-we-get-wrong-about-forgiveness-a-counseling-professor-unpacks-the-difference-between-letting-go-and-making-up-273317

How Trump’s Greenland threats amount to an implicit rejection of the legal principles of Nuremberg

Source: The Conversation – USA (3) – By Michael Blake, Professor of Philosophy, Public Policy and Governance, University of Washington

Daily life on a street at sunset in Nuuk, Greenland, on Jan. 21, 2026. AP Photo/Evgeniy Maloletka

U.S. President Donald Trump has, for the moment, indicated a willingness to abandon his threat to take over Greenland through military force – saying that he prefers negotiation to invasion. He is, however, continuing to assert that the United States ought to acquire ownership of the self-governing territory.

Trump has repeatedly raised the possibility of using military action, against both Greenland and Canada.

These threats were often taken as fanciful. But the fact that he successfully used military force to remove Venezuelan President Nicolas Maduro from power has lent them some plausibility.

Crucially, these military possibilities have been justified almost exclusively with reference to what Trump’s administration sees as America’s national interests. Anything short of ownership in the case of Greenland, the president has emphasized, would fail to adequately protect American interests.

As a political philosopher concerned with the moral analysis of international relations, I am deeply troubled by this vision of warfare – and by the moral justifications used to legitimize the making of war.

This view of warfare is radically different from the one championed by the U.S. for much of the 20th century. Most notably, it repudiates the legal principle that informed the Nuremberg trials: that military force cannot be justified on the basis of national self-interest alone.

Those trials, set up after World War II to prosecute the leaders of the Nazi regime, were foundational for modern international law; Trump, however, seems to disregard or reject the legal ideas the Nuremberg tribunal sought to establish.

Aggressive war as international crime

The use of warfare as a means by which states might seek political and economic advantage was declared illegal by 1928’s Kellogg-Briand Pact – an international instrument by which many nations, including both Germany and the U.S., agreed to abandon warfare as a tool for national self-interests.

After 1928, invading another country in the name of advancing national interests was formally defined as a crime, rather than a legitimate policy option.

The existence of this pact did not prevent the German military actions that led to World War II. The prosecution for the International Military Tribunal at Nuremberg, accordingly, took two aims as central: reaffirming that aggressive warfare was illegal, and imposing punishment on those who had chosen to use military force against neighboring states.

The first charge laid against the Nazi leadership at Nuremberg was therefore the initiation of a “war of aggression” – a war chosen by a state for its own national interests.

The chief prosecutor in Nuremberg was Robert H. Jackson, who at the time also served as a justice on the U.S. Supreme Court. Jackson began his description of the crime by saying that Germany, in concert with other nations, had bound itself in 1928 to “seek the settlement of disputes only by pacific means.”

More particularly, Jackson noted, Germany had justified its invasion of neighboring countries with reference to “Lebensraum” – living room, or, more generally, space for German citizens – which marked those invasions out as illegal.

A courtroom scene shows several people seated in three rows, with national flags displayed behind them and additional rows of seated attendees visible in front.
Nuremberg trial, Dec. 4, 1945.
Sepia Times/ Universal Images Group via Getty Images

Germany used its own national interests as sufficient reason to initiate deadly force against other nations. In so doing, said Jackson, it engaged in a crime for which individual criminal punishment was an appropriate response.

In the course of this crime, Jackson noted, Germany had shown a willingness to ignore both international law and its own previous commitments – and had given itself “a reputation for duplicity that will handicap it for years.”

Jackson asserted, further, that the extraordinary violence of the 20th century required the building of some legal tools, by which the plague of warfare and violence might be constrained.

If such principles were not codified in law, and respected by nations, then the world might well see, in Jackson’s phrase, the “doom of civilization.” Nuremberg’s task, for Jackson, was nothing less than ensuring that aggressive war was forever to be understood as a criminal act – a proposition backed, crucially, by the U.S. as party to the Nuremberg trials.

The morality of warfare

It is fair to say that the U.S., like other nations, has had a mixed record of living up to the legal principles articulated at Nuremberg, given its record of military intervention in places like Vietnam and Iraq.

President Donald Trump, wearing a blue suit and red tie, is seated in front of the American flag, with the NATO flag displayed beside it.
President Donald Trump at the World Economic Forum in Davos, Switzerland, on Jan. 21, 2026.
AP Photo/Evan Vucci

Trump’s prior statements about Greenland, however, hint at something more extreme: They represent an abandonment of the principle that aggressive war is a criminal act, in favor of the idea that the U.S. can use its military as it wishes, to advance its own national interests.

Previous presidents have perhaps been guilty of paying too little attention to the moral importance of such international principles. Trump, in contrast, has announced that such principles do not bind him in the least.

In a recent interview with The New York Times, Trump asserted that he did not “need international law” to know what to do. He would, instead, be limited only by “his own morality” and “his own mind.”

European leaders, for their part, have increasingly decried Trump’s willingness to go back on his word, or abandon previously insisted-upon principles, if such revisions seem to provide him with some particular advantage.

Trump’s statements, however, imply that his administration has adopted a position strikingly similar to that decried by Justice Jackson: The U.S., on this vision, can simply decide that its own moral interests are more important than those of other countries, and can initiate violence against those countries at its own discretion. It can do this, moreover, regardless of either the content of international law or of previously undertaken political commitments.

This vision, finally, is being undertaken in a world in which the available tools of destruction are even more complex – and more deadly – than those available during the Second World War.

It is, indeed, a historic irony that the U.S. of today has so roundly repudiated the moral values it both helped develop and championed globally during the 20th century.

The Conversation

Michael Blake receives funding from the National Endowment for the Humanities.

ref. How Trump’s Greenland threats amount to an implicit rejection of the legal principles of Nuremberg – https://theconversation.com/how-trumps-greenland-threats-amount-to-an-implicit-rejection-of-the-legal-principles-of-nuremberg-274018

Iran’s biggest centres of protest are also experiencing extreme pollution and water shortages

Source: The Conversation – UK – By Nima Shokri, Professor, Applied Engineering, United Nations University

Iran’s current wave of protests is often interpreted as having been sparked by inflation, currency collapse, corruption and repression. These explanations are not wrong, but they are incomplete.

Beneath the country’s political and economic crisis lies a more destabilising force that is still largely missing from international analysis: environmental breakdown.

Iran is experiencing not one environmental crisis but the convergence of several: water shortages, land subsidence, air pollution and energy failure. Taken together, they have made daily life a struggle for survival.

So when citizens protest today, they are not only resisting authoritarian governance. They are responding to a state that can no longer reliably provide the most basic forms of security: water to drink, air to breathe, land to stand on, and electricity to carry on their daily lives.

Between 2003 and 2019, Iran lost an estimated 211 cubic kilometres of groundwater – twice its annual water consumption – leaving the country facing water bankruptcy. Excessive pumping – driven by agricultural expansion, energy subsidies and weak regulation – has caused land subsidence rates of up to 30cm per year, affecting areas where around 14 million people, more than one-fifth of the population, live.

Provinces such as Kerman, Alborz, Khorasan Razavi, Isfahan and the capital Tehran now have more than a quarter of their population living with the risk of subsidence. In all, large sections of the country – particularly around Tehran, the agricultural centre Rafsanjan and the city of Mashhad – are subsiding at alarming rates of close to 10cm per year.

Read more: Iran’s record drought and cheap fuel have sparked an air pollution crisis – but the real causes run much deeper

Subsidence has cracked homes, damaged railways, destabilised highways, and threatened airports as well as Unesco-listed heritage sites.

Iran’s lack of water has become politically explosive. When reservoirs fall to extremely low levels, when taps run dry at night in major cities, or when farmers watch rivers and lakes disappear, grievances turn into protest.

As wetlands, lakes and riverbeds dry up, their exposed surfaces generate dust and salt storms that can blanket cities hundreds of kilometres away.

The aftermath of recent protests in Tehran.

At the same time, chronic electricity shortages – caused by underinvestment, inefficiency and poor infrastructure – have forced power plants and industries to burn heavy fuels. The result is extreme concentrations of sulfur dioxide, nitrogen oxides and fine particulate matter.

Ignoring environmental problems

The World Health Organization notes that Iran is facing severe problems in terms of its air quality. Around 11% of deaths and 52% of the burden of disease across the country are attributable to environmental risk factors.

In recent months, major cities have repeatedly closed schools and offices due to hazardous air quality, while hospitals report surges in respiratory and cardiovascular emergencies.

These environmental failures do not exist in isolation. They are the predictable outcome of decades of distorted national priorities.

Since the 1980s, Iran has channelled vast financial, institutional and political resources into ideological expansion and regional disputes – supporting groups in Lebanon, Syria, Iraq and Yemen – while systematically underinvesting in domestic environmental governance, infrastructure renewal and job creation.

Meanwhile, Iran’s political economy has been structured around energy subsidies and megaprojects that reward short-term extraction over long-term sustainability. Cheap fuel has encouraged water-intensive agriculture and inefficient industry.

Environmental agencies have remained fragmented and politically weak, unable to restrain more powerful ministries or government-linked economic actors. International isolation has compounded these failures.

Sanctions deepened the environmental crisis by restricting access to modern monitoring technologies, clean-energy systems, efficient irrigation and external finance.

While much of the world invested in technology and regulation to curb pollution and stabilise water systems, Iran doubled down on emergency fixes that deepened ecological damage rather than containing it. Sanctions and climate stress amplified the problems, but the root cause lay in state priorities that have consistently ignored environmental security.

The political consequences are now unmistakable. Environmental stress reshapes not only why people protest, but where and how. Maps of unrest in 92 Iranian cities reveal a clear pattern. Protests increasingly erupt in areas where there is groundwater collapse, land subsidence and water rationing.

Water shortages and protest

In provinces such as Tehran, Khuzestan in the south-west and Isfahan in central Iran – all areas with high levels of protest – there are acute water shortages, subsidence causing damage to roads and pipelines, and disputes over access to water.

In other cities such as Kermanshah and Ilam, intensifying unrest reflects the interaction of major environmental problems – drought, declining rainfall and groundwater depletion – with severe economic hardship and poverty.

But Iran is not unique in this regard. Similar conflicts over water and economic issues have played a destabilising role in neighbouring Syria. Prolonged drought, conflicts over water and access to it, and limited rainfall have affected crop yields and animals there. Hundreds of thousands of people living in agricultural communities have been driven to cities and camps nearby in a desperate attempt to survive.

Water mismanagement and a lack of access to decent drinking water have also fuelled unrest in Basra in the south of Iraq.

Iran is not facing a cyclical protest problem that can be stabilised through repression, subsidies or tactical concessions. It is confronting a structural collapse of the systems that make governance possible and that are at the heart of human survival.

When there’s no water and the air becomes unbreathable, the social contract fractures. Citizens no longer debate ideology or reform timelines; they question the state’s right to rule at all.

What Iran sees today is not simply environmental stress but irreversible simultaneous failures across water, land, air and energy. These are not shocks that fade with rainfall or budget injections. They permanently shrink the state’s capacity to deliver security and economic opportunity.

Coercion can disperse crowds but it cannot reverse subsidence, restore collapsed aquifers or neutralise airborne toxins. A state cannot govern indefinitely where the ecological foundations of life, agriculture and public health are failing all at once.

The Conversation

Nima Shokri is affiliated with Hamburg University of Technology.

ref. Iran’s biggest centres of protest are also experiencing extreme pollution and water shortages – https://theconversation.com/irans-biggest-centres-of-protest-are-also-experiencing-extreme-pollution-and-water-shortages-274217

A brief history of sugar

Source: The Conversation – UK – By Seamus Higgins, Associate Professor of Food Process Engineering, Chemical & Environmental Engineering, University of Nottingham

Still Life by Edward Hartley Mooney (1918). Manchester Art Gallery, CC BY

A few thousand years ago, sugar was unknown in the western world. Sugarcane, a tall grass first domesticated in New Guinea around 6000BC, was initially chewed for its sweet juice rather than crystallised. By around 500BC, methods for boiling sugarcane juice into crystals had been developed in India.

One of the earliest references to sugar we have dates to 510BC, when Emperor Darius I of what was then Persia invaded India. There he found “the reed which gives honey without bees”.

Knowledge of sugar-making spread west to Persia, then across the Islamic world after the 7th century AD. Sugar reached medieval Europe only via trade routes. It was extremely expensive and used more like a spice. Indeed, in the 11th century Crusaders returning home talked of how pleasant this “new spice” was.

It was the supply potential of this “new spice” in the early 16th century that encouraged Portuguese entrepreneurs to export enslaved people to newly discovered Brazil. There, they rapidly started growing highly profitable sugar cane crops. By the 1680s, the Dutch, English and French all had their own sugar plantations with enslaved colonies in the Caribbean.

In the 18th century, the increasing popularity of tea and coffee led to the widespread adoption of sugar as a sweetener. In 1874, prime minister William Gladstone abolished a 34% tax on sugar to ease the costs of basic food for workers. Cheap jam (one-third fruit pulp to two-thirds sugar) began to appear on the table of every working-class household. The growing demand for sugar in Britain and Europe encouraged further growth and profit, earning sugar the name “white gold”.

Painting of a woman carrying sugar cane
Getting in the Sugar Cane, River Nile by Frederick Trevelyan Goodall (1875).
Grundy Art Gallery, CC BY

Britain’s per capita sugar consumption skyrocketed from four pounds in 1704 to 90 pounds by 1901. While slavery was eventually abolished, the supply of cheap labour was sustained by new flows of indentured workers from India, Africa and China.

Britain’s naval blockade of Napoleonic France at the start of the 19th century prodded the French to seek an alternative to Caribbean sugar supplies. It gave birth to the European sugar beet industry.

Sugar beet is a biennial root crop grown for its high sucrose content, which is extracted to produce table sugar. The 20th century saw this traditionally heavily subsidised and tariff-protected industry grow to produce approximately 50% of Europe’s sugar. That includes the UK’s consumption, now around 2 million tons annually, of which roughly 60% comes from beet and 40% from cane.

Delights and dangers

In 1886, Atlanta’s prohibition laws forced the businessman and chemist John Pemberton to reformulate his popular drink, Pemberton’s Tonic French Wine Coca. He replaced the alcohol with a 15% sugar syrup and added citric acid. His bookkeeper, Frank Robinson, chose a new name for the drink after its main ingredients – coca leaves and kola nuts – and created the Coca-Cola trademark in the flowing script we know today.

In 1879, Swiss chocolatier Daniel Peter invented the world’s first commercial milk chocolate using sweetened condensed milk developed by his neighbour, Henri Nestlé. Milk chocolate, which contains about 50-52 grams of sugar per 100 grams, has now become a global favourite for its sweet taste and creamy texture.

Chocolate and cola have since solidified their status as global staples in the realm of fizzy drinks and sweet treats and have become essential indulgences for people worldwide.

In 1961, the American epidemiologist Ancel Keys appeared on the cover of Time magazine for his “diet-heart hypothesis”. Through his “seven countries” study, he found an association between saturated fat intake, blood cholesterol and heart disease. Keys remarked: “People should know the facts. Then, if they want to eat themselves to death, let them.”

An advert for Coca-Cola from 1961.

Offering competing scientific advice, John Yudkin, founder of the nutrition department at Queen’s College, published an article in the Lancet. He argued that international comparisons do not support the claim that total or animal fat is the main cause of coronary thrombosis, highlighting that sugar intake has a stronger correlation with heart disease.

He published his book, Pure, White and Deadly, in 1972. It highlighted the evidence linking sugar consumption to increased coronary thrombosis and its involvement in dental caries, obesity, diabetes and liver disease. He ominously noted: “If only a small fraction of what is already known about the effects of sugar were to be revealed about any other material used as a food additive, that material would promptly be banned.”

The British Sugar Bureau dismissed Yudkin’s claims about sugar as “emotional assertions”, and the World Sugar Research Organisation called his book “science fiction”. In the 1960s and 1970s, the sugar industry promoted sugar as an appetite suppressant and funded research that downplayed the risks of sucrose, while emphasising dietary fat as the primary driver of coronary heart disease.

Scientific debate over the relative health effects of sugar and fat continued for decades. In the meantime, governments began publishing dietary guidelines advising people to eat less saturated fats and high-cholesterol foods. An unavoidable consequence of this was that people began eating more carbohydrates and sugar instead.

Official dietary guidelines did not begin to clearly acknowledge the health risks of excessive sugar consumption until much later, as evidence accumulated toward the end of the 20th century.

In my new book, “Food and Us: The Incredible Story of How Food Shapes Humanity”, I explore the fact that sugar is a relatively new addition to our diet. In just a short period of 300 years, or 0.0001% of our food evolution, sugar has become ubiquitous in our food supply. It has even evolved its own terms of endearment and affection for people, such as sugar, honey and sweetheart.

However, the global addiction to sugar poses significant and interconnected challenges for public health, the economy, society and the environment. The pervasive nature of sugar in processed foods, combined with its effects on the brain’s reward system, creates a cycle of dependency that is driving a worldwide crisis of diet-related diseases and straining health systems.

The Conversation

Seamus Higgins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A brief history of sugar – https://theconversation.com/a-brief-history-of-sugar-266189

Creatine for women: should you add this supplement into your diet?

Source: The Conversation – UK – By Justin Roberts, Professor of Nutritional Physiology, Anglia Ruskin University

Creatine supplements can be particularly beneficial for building strength. Chay_Tee/ Shutterstock

Creatine is one of the most popular sports supplements out there. It’s shown to help build muscle and improve strength, boost speed and power in athletes and benefit sports performance all round.

Research also suggests this superstar nutrient may have other health benefits, including for brain function, memory, bone health and even mood.

While creatine has been a mainstay supplement for gym enthusiasts, most of the research on this supplement’s benefits has been conducted on men. With recent increased advertising specifically promoting creatine for women, there is growing interest in whether this nutrient can also be equally beneficial for them.

It’s already clear from the research that creatine could benefit women by reducing fatigue during exercise. It may also be particularly beneficial for maintaining muscle as women get older.

Creatine is a natural compound produced in the body from several amino acids (the building blocks of protein). We can also get it from protein-rich foods, such as meat and seafood.

Creatine plays a role in short-term energy, particularly during intense exercise, helping us to recover quicker between exercises. This makes it possible to do more work each time we train, leading to around 20% greater performance gains when regularly taking the supplement.

We naturally use around 2g-4g of creatine per day. But our bodies don’t store much creatine, which is why we need to consume it in our diet or get it from supplements. Think of it like a short-term energy store that needs topping up.

Around 1kg of raw beef or seafood supplies roughly 3g-5g of creatine. However, cooking can reduce creatine content. This makes it challenging to consistently get enough from the diet alone, which is where supplements can be useful.

Research also shows that vegans, vegetarians and women tend to have diets lower in creatine – meaning lower overall body stores. However, women do appear to store a bit more creatine in their muscles than men, suggesting they may respond to it more slowly or differently than men.

The most studied form of creatine is creatine monohydrate. This can be taken as a powder, capsule or gummy. If women consume around 3g-5g of creatine a day as a supplement, it will help gradually increase muscle creatine stores over a period of two to four weeks.

But if you’re looking to boost muscle stores faster, research shows taking around 20g of creatine a day for seven days (before dropping down to 3g-5g daily) can safely boost stores.

Creatine benefits for women

Many factors influence women’s health over their lifetime. These include hormonal changes, the gradual loss of muscle that comes with ageing, loss of bone density and a slower metabolism post-menopause – as well as fluctuating energy levels and poor concentration or focus.

Resistance exercise may be beneficial in mitigating some of these changes, particularly in supporting muscle mass and function, bone health and energy levels.

An older woman wearing a pink shirt and standing outdoors drinks out of a shaker bottle used for protein or creatine shakes.
Daily creatine may have many benefits for women’s health and fitness.
SvetikovaV/ Shutterstock

This is where creatine comes in. Doing resistance training for several weeks while taking around 3g-5g of additional creatine per day can enable you to maintain the quality and consistency of your training. This combination can be particularly beneficial for strength in mid to later life.

Women who take creatine consistently are shown to have improved muscle function, which ultimately can impact quality of life. There’s also some evidence that taking it alongside resistance training may support bone health in postmenopausal women – although not all studies agree on this.

It’s worth noting as well that creatine does not appear to lead to weight gain or cause a bulky, muscular appearance, which are often concerns for women thinking about taking the supplement.

More recently, research has been exploring whether creatine can affect brain health, cognitive function and possibly even mood in older women. Evidence also shows that in younger women, it can improve mood and cognitive function after a bad night’s sleep.

There’s emerging evidence as well that taking 5g of creatine daily can help younger women sleep longer (particularly on days they’ve done a workout). The same dose may also improve sleep quality in perimenopausal women – possibly by supporting the energy required by the brain.

Another study also reported greater reductions in depressive symptoms in women taking 5g of creatine daily alongside antidepressants, compared to those just taking antidepressants.

Given many women report experiencing symptoms such as “brain fog”, poor concentration, stress, low energy and poor sleep during their menstrual cycle and throughout the menopause, this could make creatine a low-cost solution for many of these symptoms. However, a higher dose of creatine may be needed daily (around 5g-10g) to increase the brain’s creatine stores.

Creatine is by no means a cure-all supplement, and clearly more research on women is needed. But the research so far shows that even just a small amount of creatine daily – when paired with a healthy lifestyle and resistance training – holds promise in supporting many aspects of women’s health.

The Conversation

Professor Justin Roberts is employed by Anglia Ruskin University and Danone Research & Innovation, and has previously received external research funding unrelated to this article.

ref. Creatine for women: should you add this supplement into your diet? – https://theconversation.com/creatine-for-women-should-you-add-this-supplement-into-your-diet-272773

Andy Burnham: what now for the King in the North?

Source: The Conversation – UK – By Alex Nurse, Reader in Urban Planning, University of Liverpool

Andy Burnham, the mayor of Greater Manchester, has been blocked from standing for parliament – a step that would have been essential to mount a leadership challenge against Keir Starmer.

Andrew Gwynne, who has been suspended for some time, has stepped down as MP for Gorton and Denton, citing ill health. A byelection will now be held in the seat, which is in the Greater Manchester area – Burnham’s home turf. But the party’s National Executive Committee has voted eight to one to prevent Burnham from standing in the byelection, citing the expense of running a mayoral election to replace him as the main reason.

However, their ruling has been taken as a signal that Starmer is too worried about the threat Burnham would pose from the backbenches to allow him to return to Westminster.

Starmer is right to be worried. Burnham is following a long line of political rivals who have hovered in the background as a party leader struggles.

Margaret Thatcher spent the second half of her premiership heading off the threat from Michael Heseltine. He didn’t replace her but she was toppled and John Major assumed power as a consequence of those tussles.

Tony Blair and Gordon Brown’s rivalry was infamous and at times all-consuming. Both David Cameron and Theresa May had to deal with Boris Johnson’s ambition to occupy their job. And we know how that ended.

In some ways, Burnham is trudging a similar path to Johnson: a former MP who left parliament to take up office as the mayor of a large city, and who enjoys a national profile that perhaps exceeds that of his office. However, the similarities end there.

Burnham served in Blair’s government, before holding multiple roles within Brown’s cabinet, including as health secretary. Burnham also tested his leadership credentials on the Labour membership on two occasions – losing to Ed Miliband in 2010 and Jeremy Corbyn in 2015.

Burnham has often spoken of his disdain for the Westminster model and has done very well for himself out of being a mayor rather than an MP. It’s true that he was taking what many saw as a convenient off ramp out of Jeremy Corbyn’s shadow cabinet when he initially ran for the position, but he won the 2017 election with 63% of the vote. He increased his majority upon re-election in 2021 and has become the figurehead of the English mayors.

His most impressive credentials lie in his approach to transport. He has taken the lead on bringing buses back into public ownership – a move that has been popular among people frustrated by spiralling fare prices. Manchester became the first city outside London to appoint a walking and cycling commissioner – something that was then copied by every other mayor. He has ultimately formulated what has become known as “the Bee Network” – a fully integrated system of tram and bus lines and cycle routes.

Of course, not all of Burnham’s actions have seen successes. For example, the ten-year plan for Greater Manchester, which is overseen by his office, has become increasingly fractured as local authorities break away from it – particularly over concerns that its housing targets aren’t achievable.

However, it was during the COVID-19 pandemic that he really burnished his credentials as the so-called “King in the North” – a title that has endured in popularity longer than the TV show from which it was derived. Amid confusing advice over lockdowns and inconsistent support from national government, Burnham took to giving live press conferences on the steps of Manchester town hall railing against Westminster.

He eventually won some concessions from the Johnson government over lockdown restrictions in his region. This, perhaps for the first time, really showcased the value of a talismanic mayor who could argue for their city, and certainly reaffirmed Burnham’s position as a national player.

A king on the march?

Given his two previous tilts at the role, Burnham’s leadership ambitions have rarely been in doubt. Indeed they have always bubbled beneath the surface. Although he has little choice but to lick his wounds for now, Burnham’s status as a potential replacement for Starmer remains undiminished.

There will also, undoubtedly, be others in the Labour party who have their own leadership ambitions, and who will have mixed emotions that the main stalking horse liable to topple Starmer and instigate a leadership race has been stabled.

Perhaps in a case of life imitating art, we should remember that in Game of Thrones the King in the North is fatally undone by poor tactical decisions. The most successful example of returning to parliament and obtaining power remains Johnson.

Even so, this took nearly four years and a party that largely wanted him back. With his path to Westminster currently blocked, that timeline might leave Burnham questioning his long-term strategy.

The Conversation

Alex Nurse does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Andy Burnham: what now for the King in the North? – https://theconversation.com/andy-burnham-what-now-for-the-king-in-the-north-266455

Americans have fought back against authoritarianism at home before

Source: The Conversation – UK – By George Lewis, Professor of American History, University of Leicester

The first year of Donald Trump’s second term has been marked by increasing authoritarianism at the heart of the US federal government. He has openly defied court orders, worked beyond the established remit of executive power and is making no secret of his strongman ambitions. History tells us that such an authoritarian presence is not new and offers a blueprint for how it might be overcome.

From the 1930s to the 1970s, a congressional committee called the House Un-American Activities Committee (Huac) operated with near impunity. Granted extraordinary powers to investigate subversion and subversive propaganda, Huac sidelined political opponents, ruined careers and crushed organisations.

In popular memory, Huac remains inexorably tied to the “red scare” politics of the 1950s when cold war tensions led to intense anti-communist paranoia in the US. But in reality, it operated across five decades and its demise only came with the careful plotting of a concerted and organised campaign.

Huac derived much of its power from the vagueness of its mandate, with no objective definition of un-Americanism ever being universally agreed. Earl Warren, the chief justice of the US supreme court at the time, even openly questioned in 1957 whether un-Americanism could be defined and, thus, whether it ought to be investigated.

But the committee was not cowed by that lack of definition. For decades, Huac sought to be seen as the sole arbiter of the meaning of un-Americanism. That way, the committee could target its own enemies at will under the guise of investigating un-Americanism for the public good.

Curbing Huac’s authoritarianism was a delicate business. It had extraordinary powers, ill-defined parameters and vituperative members. The committee had also been a fixture of American life for so long that its existence seemed inevitable. The answer to overcoming its authoritarianism came in two separate stages.

Fighting against authoritarianism

First was the building of a broad coalition. Huac had many opponents in both politics and culture; the issue was uniting them behind a single cause.

Individuals, groups and protest movements that had been operating separately had to be encouraged to put their specific concerns aside and coalesce instead around an overall concern for democratic values. It was here that civil liberties protesters first forged an alliance with their civil rights counterparts at the turn of the 1960s.

Civil liberties organisations were primarily concerned with the free speech provision of the US constitution’s first amendment. Civil rights groups, on the other hand, were most concerned with the 14th and 15th amendments’ equal rights provisions. Huac’s assault on American principles was a reminder that these were amendments to the same document and it was the constitution as a whole that needed protection.

Momentum was key. A Huac memo from around that time recorded American civil rights leader John Lewis stating that “civil rights and liberties are the same”. Lewis worked across generational and geographical divides to unite sit-in students at segregated public spaces in the southern states with students who stormed Huac hearings in the west.

Gender divides also allowed women’s activists to humiliate the masculine conservatism of Huac committeemen. Poems described the committee shivering in its own manure, vinyl records captured anti-Huac protests and singers satirised its proceedings. The supreme court confronted Huac’s overreach, which activists and public intellectuals translated into popular broadsides.

However, this activism alone was insufficient. The second stage in bringing Huac’s authoritarianism to heel saw the carefully planned intervention of national mainstream politicians. Here, Congressman Jimmy Roosevelt provided bold but also tactically astute leadership. He delivered a speech from the floor of the US Capitol in 1960 that changed the movement from one designed to protest Huac’s authoritarianism to one demanding the committee’s outright abolition.

Roosevelt used the committee’s own actions against it. As he recognised, Huac’s meticulous record keeping also detailed its own failings. It spent public money on propaganda, and its personnel, including staff director Richard Arens, were found to have been in the pay of scientific racists even as they investigated the civil rights movement. The committee also used designated wartime powers in peacetime.

Roosevelt stepped back, though, and concentrated on questions of principle at the heart of American democracy and the nation’s founding ideals. In his speech, Roosevelt told the House that Huac was “at war with our profoundest principles”. The un-American committee had used its powers in un-American ways.

James Roosevelt was a prominent opponent of Huac in the 1950s and 60s. Bettmann Archive / Wikimedia Commons

By appealing to matters of principle, Roosevelt was also able to win over principled members of the new congressional intake that followed the elections of that year, which saw Democrat John F. Kennedy enter the White House.

Liberal House members had long given Huac a wide berth on account of its reputation. But riding a wave of liberalism, and encouraged by Roosevelt’s political leadership, some of that new intake now actively sought appointment to Huac so they could oppose its authoritarianism head on.

For the first time, the committee shifted from trying to frame civil rights activists as un-American to investigating the un-Americanism of the Ku Klux Klan. Its reformed membership also began opposing the scale of the congressional appropriations that had underwritten its investigations.

Its remaining conservative members were drawn into making increasingly desperate claims to maintain their national profile, but succeeded only in drawing the committee towards ridicule and irrelevance. Huac limped towards the end of the decade and was finally dissolved in 1975.

History tells those in Washington today that democratic pressures can be brought to bear on an authoritarian presence, however entrenched it may appear. Building a broad coalition is vital, as is labelling authoritarian behaviour appropriately. Denying any one individual ownership of what constitutes un-Americanism is equally important.

The record also shows that disparate groups can apply pressure most effectively when they are bound to a single issue. Here, as in the campaign against Huac, that issue is the principle of American democracy.

Roosevelt left three lessons for US citizens. First, that the momentum generated by a growing popular coalition can be harnessed in national politics. Second, that bold and principled leadership brings reward. And third, that elections can be the harbinger of significant and substantive change.

The Conversation

George Lewis has received funding from the British Academy for his research into un-Americanism.

ref. Americans have fought back against authoritarianism at home before – https://theconversation.com/americans-have-fought-back-against-authoritarianism-at-home-before-273638

The India-UK trade deal is a prime opportunity to protect some of the world’s most vulnerable workers

Source: The Conversation – UK – By Pankhuri Agarwal, Leverhulme Early Career Research Fellow, University of Bath; King’s College London

AlexAnton/Shutterstock

A new trade agreement between India and the UK is due to come into force this year. The deal is expected to remove tariffs on nearly 99% of Indian goods headed for the UK, including clothing and footwear.

In both countries, this has been widely celebrated as a win for economic growth and competitiveness. And for Indian garment workers in particular, the trade agreement carries real promise.

This is because in recent years, clothing exports from India have declined sharply as well-known fashion brands moved production to places like Morocco and Turkey, which were cheaper.

India’s internal migrant workers (those who move from one region of the country to another looking for work) have been hit hardest, often waiting outside factories for days for the chance of a single shift of insecure work.

Against this backdrop, the prospect of steadier employment and a more competitive sector under the new trade agreement looks like a positive outcome. But free trade agreements are not merely economic instruments – they shape labour markets and working conditions along global supply chains.

So, the critical question about this trade deal is not whether it will generate employment in India – it almost certainly will – but what kind of employment it will create.

Few sectors illustrate this tension more clearly than the manufacture of clothing. As one of India’s biggest exports, its garments sector is expected to be one of the primary beneficiaries of the trade deal.

But it is also among the country’s most labour-intensive and exploitative industries. From denim mills in Karnataka to knitwear and spinning hubs in Tamil Nadu, millions of Indian workers receive low wages and limited job security.

Research also shows that gender and caste-based exploitation is widespread.

So, if the trade deal goes ahead without addressing these issues, it risks perpetuating a familiar cycle where we see more orders and more jobs, but the same patterns of unfair wages, insecurity and – in some cases – forced labour.

Marginalised

For women workers, who form the backbone of garment production in India, these vulnerabilities are even sharper.

Gender-based violence, harassment and unsafe working conditions have been documented repeatedly across India’s export-oriented factories. Schemes that bound young women to factories with the promise of future benefits, which often never materialised, show how caste- and gender-based discrimination have long been embedded within the sector.

Even in factories that formally comply with labour laws, wages that meet basic living costs remain rare. Many workers earn wages which are not enough to pay for housing, food, healthcare and education, pushing families into debt as suppliers absorb price pressures imposed by global brands.

On the plus side, the India-UK agreement does not entirely sidestep these issues. There is a chapter which outlines commitments to the elimination of forced labour and discrimination.

But these provisions are mostly framed as guidance rather than enforceable obligation. They rely on cooperation and voluntary commitments, instead of binding standards.

While this approach is common in trade agreements, it limits this deal’s capacity to drive meaningful change. But perhaps even more striking is what has been left out.

Despite the role that caste, India’s social stratification system, plays in shaping the country’s labour markets, it is entirely absent from the text of the agreement.

Yet caste determines who enters garment work and who performs the most hazardous and lowest-paid tasks. A significant proportion of India’s garment workforce comes from marginalised caste communities with limited bargaining power and few alternatives.

By addressing labour standards without acknowledging caste, the free trade agreement falls short. It could have required the monitoring of issues concerning caste and gender, and demanded grievance mechanisms and transparency measures that account for social hierarchies.

Instead, a familiar gap remains between commitments to “decent work” on paper and the reality which exists on factory floors.

Missed opportunity

If the India-UK deal is to be more than a tariff-cutting exercise, protections around caste and gender must be central to its implementation.

The deal is rightly being celebrated in both countries as an economic milestone. For the UK, it promises more resilient supply chains and cheaper imports. For India, it offers renewed export growth and the prospect of more stable employment.

But the agreement’s long-term legitimacy will rest on whether it also delivers social justice.

India can use the deal to strengthen labour protections and ensure growth does not come at the cost of dignity and safety. The UK, as a major consumer market, can use its leverage to insist on enforceable standards for fair wages and decent work.

For trade deals do not simply move goods across borders – they shape the conditions under which those goods are produced.

The Conversation

Pankhuri Agarwal receives funding from the Leverhulme Trust as an Early Career Research Fellow.

ref. The India-UK trade deal is a prime opportunity to protect some of the world’s most vulnerable workers – https://theconversation.com/the-india-uk-trade-deal-is-a-prime-opportunity-to-protect-to-some-of-the-worlds-most-vulnerable-workers-274055