South Africa’s foreign policy is rooted in negotiation with all nations – a shifting global order makes this difficult

Source: The Conversation – Africa (2) – By Tinashe Sithole, Postdoctoral research fellow at the SARChI Chair: African Diplomacy and Foreign Policy, University of Johannesburg

Since Russia’s invasion of Ukraine in 2022, South Africa’s foreign policy has been under sustained international scrutiny.

Its stance on the war in Ukraine has been one of active non-alignment. This means it has called for negotiations while abstaining from UN resolutions condemning Russia. However, it decided to take Israel to the International Court of Justice over the Gaza conflict in December 2023.

To many observers, including US policymakers and international analysts, these decisions suggest uncertainty or inconsistency. However, a closer look suggests a different interpretation.

In my recent research, I show how South Africa’s negotiated transition to democracy has shaped a foreign policy tradition that prioritises mediation, multilateralism and non-alignment.

I argue that South Africa’s foreign policy since 1994, including the period after the 2024 election, has been shaped by more than political shifts.

Instead, its negotiated democratic transition experience continues to guide how the country understands conflict and cooperation. This is even as the costs of maintaining this approach rise in a more fragmented and competitive global order. I describe this trajectory as “idealism under strain” – a principle-based foreign policy maintained under growing external pressure.

As a middle power, South Africa exerts influence most effectively through international institutions. By working through the African Union (AU), the Southern African Development Community (SADC) and the UN, it helps broker agreements and shape regional and global agendas.

What has changed is the international environment. Global politics has become more polarised and more transactional. States are increasingly expected to take clear sides on major issues, from security alignments to trade relations. This shift has narrowed the space for diplomatic independence.

In this context, South Africa’s preference for dialogue and institutional process has become harder to sustain and easier to misinterpret. Positions that once appeared principled are now criticised as evasive or contradictory.

This matters because South Africa’s influence depends less on power and more on trust.

To remain effective, it needs to continue leading regional mediation and peace efforts and to apply its principles consistently. When its positions on international law or human rights appear selective, its credibility weakens. When they are consistent, its voice carries more weight.

Behind the choices

South Africa’s post-apartheid foreign policy reflects the negotiated nature of its democratic transition. The end of apartheid in 1994 came through compromise rather than a military victory. This experience shaped a preference for mediation over coercion; for dialogue over exclusion.

These preferences shaped the country’s early diplomatic engagements on the continent. In Burundi (1999-2003), the Democratic Republic of Congo (2002-2003) and Lesotho (1998 and again from 2014 to 2017), South Africa promoted negotiated political settlements and power-sharing arrangements rather than military solutions.

This history helps explain current policy choices, including the call for negotiations on Ukraine.

It also explains the contrast in how the country engages across crises. For example, in 2023 it brought the case against Israel at the International Court of Justice. In other situations, such as political tensions in Zimbabwe, it has relied on quiet diplomacy, working behind the scenes rather than openly criticising the government.

In my view, these decisions reflect an adaptation to constraint rather than inconsistency.

This pattern has persisted across administrations. Successive governments have sought to balance democratic values with geopolitical realities rather than abandon one in favour of the other.

What has changed is the level of external pressure under which this balance must now be maintained.

A changing world

A more polarised and transactional world has narrowed the space for diplomatic independence.

Pressure from the US to align with the west has become more explicit, particularly following the South Africa vs Israel case.

Tensions have also affected the trade relationship. In Washington, some lawmakers called for a review of South Africa’s eligibility for the African Growth and Opportunity Act (Agoa). The legislation provides duty-free access to the US market. The criticism reflected concerns that South Africa’s positions on Russia and Israel, and its broader stance of non-alignment, were increasingly seen as out of step with US foreign policy priorities.

South Africa’s Agoa benefits expired in September 2025 and were only renewed on 3 February 2026. The months of uncertainty highlighted the economic risks that can accompany geopolitical pressure.

For a country whose influence depends more on diplomacy and external partnerships, such signals matter. They show how the costs of maintaining diplomatic independence are rising in a more competitive international environment.

What needs to happen next

South Africa is unlikely to abandon its preference for mediation, multilateralism and non-alignment. The key challenge is how it sustains this approach as international pressure intensifies.

First, South Africa needs to use the institutions where it already has influence more actively. This means taking visible leadership roles in the AU and SADC, and continuing its involvement in UN peace missions. These platforms are the main channels through which the country exercises diplomatic influence.

Second, regional cooperation needs to result in coordinated action. Conflicts in places such as Mozambique or eastern Democratic Republic of Congo affect neighbouring states and cannot be managed by one country alone. Working with regional partners on joint mediation and shared responses helps avoid fragmented or competing interventions.

Third, consistency matters. When South Africa invokes international law, negotiated settlements or civilian protection, the same principles should guide its positions across different conflicts. Applying these standards evenly reduces accusations of selectivity and helps preserve trust in its role.

These priorities do not require a new foreign policy. They reflect the need to apply an existing approach more clearly and more consistently.

The Conversation

Tinashe Sithole does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. South Africa’s foreign policy is rooted in negotiation with all nations – a shifting global order makes this difficult – https://theconversation.com/south-africas-foreign-policy-is-rooted-in-negotiation-with-all-nations-a-shifting-global-order-makes-this-difficult-275033

Kizza Besigye: the firebrand who has shaped opposition politics in Uganda

Source: The Conversation – Africa (2) – By Barney Walsh, Senior Lecturer in Security, Leadership and Development Education, King’s College London

Uganda’s Kizza Besigye has been described as possibly the most arrested man in Africa. Besigye was once President Yoweri Museveni’s ally, and personal physician. He broke ranks with Museveni in 1999, and emerged as the most long-standing political opponent to the ageing president, who has run the country since 1986. For this, Besigye has been jailed, kept under house arrest, renditioned, forced into exile, and endured state violence countless times. He has been in jail since 2024. Barney Walsh and Dennis Jjuuko have studied Besigye’s remarkable political career.

Who is Kizza Besigye?

Kizza Besigye was born in Rukungiri district, south-western Uganda, in April 1956. After graduating with a degree in human medicine from Makerere University in 1980, he joined the National Resistance Army (NRA) rebellion, which dislodged the dictatorial rule of President Milton Obote in 1986.

Besigye served in different senior positions in Museveni’s new government, including minister of state for internal affairs and the president’s office. In 1993, he was appointed the army’s chief of logistics and engineering and later senior military adviser to the defence minister. He was part of the inner sanctum of the National Resistance Movement which became the civilian government.

Besigye remained close to Museveni until 1999, when he abandoned the ruling party. He said the movement had departed from its original principles, like democracy through free elections, security for all and eliminating corruption.

He believed Uganda needed liberation again, this time from a government he’d helped establish. This would define his life’s work.

Under the pressure group “Reform Agenda”, and later the political party Forum for Democratic Change, he was the leading contender against Museveni in successive presidential polls. He scored 27% of the vote in 2001, 37% in 2006, 26% in 2011, and 35% in 2016. The Ugandan supreme court acknowledged irregularities but refused to overturn the result in 2006.

After leaving the government, Besigye became the focal point for Ugandans wary of Museveni’s increasingly vicious authoritarianism. He was forced to flee to South Africa after the 2001 presidential elections. He has been brutalised, detained and charged numerous times. His younger brother died in 2007 from illness associated with incarceration on trumped-up conspiracy charges.

When the opposition one day takes the reins of power in Uganda, the debt it owes Besigye will be immense.

What are the highlights of this legacy?

Besigye, 69, stands out as the foremost opposition figure who was part of Museveni’s original Bush War victory. His 2011 “walk to work” protests, in response to dramatic fuel price rises and general inflation, will not be forgotten in the history of Uganda’s political economy.

Besigye seemed to think this civil action could be Uganda’s “Arab Spring” moment. Some mocked his efforts as a misreading of the socio-economic conditions in sub-Saharan Africa.

The protests were, indeed, subdued in the face of brutal repression by security agencies.

But similar protests would soon help remove Blaise Compaoré in Burkina Faso in 2014 and Robert Mugabe in Zimbabwe in 2017.

Besigye has developed credibility as someone trustworthy because of what he has been through. And he has a heartfelt connection with supporters.

His leadership has outlasted that of other opposition figures during Museveni’s administration, both in longevity and in consistency of vision for change. Other opposition leaders have emerged only fleetingly, failing to sustain any moral standing or coherent transformative vision.

As we argue in our recent paper, it is unclear whether opposition leader Robert Kyagulanyi (better known as Bobi Wine) could have emerged without Besigye laying the foundation and sustaining the momentum for change.

It’s important, too, to recognise his failings.

He is given to outbursts. His call for Chinese debts to be written off as odious was thought to alienate an essential development partner. His storming of what he described as a “rigging centre” during the 2016 election led to accusations of leading mobs to take over elections.

He is also partly to blame for the opposition’s failure, owing to competing egos and moral certitudes, to field a single unified candidate against Museveni. Besigye never managed to convince other opposition candidates to stand down in his favour (or to do the same for them).

Nevertheless, his individual role has been fundamental to the emergence of the idea and principle of peaceful opposition politics in Uganda in the post-1986 era.

This is not to be underestimated in a country which has yet to experience a peaceful change of government since independence in 1962.

What is the context in which you assess his legacy?

Uganda’s post-1986 political landscape has been dominated, and controlled, by Museveni. His most recent election victory in January 2026 will extend his reign beyond 40 years.

While his public popularity has been in decline, Museveni has relied on two things. First is the Ugandan political and military elite. Since the mid-2000s he has taken steps to protect his regime against a military coup by keeping influential military personnel on board.

Second is external support, mainly from western governments. This stems from Uganda’s involvement as a key security actor in the sub-region at the behest of western powers. This role has gradually been prioritised over the west’s pursuit of human rights.

Partly for these reasons, Besigye was never able to get the full backing of western donors to support his democratic goals. Instead they supported Museveni’s regime.

A lack of support for Besigye in western capitals was evident in 2024 when he was abducted while visiting Kenya and returned as a prisoner to Kampala. The episode drew barely any international condemnation or action – save for a belated push from US lawmakers.

This silence must be seen within a global context of democratic backsliding, including developments within President Donald Trump’s second term.

In east Africa, Kenya’s violent response to the 2024 Gen Z riots in Nairobi included state-led abductions and enforced disappearances targeting young people linked to the protest movement.

In Tanzania, the October 2025 presidential elections were also marked by human rights abuses, with protesters met with unjustified lethal force.

What next?

Besigye has not managed to shake up Museveni’s inner circle of corrupt powerbrokers. This is because his progressive democratic vision of change threatens their privileges.

Nor has he ever enjoyed the global profile he might have hoped for, of the kind Raila Odinga of Kenya or Morgan Tsvangirai of Zimbabwe achieved – even as Bobi Wine did briefly before the 2021 election.

But the idea of a free and fair election is now at least ingrained among Ugandans. In a February 2025 interview, Besigye revealed the lens through which his life’s work should be viewed:

We can only influence whether change happens quickly or is delayed, but change is inevitable. Sooner or later, Ugandans will take charge of their destiny and rebuild their country in a way that ensures equal opportunities for everyone.

If Besigye’s decades of sacrifice are to mean anything beyond retrospective praise, they demand engagement now, not memorialisation later. To remain silent is to collude in the slow erasure of a political life spent insisting that a truly democratic Uganda was a cause worth fighting for.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Kizza Besigye: the firebrand who has shaped opposition politics in Uganda – https://theconversation.com/kizza-besigye-the-firebrand-who-has-shaped-opposition-politics-in-uganda-275568

Developing lab-grown human cartilage… using apples!

Source: The Conversation – France – By Karim Boumédiene, Professeur de biochimie et biologie moléculaire, ingénierie tissulaire, Université de Caen Normandie

Astonishing but true: apples are one of the latest plant-based materials tissue engineers are experimenting with to create lab-grown human cartilage. Priscilla Du Preez/Unsplash, CC BY

A research lab at the University of Caen Normandy (France) has succeeded in making cartilage using decellularized apples.

The Bioconnect laboratory at the university, which I head, has just published a scientific paper in the Journal of Biological Engineering. In this original study, we used apples that had been “decellularized” (their cells removed) as a material, combined with human stem cells, to rebuild cartilage outside of the body, in laboratory Petri dishes.

The process is called tissue engineering. Its goal is to rebuild human tissue in the laboratory so it can be used at a later date as grafts to repair damaged parts of the body. The idea is to place a patient’s cells onto a supporting material and grow them in the right conditions so they can form tissues such as bone, muscle, or cartilage.

Many diseases and injuries damage or destroy tissues, which then require reconstruction. Among these are degenerative diseases where tissues slowly wither away over time, such as osteoarthritis (cartilage damage) or osteoporosis (bone loss). There is therefore a real need for tissue grafts.

However, finding healthy tissues for transplant is difficult because donors are rare and compatibility is a major issue. Tissue engineering offers an effective solution to this problem. Using the patient’s own cells when possible also avoids the risk of immune rejection.

Apples provide excellent scaffolding

Although scientists can easily grow cells in the lab, these cells do not naturally organise themselves into full and functional tissues. That is why they need a support material. These materials act like scaffolding, giving cells a structure so they can grow in three dimensions and form real tissue.

One approach is to use human tissues or organs that have been decellularized, leaving only their structure. New healthy cells can then be added. However, this method is limited by the lack of available human tissues. For about ten years now, plant-based tissues have also been decellularized and used as supports.

Several studies (including those conducted in our lab) have already explored different materials. But this work is the first in the world to rebuild cartilage using a plant-based support. The idea came from a Canadian study showing that decellularized apples are compatible with mammalian cells. Since cartilage is our specialty, we decided to apply this method.

Plant-based materials offer many advantages: they are widely available, very cheap, already shown to be compatible with living organisms, and easy to shape to match the form of the tissue to be repaired. This study is only a first step. More tests are needed, first in animals and then in humans, to understand how these tissues behave over time and how beneficial they are for patients.

Multiple application possibilities

Potential applications include joint cartilage repair (after injury or osteoarthritis), reconstruction of nasal cartilage (after trauma or cancer), and even ear cartilage. Overall, this research opens new possibilities in tissue engineering, both for reconstructive surgery and for reducing animal experimentation. Indeed, lab-grown tissues can also be used to better model disease and test treatments in so-called organoid models, which can reduce or even replace animal testing.

Finally, because plants are incredibly diverse, there is still a wealth of potential to explore. Future research will aim to identify which plants, or which parts of plants, are best suited for rebuilding specific human tissues. Other plants, such as celery, are already being studied.

The Conversation

Karim Boumédiene does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. Developing lab-grown human cartilage… using apples! – https://theconversation.com/developing-lab-grown-human-cartilage-using-apples-274438

3 generations of Black Philadelphia students report persistent anti-Black attitudes in schools

Source: The Conversation – USA (2) – By Leana Cabral, Researcher at the Consortium for Policy Research in Education, Teachers College, Columbia University

Over 70 years after Brown v. Board of Education, public schools in the U.S. remain deeply segregated. AP Photo/Phil Long

John Washington, now in his 50s, attended a public elementary and middle school in the Chestnut Hill neighborhood of Philadelphia and then went to a large magnet high school, a type of public school that has a selective admission process. As he has gotten older, he has understood that in the education system in Philadelphia, “the more things change, the more they stay the same.”

John was bused during the integration movement of the 1970s and graduated from high school in 1990. Back then, he recognized that his school was not as segregated as the schools his parents and grandparents had attended in Philadelphia. As a parent of three current students, however, he has noticed how racially segregated most of the schools in Philadelphia remain.

As research demonstrates, U.S. public schools in general are not more integrated than they were just after the Supreme Court’s Brown v. Board of Education ruling in 1954.

I am a sociologist whose research focuses on education, race and social inequality. For my dissertation research, I interviewed over 45 former and current Black students to learn about their intergenerational experiences in Philly public schools. “John” and the other names used in this article are pseudonyms to protect the privacy of the research participants.

Intergenerational research is underexplored within educational research. I wanted to understand how different generations of Black public school students in Philadelphia understood and experienced racial inequality, as well as how families’ memories and perspectives around schooling shape students’ educational journeys.

The people I interviewed ranged in age from 14 to 95, and all attended a Philadelphia public elementary or high school, or both. Across the generations, I heard both clear awareness of anti-Blackness and its presence in schools alongside an unyielding hope and vision for a better future.

As Naya, a 30-year-old former student from Germantown, put it, there is a “magic” in being Black. “You have to see what’s possible when nobody else can see it,” she said.

Black and white students sit together in an integrated classroom in Philadelphia in 1968. AP Photo

Anti-Blackness and American education

Historian Carter G. Woodson warned of the danger in allowing Black students to be treated as inferior within the educational system.

“There would be no lynching if it did not start in the classroom,” he wrote in his seminal book “The Mis-Education of the Negro,” published in 1933. “Why not exploit, enslave, or exterminate a class that everybody is taught to regard as inferior?”

Anti-Blackness is made visible in schools today through, to name just a few examples, the sanitization of the United States’ violent racial history, the school-to-prison pipeline, wrongful placements of Black students into special education or remedial classes, racial violence in schools and the ongoing disinvestment in and closures of majority-Black schools.

‘We weren’t troublemakers, we were just kids’

Several current and former students I interviewed said their parents taught them “they had to work twice as hard” as white students.

A former student who is now in her 30s shared how she understood the idea that “you have to continue to prove yourself in ways that white kids aren’t expected to … and that’s how supremacy shows up.”

I repeatedly heard from former and current students of all ages how they believed their white teachers held low expectations of Black students and did not challenge them academically.

“I honestly feel like there was a divide, there was less patience for us,” said Jazmine, who graduated from a Philadelphia public high school in 2003. “It was just so obvious, the difference in how the adults treated us, which in turn led to a lot of animosity with the children.”

Hank, who graduated from high school in 1981, said the low expectations his white teachers held limited students’ motivation. “We were just going through the motions,” he said. “You could definitely see a difference with the expectations of the Black teachers than many of the white teachers. And then if the white teachers had expectations, it was sterile. It wasn’t with the love that you felt from some of the Black teachers.”

Current high school students shared incidents of white teachers using racial epithets, including the n-word, and one saying, “You’re acting like a park ape.” Another teacher, a student shared, said slavery was in the past “and not connected to today.”

A recent graduate who attended a magnet middle school recalled being treated by her white teachers as “disposable.”

“I feel like the school actively tried to strip away a lot of my confidence, but not just for me, but also other Black kids,” she said. “It was the first place where I didn’t feel like my teachers thought that I was smart and capable.”

I repeatedly heard both current and former students describe white teachers treating them as if they were “criminals” and receiving harsher discipline and punishments than their non-Black peers, which research has long demonstrated. Students I spoke to described feeling degraded and “singled out” by white teachers – and even blamed for things they did not do.

For example, Naima, a current high school student, shared a painful memory from fourth grade when she had an older white teacher who kept a candy jar on her classroom desk. One afternoon, someone took many pieces of candy from the jar.

“And, of course, it was the white girl, but me and my other Black friend were the last people in her room that she saw walk out, so she assumed it was us,” Naima said. “She said, ‘You stole my candy jar. Y’all were the last people in there. I know y’all did it.’”

Naima could not believe they were accused because, as she explained, she and her friend “weren’t troublemakers, we were just kids.” Despite their innocence – and that they were only in the fourth grade – they were suspended.

Schools can be sites of both racial harm and affirmation for Black students. AP Photo/Matt Slocum

Experiences of affirmation too

Speaking to multiple generations of students provides unique insight into the ways in which Black students continue to experience racial harm and trauma in Philly public schools.

On the other hand, at some point in their schooling, many of the former students I spoke to were fortunate to also experience classrooms or schools that affirmed their Blackness and did instill in them a sense of pride.

However, this tended to happen only in majority Black schools where Black teachers were also in the majority.

Delise, who graduated in 2004, shared that at her elementary and high schools, “Blackness was a norm. It was the standard. … Black cultural norms and my identity was affirmed in that school.”

Black communities in Philadelphia have always resisted and mobilized for educational justice. Such efforts include the Black People’s Unity Movement, Philadelphia’s first Black Power political organization, in the 1960s and the many movements that have come since, as well as the creation of alternative educational spaces such as the freedom library, freedom schools, faith-based groups and other Black-led community and art spaces focused on Afrocentric history and curricula.

Former and current students are proud of this legacy.

“We have yet to grasp the significance of our experience as far as I’m concerned,” said James, a former student from North Philly who is now in his 80s, reflecting on Black communities’ resilience and resistance. “And when I look at how we have navigated, I mean, it’s just constant, man … and still we rise.”


The Conversation

Leana Cabral receives funding from Teachers College, Columbia University.

ref. 3 generations of Black Philadelphia students report persistent anti-Black attitudes in schools – https://theconversation.com/3-generations-of-black-philadelphia-students-report-persistent-anti-black-attitudes-in-schools-266439

Revisiting the story of Clementine Barnabet, a Black woman blamed for serial murders in the Jim Crow South

Source: The Conversation – USA (3) – By Lauren Nicole Henley, Assistant Professor of Leadership Studies, University of Richmond

A grainy photograph of Clementine Barnabet. A 1912 edition of The Atlanta Constitution newspaper via Wikimedia Commons

In April 1912, a young Black woman named Clementine Barnabet confessed to murdering four families in and around Lafayette, Louisiana. The widespread news coverage at the time effectively branded her a serial killer.

Her confession, however, did not align with the timeline of crimes that had gripped America’s rice belt region with fear. Even today, her guilt is debated.

From November 1909 until August 1912, an unknown assailant – or assailants – zigzagged across southwestern Louisiana and southeastern Texas. Many Black families were slaughtered in their homes under the cover of darkness. An ax – the telltale weapon – was almost always found in the bloody aftermath.

All but one of the scenes were located within a mile of the Southern Pacific Railroad’s Sunset Route. In each case, a mother and child were always among the victims. Evidence of additional weapons was often found nearby, suggesting a deliberate cruelty to the carnage.

Dubbed the “axman,” the unknown assailant eluded the authorities and terrified local Black communities.

Today, when scholars and laypeople alike discuss Clementine Barnabet, they oscillate between two extremes: portraying her as a fear-inducing, cult-leading Black female serial killer, or as an innocent young Black woman caught in circumstances beyond her control.

In more than a decade of researching Clementine Barnabet, I’ve been struck by how print media created overtly sensationalized accounts of the mythology of the axman and, by extension, the axwoman. Whether Barnabet committed the crimes she said she did – or any of the axman murders, for that matter – is irrelevant to the primary motive the media constructed for her fatal violence: religion.

Diverse faith traditions

In Jim Crow Louisiana, various expressions of faith were possible. The state’s history as a French colony – one that also practiced slavery – meant it was home to the largest percentage of Black Catholics in the United States.

A sketch supposedly depicting a Voodoo ceremony in Louisiana. Photos.com/Getty Images Plus

At the same time, religions like Voodoo, which originated in West Africa, reached the region on slave ships. Voodoo was not necessarily at odds with Catholicism; enslaved practitioners creatively adapted their ancestral faith to that of their enslavers.

Some displays of faith were not organized religions at all, but folkways. Hoodoo, for example, has West African origins, though it also draws upon European and Native American elements. Hoodoo practitioners – sometimes called doctors – and their clients often practice a religion, yet they also seek comfort in the supernatural possibilities of their craft.

This craft involves the physical manipulation of earthly elements such as graveyard dirt or plants like John the Conqueror root to achieve magical ends, often resulting in conjures – or ritual objects – needed to bring about desired goals. Conjures are believed to help people protect themselves, harm one’s adversaries, alter one’s circumstances, intervene in one’s relationships and more.

In their most powerful form, believers contend that conjures can bring about a person’s death.

For some believers, elements of Catholicism, Voodoo, Protestantism and hoodoo combine into syncretic faith practices. Incorporating multiple systems of belief has been an aspect of many Louisianans’ identities for generations. Most of the time, this blending of practices, ideologies and communities is depicted as a quirky – even “backward” – way to make sense of the world.

Yet during the axman’s reign in the early 1900s, a Black woman’s confession to murder was interpreted through the lens of religious deviance rather than diversity.

A timeline of events

When Barnabet confessed in April 1912, it was technically the second time she had done so. The first time was in November 1911, in the aftermath of the Randall family murder. Five members of the Randall family and their overnight guest had been brutally slaughtered in Lafayette, Louisiana, at the end of the month.

According to regional newspapers, Barnabet was in the crowd that had gathered near the Randall family’s home after the murders were discovered. Reportedly, she caught the attention of the local sheriff. Not only did she live near the slain, but, according to a New Orleans daily, the authorities found “her room saturated with blood and covered with human brains.”

Barnabet was given a “third degree” examination – meaning she was tortured – by the New Orleans Police Department, and then supposedly confessed that she had killed the Randalls because, according to a Midwestern newspaper, they “disobeyed the orders of the church.” That church would become a topic of scrutiny and sensationalism by regional lawmen and news outlets alike throughout much of 1912.

At that time, Barnabet is also said to have confessed to killing another family in Lafayette.

Thus, Barnabet had already been in jail for over four months before her springtime confession. Between January and March 1912, four more families had been axed to death between Crowley, Louisiana, and Glidden, Texas. In April, when Barnabet re-confessed, she added two more families to her victim roster.

In aggregate, the four families Barnabet confessed to killing had been slain between November 1909 and November 1911. Four more families had been murdered between her arrest and second confession, meaning she was in jail when they occurred. After her second confession and while she was still in custody, another three families were attacked with an ax, though for the first time, people survived the axman.

This convoluted timeline, in which more than half of the axman murders occurred after Barnabet had been apprehended, presented a challenge for investigators. They generally believed the crimes were related. Yet Barnabet could not have physically carried out the attacks in 1912.

To explain the continuation of the killings despite Barnabet’s incarceration, local lawmen leveraged the young woman’s own statements that had landed her in jail in the first place: that religion compelled her to murder.

It was this November 1911 confession that gave investigators the motive of religious fanaticism to attach to the axman crimes. Then, in January 1912, when the Broussards – another Black family – were murdered with an ax in Lake Charles, Louisiana, the local police found a Bible verse scrawled on their front door. This overtly religious symbol appeared roughly two months after Barnabet’s first confession and seemed to confirm her claims.

By April 1912, the idea of religiously motivated serial murder had been circulating in the rice belt region for months.

Hoodoo, conjures and sensationalism

Barnabet’s confession was transcribed by R. H. Broussard (no relation to the victims), a newspaper reporter for the “New Orleans Item,” in April 1912.

According to the report, Barnabet claimed that she and four friends purchased conjures from a local hoodoo doctor one evening while socializing. They paid the practitioner for his services. Supposedly, the group then used the charms to move about undetected while committing murder.

In both her November 1911 and April 1912 confessions, Barnabet offered faith-based motives, albeit different ones. In the first case, it was the victims who reportedly erred in their religious duties. In the second, it was Barnabet’s own belief in hoodoo that facilitated such carnage. White media outlets did not interpret either of these statements as evidence of the region’s deep history of diverse faith expressions.

Instead, they labeled Barnabet “a black borgia,” “the directing head of a fanatical cult,” and the “Priestess of [a] Colored Human Sacrifice Cult.”

Moreover, sensationalized news coverage labeled the church Barnabet mentioned as the “Sacrifice Church.” Not surprisingly, the press depicted it as a cult-like organization, portraying Barnabet as either a low-level member or the “high priestess.” Sometimes, news reports also conflated the Sacrifice Church with Voodoo, thereby criminalizing a legitimate West African-derived religion as a cult.

According to unsubstantiated media accounts, the so-called Sacrifice Church promoted human sacrifice to gain immortality. Simultaneously, newspapers treated the conjure Barnabet possessed as proof of her fanaticism, reporting her claim that the only reason she confessed was because she had lost her charm.

Combined, these selective – and sensational – interpretations of Barnabet’s supposed religious beliefs ignored the diverse spiritual practices that enriched life in the rice belt region.

Jim Crow and Black faith

I have yet to find evidence the Sacrifice Church existed. My research suggests the white press conflated the word “sacrifice” with the word “sanctified.” This might have been due, in part, to both sensationalism and ignorance.

Pentecostalism, a branch of evangelical Christianity that emphasizes baptism by the Holy Spirit and direct communication from God, started growing in popularity in the U.S. in the early 1900s. Many Pentecostal denominations call their adherents saints and their churches sanctified. Since sanctified churches were relatively new to Louisiana and some Pentecostal teachings – like speaking in tongues – challenged more mainstream Protestant doctrine, Pentecostalism might have contributed to the media’s reporting.

Although the Sacrifice Church may have simply been a linguistic error in reference to any number of sanctified churches in the rice belt, it is possible that Barnabet did indeed possess a conjure. The hoodoo doctor she accused of selling her and her comrades their charms was arrested and questioned by the Lafayette authorities. The statements he gave to the police aligned with hoodoo practices even as he denied knowing Barnabet or being involved in such folkways.

Given the variety of faith practices in Jim Crow Louisiana, it is possible both that Barnabet believed in her conjure and that sanctified churches were growing in popularity in the region. Whether she ever attended one is hard to know, just as the legitimacy of either confession is difficult to determine.

What is clear is that faith anchored the statements Barnabet made to the authorities. The other anchor, however, was murder. The consequences of how these events aligned reverberate in how Barnabet has been depicted.

Barnabet was front-page news in 1912. People knew her name, even as they debated her guilt. When she was convicted of murder, she was sentenced to life at the Louisiana State Penitentiary. A little over a decade later, she was released and disappeared from public view.

Today, however, no Black female serial killer occupies a similar place in America’s collective memory.

In recent years, there have been calls for a more serious acceptance of Black women’s experiences, knowledge and beliefs within the dominant culture. This shift also invites, I believe, a fresh look at Barnabet’s confessions and the crimes that were attributed to her.

The Conversation

Lauren Nicole Henley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Revisiting the story of Clementine Barnabet, a Black woman blamed for serial murders in the Jim Crow South – https://theconversation.com/revisiting-the-story-of-clementine-barnabet-a-black-woman-blamed-for-serial-murders-in-the-jim-crow-south-271298

Coffee crops are dying from a fungus with species-jumping genes – researchers are ‘resurrecting’ their genomes to understand how and why

Source: The Conversation – USA (2) – By Lily Peck, Postdoctoral Scholar in Evolutionary Biology, University of California, Los Angeles

For anyone who relies on coffee to start their day, coffee wilt disease may be the most important disease you’ve never heard of. This fungal disease has repeatedly reshaped the global coffee supply over the past century, with consequences that reach from African farms to cafe counters worldwide.

Infection with the fungus Fusarium xylarioides results in a characteristic “wilt” in coffee plants by blocking and reducing the plant’s ability to transport water. This blockage eventually kills the plant.

Some of the most destructive plant pathogens in the world infect their hosts in this way. Since the 1990s, outbreaks of coffee wilt have cost over US$1 billion, forced countless farms to close and caused dramatic drops in national coffee production. In Uganda, one of Africa’s largest producers, coffee production did not recover to pre-outbreak levels until 2020, decades after coffee wilt was first detected there. And in 2023, researchers found evidence that coffee wilt disease had resurfaced across all coffee-producing regions of Ivory Coast.

Studying the genetics of plant pathogens is crucial to understanding why this disease continues to return and how to prevent another major outbreak.

Rise and fall of coffee wilt disease in Africa

While early outbreaks of coffee wilt disease affected a wide range of coffee types, later epidemics primarily affected the two coffee species dominating global markets today: arabica and robusta.

First identified in 1927, coffee wilt disease decimated several varieties of coffee grown in western and central Africa. Although farmers combated the fungus with a shift to supposedly resistant robusta crops in the 1950s, the reprieve was short-lived.

The disease reemerged in the 1970s on robusta coffee, spreading through eastern and central Africa. By the mid-1990s, yields had collapsed and coffee production could not recover in countries like the Democratic Republic of Congo.

Separately, researchers identified the disease on arabica coffee in Ethiopia in the 1950s and watched it become widespread by the 1970s.

Coffee wilt disease has spread widely in Africa. The first outbreak before the 1950s affected mainly central and western Africa (left map) while the second outbreak originated in central Africa and spread east (right map). Affected countries are colored by the decade the disease was first detected.
Peck et al. 2023/Plant Pathology, CC BY-SA

Although coffee wilt disease is currently endemic at low and manageable levels across eastern and central Africa, any future resurgence of the disease could be catastrophic for African coffee production. Coffee wilt also poses a threat to producers in Asia and the Americas.

New types of disease emerge

Coffee wilt disease evolved alongside coffee itself. Over the past century, it has repeatedly reemerged, attacking different types of coffee each time. But did these shifts reflect the rapid evolution of new types of disease, or something else entirely?

Fungal disease has devastated plants for millennia, with the earliest records of outbreaks dating from the biblical plagues. Like humans, plants have an immune system that protects them against attacks from pathogens like fungi.

While most fungal attempts at infection fail, a small number do succeed thanks to the constant evolutionary pressure on pathogens to overcome host plant defenses. In this evolutionary arms race, pathogens and hosts continuously adapt to each other by genetically changing their DNA. Boom and bust cycles of disease occur as one gains advantage over the other.

The rise of modern agriculture has led to widespread monocultures of genetically uniform crops. While monocultures have significantly boosted food production, they have also contributed to environmental degradation and increased plant vulnerability to disease.

Crop breeders have attempted to protect monocultures by introducing disease resistance genes, with farms widely applying fungicides and other environmentally damaging products. But these relatively weak protections for hundreds of acres of identical plants have resulted in outbreaks decimating crops that people depend on.

It’s likely that modern agriculture’s reliance on monocultures has enabled and accelerated the evolution of new types of pathogen capable of overcoming resistance in plants. As a result, crops become more susceptible to disease outbreaks.

Resurrecting fungal strains

Understanding the lessons of the past is essential to avoiding future plant pandemics. But this can be challenging, because the specific pathogen strains that caused previous disease outbreaks may no longer exist in nature or may have changed substantially.

In my research on the evolutionary arms race between host and pathogen in coffee wilt disease, I sought to address these problems by “resurrecting” historical strains of the fungus that causes the disease, Fusarium xylarioides. Researchers know little about why the earlier and later outbreaks targeted different types of coffee, so I explored the genetic changes in F. xylarioides that underlie this narrowing of its hosts.

I reconstructed historical genetic changes in the major coffee wilt disease outbreaks over the past seven decades by using strains from fungus libraries – culture collections that preserve living fungi. These libraries store long-term living data and reflect the fungal genetic diversity present at the time of collection.

Gibberella (Fusarium) xylarioides, with arrow pointing to its spore-containing sac.
Julie Flood

Whether a pathogen takes the upper hand in the evolutionary arms race depends on its ability to generate new types of genes. It can do so either by changing and rearranging its DNA sequence or by moving DNA sequences between organisms in a process called horizontal gene transfer. These mechanisms can create new effector genes that enable pathogens to infect and colonize a host plant.

Initially, I sequenced six whole genomes of strains involved in outbreaks before the 1970s as well as later outbreaks that specifically targeted arabica or robusta coffee plants. I found that strains of F. xylarioides specific to arabica or robusta genetically differed from each other, with most of these differences inherited from parent to offspring. This process is called vertical inheritance.

Genes that jump between species

However, I also found that several regions of the F. xylarioides genome were potentially acquired horizontally from F. oxysporum, a global plant pathogen that infects over 120 crops, including bananas and tomatoes. These included different regions of the genome across strains specific to arabica and robusta coffee.

But did these changes introduce new effector genes in the F. xylarioides strains that infect arabica and robusta coffee plants specifically? To answer this question, I sequenced and assembled the first F. xylarioides reference genome, stitching together long stretches of DNA. I then sequenced the whole genomes of three more pre-1970s F. xylarioides strains, 10 additional historical Fusarium strains found on or around diseased coffee bushes, and F. xylarioides strains from infected arabica coffee seedlings, and compared them all to this reference genome.

I found substantial evidence for horizontal transfer of disease-causing genes between species of Fusarium. This includes the presence of giant genetic components called Starships in Fusarium. These so-called jumping genes carry their own molecular machinery, allowing them to move around or between genomes. Genes involved in adaptation, such as those linked to virulence, metabolism or host interaction, also move with them. Scientists think Starships may potentially enable fungi to adapt to changing environmental conditions.

I found that large and highly similar genetic regions, including Starships and active effector genes involved in disease, had moved from F. oxysporum to F. xylarioides. Importantly, different genetic regions were present across strains of F. xylarioides specific to arabica and robusta, but they were absent from other related Fusarium species. This suggests that these genes were gained from F. oxysporum.

Arming farmers with knowledge

Today, a third of all global crop yields are lost to pests and disease. Reconciling the tension between agricultural productivity and environmental protection is important to balance humanity’s needs for the future. Central to this challenge is reducing the spread of disease and new outbreaks.

In contrast to monocultures, the many plant species surrounding and within small and family-run coffee farms in sub-Saharan Africa may act as disease reservoirs, where fungal pathogens can lurk. These include banana trees and Solanum weeds in the tomato family that are susceptible to fungal infection.

Human farming practices may have inadvertently created an artificial niche for these fungi, with coffee bushes brought into widespread contact with banana plants and Solanum weeds. If fungi in the same genus can frequently exchange genetic material, it could accelerate the ability of plant pathogens to adapt to new hosts.

Balancing agricultural productivity with sustainability will ultimately benefit both crops and people.
Wayne Hutchinson/Farm Images/Universal Images Group via Getty Images

Testing noncoffee plants for F. xylarioides infection could reveal alternative plant species where different Fusarium fungi come into contact and exchange genetic material. This matters because across sub-Saharan Africa, coffee plants often share fields with banana trees and weeds. If these neighboring plants can harbor fungi that act as new sources of genetic variation, they may help fuel new disease strains.

Identifying the plants that can act as hosts to fungi could give farmers practical options to reduce coffee plants’ risk of disease, from targeted weed management to avoiding the planting of vulnerable crops side by side.

The Conversation

This research was supported by the Natural Environment Research Council. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of this essay or any manuscripts.

ref. Coffee crops are dying from a fungus with species-jumping genes – researchers are ‘resurrecting’ their genomes to understand how and why – https://theconversation.com/coffee-crops-are-dying-from-a-fungus-with-species-jumping-genes-researchers-are-resurrecting-their-genomes-to-understand-how-and-why-273997

In World War II’s dog-eat-dog struggle for resources, a Greenland mine launched a new world order

Source: The Conversation – USA (2) – By Thomas Robertson, Visiting Associate Professor of Environmental Studies, Macalester College

Greenland’s cryolite mine, essential for U.S. airplane production, was below sea level and vulnerable to Nazi sabotage. Reginald Wilcox, ca. 1941. Peary–MacMillan Arctic Museum, Bowdoin College

On April 9, 1940, Nazi tanks stormed into Denmark. A month later, they blitzed into Belgium, Holland and France. As Americans grew increasingly rattled by the spreading threat, a surprising place became crucial to U.S. national security: the vast, ice-capped island of Greenland.

The island, a Danish colony at the time, was rich in mineral resources. The Nazi invasions left it and several other European colonies as international orphans.

Greenland was essential for air bases as U.S. planes flew to Europe, and also for strategic minerals. Greenland’s Ivittuut (formerly Ivigtut) mine contained the world’s only reliable supply of the most important material you’ve probably never heard of: cryolite, a frosty white mineral that U.S. and Canadian industries relied on to refine bauxite into aluminum, making it essential to assembling a modern air force.

A month after the Nazis seized Denmark, five American Coast Guard cutters set sail for Greenland, in part to protect the Ivittuut mine from the Nazis.

This April 1941 drawing by famous political cartoonist Herbert L. Block, known as Herblock, was published shortly after Greenland became a de facto protectorate of the U.S.
A Herblock Cartoon, © The Herb Block Foundation

People sometimes forget that World War II was a dog-eat-dog struggle for resources – oil and uranium but also dozens of other materials, everything from rubber to copper. Without these strategic materials, no modern military could produce crucial new weapons such as tanks and airplanes. The resource struggle often started before actual fighting.

Foreign materials fueled American global power, but also raised tricky questions about access to resources and about sovereignty, just as the old European imperial order was being rethought. As in 2026, U.S. presidents had to skillfully balance force and diplomacy.

Walter H. Beech and Olive Ann Beech view wartime production lines at Beech Aircraft Corp. in Wichita, Kan., in 1942.
Courtesy of Wichita State University Libraries, Special Collections and University Archives. Walter H. and Olive Ann Beech Collection, wsu_ms97-02.3.9.1

As a historian at Macalester College, I research how Americans shape environments around the world through their purchasing and national security needs, and how foreign landscapes enable and constrain American actions. Today, control of Greenland’s natural resources is again on an American president’s radar as demand for critical minerals rises and supply tightens.

During the spring of 1940, America and its European allies mapped out patterns of resource use and ideas of global interconnection that would shape the international order for decades. Greenland helped give birth to this new order.

Rethinking American vulnerability

On May 16, 1940, President Franklin Roosevelt addressed a joint session of Congress, including many “America First” isolationists wary of European entanglements. Roosevelt implored Americans to wake up to new threats in the world – to, in his words, “recast their thinking about national protection.”

New weapons, he warned, had shrunk the world, and oceans could no longer shield the United States. The nation’s fate was inextricably tied to Europe’s. Nothing showed this better than Greenland: “From the fiords of Greenland,” FDR warned, “it is four hours by air to Newfoundland; five hours to Nova Scotia, New Brunswick and to the province of Quebec; and only six hours to New England.”

Richard Edes Harrison’s famous WWII maps in Fortune magazine, including this one from 1942, changed American understandings of vulnerability by highlighting short aerial routes. Dark areas are considered Axis, dotted areas pro-Axis neutral or Axis-occupied, red areas Allies and yellow areas neutral. Pink areas, including Greenland, were considered Allies-occupied.
Cornell University – PJ Mode Collection of Persuasive Cartography

But Greenland set off alarm bells for another reason. To protect itself in a dangerous world, Roosevelt famously called for the U.S. to hammer out 50,000 planes a year. But in 1938, America had produced only 1,800 planes.

To meet this ambitious goal, Roosevelt and his advisers knew that little could be done without Greenland. No Greenland, no cryolite. No cryolite, no massive American air force. Without cryolite, making 50,000 planes would be infinitely more difficult.

The age of alloys

Americans, National Geographic explained in 1942, lived in an “age of alloys.” Without aluminum alloys and other metallic mixtures, assembly lines churning out modern tanks, trucks and airplanes would grind to a halt. “More than any other struggle in history, this is a war of many metals, and the lack of a single one may be a blow far worse than the loss of a battle.”

Aluminum was crucial for modern militaries. Mechanics check an airplane engine at Naval Air Station Corpus Christi, Texas, in November 1942.
Fenno Jacobs/Department of Defense

Few materials mattered more than aluminum. Light yet strong, aluminum formed 60% of a heavy bomber’s engines, 90% of its wings and fuselage, and all of its propellers.

But there was a problem: Refining aluminum from bauxite ore required working with dangerously hot metallic mixtures, over 2,000 degrees Fahrenheit (1,100 degrees Celsius). Cryolite solved the problem by reducing the temperature to a more manageable 900 F (480 C).

The Nazis’ chemical industry had found a substitute for cryolite using fluorspar, but the U.S. preferred the more resource-efficient cryolite and wanted to prevent the Germans from having it.

After the Nazis seized Denmark

Just days after German tanks rolled into Denmark in April 1940, Allied officials huddled to devise ways to protect Ivittuut’s magical mineral. On May 3, Danish Ambassador to the U.S. Henrik de Kauffmann, risking trial for treason, requested American assistance. On May 10, the U.S. Coast Guard Cutter Comanche departed New England for Ivittuut. Four others soon followed, one with guns for the mine’s defenders.

The U.S. Coast Guard Cutter Comanche played a role in protecting Greenland mining operations starting long before the U.S. officially entered World War II.
Thomas B. MacMillan, Courtesy of Peary-MacMillan Arctic Museum, Bowdoin College

That very week in Washington, at a meeting of the Pan American Union, Roosevelt and his advisers spoke with hundreds of geologists and other representatives from Latin America — a resource-rich region that the U.S. saw as an answer to its strategic materials shortages.

Nervous about the history of U.S. imperial high-handedness in the region, some Latin Americans thought that their countries should seal off their resources to outside control, as Mexico had in nationalizing U.S. and European oil holdings in 1938.

A poster reading “America needs your scrap rubber” and noting uses, such as a heavy bomber needing 1,825 pounds of rubber.
Japan’s advances in Southeast Asia after Pearl Harbor cut off rubber from the Dutch East Indies and Malaysia, prompting a rush for rubber in the Amazon and the development of synthetics. World War II posters urged Americans to conserve rubber for the war effort.
U.S. Government Printing Office, Courtesy of Northwestern University Libraries

With European empires crumbling, Roosevelt faced a delicate diplomatic dance with Greenland. He wanted to maintain the appearance of neutrality, keep skeptical isolationists in Congress from revolting and give no provocations to Latin American anti-imperialists to cut off resources. Crucially, he also needed to avoid giving the resource-starved Japanese a legal justification to seize the oil-rich Dutch East Indies, now Indonesia – another European colony orphaned by the Nazi invasion.

Roosevelt’s solution: enlist Coast Guard “volunteers” to guard Ivittuut. By the end of the summer, long before the U.S. officially entered the war, 15 sailors resigned from their ships and took up residence near the mine.

Seeing Greenland as crucial to US security

Roosevelt also got creative with geography.

In an April 12, 1940, press conference, just days after the Nazi invasion, he began to emphasize Greenland as part of the Western Hemisphere, more American than European, and thus falling under Monroe Doctrine protections. To calm fears in Latin America, U.S. officials recast the doctrine as development-oriented hemispheric solidarity.

Maj. William S. Culbertson, a former U.S. trade official speaking before the Army Industrial College in fall 1940, noted how the scramble for resources pulled the U.S. into a form of nonmilitary warfare: “We are engaged at the present time in economic warfare with the totalitarian powers. Publicly, our politicians don’t state it quite as bluntly as that, but it is a fact.” For the rest of the century, the front line was just as likely a far-off mine as an actual battlefield.

On April 9, 1941, exactly a year after the Nazis seized Denmark, Kauffmann met with U.S. Secretary of State Cordell Hull to sign an agreement “on behalf of the King of Denmark” placing Greenland and its mines under the U.S. security blanket. At Narsarsuaq, on the island’s southern tip, the U.S. began constructing an airbase named “Bluie West One.”

An aerial view shows Bluie West One, a U.S. air base at Narsarsuaq, Greenland, in June 1942. Later, during the Cold War, the U.S. used Thule Air Base, now called Pituffik Space Base, in northwest Greenland as a key missile defense site because of its proximity to the USSR.
USAF Historical Research Agency

During the rest of World War II and throughout the Cold War, Greenland would house several important U.S. military installations, including some that forced Inuit families to relocate.

Critical minerals today

What transpired in Greenland in the 18 months before Pearl Harbor fit into a larger emerging pattern.

As the U.S. ascended to global leadership and realized that it couldn’t maintain military dominance without wide access to foreign materials, it began to redesign the global system of resource flows and the rules for this new international order.

A chart showing costs significantly higher for steel, aluminum and copper in the 1950s compared with the early 1940s.
A 1952 chart from the President’s Materials Policy Commission, established by President Harry Truman to study the security of U.S. raw materials during the Cold War. The group was commonly known as the Paley Commission.
Resources for Freedom: A Report to the President

It rejected the Axis’ “might makes right” territorial conquest for resources, but found other ways to guarantee American access to critical resources, including loosening trade restrictions in European colonies.

The U.S. provided a lifeline to the British with the destroyers-for-bases deal in September 1940 and the Lend-Lease Act in March 1941, but it also gained strategic military bases around the world. And it used aid as leverage to pry open the British Empire’s markets.

The result was a postwar world interconnected by trade and low tariffs, but also a global network of U.S. bases and alliances of sometimes questionable legitimacy designed in part to protect U.S. access to strategic resources.

President John F. Kennedy meets with Mobutu Sese Seko of the former Belgian Congo, now the Democratic Republic of Congo, at the White House in 1963. Starting in the 1940s, the African country provided the U.S. with cobalt and uranium, including for the Hiroshima bomb. CIA-supported coups in 1960 and 1965 helped put Mobutu, known for corruption, in power.
Keystone/Getty Images

During the Cold War, these global resources helped defeat the Soviet Union. However, these security imperatives also gave the U.S. license to support authoritarian regimes in places like Iran, Congo and Indonesia.

America’s voracious appetite for resources also often displaced local populations and Indigenous communities, justified by the old claim that they misused the resources around them. It left environmental damage from the Arctic to the Amazon.

Five white men standing on snow smile for the cameras with a Greenland village behind them.
Donald Trump’s son visited Greenland in 2025, shortly after the U.S. president began talking about wanting to control the island and its resources. The people with Donald Trump Jr., second from right, are wearing jackets reading ‘Trump Force One.’
Emil Stach/Ritzau Scanpix/AFP via Getty Images

Strategic resources have been at the center of the American-led global system for decades. But U.S. actions today are different. The cryolite mine was a working mine, rarer than today’s proposed critical mineral mines in Greenland, and the Nazi threat was imminent. Most important, Roosevelt knew how to gain what the U.S. needed without a “damn-what-the-world-thinks” military takeover.

The Conversation

Thomas Robertson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. In World War II’s dog-eat-dog struggle for resources, a Greenland mine launched a new world order – https://theconversation.com/in-world-war-iis-dog-eat-dog-struggle-for-resources-a-greenland-mine-launched-a-new-world-order-275630

New dietary guidelines prioritize ‘real food’ – but low-income pregnant women can’t easily obtain it

Source: The Conversation – USA (3) – By Bethany Barone Gibbs, Professor, Epidemiology and Biostatistics, West Virginia University

Most pregnant women in the U.S. aren’t meeting dietary recommendations, especially in rural communities. ArtistGNDphotography/Getty Images

The federal government’s message in its new Dietary Guidelines for Americans, released in January 2026, couldn’t be simpler: “Eat real food.”

But for pregnant women in rural America, that straightforward advice runs headlong into a harsh reality: Rural women have less access to healthy whole foods.

We are a public health professor and postdoctoral researcher who are working on the Pregnancy 24/7 Cohort Study at West Virginia University and the University of Iowa. The five-year observational study investigates how 24-hour behavioral patterns throughout pregnancy affect maternal and fetal health, including pregnancy complications.

Most pregnant women in the United States aren’t meeting dietary recommendations. This is especially true for women living in rural communities. In our recent study, 500 pregnant women recruited from university-affiliated clinics in Pennsylvania, West Virginia and Iowa reported their dietary habits during each trimester using a questionnaire.

About 1 in 5 participants lived in rural areas, as determined by a federal classification system that used the women’s home address. We found that pregnant women living in rural areas consumed more added sugars from sugar-sweetened beverages — about half a teaspoon more per day — than women living in urban areas. Rural women also consumed less fiber and ate fewer vegetables.

Research suggests less healthy dietary habits could be why rural pregnant women tend to have more pregnancy complications, such as preterm birth, gestational diabetes and hypertensive disorders.

Diets lacking adequate nutrition during pregnancy can not only lead to pregnancy complications, but can also result in obesity and diabetes. Left unaddressed, these nutrition gaps could perpetuate cycles of poor health across generations.

Poverty, not location, drives differences in pregnancy diets

Our study also assessed whether socioeconomic status influenced pregnant women’s diets in both rural and urban areas. West Virginia and Iowa site participants provided the majority of rural data.

There were 124 participants from Pittsburgh, and all but three were considered “urban” based on where they lived. Compared with rural participants across the three-state sample, urban women consumed significantly fewer added sugars from sugar-sweetened beverages in the first and second trimesters and had consistently higher fiber intake across pregnancy.

However, socioeconomic status in the Pittsburgh site emerged as the stronger predictor of diet quality: Participants with a low socioeconomic status – including those in Pittsburgh – consumed 1.29 to 1.49 more teaspoons per day of added sugars from sugar-sweetened beverages and 1.5 to 1.6 grams less fiber per day than their high socioeconomic status counterparts. The lower-income women also consumed 31 to 58 milligrams less calcium per day.

While Pittsburgh’s participants and urban participants at the other study sites fared better than their rural peers on some measures, income and education level were more strongly tied to diet quality than geography alone.

A pregnant woman sits in a clinic exam room while a health care provider talks to her.
For those with lower income or living in rural areas, adequate nutrition is harder to achieve.
Jason Connolly/AFP via Getty Images

About 20% of the U.S. population is rural. Pregnant women in these areas often travel long distances to access fresh produce and whole grains. The food outlets closer to home are often convenience stores, gas stations or dollar stores, which primarily sell processed, calorie-dense foods with lower nutritional value. Even when healthier options are available, they tend to cost more.

These less healthy dietary patterns are particularly concerning since pregnant women have greater dietary needs than women who are not pregnant. Low-income and rural women are often missing out on nutrients such as calcium, iron, folate and choline. Calcium supports bone development and is found in dairy, fortified plant milks and leafy greens. Iron and folate, found in beans, lentils and dark green vegetables, support the growing baby. Choline assists with brain and spinal cord development and can be found in eggs, beans and nuts.

Making ‘eat real food’ accessible

The new dietary guidelines have a few key messages for all adults, including instructions to eat whole and minimally processed foods, and to avoid sugar-sweetened beverages and highly processed foods.

Telling Americans to “eat real food” may seem like straightforward advice based on decades of research. But our study highlights that following this advice might be harder for some women during pregnancy. Pregnant women in rural and low-income communities could benefit from subsidies for fresh produce, or supplemental nutrition assistance.

A pregnant woman and a man place a bunch of bananas into a bag while shopping in a grocery store.
Meal planning and buying a mix of fresh, frozen and canned foods can reduce grocery bills.
Frazao Studio Latino/E+ via Getty Images

The USDA’s Shop Simple with MyPlate tool offers practical strategies for eating well on a budget. Planning meals for the week, avoiding impulse purchases and buying a mix of fresh, frozen and canned foods are cost-effective ways to accomplish this.

Frozen and canned fruits and vegetables – without added salt or sugar – are just as nutritious, last longer, often cost less than fresh produce and help reduce waste. Choosing water over sodas, buying whole grains like oatmeal and brown rice, and using low-cost protein sources such as beans, lentils and eggs can help stretch a grocery budget. This can also improve diet quality, and make a meaningful difference for both mom and baby.

The Conversation

Bethany Barone Gibbs receives funding from the National Institutes of Health and the American Heart Association.

Alex Crisp does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. New dietary guidelines prioritize ‘real food’ – but low-income pregnant women can’t easily obtain it – https://theconversation.com/new-dietary-guidelines-prioritize-real-food-but-low-income-pregnant-women-cant-easily-obtain-it-274576

Self-driving cars are poorly prepared for high-risk road situations – here’s how AI can improve them

Source: The Conversation – UK – By Mingming Liu, Assistant Professor, School of Electronic Engineering, Dublin City University

Andrey_Popov / Shutterstock

Self-driving cars have made impressive progress. They can follow lanes, keep their distance, and navigate familiar routes with ease. However, despite years of development, they still struggle with one critical problem: the rare and dangerous situations that cause the most serious accidents.

These “edge cases” include sharp bends on wet roads, sudden changes in slope, or situations where a vehicle approaches its physical limits of grip and stability. In real-world deployments, which often involve some level of shared control between driver and automation, such moments can arise from human misjudgment or from automated systems failing to anticipate rapidly changing conditions.

They happen infrequently, but when they occur, the consequences can be severe. A car might handle a thousand gentle curves perfectly, but fail on the one sharp bend taken a little too fast.

Current autonomous systems are not trained well enough to handle these moments reliably. From a data perspective, these events form what scientists call a “long tail”: they are statistically rare, but disproportionately important.

Collecting more real-world data does not fully solve the problem, because deliberately seeking out dangerous conditions is costly, slow, and risky. Many of these scenarios are simply too dangerous to practise in real life. We cannot deliberately put vehicles into near-crashes on public roads just to see whether the software can cope. If an AI system rarely sees extreme situations during training, it has little chance to respond well when they occur in real life.

In the current fleet of self-driving cars, a human in a control centre is often at hand to intervene if something goes wrong. But to achieve fully driverless cars, researchers need to find ways of effectively training AI systems to handle high-risk situations.

Our research team at Dublin City University, working with colleagues at the University of Birmingham, has been tackling this gap.

We have developed a virtual “proving ground” that uses generative AI to safely create rare, high-risk driving scenarios, allowing vehicles to learn from them without putting anyone in danger. Instead of waiting for rare events to happen naturally, we can teach an AI model to create realistic but challenging driving scenarios on demand, including ones that push vehicles close to their physical limits.

Practising safely

The generative AI used in our system learns from real driving data and then produces new, realistic scenarios. Crucially, it does not just reproduce typical roads and speeds.

It focuses deliberately on the most demanding situations, including sharp curves, steep slopes and high speeds, combined in ways that challenge both human drivers and automated systems. This allows us to expand the range of situations a vehicle can experience during training, without ever leaving the simulator.
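One simple way to picture this deliberate bias toward demanding conditions is importance-weighted sampling: instead of drawing road scenarios uniformly from realistic ranges, the generator accepts high-risk combinations more often. The sketch below is a minimal illustration of that idea only; the parameter ranges, the toy risk score and the acceptance rule are invented for the example and are not the authors' actual model.

```python
import random

# Hypothetical parameter ranges for a generated road scenario
# (invented for illustration, not taken from the study).
CURVATURE = (0.0, 0.2)   # 1/metres; higher = sharper bend
SLOPE = (-0.1, 0.1)      # road gradient
SPEED = (10.0, 40.0)     # metres per second

def risk_score(curvature, slope, speed):
    """Toy risk heuristic: sharp bends taken at speed dominate the score."""
    return curvature * speed + 2.0 * abs(slope)

# Maximum possible score, used to normalise the acceptance probability.
MAX_RISK = CURVATURE[1] * SPEED[1] + 2.0 * SLOPE[1]

def sample_scenario(rng, bias=5.0):
    """Rejection-sample scenarios, accepting riskier ones more often."""
    while True:
        c = rng.uniform(*CURVATURE)
        s = rng.uniform(*SLOPE)
        v = rng.uniform(*SPEED)
        r = risk_score(c, s, v)
        # Acceptance probability grows with risk, skewing the training
        # set toward the "long tail" of dangerous situations.
        if rng.random() < min(1.0, bias * r / MAX_RISK):
            return {"curvature": c, "slope": s, "speed": v, "risk": r}

rng = random.Random(0)
batch = [sample_scenario(rng) for _ in range(1000)]
```

A batch generated this way has a noticeably higher average risk score than a uniform one, which is the whole point: the rare, dangerous corner of the scenario space is over-represented during training.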

In effect, the car can “practise” dangerous situations safely, repeatedly and systematically. However, the goal of our work is not to replace the human driver entirely. Instead, we focus on human–machine shared driving: a partnership in which the car and the driver support each other.

Humans are very good at intuition, anticipation and adapting to unfamiliar situations. Machines excel at fast reactions and precise control. Shared driving aims to combine these strengths. In our system, control is continuously adjusted depending on risk.

When the road is straight and safe, the driver remains firmly in charge, but when the system detects a high-risk situation, such as a sharp bend that the driver may be approaching too quickly, it smoothly increases the level of automated assistance to help stabilise the vehicle. Importantly, this is not a sudden takeover. The transition is gradual and adaptive, designed to feel natural rather than intrusive.

To evaluate the system, we went beyond pure simulation. We used a driver-in-the-loop platform, where real people sit in a high-fidelity driving simulator and interact with the AI in real time. The results were encouraging. Less experienced drivers benefited most: when they struggled on complex or winding roads, the system provided timely support, reducing the risk of losing control.

At the same time, the system avoided unnecessary intervention during safe driving, helping drivers feel more engaged rather than overridden. Overall, this adaptive approach led to safer, smoother driving compared with fixed or overly conservative control strategies. It also allows both the human driver and the AI to improve their handling of extreme road situations.

Autonomous vehicles are often judged by how well they handle routine driving, but public trust will ultimately depend on how they behave when things go wrong. By using generative AI to train vehicles on rare but critical scenarios, we can expose weaknesses early, improve decision making, and build systems that are better prepared for the real world.

Just as importantly, by keeping humans in the loop, we can design automation that supports drivers rather than replacing them outright. Fully driverless cars may still be some way off, but smarter training systems like this can help bridge the gap by making both human-driven and automated vehicles safer on today’s roads.

The Conversation

Mingming Liu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Self-driving cars are poorly prepared for high-risk road situations – here’s how AI can improve them – https://theconversation.com/self-driving-cars-are-poorly-prepared-for-high-risk-road-situations-heres-how-ai-can-improve-them-275841

How sailing voyages can inspire the next generation of ocean scientists and advocates

Source: The Conversation – UK – By Pamela Buchan, Research Fellow, Geography, University of Exeter

Setting sail from the busy port of Plymouth in Devon, the tall ship Pelican of London takes young people to sea, often for the first time.

During each nine-day voyage, the UK-based sailing trainees, who often come from socio-economically challenging backgrounds, become crew members. They not only learn the ropes (literally) but also engage in ocean science and stewardship activities.

As marine and outdoor education researchers, we wanted to find out whether mixing sail training and Steams (science, technology, engineering, art, mathematics and sustainability) activities can inspire young people to pursue a more ocean-focused career, and a long-term commitment to ocean care.

Research shows that a strong connection with the ocean can drive people to be active marine citizens. This means they take responsibility for ocean health not only in their own lives but as advocates for more sustainable interactions with the ocean.

Over the past year, we have worked with Charly Braungardt, head scientist with the charity Pelican of London, to create a new theory of how sail training with Steams activities can change the paths that trainees pursue.

Based on scientific evidence, our theory of change models how Steams activities can cause positive changes in personal development and knowledge and understanding of the ocean (known as ocean literacy). It shows how the voyages can develop trainees’ strong connections with the ocean and encourage them to act responsibly towards it.

Tracking change

Surveys with the participants before and after the voyage, and six months later, measure any changes that occur – and how these persist. Through our evaluation, we’re exploring how combining voyages with Steams activities can go beyond personal development to produce deep, long-lasting effects.

Our pilot study has already shown how the sail training and Steams combination helps to develop confidence, ocean literacy and ocean connections.

For example, the boost to self-esteem and feelings of capability that occur on board help young people develop their marine identity – the ocean becomes an important part of a person’s sense of who they are. As one trainee put it: “I think the ocean is me and the ocean will and forever be part of me.”


Swimming, sailing, even just building a sandcastle – the ocean benefits our physical and mental wellbeing. Curious about how a strong coastal connection helps drive marine conservation, scientists are diving in to investigate the power of blue health.

This article is part of a series, Vitamin Sea, exploring how the ocean can be enhanced by our interaction with it.


As crew members, trainees access a world and traditional culture largely unknown to them before the voyage. They learn to live with others in a confined space, working together in small teams to keep watch on 24-hour rotas.

Trainees are encouraged to step out of their comfort zone through activities such as climbing the rigging and swimming off the vessel. Our pilot evaluation found the voyages built the trainees’ confidence and social skills, boosting self-esteem and feelings of capability.

One trainee said: “I’ve felt pretty disappointed in myself not committing to my education or only doing something with minimal effort. But after this voyage, I want to give it my all.”




Read more: Five ways to inspire ocean connection: reflections from my 40-year marine ecology career


The Steams voyages encourage the development of scientific skills and ocean literacy through the lens of creative tasks at sea. These activities are led by a scientist-in-residence who provides mentoring and introduces research techniques.

The voyage gives trainees the opportunity to use scientific equipment, ranging from plankton nets and microscopes to cutting-edge technology such as remotely operated vehicles. The Steams activities introduce marine research as a potential career to these young people. One said they wanted to train as a marine engineer at nautical college following the voyage.

Ocean experiences provide a foundation for ocean connection. Trainees experience the ocean in sunshine and in gales, day and night, rolling with the waves and observing marine life in its natural environment.

Citizen science projects such as wildlife surveys and recorded beach cleans also develop their ocean stewardship knowledge and skills. One trainee explained how they have “become more interested [in] our marine life and creative ways to help protect it”.

Over the next 12 months, the information we collect from the voyages will help us to better understand the benefits and to fill an important marine social science data gap concerning young people. It is important to understand how to develop young people’s relationships with the ocean, and the knowledge and skills that will empower the next generation of marine citizens.

As one trainee put it: “Being out on the Pelican showed me how vast and powerful the sea is – and how important it is to respect and care for it.”




The Conversation

Pamela Buchan received funding from Economic and Social Research Council for the research cited in this article. The sail training evaluation project received funding from Sail Training International. We would like to thank Charlotte Braungardt for her contribution to this project.

Alun Morgan is affiliated with the Pelican of London as an Ambassador for the organisation

ref. How sailing voyages can inspire the next generation of ocean scientists and advocates – https://theconversation.com/how-sailing-voyages-can-inspire-the-next-generation-of-ocean-scientists-and-advocates-273715