3 generations of Black Philadelphia students report persistent anti-Black attitudes in schools

Source: The Conversation – USA (2) – By Leana Cabral, Researcher at the Consortium for Policy Research in Education, Teachers College, Columbia University

Over 70 years after Brown v. Board of Education, public schools in the U.S. remain deeply segregated. AP Photo/Phil Long

John Washington, now in his 50s, attended a public elementary and middle school in the Chestnut Hill neighborhood of Philadelphia and then went to a large magnet high school, a type of public school that has a selective admission process. As he has gotten older, he has understood that in the education system in Philadelphia, “the more things change, the more they stay the same.”

John was bused during the integration movement of the 1970s and graduated from high school in 1990. Back then, he recognized that his school was not as segregated as the schools his parents and grandparents had attended in Philadelphia. As a parent of three current students, however, he has noticed how racially segregated most of the schools in Philadelphia remain.

As research demonstrates, U.S. public schools in general are not more integrated than they were just after the Supreme Court’s Brown v. Board of Education ruling in 1954.

I am a sociologist whose research focuses on education, race and social inequality. For my dissertation research, I interviewed over 45 former and current Black students to learn about their intergenerational experiences in Philly public schools. “John” and the other names used in this article are pseudonyms to protect the privacy of the research participants.

Intergenerational research is underexplored within educational research. I wanted to understand how different generations of Black public school students in Philadelphia understood and experienced racial inequality, as well as how families’ memories and perspectives around schooling shape students’ educational journeys.

The people I interviewed ranged in age from 14 to 95, and all attended a Philadelphia public elementary or high school, or both. Across the generations, I heard a clear awareness of anti-Blackness and its presence in schools, alongside an unyielding hope and vision for a better future.

As Naya, a 30-year-old former student from Germantown, put it, there is a “magic” in being Black. “You have to see what’s possible when nobody else can see it,” she said.

Black-and-white photo of elementary school children seated at desks in classroom
Black and white students sit together in an integrated classroom in Philadelphia in 1968.
AP Photo

Anti-Blackness and American education

Historian Carter G. Woodson warned of the danger in allowing Black students to be treated as inferior within the educational system.

“There would be no lynching if it did not start in the classroom,” he wrote in his seminal book “The Mis-Education of the Negro,” published in 1933. “Why not exploit, enslave, or exterminate a class that everybody is taught to regard as inferior?”

Anti-Blackness is made visible in schools today through, to name just a few examples, the sanitization of the United States’ violent racial history, the school-to-prison pipeline, wrongful placements of Black students into special education or remedial classes, racial violence in schools and the ongoing disinvestment in and closures of majority-Black schools.

‘We weren’t troublemakers, we were just kids’

Several current and former students I interviewed said their parents taught them “they had to work twice as hard” as white students.

A former student who is now in her 30s shared how she understood the idea that “you have to continue to prove yourself in ways that white kids aren’t expected to … and that’s how supremacy shows up.”

I repeatedly heard from former and current students of all ages how they believed their white teachers held low expectations of Black students and did not challenge them academically.

“I honestly feel like there was a divide, there was less patience for us,” said Jazmine, who graduated from a Philadelphia public high school in 2003. “It was just so obvious, the difference in how the adults treated us, which in turn led to a lot of animosity with the children.”

Hank, who graduated from high school in 1981, said the low expectations his white teachers held limited students’ motivation. “We were just going through the motions,” he said. “You could definitely see a difference with the expectations of the Black teachers than many of the white teachers. And then if the white teachers had expectations, it was sterile. It wasn’t with the love that you felt from some of the Black teachers.”

Current high school students shared incidents of white teachers using racial epithets, including the n-word, and one saying, “You’re acting like a park ape.” Another teacher, a student shared, said slavery was in the past “and not connected to today.”

A recent graduate who attended a magnet middle school recalled being treated by her white teachers as “disposable.”

“I feel like the school actively tried to strip away a lot of my confidence, but not just for me, but also other Black kids,” she said. “It was the first place where I didn’t feel like my teachers thought that I was smart and capable.”

I repeatedly heard both current and former students describe white teachers treating them as if they were “criminals” and subjecting them to harsher discipline and punishments than their non-Black peers – a pattern research has long demonstrated. Students I spoke to described feeling degraded and “singled out” by white teachers – and even blamed for things they did not do.

For example, Naima, a current high school student, shared a painful memory from fourth grade when she had an older white teacher who kept a candy jar on her classroom desk. One afternoon, someone took many pieces of candy from the jar.

“And, of course, it was the white girl, but me and my other Black friend were the last people in her room that she saw walk out, so she assumed it was us,” Naima said. “She said, ‘You stole my candy jar. Y’all were the last people in there. I know y’all did it.’”

Naima could not believe they were accused because, as she explained, she and her friend “weren’t troublemakers, we were just kids.” Despite their innocence – and that they were only in the fourth grade – they were suspended.

High school students wearing backpacks shown walking down a school hallway
Schools can be sites of both racial harm and affirmation for Black students.
AP Photo/Matt Slocum

Experiences of affirmation too

Speaking to multiple generations of students provides unique insight into the ways in which Black students continue to experience racial harm and trauma in Philly public schools.

On the other hand, many of the former students I spoke to were fortunate, at some point in their schooling, to experience classrooms or schools that affirmed their Blackness and instilled in them a sense of pride.

However, this tended to happen only in majority-Black schools where Black teachers were also in the majority.

Delise, who graduated in 2004, shared that at her elementary and high schools, “Blackness was a norm. It was the standard. … Black cultural norms and my identity was affirmed in that school.”

Black communities in Philadelphia have always resisted and mobilized for educational justice. Such efforts include the Black People’s Unity Movement, Philadelphia’s first Black Power political organization, in the 1960s and the many movements that have come since, as well as the creation of alternative educational spaces such as the freedom library, freedom schools, faith-based groups and other Black-led community and art spaces focused on Afrocentric history and curricula.

Former and current students are proud of this legacy.

“We have yet to grasp the significance of our experience as far as I’m concerned,” said James, a former student from North Philly who is now in his 80s, reflecting on Black communities’ resilience and resistance. “And when I look at how we have navigated, I mean, it’s just constant, man … and still we rise.”


The Conversation

Leana Cabral receives funding from Teachers College, Columbia University.

ref. 3 generations of Black Philadelphia students report persistent anti-Black attitudes in schools – https://theconversation.com/3-generations-of-black-philadelphia-students-report-persistent-anti-black-attitudes-in-schools-266439

Revisiting the story of Clementine Barnabet, a Black woman blamed for serial murders in the Jim Crow South

Source: The Conversation – USA (3) – By Lauren Nicole Henley, Assistant Professor of Leadership Studies, University of Richmond

A grainy photograph of Clementine Barnabet. A 1912 edition of The Atlanta Constitution newspaper via Wikimedia Commons

In April 1912, a young Black woman named Clementine Barnabet confessed to murdering four families in and around Lafayette, Louisiana. The widespread news coverage at the time effectively branded her a serial killer.

Her confession, however, did not align with the timeline of crimes that had gripped America’s rice belt region with fear. Even today, her guilt is debated.

From November 1909 until August 1912, an unknown assailant – or assailants – zigzagged across southwestern Louisiana and southeastern Texas. Many Black families were slaughtered in their homes under the cover of darkness. An ax – the telltale weapon – was almost always found in the bloody aftermath.

All but one of the scenes were located within a mile of the Southern Pacific Railroad’s Sunset Route. In each case, a mother and child were always among the victims. Evidence of additional weapons was often found nearby, suggesting a deliberate cruelty to the carnage.

Dubbed the “axman,” the unknown assailant eluded the authorities and terrified local Black communities.

Today, when scholars and laypeople alike discuss Clementine Barnabet, they oscillate between two extremes: portraying her as a fear-inducing, cult-leading Black female serial killer, or as an innocent young Black woman caught in circumstances beyond her control.

In more than a decade of researching Clementine Barnabet, I’ve been struck by how print media created overtly sensationalized accounts of the mythology of the axman and, by extension, the axwoman. Whether Barnabet committed the crimes she said she did – or any of the axman murders, for that matter – is irrelevant to the primary motive the media constructed for her fatal violence: religion.

Diverse faith traditions

In Jim Crow Louisiana, various expressions of faith were possible. The state’s history as a French colony – one that also practiced slavery – meant it was home to the largest percentage of Black Catholics in the United States.

A black-and-white sketch depicts people walking and sitting around a square cloth on the ground, with small items arranged on it.
A sketch supposedly depicting a Voodoo ceremony in Louisiana.
Photos.com/Getty images plus

At the same time, religions like Voodoo, which originated in West Africa, reached the region on slave ships. Voodoo was not necessarily at odds with Catholicism; enslaved practitioners creatively adapted their ancestral faith to that of their enslavers.

Some displays of faith were not organized religions at all, but folkways. Hoodoo, for example, has West African origins, though it also draws upon European and Native American elements. Hoodoo practitioners – sometimes called doctors – and their clients often practice a religion, yet they also seek comfort in the supernatural possibilities of their craft.

This craft involves the physical manipulation of earthly elements such as graveyard dirt or plants like John the Conqueror root to achieve magical ends, often resulting in conjures – or ritual objects – needed to bring about desired goals. Conjures are believed to help people protect themselves, harm one’s adversaries, alter one’s circumstances, intervene in one’s relationships and more.

In their most powerful form, believers contend that conjures can bring about a person’s death.

For some believers, elements of Catholicism, Voodoo, Protestantism and hoodoo combine into syncretic faith practices. Incorporating multiple systems of beliefs has been an aspect of many Louisianans’ identities for generations. Most of the time, this blending of practices, ideologies and communities is depicted as a quirky – even “backward” – way to make sense of the world.

Yet during the axman’s reign in the early 1900s, a Black woman’s confession to murder was interpreted through the lens of religious deviance rather than diversity.

A timeline of events

When Barnabet confessed in April 1912, it was technically the second time she had done so. The first time was in November 1911, in the aftermath of the Randall family murder. Five members of the Randall family and their overnight guest had been brutally slaughtered in Lafayette, Louisiana, at the end of that month.

According to regional newspapers, Barnabet was in the crowd that had gathered near the Randall family’s home after the murders were discovered. Reportedly, she caught the attention of the local sheriff. Not only did she live near the slain, but, according to a New Orleans daily, the authorities found “her room saturated with blood and covered with human brains.”

Barnabet was given a “third degree” examination – meaning she was tortured – by the New Orleans Police Department, and then supposedly confessed that she had killed the Randalls because, according to a Midwestern newspaper, they “disobeyed the orders of the church.” That church would become a topic of scrutiny and sensationalism by regional lawmen and news outlets alike throughout much of 1912.

At that time, Barnabet is also said to have confessed to killing another family in Lafayette.

Thus, Barnabet had already been in jail for over four months before her springtime confession. Between January and March 1912, four more families had been axed to death between Crowley, Louisiana, and Glidden, Texas. In April, when Barnabet re-confessed, she added two more families to her victim roster.

In aggregate, the four families Barnabet confessed to killing had been slain between November 1909 and November 1911. Four more families had been murdered between her arrest and second confession, meaning she was in jail when they occurred. After her second confession and while she was still in custody, another three families were attacked with an ax, though for the first time, people survived the axman.

This convoluted timeline, in which more than half of the axman murders occurred after Barnabet had been apprehended, presented a challenge for investigators. They generally believed the crimes were related. Yet Barnabet could not have physically carried out the attacks in 1912.

To explain the continuation of the killings despite Barnabet’s incarceration, local lawmen leveraged the young woman’s own statements that had landed her in jail in the first place: that religion compelled her to murder.

It was this November 1911 confession that gave investigators the motive of religious fanaticism to attach to the axman crimes. Then, in January 1912, when the Broussards – another Black family – were murdered with an ax in Lake Charles, Louisiana, the local police found a Bible verse scrawled on their front door. This overtly religious symbol appeared roughly two months after Barnabet’s first confession and seemed to confirm her claims.

By April 1912, the idea of religiously motivated serial murder had been circulating in the rice belt region for months.

Hoodoo, conjures, and sensationalism

Barnabet’s confession was transcribed by R. H. Broussard (no relation to the victims), a newspaper reporter for the “New Orleans Item,” in April 1912.

According to the report, Barnabet claimed that she and four friends purchased conjures from a local hoodoo doctor one evening while socializing. They paid the practitioner for his services. Supposedly, the group then used the charms to move about undetected while committing murder.

In both her November 1911 and April 1912 confessions, Barnabet offered faith-based motives, albeit different ones. In the first case, it was the victims who reportedly erred in their religious duties. In the second, it was Barnabet’s own belief in hoodoo that facilitated such carnage. White media outlets did not interpret either of these statements as evidence of the region’s deep history of diverse faith expressions.

Instead, they labeled Barnabet “a black borgia,” “the directing head of a fanatical cult,” and the “Priestess of [a] Colored Human Sacrifice Cult.”

Moreover, sensationalized news coverage labeled the church Barnabet mentioned as the “Sacrifice Church.” Not surprisingly, the press depicted it as a cult-like organization, portraying Barnabet as either a low-level member or the “high priestess.” Sometimes, news reports also conflated the Sacrifice Church with Voodoo, thereby criminalizing a legitimate West African-derived religion as a cult.

According to unsubstantiated media accounts, the so-called Sacrifice Church promoted human sacrifice to gain immortality. Simultaneously, newspapers treated the conjure Barnabet possessed as proof of her fanaticism, reporting her claim that the only reason she confessed was because she had lost her charm.

Combined, these selective – and sensational – interpretations of Barnabet’s supposed religious beliefs ignored the diverse spiritual practices that enriched life in the rice belt region.

Jim Crow and Black faith

I have yet to find evidence the Sacrifice Church existed. My research suggests the white press conflated the word “sacrifice” with the word “sanctified.” This might have been due, in part, to both sensationalism and ignorance.

Pentecostalism, a branch of evangelical Christianity that emphasizes baptism by the Holy Spirit and direct communication from God, started growing in popularity in the U.S. in the early 1900s. Many Pentecostal denominations call their adherents saints and their churches sanctified. Since sanctified churches were relatively new to Louisiana and some Pentecostal teachings – like speaking in tongues – challenged more mainstream Protestant doctrine, Pentecostalism might have contributed to the media’s reporting.

Although the Sacrifice Church may have simply been a linguistic error in reference to any number of sanctified churches in the rice belt, it is possible that Barnabet did indeed possess a conjure. The hoodoo doctor she accused of selling her and her comrades their charms was arrested and questioned by the Lafayette authorities. The statements he gave to the police aligned with hoodoo practices even as he denied knowing Barnabet or being involved in such folkways.

Given the variety of faith practices in Jim Crow Louisiana, it is possible both that Barnabet believed in her conjure and that sanctified churches were growing in popularity in the region. Whether she ever attended one is hard to know, just as the legitimacy of either confession is difficult to determine.

What is clear is that faith anchored the statements Barnabet made to the authorities. The other anchor, however, was murder. The consequences of how these events aligned reverberate in how Barnabet has been depicted.

Barnabet was front-page news in 1912. People knew her name, even as they debated her guilt. When she was convicted of murder, she was sentenced to life at the Louisiana State Penitentiary. A little over a decade later, she was released and disappeared from public view.

Today, however, no Black female serial killer occupies a similar place in America’s collective memory.

In recent years, there have been calls for a more serious acceptance of Black women’s experiences, knowledge and beliefs within the dominant culture. This shift also invites, I believe, a fresh look at Barnabet’s confessions and the crimes that were attributed to her.

The Conversation

Lauren Nicole Henley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Revisiting the story of Clementine Barnabet, a Black woman blamed for serial murders in the Jim Crow South – https://theconversation.com/revisiting-the-story-of-clementine-barnabet-a-black-woman-blamed-for-serial-murders-in-the-jim-crow-south-271298

Coffee crops are dying from a fungus with species-jumping genes – researchers are ‘resurrecting’ their genomes to understand how and why

Source: The Conversation – USA (2) – By Lily Peck, Postdoctoral Scholar in Evolutionary Biology, University of California, Los Angeles

For anyone who relies on coffee to start their day, coffee wilt disease may be the most important disease you’ve never heard of. This fungal disease has repeatedly reshaped the global coffee supply over the past century, with consequences that reach from African farms to cafe counters worldwide.

Infection with the fungus Fusarium xylarioides results in a characteristic “wilt” in coffee plants by blocking and reducing the plant’s ability to transport water. This blockage eventually kills the plant.

Some of the most destructive plant pathogens in the world infect their hosts in this way. Since the 1990s, outbreaks of coffee wilt have cost over US$1 billion, forced countless farms to close and caused dramatic drops in national coffee production. In Uganda, one of Africa’s largest producers, coffee production did not recover to pre-outbreak levels until 2020, decades after coffee wilt was first detected there. And in 2023, researchers found evidence that coffee wilt disease had resurfaced across all coffee-producing regions of Ivory Coast.

Studying the genetics of plant pathogens is crucial to understanding why this disease continues to return and how to prevent another major outbreak.

Rise and fall of coffee wilt disease in Africa

While early outbreaks of coffee wilt disease affected a wide range of coffee types, later epidemics primarily affected the two coffee species dominating global markets today: arabica and robusta.

First identified in 1927, coffee wilt disease decimated several varieties of coffee grown in western and central Africa. Although farmers combated the fungus with a shift to supposedly resistant robusta crops in the 1950s, the reprieve was short-lived.

The disease reemerged in the 1970s on robusta coffee, spreading through eastern and central Africa. By the mid-1990s, yields had collapsed and coffee production could not recover in countries like the Democratic Republic of Congo.

Separately, researchers identified the disease on arabica coffee in Ethiopia in the 1950s and watched it become widespread by the 1970s.

Two side-by-side maps of Africa with several regions highlighted to indicate coffee wilt disease outbreaks
Coffee wilt disease has spread widely in Africa. The first outbreak before the 1950s affected mainly central and western Africa (left map) while the second outbreak originated in central Africa and spread east (right map). Affected countries are colored by the decade the disease was first detected.
Peck et al 2023/Plant Pathology, CC BY-SA

Although coffee wilt disease is currently endemic at low and manageable levels across eastern and central Africa, any future resurgence of the disease could be catastrophic for African coffee production. Coffee wilt also poses a threat to producers in Asia and the Americas.

New types of disease emerge

Coffee wilt disease evolved alongside coffee itself. Over the past century, it has repeatedly reemerged, attacking different types of coffee each time. But did these shifts reflect the rapid evolution of new types of disease, or something else entirely?

Fungal disease has devastated plants for millennia, with the earliest records of outbreaks dating from the biblical plagues. Like humans, plants have an immune system that protects them against attacks from pathogens like fungi.

While most fungal attempts at infection fail, a small number do succeed thanks to the constant evolutionary pressure on pathogens to overcome host plant defenses. In this evolutionary arms race, pathogens and hosts continuously adapt to each other by genetically changing their DNA. Boom and bust cycles of disease occur as one gains advantage over the other.

The rise of modern agriculture has led to widespread monocultures of genetically uniform crops. While monocultures have significantly boosted food production, they have also contributed to environmental degradation and increased plant vulnerability to disease.

Crop breeders have attempted to protect monocultures by introducing disease resistance genes, with farms widely applying fungicides and other environmentally damaging products. But these relatively weak protections for hundreds of acres of identical plants have resulted in outbreaks decimating crops that people depend on.

It’s likely that modern agriculture’s reliance on monocultures has enabled and accelerated the evolution of new types of pathogen capable of overcoming resistance in plants. As a result, crops become more susceptible to disease outbreaks.

Resurrecting fungal strains

Understanding the lessons of the past is essential to avoiding future plant pandemics. But this can be challenging, because the specific pathogen strains that caused previous disease outbreaks may no longer exist in nature or may have changed substantially.

In my research on the evolutionary arms race between host and pathogen in coffee wilt disease, I sought to address these problems by “resurrecting” historical strains of the fungus that causes the disease, Fusarium xylarioides. Researchers know little about why the earlier and later outbreaks targeted different types of coffee, so I explored the genetic changes in F. xylarioides that underlie this narrowing of its hosts.

I reconstructed historical genetic changes in the major coffee wilt disease outbreaks over the past seven decades by using strains from a fungus library – culture collections that preserve living fungi. These libraries store long-term living data and reflect the fungal genetic diversity present at the time of collection.

Microscopy image of blue fuzzy sphere with long extensions
Gibberella (Fusarium) xylarioides, with arrow pointing to its spore-containing sac.
Julie Flood

Whether a pathogen takes the upper hand in the evolutionary arms race depends on its ability to generate new types of genes. It can do so either by changing and rearranging its DNA sequence or by moving DNA sequences between organisms in a process called horizontal gene transfer. These mechanisms can create new effector genes that enable pathogens to infect and colonize a host plant.

Initially, I sequenced six whole genomes of strains involved in outbreaks before the 1970s as well as later outbreaks that specifically targeted arabica or robusta coffee plants. I found that strains of F. xylarioides specific to arabica or robusta genetically differed from each other, with most of these differences inherited from parent to offspring. This process is called vertical inheritance.

Genes that jump between species

However, I also found that several regions of the F. xylarioides genome were potentially acquired horizontally from F. oxysporum, a global plant pathogen that infects over 120 crops, including bananas and tomatoes. These included different regions of the genome across strains specific to arabica and robusta coffee.

But did these changes introduce new effector genes in the F. xylarioides strains that infect arabica and robusta coffee plants specifically? To answer this question, I first sequenced and assembled the first F. xylarioides reference genome, stitching together long stretches of DNA. I then sequenced and compared this reference genome to the whole genomes of three more pre-1970s F. xylarioides strains and 10 additional historical Fusarium strains found on or around diseased coffee bushes, as well as F. xylarioides strains from infected arabica coffee seedlings.

I found substantial evidence for horizontal transfer of disease-causing genes between species of Fusarium. This includes the presence of giant genetic components called Starships in Fusarium. These so-called jumping genes carry their own molecular machinery, allowing them to move around or between genomes. Genes involved in adaptation, such as those linked to virulence, metabolism or host interaction, also move with them. Scientists think Starships may potentially enable fungi to adapt to changing environmental conditions.

I found that large and highly similar genetic regions, including Starships and active effector genes involved in disease, had moved from F. oxysporum to F. xylarioides. Importantly, different genetic regions were present across strains of F. xylarioides specific to arabica and robusta, but they were absent from other related Fusarium species. This suggests that these genes were gained from F. oxysporum.

Arming farmers with knowledge

Today, a third of all global crop yields are lost to pests and disease. Reconciling the tension between agricultural productivity and environmental protection is important to balance humanity’s needs for the future. Central to this challenge is reducing the spread of disease and new outbreaks.

In contrast to monocultures, the many plant species surrounding and within small, family-run coffee farms in sub-Saharan Africa may act as disease reservoirs where fungal pathogens can lurk. These include banana trees and Solanum weeds in the tomato family that are susceptible to fungal infection.

Human farming practices may have inadvertently created an artificial niche for these fungi, with coffee bushes brought into widespread contact with banana plants and Solanum weeds. If fungi in the same genus can frequently exchange genetic material, it could accelerate the ability of plant pathogens to adapt to new hosts.

Close-up of someone's hand full of unshelled coffee beans, colored red, yellow and dark brown
Balancing agricultural productivity with sustainability will ultimately benefit both crops and people.
Wayne Hutchinson/Farm Images/Universal Images Group via Getty Images

Testing noncoffee plants for F. xylarioides infection could reveal alternative plant species where different Fusarium fungi come into contact and exchange genetic material. This matters because across sub-Saharan Africa, coffee plants often share fields with banana trees and weeds. If these neighboring plants can harbor fungi that act as new sources of genetic variation, they may help fuel new disease strains.

Identifying the plants that can act as hosts to fungi could give farmers practical options to reduce coffee plants’ risk of disease, from targeted weed management to avoiding the planting of vulnerable crops side by side.

The Conversation

This research was supported by the Natural Environment Research Council. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of this essay or any manuscripts.

ref. Coffee crops are dying from a fungus with species-jumping genes – researchers are ‘resurrecting’ their genomes to understand how and why – https://theconversation.com/coffee-crops-are-dying-from-a-fungus-with-species-jumping-genes-researchers-are-resurrecting-their-genomes-to-understand-how-and-why-273997

In World War II’s dog-eat-dog struggle for resources, a Greenland mine launched a new world order

Source: The Conversation – USA (2) – By Thomas Robertson, Visiting Associate Professor of Environmental Studies, Macalester College

Greenland’s cryolite mine, essential for U.S. airplane production, was below sea level and vulnerable to Nazi sabotage. Reginald Wilcox, ca. 1941. Peary–MacMillan Arctic Museum, Bowdoin College

On April 9, 1940, Nazi tanks stormed into Denmark. A month later, they blitzed into Belgium, Holland and France. As Americans grew increasingly rattled by the spreading threat, a surprising place became crucial to U.S. national security: the vast, ice-capped island of Greenland.

The island, a Danish colony at the time, was rich in mineral resources. The Nazi invasions left it and several other European colonies as international orphans.

Greenland was essential for air bases as U.S. planes flew to Europe, and also for strategic minerals. Greenland’s Ivittuut (formerly Ivigtut) mine contained the world’s only reliable supply of the most important material you’ve probably never heard of: cryolite, a frosty white mineral that U.S. and Canadian industries relied upon to refine bauxite into aluminum, making it essential to assembling a modern air force.

A month after the Nazis seized Denmark, five American Coast Guard cutters set sail for Greenland, in part to protect the Ivittuut mine from the Nazis.

An illustration of Uncle Sam pounding a sign into Greenland labeled 'Keep Out!' with a tiny drawing of Adolf Hitler on the horizon.
This April 1941 drawing by famous political cartoonist Herbert L. Block, known as Herblock, was published shortly after Greenland became a de facto protectorate of the U.S.
A Herblock Cartoon, © The Herb Block Foundation

People sometimes forget that World War II was a dog-eat-dog struggle for resources – oil and uranium but also dozens of other materials, everything from rubber to copper. Without these strategic materials, no modern military could produce crucial new weapons such as tanks and airplanes. The resource struggle often started before actual fighting.

Foreign materials fueled American global power, but also raised tricky questions about access to resources and about sovereignty, just as the old European imperial order was being rethought. As in 2026, U.S. presidents had to skillfully balance force and diplomacy.

Two people look over a production line with dozens of military aircraft in a large building.
Walter H. Beech and Olive Ann Beech view wartime production lines at Beech Aircraft Corp. in Wichita, Kan., in 1942.
Courtesy of Wichita State University Libraries, Special Collections and University Archives. Walter H. and Olive Ann Beech Collection, wsu_ms97-02.3.9.1

As a historian at Macalester College, I research how Americans shape environments around the world through their purchasing and national security needs, and how foreign landscapes enable and constrain American actions. Today, control of Greenland’s natural resources is again on an American president’s radar as demand for critical minerals rises and supply tightens.

During the spring of 1940, America and its European allies mapped out patterns of resource use and ideas of global interconnection that would shape the international order for decades. Greenland helped give birth to this new order.

Rethinking American vulnerability

On May 16, 1940, President Franklin Roosevelt addressed a joint session of Congress, including many “America First” isolationists wary of European entanglements. Roosevelt implored Americans to wake up to new threats in the world – to, in his words, “recast their thinking about national protection.”

New weapons, he warned, had shrunk the world, and oceans could no longer shield the United States. The nation’s fate was inextricably tied to Europe’s. Nothing showed this better than Greenland: “From the fiords of Greenland,” FDR warned, “it is four hours by air to Newfoundland; five hours to Nova Scotia, New Brunswick and to the province of Quebec; and only six hours to New England.”

A 1942 map of the world at war and which countries were on which side.
Richard Edes Harrison’s famous WWII maps in Fortune magazine, including this one from 1942, changed American understandings of vulnerability by highlighting short aerial routes. Dark areas are considered Axis, dotted areas pro-Axis neutral or Axis-occupied, red areas Allies and yellow areas neutral. Pink areas, including Greenland, were considered Allies-occupied.
Cornell University – PJ Mode Collection of Persuasive Cartography

But Greenland set off alarm bells for another reason. To protect itself in a dangerous world, Roosevelt famously called for the U.S. to hammer out 50,000 planes a year. But in 1938, America had produced only 1,800 planes.

To meet this ambitious goal, Roosevelt and his advisers knew that little could be done without Greenland. No Greenland, no cryolite. No cryolite, no massive American air force. Without cryolite, making 50,000 planes would be infinitely more difficult.

The age of alloys

Americans, National Geographic explained in 1942, lived in an “age of alloys.” Without aluminum alloys and other metallic mixtures, assembly lines churning out modern tanks, trucks and airplanes would grind to a halt. “More than any other struggle in history, this is a war of many metals, and the lack of a single one may be a blow far worse than the loss of a battle.”

Two military mechanics work on the propeller engine of an aircraft.
Aluminum was crucial for modern militaries. Mechanics check an airplane engine at Naval Air Station Corpus Christi, Texas, in November 1942.
Fenno Jacobs/Department of Defense

Few materials mattered more than aluminum. Light yet strong, aluminum formed 60% of a heavy bomber’s engines, 90% of its wings and fuselage, and all of its propellers.

But there was a problem: Refining aluminum from bauxite ore required working with dangerously hot metallic mixtures, over 2,000 degrees Fahrenheit (1,100 degrees Celsius). Cryolite solved the problem by reducing the temperature to a more manageable 900 F (480 C).

The Nazis’ chemical industry had found a substitute for cryolite using fluorspar, but the U.S. preferred the more resource-efficient cryolite and wanted to prevent the Germans from having it.

After the Nazis seized Denmark

Just days after German tanks rolled into Denmark in April 1940, Allied officials huddled to devise ways to protect Ivittuut’s magical mineral. On May 3, Danish Ambassador to the U.S. Henrik de Kauffmann, risking trial for treason, requested American assistance. On May 10, the U.S. Coast Guard Cutter Comanche departed New England for Ivittuut. Four others soon followed, one with guns for the mine’s defenders.

A Coast Guard cutter and Army freighter off Greenland.
The U.S. Coast Guard Cutter Comanche played a role in protecting Greenland mining operations starting long before the U.S. officially entered World War II.
Thomas B. MacMillan, Courtesy of Peary-MacMillan Arctic Museum, Bowdoin College

That very week in Washington, at a meeting of the Pan American Union, Roosevelt and his advisers spoke with hundreds of geologists and other representatives from Latin America — a resource-rich region that the U.S. saw as an answer to its strategic materials shortages.

Nervous about the history of U.S. imperial high-handedness in the region, some Latin Americans thought that their countries should seal off their resources to outside control, as Mexico had in nationalizing U.S. and European oil holdings in 1938.

A poster reading ‘America needs your scrap rubber’ and noting uses, such as that a heavy bomber needs 1,825 pounds of rubber.
Japan’s advances in Southeast Asia after Pearl Harbor cut off rubber from the Dutch East Indies and Malaysia, prompting a rush for rubber in the Amazon and the development of synthetics. World War II posters urged Americans to conserve rubber for the war effort.
U.S. Government Printing Office, Courtesy of Northwestern University Libraries

With European empires crumbling, Roosevelt faced a delicate diplomatic dance with Greenland. He wanted to maintain the appearance of neutrality, keep skeptical isolationists in Congress from revolting and give no provocations to Latin American anti-imperialists to cut off resources. Crucially, he also needed to avoid giving the resource-starved Japanese a legal justification to seize the oil-rich Dutch East Indies, now Indonesia – another European colony orphaned by the Nazi invasion.

Roosevelt’s solution: enlist Coast Guard “volunteers” to guard Ivittuut. By the end of the summer, long before the U.S. officially entered the war, 15 sailors resigned from their ships and took up residence near the mine.

Seeing Greenland as crucial to US security

Roosevelt also got creative with geography.

In an April 12, 1940, press conference, just days after the Nazi invasion, he began to emphasize Greenland as part of the Western Hemisphere, more American than European, and thus falling under Monroe Doctrine protections. To calm fears in Latin America, U.S. officials recast the doctrine as development-oriented hemispheric solidarity.

Maj. William S. Culbertson, a former U.S. trade official speaking before the Army Industrial College in fall 1940, noted how the scramble for resources pulled the U.S. into a form of nonmilitary warfare: “We are engaged at the present time in economic warfare with the totalitarian powers. Publicly, our politicians don’t state it quite as bluntly as that, but it is a fact.” For the rest of the century, the front line was just as likely a far-off mine as an actual battlefield.

On April 9, 1941, exactly a year after the Nazis seized Denmark, Kauffmann met with U.S. Secretary of State Cordell Hull to sign an agreement “on behalf of the King of Denmark” placing Greenland and its mines under the U.S. security blanket. At Narsarsuaq, on the island’s southern tip, the U.S. began constructing an airbase named “Bluie West One.”

A photo from a plane of an airbase surrounded by mountains with glaciers above – in June.
An aerial view shows Bluie West One, a U.S. air base at Narsarsuaq, Greenland, in June 1942. Later, during the Cold War, the U.S. used Thule Air Base, now called Pituffik Space Base, in northwest Greenland as a key missile defense site because of its proximity to the USSR.
USAF Historical Research Agency

During the rest of World War II and throughout the Cold War, Greenland would house several important U.S. military installations, including some that forced Inuit families to relocate.

Critical minerals today

What transpired in Greenland in the 18 months before Pearl Harbor fit into a larger emerging pattern.

As the U.S. ascended to global leadership and realized that it couldn’t maintain military dominance without wide access to foreign materials, it began to redesign the global system of resource flows and the rules for this new international order.

A chart showing costs significantly higher for steel, aluminum and copper in the 1950s compared with the early 1940s.
A 1952 chart from the President’s Materials Policy Commission, established by President Harry Truman to study the security of U.S. raw materials during the Cold War. The group was commonly known as the Paley Commission.
Resources for Freedom: A Report to the President

It rejected the Axis’ “might makes right” territorial conquest for resources, but found other ways to guarantee American access to critical resources, including loosening trade restrictions in European colonies.

The U.S. provided a lifeline to the British with the destroyers-for-bases deal in September 1940 and the Lend-Lease Act in March 1941, but it also gained strategic military bases around the world. It also used aid as leverage to pry open the British Empire’s markets.

The result was a postwar world interconnected by trade and low tariffs, but also a global network of U.S. bases and alliances of sometimes questionable legitimacy designed in part to protect U.S. access to strategic resources.

Two men, one in military uniform, stand in front of a White House door talking.
President John F. Kennedy meets with Mobutu Sese Seko of the former Belgian Congo, now the Democratic Republic of Congo, at the White House in 1963. Starting in the 1940s, the African country provided the U.S. with cobalt and uranium, including for the Hiroshima bomb. CIA-supported coups in 1960 and 1965 helped put Mobutu, known for corruption, in power.
Keystone/Getty Images

During the Cold War, these global resources helped defeat the Soviet Union. However, these security imperatives also gave the U.S. license for support of authoritarian regimes in places like Iran, Congo and Indonesia.

America’s voracious appetite for resources also often displaced local populations and Indigenous communities, justified by the old claim that they misused the resources around them. It left environmental damage from the Arctic to the Amazon.

Five white men standing on snow smile for the cameras with a Greenland village behind them.
Donald Trump’s son visited Greenland in 2025, shortly after the U.S. president began talking about wanting to control the island and its resources. The people with Donald Trump Jr., second from right, are wearing jackets reading ‘Trump Force One.’
Emil Stach/Ritzau Scanpix/AFP via Getty Images

Strategic resources have been at the center of the American-led global system for decades. But U.S. actions today are different. The cryolite e was an operating mine, unlike today’s merely proposed critical mineral mines in Greenland, and the Nazi threat was imminent. Most important, Roosevelt knew how to gain what the U.S. needed without a “damn-what-the-world-thinks” military takeover.

The Conversation

Thomas Robertson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. In World War II’s dog-eat-dog struggle for resources, a Greenland mine launched a new world order – https://theconversation.com/in-world-war-iis-dog-eat-dog-struggle-for-resources-a-greenland-mine-launched-a-new-world-order-275630

New dietary guidelines prioritize ‘real food’ – but low-income pregnant women can’t easily obtain it

Source: The Conversation – USA (3) – By Bethany Barone Gibbs, Professor, Epidemiology and Biostatistics, West Virginia University

Most pregnant women in the U.S. aren’t meeting dietary recommendations, especially in rural communities. ArtistGNDphotography/Getty Images

The federal government’s message in its new Dietary Guidelines for Americans, released in January 2026, couldn’t be simpler: “Eat real food.”

But for pregnant women in rural America, that straightforward advice runs headlong into a harsh reality: Rural women have less access to healthy whole foods.

We are a public health professor and postdoctoral researcher who are working on the Pregnancy 24/7 Cohort Study at West Virginia University and the University of Iowa. The five-year observational study investigated how 24-hour behavioral patterns throughout pregnancy affected maternal and fetal health, including pregnancy complications.

Most pregnant women in the United States aren’t meeting dietary recommendations. This is especially true for women living in rural communities. In our recent study, 500 pregnant women recruited from university-affiliated clinics in Pennsylvania, West Virginia and Iowa reported their dietary habits during each trimester using a questionnaire.

About 1 in 5 participants lived in rural areas, as determined by a federal classification system that used the women’s home address. We found that pregnant women living in rural areas consumed more added sugars from sugar-sweetened beverages — about half a teaspoon more per day — than women living in urban areas. Rural women also consumed less fiber and ate fewer vegetables.

Research suggests less healthy dietary habits could be why rural pregnant women tend to have more pregnancy complications, such as preterm birth, gestational diabetes and hypertensive disorders.

Diets lacking adequate nutrition during pregnancy can not only lead to pregnancy complications, but also result in obesity and diabetes. Left unaddressed, these nutrition gaps could perpetuate cycles of poor health across generations.

Poverty, not location, drives differences in pregnancy diets

Our study also assessed whether socioeconomic status influenced pregnant women’s diets in both rural and urban areas. West Virginia and Iowa site participants provided the majority of rural data.

There were 124 participants from Pittsburgh, and all but three were considered “urban” based on where they lived. Compared to rural participants across the three-state sample, urban women consumed significantly fewer added sugars from sugar-sweetened beverages in the first and second trimesters and had consistently higher fiber intake across pregnancy.

However, socioeconomic status emerged as the stronger predictor of diet quality: Participants with a low socioeconomic status – including those in Pittsburgh – consumed 1.29 to 1.49 more teaspoons per day of added sugars from sugar-sweetened beverages and 1.5 to 1.6 grams less fiber per day than their high socioeconomic status counterparts. The lower-income women also consumed 31 to 58 milligrams less calcium per day.

While Pittsburgh’s participants and urban participants at the other study sites fared better than their rural peers on some measures, income and education level were more strongly tied to diet quality than geography alone.

A pregnant woman sits in a clinic exam room while a health care provider talks to her.
For those with lower income or living in rural areas, adequate nutrition is harder to achieve.
Jason Connolly/AFP via Getty Images

About 20% of the U.S. population is rural. Pregnant women in these areas often travel long distances to access fresh produce and whole grains. The food outlets closer to home are often convenience stores, gas stations or dollar stores, which primarily sell processed, calorie-dense foods with lower nutritional value. Even when healthier options are available, they tend to cost more.

These less healthy dietary patterns are particularly concerning since pregnant women have greater dietary needs than women who are not pregnant. Low-income and rural women are often missing out on nutrients such as calcium, iron, folate and choline. Calcium supports bone development and is found in dairy, fortified plant milks and leafy greens. Iron and folate, found in beans, lentils and dark green vegetables, support the growing baby. Choline assists with brain and spinal cord development and can be found in eggs, beans and nuts.

Making ‘eat real food’ accessible

The new dietary guidelines have a few key messages for all adults, including instructions to eat whole and minimally processed foods, and to avoid sugar-sweetened beverages and highly processed foods.

Telling Americans to “eat real food” may seem like straightforward advice based on decades of research. But our study highlights that following this advice might be harder for some women during pregnancy. Pregnant women in rural and low-income communities could benefit from subsidies for fresh produce, or supplemental nutrition assistance.

A pregnant woman and a man place a bunch of bananas into a bag while shopping in a grocery store.
Meal planning and buying a mix of fresh, frozen and canned foods can reduce grocery bills.
Frazao Studio Latino/E+ via Getty Images

The USDA’s Shop Simple with MyPlate tool offers practical strategies for eating well on a budget. Planning meals for the week, avoiding impulse purchases and buying a mix of fresh, frozen and canned foods are cost-effective ways to accomplish this.

Frozen and canned fruits and vegetables – without added salt or sugar – are just as nutritious, last longer, often cost less than fresh produce and help reduce waste. Choosing water over sodas, buying whole grains like oatmeal and brown rice, and using low-cost protein sources such as beans, lentils and eggs can help stretch a grocery budget. This can also improve diet quality, and make a meaningful difference for both mom and baby.

The Conversation

Bethany Barone Gibbs receives funding from the National Institutes of Health and the American Heart Association.

Alex Crisp does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. New dietary guidelines prioritize ‘real food’ – but low-income pregnant women can’t easily obtain it – https://theconversation.com/new-dietary-guidelines-prioritize-real-food-but-low-income-pregnant-women-cant-easily-obtain-it-274576

Warming winters are disrupting the hidden world of fungi – the result can shift mountain grasslands to scrub

Source: The Conversation – USA (2) – By Stephanie Kivlin, Associate Professor of Ecology, University of Tennessee

Warmer winters in normally snowy places can interfere with the important activities of microbes in the soil. Seogi/500px via Getty Images

When you look out across a snowy winter landscape, it might seem like nature is fast asleep. Yet, under the surface, tiny organisms are hard at work, consuming the previous year’s dead plant material and other organic matter.

These soil microorganisms – Earth’s recyclers – liberate nutrients that will act as fertilizer once grasses and other plants wake up with the spring snowmelt.

Key among them are arbuscular mycorrhizal fungi, found in over 75% of plant species around the planet. These threadlike fungi grow like webs inside plant roots, where they provide up to 50% of the plant’s nutrient and water supply in exchange for plant carbon, which the fungi use to grow and reproduce.

A magnified image shows dots and thin filaments weaving through the outer cells of a root.
A magnified view shows filaments and vesicles of arbuscular mycorrhizal fungi weaving through the outer cells of a plant root. Outside the root, the filaments of hyphae gather nutrients from the soil.
Edouard Evangelisti, et al., New Phytologist, 2021, CC BY

In winter, the snowpack insulates mycorrhizal fungi and other microorganisms like a blanket, allowing them to continue to decompose soil organic matter, even when air temperatures above the snow are well below freezing. However, when rain washes out the snowpack or a healthy snowpack doesn’t form, water in the soil can later freeze – as can mycorrhizal fungi.

In a new study in the Rocky Mountain grasslands, we dug into plots of land that for three decades scientists led by ecologist John Harte had warmed by 2 degrees Celsius (3.6 Fahrenheit) using suspended heaters that mimicked the air temperature the area is likely to see by the end of this century.

Above ground, the plots shifted over that time from predominantly grassland to more desertlike shrublands. Under the surface, we found something else: There were noticeably fewer beneficial mycorrhizal fungi, which left plants less able to acquire nutrients or buffer themselves from environmental stressors like freezing temperatures and drought.

These changes represent a major shift in the ecosystem, one that, on a wide scale, could reverberate through the food web as the grasses and forbs, such as wildflowers, that cattle and wildlife rely on decline and are replaced by a more desertlike environment.

When plants and fungi get out of sync

Warmer winters and a changing snowpack can affect the growth of plants and fungi in a few important ways.

One of the first signs of changing winters is when the timing of plant, fungal and animal activities that rely on one another get out of sync. For example, a mountain of evidence from around the world has documented how early snowmelt can lead to flowers blooming before pollinators arrive.

Timing also matters for plants that rely on mycorrhizal fungi – their growth must overlap.

Since plants are cued to light in addition to temperature, whereas underground microorganisms are cued to temperature and nutrient availability, warmer winters may cause microorganisms to be active well before their plant counterparts.

A mountain with a meadow filled with grasses and wildflowers in the foreground.
A view across the subalpine grasslands outside the experimental plots.
Stephanie Kivlin

At our research site, in a subalpine meadow in Colorado, we also initiated an early snowmelt experiment in April 2023 that advanced snowmelt in five large plots by about two weeks.

We found that the early snowmelt advanced mycorrhizal fungal growth by one week, but we didn’t find a corresponding change in the growth of plant roots. When mycorrhizal fungi are active before plants, the plants don’t benefit from the nutrients that mycorrhizal fungi are taking up from the soil.

Disappearing nutrients

Early snowmelt can also lead to a loss of nutrients from the soil.

When microorganisms decompose organic matter in warmer soils, nutrients accumulate in the air and water pockets between soil particles. These nutrients are then available for mycorrhizal fungi to transfer to plants. While mycorrhizal fungi transfer nutrients to the plant, other fungi are primarily decomposers that keep the nutrients for themselves.

However, if rain falls on the snow or the snow melts early, before plants are active, the nutrients can leach from the soil into lakes and streams. The effect is similar to fertilizer runoff from farm fields – the nutrients fuel algae growth, which can create low-oxygen dead zones. At the same time, plants in the field have fewer nutrients available.

This kind of nutrient leaching has happened in a variety of ecosystems with warming winters and rain-on-snow events, ranging from mountain grasslands in Colorado to temperate forests in New England and the Midwest.

Without a thick snowpack, soils can also freeze for longer periods in the winter, leading to lower microbial activity and scarce resources at the onset of spring.

The future of changing winters

Under all of these scenarios – a timing mismatch, more rain causing nutrients to leach out or frozen soil – warmer winters are leading to less spring growth.

Ecosystems are often resilient, however. Organisms could acclimate to lower nutrient concentrations or shift their ranges to more favorable conditions. How plants and mycorrhizal fungi both adapt will determine how this hidden world adjusts to changing winters.

So, the next time rain on snow or a snow drought delays your outdoor winter plans, remember that it’s more than a hassle for humans – it’s affecting that hidden world below, with potentially long-term effects.

The Conversation

Stephanie Kivlin received funding from NSF Award #2338421, #1936195 and DOE Award #DE-FOA-0002392. She is an associate professor in the Ecology and Evolutionary Biology department at the University of Tennessee, Knoxville and the Rocky Mountain Biological Laboratory.

Aimee Classen receives funding from the US Department of Energy and the US National Science Foundation. She is a professor in Ecology and Evolutionary Biology at the University of Michigan and the director of the University of Michigan Biological Station.

Lara A. Souza received funding from National Science Foundation and The United States Department of Agriculture. She is affiliated with The University of Oklahoma, Norman and the Rocky Mountain Biological Laboratory.

ref. Warming winters are disrupting the hidden world of fungi – the result can shift mountain grasslands to scrub – https://theconversation.com/warming-winters-are-disrupting-the-hidden-world-of-fungi-the-result-can-shift-mountain-grasslands-to-scrub-274087

How do people know their interests? The shortest player in the NBA shows how self-belief matters more than biology

Source: The Conversation – USA – By Greg Edwards, Adjunct Lecturer of English and Technical Communications, Missouri University of Science and Technology

Muggsy Bogues didn’t let his height get in the way of his mastery of the game. Focus on Sport/Getty Images

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


How do people know their interests? For example, one person likes art and the other does not, but how and why does that happen? – Leia K., age 12, Redmond, Washington


Standing at 5 feet 3 inches tall and weighing 136 pounds, Muggsy Bogues did not fit the typical profile of a National Basketball Association athlete when he played professionally from 1987 to 2001. The average NBA player during Bogues’ rookie season was 6 feet 7 inches tall and weighed 208 pounds.

Despite that, Bogues had a successful NBA career, finishing among the league’s all-time leaders in career assists. He even made an appearance alongside Michael Jordan in “Space Jam.”

Muggsy Bogues in mid-air, arm extended to the net with basketball in hand, players of the competing team surrounding him
Believing you can fly to the net can help you stand among the giants.
Focus on Sport/Getty Images

It’s true that a person’s DNA shapes their physical traits, which can influence what activities feel possible for someone. For example, Jérémy Gohier, the 7-foot-6 Canadian eighth-grader, towers over his peers, making basketball an activity that likely felt possible and worth trying early on.

But biology alone cannot explain why Bogues developed a lasting interest in basketball. If anything, his small stature suggested the opposite.

Instead, Bogues was introduced to basketball early in his life and had opportunities to learn the game in ways that helped him feel capable. He credited his coach, Leon Howard, as someone who supported him and taught him the game. Those early experiences gave him confidence and made him want to continue playing.

Bogues’ story raises a broader question that extends far beyond the world of sports: How do people recognize what they are interested in, and what motivates them to keep pursuing an activity?

Based on my research and what I have observed when teaching students in my own classroom, I believe whether people decide to stick with an interest comes down to self-efficacy: a person’s belief in their ability to succeed at a specific task.

Experience builds confidence

Motivation to keep doing specific activities often grows from access to opportunities, encouragement from others and chances to practice and improve. Moments of success in a task or activity, known as mastery experiences, can help people believe in their abilities.

Albert Bandura, a social psychologist who proposed the concept of self-efficacy, also identified other factors that shape self-efficacy. These include encouragement from others, learning by watching others be successful, and a person’s psychological and emotional state – such as whether they feel energized and excited or tense and anxious.

Bogues likely experienced all of these while practicing basketball. He benefited from coaches who believed in him, from studying the game by watching others and from learning how to perform under pressure.

Young person playing piano on a spotlit stage
Having people who support you in your endeavors makes it easier to step on stage.
sot/Stone via Getty Images

In my own research, I found that how confident teachers were with using classroom technologies varied depending on how much support and opportunity to learn they had. Those same factors often shape whether people feel capable enough to keep engaging with and being interested in an activity.

I have seen something similar in my almost 15 years of teaching students ranging from middle schoolers to 70-year-olds who decided to go back to school. When students struggle to get started on an assignment, they sometimes assume they are simply bad at it. However, once they take a small step and experience even minor success, their attitude often shifts to “I can do this,” which makes them more willing to keep going and ultimately end up liking the subjects.

This was even true in my own experiences as a student. When I took my first speech course as a high school senior at Missouri University of Science and Technology, I felt like a ball of nerves. I had no inkling I would one day enjoy being a professional communicator and return to this same institution decades later, winning awards and teaching speech and writing courses to students who seem just as nervous as I once was.

Embrace new opportunities

When people have new opportunities to discover what they can do, their small moments of success can help interests blossom into full-fledged passions.

If someone never gets the chance to experience early success and encouragement, they might disengage or lose interest in an activity over time.

But success does not always mean getting better at the activity itself.

People don’t have to be the best at whatever they become interested in. Their interests may help them accomplish other goals, such as stress relief or a sense of belonging. They may stay engaged not because they feel especially skilled in the activity, but because they believe it helps them reach these other goals that matter in their lives.

A specific activity may matter because it connects to someone’s life in personal ways. It might remind them of someone they love, offer an escape from a bad home life or help them make social connections. Even if people do not feel confident in the activity itself, they can still see it helping them reach these goals, which can be enough to keep them interested.

Close-up of child's hand fingerpainting on sheets of paper
Trying something new could lead to your favorite activity.
Virojt Changyencham/Moment via Getty Images

This is why it is important for people of all ages to try new things. Without access to basketball and training opportunities, Muggsy Bogues’ path might have looked very different. And if Bob Ross had not decided to take an art class while he was in the Air Force and continue practicing, the world might never have experienced “The Joy of Painting.”

Trying new things is the first step in developing interests. After that, having opportunities to build confidence and improve can help people sustain those interests for years to come.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

Greg Edwards does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How do people know their interests? The shortest player in the NBA shows how self-belief matters more than biology – https://theconversation.com/how-do-people-know-their-interests-the-shortest-player-in-the-nba-shows-how-self-belief-matters-more-than-biology-272492

White men file workplace discrimination claims but are less likely to face inequity than other groups

Source: The Conversation – USA – By Donald T. Tomaskovic-Devey, Professor of Sociology and Director of the Center for Employment Equity, UMass Amherst

In March 2025 the EEOC characterized DEI programs as potentially discriminatory against white men. Wong Yu Liang/Getty Images

In December 2025, Andrea Lucas, the chair of the U.S. Equal Employment Opportunity Commission, invited white men to file more sex- and race-based discrimination complaints against their employers.

“Are you a white male who has experienced discrimination at work based on your race or sex? You may have a claim to recover money under federal civil rights laws. Contact the @USEEOC as soon as possible,” she wrote in a post on X.

In February 2026, the EEOC began to investigate Nike on what the agency said was suspicion of discrimination against white workers.

Both initiatives followed the EEOC’s March 2025 characterization of diversity, equity and inclusion efforts, or DEI, as potentially discriminatory against white men. The EEOC characterization falls within the Trump administration’s larger pattern of calling DEI “illegal discrimination.”

At the Center for Employment Equity at the University of Massachusetts, we have done extensive research on who files discrimination charges with the EEOC.

Given the EEOC’s December 2025 solicitation for white men to file discrimination complaints, we revisited our prior research to see what is known about discrimination against white people and, in particular, what is known about white and white male discrimination charges registered with the EEOC.

As part of our research, the EEOC gave us access to discrimination charges submitted to the agency and state Fair Employment Practices Agencies from 2012 to 2016. By law, all U.S. employment discrimination claims must be submitted to the EEOC, or state agencies with equivalent roles, prior to any legal actions.

While the EEOC has a history of sharing its data with researchers stretching back to the 1970s, the EEOC stopped sharing current and historical data with researchers in 2016. As a result, we do not have any data on discrimination complaints after 2016. Judging by the EEOC’s yearly reports, the basic patterns have not changed much in the interim.

White men already file complaints

When we looked at all sex- and race-based discrimination charges received by the EEOC, unsurprisingly we found that men are much less likely than women to file sex-based discrimination charges. But white men do file about 10% of sex discrimination complaints. While Black, Hispanic and Asian male employees are more likely to file racial discrimination complaints, white men file about 9% of such complaints.

In the same study, when we compared legal charges filed with the EEOC to national survey data, we found that the percentages of people submitting a legal complaint to the EEOC roughly correspond to the percentages reporting experiences of discrimination at work in surveys. Together, these two findings suggest that white people generally, and white men in particular, were already filing employment discrimination charges.

A blonde-haired woman speaks in front of a microphone.
EEOC chair Andrea Lucas in December 2025 encouraged white men to file more discrimination complaints against their employers.
AP Photo/Mariam Zuhaib, File

We also did a deeper dive on sexual harassment charges. We found that while white men made up 46% of the labor force, they filed 11% of sexual harassment charges and 11% of all other charges, most commonly tied to disability and age.

The general pattern is that, while white men already file discrimination charges, they are less likely to experience employment discrimination than other groups.

The risk of filing complaints

Charges filed with the EEOC can result in two types of benefits to the charging party: monetary settlements and mandated changes in workplace practices.

White men who filed sexual harassment charges received some benefit 21% of the time, lower than white women (29%) and Black women (23%) but higher than Black men (19%). The EEOC already receives discrimination charges from white men and, at least for sexual harassment, treats them similarly to other groups.

Most people who submit a discrimination charge do so to improve their employment experience and those of their co-workers. But submitting these claims to the EEOC or a state Fair Employment Practices Agency is a high-risk, low-reward act.

We found that, at least for sexual harassment, employers responded to white men’s complaints in much the same way as to other groups. White men who filed sexual harassment discrimination charges lost their job 68% of the time and experienced employer retaliation at about the same rate. Retaliation can include firing but also other forms of harassment at work, such as abusive supervision and close monitoring by human resource departments.

A swoosh logo is seen on a building.
The Nike logo is shown on a store in Miami Beach, Fla., on Aug. 8, 2017.
AP Photo/Alan Diaz, File

We found this pattern of employer retaliation and worker firings for all demographic groups that file any type of discrimination complaint. White men who file discrimination charges receive the same harsh treatment from their employers as any other group.

Urging more white men to submit discrimination complaints based on the perceived unfairness of DEI practices, as the EEOC has done, is likely to lead to job loss and retaliation from employers.

What will happen?

It’s possible that EEOC chair Lucas’ call for more discrimination charges from white men will increase the number of filings.

This is exactly what happened after 2012 when the EEOC ruled that the 1964 Civil Rights Act’s prohibition of sex discrimination also protected LGBTQ workers from sexual-orientation and gender-identity discrimination.

More concerning is the EEOC defining employer efforts to prevent discrimination and create inclusive workplaces as discrimination against white men.

In the end, all workers want to be treated fairly and with respect. Supporting employers’ efforts to create such workplaces would be a better use of EEOC resources.

The Conversation

When this research was completed, the authors received funding from the W.K. Kellogg Foundation, the U.S. National Science Foundation and the U.S. Department of Labor.

When this research was completed, the author received funding from the U.S. National Science Foundation and the U.S. Department of Labor.

ref. White men file workplace discrimination claims but are less likely to face inequity than other groups – https://theconversation.com/white-men-file-workplace-discrimination-claims-but-are-less-likely-to-face-inequity-than-other-groups-273664

Cement has a climate problem — here’s how geopolymers with add-ins like cork could help fix it

Source: The Conversation – USA (2) – By Alcina Johnson Sudagar, Research Scientist in Chemistry, Washington University in St. Louis

Portland cement, widely used for concrete, is responsible for about 8% of global greenhouse gas emissions. Photovs/iStock/Getty Images Plus

Concrete is all around you – in the foundation of your home, the bridges you drive over, the sidewalks and buildings of cities. It is often described as the second-most used material by volume on Earth after water.

But the way concrete is made today also makes it a major contributor to climate change.

Portland cement, the key component of concrete, is responsible for about 8% of global greenhouse gas emissions. That’s because it’s made by heating limestone to high temperatures, a process that burns a large amount of fossil fuels for energy and releases carbon dioxide from the limestone in the process.

The good news is that there are alternatives, and they are gaining attention.

Portland cement: A greenhouse gas problem

Cementlike substances have been used in construction for thousands of years. Archaeologists have found evidence of their use in the pyramids of Egypt and the buildings and aqueducts of the Roman Empire.

The Portland cement commonly used in construction today was patented in 1824 by Joseph Aspdin, a British bricklayer.

Modern cement preparation starts with crushing the excavated raw materials, limestone and clay, and then heating them in a kiln at around 2,650 degrees Fahrenheit (about 1,450 degrees Celsius) to form clinker, a hard, rocklike residue. The clinker is then cooled and ground with gypsum into a fine powder, which is called cement.

About 40% of the carbon dioxide emissions from cement production come from burning fossil fuels to generate the high heat needed to run the kiln. The rest come as the heat converts limestone (calcium carbonate) to lime (calcium oxide), releasing carbon dioxide.

In all, between half a ton and 1 ton of greenhouse gas is released per ton of Portland cement. Cement is a binding agent that, mixed with water, holds aggregate together to create concrete. It makes up about 10% to 15% of the concrete mix by weight.
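A quick back-of-the-envelope sketch ties these figures together. The calcination stoichiometry (calcium carbonate splitting into lime and carbon dioxide) is standard chemistry; the per-ton emission range and the 10% to 15% cement share are the article’s own figures:

```python
# Calcination: CaCO3 -> CaO + CO2. Molar masses in g/mol.
M_CACO3 = 100.09
M_CO2 = 44.01

# Tons of CO2 released per ton of limestone calcined (process emissions only,
# not counting the fuel burned to heat the kiln).
co2_per_ton_limestone = M_CO2 / M_CACO3
print(f"CO2 per ton of limestone calcined: {co2_per_ton_limestone:.2f} t")  # ~0.44 t

# Cement-related CO2 embedded in a ton of concrete, using the article's ranges:
# 0.5-1 t CO2 per ton of cement, cement at 10%-15% of concrete by weight.
low = 0.5 * 0.10    # 0.05 t CO2 per ton of concrete
high = 1.0 * 0.15   # 0.15 t CO2 per ton of concrete
print(f"CO2 per ton of concrete from cement: {low:.2f} to {high:.2f} t")
```

So even though cement is a small fraction of concrete by weight, it carries nearly all of the mix’s embedded emissions.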

Alternative technologies can lower emissions

As populations, cities and the need for new infrastructure expand, the use of cement is growing, making it important to find alternatives with lower environmental costs.

Concrete has seen the fastest growth among commonly used construction materials with rising population between 1950 and 2023
As population has increased, annual global Portland cement production has risen with it.
Hao Chen, et al., 2025, CC BY-NC-ND

Some techniques for reducing carbon dioxide emissions include substituting some of the clinker – the hard residue typically made from limestone – with supplementary materials such as clay, or fly ash and slag from industries. Other methods reduce the amount of cement by mixing in waste sawdust or recycled materials like plastics.

The long-term solution for reducing cement’s emissions, however, is to replace traditional cement completely with alternatives. One option is geopolymers made from earthen clay and industrial wastes.

Geopolymers: A more climate-friendly solution

Geopolymers can be made by mixing claylike materials that are rich in aluminum and silicon minerals with a chemical activator, through a process called geopolymerization. The activator transforms the silicon and aluminum into a hardened structure that looks and behaves like cement. All of this can happen at room temperature.

The major difference between cement and geopolymer is that cement is mainly made of calcium, whereas geopolymers are made of silicon and aluminum with some possible calcium in their structure.

Geopolymers offer advantages over Portland cement: fewer production steps, lower CO2 emissions and a lower water requirement
How the production of Portland cement and geopolymers compare.
Alcina Johnson Sudagar, CC BY-NC

These geopolymers have been found to possess high strength and durability, including resilience in freeze-thaw cycles and resistance to heat and fire, which are important requirements in construction. Studies have found that some geopolymers can provide comparable if not better strength than traditional cement and, because they don’t require heat the way clinker does, they can be produced with significantly lower greenhouse gas emissions.

Geopolymers can also be produced from a variety of raw materials rich in aluminum and silicon, including earthen clays, fly ash, blast furnace slag, rice husk ash, iron ore wastes and recycled construction brick waste. Geopolymer technology can be adapted depending on the clay or industrial waste locally available in a region.

A brief history of cement and geopolymers. Geopolymer International.

An added advantage of geopolymers is that changes to the mixture can produce a range of features.

For example, my co-researchers at the University of Aveiro in Portugal and I added a small amount of cork industry waste – the leftovers from creating bottle corks – to a clay-based geopolymer and found it could improve the strength of the material by up to twofold. The cork particles filled the spaces in the geopolymer structure, making it denser, which increased the strength.

Similarly, additives such as sisal fibers from the agave plant, recycled plastic and steel fibers can change geopolymer properties. The additives do not participate in the geopolymerization process but act as fillers in the structure.

The structure of geopolymers can also be designed to act as adsorbents, attracting toxic metals in wastewater and capturing and storing radioactive wastes. Specifically, incorporating materials like zeolite that are natural adsorbents in the geopolymer structure can make them useful for such applications as well.

Where geopolymers are used now

Geopolymers have been used in many types of construction, including roads, coatings, 3D printing, coastal environmental protection, the steel and chemical industries, sewer rehabilitation, radiation shielding for buildings, and rocket launchpad and bunker infrastructure.

One of the earliest examples of a modern geopolymer concrete project was the Brisbane West Wellcamp airport in Australia.

It was built in 2014 with 70,000 metric tons of geopolymer concrete, which was estimated to have reduced the project’s carbon dioxide emissions by as much as 80%.

The geopolymer market is currently estimated to be between US$7 billion and $10 billion, with the largest growth in the Asia-Pacific region.

Analysts have estimated that the market could grow at a rate of 10% to 20% per year and reach about $62 billion by 2033.

In several countries, greenhouse gas regulations and green-building certifications are expected to support the continued growth of geopolymers in the construction industry.

Expanding the use of cement alternatives

The advantage of using industrial wastes in geopolymers is a double-edged sword, however. The composition of industrial wastes varies, so it can be difficult to standardize the processing methods. The geopolymer components need to be mixed in particular ratios to achieve desired properties.

Producing the activator for the geopolymer, typically done in chemical facilities, can raise the cost and contribute to the carbon footprint. And because these materials are relatively new, long-term data about their stability is only now being developed. Also, geopolymers can take longer to set than cement, though the setting time can be sped up by using raw materials that react quickly.

Developing cheaper, naturally available activators like agricultural waste rice husk with sustainable supply chains could help lower the costs and environmental impact. Also, printing the recipe on the raw material packaging could help simplify the job of determining the mixing ratio so geopolymers can be more widely used with confidence.

Even though geopolymer technology has some drawbacks, these low-carbon alternatives have great potential for reducing emissions from the construction sector.

The Conversation

Alcina Johnson Sudagar has received funding from GeoBioTec.

ref. Cement has a climate problem — here’s how geopolymers with add-ins like cork could help fix it – https://theconversation.com/cement-has-a-climate-problem-heres-how-geopolymers-with-add-ins-like-cork-could-help-fix-it-270354

How a largely forgotten Supreme Court case can help prevent an executive branch takeover of federal elections

Source: The Conversation – USA – By Derek T. Muller, Professor of Law, University of Notre Dame

Georgia General Election 2020 ballots are loaded by the FBI onto trucks at the Fulton County Election hub on Jan. 28, 2026, in Union City, Ga. AP Photo/Mike Stewart

The recent FBI search of the Fulton County, Georgia, elections facility and the seizure of election-related materials pursuant to a warrant has attracted concern for what it might mean for future elections.

What if a determined executive branch used federal law enforcement to seize election materials to sow distrust in the results of the 2026 midterm congressional elections?

Courts and states should be wary when an investigation risks commandeering the evidence needed to ascertain election results. That is where a largely forgotten Supreme Court case from the 1970s matters, a case about an Indiana recount that sets important guardrails to prevent post-election chaos in federal elections.

A clipping from a Nov. 4, 1970 newspaper with the headline 'Hartke in close battle for Senate.'
The day after Election Day in 1970, votes were very close in the Indiana election for U.S. Senate. A challenge to the outcome would lead to an important U.S. Supreme Court case.
The Purdue Exponent, Nov. 4, 1970

Congress’s constitutionally delegated role

The case known as Roudebush v. Hartke arose from a razor-thin U.S. Senate race in Indiana in 1970. The ballots were cast on Election Day, and the state counted and verified the results, a process known as the “canvass.” The state certified R. Vance Hartke as the winner. Typically, the certified winner presents himself to Congress, which accepts his certificate of election and seats the member to Congress.

The losing candidate, Richard L. Roudebush, invoked Indiana’s recount procedures. Hartke then sued to stop the recount. He argued that a state recount would intrude on the power of each chamber, the Senate or the House of Representatives, to judge its own elections under Article I, Section 5 of the U.S. Constitution. That clause gives each chamber the sole right to judge elections. No one else can interfere with that power.

Hartke worried that ballots might be altered or destroyed during a recount, which would diminish the Senate’s ability to conduct a meaningful examination of the ballots if an election contest arose.

But the Supreme Court rejected that argument.

It held that a state recount does not “usurp” the Senate’s authority because the Senate remains free to make the ultimate judgment of who won the election. The recount can be understood as producing new information – in this case, an additional set of tabulated results – without stripping the Senate of its final say.

Furthermore, there was no evidence that a recount board would be “less honest or conscientious in the performance of its duties” than the original precinct boards that tabulated the election results the first time around, the court said.

A state recount, then, is perfectly acceptable, as long as it does not impair the power of Congress.

In the Roudebush decision, the court recognized that states run the mechanics of congressional elections as part of their power under Article I, Section 4 of the U.S. Constitution to set the “Times, Places and Manner of holding Elections for Senators and Representatives,” subject to Congress’s own regulation.

At the same time, each chamber of Congress judges its own elections, and courts and states should not casually interfere with that core constitutional function. They cannot engage in behaviors that usurp Congress’s constitutionally delegated role in elections.

The U.S. Capitol dome in a photo at night with a dark blue sky behind it.
Each chamber of Congress judges its own elections, with no interference by courts and states with that core constitutional function.
David Shvartsman, Moment/Getty Images

Evidence can be power

The Fulton County episode is legally and politically fraught not because federal agents executed a warrant – courts authorize warrants all the time – but because of what was seized: ballots, voting machines, tabulation equipment and related records.

Those items are not just evidence. They are also the raw materials for the canvassing of votes and certification of winners. They provide the foundation for audits and recounts. And, importantly, they are necessary for any later inquiry by Congress if a House or Senate race becomes contested.

That overlap creates a structural problem: If a federal investigation seizes, damages or destroys election materials, it can affect who has the power to assess the election. It can also inject uncertainty into the chain of custody: As ballots are removed from absentee envelopes or transferred from Election Day precincts to county election storage facilities, states must ensure that only the ballots cast in the election are tabulated and that none are lost or destroyed in the process.

Disrupting this chain of custody by seizing ballots, however, can increase, rather than decrease, doubts about the reliability of election results.

That is the modern version of “usurpation.”

From my perspective as an election law scholar, Roudebush is a reminder that courts should be skeptical of executive actions that shift decisive control over election proof away from the institutions the Constitution expects to do the judging.

Congress doesn’t just adjudicate contests

A screenshot of a news story with a headline that says 'Congressional election observers deploy to Iowa for recount in uncalled House race.'
Congressional election observers were sent to Iowa in 2024 to monitor a recount.
Fox News

There is another institutional reason courts should be cautious about federal actions that seize or compromise election materials: The House already has a long-running capacity to observe state election administration in close congressional races.

The Committee on House Administration maintains an Election Observer Program. That program deploys credentialed House staff to be on-site at local election facilities in “close or difficult” House elections. That staff observes casting, processing, tabulating and canvassing procedures.

The program exists for a straightforward reason: If the House may be called upon to judge a contested election under Article I, Section 5, it has an institutional interest in understanding how the election was administered and how records were handled.

That observation function is not hypothetical. The committee has publicly announced deployments of congressional observers to watch recount processes in tight House races throughout the country.

I saw it take place first-hand in 2020. The House deployed election observers in Iowa’s 2nd Congressional District to oversee a recount of a congressional election that was ultimately certified by a margin of just six votes.

Democratic and Republican observers from the House politely observed, asked questions, and kept records – but never interfered with the state election apparatus or attempted to lay hands on election equipment or ballots.

Congress has not rejected a state’s election results since 1984, and for good reason. States now have meticulous recordkeeping, robust chain-of-custody procedures for ballots, and multiple avenues of verifying the accuracy of results. And with Congress watching, state results are even more trustworthy.

When federal investigations collide with election materials

Evidence seizures can adversely affect election administration. So courts and states ought to be vigilant, enforcing guardrails that help respect institutional boundaries.

To start, any executive branch effort to unilaterally inject itself into a state election apparatus should face meaningful scrutiny. Unlike the Fulton County warrant, which targeted an election nearly six years old, warrants that interrupt ongoing state processes in an election threaten to usurp the constitutional role of Congress. And executive action cannot proceed if it impinges upon the ultimate ability of Congress to judge the election of its members.

Even in the rare event that a court does issue such a warrant, it should not permit seizure of election equipment and ballots during a state’s ordinary post-election canvass. Instead, inspection of items, provision of copies of election materials, or orders to preserve evidence are more tailored means of accomplishing the same objectives. And courts should establish clear chain-of-custody procedures in the event that evidence must be preserved for a federal investigation.

The fear driving much public commentary about the danger to midterm elections is not merely that election officials will be investigated or that evidence would be seized. It is that investigations could be used as a pretense to manage or, worse, disrupt elections – chilling administrators, disorganizing record keeping or manufacturing doubt by disrupting custody of ballots and systems.

Roudebush provides a constitutional posture that courts should adopt, a recognition that some acts can usurp the power of Congress to judge elections. That will provide a meaningful constraint on the executive ahead of the 2026 election and reduce the risk of intervention in an ongoing election.

The Conversation

Derek T. Muller does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How a largely forgotten Supreme Court case can help prevent an executive branch takeover of federal elections – https://theconversation.com/how-a-largely-forgotten-supreme-court-case-can-help-prevent-an-executive-branch-takeover-of-federal-elections-275603