What an old folktale can teach us about the ‘annoying persistence’ of political comedians

Source: The Conversation – USA (3) – By Perin Gürel, Associate professor of American Studies, University of Notre Dame

Stephen Colbert has been defiant following the cancellation of The Late Show. Photo by Richard Shotwell/Invision/AP

Fear of reprisals from the Trump administration has made many people cautious about expressing their opinions. Fired federal workers are asking not to be quoted by name, for fear of losing housing. Business leaders are concerned about harm to their companies. Universities are changing their curricula, and scholars are self-censoring.

But one group that has refused to back down is the hosts of America’s late-night comedy shows.

Jon Stewart and the rest of The Daily Show team, for example, have been scathing in their coverage of the Epstein case. John Oliver continues to amass colorful analogies for describing the president and his actions. After the “Late Show” was canceled, ostensibly for financial reasons, host Stephen Colbert was defiant: “They made one mistake – they left me alive!”

We may think of being loud, persistent and edgy as the modern comedian’s job. However, unrelenting, critical humor has a long history in folklore.

I’m a scholar who examines the intersections between culture and politics, and I teach a class on “Humor and Power.” A timeless folktale, known as “The Bird Indifferent to Pain,” can help us understand why comedy fans enjoy the annoying persistence of the jester, and explain why this trope has endured across cultures for centuries.

The invincible rooster

“The Bird Indifferent to Pain” belongs to a genre known as “formula tales.” Such tales consist of repeated patterns or chains of events, often with rhymes weaving through them. “The Gingerbread Man” captures this style perfectly with its infectious, teasing rhyme – “Run, run, run as fast as you can…”

“The Bird Indifferent to Pain” also stars a persistent and irritating creature. In most versions, a bird – often a rooster – angers a master or king for singing too loudly or saying the wrong things. The king comes up with elaborate punishments, but the bird always seems indifferent to them, responding to each move with an increasingly defiant and sometimes vulgar rhyme. At the end, the king cooks and eats the rooster, but the bird flies unharmed out of his body, rhyming and singing ever more.

Because folklore is shared casually across cultures and languages, it’s hard to tell when and where this tale first originated. However, folklorists have identified versions all over the world, from Tajikistan in Central Asia to India and Sri Lanka in South Asia, as well as Sudan in northeast Africa.

Armenia’s famous poet Hovhannes Tumanyan collected one version of this tale, which he titled “Anhaght Aklore” or “The Invincible Rooster.” In this version, a rooster finds a gold coin, and boasts about it from the rooftop: “Cock-a-doodle-doo, I’ve found gold!” When the king’s servants take the gold, the rooster continues crowing defiantly: “Cock-a-doodle-doo … the king lives on my account!” Frustrated, the king orders his servants to return the money. But the rooster still won’t shut up: “The king got scared of me!”

Finally, the king orders him slaughtered for dinner. “The king has invited me to his palace!” the rooster boasts. While he’s cooked, he claims the king is treating him to “a hot bath.” Served as the main course, he crows, “I’m dining with the king!”

The tale reaches its climax when the rooster, now in the king’s belly, complains about the darkness. The king, driven to fury by the persistent voice, orders his servants to cut open his own stomach. The rooster escapes and flies to the rooftops, crowing triumphantly once more: “Cock-a-doodle-doo!”

Tumanyan doesn’t tell us what happens to the king after that.

My great-grandmother told us a Turkish version of this tale, featuring a rooster defying his “bey,” or master, in the 1980s. Her rooster crowed in rhyming couplets and used some naughty words to describe the master’s digestive system. Plus, in her version, the master’s behind – and not his stomach – tore open during the bird’s escape. We were obsessed with this story and begged her to tell it over and over.

Hovhannes Tumanyan’s ‘The Invincible Rooster’

The power of persistent irritation

What makes this tale, and its many variations, so compelling across languages and centuries? Why do so many cultures enjoy the rooster’s humorous defiance and literal indifference to punishment?

In our case, as children, we were drawn in by the rhythm of repetition and rhyme. The rooster’s colorful language held a delightful sense of transgression. Children also often identify with animals because of a shared vulnerability to adults’ power. Therefore, it is significant that the bird, the weaker of the two parties, survives the ordeal, whereas the master’s fate is uncertain. But the rooster doesn’t merely survive – he thrives and keeps on squawking. This is a story of hope.

In fact, when I told Tumanyan’s version to my 6-year-old son, he said he loved the rooster’s optimism.

Modern American popular culture contains many jocular characters that resemble this folkloric bird – delightfully impervious to pain – from cartoon characters such as the Road Runner, an actual bird, to the foulmouthed, self-regenerating antihero Deadpool.

Today’s political comedians, I argue, are using the rooster’s tactics as well.

Release or resistance?

Debates about political humor often circle back to its purpose. Scholars debate whether anti-authoritarian humor is just a coping mechanism, or whether it can spark change.

Psychologist Sigmund Freud believed humor’s main function was “release”: jokes offered a way to reveal our unacceptable urges in a socially acceptable way. A mean joke, for example, allowed its teller to express aggression without risking serious repercussions.

Philosophers Theodor Adorno and Max Horkheimer argued that humor in corporate capitalist media was a mere safety valve, siphoning off protest and releasing righteous outrage as laughter.

Anthropologist James Scott, however, gives jokesters more political credit. In his 1992 book “Domination and the Arts of Resistance,” Scott agreed that authorities allow some dissident humor as a safety valve. But he also identified a powerful “imaginative function” in humorous resistance. Humor, he claimed, can help people envision alternatives to the status quo.

Scott pointed out that release and resistance need not be mutually exclusive. Instead of reducing the chance of actual rebellion, comedy could serve as practice for it.

Authorities do perceive some danger in comedians’ output. In countries with fewer free speech protections, comedians may face more serious repercussions than a stern tweet.

In the case of Colbert, President Donald Trump’s gleeful response to the show’s cancellation, and his suggestion that others will be “next up,” shows just how seriously some political figures take comedic critique. At the very least, they are irritated.

And the story of the “Bird Indifferent to Pain” reminds us that sometimes the best a jokester can do is to keep irritating the bowels of the system, singing all the way.

The Conversation

Perin Gürel does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. What an old folktale can teach us about the ‘annoying persistence’ of political comedians – https://theconversation.com/what-an-old-folktale-can-teach-us-about-the-annoying-persistence-of-political-comedians-262860

One of Hurricane Katrina’s most important lessons isn’t about storm preparations – it’s about injustice

Source: The Conversation – USA (2) – By Ivis García, Associate Professor of Landscape Architecture and Urban Planning, Texas A&M University

New Orleans residents wait to be rescued from a rooftop two days after Hurricane Katrina made landfall. AP Photo/David J. Phillipp

Twenty years after Hurricane Katrina swept through New Orleans, the images still haunt us: entire neighborhoods underwater, families stranded on rooftops and a city brought to its knees.

We study disaster planning at Texas A&M University and look for ways communities can improve storm safety for everyone, particularly low-income and minority neighborhoods.

Katrina made clear what many disaster researchers have long found: Hazards such as hurricanes may be natural, but the death and destruction are largely human-made.

A man with a distressed look walks in thigh-deep water while people watch from a doorway several steps above street level.
People walk out of their homes into New Orleans’ flooded streets after Hurricane Katrina on Aug. 29, 2005. In parts of the city, homes were underwater up to their roofs.
Mark Wilson/Getty Images

How New Orleans built inequality into its foundation

New Orleans was born unequal. As the city grew as a trade hub in the 1700s, wealthy residents claimed the best real estate, often on higher ground formed by river sediment. The city had little high ground, so everyone else was left in “back-of-town” areas, closer to swamps where land was cheap and flooding common.

In the early 1900s, new pumping technology enabled development in flood-prone swamplands and housing spread, but the pumping caused land subsidence that made flooding worse in neighborhoods such as Lakeview, Gentilly and Broadmoor.

Then redlining started in the 1930s. To guide federal loan decisions, government agencies began using maps that rated neighborhoods by financial risk. Predominantly Black neighborhoods were typically marked as “high risk,” regardless of the actual housing quality.

This created a vicious cycle: Black and low-income families were already stuck in flood-prone areas because that’s where cheap land was. Redlining kept their property values lower. Black Americans were also denied government-backed mortgages and GI Bill benefits that could have helped them move to safer neighborhoods on higher ground.

In this 1939 map prepared for the Federal Home Loan Bank Board, redlining separated New Orleans into grades. Green is an A, or first grade; followed by blue, yellow and red, which is last as a D, or fourth grade. The Lower Ninth Ward is the red block farthest to the right.
National Archives via Mapping Inequality/University of Richmond

Hurricane Katrina showed how those lines translate to vulnerability.

When history came calling

On Aug. 29, 2005, as Hurricane Katrina battered New Orleans, the levees protecting the city broke and water flooded about 80% of the city. The damage followed racial geography − the spatial patterns of where Black and white residents lived due to decades of segregation − like a blueprint.

About three-quarters of Black residents experienced serious flooding, compared with half of white residents.

People of all ages, including young children, stand in line to board a bus.
New Orleans residents who evacuated to the Superdome during Hurricane Katrina board buses to be taken to Houston on Sept. 1, 2005. Many of them lost their homes, and with much of New Orleans damaged, Houston took in evacuees.
Robert Sullivan/AFP via Getty Images

Between 100,000 and 150,000 people couldn’t evacuate. They were disproportionately people who were elderly, Black, poor and without cars. Among survivors who did not evacuate, 55% did not have a car or another way to get out, and 93% were Black. More than 1,800 people lost their lives.

This lack of transportation — what scholars call “transportation poverty” — left people stranded in the city’s bowl-shaped geography, unable to escape when the levees failed.

Recovery that made things worse

After Hurricane Katrina, the federal government created the Road Home program to help homeowners rebuild. But the program had a devastating design flaw: It calculated aid based on prehurricane home value or repair costs, whichever was less.

That meant low-income homeowners, who already lived in areas with lower property values due to the history of discrimination, received less money. A family whose US$50,000 home needed $80,000 in repairs would receive only $50,000, while a family whose $200,000 home needed the same $80,000 in repairs would receive the full repair amount. The average gap between damage estimates and rebuilding funds was $36,000.

As a result, people in poor and Black neighborhoods had to cover about 30% of rebuilding costs after all aid, while those in wealthy areas faced only about 20%. Families in the poorest areas had to pay thousands of dollars out-of-pocket to complete repairs, even after government help and insurance, and that slowed the recovery process.

A house missing its walls, with a torn-down fence.
Homes damaged by Hurricane Katrina still sat vacant in New Orleans’ Lower Ninth Ward in 2013.
AP Photo/Patrick Semansky

This pattern isn’t unique to New Orleans. A study examining data from Hurricane Andrew in Miami (1992) and Hurricane Ike in Galveston (2008) found that housing recovery was consistently slow and unequal in low-income and minority neighborhoods. Lower-income families are less likely to have adequate insurance or savings for quick rebuilding. Low-value homes with extensive damage still had not regained their prestorm value four years later, while higher-value homes sustaining even moderate damage gained value.

Ten years after Katrina, while 70% of white residents felt New Orleans had recovered, only 44% of Black residents could look around their neighborhood and say the same.

Community-led solutions for climate resilience

Katrina’s lessons in the inequality of disasters are important for communities today as climate change brings more extreme weather.

Federal Emergency Management Agency denial rates for disaster aid remain high due to bureaucratic obstacles such as complex application processes that bounce survivors among multiple agencies, often resulting in denials and delays of critical funds. These are the same systemic barriers that added to the reasons Black communities recovered more slowly after Hurricane Katrina. FEMA’s own advisory council reported that institutional assistance policies tend to enrich wealthier, predominantly white areas, while underserving low-income and minority communities throughout all stages of disaster response.

A 2021 photo showing the low-lying neighborhood and the canal just across a flood wall.
Homes were rebuilt along the Industrial Canal, shown here in 2021, where a levee break flooded the Lower Ninth Ward during Hurricane Katrina.
Patrick T. Fallon/AFP via Getty Images

The lessons from New Orleans also point to ways communities can build disaster resilience across the entire population. In particular, as cities plan protective measures — elevating homes, buyout programs and flood-proofing assistance — Hurricane Katrina showed the need to pay attention to social vulnerabilities and focus aid where people need the most assistance.

The choice America faces

In our view, one of Katrina’s most important lessons is about social injustice. The disproportionate suffering in Black communities wasn’t a natural disaster but a predictable result of policies concentrating risk in marginalized neighborhoods.

In many American cities, policies still leave some communities facing a greater risk of disaster damage. To protect residents, cities can start by investing in vulnerable areas, empowering a community-led recovery and ensuring race, income or ZIP code never again determine who receives help with the recovery.

Natural disasters don’t have to become human catastrophes. Confronting the policies and other factors that leave some groups at greater risk can avoid a repeat of the devastation the world saw in Katrina.

The Conversation

Ivis García receives funding from National Science Foundation, U.S. Department of Housing and Urban Development, Ford Foundation, National Academy of Sciences, Fundación Comunitaria de Puerto Rico, UNIDOS, Texas Appleseed, Natural Hazard Center, Chicago Community Trust, American Planning Association, and Salt Lake City Corporation.

Deidra D Davis receives funding from the National Academy of Sciences, Engineering, and Medicine. The views expressed are those of Deidra D Davis and do not necessarily represent those of the National Academy of Sciences, Engineering, and Medicine.

Walter Gillis Peacock receives funding from the National Science Foundation to conduct research related to issues discussed in this article. The opinions expressed are those of Walter Gillis Peacock and do not necessarily reflect those of the National Science Foundation.

ref. One of Hurricane Katrina’s most important lessons isn’t about storm preparations – it’s about injustice – https://theconversation.com/one-of-hurricane-katrinas-most-important-lessons-isnt-about-storm-preparations-its-about-injustice-261936

Data centers consume massive amounts of water – companies rarely tell the public exactly how much

Source: The Conversation – USA (2) – By Peyton McCauley, Water Policy Specialist, Sea Grant UW Water Science-Policy Fellow, University of Wisconsin-Milwaukee

The Columbia River running through The Dalles, Oregon, supplies water to cool data centers. AP Photo/Andrew Selsky

As demand for artificial intelligence technology boosts construction and proposed construction of data centers around the world, those computers require not just electricity and land, but also a significant amount of water. Data centers use water directly, with cooling water pumped through pipes in and around the computer equipment. They also use water indirectly, through the water required to produce the electricity to power the facility. The amount of water used to produce electricity increases dramatically when the source is fossil fuels compared with solar or wind.

A 2024 report from the Lawrence Berkeley National Laboratory estimated that in 2023, U.S. data centers consumed 17 billion gallons (64 billion liters) of water, and projected that by 2028, that figure could double – or even quadruple. The same report estimated that in 2023, U.S. data centers consumed an additional 211 billion gallons (800 billion liters) of water indirectly through the electricity that powers them. But that is just an estimate in a fast-changing industry.

We are researchers in water law and policy based on the shores of Lake Michigan. Technology companies are eyeing the Great Lakes region to host data centers, including one proposed for Port Washington, Wisconsin, which could be one of the largest in the country. The Great Lakes region offers a relatively cool climate and an abundance of water, making the region an attractive location for hot and thirsty data centers.

The Great Lakes are an important, binational resource that more than 40 million people depend on for their drinking water and that supports a US$6 trillion regional economy. Data centers compete with these existing uses and may deplete local groundwater aquifers.

Our analysis of public records, government documents and sustainability reports compiled by top data center companies has found that technology companies don’t always reveal how much water their data centers use. In a forthcoming Rutgers Computer and Technology Law Journal article, we walk through our methods and findings using these resources to uncover the water demands of data centers.

In general, corporate sustainability reports offered the most access and detail – including that in 2024, one data center in Iowa consumed 1 billion gallons (3.8 billion liters) of water, enough to supply all of Iowa’s residential water for five days.

The computer processors in data centers generate lots of heat while doing their work.

How do data centers use water?

The servers and routers in data centers work hard and generate a lot of heat. To cool them down, data centers use large amounts of water – in some cases over 25% of local community water supplies. In 2023, Google reported consuming over 6 billion gallons of water (nearly 23 billion liters) to cool all its data centers.

In some data centers, the water is used up in the cooling process. In an evaporative cooling system, pumps push cold water through pipes in the data center. The cold water absorbs the heat produced by the data center servers, turning into steam that is vented out of the facility. This system requires a constant supply of cold water.

In closed-loop cooling systems, the cooling process is similar, but rather than venting steam to the air, air-cooled chillers cool down the hot water. The cooled water is then recirculated to cool the facility again. This does not require constant addition of large volumes of water, but it uses a lot more energy to run the chillers. The actual numbers showing those differences, which likely vary by the facility, are not publicly available.

One key way to evaluate water use is the amount of water that is considered “consumed,” meaning it is withdrawn from the local water supply and used up – for instance, evaporated as steam – and not returned to the ecosystem.

For information, we first looked to government data, such as that kept by municipal water systems, but the process of getting all the necessary data can be onerous and time-consuming, with some denying data access due to confidentiality concerns. So we turned to other sources to uncover data center water use.

Sustainability reports provide insight

Many companies, especially those that prioritize sustainability, release publicly available reports about their environmental and sustainability practices, including water use. We focused on six top tech companies with data centers: Amazon, Google, Microsoft, Meta, Digital Realty and Equinix. Our findings revealed significant variability in both how much water the companies’ data centers used, and how much specific information the companies’ reports actually provided.

Sustainability reports offer a valuable glimpse into data center water use. But because the reports are voluntary, different companies report different statistics in ways that make them hard to combine or compare. Importantly, these disclosures do not consistently include the indirect water consumption from their electricity use, which the Lawrence Berkeley Lab estimated was 12 times greater than the direct use for cooling in 2023. Our estimates highlighting specific water consumption reports are all related to cooling.

Amazon releases annual sustainability reports, but those documents do not disclose how much water the company uses. Microsoft provides data on its water demands for its overall operations, but does not break down water use for its data centers. Meta does that breakdown, but only in a companywide aggregate figure. Google provides individual figures for each data center.

The five companies we analyzed that do disclose water usage show a general trend of increasing direct water use each year. Researchers attribute this trend to data centers.

A closer look at Google and Meta

To take a deeper look, we focused on Google and Meta, as they provide some of the most detailed reports of data center water use.

Data centers make up significant proportions of both companies’ water use. In 2023, Meta consumed 813 million gallons of water globally (3.1 billion liters) – 95% of which, 776 million gallons (2.9 billion liters), was used by data centers.

For Google, the picture is similar, but with higher numbers. In 2023, Google operations worldwide consumed 6.4 billion gallons of water (24.2 billion liters), with 95%, 6.1 billion gallons (23.1 billion liters), used by data centers.

Google reports that in 2024, the company’s data center in Council Bluffs, Iowa, consumed 1 billion gallons of water (3.8 billion liters), the most of any of its data centers.

The Google data center using the least that year was in Pflugerville, Texas, which consumed 10,000 gallons (38,000 liters) – about as much as one Texas home would use in two months. That data center is air-cooled, not water-cooled, and consumes significantly less water than the 1.5 million gallons (5.7 million liters) at an air-cooled Google data center in Storey County, Nevada. Because Google’s disclosures do not pair water consumption data with the size of centers, technology used or indirect water consumption from power, these are simply partial views, with the big picture obscured.

Given society’s growing interest in AI, the data center industry will likely continue its rapid expansion. But without a consistent and transparent way to track water consumption over time, the public and government officials will be making decisions about locations, regulations and sustainability without complete information on how these massive companies’ hot and thirsty buildings will affect their communities and their environments.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Data centers consume massive amounts of water – companies rarely tell the public exactly how much – https://theconversation.com/data-centers-consume-massive-amounts-of-water-companies-rarely-tell-the-public-exactly-how-much-262901

Misspelled names may give brands a Lyft – if the spelling isn’t too weird

Source: The Conversation – USA (2) – By Annika Abell, Assistant Professor of Marketing, University of Tennessee

Misspelled brand names can be catchy – but don’t always connect with consumers. AP Photo/David Zalubowski

Consumers don’t mind when companies use misspelled words – think Lyft for “lift” or Froot Loops for “fruit loops” – as their brand names, as long as the alterations aren’t too extreme and the misspelling makes sense.

Those are the main findings of a new peer-reviewed paper I published with fellow marketing scholar Leah Warfield Smith. The findings build on previous work that found that using misspelled brand names usually backfires.

Misspelled brand names like Kool-Aid, Reddi-wip and Crumbl seem to be everywhere. They are especially common in the names of smartphone apps and in certain industries, like fashion. Companies often do this to stand out or perhaps so they can use the misspelled word as their domain name.

Despite their popularity, we know little about how consumers respond to different types of misspelled names, especially when those names deviate significantly from correct or standard spelling. Our study aims to fill this gap.

In a series of six experiments, we tested consumer reactions to fictional and several real brand names with varying levels and types of misspellings.

Mild misspellings, such as the combination of two real words in SoftSoap, were perceived just as positively as correctly spelled names. When consumers saw different levels of misspellings – consider the brand names Eazy Clean, Eazy Klean and Eezy Kleen – they reacted more negatively the further the name deviated from the correct spelling “easy clean.”

However, we also found that relevance matters. A misspelled name that aligns with the product or brand identity can still be successful. For example, consumers responded just as well to Bloo Fog – a playful nod to oolong tea – as to the correctly spelled “blue fog.” In contrast, Blewe Fog – a misspelling without a linguistic connection to the product’s name – performed worse.

Other experiments showed similar, more positive effects when the name related to the owner’s identity, for example, Sintymental Moments by Joe Sinty, or to a visual cue, as in Toadal Fitness with a toad logo. In each case, the misspelling was more acceptable when it made conceptual sense to consumers.

Why it matters

The findings suggest that two main concepts play a role in how consumers process brand names: linguistic fluency – or how easily a name is pronounced and read – and conceptual fluency – how easily the meaning of a name is understood or how well it aligns with the product.

Linguistic fluency decreases with more severe misspellings, resulting in more negative responses. But if the misspelling adds some kind of meaning – related to the product, person or logo – these adverse effects can be easily mitigated.

For marketers and brand strategists, the takeaway is that misspellings can work, but only when they make sense. Naming a tea brand Bloo Fog might succeed where Blewe Fog fails, but only if consumers understand the name-product connection. Understanding when a misspelling helps or hurts a brand is crucial to crafting the right brand name; ideally, one that can be perceived positively while reaping the benefits of misspellings, such as increased memorability, uniqueness or trademark acquisition.

What still isn’t known

While this research uncovers how consumers react to different types of misspellings, it leaves open important questions about long-term effects. For example, do consumers still notice the misspelling in a 60-year-old brand name like Kwik Trip, a convenience store chain in the Midwest?

We also do not know how the effects of misspellings play out across different languages, cultures or product categories.

The Research Brief is a short take on interesting academic work.

The Conversation

Annika Abell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Misspelled names may give brands a Lyft – if the spelling isn’t too weird – https://theconversation.com/misspelled-names-may-give-brands-a-lyft-if-the-spelling-isnt-too-weird-256792

Reverse discrimination? In spite of the MAGA bluster over DEI, data shows white Americans are still advantaged

Source: The Conversation – USA (2) – By Fred L. Pincus, Emeritus Professor of Sociology, University of Maryland, Baltimore County

There’s no evidence of widespread racial discrimination against white people. Sebastian Gorczowski/iStock/Getty Images Plus

Two big assumptions underlie President Donald Trump’s attack on diversity, equity and inclusion policies. The first is that discrimination against people of color is a thing of the past. The second is that DEI policies and practices discriminate against white people – especially white men – in what’s sometimes called “reverse discrimination.”

I’m a sociologist who’s spent decades studying race and inequality, and when I read the documents and statements coming out of the Trump White House, these assumptions jump out at me again and again – usually implicitly, but always there.

The problem is that the evidence doesn’t back these assumptions up.

For one thing, if discrimination against white Americans were widespread, you might expect large numbers to report being treated unfairly. But polling data shows otherwise. A 2025 Pew survey found that 70% of white Americans think Black people face “some” or “a lot” of discrimination in general, and roughly two-thirds say the same of Asian and Hispanic people. Meanwhile, only 45% of white Americans believe that white people in general experience that degree of discrimination.

In other words, white Americans believe that people of color, as a group, face more discrimination than white people do. People of color agree – and so do Americans overall.

In a second national study, using data collected in 2023, Americans were asked if they had personally experienced discrimination within the past year. Thirty-eight percent of white people said they had, compared to 54% of Black Americans, 50% of Latinos and 42% of Asian Americans. In other words, white Americans are much less likely to say that they’ve been discriminated against than people of color.

The ‘hard’ numbers show persistent privilege

These statistics are sometimes called “soft” data because they reflect people’s perceptions rather than verified incidents. To broaden the picture, it’s worth looking at “hard” data on measures like income, education and employment outcomes. These indicators also suggest that white Americans as a group are advantaged relative to people of color.

For example, federal agencies have documented racial disparities in income for decades, with white Americans, as a group, generally outearning Black and Latino Americans. This is true even when you control for education. When the Census Bureau looked at median annual earnings for Americans between 25 and 64 with at least a bachelor’s degree, it found that Black Americans received only 81% of what comparably educated white Americans earned, while Latinos earned only 80%. Asian Americans, on the other hand, earned 119% of what white people earned.

These gaps persist even when you hold college major constant. In the highest-paying major, electrical engineering, Black Americans earned only 71% of what white people did, while Latinos earned just 73%. Asian Americans, in contrast, earned 104% of what white people earned. In the lowest-paid major, family and consumer sciences, African Americans earned 97% of what white people did, and Latinos earned 94%. Asian Americans earned 117% of what white people earned. The same general pattern of white income advantage existed in all majors with two exceptions: Black people earned more in elementary education and nursing.

Remember, this is comparing individuals with a bachelor’s degree or higher to people with the same college major. Again, white Americans are still advantaged in most career paths over Black Americans and Latinos.

Disparities persist in the job market

Unemployment data show similar patterns. The July 2025 figures for workers at all education levels show that Black people were 1.9 times as likely to be unemployed as white Americans. Latinos were 1.4 times as likely, and Asian Americans 1.1 times.

This same white advantage still occurs when looking only at workers who have earned a bachelor’s degree or more. Black Americans with bachelor’s degrees or higher were 1.3 times as likely to be unemployed as similarly educated white Americans as of 2021, the last year for which data is available. Latinos with college degrees were 1.4 times as likely to be unemployed as similar white Americans. The white advantage was even larger for those with only a high school degree or less. Unfortunately, data for Asian Americans weren’t available.
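The relative-likelihood figures above are simply one group’s unemployment rate divided by another’s. Here is a minimal Python sketch of that calculation; the rates used are hypothetical placeholders for illustration, not the actual Bureau of Labor Statistics figures.

```python
def relative_likelihood(rate_a: float, rate_b: float) -> float:
    """Return how many times as likely group A is to experience an
    outcome compared with group B, given each group's rate."""
    return rate_a / rate_b

# Hypothetical unemployment rates (fractions), NOT the real BLS data:
group_a_rate = 0.072  # 7.2% of group A unemployed
group_b_rate = 0.038  # 3.8% of group B unemployed

ratio = relative_likelihood(group_a_rate, group_b_rate)
print(f"{ratio:.1f} times as likely")  # prints "1.9 times as likely"
```

With these placeholder rates, a ratio of about 1.9 means group A’s unemployment rate is nearly double group B’s, which is how such “times as likely” statements are derived.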

In another study, researchers sent 80,000 fake resumes in response to 10,000 job listings posted by 97 of the largest employers in the country. The credentials on the resumes were essentially the same, but the names signaled race: Some had Black-sounding names, like Lakisha or Leroy, while others had more “white-sounding” names like Todd or Allison. This method is known as an “audit study.”

This research, which was conducted between 2019 and 2021, found that employers were 9.5% more likely to contact the Todds and Allisons than the Lakishas and Leroys within 30 days of receiving a resume. Of the 28 audit studies that have been conducted since 1989, every one showed that applicants with Black- or Latino-sounding names were less likely to be contacted than those with white-sounding or racially neutral names.

Finally, a 2025 study analyzed 600,000 letters of recommendation for college-bound students who used the Common App form during the 2018-19 and 2019-20 academic years. Only students who applied to at least one selective college were included. The study found that letters for Black and Latino students were shorter and said less about their intellectual promise.

Similarly, letters in support of first-generation students – that is, students whose parents hadn’t graduated from a four-year college, and who are disproportionately likely to be Black and Latino – had fewer sentences dedicated to their scientific, athletic and artistic abilities, or their overall academic potential.

These and other studies don’t provide evidence of massive anti-white discrimination. Although scattered cases of white people being discriminated against undoubtedly exist, the data suggest that white people are still advantaged relative to non-Asian people of color. White Americans may be less advantaged than they were, but they’re still advantaged.

While it’s true that many working-class white Americans are having a tough time in the current economy, it’s not because of their race. It’s because of their class. It’s because of automation and overseas outsourcing taking away good jobs. It’s because of high health care costs and cuts to the safety net.

In other words, while many working-class white people are struggling now, there’s little evidence race is the problem.

The Conversation

Fred L. Pincus does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Reverse discrimination? In spite of the MAGA bluster over DEI, data shows white Americans are still advantaged – https://theconversation.com/reverse-discrimination-in-spite-of-the-maga-bluster-over-dei-data-shows-white-americans-are-still-advantaged-262394

Trump-Putin summit: Veteran diplomat explains why putting peace deal before ceasefire wouldn’t end Russia-Ukraine war

Source: The Conversation – USA – By Donald Heflin, Executive Director of the Edward R. Murrow Center and Senior Fellow of Diplomatic Practice, The Fletcher School, Tufts University

U.S. President Donald Trump (R) and Russian President Vladimir Putin leave at the conclusion of a press conference on Aug. 15, 2025 in Alaska. Andrew Harnik/Getty Images

If you’re confused about the aims, conduct and outcome of the summit meeting between U.S. President Donald Trump and Russian leader Vladimir Putin held in Anchorage, Alaska, on Aug. 15, 2025, you’re probably not alone.

As summits go, the meeting broke with many conventions of diplomacy: It was arranged at the last minute, it appeared to ignore longstanding protocol, and accounts of what happened conflicted in the days after the event ended early.

The Conversation U.S.’s politics editor Naomi Schalit interviewed Donald Heflin, a veteran diplomat now teaching at Tufts University’s Fletcher School, to help untangle what happened and what could happen next.

It was a hastily planned summit. Trump said they’d accomplish things that they didn’t seem to accomplish. Where do things stand now?

It didn’t surprise me or any experienced diplomat that there wasn’t a concrete result from the summit.

First, the two parties, Russia and Ukraine, weren’t asking to come to the peace table. Neither one of them is ready yet, apparently. Second, the process was flawed. It wasn’t prepared well enough in advance, at the secretary of state and foreign minister level. It wasn’t prepared at the staff level.

What was a bit of a surprise was the last couple days before the summit, the White House started sending out what I thought were kind of realistic signals. They said, “Hopefully we’ll get a ceasefire and then a second set of talks a few weeks in the future, and that’ll be the real set of talks.”

UK Prime Minister Keir Starmer, here embracing Ukrainian President Volodymyr Zelenskyy in London on Aug. 14, 2025, is one of many European leaders voicing strong support for Ukraine and Zelenskyy.
Jordan Pettitt/PA Images via Getty Images

Now, that’s kind of reasonable. That could have happened. That was not a terrible plan. The problem was it didn’t happen. And we don’t know exactly why it didn’t happen.

Reading between the lines, there were a couple problems. The first is the Russians, again, just weren’t ready to do this, and they said, “No ceasefire. We want to go straight to permanent peace talks.”

Ukraine doesn’t want that, and neither do its European allies. Why?

When you do a ceasefire, what normally happens is you leave the warring parties in possession of whatever land their military holds right now. That’s just part of the deal. You don’t go into a 60- or 90-day ceasefire and say everybody’s got to pull back to where they were four years ago.

But if you go to a permanent peace plan, which Putin wants, you’ve got to decide that people are going to pull back, right? So that’s problem number one.

Problem number two is it’s clear that Putin is insisting on keeping some of the territory that his troops seized in 2014 and 2022. That’s just a non-starter for the Ukrainians.

Is Putin doing that because that really is his bottom line demand, or did he want to blow up these peace talks, and that was a good way to blow them up? It could be either or both.

Russia has made it clear that it wants to keep parts of Ukraine, based on history and ethnic makeup.

The problem is, the world community has made it clear for decades and decades and decades, you don’t get what you want by invading the country next door.

Remember in Gulf War I, when Saddam Hussein invaded and swallowed Kuwait and made it the 19th province of Iraq? The U.S. and Europe went in there and kicked him out. Then there are also examples where the U.S. and Europe have told countries, “Don’t do this. You do this, it’s going to be bad for you.”

So if Russia learns that it can invade Ukraine and seize territory and be allowed to keep it, what’s to keep them from doing it to some other country? What’s to keep some other country from doing it?

You mean the whole world is watching.

Yes. And the other thing the world is watching is the U.S. gave security guarantees to Ukraine in 1994 when they gave up the nuclear weapons they held, as did Europe. The U.S. has, both diplomatically and in terms of arms, supported Ukraine during this war. If the U.S. lets them down, what kind of message does that send about how reliable a partner the U.S. is?

The U.S. has this whole other thing going on the other side of the world where the country is confronting China on various levels. What if the U.S. sends a signal to the Taiwanese, “Hey, you better make the best deal you can with China, because we’re not going to back your play.”

Ukrainian police officers evacuate a resident from a residential building in Bilozerske following an airstrike by Russian invading forces on Aug. 17, 2025.
Pierre Crom/Getty Images

At least six European leaders are coming to Washington along with Zelenskyy. What does that tell you?

They’re presenting a united front to Trump and Secretary of State Marco Rubio to say, “Look, we can’t have this. Europe’s composed of a bunch of countries. If we get in the situation where one country invades the other and gets to keep the land they took, we can’t have it.”

President Trump had talked to all of them before the summit, and they probably came away with a strong impression that the U.S. was going for a ceasefire. And then, that didn’t happen.

Instead, Trump took Putin’s position of going straight to peace talks, no ceasefire.

I don’t think they liked it. I think they’re coming in to say to him, “No, we have to go to ceasefire first. Then talks and, PS, taking territory and keeping it is terrible precedent. What’s to keep Russia from just storming into the three Baltic states – Estonia, Latvia, and Lithuania – next? The maps of Europe that were drawn 100 years ago have held. If we’re going to let Russia erase a bunch of the borders on the map and incorporate parts, it could really be chaotic.”

Where do you see things going?

Until and unless you hear there’s a ceasefire, nothing’s really happened and the parties are continuing to fight and kill.

What I would look for after the Monday meetings is, does Trump stick to his guns post-Alaska and say, “No, we’re gonna have a big, comprehensive peace agreement, and land for peace is on the table”?

Or does he kind of swing back towards the European point of view and say, “I really think the first thing we got to have is a ceasefire”?

Even critics of Trump need to acknowledge that he’s never been a warmonger. He doesn’t like war. He thinks it’s too chaotic. He can’t control it. No telling what will happen at the other end of war. I think he sincerely wants for the shooting and the killing to stop above all else.

The way you do that is a ceasefire. You have two parties say, “Look, we still hate each other. We still have this really important issue of who controls these territories, but we both agree it’s in our best interest to stop the fighting for 60, 90 days while we work on this.”

If you don’t hear that coming out of the White House into the Monday meetings, this isn’t going anywhere.

There are thousands of Ukrainian children who have been taken by Russia – essentially kidnapped. Does that enter into any of these negotiations?

It should. It was a terror tactic.

This could be a place where you can make progress. If Putin said, well, “We still don’t want to give you any land, but, yeah, these kids here, you can have them back,” it’s the kind of thing you throw on the table to show that you’re not a bad guy and you are kind of serious about these talks.

Whether they’ll do that or not, I don’t know. It’s really a tragic story.

The Conversation

Donald Heflin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Trump-Putin summit: Veteran diplomat explains why putting peace deal before ceasefire wouldn’t end Russia-Ukraine war – https://theconversation.com/trump-putin-summit-veteran-diplomat-explains-why-putting-peace-deal-before-ceasefire-wouldnt-end-russia-ukraine-war-263314

Do people dream in color or black and white?

Source: The Conversation – USA – By Kimberly Fenn, Professor of Psychology, Michigan State University

One way to remember your dreams better: Write them down the moment you wake up. Andriy Onufriyenko/Moment via Getty Images

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


Do we visualize dreams in color or black and white? – Srihan, age 7, West Bengal, India


Dreams are an astonishing state of consciousness. As you sleep, your mind creates fantastic and bizarre stories, rich with visual details – all without any conscious input from you.

Some dreams are boring. Others show you shocking events or magnificent images. I frequently dream of alligators walking upright, wearing sunglasses and yellow T-shirts. Often the alligators are friendly and go on adventures with me, but sometimes they’re aggressive and chase me.

The way the brain operates while you’re dreaming explains why dreams can be so fantastic. A small structure called the amygdala is largely responsible for processing emotional information, and it’s very active while dreaming. In contrast, the brain’s frontal cortex, which helps you plan and strategize, tends to be rather quiet. This pattern explains why dreams can jump from one peculiar scene to the next, with no clear story line. It’s as if you are sailing an emotional wave, without a captain.

Dreams can indeed be emotional and sometimes scary. But dreams can be enjoyable too – maybe you’ve had a dream so delightful you were disappointed to wake up and realize it wasn’t reality.

Are the images in your dreams in vivid color? Perhaps you had a dream about playing Candy Crush and can remember the brightly colored red, purple and yellow candies cascading in your dream.

As a neuroscientist who studies sleep, I can tell you that about 70% to 80% of people report dreaming in color, as opposed to just in shades of black and white. But this estimate may be low, because scientists can’t actually see what a dreamer sees. There’s no sophisticated technology showing them exactly what’s happening in a dreamer’s mind. Instead, they have to rely on what dreamers remember about their dreams.

Researchers record brain and eye activity while they monitor volunteers’ sleep in the lab.
Greg Kohuth

Studying sleep in the laboratory

To study dreams, researchers ask people to sleep in laboratories, and they simply wake them while they’re dreaming and then ask them what they were just thinking about. It’s pretty rudimentary science, but it works.

How do scientists know when people are dreaming? Although dreams can occur in any sleep stage, research has long shown that dreams are most likely to occur during rapid eye movement sleep, or REM sleep.

Scientists can identify REM by the electrical activity on your scalp and your eye movements. They do this by using an electroencephalogram, which uses several small electrodes placed directly on the scalp to measure brain activity. During REM, the dreamer’s eyes move back and forth repeatedly. This likely means they’re scanning – that is, looking around in their dream.

That’s when dream researchers wake up their participants. Dreams are really tricky to study because they evaporate so quickly. So instead of asking participants to remember a dream – even one they were having a moment ago – we ask them what they were just “thinking.” Dreamers don’t have time to think or reflect; they just respond – before the dream is lost.

Dreams are full-sensory experiences

There seem to be age differences in color dreaming. Older people report far less color in their dreams than younger people. The prevailing explanation for this is based on the media they experienced while young. If the photographs, movies and television you saw as a child were all in black and white, then you are more likely to report more black-and-white dreams than color dreams.

This phenomenon raises some interesting questions. Are people really dreaming in black and white or just remembering their dreams that way after the fact? Was it as common for people to say they dreamed in black and white before these visual media were invented? There wasn’t any focused research that relied on in-the-moment dream reports back before black-and-white photos and movies existed, so we will never know.

Although visual features dominate, you can also hear, smell, taste and feel things in your dreams. So if you dream about visiting Disneyland, you might hear the music from the parade or smell french fries from a food stand.

You may have also wondered whether blind people dream. They do. If a person becomes blind after age 5 or 6, their dreams will contain visual images. However, someone who is congenitally blind, or becomes blind before about age 5, will not have visual images in their dreams. Instead, their dreams contain more information from the other senses.

Remembering your dreams

Some people may say they don’t dream at all. They do, but many people don’t remember their dreams. The vast majority of dreams are forgotten. That’s because when we’re in REM sleep, the hippocampus, the area of the brain responsible for long-term memory, is largely turned off.

Others may remember a dream immediately upon awakening but quickly forget it. That’s because the hippocampus is a bit sluggish and takes some time to wake up, so you’re not able to create a long-term memory right after waking.

Perhaps the biggest question about dreams is whether they mean anything. People have been discussing this since ancient times. Sigmund Freud, the founder of psychoanalysis, called dreams the “royal road to the unconscious.” He believed they had a profound meaning that’s hidden from the dreamer.

But today, scientists agree that dreams do not have any hidden meaning. So while it’s entertaining to think about what your dreams mean, there’s no scientific basis, for example, to think that a dream about your teeth falling out automatically means you’re anxious about a loss.

If you would like to remember your dreams better, simply keep a notepad and pen by your bed and practice writing down your dreams right when you wake. This is the best way to remember the fantastical stories your brain creates for you every night.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

Kimberly Fenn does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Do people dream in color or black and white? – https://theconversation.com/do-people-dream-in-color-or-black-and-white-256971

NASA wants to put a nuclear reactor on the Moon by 2030 – choosing where is tricky

Source: The Conversation – USA – By Clive Neal, Professor of Civil and Environmental Engineering and Earth Sciences, University of Notre Dame

Several missions have already attempted to land on the lunar surface in 2025, with more to come. AP Photo

In a bold, strategic move for the U.S., acting NASA Administrator Sean Duffy announced plans on Aug. 5, 2025, to build a nuclear fission reactor for deployment on the lunar surface in 2030. Doing so would allow the United States to gain a foothold on the Moon by 2030, the year China plans to land its first taikonaut, the Chinese term for astronaut.

Apart from the geopolitical importance, there are other reasons why this move is critically important. A source of nuclear energy will be necessary for visiting Mars, because solar energy is weaker there. It could also help establish a lunar base and potentially even a permanent human presence on the Moon, as it delivers consistent power through the cold lunar night.

As humans travel out into the solar system, learning to use the local resources is critical for sustaining life off Earth, starting at the nearby Moon. NASA plans to prioritize the fission reactor as the power source needed to extract and refine lunar resources.

As a geologist who studies human space exploration, I’ve been mulling over two questions since Duffy’s announcement. First, where is the best place to put an initial nuclear reactor on the Moon, to set up for future lunar bases? Second, how will NASA protect the reactor from plumes of regolith – or loosely fragmented lunar rocks – kicked up by spacecraft landing near it? These are two key questions the agency will have to answer as it develops this technology.

Where do you put a nuclear reactor on the Moon?

The nuclear reactor will likely form the power supply for the initial U.S.-led Moon base that will support humans who’ll stay for ever-increasing lengths of time. To facilitate sustainable human exploration of the Moon, using local resources such as water and oxygen for life support and hydrogen and oxygen to refuel spacecraft can dramatically reduce the amount of material that needs to be brought from Earth, which also reduces cost.

In the 1990s, spacecraft orbiting the Moon first observed dark craters called permanently shadowed regions on the lunar north and south poles. Scientists now suspect these craters hold water in the form of ice, a vital resource for countries looking to set up a long-term human presence on the surface. NASA’s Artemis campaign aims to return people to the Moon, targeting the lunar south pole to take advantage of the water ice that is present there.

Dark craters on the Moon, parts of which are indicated here in blue, never get sunlight. Scientists think some of these permanently shadowed regions could contain water ice.
NASA’s Goddard Space Flight Center

In order to be useful, the reactor must be close to accessible, extractable and refinable water ice deposits. The issue is we currently do not have the detailed information needed to define such a location.

The good news is the information can be obtained relatively quickly. Six lunar orbital missions have collected, and in some cases are still collecting, relevant data that can help scientists pinpoint which water ice deposits are worth pursuing.

These datasets give indications of where surface or buried water ice deposits are. Looking at these datasets in tandem can reveal water ice “hot prospects” that rover missions can then investigate to confirm or refute the orbital observations. But this step isn’t easy.

Luckily, NASA already has its Volatiles Investigating Polar Exploration Rover mission built, and it has passed all environmental testing. It is currently in storage, awaiting a ride to the Moon. The VIPER mission can be used to investigate on the ground the hottest prospect for water ice identified from orbital data. With enough funding, NASA could probably have this data in a year or two at both the lunar north and south poles.

The VIPER rover would survey water at the south pole of the Moon.

How do you protect the reactor?

Once NASA knows the best spots to put a reactor, it will then have to figure out how to shield the reactor from spacecraft as they land. As spacecraft approach the Moon’s surface, they stir up loose dust and rocks, called regolith. It will sandblast anything close to the landing site, unless the items are placed behind large boulders or beyond the horizon, which is more than 1.5 miles (2.4 kilometers) away on the Moon.
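The horizon figure above follows from simple spherical geometry: for an observer at height h on a smooth body of radius R, the distance to the horizon is approximately sqrt(2·R·h). A minimal sketch of that calculation, assuming an eye level of about 1.7 meters (an assumption on my part, not stated in the article):

```python
import math

LUNAR_RADIUS_M = 1_737_400  # mean lunar radius in meters

def horizon_distance(height_m: float, radius_m: float = LUNAR_RADIUS_M) -> float:
    """Approximate straight-line distance (in meters) to the horizon
    for an observer at height_m above a smooth sphere of radius radius_m."""
    return math.sqrt(2 * radius_m * height_m)

# An observer at roughly eye level sees a horizon about 2.4 km away,
# matching the "more than 1.5 miles (2.4 kilometers)" figure:
print(f"{horizon_distance(1.7) / 1000:.1f} km")
```

Because the Moon’s radius is far smaller than Earth’s, the horizon is much closer, which is why a lander only a couple of kilometers away can already be out of the line of sight of surface assets.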

Scientists already know about the effects of landing next to a pre-positioned asset. In 1969, Apollo 12 landed 535 feet (163 meters) away from the robotic Surveyor 3 spacecraft, which showed corrosion on surfaces exposed to the landing plume. The Artemis campaign will have much bigger lunar landers, which will generate larger regolith plumes than Apollo did. So any prepositioned assets will need protection from anything landing close by, or the landing will need to occur beyond the horizon.

Until NASA can develop a custom launch and landing pad, using the lunar surface’s natural topography or placing important assets behind large boulders could be a temporary solution. However, a pad built just for launching and landing spacecraft will eventually be necessary for any site chosen for this nuclear reactor, as it will take multiple visits to build a lunar base. While the nuclear reactor can supply the power needed to build a pad, this process will require planning and investment.

Human space exploration is complicated. But carefully building up assets on the Moon means scientists will eventually be able to do the same thing a lot farther away on Mars. While the devil is in the details, the Moon will help NASA develop the abilities to use local resources and build infrastructure that could allow humans to survive and thrive off Earth in the long term.

The Conversation

Clive Neal receives funding from NASA.

ref. NASA wants to put a nuclear reactor on the Moon by 2030 – choosing where is tricky – https://theconversation.com/nasa-wants-to-put-a-nuclear-reactor-on-the-moon-by-2030-choosing-where-is-tricky-263146

Some pro athletes keep getting better as they age − neuroscience can explain how they stay sharp

Source: The Conversation – USA – By Fiddy Davis Jaihind Jothikaran, Associate Professor of Kinesiology, Hope College

Recovery and mental resilience support the development of neuroplasticity, which helps athletes like Allyson Felix stay sharp. AP Photo/Charlie Riedel

In a world where sports are dominated by youth and speed, some athletes in their late 30s and even 40s are not just keeping up – they are thriving.

Novak Djokovic is still outlasting opponents nearly half his age on tennis’s biggest stages. LeBron James continues to dictate the pace of NBA games, defending centers and orchestrating plays like a point guard. Allyson Felix won her 11th Olympic medal in track and field at age 35. And Tom Brady won a Super Bowl at 43, long after most NFL quarterbacks retire.

The sustained excellence of these athletes is not just due to talent or grit – it’s biology in action. Staying at the top of their game reflects a trainable convergence of brain, body and mindset. I’m a performance scientist and a physical therapist who has spent over two decades studying how athletes train, taper, recover and stay sharp. These insights aren’t just for high-level athletes – they hold true for anyone navigating big life changes or working to stay healthy.

Increasingly, research shows that the systems that support high performance – from motor control to stress regulation, to recovery – are not fixed traits but trainable capacities. In a world of accelerating change and disruption, the ability to adapt to new changes may be the most important skill of all. So, what makes this adaptability possible – biologically, cognitively and emotionally?

The amygdala and prefrontal cortex

Neuroscience research shows that with repeated exposure to high-stakes situations, the brain begins to adapt. The prefrontal cortex – the region most responsible for planning, focus and decision-making – becomes more efficient in managing attention and making decisions, even under pressure.

During stressful situations, such as facing match point in a Grand Slam final, this area of the brain can help an athlete stay composed and make smart choices – but only if it’s well trained.

In contrast, the amygdala, our brain’s threat detector, can hijack performance by triggering panic, freezing motor responses or fueling reckless decisions. With repeated exposure to high-stakes moments, elite athletes gradually reshape this brain circuit.

They learn to tune down amygdala reactivity and keep the prefrontal cortex online, even when the pressure spikes. This refined brain circuitry enables experienced performers to maintain their emotional control.

Creating a brain-body loop

Brain-derived neurotrophic factor, or BDNF, is a molecule that supports adapting to changes quickly. Think of it as fertilizer for the brain. It enhances neuroplasticity: the brain’s ability to rewire itself through experience and repetition. This rewiring helps athletes build and reinforce the patterns of connections between brain cells to control their emotion, manage their attention and move with precision.

BDNF levels increase with intense physical activity, mental focus and deliberate practice, especially when combined with recovery strategies such as sleep and deep breathing.

Elevated BDNF levels are linked to better resilience against stress and may support faster motor learning, which is the process of developing or refining movement patterns.

For example, after losing a set, Djokovic often resets by taking deep, slow breaths – not just to calm his nerves, but to pause and regain control. This conscious breathing helps him restore focus and likely quiets the stress signals in his brain.

In moments like these, higher BDNF availability likely allows him to regulate his emotions and recalibrate his motor response, helping him to return to peak performance faster than his opponent.

Rewiring your brain

In essence, athletes who repeatedly train and compete in pressure-filled environments are rewiring their brain to respond more effectively to those demands. This rewiring, from repeated exposures, helps boost BDNF levels and in turn keeps the prefrontal cortex sharp and dials down the amygdala’s tendency to overreact.

This kind of biological tuning is what scientists call cognitive reserve and allostasis – the process the body uses to make changes in response to stress or environmental demands to remain stable. It helps the brain and body be flexible, not fragile.

Importantly, this adaptation isn’t exclusive to elite athletes. Studies on adults of all ages show that regular physical activity – particularly exercises that challenge both body and mind – can raise BDNF levels, improve the brain’s ability to adapt and respond to new challenges, and reduce stress reactivity.

Programs that combine aerobic movement with coordination tasks – such as dancing, complex drills or even fast-paced walking while problem-solving – have been shown to preserve skills such as focus, planning, impulse control and emotional regulation over time.

After an intense training session or a match, you will often see athletes hopping on a bike or spending some time in the pool. These low-impact, gentle movements, known as active recovery, help tone down the nervous system gradually.

Outside of active recovery, sleep is where the real reset and repair happen. Sleep aids in learning and strengthens the neural connections challenged during training and competition.

Serbian tennis player Novak Djokovic practices meditation, which strengthens the mental pathways that help with stress regulation.
AP Photo/Kin Cheung

Over time, this convergence creates a trainable loop between the brain and body that is better equipped to adapt, recover and perform.

Lessons beyond sport

While the spotlight may shine on sporting arenas, you don’t need to be a pro athlete to train these same skills.

The ability to perform under pressure is a result of continuing adaptation. Whether you’re navigating a career pivot, caring for family members, or simply striving to stay mentally sharp as the world changes, the principles are the same: Expose yourself to challenges, regulate stress and recover deliberately.

While speed, agility and power may decline with age, some sport-specific skills such as anticipation, decision-making and strategic awareness actually improve. Athletes with years of experience develop faster mental models of how a play will unfold, which allows them to make better and faster choices with minimal effort. This efficiency, the product of years of reinforcing neural circuits, doesn’t immediately vanish with age. This is one reason experienced athletes often excel even when they are well past their physical prime.

Physical activity, especially dynamic and coordinated movement, boosts the brain’s capacity to adapt. So does learning new skills, practicing mindfulness and even rehearsing performance under pressure. In daily life, this might be a surgeon practicing a critical procedure in simulation, a teacher preparing for a tricky parent meeting, or a speaker practicing a high-stakes presentation to stay calm and composed when it counts. These aren’t elite rituals – they’re accessible strategies for building resilience, motor efficiency and emotional control.

Humans are built to adapt – with the right strategies, you can sustain excellence at any stage of life.

The Conversation

Fiddy Davis Jaihind Jothikaran does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Some pro athletes keep getting better as they age − neuroscience can explain how they stay sharp – https://theconversation.com/some-pro-athletes-keep-getting-better-as-they-age-neuroscience-can-explain-how-they-stay-sharp-261927

AI is about to radically alter military command structures that haven’t changed much since Napoleon’s army

Source: The Conversation – USA – By Benjamin Jensen, Professor of Strategic Studies at the Marine Corps University School of Advanced Warfighting; Scholar-in-Residence, American University School of International Service

This U.S. Army command post, seen from a drone, is loaded with modern technology but uses a centuries-old structure. Col. Scott Woodward, U.S. Army

Despite two centuries of evolution, the structure of a modern military staff would be recognizable to Napoleon. At the same time, military organizations have struggled to incorporate new technologies as they adapt to new domains – air, space and information – in modern war.

The sizes of military headquarters have grown to accommodate the expanded information flows and decision points of these new facets of warfare. The result is diminishing marginal returns and a coordination nightmare – too many cooks in the kitchen – that risks jeopardizing mission command.

AI agents – autonomous, goal-oriented software powered by large language models – can automate routine staff tasks, compress decision timelines and enable smaller, more resilient command posts. They can shrink the staff while also making it more effective.

As an international relations scholar and reserve officer in the U.S. Army who studies military strategy, I see both the opportunity afforded by the technology and the acute need for change.

That need stems from the reality that today’s command structures still mirror Napoleon’s field headquarters in both form and function – industrial-age architectures built for massed armies. Over time, these staffs have ballooned in size, making coordination cumbersome. They also result in sprawling command posts that modern precision artillery, missiles and drones can target effectively and electronic warfare can readily disrupt.

Russia’s so-called “Graveyard of Command Posts” in Ukraine vividly illustrates how static headquarters become liabilities on a modern battlefield where opponents can mass precision artillery, missiles and drones.

This satellite image shows the electronic emissions of a brigade combat team training at Fort Irwin, Calif. The bright red areas are emissions from command posts.
Col. Scott Woodward, U.S. Army

The role of AI agents

Military planners now see a world in which AI agents – autonomous, goal-oriented software that can perceive, decide and act on their own initiative – are mature enough to deploy in command systems. These agents promise to automate the fusion of multiple sources of intelligence, threat-modeling, and even limited decision cycles in support of a commander’s goals. There is still a human in the loop, but the humans will be able to issue commands faster and receive more timely and contextual updates from the battlefield.

These AI agents can parse doctrinal manuals, draft operational plans and generate courses of action, which helps accelerate the tempo of military operations. Experiments – including efforts I ran at Marine Corps University – have demonstrated how even basic large language models can accelerate staff estimates and inject creative, data-driven options into the planning process. These efforts point to the end of traditional staff roles.

There will still be people – war is a human endeavor – and ethics will still factor into streams of algorithms making decisions. But the people who remain deployed are likely to gain the ability to navigate mass volumes of information with the help of AI agents.

These teams are likely to be smaller than modern staffs. AI agents will allow teams to manage multiple planning groups simultaneously.

For example, they will be able to use more dynamic red teaming techniques – role-playing the opposition – and vary key assumptions to create a wider menu of options than traditional plans. The time saved by not having to build PowerPoint slides and update staff estimates will be shifted to contingency analysis – asking “what if” questions – and building operational assessment frameworks – conceptual maps of how a plan is likely to play out in a particular situation – that provide more flexibility to commanders.

Designing the next military staff

To find the optimal design of this AI agent-augmented staff, I led a team of researchers at the bipartisan think tank Center for Strategic & International Studies’ Futures Lab to explore alternatives. The team developed three baseline scenarios reflecting what most military analysts see as the key operational problems in modern great power competition: joint blockades, firepower strikes and joint island campaigns. Joint refers to an action coordinated among multiple branches of a military.

In the example of China and Taiwan, joint blockades describe how China could isolate the island nation and either starve it or set conditions for an invasion. Firepower strikes describe how Beijing could fire salvos of missiles – similar to what Russia is doing in Ukraine – to destroy key military centers and even critical infrastructure. Last, in Chinese doctrine, a Joint Island Landing Campaign describes the cross-strait invasion their military has spent decades refining.

Any AI agent-augmented staff should be able to manage warfighting functions across these three operational scenarios.

The research team found that the best model kept humans in the loop and focused on feedback loops. This approach – called the Adaptive Staff Model and based on pioneering work by sociologist Andrew Abbott – embeds AI agents within continuous human-machine feedback loops, drawing on doctrine, history and real-time data to evolve plans on the fly.

In this model, military planning is ongoing and never complete, and focused more on generating a menu of options for the commander to consider, refine and enact. The research team tested the approach with multiple AI models and found that it outperformed alternatives in each case.

Gen. Mark Milley, former chairman of the Joint Chiefs of Staff, describes on ‘60 Minutes’ the dramatic upheaval AI is poised to cause in military operations.

AI agents are not without risk. First, they can be overly generalized, if not biased. Foundation models – AI models trained on extremely large datasets and adaptable to a wide range of tasks – know more about pop culture than war and require refinement. This makes it important to benchmark agents to understand their strengths and limitations.

Second, absent training in AI fundamentals and advanced analytical reasoning, many users tend to use models as a substitute for critical thinking. No smart model can make up for a dumb, or worse, lazy user.

Seizing the ‘agentic’ moment

To take advantage of AI agents, the U.S. military will need to institutionalize building and adapting agents, include adaptive agents in war games, and overhaul doctrine and training to account for human-machine teams. This will require a number of changes.

First, the military will need to invest in additional computational power to build the infrastructure required to run AI agents across military formations. Second, it will need to develop additional cybersecurity measures and conduct stress tests to ensure the agent-augmented staff isn’t vulnerable when attacked across multiple domains, including cyberspace and the electromagnetic spectrum.

Third, and most important, the military will need to dramatically change how it educates its officers. Officers will have to learn how AI agents work, including how to build them, and start using the classroom as a lab to develop new approaches to the age-old art of military command and decision-making. This could include revamping some military schools to focus on AI, a concept floated in the White House’s AI Action Plan released on July 23, 2025.

Absent these reforms, the military is likely to remain stuck in the Napoleonic staff trap: adding more people to solve ever more complex problems.

The Conversation

Benjamin Jensen led a research project that was a collaboration between CSIS and Scale AI. He did not personally receive any funding from the company.

ref. AI is about to radically alter military command structures that haven’t changed much since Napoleon’s army – https://theconversation.com/ai-is-about-to-radically-alter-military-command-structures-that-havent-changed-much-since-napoleons-army-262200