How fire, people and history shaped the South’s iconic longleaf pine forests

Source: The Conversation – USA (2) – By Andrea De Stefano, Assistant Professor of Forestry, Mississippi State University

A land manager examines young longleaf pines, some in their grassy phase, in a private forest in South Carolina. AP Photo/James Pollard

For thousands of years, one tree species defined the cultural and ecological identity of what is now the American South: the longleaf pine. The forest once stretched across 92 million acres from Virginia to Texas, but only about 5% of that original forest remains today. It was one of North America’s richest ecosystems, and it nearly disappeared.

As part of my job with the Mississippi State University forestry extension, I help private landowners, public agencies and nonprofit conservation groups restore these ecosystems. The forests’ story begins before European settlement, when Native peoples shaped and sustained this vast landscape using one of nature’s oldest tools: fire.

Longleaf pine trees depend on fire for survival and regeneration. Fire reduces competition from other plants, recycles nutrients into the soil and maintains the open structure of the landscape where longleaf pines grow best. In these open, grassy woodlands, red-cockaded woodpeckers, gopher tortoises, orchids, pitcher plants and hundreds of other species find homes.

A map of the southeastern United States shows the historical longleaf pine forest range in yellow and National Forests in green.
Historically, the longleaf pine forest had a vast range.
Andrea De Stefano, CC BY

Native stewardship

Longleaf pine seedlings spend about three to 10 years in a low, grasslike stage, building deep roots and resisting flames that sweep across the forest floor. Regular, low-intensity fires keep the ground open and sunny, and allow an incredibly diverse understory to flourish: pine lilies, meadow beauties, white bog orchids, carnivorous pitcher plants and dozens of native grasses.

For millennia, Native American tribes intentionally set fires to keep these areas open for hunting, travel and agriculture. This practice is evident from Indigenous oral histories, early European accounts and archaeological findings. Fire was part of daily life – a tool, not a danger.

People stand in a spacious open grove of trees.
A postcard from the early 20th century shows people standing next to longleaf pine trees in Mississippi.
Mississippi Department of Archives and History via Wikimedia Commons

European settlers arrive

When the first Europeans arrived in this part of North America, they encountered a landscape that seemed almost limitless: tall, straight pines ideal for shipbuilding; deep soils in the uplands suited for farming; and an understory, the plants that grow in the shade of the forest, perfect for open-range grazing.

Longleaf pine trees became the backbone of early industries. They provided lumber, fuel and naval stores, such as tar, pitch and turpentine, which were essential for waterproofing wooden ships. By the mid-1800s, the naval stores industry alone consumed millions of longleaf pines each year, especially in the Carolinas, Georgia and Florida.

At the same time, livestock, especially hogs, roamed freely and caused unexpected ecological damage. Hogs rooted up and ate the starchy roots of young longleaf seedlings, often wiping out an area’s entire year of seedlings before they could grow beyond the grass stage.

Still, even into the mid-1800s, millions of acres of longleaf forest remained intact. That would soon change.

People, equipment and machines stand amid tall trees.
Workers build a logging railroad through a longleaf pine forest in Texas in 1902.
Corbis Historical via Getty Images

Industrial logging and the collapse of a forest

By the late 19th century, the South had entered a new era of industrial logging. Railroads could reach deep into forests that were previously inaccessible. Steam-powered skidders dragged huge logs to mobile mills that could turn thousands of acres of trees into lumber in a single season. Lumber towns appeared overnight, then disappeared once the last trees were cut.

Most longleaf forests were felled between 1880 and 1930, with little thought given to regrowth. Land was cheap, timber was valuable, and scientific forestry was in its infancy. After logging, the debris left on the ground at many sites fueled wildfires too hot for young longleaf pines to survive. Some of these fires were ignited accidentally by sparks from railroads or logging operations, others by lightning, and some by people attempting to clear land.

Other parcels of land were overrun by hogs or converted to farms. Still more forestland simply failed to regenerate, because longleaf requires both good seed years and carefully timed burning to establish new generations of seedlings. By 1930, the once-vast longleaf forest was effectively gone.

A video shows the process of railroad-enabled logging of longleaf pine forests.

A turning point

The early 20th century brought public debates about fire. National forestry leaders, trained in northern ecosystems where wildfire was destructive, insisted that all fire was harmful and should be quickly extinguished. Southern landowners disagreed. They had long understood that fire kept the woods open, reduced pests and improved forage.

A series of pioneering researchers, including Herbert Stoddard, Austin Cary and others, proved scientifically what Native peoples had practiced for centuries: Prescribed fire is essential for longleaf pine forests.

By the 1930s, prescribed fire began to gain acceptance among Southern landowners and wildlife biologists, and by the 1940s it was recognized by state forestry agencies and the U.S. Forest Service as a legitimate management tool. This shift marked the beginning of a slow recovery of the forest.

Yet, after the logging of old-growth longleaf pine forests ended, foresters faced challenges regenerating the trees. Early planting attempts often failed. The longleaf species grows more slowly than loblolly or slash pine, making it less attractive to industry.

Millions of acres that once supported longleaf pines were converted to fast-growing plantation pines through the mid-20th century. By 1990, only 2.9 million acres of longleaf pine forest remained.

An open grassy area is punctuated by tall trees that are spaced well apart.
A view of a stand of young longleaf pines near Waycross, Ga., in 1936.
Carl Mydans via Library of Congress

A new era of restoration

Beginning in the 1980s, however, research breakthroughs began to offer the prospect of change. Studies across the Southeast demonstrated that longleaf pine trees could be reliably planted if seedling quality, site preparation and fire timing were carefully managed.

Improved genetics – for instance, choosing those seedlings more likely to grow straight and tall and those more resistant to disease and drought – and starting seedlings in containers increased survival dramatically.

A tree trunk shows black burn marks on its bark.
A longleaf pine tree shows marks from past controlled burns.
AP Photo/Chris Carlson

At the same time, landowners and agencies began to appreciate the benefits of longleaf pines. They are strong enough to withstand hurricanes, resistant to pests and disease, and provide high-quality timber and exceptional wildlife habitat. They are also compatible with grazing, need little to no fertilizer or other support to grow, and are well suited to a warming, more fire-prone climate.

Today, many organizations are restoring longleaf pine trees across national forests, private lands and working farms.

Landowners are choosing the species not only for conservation but for recreation, hunting and cultural reasons.

In many parts of the South, longleaf pines have become a symbol of both heritage and resilience to hurricanes, drought, wildfire and climate change.

The longleaf pine ecosystem is more than a forest: It is a story about how people shape landscapes over centuries. It thrived under Native fire stewardship, declined under industrial exploitation, and is now returning – thanks to science, collaboration and cultural rediscovery.

The future of the longleaf pine forest will depend on continued use of prescribed fire, support for and from private landowners and recognition that restoring a complex ecosystem takes time. But across the South, the open, grassy longleaf pine ecosystems are coming back. A forest once given up for lost is becoming, again, a living emblem of the southern landscape.

The Conversation

Andrea De Stefano does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How fire, people and history shaped the South’s iconic longleaf pine forests – https://theconversation.com/how-fire-people-and-history-shaped-the-souths-iconic-longleaf-pine-forests-272003

Gifts of gym memberships and Botox treatments can lead to hurt feelings – and bad reviews for the businesses

Source: The Conversation – USA (2) – By Linnéa Chapman, Assistant Professor of Marketing, Florida International University

Oh, you really shouldn’t have. ljubaphoto/E+ via Getty Images

How would you feel if someone gave you a gym membership as a holiday or Valentine’s Day gift?

What about Botox?

Laser hair removal?

Services like those are part of the estimated US$48 billion self-improvement industry. Does this suggest that many people would appreciate self-improvement gifts?

Retailers seem to think so.

The Planet Fitness chain of gyms encourages buying workout equipment for Mother’s Day. The Republic of Tea offers beauty tea, which the company says can improve your complexion, as part of its gift sets. Instagram posts call paying for other people’s Botox treatments “the new flowers,” and tell men that it is what women want for Valentine’s Day.

As an academic who studies consumer behavior, I am particularly interested in social aspects of consumption. Seeing these promotions, I wondered whether consumers take the bait. In other words, do people really give self-improvement products as presents?

Different responses to different gifts

To study what happens when people get self-improvement goods or services as presents, I teamed up with Farnoush Reshadi, a fellow marketing scholar with expertise in both self-improvement and gift-giving.

First, we asked 97 adults living in the United States whether they had ever gotten a self-improvement product as a gift. About 60% of these consumers, whom we recruited through an online platform, were women, and their average age was 38.6. Two-thirds of them indicated that they had received a self-improvement gift at some point.

Next, we created an experiment to find out how consumers might feel when they receive these gifts.

In it, 209 people imagined that they had received either a self-improvement calendar geared toward sharpening their communication skills or a “did you know” calendar with fun facts, such as the fact that bananas are berries.

Participants viewed the calendars, then answered some questions about how they would feel if someone gave them one.

Specifically, we asked to what extent they would feel hurt, wounded and crushed. On average, the people who saw the self-improvement calendar expressed stronger hurt feelings than those who saw the fun facts one.

What can happen to retailers

We also wanted to know how the people who receive self-improvement gifts might cope with their hurt feelings.

Explaining how they felt to the gift-giver seems unlikely, since social norms dictate that recipients should feel grateful for presents. Expressing other kinds of feelings about gifts, including hurt feelings, is relatively taboo.

Another possibility is that people in this situation cope by venting – either to someone else or by giving the gift a bad review.

This is exactly what we found.

Compared to those who imagined receiving gifts not geared toward self-improvement, people who imagined receiving self-improvement items as gifts consistently said they would give them lower ratings. They also said they were more likely to criticize them.

To be clear, this had nothing to do with the quality of those items.

To verify that, we asked 205 people to imagine buying either the self-improvement calendar or the “did you know” calendar for themselves. Then, we asked them to rate the calendar. On average, participants gave both calendars about 3.7 out of 5 stars.

This helped us rule out the possibility that people generally disliked the self-improvement calendar or thought it was a bad product.

A woman gets a Botox injection.
Getting Botox is a personal decision that probably doesn’t lend itself to presents.
Isa Foltin/Getty Images

Bad reviews are bad for business

Spreading negative word of mouth about self-improvement gifts might help people deal with their hurt feelings. But, to state the obvious, it doesn’t help retailers.

Negative product reviews can affect retailers’ revenue and reputations. That means self-improvement gifts don’t just hurt the people receiving them. By stimulating negative word of mouth, they hurt the retailers selling them, too.

To discourage bad reviews from people who get unwelcome gifts, we would suggest that companies not promote self-improvement products as gifts.

Instead, retailers could encourage consumers to buy those goods and services for themselves. This might be especially effective in January, when many people challenge themselves to meet self-improvement goals with New Year’s resolutions. This strategy might work throughout the year, as well.

To deter people from buying these gifts, retailers could refrain from marketing such goods and services that way or from putting them on sale before Valentine’s Day and other gift-giving occasions.

Even if retailers were to follow this advice, some of their customers might buy these gifts. What can retailers do then?

2 work-arounds

Our research identified two potential solutions.

First, retailers can offer financial incentives for leaving product reviews.

We conducted a study in which 311 people imagined receiving either a weight-loss tea or a regular tea as a birthday present. Some of the participants also imagined that they would be given a Visa gift card in exchange for leaving a review of the product.

On average, people gave the weight-loss tea a lower rating than the regular tea – unless they had been offered a Visa gift card in exchange for their review. Participants who imagined receiving a weight-loss tea along with a Visa gift card provided ratings that were comparable to those who received a regular tea.

Second, retailers can take care with how they send review requests.

Sometimes these requests aren’t framed as being from anyone in particular. Other times, they’re framed as though a real person sent them, along these lines: “Please review this product. Thanks, Alex.”

We had 306 people imagine receiving a weight-loss tea or a regular tea, accompanied by a review request. Participants then rated the imagined product. On average, the weight-loss tea got lower ratings than the regular tea – unless participants received a review request that appeared to come from a real person.

This suggests that sending review requests that appear to be from a specific person might help retailers avoid negative product reviews from people who get self-improvement products as presents.

This is a good thing, because self-improvement gifts aren’t necessarily bad goods or services. They’re just bad gifts.

So, the next time you shop for presents, my advice is that you skip the self-improvement aisle. Your friend or loved one – and the business you might have bought the gift from – will be glad you did.

The Conversation

Linnéa Chapman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Gifts of gym memberships and Botox treatments can lead to hurt feelings – and bad reviews for the businesses – https://theconversation.com/gifts-of-gym-memberships-and-botox-treatments-can-lead-to-hurt-feelings-and-bad-reviews-for-the-businesses-272986

White men held less than half the board seats on the top 50 Fortune list for the third straight year — but their numbers are rising

Source: The Conversation – USA (2) – By Richie Zweigenhaft, Emeritus Professor of Psychology, Guilford College

Who gets a seat at the table? AP Photo/Kiichiro Sato

Historically, corporate boardrooms have been mostly white and mostly male. But that began shifting in the 1970s, in part due to gains from the civil rights era and pro-diversity efforts by activists and business groups.

I have been monitoring the degree of diversity in the corporate and political worlds for decades. One useful diversity metric is the percentage of boardroom members who are not white men.

And for the third year in a row, white men did not hold the majority of seats on the boards of America’s 50 largest corporations, according to my analysis of the most recent Fortune 500 list. However, the share of white men nonetheless ticked up after a two-year decline.

But the white man/nonwhite man board split is, in itself, a blunt tool. It doesn’t tell us the nature of the current diversity, how it relates to the broader political climate, or what can be learned about diversity by looking at who the 2025 corporate directors were.

Patterns in the data

About a decade ago, white men held two-thirds of the seats on the top 50 Fortune boards. In 2023, for the first time, they held fewer than 50%. In 2024, that number dropped to 48.4%, but this year it climbed back to 49.7%.

Since white men make up about 31% of the U.S. population, they have remained heavily overrepresented in all three years.

As the percentage of seats held by white men rose from 2024 to 2025, however, the percentage held by white women dropped, from 25% to 24.5%. Other researchers found this same pattern for the entire Fortune 500.

The percentage of seats held by Black people also dropped, from 15% to 14.2%, and likewise those held by Hispanic people, from 6.1% to 5.9%. Meanwhile, the percentage of seats held by Asian people rose slightly, from 5.6% to 5.7%.

The education factor

The large majority of the men and women with Asian backgrounds who held 33 seats on the top 50 Fortune boards in 2025 were born outside the United States, did undergraduate work in their home countries, and then came to the U.S. to attend graduate school.

Most of the Hispanic directors were similarly born outside the country, and many of them did undergraduate or graduate work – or both – in the U.S.

Education matters for future diversity monitoring in part because of the Trump administration’s efforts to make it much harder for noncitizens to come to the U.S. for higher education.

Indeed, denying access to Asian and Hispanic people who wish to study in the U.S. could well, over time, diminish the pipeline to the corporate suite, and it could decrease the number of Asian and Hispanic corporate directors as well.

The politics behind some notable board changes

It is revealing to look at some of the people who left boards and the appointments of others – changes that resulted in this year’s drop in diversity.

For example, Meta added five people to its board: four white men and an Egyptian American woman. One of the white men was Dana White, the CEO of the Ultimate Fighting Championship and a longtime and currently active Trump supporter.

A man wearing a sport jacket smiles.
UFC CEO, Trump ally and recently minted Meta board member Dana White.
AP Photo/Evan Vucci

The woman that Meta added to its board is Dina Powell McCormick. She was deputy national security adviser in Trump’s first term and is married to Dave McCormick, a Republican financier who is currently a U.S. senator from Pennsylvania.

With the addition of White, Powell McCormick and three other white men, the Meta board went from 50% white men in 2024 to 60% in 2025, and it added two Trump supporters with close connections to the president. In late December 2025, Powell McCormick resigned her board seat to become Meta’s president and vice chair.

Some other notable changes in diversity from 2024 to 2025 took place on the boards of Fannie Mae and Freddie Mac.

Because the Federal Housing Finance Agency regulates these two companies, in 2025 the Trump administration’s hostility toward diversity, equity and inclusion, or DEI, appeared to have a direct effect on the level of diversity on these two boards. In January 2025, Trump nominated William Pulte, a Trump donor, to become the director of the FHFA.

Pulte swiftly got rid of some women directors, Black directors and an Asian director. As a result, the percentage of white male directors on those two boards increased from 40% in 2024 to 65% in 2025. Notably, however, among the new appointees to the board were a Black man, another man whose mother is Iranian and whose father is Pakistani, and a man of Spanish ancestry whose parents were Turkish immigrants.

Trump’s second-term Cabinet – which includes five white women, a Black man and a Hispanic woman – is far less diverse than the Cabinets of Presidents Barack Obama and Joe Biden were, but twice as diverse as his first Cabinet. Trump has shown himself to be open to some diversity as long as the diverse appointees – in line with his general policy on recruitment – are sufficiently willing to support him. Similarly, Pulte’s changes decreased diversity while at the same time including some people from diverse backgrounds who were loyal to Trump.

A portrait of a woman.
Dina Powell McCormick became Meta’s president in early 2026, after serving for a year on its board.
Business Wire

The ironies of elite diversity

All of that ties into a subject I have explored in three editions of a book I co-authored with Bill Domhoff, “Diversity in the Power Elite.” In it, we have looked at what we have called “the ironies of diversity.”

One central irony of diversity is that as a small number of people from previously excluded groups are granted entry into the power elite, the processes by which they are chosen and their very presence provide justification for the continuation of the status quo when it comes to power and the distribution of wealth.

The continued selection of some directors who provide diversity on the boards of the top 50 Fortune companies is part of this process, as is Trump’s surprisingly diverse Cabinet.

The fear among those pushing for greater diversity among corporate leadership is that the data for 2025 might be the beginning of a longer declining trend.

The Conversation

Richie Zweigenhaft does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. White men held less than half the board seats on the top 50 Fortune list for the third straight year — but their numbers are rising – https://theconversation.com/white-men-held-less-than-half-the-board-seats-on-the-top-50-fortune-list-for-the-third-straight-year-but-their-numbers-are-rising-272996

All foods can fit in a balanced diet – a dietitian explains how flexibility can be healthier than dieting

Source: The Conversation – USA (3) – By Charlotte Carlson, Director of the Kendall Reagan Nutrition Center, Colorado State University

There are no ‘good’ or ‘bad’ foods when thinking holistically about health. Halfpoint Images/Moment via Getty Images

Eat this, not that. This one food will cure everything. That food is poison. Cut this food out. Try this diet. Don’t eat at these times. Eat this food and you’ll lose weight. With society’s obsession with food, health and weight, statements like these are all over social media, gyms and even health care offices.

But do you need to follow rules like these to be healthy? Most often the answer is no, because health and nutrition is much more complex and nuanced than a simple list of what to eat and what to avoid. Despite this, rules about health and nutrition are so common because of diet culture – a morality imposed by society that sees falling outside the arbitrary ideal of thinness as a personal failure. Diet culture and the people promoting it expect you to pursue or maintain thinness at all times.

Diet culture norms have led to a multibillion-dollar industry promoting diets that each come with their own set of rules, with each claiming it’s the only way to be healthy or lose weight. With access to nutrition information at an all-time high online, people are often left digging through conflicting information when trying to figure out what to eat or what a healthy diet looks like.

Person standing in front of grocery store aisle
What foods would you pick without diet culture telling you what to do?
PeopleImages/iStock via Getty Images Plus

As a registered dietitian specializing in eating disorders, I see that the majority of my clients have been, and continue to be, harmed by diet culture. They wrestle with guilt and shame around food, and their health is often negatively affected by rigid rules about nutrition. Research has shown that, rather than improving health, diet culture increases your risk of unhealthy behaviors, including yo-yo dieting, weight cycling and eating disorders.

If the solution to health isn’t following the rules of diet culture, what is the answer? I believe an all-foods-fit approach to nutrition can offer an antidote.

What is all foods fit?

All foods fit may sound like “eat whatever you want, whenever you want,” but that is an oversimplification of this approach to nutrition. Rather, this model is based on the idea that all foods can fit into a healthy diet by balancing food and nutrition in a way that promotes health. It does this by enabling flexibility in your diet through listening to internal body cues to decide what and when to eat instead of following external rules.

All foods fit allows for nuance to exist in health and nutrition. Diet culture is black and white – foods are either “good” or “bad.” But nutrition and health are much more complex. For starters, many factors beyond diet affect health: exercise, sleep, stress, mental health, socioeconomic status, access to food, and health care, to name a few.

Similarly, while general guidelines around nutrition are available, everyone has individual needs based on their preferences, health status, access to food, daily schedule, cooking skills and more. The flexibility of all foods fit can help you make empowered food choices based on your health goals, tastes, exercise habits and life circumstances.

All foods fit in action

A common pushback to the all-foods-fit approach is that you can’t be healthy if you are eating “unhealthy” foods, and giving yourself permission to eat all foods means you’ll primarily eat the “bad” ones. However, research shows that removing the morality around food can actually lead to healthier food choices by decreasing stress related to food decisions. This reduces the risk of disordered eating, resulting in improved physical health.

To see what an all-foods-fit approach might look like, imagine you’re attending a social event where the food options are pizza, a veggie and dip tray and cookies. According to the diet you’re following, pizza, cookies and dips are all “bad” foods to avoid. You grab some of the veggies to eat but are still hungry.

You’re starving toward the end of the event, but the only food left is cookies. You plan on eating only one but feel so hungry and guilty that you end up eating a lot of cookies and feel out of control. You feel sick when you go home and promise yourself to do better tomorrow. But this binge-restrict cycle will continue.

Three people filling their plates with pizza, salad and chips
Flexibility can help you adapt to – and enjoy – different food situations.
Ivan Rodriguez Alba/E+ via Getty Images

Now imagine attending the same social event, but you don’t label foods as good or bad. From experience, you know you often feel hungry and unwell after eating pizza by itself. You also know that fiber, which can be found in vegetables, is helpful for gut health and can make you feel more satisfied after meals. So you balance your plate with a couple of slices of pizza and a handful of veggies and dip.

You feel pretty satisfied after that meal and don’t feel the need to eat a cookie. Toward the end of the event, you grab a cookie because you enjoy the taste and eat most of it before feeling satisfied. You save the rest of the cookie for later.

Rather than following strict rules and restrictions that can lead to cycles of guilt and shame, an all-foods-fit approach can lead to more sustainable healthy habits, where stress and disruptions to routine don’t wreak havoc on your overall diet.

How to get started with an all-foods-fit approach

It can be incredibly hard to divest from diet culture and adopt an all-foods-fit approach to nutrition and health. Here are some tips to help you get started:

  1. Remove any moral labels on food. Instead of good or bad, or healthy or unhealthy, think about the name of the food or the nutritional components it has. For example, chicken is high in protein, broccoli is a source of fiber, and ice cream is a dessert. Neutral labels can help determine what food choices make sense for you in the moment and reduce any guilt or shame around food.

  2. Focus on your internal cues – hunger, fullness, satisfaction and how food makes you physically feel. Becoming attuned to your body can help you regulate food choices and determine what eating pattern makes you feel your best.

  3. Eat consistently. When you aren’t eating regularly, it can be hard to feel in control around food. Your hunger can become more intense and your body less sensitive to fullness hormones. Implement an eating schedule that spaces food regularly throughout the day, filling any prolonged gaps between meals with a snack.

  4. Reintroduce foods you previously restricted. Start small with foods that feel less scary or with a small amount of a food you’re anxious about. This could look like adding a piece of chocolate to lunch most days, or trying out a bagel for one breakfast. By intentionally adding these foods back into your diet, you can build trust with yourself that you won’t feel out of control around these foods.

  5. Check in with yourself before eating. Ask yourself: How hungry am I? What sounds good right now? How long until I can eat again?

  6. And sometimes, more support is needed. This can be especially true if you’re experiencing disordered eating habits or have an eating disorder. Consider working with a dietitian to help challenge nutrition misinformation and heal your relationship to food.

The Conversation

Charlotte Carlson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. All foods can fit in a balanced diet – a dietitian explains how flexibility can be healthier than dieting – https://theconversation.com/all-foods-can-fit-in-a-balanced-diet-a-dietitian-explains-how-flexibility-can-be-healthier-than-dieting-270049

Why some people speak up against prejudice, while others do not

Source: The Conversation – UK – By Mete Sefa Uysal, Lecturer in Social & Political Psychology, University of Exeter

guruXOX/Shutterstock

When people encounter racism or discrimination, they don’t all respond in the same way. Some calmly challenge the remark, some file a complaint, others confront the offender aggressively – and many say nothing at all.

A common assumption is that speaking up against discrimination is a matter of personal courage, political ideology or education. But my recent research suggests that people’s cultural values, shaped by their backgrounds and life experiences, strongly influence how they confront discrimination.

Confrontation comes in very different forms. Some choose to confront non-aggressively (such as calmly pointing out prejudice, explaining why it is offensive or sharing how it affects them emotionally). Others prefer more aggressive confrontation (such as shouting back, making threats or physical retaliation). These responses carry different risks and consequences, both for the person confronting and for wider social relations.

My recent study with colleagues Thomas Kessler and Ayşe K. Uskul looked at how people’s cultural views of honour affected how they might respond to an insult or discrimination.

Honour is often misunderstood as a personal trait or a relic of “traditional” cultures. In psychology, honour is better understood as a cultural system that develops when people cannot rely on institutions – such as courts or police – to protect them from harm or injustice.

Honour cultures, common in Latin America, north Africa, south and west Asia and the southern US, often developed under harsh historical, social and ecological conditions, for example, scarce resources unprotected by central authorities.

In such contexts, reputation matters. Maintaining honour requires projecting a reputation for toughness. It means signalling a readiness to retaliate against perceived threats or insults to protect oneself and one’s family.

Being seen as weak or passive can invite further mistreatment, so individuals and groups learn to defend their dignity themselves. Honour codes travel with people through migration, continuing to shape how they interpret threats, insults and unfair treatment in new social environments.

The role of honour

Our study sought to understand how internalised honour codes shape responses to discrimination. Specifically, we looked at two communities: south and west Asians in the UK and Turkish migrants in Germany.

People in these communities may have grown up in an honour culture, where personal retaliation against insults is expected. Or, they may have learned these codes from parents and grandparents, while living in countries where such codes are not widespread.

Our findings show that honour codes play a central role in how people say they would confront discrimination. We asked participants a series of questions about their views on honour, as well as their experiences of discrimination. We then asked them to rate the different confrontation styles that they might use when someone discriminates against them based on their ethnic or cultural background.

We found that broadly, people who experienced discrimination more frequently said they were more likely to confront it. But the style of confrontation they chose depended strongly on their cultural values.

A key finding concerned collective honour: the belief that you have a responsibility to defend the dignity of your ethnic or cultural group. Participants who strongly endorsed collective honour reported they were more likely to confront prejudice in any form, whether calmly or aggressively. For them, remaining silent felt like allowing an insult to stand.

A stand up to racism protest
Protest: one way to respond to discrimination.
Martin Suker/Shutterstock

In contrast to those who view honour as a collective quality, there are also those who view honour as more of an individual, internalised quality. This can manifest in how people rate the importance of family reputation, and their readiness for retaliation against insults.

People who emphasised family reputation values – concern with maintaining respectability and avoiding shame – said they were more likely to confront discrimination in non-aggressive ways. They also reported being less likely to respond aggressively. Maintaining dignity, for them, meant self-control.

Those who strongly endorsed retaliation values – belief that failing to respond to insults signals weakness and dishonour – were more likely to confront prejudice aggressively and less likely to use calmer strategies. In other words, honour does not push people uniformly toward violence or to remain silent. Different honour codes lead to very different ways of speaking up.

Interestingly, broader structural factors – such as financial insecurity or distrust in the police and authorities – played a smaller role than expected in how people responded to discrimination. What mattered most was how often people actually experienced discrimination.

Repeated exposure to discrimination increased the likelihood of aggressive confrontation, especially among those who endorsed retaliation norms. This suggests that speaking up is shaped less by abstract perceptions of injustice and more by life experiences.

Why this matters

Political rhetoric around immigration has contributed to a broader climate of hostility and suspicion of some communities. This is evident in the waves of anti-immigration protests the UK has seen in recent years, and their effects on communities. According to Home Office data released in late 2025, police recorded 10,097 racially or religiously aggravated offences in August 2024 alone.

Against this backdrop, those who speak up – whether in calm advocacy or in heated confrontation – risk being judged against a narrow standard of “civility” that disregards the personal and cultural experiences that shape their responses.

For some people, walking away preserves dignity. For others, it undermines it. This does not mean all confrontational responses are equally effective or desirable.

But it does mean that judging these responses without understanding their cultural roots risks blaming individuals for navigating systems that were never designed to protect them. If we want more constructive conversations about discrimination and how we speak up against it, our research can offer a place to start.

The Conversation

Mete Sefa Uysal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why some people speak up against prejudice, while others do not – https://theconversation.com/why-some-people-speak-up-against-prejudice-while-others-do-not-272867

Scientists once thought the brain couldn’t be changed. Now we know different

Source: The Conversation – UK – By Laura Elin Pigott, Senior Lecturer in Neurosciences and Neurorehabilitation, Course Leader in the College of Health and Life Sciences, London South Bank University

Master1305/Shutterstock

For much of the 20th century, scientists believed that the adult human brain was largely fixed. According to this view, the brain developed during childhood, settled into a stable form in early adulthood, and then resisted meaningful change for the rest of life.

Today, the concept of neuroplasticity, the brain’s ability to change its structure and function in response to experience, is a central principle of brain science. The brain can change throughout life, but not without limits, not instantly and not effortlessly.

Neuroplasticity therefore reframes the brain as neither rigid nor infinitely malleable, but as a living system shaped by experience, effort and time.

The roots of neuroplasticity can be traced to the mid-20th century. In 1949, psychologist Donald Hebb proposed that connections between neurons, the brain’s nerve cells, become stronger when they are repeatedly activated together.

This principle later became known as “Hebbian learning”. At the time, Hebb’s idea was considered relevant mainly to childhood development. Adult brains were still thought to be relatively unchangeable.

That assumption has since been overturned. From the late 20th century onward, studies showed that adult brains can reorganise in response to learning, changes in sensory input, or physical injury. Sensory changes include alterations in vision, hearing or touch due to training, loss of input or environmental change.

More recently, advances in brain imaging have allowed researchers to observe these changes directly in living people. These studies show that learning alters patterns of brain activity and connectivity across the lifespan.

Neuroplasticity is now understood not as a rare exception, but as a basic property of the nervous system. It operates continuously, within biological limits shaped by age, genetics, prior experience and overall brain health.

How the brain changes

Neuroplasticity involves changes in how existing brain cells communicate with one another.

When you learn a new skill, specific synapses, the tiny junctions where neurons pass signals to each other, become stronger and more efficient. Neural networks, which are groups of neurons that work together, become better organised. Communication between brain regions involved in that skill improves.

Read more: No, your brain doesn’t suddenly ‘fully develop’ at 25. Here’s what the neuroscience actually shows

At the cellular level, plasticity involves changes in synaptic structure, the release of chemical messengers called neurotransmitters, and the sensitivity of receptors that receive those signals. So, it changes how neurons communicate with each other.

In a few areas of the adult brain, particularly the hippocampus, which plays a key role in memory, limited adult neurogenesis, the creation of new neurons, also occurs. Although influenced by factors such as stress, sleep and physical activity, its significance in humans is still debated.

Hand with a blue pen points to the right hippocampus on an MRI scan
The hippocampus is constantly rewiring to store new information.
FocalFinder/Shutterstock

Crucially, neuroplasticity is experience-dependent. The brain changes most reliably in response to repeated, focused and meaningful engagement that requires attention, effort and feedback. Passive exposure to information has far less impact.

What strengthens and weakens plasticity

Over the past decade, research has identified several factors that strongly influence how plastic the brain can be.

1. Practice and challenge are essential.

Repeatedly engaging in tasks that stretch your abilities leads to changes in both brain activity and brain structure, even in older adults.

2. Physical exercise is one of the most powerful enhancers of plasticity.

Aerobic activity increases levels of brain-derived neurotrophic factor, or BDNF, which supports neuron survival and strengthens synaptic connections. Regular exercise is consistently linked to better learning, memory and overall brain health.

3. Sleep plays a critical role in consolidating brain changes.

During deep sleep, important neural connections are strengthened while less useful ones are weakened, a process that supports learning and emotional regulation.

Woman asleep in bed
Sleep is essential for brain health.
Prostock-studio/Shutterstock

4. Chronic stress can seriously impair plasticity.

Long-term exposure to stress hormones is associated with reduced complexity of neural connections in memory-related brain regions and heightened sensitivity in threat-processing systems, undermining learning and flexibility.

When plasticity works against us

One of the most important and often misunderstood aspects of neuroplasticity is that it is value-neutral. The brain adapts to repeated experiences whether those experiences are helpful or harmful.

This helps explain why conditions such as chronic pain, anxiety disorders and addiction can become self-reinforcing. Through repeated patterns of thought, feeling or behaviour, the brain learns responses that are unhelpful but deeply ingrained, a process known as maladaptive plasticity.

The hopeful side of this insight is that plasticity can also be deliberately directed toward recovery. Psychological therapies such as cognitive behavioural therapy are associated with measurable changes in brain activity and connectivity, particularly in networks involved in emotional regulation. Rehabilitation after stroke or brain injury relies on the same principles, using repeated, task-specific practice to compensate for damaged areas.

Clearing up common myths

Perhaps the most persistent myth is that neuroplasticity means the brain can change rapidly or without limits. In reality, meaningful neural change takes time, repetition and sustained effort, within biological constraints.

Another misconception is that plasticity disappears after childhood. While children’s brains are especially flexible, strong evidence shows that plasticity continues throughout adulthood and into older age.

Claims that brief brain-training programmes dramatically increase intelligence or prevent dementia are not supported by solid scientific evidence. That is because meaningful brain change happens most when learning is challenging, varied and connected to real life.

Read more: Brain-training games remain unproven, but research shows what sorts of activities do benefit cognitive functioning

Activities such as learning a language, exercising regularly, playing a musical instrument, or engaging in complex social interaction are far more effective at strengthening the brain than tapping through app-based puzzles.

In short, brain-training games can be fun and mildly useful, but they train you to play games well, not to think better overall.

Our understanding of neuroplasticity has come a long way since Hebb’s early ideas. What was once thought impossible is now accepted scientific fact. Embracing neuroplasticity means recognising that brains can change, while remaining realistic about how slowly and selectively that change occurs.

More than a century ago, Spanish neuroscientist Santiago Ramón y Cajal wrote that every person can become the sculptor of their own brain. Modern science shows that this sculpting never truly ends. It simply requires effort, patience and persistence.

The Conversation

Siobhan McLernon receives funding from the Burdett Trust for Nursing.

Laura Elin Pigott does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Scientists once thought the brain couldn’t be changed. Now we know different – https://theconversation.com/scientists-once-thought-the-brain-couldnt-be-changed-now-we-know-different-271252

Octopus numbers exploded around the UK’s south-west coast in 2025 – a new report explores this rare phenomenon

Source: The Conversation – UK – By Bryce Stewart, Associate Professor, Marine Ecology and Fisheries Biology, University of Plymouth; Marine Biological Association

Cold spray whipped off the ropes as a diesel engine throbbed in the background. One by one, the shellfish pots came over the side of the fishing boat empty, save for the occasional remnants of crab and lobster claws and carapaces. Something strange was going on.

Then the culprit revealed itself – a squirming orange body surrounded by a writhing tangle of tentacles. A few minutes later, three more of these denizens of the deep came up in a single pot, and then, incredibly, a final pot rose from the water completely rammed full of them, more than a dozen together in a squirming mass.

This was a familiar scene off the south coasts of Devon and Cornwall early last year, as a bloom of the common octopus (Octopus vulgaris) emerged, the first time anything like this had been seen for 75 years. In fact, commercial catches of common octopus in 2025 were almost 65 times higher than the recent annual average. A new report now sheds light on these blooms: their history, causes and consequences.

The common octopus, despite the name, is not normally common in British waters. Instead, it favours the warmer climes of southern Europe, the Mediterranean and north Africa. But, occasionally, such as in 1900, 1950 and now 2025, numbers explode off the south-west coast of England, changing marine food chains and disrupting the local fishing industry.

Common octopuses take the ultimate “live fast, die young” approach to life. Despite the large size they can attain, they generally live for less than two years, with females dying after their eggs hatch. The males also die after breeding. This means octopus populations are highly sensitive to changes in environmental conditions.

Octopus blooms have previously been rare in the UK, but emerging evidence from long-term marine monitoring of the western Channel suggests that these episodes coincide with sustained periods of unusual warmth in both the ocean and atmosphere.

These “marine heatwaves” can stimulate rapid population growth, whether the octopus are locally established or newly arrived from the south. These warm conditions are often accompanied by unusually low salinity in coastal waters, a signal that points to fresher water entering the region. While salinity itself is unlikely to drive the outbreaks, it serves as a valuable tracer of the water’s origin.

The fresher conditions may stem from high river flow from major French Atlantic rivers such as the Loire, or from prolonged easterly winds over the Channel during the cooler months (October to March). These processes could help transport octopus larvae across the Channel from northern France and the Channel Islands.

Taken together, the combination of warmth, altered circulation and low-salinity signatures suggests that climate-driven shifts in ocean and atmospheric dynamics underpin these outbreaks.

From crisis to opportunity?

Those early scenes of octopus consuming catches in crab and lobster pots continued as 2025 rolled on. But the octopuses didn’t stop at crustaceans. Piles of empty scallop shells were found in many pots, sometimes with remnants of flesh still attached.

Scallops don’t normally go into crab and lobster pots (unless the pots have lights in them, which these ones didn’t), so the only explanation is that octopus were actively putting scallops in pots to stock their larder, consuming them at leisure later.

However, fishers are nothing if not adaptable. They soon realised that there was a lucrative export market for octopus and began targeting them. One boat fishing out of Newlyn in Cornwall brought home over 20 tonnes of octopus, worth £142,000, from just three days of fishing.

Between £6.7 million and £9.4 million worth of common octopus was landed on the south coast of the UK from January to August 2025. However, not all fishers benefited, and for most boats, octopus catches suddenly dropped off in August. With other shellfish fisheries also declining dramatically last year – lobsters by 30% and brown crabs and scallops by over 50% – many fishers worry about a future in which there is nothing left to catch.

So, what does the future hold? Given the link with climate change, the extensive reports of octopus breeding and a recent appearance of juvenile octopuses in UK waters, the continued presence of the common octopus seems likely.

If a bloom the size of last year’s occurs again soon, future fisheries should be guided by sustainable and ethical principles that help diversify opportunities for fishing fleets, while leaving enough octopus in the sea to be enjoyed by the hundreds of divers and snorkellers who loved watching these amazing creatures last year.

The Conversation

Bryce Stewart receives funding from DEFRA, Plymouth City Council, Devon County Council and the Crown Estate (OWEC Programme)

Emma Sheehan receives funding from DEFRA and Natural England.

Tim Smyth receives funding from the Natural Environment Research Council through their National Capability funded project AtlantiS NE/Y005589/1

ref. Octopus numbers exploded around the UK’s south-west coast in 2025 – a new report explores this rare phenomenon – https://theconversation.com/octopus-numbers-exploded-around-the-uks-south-west-coast-in-2025-a-new-report-explores-this-rare-phenomenon-269723

How interwar fiction made sense of an increasingly noisy world

Source: The Conversation – UK – By Anna Snaith, Professor of Twentieth-Century Literature, King’s College London

The logo of the Anti-Noise League. Quiet/Noise Abatement League Catalogue

Noise was first considered a public health issue in interwar Britain, a period the author and essayist Aldous Huxley called the “age of noise”. In this era, the proliferation of mechanical sounds, particularly the rumble of road and air traffic, the blare of loudspeakers and the rising decibels of industry, caused anxiety about the health of the nation’s minds and bodies.

Interwar writers, such as Virginia Woolf, George Orwell and Jean Rhys, tuned in to the din. Their fiction is not just an archive of past sound-worlds but also the place where sound became noise and vice versa. As sound historian James Mansell has argued: “Noise was not just representative of the modern; it was modernity manifested in audible form.”

We now have more data and scientific evidence on the effects of environmental noise. The World Health Organization recognises noise, particularly from road, rail and air traffic, as one of the top environmental health hazards, second only to air pollution.

In the interwar period, without comprehensive data on noise and health, early campaigners relied on narrative. They created a particular story about noise and nerves to galvanise the public into keeping it down.

A comic strip mocking the Anti-Noise League
A comic strip mocking the Anti-Noise League by Ernie Bushmiller (1941).
Swann

In 1933, the first significant UK noise abatement organisation, the Anti-Noise League, was founded by the physician Thomas Horder. The league consisted of doctors, psychologists, physicists, engineers and acousticians (specialists in the science of sound) who lobbied government for a legislative framework around noise.

They sought to educate the public on the dangers of needless noise through exhibitions, publications and their magazine, Quiet.

Their campaigns drew attention to the very real health effects of environmental noise. But they also saw noise as waste: something to be eliminated in the pursuit of a maximally productive and efficient citizenry.

They drew on ideas of Britishness associated with what they called “acoustic civilisation” (or teaching the nation to be quieter) and “intelligent” behaviour to enact a programme of noise reduction as sonic nationalism.

Noise in modernist fiction

This interwar preoccupation with unwanted sound is also a sonic legacy of the first world war. Exposure to the deafening din of artillery, exploding shells and grenades caused catastrophic auditory injury. So much so that the din became associated with loss of life and the devastating effects of shell shock.

The extreme noise of warfare also pushed doctors and psychologists to study how sound affects health. This work continued into the 1930s through government-backed bodies such as the Industrial Health Research Board. As a result, people in the interwar years became much more aware that the everyday sounds of machines and traffic could also be harmful.

But it wasn’t only doctors and acousticians who wrote about noise. Authors such as Rebecca West and H.G. Wells worked with the Anti-Noise League, while others, like Winifred Holtby, publicly refuted their findings. But more broadly, in the pages of interwar fiction, modernist writers engaged deeply with the shifting noisescapes around them.

The unprecedented noise levels of the wars, together with the proliferation of sounds in urban and domestic spaces and the auditory training required by new forms of sound technology, caused an attentiveness to sound and hearing. This was harnessed both metaphorically and structurally in the period’s literature.

Modernist writers such as Woolf, Orwell and Rhys listened intently to machines and the sound worlds they created. Once we start to listen for it, noise is everywhere in fiction of the period.

Proletarian factory novels of the 1930s such as Walter Greenwood’s Love on the Dole (1933) or John Sommerfield’s May Day (1936) draw new attention to toxic and harmful high decibel industrial environments.

Interwar novels such as Virginia Woolf’s Mrs Dalloway (1925) or George Orwell’s Coming Up for Air (1939), each with first world war veteran protagonists, register urban noise via the auditory effects of the conflict zone, or a kind of communal noise sensitivity, as well as through the healing or connective properties of sound. In Dorothy Sayers’ The Nine Tailors (1934), a character is (spoiler alert) killed by the sound of a church bell.

Rhys’ short story Let Them Call It Jazz (1962) is set in London in the years following the second world war. It depicts the hostile environment faced by immigrants, such as those arriving from the Caribbean on HMT Empire Windrush, as protagonist Selina Davis is imprisoned for noise disturbance. She has been singing Caribbean folk songs in a “genteel” suburban neighbourhood.

The tale is one of cultural identity, the resistant power of sound, and the politicisation of noise. Black music is a form of sonic resistance; noise is both a silencing strategy for bodies and practices deemed “aberrant” and a resistant practice that exceeds and disrupts exclusionary codes of value and hierarchy.

These works, and many more, demonstrate that modernist writers, if we listen carefully, are theorists of sound who responded in complex ways to their shifting soundscapes. They counter the association of noise with negative affect or “unwanted” excess, by finding aesthetic and political possibility in noise.




The Conversation

Anna Snaith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How interwar fiction made sense of an increasingly noisy world – https://theconversation.com/how-interwar-fiction-made-sense-of-an-increasingly-noisy-world-272846

US foreign policy has taken a radical turn in Trump’s first year back in office

Source: The Conversation – UK – By David Hastings Dunn, Professor of International Politics in the Department of Political Science and International Studies, University of Birmingham

One year into Donald Trump’s second term, it is clear that US foreign policy has taken a radical turn from anything seen in the previous 80 years. After the second world war, a system of treaties and alliances saw the US commit to upholding international institutions, rules and laws, and to promoting global prosperity through free trade and market access.

But these things are all antithetical to Trump’s foreign policy vision. Trump appears committed to the abandonment of this longstanding foreign policy stance and to the abdication of his country’s leadership role at the top of the international system. In fact, he seems intent on destroying many of the tenets and institutions of this system and replacing it with an altogether different vision of international relations.

With a background in real estate, Trump sees the world through a transactional lens. He appears to see alliances as a financial burden and a source of security vulnerability, and considers an open trading system to be unfair to the US as the world’s largest market. Trump also seems to find dealing with democracies more burdensome than bargaining with autocratic rulers. For him, the global system of liberal rules and institutions simply acts to prevent the US from using its power to its full advantage.

Trump has always thought this way. Before entering politics, he was a vocal opponent of the 1992 North American Free Trade Agreement, the World Trade Organization, the Trans-Pacific Partnership and US military intervention in Iraq and Afghanistan. What has changed is his ability to act. In Trump’s first term, establishment advisers largely reined in his more aggressive instincts. Now he feels unconstrained.

Trump’s world view

In contrast to the post-war project, Trump’s world view is a zero-sum game that sees trade, wealth and security as commodities to be hoarded rather than shared around. He has demonstrated this by imposing sweeping trade tariffs and threatening not to defend Washington’s Nato allies unless they pay more for their own defence.

This is a fundamental challenge for leaders elsewhere in the west. Central to the notion of the west is that it constitutes a common identity of shared material interest, liberal values and – to a greater or lesser extent – shared cultural and ancestral heritage.

This shared western sense of self was central to the credibility of nuclear deterrence at the heart of its cold war strategy and the reason for Nato’s establishment. Because the nations that made up the west considered themselves to be as one, the notion that an attack on one was an attack on all was seen as credible.

Trump’s portrayal of western allies as free-riding trade rivals who exploit access to the US market while not paying for their own defence shatters this carefully constructed sense of collective identity and the credibility of the security commitment on which it rests. However, this is of little concern to an administration that shows little interest in defending common interests or alliances.

This is not the only gripe Trump has with Europe. For Trump and his advisers, Europe looks increasingly different from what they see as the defining characteristics of the US. The US they imagine stands in contrast to the liberal republicanism of the nation’s founding fathers and instead draws on the alternative Christian nationalist tradition in American political thought.

In this view, American identity is one that is white and Christian. This partly explains the anti-immigration policies of the Trump administration and, in particular, why its 2025 national security strategy lists the end of “mass migration” as a major policy priority. It is also why Trump and his senior aides are so critical of traditional US allies in Europe.

The national security strategy makes much of the threat of “civilizational erasure” in Europe, arguing that “it is more than plausible that within a few decades at the latest, certain Nato members will become majority non-European”. By this, it appears the Trump administration means Europe will have become majority non-white and non-Christian.

This stance was reflected by the US vice-president, J.D. Vance, in February 2025 when he told the Munich security conference that he was more worried about “threats from within Europe” than those posed by Russia or China. For Vance and his allies, support for right-wing parties that oppose immigration and seek to limit the power and influence of the EU is central to the administration’s foreign policy.

Trump’s foreign policy poses considerable challenges for those who want to protect a world order that was built by a very different US decades ago. This conflict of ideas appears to have come to a head at the recent World Economic Forum in Davos, where the leaders of many countries restated their commitment to that world order and signalled their willingness to defend it.

In doing so they have demonstrated that in dealing with a transactional Trump, sometimes their best response is to show him that there are certain red lines that cannot be crossed.

The Conversation

David Hastings Dunn has previously received funding from the ESRC, the Gerda Henkel Foundation, the Open Democracy Foundation and has previously been both a NATO and a Fulbright Fellow.

ref. US foreign policy has taken a radical turn in Trump’s first year back in office – https://theconversation.com/us-foreign-policy-has-taken-a-radical-turn-in-trumps-first-year-back-in-office-273917

Identifying dinosaurs from their footprints is difficult – but AI can help

Source: The Conversation – UK – By Paige dePolo, Lecturer in Vertebrate Biology, Research Centre in Evolutionary Anthropology and Palaeoecology, Liverpool John Moores University

When you hear the word “dinosaur”, the first thing that might spring to mind is a hulking skeleton like Sue the T rex in Chicago’s Field Museum or Sophie the Stegosaurus at the Natural History Museum in London. Dinosaur skeletons give us striking evidence of what these ancient animals looked like, from the plates and spikes on stegosaurs like Sophie to the long-necked, airplane-sized bodies of titanosaurs.

However, despite their iconic status as museum centrepieces, skeletons are not the most common type of dinosaur fossil known. That prize goes to dinosaur footprints.

The abundance of dinosaur footprints is intuitive. Each dinosaur could only leave one skeleton – but on any single day of its life, it could make thousands of footprints. So, even if only a tiny fraction were fossilised, we could expect to see many more of them in the fossil record.

Dinosaur footprints form in environments where the ground is soft enough to leave an impression, but still cohesive enough so that the shape of the track does not collapse. We find dinosaur footprints in Mesozoic (252-66 million years old) sedimentary rocks all around the world.

Dinosaurs left their mark along coastlines in the UK, ranging from sauropod tracks on the Isle of Skye to Iguanodon tracks on the Isle of Wight. Prosauropod tracks adorn Italian mountainsides. In Bolivia, the largest dinosaur tracksite currently known consists of upwards of 16,000 theropod tracks plus a variety of swimming tracks.

Although dinosaur footprints are abundant, they are challenging fossils to study and identify. Our team, led by Gregor Hartmann at Helmholtz-Zentrum Berlin, has combined AI techniques from photon science with palaeontology in a novel attempt to address this issue.

The footprint puzzle

Dinosaur footprints are not perfect snapshots of the feet that made them. They reflect the shape of the foot, how the dinosaur was moving, and how soft or hard the ground was at the time.

Millions and millions of years of geological history have passed during which the original surface on which the dinosaur walked was buried, transformed into rock, and exposed again. Working on dinosaur footprints necessitates taking all of those factors into account when studying their shapes.

Another challenge arises when trying to determine which dinosaur made which footprints. In particular, tridactyl (three-toed) dinosaur footprints are very tricky to identify, because a wide variety of different dinosaurs have three functional toes on their hind feet. Dinosaurs as different as Megalosaurus and Iguanodon, Edmontosaurus and Albertosaurus, and Tyrannosaurus and Hadrosaurus all have three toes. These dinosaurs fall into two main groups: meat-eating theropods and plant-eating ornithopods.

When we take into account all of the different factors that contribute to the shape of a dinosaur footprint, it becomes extremely challenging to determine whether some three-toed footprints come from theropod or ornithopod dinosaurs.

The DinoTracker app explained. Video by Tone Blakesley.

An unlikely collaboration

Every fossil is a miracle. It takes the perfect combination of circumstances for a fossil to form, be preserved through millions of years, and be found and recognised by human eyes. Our collaboration arose in a similarly serendipitous way.

A physicist and data scientist, Hartmann was reading The Rise and Fall of the Dinosaurs to his young son Julius, who was very interested in dinosaurs. As he read, Hartmann wondered whether the AI methods he was using in photon science could be applied to palaeontological questions. So he reached out to the book’s author, Steve Brusatte.

This led to the idea of developing an unsupervised neural network for studying dinosaur footprints. We built our training data from around 2,000 real footprints, then added millions of augmented variations to that initial dataset through strategies like displacing the edges of the footprints by a few pixels. Optimising the network took us over a year.
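
The team’s published pipeline isn’t detailed here, but the edge-displacement idea is easy to sketch. The minimal Python example below is illustrative only: the function name, toy coordinates and shift size are assumptions, not the team’s code. It jitters each point of a traced outline by a few pixels to produce a new variant.

```python
import numpy as np

def augment_outline(outline, max_shift=3, rng=None):
    """Return a copy of a footprint outline with each edge point
    displaced by up to `max_shift` pixels in x and y.
    `outline` is an (N, 2) array of pixel coordinates tracing the
    footprint edge. Illustrative sketch, not the published pipeline."""
    rng = rng or np.random.default_rng()
    jitter = rng.integers(-max_shift, max_shift + 1, size=outline.shape)
    return outline + jitter

# Example: generate many variants of one traced outline (toy data).
outline = np.array([[120, 40], [118, 52], [115, 63], [110, 75]])
variants = [augment_outline(outline) for _ in range(1000)]
```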

The key step forward for this network was its unsupervised nature. Only the outlines of the footprints were input, with no additional information about what dinosaurs might have made them. Then the network was allowed to independently discover how the different shapes varied.

This approach meant we avoided human bias in footprint identifications at the training stage. In the end, our model identified eight core axes of footprint variation, including digit spread and heel position.
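
The article doesn’t specify the network’s architecture, but one standard way for an unsupervised model to discover a small set of variation axes is an autoencoder whose bottleneck has exactly that many dimensions. The sketch below is a generic toy model in PyTorch, not the published network, and the layer sizes are assumptions: it compresses each flattened outline into eight latent values, which play the role of the eight axes.

```python
import torch
from torch import nn

class OutlineAutoencoder(nn.Module):
    """Toy autoencoder: compresses a flattened footprint outline
    (e.g. 64 edge points = 128 numbers) into 8 latent values and
    reconstructs it. Generic sketch, not the published architecture."""
    def __init__(self, n_inputs=128, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_inputs),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = OutlineAutoencoder()
loss_fn = nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a batch of flattened outlines (random toy data).
batch = torch.randn(32, 128)
loss = loss_fn(model(batch), batch)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```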

When we compared the footprint groupings with expert classifications afterwards, we found 80-93% agreement overall. Thus, we could be reasonably confident the model provides a data-driven way to test the identity of particular footprints. Our findings have just been published in the scientific journal PNAS.
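
The paper’s exact scoring method isn’t given here, but one common convention for comparing unsupervised groupings against expert labels is to assign each model group its majority expert label and count how many footprints match, as in this hypothetical sketch:

```python
from collections import Counter

def cluster_agreement(model_groups, expert_labels):
    """Assign each model group its majority expert label, then
    return the fraction of footprints whose label matches.
    A common convention; not necessarily the paper's metric."""
    majority = {}
    for group in set(model_groups):
        labels = [l for g, l in zip(model_groups, expert_labels) if g == group]
        majority[group] = Counter(labels).most_common(1)[0][0]
    hits = sum(majority[g] == l for g, l in zip(model_groups, expert_labels))
    return hits / len(expert_labels)

# Toy example: 5 footprints, 2 model groups, expert classifications.
print(cluster_agreement(
    [0, 0, 1, 1, 1],
    ["theropod", "theropod", "ornithopod", "ornithopod", "theropod"],
))  # -> 0.8
```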

However, we wanted to make the network accessible to everyone, not just scientific specialists. That desire gave rise to DinoTracker, a free public app that lets anyone upload a picture of a dinosaur footprint, sketch its outline, and get an instant analysis of which known footprints it most resembles. The app can be downloaded onto a desktop from GitHub with the support of this installation guide.

This app certainly isn’t the end of the story when it comes to puzzling over the mysteries of dinosaur footprints. It’s a useful research resource for figuring out what tracks any footprint is most similar to in terms of shape, and what features are driving that similarity.

More excitingly, it’s a tool for curious children like Julius to take outside when they are exploring. Anyone can snap a photo, draw an outline and compare their discoveries to other dinosaur footprints.


This article includes a reference to a book included for editorial reasons, and a link to bookshop.org. If you click that link and go on to buy something from bookshop.org, The Conversation UK may earn a commission.

The Conversation

Paige dePolo does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Identifying dinosaurs from their footprints is difficult – but AI can help – https://theconversation.com/identifying-dinosaurs-from-their-footprints-is-difficult-but-ai-can-help-274386