What ‘If I Had Legs I’d Kick You’ tells us about mothering and thankless sacrifice

Source: The Conversation – Canada – By Billie Anderson, Lecturer, Disability Studies, King’s University College, Western University

Rose Byrne plays a mother overwhelmed with caring for her ailing young daughter in ‘If I Had Legs I’d Kick You.’ (VVS Films/A24)

Care work structures much of everyday life, yet it often remains invisible. It’s folded into assumptions about love, responsibility and familial duty rather than recognized as labour.

Nowhere is this more apparent than in caregiving, particularly when it’s performed by mothers. They’re routinely expected to absorb care work quietly, competently and without visible cost, even when that work unfolds under conditions of chronic illness, disability or grief.

Film and television often reinforce this expectation by presenting caregiving as a moral achievement rather than a social obligation. These representations frame maternal endurance as proof of love, virtue and emotional strength.

In recent cinema, motherhood shaped by loss or threat has emerged as a central narrative concern, from Hamnet to The Testament of Ann Lee to Sinners. Each film, in its own way, pushes back against the longstanding cultural fantasy of motherhood as a site of moral purity, endurance and heroism.

Rather than asking audiences to admire maternal sacrifice, these films linger on grief, struggle and ambivalence, exposing how expectations of care and emotional stability are unevenly distributed and disproportionately borne by women.

The trailer for ‘If I Had Legs I’d Kick You.’ (A24)

Care as a source of grief and harm

Mary Bronstein’s 2025 film, If I Had Legs I’d Kick You, belongs to this broader turn, but it pushes the critique further by stripping caregiving of even the residual comforts that often remain.

The film follows Linda (Rose Byrne, nominated for Best Actress at this year’s Academy Awards) as she navigates the daily, grinding reality of caring for her young child as serious illness reshapes every aspect of their lives, from medical appointments and disrupted routines to the emotional toll of uncertainty and constant vigilance.

The film refuses to frame care as redemptive, and instead shows how care, when imagined as an unlimited personal resource, becomes a source of depletion, grief and harm. Linda’s life revolves around her child’s needs — schedules dictated by medical systems, emotional energy consumed by anticipation and fear, and moments of solitude repeatedly interrupted by responsibility — without any suggestion it’s a temporary, chosen or ultimately meaningful situation.

If I Had Legs I’d Kick You is not a story about maternal devotion. It’s a movie about how illness and disability are folded into private family life and how mothers are positioned as the shock absorbers of systems unwilling to provide adequate structural support.

Even when doctors, therapists, institutions and procedures surround her, the film makes clear that the emotional and logistical labour of care ultimately collapses back onto Linda herself.

As in real life, it’s left to the mother to internalize and dismantle shame, to make every consequential decision in the absence of adequate support and then to absorb judgment for each of those decisions. Mothers are asked to account morally for outcomes shaped by systems they cannot control.

‘Privatization of care’

Within disability studies and feminist ethics, care has long been understood as a social and political arrangement shaped by power, gender and access to resources.

Political theorist Joan Tronto argues that care is systematically devalued when it’s treated as private, feminized and morally natural — something women are presumed to provide instinctively — rather than as labour that must be collectively organized, supported and fairly distributed across society.

If I Had Legs I’d Kick You makes this devaluation visible in the way care is repeatedly pushed out of institutions and back onto Linda. In several scenes set within medical environments, professionals deliver information, outline procedures or ask Linda to make decisions, only to disappear once those moments conclude. This leaves her alone to manage the emotional aftermath, the logistical follow-through and the fear those decisions produce.

A man with red hair and glasses looks pained as a woman lies on a couch speaking.
Conan O’Brien plays Byrne’s vaguely irritated therapist in the film.
(VVS Films)

The systems of care remain present only as brief interventions, while the ongoing labour — monitoring her child’s symptoms, anticipating emergencies, soothing anxiety, reorganizing daily life — is treated as a natural extension of motherhood rather than as work that might require sustained support.

These scenes embody what Tronto describes as the privatization of care: institutions retain authority and expertise, but responsibility is quietly transferred to the individual caregiver, who is expected to absorb the costs without complaint.

Philosopher Eva Kittay extends this critique by focusing on dependency and the economic structures that rely on care while refusing to sustain those who perform it.

Kittay emphasizes that modern social and economic systems depend on vast amounts of unpaid or underpaid care work — much of it performed by women — while offering little material, emotional or social support in return.

Managing alone

In scenes where Linda juggles medical co-ordination alongside the ordinary and extraordinary demands of daily life, the film makes visible how care consumes every register of her attention.

She fields calls from doctors while continuing her own work as a therapist, absorbs the emotional crises of her patients even as she is barely holding herself together, and navigates the instability of temporary housing after her roof collapses, forcing the family into a motel.

A woman walks in the dark holding bottles of wine.
Byrne’s character is forced to relocate to a motel with her ailing child after the roof of her apartment caves in.
(VVS Films)

Her husband is frequently away for work, leaving her to manage both the logistical and emotional labour of care alone, while the film also insists on the banal realities of parenting: her child is sick but her child is also simply a child, needing comfort, discipline, patience and play.

In one moment she is trying to get her own child to eat enough to remove her feeding tube; in another she is unexpectedly left responsible for a stranger’s baby, her capacity for care assumed and exploited without question. Care here is an uninterrupted state of readiness, a demand that stretches across professional, domestic and emotional life without pause.

Popular culture frequently relies on the figure of the “good” mother: selfless, patient and endlessly resilient. Disability narratives often reinforce this ideal by positioning caregiving as proof of moral worth.

Flipping the narrative

By declining to transform suffering into inspiration, If I Had Legs I’d Kick You challenges the expectation that caregiving should make someone better, stronger or more fulfilled. Care here does not ennoble; it depletes. Crucially, this depletion is not framed as personal failure, but as the predictable outcome of systems that rely on mothers to absorb care work without adequate social, economic or emotional support.

Many films use disability as a narrative device, positioning it as a challenge that generates growth or moral clarity in others.

If I Had Legs I’d Kick You, by contrast, represents a decisive step toward undoing this familiar storytelling structure. The child’s illness does not exist to transform the mother or to provide emotional payoff for the audience. Disability is neither a lesson nor a catalyst; it is part of the family’s reality, shaping daily life in ways that are mundane, exhausting and deeply consequential.




Read more:
Women caregivers need more support to manage their responsibilities and well-being


By resisting sentimentality and refusing easy resolution — and by centring a protagonist who is allowed to be abrasive, overwhelmed, selfish and at times difficult to like — If I Had Legs I’d Kick You offers a rare and necessary portrayal of caregiving under conditions of illness and disability.

The film does not promise healing or redemption. It insists on honesty about grief that lingers, care that depletes and the impossible expectations placed on those who are expected to hold everything together.

The Conversation

Billie Anderson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. What ‘If I had Legs I’d Kick You’ tells us about mothering and thankless sacrifice – https://theconversation.com/what-if-i-had-legs-id-kick-you-tells-us-about-mothering-and-thankless-sacrifice-274074

Where are Europe’s oldest people living? What geography tells us about a fragmenting continent

Source: The Conversation – Global Perspectives – By Florian Bonnet, Démographe et économiste, spécialiste des inégalités territoriales, Ined (Institut national d’études démographiques)

For over a century and a half, life expectancy has risen steadily in the wealthiest countries. The 20th century saw spectacular gains in longevity, driven by the retreat of infectious diseases and advances in cardiovascular medicine.

However, for some years now, researchers have been preoccupied by one question: when will this steady progress run out of steam? In several western countries, gains in life expectancy have become so slight that they are practically non-existent.

Some researchers see this as a sign that we are heading toward a ‘biological ceiling’ on human longevity, while others believe there is still room for improvement.

National figures alone cannot settle the question. Behind a country’s average life expectancy lie sharply contrasting regional realities. This is what our study, recently published in Nature Communications, reveals. Analysing data collected between 1992 and 2019, it covers 450 regions of western Europe, home to almost 400 million inhabitants.

A European study on an unprecedented scale

To complete our research project, we collected mortality and demographic data from offices for national statistics across 13 western European countries, including Spain, Denmark, Portugal and Switzerland.

We began by harmonising the original data, a task that proved crucial because regions differ in size and the level of detail in the data varies from country to country.

Then we recalculated the annual gain in life expectancy at birth for each region between 1992 and 2019, an indicator that reflects mortality across all ages. Sophisticated statistical methods allowed us to pick out the main underlying trends, setting aside short-term fluctuations such as the 2003 heatwave or the virulent seasonal flu outbreak of 2014-2015. We chose 2019 as the cut-off date for our analyses because it is still too early to know whether the coronavirus pandemic will have a long-term effect on these trends or whether its impact was limited to 2020-2022.

The results we obtained provide us with an unprecedented panorama of regional longevity trajectories across Europe over an almost 30-year period, from which we draw three findings.

First finding: Human longevity has not hit its limits

The first message to emerge from the study is that the limits of human longevity have still not been reached. If we concentrate on the regions that are life expectancy champions (shown in blue on the chart below), we see no sign of progress decelerating.

Life expectancy in vanguard and lagging regions of western Europe, 1992–2019. The blue line shows the mean life expectancy at birth of regions in the top decile of the distribution; the red line, of regions in the bottom decile. The black line is the average of all 450 regions. Minimum and maximum values are marked with symbols identifying the regions concerned.
Florian Bonnet, provided by the author

These regions continue to gain around two and a half months of life expectancy per year for men, and around one and a half months for women, rates equivalent to those observed in previous decades. In 2019, they included regions of northern Italy, Switzerland and some Spanish provinces.

In France, they included Paris and the neighbouring Hauts-de-Seine and Yvelines (for both men and women), along with Anjou and areas bordering Switzerland (for women only). In 2019, life expectancy there reached 83 for men and 87 for women.

In other words, despite recurrent concerns, nothing presently indicates that lifespan progression has hit a glass ceiling; prolonging life expectancy remains possible. This is a fundamental result that counters sweeping, alarmist statements: there is room for improvement.

Second finding: regional divergence since the mid-2000s

The picture looks bleaker for regions with ‘lagging’ life expectancy, shown in red on the chart. In the 1990s and early 2000s, these regions saw rapid gains in life expectancy. Progress was much faster there than anywhere else, leading to a convergence of regional life expectancy across Europe.

This golden age, combining a fast rise in life expectancy across Europe with a reduction in regional disparities, came to an end around 2005. In the most challenged regions, whether eastern Germany, Wallonia in Belgium or parts of the United Kingdom, life expectancy gains dropped sharply, practically reaching a standstill. For women, no French regions featured among them, but for men they included some departments of Hauts-de-France.

Longevity in Europe is ultimately split between vanguard regions that continue to progress on one side and, on the other, lagging regions where the dynamic is running out of steam or even reversing. This regional divergence contrasts with the catch-up momentum of the 1990s.

Third finding: the decisive role of mortality at ages 55-74

Why such a shift? To understand this spectacular change better, we went beyond life expectancy and analysed how mortality rates have evolved in each age bracket.

The regional divergence can be explained neither by infant mortality (which remains very low) nor by mortality over age 75 (which continues to fall everywhere). It mainly stems from mortality around age 65.

In the 1990s, mortality at these ages dropped rapidly, thanks to access to cardiovascular treatments and changes in risk behaviour. But since the 2000s, this progress has slowed. In some regions, the risk of dying between ages 55 and 74 has even risen in recent years, as the maps below show.

Annual percentage changes in the probability of dying between ages 55 and 74 for men (left) and women (right) in 450 regions across western Europe between 2018 and 2019.
Florian Bonnet, provided by the author

This is particularly true for women living in France’s Mediterranean coastal regions (shown in pale pink), and for most of Germany. These intermediate ages are crucial to the dynamics of life expectancy gains, because a large share of deaths occurs there. Stagnant or rising mortality between ages 55 and 74 is enough to break the overall trend.

Although our study cannot pinpoint the precise causes of these worrying trends, recent literature offers some leads that should be tested scientifically in future. Among them are risk behaviours, particularly smoking, alcohol consumption, poor nutrition and lack of physical exercise, all factors whose effects manifest at these ages.

Moreover, the 2008 economic crisis accentuated regional disparities across Europe. Some regions suffered lasting damage, with the health of their populations compromised, while growth continued in regions with concentrations of highly qualified employment. These factors remind us that longevity is not just a matter of medical advances; it is also shaped by social and economic conditions.

What’s next?

Our study offers a dual message. Yes, it is possible to increase life expectancy: Europe’s regional champions are proof, continuing to post steady gains without any sign of plateauing. But this progress does not reach everyone. For 15 years, part of Europe has been lagging behind, largely due to rising mortality around age 65.

Even today, the future of human longevity seems to depend less on a hypothetical biological ceiling than on our collective ability to reduce gaps in life expectancy. Recent trends suggest that Europe could end up a two-tier continent, with a minority of areas that keep pushing the boundaries of longevity and a majority where gains dwindle.

In fact, the question is not only how far we can extend life expectancy, but which parts of Europe will share in the gains.


For further reading

Our detailed results, region by region, are available in our interactive online application.




The Conversation

The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have declared no affiliations other than their research institution.

ref. Where are Europe’s oldest people living? What geography tells us about a fragmenting continent – https://theconversation.com/where-are-europes-oldest-people-living-what-geography-tells-us-about-a-fragmenting-continent-274550

Valentine’s Day cards too sugary sweet for you? Return to the 19th-century custom of the spicy ‘vinegar valentine’

Source: The Conversation – USA (2) – By Melissa Chim, Scholarly Communications Librarian, Excelsior University

A woman turns down a dapper ‘snake’ in a ‘vinegar valentine’ from the 1870s. Wikimedia Commons

Ahh, Valentine’s Day: the perfect moment to tell your sweetheart how much you love them with a thoughtful card.

But what about people in your life you don’t like so much? Why is there no Hallmark card telling them to get lost?

The Victorians had just the thing: a cruel and mocking version of the traditional Valentine’s Day card. Later dubbed “vinegar valentines” by 21st-century art collectors and dealers, such cards were usually referred to as mock or mocking valentines during the Victorian era.

Such cards were meant to shock, offend and upset their recipients. Not surprisingly, as with real Valentine’s Day cards, senders often chose to remain anonymous.

Vinegar valentines are what we historians like to call ephemera, that is, materials that are usually not meant to last a long time.

It’s hard to imagine a recipient of a vinegar valentine wanting to keep it lovingly in a frame, and many have been lost to time. But luckily, some vinegar valentines have survived and have been preserved in the collections of many historical institutions, such as Brighton and Hove Museums and the New York Public Library.

One jab at obnoxious sales ladies reads:

“As you wait upon the women

With disgust upon your face

The way you snap and bark at them

One would think you owned the place”

There is even a card for the pretentious poet who pretends to make a living with his art:

“Behold this pale little poet

With a finger at forehead to show it

But the way he gets scads

Is by writing soap ads

But he wants nobody to know it!”

The anonymous nature of the vinegar valentine meant that anyone could be an unwitting recipient. Some cards could poke gentle fun, but others could have quite dangerous results.

In 1885, a resident in the U.K. city of Birmingham, William Chance, was charged with the attempted murder of his estranged wife after he received a vinegar valentine from her. He shot her in the neck, and she was sent to the hospital.

‘Pompous, vain and conceited’

But who could be disliked so much that they would receive a vinegar valentine?

The poor, old and ugly were convenient targets. Unmarried men and women might also receive a vicious rejection from potential partners.

A Feb. 9, 1877, article from the Newcastle Courant notes that “it is the pompous, the vain and conceited, the pretentious and ostentatious who are generally selected as butts for valentine wit.”

Sending such a valentine was a way for ordinary people to enforce social norms disguised as a joke. It was also a way to feel powerful over an already vulnerable person, even if the sender was vulnerable themselves.

A caricature of a woman walking up a path.
Vinegar valentine sheet titled ‘You are on the Road to Destruction.’
Wikimedia Commons

Vinegar valentines emerged as a sour offshoot of the cultural ascendancy of Valentine’s Day itself. While rooted in an ancient Roman fertility ceremony, the day was turned into a celebration of love by the Victorians.

The first Valentine’s Day cards in the early 1800s were often made by hand. With the rise of industrialization, by the 1840s and 1850s most cards were produced in factories. These regular Valentine’s Day cards were often decorated with lace and romantic images.

An industry of insults

By the mid-1800s, both Britain and the United States entered into what one historian calls “Valentine’s mania.”

The earliest vinegar valentines were sheets of paper folded like a letter. And to add insult to injury, before the availability of prepaid postage, the recipient had to pay to receive their letter.

Many printers offered vinegar valentines alongside the more traditionally positive and ornate cards. Even the firm Raphael Tuck & Sons, “Publishers to Their Majesties the King and Queen of England,” joined the vinegar valentine craze.

Vinegar valentines made their way across the pond to the United States in the mid-1800s. Some American printers made their own vinegar valentines; others, such as A.S. Jordan, imported them from Britain.

During the American Civil War, these cards became a medium to express anger and frustration. If you supported the Union, you could send the following message to an unlucky secessionist from the South:

“You are the man who chuckles when the news

Comes o’er the wires and tells of sad disaster,

Pirates on sea succeeding-burning ships and crews,

Rebels on land marauding, thicker, aye, and faster

You are the two faced villain, though not very bold,

Who would barter your country for might or for gold.”

Votes and valentines

As vinegar valentines continued to be produced throughout the early 1900s, a new target became very popular – the suffragette.

Women fighting for the right to vote were seen by their detractors as unfeminine, and vinegar valentines were a cheap and convenient medium to enforce gender roles. In such cards, suffragettes were usually depicted as ugly spinsters or abusive, lazy wives. One card warns, “A vote from me you will not get, I don’t want a preaching suffragette.” Similarly, another card says:

“You may think it fun poor Cupid to snub,

With the hand of a Suffragette.

But he’s cunning and smart, aye, there’s the rub,

Revenge is the trap he will set.”

A caricature of a drunk man clinging to a lamppost.
A valentine for one drunk on love?
Wikimedia Commons

There were even cards made for anti-suffragist women looking to secure a husband. One card plaintively proclaims, “In these wild days of suffragette drays, I’m sure you’d ne’er overlook a girl who can’t be militant, but simply loves to cook.”

There were also pro-suffrage Valentine’s Day cards. One card defiantly asks, “And you think you can keep women silent politically? It can’t be did!”

Cupid as a troll

Vinegar valentines continued to be popular through the Golden Age of picture postcards in the early 1900s. They declined in popularity after World War I. This may be due to a decline in card giving overall, or a cultural shift away from “lowbrow” humor. But they never fully went away.

The spirit of the vinegar valentine saw a second revival in the 1950s with the rise of the comic postcard.

And the effects of vinegar valentines can still be seen, and felt, today. Anonymous internet trolls keep up the sniping spirit so prevalent in the Victorian era. Today’s vinegar valentines are extremely online. They are just as spiteful, but the difference is they are emphatically not restricted to one particular day in February.

The Conversation

Melissa Chim does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Valentine’s Day cards too sugary sweet for you? Return to the 19th-century custom of the spicy ‘vinegar valentine’ – https://theconversation.com/valentines-day-cards-too-sugary-sweet-for-you-return-to-the-19th-century-custom-of-the-spicy-vinegar-valentine-273995

Children’s views are rarely sought by researchers: we found a way to do it

Source: The Conversation – Africa – By Deborah Levison, Distinguished University Teaching Professor, Hubert H. Humphrey School of Public Affairs, University of Minnesota

Adults think we know what is best for children. We have responsibility for them – feeding them, clothing them, educating them, protecting them, loving them – but we also assume rights over them, and on their behalf. Adults make rules (including laws and policies) about what children can and cannot do. We expect children to behave according to our rules.

It’s also the case that when researchers are trying to better understand children’s needs and well-being, we usually do not ask the children themselves. Instead, we ask their parents or adult relatives, or their teachers, for evaluations.

There are good reasons why survey teams do not talk to children, even older children who have a strong understanding of questions, starting at about ages 10-12. Children are considered vulnerable because they are dependent on the adults in their lives. If an adult heard a child talking to a researcher, perhaps saying something the adult did not like, the child could be punished.

Alternatively, the child might not be honest if others were listening. Survey interviews tend to be conducted in places where there are other adults who are interested and listening. Privacy may be impossible. And even if it were possible, who would let their young daughter talk alone to a stranger?

Our recent research has sought to overcome these barriers to better understanding of children’s authentic perspectives. We have studied the work and schooling of children in low-income countries – such as Tanzania – and looked to develop research methodologies appropriate for children and youth around the world, testing the approach in Tanzania, Nepal and Brazil.

Two findings stand out. First, there is much to learn from children and the choices they make. Second, innovative survey methods – such as our use of cartoon stories – have potential to survey child-respondents in large household surveys. Researchers and policy makers could learn directly from children and rely less on adult proxy respondents, resulting in more effective policies and programmes.

Children’s views about chores

While using proxy respondents is appropriate for very young children or for questions likely beyond children’s knowledge, it is less clear that it is better for older children (ages 10-17) and topics within their experience.

Several arguments can be made that children could provide better or equally valid information on their activities than proxy respondents, as Levison and collaborators – economist Deborah S. DeGraff and demographer Esther Dungumaro – explored in Tanzania.

Parallel questions were asked of children aged 10-17 and proxy respondents about those children. We were interested in environmental chores: fetching water and collecting firewood for the family’s use.

We asked the mothers survey questions about their children, then we asked the children and adolescents some of the same questions. Of course, ethics rules required that we get permission (“consent”) from mothers before talking to children, and we also asked permission from children (“assent”) to engage with them. When a field researcher interviewed a child, the pair sat nearby, often under a tree, where adults could see them but not hear them.

The aim was to find out whether older children could provide better or equally valid information about some of the chores they did, as compared to information from their mothers.

When mothers and children were asked about the time that children spent fetching water and collecting firewood, some differences emerged. The biggest differences were seen when water or wood were scarce, when mothers had many young children, and when mothers had little education.

Some large differences may indicate that the amount of work done by children is highly underestimated by the adults it benefits. An important earlier study in Zimbabwe that used different ways of studying children’s work, including following children around, showed this pattern. We argued the case for collecting data directly from children who are developmentally able to understand survey questions, starting from about ages 10-12.

Given these differences in the time spent on chores as reported by mothers and children in the study above, researchers must be thoughtful about who is reporting information if they want to collect and report on accurate data.

Cartoon stories

Policy makers sometimes pay more attention to information from big surveys that ask questions of thousands of households and adults.

In our joint research, we wondered if there were ways to include children as survey respondents, rather than relying only on what adults said about them. Older children and adolescents do have opinions, and sometimes they are not what adults might expect. Why not learn directly from them?

Based on previous studies, we identified topics that could be difficult and upsetting for young people in Tanzania, where learning from kids could give researchers a different perspective than asking adults. In order to understand the perspectives of children, we developed short cartoon stories that children watched on tablet computers. Vignettes have been growing in popularity as a research tool in qualitative and quantitative methods, and research has validated the method when respondents are children and adolescents.

We sought to overcome the barriers in these ways:

  • The cartoons included still images and animated video clips that were designed to avoid cultural, ethnic or wealth indicators such as hairstyles, clothing, or facial features.

  • To be sensitive to privacy, children listened to the story being narrated in Swahili through headsets.

  • Because the stories were watched over tablets with headphones, nearby listeners would not have the context for the story even if they overheard anything.

One story was about a student who is running late to school because of morning domestic chores.

Upon arriving, the boy or girl (matched to the sex of the interviewed child) is punished by the teacher. The video shows several possible but imperfect things the cartoon child could do, such as getting up earlier or skipping school.

Child respondents were then asked to give their opinion on different options, pointing to smiley or sad/angry faces, then answering other questions about how the challenge could be resolved. This allowed us to capture child perspectives quantitatively without directly speaking about the topic out loud or asking if children had similar experiences. We aimed to reduce their vulnerability to punishment or embarrassment, especially on taboo or sensitive subjects.
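One way face-pointing responses like these could be coded for quantitative analysis is sketched below. The three-point scale and the example answers are hypothetical, not the study’s actual codebook:

```python
from collections import Counter
from statistics import mean

# Hypothetical 3-point coding: which face the child pointed to.
# The scale values and labels are illustrative assumptions.
FACE_SCORES = {"sad": -1, "neutral": 0, "smiley": 1}

# Example responses from six children to one vignette option,
# e.g. "the cartoon child gets up earlier"
responses = ["smiley", "smiley", "sad", "neutral", "smiley", "sad"]

scores = [FACE_SCORES[r] for r in responses]
print(Counter(responses))  # tally of each face chosen
print(mean(scores))        # average approval on the -1..1 scale
```

Coding each pointed-to face as a number is what lets opinions gathered without any spoken answers be aggregated across hundreds of children.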

Many social scientists have demonstrated that children, even young children, are people who make choices within whatever limits they cannot change – they “have agency”.

Our findings from the cartoon stories show a wide range of perspectives about how children think about improving their wellbeing and the wellbeing of other children in their communities. If this cartoon vignette methodology were scaled up to include child-respondents in large household surveys, researchers and policy makers could learn directly from children and rely less on adult proxy-respondents, which might result in more effective policies and programmes.

The Conversation

Deborah Levison receives funding from the National Institutes of Health (NIH) and the National Science Foundation (NSF) in the United States, for the IPUMS-International project (www.ipums.org).

Anna Bolgrien receives funding from the Eunice Kennedy Shriver National Institute of Child Health and Human Development in the United States as part of her work on IPUMS MICS (mics.ipums.org).

ref. Children’s views are rarely sought by researchers: we found a way to do it – https://theconversation.com/childrens-views-are-rarely-sought-by-researchers-we-found-a-way-to-do-it-268496

Clergy wives in Ghana can be powerful – but it takes constant bargaining with men

Source: The Conversation – Africa – By Abena Kyere, Research Fellow, University of Ghana

There is a story in the Bible of a sick woman who held on to the cloak of Jesus amid an impenetrable crowd. She did get her healing, as Jesus immediately felt the loss of power from within himself. However, he did not rebuke the woman for his loss. Rather, he commended her for her determination to get healing by tapping into his power.

I am reminded of this story whenever I think about women and religion, specifically Christianity. Can the church as a body ever make room for women in Africa? Are the fathers of the church willing to share their powers? What happens when the clergyman’s wife seeks to be or becomes as powerful as her husband?

As a social anthropologist, I have, over the past five years, conducted research on clergy wives in Ghana, sharing my work through publications and in the classroom.

In my recent study, I wanted to find out how Pentecostal and Charismatic pastors’ wives gain and use a position of power in the church. Through interviews and participant observation, I gathered data on clergy wives’ religious experiences in Ghana. I found that although clergy wives gain power through their husbands, they are not passive conductors of power. While they operate in a patriarchal system, they develop ways of, and become adept at, negotiating and bargaining to gain and keep it.

A study of clergy wives provides a view into the hidden, often unexplored, power dynamics that exist within churches as well as the agency and constraints that women experience in religious spaces.

The clergy wife and the road to power

The clergy wife’s position is rooted in the “two-person career” type of work: she is firmly integrated into her husband’s work. The literature on the clergy wife is replete with the picture of an overburdened woman who occupies one of the most difficult positions in the church and society. An advertisement which parodies the position reads:

HELP WANTED: Pastor’s wife. Must sing, play music, lead youth groups, raise seraphic children, entertain church notables, minister to other wives, have ability to recite Bible backward and choreograph Christmas pageant. Must keep pastor sated, peaceful and out of trouble. Difficult colleagues, demanding customers, erratic hours. Pay: $0.

This funny representation of the clergy wife places her firmly in the intersection of domestic responsibility, religious welfare and administrative authority. Clergymen hold pivotal roles in the life of believers, from spiritual leadership to pastoral care. Their position, which is considered divine, endows them with unquestionable authority and power. It can be subtle or profoundly apparent, particularly in the Pentecostal and Charismatic movements.

This power extends to their wives, a phenomenon which has been termed the First Lady Syndrome. This is a situation where a wife’s power and influence is conferred through her spouse and is contingent on her continual marital affinity to him. Some clergy wives in Ghana actually bear the title “first lady”.

The power that wives initially get from husbands can be manifested through various means, like leadership of women’s groups in the church, spiritual oversight, and counselling services. They are perceived as mothers, offering advice on critical life decisions.

One wife in my study noted:

As the mother of the church, it is my responsibility to ensure that my ‘children’ choose good partners. I have dissolved engagements before because I felt that they will not be good, and I have also been the one to arrange relationships that have led into marriages …

Wives can become very powerful, just like their husbands. This happens especially where they form and lead groups within the church. This is the moment when the position and role of the clergy wife becomes what social researcher Jane Soothill describes as mimicking a “female charismatic dynasty”. This is a signal to the patriarchal system that there is a need to control such power.

Bargaining to keep power

While women are allowed in the “fathers” group, they are still expected to work within the restrictions and rules of the system. The clergyman, the most overt symbol of this system, benefits from divine immunity and his glory may not be shared, even with his wife.

I found that where clergy wives are perceived to be powerful, they are also regarded by the husband or the church leaders as dangerous. This results in their need to bargain with the system for self-preservation. The strategies which a clergy wife adopts to negotiate are based on her individual situation. They may range from silence to a show of feminine humility and submission. Display of submission and deference to the husband is the most often used tactic.

One wife shared:

Sometimes when I interact with the women and advise humility, I am providing another strategy for their survival.

I found that others are forced to retreat entirely. They either dissolve the group or step down from their leadership role in the church. Some wives circumvent these restrictions by migrating their activities to digital platforms like Facebook and WhatsApp groups, or other forms of media. A wife who chooses defiance or refuses to negotiate may end up divorced.

There is a popular joke that if men are the head, then women are the neck that moves the head, a reference to women’s invisible power. But what kind of power is that which can only manifest covertly, through the benevolence of others? How safe is this arrangement for women?

What I have discussed here does not present the whole story of the clergy wife. But it shows a world where women constantly bargain for space. In the opening story, the woman was commended for her faith and foresight, and her desire to better her lot – a takeaway lesson from the master. In my view, Christianity and other religions should be a channel for freedom, healing, and the creation of new avenues for expression of liberation.

The Conversation

Abena Kyere does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Clergy wives in Ghana can be powerful – but it takes constant bargaining with men – https://theconversation.com/clergy-wives-in-ghana-can-be-powerful-but-it-takes-constant-bargaining-with-men-274561

The workplace wasn’t designed for humans – and it shows

Source: The Conversation – UK – By Christine Ipsen, Professor in Technology Implementation, Technical University of Denmark

Work designed for maximum output often treats people like expendable resources—and burnout is the predictable result. pexels/shvetsa, CC BY-SA

Input. Output. Targets met. Value created. Performance delivered. Strip work down to its essentials and for many people, this is what remains: a machine-like focus on producing, performing and optimising.

The system keeps moving – often with little concern for the human energy, attention and resilience required to keep it running. Over time, this can lead to stress, ill-health, disengagement and burnout. Almost half of employees worldwide say they’re currently burned out, and nearly three-quarters of US workers report that workplace stress affects their mental health.

But exhaustion isn’t a personal failing – it’s built into the system. Indeed, this way of organising work is not accidental. It has deep roots in how modern workplaces were designed.

Much of this thinking dates back to the late 19th century and the work of Frederick Taylor, a US engineer whose ideas helped shape modern management. Taylor was widely known for his methods to improve industrial efficiency, by treating workers as parts of a machine – measured, paced and optimised.

Obviously, a lot has changed since Taylor’s time – we understand far more about mental health and people’s capacity for work. Yet many workplaces still operate this way, with a strict focus on performance and goals.

A new way of viewing work

These high levels of stress, ill-health and burnout made us reflect. As concern grows about exhausting natural resources in the name of profit, we began to question whether workplaces are doing the same to people – using them up for productivity, with little thought for the long-term cost.

While organisational psychology highlights motivation, engagement and well-being as drivers of performance, it often overlooks a crucial issue: what happens to people’s time, energy, skills and relationships once they are spent at work?

Many models of work assume these human resources are limitless, focusing on outputs rather than what is left behind. But without opportunities to recover and regenerate, this way of working leads to depletion, disengagement and ultimately burnout.

A man sits at a computer looking stressed, holding his head in his hands.
High performance, low battery.
pexels/diimejii, CC BY

But what if work didn’t have to use people up to get results? What if productivity and well-being weren’t in competition, but part of the same system?

Drawing on ideas from the circular economy, along with management theory and organisational psychology, we propose a different way of thinking about work. We call it circular work.

Circular work flips the usual logic. Instead of treating people’s time, energy and skills as resources to be consumed, it sees work as a cycle – where effort is matched with recovery, learning and renewal. The goal isn’t just short-term output, but work that people can sustain without burning out.

At its core, circular work connects employee well-being and organisational performance and is built around four simple ideas:

  • all human work resources are connected – energy, skills, knowledge and relationships affect each other

  • it’s possible to recover and regenerate spent work resources – rest, support, and learning help employees bounce back

  • work can build or drain resources – how work is designed determines whether people thrive or are thwarted

  • sustainable work grows from protected and renewed resources – investing in well-being and development helps to sustain people and organisations.

Humans not machines

The idea of renewing people’s energy and skills can sound radical in today’s target-driven work culture.

But renewal isn’t a luxury. It starts with a simple truth: people are not infinite or endlessly replaceable. Work can drain our energy, attention and health – sometimes in ways that take years to undo. Designing work as though this doesn’t matter comes at a real cost.

In practice, regeneration shows up in everyday management. Decisions about workload, autonomy, recovery time, recognition and support determine whether work depletes people or helps them recover and grow. Put simply, human needs and well-being have to sit at the centre of how work is organised.

Psychological safety is part of this. Regenerative workplaces are those where people can speak up, raise concerns and take reasonable risks without fear of blame.

This is where leadership really matters. Organisations need to ask hard questions about the true impact of management practices: do they drive absence, presenteeism and turnover – or do they enable learning, growth and renewal? Rewarding managers and teams who protect well-being reduces stress, retains talent and makes organisations places people want to work.

The bottom line is, as long as work is designed like a machine to maximise output, burnout will remain its most predictable outcome. But sustainable performance is possible. It just means actually designing workplaces that protect — and renew — the people working in them.


This article was commissioned as part of a partnership between
Videnskab.dk and The Conversation.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. The workplace wasn’t designed for humans – and it shows – https://theconversation.com/the-workplace-wasnt-designed-for-humans-and-it-shows-269127

How do scientists hunt for dark matter? A physicist explains why the mysterious substance is so hard to find

Source: The Conversation – USA – By David Joffe, Associate Professor of Physics, Kennesaw State University

The Coma Cluster, research into which supports the existence of dark matter. NASA, ESA, J. Mack (STScl), and J. Madrid (Australian Telescope National Facility)

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Can we generate a way to interact with dark matter with current technology? – Leonardo S., age 13, Guanajuato, Mexico


That’s a great question. It’s one of the most difficult and fascinating problems right now in both astronomy and physics, because while scientists know that the elusive substance called dark matter makes up the majority of all matter in the universe, we’ve never actually observed it directly. Dark matter is so difficult to interact with because it’s “dark,” which means it doesn’t interact directly with light in any way.

I’m a physicist, and scientists like me observe the world around us mainly by looking for signals from different wavelengths of light. So no matter what type of technology scientists use, they run into the same issue in the hunt for dark matter.

It’s not completely impossible to interact with dark matter, though, because it can interact with ordinary matter in other ways that don’t involve light. But those interactions are generally very weak. What we call dark matter is really anything that we can see only through these weaker interactions, especially gravity.

How we know dark matter exists

One way that dark matter can interact with ordinary matter is through gravity. In fact, gravity is the main reason scientists even think dark matter exists at all.

For decades, scientists have been observing how galaxies spin and move throughout the universe. Gravity acts on stars and galaxies in the same way it keeps you from floating off into space. Heavier objects have a stronger gravitational pull. At these huge scales, researchers have spotted some unexpected quirks that the gravity of visible matter alone can’t explain.

For example, almost 100 years ago, a Swiss astronomer named Fritz Zwicky studied a cluster of galaxies called the Coma Cluster. He noticed the galaxies inside it were moving very fast, so much so that they should have flown apart many millions of years ago.

The only way the cluster could have stayed together for so long is if there was much more matter holding it together with gravity than the telescope could see. This extra matter necessary to hold the galaxies together became known as dark matter.

About 40 years after Zwicky, an American astronomer named Vera Rubin looked at the individual stars moving around the centers of spiral galaxies as they rotated. She saw that the stars at the outside edges of the spiral were moving much faster than you’d expect if only the gravity from the stars you could see was keeping them from flying off into intergalactic space.

Just as with the galaxies moving around the cluster, the motion of the stars around the edges of the galaxies could be best explained if there was much more matter in the galaxies than what we could see.
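The expectation Rubin’s measurements contradicted can be sketched with a short worked example. If all of a galaxy’s mass were the visible mass near its centre, orbital speed would fall off with distance; a flat speed curve instead implies the enclosed mass keeps growing with radius. The galaxy mass and radii below are illustrative round numbers, not measurements:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the sun, kg
KPC = 3.086e19     # one kiloparsec in metres

def orbital_speed(enclosed_mass_kg, radius_m):
    """Circular orbital speed if all mass sits inside the orbit."""
    return math.sqrt(G * enclosed_mass_kg / radius_m)

# Illustrative galaxy: 1e11 solar masses of visible matter
M_visible = 1e11 * M_SUN
v_inner = orbital_speed(M_visible, 5 * KPC)
v_outer = orbital_speed(M_visible, 20 * KPC)

# Speed falls as 1/sqrt(r): 4x the radius gives half the speed
print(v_outer / v_inner)  # ≈ 0.5

# A flat curve (v_outer ≈ v_inner) instead requires the enclosed
# mass to grow with radius: M(r) = v^2 * r / G
M_needed = v_inner**2 * (20 * KPC) / G
print(M_needed / M_visible)  # ≈ 4: four times the visible mass
```

The gap between the mass needed to keep outer stars in orbit and the mass we can see is, in essence, the evidence for dark matter.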

A spiral-shaped galaxy with a bright spot in the center
A rotating spiral galaxy in the Coma Cluster.
NASA, ESA, and the Hubble Heritage Team (STScl/AURA); Acknowledgement: K. Cook (Lawrence Livermore National Laboratory)

More recently, scientists have combined optical telescopes that observe visible light with X-ray telescopes. Optical telescopes can take pictures of galaxies as they move and rotate. Sometimes, galaxies in these images are distorted or magnified by gravity coming from large masses in front of them. This phenomenon is called gravitational lensing, which is when the gravity around a very heavy object is so strong that it bends the light passing by it, acting like a lens.

X-ray telescopes, on the other hand, can see the clusters of hot gases that surround galaxies. By combining these two telescopes, astronomers can see galaxies as well as the gases surrounding them – all the observable matter. Then, they can compare these images with the optical results. If there’s more gravitational lensing seen than what could be caused by the gas, there must be more mass hiding somewhere and causing the lensing.

Clouds of blue and pink shown, with lots of bright spots representing galaxies shown in the background.
The picture combines optical images of the galaxies with X-ray images. The region in the pink shows the area where the X-ray telescope sees the distribution of gas around the galaxies, and the blue area shows the region where gravitational lensing can be observed. There is blue in places where there isn’t pink, so lensing is showing that there’s something else heavy there. Dark matter is again the best explanation.
NASA, ESA, CXC, M. Bradac (University of California, Santa Barbara), and S. Allen (Stanford University)

How we might be able to see dark matter

Unfortunately, all this tells astronomers only that dark matter must be there, not what it really is. The evidence for dark matter is all based on how it interacts with gravity at very large scales. It’s still “dark” to scientists in the sense that it hasn’t interacted directly with any measurement devices.

The good news is that light and gravity aren’t the only forces in the universe. A force called the weak force might be able to interact directly with dark matter and give scientists a direct signal to observe. Most ideas about what dark matter might be include the possibility of it interacting through the weak force, converting energy into signals that are visible.

The weak force is not observable at normal scales of distance. But for objects the size of an atom’s nucleus or smaller, it can change one type of subatomic particle into another. The weak force can also transfer energy and momentum at very short distances – this is the main effect scientists hope to observe with dark matter. These processes might be extremely rare, but in theory they should be possible to see.

Most experiments looking to see dark matter directly are searching for signals of rare weak interactions in an underground detector, or for gamma rays that can be seen in a special gamma-ray telescope.

In either case, a signal from dark matter would likely be very faint: an interaction that can’t be explained any other way, or a signal with no other possible source. Even if the effect is faint, it might still be possible to observe, and any such signal would be an exciting step toward seeing dark matter more directly.

In the end, it may be a combination of signals from experiments deep underground, in particle colliders, and different types of telescopes that finally lets scientists see dark matter more directly. Whichever technology ends up being successful, hopefully sometime soon the matter that makes up our universe will be a little less dark.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

David Joffe receives funding from NASA through a grant from the Georgia Space Grant Consortium.

ref. How do scientists hunt for dark matter? A physicist explains why the mysterious substance is so hard to find – https://theconversation.com/how-do-scientists-hunt-for-dark-matter-a-physicist-explains-why-the-mysterious-substance-is-so-hard-to-find-269876

No animal alive today is ‘primitive’ – why are so many still labeled that way?

Source: The Conversation – USA – By Kevin Omland, Professor of Biological Sciences, University of Maryland, Baltimore County

A platypus has evolved to fit its particular ecological niche. Joao Inacio/Moment via Getty Images

We humans have long viewed ourselves as the pinnacle of evolution. People label other species as “primitive” or “ancient” and use terms like “higher” and “lower” animals.

A drawing of a tree shape with monera and amoebae at the base of the trunk, many branches labeled with other organisms, and man at the very top
‘Man’ is at the very top looking down at all other forms of life in Ernst Haeckel’s drawing.
Ernst Haeckel/Photos.com via Getty Images Plus

This anthropocentric perspective was entrenched in 1866, when German scientist Ernst Haeckel drew one of the first trees of life. He placed “Man,” clearly labeled, at the top. This illustration helped establish the popular view that we are the ultimate goal of evolution.

Modern evolutionary biology and genomics debunk that flawed perspective, showing there is no hierarchy in evolution. All species alive today, from chimpanzees to bacteria, are cousins that each have equally long lineages, rather than ancestors or descendants.

Unfortunately, these outdated notions remain prevalent in scientific journals and science journalism. In my new book, “Understanding the Tree of Life,” I explore why it is fundamentally misleading to view any current species as primitive, ancient or simple. As an evolutionary biologist, I offer an alternative view that emphasizes evolution’s complex, nonhierarchical, interconnected history.

Not primitive, just different

Egg-laying mammals, the monotremes, are frequently labeled the most “primitive” living mammals. This category includes the platypus and four species of echidnas. Indeed, their egg-laying is an ancient characteristic shared with reptiles.

But platypuses also have many unique recent adaptations that make them well suited to their lifestyle: They have webbed feet for swimming and a bill with specialized electroreceptors that detect prey in the mud. Males have spurs with venom that they can use to defend themselves against rivals. If you take a platypus’s view, they’re the pinnacle of evolution for their specific ecological niche.

prickly looking echidna digging for food under a log
Echidnas have just what it takes to flourish in their unique niche.
Chris Beavon/Moment via Getty Images

Echidnas may seem primitive, especially because they lack a capability that humans have – giving birth to live young. Yet they possess many extraordinary traits that humans lack. Echidnas are known for their outer covering of protective spines. They also have powerful claws for digging, a sensitive beak and a long sticky tongue, all of which they use when foraging for ants and termites. In a head-to-head competition foraging for prey in a termite mound, an echidna would easily outperform any human.

Other mammals native to Australia also turn up on lists of primitive mammals, such as many species of marsupials – pouched mammals, including kangaroos, koalas and wombats. These species generally give birth to small, minimally developed young that move to the mother’s pouch where they complete development. Pouch development may seem inferior to the human way, but it does have advantages. For example, kangaroos can simultaneously nurture young at three stages of development.

Evolutionary tree appearance depends on focus

Marsupials such as opossums, or monotremes such as the platypus, are often shown at the bottom or left side of an evolutionary tree. However, that does not mean that they are older, more primitive or less evolved.

Evolutionary trees – what scientists call phylogenies – show cousin relationships. Just as your second or third cousin is no more primitive than you are, it is misleading to think of a koala or echidna as primitive because of where they are depicted on these trees.

When scientists and journalists choose which species to include in the evolutionary trees in their publications, it can influence how the public perceives these species. But species shown lower on the page are not “lower” on some evolutionary scale.

Rather, they are placed there because the focus of many of those trees is on placental mammals, such as humans, other primates, carnivores, rodents and so on. When the focus is on placental mammals, it makes sense to include one or two species of marsupials as comparisons for reference.

diagram showing family relationship of different marsupial species with animals in silhouette at the top, a human is included for comparison.
A phylogenetic tree focused on marsupials shows humans as one of the species included for comparison.
Spiekman, S., Werneburg, I. Sci Rep 7, 43197 (2017), CC BY

In contrast, in a tree focused on marsupials, one or two placental mammals could be included at the bottom of the page for comparison.

Why understanding the tree of life matters

Viewing humans as the goal of evolution leads to a misunderstanding of the entire evolutionary process. Since evolution is the conceptual foundation for all biology, this flawed perspective can hinder all biological and biomedical science.

Mastering a modern understanding of evolutionary trees is crucial to advances in fields ranging from animal behavior and physiology to conservation and biomedicine. For example, because rhesus monkeys are much more closely related to us than are capuchins, rhesus monkeys are generally better subjects for preliminary tests of human vaccines. Opossums, incorrectly considered to be primitive, are great subjects for providing a broader framework for studies of neurobiology and aging because they are distantly related to us, not because they are lower or more ancestral.

Grasping the profound reality that humans are not the pinnacle of evolution, but one branch among many, is foundational for all modern biology. Understanding the tree of life is central to fully embracing the shared modern status of all animals, from platypuses to people.

The Conversation

Kevin Omland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. No animal alive today is ‘primitive’ – why are so many still labeled that way? – https://theconversation.com/no-animal-alive-today-is-primitive-why-are-so-many-still-labeled-that-way-266208

Infusing asphalt with plastic could help roads last longer and resist cracking under heat

Source: The Conversation – USA – By Md S Hossain, Professor of Civil Engineering, University of Texas at Arlington

A stretch of road near Rockwall, Texas, paved with plastic-infused asphalt. Md. Sahadat Hossain

Globally, more than 400 million tons of plastic are produced each year, and less than 10% is recycled. Much of the rest ends up burned, buried or drifting through waterways, a problem that’s only getting worse.

As a civil engineer, I started asking a simple question: Instead of throwing used plastic away, what if we could build something useful with it?

That question led to a technology that mixes small amounts of recycled plastic with asphalt – the black, sticky material used to make roads and parking lots. The result is a stronger road that lasts longer and keeps some used plastic out of the environment.

You can see these roads on my university’s campus at the University of Texas at Arlington, where my team has paved test sections in parking lots. Perhaps more importantly for testing this technology at scale, we have constructed a one-mile section of plastic-infused road in Rockwall, Texas, a city near Dallas. We’ve gotten interest from more cities in and outside Texas as well.

My goal is to take one problem – plastic pollution – and use it to fix another: deteriorating roads.

Where the idea came from

I grew up in a low-income neighborhood in Bangladesh, near a large dump site. As a child, I noticed that people living closest to the piles of waste were often sick, while those farther away were healthier.

At the time, I didn’t know the science behind it – I just saw neighbors having to choose between buying medicine and buying dinner. That memory left a long-lasting impact on me.

Years later, when I became an engineer, I learned that poor waste management doesn’t just harm the environment – it harms people. That realization became the foundation of my work.

How plastic roads work

Traditional asphalt is made from a mix of stones, sand and a petroleum-based binder called bitumen, which holds everything together. In my research team’s process, we replace a small part of that bitumen – about 8% to 10% – with melted plastic from everyday items, such as single-use plastic bags and plastic bottles. For our plastic road construction project near Dallas, we used 4.5 tons of plastic waste for nearly a mile of one-lane road.
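As a rough plausibility check on those figures, the arithmetic can be sketched in a few lines. The mix tonnage and the 5% binder content below are illustrative assumptions, not the project’s actual recipe:

```python
def plastic_needed(mix_tons, bitumen_fraction=0.05, replacement=0.09):
    """Tons of shredded plastic for a given mass of asphalt mix.

    Illustrative figures only: typical hot-mix asphalt is roughly
    5% bitumen binder by weight, and we assume 9% of that binder
    is replaced with plastic (the middle of the 8-10% range).
    """
    bitumen_tons = mix_tons * bitumen_fraction
    return bitumen_tons * replacement

# A lane-mile of asphalt pavement is on the order of 1,000 tons
# of mix, which lines up with the article's 4.5-ton figure:
print(plastic_needed(1000))  # 4.5 tons of plastic
```

The takeaway is that a small replacement fraction of a small binder fraction still adds up to tons of diverted plastic per mile of road.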

We first clean the plastic, then shred it into small flakes. Finally, we mix it into the asphalt at high temperatures. These steps ensure that it melts completely and bonds tightly, leaving no loose plastic behind.

This process is like adding rebar to concrete: The plastic adds flexibility and strength. Roads with this mix can better handle extreme temperatures and heavy traffic. In hot places, that means fewer cracks and potholes.

During an extreme heat wave in April 2024, a plastic road constructed in Dhaka, Bangladesh, showed no visible distress or cracks, whereas many conventional roads in the country did during the same period.

Heating asphalt in a large piece of construction equipment.
The team used plastic-infused asphalt to pave a stretch of road.
Md Sahadat Hossain

It also reduces the demand for new petroleum-based materials, since we’re reusing recycled plastic that already exists: the plastic replaces part of the bitumen, itself a petroleum-based ingredient of the road.

The plastic waste problem

Plastic waste has grown dramatically over the past several decades. In the U.S., plastic waste has increased every year since the 1960s, with the steepest rise between 1980 and 2000.

In 2018 alone, landfills received nearly 27 million tons of plastic, making up 18.5% of all municipal solid waste nationwide. That’s a staggering amount of material sitting unused.
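The two figures above also imply the scale of landfilled municipal solid waste overall. A quick check of that implied total, using only the numbers quoted in this article:

```python
# If ~27 million tons of landfilled plastic made up 18.5% of all
# municipal solid waste (MSW) in 2018, the implied MSW total is:
plastic_tons = 27e6     # tons of plastic landfilled in 2018
plastic_share = 0.185   # plastic's share of municipal solid waste

total_msw_millions = plastic_tons / plastic_share / 1e6
print(round(total_msw_millions))  # prints 146 (million tons of MSW)
```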

Plastic-infused asphalt can also save money. Because it lasts longer and resists cracking, cities may spend less on repairs and maintenance. In Rockwall, for example, early estimates suggest these roads could extend the pavement’s life by several years.

A team using shovels and broom-like tools to smooth a patch of new pavement.
The construction team finishes up paving a stretch of road with plastic-infused pavement.
Md Sahadat Hossain

Under extreme heat, bitumen can melt. During a performance evaluation of a plastic road test section in Bangladesh, we found that adding plastic to the mix increases the road’s heat resistance. These results are especially helpful for states like Texas that deal with extreme heat over the summer. For our sites in UTA’s parking lot and in Rockwall, the pavement has so far stayed intact on days when temperatures surpassed 100 degrees Fahrenheit.

Overcoming challenges

But there are still challenges. Scaling up production requires a consistent supply of clean, sorted plastic, which not all cities have the infrastructure to provide. Some types of plastic can’t be safely melted or may release harmful fumes if not processed correctly. We’re studying these issues closely to make sure the process is safe.

There are also questions about what happens when plastic roads reach the end of their life. Could they release microplastics – tiny plastic fragments – as they wear down? Early research suggests the risk is low because the plastic is bound within the asphalt, but we’re continuing to monitor it.

A petri dish full of tiny shards of colorful plastics
Microplastics are tiny bits of plastic that show up in the environment.
Svetlozar Hristov/iStock via Getty Images Plus

My own lab studies show very minimal microplastic release, and a 2024 study estimated that microplastic release from recycled plastic-asphalt was a thousand times lower than the release of rubber particles from worn tires.

Eventually, we may need to come up with alternative materials for these roads if plastic waste begins to decline. But in the meantime, this type of waste is still readily available.

Building toward a sustainable future

Our next steps involve expanding this technology to more regions, testing different types of plastic blends and ensuring that every road built this way is durable, affordable and environmentally safe.

Right now, we are working to test and implement plastic roads in cities beyond Texas and even in other countries. We have also filed a patent application for the technology and plan to commercialize it in the long term.

When I see plastic roads being built in Bangladesh – sometimes not far from where I grew up – I think back to the people who lived near those dump sites. This work isn’t just about roads or recycling. It’s about dignity and keeping at least some waste away from the places where people live.

The Conversation

Md S Hossain is listed under a patent filed for plastic-infused asphalt.

ref. Infusing asphalt with plastic could help roads last longer and resist cracking under heat – https://theconversation.com/infusing-asphalt-with-plastic-could-help-roads-last-longer-and-resist-cracking-under-heat-264156

Journalism may be too slow to remain credible once events are filtered through social media

Source: The Conversation – USA – By Charles Edward Gehrke, Deputy Division Director of Wargame Design and Adjudication, US Naval War College

House Speaker Mike Johnson updates reporters about budget talks on Capitol Hill. AFP/Roberto Schmitt via Getty Images

In the first weeks after Russia’s invasion of Ukraine in 2022, a strange pattern emerged in Western media coverage. Headlines oscillated between confidence and confusion. Kyiv would fall within days, one story would claim, then another would argue that Ukraine was winning. Russian forces were described as incompetent, then as a terrifying existential threat to NATO.

Analysts spoke with certainty about strategy, morale and endgames, but often reversed themselves within weeks. To many news consumers, this felt like bias – either pro-Ukraine framing or anti-Russia narratives. Some commentators accused Western media outlets of cheerleading or propaganda.

But I’d argue that something more subtle was happening. The problem was not that journalists were biased. It was that journalism could not keep pace with the war’s informational structure. What looked like ideological bias was, more often, temporal lag.

I serve in the Navy as a war gamer. The most critical part of my job is identifying institutional failures. Trust is one of the most critical factors in that analysis, and in this sense, the media is losing ground.

The gap between what people experience in real time and what journalism can responsibly publish has widened. This gap is partly where trust erodes. Social media collapses the distance between event, exposure and interpretation. Claims circulate before journalists can evaluate them.

This matters in my world because the modern battlefield is not just physical. Drone footage circulates instantly. Social media channels release claims in real time. Intelligence leaks surface before diplomats can respond.

These dynamics also matter for the public at large, which encounters fragments of reality, often through social media, long before any institution can responsibly absorb and respond to them.

Journalism, by contrast, is built for a slower world.

Slow journalism

At the core of their work, journalists observe events, filter signal from noise, and translate complexity into narrative. Their professional norms – editorial gatekeeping, standards for sourcing, verification of facts – are not bureaucratic relics. They are the mechanisms that produce coherence rather than chaos.

But these mechanisms evolved when information arrived more slowly and events unfolded sequentially. Verification could reasonably precede publication. Under those conditions, journalism excelled as a trusted intermediary between raw events and public understanding.

These conditions no longer exist.

A Ukrainian medic treats a soldier for leg injuries.
As in other conflicts, early reports out of battles in Ukraine sometimes ended up being inaccurate.
AP Photo/Leo Correa

Information now arrives continuously, often without clear provenance. Social media platforms amplify fragments of reality in real time, while verification remains necessarily slow. The key constraint is no longer access; it is tempo.

Granted, reporters often present accounts as events are occurring, whether on live broadcasts or through their own social media posts. Still, in this environment, journalism’s traditional strengths become sources of lag.

Caution delays response. Narrative coherence hardens fast. Corrections then feel like reversals rather than refinements.

Covering real-time events

The war in Ukraine has made this failure mode unusually visible. Modern warfare generates data faster than any institution can metabolize. Battlefield video and real-time casualty claims flood the system continuously.

For their part, journalists are forced to operate from an impossible position: expected to interpret events at the same speed they are livestreamed. And so journalists are forced sometimes to improvise.

Early coverage of the war leaned on simplified frames, including Russian incompetence, imminent victory and decisive turning points. These frames offered provisional stories generated to satisfy intense public demand for clarity.

As the war evolved, however, those stories collapsed.

A woman wearing a yellow jacket holds her phone to record ICE agents in one hand and her dog's leash in the other.
Citizen journalists can often record and upload images or video of events faster than traditional news outlets will produce a story.
SOPA Images via Getty Images

This did not mean the original reporting was malicious. It meant the narrative update cycle lagged behind the underlying reality. What analysts experienced as iterative learning, audiences experienced as contradiction.

The acceleration trap

This forces journalism into a reactive posture. Verification trails amplification, meaning accurate reports often arrive after the audience has already formed a first impression.

This inverts journalism’s historical role. Audiences encounter raw claims first and journalism second. When the two diverge, journalism appears disconnected from reality as people experienced it.

Over time, this produces a structural shift in trust. Journalism is no longer perceived as the primary interpreter of events, but as one voice among many, arriving late. Speed becomes a proxy for relevance. Interpretation without immediacy is discounted.

Although partisan bias certainly exists, it is insufficient to explain the systemic incoherence Americans are witnessing.

Can journalism adapt?

Institutions optimized for one tempo rarely adapt cleanly to another. Journalism is now confronting the risk that its interpretive cycle no longer matches the speed of the world it is trying to explain.

Its future credibility will depend less on accusations of bias or even error than on whether it can reconcile rigor with speed, perhaps by trading the illusion of early certainty for the transparency of real-time doubt.

If it cannot, trust will continue to drain. An institution that evolved to help society see is falling behind what society is already watching.

The opinions and views expressed are those of the author alone and do not necessarily represent those of the Department of the Navy or the U.S. Naval War College.

The Conversation

Charles Edward Gehrke does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Journalism may be too slow to remain credible once events are filtered through social media – https://theconversation.com/journalism-may-be-too-slow-to-remain-credible-once-events-are-filtered-through-social-media-273748