Studying racial and ethnic health inequality in Canada: What we need to get right

Source: The Conversation – Canada – By Chloe Sher, PhD Candidate, Kinesiology and Physical Education, University of Toronto

Health disparities across racial and ethnic groups persist in Canada. But how effectively the country can address them hinges on how well it studies these differences.

In a recent paper I co-authored, we examine how researchers study racial and ethnic inequalities in health. We identify four persistent problems: unclear categories of race and ethnicity, a white-centred lens, heavy reliance on majority-defined health outcomes and limited explanation of why these disparities arise.

We discuss these issues drawing heavily on evidence from the United States. This reflects the state of the field: Much of the research and many of the frameworks used to study racial and ethnic health inequality come from the U.S. and have been widely applied in Canadian research.

Canada and the U.S. share a history of colonialism, structural racism and white dominance that continues to shape persistent health inequalities across racial and ethnic groups.

But Canada is also different in several important ways. It has a larger immigrant population shaped by selective immigration policies, wider variation in social and economic conditions across regions and communities, and a higher proportion of Indigenous Peoples. Data are often more limited, and policies such as universal health care shape how inequality is experienced and addressed.

To better understand and address health inequalities in Canada, Canadians must rethink how race and ethnicity are studied and ground approaches in the Canadian context.

Canada is not the U.S.

Canada’s social policies are distinct from American policies. To begin with, the racial and ethnic makeup of the populations differs. Canada, for example, has a smaller Black population and a larger Asian population than the U.S. These differences reflect broader historical and institutional contexts that shape how racial and ethnic inequalities are structured in each country.

At the same time, Indigenous Peoples are more central to health inequality in Canada. This is because Canada has a relatively high percentage of Indigenous Peoples compared to the U.S. and many other more economically developed nations. The health of Indigenous Peoples is shaped by a long history of colonialism and ongoing structural disadvantage.

The immigrant population also differs. About one-quarter of Canada’s population is foreign-born, compared to about one in seven in the U.S. Canada’s selective immigration system means many immigrants arrive with relatively high levels of education and good health. This contributes to patterns like “the healthy immigrant effect.”

Research has shown that Canada exhibits the healthy immigrant effect, in which newly arrived immigrants tend to have better health than the Canadian-born population, though this advantage often declines over time with longer residence. Inequality does not line up neatly with race.

Policy matters too. Canada promotes multiculturalism, while the U.S. emphasizes assimilation into a single national culture. Canada has universal health care, which reduces financial barriers to basic care.

But this coverage is partial. Services such as prescription drugs, dental care and mental-health support are not fully covered and often depend on employment benefits or where people live. Since health care is organized at the provincial level, access and quality also vary across regions. These gaps shape who gets timely care and who falls through the cracks.

The problem with ‘visible minority’

The term “visible minority” is prevalent in research on racial and ethnic health disparities in Canada. But it often does more harm than good.

At its core, it lumps all non-white, non-Indigenous people into one group. That means populations with vastly different histories, migration paths and socioeconomic status are treated homogeneously. The ability to see meaningful differences in health across groups like Chinese, South Asian and Black communities is diminished.




Read more:
The diversity within Black Canada should be recognized and amplified


It also mixes up race and immigration. Many studies don’t separate immigrants from Canadian-born racialized populations. This matters because of the healthy immigrant effect. If newer immigrants are healthier on average, combining them with long-settled groups can make inequalities look smaller than they really are.

The term itself is also ambiguous. People do not always understand or interpret it in the same way, and it’s often taken literally to include anyone visibly different, such as those with disabilities or who are transgender, which complicates its use in health research.

In many ways, the problem stems from data. Canada has limited, inconsistent race-based data. Racial categories are not standardized, and detailed race-based data are often hard to access. With such limited data, researchers are often forced to rely on broad racial categories. This aggravates the problem: instead of revealing inequality, it hides it.

We measure health too narrowly

Another issue is how health is defined in the first place. Most studies rely on standard measures such as life expectancy, chronic illness or mortality. These measures are important, but they only tell part of the story. They reflect a narrow, biomedical view, often omitting how diverse racial and ethnic groups actually experience health and well-being.

Consider Indigenous communities as an example: health is not solely about the absence of disease. It includes connections to land, culture, community and spirituality, alongside physical and mental well-being. Defining health narrowly can marginalize groups by neglecting how they understand and experience health.

A narrow focus also makes inequality harder to see. Different groups face distinct health risks and barriers. When we rely on only a few measures, important health problems and inequalities can be overlooked.

A Canadian approach

Studying racial and ethnic health inequality in Canada requires a distinctly Canadian approach. The population, data and policy context differ from those in the U.S., and these differences shape both how inequalities emerge and how they should be studied.

This means moving beyond broad categories, improving race-based data, and using more meaningful and diverse measures of health. It also requires closer attention to context, including Indigenous and rural settings, as well as Canada’s social, immigration and health policy landscape.

To effectively address health disparities, research needs to be grounded in Canada’s realities, not simply adapted from models developed elsewhere.

The Conversation

Chloe Sher previously received funding from the Social Sciences and Humanities Research Council (SSHRC).

ref. Studying racial and ethnic health inequality in Canada: What we need to get right – https://theconversation.com/studying-racial-and-ethnic-health-inequality-in-canada-what-we-need-to-get-right-279104

Writing for well-being: How it could be a new way to teach the essay and resist AI

Source: The Conversation – Canada – By Lindsey McMaster, Instructor, English Studies and Academic Writing, Nipissing University

Writing the dreaded English essay spikes anxiety for thousands of students, but is there a way for writing to boost students’ well-being instead?

I wanted to know if a new approach to teaching literary studies could tap into the feel-good side of writing and make essays a path to wellness, so I designed an English course to try it out at Nipissing University.

We know that university students are at risk of mental-health struggles, particularly depression and anxiety. If writing can help students instead of stressing them out, it could be a refreshing change for English studies — and a new way for teachers to introduce essay writing.

Studies show that writing can boost your mental and physical health if you focus on expressing your emotions and digging for insight.

Paying more attention to the positives in our lives, specifically by writing them down, could further enhance short- and long-term well-being.




Read more:
Why you’re wise on Tuesday and foolish on Sunday: Practising wisdom in uncertain times


Starting with journalling

Students first need to find out that writing can actually support well-being.

In the course, they took up a journalling habit, but it wasn’t just about venting their feelings or writing whatever came to mind. We looked at studies on how writing can reshape your thinking and boost positivity.

Three methods stood out:

  • Write down “three good things” about each day and, importantly, your own role in bringing them about. This technique was pioneered in a study led by psychologist Martin Seligman. Participants who adopted the approach reported feeling happier and less depressed at the one-month, three-month and six-month points. It’s now been widely shared, and it’s a great way to start a new journalling habit because it’s straightforward and effective.

  • Look to the future and write about your best possible self. When you imagine a fulfilled version of yourself, it will motivate you to do the hard work to get there. According to psychologist Laura A. King, when you imagine a fulfilled version of yourself, you can experience the health benefits of writing without revisiting negatives from the past.

  • Add creativity to your journalling. Turn a moment from your day into a comic; narrate your day as if it were happening in Middle Earth; write a haiku about your toothpaste. A diary-based study of more than 600 young adults led by psychologist Tamlin Conner showed a straightforward effect where being creative one day boosted well-being the next.

Case study on the self

Where journalling provides a space to play around with techniques, essays give students a place to reflect on their efforts, report on the results and hypothesize about positive effects of the experience.

One of the fascinating things about writing for well-being is that no one knows for certain why it works. Across studies it shows reliable, modest benefits, but the underlying mechanism for its effects hasn’t been pinned down — so students’ own theories could contribute to solving a real mystery.

Writers feed off inspiration. Showing students that authors have been using writing for well-being — and making great art in the process — gives them that extra push to keep writing and go deeper.

Inspiration from literature

Among Canadian authors, L. M. Montgomery’s story is especially compelling. Her famous books like Anne of Green Gables and Emily of New Moon have made a utopia of Prince Edward Island, but inwardly Montgomery experienced deep mental anguish that led to addiction in her later life.

Her journals detail this other side to her life and show how she used writing to ease her mental suffering. As she memorably notes in an entry from 1904:

“I feel better for writing it out. It is almost as efficacious as swearing would be and much more respectable.”




Read more:
Playing detective with Canada’s female literary past


Looking to Montgomery as a mentor helped students realize how creative and immersive personal writing can be, in turn motivating them to push forward with their own journalling.

Discussing Montgomery’s life writing in their essays made sense for students because they could see how her efforts to find solace through writing mirrored their own.

Easing back on literary jargon

Poetry can beautifully map a state of mind. But traditional approaches to teaching it have a tendency to suck the life out of literature that should be a joy and a delight.

Instead of taking what some teachers call a “technique spotting” approach where you count up the metaphors, teaching English from a well-being perspective taps into poetry’s healing qualities.

In the United Kingdom, the Poetry Pharmacy movement spearheaded by publisher and arts advocate William Sieghart focuses on the healing power of poetry.

His curated poetry collections pair thoughtfully selected poems with one-page prescriptions, highlighting each work’s curative potential for conditions like insecurity, regret, loneliness and more. Both the poem itself and the interpretation serve to advance self-knowledge and alleviate mental suffering.

‘The Healing Power of Poetry’ TEDxOxford talk with William Sieghart.

Students easily ran with this idea. They found joy in poems that spoke to their lived experience, used empathy to recommend poems to others in need and wrote movingly in essays about the mental-health issues they face most often — like academic pressure, fear of failure, homesickness, social anxiety, perfectionism, procrastination and more.

The poetry-remedy concept also lent itself to experiential approaches where students could tape a chosen poem on their mirror, make it the lock screen on their phone, share it with a loved one, create a painting or visual, text it to a distant friend — and ultimately share the story of what happened in essay form or classroom discussion.




Read more:
Why reading and writing poems shouldn’t be considered a luxury in troubling times


Turning away from AI

Essays are a notoriously difficult part of academic life, which is why generative AI presents such an irresistible pull to the stressed-out student. If essay writing is no more than a tedious recital, it’s no wonder they would gladly pass along what AI spews out on such topics.

Writing instead about your own interior world, finding evidence in your own experience and using literature to light a personalized path to growth are tasks that cannot be easily farmed out to a text-generator — because they speak directly to your own humanity.

The idea that writing can offer fresh avenues for growth and betterment is a welcome reminder of what genuine human writing is truly for.

In teaching a course on it, I found writing for well-being to be an exciting expansion of English studies broadly and essay writing in particular. It can support students’ writing and communication skills while genuinely enriching their lives, and it can help us inspire students with what’s most important in the study of literature: a lifetime love of reading and a willingness to take up the pen.

The Conversation

Lindsey McMaster does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Writing for well-being: How it could be a new way to teach the essay and resist AI – https://theconversation.com/writing-for-well-being-how-it-could-be-a-new-way-to-teach-the-essay-and-resist-ai-263703

From AI companions to climate action, we undervalue what lies ahead

Source: The Conversation – Canada – By Rahul Ravi, Professor of Finance, Concordia University

Millions of people around the world now use AI companions — for friendship, emotional support, mental health counselling and romantic interactions. This includes 72 per cent of adolescents, according to one study from the United States.

Meanwhile, human-caused climate change has already led to widespread impacts and rising risks, some of them irreversible. Yet emissions remain high.

As a professor of finance, I see these phenomena as different expressions of the same underlying bias: we apply too high a discount rate to the future.

The idea of a discount rate is straightforward. A dollar today is worth more than a dollar tomorrow. The discount rate tells us by how much. Set that rate too high, and you systematically undervalue what lies ahead. Set it too low, and you over-invest in distant outcomes.
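As a minimal sketch of the arithmetic behind this idea (the rates and amounts here are invented for illustration, not taken from the article), discounting a future sum back to today looks like this in Python:

```python
def present_value(future_amount: float, rate: float, years: int) -> float:
    """Value today of an amount received `years` from now, discounted at an annual rate."""
    return future_amount / (1 + rate) ** years

# The same $100, received 10 years from now, valued today:
print(round(present_value(100, 0.03, 10), 2))  # modest rate (3%): prints 74.41
print(round(present_value(100, 0.20, 10), 2))  # steep rate (20%): prints 16.15
```

At a steep enough rate, the same future dollar becomes nearly worthless today, which is the bias the article describes when people apply it to their health, relationships or the climate.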

In many parts of life, we set this rate too high. Behavioural economist David Laibson showed that people place disproportionate weight on immediate rewards, even when this leads to worse outcomes over time.

In finance, we understand that valuation depends critically on the discount rate applied to future cash flows. In life, we continue to apply a discount rate that is too high, marking down the future to the point where it no longer meaningfully constrains the present.

What feels good now

Psychologist Hal Hershfield’s research on the future self helps explain why. People often perceive their future selves more as another person than as a continuation of who they are now. This makes it easier for the self that benefits today to shift costs onto the self that must bear them tomorrow.

Looking at this through a finance lens, it resembles a “principal-agent problem,” where managers may prioritize short-term incentives over the long-term interests of shareholders.

In both cases, the person making the decision does not fully bear the long-term cost. But the future does not disappear. It simply becomes easier to ignore.

Investment in relationships

This logic becomes easier to see if we look at how we build relationships. Strong relationships require time and a willingness to tolerate discomfort.

Trust and intimacy involve immediate effort but the benefits accumulate gradually. By contrast, autonomy and flexibility offer immediate rewards. They preserve options and reduce constraints, making it easy to defer relational investment.

But relationships, like other forms of capital, depend on sustained investment, and delayed investment is often hard to recover later.

The same logic can also be seen in family structures and broader social connections. Strong ties in families, friendships and communities depend on time and repeated interaction. Without it, those ties weaken.

As those ties weaken, loneliness becomes more likely. Research shows that loneliness and social isolation are associated with significant health risks. In this sense, loneliness can be understood as the long-term consequence of insufficient investment in connection when it was easier to build.

How loneliness is killing us, according to Harvard professor Robert Waldinger.

These patterns are not only individual. They also reflect the way modern life is increasingly organized around immediacy and convenience. Technology makes interaction faster, easier and more responsive, but many of the things that matter most in the long run still require time, patience and discomfort. The result is a social environment that increasingly rewards responsiveness over endurance.

Immediate benefits

Seen in this light, AI companions are not an anomaly. They are emerging in an era of widespread loneliness, where many people are seeking connection that feels reliable and low in emotional cost.

Back in 2002, pioneering research by Clifford Nass and Youngme Moon showed that people apply social rules to computers even when they know they’re not human. Almost 25 years later, research now suggests AI can provide emotional support and a real sense of companionship in the short term. From today’s perspective, this is an efficient solution: the benefits are immediate and reliable.

The concern is not that AI companionship fails. It’s that it succeeds too well in the present. By reducing effort, uncertainty and emotional risk, AI companions make connection easier to access but may also shift expectations in ways that are harder to sustain over time in human relationships. In that sense, they reflect the same trade-off: immediate comfort at the expense of longer-term relational depth.

The same logic extends beyond individual life and helps explain how societies respond to long-term problems.

Climate change is perhaps the clearest example. The impacts of our warming planet are already very evident and yet we’re slow to act. This is, in part, because the economic benefits of extraction and consumption are immediate, while many of the costs are delayed and dispersed across time.

A voiceless future

Across many human domains, from AI and personal relationships to climate change, the structure is the same: The present is immediate and rewarded; the future is abstract, distant and silent. So, decisions skew toward today.

This is not simply a matter of awareness or intention. It is structural. The future has no meaningful representation in present decision-making. It has no voice, no urgency and no direct claim. And so it’s discounted.

This is what Canadian Prime Minister Mark Carney called the “tragedy of the horizon.” Whether in the climate crisis or the loneliness epidemic, the catastrophic impacts will be felt beyond the traditional horizons of investment cycles and political terms.

Because the future has no seat at the board table, it is treated as an externality — a cost we don’t have to account for today, but one that is compounding at an unsustainable rate.

Until we find ways to give the future a real stake in present decisions, we will continue to choose what is easier now and pay for it later.

The tendency to discount the future is deeply human. But in a world increasingly shaped by AI systems, weakening social ties and accelerating climate risk, the costs of doing so are becoming harder to ignore.

The Conversation

Rahul Ravi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. From AI companions to climate action, we undervalue what lies ahead – https://theconversation.com/from-ai-companions-to-climate-action-we-undervalue-what-lies-ahead-279838

Why the government is abandoning decentralization

Source: The Conversation – in French – By Tommaso Germain, Researcher in Political Science, Sciences Po

The major new act of decentralization promised by Sébastien Lecornu will not take place. On the contrary, a “recentralization” toward the prefects is on the agenda. Why is decentralization, announced so many times, endlessly postponed?


Last October, Prime Minister Sébastien Lecornu declared a bold ambition: to carry out a “great act of decentralization” meant to resolve the problems tied to the territorial organization of the Republic. After a few months of reflection and consultation, and once the municipal elections had passed, the government sharply scaled back this ambition. What are the reasons for this strategic retreat?

From initial promises to a technical reform

The announcements of September and October 2025 set an ambitious course: to review the full distribution of responsibilities between the state and the various territorial authorities. The idea, the prime minister declared, was to have “a single person responsible for each public policy,” whether a minister, a prefect or a local elected official, in order to escape the current confusion in which several actors share responsibilities.

The question of decentralization was at the heart of the prime minister’s general policy speech, a symbolic and solemn gesture. The financial stakes were also central. The Woerth report on decentralization (2024) and the Ravignon report on the cost of the territorial “millefeuille” (2025) had highlighted the high cost of this tangle of overlapping competences, and local elected officials were demanding real clarification and security regarding the “puzzle” of financial autonomy.

Yet among the various scenarios the government considered, one option, less ambitious than genuine decentralization (which would involve transferring power to territorial authorities and their elected officials), was to bet on “deconcentration”: a reorganization of state action in the territories under the authority of the prefect.

That scenario quickly took shape, as shown by the text sent to the Conseil d’État in early April. The word “decentralization” no longer appears in the text on the agenda. The only element of decentralization, highly targeted and proposed separately a few weeks ago, concerns the Métropole du Grand Paris, which is the subject of a specific text. Stalled for years, the MGP is expected to evolve either toward a more integrated metropolis or, on the contrary, toward a weakening designed to strengthen the public territorial establishments that make it up, which amounts to fragmenting the metropolis. Note that in the meantime, the National Assembly backed Alsace’s bid to leave the Grand Est region and re-form an autonomous region, a step back from the last reform, which created the large regions in 2016.

From the ambitious initial promises, the government appears to be confining itself to an essentially technical reform. The text focuses on consolidating the power of the prefects. By strengthening the prefect’s power of substitution (if shortcomings are “duly established,” the prefect may temporarily take the place of any local authority), responsiveness of public action seems to be the favoured axis. This anticipates potential crises where rapid decisions are required (agriculture, water, energy or security).

Under the guise of decentralization, the executive is thus carrying out a quiet recentralization, turning territorial authorities into executing agents. The state will be able to select and fast-track projects deemed “useful,” notably industrial ones, through stronger leverage over public service operators and in particular over mayors. The prefects’ right to derogate from norms, introduced a few years ago, would emerge strengthened from the enactment of this text. Some researchers argue that this right of derogation is constitutive of a neoliberal legality in which the hierarchy of norms is called into question. Prefectoral law indeed makes it possible not to apply certain norms, notably environmental ones. Introducing an “à la carte” system through the rise of these mechanisms would make it possible “to neutralize the legislative will under cover of the managerial discourse of simplification.”

Political timing, budget crisis and the French art of governing

Why did the great decentralization project announced in October end up like this?

The first reason for the government’s reversal lies in political timing. The reform was initially supposed to take place before the municipal and metropolitan elections of March 2026, which proved unrealistic. Now the political, parliamentary and media agenda is structured by the presidential election. In this constrained setting, with a technical government devoted to stabilizing public life, a large-scale decentralization reform is practically impossible.

Beyond this political and institutional factor, the current reversal can be explained by a territorial and structural situation that is impossible to transform in depth within such a constrained budgetary framework. National public finances are driving numerous, politically perilous budget cuts and leave no room to open the financial floodgates needed for genuine decentralization.

According to Françoise Gatel, minister of territorial planning and decentralization, local elected officials “do not want decentralization; above all, they want simplification.” For the government, local officials mainly expect lighter norms and simpler administrative procedures.

This demand for simplification is no doubt real and seems widely shared among public and private actors. Yet a majority of local elected officials continue to call for more decentralization and financial autonomy: simplification is not everything; local autonomy is the crux of the matter. In the end, the government seems to want to pin responsibility for the retreat on elected officials by implicitly accusing them of “distrust.”

Stepping back, this reform fits a pattern at work for many years: successive governments promote “acts” of decentralization and deliver ambitious announcements in which local officials and territorial authorities would be placed at the centre of decision-making and the complex, costly territorial millefeuille would finally be rationalized. Yet through negotiation with associations of local elected officials, which reveals a heterogeneous demand for decentralization and requires a strong budgetary commitment, these announcements often end in renunciation. That was the case after the yellow vests, with the reform of the territorial organization of the state. It was the case again in 2022 with the 3DS law, likewise a technical, simplification-oriented text in which decentralization was very much an afterthought.

Ultimately, and pending a hypothetical additional text, the French paradox keeps repeating itself: “decentralization,” announced with eloquence, ends in strategic “recentralization.”

The Conversation

Tommaso Germain does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. Why the government is abandoning decentralization – https://theconversation.com/pourquoi-le-gouvernement-renonce-a-decentraliser-281723

Nudge theory was all about taking responsibility – but it allowed big business to look the other way

Source: The Conversation – UK – By Nick Chater, Professor of Behavioural Science, Warwick Business School, University of Warwick

Piyaset/Shutterstock

Feelings of despair at the state of the world can be overwhelming. Social and environmental problems persist, but political discourse is polarised, divisive and often ineffective.

A couple of decades ago, some behavioural scientists – ourselves included – began to think there might be a better way of addressing these challenges.

Instead of relying on governments to change things, we figured, perhaps we should switch the focus to people’s own actions. And maybe improving their choices would provide an alternative route to social and environmental transformation.

The idea developed from the fact that people sometimes make bad decisions which may be harmful – to themselves, to others or to the environment.

So what if we tried to discourage things like smoking or frequent flying, not with the heavy hand of government, but by appealing directly to the psychology of the individual?

Two pioneers of this approach, Richard Thaler and Cass Sunstein, argued that governments and institutions could “nudge” people by subtly redesigning the decision-making process. A typical nudge might involve making certain arrangements the default option, such as automatic enrolment into pension schemes. Or it might mean placing healthier meals first on menus.

In these situations, nothing needs to be banned. The undesirable options remain available – they’re just tucked away or more difficult to access.

Behaviour gets nudged along in personally and socially beneficial directions, without removing freedom of choice, and without getting into politically contentious territory. Like many enthusiasts, we were optimistic that focusing on individual behaviour might prove to be an effective route to a better world.

Sadly, things turned out rather differently.

Recent results from large meta-analyses (studies that bring together findings from many previous experiments) suggest that the effects of nudges and other individualistic interventions are disappointingly small.

Some authors have even concluded that there may be no reliable evidence that nudges work at all. Other evidence suggests that even when nudges do have an effect, those effects are small, short lived and difficult to scale up.

And there is another problem, as we argue in our new book It’s On You. By focusing attention on individual responsibility for the world’s problems, behavioural scientists may have inadvertently assisted a broader process known as “responsibilisation”.

Responsibilisation means placing the burden of blame onto individual consumers – deflecting attention from the need to regulate or constrain big businesses which benefit and profit from maintaining the status quo.

Oil companies, for example, might want the world to focus on the responsibility of individual car drivers and frequent flyers. Plastics and packaging companies stress the scourge of individual littering. Manufacturers of ultra-processed foods and sugary drinks want us to blame ourselves for poor diets.

In each case, individual behaviour is placed centre stage, while the need for regulations to shift corporate practices recedes from view.

And persuading us to place responsibility on the individual goes very much with the grain of human psychology. Our social lives are built around interacting with small numbers of other people, even while we are governed by complex systems of norms, conventions and rules that change slowly – systems that we largely take for granted, do not control and rarely even notice.

Taking responsibility

It is hardly surprising, then, that when we look for explanations for social problems such as climate change or gun violence, we naturally attribute them to the actions of bad people – the drivers of big cars, or violent types with mental health problems.

This means that people are wired to be all too ready to buy into the responsibilisation narrative that individuals, including ourselves, are at the heart of the problems that bedevil society.

Illustration of a human figure being pushed by a large pointed finger.
A little nudge in the right direction?
eamesBot/Shutterstock

But when social problems arise and intensify, it is unlikely that human nature has suddenly deteriorated en masse.

It is far more plausible that large-scale systemic forces – changes in regulation, market structure, technology and incentives – are at work. And when problems are systemic and self-reinforcing, systemic solutions are what is required.

In a world that feels increasingly contentious and imperilled, it is tempting to hope that individual consumers can really make a difference – to imagine that we can improve the world one recycled yoghurt pot at a time. And each of us should, of course, do our bit by making good consumer choices where we can.

But we must not allow a focus on the individual to distract us from the need for deeper systemic change. Gentle nudges will never be enough. To address our persistent social and environmental challenges, we need the collective political will to reshape the rules that govern all of our lives.

The Conversation

Nick Chater receives funding from UKRI and NSF. He is also a co-founder of Decision Technology, a behavioural science consulting company founded in 2002, and continues to be a shareholder and director. The company doesn’t stand to benefit from this article (if anything, the reverse!).

George Loewenstein does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Nudge theory was all about taking responsibility – but it allowed big business to look the other way – https://theconversation.com/nudge-theory-was-all-about-taking-responsibility-but-it-allowed-big-business-to-look-the-other-way-278357

Osteoarthritis: how stimulating the muscles with electricity may help manage the condition

Source: The Conversation – UK – By Louise Burgess, Lecturer in Sport and Exercise Science, Bournemouth University

EMS training could be particularly beneficial for people with osteoarthritis who have limited mobility or pain. roibu/Shutterstock

An estimated 595 million people globally are living with osteoarthritis. This makes it one of the leading causes of pain and disability.

Osteoarthritis is a degenerative joint disease, in which tissues in the joint break down over time. The condition can affect any joint, but most commonly the knees, hips, hands and spine.

However, the impact of osteoarthritis often goes beyond the affected joint. The condition can have profound effects on daily life.

Research shows that people with osteoarthritis are less likely to remain in work and more likely to develop additional health problems, such as diabetes, obesity and poor mental health, than those without the disease.

One of the key approaches recommended for managing osteoarthritis is exercise, including aerobic exercise and muscle strengthening. It’s shown to be extremely beneficial for managing the condition and its associated symptoms.

But not everyone who has osteoarthritis is able to exercise due to pain and limited mobility. This is why electrical muscle stimulation, a novel technology that uses small electrical impulses to help muscles contract, is being investigated for managing osteoarthritis.

Exercise for osteoarthritis

Aerobic and muscle strengthening exercises are both proven to address key drivers of osteoarthritis symptoms.

Aerobic exercise can help manage body weight and improve pain by enhancing circulation and reducing inflammation.

Muscle strengthening exercise improves joint stability by supporting the surrounding musculature. This reduces stress on the joint and improves movement.

Together, these approaches can help to break the cycle of pain, inactivity, weight gain and physical decline that can happen in osteoarthritis.




Read more:
Joint pain or osteoarthritis? Why exercise should be your first line of treatment


But as beneficial as exercise is, many people with osteoarthritis are reluctant to try it or struggle to adhere to physical activity long term.

In fact, data suggests that people with musculoskeletal conditions (such as osteoarthritis) are around twice as likely to be physically inactive as their healthy counterparts.

Reported barriers to physical activity include pain, limited mobility, negative experiences of physical activity and a lack of motivation. But the less we move, the more muscle mass and strength we gradually lose.

A difficult cycle can then emerge, whereby pain, stiffness and fear of making symptoms worse all discourage movement. Then, without movement, stiffness and pain worsen.

An alternative approach

When exercise feels too painful or isn’t possible, electrical muscle stimulation (EMS) may offer an alternative method for maintaining and improving strength.

This works by placing electrodes on the skin to deliver small electrical impulses, causing muscles to contract without the joint needing to move. The electrical impulse is similar to the signal we normally send from our nervous system when we want to perform a movement.

When performed instead of exercise over several weeks and sessions, EMS has been shown to increase muscle size and strength and improve function in people with hip and knee osteoarthritis. For example, in people with knee osteoarthritis, EMS performed on the quadriceps muscles three days a week for four to eight weeks has led to benefits.

The therapy can be used in isolation, or it can be applied during exercise to activate even more muscle fibres in what is called a superimposed muscle contraction.

Electrical muscle stimulation also shows promise for those with severe, end-stage osteoarthritis who are preparing for surgery.

For example, one study compared the effects of performing EMS or exercise before surgery for knee osteoarthritis on postoperative outcomes. The study found that participants who used EMS for 20 minutes a day, five days a week in the six weeks before surgery saw greater improvements in postoperative muscle mass, strength and function, compared with patients who performed physical exercise.

Muscle weakness is common both before and after surgery, partly due to pain and reduced movement. While exercise programmes before and after surgery are widely recommended, research suggests they often only have modest effects on functional recovery from joint replacement surgery.

One explanation may be that people with severe osteoarthritis cannot tolerate the level of intensity needed while exercising to build muscle effectively. In addition, joint trauma and swelling from surgery can cause disruption to the signalling pathways that are required to activate muscles.

Because EMS can bypass some of these signalling issues, it may help to maintain or rebuild muscle where conventional exercise is not feasible immediately after surgery. It’s often used in sports settings for this reason, such as when athletes require anterior cruciate ligament surgery.

Not a replacement for exercise

That said, electrical muscle stimulation is not a magic solution and has its limitations. In many cases, it works best as a complement to, not a substitute for, active rehabilitation.

The body of evidence for its effectiveness in osteoarthritis is also still evolving. Some studies showed inconsistent results or were only conducted using a small sample.

Some people find the sensation of electrical stimulation uncomfortable, the therapy isn’t suitable for everyone (for example, those with pacemakers), and devices can be expensive to buy.

Nonetheless, for those who cannot exercise due to pain, swelling or limited mobility, EMS offers a practical tool to maintain muscle strength. This can help them stay active and independent for longer, recover quicker from surgery, and maintain a better quality of life.

The Conversation

Louise Burgess does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Osteoarthritis: how stimulating the muscles with electricity may help manage the condition – https://theconversation.com/osteoarthritis-how-stimulating-the-muscles-with-electricity-may-help-manage-the-condition-281601

Chronic obstructive pulmonary disease develops over decades – and we are missing the window to prevent it

Source: The Conversation – UK – By Jennifer Loudon Moxen, PhD Candidate, COPD Inflammation, University of the West of Scotland

CI Photos/Shutterstock

Chronic obstructive pulmonary disease, or COPD, is one of the world’s leading causes of death, responsible for 3.5 million deaths in 2021 alone. It is often thought of as a disease of older smokers. But that picture is too simple. COPD usually develops slowly over many years, often long before symptoms become obvious.

COPD is a long-term lung condition that makes it harder to move air in and out of the lungs. It includes damage to the airways, often described as chronic bronchitis, and destruction of the tiny air sacs in the lungs, known as emphysema. Because this damage builds up gradually, many people do not realise anything is wrong until symptoms become difficult to ignore. There are treatments that can help, but there is no cure, and by the time COPD is diagnosed the damage is often permanent.

Common symptoms include a long-term cough, bringing up mucus and shortness of breath. These symptoms often appear later in life, which helps explain why COPD is so often seen as an older person’s disease. But in many cases, the damage started decades earlier.

Many environmental irritants can harm the lungs, but cigarette smoke remains the main cause of COPD. Cigarette smoke contains thousands of chemicals, including toxic gases and cancer-causing substances, that injure lung tissue and trigger oxidative stress, a form of cellular damage that drives inflammation.

Inflammation is part of the body’s normal defence and repair system. Usually, it settles once the source of harm has gone. But in COPD, the lungs may be exposed to cigarette smoke day after day, so the inflammatory response never properly switches off.

Over time, immune cells sent to repair the damage can end up injuring the lungs further. The airways become narrower, the lungs produce more mucus, and the tiny air sacs known as alveoli can break down. Together, these changes make breathing increasingly difficult.

Close up of woman smoking cigarette over image of damaged lungs
Cigarette smoke remains the main cause of COPD.
APChanel/Shutterstock

As the disease progresses, the lungs are physically altered in ways that cannot be fully reversed, even if someone stops smoking. COPD inflammation also does not always respond well to standard anti-inflammatory medicines such as steroids, which is one reason prevention matters so much.

Although cigarette smoking remains the main driver of COPD, e-cigarettes are also raising concerns. Vaping aerosols can contain nicotine, ultrafine particles and flavouring chemicals that may irritate the lungs and contribute to inflammation. The long-term effects are still unclear because these products are relatively new.

That matters particularly for younger people. In Great Britain, recent survey data suggest that 7% of 11- to 17-year-olds currently vape. While that does not mean they will go on to develop COPD, it does mean more young lungs are being exposed to substances whose long-term effects are not yet fully understood.

COPD is often diagnosed only after major lung damage has already occurred. Because it develops so gradually, people may dismiss early breathlessness, coughing or mucus production as a consequence of getting older, being unfit or smoking. Respiratory organisations warn that symptoms such as cough, phlegm and shortness of breath should not be treated as a normal part of ageing, while studies show that COPD remains widely underdiagnosed, including among people with respiratory symptoms.

The burden on health systems is huge. A 2023 study estimated that COPD could cost the global economy around INT$4.3 trillion between 2020 and 2050 (international dollars adjust for price differences between countries; in broad terms, this is roughly equivalent to US$4.3 trillion in US purchasing power, or about £3.2 trillion). Hospital admissions often rise in winter, when people with COPD are more vulnerable to bacterial and viral infections that can worsen symptoms and speed up decline.

That is why the most important window for action may come much earlier in life. By the time many people are diagnosed, the disease has been developing for years. Better education about lung health at school age could help people understand that choices made in their teens and twenties may shape their breathing decades later.

COPD care has traditionally focused on treating symptoms once they appear. But by then the lungs may already be permanently damaged. Seeing COPD as a disease that develops slowly over decades could shift attention towards earlier prevention and, ultimately, reduce its human and economic cost.

The Conversation

Jennifer Loudon Moxen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Chronic obstructive pulmonary disease develops over decades – and we are missing the window to prevent it – https://theconversation.com/chronic-obstructive-pulmonary-disease-develops-over-decades-and-we-are-missing-the-window-to-prevent-it-278473

What it would have been like to experience the dinosaur-killing asteroid armageddon: a blow-by-blow account

Source: The Conversation – UK – By Michael J. Benton, Professor of Vertebrate Palaeontology, University of Bristol

serpeblu/Shutterstock

A great Tyrannosaurus rex strides through the conifer trees of her territory, sniffing the air. She picks up the scent from the carcass of a dead horned dinosaur, Triceratops, that she was feeding on yesterday. She walks over and strips off some more shreds of meat, but the smell is foul even for her.

She goes down to the lake to drink and small crocodiles and turtles scuttle into the water. But she hardly sees them. Of more interest is an armoured dinosaur, Ankylosaurus, lurking nearby. However, she knows this dinosaur won’t be an easy kill and she isn’t desperate enough for food to risk a fight. Little does she know there are bigger dangers ahead. She looks up and sees a bright light racing downwards accompanied by faint crackling and sizzling noises.

Our T. rex has excellent hearing for low frequency sounds and she is disturbed by the vibrations she can feel. But her upset only lasts for a moment. In a flash, she has been burnt to a crisp and her world changed forever.

This all happened 66 million years ago, when a huge asteroid famously hit the Earth in the area of what is now the Caribbean. At the end of the Cretaceous period, sea levels were 100–200 metres higher than today, so the shores of the Caribbean lay far inland over eastern Mexico and the southern United States. The impact happened entirely within these waters.

The event triggered instant changes to our planet and its atmosphere and led to the extinction of the dinosaurs and about half of Earth’s other species. But what would it have been like to experience such a gargantuan impact? What would you have seen, heard or smelled? And how would you have died – or survived?




As experts on meteoritics and palaeontology, respectively, we’ve created a detailed timeline, based on decades of research, to take you right there. So let’s start by travelling back in time to the very last day of the Cretaceous.

T-minus one day

All is calm and the Cretaceous day proceeds as usual. In what will soon be ground zero, it is pleasantly warm, about 26°C, and wet. It often is. For about a week, the asteroid has been visible only at night. Because the giant rock is heading straight towards Earth, it looks like a motionless star. There is no dramatic tail; this is a rocky asteroid rather than a comet.

Illustration of dinosaurs walking in a valley.
The dinosaurs were enjoying nice weather before the big impact.
Orla/Shutterstock

In the last 24 hours, the light becomes visible during the daytime. But it still looks like a star or planet, growing brighter in the final few hours before impact.

T equals 0: the impact

If you were close by, you would first have experienced a brief light and sound show. Minutes to seconds before the impact, you’d have seen the bright fireball, and its accompanying crackling or fizzing noises. This sizzling sound is a result of the photo-acoustic effect: the intense light of the fireball warms the ground, which then heats the air above it, causing pressure waves, or sound.

Next, a deafening sonic boom, which occurs because the asteroid is travelling faster than the speed of sound. But the asteroid is so huge, perhaps 10km in diameter, that it almost certainly hits the ground before any living creature near the impact zone has time to run for cover.

The asteroid’s enormous energy forms a crater through a series of processes that together take only a few seconds. As the asteroid collides with the surface, its kinetic (movement) energy is instantly transferred to the surface as a combination of kinetic, thermal (heat) and seismic energy (released during earthquakes). This results in a series of shock waves that heat and compress both the asteroid and its target.

As the shock waves propagate, rocks fracture, break up and are ejected, producing a bowl-shaped depression, or transient cavity, about ten seconds after impact. The heat and compression also melt and vaporise large volumes of material, including the asteroid itself, releasing a fountain of incandescent vapour (its temperature is more than 10,000 K, or about 9,700°C).

Over the next few seconds, the cavity increases in size to many times the diameter of the original asteroid. Simulations suggest that around 20 seconds after impact, the transient cavity is at least 30km deep – deeper than the deepest known point on Earth, the 11km Challenger Deep in the Pacific Ocean’s Mariana Trench. The rim of the crater is over 20km high – more than twice the height of the 8,849m Mount Everest.

But this enormous feature lasts for less than a minute before it starts to collapse. Within three minutes of the impact, the centre of the crater has rebounded to form a peak several kilometres high. The peak only lasts about two minutes before collapsing back into the crater.

Whether a dinosaur or a dung beetle, if you were near the transient cavity you would have been incinerated instantly by the blast. But even if you were up to 2,000km from the epicentre, you’d likely have been killed quickly by the thermal radiation and supersonic winds now spreading out from the impact site.

T-plus 5 minutes

Five minutes after the impact, the winds have “eased” to those of a category 5 hurricane, flattening everything within about 1,500km of the impact. Destroying everything, that is, which has not already been burnt. Atmospheric temperatures in the region rise to over 500 K (about 227°C). This would feel like being inside an oven – causing burns, heatstroke and death. Wood and plant matter ignite, creating fires everywhere.

Because the asteroid struck the sea, the atmosphere is also filled with super-heated steam, making the hurricane-force winds even deadlier.

Next come the tidal waves, triggered by the vast quantities of displaced rock and water. These 100-metre megatsunamis first strike the shores of what is now the Gulf of Mexico, engulfing the land before depositing huge amounts of debris as they retreat.

Image of a tsunami wave.
Tsunami waves were over 100 metres high.
FOTOKITA/Shutterstock

By now, the crater has almost reached its final dimensions – 180km across and 20km deep. But making an enormous hole in the ground isn’t the only outcome of the impact. All the rock and vapour displaced during the collision has to go somewhere. Several locations in North America show that metre-sized blocks of debris from the impact were thrown distances of hundreds of kilometres.

So if you were 2,000km to 3,000km from the epicentre and survived the first few seconds, you’d most likely die from overheating, earthquakes, hurricanes, fires, tsunami-driven floods or being hit by impact melt.

But what is happening much further away? In the first five minutes after impact, dinosaurs roaming the Cretaceous forests of what are now China or New Zealand are so far undisturbed.

But it won’t be long before that changes.

T-plus one hour

Shockwaves on land and sea are only minor inconveniences compared with the fire that is still radiating down from the sky. Some of the impact energy has been transferred into the atmosphere, heating the air and dust to incandescence.

Image of a raging firestorm.
Fires were a common part of the asteroid armageddon.
fluke samed/Shutterstock

An hour after impact, a belt of dust has circled the globe. Deposits of solidified molten droplets (impact spherules) and mineral grains have been found in numerous locations from New Zealand in the south to Denmark in the north. In these locations, you would not have been aware of the tsunamis around the Americas or the wildfires, but the skies would certainly have begun to darken.

T-plus one day

By now, huge tsunamis are moving east across the Atlantic and west across the Pacific, entering the Indian Ocean from both sides.

They are still around 50m high – causing death and destruction across many coasts around the world. By comparison, the 2004 Boxing Day tsunami reached heights of up to 30 metres. Tsunamis kill fishes and marine life that are washed high on the shore and then dumped, just as they kill coastal trees and drown land animals. But the tsunamis gradually fade away and probably don’t wipe out any entire species – at least on their own.

The hurricane force winds have also died down, but tropical storm strength winds are whipping up debris and causing further chaos and destruction across the tsunami-affected areas. The burning sky is also triggering wildfires across the globe – which, in turn, carry ever more soot into the atmosphere. The sooty signature of these wildfires has been found deposited as carbon particles in sediments from the K-Pg boundary – a 66-million-year-old thin clay layer.

Further away, in what is modern Europe and Asia, the skies continue to fill up with dust and soot, as they do everywhere. Temperatures start to drop as sunlight is blocked. Trees and plants in general, including phytoplankton, close down as if for winter, unable to photosynthesise. Any animals that rely on warm conditions ultimately hunker down and die.

T-plus one week

It’s getting darker and darker. Simulations of solar radiation reaching the Earth’s surface following the impact indicate that, after about a week, the solar flux (the amount of heat and light reaching a given area) is just one thousandth of that prior to the impact. This is caused by particles of dust and soot in the atmosphere.

The continued decrease in light levels is accompanied by a global drop in surface temperatures of at least 5°C. This means that most of the dinosaurs and other large flying and swimming reptiles probably die from freezing within the course of this first week (smaller reptiles with slower metabolisms or more flexible diets could survive longer). Cooling temperatures and cloud cover also lead to rain. But not just any rain. Storms of acid rain fall across the Earth.

Two separate mechanisms generate acid rain. The first is down to the geology of the impact region. The asteroid happened to hit an area of sediments rich in sulphur, which vaporised and caused sulphur oxides (acidic and pungent gas compounds composed of sulphur and oxygen) to be part of the plume of plasma blasted into the atmosphere. Second, the energy of the collision was sufficient to turn nitrogen and oxygen into nitrogen oxides – highly reactive gases that can form smog.

The dropping temperature ultimately allows water vapour to condense into drops, and the sulphur and nitrogen oxides dissolve to form sulphuric and nitric acids. This is sufficient to generate a rapid drop in pH. Early models suggest that the pH of the rain might be as low as 1 – the same acidity as battery acid.

At this point, Earth is not a great place to be. Rotting vegetation, choking smoke and sulphur aerosols combine to make the planet stink. Plants and animals on land and in shallow seas that have survived the darkness and cold succumb to the corrosive acid rain and ocean acidification. Acid rain also kills trees by leaching nutrients such as calcium, magnesium and potassium from the soil. Shallow marine shellfish, crustaceans and corals also die as acid seawater destroys their skeletons.

T-plus one year

Winds die down, wildfires are extinguished and the oceans are once again calm. It might appear that the asteroid collision is just a scar on the ocean floor. But its effects are still destructive. The atmosphere is still filled with dust and the Sun hasn’t shone for a year. Temperatures have continued to drop, with the average surface temperature now 15°C lower than before the impact. Winter has come.

Any dinosaurs or marine reptiles that survived the first week of freezing conditions would have died very soon after. A year after the impact, only rotted skeletons of these behemoths remain. Here and there, smaller animals like mammals the size of rats and insects would be nestling in crevices, barely surviving on their reserves and decaying plants.

Indeed, it has not been a good year for life on Earth: over 50% of plants have died out because of the cold and lack of sunlight. And similar losses have occurred among terrestrial animals and species in the acidified, shallow sea waters.

Shot of pyritized ammonite fossil, capturing metallic shine and intricate prehistoric shell structure.
Ammonites soon die out.
Domenichini Giuliano/Shutterstock

While most plant groups and many of the modern groups of insects, fishes, reptiles, birds and mammals recover reasonably rapidly, things don’t look great for other species. Dinosaurs and pterosaurs living on land are extinct, as are many marine reptiles, ammonites, belemnites and rudist bivalves in the oceans. Ammonites and belemnites are high in their food chains, and so suffer not only from the cold and acidification but also from the loss of abundant food resources, such as smaller marine organisms.

T-plus ten years

The Earth is still in the grip of a fierce winter. Although most of the sulphur has rained out of the atmosphere, dust and soot particles remain. The average surface temperature is still about 5°C lower than before the impact. The main oceans have not frozen, but inland lakes and rivers around the world are iced over.

Surviving plant and animal groups such as turtles, smaller crocodiles, lizards, snakes, some ground-dwelling birds and small mammals repopulate the Earth at this point. But they are forced back to limited areas of relative safety a long way from the impact site. These areas are now receiving sufficient sunlight for plants and phytoplankton to photosynthesise again. As leaves and seeds provide the basis for the food chains on land and in the sea, life begins to rebuild.

Eventually, life returns to the devastated landscapes, but ecosystems are very different and the dinosaurs are no more.

T-plus 66 million years

Today, 66 million years after the impact, the scars of the collision are hidden within geological strata – and scientists have started deciphering them. It was in 1980 that researchers first reported evidence of the impact. In their classic paper, Luis Alvarez, a Nobel Prize-winning physicist, and co-authors described a sudden enrichment in the element iridium in a specific clay layer in Denmark and in Italy.

Iridium is rare in surface rocks because most of it was sequestered in Earth’s core when the planet first formed. However, iridium is found in meteorites, and Alvarez and colleagues inferred that the rate of accumulation of the metal in the sediments was so high that it could only have been produced by impact of a gigantic meteorite.

Because the scientists had only observed the iridium spike in two locations, the impact hypothesis was rejected by many scientists at the time. However, through the 1980s, iridium spikes were identified in clay layers at more and more locations – in muds laid down on land, in lakes, in the sea.

Support for an impact hypothesis strengthened when a crater of the correct age was found in 1991. The crater is buried beneath younger rocks, but clearly visible in geophysical surveys, lying half on land in the Yucatán Peninsula of Mexico, and half offshore. Since then, evidence for the impact has increased, not least when scientists discovered that there was indeed a sharp cooling event at the end of the Cretaceous.

Possible T. rex footprint from New Mexico.
Wikipedia, CC BY-SA

In total, it is estimated that half the species of plants and animals alive at the end of the Cretaceous disappeared. It was once thought that surviving groups such as many plants, insects, molluscs, lizards, birds and mammals somehow escaped unscathed. But detailed study shows that this is not the case – they were all hit hard.

But, by chance or luck, enough individuals and species were able to survive the cold and absence of food, or were in parts of the world where the effects were less extreme. As the world returned to normal, they had the opportunity to expand rapidly into their old niches, but also to occupy the space vacated by extinct groups. In fact, one important consequence of the extinction of the dinosaurs, apex predators in their heyday, was the successful spread and evolution of mammals.

When Alvarez and colleagues first described the drop in temperature following the impact, they called it a “nuclear winter”, reflecting the political climate of the early 1980s. Now we might be more inclined to describe the effects as a global climate change – similar events are currently resulting from increased carbon dioxide in the atmosphere (flooding, temperature fluctuations).

It is salutary to think that without the asteroid collision, primates might never have reached the level we are at today. But it is equally salutary to consider that modern humans are causing some of the same changes to the atmosphere that ultimately killed our reptilian forbears and may one day also lead to our own demise.



The Conversation

Monica Grady receives funding from the Leverhulme Trust for an Emeritus Fellowship and from the STFC. She is affiliated with The Open University, Liverpool Hope University and the Natural History Museum, London.

Michael J. Benton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. What it would have been like to experience the dinosaur-killing asteroid armageddon: a blow-by-blow account – https://theconversation.com/what-it-would-have-been-like-to-experience-the-dinosaur-killing-asteroid-armageddon-a-blow-by-blow-account-271786

Why the Washington Hilton hotel links Reagan and Trump: when violence becomes a trial of power

Source: The Conversation – France in French (3) – By Florian Leniaud, PhD in American Studies, associate member of the Centre d’histoire culturelle des sociétés contemporaines, Université de Versailles Saint-Quentin-en-Yvelines (UVSQ) – Université Paris-Saclay

The assassination attempt targeting Donald Trump and his most senior cabinet members on April 26 took place at the Washington Hilton hotel, outside which Ronald Reagan had been seriously wounded by gunfire 45 years earlier. The parallel invites an analysis of how the physical attacks the two Republican presidents suffered transformed their image, and of the responses they made to them.


Forty-five years after the March 30, 1981 assassination attempt on Ronald Reagan, an attack targeting Donald Trump has just occurred in the same place: the Washington Hilton hotel.

This detail is no mere detail, for it turns an isolated incident into a continuity. The place becomes a stage. Political violence no longer merely erupts as an event; it seems to replay itself, linking two presidential figures within a single ordeal.

A place that turns violence into narrative

In 1981, Reagan, whose lung had been punctured by a bullet fired at close range by John Warnock Hinckley Jr., emerged profoundly strengthened from the episode. The images of his hospital discharge, his humour in the face of the mortal danger he had been exposed to and the media’s framing of the story combined to durably establish the figure of a leader who had come through the ordeal.

A few hours after being shot, Reagan joked to his surgeons: “I hope you are all Republicans.” The quip immediately travelled across the country and shaped the image of a courageous president, in full command of himself even in the proximity of death.

Today Trump, who had already lived through a similar moment on July 13, 2024, when he emerged, fist raised and ear bloodied, after escaping an assassination attempt at a rally, appears in a different configuration that is nonetheless comparable on one precise point: exposure to violence reinforces the posture of a besieged leader. For nearly a decade, his political discourse has rested largely on the idea of an America under threat, encircled by enemies at home and abroad. Each attack therefore reinforces an already established narrative, that of a leader targeted because he supposedly embodies a form of political resistance.




Read more:
The United States after Charlie Kirk, or the peak of affective polarization


In both cases, then, the event is not limited to a violent act, since it is immediately integrated into a political narrative. And that narrative does not work on its own. It rests on continuous media coverage that turns the violence into a major political sequence. If the violence makes the event, the mediatized narrative makes it a political moment.

A premeditated attack in a highly symbolic space

What is now known about the April 25 assailant, Cole Tomas Allen, confirms that he had prepared his attack long in advance. The 31-year-old had crossed the United States with several weapons and booked a room at the Hilton several weeks ahead of time. According to investigators, he planned to target Donald Trump as well as several political figures attending the White House Correspondents’ Dinner.

His writings, a mixture of confession, political manifesto and farewell message, reveal an accumulation of personal and political grievances against the Trump administration. The authorities also indicate that he did not expect to survive his attack, which anchors his act in a sacrificial logic that is relatively common in contemporary mass violence.

This dimension matters because it dispels the idea of a purely impulsive or irrational act. Research on mass shooters shows trajectories often marked by social isolation, forms of humiliation or a quest for recognition. In many cases, the act takes place in an environment saturated with violent, heavily mediatized narratives.




Read more:
What we know about school shootings in the United States and their perpetrators


Media coverage is thus not a mere relay of information: through the repetition of images and of assailants’ names, it can help make such acts genuinely possible, that is, imaginable, for certain individuals. Replayed in a loop, the violence settles into a familiar mental horizon in which acting out can appear as a brutal means of gaining public visibility.

The place as a political stage

The choice of location plays a central role in this dynamic. Attacks do not occur in neutral spaces: schools, shopping centres, universities, seats of power and government buildings concentrate visibility and media resonance. They function as stages open onto the whole country.

The Washington Hilton acts, in this respect, as a space of political memory. Already associated with the attempt on Reagan’s life, it instantly turns the event into historical continuity. This site of memory produces meaning even before any political interpretation, and far exceeds the individual act.

Comparing Allen with John Hinckley Jr. nonetheless highlights important differences. Hinckley acted out of a highly personal obsessional logic, mixing media fascination with a fixation on the actress Jodie Foster. Allen, by contrast, appears engaged in a far more politicized and ideological enterprise. Yet one common point remains: in both cases, the act targets a highly visible space, now charged with meaning.

Contemporary political violence therefore does not target individuals alone. It also targets places, symbols and narratives.

A polarized media landscape that instantly turns violence into political confrontation

This evolution cannot be understood without placing these events in the recent history of the American media landscape. The Reagan presidency marked a major turning point with the gradual demise of the Fairness Doctrine at the end of the 1980s. Until then, this rule had required broadcasters to cover controversial subjects in a balanced way.

Its repeal progressively opened the way to a far more polarized media system, in which information became a space of permanent ideological confrontation. The rise of conservative talk radio, then of 24-hour news channels and social media, fragmented the American public sphere into competing narratives.




Read more:
United States: conservative media, key players in the presidential campaign


In this context, every violent event is immediately subject to opposing interpretations. For Trump’s supporters, the attack confirms the idea of a leader persecuted because he disturbs part of the political and media establishment. For his opponents, the attack instead points to a climate of political tension to which Trump’s rhetoric and his way of polarizing public debate are said to have contributed.

Violence then ceases to be merely a shared tragedy and becomes an element of political combat, used by each side to reinforce its own reading of the country, of power and of the threat.

Firearms as a political imaginary

The question of firearms occupies a central place in this dynamic. Their massive diffusion sustains a political imaginary founded on self-defence and permanent threat. In the United States, where guns are not simply a matter of security or leisure, they constitute a cultural and identity marker deeply rooted in part of American conservatism.

The system works in a loop: fear encourages arming, while the omnipresence of guns makes violence more likely. Each new attack generates a sense of insecurity that in turn justifies gun ownership.




Read more:
Mass shootings and safety in American schools: arming teachers in question


It is precisely in this tension between gun culture and direct experience of violence that the comparison between Reagan and Trump becomes illuminating. Ronald Reagan, though a major figure of American conservatism and a defender of the Second Amendment, gradually softened his position after surviving the 1981 assassination attempt, notably in an op-ed written for the New York Times. In the 1990s, after his two terms, he publicly supported the Brady Act, a law tightening controls on firearm sales, named in tribute to James Brady, the White House press secretary who was seriously wounded alongside the president on March 30, 1981 and left severely disabled by his injuries. Reagan acknowledged then that better gun regulation could have saved lives.

Donald Trump, by contrast, defends a firmer line in favour of the right to bear arms, including after having been targeted himself. This difference reflects a deeper transformation of the Republican camp: with Reagan, violence led in part to a form of reconsideration, whereas with Trump it serves above all to reinforce a political narrative already hardened around danger and confrontation.

When the place outlives the event

The attack on Donald Trump is not an isolated event. It occurs in a broader context of political polarization and violence targeting public officials in the United States. The 2021 assault on the Capitol had already revealed the intensity of a polarization in which part of the political conflict now shifts onto physical and security terrain.

But the most striking element may be the persistence of the place itself. Forty-five years after Reagan, the Washington Hilton reappears, as if certain spaces retained the memory of the violence that has passed through them. The place no longer merely hosts the event: it gives it immediate historical depth and links several sequences of American political life through a single stage.

From Reagan to Trump, the political effects differ, but one constant remains: exposure to violence can reinforce the symbolic reach of power. While political violence has long been part of American history, its permanent media coverage and its inscription in a deeply polarized landscape give it a particular resonance today, in which each attack instantly becomes a political and media confrontation that far exceeds the event itself.

The Conversation

Florian Leniaud is a member of the Centre d’histoire et d’études culturelles attached to Université Paris-Saclay.

ref. Why the Washington Hilton hotel links Reagan and Trump: when violence becomes a trial of power – https://theconversation.com/pourquoi-lhotel-hilton-de-washington-relie-reagan-et-trump-quand-la-violence-devient-une-epreuve-du-pouvoir-282314

Ghana’s transport system is chaotic: how it can move more people with fewer vehicles – research

Source: The Conversation – Africa – By Janet Appiah Osei, Research Fellow, African Research Universities Alliance (ARUA), University of Ghana

Every morning in Accra, Ghana’s capital, thousands of commuters sit in traffic while minibuses and taxis compete for limited road space.

More than 70% of Ghanaians rely on informal public transport, predominantly minibuses (trotros) and taxis, for their daily mobility. About 84% of passenger trips in Accra are made using these modes (a 2017 estimate). Precise counts of vehicles are not available due to the informal nature of the sector, but thousands of taxis and trotros are active on Accra’s roads each day.

Despite the constant movement, the traffic’s progress is slow. Ghana’s cities are moving, but not efficiently.

Taxi and minibus services are essential. They provide flexible, relatively affordable mobility and reach areas that formal systems do not. For millions of people, they are the backbone of daily travel.

Yet surprisingly little is known about their diversity and characteristics.

I research how urban transport systems can be made more efficient and climate-friendly, particularly in rapidly growing cities where there are mobility challenges.

In my recent study of commercial vehicle models in Ghana’s urban transport system, I identified 52 different types of taxis and trotros currently in operation. This diversity reflects a system shaped more by market demand than by coordinated, large-scale planning.

My findings show a highly diverse fleet structure, with differences in vehicle capacity and service patterns across the fleet. There’s a strong reliance on conventional fuels and older vehicles. These patterns suggest a fleet that has developed gradually over time, rather than through deliberate and structured modernisation. The result is traffic congestion, higher fuel consumption and increased emissions.

I argue that a more structured approach to urban transport could allow cities to move more people with fewer vehicles, reduce overlapping low-occupancy trips, and improve fleet regulation and planning.

Why efficiency is a growing problem

Most taxis, which are typically sedan cars, carry only a few passengers per trip and operate over short distances. Trotros, which seat about 10-20 people, carry more passengers and travel longer routes. But they still fall short of the capacity offered by larger buses used for mass transit, which can carry 50 or more passengers per trip.

This means more vehicles are required to move the same number of passengers.

In Accra alone, roughly one million passenger trips are made daily using these modes. As demand increases, the system responds by adding more vehicles, not by increasing capacity per vehicle.
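
The arithmetic behind this point can be sketched quickly. The figures below are illustrative assumptions drawn from the capacities mentioned in the article (a full sedan, a mid-range trotro, a lower-bound large bus), not outputs of the study itself:

```python
# Back-of-envelope sketch: vehicle trips needed to move Accra's roughly
# one million daily passenger trips at typical per-vehicle capacities.
# Occupancy values are illustrative assumptions, not measured figures.
DAILY_PASSENGER_TRIPS = 1_000_000

capacities = {
    "taxi (sedan)": 4,    # assumed full sedan load
    "trotro": 15,         # midpoint of the 10-20 seat range
    "transit bus": 50,    # lower bound for a large bus
}

for mode, seats in capacities.items():
    # Ceiling division: partially filled vehicles still count as a trip
    vehicle_trips = -(-DAILY_PASSENGER_TRIPS // seats)
    print(f"{mode}: ~{vehicle_trips:,} vehicle trips per day")
```

Under these assumptions, a sedan-only fleet would need roughly 250,000 vehicle trips a day where a large-bus fleet would need about 20,000, which is the efficiency gap the article describes.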

This pattern is evident in the city’s rapid motorisation: vehicle ownership rose from about 40 per 1,000 people in 1990 to 260 per 1,000 in 2015. This highlights how growing mobility demand has largely been met through more vehicles on the road, rather than through more efficient, higher-capacity transport.
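
The pace implied by those two data points can be checked with a one-line compound-growth calculation:

```python
# Annualized growth rate implied by the ownership figures cited above:
# 40 vehicles per 1,000 people in 1990, 260 per 1,000 in 2015.
start, end, years = 40, 260, 2015 - 1990
cagr = (end / start) ** (1 / years) - 1
print(f"implied motorisation growth: {cagr:.1%} per year")
```

That works out to roughly 7.8% per year, sustained over a quarter-century.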

The result is growing congestion, longer travel times and increasing pressure on already limited road infrastructure.

For commuters, this means more time spent in traffic. For cities, it means declining transport efficiency.

Environmental costs of low-capacity transport

The dominance of low-occupancy vehicles also affects the environment.

Vehicles that carry fewer passengers generally consume more fuel and generate higher emissions per passenger-kilometre compared to higher-capacity modes of transport. For example, one study on urban transport found that transit buses can reduce emissions by 82%-94% relative to sedan cars.

The cumulative effect of a large fleet of low-occupancy vehicles in Accra contributes to higher overall fuel consumption and increased urban emissions.
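
The per-passenger logic behind such comparisons is simple: a vehicle's emissions are shared among its riders. The emission factors and occupancies below are hypothetical round numbers for illustration only, and they yield a reduction in the same direction as, though smaller than, the 82%-94% range cited above:

```python
# Hypothetical per-vehicle-km emission factors (grams CO2); these are
# illustrative assumptions, not figures from the study cited above.
SEDAN_G_PER_VKM = 180
BUS_G_PER_VKM = 1_000  # a large bus emits more per vehicle-km overall

def per_passenger_km(g_per_vkm: float, occupants: int) -> float:
    """Per-passenger emissions: vehicle emissions shared among riders."""
    return g_per_vkm / occupants

sedan = per_passenger_km(SEDAN_G_PER_VKM, 2)   # sedan with 2 riders
bus = per_passenger_km(BUS_G_PER_VKM, 50)      # full 50-seat bus
reduction = 1 - bus / sedan
print(f"full bus vs two-rider sedan: {reduction:.0%} lower per passenger-km")
```

The design point is that occupancy, not just engine efficiency, drives the per-passenger figure: the same bus half-empty would lose much of its advantage.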

Expanding and strengthening high-capacity public transport systems is not only a transport issue, but also an environmental one.

Economic implications for cities and commuters

Inefficiency in transport systems has direct economic consequences.

Higher fuel consumption increases operating costs for drivers, which can eventually translate into higher fares. Congestion slows down the movement of people and goods, reducing productivity and increasing the cost of doing business in urban areas.

Efficient transport systems support economic growth by improving reliability and reducing delays. As Ghana’s cities expand, these efficiencies become even more critical.

Why the current system persists

Despite these challenges, taxis and trotros continue to dominate for good reason.

They are flexible, adaptable and responsive to demand. Routes can change quickly, and services can reach areas that formal systems often overlook. The relatively low cost of entry also allows many individuals to participate in the sector.

This flexibility has made the system resilient. But it has also limited large-scale coordination.

The case for high-occupancy transport

Improving urban mobility is not just about increasing the number of vehicles; it is about moving more people with fewer vehicles.

High-occupancy transport systems, particularly Bus Rapid Transit (BRT), a system that uses larger buses operating along dedicated corridors, carry more passengers per trip. A single high-capacity bus can replace multiple taxis or minibuses.

This does not mean eliminating existing transport modes. Taxis and trotros can play a complementary role as feeder services, connecting passengers to main transit routes. This integrated approach combines flexibility with efficiency.

Ghana has already made attempts to introduce BRT systems. But partial implementation has limited their impact. For such systems to succeed, they require dedicated lanes, consistent policy support, and long-term investment.

A critical moment for Ghana’s cities

Urbanisation in Ghana is accelerating. As more people move into cities, demand for transport will continue to rise.

If current trends continue, the number of low-capacity vehicles will increase further, worsening congestion and environmental pressures. Over time, this could reduce the overall effectiveness of urban transport systems.

Ghana now faces a choice: continue expanding a vehicle-intensive system, or move towards higher-capacity models that prioritise efficiency and sustainability.

What needs to change

Addressing these challenges requires coordinated policy action.

Transport planning must move beyond reactive, market-driven growth, towards long-term system design. This includes integrating informal transport operators into structured frameworks while investing in infrastructure that supports high-capacity movement.

In my view, priorities should include:

  • full implementation of Bus Rapid Transit systems with dedicated lanes

  • investment in high-capacity buses and supporting infrastructure

  • integration of informal operators into formal planning systems

  • gradual reduction of low-occupancy vehicles along major corridors

  • stronger institutional coordination and long-term planning.

These steps can help create a more flexible, efficient and balanced system.

The future of Ghana’s cities will depend on a simple shift: moving more people, not more vehicles.

The Conversation

Janet Appiah Osei received funding from the African Research Universities Alliance (ARUA) in collaboration with the University of Ghana.

ref. Ghana’s transport system is chaotic: how it can move more people with fewer vehicles – research – https://theconversation.com/ghanas-transport-system-is-chaotic-how-it-can-move-more-people-with-fewer-vehicles-research-278810