Source: The Conversation – Canada – By Paul Calluzzo, Associate Professor and Toller Family Fellow of Finance, Queen’s University, Ontario
Prime Minister Mark Carney recently announced Canada’s first national sovereign wealth fund, the Canada Strong Fund. It’s aimed at investing $25 billion in domestic projects while offering Canadians a chance to invest alongside the government.
But these goals can conflict, and the fund’s current design raises questions the government has not yet fully answered.
What is a sovereign wealth fund?
A sovereign wealth fund is a pot of money owned and invested by a government to generate returns and build national wealth over time.
More than 100 exist globally, collectively managing more than US$10 trillion in assets. Most are funded from commodity surpluses or foreign-exchange reserves.
They differ from other public funds in important ways. Public pension funds manage money on behalf of retirees. Public banks and development funds lend or invest at below-market rates to achieve policy goals. Central bank reserves are held as a financial buffer, not invested for return.
Sovereign wealth funds are explicitly in the business of growing state capital. Governments can also use them to achieve geopolitical and economic goals.
Norway’s Government Pension Fund Global, valued at approximately US$2.2 trillion, is the best-known example. It invests oil revenue in a globally diversified portfolio to preserve the country’s resource wealth for future generations.
The Santiago Principles — a set of voluntary governance standards adopted by the international sovereign wealth fund community in 2008 — outline what responsible management looks like.
How does Canada Strong compare?
Canada’s new sovereign wealth fund fits the criteria of being government-owned and seeking market-rate returns. However, it diverges from standard practice in three notable ways.
First, it will be funded from a budget that is already in deficit. Canada’s projected deficit for 2025-26 is $66.9 billion. The $25 billion for the fund will be drawn from the federal budget over three years, meaning the fund is being prioritized over debt reduction and other spending commitments.
Second, the fund will focus on domestic investment. Most sovereign wealth funds invest globally, following best practices from the Santiago Principles to diversify risk.
A fund concentrated in one country’s economy heightens financial risk and is more exposed to political pressure. This concern is serious enough that some sovereign wealth funds have banned domestic investments completely.
Third, it will include an option for retail investors to directly invest in the fund. No existing sovereign wealth fund offers this.
Asset recycling and its risks
To grow the fund over time, the government is also considering raising funds through what it calls “asset recycling” or “asset optimization.”
Early reporting suggests the federal government is considering selling or leasing airports and reinvesting those funds into the Canada Strong Fund.
When asset managers take over public infrastructure, it introduces an additional dimension of risk. In the United Kingdom, Thames Water’s record of sewage dumping, crumbling infrastructure and heavy debt offers one cautionary case study.
Research on the privatization of both the Heathrow and Brussels airports highlights increased costs for airlines and passengers, with poorer levels of service.
A dual mandate and its trade-offs
In addition to higher risk, the Canada Strong Fund’s dual mandate may also lead to lower returns. If the fund invests on fully commercial terms alongside private investors, it risks crowding out private capital in projects that would have been funded anyway.
If, instead, it accepts lower returns when supporting strategic projects, it quietly abandons the market-rate mandate and the promise of creating wealth for Canadians.
Where the government identifies infrastructure priorities without a clear business case, it could consider direct public ownership rather than routing investment through the Canada Strong Fund.
When mixing priorities, the trade-off against financial performance is unavoidable. To have a genuine impact, the Canada Strong Fund will need to behave less like a sovereign wealth fund and more like the Canada Infrastructure Bank or the Canada Growth Fund.
Unlike the Canada Strong Fund, however, those two vehicles are upfront about accepting below-market returns to advance their priorities.
What about retail investors?
The most novel feature of the Canada Strong Fund is the retail investment product. The government has said the product will be broadly accessible to Canadians, simple to purchase and structured so investors share in any upside while their initial capital is protected.
According to a 2024 survey conducted for the Financial Consumer Agency of Canada, there has been a significant drop in the retirement readiness of Canadians since 2019. A retail product tied to Canadian nation-building could, in principle, help address that gap.
Yet challenges remain. The promise of shared upside with limited downside risk makes the product more complex, and research on structured retail products suggests that complex instruments tend to deliver lower returns than simpler ones, partly because their embedded costs are hard to see. Retail investors may also struggle to gauge the risk-reward trade-offs associated with the Canada Strong Fund’s dual mandate.
There is also the question of what happens if the fund loses money. The government has stated it will protect retail investors’ initial capital, but it is not clear where this money will come from.
If retail investors effectively pay an embedded insurance premium, that premium reduces their return. If the government subsidizes the cost of that protection, it amounts to a cross-subsidy from Canadians who do not participate in the fund to those who do — an outcome that could be regressive, depending on who invests.
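The government has not published the retail product’s structure. Purely as an illustration of the trade-off, here is a minimal sketch of a stylized principal-protected note, in which a hypothetical participation rate stands in for the cost of the capital guarantee (all numbers are assumptions, not details of the Canada Strong Fund):

```python
def protected_payoff(principal: float, fund_return: float,
                     participation: float = 0.8) -> float:
    """Payoff of a stylized principal-protected note.

    The investor always gets the principal back, plus a share
    (the hypothetical `participation` rate) of any positive fund
    return. Losses are absorbed by the guarantor.
    """
    upside = max(fund_return, 0.0) * participation
    return principal * (1 + upside)

# Fund gains 10%: investor captures only 8% of it.
print(protected_payoff(1_000, 0.10))
# Fund loses 10%: investor's principal is returned in full.
print(protected_payoff(1_000, -0.10))
```

Under these assumed numbers, a 10 per cent fund gain pays the investor 8 per cent, while a 10 per cent loss returns the full principal; the forgone two percentage points of upside act as the embedded insurance premium, and whoever guarantees the downside bears the cost of the shortfall.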
What would make it work?
A well-designed Canadian sovereign wealth fund has genuine potential to grow our nation’s generational wealth and financial resilience.
Other sovereign wealth funds have achieved these ends through a focused mandate to invest for financial objectives, as outlined in the Santiago Principles. The odds of the Canada Strong Fund succeeding would be improved by pivoting towards these principles.
Canada could follow Norway’s model of running two separate funds. It could leave the existing Canada Growth Fund to pursue domestic strategic investments, and have the Canada Strong Fund invest abroad with the sole goal of building national wealth.
That separation would reduce internal conflict, clarify accountability and give the retail product a cleaner return profile.
Paul Calluzzo receives funding from Social Sciences and Humanities Research Council.
Dan Cohen receives grants from the Social Sciences and Humanities Research Council of Canada: one on monetary policy (grant number 435-2022-0069) and one on social finance (grant number 4030-2020-00085). He is also a member of the New Democratic Party.
Evan Jo does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
We know that university students are at risk of mental-health struggles, particularly depression and anxiety. If writing can help instead of stress them out, it could be a refreshing change for English studies — and a new way for teachers to introduce essay writing.
Students first need to find out that writing can actually support well-being.
In the course, they took up a journalling habit, but it wasn’t just about venting their feelings or writing whatever came to mind. We looked at studies on how writing can reshape your thinking and boost positivity.
Three methods stood out:
Write down “three good things” about each day and, importantly, your own role in bringing them about. This technique was pioneered in a study led by psychologist Martin Seligman. Participants who adopted the approach reported feeling happier and less depressed at the one-month, three-month and six-month points. It’s now been widely shared, and it’s a great way to start a new journalling habit because it’s straightforward and effective.
Look to the future and write about your best possible self. Imagining a fulfilled version of yourself can motivate you to do the hard work needed to get there. According to psychologist Laura A. King, it also delivers the health benefits of writing without revisiting negatives from the past.
Add creativity to your journalling. Turn a moment from your day into a comic; narrate your day as if it were happening in Middle Earth; write a haiku about your toothpaste. A diary-based study of more than 600 young adults led by psychologist Tamlin Conner showed a straightforward effect where being creative one day boosted well-being the next.
Case study on the self
Where journalling provides a space to play around with techniques, essays give students a place to reflect on their efforts, report on the results and hypothesize about positive effects of the experience.
One of the fascinating things about writing for well-being is that no one knows for certain why it works. Across studies it shows reliable, modest benefits, but the underlying mechanism for its effects hasn’t been pinned down — so students’ own theories could contribute to solving a real mystery.
Writers feed off inspiration. Showing students that authors have been using writing for well-being — and making great art in the process — gives them that extra push to keep writing and go deeper.
Inspiration from literature
Among Canadian authors, L. M. Montgomery’s story is especially compelling. Her famous books, such as Anne of Green Gables and Emily of New Moon, portray Prince Edward Island as a utopia; but inwardly, Montgomery experienced deep mental anguish, leading to addiction in her later life.
Her journals detail this other side to her life and show how she used writing to ease her mental suffering. As she memorably notes in an entry from 1904:
“I feel better for writing it out. It is almost as efficacious as swearing would be and much more respectable.”
Looking to Montgomery as a mentor helped students realize how creative and immersive personal writing can be, in turn motivating them to push forward with their own journalling.
Discussing Montgomery’s life writing in their essays made sense because they could see how her efforts to find solace through writing were relatable to their own.
Easing back on literary jargon
Poetry can beautifully map a state of mind. But traditional approaches to teaching it have a tendency to suck the life out of literature that should be a joy and a delight.
Instead of taking what some teachers call a “technique spotting” approach where you count up the metaphors, teaching English from a well-being perspective taps into poetry’s healing qualities.
William Sieghart’s curated poetry collections pair thoughtfully selected poems with one-page prescriptions, highlighting each work’s curative potential for conditions like insecurity, regret, loneliness and more. Both the poem itself and the interpretation serve to advance self-knowledge and alleviate mental suffering.
‘The Healing Power of Poetry’ TEDxOxford talk with William Sieghart.
Students easily ran with this idea. They found joy in poems that spoke to their lived experience, used empathy to recommend poems to others in need and wrote movingly in essays about the mental-health issues they face most often — like academic pressure, fear of failure, homesickness, social anxiety, perfectionism, procrastination and more.
The poetry-remedy concept also lent itself to experiential approaches where students could tape a chosen poem on their mirror, make it the lock screen on their phone, share it with a loved one, create a painting or visual, text it to a distant friend — and ultimately share the story of what happened in essay form or classroom discussion.
Essays are a notoriously difficult part of academic life, which is why generative AI presents such an irresistible pull to the stressed-out student. If essay writing is no more than a tedious recital, it’s no wonder they would gladly pass along what AI spews out on such topics.
Writing instead about your own interior world, finding evidence in your own experience and using literature to light a personalized path to growth are tasks that cannot be easily farmed out to a text-generator — because they speak directly to your own humanity.
The idea that writing can offer fresh avenues for growth and betterment is a welcome reminder of what genuine human writing is truly for.
In teaching a course on it, I found writing for well-being to be an exciting expansion of English studies broadly and essay writing in particular. It can support students’ writing and communication skills while genuinely enriching their lives, and it can help us inspire students with what’s most important in the study of literature: a lifetime love of reading and a willingness to take up the pen.
Lindsey McMaster does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
As a professor of finance, I see these phenomena as different expressions of the same underlying bias: we apply too high a discount rate to the future.
The idea of a discount rate is straightforward. A dollar today is worth more than a dollar tomorrow. The discount rate tells us by how much. Set that rate too high, and you systematically undervalue what lies ahead. Set it too low, and you over-invest in distant outcomes.
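To make this concrete, a quick sketch (with illustrative numbers only) shows how the choice of discount rate changes what a future benefit is worth today:

```python
def present_value(future_amount: float, rate: float, years: int) -> float:
    """Discount a future cash amount back to today's dollars."""
    return future_amount / (1 + rate) ** years

# The same $1,000 benefit arriving in 20 years, under three discount rates.
for rate in (0.02, 0.05, 0.10):
    pv = present_value(1_000, rate, 20)
    print(f"rate {rate:.0%}: $1,000 in 20 years is worth ${pv:,.2f} today")
```

At a 2 per cent rate the future benefit is still worth about $673 today; at 10 per cent it shrinks to roughly $149. A decision-maker applying the higher rate will barely weigh that future benefit at all, which is exactly the bias described above.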
Psychologist Hal Hershfield’s research on the future self helps explain why. People often perceive their future selves more as another person than as a continuation of who they are now. This makes it easier for the self that benefits today to shift costs onto the self that must bear them tomorrow.
Looking at this through a finance lens, it resembles a “principal-agent problem,” where managers may prioritize short-term incentives over the long-term interests of shareholders.
In both cases, the person making the decision does not fully bear the long-term cost. But the future does not disappear. It simply becomes easier to ignore.
Investment in relationships
This logic becomes easier to see if we look at how we build relationships. Strong relationships require time and a willingness to tolerate discomfort.
Trust and intimacy involve immediate effort but the benefits accumulate gradually. By contrast, autonomy and flexibility offer immediate rewards. They preserve options and reduce constraints, making it easy to defer relational investment.
But relationships, like other forms of capital, depend on sustained investment, and delayed investment is often hard to recover later.
The same logic can also be seen in family structures and broader social connections. Strong ties in families, friendships and communities depend on time and repeated interaction. Without it, those ties weaken.
How loneliness is killing us, according to Harvard professor Robert Waldinger.
These patterns are not only individual. They also reflect the way modern life is increasingly organized around immediacy and convenience. Technology makes interaction faster, easier and more responsive, but many of the things that matter most in the long run still require time, patience and discomfort. The result is a social environment that increasingly rewards responsiveness over endurance.
Immediate benefits
Seen in this light, AI companions are not an anomaly. They are emerging in an era of widespread loneliness, where many people are seeking connection that feels reliable and low in emotional cost.
The concern is not that AI companionship fails. It’s that it succeeds too well in the present. By reducing effort, uncertainty and emotional risk, AI companions make connection easier to access but may also shift expectations in ways that are harder to sustain over time in human relationships. In that sense, they reflect the same trade-off: immediate comfort at the expense of longer-term relational depth.
The same logic extends beyond individual life and helps explain how societies respond to long-term problems.
Climate change is perhaps the clearest example. The impacts of our warming planet are already very evident and yet we’re slow to act. This is, in part, because the economic benefits of extraction and consumption are immediate, while many of the costs are delayed and dispersed across time.
A voiceless future
Across many human domains, from AI and personal relationships to climate change, the structure is the same: The present is immediate and rewarded; the future is abstract, distant and silent. So, decisions skew toward today.
This is not simply a matter of awareness or intention. It is structural. The future has no meaningful representation in present decision-making. It has no voice, no urgency and no direct claim. And so it’s discounted.
This is what Canadian Prime Minister Mark Carney called the “tragedy of the horizon.” Whether in the climate crisis or the loneliness epidemic, the catastrophic impacts will be felt beyond the traditional horizons of investment cycles and political terms.
Until we find ways to give the future a real stake in present decisions, we will continue to choose what is easier now and pay for it later.
The tendency to discount the future is deeply human. But in a world increasingly shaped by AI systems, weakening social ties and accelerating climate risk, the costs of doing so are becoming harder to ignore.
Rahul Ravi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The major new act of decentralization promised by Sébastien Lecornu will not take place. On the contrary, a “recentralization” toward the prefects is on the agenda. Why is decentralization, announced so many times, endlessly postponed?
Last October, Prime Minister Sébastien Lecornu displayed a bold ambition: to deliver a “great act of decentralization” meant to resolve the problems tied to the territorial organization of the French Republic. After a few months of reflection and consultation, and once the municipal elections had passed, the government sharply scaled back this ambition. What explains this strategic retreat?
Among the various scenarios the government analyzed, one option, less ambitious than genuine decentralization (which would involve transferring power to local authorities and their elected officials), consisted of betting on “deconcentration”: a reorganization of state action in the territories under the authority of the prefect.
From its ambitious initial promises, the government now appears to be confining itself to an essentially technical reform. The text focuses on consolidating the power of the prefects. By strengthening the prefect’s power of substitution (where shortcomings are “duly established,” the prefect may temporarily take over from any local authority), the responsiveness of public action seems to be the favoured axis. This anticipates potential crises requiring rapid decisions, in areas such as agriculture, water, energy or security.
Under the guise of decentralization, the executive is thus carrying out a discreet recentralization, turning local authorities into mere executors. The state will be able to select and fast-track projects it deems “useful,” notably industrial ones, through stronger leverage over public service operators and, in particular, over mayors. The prefects’ right to derogate from norms, introduced a few years ago, would emerge reinforced from the enactment of this text. Some researchers argue that this right of derogation is constitutive of a neoliberal legality in which the hierarchy of norms is called into question: prefectoral law makes it possible not to apply certain norms, notably environmental ones. Introducing an “à la carte” system through the rise of such mechanisms would make it possible “to neutralize the will of the legislature under cover of the managerial discourse of simplification.”
Political timing, budget crisis and the French art of governing
Why did the great decentralization project announced in October end up like this?
The first reason for the government’s reversal lies in political timing. The reform was initially supposed to come before the municipal and metropolitan elections of March 2026, which proved unrealistic. Now the political, parliamentary and media agenda is structured by the presidential election. In this constrained context, with a caretaker government dedicated to stabilizing public life, a major decentralization reform is all but impossible.
Beyond this political and institutional factor, the current reversal can be explained by a territorial and structural situation that cannot be deeply transformed within such a tight budgetary framework. The state of the national public finances justifies numerous, politically perilous budget cuts and leaves no room to open the financial floodgates that genuine decentralization would require.
According to the minister for territorial planning and decentralization, Françoise Gatel, local elected officials “do not want decentralization; above all, they want simplification.” For the government, local officials mainly expect lighter norms and simpler administrative procedures.
This demand for simplification is no doubt real and seems widely shared among public and private actors. Nevertheless, a majority of local elected officials continue to call for more decentralization and greater financial autonomy: simplification is not everything; local autonomy is the crux of the matter. Ultimately, the government seems intent on making elected officials bear responsibility for the retreat from the reform by implicitly accusing them of “distrust.”
Stepping back, this reform fits a pattern at work for many years: successive governments promote “acts” of decentralization, making ambitious announcements in which local officials and communities would be placed at the centre of decision-making and the complex, costly territorial “millefeuille” would at last be rationalized. Yet negotiations with associations of local elected officials, which reveal a heterogeneous demand for decentralization requiring a strong budgetary commitment, mean these announcements often end in renunciation. That was the case after the “yellow vests” movement with the reform of the territorial organization of the state. It was the case again in 2022 with the 3DS law, likewise a technical, simplification-focused text in which decentralization was very much an afterthought.
In the end, and pending a hypothetical further bill, the French paradox keeps repeating itself: “decentralization” announced with eloquence ends in strategic “recentralization.”
Tommaso Germain does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.
Feelings of despair at the state of the world can be overwhelming. Social and environmental problems persist, but political discourse is polarised, divisive and often ineffective.
A couple of decades ago, some behavioural scientists – ourselves included – began to think there might be a better way of addressing these challenges.
Instead of relying on governments to change things, we figured, perhaps we should switch the focus to people’s own actions. And maybe improving their choices would provide an alternative route to social and environmental transformation.
The idea developed from the fact that people sometimes make bad decisions which may be harmful – to themselves, to others or to the environment.
So what if we tried to discourage things like smoking or frequent flying, not with the heavy hand of government, but by appealing directly to the psychology of the individual?
Two pioneers of this approach, Richard Thaler and Cass Sunstein, argued that governments and institutions could “nudge” people by subtly redesigning the decision-making process. A typical nudge might involve making certain arrangements the default option, such as automatic enrolment into pension schemes. Or it might mean placing healthier meals first on menus.
In these situations, nothing needs to be banned. The undesirable options remain available – they’re just tucked away or more difficult to access.
Behaviour gets nudged along in personally and socially beneficial directions, without removing freedom of choice, and without getting into politically contentious territory. Like many enthusiasts, we were optimistic that focusing on individual behaviour might prove to be an effective route to a better world.
Sadly, things turned out rather differently.
Recent results from large meta-analyses (studies that bring together findings from many previous experiments) suggest that the effects of nudges and other individualistic interventions are disappointingly small.
Some authors have even concluded that there may be no reliable evidence that nudges work at all. Other evidence suggests that even when nudges do have an effect, those effects are small, short lived and difficult to scale up.
And there is another problem, as we argue in our new book It’s On You. By focusing attention on individual responsibility for the world’s problems, behavioural scientists may have inadvertently assisted a broader process known as “responsibilisation”.
Responsibilisation means placing the burden of blame onto individual consumers – deflecting attention from the need to regulate or constrain big businesses which benefit and profit from maintaining the status quo.
Oil companies for example, might want the world to focus on the responsibility of individual car drivers and frequent flyers. Plastics and packaging companies stress the scourge of individual littering. Manufacturers of ultra-processed foods and sugary drinks want us to blame ourselves for poor diets.
In each case, individual behaviour is placed centre stage, while the need for regulations to shift corporate practices recedes from view.
And persuading us to place responsibility on the individual goes very much with the grain of human psychology. Our social lives are built around interacting with small numbers of other people, even while we are governed by complex systems of norms, conventions and rules that change slowly: systems we largely take for granted, do not control and rarely even notice.
Taking responsibility
It is hardly surprising, then, that when we look for explanations for social problems such as climate change or gun violence, we naturally attribute them to the actions of bad people: the drivers of big cars, or violent individuals with mental-health problems.
This means that people are wired to be all too ready to buy into the responsibilisation narrative that individuals, including ourselves, are at the heart of the problems that bedevil society.
But when social problems arise and intensify, it is unlikely that human nature has suddenly deteriorated en masse.
It is far more plausible that large-scale systemic forces – changes in regulation, market structure, technology and incentives – are at work. And when problems are systemic and self-reinforcing, systemic solutions are what is required.
In a world that feels increasingly contentious and imperilled, it is tempting to hope that individual consumers can really make a difference – to imagine that we can improve the world one recycled yoghurt pot at a time. And each of us should, of course, do our bit by making good consumer choices where we can.
But we must not allow a focus on the individual to distract us from the need for deeper systemic change. Gentle nudges will never be enough. To address our persistent social and environmental challenges, we need the collective political will to reshape the rules that govern all of our lives.
Nick Chater receives funding from UKRI and NSF. He is also a co-founder of Decision Technology, a behavioural science consulting company founded in 2002, and remains a shareholder and director. The company does not stand to benefit from this article (if anything, the reverse).
George Loewenstein does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
EMS training could be particularly beneficial for people with osteoarthritis who have limited mobility or pain. (roibu/Shutterstock)
An estimated 595 million people globally are living with osteoarthritis. This makes it one of the leading causes of pain and disability.
Osteoarthritis is a degenerative joint disease, in which tissues in the joint break down over time. The condition can affect any joint, but most commonly the knees, hips, hands and spine.
However, the impact of osteoarthritis often goes beyond the affected joint. The condition can have profound effects on daily life.
Research shows that people with osteoarthritis are less likely to remain in work and more likely to develop additional health problems, such as diabetes, obesity and poor mental health, than those without the disease.
One of the key approaches recommended for managing osteoarthritis is exercise, including aerobic exercise and muscle strengthening. It’s shown to be extremely beneficial for managing the condition and its associated symptoms.
But not everyone who has osteoarthritis is able to exercise due to pain and limited mobility. This is why electrical muscle stimulation, a novel technology that uses small electrical impulses to help muscles contract, is being investigated for managing osteoarthritis.
Exercise for osteoarthritis
Aerobic and muscle strengthening exercises are both proven to address key drivers of osteoarthritis symptoms. Aerobic exercise improves cardiovascular fitness and helps with weight management, easing the load on painful joints.
Muscle strengthening exercise improves joint stability by supporting the surrounding musculature. This reduces stress on the joint and improves movement.
Together, these approaches can help to break the cycle of pain, inactivity, weight gain and physical decline that can happen in osteoarthritis.
In fact, data suggests that people with musculoskeletal conditions (such as osteoarthritis) are around twice as likely to be physically inactive as their healthy counterparts.
Reported barriers to physical activity include pain, limited mobility, negative experiences of physical activity and a lack of motivation. But the less we move, the more muscle mass and strength we gradually lose.
A difficult cycle can then emerge, whereby pain, stiffness and fear of making symptoms worse all discourage movement. Then, without movement, stiffness and pain worsen.
An alternative approach
When exercise feels too painful or isn’t possible, electrical muscle stimulation (EMS) may offer an alternative method for maintaining and improving strength.
This works by placing electrodes on the skin to deliver small electrical impulses, causing muscles to contract without the joint needing to move. The electrical impulse is similar to the signal we normally send from our nervous system when we want to perform a movement.
The therapy can be used in isolation, or it can be applied during exercise to activate even more muscle fibres in what is called a superimposed muscle contraction.
Electrical muscle stimulation also shows promise for those with severe, end-stage osteoarthritis who are preparing for surgery.
For example, one study compared the effects of performing EMS or exercise before surgery for knee osteoarthritis on postoperative outcomes. The study found that participants who used EMS for 20 minutes a day, five days a week in the six weeks before surgery saw greater improvements in postoperative muscle mass, strength and function, compared with patients who performed physical exercise.
Muscle weakness is common both before and after surgery, partly due to pain and reduced movement. While exercise programmes before and after surgery are widely recommended, research suggests they often only have modest effects on functional recovery from joint replacement surgery.
That said, electrical muscle stimulation is not a magic solution and has its limitations. In many cases, it works best as a complement to, not a substitute for, active rehabilitation.
The body of evidence for its effectiveness in osteoarthritis is also still evolving. Some studies have reported inconsistent results or were conducted with only small samples.
Some people find the sensation of electrical stimulation uncomfortable, it is unsuitable for others (for example, those with pacemakers), and devices can be expensive to buy.
Nonetheless, for those who cannot exercise due to pain, swelling or limited mobility, EMS offers a practical tool to maintain muscle strength. This can help them stay active and independent for longer, recover quicker from surgery, and maintain a better quality of life.
Louise Burgess does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Chronic obstructive pulmonary disease, or COPD, is one of the world’s leading causes of death, responsible for 3.5 million deaths in 2021 alone. It is often thought of as a disease of older smokers. But that picture is too simple. COPD usually develops slowly over many years, often long before symptoms become obvious.
COPD is a long-term lung condition that makes it harder to move air in and out of the lungs. It includes damage to the airways, often described as chronic bronchitis, and destruction of the tiny air sacs in the lungs, known as emphysema. Because this damage builds up gradually, many people do not realise anything is wrong until symptoms become difficult to ignore. There are treatments that can help, but there is no cure, and by the time COPD is diagnosed the damage is often permanent.
Common symptoms include a long-term cough, bringing up mucus and shortness of breath. These symptoms often appear later in life, which helps explain why COPD is so often seen as an older person’s disease. But in many cases, the damage started decades earlier.
Many environmental irritants can harm the lungs, but cigarette smoke remains the main cause of COPD. Cigarette smoke contains thousands of chemicals, including toxic gases and cancer-causing substances, that injure lung tissue and trigger oxidative stress, a form of cellular damage that drives inflammation.
Inflammation is part of the body’s normal defence and repair system. Usually, it settles once the source of harm has gone. But in COPD, the lungs may be exposed to cigarette smoke day after day, so the inflammatory response never properly switches off.
Over time, immune cells sent to repair the damage can end up injuring the lungs further. The airways become narrower, the lungs produce more mucus, and the tiny air sacs known as alveoli can break down. Together, these changes make breathing increasingly difficult.
As the disease progresses, the lungs are physically altered in ways that cannot be fully reversed, even if someone stops smoking. COPD inflammation also does not always respond well to standard anti-inflammatory medicines such as steroids, which is one reason prevention matters so much.
Although cigarette smoking remains the main driver of COPD, e-cigarettes are also raising concerns. Vaping aerosols can contain nicotine, ultrafine particles and flavouring chemicals that may irritate the lungs and contribute to inflammation. The long-term effects are still unclear because these products are relatively new.
That matters particularly for younger people. In Great Britain, recent survey data suggest that 7% of 11- to 17-year-olds currently vape. While that does not mean they will go on to develop COPD, it does mean more young lungs are being exposed to substances whose long-term effects are not yet fully understood.
COPD is often diagnosed only after major lung damage has already occurred. Because it develops so gradually, people may dismiss early breathlessness, coughing or mucus production as a consequence of getting older, being unfit or smoking. Respiratory organisations warn that symptoms such as cough, phlegm and shortness of breath should not be treated as a normal part of ageing, while studies show that COPD remains widely underdiagnosed, including among people with respiratory symptoms.
The burden on health systems is huge. A 2023 study estimated that COPD could cost the global economy around INT$4.3 trillion between 2020 and 2050. International dollars adjust for differences in prices between countries; in broad terms, this is roughly equivalent to US$4.3 trillion in US purchasing power, or about £3.2 trillion at current exchange rates. Hospital admissions often rise in winter, when people with COPD are more vulnerable to bacterial and viral infections that can worsen symptoms and speed up decline.
That is why the most important window for action may come much earlier in life. By the time many people are diagnosed, the disease has been developing for years. Better education about lung health at school age could help people understand that choices made in their teens and twenties may shape their breathing decades later.
COPD care has traditionally focused on treating symptoms once they appear. But by then the lungs may already be permanently damaged. Seeing COPD as a disease that develops slowly over decades could shift attention towards earlier prevention and, ultimately, reduce its human and economic cost.
Jennifer Loudon Moxen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A great Tyrannosaurus rex strides through the conifer trees of her territory, sniffing the air. She picks up the scent from the carcass of a dead horned dinosaur, Triceratops, that she was feeding on yesterday. She walks over and strips off some more shreds of meat, but the smell is foul even for her.
She goes down to the lake to drink and small crocodiles and turtles scuttle into the water. But she hardly sees them. Of more interest is an armoured dinosaur, Ankylosaurus, lurking nearby. However, she knows this dinosaur won’t be an easy kill and she isn’t desperate enough for food to risk a fight. Little does she know there are bigger dangers ahead. She looks up and sees a bright light racing downwards accompanied by faint crackling and sizzling noises.
Our T. rex has excellent hearing for low frequency sounds and she is disturbed by the vibrations she can feel. But her upset only lasts for a moment. In a flash, she has been burnt to a crisp and her world changed forever.
This all happened 66 million years ago, when a huge asteroid famously hit the Earth in the area of what is now the Caribbean. At the end of the Cretaceous period, sea levels were 100–200 metres higher than today, so the shores of the Caribbean lay far inland over eastern Mexico and the southern United States. The impact happened entirely within these waters.
The event triggered instant changes to our planet and its atmosphere and led to the extinction of the dinosaurs and about half of Earth's other species. But what would it have been like to experience such a gargantuan impact? What would you have seen, heard or smelled? And how would you have died – or survived?
As experts on meteoritics and palaeontology, respectively, we’ve created a detailed timeline, based on decades of research, to take you right there. So let’s start by travelling back in time to the very last day of the Cretaceous.
T-minus one day
All is calm and the Cretaceous day proceeds as usual. In what will soon be ground zero, it is pleasantly warm, about 26°C, and wet. It often is. For about a week, the asteroid has been visible only at night. Because the giant rock is heading straight towards Earth, it looks like a motionless star. There is no dramatic tail; this is a rocky asteroid rather than a comet.
In the final 24 hours, the light becomes visible during the daytime. But it still looks like a star or planet, growing brighter in the last few hours before impact.
T equals 0: the impact
If you were close by, you would first have experienced a brief light and sound show. Minutes to seconds before the impact, you’d have seen the bright fireball, and its accompanying crackling or fizzing noises. This sizzling sound is a result of the photo-acoustic effect: the intense light of the fireball warms the ground, which then heats the air above it, causing pressure waves, or sound.
Next, a deafening sonic boom, which occurs because the asteroid is travelling faster than the speed of sound. But the asteroid is so huge, perhaps 10km in diameter, that it almost certainly hits the ground before any living creature near the impact zone has time to run for cover.
The asteroid’s enormous energy forms a crater through a series of processes that together take only a few seconds. As the asteroid collides with the surface, its kinetic (movement) energy is instantly transferred to the surface as a combination of kinetic, thermal (heat) and seismic energy (released during earthquakes). This results in a series of shock waves that heat and compress both the asteroid and its target.
As the shock waves propagate, rocks fracture, break up and are ejected, producing a bowl-shaped depression, or transient cavity, about ten seconds after impact. The heat and compression also melt and vaporise large volumes of material, including the asteroid itself, releasing a fountain of incandescent vapour with a temperature of more than 10,000 K (about 9,700°C).
Over the next few seconds, the cavity increases in size to many times the diameter of the original asteroid. Simulations suggest that around 20 seconds after impact, the transient cavity is at least 30km deep – deeper than the deepest point known in today's oceans, the 11km Challenger Deep in the Pacific Ocean's Mariana Trench. The rim of the crater is over 20km high – more than twice the height of Mount Everest (8,849m).
But this enormous feature lasts for less than a minute before it starts to collapse. Within three minutes of the impact, the centre of the crater has rebounded to form a peak several kilometres high. The peak only lasts about two minutes before collapsing back into the crater.
Whether a dinosaur or a dung beetle, if you were near the transient cavity you would have been incinerated instantly by the blast. But even if you were up to 2,000km from the epicentre, you’d likely have been killed quickly by the thermal radiation and supersonic winds now spreading out from the impact site.
T-plus 5 minutes
Five minutes after the impact, the winds have "eased" to those of a category 5 hurricane, flattening everything within about 1,500km of the impact site – destroying everything, that is, that has not already been burnt. Atmospheric temperatures in the region rise to over 500 K (about 227°C). This would feel like being inside an oven – causing burns, heatstroke and death. Wood and plant matter ignite, creating fires everywhere.
Because the asteroid struck the sea, the atmosphere is also filled with super-heated steam, making the hurricane-force winds even deadlier.
Next come the tsunamis, triggered by the vast quantities of displaced rock and water. These 100-metre megatsunamis first strike the shores of what is now the Gulf of Mexico, engulfing the land before depositing huge amounts of debris as they retreat.
By now, the crater has almost reached its final dimensions – 180km across and 20km deep. But making an enormous hole in the ground isn't the only outcome of the impact. All the rock and vapour displaced during the collision has to go somewhere. Several locations in North America show that metre-sized blocks of debris from the impact were thrown distances of hundreds of kilometres.
So if you were 2,000km to 3,000km from the epicentre and survived the first few seconds, you’d most likely die from overheating, earthquakes, hurricanes, fires, tsunami-driven floods or being hit by impact melt.
But what is happening much further away? In the first five minutes after impact, dinosaurs roaming the Cretaceous forests of what are now China or New Zealand are so far undisturbed.
But it won’t be long before that changes.
T-plus one hour
Shockwaves on land and sea are only minor inconveniences compared with the fire that is still radiating down from the sky. Some of the impact energy has been transferred into the atmosphere, heating the air and dust to incandescence.
An hour after impact, a belt of dust has circled the globe. Deposits of solidified molten droplets (impact spherules) and mineral grains have been found in numerous locations from New Zealand in the south to Denmark in the north. In these locations, you would not have been aware of the tsunamis around the Americas or the wildfires, but the skies would certainly have begun to darken.
T-plus one day
By now, huge tsunamis are moving east across the Atlantic and west across the Pacific, entering the Indian Ocean from both sides.
They are still around 50m high – causing death and destruction along coasts around the world. By comparison, the 2004 Boxing Day tsunami reached heights of up to 30 metres. Tsunamis kill fish and other marine life that are washed high onto the shore and then stranded, just as they kill coastal trees and drown land animals. But the tsunamis gradually fade away and probably don't wipe out any entire species – at least not on their own.
The hurricane force winds have also died down, but tropical storm strength winds are whipping up debris and causing further chaos and destruction across the tsunami-affected areas. The burning sky is also triggering wildfires across the globe – which, in turn, carry ever more soot into the atmosphere. The sooty signature of these wildfires has been found deposited as carbon particles in sediments from the K-Pg boundary – a 66-million-year-old thin clay layer.
Further away, in what is modern Europe and Asia, the skies continue to fill up with dust and soot, as they do everywhere. Temperatures start to drop as sunlight is blocked. Trees and plants in general, including phytoplankton, close down as if for winter, unable to photosynthesise. Any animals that rely on warm conditions ultimately hunker down and die.
T-plus one week
It's getting darker and darker. Simulations of solar radiation reaching the Earth's surface following the impact indicate that, after about a week, the solar flux (the amount of heat and light reaching a given area) is just one thousandth of what it was before the impact. This is caused by particles of dust and soot in the atmosphere.
The continued decrease in light levels is accompanied by a global drop in surface temperatures of at least 5°C. This means that most of the dinosaurs and other large flying and swimming reptiles probably die from freezing within the course of this first week (smaller reptiles with slower metabolisms or more flexible diets could survive longer). Cooling temperatures and cloud cover also lead to rain. But not just any rain. Storms of acid rain fall across the Earth.
Two separate mechanisms generate acid rain. The first is down to the geology of the impact region. The asteroid happened to hit an area of sediments rich in sulphur, which vaporised and caused sulphur oxides (acidic and pungent gas compounds composed of sulphur and oxygen) to be part of the plume of plasma blasted into the atmosphere. Second, the energy of the collision was sufficient to turn nitrogen and oxygen into nitrogen oxides – highly reactive gases that can form smog.
The dropping temperature ultimately allows water vapour to condense into drops, and the sulphur and nitrogen oxides dissolve to form sulphuric and nitric acids. This is sufficient to generate a rapid drop in pH. Early models suggest that the pH of the rain might be as low as 1 – the same acidity as battery acid.
At this point, Earth is not a great place to be. Rotting vegetation, choking smoke and sulphur aerosols combine to make the planet stink. Plants and animals on land and in shallow seas that have survived the darkness and cold succumb to the corrosive acid rain and ocean acidification. Acid rain also kills trees by leaching nutrients such as calcium, magnesium and potassium from the soil. Shallow marine shellfish, crustaceans and corals also die as acid seawater destroys their skeletons.
T-plus one year
Winds die down, wildfires are extinguished and the oceans are once again calm. It might appear that the asteroid collision is just a scar on the ocean floor. But its effects are still destructive. The atmosphere is still filled with dust and the Sun hasn’t shone for a year. Temperatures have continued to drop, with the average surface temperature now 15°C lower than before the impact. Winter has come.
Any dinosaurs or marine reptiles that survived the first week of freezing conditions would have died very soon after. A year after the impact, only rotted skeletons of these behemoths remain. Here and there, smaller animals like mammals the size of rats and insects would be nestling in crevices, barely surviving on their reserves and decaying plants.
While most plant groups and many of the modern groups of insects, fishes, reptiles, birds and mammals recover reasonably rapidly, things don’t look great for other species. Dinosaurs and pterosaurs living on land are extinct, as are many marine reptiles, ammonites, belemnites and rudist bivalves in the oceans. Ammonites and belemnites are high in their food chains, and so suffer not only from the cold and acidification but also from the loss of abundant food resources, such as smaller marine organisms.
T-plus ten years
The Earth is still in the grip of a fierce winter. Although most of the sulphur has rained out of the atmosphere, dust and soot particles remain. The average surface temperature is still about 5°C lower than before the impact. The main oceans have not frozen, but inland lakes and rivers around the world are iced over.
Surviving plant and animal groups such as turtles, smaller crocodiles, lizards, snakes, some ground-dwelling birds and small mammals repopulate the Earth at this point. But they are forced back to limited areas of relative safety a long way from the impact site. These areas are now receiving sufficient sunlight for plants and phytoplankton to photosynthesise again. As leaves and seeds provide the basis for the food chains on land and in the sea, life begins to rebuild.
Eventually, life returns to the devastated landscapes, but ecosystems are very different and the dinosaurs are no more.
T-plus 66 million years
Today, 66 million years after the impact, the scars of the collision are hidden within geological strata – and scientists have started deciphering them. It was in 1980 that researchers first reported evidence of the impact. In their classic paper, Luis Alvarez, a Nobel-prize-winning physicist, and co-authors, described a sudden enrichment in the element iridium in a specific clay layer in Denmark and in Italy.
Iridium is rare in surface rocks because most of it was sequestered in Earth’s core when the planet first formed. However, iridium is found in meteorites, and Alvarez and colleagues inferred that the rate of accumulation of the metal in the sediments was so high that it could only have been produced by impact of a gigantic meteorite.
Because the scientists had only observed the iridium spike in two locations, the impact hypothesis was rejected by many scientists at the time. However, through the 1980s, iridium spikes were identified in clay layers at more and more locations – in muds laid down on land, in lakes, in the sea.
Support for an impact hypothesis strengthened when a crater of the correct age was found in 1991. The crater is buried beneath younger rocks, but clearly visible in geophysical surveys, lying half on land in the Yucatán Peninsula of Mexico, and half offshore. Since 1990, evidence for the impact has increased, not least when scientists discovered that there was indeed a sharp cooling event at the end of the Cretaceous.
Possible T. rex footprint from New Mexico. Wikipedia, CC BY-SA
In total, it is estimated that half the species of plants and animals alive at the end of the Cretaceous disappeared. It was once thought that surviving groups such as many plants, insects, molluscs, lizards, birds and mammals somehow escaped unscathed. But detailed study shows that this is not the case – they were all hit hard.
But, by chance or luck, enough individuals and species were able to survive the cold and absence of food, or were in parts of the world where the effects were less extreme. As the world returned to normal, they had the opportunity to expand rapidly into their old niches, but also to occupy the space vacated by extinct groups. In fact, one important consequence of the extinction of the dinosaurs, apex predators in their heyday, was the successful spread and evolution of mammals.
When Alvarez and colleagues first described the drop in temperature following the impact, they called it a "nuclear winter", reflecting the political climate of the early 1980s. Now we might be more inclined to describe the effects as global climate change – similar effects, such as flooding and temperature fluctuations, are currently resulting from increased carbon dioxide in the atmosphere.
It is salutary to think that without the asteroid collision, primates might never have reached the level we are at today. But it is equally salutary to consider that modern humans are causing some of the same changes to the atmosphere that ultimately killed our reptilian forebears, and may one day also lead to our own demise.
Monica Grady receives funding from the Leverhulme Trust for an Emeritus Fellowship and from the STFC. She is affiliated with The Open University, Liverpool Hope University and the Natural History Museum, London.
Michael J. Benton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – France in French (3) – By Florian Leniaud, PhD in American civilisation, associate member of the Centre d'histoire culturelle des sociétés contemporaines, Université de Versailles Saint-Quentin-en-Yvelines (UVSQ) – Université Paris-Saclay
The assassination attempt that targeted Donald Trump and his most senior cabinet members on April 26 took place at the Washington Hilton hotel, outside which Ronald Reagan had been seriously wounded by gunfire 45 years earlier. This parallel invites an analysis of how the physical attacks they suffered transformed the image of the two Republican presidents, as well as the responses each offered.
This detail is no mere detail, for it turns an isolated incident into a continuity. The location becomes a stage. Political violence no longer erupts simply as an event; it seems to replay itself, linking two presidential figures within the same ordeal.
A location that turns violence into narrative
In 1981, Reagan, whose lung was punctured by a bullet fired at close range by John Warnock Hinckley Jr., emerged profoundly strengthened from the episode. The images of his release from hospital, his humour in the face of mortal danger and the media's framing of the story helped to durably establish the figure of a leader who had come through the ordeal.
A few hours after being shot, Reagan joked to his surgeons: "I hope you are all Republicans." The line immediately travelled across the country and shaped the image of a courageous president, in control of himself even in the face of death.
Today, Trump – who had already lived through a similar moment on July 13, 2024, when he emerged, fist raised and ear bloodied, after escaping an assassination attempt at a rally – appears in a different but, on one specific point, comparable configuration: exposure to violence reinforces the posture of a besieged leader. For nearly a decade, his political discourse has rested largely on the idea of an America under threat, encircled by enemies at home and abroad. Each attack therefore reinforces an already established narrative: that of a leader targeted because he supposedly embodies a form of political resistance.
In both cases, the event is not limited to an act of violence, since it is immediately absorbed into a political narrative. And that narrative does not operate alone. It rests on continuous media coverage that turns the violence into a major political sequence. If violence makes the event, the mediated narrative makes it a political moment.
A premeditated attack in a highly symbolic space
What is now known about the April 26 assailant, Cole Tomas Allen, confirms that he had planned his attack long in advance. The 31-year-old had crossed the United States with several weapons and booked a room at the Hilton several weeks ahead. According to investigators, he planned to target Donald Trump as well as several political officials attending the White House Correspondents' Dinner.
This dimension matters because it moves us away from the idea of a purely impulsive or irrational act. Research on the perpetrators of shootings reveals trajectories often marked by social isolation, experiences of humiliation or a quest for recognition. In many cases, the act takes place in an environment saturated with violent, heavily mediatised narratives.
Media coverage is then not merely a relay of information: through the repetition of images and of assailants' names, it can help, in certain individuals, to make such acts genuinely possible – that is, imaginable. Replayed in a loop, violence settles into a familiar mental horizon in which acting out can appear as a brutal means of attaining public visibility.
The location as political stage
The choice of location plays a central role in this dynamic. Attacks do not occur in neutral spaces: schools, shopping centres, universities, seats of power and government buildings concentrate visibility and media resonance. They function as stages open to the entire country.
The Washington Hilton acts in this respect as a space of political memory. Already associated with the attempt on Reagan's life, it instantly turns the event into historical continuity. This site of memory produces meaning even before any political interpretation, and extends far beyond the individual act.
Comparing Allen with John Hinckley Jr. nevertheless highlights important differences. Hinckley acted out of a highly personal obsessive logic, combining media fascination with a fixation on the actress Jodie Foster. Allen, by contrast, appears engaged in a markedly more politicised and ideological undertaking. Yet one common point remains: in both cases, the act targeted a highly visible space, now charged with meaning.
Contemporary political violence therefore does not target individuals alone. It also targets places, symbols and narratives.
A polarised media landscape that instantly turns violence into political confrontation
This evolution cannot be understood without placing these events in the recent history of the American media landscape. The Reagan presidency marked a major turning point with the gradual disappearance of the Fairness Doctrine at the end of the 1980s. Until then, this rule had required broadcasters to cover controversial topics in a balanced manner.
Its repeal gradually opened the way to a far more polarised media system, in which information became a space of permanent ideological confrontation. The rise of conservative talk radio, then of 24-hour news channels and social media, fragmented the American public sphere into competing narratives.
In this context, every violent event is immediately subject to opposing interpretations. For Trump's supporters, the attack confirms the idea of a leader persecuted because he disturbs part of the political and media establishment. For his opponents, the attack instead reflects a climate of political tension to which Trump's rhetoric and his way of polarising public debate have contributed.
Violence then ceases to be merely a shared tragedy and becomes an element of political combat, used by each side to bolster its own reading of the country, of power and of the threat.
Firearms as a political imaginary
The question of firearms occupies a central place in this dynamic. Their massive circulation sustains a political imaginary founded on self-defence and permanent threat. In the United States, when guns are not a matter of security or leisure, they constitute a cultural and identity marker deeply rooted in part of American conservatism.
It is precisely in this tension between gun culture and direct experience of violence that the comparison between Reagan and Trump becomes illuminating. Ronald Reagan, though a major figure of American conservatism and a defender of the Second Amendment, gradually softened his position after surviving the 1981 assassination attempt, notably in an op-ed written for the New York Times. In the 1990s, after his two terms, he publicly supported the Brady Act, which strengthened controls on firearm sales – named in tribute to James Brady, the White House press secretary seriously wounded alongside the president on March 30, 1981, and left severely disabled by his injuries. Reagan acknowledged that better regulation of guns could have saved lives.
Donald Trump, by contrast, defends a firmer line in favour of gun rights, even after being targeted himself. This difference reflects a deeper transformation of the Republican camp: for Reagan, violence led in part to a form of reassessment, whereas for Trump it serves above all to reinforce a political narrative already hardened around danger and confrontation.
Quand le lieu survit à l’événement
L’attaque contre Donald Trump ne constitue pas un événement isolé. Elle survient dans un contexte plus large de polarisation politique et de violences visant des responsables publics aux États-Unis. L’assaut du Capitole en 2021 avait déjà révélé l’intensité d’une polarisation où une partie du conflit politique se déplace désormais sur le terrain physique et sécuritaire.
But perhaps the most striking thing is the persistence of the place itself. Forty-five years after Reagan, the Washington Hilton reappears, as if certain spaces retained the memory of the violence that has passed through them. The place no longer merely hosts the event: it gives it an immediate historical depth and connects several episodes of American political life through a single stage.
From Reagan to Trump, the political effects differ, but one constant remains: exposure to violence can amplify the symbolic reach of power. While political violence has long been part of American history, its constant media coverage and its place in a deeply polarized landscape give it a particular resonance today, in which every attack immediately becomes a political and media confrontation that far exceeds the event itself.
Florian Leniaud is a member of the Centre d’histoire et d’études culturelles, affiliated with the Université Paris-Saclay
Source: The Conversation – Africa – By Janet Appiah Osei, Research Fellow, African Research Universities Alliance (ARUA), University of Ghana
Every morning in Accra, Ghana’s capital, thousands of commuters sit in traffic while minibuses and taxis compete for limited road space.
More than 70% of Ghanaians rely on informal public transport, predominantly minibuses (trotros) and taxis, for their daily mobility. About 84% of passenger trips in Accra are made using these modes (a 2017 estimate). Precise counts of vehicles are not available due to the informal nature of the sector, but thousands of taxis and trotros are active on Accra’s roads each day.
Despite the constant movement, the traffic’s progress is slow. Ghana’s cities are moving, but not efficiently.
Taxi and minibus services are essential. They provide flexible, relatively affordable mobility and reach areas that formal systems do not. For millions of people, they are the backbone of daily travel.
Yet surprisingly little is known about their diversity and characteristics.
I research how urban transport systems can be made more efficient and climate-friendly, particularly in rapidly growing cities where there are mobility challenges.
In my recent study of commercial vehicle models in Ghana’s urban transport system, I identified 52 different types of taxis and trotros currently in operation. This diversity reflects a system shaped more by market demand than by coordinated, large-scale planning.
My findings show a highly diverse fleet structure, with differences in vehicle capacity and service patterns across the fleet. There’s a strong reliance on conventional fuels and older vehicles. These patterns suggest a fleet that has developed gradually over time, rather than through deliberate and structured modernisation. The result is traffic congestion, higher fuel consumption and increased emissions.
I argue that a more structured approach to urban transport could allow cities to move more people with fewer vehicles, reduce overlapping low-occupancy trips, and improve fleet regulation and planning.
Why efficiency is a growing problem
Most taxis, which are typically sedan cars, carry only a few passengers per trip and operate over short distances. Trotros, which seat about 10-20 people, carry more passengers and travel longer routes. But they still fall short of the capacity offered by larger buses used for mass transit, which can carry 50 or more passengers per trip.
This means more vehicles are required to move the same number of passengers.
In Accra alone, roughly one million passenger trips are made daily using these modes. As demand increases, the system responds by adding more vehicles, not by increasing capacity per vehicle.
This pattern is evident in the city’s rapid motorisation: vehicle ownership rose from about 40 per 1,000 people in 1990 to 260 per 1,000 in 2015. This highlights how growing mobility demand has largely been met through more vehicles on the road, rather than through more efficient, higher-capacity transport.
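To make the capacity gap concrete, a simple back-of-envelope calculation can compare the fleet sizes needed to serve Accra’s roughly one million daily passenger trips. The seat counts and the assumed ten loaded runs per vehicle per day are illustrative figures, not data from the study:

```python
# Illustrative fleet-size comparison: vehicles needed to serve one million
# daily passenger trips, by mode capacity. Seat counts and the assumed
# ten loaded runs per vehicle per day are hypothetical, for illustration only.
DAILY_TRIPS = 1_000_000
RUNS_PER_VEHICLE_PER_DAY = 10  # assumed loaded runs each vehicle completes

modes = {
    "taxi (4 seats)": 4,
    "trotro (15 seats)": 15,
    "BRT bus (50 seats)": 50,
}

fleet_needed = {}
for name, seats in modes.items():
    passengers_per_day = seats * RUNS_PER_VEHICLE_PER_DAY
    fleet_needed[name] = -(-DAILY_TRIPS // passengers_per_day)  # ceiling division
    print(f"{name}: about {fleet_needed[name]:,} vehicles")
```

Under these assumptions, a BRT fleet of about 2,000 buses could move as many passengers as roughly 25,000 taxis, illustrating why capacity per vehicle, not fleet size, drives efficiency.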
The result is growing congestion, longer travel times and increasing pressure on already limited road infrastructure.
For commuters, this means more time spent in traffic. For cities, it means declining transport efficiency.
Environmental costs of low-capacity transport
The dominance of low-occupancy vehicles also affects the environment.
Vehicles that carry fewer passengers generally consume more fuel and generate higher emissions per passenger-kilometre compared to higher-capacity modes of transport. For example, one study on urban transport found that transit buses can reduce emissions by 82%-94% relative to sedan cars.
The cumulative effect of a large fleet of low-occupancy vehicles in Accra contributes to higher overall fuel consumption and increased urban emissions.
Expanding and strengthening high-capacity public transport systems is not only a transport issue, but also an environmental one.
Economic implications for cities and commuters
Inefficiency in transport systems has direct economic consequences.
Higher fuel consumption increases operating costs for drivers, which can eventually translate into higher fares. Congestion slows down the movement of people and goods, reducing productivity and increasing the cost of doing business in urban areas.
Efficient transport systems support economic growth by improving reliability and reducing delays. As Ghana’s cities expand, these efficiencies become even more critical.
Why the current system persists
Despite these challenges, taxis and trotros continue to dominate for good reason.
They are flexible, adaptable and responsive to demand. Routes can change quickly, and services can reach areas that formal systems often overlook. The relatively low cost of entry also allows many individuals to participate in the sector.
This flexibility has made the system resilient. But it has also limited large-scale coordination.
The case for high-occupancy transport
Improving urban mobility is not just about increasing the number of vehicles; it is about moving more people with fewer vehicles.
High-occupancy transport systems carry more passengers per trip, particularly Bus Rapid Transit (BRT), which uses larger buses operating along dedicated corridors. A single high-capacity bus can replace multiple taxis or minibuses.
This does not mean eliminating existing transport modes. Taxis and trotros can play a complementary role as feeder services, connecting passengers to main transit routes. This integrated approach combines flexibility with efficiency.
Ghana has already made attempts to introduce BRT systems. But partial implementation has limited their impact. For such systems to succeed, they require dedicated lanes, consistent policy support, and long-term investment.
A critical moment for Ghana’s cities
Urbanisation in Ghana is accelerating. As more people move into cities, demand for transport will continue to rise.
If current trends continue, the number of low-capacity vehicles will increase further, worsening congestion and environmental pressures. Over time, this could reduce the overall effectiveness of urban transport systems.
Ghana now faces a choice: continue expanding a vehicle-intensive system, or move towards higher-capacity models that prioritise efficiency and sustainability.
What needs to change
Addressing these challenges requires coordinated policy action.
Transport planning must move beyond reactive, market-driven growth, towards long-term system design. This includes integrating informal transport operators into structured frameworks while investing in infrastructure that supports high-capacity movement.
In my view, priorities should include:
full implementation of Bus Rapid Transit systems with dedicated lanes
investment in high-capacity buses and supporting infrastructure
integration of informal operators into formal planning systems
gradual reduction of low-occupancy vehicles along major corridors
stronger institutional coordination and long-term planning.
These steps can help create a more flexible, efficient and balanced system.
The future of Ghana’s cities will depend on a simple shift: moving more people, not more vehicles.
Janet Appiah Osei received funding from the African Research Universities Alliance (ARUA) in collaboration with the University of Ghana